Experience with crossmodal statistics reduces the sensitivity for audio-visual temporal asynchrony
Abstract: Bayesian models propose that multisensory integration depends on both sensory evidence (the likelihood) and priors indicating whether or not two inputs belong to the same event. The present study manipulated the prior for dynamic auditory and visual stimuli to co-occur and tested the predic...
Main Authors: Boukje Habets, Patrick Bruns, Brigitte Röder
Format: article
Language: EN
Published: Nature Portfolio, 2017
Online Access: https://doaj.org/article/84189243941d40ff94aa51fb61e67fea
Similar Items
- Interaction of perceptual grouping and crossmodal temporal capture in tactile apparent-motion.
  by: Lihan Chen, et al.
  Published: (2011)
- Coupled oscillations enable rapid temporal recalibration to audiovisual asynchrony
  by: Therese Lennert, et al.
  Published: (2021)
- Audio-visual speech timing sensitivity is enhanced in cluttered conditions.
  by: Warrick Roseboom, et al.
  Published: (2011)
- Bioinspired multisensory neural network with crossmodal integration and recognition
  by: Hongwei Tan, et al.
  Published: (2021)
- Audio-visual experience strengthens multisensory assemblies in adult mouse visual cortex
  by: Thomas Knöpfel, et al.
  Published: (2019)