Experience with crossmodal statistics reduces the sensitivity for audio-visual temporal asynchrony

Abstract: Bayesian models propose that multisensory integration depends on both sensory evidence (the likelihood) and priors indicating whether two inputs belong to the same event. The present study manipulated the prior for dynamic auditory and visual stimuli to co-occur and tested the predicted enhancement of multisensory binding as assessed with a simultaneity judgment task. In an initial learning phase, participants were exposed to a subset of auditory-visual combinations. In the test phase, the previously encountered audio-visual stimuli were presented together with new combinations of the auditory and visual stimuli from the learning phase, audio-visual stimuli containing one learned and one new sensory component, and audio-visual stimuli containing completely new auditory and visual material. Auditory-visual asynchrony was manipulated. A higher proportion of simultaneity judgments was observed for the learned cross-modal combinations than for new combinations of the same auditory and visual elements, as well as for all other conditions. This result suggests that prior exposure to certain auditory-visual combinations changed the expectation (i.e., the prior) that their elements belonged to the same event. As a result, multisensory binding became more likely despite unchanged sensory evidence of the auditory and visual elements.
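For readers unfamiliar with the framework the abstract invokes, a minimal sketch of the standard Bayesian causal-inference formulation may help (this equation is assumed from the general literature, not taken from the article itself). It gives the posterior probability that an auditory signal x_A and a visual signal x_V arose from a common cause (C = 1) rather than from independent causes (C = 2):

p(C{=}1 \mid x_A, x_V) = \frac{p(x_A, x_V \mid C{=}1)\, p(C{=}1)}{p(x_A, x_V \mid C{=}1)\, p(C{=}1) + p(x_A, x_V \mid C{=}2)\, p(C{=}2)}

On this reading, the learning phase raises the prior p(C = 1) for the exposed audio-visual pairs while leaving the likelihood terms unchanged, so the higher proportion of simultaneity judgments for learned pairs would reflect the prior alone.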


Bibliographic Details
Main Authors: Boukje Habets, Patrick Bruns, Brigitte Röder
Format: Article
Language: English
Published: Nature Portfolio, 2017
Subjects: Medicine (R); Science (Q)
Online Access: https://doaj.org/article/84189243941d40ff94aa51fb61e67fea
Journal: Scientific Reports, Vol 7, Iss 1, Pp 1-7 (2017)
DOI: 10.1038/s41598-017-01252-y (https://doi.org/10.1038/s41598-017-01252-y)
ISSN: 2045-2322
Publication Date: 2017-05-01