Concurrent talking in immersive virtual reality: on the dominance of visual speech cues

Abstract: Humans are good at selectively listening to specific target conversations, even in the presence of multiple concurrent speakers. In our research, we study how auditory-visual cues modulate this selective listening, using immersive Virtual Reality technologies with spatialized audio. Exposing 32 participants to an Information Masking Task with concurrent speakers, we find significantly more errors in the decision-making processes triggered by asynchronous audiovisual speech cues. More precisely, the results show that lip movements of the Target speaker matched to the audio of a secondary (Mask) speaker severely increase the participants' comprehension error rates. In a control experiment (n = 20), we further explore the influence of the visual modality on auditory selective attention. The results show a dominance of visual-speech cues, which effectively turn the Mask into the Target and vice versa. These results reveal a disruption of selective attention triggered by bottom-up multisensory integration. The findings are framed within sensory perception and cognitive neuroscience theories. The VR setup is validated in a supplementary experiment by replicating previous results from this literature.


Bibliographic Details
Main Authors: Mar Gonzalez-Franco, Antonella Maselli, Dinei Florencio, Nikolai Smolyanskiy, Zhengyou Zhang
Format: Article
Language: English (EN)
Published: Nature Portfolio, 2017
Published in: Scientific Reports, Vol 7, Iss 1, Pp 1-11 (2017)
DOI: 10.1038/s41598-017-04201-x
ISSN: 2045-2322
Subjects: Medicine (R); Science (Q)
Online Access: https://doaj.org/article/d1f4b2e4b6244767b60e8f35d85d26dc
Online Access: https://doi.org/10.1038/s41598-017-04201-x