Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception.

It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers' voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker's face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.
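The abstract reports "connectivity analyses" of fMRI data without spelling out the computation. As a minimal illustrative sketch only (not the authors' pipeline), functional connectivity between two regions such as posterior and anterior STS is often summarized as the Pearson correlation of their mean BOLD time series. All variable names and the simulated data below are hypothetical stand-ins.

# Illustrative sketch, NOT the study's actual analysis: seed-based
# functional connectivity between two regions of interest (ROIs),
# estimated as the zero-lag Pearson correlation of their mean
# fMRI time series.
import numpy as np

rng = np.random.default_rng(0)
n_volumes = 200  # number of fMRI volumes (time points); hypothetical

# Hypothetical mean BOLD signals for the two ROIs named in the abstract:
# posterior STS (face-movement sensitive) and anterior STS (speech
# intelligibility). Real inputs would be extracted from preprocessed fMRI.
posterior_sts = rng.standard_normal(n_volumes)
anterior_sts = 0.5 * posterior_sts + rng.standard_normal(n_volumes)

# Functional connectivity as the Pearson correlation coefficient.
r = np.corrcoef(posterior_sts, anterior_sts)[0, 1]
print(f"pSTS-aSTS functional connectivity (Pearson r): {r:.2f}")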

Bibliographic Details
Main Authors: Sonja Schall, Katharina von Kriegstein
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2014
Subjects: Medicine (R), Science (Q)
Online Access: https://doaj.org/article/afb7865c21384d9fa8a96bffecaccdf1
Journal: PLoS ONE, Vol 9, Iss 1, p e86325 (2014)
DOI: 10.1371/journal.pone.0086325
ISSN: 1932-6203
Full Text: https://www.ncbi.nlm.nih.gov/pmc/articles/pmid/24466026/pdf/?tool=EBI