Multisensory visuo-tactile context learning enhances the guidance of unisensory visual search

Abstract: Does multisensory distractor-target context learning enhance visual search over and above unisensory learning? To address this, we had participants perform a visual search task under both uni- and multisensory conditions. Search arrays consisted of one Gabor target that differed from three homogeneous distractors in orientation; participants had to discriminate the target's orientation. In the multisensory session, additional tactile (vibration-pattern) stimulation was delivered to two fingers of each hand, with the odd-one-out tactile target and the distractors co-located with the corresponding visual items in half the trials; the other half presented the visual array only. In both sessions, the visual target was embedded within identical (repeated) spatial arrangements of distractors in half of the trials. The results revealed faster response times to targets in repeated versus non-repeated arrays, evidencing 'contextual cueing'. This effect was enhanced in the multisensory session; importantly, the enhancement was evident even when the visual arrays were presented without concurrent tactile stimulation. Drift-diffusion modeling confirmed that contextual cueing increased the rate at which task-relevant information was accumulated and decreased the amount of evidence required for a response decision. Importantly, multisensory learning selectively enhanced the evidence-accumulation rate, expediting target detection even when the context memories were triggered by visual stimuli alone.
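
The drift-diffusion analysis mentioned in the abstract decomposes response times into a rate of evidence accumulation (drift rate) and a response threshold (boundary separation). The following minimal simulation sketch, written in Python with illustrative parameter values that are assumptions rather than the estimates fitted in the study, shows qualitatively how a higher drift rate and a lower boundary both shorten predicted response times, mirroring the reported contextual-cueing effects.

# Minimal drift-diffusion simulation (illustrative only; parameter values are
# assumptions for demonstration, not the estimates reported in the paper).
import numpy as np

def simulate_ddm(v, a, t0=0.3, dt=0.001, noise=1.0, n_trials=2000, seed=0):
    """Simulate decision times for a two-boundary drift-diffusion process.

    v  : drift rate (evidence accumulated per second)
    a  : boundary separation (amount of evidence needed for a decision)
    t0 : non-decision time (stimulus encoding + motor execution), in seconds
    """
    rng = np.random.default_rng(seed)
    rts = np.empty(n_trials)
    for i in range(n_trials):
        x, t = a / 2.0, 0.0           # start midway between the two boundaries
        while 0.0 < x < a:            # accumulate evidence until a boundary is crossed
            x += v * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts[i] = t + t0               # decision time plus non-decision time
    return rts

# Contextual cueing modeled as a higher drift rate and a lower boundary for
# repeated displays; hypothetical values chosen only to show the qualitative
# effect on mean response time.
for label, v, a in [("non-repeated", 1.0, 1.2), ("repeated", 1.5, 1.0)]:
    rts = simulate_ddm(v, a)
    print(f"{label:13s} mean RT = {rts.mean():.3f} s")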

Bibliographic Details
Main Authors: Siyi Chen, Zhuanghua Shi, Hermann J. Müller, Thomas Geyer
Format: article
Language: EN
Published: Nature Portfolio, 2021
Subjects: Medicine (R); Science (Q)
Online Access: https://doaj.org/article/4bb5e9f8834c4b8fa04fd4602b3a5b7e
DOI: 10.1038/s41598-021-88946-6 (https://doi.org/10.1038/s41598-021-88946-6)
ISSN: 2045-2322
Publication Date: 2021-05-01
Published in: Scientific Reports, Vol 11, Iss 1, Pp 1-15 (2021) (https://doaj.org/toc/2045-2322)