The impact of multisensory integration and perceptual load in virtual reality settings on performance, workload and presence
Main Authors:
Format: article
Language: EN
Published: Nature Portfolio, 2021
Subjects:
Online Access: https://doaj.org/article/2e1e1e55954b425c8ef20a09f25dfcab
Abstract: Real-world experience is typically multimodal. Evidence indicates that the facilitation in the detection of multisensory stimuli is modulated by perceptual load, the amount of information involved in processing the stimuli. Here, we used a realistic virtual reality environment while concomitantly acquiring electroencephalography (EEG) and galvanic skin response (GSR) to investigate how multisensory signals impact target detection under two conditions, high and low perceptual load. Different multimodal stimuli (auditory and vibrotactile) were presented, alone or in combination with the visual target. Results showed that multisensory stimuli significantly improved performance, compared to visual stimulation alone, only in the high load condition. Multisensory stimulation also decreased the EEG-based workload. The perceived workload, according to the NASA Task Load Index questionnaire, was instead reduced only by the trimodal condition (i.e., visual, auditory, tactile). This trimodal stimulation was also more effective in enhancing the sense of presence, that is, the feeling of being in the virtual environment, compared to bimodal or unimodal stimulation. We further show that the GSR components were higher in the high load task than in the low load condition. Finally, multimodal stimulation (visual-audio-tactile, VAT, and visual-audio, VA) induced a significant decrease in latency and a significant increase in amplitude of the P300 potentials with respect to unimodal (visual) and bimodal visual-tactile stimulation, suggesting faster and more effective processing and detection of stimuli when auditory stimulation is included. Overall, these findings provide insights into the relationship between multisensory integration and human behavior and cognition.
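To make the P300 comparison described above concrete, the sketch below shows one common way to extract P300 peak latency and amplitude per stimulation condition from already-epoched EEG, using MNE-Python. This is an illustrative example only, not the authors' analysis pipeline: the file name, the Pz channel, the condition labels (V, VT, VA, VAT), and the 250-500 ms search window are assumptions made for the example.

```python
# Illustrative sketch: per-condition P300 peak latency and amplitude with MNE-Python.
# All specifics (file name, channel, condition labels, time window) are assumed.
import mne

epochs = mne.read_epochs("vr_task-epo.fif")  # hypothetical epoched EEG file

for condition in ["V", "VT", "VA", "VAT"]:   # assumed unimodal-to-trimodal labels
    # Average the epochs of this condition and keep a midline parietal channel
    evoked = epochs[condition].average().pick(["Pz"])
    # Positive peak in a typical P300 window; returns channel, latency (s), amplitude (V)
    ch, latency, amplitude = evoked.get_peak(
        tmin=0.25, tmax=0.5, mode="pos", return_amplitude=True
    )
    print(f"{condition}: P300 latency = {latency * 1e3:.0f} ms, "
          f"amplitude = {amplitude * 1e6:.2f} µV")
```

Shorter latencies and larger amplitudes for the VA and VAT conditions, relative to V and VT, would correspond to the pattern reported in the abstract.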