Assessing how visual search entropy and engagement predict performance in a multiple-objects tracking air traffic control task
Saved in:
Main Authors:
Format: article
Language: EN
Published: Elsevier, 2021
Subjects:
Online Access: https://doaj.org/article/d716ff7a8809456facdbe2241a6b3be5
Summary: Behavioral performance metrics employed to assess the usability of visual displays are increasingly coupled with eye tracking measures to provide additional insights into the decision-making processes supported by visual displays. Eye tracking metrics can be coupled with users' neural data to investigate how human cognition interplays with emotions during visuo-spatial tasks. To contribute to these efforts, we present results of a study in a realistic air traffic control (ATC) setting with animated ATC displays, where ATC experts and novices were presented with an aircraft movement detection task. We find that higher stationary gaze entropy – which indicates a larger spatial distribution of visual gaze on the display – and expertise result in better response accuracy, and that stationary entropy positively predicts response time even after controlling for animation type and expertise. As a secondary contribution, we found that a single component comprised of engagement (measured by EEG and self-reported judgments), spatial abilities, and gaze entropy predicts task accuracy, but not completion time. We also provide open-source MATLAB code for calculating the EEG measures utilized in the study. Our findings suggest designing spatial information displays that adapt their content according to users' affective and cognitive states, especially for emotionally laden usage contexts.
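In the gaze-entropy literature, stationary gaze entropy is typically computed as the Shannon entropy of the distribution of fixation locations over a spatial grid covering the display: gaze spread evenly across the screen yields high entropy, gaze concentrated in one region yields low entropy. The record links only the authors' MATLAB code for the EEG measures, so the following is an illustrative Python sketch under that standard definition; the function name and the `bins` grid size are assumptions, not the authors' implementation.

```python
import numpy as np

def stationary_gaze_entropy(x, y, bins=8):
    """Shannon entropy (bits) of the spatial distribution of fixations.

    x, y : arrays of fixation coordinates, normalized to [0, 1].
    bins : number of grid cells per axis used to discretize the display
           (an assumed parameter; papers vary in grid granularity).
    """
    # Count fixations falling into each cell of a bins x bins grid.
    counts, _, _ = np.histogram2d(x, y, bins=bins, range=[[0, 1], [0, 1]])
    p = counts.ravel() / counts.sum()
    p = p[p > 0]  # empty cells contribute 0 (lim p->0 of p*log p)
    return float(-(p * np.log2(p)).sum())

# Gaze spread over the whole display has higher entropy than gaze
# clustered at a single point (which gives exactly 0 bits).
rng = np.random.default_rng(0)
spread = stationary_gaze_entropy(rng.random(500), rng.random(500))
clustered = stationary_gaze_entropy(np.full(500, 0.5), np.full(500, 0.5))
```

Under this definition the maximum possible value is log2(bins**2) bits, reached when fixations are uniformly distributed over all grid cells, which is why a larger spatial distribution of gaze corresponds to higher entropy in the summary above.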