Computational modeling of human reasoning processes for interpretable visual knowledge: a case study with radiographers

Abstract Visual reasoning is critical in many complex visual tasks in medicine, such as radiology or pathology. It is challenging to explicitly explain reasoning processes due to the dynamic nature of real-time human cognition. A deeper understanding of such reasoning processes is necessary for improving diagnostic accuracy and computational tools. Most computational analysis methods for visual attention utilize black-box algorithms which lack explainability and are therefore limited in understanding the visual reasoning processes. In this paper, we propose a computational method to quantify and dissect visual reasoning. The method characterizes spatial and temporal features and identifies common and contrast visual reasoning patterns to extract significant gaze activities. The visual reasoning patterns are explainable and can be compared among different groups to discover strategy differences. Experiments were conducted with radiographers of varied levels of expertise on 10 levels of visual tasks. Our empirical observations show that the method can capture the temporal and spatial features of human visual attention and distinguish expertise level. The extracted patterns are further examined and interpreted to showcase key differences between expertise levels in the visual reasoning processes. By revealing task-related reasoning processes, this method demonstrates potential for explaining human visual understanding.
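The abstract describes characterizing spatial and temporal features of gaze activity. As a minimal illustrative sketch only (not the authors' method; the data and feature names below are hypothetical), simple per-sequence features such as dwell time, spatial dispersion, and scanpath length can be derived from timestamped gaze points:

```python
from statistics import pstdev

# Hypothetical gaze samples: (timestamp_ms, x, y) points on an image.
gaze = [(0, 120, 80), (250, 125, 82), (600, 300, 210), (900, 305, 215), (1400, 128, 85)]

def gaze_features(samples):
    """Summarize a gaze sequence with simple spatial and temporal features."""
    times = [t for t, _, _ in samples]
    xs = [x for _, x, _ in samples]
    ys = [y for _, _, y in samples]
    dwell_ms = times[-1] - times[0]          # total viewing time (temporal)
    dispersion = pstdev(xs) + pstdev(ys)     # spread of attention (spatial)
    # Scanpath length: distance travelled between consecutive gaze points.
    path = sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (_, x1, y1), (_, x2, y2) in zip(samples, samples[1:]))
    return {"dwell_ms": dwell_ms, "dispersion": dispersion, "path_length": path}

print(gaze_features(gaze))
```

Feature vectors of this kind, computed per task and per participant, could then be compared across expertise groups, which is the style of group contrast the abstract describes.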


Saved in:
Bibliographic Details
Main Authors: Yu Li, Hongfei Cao, Carla M. Allen, Xin Wang, Sanda Erdelez, Chi-Ren Shyu
Format: article
Language: EN
Published: Nature Portfolio 2020
Subjects:
Medicine (R)
Science (Q)
Online Access: https://doaj.org/article/bcddfbfd54a34ea9bc59b68f55c1976f
Record ID: oai:doaj.org-article:bcddfbfd54a34ea9bc59b68f55c1976f
Journal: Scientific Reports, Vol 10, Iss 1, Pp 1-11 (2020)
DOI: 10.1038/s41598-020-77550-9
ISSN: 2045-2322
Publication Date: 2020-12-01
Online Access: https://doi.org/10.1038/s41598-020-77550-9