Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization

Primary malignancies in adult brains are globally fatal. Computer vision, especially recent developments in artificial intelligence (AI), have created opportunities to automatically characterize and diagnose tumor lesions in the brain. AI approaches have provided scores of unprecedented accuracy in different image analysis tasks, including differentiating tumor-containing brains from healthy brains. AI models, however, perform as a black box, concealing the rational interpretations that are an essential step towards translating AI imaging tools into clinical routine. An explainable AI approach aims to visualize the high-level features of trained models or integrate into the training process. This study aims to evaluate the performance of selected deep-learning algorithms on localizing tumor lesions and distinguishing the lesion from healthy regions in magnetic resonance imaging contrasts. Despite a significant correlation between classification and lesion localization accuracy (<i>R</i> = 0.46, <i>p</i> = 0.005), the known AI algorithms, examined in this study, classify some tumor brains based on other non-relevant features. The results suggest that explainable AI approaches can develop an intuition for model interpretability and may play an important role in the performance evaluation of deep learning models. Developing explainable AI approaches will be an essential tool to improve human–machine interactions and assist in the selection of optimal training methods.
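The visualization of "high-level features of trained models" that this record describes is typically done with saliency-map methods such as class activation mapping (CAM), where the last convolutional layer's feature maps are collapsed into a heatmap by a class-specific weighted sum. The sketch below is a generic NumPy illustration of that idea only, not the authors' code; the array shapes, toy feature maps, and weights are assumptions for demonstration.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Collapse conv feature maps into a coarse localization heatmap.

    feature_maps : (K, H, W) activations from the last conv layer
    class_weights: (K,) weights linking each map to the target class
    """
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)            # keep positive evidence only
    if cam.max() > 0:
        cam /= cam.max()                  # scale to [0, 1]
    return cam

# Toy example: 4 feature maps of 8x8; map 0 "fires" in one corner,
# standing in for a tumor-related feature the classifier relies on.
maps = np.zeros((4, 8, 8))
maps[0, :4, :4] = 1.0
weights = np.array([2.0, 0.1, 0.1, 0.1])
heatmap = class_activation_map(maps, weights)  # hot in the top-left corner
```

Overlaying such a heatmap on the MRI slice and comparing it against the ground-truth tumor mask is the kind of evaluation the abstract describes: it reveals whether a black-box classifier is attending to the lesion or to non-relevant features.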

Bibliographic Details
Main Authors: Morteza Esmaeili, Riyas Vettukattil, Hasan Banitalebi, Nina R. Krogh, Jonn Terje Geitung
Format: Article
Language: EN
Published: MDPI AG, 2021
Subjects: R (Medicine)
Online Access: https://doaj.org/article/419fbc418eee4fbf9d8ce1756e6bb850
DOI: 10.3390/jpm11111213
ISSN: 2075-4426
Published in: Journal of Personalized Medicine, Vol 11, Iss 11, p 1213 (2021)
Full text: https://www.mdpi.com/2075-4426/11/11/1213
Keywords: tumor localization; black box CNN; explainable AI; gliomas; machine learning; Medicine