Mitigating belief projection in explainable artificial intelligence via Bayesian teaching

Abstract: State-of-the-art deep-learning systems use decision rules that are challenging for humans to model. Explainable AI (XAI) attempts to improve human understanding but rarely accounts for how people typically reason about unfamiliar agents. We propose explicitly modelling the human explainee via Bayesian teaching, which evaluates explanations by how much they shift explainees’ inferences toward a desired goal. We assess Bayesian teaching in a binary image classification task across a variety of contexts. Absent intervention, participants predict that the AI’s classifications will match their own, but explanations generated by Bayesian teaching improve their ability to predict the AI’s judgements by moving them away from this prior belief. Bayesian teaching further allows each case to be broken down into sub-examples (here saliency maps). These sub-examples complement whole examples by improving error detection for familiar categories, whereas whole examples help predict correct AI judgements of unfamiliar cases.

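The abstract's central mechanism, scoring candidate explanations by how far they shift a modelled learner's inference toward a target, can be illustrated with a short sketch. The code below is a simplified illustration under assumptions introduced here, not the paper's implementation: the naive Bayesian learner, the helper names learner_posterior and best_explanation, and the brute-force search over example pairs are all hypothetical.

    import itertools

    def learner_posterior(hypotheses, prior, likelihood, examples):
        # Naive Bayes update: P(h | examples) is proportional to
        # P(h) * product over x of P(x | h).
        scores = {}
        for h in hypotheses:
            p = prior[h]
            for x in examples:
                p *= likelihood(x, h)
            scores[h] = p
        total = sum(scores.values())
        return {h: p / total for h, p in scores.items()}

    def best_explanation(candidates, hypotheses, prior, likelihood, target, k=2):
        # Score every k-subset of candidate examples by the posterior it
        # induces on the target hypothesis; return the most persuasive subset.
        best, best_score = None, float("-inf")
        for subset in itertools.combinations(candidates, k):
            score = learner_posterior(hypotheses, prior, likelihood, subset)[target]
            if score > best_score:
                best, best_score = subset, score
        return best, best_score

    # Toy usage: which two observations best teach that a coin is heads-biased?
    hypotheses = ["fair", "biased"]
    prior = {"fair": 0.5, "biased": 0.5}
    def likelihood(x, h):
        p_heads = 0.5 if h == "fair" else 0.8
        return p_heads if x == "H" else 1.0 - p_heads
    print(best_explanation(["H", "H", "T"], hypotheses, prior, likelihood, "biased"))
    # -> (('H', 'H'), 0.719...)

In the paper this same scoring idea is applied to a binary image classifier, with whole images (and saliency-map sub-examples) playing the role of the candidate teaching examples.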

Bibliographic Details
Main Authors: Scott Cheng-Hsin Yang, Wai Keen Vong, Ravi B. Sojitra, Tomas Folke, Patrick Shafto
Format: Article
Language: English
Published: Nature Portfolio, 2021
Subjects: Medicine (R); Science (Q)
Online Access: https://doaj.org/article/feea80d1ede04d4f84ba50138beb7648
DOI: 10.1038/s41598-021-89267-4 (https://doi.org/10.1038/s41598-021-89267-4)
ISSN: 2045-2322
Published in: Scientific Reports, Vol 11, Iss 1, pp 1-17 (2021)
Publication Date: 2021-05-01