Mitigating belief projection in explainable artificial intelligence via Bayesian teaching
Abstract: State-of-the-art deep-learning systems use decision rules that are challenging for humans to model. Explainable AI (XAI) attempts to improve human understanding but rarely accounts for how people typically reason about unfamiliar agents. We propose explicitly modelling the human explainee via Bayesian teaching…
Main Authors: Scott Cheng-Hsin Yang, Wai Keen Vong, Ravi B. Sojitra, Tomas Folke, Patrick Shafto
Format: Article
Language: English
Published: Nature Portfolio, 2021
Online Access: https://doaj.org/article/feea80d1ede04d4f84ba50138beb7648
Similar Items
- An Explainable Artificial Intelligence Model for Detecting Xenophobic Tweets
  by: Gabriel Ichcanziho Pérez-Landa, et al.
  Published: (2021)
- Untangling hybrid hydrological models with explainable artificial intelligence
  by: Daniel Althoff, et al.
  Published: (2021)
- Raman spectroscopy and artificial intelligence to predict the Bayesian probability of breast cancer
  by: Ragini Kothari, et al.
  Published: (2021)
- A Systematic Review of Human–Computer Interaction and Explainable Artificial Intelligence in Healthcare With Artificial Intelligence Techniques
  by: Mobeen Nazar, et al.
  Published: (2021)
- A deep explainable artificial intelligent framework for neurological disorders discrimination
  by: Soroosh Shahtalebi, et al.
  Published: (2021)