Understanding, Explanation, and Active Inference
While machine learning techniques have been transformative in solving a range of problems, an important challenge is to understand why they arrive at the decisions they output. Some have argued that this necessitates augmenting machine intelligence with understanding such that, when queried, a machi...
Saved in:
Main Authors: Thomas Parr, Giovanni Pezzulo
Format: article
Language: EN
Published: Frontiers Media S.A., 2021
Online Access: https://doaj.org/article/9d11f3a3f9a5462085c8de59506623cc
Similar Items
- Comparison and Explanation of Forecasting Algorithms for Energy Time Series
  by: Yuyi Zhang, et al.
  Published: (2021)
- Brain-Inspired Hardware Solutions for Inference in Bayesian Networks
  by: Leila Bagheriye, et al.
  Published: (2021)
- Neural Network Explainable AI Based on Paraconsistent Analysis: An Extension
  by: Francisco S. Marcondes, et al.
  Published: (2021)
- Turning the blackbox into a glassbox: An explainable machine learning approach for understanding hospitality customer
  by: Ritu Sharma, et al.
  Published: (2021)
- Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization
  by: Morteza Esmaeili, et al.
  Published: (2021)