Understanding, Explanation, and Active Inference
While machine learning techniques have been transformative in solving a range of problems, an important challenge is to understand why they arrive at the decisions they output. Some have argued that this necessitates augmenting machine intelligence with understanding such that, when queried, a machi...
| Main Authors: | Thomas Parr, Giovanni Pezzulo |
|---|---|
| Format: | article |
| Language: | EN |
| Published: | Frontiers Media S.A., 2021 |
| Subjects: | |
| Online Access: | https://doaj.org/article/9d11f3a3f9a5462085c8de59506623cc |
Similar Items
- Comparison and Explanation of Forecasting Algorithms for Energy Time Series
  by: Yuyi Zhang, et al.
  Published: (2021)
- Brain-Inspired Hardware Solutions for Inference in Bayesian Networks
  by: Leila Bagheriye, et al.
  Published: (2021)
- Neural Network Explainable AI Based on Paraconsistent Analysis: An Extension
  by: Francisco S. Marcondes, et al.
  Published: (2021)
- Turning the blackbox into a glassbox: An explainable machine learning approach for understanding hospitality customer
  by: Ritu Sharma, et al.
  Published: (2021)
- Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization
  by: Morteza Esmaeili, et al.
  Published: (2021)