Understanding, Explanation, and Active Inference

While machine learning techniques have been transformative in solving a range of problems, an important challenge is to understand why they arrive at the decisions they output. Some have argued that this necessitates augmenting machine intelligence with understanding such that, when queried, a machine is able to explain its behaviour (i.e., explainable AI). In this article, we address the issue of machine understanding from the perspective of active inference. This paradigm enables decision making based upon a model of how data are generated. The generative model contains those variables required to explain sensory data, and its inversion may be seen as an attempt to explain the causes of these data. Here we are interested in explanations of one’s own actions. This implies a deep generative model that includes a model of the world, used to infer policies, and a higher-level model that attempts to predict which policies will be selected based upon a space of hypothetical (i.e., counterfactual) explanations—and which can subsequently be used to provide (retrospective) explanations about the policies pursued. We illustrate the construct validity of this notion of understanding in relation to human understanding by highlighting the similarities in computational architecture and the consequences of its dysfunction.
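
A minimal sketch of the decision-making scheme the abstract describes, policy selection by minimising expected free energy under a discrete generative model, follows. This is an illustrative toy in Python, not the authors' implementation: the matrices A (likelihood), B (transitions), and C (log prior preferences) and the quantity G follow conventional active-inference notation, and every numeric value is made up.

    import numpy as np

    def softmax(x):
        """Normalised exponential; maps negative expected free energies to probabilities."""
        e = np.exp(x - x.max())
        return e / e.sum()

    def expected_free_energy(A, B, C, qs, policy):
        """Accumulate expected free energy G along one policy (a sequence of actions)."""
        G = 0.0
        for action in policy:
            qs = B[action] @ qs                        # predicted hidden states under the policy
            qo = A @ qs                                # predicted observations
            risk = qo @ (np.log(qo + 1e-16) - C)       # divergence from preferred outcomes
            H = -(A * np.log(A + 1e-16)).sum(axis=0)   # observation entropy for each state
            G += risk + H @ qs                         # risk plus expected ambiguity
        return G

    # Toy 2-state, 2-observation, 2-action world; all numbers are invented for illustration.
    A = np.array([[0.9, 0.1],
                  [0.1, 0.9]])                         # P(observation | state)
    B = [np.array([[1.0, 1.0], [0.0, 0.0]]),           # action 0: move to state 0
         np.array([[0.0, 0.0], [1.0, 1.0]])]           # action 1: move to state 1
    C = np.log(np.array([0.8, 0.2]))                   # log preferences: outcome 0 is preferred
    qs = np.array([0.5, 0.5])                          # current beliefs about the hidden state

    policies = [(0,), (1,)]                            # one-step policies, one per action
    G = np.array([expected_free_energy(A, B, C, qs, p) for p in policies])
    print("P(policy) =", softmax(-G))                  # most probability mass falls on action 0

In the paper's terms, the higher-level model would sit above this loop, predicting which of these policies will be selected under hypothetical (counterfactual) settings of the model's variables, and could subsequently be queried for retrospective explanations of the behaviour.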

Bibliographic Details
Main Authors: Thomas Parr, Giovanni Pezzulo
Format: Article
Language: English
Published: Frontiers Media S.A., 2021
Published in: Frontiers in Systems Neuroscience, Vol 15 (2021)
DOI: 10.3389/fnsys.2021.772641
ISSN: 1662-5137
Subjects: active inference, explainable AI, insight, decision making, generative model, understanding, Neurosciences. Biological psychiatry. Neuropsychiatry (RC321-571)
Online Access: https://doaj.org/article/9d11f3a3f9a5462085c8de59506623cc
Full Text: https://www.frontiersin.org/articles/10.3389/fnsys.2021.772641/full