Neural Network Explainable AI Based on Paraconsistent Analysis: An Extension
This paper explores the use of paraconsistent analysis for assessing neural networks from an explainable AI perspective. This is an early exploration paper aiming to understand whether paraconsistent analysis can be applied to understanding neural networks and whether it is worth further developing th...
Main Authors: Francisco S. Marcondes, Dalila Durães, Flávio Santos, José João Almeida, Paulo Novais
Format: Article
Language: English
Published: MDPI AG, 2021
Online Access: https://doaj.org/article/379b49d3b1c84723b28939aaae538024
Similar Items
- Feature-Based Interpretation of the Deep Neural Network
  by: Eun-Hun Lee, et al.
  Published: (2021)
- Monte Carlo Tree Search as a Tool for Self-Learning and Teaching People to Play Complete Information Board Games
  by: Víctor Gonzalo-Cristóbal, et al.
  Published: (2021)
- Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Explaining Biases in Machine Learning
  by: Alfonso Ortega, et al.
  Published: (2021)
- Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization
  by: Morteza Esmaeili, et al.
  Published: (2021)
- Comparison and Explanation of Forecasting Algorithms for Energy Time Series
  by: Yuyi Zhang, et al.
  Published: (2021)