Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Explaining Biases in Machine Learning

Machine learning methods are growing in relevance for biometrics and personal information processing in domains such as forensics, e-health, recruitment, and e-learning. In these domains, white-box (human-readable) explanations of systems built on machine learning methods become crucial. Inductive logic programming (ILP) is a subfield of symbolic AI aimed at automatically learning declarative theories about how data are processed. Learning from interpretation transition (LFIT) is an ILP technique that can learn a propositional logic theory equivalent to a given black-box system (under certain conditions). The present work takes a first step toward a general methodology for incorporating accurate declarative explanations into classic machine learning by checking the viability of LFIT in a specific AI application scenario: fair recruitment based on an automatic tool, generated with machine learning methods, for ranking Curricula Vitae that incorporates soft biometric information (gender and ethnicity). We show the expressiveness of LFIT for this specific problem and propose a scheme that can be applied to other domains. To check its ability to cope with other domains regardless of the machine learning paradigm used, we also ran a preliminary test of the expressiveness of LFIT, feeding it a real dataset on adult incomes taken from the US census, in which income level is treated as a function of the remaining attributes, to verify whether LFIT can provide a logical theory that supports and explains to what extent higher incomes are biased by gender and ethnicity.
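The abstract's core idea is to accompany a black-box classifier with a human-readable propositional theory. The sketch below is only a rough illustration of that goal on the same US census (Adult) income data mentioned above: it binarizes a handful of attributes, including the soft-biometric ones (gender and ethnicity), and fits a shallow decision tree whose branches can be read as if-then rules. It is not the paper's LFIT pipeline (LFIT induces a propositional logic program from interpretation transitions); the dataset URL, predicate choices, and tree depth are assumptions made for this example.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Public UCI mirror of the Adult census data (assumed reachable; the file has no header row).
URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
COLS = ["age", "workclass", "fnlwgt", "education", "education-num", "marital-status",
        "occupation", "relationship", "race", "sex", "capital-gain", "capital-loss",
        "hours-per-week", "native-country", "income"]
df = pd.read_csv(URL, names=COLS, skipinitialspace=True, na_values="?").dropna()

# Propositional (Boolean) encoding of a few attributes, including the soft-biometric
# ones (sex, race) whose influence on income level the learned rules should expose.
# The chosen predicates and thresholds are illustrative assumptions, not the paper's.
facts = pd.DataFrame({
    "male":       df["sex"] == "Male",
    "white":      df["race"] == "White",
    "higher_edu": df["education"].isin(["Bachelors", "Masters", "Doctorate"]),
    "over_40h":   df["hours-per-week"] > 40,
    "married":    df["marital-status"] == "Married-civ-spouse",
})
high_income = df["income"].str.contains(">50K")  # target: income level above 50K

# A shallow decision tree stands in for the white-box theory so that its branches
# can be read directly as if-then rules over the propositional variables above.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(facts, high_income)
print(export_text(tree, feature_names=list(facts.columns)))
```

A printed branch such as "married and higher_edu lead to high_income" would correspond roughly to a propositional rule of the form high_income :- married, higher_edu, which is the declarative style of explanation the paper evaluates LFIT for.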


Saved in:
Bibliographic Details
Main Authors: Alfonso Ortega, Julian Fierrez, Aythami Morales, Zilong Wang, Marina de la Cruz, César Luis Alonso, Tony Ribeiro
Format: article
Language: EN
Published: MDPI AG 2021
Subjects: explainable artificial intelligence; inductive logic programming; fair recruitment; fair income level; propositional logic; Electronic computers. Computer science (QA75.5-76.95)
Online Access: https://doaj.org/article/77e40effadd648ebb011d1dd19b2ed88
id oai:doaj.org-article:77e40effadd648ebb011d1dd19b2ed88
record_format dspace
spelling oai:doaj.org-article:77e40effadd648ebb011d1dd19b2ed88 2021-11-25T17:17:31Z
  DOI 10.3390/computers10110154
  ISSN 2073-431X
  Published online 2021-11-01
  Full text: https://www.mdpi.com/2073-431X/10/11/154
  Journal TOC: https://doaj.org/toc/2073-431X
  Source: Computers, Vol 10, Iss 11, p 154 (2021)
institution DOAJ
collection DOAJ
language EN
topic explainable artificial intelligence
inductive logic programming
fair recruitment
fair income level
propositional logic
Electronic computers. Computer science
QA75.5-76.95
format article
author Alfonso Ortega
Julian Fierrez
Aythami Morales
Zilong Wang
Marina de la Cruz
César Luis Alonso
Tony Ribeiro
author_sort Alfonso Ortega
title Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Explaining Biases in Machine Learning
publisher MDPI AG
publishDate 2021
url https://doaj.org/article/77e40effadd648ebb011d1dd19b2ed88
_version_ 1718412538415153152