Privacy-Preserving Generative Adversarial Network for Case-Based Explainability in Medical Image Analysis
Although Deep Learning models have achieved incredible results in medical image classification tasks, their lack of interpretability hinders their deployment in the clinical context. Case-based interpretability provides intuitive explanations, as it is a much more human-like approach than saliency-map-based interpretability. Nonetheless, since one is dealing with sensitive visual data, there is a high risk of exposing personal identity, threatening the individuals’ privacy. In this work, we propose a privacy-preserving generative adversarial network for the privatization of case-based explanations. We address the weaknesses of current privacy-preserving methods for visual data from three perspectives: realism, privacy, and explanatory value. We also introduce a counterfactual module in our Generative Adversarial Network that provides counterfactual case-based explanations in addition to standard factual explanations. Experiments were performed in a biometric and medical dataset, demonstrating the network’s potential to preserve the privacy of all subjects and keep its explanatory evidence while also maintaining a decent level of intelligibility.
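The abstract describes balancing three objectives (realism, privacy, and explanatory value) when generating privatized explanations. As a purely illustrative sketch — not the authors' implementation — such a generator objective is often expressed as a weighted sum of per-objective losses; the function name, argument names, and weights below are all hypothetical:

```python
# Generic multi-objective generator loss for a privacy-preserving GAN.
# Illustrative sketch only: weights and loss terms are assumptions, not
# taken from the paper.

def generator_loss(adv_loss, identity_sim, expl_loss,
                   w_real=1.0, w_priv=1.0, w_expl=1.0):
    """Combine three objectives into one scalar loss:
    - adv_loss: adversarial (realism) loss from the discriminator
    - identity_sim: similarity between the generated image and the source
      subject's identity (penalized, so the generator is pushed away from
      reproducing the original identity -> privacy)
    - expl_loss: disagreement between the task classifier's outputs on the
      original and privatized images (penalized, so the explanation's
      diagnostic evidence is preserved -> explanatory value)
    """
    return w_real * adv_loss + w_priv * identity_sim + w_expl * expl_loss

# Toy example with made-up values (binary-exact fractions):
total = generator_loss(adv_loss=0.5, identity_sim=0.25, expl_loss=0.25)
print(total)  # 1.0
```

With equal weights, lowering any one term trades off directly against the others, which mirrors the realism/privacy/explanatory-value tension the abstract highlights.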
Main Authors: Helena Montenegro, Wilson Silva, Jaime S. Cardoso
Format: article
Language: EN
Published: IEEE, 2021
Subjects: Case-based interpretability; deep learning; generative adversarial networks; privacy-preserving machine learning; medical image analysis; Electrical engineering. Electronics. Nuclear engineering (TK1-9971)
Online Access: https://doaj.org/article/b765b6586cc548ca911a50167e55362a
id |
oai:doaj.org-article:b765b6586cc548ca911a50167e55362a |
record_format |
dspace |
spelling |
oai:doaj.org-article:b765b6586cc548ca911a50167e55362a (2021-11-18T00:08:17Z); ISSN 2169-3536; DOI 10.1109/ACCESS.2021.3124844; https://doaj.org/article/b765b6586cc548ca911a50167e55362a; published 2021-01-01; https://ieeexplore.ieee.org/document/9598877/; https://doaj.org/toc/2169-3536; IEEE Access, Vol 9, Pp 148037-148047 (2021) |
institution |
DOAJ |
collection |
DOAJ |
language |
EN |
topic |
Case-based interpretability; deep learning; generative adversarial networks; privacy-preserving machine learning; medical image analysis; Electrical engineering. Electronics. Nuclear engineering; TK1-9971 |
description |
Although Deep Learning models have achieved incredible results in medical image classification tasks, their lack of interpretability hinders their deployment in the clinical context. Case-based interpretability provides intuitive explanations, as it is a much more human-like approach than saliency-map-based interpretability. Nonetheless, since one is dealing with sensitive visual data, there is a high risk of exposing personal identity, threatening the individuals’ privacy. In this work, we propose a privacy-preserving generative adversarial network for the privatization of case-based explanations. We address the weaknesses of current privacy-preserving methods for visual data from three perspectives: realism, privacy, and explanatory value. We also introduce a counterfactual module in our Generative Adversarial Network that provides counterfactual case-based explanations in addition to standard factual explanations. Experiments were performed in a biometric and medical dataset, demonstrating the network’s potential to preserve the privacy of all subjects and keep its explanatory evidence while also maintaining a decent level of intelligibility. |
format |
article |
author |
Helena Montenegro; Wilson Silva; Jaime S. Cardoso |
author_sort |
Helena Montenegro |
title |
Privacy-Preserving Generative Adversarial Network for Case-Based Explainability in Medical Image Analysis |
publisher |
IEEE |
publishDate |
2021 |
url |
https://doaj.org/article/b765b6586cc548ca911a50167e55362a |