Disparate Vulnerability to Membership Inference Attacks

A membership inference attack (MIA) against a machine-learning model enables an attacker to determine whether a given data record was part of the model’s training data or not. In this paper, we provide an in-depth study of the phenomenon of disparate vulnerability against MIAs: unequal success rate of MIAs against different population subgroups. We first establish necessary and sufficient conditions for MIAs to be prevented, both on average and for population subgroups, using a notion of distributional generalization. Second, we derive connections of disparate vulnerability to algorithmic fairness and to differential privacy. We show that fairness can only prevent disparate vulnerability against limited classes of adversaries. Differential privacy bounds disparate vulnerability but can significantly reduce the accuracy of the model. We show that estimating disparate vulnerability by naïvely applying existing attacks can lead to overestimation. We then establish which attacks are suitable for estimating disparate vulnerability, and provide a statistical framework for doing so reliably. We conduct experiments on synthetic and real-world data, finding significant evidence of disparate vulnerability in realistic settings.

Full description

Saved in:
Bibliographic Details
Main Authors: Kulynych Bogdan, Yaghini Mohammad, Cherubin Giovanni, Veale Michael, Troncoso Carmela
Format: article
Language: EN
Published: Sciendo 2022
Subjects: membership inference attacks, machine learning, fairness
Online Access: https://doaj.org/article/c3bf6102ce1b4060b1996827dc5cbec5
id oai:doaj.org-article:c3bf6102ce1b4060b1996827dc5cbec5
record_format dspace
spelling oai:doaj.org-article:c3bf6102ce1b4060b1996827dc5cbec5
issn 2299-0984
doi 10.2478/popets-2022-0023
doi_url https://doi.org/10.2478/popets-2022-0023
published_in Proceedings on Privacy Enhancing Technologies, Vol 2022, Iss 1, Pp 460-480 (2022)
institution DOAJ
collection DOAJ
language EN
topic membership inference attacks
machine learning
fairness
Ethics
BJ1-1725
Electronic computers. Computer science
QA75.5-76.95
description A membership inference attack (MIA) against a machine-learning model enables an attacker to determine whether a given data record was part of the model’s training data or not. In this paper, we provide an in-depth study of the phenomenon of disparate vulnerability against MIAs: unequal success rate of MIAs against different population subgroups. We first establish necessary and sufficient conditions for MIAs to be prevented, both on average and for population subgroups, using a notion of distributional generalization. Second, we derive connections of disparate vulnerability to algorithmic fairness and to differential privacy. We show that fairness can only prevent disparate vulnerability against limited classes of adversaries. Differential privacy bounds disparate vulnerability but can significantly reduce the accuracy of the model. We show that estimating disparate vulnerability by naïvely applying existing attacks can lead to overestimation. We then establish which attacks are suitable for estimating disparate vulnerability, and provide a statistical framework for doing so reliably. We conduct experiments on synthetic and real-world data, finding significant evidence of disparate vulnerability in realistic settings.
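To make the abstract's central notion concrete, the following is a minimal illustrative sketch (not the paper's method) of a loss-threshold membership inference attack evaluated per subgroup. All names, the synthetic loss distributions, and the assumption that one subgroup's model overfits more are hypothetical, introduced only to show how unequal attack accuracy across subgroups — disparate vulnerability — can arise and be measured.

```python
# Illustrative sketch: a simple loss-threshold MIA, evaluated per subgroup.
# A record is predicted "member" when the model's loss on it is below a
# threshold; members of the training set typically incur lower loss.
import numpy as np

rng = np.random.default_rng(0)

def loss_threshold_mia(losses, threshold):
    """Predict 'member' (True) when a record's per-example loss is below the threshold."""
    return losses < threshold

# Synthetic per-record losses (hypothetical data): members have lower loss
# than non-members, and subgroup 1 is assumed to overfit more, i.e. has a
# larger member/non-member loss gap.
n = 10_000
group = rng.integers(0, 2, size=2 * n)       # subgroup label per record
member = np.repeat([True, False], n)         # first half are training members
gap = np.where(group == 1, 1.5, 0.5)         # subgroup-dependent loss gap
losses = rng.normal(loc=np.where(member, 1.0, 1.0 + gap), scale=0.7)

pred = loss_threshold_mia(losses, threshold=1.5)
for g in (0, 1):
    mask = group == g
    acc = np.mean(pred[mask] == member[mask])
    print(f"subgroup {g}: attack accuracy = {acc:.3f}")
```

Under these assumptions the attack is markedly more accurate on the more-overfit subgroup, which is the kind of gap the paper's statistical framework is designed to estimate reliably rather than by naïvely applying existing attacks.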
format article
author Kulynych Bogdan
Yaghini Mohammad
Cherubin Giovanni
Veale Michael
Troncoso Carmela
title Disparate Vulnerability to Membership Inference Attacks
publisher Sciendo
publishDate 2022
url https://doaj.org/article/c3bf6102ce1b4060b1996827dc5cbec5