p-Power Exponential Mechanisms for Differentially Private Machine Learning
Differentially private stochastic gradient descent (DP-SGD), which perturbs the clipped gradients, is a popular approach to private machine learning. The Gaussian mechanism (GM), combined with the moments accountant (MA), achieves a much better privacy-utility tradeoff than the advanced composition theorem. However, it is unclear whether the tradeoff can be improved further by mechanisms with other noise distributions. To this end, we extend GM (the case $p=2$) to the generalized $p$-power exponential mechanism family ($p$EM, with $p>0$) and establish its privacy guarantee. The privacy-utility tradeoff of GM can then be improved by searching for a noise distribution over this wider mechanism space. To implement $p$EM in practice, we design an effective sampling method and extend MA to $p$EM for tight estimation of the privacy loss. In addition, we formally prove the non-optimality of GM using the variational method. Numerical experiments validate the properties of $p$EM and give a comprehensive comparison between $p$EM and two other state-of-the-art methods. The results show that $p$EM is preferable when the noise variance is small relative to the signal and the dimension is not too high.
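The $p$EM family described above perturbs gradients with noise drawn from the $p$-power exponential (generalized Gaussian) density, proportional to $\exp(-(|x|/\sigma)^p)$; GM corresponds to $p=2$ (up to a rescaling of $\sigma$) and the Laplace mechanism to $p=1$. The Python sketch below is a minimal illustration under that assumption, not the paper's implementation: it samples $p$EM noise with the standard Gamma-transform method for generalized Gaussian variates and applies it in a DP-SGD-style step with per-example clipping. The function names, the clipping bound `clip_norm`, and the scale `sigma` are hypothetical.

```python
import numpy as np

def sample_pem_noise(p, sigma, size, rng=None):
    """Draw noise with density proportional to exp(-(|x| / sigma) ** p).

    Uses the standard Gamma-transform sampler for generalized Gaussian
    variates: if G ~ Gamma(shape=1/p, scale=1), then sign * sigma * G**(1/p)
    has the target density. For p = 2 this is a Gaussian with standard
    deviation sigma / sqrt(2); for p = 1 it is Laplace with scale sigma.
    """
    rng = rng if rng is not None else np.random.default_rng()
    g = rng.gamma(shape=1.0 / p, scale=1.0, size=size)
    signs = rng.choice(np.array([-1.0, 1.0]), size=size)
    return signs * sigma * g ** (1.0 / p)

def dp_sgd_step(per_example_grads, clip_norm, p, sigma, rng=None):
    """One DP-SGD-style update: clip each per-example gradient to L2 norm
    clip_norm, sum, perturb the sum with pEM noise, and average."""
    clipped_sum = np.zeros_like(per_example_grads[0])
    for grad in per_example_grads:
        scale = min(1.0, clip_norm / max(np.linalg.norm(grad), 1e-12))
        clipped_sum += grad * scale
    noise = sample_pem_noise(p, sigma, size=clipped_sum.shape, rng=rng)
    return (clipped_sum + noise) / len(per_example_grads)
```

For example, `sample_pem_noise(1.0, sigma, d)` draws Laplace-type noise, while values of $p$ between 1 and 2 interpolate between the Laplace and Gaussian mechanisms; with $\sigma$ fixed, smaller $p$ yields heavier tails.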
Saved in:
Main Authors: Yanan Li, Xuebin Ren, Fangyuan Zhao, Shusen Yang
Format: article
Language: EN
Published: IEEE, 2021
Subjects: Privacy protection; privacy-utility trade-off; noise variance; Gaussian mechanism; moments accountant; Electrical engineering. Electronics. Nuclear engineering; TK1-9971
Online Access: https://doaj.org/article/d91648a81c8e4395a2b8d3247e9c873c
id: oai:doaj.org-article:d91648a81c8e4395a2b8d3247e9c873c
record_format: dspace
Record date: 2021-11-26T00:00:50Z
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3129130
Source: IEEE Access, Vol 9, pp. 155018-155034 (2021)
Full text: https://ieeexplore.ieee.org/document/9618957/
Journal TOC: https://doaj.org/toc/2169-3536
institution: DOAJ
collection: DOAJ
language: EN
topic: Privacy protection; privacy-utility trade-off; noise variance; Gaussian mechanism; moments accountant; Electrical engineering. Electronics. Nuclear engineering; TK1-9971
format: article
author: Yanan Li; Xuebin Ren; Fangyuan Zhao; Shusen Yang
author_sort: Yanan Li
title: p-Power Exponential Mechanisms for Differentially Private Machine Learning
publisher: IEEE
publishDate: 2021
url: https://doaj.org/article/d91648a81c8e4395a2b8d3247e9c873c
_version_: 1718409997049659392