Analysis of Application Examples of Differential Privacy in Deep Learning
Artificial intelligence is now widely deployed, and the privacy leakage problems that accompany it have drawn growing attention. Attacks on deep neural networks, such as model inference attacks, can extract user information from trained models with relative ease, so privacy protection in deep learning is necessary. Differential privacy, a prominent topic in privacy preservation in recent years, provides a rigorous privacy guarantee and can also be used to preserve privacy in deep learning. Although many articles have proposed different methods for combining differential privacy with deep learning, no comprehensive survey has analyzed and compared the differences and connections between these techniques. This paper therefore compares different differentially private methods in deep learning. We comparatively analyze and classify several deep learning models trained under differential privacy. We also examine the application of differential privacy in Generative Adversarial Networks (GANs), comparing and analyzing these models. Finally, we summarize the application of differential privacy in deep neural networks.
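The abstract centers on training deep networks under differential privacy; the most widely used concrete recipe for this is DP-SGD (per-example gradient clipping followed by calibrated Gaussian noise). The sketch below is a minimal NumPy illustration of that recipe, not code from the paper; the function name `dp_sgd_step` and the values of `lr`, `max_grad_norm`, and `noise_multiplier` are assumptions chosen for illustration.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1,
                max_grad_norm=1.0, noise_multiplier=1.1, rng=None):
    """One illustrative DP-SGD update (a sketch, not the paper's method).

    per_example_grads has shape (batch_size, n_params): one gradient per
    training example. Each gradient is clipped to L2 norm `max_grad_norm`,
    the clipped gradients are summed, Gaussian noise with standard deviation
    `noise_multiplier * max_grad_norm` is added, and the result is averaged
    over the batch before a plain gradient step.
    """
    rng = rng if rng is not None else np.random.default_rng()
    batch_size = per_example_grads.shape[0]

    # Clip each example's gradient to bound its individual influence.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, max_grad_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale

    # Add Gaussian noise calibrated to the clipping norm (the sensitivity).
    noise = rng.normal(0.0, noise_multiplier * max_grad_norm,
                       size=params.shape)
    noisy_mean_grad = (clipped.sum(axis=0) + noise) / batch_size

    return params - lr * noisy_mean_grad

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    params = np.zeros(4)
    grads = rng.normal(size=(8, 4))  # 8 hypothetical per-example gradients
    print(dp_sgd_step(params, grads, rng=rng))
```

Libraries such as TensorFlow Privacy and Opacus implement this same pattern and additionally track the cumulative (epsilon, delta) privacy budget with a privacy accountant over the course of training.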
Main Authors: Zhidong Shen, Ting Zhong
Format: Article
Language: English (EN)
Published: Hindawi Limited, 2021
Published in: Computational Intelligence and Neuroscience, Vol 2021 (2021)
ISSN: 1687-5273
DOI: 10.1155/2021/4244040
Subjects: Computer applications to medicine. Medical informatics (R858-859.7); Neurosciences. Biological psychiatry. Neuropsychiatry (RC321-571)
Online Access: https://doaj.org/article/a8167e7c4ee64b5785d00ee332a31e25
Source: DOAJ (Directory of Open Access Journals)