Analysis of Application Examples of Differential Privacy in Deep Learning
Artificial intelligence is now widely deployed, and the privacy leakage problems that accompany it have drawn increasing attention. Attacks on deep neural networks, such as model inference attacks, can extract user information from trained models with relative ease. It is therefore necessary to protect privacy...
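The abstract names differential privacy as the defense against such attacks. As a minimal illustration of the core idea (not the paper's own method), the sketch below applies the classic Laplace mechanism to a count query: noise with scale sensitivity/ε is added so that any single record's presence changes the output distribution only slightly. The function names and parameters are illustrative assumptions.

```python
import math
import random


def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)


def dp_count(true_count: float, epsilon: float, sensitivity: float = 1.0) -> float:
    """Laplace mechanism: release a count with epsilon-differential privacy.

    A count query has sensitivity 1 (one record changes the count by at
    most 1), so the noise scale is sensitivity / epsilon.
    """
    return true_count + laplace_noise(sensitivity / epsilon)
```

Smaller ε means a larger noise scale and stronger privacy at the cost of accuracy; in deep learning this idea is typically applied to gradients (as in DP-SGD) rather than to query outputs directly.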
Saved in:

| Main authors: | Zhidong Shen, Ting Zhong |
|---|---|
| Format: | article |
| Language: | EN |
| Published: | Hindawi Limited, 2021 |
| Online access: | https://doaj.org/article/a8167e7c4ee64b5785d00ee332a31e25 |
Similar Items
- Privacy-first health research with federated learning
  by: Adam Sadilek, et al.
  Published: (2021)
- Federated deep learning for detecting COVID-19 lung abnormalities in CT: a privacy-preserving multinational validation study
  by: Qi Dou, et al.
  Published: (2021)
- Privacy protections to encourage use of health-relevant digital data in a learning health system
  by: Deven McGraw, et al.
  Published: (2021)
- Bias and privacy in AI's cough-based COVID-19 recognition
  by: Humberto Perez-Espinosa, et al.
  Published: (2021)
- Bias and privacy in AI's cough-based COVID-19 recognition – Authors' reply
  by: Harry Coppock, et al.
  Published: (2021)