Analysis of Application Examples of Differential Privacy in Deep Learning
Artificial intelligence is now widely deployed, and the privacy leakage problems that accompany it have drawn increasing attention. Attacks such as model inference attacks on deep neural networks can easily extract user information from trained models. Therefore, it is necessary to protect privacy...
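As a concrete illustration of the differential privacy the abstract refers to, the sketch below shows the classic Laplace mechanism, which releases a query result with noise calibrated to the query's sensitivity and a privacy budget ε. This is a minimal, generic sketch; the function name and parameters are illustrative and are not taken from the article itself.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with Laplace noise of scale sensitivity / epsilon.

    Smaller epsilon means a stronger privacy guarantee but noisier output.
    """
    if rng is None:
        rng = np.random.default_rng()
    scale = sensitivity / epsilon
    # Laplace noise centered at 0; calibrated so the output is
    # epsilon-differentially private for a query with this sensitivity.
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Example: privatize a count query (sensitivity 1) with privacy budget 0.5.
noisy_count = laplace_mechanism(100, sensitivity=1.0, epsilon=0.5)
```

Deep-learning variants such as DP-SGD apply the same idea at training time, clipping and noising per-example gradients rather than a final query result.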
Saved in:
| Main Authors: | Zhidong Shen, Ting Zhong |
|---|---|
| Format: | article |
| Language: | EN |
| Published: | Hindawi Limited, 2021 |
| Subjects: | |
| Online Access: | https://doaj.org/article/a8167e7c4ee64b5785d00ee332a31e25 |
Similar Items
- Privacy-first health research with federated learning
  by: Adam Sadilek, et al.
  Published: (2021)
- Federated deep learning for detecting COVID-19 lung abnormalities in CT: a privacy-preserving multinational validation study
  by: Qi Dou, et al.
  Published: (2021)
- Privacy protections to encourage use of health-relevant digital data in a learning health system
  by: Deven McGraw, et al.
  Published: (2021)
- Bias and privacy in AI's cough-based COVID-19 recognition
  by: Humberto Perez-Espinosa, et al.
  Published: (2021)
- Bias and privacy in AI's cough-based COVID-19 recognition – Authors' reply
  by: Harry Coppock, et al.
  Published: (2021)