Analysis of Application Examples of Differential Privacy in Deep Learning
Artificial intelligence is now widely applied, and the privacy leakage problems that accompany it have drawn increasing attention. Attacks such as model inference attacks on deep neural networks can easily extract user information from the trained networks. Therefore, it is necessary to protect privacy...
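The abstract names differential privacy as the protection mechanism. As context, a minimal sketch of the classic Laplace mechanism, the basic building block of differential privacy, is shown below; the function name and parameters are illustrative, not taken from the article.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Return a differentially private estimate of true_value.

    Adds Laplace(0, sensitivity/epsilon) noise, which satisfies
    epsilon-differential privacy for a numeric query with the given
    L1 sensitivity. (Illustrative sketch, not the article's method.)
    """
    rng = np.random.default_rng() if rng is None else rng
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privately release a count query (counts have sensitivity 1).
private_count = laplace_mechanism(120, sensitivity=1.0, epsilon=0.5)
```

A smaller epsilon adds more noise (stronger privacy, lower accuracy); deep-learning variants such as DP-SGD apply the same idea to clipped gradients during training.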
| Main Authors | Zhidong Shen, Ting Zhong |
|---|---|
| Format | article |
| Language | English |
| Published | Hindawi Limited, 2021 |
| Online Access | https://doaj.org/article/a8167e7c4ee64b5785d00ee332a31e25 |
Similar Items
- Privacy-first health research with federated learning
  by: Adam Sadilek, et al.
  Published: (2021)
- Federated deep learning for detecting COVID-19 lung abnormalities in CT: a privacy-preserving multinational validation study
  by: Qi Dou, et al.
  Published: (2021)
- Privacy protections to encourage use of health-relevant digital data in a learning health system
  by: Deven McGraw, et al.
  Published: (2021)
- Bias and privacy in AI's cough-based COVID-19 recognition
  by: Humberto Perez-Espinosa, et al.
  Published: (2021)
- Bias and privacy in AI's cough-based COVID-19 recognition – Authors' reply
  by: Harry Coppock, et al.
  Published: (2021)