Multi-Targeted Adversarial Example in Evasion Attack on Deep Neural Network
Deep neural networks (DNNs) are widely used for image recognition, speech recognition, pattern analysis, and intrusion detection. Recently, adversarial example attacks, in which the input data are modified only slightly and in ways imperceptible to humans, have emerged as a serious threat to DNNs...
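The abstract's core idea, an input perturbed so slightly that a human would not notice the change yet a DNN misclassifies it, can be illustrated with the fast gradient sign method (FGSM), one standard way to craft such examples. The sketch below is for illustration only: the stand-in model, the `epsilon` value, and the fake data are assumptions, not the paper's actual multi-targeted scheme.

```python
import torch
import torch.nn as nn

def fgsm_example(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Perturb x by one signed-gradient step so the model's loss increases."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step epsilon along the sign of the input gradient, then clamp so the
    # perturbed input stays in the valid [0, 1] pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage with a stand-in linear classifier (hypothetical, illustration only).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # a fake 28x28 grayscale "image" in [0, 1]
label = torch.tensor([3])      # its assumed true class
x_adv = fgsm_example(model, x, label)
print((x_adv - x).abs().max().item())  # per-pixel change is bounded by epsilon
```

Because the step is bounded by `epsilon` in the max norm, each pixel changes only slightly even when the perturbation flips the predicted class.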
| Field | Value |
|---|---|
| Main Authors | Hyun Kwon, Yongchul Kim, Ki-Woong PARK, Hyunsoo Yoon, Daeseon Choi |
| Format | Article |
| Language | English |
| Published | IEEE, 2018 |
| Online Access | https://doaj.org/article/d847cd17c9f642d58113ec58df1a3762 |
Similar Items

- Restricted Evasion Attack: Generation of Restricted-Area Adversarial Example
  by: Hyun Kwon, et al.
  Published: (2019)
- Selective Untargeted Evasion Attack: An Adversarial Example That Will Not Be Classified as Certain Avoided Classes
  by: Hyun Kwon, et al.
  Published: (2019)
- Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey
  by: Naveed Akhtar, et al.
  Published: (2021)
- Adversarial attacks on deep learning models in smart grids
  by: Jingbo Hao, et al.
  Published: (2022)
- Adversarial Attack for SAR Target Recognition Based on UNet-Generative Adversarial Network
  by: Chuan Du, et al.
  Published: (2021)