Restricted Evasion Attack: Generation of Restricted-Area Adversarial Example
Deep neural networks (DNNs) show superior performance in image and speech recognition. However, adversarial examples, created by adding small amounts of noise to an original sample, can cause a DNN to misclassify. Conventional studies on adversarial examples have focused on ways of causing misclassification...
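The abstract is cut off in this record, but the title points to adversarial noise confined to a restricted area of the input image. The sketch below is only a generic illustration of that idea, not the authors' algorithm: it limits a standard FGSM perturbation to a binary mask. The function name `masked_fgsm`, its parameters, and the choice of PyTorch are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def masked_fgsm(model, x, label, mask, eps=0.03):
    """One untargeted FGSM step whose noise is confined to a binary mask.

    x:    image batch of shape (N, C, H, W) with values in [0, 1]
    mask: tensor broadcastable to x; 1 where noise is allowed, 0 elsewhere
    eps:  L-infinity bound on the perturbation
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), label)
    loss.backward()
    # Gradient-sign step, zeroed outside the restricted area.
    noise = eps * x_adv.grad.sign() * mask
    return (x_adv + noise).clamp(0.0, 1.0).detach()
```

Zeroing the gradient-sign step outside the mask keeps the rest of the image pixel-identical to the original, which is the defining constraint of a restricted-area attack.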
Saved in:
| Main Authors: | Hyun Kwon, Hyunsoo Yoon, Daeseon Choi |
|---|---|
| Format: | article |
| Language: | EN |
| Published: | IEEE, 2019 |
| Online Access: | https://doaj.org/article/18e15e9596274820aa6894a854aac8f4 |
Similar Items
- Multi-Targeted Adversarial Example in Evasion Attack on Deep Neural Network
  by: Hyun Kwon, et al.
  Published: (2018)
- Selective Untargeted Evasion Attack: An Adversarial Example That Will Not Be Classified as Certain Avoided Classes
  by: Hyun Kwon, et al.
  Published: (2019)
- Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey
  by: Naveed Akhtar, et al.
  Published: (2021)
- Adversarial attacks on deep learning models in smart grids
  by: Jingbo Hao, et al.
  Published: (2022)
- Adversarial Attack for SAR Target Recognition Based on UNet-Generative Adversarial Network
  by: Chuan Du, et al.
  Published: (2021)