Restricted Evasion Attack: Generation of Restricted-Area Adversarial Example
Deep neural networks (DNNs) show superior performance in image and speech recognition. However, adversarial examples, created by adding small perturbations to an original sample, can cause a DNN to misclassify. Conventional studies on adversarial examples have focused on ways of causing misclassifi...
Saved in:

| Main Authors: | Hyun Kwon, Hyunsoo Yoon, Daeseon Choi |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2019 |
| Subjects: | |
| Online Access: | https://doaj.org/article/18e15e9596274820aa6894a854aac8f4 |
Similar Items
- Multi-Targeted Adversarial Example in Evasion Attack on Deep Neural Network
  by: Hyun Kwon, et al.
  Published: (2018)
- Selective Untargeted Evasion Attack: An Adversarial Example That Will Not Be Classified as Certain Avoided Classes
  by: Hyun Kwon, et al.
  Published: (2019)
- Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey
  by: Naveed Akhtar, et al.
  Published: (2021)
- Adversarial attacks on deep learning models in smart grids
  by: Jingbo Hao, et al.
  Published: (2022)
- Adversarial Attack for SAR Target Recognition Based on UNet-Generative Adversarial Network
  by: Chuan Du, et al.
  Published: (2021)