Restricted Evasion Attack: Generation of Restricted-Area Adversarial Example
Deep neural networks (DNNs) show superior performance in image and speech recognition. However, adversarial examples, created by adding small amounts of noise to an original sample, can cause a DNN to misclassify. Conventional studies on adversarial examples have focused on ways of causing misclassifi...
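The abstract's core idea, that a small perturbation of an input can flip a classifier's decision, can be illustrated with a minimal FGSM-style sketch (in the spirit of Goodfellow et al.'s fast gradient sign method, not this paper's restricted-area method). The toy linear classifier, weights, and sample values below are hypothetical, chosen only to make the effect visible:

```python
import numpy as np

# Toy binary linear classifier: predicts class 1 if w.x + b > 0.
# Weights and bias are illustrative, not from the paper.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    """Return the predicted class (0 or 1) for input x."""
    return int(w @ x + b > 0)

x = np.array([0.5, 0.4, 0.3])   # original sample, classified as class 0

# FGSM-style perturbation: for a linear model, the gradient of the score
# w.x + b with respect to x is just w, so stepping in the direction
# sign(w) pushes the score upward while keeping the change per feature
# bounded by eps.
eps = 0.1
x_adv = x + eps * np.sign(w)    # small perturbation, flips the decision

print(predict(x), predict(x_adv))   # original vs adversarial class
```

With these toy values the original sample scores -0.05 (class 0) while the perturbed sample scores 0.275 (class 1), even though no feature moved by more than 0.1. The paper's restricted-area variant additionally constrains *where* in the input the noise may be placed, which this unconstrained sketch does not capture.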
| Main Authors: | Hyun Kwon, Hyunsoo Yoon, Daeseon Choi |
|---|---|
| Format: | article |
| Language: | EN |
| Published: | IEEE, 2019 |
| Subjects: | |
| Online Access: | https://doaj.org/article/18e15e9596274820aa6894a854aac8f4 |
Similar Items
- Multi-Targeted Adversarial Example in Evasion Attack on Deep Neural Network
  by: Hyun Kwon, et al.
  Published: (2018)
- Selective Untargeted Evasion Attack: An Adversarial Example That Will Not Be Classified as Certain Avoided Classes
  by: Hyun Kwon, et al.
  Published: (2019)
- Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey
  by: Naveed Akhtar, et al.
  Published: (2021)
- Adversarial attacks on deep learning models in smart grids
  by: Jingbo Hao, et al.
  Published: (2022)
- Adversarial Attack for SAR Target Recognition Based on UNet-Generative Adversarial Network
  by: Chuan Du, et al.
  Published: (2021)