Textual Adversarial Attacking with Limited Queries
Recent studies have shown that natural language processing (NLP) models are vulnerable to adversarial examples: inputs maliciously crafted by adding small perturbations to benign text that are imperceptible to humans, causing the target model to make false predictions. Compared to charact...
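The abstract describes the black-box setting this paper targets, where an attacker perturbs input text while querying the victim model as few times as possible. As a rough illustration of that setting only (not the method proposed in this paper), below is a minimal sketch of a greedy synonym-substitution attack under a fixed query budget; the `SYNONYMS` table, the `classify` callback, and the `max_queries` default are all hypothetical placeholders.

```python
# Minimal sketch of a query-limited, black-box word-substitution attack.
# This is NOT the method of the paper above; the synonym table, the
# classify() callback, and the query budget are hypothetical placeholders.
from typing import Callable, Dict, List, Optional, Tuple

# Toy synonym table; real attacks use embeddings or a thesaurus (e.g. WordNet).
SYNONYMS: Dict[str, List[str]] = {
    "good": ["fine", "great"],
    "movie": ["film", "picture"],
}

def greedy_attack(
    words: List[str],
    true_label: int,
    classify: Callable[[str], Tuple[int, float]],  # black box: (label, confidence)
    max_queries: int = 50,
) -> Tuple[List[str], bool, int]:
    """Greedily swap each word for the synonym that most lowers the model's
    confidence, stopping as soon as the label flips or the budget is spent."""
    queries = 0
    for i in range(len(words)):
        best: Optional[Tuple[float, str]] = None
        for cand in SYNONYMS.get(words[i].lower(), []):
            if queries >= max_queries:
                return words, False, queries  # query budget exhausted
            perturbed = words[:i] + [cand] + words[i + 1:]
            label, conf = classify(" ".join(perturbed))
            queries += 1
            if label != true_label:
                return perturbed, True, queries  # label flipped: attack succeeded
            if best is None or conf < best[0]:
                best = (conf, cand)
        if best is not None:
            words = words[:i] + [best[1]] + words[i + 1:]  # keep the best swap
    return words, False, queries

if __name__ == "__main__":
    # Hypothetical stand-in for the victim model's prediction API.
    def dummy_classify(text: str) -> Tuple[int, float]:
        return (1, 0.9) if "good" in text else (0, 0.9)

    adv, ok, used = greedy_attack("a good movie".split(), 1, dummy_classify)
    print(adv, ok, used)  # e.g. ['a', 'fine', 'movie'] True 1
```

Every call to `classify` counts against the budget, so a greedy word-by-word search like this keeps the query count roughly linear in sentence length, which is the basic tension any limited-query attack has to manage.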
Main Authors: | Yu Zhang, Junan Yang, Xiaoshuai Li, Hui Liu, Kun Shao
---|---
Format: | article
Language: | EN
Published: | MDPI AG, 2021
Subjects: |
Online Access: | https://doaj.org/article/89ac0f34923e4dbbb9b19901a365a476
Similar Items
- Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey
  by: Naveed Akhtar, et al.
  Published: (2021)
- Search-and-Attack: Temporally Sparse Adversarial Perturbations on Videos
  by: Hwan Heo, et al.
  Published: (2021)
- Adversarial Attack for SAR Target Recognition Based on UNet-Generative Adversarial Network
  by: Chuan Du, et al.
  Published: (2021)
- A Distributed Biased Boundary Attack Method in Black-Box Attack
  by: Fengtao Xiang, et al.
  Published: (2021)
- Detect Adversarial Attacks Against Deep Neural Networks With GPU Monitoring
  by: Tommaso Zoppi, et al.
  Published: (2021)