Textual Backdoor Defense via Poisoned Sample Recognition
Deep learning models are vulnerable to backdoor attacks. In existing research, the success rate of textual backdoor attacks based on data poisoning can reach 100%. To enhance the robustness of natural language processing models against backdoor attacks, we propose a textual backdoor defense m...
Main Authors: Kun Shao, Yu Zhang, Junan Yang, Hui Liu
Format: article
Language: EN
Published: MDPI AG, 2021
Online Access: https://doaj.org/article/539075c4b9b94a4daaa69c6db1972118
Similar Items
- Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey
  by: Naveed Akhtar, et al.
  Published: (2021)
- Textual Adversarial Attacking with Limited Queries
  by: Yu Zhang, et al.
  Published: (2021)
- Detect Adversarial Attacks Against Deep Neural Networks With GPU Monitoring
  by: Tommaso Zoppi, et al.
  Published: (2021)
- Multi-Targeted Adversarial Example in Evasion Attack on Deep Neural Network
  by: Hyun Kwon, et al.
  Published: (2018)
- Restricted Evasion Attack: Generation of Restricted-Area Adversarial Example
  by: Hyun Kwon, et al.
  Published: (2019)