Textual Backdoor Defense via Poisoned Sample Recognition

Deep learning models are vulnerable to backdoor attacks. In existing research, textual backdoor attacks based on data poisoning achieve success rates as high as 100%. To strengthen natural language processing models' defenses against backdoor attacks, we propose a textual backdoor defense m...


Bibliographic Details
Main Authors: Kun Shao, Yu Zhang, Junan Yang, Hui Liu
Format: article
Language: EN
Published: MDPI AG 2021
Subjects: T
Online Access: https://doaj.org/article/539075c4b9b94a4daaa69c6db1972118