Textual Adversarial Attacking with Limited Queries

Recent studies have shown that natural language processing (NLP) models are vulnerable to adversarial examples: inputs maliciously crafted by adding small, human-imperceptible perturbations to benign text, causing the target model to produce false predictions. Compared to charact...
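
To make the query-limited black-box setting named in the title concrete, the sketch below shows a generic greedy word-substitution attack that stops once a fixed query budget is spent. The synonym table, the predict interface, and the toy classifier are illustrative assumptions, not the method or code from the article.

    from typing import Callable, Dict, List

    # Hypothetical synonym table; a real attack would draw candidates
    # from word embeddings, WordNet, or a masked language model.
    SYNONYMS: Dict[str, List[str]] = {
        "good": ["fine", "decent"],
        "movie": ["film", "picture"],
    }

    def attack(text: str,
               predict: Callable[[str], int],
               query_budget: int = 50) -> str:
        """Greedily try synonym substitutions until the model's
        prediction flips or the query budget is exhausted."""
        original_label = predict(text)  # costs one query
        queries = 1
        words = text.split()
        for i, word in enumerate(words):
            for candidate in SYNONYMS.get(word, []):
                if queries >= query_budget:
                    return text  # budget exhausted, attack failed
                perturbed = words[:i] + [candidate] + words[i + 1:]
                queries += 1
                if predict(" ".join(perturbed)) != original_label:
                    return " ".join(perturbed)  # adversarial example found
        return text

    # Toy stand-in for a black-box sentiment classifier.
    toy_model = lambda s: int("good" in s)
    print(attack("a good movie", toy_model))  # e.g. "a fine movie"

Each candidate substitution consumes one model query, which is why the budget check sits inside the inner loop; under a tight budget the attack simply returns the unmodified input.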

Bibliographic Details
Main Authors: Yu Zhang, Junan Yang, Xiaoshuai Li, Hui Liu, Kun Shao
Format: article
Language: EN
Published: MDPI AG, 2021
Online Access: https://doaj.org/article/89ac0f34923e4dbbb9b19901a365a476