Search-and-Attack: Temporally Sparse Adversarial Perturbations on Videos

Bibliographic Details
Main Authors: Hwan Heo, Dohwan Ko, Jaewon Lee, Youngjoon Hong, Hyunwoo J. Kim
Format: Article
Language: English
Published: IEEE, 2021
Online Access: https://doaj.org/article/7b93adf29cfb4975b70c64124a8cca42
Description
Summary: Modern neural networks are known to be vulnerable to adversarial attacks in various domains. Although most attack methods densely change the input values, recent works have shown that deep neural networks (DNNs) are also vulnerable to sparse perturbations. Spatially sparse attacks on images or on individual frames of a video have proven effective, but temporally sparse perturbations on videos have been less explored. In this paper, we present a novel framework, called the Search-and-Attack scheme, to generate temporally sparse adversarial attacks on videos. The Search-and-Attack scheme first retrieves the most vulnerable frames and then attacks only those frames. Since identifying the most vulnerable set of frames involves an expensive combinatorial optimization problem, we introduce two surrogate objective functions: Magnitude of the Gradients (MoG) and Frame-wise Robustness Intensity (FRI). Combining them with iterative search schemes, we show through extensive experiments on three public benchmark datasets (UCF, HMDB, and Kinetics) that the proposed method achieves performance comparable to state-of-the-art dense attack methods.
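
The abstract describes the two-stage pipeline only at a high level. Below is a minimal, hypothetical PyTorch sketch of the idea: score each frame by the norm of the loss gradient with respect to that frame (one plausible reading of the MoG criterion), keep the top-k frames, and run a PGD-style attack whose perturbation is masked to those frames. The model interface (a clip-level classifier taking a (1, T, C, H, W) tensor), the [0, 1] pixel range, and the eps/alpha/steps values are assumptions for illustration, not details taken from the paper.

import torch
import torch.nn.functional as F

def select_vulnerable_frames(model, video, label, k):
    # Score each frame by the L2 norm of the loss gradient w.r.t. that
    # frame -- one plausible reading of the paper's MoG criterion.
    # video: (1, T, C, H, W) in [0, 1]; label: (1,)
    video = video.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(video), label)
    loss.backward()
    scores = video.grad.flatten(2).norm(dim=2).squeeze(0)  # (T,)
    return torch.topk(scores, k).indices                   # k frame indices

def attack_selected_frames(model, video, label, frame_idx,
                           eps=8 / 255, alpha=2 / 255, steps=10):
    # PGD-style untargeted attack confined to the chosen frames; the
    # binary mask leaves every other frame untouched, so the resulting
    # perturbation is temporally sparse.
    mask = torch.zeros_like(video)
    mask[:, frame_idx] = 1.0
    adv = video.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv + alpha * grad.sign() * mask        # masked ascent step
            adv = video + (adv - video).clamp(-eps, eps)  # project to eps-ball
            adv = adv.clamp(0.0, 1.0)                     # keep a valid video
    return adv.detach()

# Usage sketch (hypothetical names):
#   idx = select_vulnerable_frames(model, clip, label, k=4)
#   adv_clip = attack_selected_frames(model, clip, label, idx)

The gradient-masked PGD loop stands in for whatever iterative attack the paper actually uses; the point of the sketch is the search-then-attack split, where the expensive combinatorial frame selection is replaced by a cheap per-frame gradient score.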