Detect Adversarial Attacks Against Deep Neural Networks With GPU Monitoring

Deep Neural Networks (DNNs) are the preferred choice for image-based machine learning applications in several domains. However, DNNs are vulnerable to adversarial attacks: carefully crafted perturbations introduced on input images to fool a DNN model. Adversarial attacks may prevent the application of DNNs to security-critical tasks; consequently, significant research effort has been devoted to securing DNNs. Typical approaches either increase model robustness, add detection capabilities to the model, or operate on the input data. Instead, in this paper we propose to detect ongoing attacks by monitoring performance indicators of the underlying Graphics Processing Unit (GPU). Adversarial attacks generate images that activate the neurons of a DNN differently than legitimate images do, which in turn alters GPU activity in ways that can be observed through software monitors and anomaly detectors. This paper presents our monitoring and detection system, together with an extensive experimental analysis covering 14 adversarial attacks, 3 datasets, and 12 models. Results show that, despite limits on the monitoring resolution, adversarial attacks can be detected in most cases, with peaks of detection accuracy above 90%.
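
The record does not include code, but the workflow the abstract describes (sample GPU performance indicators while a DNN classifies an image, then run an anomaly detector over the collected indicators) can be sketched roughly as follows. This is a minimal illustration assuming an NVIDIA GPU queried through NVML (via the pynvml bindings) and scikit-learn's IsolationForest as the anomaly detector; all function names, sampled counters, and thresholds are assumptions for illustration, not the authors' monitoring and detection system.

```python
# Hypothetical sketch of GPU-monitoring-based attack detection.
# Library choices (pynvml, scikit-learn) and all names are assumptions,
# not the implementation evaluated in the paper.
import threading
import time

import numpy as np
import pynvml
from sklearn.ensemble import IsolationForest

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first NVIDIA GPU


def sample_gpu_indicators():
    """Read a few GPU performance indicators exposed by NVML."""
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # % GPU / memory utilization
    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)         # device memory in bytes
    power = pynvml.nvmlDeviceGetPowerUsage(handle)       # milliwatts
    return [util.gpu, util.memory, mem.used, power]


def gpu_footprint(run_inference, image, period_s=0.01):
    """Sample GPU indicators in a background thread while one inference runs,
    and reduce the samples to a single feature vector for that image."""
    samples, done = [], threading.Event()

    def sampler():
        while not done.is_set():
            samples.append(sample_gpu_indicators())
            time.sleep(period_s)  # software-monitor resolution limits detection

    t = threading.Thread(target=sampler)
    t.start()
    run_inference(image)  # any callable that classifies `image` on the GPU
    done.set()
    t.join()
    return np.mean(samples, axis=0) if samples else np.zeros(4)


def fit_detector(legitimate_footprints):
    """Fit the anomaly detector on footprints of known-legitimate images only."""
    return IsolationForest(contamination=0.05, random_state=0).fit(legitimate_footprints)


def looks_adversarial(detector, footprint):
    """predict() == -1 marks a suspicious (possibly adversarial) input."""
    return detector.predict([footprint])[0] == -1
```

In this sketch the detector is trained only on GPU footprints collected while classifying legitimate images, mirroring the anomaly-detection setting the abstract describes, and any inference whose footprint falls outside that learned profile is flagged.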

Bibliographic Details
Main Authors: Tommaso Zoppi, Andrea Ceccarelli
Format: Article
Language: EN
Published: IEEE, 2021
Subjects: Attack detection; anomaly detection; graphics processing unit; deep neural networks; adversarial attacks; image classification; Electrical engineering. Electronics. Nuclear engineering (TK1-9971)
Online Access: https://doaj.org/article/212646a711e04129ba84416ccb6de4ac
Record ID: oai:doaj.org-article:212646a711e04129ba84416ccb6de4ac
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3125920
Published in: IEEE Access, Vol 9, Pp 150579-150591 (2021)
Full text: https://ieeexplore.ieee.org/document/9605606/