Detect Adversarial Attacks Against Deep Neural Networks With GPU Monitoring
Deep Neural Networks (DNNs) are the preferred choice for image-based machine learning applications in several domains. However, DNNs are vulnerable to adversarial attacks: carefully crafted perturbations introduced into input images to fool a DNN model. Adversarial attacks may prevent the app...
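The abstract pairs two ideas: perturbations crafted to fool a DNN, and GPU monitoring as a detection signal. The Python sketch below is illustrative only, not the authors' implementation: it shows an FGSM-style perturbation and an NVML utilization probe, where the epsilon value, the device index, and the choice of utilization percentages as the monitored feature are all assumptions.

```python
# Minimal sketch (not the paper's method) of the two ingredients named in the
# abstract: crafting an adversarial input and sampling GPU activity via NVML.
import torch
import torch.nn.functional as F
import pynvml  # NVIDIA Management Library bindings (pip install nvidia-ml-py)

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Craft an FGSM perturbation of the input image that raises the model's loss.

    epsilon=0.03 is an arbitrary illustrative budget, not from the paper.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the gradient-sign direction, clipped back to valid pixel range.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

def gpu_utilization_sample(device_index=0):
    """Read instantaneous GPU core and memory utilization (percent) via NVML."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)
    util = pynvml.nvmlDeviceGetUtilizationRates(handle)
    pynvml.nvmlShutdown()
    return util.gpu, util.memory

# Usage idea (assumed, not from the paper): sample gpu_utilization_sample()
# around inference calls and flag inputs whose GPU footprint deviates from a
# baseline collected on clean traffic.
```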
Main Authors: Tommaso Zoppi, Andrea Ceccarelli
Format: Article
Language: English
Published: IEEE, 2021
Online Access: https://doaj.org/article/212646a711e04129ba84416ccb6de4ac
Similar Items
- Search-and-Attack: Temporally Sparse Adversarial Perturbations on Videos
  by: Hwan Heo, et al.
  Published: (2021)
- Multi-Targeted Adversarial Example in Evasion Attack on Deep Neural Network
  by: Hyun Kwon, et al.
  Published: (2018)
- Adversarial attacks on deep learning models in smart grids
  by: Jingbo Hao, et al.
  Published: (2022)
- Advances in Adversarial Attacks and Defenses in Computer Vision: A Survey
  by: Naveed Akhtar, et al.
  Published: (2021)
- Presentation Attack Detection on Limited-Resource Devices Using Deep Neural Classifiers Trained on Consistent Spectrogram Fragments
  by: Kacper Kubicki, et al.
  Published: (2021)