Humans can decipher adversarial images
Convolutional Neural Networks (CNNs) have reached human-level benchmarks in classifying images, but they can be “fooled” by adversarial examples that elicit bizarre misclassifications from machines. Here, the authors show how humans can anticipate which objects CNNs will see in adversarial images....
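As a rough illustration of the kind of adversarial image the abstract refers to, the sketch below uses the fast gradient sign method (FGSM), one common way to perturb an input so a CNN misclassifies it. This is an assumption-laden example, not the procedure studied in the paper: the pretrained resnet18 model, the epsilon value, and the random placeholder input are all illustrative choices.

```python
# Minimal FGSM sketch (illustrative only; not the paper's method).
# Assumptions: torchvision's pretrained resnet18, epsilon = 0.03,
# and a random tensor standing in for a real image.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, label, epsilon=0.03):
    """Perturb `image` in the direction that increases the loss on `label`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Placeholder input (1 x 3 x 224 x 224) and an arbitrary ImageNet class index.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([207])
x_adv = fgsm_attack(x, y)
print(model(x_adv).argmax(dim=1))  # prediction often differs from the original
```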
Main Authors: Zhenglong Zhou, Chaz Firestone
Format: Article
Language: English
Published: Nature Portfolio, 2019
Online Access: https://doaj.org/article/11bc950f138d40c9b8eaad6a445e6db4
Similar Items
- Deciphering functional redundancy in the human microbiome
  by: Liang Tian, et al.
  Published: (2020)
- Intraoral image generation by progressive growing of generative adversarial network and evaluation of generated image quality by dentists
  by: Kazuma Kokomoto, et al.
  Published: (2021)
- Deepfake-Image Anti-Forensics with Adversarial Examples Attacks
  by: Li Fan, et al.
  Published: (2021)
- The study on the inverse problem of applied current thermoacoustic imaging based on generative adversarial network
  by: Liang Guo, et al.
  Published: (2021)
- DisasterGAN: Generative Adversarial Networks for Remote Sensing Disaster Image Generation
  by: Xue Rui, et al.
  Published: (2021)