Humans can decipher adversarial images

Convolutional Neural Networks (CNNs) have reached human-level benchmarks in classifying images, but they can be “fooled” by adversarial examples that elicit bizarre misclassifications from machines. Here, the authors show how humans can anticipate which objects CNNs will see in adversarial images.

Bibliographic Details
Main authors: Zhenglong Zhou, Chaz Firestone
Format: article
Language: EN
Published: Nature Portfolio, 2019
Subjects: Q
Online access: https://doaj.org/article/11bc950f138d40c9b8eaad6a445e6db4