Deepfake-Image Anti-Forensics with Adversarial Examples Attacks
Many deepfake-image forensic detectors have been proposed and improved due to the development of synthetic techniques. However, recent studies show that most of these detectors are not immune to adversarial example attacks. Therefore, understanding the impact of adversarial examples on their perform...
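For readers unfamiliar with the attack family named in the abstract, the sketch below illustrates one common way adversarial examples are crafted against an image classifier: a single-step Fast Gradient Sign Method (FGSM) perturbation applied to a binary deepfake detector. The toy detector, the epsilon value, and all variable names are illustrative assumptions, not the method proposed in the paper.

# Illustrative FGSM sketch (not the paper's method): perturb an input image in
# the gradient-sign direction so a binary deepfake detector's loss increases.
# The toy detector, epsilon, and input sizes are assumptions for demonstration.
import torch
import torch.nn as nn

# Toy stand-in for a deepfake detector: any differentiable image classifier works.
detector = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),            # logits: [real, fake]
)
detector.eval()

def fgsm_attack(image, true_label, epsilon=8 / 255):
    """One-step FGSM: shift each pixel by +/- epsilon along the loss gradient's sign."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(detector(image), true_label)
    loss.backward()
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# Usage with a placeholder image labeled "fake" (class 1).
x = torch.rand(1, 3, 64, 64)    # placeholder image in [0, 1]
y = torch.tensor([1])           # ground-truth label: fake
x_adv = fgsm_attack(x, y)
print(detector(x).argmax(1), detector(x_adv).argmax(1))

With a trained detector, the adversarially perturbed image often flips the predicted class while remaining visually close to the original, which is the effect the abstract refers to.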
Saved in:
Main authors: Li Fan, Wei Li, Xiaohui Cui
Format: Article
Language: EN
Published: MDPI AG, 2021
Subjects:
Online access: https://doaj.org/article/1afc76354295434683cdaa5a75e68368
Similar items
- Face Swapping Consistency Transfer with Neural Identity Carrier
  by: Kunlin Liu, et al.
  Published: (2021)
- Fair and Effective Policing for Neighborhood Safety: Understanding and Overcoming Selection Biases
  by: Weijeiying Ren, et al.
  Published: (2021)
- THE EXPERT SYSTEM OF CONTROL AND KNOWLEDGE ASSESSMENT
  by: V. Golovachyova, et al.
  Published: (2020)
- Pattern Recognition of Human Face With Photos Using KNN Algorithm
  by: Dedy Kurniadi, et al.
  Published: (2021)
- Optimization and improvement of fake news detection using deep learning approaches for societal benefit
  by: Tavishee Chauhan, M.E, et al.
  Published: (2021)