Deepfake-Image Anti-Forensics with Adversarial Examples Attacks

Many deepfake-image forensic detectors have been proposed and improved as synthesis techniques have developed. However, recent studies show that most of these detectors are not immune to adversarial-example attacks; understanding the impact of adversarial examples on their performance is therefore an important step towards improving deepfake-image detectors. This study presents an anti-forensics case study of two popular general deepfake detectors, evaluating their accuracy and generalization. We propose Poisson noise DeepFool (PNDF), an improved iterative adversarial-example generation method that simply and effectively attacks forensic detectors by adding perturbations to images in different directions. Our attack reduces a detector's AUC from 0.9999 to 0.0331 and its deepfake-image detection accuracy from 0.9997 to 0.0731. Compared with state-of-the-art studies, our work points to an important defense direction for future research on deepfake-image detectors: the generalization performance of detectors and their resistance to adversarial-example attacks.
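The abstract describes PNDF as a DeepFool-style iterative attack that nudges an image across a detector's decision boundary. Below is a minimal, hypothetical sketch of the underlying DeepFool step for a binary linear classifier f(x) = w·x + b; the `deepfool_linear` helper, the Poisson-noise term, and its 1e-3 scale are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def deepfool_linear(x, w, b, max_iter=50, overshoot=0.02, rng=None):
    """DeepFool-style minimal perturbation for a binary linear
    classifier f(x) = w.x + b. The Poisson-noise component mirrors
    the PNDF idea only illustratively (assumption, not the paper's
    exact algorithm)."""
    rng = np.random.default_rng(0) if rng is None else rng
    x_adv = x.astype(float).copy()
    orig_sign = np.sign(w @ x + b)
    for _ in range(max_iter):
        f = w @ x_adv + b
        if np.sign(f) != orig_sign:
            break  # label has flipped: attack succeeded
        # minimal L2 step toward the decision boundary of f
        r = -f * w / (np.linalg.norm(w) ** 2)
        # small Poisson-distributed noise term (scale is an assumption)
        noise = rng.poisson(1.0, size=x.shape) * 1e-3
        x_adv += (1 + overshoot) * r + noise
    return x_adv
```

For a deep detector, w and b would be replaced by a local linearization of the network at the current iterate, which is how DeepFool generalizes beyond linear models.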


Bibliographic Details
Main Authors: Li Fan, Wei Li, Xiaohui Cui
Format: article
Language: EN
Published: MDPI AG 2021
Subjects: adversarial examples, deepfake, general detectors, Poisson noise, Information technology
Online Access: https://doaj.org/article/1afc76354295434683cdaa5a75e68368
id oai:doaj.org-article:1afc76354295434683cdaa5a75e68368
record_format dspace
doi 10.3390/fi13110288
issn 1999-5903
fulltext https://www.mdpi.com/1999-5903/13/11/288
toc https://doaj.org/toc/1999-5903
citation Future Internet, Vol 13, Iss 288, p 288 (2021)
institution DOAJ
collection DOAJ
language EN
topic adversarial examples
deepfake
general detectors
Poisson noise
Information technology
T58.5-58.64
description Many deepfake-image forensic detectors have been proposed and improved as synthesis techniques have developed. However, recent studies show that most of these detectors are not immune to adversarial-example attacks; understanding the impact of adversarial examples on their performance is therefore an important step towards improving deepfake-image detectors. This study presents an anti-forensics case study of two popular general deepfake detectors, evaluating their accuracy and generalization. We propose Poisson noise DeepFool (PNDF), an improved iterative adversarial-example generation method that simply and effectively attacks forensic detectors by adding perturbations to images in different directions. Our attack reduces a detector's AUC from 0.9999 to 0.0331 and its deepfake-image detection accuracy from 0.9997 to 0.0731. Compared with state-of-the-art studies, our work points to an important defense direction for future research on deepfake-image detectors: the generalization performance of detectors and their resistance to adversarial-example attacks.
format article
author Li Fan
Wei Li
Xiaohui Cui
author_facet Li Fan
Wei Li
Xiaohui Cui
author_sort Li Fan
title Deepfake-Image Anti-Forensics with Adversarial Examples Attacks
title_short Deepfake-Image Anti-Forensics with Adversarial Examples Attacks
title_full Deepfake-Image Anti-Forensics with Adversarial Examples Attacks
title_fullStr Deepfake-Image Anti-Forensics with Adversarial Examples Attacks
title_full_unstemmed Deepfake-Image Anti-Forensics with Adversarial Examples Attacks
title_sort deepfake-image anti-forensics with adversarial examples attacks
publisher MDPI AG
publishDate 2021
url https://doaj.org/article/1afc76354295434683cdaa5a75e68368
work_keys_str_mv AT lifan deepfakeimageantiforensicswithadversarialexamplesattacks
AT weili deepfakeimageantiforensicswithadversarialexamplesattacks
AT xiaohuicui deepfakeimageantiforensicswithadversarialexamplesattacks
_version_ 1718412091446001664