
Navigating Real and Fake in the Era of Advanced Generative AI

Huy H. Nguyen, National Institute of Informatics, Japan, huyhnguyen.work@gmail.com; Siyun Liang, Technical University of Munich, Germany; Junichi Yamagishi, National Institute of Informatics, Japan; Isao Echizen, National Institute of Informatics, Japan, and The University of Tokyo, Japan
 
Suggested Citation
Huy H. Nguyen, Siyun Liang, Junichi Yamagishi and Isao Echizen (2025), "Navigating Real and Fake in the Era of Advanced Generative AI", APSIPA Transactions on Signal and Information Processing: Vol. 14: No. 3, e202. http://dx.doi.org/10.1561/116.20240091

Publication Date: 25 Jun 2025
© 2025 H. H. Nguyen, S. Liang, J. Yamagishi and I. Echizen
 
Subjects
Classification and prediction,  Deep learning,  Robustness,  Privacy,  Artificial intelligence methods in security and privacy,  Cyber-physical systems security and privacy,  Forensics,  Security architectures,  Security,  Pattern recognition and learning
 
Keywords
AI-generated content, deepfake, real or fake, generative AI, AI-powered framework, countermeasures, provenance, intention, context
 


Open Access

This article is published under the terms of the CC BY-NC license.


In this article:
Introduction 
Benign Users and Adversaries in the Era of Generative AI 
A Generalized AI-powered Framework 
Deepfake Attacks on the AI-powered Frameworks 
Deepfake Countermeasures in the Era of Generative AI 
Case Study 1: AI-powered Communication Framework 
Case Study 2: Non-interactive AI-powered Framework for Content Editing 
Conclusions 
Acknowledgments 
References 

Abstract

Over the past few decades, generative AI methods have advanced significantly, making it increasingly challenging to distinguish genuine photographs from AI-generated images, sometimes also referred to as deepfakes. In response, numerous deepfake detection methods and models have been developed, achieving high accuracy. However, the evaluation of these detection methods is often limited to a single dataset, typically created by generating multiple images with a specific deepfake generation method and a fixed set of hyperparameters. This dataset is then randomly split into training and testing sets, but such an approach cannot account for the effect of hyperparameter variations on deepfake detection performance. This paper addresses the fundamental question of source mismatch, in which a model is trained on a specific deepfake generation source (including its hyperparameters) and tested on a different one, highlighting the need to investigate the causes and impacts of such a mismatch and to develop solutions to this critical issue.
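The evaluation gap described above can be illustrated with a toy experiment. The sketch below is purely hypothetical (synthetic Gaussian "features", a nearest-centroid stand-in for a real detector, and invented shift parameters are all assumptions, not the paper's method): a detector fit on fakes from one source scores well on a held-out split of that same source, yet degrades when the test fakes come from a mismatched source, e.g. the same generator run with different hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: "real" features sit at 0; "fake" features from a
# given source are shifted by a source-specific amount (a crude proxy
# for generator hyperparameters affecting detectable artifacts).
def make_source(n, fake_shift):
    real = rng.normal(0.0, 1.0, size=(n, 8))
    fake = rng.normal(fake_shift, 1.0, size=(n, 8))
    X = np.vstack([real, fake])
    y = np.array([0] * n + [1] * n)  # 0 = real, 1 = fake
    return X, y

X_a, y_a = make_source(500, fake_shift=2.0)  # training source A
X_b, y_b = make_source(500, fake_shift=0.5)  # mismatched test source B

# Minimal nearest-centroid "detector" trained on source A only.
centroids = np.stack([X_a[y_a == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

acc_same = (predict(X_a) == y_a).mean()   # same-source accuracy
acc_cross = (predict(X_b) == y_b).mean()  # cross-source accuracy
print(f"same-source: {acc_same:.2f}, cross-source: {acc_cross:.2f}")
```

With these assumed parameters the same-source accuracy is near perfect while the cross-source accuracy drops sharply, mirroring the source-mismatch effect the abstract describes: a random split of a single-source dataset would never expose this gap.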

DOI: 10.1561/116.20240091

Companion

APSIPA Transactions on Signal and Information Processing Special Issue - Deepfakes, Unrestricted Adversaries, and Synthetic Realities in the Generative AI Era
See the other articles that are part of this special issue.