APSIPA Transactions on Signal and Information Processing > Vol 14 > Issue 3

Navigating Real and Fake in the Era of Advanced Generative AI

Huy H. Nguyen, National Institute of Informatics, Japan, huyhnguyen.work@gmail.com; Siyun Liang, Technical University of Munich, Germany; Junichi Yamagishi, National Institute of Informatics, Japan; Isao Echizen, National Institute of Informatics, Japan, and The University of Tokyo, Japan
 
Suggested Citation
Huy H. Nguyen, Siyun Liang, Junichi Yamagishi and Isao Echizen (2025), "Navigating Real and Fake in the Era of Advanced Generative AI", APSIPA Transactions on Signal and Information Processing: Vol. 14: No. 3, e202. http://dx.doi.org/10.1561/116.20240091

Publication Date: 25 Jun 2025
© 2025 H. H. Nguyen, S. Liang, J. Yamagishi and I. Echizen
 
Subjects
Classification and prediction,  Deep learning,  Robustness,  Privacy,  Artificial intelligence methods in security and privacy,  Cyber-physical systems security and privacy,  Forensics,  Security architectures,  Security,  Pattern recognition and learning
 
Keywords
AI-generated content, deepfake, real or fake, generative AI, AI-powered framework, countermeasures, provenance, intention, context
 

Open Access

This article is published under the terms of the CC BY-NC license.

In this article:
Introduction 
Benign Users and Adversaries in the Era of Generative AI 
A Generalized AI-powered Framework 
Deepfake Attacks on the AI-powered Frameworks 
Deepfake Countermeasures in the Era of Generative AI 
Case Study 1: AI-powered Communication Framework 
Case Study 2: Non-interactive AI-powered Framework for Content Editing 
Conclusions 
Acknowledgments 
References 

Abstract

In this era of advanced generative artificial intelligence (AI), in which machine-generated content coexists with human-created content, the question of authenticity extends beyond the binary "real or fake." Media forensics must evolve to encompass three crucial dimensions. Provenance: was the content created by a person, by AI, or by a combination of both? Intention: was the content created with a specific purpose, such as to inform, entertain, deceive, or manipulate? Context: how is the content presented, used, and interpreted in its particular setting? As AI-generated content becomes increasingly prevalent, the focus should shift towards ensuring transparency, ethical use, and accountability in content creation and dissemination, regardless of origin.

In this paper, we present a comprehensive countermeasure framework designed to address a broad spectrum of attacks related to generative AI, where merely distinguishing between “real” and “fake” content is no longer adequate. To highlight the necessity of this new perspective, we present two case studies that demonstrate the feasibility and effectiveness of our proposed framework. By implementing such a framework, we can more effectively navigate the challenges posed by advanced generative AI while harnessing the opportunities it presents.

DOI: 10.1561/116.20240091

Companion

APSIPA Transactions on Signal and Information Processing Special Issue - Deepfakes, Unrestricted Adversaries, and Synthetic Realities in the Generative AI Era
See the other articles that are part of this special issue.