APSIPA Transactions on Signal and Information Processing > Vol 14 > Issue 1

How Good is ChatGPT at Audiovisual Deepfake Detection: A Comparative Study of ChatGPT, AI Models and Human Perception

Sahibzada Adil Shahzad, Academia Sinica, Taiwan and National Chengchi University, Taiwan; Ammarah Hashmi, Academia Sinica, Taiwan and National Tsing Hua University, Taiwan; Yan-Tsung Peng, National Tsing Hua University, Taiwan; Yu Tsao, National Tsing Hua University, Taiwan; Hsin-Min Wang, Academia Sinica, Taiwan, whm@iis.sinica.edu.tw
 
Suggested Citation
Sahibzada Adil Shahzad, Ammarah Hashmi, Yan-Tsung Peng, Yu Tsao and Hsin-Min Wang (2025), "How Good is ChatGPT at Audiovisual Deepfake Detection: A Comparative Study of ChatGPT, AI Models and Human Perception", APSIPA Transactions on Signal and Information Processing: Vol. 14: No. 1, e11. http://dx.doi.org/10.1561/116.20250004

Publication Date: 11 Jun 2025
© 2025 S. A. Shahzad, A. Hashmi, Y. T. Peng, Y. Tsao and H. M. Wang
 
Subjects
Signal processing for security and forensic analysis,  Multimodal signal processing,  Speech and spoken language processing,  Video analysis and event recognition,  Forensics,  Artificial intelligence methods in security and privacy,  Deep learning
 
Keywords
LLM, ChatGPT, deepfake, audiovisual deepfake, multi-modality, video forensics, forgery detection
 
Open Access

This article is published under the terms of the CC BY-NC license.

In this article:
Introduction 
Related Work 
Methodology 
Experiments and Results 
Ablation Study 
Limitations and Discussion 
Conclusions 
References 

Abstract

Multimodal deepfakes involving audiovisual manipulations are a growing threat because they are difficult to detect with the naked eye or with unimodal deep learning-based forgery detection methods. Audiovisual forensic models, while more capable than unimodal models, require large training datasets and are computationally expensive to train and run at inference. Furthermore, these models lack interpretability and often do not generalize well to unseen manipulations. In this study, we examine the ability of a large language model (LLM) (i.e., ChatGPT) to identify and account for possible visual and auditory artifacts and manipulations in audiovisual deepfake content. Extensive experiments are conducted on videos from two benchmark multimodal deepfake datasets to evaluate the detection performance of ChatGPT and compare it with that of state-of-the-art multimodal forensic models and human observers. Experimental results demonstrate the importance of domain knowledge and prompt engineering for video forgery detection tasks using LLMs. Unlike approaches based on end-to-end learning, ChatGPT can account for spatial and spatiotemporal artifacts and inconsistencies that may exist within or across modalities. Additionally, we discuss the limitations of ChatGPT for multimedia forensic tasks.
