Over the past few decades, AI generative methods have advanced significantly, making it increasingly challenging to distinguish genuine photographs from AI-generated images, sometimes also referred to as deepfakes. In response, numerous deepfake detection methods and models have been developed, achieving high accuracy. However, the evaluation of these detection methods is often limited to a single dataset, typically created by generating many images with a specific deepfake generation method and a fixed set of hyperparameters. This dataset is then randomly split into training and testing sets, an approach that cannot capture the effect of hyperparameter variation on deepfake detection performance. This paper addresses the fundamental question of source mismatch, where a model is trained on a specific deepfake generation source (including its hyperparameters) and tested on a different one, highlighting the need to investigate the causes and impacts of such a mismatch and to develop solutions to this critical issue.
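To make the two evaluation protocols concrete, the following minimal sketch (not from the paper; all names such as `make_source` and the Gaussian feature model are illustrative assumptions) contrasts a random within-source split with a cross-source split, where the two "sources" stand in for the same generator run with different hyperparameters:

```python
# Minimal sketch of same-source vs. cross-source evaluation.
# Synthetic features only; `shift` is a stand-in for a generator's
# hyperparameter setting. Not the paper's actual experimental setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_source(shift, n=500, dim=16):
    """Fake detector features for one deepfake generation source."""
    real = rng.normal(0.0, 1.0, (n, dim))   # features of genuine images
    fake = rng.normal(shift, 1.0, (n, dim)) # features of generated images
    X = np.vstack([real, fake])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

# Two sources: same generator family, different hyperparameters.
Xa, ya = make_source(shift=1.0)
Xb, yb = make_source(shift=0.3)

clf = LogisticRegression(max_iter=1000)

# Protocol 1: random split within a single source (the common setup).
Xtr, Xte, ytr, yte = train_test_split(Xa, ya, test_size=0.3, random_state=0)
clf.fit(Xtr, ytr)
print("same-source accuracy:", clf.score(Xte, yte))

# Protocol 2: source mismatch -- train on source A, test on source B.
clf.fit(Xa, ya)
print("cross-source accuracy:", clf.score(Xb, yb))
```

Under this toy model, the same-source score overstates how well the detector transfers to the mismatched source, which is exactly the gap the paper argues single-dataset random splits fail to measure.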