This study introduces a framework for deepfake facial image detection that integrates machine learning techniques with GAN-based image synthesis. As synthetic media technologies advance, the proliferation of deepfakes has emerged as a critical threat to digital identity, media authenticity, and cybersecurity. To address this challenge, the proposed approach employs a Deep Convolutional Generative Adversarial Network (DCGAN) that serves a dual purpose: generating realistic fake facial images and reusing its discriminator network for real/fake image classification. The model is trained over multiple epochs, allowing both the generator and discriminator to progressively refine their representations of facial features. Designed without a graphical user interface, the lightweight architecture is optimized for real-time performance and deployment in low-resource environments such as IoT systems and mobile platforms. The system's effectiveness is validated using standard evaluation metrics, including accuracy, precision, recall, and F1-score. Results confirm the model's high detection capability at minimal computational cost. By unifying generation and detection within a single framework, this work contributes to the development of efficient adversarial-learning-based security solutions.
Introduction
Deepfake technologies, powered largely by Generative Adversarial Networks (GANs), create highly realistic synthetic facial images and videos. While useful in entertainment and VR, deepfakes pose serious risks by enabling misinformation, cybercrime, and identity manipulation. Among GAN variants, Deep Convolutional GANs (DCGANs) are popular for generating realistic human faces.
This study presents a framework that uses a DCGAN not only to generate synthetic facial images but also to detect deepfakes by repurposing the discriminator as a real-vs-fake classifier. Reusing the discriminator eliminates the need for a separate detection network, making the system lightweight, GUI-independent, and capable of real-time operation on resource-constrained devices such as mobile and IoT platforms.
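As an illustration, the repurposed discriminator can be wrapped in a simple decision rule. The function name and the 0.5 threshold below are assumptions made for this sketch, not details from the paper's implementation:

```python
def classify(discriminator_score: float, threshold: float = 0.5) -> str:
    """Map a discriminator confidence score to a label.

    Assumes the score lies in [0, 1], with values near 1.0 indicating
    a real image. The 0.5 threshold is an illustrative assumption.
    """
    if not 0.0 <= discriminator_score <= 1.0:
        raise ValueError("score must lie in [0, 1]")
    return "real" if discriminator_score >= threshold else "fake"
```

In practice the threshold could be tuned on a validation set rather than fixed at 0.5.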
The methodology includes preprocessing facial images (resizing, normalization), generating synthetic images from random noise via the DCGAN generator, and training the discriminator to distinguish real from fake images using adversarial training. The discriminator outputs a confidence score indicating image authenticity.
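The preprocessing step can be sketched as follows, assuming 8-bit pixel intensities rescaled to [-1, 1], the range that matches the tanh output of a typical DCGAN generator (the function name and exact range are assumptions for illustration):

```python
def normalize_pixels(pixels):
    """Rescale 8-bit intensities [0, 255] to [-1.0, 1.0], matching the
    tanh output range commonly used for DCGAN generator outputs."""
    return [p / 127.5 - 1.0 for p in pixels]

# One row of grayscale pixel values from a resized face image.
row = [0, 128, 255]
normalized = normalize_pixels(row)  # endpoints map to -1.0 and 1.0
```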
Key advantages of this approach are computational efficiency, ease of deployment, and adaptability to various datasets. Extensive experiments using metrics such as accuracy, precision, recall, and F1-score demonstrate that the discriminator effectively learns to detect deepfakes with high confidence. The system’s real-time classification capability makes it suitable for practical applications in media verification, digital forensics, and combating misinformation.
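The metrics above follow their standard definitions over confusion-matrix counts; this is a generic sketch, not the paper's evaluation code:

```python
def scores(tp: int, fp: int, fn: int, tn: int):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts,
    treating 'fake' as the positive class."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```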
Conclusion
In this research, a deepfake detection framework was developed using Generative Adversarial Networks (GANs), where the generator synthesizes realistic face images and the discriminator evaluates their authenticity. The proposed system effectively differentiates between real and fake images using a CNN-based discriminator trained on generated and original image datasets.
The system successfully demonstrates its ability to generate high-resolution synthetic faces and detect fakes with strong performance, achieving an accuracy of 94.5%, precision of 93.2%, recall of 92.8%, and an F1-score of 93.0%. The training results validate the effectiveness of the architecture, and the generated images show clear visual progression over epochs.
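The reported F1-score is internally consistent with the stated precision and recall, since F1 is their harmonic mean:

```python
# Reported values (%) from the evaluation above.
precision, recall = 93.2, 92.8

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 1))  # → 93.0, matching the reported F1-score
```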
Unlike many previous approaches that rely on complex models such as XceptionNet or external pre-trained classifiers, our model maintains simplicity while achieving competitive accuracy. It also performs real-time detection directly from the discriminator's output, without external classifiers or heavy post-processing.
The results of this project confirm that a lightweight DCGAN framework can be effectively used for both generation and discrimination of deepfake images. The approach holds promise for real-world applications where rapid detection and interpretability are crucial.