Ijraset Journal For Research in Applied Science and Engineering Technology
Authors: Aditya Charpe, Dr. Rahul Khokale, Dheeraj Ghaghre
DOI Link: https://doi.org/10.22214/ijraset.2025.71021
Deepfakes, synthetic videos generated by artificial intelligence, pose severe threats to multimedia integrity, enabling misinformation, financial fraud, and identity theft [34]. Powered by Generative Adversarial Networks (GANs) [1] and Generative Transformer Networks (GTNs) [2], these hyper-realistic forgeries demand robust, real-time detection to safeguard video and audio platforms. This review synthesizes 80 peer-reviewed studies from 2014 to 2024, analyzing GAN- and GTN-based deepfake generation and detection methods, benchmark datasets (e.g., FaceForensics++ [11], Celeb-DF [12], DFDC [13], WildDeepfake [18], DeeperForensics [71]), and performance metrics such as accuracy, AUROC, and latency. We explore real-time detection frameworks, edge-compatible models, ethical challenges (e.g., dataset bias, privacy risks) [35], and global regulatory frameworks. Case studies of deepfake incidents highlight real-world impacts, while gaps in computational efficiency (<100 ms) and cross-dataset generalization underscore the need for advanced solutions. This paper provides a comprehensive roadmap for researchers and practitioners, emphasizing multimedia-focused detection to counter deepfake threats in high-stakes scenarios such as social media, security surveillance, and democratic processes.
Overview:
Deepfakes—AI-generated synthetic videos—pose serious threats to digital trust, with applications in misinformation, financial scams, and identity theft. Enabled by Generative Adversarial Networks (GANs) and Generative Transformer Networks (GTNs), deepfakes have become increasingly realistic, challenging detection systems across platforms like X (formerly Twitter), YouTube, and TikTok.
Real-World Impact:
High-profile incidents include a 2023 political deepfake that influenced elections (12M+ views) and a 2024 CEO impersonation that resulted in a $30M fraud. These events underscore the urgent need for real-time detection (<100 ms latency), a budget most current systems (often >200 ms) fail to meet.
Evolution of Deepfake Generation:
Early (2017): Autoencoders with noticeable artifacts.
2014–2018: GANs introduced photorealism; tools like DeepFaceLab democratized creation.
2017–2024: GTNs enhanced temporal and audio-visual realism; deepfakes became harder to detect.
CNN-Based Methods:
Target spatial features with compact CNNs (e.g., MesoNet, EfficientNet).
Achieve 85–95% accuracy but often have high latency (~300 ms).
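The spatial cue these detectors exploit can be sketched without any deep-learning framework: a single hand-picked convolution already responds to local intensity discontinuities, the kind of blending-seam artifact a trained CNN learns to amplify. This is a minimal NumPy illustration, not any of the cited architectures.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Minimal 'valid' 2D convolution: slide the kernel over the image
    and return the map of local responses."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A Laplacian-style kernel responds to intensity discontinuities,
# a stand-in for the learned filters in MesoNet-style detectors.
laplacian = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

smooth = np.ones((8, 8))          # artifact-free region (constant intensity)
patched = smooth.copy()
patched[3:5, 3:5] = 5.0           # a pasted patch / blending seam

# Feature-map energy is zero on the smooth region, nonzero over the seam.
energy_smooth = np.abs(conv2d_valid(smooth, laplacian)).sum()
energy_patched = np.abs(conv2d_valid(patched, laplacian)).sum()
```

A real detector stacks many such (learned) filters with nonlinearities, which is where the ~300 ms latency cost comes from.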
Transformer-Based Models:
Use attention for spatiotemporal detection (e.g., Swin Transformer).
High accuracy (92–95%) but heavy computation limits edge deployment.
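The attention operation underlying these models lets every frame embedding weigh evidence from every other frame, which is what makes temporal inconsistencies detectable. A minimal NumPy sketch of scaled dot-product attention follows; the frame count and embedding size are toy values, not those of any cited model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Core transformer op: each query frame attends to all key frames,
    relating artifacts across time and space in one matrix product."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)       # pairwise frame affinities
    weights = softmax(scores, axis=-1)    # rows sum to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
T, d = 4, 8                               # 4 frame embeddings of dim 8 (toy)
Q, K, V = (rng.normal(size=(T, d)) for _ in range(3))
out, w = scaled_dot_product_attention(Q, K, V)
```

The quadratic cost in the number of frames (the `scores` matrix is T×T) is exactly the computational burden that limits edge deployment.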
Frequency & Artifact Analysis:
Detect spectral inconsistencies via transforms such as the DFT and Haar wavelets.
Fast (50–100 ms) but less effective on high-quality deepfakes.
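The idea can be shown with a single FFT statistic: GAN up-sampling often leaves periodic high-frequency traces, so the fraction of spectral energy above a radial cutoff can already separate some fakes. The patch construction and cutoff below are illustrative assumptions, not a published detector.

```python
import numpy as np

def highfreq_energy_ratio(image, cutoff=0.25):
    """Fraction of spectral energy beyond a normalized radial frequency
    cutoff (0.5 is Nyquist); cheap to compute, hence the low latency."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.sqrt(((yy - h // 2) / h) ** 2 + ((xx - w // 2) / w) ** 2)
    return spec[radius > cutoff].sum() / spec.sum()

rng = np.random.default_rng(1)
# Smooth 'natural' patch: integrated noise has little high-frequency energy.
natural = rng.normal(size=(32, 32)).cumsum(axis=0).cumsum(axis=1)
# Simulated up-sampling artifact: a checkerboard overlaid on the patch.
checker = natural + 2.0 * natural.std() * (
    (np.indices((32, 32)).sum(axis=0) % 2) * 2 - 1)

ratio_natural = highfreq_energy_ratio(natural)
ratio_faked = highfreq_energy_ratio(checker)
```

High-quality deepfakes suppress exactly these spectral traces, which is why the approach degrades on them.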
Multimodal Detection:
Fuse visual, audio, and motion cues (e.g., lip-sync errors).
Strong performance (90–93%) but complex and slow (~300 ms).
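At its simplest, multimodal detection is late (score-level) fusion: each modality produces its own fake probability and the scores are combined. The weights and scores below are illustrative, not tuned values from any cited system.

```python
import numpy as np

def fuse_scores(visual, audio, motion, weights=(0.5, 0.3, 0.2)):
    """Late fusion: weighted average of per-modality fake probabilities.
    Weights are a hypothetical prior over modality reliability."""
    w = np.asarray(weights, dtype=float)
    s = np.asarray([visual, audio, motion], dtype=float)
    return float(w @ s)

# A clip whose lip motion contradicts its audio track: the visual stream
# alone is uncertain (0.55), but the audio-sync cue is strongly suspicious.
fused = fuse_scores(visual=0.55, audio=0.90, motion=0.70)
```

The fused score (0.685) exceeds the uncertain visual score alone, which is the payoff of fusion; the cost is running three models per clip, hence the ~300 ms figure.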
Ensemble Methods:
Combine CNNs and transformers for robustness across datasets.
High accuracy but computationally expensive.
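Ensembling usually reduces to soft voting: average the per-model fake probabilities and threshold. The three "models" below are hypothetical score vectors standing in for a CNN, a transformer, and a frequency-domain detector.

```python
import numpy as np

def ensemble_predict(prob_matrix, threshold=0.5):
    """Soft voting: average per-model fake probabilities (rows = models,
    columns = clips) and threshold the mean."""
    avg = np.asarray(prob_matrix, dtype=float).mean(axis=0)
    return avg, (avg >= threshold).astype(int)

# Three hypothetical detectors scoring four clips; they disagree on clip 3,
# where averaging settles the call.
probs = np.array([
    [0.9, 0.2, 0.6, 0.1],   # CNN-style model
    [0.8, 0.3, 0.4, 0.2],   # transformer-style model
    [0.7, 0.1, 0.7, 0.3],   # frequency-domain model
])
avg, labels = ensemble_predict(probs)
```

Every clip now costs three forward passes, which is the "computationally expensive" trade-off noted above.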
Adversarial Attack Detection:
Target evasion techniques using adversarial training or distillation.
Important for resilience in high-stakes contexts.
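The canonical evasion technique these defenses target is the Fast Gradient Sign Method (FGSM): perturb each input feature by a small step in the direction that increases the detector's loss. The sketch below attacks a logistic-regression stand-in, not any of the cited CNN or transformer detectors.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """FGSM: step every feature by eps in the sign of the gradient of the
    binary cross-entropy loss w.r.t. the input, pushing the detector's
    score toward a misclassification."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # dBCE/dx for a logistic model
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(2)
w = rng.normal(size=16)
b = 0.0
x = 0.5 * w / np.linalg.norm(w)   # a 'fake' sample the detector flags
p_clean = sigmoid(w @ x + b)       # score before the attack
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.05)
p_adv = sigmoid(w @ x_adv + b)     # score after: detection confidence drops
```

Adversarial training simply folds such perturbed samples back into the training set so the detector's decision boundary becomes robust to them.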
Key Challenges:
Latency: Most models exceed real-time requirements (>200 ms).
Generalization: Accuracy drops significantly across datasets (e.g., from 85% to 65–75%).
Bias: Datasets like Celeb-DF are Western-biased, leading to poor performance on non-Western subjects.
Privacy & Ethics: Detection often relies on biometric data, raising regulatory concerns (e.g., GDPR).
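The two quantitative gaps above, latency and cross-dataset performance, are measured with simple statistics: median wall-clock inference time and AUROC. Both fit in a few lines of NumPy; the toy detector and score vectors are illustrative assumptions.

```python
import time
import numpy as np

def measure_latency_ms(detector, frame, runs=20):
    """Median wall-clock inference time in milliseconds, the statistic
    behind the <100 ms real-time budget."""
    times = []
    for _ in range(runs):
        t0 = time.perf_counter()
        detector(frame)
        times.append((time.perf_counter() - t0) * 1000.0)
    return float(np.median(times))

def auroc(scores, labels):
    """AUROC as a rank statistic: the probability that a randomly chosen
    fake outscores a randomly chosen real clip (ties count half)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Toy detector: one dense layer over a flattened 64x64 frame.
rng = np.random.default_rng(3)
W = rng.normal(size=64 * 64)
detector = lambda f: 1.0 / (1.0 + np.exp(-(W @ f.ravel())))
latency = measure_latency_ms(detector, rng.normal(size=(64, 64)))

# In-dataset scores separate cleanly; cross-dataset separation collapses
# (numbers are illustrative, mirroring the 85% -> 65-75% drop above).
in_auc = auroc([0.9, 0.8, 0.7, 0.3, 0.2, 0.1], [1, 1, 1, 0, 0, 0])
cross_auc = auroc([0.6, 0.4, 0.7, 0.5, 0.6, 0.3], [1, 1, 1, 0, 0, 0])
```

Reporting the median rather than the mean latency avoids warm-up and scheduler outliers dominating the figure.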
Dynamic Attention Fusion (DAF): A proposed hybrid GAN-GTN approach combining high accuracy with sub-100 ms latency and ethical safeguards like federated learning.
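The federated-learning safeguard mentioned for DAF can be sketched with FedAvg: each client trains on its own footage and only model weights, weighted by local dataset size, are aggregated, so raw biometric video never leaves the device. The client weights and sizes below are toy values under that assumption.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: aggregate locally trained detector weights,
    weighted by each client's dataset size. Only parameters are shared,
    never the underlying (biometric) training data."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)            # (n_clients, n_params)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Three clients with different amounts of local video data; the largest
# client contributes half the aggregate.
w_global = fedavg(
    [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])],
    client_sizes=[100, 100, 200],
)
```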
Future Directions: Real-time deployment, multimodal integration, fairness, and regulation-aware detection are critical.
Conclusion:
This review synthesizes 80 peer-reviewed studies from 2014 to 2024, providing a comprehensive analysis of GAN- and GTN-based deepfake detection methods, benchmark datasets, real-time techniques, ethical considerations, and global regulatory frameworks [39], [40]. CNNs, transformers, and multimodal frameworks achieve 80–95% accuracy, but persistent challenges in latency (>200 ms) and cross-dataset generalization hinder real-time deployment in dynamic environments like social media and live streaming [15], [16]. Case studies of political misinformation, financial fraud, influencer scams, and legal manipulations underscore the profound societal and economic impacts of deepfakes, necessitating multimedia-focused detection systems capable of rapid, accurate identification [34], [36], [37], [66]. Ethical issues, including dataset bias, privacy risks, and erosion of societal trust, demand standardized frameworks to ensure fairness and transparency, while global regulations require harmonization to balance innovation with accountability [34], [35]. Future research should prioritize lightweight models, hybrid GAN-GTN architectures, cross-dataset validation, federated learning, and cross-cultural datasets to counter deepfake threats in multimedia applications [14], [61], [69]. This roadmap equips researchers, practitioners, and policymakers with the insights needed to develop robust, ethical detection systems, fostering multidisciplinary collaboration across technical, ethical, and regulatory domains to safeguard trust in digital media in high-stakes scenarios, from democratic processes to financial security [80].
References:
[1] I. Goodfellow et al., "Generative adversarial nets," in Proc. Adv. Neural Inf. Process. Syst., 2014, pp. 2672–2680, doi: 10.48550/arXiv.1406.2661.
[2] A. Vaswani et al., "Attention is all you need," in Proc. Adv. Neural Inf. Process. Syst., 2017, pp. 5998–6008, doi: 10.48550/arXiv.1706.03762.
[3] T. Karras, S. Laine, and T. Aila, "A style-based generator architecture for generative adversarial networks," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2019, pp. 4401–4410, doi: 10.1109/CVPR.2019.00453.
[4] M. Arjovsky, S. Chintala, and L. Bottou, "Wasserstein GAN," in Proc. Int. Conf. Mach. Learn., 2017, pp. 214–223, doi: 10.48550/arXiv.1701.07875.
[5] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," in Proc. Int. Conf. Learn. Represent., 2016, doi: 10.48550/arXiv.1511.06434.
[6] Y. Choi et al., "StarGAN: Unified generative adversarial networks for multi-domain image-to-image translation," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2018, pp. 8789–8797, doi: 10.1109/CVPR.2018.00916.
[7] J. Thies, M. Zollhöfer, M. Stamminger, C. Theobalt, and M. Nießner, "Face2Face: Real-time face capture and reenactment of RGB videos," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2016, pp. 2387–2395, doi: 10.1109/CVPR.2016.262.
[8] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila, "Analyzing and improving the image quality of StyleGAN," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2020, pp. 8110–8119, doi: 10.1109/CVPR42600.2020.00813.
[9] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2017, pp. 1125–1134, doi: 10.1109/CVPR.2017.632.
[10] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proc. IEEE Int. Conf. Comput. Vis., 2017, pp. 2223–2232, doi: 10.1109/ICCV.2017.244.
[11] A. Rössler, D. Cozzolino, L. Verdoliva, C. Riess, J. Thies, and M. Nießner, "FaceForensics++: Learning to detect manipulated facial images," in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2019, pp. 1–11, doi: 10.1109/ICCV.2019.00009.
[12] Y. Li, X. Yang, P. Sun, H. Qi, and S. Lyu, "Celeb-DF: A large-scale challenging dataset for deepfake forensics," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2020, pp. 3207–3216, doi: 10.1109/CVPR42600.2020.00327.
[13] B. Dolhansky et al., "The DeepFake Detection Challenge (DFDC) dataset," arXiv preprint arXiv:2006.07397, 2020, doi: 10.48550/arXiv.2006.07397.
[14] H. Zhao, W. Zhou, D. Chen, T. Wei, W. Zhang, and N. Yu, "Multi-attentional deepfake detection," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2021, pp. 2185–2194, doi: 10.1109/CVPR46437.2021.00222.
[15] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo, "Swin Transformer: Hierarchical vision transformer using shifted windows," in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2021, pp. 10012–10022, doi: 10.48550/arXiv.2103.14030.
[16] K. Gandhi, P. Kulkarni, T. Shah, P. Chaudhari, M. Narvekar, and K. Ghag, "A multimodal framework for deepfake detection," arXiv preprint arXiv:2410.03487, 2024, doi: 10.48550/arXiv.2410.03487.
[17] J. Hu, X. Liao, J. Liang, W. Zhou, and Z. Qin, "FInfer: Frame inference-based deepfake detection for high-visual-quality videos," in Proc. AAAI Conf. Artif. Intell., vol. 36, no. 1, 2022, pp. 951–959, doi: 10.1609/aaai.v36i1.19978.
[18] B. Zi et al., "WildDeepfake: A challenging real-world dataset," in Proc. Eur. Conf. Comput. Vis. (ECCV), 2020, pp. 123–134, doi: 10.48550/arXiv.2101.01456.
[19] A. Dosovitskiy et al., "An image is worth 16x16 words: Transformers for image recognition at scale," in Proc. Int. Conf. Learn. Represent., 2021, doi: 10.48550/arXiv.2010.11929.
[20] Y. Jiang, S. Chang, and Z. Wang, "TransGAN: Two pure transformers can make one strong GAN, and that can scale up," in Proc. Adv. Neural Inf. Process. Syst., 2021, pp. 14745–14758, doi: 10.48550/arXiv.2102.07074.
[21] P. Esser, R. Rombach, and B. Ommer, "Taming transformers for high-resolution image synthesis," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2021, pp. 12873–12883, doi: 10.1109/CVPR46437.2021.01268.
[22] D. A. Hudson and C. L. Zitnick, "Generative adversarial transformers," arXiv preprint arXiv:2103.01209, 2021, doi: 10.48550/arXiv.2103.01209.
[23] Z. Liu et al., "Swin Transformer V2: Scaling up capacity and resolution," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2022, pp. 11999–12009, doi: 10.1109/CVPR52688.2022.01170.
[24] Z. Liu, P. Luo, X. Wang, and X. Tang, "Deep learning face attributes in the wild," in Proc. IEEE Int. Conf. Comput. Vis., 2015, pp. 3730–3738, doi: 10.1109/ICCV.2015.425.
[25] L. Chen, H. Zhang, J. Xiao, L. Nie, J. Shao, W. Liu, and T.-S. Chua, "SCA-CNN: Spatial and channel-wise attention in convolutional networks for image captioning," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2017, pp. 5659–5667, doi: 10.48550/arXiv.1611.05594.
[26] M. Tan and Q. V. Le, "EfficientNet: Rethinking model scaling for convolutional neural networks," in Proc. Int. Conf. Mach. Learn., 2019, pp. 6105–6114, doi: 10.48550/arXiv.1905.11946.
[27] T. Karras et al., "Progressive growing of GANs for improved quality, stability, and variation," arXiv preprint arXiv:1710.10196, 2017, doi: 10.48550/arXiv.1710.10196.
[28] H. Liu, Z. Dai, D. So, and Q. V. Le, "Pay attention to MLPs," in Proc. Adv. Neural Inf. Process. Syst., 2021, pp. 9204–9215, doi: 10.48550/arXiv.2105.08050.
[29] A. Tewari et al., "State of the art on neural rendering," Comput. Graph. Forum, vol. 39, no. 2, pp. 701–727, May 2020, doi: 10.1111/cgf.14022.
[30] L. Guarnera, O. Giudice, and S. Battiato, "Deepfake detection by analyzing convolutional traces," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Workshops, 2020, pp. 666–667, doi: 10.1109/CVPRW50498.2020.00341.
[31] P. Korshunov and S. Marcel, "Deepfakes: A new threat to face recognition? Assessment and detection," arXiv preprint arXiv:1812.08685, 2018, doi: 10.48550/arXiv.1812.08685.
[32] D. Afchar, V. Nozick, J. Yamagishi, and I. Echizen, "MesoNet: A compact facial video forgery detection network," in Proc. IEEE Int. Workshop Inf. Forensics Secur., 2018, pp. 1–7, doi: 10.1109/WIFS.2018.8630761.
[33] J. Yang, A. Li, S. Xiao, W. Lu, and X. Gao, "MTD-Net: Learning to detect deepfakes images by multi-scale texture difference," IEEE Trans. Inf. Forensics Secur., vol. 16, pp. 4234–4245, 2021, doi: 10.1109/TIFS.2021.3102487.
[34] R. Chesney and D. K. Citron, "Deepfakes: A looming challenge for privacy, democracy, and national security," Calif. Law Rev., vol. 107, no. 6, pp. 1753–1820, Dec. 2019, doi: 10.15779/Z38RV0D15J.
[35] M. Westerlund, "The emergence of deepfake technology: A review," Technol. Innov. Manag. Rev., vol. 9, no. 11, pp. 39–52, Nov. 2019, doi: 10.22215/timreview/1282.
[36] C. Vaccari and A. Chadwick, "Deepfakes and disinformation: Exploring the impact of synthetic media on democracy," Soc. Media Soc., vol. 6, no. 1, pp. 1–12, Jan. 2020, doi: 10.1177/2056305120903408.
[37] T. Hwang, "Deepfakes: A grounded threat assessment," Center for Security and Emerging Technology, Jul. 2020. [Online]. Available: https://cset.georgetown.edu/research/deepfakes-a-grounded-threat-assessment/, doi: 10.51593/20190030.
[38] Y. Mirsky and W. Lee, "The creation and detection of deepfakes: A survey," ACM Comput. Surv., vol. 54, no. 1, pp. 1–41, Jan. 2021, doi: 10.1145/3425780.
[39] G. Pei, J. Zhang, M. Hu, Z. Zhang, C. Wang, Y. Wu, G. Zhai, J. Yang, C. Shen, and D. Tao, "Deepfake generation and detection: A benchmark and survey," arXiv preprint arXiv:2403.17881, 2024, doi: 10.48550/arXiv.2403.17881.
[40] P. Edwards, J.-C. Nebel, D. Greenhill, and X. Liang, "A review of deepfake techniques: Architecture, detection, and datasets," IEEE Access, vol. 12, pp. 154718–154742, 2024, doi: 10.1109/ACCESS.2024.3477257.
[41] L. Verdoliva, "Media forensics and deepfakes: An overview," IEEE J. Sel. Topics Signal Process., vol. 14, no. 5, pp. 910–932, Aug. 2020, doi: 10.1109/JSTSP.2020.3002101.
[42] R. Tolosana, R. Vera-Rodriguez, J. Fierrez, A. Morales, and J. Ortega-Garcia, "Deepfakes and beyond: A survey of face manipulation and fake detection," Inf. Fusion, vol. 64, pp. 131–148, Dec. 2020, doi: 10.1016/j.inffus.2020.06.014.
[43] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. Cambridge, MA, USA: MIT Press, 2016. [Online]. Available: https://www.deeplearningbook.org.
[44] M. Abadi et al., "TensorFlow: A system for large-scale machine learning," in Proc. 12th USENIX Symp. Oper. Syst. Des. Implementation, 2016, pp. 265–283, doi: 10.5555/3026877.3026899.
[45] A. Paszke et al., "PyTorch: An imperative style, high-performance deep learning library," in Proc. Adv. Neural Inf. Process. Syst., 2019, pp. 8026–8037, doi: 10.48550/arXiv.1912.01703.
[46] S. Lyu, "Deepfake detection: Current challenges and next steps," in Proc. IEEE Int. Conf. Multimedia Expo Workshops, 2020, pp. 1–6, doi: 10.1109/ICMEW46912.2020.9105991.
[47] N. Carlini and H. Farid, "Evading deepfake-image detectors with white- and black-box attacks," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Workshops, 2020, pp. 2804–2813, doi: 10.1109/CVPRW50498.2020.00337.
[48] J. Frank, T. Eisenhofer, L. Schönherr, A. Fischer, D. Kolossa, and T. Holz, "Leveraging frequency analysis for deepfake image detection," in Proc. Int. Conf. Mach. Learn., 2020, pp. 3247–3258, doi: 10.48550/arXiv.2003.08685.
[49] C. Miao et al., "Learning forgery region-aware and ID-independent features for face manipulation detection," IEEE Trans. Biom., Behav., Identity Sci., vol. 4, no. 1, pp. 71–84, Jan. 2022, doi: 10.1109/TBIOM.2021.3119403.
[50] Z. Liu, X. Qi, and P. H. S. Torr, "Global texture enhancement for fake face detection in the wild," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2020, pp. 8060–8069, doi: 10.1109/CVPR42600.2020.00808.
[51] Y. Nirkin, Y. Keller, and T. Hassner, "FSGAN: Subject agnostic face swapping and reenactment," in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2019, pp. 7183–7192, doi: 10.1109/ICCV.2019.00728.
[52] H. Dang, F. Liu, J. Stehouwer, X. Liu, and A. K. Jain, "On the detection of digital face manipulation," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2020, pp. 5780–5789, doi: 10.1109/CVPR42600.2020.00582.
[53] Y. Zheng, J. Bao, D. Chen, M. Zeng, and F. Wen, "Exploring temporal coherence for more general video face forgery detection," in Proc. IEEE/CVF Int. Conf. Comput. Vis., 2021, pp. 15044–15054, doi: 10.1109/ICCV48922.2021.01477.
[54] T. Zhou, W. Wang, Z. Liang, and J. Shen, "Face forensics in the wild," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2021, pp. 5778–5788, doi: 10.48550/arXiv.2103.16076.
[55] Y. Li and S. Lyu, "Exposing deepfake videos by detecting face warping artifacts," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Workshops, 2022, pp. 3456–3465, doi: 10.48550/arXiv.1811.00656.
[56] S. Han, H. Mao, and W. J. Dally, "Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding," in Proc. Int. Conf. Learn. Represent., 2016, doi: 10.48550/arXiv.1510.00149.
[57] A. G. Howard et al., "MobileNets: Efficient convolutional neural networks for mobile vision applications," arXiv preprint arXiv:1704.04861, 2017, doi: 10.48550/arXiv.1704.04861.
[58] X. Zhang, X. Zhou, M. Lin, and J. Sun, "ShuffleNet: An extremely efficient convolutional neural network for mobile devices," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2018, pp. 6848–6856, doi: 10.1109/CVPR.2018.00716.
[59] J. Huang, V. Rathod, C. Sun, M. Zhu, A. Korattikara, A. Fathi, et al., "Speed/accuracy trade-offs for modern convolutional object detectors," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2017, pp. 3296–3297, doi: 10.1109/CVPR.2017.351.
[60] Z. Wu, Y. Xiong, S. X. Yu, and D. Lin, "Unsupervised feature learning via non-parametric instance discrimination," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2018, pp. 3733–3742, doi: 10.1109/CVPR.2018.00393.
[61] A. Nagrani, J. S. Chung, and A. Zisserman, "VoxCeleb: A large-scale speaker identification dataset," in Proc. Interspeech, 2017, pp. 2616–2620, doi: 10.21437/Interspeech.2017-950.
[62] J. S. Chung, A. Nagrani, and A. Zisserman, "VoxCeleb2: Deep speaker recognition," in Proc. Interspeech, 2018, pp. 1086–1090, doi: 10.21437/Interspeech.2018-1929.
[63] K. Chumachenko, A. Iosifidis, and M. Gabbouj, "Self-attention fusion for audiovisual emotion recognition with incomplete data," IEEE Trans. Multimedia, vol. 25, pp. 289–300, 2023, doi: 10.1109/ICPR56361.2022.9956592.
[64] Y. Li, M.-C. Chang, and S. Lyu, "In Ictu Oculi: Exposing AI created fake videos by detecting eye blinking," in Proc. IEEE Int. Workshop Inf. Forensics Secur., 2018, pp. 1–7, doi: 10.1109/WIFS.2018.8630787.
[65] D. Citron, "How deepfakes undermine truth and threaten democracy," TED Talk, Nov. 2019. [Online]. Available: https://www.ted.com/talks/danielle_citron_how_deepfakes_undermine_truth_and_threaten_democracy.
[66] S. Salman, J. A. Shamsi, and R. Qureshi, "Deep fake generation and detection: Issues, challenges, and solutions," IT Prof., vol. 25, no. 1, pp. 52–59, Jan.–Feb. 2023, doi: 10.1109/MITP.2022.3230353.
[67] J. Langguth, K. Pogorelov, S. Brenner, and P. Filkukova, "Don't trust your eyes: Image manipulation in the age of deepfakes," Front. Big Data, vol. 4, p. 649989, Apr. 2021, doi: 10.3389/fcomm.2021.632317.
[68] S. Khan, "Adversarially robust deepfake detection via adversarial feature similarity learning," arXiv preprint arXiv:2403.08806, 2024, doi: 10.48550/arXiv.2403.08806.
[69] T. T. Nguyen et al., "Deep learning for deepfake detection: A survey," IEEE Trans. Artif. Intell., vol. 3, no. 4, pp. 459–476, Aug. 2022, doi: 10.48550/arXiv.1909.11573.
[70] V. Dudykevych, H. Mykytyn, and K. Ruda, "The concept of a deepfake detection system of biometric image modifications based on neural networks," in Proc. 2022 IEEE 3rd KhPI Week Adv. Technol. (KhPIWeek), Kharkiv, Ukraine, 2022, pp. 1–4, doi: 10.1109/KhPIWeek57572.2022.9916378.
[71] L. Jiang, R. Li, W. Wu, C. Qian, and C. C. Loy, "DeeperForensics-1.0: A large-scale dataset for real-world face forgery detection," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., Seattle, WA, USA, 2020, pp. 2886–2895, doi: 10.1109/CVPR42600.2020.00296.
[72] Preeti et al., "A GAN-based model of deepfake detection in social media," Procedia Comput. Sci., vol. 218, pp. 2153–2162, 2023, doi: 10.1016/j.procs.2023.01.191.
[73] S. C. P, B. J. J. I, A. M. B, V. R, Y. R. R. V, and E. Elango, "Deepfake detection using multi-modal fusion combined with attention mechanism," in Proc. 2024 4th Int. Conf. Sustainable Expert Syst. (ICSES), Kaski, Nepal, 2024, pp. 1194–1199, doi: 10.1109/ICSES63445.2024.10763221.
[74] Y. Huang et al., "FakeLocator: Robust localization of GAN-based face manipulations," IEEE Trans. Inf. Forensics Secur., vol. 17, pp. 2345–2356, 2022, doi: 10.1109/TIFS.2022.3182478.
[75] U. A. Ciftci et al., "How do the hearts of deepfakes beat? Deepfake source detection via interpreting residuals with biological signals," in Proc. IEEE Int. Joint Conf. Biom. (IJCB), 2020, pp. 1–10, doi: 10.48550/arXiv.2008.11363.
[76] M. A. Younus and T. M. Hasan, "Effective and fast deepfake detection method based on Haar wavelet transform," in Proc. 2020 Int. Conf. Comput. Sci. Softw. Eng. (CSASE), Duhok, Iraq, 2020, pp. 186–190, doi: 10.1109/CSASE48920.2020.9142077.
[77] M. S. Rana, M. N. Nobi, B. Murali, and A. H. Sung, "Deepfake detection: A systematic literature review," IEEE Access, vol. 10, pp. 25494–25513, 2022, doi: 10.1109/ACCESS.2022.3154404.
[78] V. S. Anandhasivam, A. K. Anusri, M. Logeshwar, and R. Gopinath, "Enhancing deepfake detection through hybrid MobileNet-LSTM model with real-time image and video analysis," in Proc. 2024 4th Int. Conf. Ubiquitous Comput. Intell. Inf. Syst. (ICUIS), Gobichettipalayam, India, 2024, pp. 1989–1995, doi: 10.1109/ICUIS64676.2024.10867.
[79] S. D, R. S, S. Ravi, V. M, and P. M. P. U, "A lightweight CNN for efficient deepfake detection of low-resolution images in frequency domain," in Proc. 2024 Second Int. Conf. Emerging Trends Inf. Technol. Eng. (ICETITE), Vellore, India, 2024, pp. 1–6, doi: 10.1109/ic-ETITE58242.2024.10493406.
[80] B. Cavia, E. Horwitz, T. Reiss, and Y. Hoshen, "Real-time deepfake detection in the real-world," arXiv preprint arXiv:2406.09398, 2024, doi: 10.48550/arXiv.2406.09398.
Copyright © 2025 Aditya Charpe, Dr. Rahul Khokale, Dheeraj Ghaghre. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET71021
Publish Date : 2025-05-14
ISSN : 2321-9653
Publisher Name : IJRASET