In forensic science, hand-drawn face sketches remain limited and time consuming when used with the latest technologies for the recognition and identification of criminals. In this paper, we present a standalone application that lets users create a composite face sketch of a suspect, without the help of a forensic artist, through a drag-and-drop interface, and that automatically matches the composite sketch against the police database faster and more efficiently using deep learning and cloud infrastructure.
Introduction
Traditional hand-drawn face sketches, though useful for identifying criminals, are often time-consuming and inefficient for matching against large or real-time databases. Previous attempts to automate sketch recognition—including composite face construction tools and sketch-to-photo matching algorithms—have faced limitations such as low accuracy, cartoon-like outputs, difficulty handling varied face orientations, and complex workflows.
To address these issues, a new application is proposed that allows users to construct accurate composite sketches from a drag-and-drop palette of predefined facial features, or to upload hand-drawn sketches. The platform leverages deep learning algorithms and cloud infrastructure to match sketches with law enforcement databases efficiently. Features include automatic feature suggestion, security and privacy measures (machine locking, two-step verification, centralized server usage), backward compatibility with existing hand-drawn sketches, and an intuitive user interface that organizes facial elements for easy assembly.
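As a rough illustration of the drag-and-drop construction stage, the following Python sketch layers transparent feature images (head outline, eyes, nose, lips) onto a blank canvas at user-chosen positions. The file names, canvas size, and the place_feature helper are hypothetical and only stand in for the interface described above.

```python
# Minimal illustration of composite assembly (hypothetical file names and sizes).
from PIL import Image

CANVAS_SIZE = (600, 800)  # assumed sketch resolution

def place_feature(canvas, feature_path, position):
    """Paste a transparent PNG of a facial feature onto the canvas."""
    feature = Image.open(feature_path).convert("RGBA")
    canvas.paste(feature, position, mask=feature)  # use the alpha channel as mask
    return canvas

def build_composite(selected_features):
    """selected_features: list of (image_path, (x, y)) chosen via drag and drop."""
    canvas = Image.new("RGBA", CANVAS_SIZE, "white")
    for path, position in selected_features:
        canvas = place_feature(canvas, path, position)
    return canvas.convert("L")  # flatten to a grayscale, sketch-like output

if __name__ == "__main__":
    composite = build_composite([
        ("features/head_03.png", (0, 0)),
        ("features/eyes_12.png", (180, 260)),
        ("features/nose_07.png", (265, 360)),
        ("features/lips_04.png", (250, 470)),
    ])
    composite.save("composite_sketch.png")
```

In a full implementation the positions would come from the on-screen drag-and-drop events rather than hard-coded coordinates.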
The system operates in two main stages: face sketch construction, where users build sketches from categorized facial features, and face sketch recognition, where uploaded sketches are processed through feature extraction and matched against database images using machine learning. This approach aims to increase accuracy, save time, and bridge the gap between traditional sketch methods and modern automated recognition systems.
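The recognition stage can be pictured as embedding the uploaded sketch and every database photo into a common feature space and ranking candidates by similarity. The sketch below illustrates that idea with cosine similarity over precomputed embeddings; extract_embedding is a hypothetical stand-in for whatever deep learning feature extractor the system actually uses.

```python
# Illustrative matching step: rank database records by cosine similarity
# between the sketch embedding and precomputed mugshot embeddings.
import numpy as np

def extract_embedding(image_path: str) -> np.ndarray:
    """Hypothetical: run the trained model and return a fixed-length feature vector."""
    raise NotImplementedError("replace with the actual model inference")

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_sketch(sketch_path: str, database: dict, top_k: int = 5):
    """database maps record IDs to embeddings precomputed from mugshot photos."""
    query = extract_embedding(sketch_path)
    scores = {rec_id: cosine_similarity(query, emb) for rec_id, emb in database.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
```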
Conclusion
The project 'Forensic Face Sketch Construction and Recognition' has been designed, developed, and tested against real-world scenarios, from the initial splash screen to the final screen that fetches data from the records, with security, privacy, and accuracy as the key factors throughout. From a security standpoint, the platform blocks access when the MAC address and IP address detected at load time do not match the credentials associated with the user in the database, and the OTP system rejects previously generated OTPs and issues a new OTP every time the OTP page is reloaded or the user attempts to log in again. The platform also showed good accuracy and speed in the face sketch construction and recognition process, achieving an average accuracy above 90% with a confidence level of 100% across the test cases, test scenarios, and datasets used, which compares favourably with related studies in this field.
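A simplified illustration of the device-lock and single-use OTP behaviour described above is given below. The use of uuid.getnode() and socket for the MAC and IP checks, and of the secrets module for OTP generation, is an assumption about one possible implementation, not a detail taken from the platform itself.

```python
# Sketch of the device-lock check and single-use OTP handling (assumed implementation).
import secrets
import socket
import uuid

def current_device_identity():
    """Return the MAC address and local IP address of the machine in use."""
    mac = ":".join(f"{(uuid.getnode() >> i) & 0xff:02x}" for i in range(40, -8, -8))
    ip = socket.gethostbyname(socket.gethostname())
    return mac, ip

def device_allowed(registered_mac: str, registered_ip: str) -> bool:
    """Block platform use if MAC or IP does not match the user's registered credentials."""
    mac, ip = current_device_identity()
    return mac == registered_mac and ip == registered_ip

class OtpSession:
    """Each reload or re-login issues a fresh OTP, invalidating the previous one."""
    def __init__(self):
        self._current = None

    def issue(self) -> str:
        self._current = f"{secrets.randbelow(1_000_000):06d}"  # six-digit one-time code
        return self._current

    def verify(self, code: str) -> bool:
        if self._current is None or not secrets.compare_digest(code, self._current):
            return False
        self._current = None  # a code can be accepted only once
        return True
```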
The platform also offers features not found in related studies, further enhancing overall security and accuracy and distinguishing it from previously proposed systems in this field.