This paper presents a software-driven Iris Liveness Detection System designed to enhance the accuracy and dependability of biometric authentication. Although iris recognition is considered one of the most precise biometric methods, it remains vulnerable to presentation attacks such as printed eye images, textured contact lenses, and video replays. To address these challenges, the proposed approach employs a Deep Convolutional Neural Network (CNN) architecture, particularly MobileNet, to automatically capture and analyze fine-grained texture variations that distinguish genuine irises from spoofed ones. Furthermore, a pseudo-depth generation module is integrated to estimate virtual 3D information from conventional 2D iris images, improving detection performance against sophisticated spoofing attempts without relying on extra sensors. The framework is trained and evaluated using benchmark datasets like Clarkson to ensure robust performance across varying lighting conditions, iris textures, and spoofing types. By combining texture- and depth-based cues, the system achieves strong liveness detection capability while maintaining lightweight operation, scalability, and real-time efficiency. This purely software-based, hardware-independent solution offers a cost-effective and secure advancement for modern biometric authentication systems.
Introduction
Iris recognition is a highly accurate biometric authentication method that relies on the unique patterns of the human iris. However, conventional systems are vulnerable to spoofing attacks using printed images, textured contact lenses, or replayed videos. To counter this, a software-based Iris Liveness Detection framework is proposed, leveraging deep learning (MobileNet CNN) and a pseudo-depth estimation module to distinguish real irises from fake ones using only 2D images. The combination of texture analysis and depth estimation ensures robust, lightweight, real-time performance suitable for banking, access control, and identity verification applications.
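To make the described pipeline concrete, the sketch below shows one way such a two-branch model could be assembled in Keras: a pretrained MobileNet backbone supplies texture features, a small decoder regresses a coarse pseudo-depth map from the same features, and the two cues are fused for the final live/spoof decision. The layer sizes, decoder design, 224x224 input resolution, and loss weighting are illustrative assumptions, not the exact configuration of the proposed system.

```python
# Illustrative two-branch liveness model: MobileNet texture features plus a
# small pseudo-depth decoder, fused for a binary live/spoof decision.
# Layer sizes, decoder depth, and the 224x224 input are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import MobileNet


def build_liveness_model(input_shape=(224, 224, 3)):
    inputs = layers.Input(shape=input_shape, name="iris_image")

    # Texture branch: pretrained MobileNet backbone (transfer learning).
    backbone = MobileNet(include_top=False, weights="imagenet",
                         input_shape=input_shape)
    feature_maps = backbone(inputs)                        # (7, 7, 1024)
    texture_feat = layers.GlobalAveragePooling2D()(feature_maps)

    # Pseudo-depth branch: a small decoder that regresses a coarse depth map
    # from the shared backbone features (no extra depth sensor required).
    x = layers.Conv2DTranspose(128, 3, strides=2, padding="same",
                               activation="relu")(feature_maps)
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same",
                               activation="relu")(x)
    pseudo_depth = layers.Conv2D(1, 3, padding="same", activation="sigmoid",
                                 name="pseudo_depth")(x)   # (28, 28, 1)
    depth_feat = layers.GlobalAveragePooling2D()(x)        # 64-d depth cue

    # Fusion of texture- and depth-based cues for the liveness decision.
    fused = layers.Concatenate()([texture_feat, depth_feat])
    fused = layers.Dense(128, activation="relu")(fused)
    fused = layers.Dropout(0.3)(fused)
    liveness = layers.Dense(1, activation="sigmoid", name="liveness")(fused)

    return Model(inputs, {"liveness": liveness, "pseudo_depth": pseudo_depth})


model = build_liveness_model()
model.compile(optimizer="adam",
              loss={"liveness": "binary_crossentropy",
                    "pseudo_depth": "mse"},   # assumes proxy depth targets
              loss_weights={"liveness": 1.0, "pseudo_depth": 0.5})
```

In this sketch the pseudo-depth output would need proxy depth targets during training (for example, maps synthesized by an off-the-shelf monocular depth estimator); how the module is actually supervised in the proposed framework is not assumed here.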
Literature Insights:
Deep Learning Approaches:
CNN-based models, transfer learning with MobileNet, and hybrid frameworks like Siamese networks or DenseNet-SVM combinations have improved liveness detection accuracy.
Vision Transformer (ViT) based approaches and physiological-signal methods show promise but often require specialized hardware or large training datasets.
Multi-stage or cascaded CNNs enhance spoof detection but can be computationally intensive, limiting real-time or mobile deployment.
Feature-Fusion & Classical Methods:
Handcrafted feature fusion (e.g., LBP, GLCM, Haar transforms) has been used to detect presentation attacks, performing well on controlled datasets but remaining sensitive to lighting, noise, and segmentation errors.
Hybrid methods combining deep and classical features improve robustness, yet high computational cost and reliance on large, annotated datasets restrict practical application.
Challenges Identified:
Generalization across diverse spoofing types and iris sensors remains limited.
High computational overhead hinders real-time and mobile implementations.
Environmental factors such as lighting, occlusion, and image quality impact accuracy.
Many methods fail to simultaneously address lightweight performance, high security, and real-time applicability.
Contribution of Proposed Framework:
By integrating texture-based CNN analysis with pseudo-depth estimation, the proposed framework provides a cost-effective, scalable, and robust solution that enhances the security and reliability of iris-based biometric authentication without requiring additional sensors or hardware, thereby overcoming many limitations of prior approaches.
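For completeness, the following minimal sketch shows how a trained model of the kind outlined above might be applied to a single iris image at inference time. The preprocessing steps, helper name, and 0.5 decision threshold are illustrative assumptions, not details taken from the proposed system.

```python
import numpy as np
import tensorflow as tf


def check_liveness(model, image_path, threshold=0.5):
    """Return (is_live, score) for one iris image.

    Resizing to 224x224, [0, 1] scaling, and the 0.5 threshold are
    illustrative assumptions for this sketch.
    """
    img = tf.keras.utils.load_img(image_path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img) / 255.0
    x = np.expand_dims(x, axis=0)                 # batch of one
    outputs = model.predict(x, verbose=0)
    score = float(outputs["liveness"][0, 0])      # estimated probability of a live iris
    return score >= threshold, score


# Example (hypothetical file name):
# is_live, score = check_liveness(model, "sample_iris.png")
```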
Conclusion
This paper presented a software-based iris liveness detection framework that integrates deep learning and pseudo-depth estimation to effectively detect spoofing attempts. The proposed approach combines texture and depth-based features using MobileNet, achieving robust and hardware-independent performance. The framework demonstrates potential for real-time deployment in secure authentication systems such as banking, defense, and identity verification. Future work will focus on expanding dataset diversity and optimizing model efficiency for embedded and mobile platforms.
References
[1] M. Safeer, G. Hossain, M. H. Myers, G. Toscano, and N. Yilmazer, “Iris Liveness Detection Using Transfer Learning with MobileNets: Strengthening Cybersecurity in Biometric Identification,” International Journal of Computer Science and Information Security (IJCSIS), vol. 23, no. 1, pp. 1–17, Jan.–Feb. 2025.
[2] P. Rai and P. Kanungo, “A Robust CNN-Siamese Framework for Iris Deepfake Spoof Detection with Superior Accuracy and AUC,” Journal of Information Systems Engineering and Management, vol. 10, no. 37s, pp. 816–833, Apr. 2025.
[3] V. C. Kulloli et al., “Iris Liveness Detection Using SIFT, SURF and SVM with Quality Metrics for Biometric Authentication,” in Proc. IEEE Int. Conf. on Computing, Communication, Control and Automation (ICCUBEA), 2024.
[4] S. D. Thepade and L. R. Wagh, “Iris Liveness Detection Using Fusion of Thepade SBTC and Triangle Thresholding Features with Machine Learning Algorithms,” International Research Journal of Multidisciplinary Technovation, vol. 6, no. 1, pp. 128–139, Jan. 2024.
[5] C.-N. Tran, M. S. Nguyen, D. Castells-Rufas and J. Carrabina, “A Fast Iris Liveness Detection for Embedded Systems using Textural Feature Level Fusion Algorithm,” Procedia Computer Science, vol. 237, pp. 858–865, 2024.
[6] M. Mohzary, K. J. Almalki, B.-Y. Choi, and S. Song, “Apple in My Eyes (AIME): Liveness Detection for Mobile Security Using Corneal Specular Reflections,” IEEE Internet of Things Journal, vol. 10, no. 3, pp. 2270–2284, Feb. 2023.
[7] T. Tinsley et al., “LivDet-Iris 2023: Benchmarking Deep Learning Approaches for Presentation Attack Detection,” Proc. Int. Joint Conf. on Biometrics (IJCB), 2023.
[8] O. D’Angelis, L. Bacco, L. Vollero, and M. Merone, “Advancing ECG Biometrics Through Vision Transformers: A Confidence-Driven Approach,” IEEE Access, vol. 11, pp. 138752–138766, Dec. 2023.
[9] G. Parzianello and A. Czajka, “Saliency-Guided Contact Lens-Aware Iris Recognition,” IEEE Transactions on Biometrics, Behavior, and Identity Science, vol. 4, no. 2, pp. 220–229, 2022.
[10] S. Khade, S. Gite, and B. Pradhan, “Fine-tuning pre-trained CNN models for iris presentation attack detection using ND-Iris3D dataset,” International Journal of Intelligent Systems and Applications in Engineering (IJISAE), vol. 10, no. 2, pp. 135–142, 2022.
[11] S. Khade, S. Gite, and S. D. Thepade, “Iris Presentation Attack Detection Using Fragmental Energy of Haar-Transformed Features and Ensemble Machine Learning Classifiers,” International Journal of Intelligent Systems and Applications in Engineering (IJISAE), vol. 10, no. 3, pp. 220–227, 2022.
[12] R. Rahmatallah, S. D. Thepade, and V. Jadhav, “Fusion of Global TSBTC and Local GLCM Features with Machine Learning Classifiers for Iris Presentation Attack Detection,” International Research Journal of Multidisciplinary Technovation, vol. 4, no. 2, pp. 75–84, 2022.
[13] M. Choudhary, V. Tiwari, and V. U., “Fusion of Domain-Specific BSIF and DenseNet Features at Score Level for Iris Liveness Detection and Contact Lens Identification,” International Journal of Biometrics, vol. 14, no. 1, pp. 56–67, 2022.
[14] S. Khade, S. Gite, and S. D. Thepade, “Texture and Statistical Features for Iris Presentation Attack Detection,” International Research Journal of Engineering and Technology (IRJET), vol. 8, no. 6, pp. 2401–2407, 2021.
[15] J. E. Tapia, S. Gonzalez, and C. Busch, “Iris Liveness Detection Based on a Cascade of Convolutional Neural Networks Using Modified MobileNetV2,” IEEE Access, vol. 9, pp. 7306–7320, 2021.
[16] S. Khade, S. Gite, and S. D. Thepade, “Hybridization of Discrete Cosine Transform (DCT) and Haar Transform with Machine Learning Classifiers and Ensembles for Iris Presentation Attack Detection,” International Research Journal of Multidisciplinary Technovation, vol. 6, no. 2, pp. 112–121, 2021.
[17] C. Long and F. Zeng, “Iris Liveness Detection Based on Batch-Normalized Convolutional Neural Networks,” Pattern Recognition Letters, vol. 128, pp. 485–491, 2019.
[18] M. Choudhary, V. Tiwari, and V. U., “Customized DenseNet and SVM-Based Ensemble Model (DCLNet) for Iris Contact Lens Detection,” IEEE Access, vol. 7, pp. 152684–152693, 2019.
[19] S. Singh and K. Mistry, “GHCLNet: A Hierarchical Convolutional Neural Network for Generalized Iris Contact Lens Detection,” IEEE Access, vol. 6, pp. 57943–57954, 2018.
[20] A. Trokielewicz, P. Czajka, and A. Maciejewicz, “Presentation Attack Detection for Cadaver Iris Recognition,” IEEE Transactions on Information Forensics and Security, vol. 14, no. 6, pp. 1501–1514, 2018.
[21] D. Yambay, V. Mura, A. Dantcheva, and S. Schuckers, “LivDet-Iris 2017—Iris Liveness Detection Competition 2017,” Proceedings of the IEEE International Joint Conference on Biometrics (IJCB), Denver, USA, pp. 1–7, 2017.
[22] Y. Hu, L. Ma, T. Tan, and Y. Wang, “Iris Liveness Detection Based on Regional Feature Analysis,” Pattern Recognition Letters, vol. 82, pp. 242–249, Jan. 2016.
[23] J. Galbally, J. Ortiz-López, J. Fierrez, and J. Ortega-García, “Iris Liveness Detection Based on Quality Related Features,” Proceedings of the 5th IAPR International Conference on Biometrics (ICB), New Delhi, India, pp. 271–276, Mar. 2012.