The model is based on MobileNetV2, trained on a diverse banknote image dataset captured under varied real-world conditions, making it both robust and accurate. The model is optimized with TensorFlow Lite and runs on a Raspberry Pi for efficient on-device inference. Images of banknotes are captured by a camera module, classified by the CNN, and the result is announced through a text-to-speech engine as auditory feedback. The system operates entirely offline and is therefore portable and internet-independent. Its small size, low cost, and high accuracy make it well suited for widespread adoption, particularly in developing regions. Experimental validation confirms that the system performs consistently under varied lighting and occlusion conditions, highlighting its potential to give the blind community greater economic security.
Introduction
Recent technological advancements have significantly improved accessibility for people with disabilities, especially in financial inclusion for the visually impaired. Identifying currency is a daily challenge for blind individuals, particularly in countries like India where tactile features like braille on currency notes are inconsistent. Traditional reliance on others for identification risks deception and loss of privacy.
This research proposes an offline, real-time currency recognition system for Indian banknotes using deep learning, specifically Convolutional Neural Networks (CNNs), deployed on an affordable Raspberry Pi device with a camera and speaker. The system captures images of banknotes, classifies their denomination through a lightweight TensorFlow Lite CNN model, and provides auditory feedback via text-to-speech, enabling independent currency identification without internet or smartphones.
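To make the pipeline concrete, the following is a minimal Python sketch of the capture-classify-speak loop, not the exact implementation: the model file banknote.tflite and the label list are hypothetical placeholders, and OpenCV, the tflite_runtime interpreter, and the pyttsx3 text-to-speech engine are assumed as representative libraries.

import cv2
import numpy as np
import pyttsx3
from tflite_runtime.interpreter import Interpreter

# Hypothetical artifact names; the real files come from training.
MODEL_PATH = "banknote.tflite"
LABELS = ["10", "20", "50", "100", "200", "500", "2000"]

interpreter = Interpreter(model_path=MODEL_PATH)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

tts = pyttsx3.init()
cap = cv2.VideoCapture(0)  # Pi camera exposed as a V4L2 device

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # MobileNetV2-style preprocessing: 224x224 RGB, scaled to [-1, 1].
    img = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
    x = np.expand_dims(img.astype(np.float32) / 127.5 - 1.0, axis=0)
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    probs = interpreter.get_tensor(out["index"])[0]
    label = LABELS[int(np.argmax(probs))]
    tts.say(label + " rupees")  # auditory feedback
    tts.runAndWait()

In the full system, raw per-frame predictions would additionally pass through the post-processing stage described below before being spoken.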
The system is designed to handle various real-world conditions, including worn, folded, or partially visible notes, different lighting, and cluttered backgrounds. It improves on earlier approaches, which relied on traditional image processing or required expensive hardware or network connectivity, by offering a standalone, user-friendly solution optimized for visually impaired users.
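Tolerance to such conditions is typically built in at training time through data augmentation, the mechanism named in the next paragraph. The sketch below illustrates one plausible configuration using the Keras ImageDataGenerator; the directory layout and all parameter values are illustrative assumptions, not settings reported here.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Illustrative augmentation ranges; real values must be tuned empirically.
augmenter = ImageDataGenerator(
    rotation_range=25,            # notes held at an angle
    width_shift_range=0.15,       # off-center framing
    height_shift_range=0.15,
    shear_range=0.15,             # folded or creased notes
    zoom_range=0.2,               # varying distance from the camera
    brightness_range=(0.4, 1.5),  # dim indoor to bright outdoor light
    fill_mode="nearest",
)

train_flow = augmenter.flow_from_directory(
    "data/train",        # hypothetical layout: one subfolder per denomination
    target_size=(224, 224),
    batch_size=32,
    class_mode="categorical",
)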
The approach includes a three-stage pipeline: image acquisition and preprocessing; feature extraction and classification with post-processing to reduce false positives; and auditory output for user feedback. Challenges such as dataset diversity, low-power deployment, and environmental variability were addressed using data augmentation, model optimization, and rule-based post-processing.
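As an illustration of the rule-based post-processing stage, one plausible realization (a sketch, not the paper's exact rules) combines a confidence threshold with a short temporal majority vote, so that a denomination is announced only when several consecutive frames agree:

from collections import Counter, deque

CONF_THRESHOLD = 0.85  # assumed cut-off; would be tuned on validation data
WINDOW = 5             # frames that must (nearly) agree before speaking

recent = deque(maxlen=WINDOW)

def postprocess(probs, labels):
    # Reject low-confidence frames outright and reset the voting window.
    idx = int(probs.argmax())
    if probs[idx] < CONF_THRESHOLD:
        recent.clear()
        return None
    recent.append(labels[idx])
    # Announce only when a near-unanimous majority holds across the window.
    if len(recent) == WINDOW:
        label, votes = Counter(recent).most_common(1)[0]
        if votes >= WINDOW - 1:
            return label
    return None

Gating on agreement across frames suppresses one-off misclassifications, which matters for a voice interface where every false positive is spoken aloud.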
Compared to previous works, this system stands out by being offline, cost-effective, scalable, and specifically designed for visually impaired users, supporting real-world usability and inclusive innovation in assistive technology.
Conclusion
In this research, we proposed and implemented an effective deep learning-based approach for real-time banknote recognition aimed at assisting visually impaired individuals. The system is designed to be cost-effective, accurate, and deployable on resource-constrained devices such as the Raspberry Pi. By excluding coin detection, the scope is sharpened to focus entirely on recognizing paper currency, enhancing reliability and relevance. The three-stage detection architecture, comprising region extraction, classification, and voice output, ensures the system can identify notes under diverse conditions including varying lighting, occlusions, and physical distortions. Leveraging Convolutional Neural Networks and optimized deployment using TensorFlow Lite, the proposed model provides high accuracy while maintaining computational efficiency.
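For reference, the standard TensorFlow Lite export path on which such a deployment rests looks as follows; the SavedModel directory name is a hypothetical placeholder, and post-training quantization via Optimize.DEFAULT is a common choice for Raspberry Pi-class hardware rather than a setting confirmed above.

import tensorflow as tf

# Convert the trained classifier (hypothetical SavedModel directory) to TFLite.
converter = tf.lite.TFLiteConverter.from_saved_model("banknote_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()

with open("banknote.tflite", "wb") as f:
    f.write(tflite_model)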
Our experiments validate the system’s capability to function reliably in real-world scenarios. The model has been rigorously trained and tested, with class activation maps used for interpretability and ablation studies performed to validate the effectiveness of each detection stage. Furthermore, when compared with traditional image processing techniques and other state-of-the-art methods, the proposed solution demonstrates superior performance, particularly in robustness and real-time responsiveness on embedded systems.
The project addresses a significant accessibility gap and offers promising practical utility in public and private settings. Future improvements could include extending the system to recognize heavily worn or foreign currency notes, integrating OCR for serial number verification, or adding multilingual voice support. The developed prototype lays the foundation for accessible financial interaction and independence for visually impaired individuals and presents a scalable platform for further innovation in assistive technology.