Authors: Raparthi Vivek, Soujenya Voggu, D. Vasu Mitra, N. Ganesh Gautham
For visually challenged people, distinguishing between different denominations of cash is difficult. Although unique symbols are engraved on Indian currency notes, the task remains hard for blind individuals, and the scarcity of identifying devices prompted the development of a portable tool for denomination recognition. This project aims to create an Android application that assists visually and hearing-impaired people in detecting Indian currency denominations by holding a banknote in front of the camera. The work applies machine learning and Android programming techniques and is grounded in real-world use. The application uses text-to-speech to read the recognised denomination aloud to the user. We leverage the Keras, TensorFlow, Fastai, and PyTorch libraries, along with machine learning architectures such as ResNet and MobileNet. The backend of the application uses machine learning models, Python, and supporting libraries, while the front end employs Java and Android development techniques. Together they are integrated into a single, highly user-friendly platform that is easy to use and apply in daily life.
According to the most recent estimate from the World Health Organization, approximately 2.3 billion people across the world suffer from visual impairment or blindness, with at least 1 billion of them suffering from impaired vision that might have been prevented or that has yet to be addressed. Visually challenged people experience several challenges in conducting regular tasks. They face a number of issues with money transactions as well: they are unable to distinguish paper currencies because of the similarities in paper texture and size between the various denominations.
One of the most fundamental and crucial systems that helps a person with an impairment work around his or her limitations is assistive technology. This study shows how forward-thinking attempts are being made to build assistive technologies for the visually impaired so that they can live a socially and financially independent life. The currency denomination identifier application is an artificially intelligent currency detection app that acts as an assistive tool for the visually impaired to check whether they have been handed the correct amount of money and, thus, ensure that they have not been cheated. It outputs computer-generated audio and has a simple user interface for a better user experience.
II. LITERATURE SURVEY
III. ARCHITECTURE AND LIMITATIONS
When developing on a mobile platform, there are a few things to keep in mind. The three primary limits are program size, memory, and processing time. To function without interfering with other programmes, an app should not utilise more than 100 megabytes of storage or 50 megabytes of RAM on a mobile phone. Our application recognises banknotes in two stages. First, we separate the bill from the background clutter. Then we match the bill against the closest entry in the reference database. Several state-of-the-art computer vision algorithms can handle both of these problems efficiently, but they are not mobile-friendly: implemented directly, the recognition model and other critical data for our application would require more storage and compute capacity than a phone can spare, exceeding practical bounds by a significant margin. The application's response time should be fast and its results accurate.
The challenge is compounded by the fact that the intended audience is visually impaired. The user cannot assess the surrounding environment: other objects, lighting, contrast, or whether the bill is within the camera's field of view. When it comes to task execution, the system should be extremely user-friendly and robust. Using the application should be simple, with no authentication or login and no internet connection required.
Various security features mandated by the RBI for currency notes are evaluated in order to identify the denomination, and they may also be extended to detect whether a note is genuine or counterfeit. Watermark, security thread, and intaglio printing are some of the features used in this project.
The Currency Denomination Identifier Application is built with machine learning and Android development techniques. The steps involved in its development are as follows:
The process of extracting the distinct attributes or traits of a currency note has a direct influence on recognition ability, and currency recognition always depends on the characteristics of a specific country's notes. Many image processing techniques have been proposed over time to extract these characteristics, among them the security thread, the note's length and colour, the RBI logo, the identification mark, and other security features. Since the identification of currency notes relies heavily on feature extraction, several feature extraction techniques are applied. In this paper, we look at feature extraction and identification techniques and libraries.
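As a minimal illustration of feature-based identification (not the paper's exact pipeline), the sketch below extracts one simple hand-crafted feature, a normalised colour histogram, and matches a query note against per-denomination reference features by nearest neighbour. The synthetic "notes" and denominations are stand-ins invented for the example:

```python
import numpy as np

def colour_histogram(image, bins=8):
    """Per-channel colour histogram, normalised to sum to 1."""
    hist = np.concatenate([
        np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    return hist / hist.sum()

def identify(image, references):
    """Return the denomination whose reference histogram is closest (L1 distance)."""
    feat = colour_histogram(image)
    return min(references, key=lambda d: np.abs(feat - references[d]).sum())

# Synthetic "notes": each denomination dominated by a different colour channel.
rng = np.random.default_rng(0)
refs = {}
for i, denom in enumerate(["10", "50", "200"]):
    img = rng.integers(0, 60, (64, 128, 3))
    img[..., i] += 150                  # dominant channel differs per class
    refs[denom] = colour_histogram(img)

query = rng.integers(0, 60, (64, 128, 3))
query[..., 1] += 150                    # green-dominant, like the "50" reference
print(identify(query, refs))            # matches "50"
```

Real notes would of course need features far more robust to lighting and wear, which is precisely why the paper moves to learned features below.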
A. Dataset Preparation and Pre-processing
Aside from the stated value and other printed information, the different denominations of Indian Rupee notes differ in size and colour, making them easy to recognise visually. For the visually impaired, however, text and colour are of no use, and the similar dimensions of the various banknotes may cause confusion. There was previously no dataset of Indian Rupee banknote images, in varied arrangements, adequate for the situations a visually impaired user may face; building such a dataset was therefore part of our work. For this project, we created the Indian Currency Dataset, which now has about 200 images per class, covering a wide range of photos.
While collecting the dataset, we photographed different banknotes of each denomination in varied indoor and outdoor situations. This adds considerable variability to the dataset in terms of lighting, backdrop, and position. The collection includes images of clean and worn-out notes, along with ones bearing scribbles. There are separate classes for the different denominations (covering both the front and back of each note), as well as a class for "background." Each class comprises photos with notes placed in various locations and at varied angles. Images for the "background" class are taken from ImageNet samples. To obtain good real-life performance from the model, the dataset samples must represent, and be evaluated under, all of these conditions.
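One common way to organise such a dataset is one folder per class, which Keras can load directly. The folder names, image size, and tiny dummy images below are assumptions made only to keep the sketch self-contained; the real dataset would hold roughly 200 photographs per class:

```python
import pathlib
import tempfile

import numpy as np
import tensorflow as tf
from PIL import Image

# Build a throwaway per-class directory layout with random stand-in images.
root = pathlib.Path(tempfile.mkdtemp())
classes = ["10_front", "10_back", "50_front", "50_back", "background"]
rng = np.random.default_rng(0)
for name in classes:
    (root / name).mkdir()
    for i in range(4):  # stand-ins for the ~200 photos per class
        arr = rng.integers(0, 255, (224, 224, 3), dtype=np.uint8)
        Image.fromarray(arr).save(root / name / f"{i}.png")

# Keras infers class labels from the folder names.
train_ds = tf.keras.utils.image_dataset_from_directory(
    root, image_size=(224, 224), batch_size=8, seed=0)
print(train_ds.class_names)
```

Keeping front and back of each denomination as separate classes, as the paper describes, means the classifier never has to reconcile the two very different faces of one note under a single label.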
B. Choosing the Right Model
For the past several years, researchers have been fine-tuning deep neural networks to achieve the best mix of accuracy and performance. This becomes significantly more difficult when the model must be deployed to mobile devices while remaining high-performing. For apps such as Seeing AI, the currency detection model must run locally on the phone; to provide the best user experience without losing accuracy, on-phone inference must have low latency.
We chose MobileNet for Seeing AI because it is fast enough for cell phones and gives adequate performance based on our empirical tests.
C. Build and Train Model
Since the dataset contains around 200-250 images per class, we employ two techniques to achieve the required solution:
D. Deploy the Model
We want to run the models locally for applications like Seeing AI so that the app may be utilised even when there is no online connection. The main modules used are:
The backend is made up of the tflite package and a tflite model, along with labels that define all of the classes. The TTS module converts text to speech, so the result can be delivered to the user as audio. MobileNetV2, the newer version of V1, can also be used; in our tests it is 35% faster than V1 and its accuracy has also improved over the previous version.
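The deployment step can be sketched as follows: convert a Keras model to a .tflite flatbuffer and run it through the TFLite interpreter, as the Android backend would. The tiny stand-in model and its 5 classes below are placeholders; the real app would convert the trained MobileNet classifier instead:

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in for the trained classifier.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # 5 stand-in classes
])

# Convert to a TFLite flatbuffer, quantising to shrink it for mobile.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()

# Run one camera-frame-sized input through the interpreter, as on-device.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
frame = np.random.rand(1, 224, 224, 3).astype(np.float32)
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
probs = interpreter.get_tensor(out["index"])
print(probs.shape)  # one probability per class label
```

On Android, the label with the highest probability would then be handed to the TTS module to be spoken aloud.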
V. EXPERIMENTATION AND RESULTS
In a series of trials, bills were held at various distances and positions in front of the device's camera. The trials show that, for the whole arrangement, reliable results are generally obtained when the bill covers at least 40-60% of the overall area of the acquired image, or, equivalently, when the bill is held no farther than arm's length from the camera.
We further attempt to illustrate why our strategy does not work in certain conditions (see Figure 4 for examples of failure). Each banknote has an image of Mahatma Gandhi printed on the front side (see Figure 4). When the bill is folded in half, few distinguishing features remain, and fewer still when the user's fingers cover part of the surface area. Because colour is highly sensitive to lighting and fading, it is not a dependable feature in this situation. As a result, such poses frequently produce inaccurate or ambiguous outcomes. When multiple denominations are in the camera's view, the result is bound to be ambiguous, since the user may be unaware of his or her surroundings.
The test scenarios presented above relate to the suggested method's real-time operation. More data is necessary to increase the models' performance; we should study new techniques to synthesise more data while also collecting more real-life data. In practice, we train the algorithm on both real and synthetic data, then test it against the real-world data we gathered.
Using our technique, we achieved a recognition accuracy of 94.6 percent on an experimental collection of Indian rupee notes, with a fast mean computation time on a standard smartphone. We were successful in achieving our aim of building a system that visually impaired people can use to recognise currency. We moved the system to a mobile platform, overcoming obstacles such as limited computational power and storage while retaining high accuracy and quick announcement times. In the great majority of circumstances involving images taken on a cell phone, the approaches used are effective.
REFERENCES
1. Kaushiki Gautam Singh, Shweta Yadav, Zulfikar Ali Ansari, "Currency Detection for Visually Impaired," JETIR, Volume 7, Issue 5, May 2020 (ISSN 2349-5162).
2. Prakhar Chaturvedi, Harshdeep Kalra, Ritu Raj Madhup, "Paper Currency Identification Using Image Processing and Radial Basis Functions (RBF)," International Journal of Recent Technology and Engineering (IJRTE), Volume 7, Issue 6, March 2019 (ISSN 2277-3878).
3. P. Dhar, M. B. Uddin Chowdhury, T. Biswas, "Paper Currency Detection System Based on Combined SURF and LBP Features," 2018 International Conference on Innovations in Science, Engineering and Technology (ICISET), 2018, pp. 27-30, doi: 10.1109/ICISET.2018.8745646.
4. Shaimaa Hameed, Mohammed Alwan, "Paper Currency Detection based Image Processing Techniques: A Review Paper," Journal of Al-Qadisiyah for Computer Science and Mathematics, Vol. 10, 2018, doi: 10.29304/jqcm.2018.10.1.359.
5. E. Althafiri, M. Sarfraz, M. Alfarras, "Bahraini Paper Currency Recognition," Journal of Advanced Computer Science and Technology Research, 2(2), 2012.
6. J. Guo, Y. Zhao, A. Cai, "A Reliable Method for Paper Currency Recognition Based on LBP," 2010 2nd IEEE International Conference on Network Infrastructure and Digital Content, 2010, pp. 359-363, doi: 10.1109/ICNIDC.2010.5657978.
7. H. Hassanpour, Payam Masoumifarahabadi, "Using Hidden Markov Models for Paper Currency Recognition," Expert Systems with Applications, 36, 2009, pp. 10105-10111, doi: 10.1016/j.eswa.2009.01.057.
8. A. Bhatia, V. Kedia, A. Shroff, M. Kumar, B. K. Shah, Aryan, "Fake Currency Detection with Machine Learning Algorithm and Image Processing," 2021 5th International Conference on Intelligent Computing and Control Systems (ICICCS), 2021, pp. 755-760, doi: 10.1109/ICICCS51141.2021.9432274.
9. Kiran, Chinthana K, Mahendra K N, Sahana Y S, "Feature Extraction and Identification of Indian Currency for Visually Impaired People," International Journal of Engineering Research & Technology (IJERT), NCESC 2018, Volume 6, Issue 13.
10. Jagan Reddy, K. Rao, "Identification of Indian Currency Denomination Using Deep Learning," Journal of Critical Reviews, 7, 2020.
Copyright © 2022 Raparthi Vivek, Soujenya Voggu, D. Vasu Mitra, N. Ganesh Gautham. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.