
International Journal for Research in Applied Science and Engineering Technology (IJRASET)


Currency Detector System for Visually Impaired

Authors: Indresh Gupta, Sagar Kamble, Kartik Nisar, Parth Patel, Prof. Vidya Gogate

DOI Link: https://doi.org/10.22214/ijraset.2022.42256


Abstract

In this modern era, many technologies have boomed, yet many problems remain to be dealt with. One of them, which we try to solve here, is helping blind people recognize currency notes so that they can avoid fraud or scams by a shopkeeper or by any other means. To overcome this problem, we build a system that tells them the denomination of each note shown and the total amount. Our system applies Artificial Neural Networks (ANNs): the classifier uses Convolutional Neural Network (CNN) layers to extract features from the input image, and with those features it classifies the image into one of 7 classes of currency notes. The output is delivered through a speaker that tells the user the denomination of each note as well as the total amount they have shown to the system.

Introduction

I. INTRODUCTION

In the world of Data Science and Artificial Intelligence there are many subfields, two of which are Machine Learning and Deep Learning. For our system we chose the Deep Learning approach over classical Machine Learning. Why? Classical Machine Learning requires explicit feature extraction up front, with classification performed on the extracted features, whereas Deep Learning acts as a "black box" that performs feature extraction and classification on its own.

The deep learning model for our system runs on a Raspberry Pi, which is the system's processing unit and where all the code executes. Training a deep learning model on a Raspberry Pi is quite hectic and very inefficient, so instead we train the model on a computer with high computational capability and convert it into a lite version that the Raspberry Pi can run efficiently. Finally, the output is delivered to the user as speech, telling them which currency notes they have shown the system and the total amount shown.
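The train-on-desktop, convert-for-the-Pi workflow described above can be sketched with the TensorFlow Lite converter. The tiny model below is only a stand-in so the snippet is self-contained; it is not the authors' actual network.

```python
import tensorflow as tf

# Stand-in for the trained currency classifier (the real model is trained
# on a desktop/GPU machine, not on the Raspberry Pi).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(7, activation="softmax"),  # 7 note classes
])

# Convert to a TensorFlow Lite flatbuffer that the Pi can run.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # shrink via quantization
tflite_model = converter.convert()

with open("currency_model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting `.tflite` file is copied to the Raspberry Pi and loaded there with the lightweight runtime.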

II. PROPOSED WORK

The purpose of this study is the development of a system that takes an image of a currency note as input, pre-processes it, extracts the required features, and applies a trained neural network to recognize the class of the input image, whether it is a 10-rupee note, a 20-rupee note, 50, 100, and so on. After recognition the system speaks out the denomination of the note shown and is also able to report the total amount. The complete system can be divided into two major sections, hardware and software, which work together as an embedded system.

As for hardware, we use a Raspberry Pi as the core central processing unit that runs the deep neural network model. We chose the RPi because it provides flexibility and can handle the large amount of computation our system requires, and Python, our programming language, runs on it directly. Alongside the RPi we use the Raspberry Pi Camera Module V2 (8 megapixel), which acts as an eye for the user and for the system, capturing frames from the real world. The system's output, speech, is produced through an ordinary speaker. There is also some supporting hardware: a power supply/power bank and buttons for interaction between the user and the system.

Now for software: as mentioned above, our main programming language is Python. Python is widely used to build Deep Learning as well as Machine Learning models, in large part because the language is more readable to humans than most alternatives. Python is also a general-purpose language: it can perform complex machine learning tasks and enables you to build prototypes quickly, allowing you to test the product's machine learning functionality early. More in-depth information about the models, libraries, and supporting APIs is given in the software description section.

III. DESCRIPTION

A. Hardware Description

The Raspberry Pi 4 offers ground-breaking increases in processor speed, multimedia performance, memory, and connectivity compared to the prior-generation boards, while retaining backwards compatibility and similar power consumption.

Specifications

  1. Broadcom BCM2711, quad-core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5 GHz
  2. 2GB, 4GB or 8GB LPDDR4-3200 SDRAM (depending on model)
  3. 2.4 GHz and 5.0 GHz IEEE 802.11ac wireless, Bluetooth 5.0, BLE
  4. Gigabit Ethernet
  5. 2 USB 3.0 ports; 2 USB 2.0 ports
  6. 5V DC via GPIO header (minimum 3A*)
  7. Power over Ethernet (PoE) enabled (requires separate PoE HAT)
  8. Operating temperature: 0-50 degrees C ambient
  9. *A good-quality 2.5A power supply can be used if downstream USB peripherals consume less than 500mA in total

B. Software Description

The hardware proved insufficient to handle deep, many-layered models and a complex system. To make the ANN hardware-friendly, we build a comparatively shallow network, just a few convolution layers and some dense layers, to classify the currency notes and process the result further for output.
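A network of that shape, a few convolution layers plus dense layers with seven output classes, might look like the following Keras sketch. The input size and layer widths are assumptions for illustration, not the authors' published architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 7  # seven denominations of Indian currency notes

model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 3)),       # RGB frames from the Pi camera
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Keeping the convolutional stack this small is what makes the converted model light enough for the Raspberry Pi.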

Running models built with the full TensorFlow library on the Raspberry Pi is impractical: the full package is too heavy to install and run comfortably on the Pi, and using the RPi for training would in any case be far too slow and computationally intensive. We therefore use TensorFlow Lite, a package developed and provided by the TensorFlow project itself. The tflite_runtime package is a fraction of the size of the full TensorFlow package and includes the bare minimum code required to run inference with TensorFlow Lite. It is suitable for Linux-based embedded devices such as the Raspberry Pi and Coral, and it also works on platforms such as Android and iOS.
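On the Pi, inference with the converted model goes through the Interpreter API; `tflite_runtime.interpreter.Interpreter` exposes the same interface as the `tf.lite.Interpreter` shown here. The placeholder model is converted in memory only so that this snippet is self-contained; on the device you would load the `.tflite` file instead.

```python
import numpy as np
import tensorflow as tf

# Placeholder model converted in memory; on the Pi you would instead load
# the .tflite file produced on the training machine and import Interpreter
# from tflite_runtime.interpreter.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(4, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(7, activation="softmax"),
])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Stand-in for a pre-processed camera frame
frame = np.random.rand(1, 64, 64, 3).astype(np.float32)
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
probs = interpreter.get_tensor(out["index"])   # class probabilities
predicted_class = int(np.argmax(probs))
```

The predicted class index is then mapped to a denomination and spoken aloud.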

IV. RESULT AND ANALYSIS

We pass a set of input images into the system and check the results; the test images are shown below.

A few changes were needed in the program to inspect the output, so we added a print call after every say call; that way we can both see and hear the output. We then pass the above four images one after another by pressing the first button, which is equivalent to capturing images, and finally press the second button to check whether the system can compute the total amount. From the images above, the total should be 360.

Above is the output from the Raspberry Pi's screen. The system works exactly as intended: each time the first button is pressed it captures an image, predicts the note, and stores the value in a list, and after the second button is pressed it reports the total amount as 360, which is correct.
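The two-button flow above, one press per note and a second button for the running total, reduces to simple bookkeeping. The class-to-amount mapping and the particular four notes below are illustrative assumptions; the paper does not list its exact mapping, only that the total in this run is 360.

```python
# Illustrative session logic for the two-button interface; the mapping of
# class index to rupee amount is an assumption for this sketch.
DENOMINATIONS = [10, 20, 50, 100, 200, 500, 2000]  # assumed 7 note classes

def on_capture(predicted_class, session_notes):
    """First button: record the predicted note and return the spoken message."""
    amount = DENOMINATIONS[predicted_class]
    session_notes.append(amount)
    return f"This is a {amount} rupee note"

def on_total(session_notes):
    """Second button: return the spoken running total."""
    return f"Total amount is {sum(session_notes)} rupees"

# Simulated run: one combination of four notes that sums to the paper's 360
notes = []
for cls in [0, 2, 3, 4]:           # 10 + 50 + 100 + 200 = 360
    print(on_capture(cls, notes))  # would be spoken through the speaker
print(on_total(notes))
```

On the device, each function would be wired to a GPIO button callback and its return value passed to the text-to-speech engine.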

A. Analysis

  1. The training accuracy of the model is 98% and the test accuracy is 93%.
  2. With only 600-odd images in the dataset, which is small, the system suffers from overfitting.
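Simple augmentation can stretch a ~600-image dataset to mitigate the overfitting noted above. The following NumPy sketch uses flips and brightness jitter as assumed example transforms; these are not transforms the paper specifies.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image):
    """Return a randomly perturbed copy of an HxWx3 float image in [0, 1]."""
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1, :]              # random horizontal flip
    out = out * rng.uniform(0.8, 1.2)      # brightness jitter
    return np.clip(out, 0.0, 1.0)

# Expand one original image into several augmented training variants
original = rng.random((64, 64, 3))
augmented = [augment(original) for _ in range(4)]
```

Applying a few such variants per image multiplies the effective dataset size without any new photography.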

Conclusion

Our future aim is to fix the remaining problems and build a better device to help people who are visually impaired. Our future scope is as follows: 1) make the device physically stronger and more portable; 2) use a high-definition camera; 3) apply image augmentation to avoid overfitting; 4) build a better UI. With the proper blend of modern hardware like the Raspberry Pi, software, Deep Learning algorithms, and our sincere efforts, we originally tried to construct a device that reads documents aloud for people with impaired vision, but due to hardware constraints on developing and training such a large deep learning model we switched problem statements. The combination of a microcontroller, neural networks, and deep learning with software such as Python, TensorFlow, and TensorFlow Lite detects currency and is ready to provide a solution for the visually impaired.

Copyright

Copyright © 2022 Indresh Gupta, Sagar Kamble, Kartik Nisar, Parth Patel, Prof. Vidya Gogate. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Authors : Parth Patel

Paper Id : IJRASET42256

Publish Date : 2022-05-05

ISSN : 2321-9653

Publisher Name : IJRASET

