Authors: Mr. Vrushab Patil, Mr. Pradeep Parit, Miss. Ruchita Yadav, Mr. Aniruddha Yalgudre, Mr. Prathamesh Gurav, Prof. P. R. Desai
DOI Link: https://doi.org/10.22214/ijraset.2023.56432
The Deaf Helper machine learning project aims to bridge communication gaps and enhance accessibility for the deaf and hard-of-hearing community. In a world where spoken language dominates, the project harnesses machine learning to facilitate seamless communication for individuals who rely on sign language as their primary mode of expression. At its core, the project leverages state-of-the-art techniques from computer vision and natural language processing to recognize and translate sign language gestures into written or spoken language, and vice versa. By combining these technologies, the project aims to create an inclusive and accessible communication tool.
I. INTRODUCTION
Sign language is a mode of communication that uses visual cues such as facial expressions, hand gestures, and body movements to convey meaning. It is extremely helpful for people who have difficulty hearing or speaking. Sign language recognition refers to the conversion of these gestures into the words or alphabets of existing spoken languages. Converting sign language into words with an algorithm or model can therefore help bridge the gap between people with hearing or speech impairments and the rest of the world.
A. Who Uses Sign Language?
B. Why is Sign Language Important?
Some of the benefits of learning and using sign language are as follows:
a. Helps deaf and mute people communicate with others as well as amongst themselves.
b. Helps in the social inclusion of those who suffer from hearing impairment.
c. Gives deaf children a chance to educate themselves.
d. Enhances the level of confidence among the disabled.
e. Instils a feeling of social responsibility and sensitivity among the non-deaf who volunteer to learn sign language in order to communicate with those who are disabled.
f. Makes life easier for the deaf.
II. RELEVANCE OF WORK
Existing systems recognize gestures with high latency because they rely only on image processing. Identification of sign gestures is mainly performed by the following methods:
The glove-based method, which, despite its accuracy, is uncomfortable for practical use.
III. LITERATURE REVIEW
Every existing virtual assistant today is voice automated, making it unusable by deaf-mute people and people with certain disabilities. This creates the need for a system that lets people with speaking or listening disabilities make use of such virtual personal assistants [8]. Artificial neural networks (ANNs) are used in most cases where static recognition is performed, as shown in [1], but they have drawbacks in recognizing distinctive features from images, which can be improved by using a convolutional neural network (CNN). Compared with its predecessors, a CNN recognizes important distinctive features more efficiently and without human supervision. An ANN uses one-to-one mapping, which increases the number of nodes required and degrades efficiency, whereas a CNN uses one-to-many mapping, keeping the number of nodes low and greatly improving efficiency [5]. Many systems designed with such objectives rely on physical hardware, like the design observed in the Cyber Glove, which requires manufacturing dedicated gadgets and makes it mandatory for users to wear them while accessing the virtual assistant [11]. Many systems are limited to a single sign language or series of hand gestures [9], whereas the proposed system offers the flexibility of switching to any standard sign language simply by changing the dataset and retraining the model.
IV. PROPOSED SYSTEM
The proposed system is a real-time system in which live sign gestures are processed using image processing. Classifiers are then used to differentiate the various signs, and the translated output is displayed as text. We develop this application using machine learning; the proposed system uses a convolutional neural network (CNN) to recognize signs. "Deaf Helper using Machine Learning" aims to address the pressing need for improved communication assistance for Deaf individuals, and such a system offers numerous advantages, both for the Deaf community and society as a whole.
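As an illustration of the recognition step, the sketch below runs a toy forward pass of the conv → ReLU → pool → dense → softmax pipeline that a CNN sign classifier would use. The 16x16 input, layer sizes, and 26-class output (one per fingerspelled letter) are illustrative assumptions, and the weights are random rather than trained; a real system would learn them from the gesture dataset.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) of a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(0, x)

def max_pool(x, size=2):
    h, w = x.shape
    h, w = h - h % size, w - w % size  # trim to a multiple of the pool size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(image, kernel, weights, bias):
    """Forward pass: conv -> ReLU -> max-pool -> flatten -> dense -> softmax."""
    features = max_pool(relu(conv2d(image, kernel))).ravel()
    return softmax(features @ weights + bias)

# Dummy 16x16 grayscale gesture frame and random (untrained) parameters.
rng = np.random.default_rng(0)
image = rng.random((16, 16))
kernel = rng.standard_normal((3, 3))
n_features = ((16 - 3 + 1) // 2) ** 2        # 7 * 7 = 49 pooled activations
weights = rng.standard_normal((n_features, 26))  # 26 letter classes (assumed)
probs = classify(image, kernel, weights, np.zeros(26))
print(probs.argmax())  # index of the most probable (here, arbitrary) class
```

In the real system each of these stages would be a trained layer; the point of the sketch is only the shape of the data flow from raw frame to class probabilities.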
Here are some key advantages of such a system:
V. OBJECTIVES
VI. METHODOLOGY
The "Deaf Helper" project employs a combination of natural language processing (NLP), machine learning, and computer vision techniques to achieve its goal of facilitating communication for individuals with hearing impairments. The project can be broken down into the following key components:
A. Data Collection and Pre-processing
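The paper does not detail the preprocessing pipeline, but a minimal sketch of the frame preparation commonly done before CNN classification (centre crop, downsampling, normalization) might look like the following. The 32x32 target size and grayscale input are assumptions for illustration.

```python
import numpy as np

def preprocess_frame(frame, target=32):
    """Crop a grayscale frame to a centred square, downsample it by block
    averaging to target x target, and scale pixel values to [0, 1]."""
    h, w = frame.shape
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    square = frame[top:top + side, left:left + side].astype(float)
    # Trim so the side divides evenly by the target, then block-average.
    side -= side % target
    block = side // target
    small = square[:side, :side].reshape(target, block, target, block).mean(axis=(1, 3))
    return small / 255.0

# A fake 120x160 8-bit camera frame.
frame = (np.arange(120 * 160, dtype=np.uint32) % 256).astype(np.uint8).reshape(120, 160)
x = preprocess_frame(frame)
print(x.shape)  # (32, 32)
```

Frames normalized this way can be fed directly to the classifier; consistent preprocessing between training and live capture matters more than the particular target size.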
B. Text to Sign Conversion
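One simple way to realize text-to-sign conversion is to map known words to pre-recorded sign clips and fall back to per-letter fingerspelling for out-of-vocabulary words. The sketch below assumes a hypothetical clip table (`SIGN_CLIPS`) and file layout; a real system would index a sign-video corpus for the chosen sign language.

```python
# Hypothetical lookup table mapping words to sign clips (illustrative names).
SIGN_CLIPS = {"hello": "hello.mp4", "thank": "thank.mp4", "you": "you.mp4"}

def text_to_sign_sequence(text):
    """Map each word to a known sign clip, falling back to per-letter
    fingerspelling images for words not in the vocabulary."""
    sequence = []
    for word in text.lower().split():
        word = word.strip(".,!?")
        if word in SIGN_CLIPS:
            sequence.append(SIGN_CLIPS[word])
        else:
            sequence.extend(f"letters/{ch}.png" for ch in word if ch.isalpha())
    return sequence

print(text_to_sign_sequence("Hello, thank you Sam!"))
# ['hello.mp4', 'thank.mp4', 'you.mp4', 'letters/s.png', 'letters/a.png', 'letters/m.png']
```

Because the vocabulary lives in one table, switching to another standard sign language (as the proposed system allows) only requires swapping the dataset behind the lookup.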
C. Voice to Sign Conversion
To sum up, "Deaf Helper" is a cutting-edge Python project that closes the communication gap and empowers people with hearing loss. By converting voice and text to sign language, the project provides an inclusive and accessible form of communication. The project's technique makes use of clever spell-correction algorithms, machine learning models, and data preprocessing to deliver precise and effective sign language translations. "Deaf Helper" is a positive step towards enhancing the quality of life for the hard of hearing as well as advancing accessibility and inclusivity in the digital age.
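As a concrete example of the spell-correction step mentioned above, a minimal corrector can pick the vocabulary word with the smallest Levenshtein edit distance to the user's input. This is an illustrative sketch, not the project's actual algorithm.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming (two rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def correct(word, vocabulary):
    """Return the vocabulary word closest to the (possibly misspelt) input."""
    return min(vocabulary, key=lambda v: edit_distance(word, v))

vocab = ["hello", "help", "thank", "you"]
print(correct("thnak", vocab))  # 'thank' (distance 2, closest in vocab)
```

Cleaning user input this way before the text-to-sign lookup keeps out-of-vocabulary fallbacks (such as fingerspelling) to a minimum.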
[1] Yusnita, L., Rosalina, R., Roestam, R. and Wahyu, R., 2017. Implementation of Real-Time Static Hand Gesture Recognition Using Artificial Neural Network. CommIT (Communication and Information Technology) Journal, 11(2), p.85.
[2] Rathi, P., Kuwar Gupta, R., Agarwal, S. and Shukla, A., 2020. Sign Language Recognition Using ResNet50 Deep Neural Network Architecture. SSRN Electronic Journal.
[3] V. Adithya, P. R. Vinod and U. Gopalakrishnan, "Artificial neural network based method for Indian sign language recognition," 2013 IEEE Conference on Information & Communication Technologies, Thuckalay, Tamil Nadu, India, 2013, pp. 1080-1085.
[4] Guru99.com. 2020. Tensorflow Image Classification: CNN (Convolutional Neural Network). [online]
[5] Guo, T., Dong, J., Li, H. and Gao, Y., 2017. Simple Convolutional Neural Network on Image Classification. IEEE 2nd International Conference on Big Data Analytics, pp.1-2.
[6] Medium. 2020. A Comprehensive Guide to Convolutional Neural Networks: The ELI5 Way. [online]
[7] Medium. 2020. Deep Learning with Tensorflow: Part 1, Theory and Setup. [online]
[8] Issac, R. and Narayanan, A., 2018. Virtual Personal Assistant. Journal of Network Communications and Emerging Technologies (JNCET), 8(10), October 2018.
[9] Lai, H. and Lai, H., 2014. Real-Time Dynamic Hand Gesture Recognition. International Symposium on Computer, Consumer and Control, pp.658-661.
[10] Pankajakshan, P. and Thilagavathi, B., 2015. Sign language recognition system. 2015 International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS).
[11] K. A. Bhaskaran, A. G. Nair, K. D. Ram, K. Ananthanarayanan and H. R. Nandi Vardhan, "Smart gloves for hand gesture recognition: Sign language to speech conversion system," 2016 International Conference on Robotics and Automation for Humanitarian Applications (RAHA), Kollam, 2016, pp. 1-6, doi: 10.1109/RAHA.2016.7931887.
[12] Ertham, F. and Aydin, G., 2017. Data Classification with Deep Learning using Tensorflow. IEEE 2nd International Conference on Computer Science and Engineering.
Copyright © 2023 Mr. Vrushab Patil, Mr. Pradeep Parit, Miss. Ruchita Yadav, Mr. Aniruddha Yalgudre, Mr. Prathamesh Gurav, Prof. P. R. Desai. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Paper Id : IJRASET56432
Publish Date : 2023-10-31
ISSN : 2321-9653
Publisher Name : IJRASET