Abstract
Sign language was developed to help deaf and dumb people gain knowledge and interact properly with the world; it is expressed through finger and hand movements. It has been observed that deaf and dumb people often feel inferior in society and do not interact much with others. Many projects have tried to support them, yet adequate, affordable assistance is still lacking, because the cost of most existing systems puts them out of reach at any scale. A project developed by MIT students costs more than 2 lakh rupees, and many other efforts have failed for the same reason: in a developing country like India, a product costing 2 lakh rupees per person is simply not viable. The literature also shows that computers have not reached even ordinary schools in rural and remote areas of India, so providing computers for deaf and dumb children to learn with is certainly far-fetched. This motivated us to design and develop a user-friendly, cost-effective learning aid for deaf and dumb children. The project aims to build a learning aid for the English language that infuses a sense of playing while learning. The proposed idea is implemented on an Arduino microcontroller interfaced with an LCD, a gyroscope, and a mobile device as output devices. The proposed model is a glove that converts sign language into human voice that everyone can understand; this builds a bridge between the hearing and the deaf and dumb, helping them become full participants in society. The biggest challenge is to reduce the cost as much as possible and keep the device widely available to society.
Introduction
In an information-driven society, it is essential to enable easy access to information for all, including over 20 million deaf and dumb people worldwide, who face significant communication challenges. Sign language is their primary mode of communication but is not universally understood. To aid communication, this project focuses on converting sign language into speech using technology.
The system uses gloves equipped with sensors (flex sensors, accelerometers, and contact sensors) to detect hand gestures. These inputs are processed by an Arduino microcontroller, displayed as text on an LCD, and transmitted via Bluetooth to a mobile device, where text-to-speech software converts the text into audible voice. This approach improves interaction between deaf and dumb people and hearing people.
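As a rough illustration of this pipeline, the sketch below (for an Arduino using the standard LiquidCrystal and SoftwareSerial libraries) reads a few flex sensors, matches the bend pattern against a small lookup table, shows the recognized word on the LCD, and forwards it over a serial Bluetooth link to the phone. The pin assignments, the HC-05 module, the bend threshold, and the example gesture table are assumptions made for illustration, not the project's actual wiring or calibration.

// Minimal sketch (assumed wiring): five flex sensors on A0-A4 in voltage
// dividers, a 16x2 LCD on the LiquidCrystal pins below, and an HC-05
// Bluetooth module on pins 9/8 feeding the phone's text-to-speech app.
#include <LiquidCrystal.h>
#include <SoftwareSerial.h>

LiquidCrystal lcd(12, 11, 5, 4, 3, 2);   // RS, E, D4-D7 (assumed pins)
SoftwareSerial bt(9, 8);                 // RX, TX to HC-05 (assumed pins)

const int FLEX_PINS[5] = {A0, A1, A2, A3, A4};
const int BEND_THRESHOLD = 600;          // ADC value separating straight/bent (needs calibration)

void setup() {
  lcd.begin(16, 2);
  bt.begin(9600);
}

void loop() {
  // Encode each finger as bent (1) or straight (0) into a 5-bit pattern.
  byte pattern = 0;
  for (int i = 0; i < 5; i++) {
    if (analogRead(FLEX_PINS[i]) > BEND_THRESHOLD) pattern |= (1 << i);
  }

  // Hypothetical lookup: map a few example patterns to words.
  const char *text = "";
  if (pattern == 0b11111) text = "HELLO";
  else if (pattern == 0b00001) text = "YES";
  else if (pattern == 0b00011) text = "NO";

  if (text[0] != '\0') {
    lcd.clear();
    lcd.print(text);      // show the recognized text on the LCD
    bt.println(text);     // send it to the phone for text-to-speech
  }
  delay(300);
}

On the phone side, any text-to-speech application that accepts text over a Bluetooth serial connection could then speak the received words aloud.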
A notable existing project, “SignAloud” gloves, similarly converts American Sign Language gestures to speech using wireless sensors, highlighting the demand for portable, practical solutions.
The current challenge in such devices is high cost and limited availability of sensors, particularly flex sensors, which are expensive and hard to source. To address this, the proposed system replaces costly flex sensors with a combinational circuit made of resistors and metal contact plates, reducing overall cost while maintaining accuracy. The system integrates an ADXL345 accelerometer for motion detection and uses MIT App Inventor to develop a compatible mobile application for easier Bluetooth communication.
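A minimal sketch of this low-cost sensing arrangement is shown below, assuming the fingertip contact plates are read as plain digital inputs with the Arduino's internal pull-ups (each plate shorts to a grounded palm plate on contact) and the ADXL345 is read over I2C at address 0x53 with the standard Wire library. The pin choices, the register writes shown, and the tilt threshold are illustrative assumptions rather than the prototype's exact circuit.

#include <Wire.h>

const int CONTACT_PINS[4] = {2, 3, 4, 5};   // fingertip contact plates (assumed pins)
const byte ADXL345_ADDR = 0x53;             // ADXL345 address with ALT ADDRESS tied low

void setup() {
  Serial.begin(9600);                        // this stream could feed an HC-05 Bluetooth module
  for (int i = 0; i < 4; i++) pinMode(CONTACT_PINS[i], INPUT_PULLUP);

  Wire.begin();
  Wire.beginTransmission(ADXL345_ADDR);      // POWER_CTL (0x2D): set the Measure bit
  Wire.write(0x2D);
  Wire.write(0x08);
  Wire.endTransmission();
}

void loop() {
  // Read which fingertip plates are touching the common grounded palm plate.
  byte contacts = 0;
  for (int i = 0; i < 4; i++) {
    if (digitalRead(CONTACT_PINS[i]) == LOW) contacts |= (1 << i);
  }

  // Read the six raw acceleration bytes starting at DATAX0 (0x32).
  Wire.beginTransmission(ADXL345_ADDR);
  Wire.write(0x32);
  Wire.endTransmission(false);
  Wire.requestFrom(ADXL345_ADDR, (byte)6);
  byte raw[6];
  for (int i = 0; i < 6; i++) raw[i] = Wire.read();
  int16_t az = (int16_t)(raw[4] | (raw[5] << 8));   // Z axis, little-endian

  // Combine the contact pattern with a coarse orientation flag; the phone app
  // (built in MIT App Inventor) would map these codes to spoken words.
  bool palmDown = (az < -100);               // rough tilt check, needs calibration
  Serial.print(contacts);
  Serial.print(',');
  Serial.println(palmDown ? "DOWN" : "UP");

  delay(300);
}

Keeping the gesture encoding on the Arduino simple (a contact pattern plus an orientation flag) shifts the word lookup to the App Inventor application, which is easier to update than reflashing the microcontroller.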
The project aims to make sign-to-voice conversion affordable and accessible, enhancing communication for the deaf and dumb community and integrating hardware, software, and mobile applications into a working prototype.