Abstract
Communication plays a vital role in human interaction and daily life, yet people with speech impairments face significant barriers in expressing their thoughts and emotions. Although sign language provides them a medium of communication, it is not universally understood and is often misinterpreted. Existing systems in this area convert hand gestures into text messages, but their main drawback is bulkiness. This project proposes a simpler solution: a lightweight wearable system that produces human speech directly from the gestures performed rather than displaying text messages.
Introduction
Communication is central to human interaction, but individuals with speech impairments struggle to take part in everyday conversation. While sign language offers an alternative, its limited universal understanding creates barriers in daily communication.
To address this, various gesture-based communication systems have been developed, typically using sensor-equipped gloves and microcontrollers to convert gestures into text. However, these systems are often bulky, complex, and dependent on external devices like mobile phones, limiting their practicality.
The proposed solution is a compact, wearable embedded system that directly converts hand gestures into audible speech. It uses an ESP32 microcontroller, flex sensors to detect finger bending, and an MPU6050 inertial measurement unit (accelerometer and gyroscope) to track hand motion. The system classifies the gesture data and triggers the corresponding pre-recorded audio file through an MP3 module and speaker, enabling real-time speech output without any external device.
Designed to be lightweight, portable, and efficient, the system improves usability for daily communication. Overall, it provides a practical and accessible assistive technology for individuals with speech impairments by simplifying gesture-based communication and making it more effective in real-world scenarios.
Conclusion
In this work, a compact and efficient gesture-based speech synthesis system was developed to assist individuals with speech impairment. The system converts hand gestures into audible speech using a wearable glove integrated with sensors and an ESP32 microcontroller. Its standalone design eliminates dependency on external devices, making the system more practical and portable. Experimental results demonstrate that the system accurately recognizes the predefined gestures and generates the corresponding speech output. This approach provides a simple and user-friendly solution to improve communication for people with speech disabilities. In future work, the system can be extended to support a larger gesture vocabulary and to improve recognition accuracy.