Abstract
With recent advances in a variety of techniques, the field of research on sign language recognition is expanding rapidly. The goal of this research is to build a system that is easy to use for people who have difficulty speaking and hearing, especially those who use sign language. Sign language is crucial for people with speech or hearing impairments; for many of them, it is the only means of communication. Our project focuses on easing the communication process for these users. The primary objective of our project is to develop an application that converts sign language (signs) given as input into text and voice output, and vice versa. A secondary objective is to deliver these features through an Android application that is easy to use and has an intuitive UI, which in turn improves the overall experience of using the application.
1. Introduction
The growing capabilities of Android devices have enabled apps to support advanced features like sign language recognition, aimed at improving communication for deaf and mute users. This project presents a bidirectional communication app that supports sign-to-text, text-to-sign, voice-to-sign, picture-to-sign, object detection, and language recognition, fostering inclusivity and independence.
2. Key Features and Functionalities
A. Bidirectional Communication
Unlike traditional one-way sign language apps, this app allows:
Text to Sign
Sign to Text
Voice to Sign
Picture to Sign
Object Detection
Language Identification
B. Advanced Modules
Uses Firebase ML Kit, Google Vision API, and custom gesture recognition models.
Employs Support Vector Machines (SVM) for classification and regression.
Integrates a user-friendly interface built in Android Studio.
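To make the SVM classification step above concrete, the following is a minimal sketch of how a pre-trained linear SVM's decision function could classify a gesture feature vector. The class name, weights, and two-feature layout are illustrative assumptions, not details from the paper.

```java
// Hypothetical sketch: inference with a pre-trained linear SVM.
// Weights, bias, and the 2-D feature layout are illustrative only.
import java.util.Arrays;

public class LinearSvm {
    private final double[] weights;
    private final double bias;

    public LinearSvm(double[] weights, double bias) {
        this.weights = Arrays.copyOf(weights, weights.length);
        this.bias = bias;
    }

    /** Returns +1 or -1 depending on which side of the hyperplane x falls. */
    public int classify(double[] x) {
        double score = bias;
        for (int i = 0; i < weights.length; i++) {
            score += weights[i] * x[i];
        }
        return score >= 0 ? 1 : -1;
    }

    public static void main(String[] args) {
        // Toy 2-D model separating "open palm" (+1) from "fist" (-1).
        LinearSvm svm = new LinearSvm(new double[]{1.0, -0.5}, -0.2);
        System.out.println(svm.classify(new double[]{1.0, 0.4})); // +1 side
        System.out.println(svm.classify(new double[]{0.1, 0.9})); // -1 side
    }
}
```

In practice the weights would come from training on labeled gesture features; only the inexpensive dot-product inference needs to run on the phone.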
C. Unique Capabilities
Users can upload images or use the camera for object and text recognition.
Converts voice memos or speech into sign language animations.
Detects and translates languages, including unfamiliar ones, into American English, then into sign language.
Shows object recognition confidence scores for better accuracy.
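The confidence scores mentioned above suggest a simple filtering step before results are shown to the user. The sketch below, with illustrative label names and an assumed threshold, keeps only detections above a minimum confidence, best first.

```java
// Hypothetical sketch: filtering object-detection labels by confidence score
// before display. Label names and the 0.7 threshold are illustrative.
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ConfidenceFilter {
    /** Keeps only labels whose confidence meets the threshold, best first. */
    public static List<String> filter(Map<String, Double> detections, double threshold) {
        List<String> kept = new ArrayList<>();
        detections.entrySet().stream()
                .filter(e -> e.getValue() >= threshold)
                .sorted((a, b) -> Double.compare(b.getValue(), a.getValue()))
                .forEach(e -> kept.add(e.getKey()));
        return kept;
    }

    public static void main(String[] args) {
        Map<String, Double> detections = new LinkedHashMap<>();
        detections.put("cup", 0.91);
        detections.put("phone", 0.42);
        detections.put("book", 0.78);
        System.out.println(filter(detections, 0.7)); // [cup, book]
    }
}
```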
3. Literature Review Highlights
Previous Research: Discusses limitations in Indian Sign Language (ISL), sensor-based gloves, hand gesture detection, and CNN-based models.
Innovative Methods: Combines approaches like SURF, Hu Moments, CNN + BiLSTM, and SignBERT to improve real-time, signer-independent recognition.
4. System Requirements
Hardware: Android smartphone with Android 5.0+, camera, and microphone.
Software: Android Studio, Google APIs (Vision, Voice), Firebase ML Kit, Java utilities like HashMap.
5. System Design
Six Functional Segments:
Text to Sign Conversion – Converts typed text into sign language animation.
Picture to Sign Conversion – Extracts text from images and converts to signs.
Voice to Sign Conversion – Uses speech recognition to create sign output.
Sign to Text Conversion – Detects sign gestures via camera and converts to text.
Object Detection – Identifies and labels objects in images.
Language Identification – Detects language of text input and translates to English and sign language.
Core Modules:
Content Recognition System
Firebase ML Kit
Motion Recognition System
6. Implementation
Uses machine learning and computer vision to accurately recognize and map input (text, image, voice) to appropriate sign gestures.
Integrates real-time image and speech recognition.
Supports gesture training through diverse datasets for robustness across lighting and environments.
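One common way to achieve the robustness across lighting conditions described above is to normalize each frame's intensities before classification. The sketch below, assuming grayscale pixel values in a flat array, standardizes a frame to zero mean and unit variance so that a dim and a bright capture of the same gesture look identical to the classifier; it is one illustrative technique, not necessarily the paper's exact preprocessing.

```java
// Hypothetical sketch: normalizing a grayscale frame to zero mean and unit
// variance, one simple way to reduce sensitivity to lighting changes.
public class FrameNormalizer {
    /** Returns a copy of the frame with mean 0 and (population) std dev 1. */
    public static double[] normalize(double[] pixels) {
        double mean = 0.0;
        for (double p : pixels) mean += p;
        mean /= pixels.length;

        double variance = 0.0;
        for (double p : pixels) variance += (p - mean) * (p - mean);
        double std = Math.sqrt(variance / pixels.length);
        if (std == 0.0) std = 1.0; // flat frame: avoid division by zero

        double[] out = new double[pixels.length];
        for (int i = 0; i < pixels.length; i++) out[i] = (pixels[i] - mean) / std;
        return out;
    }

    public static void main(String[] args) {
        // A dim frame and the same frame under brighter light normalize alike.
        double[] dim    = {10, 20, 30, 40};
        double[] bright = {110, 120, 130, 140};
        System.out.println(java.util.Arrays.equals(normalize(dim), normalize(bright)));
    }
}
```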
7. Results & Evaluation
The system achieved high accuracy using:
CNN models for sign recognition.
HashMap for efficient text-to-sign mapping.
Google APIs for image, text, and voice conversion.
Demonstrated reliable performance across all modules with significant improvements in accessibility and user interaction.
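The HashMap-based text-to-sign mapping mentioned above can be sketched as a lookup from words to sign-animation assets. The class name, asset file names, and the fallback behavior are illustrative assumptions rather than details from the paper.

```java
// Hypothetical sketch of the HashMap-based text-to-sign lookup: each word
// maps to a sign-animation asset name. File names are illustrative.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TextToSignMapper {
    private final Map<String, String> signAssets = new HashMap<>();

    public TextToSignMapper() {
        // Word-level entries; unknown words could fall back to fingerspelling.
        signAssets.put("hello", "sign_hello.gif");
        signAssets.put("thank", "sign_thank.gif");
        signAssets.put("you", "sign_you.gif");
    }

    /** Maps each word of the input text to its animation asset, if known. */
    public List<String> toSigns(String text) {
        List<String> animations = new ArrayList<>();
        for (String word : text.toLowerCase().split("\\s+")) {
            String asset = signAssets.get(word);
            if (asset != null) animations.add(asset);
        }
        return animations;
    }

    public static void main(String[] args) {
        TextToSignMapper mapper = new TextToSignMapper();
        System.out.println(mapper.toSigns("Thank you")); // [sign_thank.gif, sign_you.gif]
    }
}
```

A HashMap gives O(1) average lookup per word, which is why it suits real-time conversion of typed or recognized text into sign output.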
Conclusion
The study covers potential enhancements to hand gesture recognition systems, including generalizing the system to support more gestures and actions, and training it on data from several users to account for variance in gesture execution. User testing is useful for identifying errors in recognition accuracy. The study also discusses the key techniques, applications, and challenges of hand gesture recognition, including gesture acquisition methods, feature extraction, classification, and applications in sign language and robotics. Environmental issues and dataset availability are addressed, emphasizing the need for additional research in this area.
While current methods have demonstrated strong performance, there is still room to extend hand gesture detection into other technical domains such as tablets, smartphones, and game consoles. Hand gesture recognition has the potential to make human-computer interaction more natural and pleasant. The study also introduces an automatic hand-sign language translator for mute/deaf people and discusses system requirements and performance objectives. It goes into detail on software issues such as system startup and recognition algorithms, examines the challenge of ambiguous measurements, and recommends technical solutions.
References
[1] "A Survey on Hand Gesture Recognition for Indian Sign Language", IRJET, 2016.
[2] "Gesture Recognition and Machine Learning Applied to Sign Language Translation", Springer, 2016.
[3] "Towards Interpreting Robotic System for Fingerspelling Recognition in Real Time", ACM, 2018.
[4] "Hand Gesture Movement Recognition System Using Convolution Neural Network Algorithm", International Research Journal of Computer Science (IRJCS), Issue 04, Volume 6, April 2019, ISSN: 2393-9842.
[5] "Hand Gesture Recognition for Sign Language: A New Hybrid Approach", ResearchGate, January 2020.
[6] "Development of an End-to-End Deep Learning Framework for Sign Language Recognition, Translation, and Video Generation", IEEE Access, 2022.
[7] "SignBERT: A BERT-Based Deep Learning Framework for Continuous Sign Language Recognition", IEEE Access, Volume 9, 2021.
[8] "A Review of the Hand Gesture Recognition System: Current Progress and Future Directions", IEEE Access, Volume 9, 2021.
[9] "Deep Learning-Based Approach for Sign Language Gesture Recognition with Efficient Hand Gesture Representation", IEEE Access, Volume 8, 2020.
[10] "Real-Time Static Hand Gesture Recognition for American Sign Language (ASL) in Complex Background", SCIRP, 2012.