Abstract
This paper presents a Sign Language Detection System designed to bridge communication barriers for the hearing and speech impaired. Leveraging computer vision techniques, the system uses OpenCV, MediaPipe, Scikit-learn, NumPy, and Matplotlib to detect and classify sign language gestures in real time. The model is trained on an extensive dataset of hand gestures to improve accuracy and responsiveness. Using Pickle for model serialization, the system supports seamless loading and deployment, promoting accessibility and ease of use. This project demonstrates the impact of AI-driven solutions in promoting inclusivity and reducing communication barriers for differently-abled individuals.
Introduction
The project develops a Sign Language Detection System that bridges communication gaps between sign language users and the general population by translating hand gestures into text in real time. It combines OpenCV for video capture, MediaPipe for efficient hand tracking and landmark extraction, and Scikit-learn for machine learning classification, as sketched below.
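The following Python sketch illustrates this capture-and-extraction pipeline. It is a minimal example rather than the paper's exact implementation: it assumes the MediaPipe Hands solution API, a default webcam at index 0, and illustrative confidence parameters.

    import cv2
    import mediapipe as mp

    # MediaPipe Hands returns 21 (x, y, z) landmarks per detected hand.
    hands = mp.solutions.hands.Hands(static_image_mode=False,
                                     max_num_hands=1,
                                     min_detection_confidence=0.5)

    cap = cv2.VideoCapture(0)  # default webcam (index 0 is an assumption)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures frames in BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            # Flatten the 21 landmarks into a 42-value (x, y) feature vector;
            # this is what the trained classifier would consume.
            features = [c for p in lm for c in (p.x, p.y)]
        cv2.imshow("Sign Language Detection", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):  # press 'q' to quit
            break
    cap.release()
    cv2.destroyAllWindows()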
The system architecture includes data preprocessing, feature extraction of hand landmarks, model training (using classifiers like SVM or Random Forest), and real-time gesture recognition. Tested on a diverse dataset of sign language gestures, the model achieved over 90% accuracy with fast processing speeds (~0.5 seconds per frame), making it suitable for practical use.
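A sketch of the training and serialization stages follows. The feature matrix here is synthetic stand-in data (in the real pipeline each row would come from the landmark extraction step above), and the Random Forest hyperparameters and model filename are illustrative assumptions, not values reported in this paper.

    import pickle
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in data: in the real pipeline each row is a flattened
    # 42-value landmark vector and each label names a gesture class.
    rng = np.random.default_rng(0)
    X = rng.random((200, 42))
    y = rng.integers(0, 5, size=200)  # 5 placeholder gesture classes

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42)

    clf = RandomForestClassifier(n_estimators=100, random_state=42)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))

    # Serialize with Pickle so the real-time loop can load the model
    # without retraining (the filename is hypothetical).
    with open("gesture_model.pkl", "wb") as f:
        pickle.dump(clf, f)

At inference time, pickle.load restores the classifier and clf.predict([features]) maps each frame's landmark vector to a gesture label.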
Future plans involve expanding gesture vocabulary, improving accuracy with advanced neural networks, and enhancing continuous gesture recognition, aiming to create a robust, scalable tool to improve communication accessibility for the deaf and hard of hearing.
Conclusion
The Sign Language Detection System provides an accessible solution to facilitate communication for sign language users. Integrating OpenCV, MediaPipe, and Scikit-learn, the system offers real-time detection and classification of gestures, with potential applications in education, healthcare, and daily interaction. Future work may focus on expanding the gesture vocabulary, incorporating advanced neural networks to improve recognition accuracy, and extending the system to support continuous gestures. Such enhancements could further improve the inclusivity and utility of the project, making it a valuable tool for bridging communication gaps.