This project presents an AI-powered Braille Learning and Assistance System designed for visually challenged individuals, aiming to make education more inclusive and adaptive. The system integrates tactile Braille feedback using six servo motors and voice-based interaction via speech recognition and synthesis. It runs entirely on a Raspberry Pi 4, making it compact and deployable in low-resource environments. The use of lightweight large language models (LLMs) like Phi-3 and SmolLM2 allows real-time, locally processed conversational AI. This enables personalized teaching, content summarization, and interaction without requiring internet connectivity. The solution shows promise for educational settings where accessibility and adaptability are crucial.
Introduction
Objective:
To enhance inclusive education for visually challenged individuals by developing a low-cost, AI-driven Braille learning system that combines tactile (Braille) and auditory feedback for an interactive, offline, and accessible learning experience.
Key Features:
Platform: Built on Raspberry Pi 4, enabling portability and affordability.
Tactile Output: Six micro servo motors simulate real-time Braille characters for touch-based reading.
Voice Interaction:
Speech Recognition (via Vosk) for hands-free control.
Text-to-Speech (pyttsx3) for clear, multilingual auditory feedback.
AI-Powered Local Model:
Uses SmolLM2 (360M parameters) deployed with llama.cpp for fully offline intelligent responses, preserving privacy and reducing latency (a minimal loading sketch follows this feature list).
Customization: Users can personalize Braille output speed, voice tone, and response styles.
Voice Note-Taking: Allows users to record and organize notes using only speech.
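To make the local-model feature concrete, the following is a minimal sketch of how SmolLM2-360M could be served offline through the llama-cpp-python bindings. The GGUF filename, context size, thread count, and sampling parameters below are illustrative assumptions, not values reported by this work.

```python
# Minimal sketch: offline inference with SmolLM2-360M via llama-cpp-python.
# Assumes the model has been converted to GGUF (the filename below is
# hypothetical) and that llama-cpp-python is installed.
from llama_cpp import Llama

llm = Llama(
    model_path="smollm2-360m-instruct-q4_k_m.gguf",  # hypothetical local file
    n_ctx=2048,      # modest context window, suitable for a Raspberry Pi 4
    n_threads=4,     # the Pi 4 has four Cortex-A72 cores
    verbose=False,
)

def answer(query: str) -> str:
    """Return a short, locally generated response to a learner's question."""
    out = llm.create_chat_completion(
        messages=[
            {"role": "system", "content": "You are a patient Braille tutor."},
            {"role": "user", "content": query},
        ],
        max_tokens=128,
        temperature=0.7,
    )
    return out["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(answer("Summarize the Braille pattern for the letter A."))
```

Because inference stays on-device, no query ever leaves the Raspberry Pi, which is what preserves privacy and removes network latency.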
Related Work Comparison:
Previous systems often focused solely on auditory output or on cloud-based tools, limiting tactile interaction and offline use.
This system stands out for its fully offline, multimodal design, combining tactile Braille feedback with AI-powered voice interaction.
System Architecture Modules:
Speech Recognition (Vosk): Converts voice input to text (a pipeline sketch follows this list).
Local Language Model (SmolLM2): Processes queries intelligently offline.
Text-to-Speech (pyttsx3): Converts responses into speech.
6-Servo Braille Display: Outputs tactile Braille in real time (see the servo sketch after this list).
Customization Module: Allows settings control via voice.
Voice-Based Note-Taking: Stores and organizes spoken notes locally.
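The first three modules form a capture, reason, speak loop, and the note-taking module branches off the recognized text. Below is a minimal sketch of that loop under stated assumptions: a Vosk English model unpacked at the hypothetical path "model", microphone capture via the sounddevice package (one common choice, not necessarily the authors'), the answer() helper from the SmolLM2 sketch above standing in for the local model call, and an illustrative "take a note" trigger phrase.

```python
# Minimal sketch of the voice pipeline: Vosk speech-to-text, a local LLM
# reply, and pyttsx3 text-to-speech. The "model" directory, the 16 kHz
# sample rate, and the note-command keyword are assumptions.
import json
import queue

import pyttsx3
import sounddevice as sd
from vosk import KaldiRecognizer, Model

stt_model = Model("model")            # hypothetical path to a Vosk model
recognizer = KaldiRecognizer(stt_model, 16000)
tts = pyttsx3.init()
tts.setProperty("rate", 150)          # adjustable speaking speed

audio_q: "queue.Queue[bytes]" = queue.Queue()

def _capture(indata, frames, time, status):
    """Queue raw microphone blocks for the recognizer."""
    audio_q.put(bytes(indata))

def listen_once() -> str:
    """Block until Vosk finalizes one utterance and return its text."""
    with sd.RawInputStream(samplerate=16000, blocksize=8000,
                           dtype="int16", channels=1, callback=_capture):
        while True:
            if recognizer.AcceptWaveform(audio_q.get()):
                return json.loads(recognizer.Result()).get("text", "")

def speak(text: str) -> None:
    tts.say(text)
    tts.runAndWait()

if __name__ == "__main__":
    utterance = listen_once()
    if utterance.startswith("take a note"):     # hypothetical voice command
        with open("notes.txt", "a") as f:       # local, offline note storage
            f.write(utterance + "\n")
        speak("Note saved.")
    else:
        speak(answer(utterance))  # answer() from the SmolLM2 sketch above
```

The tactile module maps each character onto the standard six-dot Braille cell, one servo per dot. The sketch below is a minimal illustration, assuming gpiozero-driven servos; the GPIO pin numbers, the hold delay, and the small letter-to-dot table (standard Unified English Braille patterns for a–e) are illustrative and would need to be extended and tuned for real hardware.

```python
# Minimal sketch of the six-servo Braille cell. Each tuple lists the raised
# dots (1-6) for a character; the GPIO pin numbers below are hypothetical.
from time import sleep

from gpiozero import Servo

DOT_PINS = [17, 27, 22, 23, 24, 25]   # one GPIO pin per Braille dot (assumed)
servos = [Servo(pin) for pin in DOT_PINS]

# Unified English Braille dot numbers for a few letters (dots 1-6:
# left column top to bottom is 1-3, right column is 4-6).
BRAILLE = {
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5), "e": (1, 5),
}

def show_char(ch: str, hold: float = 1.5) -> None:
    """Raise the servos for ch's dots, hold, then lower the whole cell."""
    raised = BRAILLE.get(ch.lower(), ())
    for dot, servo in enumerate(servos, start=1):
        if dot in raised:
            servo.max()   # raise this pin
        else:
            servo.min()   # retract this pin
    sleep(hold)           # reading time per character (tunable)
    for servo in servos:
        servo.min()

def show_text(text: str) -> None:
    for ch in text:
        show_char(ch)

if __name__ == "__main__":
    show_text("abc")
```

The hold delay is the natural hook for the customization module: a voice command can simply rewrite it, the same way the pyttsx3 rate property adjusts speaking speed.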
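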
Implementation & Testing:
Hardware: Raspberry Pi 4B, six micro servo motors, USB microphone, 128 GB SD card.
Performance:
Smooth real-time operation with no noticeable lag.
Accurate Braille character rendering.
Responsive voice interaction and AI summarization.
Challenges Solved: Servo tuning and microphone noise issues were mitigated.
Results & Impact:
Validated Features: Braille accuracy, effective voice transcription, real-time AI response.
User Feedback: Positive, especially regarding tactile clarity and adjustable voice speed.
Educational Value: Enables self-paced, inclusive learning for visually impaired users, particularly in rural or offline settings.
Conclusion
The development of this AI-powered Braille Learning and Assistance System demonstrates that accessible, adaptive learning tools for visually challenged users can be both affordable and practical. By leveraging servo-actuated Braille, voice interaction, and offline LLMs on a compact Raspberry Pi 4, the system offers a standalone solution for personalized education. Future improvements could include emotion-aware text-to-speech, multilingual support, and enhanced facial recognition to provide context-based feedback. This work lays the foundation for an inclusive educational assistant that can support children, adults, and the elderly, especially in underprivileged or remote areas without stable internet access.
References
[1] Vosk Speech Recognition Toolkit. [Online]. Available: https://alphacephei.com/vosk/
[2] SmolLM2 360M Model. [Online]. Available: https://huggingface.co/Intel
[3] Raspberry Pi 4 Documentation. [Online]. Available: https://www.raspberrypi.com/documentation/
[4] LazoCoder, "Braille Translator," GitHub. [Online]. Available: https://github.com/LazoCoder/Braille-Translator
[5] Braille Authority of North America, "Unified English Braille – Symbols List." [Online]. Available: https://www.brailleauthority.org/ueb/symbolslist.pdf
[6] Thinkerbell Labs. [Online]. Available: https://www.thinkerbelllabs.com/
[7] K. Araki, A. Shimojima, M. Kondo, and K. Yoshino, "Spoken Dialogue System for Learning Braille," in Proc. Int. Conf. Computers Helping People with Special Needs (ICCHP), Linz, Austria, 2018, pp. 3–10.
[8] N. Mahendran, V. Velusamy, S. N. Prabhakar, and P. K. Gunavathi, "Computer Vision-Based Assistance System for the Visually Impaired Using Mobile Edge Artificial Intelligence," Assistive Technology, vol. 34, no. 3, pp. 330–342, 2022.
[9] P. Deshpande and M. A. S. Ali, "Learning at Your Fingertips: An Innovative IoT-Based AI-Powered Braille Learning System," Applied Sciences, vol. 11, no. 3, p. 91, 2021.
[10] S. Shrestha, Y. Matsubara, and H. Nakagawa, "An Assistive Technology for Visually Impaired to Retrieve Mathematical Information," in Proc. Int. Conf. Computers Helping People with Special Needs (ICCHP), Springer, Cham, 2020, pp. 563–571.
[11] H. Yamamoto, "Braille Teaching Material and Method of Manufacturing Same," European Patent EP 3 955 232 B1, filed Mar. 28, 2018, granted Dec. 30, 2020.
[12] Y. Kang, "System and Method for Braille Assistance," U.S. Patent US 10,490,102 B2, filed Feb. 6, 2017, granted Nov. 26, 2019.
[13] N. Kurniawati, "Predicting Rectangular Patch Microstrip Antenna Dimension Using Machine Learning," Journal of Communications, vol. 16, no. 9, pp. 394–399, 2021, doi: 10.12720/jcm.16.9.394-399.
[14] B. Charbuty and A. Abdulazeez, "Classification Based on Decision Tree Algorithm for Machine Learning," JASTT, vol. 2, no. 1, pp. 20–28, Mar. 2021.
[15] L. Breiman, "Random Forests," Machine Learning, vol. 45, no. 1, pp. 5–32, 2001, doi: 10.1023/A:1010933404324.