Recent advances in artificial intelligence and low-cost embedded computing have spurred the development of transformative technologies for the visually impaired. This paper provides a comprehensive survey of modern, vision-based wearable systems designed to enhance navigation and environmental perception. We systematically review and analyze key research from 2015 to the present, categorizing systems based on their computational architecture (standalone vs. cloud-connected), form factor, and primary function. Our analysis covers the core technological pillars, including real-time navigation using GPS and sensor fusion, deep learning-based obstacle detection, and object recognition with Convolutional Neural Networks (CNNs). We specifically highlight the field’s definitive shift from traditional computer vision algorithms to these more powerful deep learning models. By comparing the performance, cost, and usability of various approaches, we identify persistent challenges that hinder widespread adoption, such as limited battery life, performance in varied lighting conditions, and the high cost of commercial devices. This survey concludes by outlining the most significant research gaps and promising future directions, including the need for more efficient AI architectures and robust multi-modal sensor integration for creating the next generation of effective and accessible assistive devices.
Introduction
Assistive technologies aim to enhance mobility, independence, and safety for the world’s 2.2 billion people living with vision impairment. Modern solutions leverage deep learning, wearable devices, and AI, moving beyond traditional sensors and aids to provide real-time object detection, pathfinding, hazard identification, and rich contextual awareness. Lightweight models such as YOLOv8 and MobileNet enable fast, accurate detection on embedded devices, while semantic segmentation and Large Vision-Language Models (LVLMs) provide detailed environmental understanding and spoken feedback.
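For illustration, a minimal sketch of such an embedded detection-and-feedback loop in Python is shown below. It assumes the ultralytics, opencv-python, and pyttsx3 packages with a camera at index 0; the model file and confidence threshold are illustrative, not taken from any specific surveyed system.

# Minimal sketch: real-time object detection with spoken feedback.
# Assumes the ultralytics, opencv-python, and pyttsx3 packages and a
# USB camera at index 0; the model file and threshold are illustrative.
import cv2
import pyttsx3
from ultralytics import YOLO

model = YOLO("yolov8n.pt")   # nano variant suits embedded boards
tts = pyttsx3.init()         # offline text-to-speech engine
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]
    # Collect class names of sufficiently confident detections.
    labels = {result.names[int(b.cls)] for b in result.boxes
              if float(b.conf) > 0.5}
    if labels:
        tts.say("Ahead: " + ", ".join(sorted(labels)))
        tts.runAndWait()

cap.release()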
Despite advancements, challenges remain in real-time performance, device affordability, usability, and environmental variability (e.g., low-light or cluttered urban areas). Future directions include multilingual, culturally adaptive systems and efficient, scalable architectures that balance computational requirements with wearable device constraints, ensuring accessibility and effectiveness across diverse global settings.
Conclusion
The development of affordable and effective assistive technologies for visually impaired people continues to be an important research area with great social value. This survey reviewed a range of modern vision-based wearable systems, from early computer vision methods to the latest deep learning approaches. Traditional sensor-based systems and classical algorithms built a strong foundation, but they often fail to handle complex real-world situations effectively. Deep learning techniques, especially Convolutional Neural Networks (CNNs) deployed on small embedded devices such as the Raspberry Pi, have shown much better accuracy. However, they still face challenges in balancing high performance, fast real-time response, and low power usage.
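A common way to examine this accuracy-latency-power trade-off is to benchmark a quantized CNN directly on the target board. The following minimal sketch assumes the tflite_runtime and numpy packages on a Raspberry Pi; the model filename is a placeholder for any quantized classifier such as MobileNet.

# Minimal latency benchmark for a quantized CNN on an embedded board.
# Assumes the tflite_runtime and numpy packages; the model path is a
# placeholder and must point at a real quantized .tflite file.
import time
import numpy as np
from tflite_runtime.interpreter import Interpreter

interp = Interpreter(model_path="mobilenet_v2_quant.tflite")
interp.allocate_tensors()
inp = interp.get_input_details()[0]

# Random uint8 input matching the model's expected shape
# (e.g. 1x224x224x3); use float32 for a non-quantized model.
dummy = np.random.randint(0, 256, size=inp["shape"], dtype=np.uint8)

times = []
for _ in range(50):
    interp.set_tensor(inp["index"], dummy)
    start = time.perf_counter()
    interp.invoke()
    times.append(time.perf_counter() - start)

print(f"mean latency: {1000 * np.mean(times):.1f} ms "
      f"({1.0 / np.mean(times):.1f} FPS)")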
From the studies reviewed, one major gap identified is that most existing systems focus on only one task, such as navigation or object detection, instead of providing a complete, all-in-one solution. Many models that work well in controlled environments do not perform the same in outdoor or real-life conditions due to factors like poor lighting, glare, and unexpected obstacles.
Key issues such as real-time processing, battery life, affordability, and user comfort still need improvement. Future work should focus on creating lightweight AI models and using sensor fusion methods that combine cameras with technologies like LiDAR or ultrasonic sensors for better accuracy and reliability. It is also important to design easy-to-use interfaces and test these systems in real-life situations with visually impaired users to ensure they are practical and socially accepted. Progress in this field will depend not only on technology but also on teamwork between engineers, designers, and the visually impaired community to truly improve independence and quality of life.
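As a concrete illustration of the camera-plus-ultrasonic fusion direction, the sketch below gates spoken obstacle warnings on a range reading from an HC-SR04 sensor. It assumes the gpiozero package on a Raspberry Pi; the wiring pins, distance threshold, and the placeholder detect_objects() function are all hypothetical.

# Minimal sketch of camera/ultrasonic fusion: a visual detection is
# announced as urgent only when the range sensor confirms it is close.
# Assumed wiring: HC-SR04 trigger on GPIO23, echo on GPIO24.
from gpiozero import DistanceSensor

ultrasonic = DistanceSensor(echo=24, trigger=23, max_distance=4)
NEAR_METERS = 1.0  # illustrative proximity threshold

def detect_objects(frame):
    """Hypothetical placeholder for a CNN detector returning labels."""
    return ["person"]

def fused_warning(frame):
    """Combine a visual detection with an ultrasonic range reading."""
    labels = detect_objects(frame)
    distance = ultrasonic.distance  # metres, from time-of-flight echo
    if labels and distance < NEAR_METERS:
        return f"Warning: {labels[0]} about {distance:.1f} metres ahead"
    if labels:
        return f"{labels[0]} detected further ahead"
    return None

# Example use inside a capture loop: speak the string returned by
# fused_warning(frame) whenever it is not None.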