Abstract
The development of intelligent assistive devices is crucial for enhancing the mobility and independence of visually impaired individuals. This review synthesizes recent advancements in electronic travel aids, particularly smart walking sticks and wearable systems that leverage sensor fusion, microcontrollers, and artificial intelligence. Early systems primarily utilized ultrasonic and moisture sensors with Arduino microcontrollers to provide basic obstacle and water detection through audio and vibrational feedback. Recent research has evolved significantly to incorporate more sophisticated technologies such as the Raspberry Pi, computer vision, and deep learning. The integration of object detection algorithms like YOLO (You Only Look Once) and Convolutional Neural Networks (CNNs) enables real-time identification and classification of a wide range of obstacles, including people and vehicles. Furthermore, studies explore the use of IoT for emergency communication via GSM/GPS modules, deep reinforcement learning for autonomous navigation, and 3D point cloud modeling for dynamic path planning in indoor environments. The consistent findings across these studies highlight a paradigm shift from simple hazard detection to descriptive environmental awareness. Key challenges remain in optimizing the trade-off between detection accuracy and real-time performance on embedded devices. The future trajectory of this field points towards increased use of model compression techniques, multi-sensor data fusion, and machine learning to create more affordable, robust, and intuitive navigation systems that significantly improve the safety and autonomy of visually impaired users.
Introduction
Advances in embedded systems, artificial intelligence, and the Internet of Things (IoT) have significantly transformed assistive technologies for visually impaired individuals. While traditional aids like the white cane provide basic mobility support, modern Electronic Travel Aids (ETAs) enhance safety and independence through intelligent sensing, object recognition, and real-time feedback. Early systems paired ultrasonic sensors with Arduino microcontrollers for basic obstacle detection, later expanding to hazard detection for water, fire, and stairs. The introduction of more powerful platforms such as the Raspberry Pi and ESP32, combined with AI techniques, marked a major shift toward smart navigation aids.
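To make this baseline concrete, the sketch below shows the threshold-and-alert loop that these early ultrasonic aids implement, written here in Python for a Raspberry Pi with the gpiozero library; the GPIO pin numbers and the 50 cm warning threshold are illustrative assumptions, not values from any cited system.

```python
# Minimal obstacle-warning loop in the spirit of the early ultrasonic aids
# surveyed above. Assumes a Raspberry Pi with an HC-SR04 ultrasonic sensor
# (trigger on GPIO23, echo on GPIO24) and a buzzer or vibration motor driver
# on GPIO17; all pin choices are illustrative.
from time import sleep
from gpiozero import DistanceSensor, Buzzer

sensor = DistanceSensor(trigger=23, echo=24, max_distance=2.0)  # range in metres
alert = Buzzer(17)             # stands in for the audio/vibration feedback channel
THRESHOLD_M = 0.5              # warn when an obstacle is closer than 50 cm

while True:
    if sensor.distance < THRESHOLD_M:
        alert.on()             # continuous alert while the obstacle is close
    else:
        alert.off()
    sleep(0.1)                 # ~10 Hz polling keeps feedback responsive
```

The same pattern translates directly to Arduino C++ by timing the echo pulse with pulseIn() and driving the buzzer pin accordingly.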
Recent research increasingly integrates computer vision and deep learning, particularly CNNs and YOLO-based object detection, enabling real-time identification and classification of environmental objects with audio and vibration feedback. IoT technologies further enhance safety through GPS-based tracking, GSM emergency communication, mobile applications, and cloud connectivity. Multi-sensor fusion approaches combine ultrasonic, water, smoke, light, temperature, and motion sensors to provide comprehensive environmental awareness.
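The detection-to-speech pipeline described here can be sketched in a few lines. The example below assumes the ultralytics YOLO package, a USB camera, and the pyttsx3 offline text-to-speech engine; the nano model choice and the announce-once policy are illustrative simplifications, not designs taken from the cited papers.

```python
# Hedged sketch of the YOLO-plus-speech pattern described above.
import cv2
import pyttsx3
from ultralytics import YOLO

model = YOLO("yolov8n.pt")         # small model suited to embedded boards
tts = pyttsx3.init()
camera = cv2.VideoCapture(0)
announced = set()                  # avoid repeating the same warning every frame

while camera.isOpened():
    ok, frame = camera.read()
    if not ok:
        break
    result = model(frame, verbose=False)[0]
    labels = {model.names[int(box.cls)] for box in result.boxes}
    for label in labels - announced:
        tts.say(f"{label} ahead")  # audio feedback channel
        tts.runAndWait()
    announced = labels             # re-announce only when the scene changes
```

On a Raspberry Pi-class CPU even the nano model typically manages only a few frames per second, which is precisely the accuracy-versus-latency trade-off the reviewed studies repeatedly confront.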
The literature demonstrates steady progress from low-cost sensor-based sticks to AI-powered, multi-modal assistive systems capable of indoor and outdoor navigation, hazard detection, and even social interaction via facial recognition. Advanced models employ lightweight and optimized deep learning architectures suitable for embedded deployment, achieving high accuracy and real-time performance. Emerging approaches also explore reinforcement learning, SLAM, wearable devices such as smart shoes, and smartphone integration.
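A common route to such embedded-friendly models is post-training quantization. The sketch below applies PyTorch dynamic quantization to a MobileNetV3 backbone; both the framework and the backbone are illustrative assumptions rather than choices taken from the studies reviewed.

```python
# Illustrative post-training quantization in PyTorch, as one concrete form of
# the model compression discussed above; the MobileNetV3 backbone is an
# assumption, not the architecture used in any cited system.
import torch
import torchvision

model = torchvision.models.mobilenet_v3_small(weights="DEFAULT").eval()

# Dynamic quantization stores Linear-layer weights as int8 and dequantizes at
# inference time, shrinking those layers and speeding up CPU execution.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

dummy = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    print(quantized(dummy).shape)   # sanity check: (1, 1000) class logits
```

Dynamic quantization typically cuts the affected layers to roughly a quarter of their float32 size; static quantization and pruning push further but require calibration data or retraining.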
Conclusion
The development of smart assistive devices for the visually impaired has progressed from basic sensor-based systems to sophisticated AI-enhanced platforms. Foundational work demonstrated the effective use of ultrasonic sensors and Arduino microcontrollers for reliable obstacle and hazard detection [1], [8]. The field has since evolved to incorporate multi-sensor data fusion [2], [16], [19], IoT connectivity for safety and tracking [4], [13], [16], and powerful deep learning models for real-time object recognition and classification [3], [4], [15]. These models, including YOLO and CNNs, have transformed assistive devices from simple warning systems into descriptive navigation aids. However, key challenges persist, particularly in achieving real-time performance with complex AI models on resource-constrained hardware [15], [17] and in building a more comprehensive environmental understanding that covers dynamic obstacles and complex terrains [9], [18]. Addressing these limitations requires a focused effort on model optimization through pruning and quantization [15], the development of robust benchmarking frameworks [18], and the exploration of advanced navigation techniques such as deep reinforcement learning and visual SLAM [17]. Future research should also prioritize cross-modal applications, such as facial recognition for social interaction [13] and 3D point cloud modeling for indoor path planning [9]. With continued innovation in these areas, fully autonomous, intuitive, and universally accessible navigation systems that grant visually impaired users far greater independence are steadily becoming an achievable goal.
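The GSM emergency channel recurring in these systems [4], [13], [16] typically reduces to a short AT-command exchange with a cellular modem. The sketch below assumes a SIM800L-class modem on a Raspberry Pi serial port driven through pyserial; the device path, baud rate, and recipient number are placeholders for illustration only.

```python
# Hedged sketch of the GSM emergency-SMS pattern referenced above, using the
# standard GSM AT commands AT+CMGF (text mode) and AT+CMGS (send message).
import time
import serial

def send_emergency_sms(port="/dev/serial0", number="+10000000000",
                       text="Emergency: user needs assistance"):
    modem = serial.Serial(port, baudrate=9600, timeout=2)

    def at(cmd, wait=1.0):
        modem.write((cmd + "\r\n").encode())        # issue one AT command
        time.sleep(wait)
        return modem.read(modem.in_waiting or 1).decode(errors="ignore")

    at("AT")                       # handshake with the modem
    at("AT+CMGF=1")                # switch to SMS text mode
    at(f'AT+CMGS="{number}"')      # open a message to the caregiver number
    modem.write(text.encode() + b"\x1a")   # Ctrl-Z terminates the SMS body
    time.sleep(3)                  # give the network time to accept it
    modem.close()
```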
References
[1] Dada Emmanuel Gbenga, Arhyel Ibrahim Shani, and Adebimpe Lateef Adekunle, “Smart Walking Stick for Visually Impaired People Using Ultrasonic Sensors and Arduino,” International Journal of Engineering and Technology (IJET), vol. 9, no. 5, pp. 3435–3447, Oct–Nov 2017.
[2] Shahzor Memon, Mirza Muhammad Aamir, Sadiq Ur Rehman, Halar Mustafa, and Muhammad Shakir Sheikh, “Enhanced Mobility Aid for the Visually Impaired: An Ultrasonic Sensor and Arduino-Based Smart Walking Stick,” Memoria Investigaciones en Ingeniería, no. 28, pp. 20–31, 2025.
[3] Vidya M Shinde, Shivanjali Ninad Pawar, Nandinee Anil Wangane, Sampoorna Sampat Salve, and Swapnali Rajendra Wandhekar, “Smart Assistive Stick for Visually Impaired People,” World Journal of Advanced Engineering Technology and Sciences, vol. 14, no. 2, pp. 110–116, 2025.
[4] Dipta Paul, S M Aliuzzaman, MD. Meraj Ali, Abatesham Rabbi, Md Fahim Khan, Niamul Alam, MD Tanvir Shakil, Dewan Saiful Islam, and Md Ariful Azad, “AI-Enhanced Multifunctional Smart Assistive Stick for Enhanced Mobility and Safety of the Visually Impaired,” Journal of Computer Science and Technology Studies, vol. 7, no. 1, pp. 283–301, 2025.
[5] Ahmet Karagoz and Gokhan Dindis, “Object Recognition and Positioning with Neural Networks: Single Ultrasonic Sensor Scanning Approach,” Sensors, vol. 25, Art. no. 1086, 2025.
[6] K. Ramarethinam, K. Thenkumari, and P. Kalaiselvan, “Navigation System for Blind People Using GPS & GSM Techniques,” International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering, vol. 3, special issue 2, pp. 399–405, Apr. 2014.
[7] S. Singh and B. Singh, “Intelligent Walking Stick for Elderly and Blind People,” International Journal of Engineering Research & Technology (IJERT), vol. 9, no. 3, pp. 19–22, Mar. 2020.
[8] T. Tirupal, B. Venkata Murali, M. Sandeep, K. Sunil Kumar, and C. Uday Kumar, “Smart Blind Stick Using Ultrasonic Sensor,” Journal of Remote Sensing GIS & Technology, vol. 7, no. 2, pp. 34–42, May–Aug. 2021.
[9] L. Díaz-Vilariño, P. Boguslawski, K. Khoshelham, H. Lorenzo, and L. Mahdjoubi, “Indoor Navigation from Point Clouds: 3D Modelling and Obstacle Detection,” The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, vol. XLI-B4, pp. 275–281, 2016.
[10] P. S, P. S. Nihal, R. R. Menon, S. S. Kumar, and S. Tiwari, “Smart Blind Stick using Artificial Intelligence,” International Journal of Engineering and Advanced Technology (IJEAT), vol. 8, no. SS, pp. 19–22, May 2019.
[11] Laurence Kenneth A. Balomaga, Dan Michael A. Cortez, Charmaine Allyson P. Reyes, Criselle J. Centeno, Raymund M. Dioses, and Leisyl M. Mahusay, “ProxiSense: IoT Smart Blind Stick with Voice Alerts for Obstacle and Water Hazard Detection,” 2024 IEEE Conference on Innovative Data Communication Systems and Engineering (ICIDCSE), pp. 1–7, 2024. DOI: 10.1109/CISES63760.2024.1091035.
[12] N. Loganathan, K. Lakshmi, N. Chandrasekaran, S. R. Cibisakaravarthi, R. Hari Priyanga, and K. HarshaVarthini, “Smart Stick for Blind People,” 2020 6th International Conference on Advanced Computing & Communication Systems (ICACCS), pp. 65–67, 2020. DOI: 10.1109/ICACCS48705.2020.9074374.
[13] K. P. Kamble, Shambhavi Shende, Ved Sahu, Karan Bariya, and Gargi Deosthali, “Voice Assisted Smart Blind Stick,” 2024 5th International Conference on Electronics and Sustainable Communication Systems (ICESC), pp. 1565–1569, 2024.
[14] Romteera Khlaikhayai, Chavana Pavaganun, Benja Mangalabruks, and Preecha Yupapin, “An Intelligent Walking Stick for Elderly and Blind Safety Protection,” Procedia Engineering, vol. 8, pp. 313–316, 2011. DOI: 10.1016/j.proeng.2011.03.058.
[15] Ahmed Ben Atitallah, Yahia Said, Mohamed Amin Ben Atitallah, Mohammed Albekairi, Khaled Kaaniche, and Sahbi Boubaker, “An effective obstacle detection system using deep learning advantages to aid blind and visually impaired navigation,” Ain Shams Engineering Journal, vol. 15, Art. no. 102387, 2024. DOI: 10.1016/j.asej.2023.102387.
[16] Ammar Almomani, Mohammad Alauthman, Amal Malkawi, Hadeel Shwaihet, Batool Aldigide, Donia Aldabeck, and Karmen Abu Hamoodeh, “Smart Shoes Safety System for the Blind People Based on (IoT) Technology,” Computers, Materials & Continua (CMC), vol. 76, no. 1, pp. 415–434, 2023. DOI: 10.32604/cmc.2023.036266.
[17] Thayer Corbin, “Vision-Based Autonomous Navigation and Obstacle Avoidance in Mobile Robots Using Deep Reinforcement Learning,” 2023.
[18] Mustufa Haider Abidi, Arshad Noor Siddiquee, Hisham Alkhalefah, and Vishwaraj Srivastava, “A comprehensive review of navigation systems for visually impaired individuals,” Heliyon, vol. 10, no. 10, Art. no. e31825, 2024. DOI: 10.1016/j.heliyon.2024.e31825.
[19] Sangam Malla, Prabhat Kumar Sahu, Srikanta Patnaik, and Anil Kumar Biswal, “Obstacle Detection and Assistance for Visually Impaired Individuals Using an IoT-Enabled Smart Blind Stick,” Revue d’Intelligence Artificielle, vol. 37, no. 3, pp. 783–794, 2023. DOI: 10.18280/ria.370327.
[20] P. Vennila, V. Alamelu Mangayarkarasi, K. Vinayakan, and G. R. Gnana Raja, “Smart IoT Navigation System for Visually Impaired Individuals: Improving Safety and Independence with Advanced Obstacle Detection,” International Journal of Computational Research and Development, vol. 10, no. 1, pp. 17–23, 2025.