Abstract
Indoor navigation remains a significant challenge for visually impaired individuals due to the absence of reliable positioning systems in enclosed environments. Conventional assistive tools provide limited obstacle detection but lack intelligent path planning and destination-based guidance. This paper presents a smart indoor assistive navigation robot that integrates LiDAR-based Simultaneous Localization and Mapping (SLAM), autonomous navigation, and offline voice interaction within a ROS 2 framework. A distributed processing architecture is implemented in which a Raspberry Pi 4 manages hardware interfacing and motor control, while mapping and path planning are executed on an external laptop to maintain computational efficiency at low cost. The robot employs differential drive locomotion with real-time obstacle avoidance and supports destination-based voice commands without internet dependency. Experimental evaluation in indoor environments demonstrates reliable mapping, stable localization, and consistent autonomous navigation. The proposed system offers a cost-effective and infrastructure-independent assistive mobility solution to enhance safety and independence for visually impaired individuals.
Introduction
This paper presents the development of a smart indoor assistive navigation robot designed to help visually impaired individuals move safely and independently in indoor environments where GPS signals are unavailable. Traditional aids such as white canes and guide dogs help detect obstacles but do not support intelligent path planning or destination-based navigation. To address this limitation, the proposed system uses LiDAR-based Simultaneous Localization and Mapping (SLAM) and autonomous navigation to guide users within indoor spaces.
The robot integrates perception, navigation, voice interaction, and distributed processing using the ROS 2 framework. A LiDAR sensor captures environmental data to create a 2D occupancy grid map, which enables real-time localization and obstacle detection. The navigation module calculates optimal paths to predefined destinations while avoiding obstacles using global and local planning algorithms. Users interact with the system through an offline voice interface, which allows them to specify destinations without requiring internet connectivity. A physical handle and audio feedback provide tactile and auditory guidance to enhance user confidence and safety.
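To make the occupancy-grid idea concrete, the sketch below marks the grid cell struck by each LiDAR beam given the robot's pose. This is a simplified, dependency-free illustration, not the system's actual SLAM pipeline: the function name and parameters are our own, and free-space ray tracing (clearing cells along each beam) is omitted for brevity.

```python
import math

def update_occupancy_grid(grid, pose, ranges, angle_min, angle_step, resolution):
    """Mark cells hit by LiDAR beams as occupied (1) in a 2D grid.

    grid       -- list of lists (rows x cols) of 0/1 cells
    pose       -- (x, y, theta) of the sensor in metres/radians
    ranges     -- beam range readings in metres
    resolution -- metres per grid cell
    Free-space ray tracing along each beam is omitted for brevity.
    """
    x, y, theta = pose
    rows, cols = len(grid), len(grid[0])
    for i, r in enumerate(ranges):
        if not math.isfinite(r):
            continue  # no return for this beam (max range / dropout)
        beam = theta + angle_min + i * angle_step
        hit_x = x + r * math.cos(beam)   # beam endpoint in world frame
        hit_y = y + r * math.sin(beam)
        col = int(hit_x / resolution)    # world -> grid indices
        row = int(hit_y / resolution)
        if 0 <= row < rows and 0 <= col < cols:
            grid[row][col] = 1           # mark endpoint cell occupied
    return grid
```

A full SLAM front end would additionally clear the cells each beam passes through and fuse repeated observations probabilistically, which is what yields the 2D map used for localization.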
The hardware system includes a Raspberry Pi 4 controller, LiDAR sensor, differential drive motors, motor driver, rechargeable battery, and tactile handle. To manage computational demands, SLAM and high-level navigation tasks are executed on an external laptop, while the Raspberry Pi handles sensor data acquisition and motor control through a distributed ROS 2 network.
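The Raspberry Pi's motor-control task reduces to standard differential drive kinematics: a commanded body twist (linear velocity v, angular velocity omega) maps to left and right wheel speeds. A minimal sketch follows; the function name and the wheel geometry parameters are illustrative, since the paper does not specify the robot's dimensions.

```python
def diff_drive_wheel_speeds(v, omega, wheel_base, wheel_radius):
    """Convert a body twist into wheel angular velocities.

    v            -- forward velocity (m/s)
    omega        -- angular velocity (rad/s), positive = counter-clockwise
    wheel_base   -- distance between the two wheels (m)
    wheel_radius -- wheel radius (m)
    Returns (left, right) wheel angular velocities in rad/s.
    """
    v_left = v - omega * wheel_base / 2.0   # inner wheel slows during a turn
    v_right = v + omega * wheel_base / 2.0  # outer wheel speeds up
    return v_left / wheel_radius, v_right / wheel_radius
```

In a ROS 2 setup these values would typically be computed from `cmd_vel` messages on the Raspberry Pi and forwarded to the motor driver.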
The robot operates in two phases: a mapping phase, where the environment is explored and mapped using SLAM, and an assistive navigation phase, where voice commands trigger path planning and autonomous movement toward the selected destination. Continuous LiDAR and odometry feedback ensure accurate localization and safe obstacle avoidance.
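In the assistive navigation phase, the global planner searches the occupancy map for a collision-free route to the requested destination. The sketch below uses breadth-first search as a simple stand-in for the global planner described (production stacks typically use A* or Dijkstra with cost inflation); all names are illustrative.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free, 1 = occupied).

    start, goal -- (row, col) tuples. Returns a list of cells from start
    to goal inclusive, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}           # also serves as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:             # reconstruct path by walking parents back
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (r + dr, c + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in parent):
                parent[nxt] = cell
                queue.append(nxt)
    return None                      # no free path exists
```

The local planner then tracks this path while reacting to obstacles that appear in the live LiDAR scan, which is the continuous feedback loop described above.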
Experimental testing in indoor environments demonstrated reliable mapping, stable localization, and safe navigation around static and dynamic obstacles. The system produced accurate occupancy maps within 8–12 minutes for medium-sized areas, maintained stable position tracking, and successfully executed voice-based navigation commands. The distributed processing framework improved system efficiency while keeping hardware costs low.
Despite its effectiveness, the system currently requires pre-mapped environments, and performance may be affected by high noise levels or feature-sparse environments. Future improvements include real-time adaptive mapping, stronger embedded processors, multi-sensor fusion, improved voice recognition, and enhanced battery and mobility design.
Conclusion
This paper presented the design and implementation of a smart indoor assistive navigation robot based on LiDAR-driven SLAM, autonomous navigation, and offline voice interaction. The system was developed to support visually impaired individuals in navigating indoor environments safely and independently. By integrating mapping, localization, path planning, obstacle avoidance, and multimodal user interaction within a distributed processing architecture, the proposed solution demonstrates the feasibility of a low-cost yet reliable assistive robotic platform. The implementation using ROS 2 and Ubuntu provided a modular and scalable framework, while the combination of a Raspberry Pi and external computing resources enabled efficient execution of computationally intensive tasks without requiring expensive embedded hardware. The differential drive mechanism ensured smooth mobility, and the physical handle interface offered intuitive tactile guidance for users. Experimental evaluations confirmed accurate map generation, stable localization, reliable obstacle avoidance, and effective voice-based destination control. Although certain limitations remain, including dependence on a pre-mapped environment and reliance on distributed processing, the developed prototype establishes a strong foundation for future enhancements in assistive robotics. The proposed system demonstrates that affordable, infrastructure-independent navigation robots can significantly improve indoor mobility support for visually impaired individuals. Overall, this work contributes to the advancement of human-centered assistive robotics by combining autonomy, accessibility, and cost-effectiveness.