Two-dimensional (2D) simultaneous localization and mapping (SLAM) is a crucial technology for autonomous indoor robots, which navigate and execute designated tasks using the map that SLAM generates. This paper presents an indoor SLAM methodology based on LIDAR data, tailored for dynamic environments where changes occur due to mobile objects, and capable of tracking moving obstacles with notable accuracy and reliability. Integrating LIDAR data with sophisticated algorithms facilitates high-precision localization and keeps the map current, even in intricate and confined spaces. Differentiating static from dynamic features during extraction, combined with adaptive filtering techniques, enhances localization accuracy and overall performance. Our experimental results are promising, showing consistent and safe navigation in dynamic indoor environments.
Introduction
Indoor robots play key roles in manufacturing and household tasks, relying heavily on autonomous navigation powered by Simultaneous Localization and Mapping (SLAM) technology. SLAM helps robots map unknown environments and localize themselves using sensor data, primarily from vision-based (cameras) or laser-based (Lidar) sensors. Visual SLAM systems offer rich environmental details but are sensitive to lighting and texture conditions, whereas laser-based SLAM, especially 2D Lidar SLAM, provides high accuracy and robustness indoors at a lower cost than 3D Lidar.
2D Lidar SLAM systems consist of front-end odometry (processing raw Lidar data), back-end optimization (error correction and map refinement), mapping, and loop detection to reduce cumulative errors and maintain a consistent global map. These systems commonly use probabilistic filters like Kalman or Particle filters.
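Of the components above, the mapping stage is the most self-contained to illustrate. The sketch below shows a minimal log-odds occupancy grid updated from individual Lidar beams; the grid size, resolution, and log-odds increments are illustrative assumptions, not values from the paper.

```python
import math

def bresenham(x0, y0, x1, y1):
    """Integer grid cells on the line from (x0, y0) to (x1, y1)."""
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x0 += sx
        if e2 < dx:
            err += dx
            y0 += sy
    return cells

class OccupancyGrid:
    """Log-odds occupancy grid updated from 2D Lidar range readings."""
    def __init__(self, size=100, resolution=0.05):
        self.res = resolution                 # metres per cell (assumed)
        self.logodds = [[0.0] * size for _ in range(size)]
        self.l_occ, self.l_free = 0.85, -0.4  # illustrative increments

    def to_cell(self, x, y):
        return int(round(x / self.res)), int(round(y / self.res))

    def integrate_beam(self, pose, rng, bearing):
        """Mark cells along one beam as free and its endpoint as occupied."""
        px, py, ptheta = pose
        hx = px + rng * math.cos(ptheta + bearing)
        hy = py + rng * math.sin(ptheta + bearing)
        cells = bresenham(*self.to_cell(px, py), *self.to_cell(hx, hy))
        for cx, cy in cells[:-1]:             # beam passed through: free
            self.logodds[cy][cx] += self.l_free
        ex, ey = cells[-1]                    # beam endpoint: occupied
        self.logodds[ey][ex] += self.l_occ
```

In a full system, the front end would supply the corrected pose for each scan before beams are integrated, and loop detection would trigger a re-integration after global optimization.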
Filter-based SLAM, including the Extended Kalman Filter (EKF) and the Particle Filter (FastSLAM), updates the robot's pose by combining prior state estimates with incoming sensor data. Mobile robots often use Inertial Measurement Units (IMU) alongside Lidar for pose estimation, with sensor-fusion algorithms such as the Kalman Filter improving accuracy by mitigating drift.
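The drift-mitigation idea can be reduced to a one-dimensional sketch: a Kalman filter whose prediction step consumes drifting odometry increments and whose update step pulls the estimate back toward an absolute (Lidar-style) position fix. This is a minimal illustration, not the paper's fusion pipeline; the noise values are assumed.

```python
class Kalman1D:
    """Minimal 1D Kalman filter fusing odometry/IMU-style motion
    predictions with Lidar-style absolute position measurements."""
    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.1):
        self.x, self.p = x0, p0   # state estimate and its variance
        self.q, self.r = q, r     # process and measurement noise (assumed)

    def predict(self, u):
        """Propagate the state with a motion increment u; variance grows."""
        self.x += u
        self.p += self.q

    def update(self, z):
        """Correct with a measurement z; drift is pulled back toward z."""
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
```

Feeding the filter odometry steps with a constant bias and periodic position fixes shows the corrected estimate staying far closer to the truth than dead reckoning alone.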
The study focuses on real-world robotic applications with Robot Operating System (ROS), analyzing various SLAM methods: 2D Lidar-based (Gmapping, Hector SLAM, Cartographer), monocular camera-based (LSD SLAM, ORB SLAM, DSO), and stereo camera-based systems (ZEDfu, RTAB map, ORB SLAM, S-PTAM).
Implementation details include kinematic modeling of a four-mecanum-wheeled robot, sensor calibration, and data fusion techniques to estimate robot pose. The YDLIDAR X3, a 2D Lidar sensor, is used for environment scanning. ROS integration enables effective map building, with Hector SLAM highlighted for its advantage of not requiring wheel odometry.
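The kinematic model of a four-mecanum-wheeled base can be sketched as below. Wheel order, axis signs, and roller conventions vary between platforms; this uses one common convention and is not necessarily the exact model of the paper's robot.

```python
class MecanumKinematics:
    """Inverse/forward kinematics for a four-mecanum-wheeled robot.

    Wheel order: front-left, front-right, rear-left, rear-right.
    (One common sign convention; platforms differ.)
    """
    def __init__(self, wheel_radius, lx, ly):
        self.r = wheel_radius   # wheel radius [m]
        self.l = lx + ly        # half wheelbase + half track width [m]

    def inverse(self, vx, vy, wz):
        """Body twist (vx, vy, wz) -> wheel angular velocities [rad/s]."""
        r, l = self.r, self.l
        return (
            (vx - vy - l * wz) / r,  # front-left
            (vx + vy + l * wz) / r,  # front-right
            (vx + vy - l * wz) / r,  # rear-left
            (vx - vy + l * wz) / r,  # rear-right
        )

    def forward(self, w):
        """Wheel angular velocities -> body twist (odometry input)."""
        fl, fr, rl, rr = w
        vx = self.r * (fl + fr + rl + rr) / 4.0
        vy = self.r * (-fl + fr + rl - rr) / 4.0
        wz = self.r * (-fl + fr - rl + rr) / (4.0 * self.l)
        return vx, vy, wz
```

The forward model is what a wheel-odometry source would feed into pose estimation, although Hector SLAM, as noted above, can operate without it.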
Experimental tests in a lab environment demonstrate that Hector SLAM produces accurate maps: distance measurements show a low average error (~3.25%), corroborated by performance metrics such as RMSE and MAE, indicating reliable and precise mapping suitable for indoor robotic navigation.
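For completeness, the two metrics cited are the standard definitions, computed over paired estimated and reference distances (the example values below are illustrative, not the paper's measurements):

```python
import math

def rmse(est, ref):
    """Root-mean-square error between estimates and references."""
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(est, ref)) / len(est))

def mae(est, ref):
    """Mean absolute error between estimates and references."""
    return sum(abs(e - r) for e, r in zip(est, ref)) / len(est)

# Illustrative usage with made-up measurements (metres):
measured  = [1.02, 2.05, 2.96]
reference = [1.00, 2.00, 3.00]
print(rmse(measured, reference), mae(measured, reference))
```

RMSE penalizes large outliers more heavily than MAE, so reporting both gives a fuller picture of mapping error.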
Conclusion
In conclusion, combining the YDLIDAR X3 with ROS-based mapping provides a strong foundation for low-cost, accurate, and efficient robotic mapping and localization, and a promising basis for future indoor navigation systems.