Abstract: | Scale uncertainty is a well-known challenge in visual SLAM (Simultaneous Localization and Mapping). Because an RGB-D sensor directly provides scale information, RGB-D SLAM mitigates this problem; it also greatly reduces the computational cost of depth estimation, improving the real-time performance of SLAM. However, the physical limitations of RGB-D sensor hardware make it difficult to measure depth in reflective areas, dark areas, and out-of-range areas of real environments. Depth measurement is reliable only within a limited range, and measurement error typically grows with distance. Consequently, in extreme and complex environments the depth maps produced by RGB-D sensors contain many holes that lack depth values. These holes create blind areas in the depth image and seriously degrade depth map quality, which in turn challenges the accuracy and stability of RGB-D SLAM and largely limits its application in real complex environments. To reduce the impact of low-quality depth maps and further improve the stability and robustness of RGB-D SLAM in complex environments, this paper proposes a robust RGB-D SLAM based on depth map improvement. First, a deep neural network is trained on a public dataset of complex environments to complete depth maps containing hole regions: given the input RGB image and the original depth map, it produces a completed, pixel-level dense depth map. Next, we build on ORB-SLAM2 and combine it with this depth completion network to obtain the basic prototype of our SLAM system; the completed depth maps improve its performance in complex environments. After estimating more reliable poses for each keyframe, we add a map-building stage in which the keyframe point clouds are aligned and fused to produce a visualized 3D map. Through this complementary combination of traditional SLAM and a deep neural network, we obtain a better-performing and more stable RGB-D SLAM. To evaluate the proposed system, we first run experiments on existing public RGB-D benchmark datasets; the results show good accuracy and reliability. We also test the system in real-world environments with the Intel consumer-grade RGB-D camera RealSense D435. The results show that the proposed RGB-D SLAM significantly improves stability and robustness, and in challenging environments it can effectively improve the safety, reliability, and efficiency of autonomously operating equipment. |
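To make the pipeline the abstract describes concrete, the following Python sketch shows its two geometric stages: completing a depth map with holes, then back-projecting and fusing keyframe point clouds into one map. This is a minimal illustration only; the paper's trained network is not available here, so complete_depth() is a hypothetical stand-in (a median fill keeps the sketch runnable), and the intrinsics, poses, and frame data are placeholder assumptions, not values from the paper.

```python
# Minimal sketch of the depth-completion + keyframe-fusion pipeline from the
# abstract. complete_depth() is a placeholder for the paper's trained model;
# intrinsics and poses below are illustrative (TUM-style defaults), not the
# authors' values.
import numpy as np

def complete_depth(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the trained completion network: fill hole
    pixels (depth <= 0) with the median of the valid depth values."""
    completed = depth.copy()
    holes = depth <= 0.0
    if holes.any() and (~holes).any():
        completed[holes] = np.median(depth[~holes])
    return completed

def backproject(depth: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Back-project a dense depth map into camera-frame 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth.ravel()
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=1)  # shape (h*w, 3)

def fuse_keyframes(frames, K):
    """Transform each keyframe's point cloud by its estimated pose and
    concatenate into one global map (the alignment/fusion step of the
    abstract, reduced to its geometric core)."""
    clouds = []
    for rgb, raw_depth, T_wc in frames:  # T_wc: 4x4 camera-to-world pose
        dense = complete_depth(rgb, raw_depth)
        pts_c = backproject(dense, K)
        pts_w = (T_wc[:3, :3] @ pts_c.T).T + T_wc[:3, 3]
        clouds.append(pts_w)
    return np.concatenate(clouds, axis=0)

if __name__ == "__main__":
    K = np.array([[525.0, 0.0, 319.5],  # assumed pinhole intrinsics
                  [0.0, 525.0, 239.5],
                  [0.0, 0.0, 1.0]])
    rgb = np.zeros((480, 640, 3), dtype=np.uint8)            # dummy frame
    depth = np.full((480, 640), 2.0)
    depth[100:200, :] = 0.0                                   # hole region
    world = fuse_keyframes([(rgb, depth, np.eye(4))], K)
    print(world.shape)  # (307200, 3)
```

In the actual system, the median fill would be replaced by the paper's trained completion network, the poses would come from the ORB-SLAM2 front end, and the plain concatenation would be replaced by the paper's alignment and fusion adjustment of the keyframe point clouds.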
Published in: |
Proceedings of the 35th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2022), September 19-23, 2022, Hyatt Regency Denver, Denver, Colorado |
Pages: | 1220 - 1225 |
Cite this article: | Zhang, Hao, Yu, Hongyang, "A Robust RGB-D SLAM using Deep Learning for Depth Map Improvement," Proceedings of the 35th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2022), Denver, Colorado, September 2022, pp. 1220-1225. https://doi.org/10.33012/2022.18495 |