Multi-sensor Fusion for Autonomous Positioning of Indoor Robots

Zipei Shuai and Hongyang Yu

Abstract: Accurate autonomous positioning of mobile robots in indoor environments is the foundation of indoor robot navigation. Achieving it requires selecting localization algorithms suited to indoor conditions from among those already proposed. Existing algorithms, however, often suffer from high computational complexity, or from accuracy and stability degraded by erroneous depth matching; some technologies cannot be used indoors at all, such as satellite positioning, while others impose strict requirements on the scene, such as geomagnetic positioning, which fails in indoor environments with strong magnetic disturbances and cannot provide accurate positions there. To obtain a more robust and efficient autonomous localization algorithm for indoor mobile robots, this paper proposes an algorithm that fuses a monocular camera with LiDAR (Light Detection and Ranging) to localize an indoor mobile robot in a known scene. The algorithm makes full use of the information provided by a deep learning model and the laser point cloud. First, the monocular camera is used to build the training data set: the 3D point cloud map of the known scene is projected into a grid map, suitable coordinate axes are defined for the grid map, and the precise coordinates of each grid cell are determined. A number of pictures of the scene are then taken with the monocular camera, and each picture is assigned to a grid cell, so that every picture has specific coordinates associated with it.
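The abstract does not give implementation details for the grid-labeling step; a minimal sketch under the stated idea (project the map onto the floor plane, quantize into cells, label each training image with the cell of the pose it was captured from), with all function names, the cell size, and the placeholder data being hypothetical, might look like:

```python
import numpy as np

def build_grid(points_xyz, cell_size=0.5):
    """Project a 3D point cloud onto the floor plane and define a grid.

    points_xyz: (N, 3) array of map points.
    cell_size:  grid resolution in metres (hypothetical value).
    Returns the grid origin (min x, min y) and the grid shape in cells.
    """
    xy = points_xyz[:, :2]                      # drop z: floor-plane projection
    origin = xy.min(axis=0)
    extent = xy.max(axis=0) - origin
    shape = np.ceil(extent / cell_size).astype(int)
    return origin, shape

def cell_of(position_xy, origin, cell_size=0.5):
    """Map a camera position (x, y) to its integer grid-cell index."""
    return tuple(((np.asarray(position_xy) - origin) // cell_size).astype(int))

# Label a training image by the grid cell of the pose it was taken from.
points = np.random.rand(1000, 3) * 10.0         # placeholder point cloud
origin, shape = build_grid(points)
label = cell_of([3.2, 4.7], origin)             # (ix, iy) cell index for this image
```

Each image's label would then serve as the classification target for the deep learning model described below.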
After the training set is established, a deep learning algorithm is used to train a model that determines, from a two-dimensional RGB image, the coordinates of the camera's location, which is where the mobile robot is. Finally, the estimated coordinates are refined using the indoor mobile robot's LiDAR information to obtain a more accurate position. The proposed indoor mobile robot localization algorithm, based on the fusion of a vision sensor and LiDAR, improves on existing single-sensor localization algorithms by combining an existing deep learning model with the information provided by LiDAR. It effectively improves the positioning accuracy and working efficiency of indoor mobile robots, provides a stronger guarantee for navigation and other operations in indoor scenes, and promotes the development of the robot industry.
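The LiDAR refinement step is likewise not specified in the abstract. Assuming the scan is matched against the known 2D map in a small neighbourhood of the coarse, cell-level estimate, a hypothetical brute-force sketch (a stand-in for a proper scan-matching method such as ICP; all names and parameters are assumptions) could be:

```python
import numpy as np

def refine_pose(scan_xy, map_xy, coarse_xy, search=0.5, step=0.1):
    """Refine a coarse (cell-level) position estimate with a LiDAR scan.

    scan_xy:   (M, 2) scan points in the robot frame.
    map_xy:    (K, 2) points of the known 2D map.
    coarse_xy: coarse (x, y) from the image-based model.
    Searches a grid of offsets around the coarse estimate and keeps the
    one whose translated scan lies closest to the map.
    """
    offsets = np.arange(-search, search + 1e-9, step)
    best_xy, best_score = np.asarray(coarse_xy, dtype=float), np.inf
    for dx in offsets:
        for dy in offsets:
            candidate = np.asarray(coarse_xy, dtype=float) + (dx, dy)
            pts = scan_xy + candidate           # translate scan into map frame
            # mean distance from each scan point to its nearest map point
            d = np.linalg.norm(pts[:, None, :] - map_xy[None, :, :], axis=2)
            score = d.min(axis=1).mean()
            if score < best_score:
                best_score, best_xy = score, candidate
    return best_xy
```

In practice the exhaustive offset search would be replaced by an iterative scan matcher, but the scoring idea — prefer the pose under which the LiDAR scan best agrees with the known map — is the same.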
Published in: Proceedings of the 34th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2021)
September 20 - 24, 2021
Union Station Hotel
St. Louis, Missouri
Pages: 105 - 112
Cite this article: Shuai, Zipei, Yu, Hongyang, "Multi-sensor Fusion for Autonomous Positioning of Indoor Robots," Proceedings of the 34th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2021), St. Louis, Missouri, September 2021, pp. 105-112.