Abstract: | In this paper, we present a simultaneous localization and mapping (SLAM) framework that combines a modular visual-inertial odometry (VIO) estimator with an object SLAM estimator. Semantic objects are known to carry rich localization information, such as scale and orientation, but tightly coupling these object measurements with an inertial sensor is not straightforward. To address this, we fuse local object poses from a deep neural network into a globally consistent object map, using precise prior estimates from the VIO module. The contribution of our work is the representation of the object map with six-dimensional poses, which enables a robot to exploit orientational as well as positional information in the filtering formulation. We show that our method achieves centimeter-level localization and mapping accuracy in a room-scale, photo-realistic virtual environment.
Index Terms: | Visual-inertial odometry, simultaneous localization and mapping, object pose detection
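To make the idea of a six-dimensional object map concrete, the sketch below shows how a camera-frame object pose (e.g., from a deep object-pose network) can be composed with a world-frame camera pose from a VIO prior to obtain a world-frame object pose. This is a minimal illustration under assumed conventions (rotation-matrix/translation pairs, `T_WC`, `T_CO` as placeholder names), not the estimator described in the paper.

```python
import numpy as np

# Illustrative 6-DoF pose: a rotation matrix R (3x3) and translation t (3,).
def compose(pose_a, pose_b):
    """Return the SE(3) composition pose_a * pose_b (apply pose_b, then pose_a)."""
    Ra, ta = pose_a
    Rb, tb = pose_b
    return Ra @ Rb, Ra @ tb + ta

# T_WC: camera pose in the world frame (assumed given by the VIO prior).
# T_CO: object pose in the camera frame (assumed output of a pose network).
T_WC = (np.eye(3), np.array([1.0, 0.0, 0.5]))
T_CO = (np.eye(3), np.array([0.0, 0.2, 2.0]))

# World-frame object pose to be inserted into the object map; repeated
# detections of the same object would then be fused, e.g. in a filter update.
T_WO = compose(T_WC, T_CO)
print(T_WO[1])  # object position in the world frame
```

Because the map entry keeps the full rotation as well as the translation, a filter built on such entries can use both orientational and positional residuals, which is the property highlighted in the abstract.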
Published in: |
2023 IEEE/ION Position, Location and Navigation Symposium (PLANS), April 24-27, 2023, Hyatt Regency Hotel, Monterey, CA |
Pages: | 1335 - 1340 |
Cite this article: | Jung, Jae Hyung, Park, Chan Gook, "A Framework for Visual-Inertial Object-Level Simultaneous Localization and Mapping," 2023 IEEE/ION Position, Location and Navigation Symposium (PLANS), Monterey, CA, April 2023, pp. 1335-1340. https://doi.org/10.1109/PLANS53410.2023.10140108 |