Abstract: Sensor calibration is a fundamental task in robotics and indoor navigation research using heterogeneous data, such as visual images and point clouds collected by Light Detection and Ranging (LiDAR) sensors. In this paper, we take advantage of our indoor navigation robotic platform to develop novel sensor fusion algorithms operating on different levels of features. The recording platform is equipped with a Velodyne HDL-64E LiDAR and a monocular camera. The resulting dataset, named L2VCali, is composed of raw point clouds, images, and ground truth (GT) for both intrinsic and extrinsic calibration at each epoch. The data is collected in different types of indoor environments, for example, open areas, Manhattan-world rooms, and hallways. Results from state-of-the-art algorithms reveal that published methods cannot maintain high accuracy when the indoor environment becomes complex and contains repetitive features. The dataset aims to become a benchmark for evaluating the robustness of calibration algorithms by providing both typical and challenging scenarios.
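For context on the two calibration products the dataset's ground truth covers, the sketch below shows how intrinsic parameters (the camera matrix K) and extrinsic parameters (a rigid rotation R and translation t from the LiDAR frame to the camera frame) combine to project LiDAR points into the image plane. This is a minimal illustration, not the paper's method; the values of K, R, and t are placeholders, whereas in L2VCali they would be read from the per-epoch GT.

```python
import numpy as np

# Placeholder intrinsics (focal lengths fx, fy; principal point cx, cy)
# and extrinsics mapping LiDAR coordinates into the camera frame.
# In L2VCali these would come from the per-epoch ground truth, not be hard-coded.
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                      # rotation: LiDAR frame -> camera frame
t = np.array([0.1, -0.05, 0.0])    # translation in metres

def project_lidar_to_image(points_lidar: np.ndarray) -> np.ndarray:
    """Project Nx3 LiDAR points to Nx2 pixel coordinates (pinhole model).

    Points with non-positive depth (behind the camera) are discarded.
    """
    # Extrinsics: X_cam = R @ X_lidar + t (row-vector form uses R.T)
    points_cam = points_lidar @ R.T + t
    points_cam = points_cam[points_cam[:, 2] > 0]
    # Intrinsics: homogeneous pixel coords u_h = K @ X_cam, then divide by depth
    pixels_h = points_cam @ K.T
    return pixels_h[:, :2] / pixels_h[:, 2:3]

# Three synthetic LiDAR points; the last one lies behind the camera and is dropped.
pts = np.array([[2.0, 0.5, 5.0], [-1.0, 0.2, 8.0], [0.0, 0.0, -1.0]])
print(project_lidar_to_image(pts))
```

Evaluating a calibration algorithm against the dataset then amounts to comparing its estimated K, R, and t against the GT, e.g., via reprojection error of known 3D points.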
Published in: Proceedings of the 2023 International Technical Meeting of The Institute of Navigation, January 24-26, 2023, Hyatt Regency Long Beach, Long Beach, California
Pages: 1184-1191
Cite this article: Ai, Mengchi, Hokmabadi, Ilyar Asl Sabbaghian, El-Sheimy, Naser, "L2VCali: A Dataset for LiDAR and Vision-Based Sensor Calibration in Structural Indoor Environments," Proceedings of the 2023 International Technical Meeting of The Institute of Navigation, Long Beach, California, January 2023, pp. 1184-1191. https://doi.org/10.33012/2023.18646