Abstract: Visual-inertial navigation systems (VINS) have been extensively studied over the past decades to provide positioning services for autonomous systems, such as autonomous driving vehicles (ADV) and unmanned aerial vehicles (UAV). VINS can achieve decent performance in indoor scenarios with stable illumination and rich texture. Unfortunately, applying VINS in dynamic urban areas remains challenging, because the numerous dynamic objects can significantly degrade its performance. A straightforward way to mitigate the impact of dynamic objects is to use a deep neural network (DNN) to detect and remove the image features that belong to unexpected objects, such as moving vehicles and pedestrians. However, excessive exclusion of features can significantly distort the geometric distribution of the visual features and, even worse, can render the system states unobservable. Instead of directly excluding the features that possibly belong to dynamic objects, this paper proposes to remodel the uncertainty of the dynamic features, so that both the healthy and the dynamic features are used in the VINS. An experiment in a typical urban canyon is conducted to validate the performance of the proposed method. The results show that the proposed method effectively mitigates the impact of dynamic objects and improves positioning accuracy.
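The core idea in the abstract can be illustrated with a short sketch. The Python snippet below is not the authors' implementation; all names (Feature, BoundingBox, DYNAMIC_INFLATION) are hypothetical. It shows one plausible form of the uncertainty remodeling: features that fall inside DNN-detected dynamic-object bounding boxes are kept in the estimator, but with an inflated measurement-noise covariance, instead of being excluded outright.

```python
# A minimal sketch (assumed, not from the paper) of uncertainty remodeling:
# features on detected dynamic objects are down-weighted, not removed, so the
# feature geometry and state observability are preserved.
from dataclasses import dataclass, field
from typing import List
import numpy as np

@dataclass
class BoundingBox:
    """Axis-aligned 2D box from a DNN detector (e.g. a vehicle or pedestrian)."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    def contains(self, u: float, v: float) -> bool:
        return self.x_min <= u <= self.x_max and self.y_min <= v <= self.y_max

@dataclass
class Feature:
    """Tracked image feature with a 2x2 pixel-noise covariance."""
    u: float
    v: float
    cov: np.ndarray = field(default_factory=lambda: np.eye(2))

# Assumed inflation factor; in practice it would be tuned or derived from
# detector confidence and scene dynamics.
DYNAMIC_INFLATION = 25.0

def remodel_uncertainty(features: List[Feature],
                        dynamic_boxes: List[BoundingBox]) -> List[Feature]:
    """Keep every feature, but enlarge the covariance of those that land
    inside a detected dynamic object, so the VINS back end trusts them less
    without discarding them."""
    for f in features:
        if any(box.contains(f.u, f.v) for box in dynamic_boxes):
            f.cov = f.cov * DYNAMIC_INFLATION
    return features
```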
Published in: 2020 IEEE/ION Position, Location and Navigation Symposium (PLANS), April 20-23, 2020, Hilton Portland Downtown, Portland, Oregon
Pages: 1563-1571
Cite this article: Bai, Xiwei, Zhang, Bo, Wen, Weisong, Hsu, Li-Ta, Li, Huiyun, "Perception-aided Visual-Inertial Integrated Positioning in Dynamic Urban Areas," 2020 IEEE/ION Position, Location and Navigation Symposium (PLANS), Portland, Oregon, April 2020, pp. 1563-1571. https://doi.org/10.1109/PLANS46316.2020.9109963