Abstract: The quality of visual features is critical for vision-based navigation, where features degrade with the platform's motion state and with changing environments. This paper presents a new method, Feature Quality Supervision (FQS), that captures high-quality visual features by feeding predictions from inertial parameters into visual feature selection. Our approach has three key parts. First, we use inertial measurements to quantify the motion state and the blur caused by rotation, which guides effective feature selection. Second, we propose a description vector that characterizes the appearance of each selected feature, covering texture, blur, and moving objects in the visual domain. Third, a back-propagation (BP) network is trained as a quality evaluation model that learns to trust features contributing to accurate localization; the model is intended to capture the relationship between a feature's description and its quality. The ground-truth quality of each training feature is labeled manually via a mapping from localization error to quality. The estimated quality is then integrated into Visual-Inertial Navigation (VIN), weighting the visual features during pose estimation. The results indicate that the proposed method reduces localization error. Moreover, the model is robust and flexible: it adapts to different scenes in the KITTI dataset as well as the EuRoC MAV dataset without retraining.
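The pipeline the abstract describes can be sketched minimally as follows. This is an illustrative assumption, not the authors' implementation: a tiny two-layer "BP network" with hypothetical (here random) weights maps a per-feature description vector (texture, blur, moving-object scores) to a quality in (0, 1), and that quality weights each feature's residual in a least-squares pose update, so low-quality features contribute less.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights of a 3-4-1 BP network; in the paper these would be
# trained against manually labeled feature quality, here they are random.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feature_quality(desc):
    """Map a description vector [texture, blur, motion] to quality in (0, 1)."""
    h = np.tanh(W1 @ desc + b1)           # hidden layer
    return float(sigmoid(W2 @ h + b2))    # output quality score

def weighted_pose_update(J, r, q):
    """Quality-weighted least squares: solve (J^T Q J) dx = J^T Q r,
    where Q = diag(q) down-weights residuals of low-quality features."""
    Q = np.diag(q)
    return np.linalg.solve(J.T @ Q @ J, J.T @ Q @ r)

# Toy usage: 10 features, a 6-DoF pose increment.
J = rng.normal(size=(10, 6))                       # stacked residual Jacobians
r = rng.normal(size=10)                            # stacked residuals
q = np.array([feature_quality(rng.normal(size=3))  # per-feature quality
              for _ in range(10)])
dx = weighted_pose_update(J, r, q)                 # weighted pose update
```

The key design point is that quality enters only as a residual weight, so it can be bolted onto an existing VIN least-squares back end without changing its structure.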
Proceedings of the 2019 International Technical Meeting of The Institute of Navigation
January 28 - 31, 2019
Hyatt Regency Reston
Pages: 270-282
Cite this article:
Liu, Hongyan, Ma, Huimin, Wen, Jinghuan, Su, Jingxuan, Yao, Zheng, Zhang, Lin, "FQS: Feature Quality Supervision for Visual-Inertial Navigation," Proceedings of the 2019 International Technical Meeting of The Institute of Navigation, Reston, Virginia, January 2019, pp. 270-282.