Exploiting Ground Plane Constraints for Visual-aided Inertial Navigation

G. Panahandeh, D. Zachariah, and M. Jansson

Abstract: In this paper, an ego-motion estimation approach is introduced that fuses visual and inertial information from a monocular camera and an inertial measurement unit (IMU). The system maintains a set of feature points observed on the ground plane. Based on matched feature points between the current and previous images, a novel measurement model is introduced that imposes visual constraints on the inertial navigation system to perform 6 DoF motion estimation. Furthermore, the feature points are used to impose epipolar constraints on the estimated motion between the current and past images. Pose estimation is formulated implicitly in a state-space framework and performed by a Sigma-Point Kalman filter. Experiments conducted on real data in an indoor scenario demonstrate that the proposed method achieves accurate 6 DoF pose estimation.
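The paper's actual measurement model is developed in the full text; as a rough, hypothetical illustration of the two kinds of visual constraints summarized in the abstract, the Python sketch below evaluates (i) an epipolar residual for matched, calibrated feature points between two camera poses and (ii) a planar-homography transfer residual for points lying on the ground plane. The function names, the plane parameterization (normal n and distance d expressed in the first camera frame), and the toy pose values are assumptions for illustration only, not the authors' implementation.

import numpy as np

def skew(v):
    # Skew-symmetric matrix [v]_x, so that skew(v) @ w == np.cross(v, w).
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def epipolar_residual(x1, x2, R, t):
    # Standard epipolar constraint x2^T E x1 = 0 with essential matrix
    # E = [t]_x R, where X2 = R @ X1 + t maps frame-1 points into frame 2.
    # x1, x2 are homogeneous normalized (calibrated) image coordinates.
    E = skew(t) @ R
    return float(x2 @ E @ x1)

def ground_plane_residual(x1, x2, R, t, n, d):
    # Planar-homography constraint for points on the plane n^T X1 = d
    # (plane expressed in the first camera frame): x2 ~ H x1 with
    # H = R + t n^T / d. Returns the 2-D image transfer error.
    H = R + np.outer(t, n) / d
    x2_pred = H @ x1
    return (x2_pred / x2_pred[2] - x2 / x2[2])[:2]

# Toy check (assumed values): camera 1.5 m above the ground, moving 0.5 m
# forward along its optical axis between the two views.
R = np.eye(3)
t = np.array([0.0, 0.0, -0.5])           # X2 = R @ X1 + t
n = np.array([0.0, 1.0, 0.0])            # ground-plane normal (y-axis down)
d = 1.5                                   # camera height above the plane [m]
X1 = np.array([0.3, 1.5, 4.0])           # point on the plane: n @ X1 == d
X2 = R @ X1 + t
x1, x2 = X1 / X1[2], X2 / X2[2]
print(epipolar_residual(x1, x2, R, t))            # ~0 for a correct (R, t)
print(ground_plane_residual(x1, x2, R, t, n, d))  # ~[0, 0] for plane points

In a filtering framework such as the one described in the abstract, residuals of this kind would enter as (implicit) measurements that constrain the inertially propagated pose; the sketch only shows the geometric relations themselves.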
Published in: Proceedings of IEEE/ION PLANS 2012
April 24-26, 2012
Myrtle Beach Marriott Resort & Spa
Myrtle Beach, South Carolina
Pages: 527-534
Cite this article: Panahandeh, G., Zachariah, D., and Jansson, M., "Exploiting Ground Plane Constraints for Visual-aided Inertial Navigation," Proceedings of IEEE/ION PLANS 2012, Myrtle Beach, South Carolina, April 2012, pp. 527-534. https://doi.org/10.1109/PLANS.2012.6236923