Calibration-free Visual-Inertial Fusion with Deep Convolutional Recurrent Neural Networks

Soroush Sheikhpour and Mohamed Maher Atia

Abstract: Visual-Inertial Odometry (VIO) has become one of the most popular and affordable navigation systems for indoor and even outdoor applications. VIO can augment or replace Global Navigation Satellite Systems (GNSSs) under signal degradation or service interruptions. Conventionally, the fusion of visual and inertial modalities has been performed with optimization-based or filtering-based techniques such as nonlinear Least Squares (LS) or the Extended Kalman Filter (EKF). These classic techniques, despite several simplifying approximations, involve sophisticated modelling and parameterization of the navigation problem, which necessitates expert fine-tuning of the navigation system. In this work, a calibration-free visual-inertial fusion technique using Deep Convolutional Recurrent Neural Networks (DCRNN) is proposed. The network employs a Convolutional Neural Network (CNN) to process the spatial information embedded in the visual data, and two Recurrent Neural Networks (RNNs) to process the inertial sensor measurements and the CNN output for final pose estimation. The network is trained with raw Inertial Measurement Unit (IMU) data and monocular camera frames as its inputs, and the relative pose as its output. Unlike conventional VIO techniques, the IMU biases and scale factors and the camera's intrinsic and extrinsic parameters need not be explicitly provided or modelled in the proposed navigation system; rather, these parameters, along with the system dynamics, are implicitly learned during the training phase. Moreover, since the inertial and visual data are fused at mid-layers of the network, deeper correlations between the two modalities are learned than with a simple combination of the final pose estimates of each modality at the output layers; hence, the approach can be considered a tightly coupled fusion of the visual and inertial modalities.
The proposed VIO network is evaluated on real datasets, and a thorough discussion is provided on the capabilities of the deep learning approach toward VIO.
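The mid-layer (tightly coupled) fusion described in the abstract can be sketched in a few lines: a visual encoder produces per-frame features, an RNN integrates the higher-rate IMU stream, the two are concatenated mid-network, and a second RNN regresses the relative pose. The following NumPy toy is purely illustrative; all layer sizes, the single-matrix stand-in for the CNN, and the function names are assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_step(x, h, Wx, Wh, b):
    # Simple tanh RNN cell: h' = tanh(Wx x + Wh h + b)
    return np.tanh(Wx @ x + Wh @ h + b)

# Hypothetical dimensions (illustrative only, not from the paper)
IMG_IN = 128     # flattened image-pair input to the toy visual encoder
IMG_FEAT = 64    # visual feature size
IMU_DIM = 6      # 3-axis gyroscope + 3-axis accelerometer
H_IMU = 32       # hidden size of the IMU RNN
H_FUSE = 32      # hidden size of the fusion RNN
POSE_DIM = 6     # relative pose: 3 translation + 3 rotation components

# Toy "CNN": a single linear projection standing in for the visual encoder
W_cnn = rng.standard_normal((IMG_FEAT, IMG_IN)) * 0.05

# IMU RNN weights
Wx_i = rng.standard_normal((H_IMU, IMU_DIM)) * 0.05
Wh_i = rng.standard_normal((H_IMU, H_IMU)) * 0.05
b_i = np.zeros(H_IMU)

# Fusion RNN weights (input = visual features concatenated with IMU RNN state)
Wx_f = rng.standard_normal((H_FUSE, IMG_FEAT + H_IMU)) * 0.05
Wh_f = rng.standard_normal((H_FUSE, H_FUSE)) * 0.05
b_f = np.zeros(H_FUSE)

# Output layer: fusion state -> relative pose
W_out = rng.standard_normal((POSE_DIM, H_FUSE)) * 0.05

def vio_step(img_vec, imu_seq, h_imu, h_fuse):
    """One camera-frame step; several IMU samples arrive per image."""
    for imu in imu_seq:                       # IMU runs at a higher rate
        h_imu = rnn_step(imu, h_imu, Wx_i, Wh_i, b_i)
    feat = np.tanh(W_cnn @ img_vec)           # visual features (toy CNN)
    fused = np.concatenate([feat, h_imu])     # mid-layer (tight) fusion
    h_fuse = rnn_step(fused, h_fuse, Wx_f, Wh_f, b_f)
    pose = W_out @ h_fuse                     # relative pose for this step
    return pose, h_imu, h_fuse

# Run two steps on random data
h_imu, h_fuse = np.zeros(H_IMU), np.zeros(H_FUSE)
for _ in range(2):
    img = rng.standard_normal(IMG_IN)
    imu_seq = rng.standard_normal((10, IMU_DIM))  # 10 IMU samples per frame
    pose, h_imu, h_fuse = vio_step(img, imu_seq, h_imu, h_fuse)

print(pose.shape)  # (6,)
```

Because the concatenation happens before the final pose layers, the fusion RNN sees raw cross-modal correlations rather than two independently computed pose estimates, which is the sense in which the abstract calls the scheme tightly coupled.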
Published in: Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2019)
September 16 - 20, 2019
Hyatt Regency Miami
Miami, Florida
Pages: 2198 - 2209
Cite this article: Sheikhpour, Soroush, Atia, Mohamed Maher, "Calibration-free Visual-Inertial Fusion with Deep Convolutional Recurrent Neural Networks," Proceedings of the 32nd International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2019), Miami, Florida, September 2019, pp. 2198-2209.
https://doi.org/10.33012/2019.16918