
Session A3: Alternative Sensors for Aiding INSs and Precision Timing

Monocular Direct Visual Odometry Aid for Low-Cost IMU in Unknown GNSS-denied Environment
Jun Shi, Institute of Flight System Dynamics, Technical University of Munich, Germany; Yajie Chen, Institute of Photogrammetry and Geoinformation, Leibniz University Hannover, Germany; Johann Dambeck, Florian Holzapfel, Institute of Flight System Dynamics, Technical University of Munich, Germany
Location: Pavilion Ballroom East
Alternate Number 1

INS/GNSS integrated navigation is the standard navigation solution for unmanned aerial vehicles (UAVs). However, GNSS does not always provide sufficient availability and accuracy because of low signal strength in urban areas, forests, and indoor environments. An alternative AHRS solution can provide continuous orientation information during a GNSS outage, but positioning functionality breaks down. To maintain sufficient positioning accuracy and extend the period during which a UAV can still navigate in an unknown environment after a GNSS failure, a camera, which provides abundant information about the unknown environment, is considered as an alternative to GNSS for aiding inertial navigation, thanks to its small size, low weight, low power consumption, low price, and autonomy.

To date, stereo or multi-camera visual odometry, rather than monocular visual odometry, has been widely applied to aid inertial navigation in autonomous driving, because a monocular system provides position only up to scale and requires strict initialization conditions. For a UAV, however, stereo visual odometry exhibits no great superiority over monocular visual odometry when the UAV flies high above the ground: the stereo pair degrades to a monocular camera system, since the height of the UAV above the ground is much larger than the baseline of the stereo cameras. Monocular visual odometry is therefore utilized in this paper.
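The degradation of a stereo pair at altitude can be made concrete with the standard disparity relation d = b·f/Z: once the height above ground far exceeds the baseline, the disparity falls below one pixel and depth becomes unobservable. The following is a minimal sketch; the baseline, focal length, and altitudes are illustrative assumptions, not values from this paper.

```python
# Sketch: why a stereo rig degrades to a monocular camera at high altitude.
# Assumed rig: 0.3 m baseline, 800 px focal length (illustrative numbers).

def disparity_px(baseline_m: float, focal_px: float, depth_m: float) -> float:
    """Stereo disparity in pixels for a ground point at the given depth."""
    return baseline_m * focal_px / depth_m

low = disparity_px(0.3, 800.0, 10.0)    # UAV 10 m above ground
high = disparity_px(0.3, 800.0, 300.0)  # UAV 300 m above ground

print(f"disparity at  10 m: {low:.2f} px")   # well above 1 px: depth usable
print(f"disparity at 300 m: {high:.2f} px")  # below 1 px: depth unobservable
```

At 300 m the disparity is under a pixel, so the second camera contributes no usable depth information and monocular processing is the natural choice.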

Visual odometry can be categorized into direct and feature-based methods. A feature-based method extracts and matches features during motion estimation, whereas a direct method operates directly on raw pixel intensities. During optimization, a feature-based method applies geometric reprojection constraints in the cost function, while a direct method uses photometric constraints instead. In this paper, the direct method is chosen because it still works well in low-texture environments (artificial buildings, roads) where a feature-based method cannot extract sufficient features, and because the feature-based method is computationally more expensive. The direct method is not without weaknesses, however, owing to its immature initialization: it cannot provide a sufficiently accurate initial 3D map, which heavily affects positioning accuracy. To address this problem and the scale-ambiguity problem, a GNSS-based online mapping module operating before GNSS failure is proposed in this paper. This mapping module integrates GNSS, IMU, and imagery information before the GNSS failure and generates a more accurate 3D map with metric scale than the initialization of conventional visual-inertial odometry.
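The contrast between the two residual types can be sketched in a few lines: a feature-based method penalizes the pixel distance between a matched feature and the reprojection of its 3D point, while a direct method penalizes the raw intensity difference at corresponding pixels. The intrinsics, point, and images below are illustrative assumptions, not the paper's actual data.

```python
import numpy as np

# Assumed pinhole intrinsics (illustrative).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(K: np.ndarray, p_cam: np.ndarray) -> np.ndarray:
    """Pinhole projection of a camera-frame 3D point to pixel coordinates."""
    u = K @ p_cam
    return u[:2] / u[2]

# Feature-based (geometric) residual: observed pixel minus reprojection.
point_cam = np.array([1.0, 0.5, 10.0])   # 3D point in the camera frame
u_observed = np.array([401.0, 280.5])    # matched feature location
r_geometric = u_observed - project(K, point_cam)

# Direct (photometric) residual: intensity at the reference pixel minus
# intensity at its predicted location in the current image.
I_ref = np.full((480, 640), 120.0)  # reference image (constant for demo)
I_cur = np.full((480, 640), 118.0)  # current image
r_photometric = I_ref[240, 320] - I_cur[241, 321]
```

The photometric residual needs no feature extraction or matching, which is why it remains usable on low-texture surfaces where too few features can be detected.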

In this paper, a novel visual-inertial navigation scheme is presented. Besides the mapping module, this scheme employs direct monocular visual odometry, which accounts for the stereo pair's degradation to a monocular camera at high altitude as well as the direct method's good performance in low-texture environments and its high computational efficiency compared to the feature-based method. Because a loosely-coupled integration solution uses the poses from the visual odometry as measurements, the measurements at different time instants are correlated, which violates an assumption of the Kalman filter. Therefore, a tightly-coupled integration solution that uses the pixel intensities of the 3D points' projections onto the current image as measurements is also considered. To handle the strong nonlinearity, especially in the pixel gradient, a tightly-coupled direct visual-inertial odometry based on an iterated Kalman filter is proposed in this paper.
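The iterated Kalman filter update re-linearizes the nonlinear measurement model around the current state iterate rather than only around the prior, which is what makes it suitable for strongly nonlinear measurements such as pixel intensities. A minimal generic sketch follows, with a toy scalar measurement z = x²; the function names, dimensions, and numbers are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def iterated_update(x0, P, z, h, H_of, R, iters=5):
    """Iterated Kalman measurement update: re-linearize h around the
    current iterate x instead of the prior x0 (Gauss-Newton style)."""
    x = x0.copy()
    for _ in range(iters):
        H = H_of(x)                          # Jacobian at current iterate
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        # IEKF innovation: corrects for linearizing away from x0
        x = x0 + K @ (z - h(x) - H @ (x0 - x))
    P_new = (np.eye(len(x)) - K @ H) @ P     # covariance from final iterate
    return x, P_new

# Toy nonlinear measurement: z = x^2, true x = 2, prior at 1.5.
h = lambda x: np.array([x[0] ** 2])
H_of = lambda x: np.array([[2.0 * x[0]]])
x, P = iterated_update(np.array([1.5]), np.eye(1), np.array([4.0]),
                       h, H_of, np.array([[1e-4]]))
```

A single-step EKF linearized only at the prior would leave a noticeable bias here; the iteration drives the estimate close to the true value of 2.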

Although significant effort has been spent on visual odometry and SLAM algorithms, the effects of image resolution and camera installation on positioning accuracy have not yet been evaluated. A forward-looking camera and a downward-looking camera on a UAV observe different information and thus yield different positioning accuracies. In this paper, the effect of the camera installation on positioning accuracy is analyzed. Ideally, the maximum translation of the camera that causes no change between two adjacent images sampled by the same camera determines the positioning accuracy limit. With a more expensive camera of higher resolution, this accuracy limit can be reduced. To trade off camera cost against the positioning accuracy limit, their relation is quantified in this paper.
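For a downward-looking camera, a first-order version of this accuracy limit is simply the ground-sample distance: the translation parallel to the ground that shifts the image by one pixel, Δ ≈ altitude / focal length (in pixels). The sketch below uses illustrative numbers, not the quantitative trade-off derived in the paper.

```python
# Sketch: resolution-driven positioning accuracy limit for a
# downward-looking camera (illustrative numbers, not the paper's results).

def min_observable_translation(altitude_m: float, focal_px: float) -> float:
    """Translation parallel to the ground that shifts the image by 1 px;
    smaller translations leave the adjacent images unchanged."""
    return altitude_m / focal_px

# Doubling the focal length in pixels (i.e., the resolution) halves
# the accuracy limit at a fixed altitude of 100 m.
for focal_px in (400.0, 800.0, 1600.0):
    d = min_observable_translation(100.0, focal_px)
    print(f"f = {focal_px:6.0f} px -> accuracy limit = {d * 100:.1f} cm")
```

This inverse relation is what makes the camera-cost versus accuracy trade-off quantifiable: resolution buys accuracy linearly, while sensor price typically does not scale linearly.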

The goal of this paper is to preserve good positioning functionality of a low-cost IMU during GNSS failure, limiting the heading error to 1 degree with the aid of a monocular camera. Navigation results are demonstrated using both high-quality synthetic imagery and real imagery.
