2D Relative Pose and Scale Estimation with Monocular Cameras and Ranging

Chen Zhu, Gabriele Giorgi, Christoph Günther

Peer Reviewed

Abstract: Cooperative swarms of robots equipped with cameras are robust against failures and can efficiently explore Global Navigation Satellite System (GNSS)-denied environments. Applying Visual Simultaneous Localization and Mapping (VSLAM) techniques, vehicles can estimate their trajectories and simultaneously reconstruct a map of the environment using visual cues. Due to constraints on payload size, weight, and cost, many VSLAM applications must rely on a single camera. The associated monocular estimation of the trajectory and map is ambiguous up to a scale factor. This work shows that by exploiting sparse range measurements between a pair of dynamic rovers in planar motion, the correct scale factors of both cameras, the relative position, and the relative attitude between the rovers can be estimated. The proposed method requires neither images nor feature vectors to be transmitted over the communication channel, which is a significant advantage in practice.
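A minimal numerical sketch of the idea summarized in the abstract, not the authors' estimator: two monocular pipelines each report an up-to-scale planar trajectory in their own frame, and a nonlinear least-squares fit over sparse inter-rover ranges recovers both scale factors and the 2D relative pose. All trajectories, noise levels, and parameter values below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative toy simulation only (not the paper's algorithm): shows that
# sparse inter-rover ranges make the monocular scales and the planar
# relative pose observable. All values here are made up.
rng = np.random.default_rng(0)
K = 40  # number of synchronized range measurements

# Ground-truth planar trajectories of the two rovers (common world frame)
ts = np.linspace(0.0, 2.0 * np.pi, K)
pA = np.c_[2.0 * np.cos(ts), 1.5 * np.sin(ts)]
pB = np.c_[4.0 + np.cos(2.0 * ts), 1.0 + 2.0 * np.sin(ts)]

def rot(th):
    """2D rotation matrix."""
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s], [s, c]])

# True (unknown) parameters: monocular scales, relative heading, relative position
sA_true, sB_true, th_true, t_true = 2.5, 0.8, 0.7, np.array([4.0, 1.0])

# What each monocular VSLAM pipeline reports: an up-to-scale trajectory in its
# own frame (rover A's frame doubles as the world frame here for simplicity)
qA = pA / sA_true
qB = (rot(-th_true) @ (pB - t_true).T).T / sB_true

# Sparse, mildly noisy range measurements between the rovers
r = np.linalg.norm(pA - pB, axis=1) + 0.005 * rng.standard_normal(K)

def residuals(x):
    """Range residuals for parameters x = (sA, sB, theta, tx, ty)."""
    sA, sB, th, tx, ty = x
    pB_hat = (rot(th) @ (sB * qB).T).T + np.array([tx, ty])
    return np.linalg.norm(sA * qA - pB_hat, axis=1) - r

# Nonlinear least squares with a coarse multi-start over the heading,
# since the range-only cost surface can have local minima
best = None
for th0 in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False):
    sol = least_squares(residuals, np.array([1.0, 1.0, th0, np.mean(r), 0.0]))
    if best is None or sol.cost < best.cost:
        best = sol

sA_est, sB_est, th_est, tx_est, ty_est = best.x
# Resolve the exact (sB, theta) <-> (-sB, theta + pi) sign ambiguity
if sB_est < 0:
    sB_est, th_est = -sB_est, th_est + np.pi
th_est %= 2.0 * np.pi
```

With generic (non-degenerate) trajectories, the five parameters are well constrained by the 40 ranges, and the fit recovers the simulated scales and relative pose.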
Published in: NAVIGATION: Journal of the Institute of Navigation, Volume 65, Number 1
Pages: 25–33
https://doi.org/10.1002/navi.223