Visual Semantic Context and Efficient Map-Based Rotation-Invariant Estimation of Position and Heading

Junwoo Park, Sungjoong Kim, Kyungwoo Hong and Hyochoong Bang

Peer Reviewed

Abstract: This paper proposes a visual map-based position and heading estimation system that is invariant to image rotation and consistent over time, achieved by exploiting the radial and azimuthal distributions of semantic segments. To characterize a specific position and heading, a novel concept termed "visual semantic context" is introduced, which collects semantics in polar coordinates and is compared using discrepancy measures. The system then matches visual semantic contexts: one from an aerial image semantically segmented with deep learning, and others from a semantics-labeled database. A two-stage minimization alleviates the expensive computation of an exhaustive search. The first stage marginalizes the heading and coarsely searches for positions; at the same time, the Kolmogorov–Smirnov test significantly reduces the search domain by rejecting unlikely candidates. The second stage then refines the estimates. Numerical experiments show that the proposed algorithm fixes the position and heading, is invariant to image rotation, and is robust to imprecise scale information.
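The abstract describes collecting semantics in polar coordinates around a candidate pose and marginalizing the heading during coarse matching. A minimal sketch of that idea is below; the function names, bin counts, and the specific discrepancy measure (an L1 difference minimized over cyclic azimuthal shifts, plus a KS-style statistic for candidate rejection) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def semantic_context(seg, center, n_r=8, n_theta=16, n_classes=4, r_max=None):
    """Hypothetical 'visual semantic context': a normalized histogram of
    semantic class labels over radial and azimuthal bins around `center`.

    seg    : 2-D integer array of per-pixel class labels in [0, n_classes)
    center : (row, col) of the candidate position
    Returns an (n_r, n_theta, n_classes) tensor summing to 1.
    """
    rows, cols = np.indices(seg.shape)
    dy, dx = rows - center[0], cols - center[1]
    r = np.hypot(dx, dy)
    theta = np.mod(np.arctan2(dy, dx), 2 * np.pi)
    if r_max is None:
        r_max = r.max()
    # Quantize radius and azimuth into bins (clamp the boundary pixel).
    r_bin = np.minimum((r / r_max * n_r).astype(int), n_r - 1)
    t_bin = np.minimum((theta / (2 * np.pi) * n_theta).astype(int), n_theta - 1)
    ctx = np.zeros((n_r, n_theta, n_classes))
    np.add.at(ctx, (r_bin.ravel(), t_bin.ravel(), seg.ravel()), 1.0)
    return ctx / ctx.sum()

def ks_statistic(p, q):
    """Two-sample KS-style statistic between two 1-D distributions,
    usable to reject unlikely position candidates cheaply."""
    cp, cq = np.cumsum(p / p.sum()), np.cumsum(q / q.sum())
    return np.max(np.abs(cp - cq))

def heading_marginal_discrepancy(ctx_a, ctx_b):
    """Coarse, heading-invariant match score: a heading change is a cyclic
    shift of azimuthal bins, so minimize the L1 gap over all shifts."""
    return min(np.abs(np.roll(ctx_a, s, axis=1) - ctx_b).sum()
               for s in range(ctx_a.shape[1]))
```

Because rotating the camera only permutes the azimuthal bins cyclically, `heading_marginal_discrepancy` returns (near) zero for any rotated copy of the same context, which is the rotation-invariance property the abstract claims; the shift achieving the minimum is a coarse heading estimate that a second stage could refine.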