Abstract: | The low cost of GPS has made precision navigation a commodity in both civil and military applications. However, GPS dropouts are fairly common, typically occurring in urban canyon or indoor environments. In addition, GPS can be jammed or spoofed by sophisticated adversaries. As a result, a number of navigation techniques have been proposed as complements or alternatives to GPS. One such technique is vision-aided navigation. In previous work, we demonstrated that fusing visual odometry information into a GPS/INS filter can significantly reduce navigation error drift during extended-duration GPS outages. One disadvantage of the vision-and-inertial solution, however, is that while the error drifts very slowly, its magnitude is unbounded. After long GPS outages, the navigation error may be too large to complete some missions. Recent work in the field of vision-aided navigation has demonstrated that by recognizing previously visited or mapped places, the navigation solution can be updated, eliminating drift. However, place recognition presents a number of challenges. For example, some environments have repetitive appearance, making it difficult to discern one place from another (a problem known as perceptual aliasing). In addition, images of the same place may differ in scale, viewpoint, and illumination, and may contain moving objects. Another challenge is incorporating the information from place recognition into the navigation solution in a statistically sound manner. Finally, recognizing places in large-scale dynamic environments can be computationally intensive, making integration with a real-time navigation solution challenging. We will present an appearance-based place recognition algorithm which addresses these challenges, thus providing absolute position updates enabling GPS-like, drift-free navigation.
We will show results from experiments using large-scale, real-world, street-level imagery in a dynamic environment with occlusions and significant changes in appearance and viewpoint. We will also demonstrate the ability to construct compact maps for place recognition from various external databases. Our appearance-based place recognition methods are based on the bag-of-words (BoW) representation. In the BoW representation, image features (such as SIFT or SURF features) from a set of training images are clustered, and each cluster is condensed to a single “visual word”. Images (or places) are represented as collections of these visual words. Place recognition is then conducted by creating a map of places and comparing the collection of visual words in a query image with those contained in the map. The BoW approach was chosen for its effectiveness in handling occlusions and variations in viewpoint (as well as scale and illumination when paired with appropriate feature types). We have investigated and contrasted two solutions to the perceptual aliasing problem: a similarity matrix approach and a Bayesian probability-based approach. The similarity matrix approach uses a spectral method, while the Bayesian approach exploits the covariance of vocabulary words. Additionally, both approaches exploit temporal coherence to reduce matching ambiguity. Computational complexity and scalability are addressed by executing place recognition queries only on a sparse set of distinct images (key-frames) from the live image stream. Furthermore, we implement an algorithm which intelligently constructs a sparse map from a database of images, maximally preserving discriminability by preferring distinct places over repetitive places. When a place is recognized from the map, we use visual pose estimation methods to calculate the current camera position and orientation relative to the recognized place.
Since the absolute location of the place is known, this enables us to update the navigation solution with a GPS-like position (and possibly orientation) measurement. As long as we are able to recognize places and incorporate these measurements, the navigation error will be bounded, providing drift-free navigation. |
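The core matching and update steps the abstract describes can be sketched in a few lines. This is a minimal, illustrative sketch only: the function names, the cosine-similarity score, and the 0.7 threshold are assumptions of this sketch, not the paper's implementation, which uses SIFT/SURF features, a trained vocabulary, key-frame selection, and temporal coherence to resolve perceptual aliasing.

```python
import math

def bow_histogram(word_ids, vocab_size):
    """Count visual-word occurrences in one image (its BoW representation)."""
    h = [0] * vocab_size
    for w in word_ids:
        h[w] += 1
    return h

def cosine_similarity(a, b):
    """Score two BoW histograms; 1.0 means identical word distributions."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def match_place(query_words, map_places, vocab_size, threshold=0.7):
    """Compare a query image's visual words against every mapped place.

    Returns (best_place_id, score), or (None, score) when no place
    scores above the threshold (i.e., no confident recognition).
    """
    q = bow_histogram(query_words, vocab_size)
    best_id, best_score = None, 0.0
    for place_id, words in map_places.items():
        s = cosine_similarity(q, bow_histogram(words, vocab_size))
        if s > best_score:
            best_id, best_score = place_id, s
    if best_score < threshold:
        return None, best_score
    return best_id, best_score

def absolute_position(place_position, relative_offset):
    """Known absolute location of the recognized place, plus the camera
    offset from visual pose estimation, yields a GPS-like position fix."""
    return tuple(p + d for p, d in zip(place_position, relative_offset))
```

A query image whose visual words closely overlap one mapped place will match it with a high score, while a distinct-word query falls below the threshold and produces no update, which is the behavior needed before feeding the position fix into the navigation filter.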
Published in: |
Proceedings of the 26th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2013), September 16-20, 2013, Nashville Convention Center, Nashville, Tennessee |
Pages: | 529 - 536 |
Cite this article: | Hamilton, L., Lommel, P., Galvin, T., Jacob, J., DeBitetto, P., Mitchell, M., "Nav-by-Search: Exploiting Geo-referenced Image Databases for Absolute Position Updates," Proceedings of the 26th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2013), Nashville, TN, September 2013, pp. 529-536. |