Title: PeerAppear: A P2P Framework for Collaborative Visual Localization
Author(s): Andrew Compton, John Pecarina
Published in: Proceedings of the 29th International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS+ 2016)
September 12 - 16, 2016
Oregon Convention Center
Portland, Oregon
Pages: 1080 - 1090
Cite this article: Compton, Andrew, Pecarina, John, "PeerAppear: A P2P Framework for Collaborative Visual Localization," Proceedings of the 29th International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS+ 2016), Portland, Oregon, September 2016, pp. 1080-1090.
Abstract: The ubiquity of GPS receivers in modern devices has given rise to the expectation of always-available, real-time precision localization. While this expectation can often be met in rural outdoor environments, it is generally not achievable indoors or where line-of-sight to the horizon is blocked by large urban or geographic structures. These shortcomings of GPS have spurred a myriad of alternative methods for precision navigation, many of which can directly augment GPS. One of the most promising methods builds upon recent advances in computer vision to determine location visually by matching an image captured at an unknown location against a database of localized images. The accuracy and scale of these efforts are largely determined by the size and granularity of the image database. Most research efforts build their databases from small-scale, purpose-built collections or from aggregations of localized crowd-sourced images from services such as Flickr and Instagram. While small-scale, purpose-built collections are often sufficient for the experimental validation of localization methods, they do not provide a scalable localization capability. Databases built from crowd-sourced images generally possess the necessary scale, but lack quality and granularity because images tend to be densely clustered near landmarks and points of interest. To overcome the challenges encountered when constructing datasets for visual localization, we present PeerAppear, a middleware framework for the extraction and dissemination of visually derived spatial data. PeerAppear enables collaborative visual information discovery through a peer-to-peer middleware framework that automates the indexing and sharing of visual information extracted from images in a user's collection.
Evaluations of the framework’s theoretical complexity and experimental performance are presented, demonstrating PeerAppear’s feasibility for supporting large-scale, decentralized collaborative visual localization.
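The core matching step the abstract describes, comparing a query image against a database of localized images, can be sketched as nearest-neighbor search over image feature descriptors. The snippet below is a minimal illustration, not the paper's implementation: the function name, the toy 4-D descriptors, and the location labels are all hypothetical, and a real system would use extracted features (e.g., SIFT or ORB) plus geometric verification, with PeerAppear distributing the database across peers.

```python
# Hypothetical sketch: localize a query by finding the database image
# whose feature descriptor is closest (Euclidean distance) to the
# query's descriptor, then reporting that image's known location.
import math

def match_location(query_desc, database):
    """Return the location tag of the database entry nearest the query descriptor."""
    best_loc, best_dist = None, float("inf")
    for loc, desc in database:
        dist = math.dist(query_desc, desc)  # L2 distance between descriptors
        if dist < best_dist:
            best_loc, best_dist = loc, dist
    return best_loc

# Toy database of (location label, 4-D descriptor) pairs -- invented values.
db = [
    ("plaza",  [0.1, 0.9, 0.3, 0.5]),
    ("atrium", [0.8, 0.2, 0.6, 0.1]),
]

print(match_location([0.15, 0.85, 0.35, 0.45], db))  # → plaza
```

In practice each database image would carry many descriptors rather than one, and a vote or pose estimate over the matched descriptors would yield the final location; the sketch only conveys the lookup-by-visual-similarity idea.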