Title: A Vision-Based Indoor Positioning Method with High Accuracy and Efficiency Based on Self-Optimized-Ordered Visual Vocabulary
Author(s): Tesi Wu, Lian-Kuan Chen and Yang Hong
Published in: Proceedings of IEEE/ION PLANS 2016
April 11 - 14, 2016
Hyatt Regency Hotel
Savannah, GA
Pages: 48 - 56
Cite this article: Wu, Tesi, Chen, Lian-Kuan, Hong, Yang, "A Vision-Based Indoor Positioning Method with High Accuracy and Efficiency Based on Self-Optimized-Ordered Visual Vocabulary," Proceedings of IEEE/ION PLANS 2016, Savannah, GA, April 2016, pp. 48-56.
Abstract: In this paper, we present a novel indoor positioning method with high accuracy and efficiency that requires only the camera of a mobile device. The proposed method builds on a novel visual vocabulary, the Self-Optimized-Ordered (SOO) Visual Vocabulary, under the Bag-of-Visual-Words framework to exploit deep connections between physical locations and feature clusters. Related techniques that improve positioning performance, such as feature selection and visual word filtering, are also designed and examined. Evaluation results show that, as the training image set size varies from 20 to 640, our method saves up to 80% of the processing time in both phases compared with two existing vision-based indoor positioning methods that use state-of-the-art image query techniques. Meanwhile, the average image query accuracy of our method across all evaluated indoor scenes is above 95%, which greatly improves positioning accuracy and makes the method a very suitable option for smartphone-based indoor positioning and navigation.
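To make the underlying Bag-of-Visual-Words idea concrete, the sketch below shows the generic pipeline the paper builds on: cluster local feature descriptors into a vocabulary of "visual words" (plain k-means here), represent each location's images as a normalized word histogram, and locate a query image by nearest-histogram matching. This is a minimal illustration with synthetic descriptors, not the paper's SOO Vocabulary; the vocabulary ordering, self-optimization, feature selection, and word filtering steps are omitted, and all names and parameters are hypothetical.

```python
import numpy as np

def kmeans(features, k, iters=20, seed=0):
    """Plain k-means: cluster descriptors into k 'visual words' (centers)."""
    rng = np.random.default_rng(seed)
    centers = features[rng.choice(len(features), k, replace=False)].copy()
    for _ in range(iters):
        # assign each descriptor to its nearest center
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = features[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def bovw_histogram(descriptors, vocab):
    """Quantize an image's descriptors against the vocabulary; return
    an L1-normalized word-frequency histogram."""
    dists = np.linalg.norm(descriptors[:, None, :] - vocab[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / hist.sum()

# Toy data: two "locations" with distinct synthetic 8-D descriptor clusters
# standing in for real local features (e.g. SIFT/SURF) from training images.
rng = np.random.default_rng(1)
loc_a = rng.normal(0.0, 0.1, (50, 8))
loc_b = rng.normal(1.0, 0.1, (50, 8))
vocab = kmeans(np.vstack([loc_a, loc_b]), k=4)

# Offline phase: one histogram per known location.
db = {"location_A": bovw_histogram(loc_a, vocab),
      "location_B": bovw_histogram(loc_b, vocab)}

# Online phase: quantize the query image and match by histogram distance.
query = rng.normal(0.0, 0.1, (30, 8))  # image taken near location A
qh = bovw_histogram(query, vocab)
best = min(db, key=lambda name: np.linalg.norm(db[name] - qh))
print(best)
```

The query's descriptors fall in the same region of feature space as location A's, so its word histogram matches that location's database entry most closely.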