Abstract: Collaborative Navigation (CN) is developed to improve positioning performance for various navigation applications. This paper investigates CN technology through object detection based on deep learning. A novel concept that integrates Inertial Measurement Units (IMUs) and digital cameras to achieve CN is presented. The approach uses a pre-trained faster region-based convolutional neural network (Faster R-CNN) to detect neighboring users in images. Then, using the image-based ranging approach proposed in this study, we obtain ranges between users, which are tightly integrated with a local Kalman filter to provide the IMU with error estimates. Unlike traditional localization and ranging methods, this approach allows positioning in indoor environments, where radio frequency (RF) signals are generally blocked, reflected, and attenuated by obstacles. To evaluate the performance of the proposed approach, a test was conducted at The Ohio State University. The results show that the accuracy of the image-based ranging approach stays around 1 m at various range levels, and that bounding boxes with higher detection scores yield higher ranging accuracy. The CN solution is able to drag the trajectory back to the ground truth and restricts the horizontal error to around 1 m in an indoor area.
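The image-based ranging step described above can be sketched with a simple pinhole-camera model: the range to a detected neighboring user is inferred from the apparent pixel height of their bounding box, given a roughly known physical height. This is a minimal illustration under assumed parameters (focal length in pixels, nominal user height); the paper's actual ranging model may differ.

```python
def range_from_bbox(focal_px: float, target_height_m: float,
                    bbox_height_px: float) -> float:
    """Estimate the range (m) to a target of roughly known physical height
    from the pixel height of its detected bounding box, using the pinhole
    relation: bbox_height_px = focal_px * target_height_m / range."""
    if bbox_height_px <= 0:
        raise ValueError("bounding box height must be positive")
    return focal_px * target_height_m / bbox_height_px

# Example (assumed values): a 1000 px focal length and a 1.7 m tall user
# whose bounding box is 170 px tall gives a range of 10 m.
print(range_from_bbox(1000.0, 1.7, 170.0))
```

Ranges obtained this way would then serve as measurement updates in the local Kalman filter, constraining IMU drift between users.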
Proceedings of the 30th International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS+ 2017)
September 25 - 29, 2017
Oregon Convention Center
Pages: 3289-3300
Cite this article:
Zhang, Lin, Xu, Haowei, Lian, Baowang, Toth, Charles K., Grejner-Brzezinska, Dorota A., "Integrated IMU/Image Collaborative Navigation for Indoor Environments," Proceedings of the 30th International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS+ 2017), Portland, Oregon, September 2017, pp. 3289-3300.