Title: Computer Vision Combined with Convolutional Neural Network aid GNSS/INS Integration for Misalignment Estimation of Portable Navigation
Author(s): Tz-Chiau Su and Hsiu-Wen Chang
Published in: Proceedings of the 30th International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS+ 2017)
September 25 - 29, 2017
Oregon Convention Center
Portland, Oregon
Pages: 611 - 621
Cite this article: Su, Tz-Chiau, Chang, Hsiu-Wen, "Computer Vision Combined with Convolutional Neural Network aid GNSS/INS Integration for Misalignment Estimation of Portable Navigation," Proceedings of the 30th International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS+ 2017), Portland, Oregon, September 2017, pp. 611-621.
Abstract: Positioning and navigation have been popular research topics in recent years. GNSS and INS are the two main navigation techniques, and they are widely used in applications such as automobile guidance, pedestrian guidance, and indoor navigation. However, each system has its own disadvantages, which has led many researchers to seek improved methods. GNSS is affected by signal conditions, and in the worst case the signal is lost entirely in harsh environments. The performance of a self-contained INS depends on its price and size; when a cheaper MEMS-based INS is used, errors grow rapidly over time. Many techniques have been proposed to mitigate this issue, such as integration with Pedestrian Dead Reckoning (PDR), which estimates a walker's trajectory from step-length and orientation detection. With advances in hardware and faster image processing, computer vision has become increasingly prevalent. This research therefore aims to aid a low-cost GNSS/INS integration with computer vision for misalignment detection. A convolutional neural network is introduced to learn the camera movement and to estimate the angle difference between the moving direction and the inertial sensor. With this misalignment angle, inertial-sensor-based estimation can be improved.
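To illustrate the final step described in the abstract, the sketch below shows how an estimated misalignment angle can correct an IMU-derived heading toward the true moving direction. The function name and the angle-wrapping convention are illustrative assumptions, not the authors' implementation; the misalignment value itself would come from the CNN on camera motion.

```python
import math

def correct_heading(imu_heading_rad, misalignment_rad):
    """Apply an estimated device-to-body misalignment angle (e.g. produced
    by a CNN observing camera motion) to align the IMU-frame heading with
    the pedestrian's moving direction. Illustrative sketch only."""
    corrected = imu_heading_rad + misalignment_rad
    # Wrap the result into (-pi, pi] so headings stay comparable.
    return (corrected + math.pi) % (2 * math.pi) - math.pi

# Example: IMU reports a 170 deg heading; the CNN estimates a 30 deg
# misalignment between device axis and walking direction.
theta = correct_heading(math.radians(170.0), math.radians(30.0))
print(math.degrees(theta))  # -160.0 (i.e., 200 deg wrapped into (-180, 180])
```

In a full GNSS/INS filter the misalignment would enter the measurement model rather than be applied as a simple post-hoc rotation, but the sketch captures the core idea of correcting the inertial heading with the vision-derived angle.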