Real-Time Fusion of Image and Inertial Sensors for Navigation

J. Fletcher, M. Veth, and J. Raquet

Abstract: As evidenced by many biological systems, the fusion of optical and inertial sensors represents an attractive method for passive navigation. In our previous work, a rigorous theory for optical and inertial fusion was developed for precision navigation applications. The theory was based on a statistical transformation of the feature space driven by inertial sensor measurements. The transformation effectively constrained the feature correspondence search to a given level of a priori statistical uncertainty. When integrated into a navigation system, the fused system demonstrated performance in indoor environments comparable to that of GPS-aided systems. In order to improve feature tracking performance, a robust feature extraction algorithm (Lowe's Scale-Invariant Feature Transform, SIFT) was chosen. SIFT features are well suited to navigation applications in that they are invariant to scale, rotation, and illumination. Unfortunately, the complexity of these features comes at a cost in processing time, which limits the effectiveness of robust feature extraction algorithms for real-time applications on traditional microprocessor architectures. While recent advances in computer technology have made image processing more commonplace, the amount of information that can be processed is still limited by the power and speed of the CPU. In this paper, a new processing method is developed that exploits the highly parallel nature of general-purpose graphics processing units (GPGPUs) to support deeply integrated optical and inertial sensors for real-time navigation. Recent advances in GPGPU technology have made real-time, image-aided navigation a reality. Our approach leverages the existing OpenVIDIA core GPGPU library and commercially available computer hardware to solve the image and inertial fusion problem. The open-source libraries are extended to include the statistical feature projection and matching techniques developed in our previous research. The performance of the new processing method was demonstrated by integrating the inertial and image sensors on a commercially available laptop computer containing a programmable GPU. Experimental data collections have shown up to a 3000% improvement in feature processing speed over an equivalent CPU-based algorithm. In this experimental configuration, frame rates greater than 10 Hz are demonstrated, which are suitable for real-time navigation. Finally, the navigation performance of the new real-time system is shown to be identical to that of the previous method, which required lengthy post-processing.
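The central idea summarized above is that the inertial solution projects previously observed features (and their uncertainty) into the current image, so descriptor matching only needs to search within an a priori statistical gate. The sketch below is not taken from the paper or from OpenVIDIA; it is a minimal CPU-side Python illustration of that gating concept, with the predicted pixel locations, their covariances, and the SIFT descriptors assumed to be given. In the system described in the paper, the feature extraction and matching stages run on the GPU.

```python
import numpy as np

def gated_match(pred_px, pred_cov, feat_px, feat_desc, desc_prev, gate=9.21):
    """Match previous-frame features to current-frame features, restricting
    the descriptor search to candidates inside the inertially propagated
    uncertainty ellipse (Mahalanobis gate; 9.21 is roughly the 99% point
    for 2 degrees of freedom).

    pred_px   : (N, 2) predicted pixel locations of previous features,
                projected into the current image using the inertial solution
    pred_cov  : (N, 2, 2) covariance of each predicted location
    feat_px   : (M, 2) pixel locations of features in the current image
    feat_desc : (M, 128) SIFT descriptors of the current-image features
    desc_prev : (N, 128) SIFT descriptors of the previous-frame features
    """
    matches = []
    for i, (mu, P) in enumerate(zip(pred_px, pred_cov)):
        d = feat_px - mu                             # innovation for every candidate
        Pinv = np.linalg.inv(P)
        m2 = np.einsum('ij,jk,ik->i', d, Pinv, d)    # squared Mahalanobis distances
        candidates = np.flatnonzero(m2 < gate)       # keep only features inside the gate
        if candidates.size == 0:
            continue
        # Descriptor comparison restricted to the gated candidates
        dist = np.linalg.norm(feat_desc[candidates] - desc_prev[i], axis=1)
        matches.append((i, candidates[np.argmin(dist)]))
    return matches
```

Constraining the search this way both reduces the number of descriptor comparisons and rejects geometrically implausible correspondences, which is what makes the approach attractive for real-time operation.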
Published in: Proceedings of the 63rd Annual Meeting of The Institute of Navigation (2007)
April 23 - 25, 2007
Royal Sonesta Hotel
Cambridge, MA
Pages: 534 - 544
Cite this article: Fletcher, J., Veth, M., Raquet, J., "Real-Time Fusion of Image and Inertial Sensors for Navigation," Proceedings of the 63rd Annual Meeting of The Institute of Navigation (2007), Cambridge, MA, April 2007, pp. 534-544.