Integrated Multi-Aperture Sensor and Navigation Fusion

A. Soloviev, J. Touma, T. Klausutis, M. Miller, A. Rutkowski, K. Fontaine

Abstract: The integration of vision sensors and an inertial navigation system (INS) can enable precision navigation capabilities in the absence of GPS. Inspired by biological systems, a multi-aperture vision processing system allows for accurate self-motion (egomotion) estimation by observing optical flow across all apertures. The multi-aperture approach is particularly well suited for resolving motion ambiguity by providing a wide field of regard for detecting and tracking visual features (optical flow). This paper presents a data fusion approach for multi-aperture sensors that integrates the vision processing into a single unified frame of reference by projecting imagery from each aperture onto the unit sphere centered on the navigation frame. The unit sphere projection allows for the seamless integration of multiple apertures into the more natural angle-angle space of the navigation frame of reference. As a first step in evaluating the multi-aperture processing strategy, the paper evaluates navigation fusion algorithms with simulated data. The results presented clearly show the advantage of coupling the inertial system with a multi-aperture optical sensor.

INTRODUCTION

Vision-based navigation techniques serve as a viable option for autonomous, passive navigation and guidance in Global Navigation Satellite Systems (GNSS)-denied environments [1]. This paper discusses an extension of the vision-aided inertial navigation approach to multi-aperture camera cases. Inspired by biological systems, a multi-aperture vision processing system allows for accurate motion estimation by observing optical flow across all apertures. The multi-aperture approach is particularly well suited for resolving motion ambiguity by providing a wide field of regard for detecting and tracking visual features. Moreover, as we show in this paper, a multi-aperture system coupled with an inertial system simplifies the problem of obtaining the range to the observed features as our system moves through the environment. The multi-aperture vision formulation presented herein is inspired by the wide field of regard of insect vision sensors, such as the compound eyes of a dragonfly shown in Figure 1.

Figure 1. Example biological multi-aperture vision system: compound eyes of a dragonfly

Insects navigate successfully and efficiently in complex environments using optical flow sensors coupled with other sensors such as halteres (inertial sensors). Rather than using a single high-quality vision device, an eye of an insect can be represented as a combination of multiple low-quality cameras with slightly different fields of view and optical characteristics. Moreover, coupling vision with other sensors provides more robust egostate estimation as compared to vision alone. Therefore, this paper fuses multi-aperture vision sensing with inertial navigation.

The use of multi-aperture vision is also motivated by the enhancement of situational awareness for guidance and mission planning: i.e., the capability to look forward is augmented by side-looking and backward-looking capabilities. Figure 2 illustrates the improved situational awareness that is achieved by using multi-aperture vision as compared to a single-aperture camera system for an indoor flight mission of a mini-UAV.

Figure 2. Single-aperture vs. multi-aperture example

In this example, the limited field of view of a single aperture results in a mission path that collides with the window. The multi-aperture camera enables a complete observation of the scene, thus allowing for collision avoidance. Figure 3 illustrates an example implementation of the multi-aperture camera system.

Figure 3. Multi-aperture experimental setup developed by the Alt-Nav team of the Air Force Research Laboratory's Munitions Directorate; the setup includes three video cameras with a 90-deg separation of their optical axes

This paper focuses on the benefits of using a multi-aperture system for the estimation of vehicle navigation states, including position, velocity, and attitude. The main benefits are summarized as follows:

• Increased number of high-quality features that can be applied for navigation;
• Improved relative feature geometry, which results in reduced values of dilution of precision (DOP) factors for the estimation of navigation parameters from vision features; and,
• Improved capability to resolve the unknown scale of video images.

The remainder of the paper is organized as follows. First, a unit sphere coordinate frame that is applied to represent multi-aperture imagery is described. Second, motion constraints are derived in the unit sphere frame. Third, fusion of multi-aperture vision data with measurements of an inertial navigation system (INS) is discussed. Finally, simulation results are presented to demonstrate the efficiency of the multi-aperture vision approach.
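To make the unit-sphere representation concrete, the sketch below back-projects a tracked pixel from each of three apertures with 90-degree-separated optical axes (as in Figure 3) onto a single unit sphere expressed in a common body-fixed frame. This is a minimal illustration, not the authors' implementation: ideal pinhole models, identical intrinsics, and known camera-to-body mounting rotations are assumed, and all names and numerical values are illustrative.

```python
import numpy as np

def pixel_to_unit_sphere(pixel_uv, K, R_cam_to_body):
    """Back-project one tracked pixel onto the unit sphere in the common body frame."""
    u, v = pixel_uv
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # pixel -> ray in the camera frame
    ray_body = R_cam_to_body @ ray_cam                   # rotate into the shared body frame
    return ray_body / np.linalg.norm(ray_body)           # unit line-of-sight vector

# Identical pinhole intrinsics assumed for all three apertures (illustrative values).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Forward camera mounting: camera z (optical axis) -> body x, camera x -> body y,
# camera y -> body z (body frame: x forward, y right, z down).
R_FWD = np.array([[0.0, 0.0, 1.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])

def yaw(deg):
    """Rotation about the body z (down) axis."""
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

# Three apertures with optical axes separated by 90 degrees.
apertures = {
    "forward": R_FWD,
    "right":   yaw(90.0) @ R_FWD,
    "left":    yaw(-90.0) @ R_FWD,
}

# A feature tracked in any aperture maps to a point on the same unit sphere, so
# optical flow from all apertures is expressed in one angle-angle space.
for name, R_cb in apertures.items():
    los = pixel_to_unit_sphere((400.0, 260.0), K, R_cb)
    print(f"{name:8s} line of sight (body frame): {np.round(los, 3)}")
```

Once every feature is expressed as a unit line-of-sight vector in the same frame, measurements from all apertures enter the fusion on an equal footing, which is the representation on which the motion constraints and INS coupling discussed in the paper are built.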
Published in: Proceedings of the 22nd International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS 2009)
September 22 - 25, 2009
Savannah International Convention Center
Savannah, GA
Pages: 759 - 766
Cite this article: Soloviev, A., Touma, J., Klausutis, T., Miller, M., Rutkowski, A., Fontaine, K., "Integrated Multi-Aperture Sensor and Navigation Fusion," Proceedings of the 22nd International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS 2009), Savannah, GA, September 2009, pp. 759-766.