
Session B9: Complementary PNT: Vision Aided/Optical Ground

Visual-Inertial Navigation in the Dark
Rich Madison, Olegs Mise, Thales Visionix
Location: Ballroom B
Date/Time: Thursday, Jun. 15, 10:35 a.m.

Soldier lethality and survivability are predicated on situation awareness, which consists substantially of Soldiers knowing where they are and where their friends are. This awareness in turn requires reliable PNT (Position, Navigation, and Timing). GPS/INS (Global Positioning System / Inertial Navigation System) is the gold standard for PNT, but GPS is not always available on the modern battlefield. Indoor, subterranean, dense urban, and even forest environments can deny GPS to the Soldier or create artifacts such as multipath reflections that make GPS less reliable. A near-peer threat can also jam the GPS signal, creating a GPS-degraded environment. Coincidentally, these same environments typically provide nearby visual texture that is well suited to visual-inertial navigation. Thales Visionix has sought to assure PNT for the dismounted Soldier by supplementing GPS with visual-inertial navigation based on its commercial motion-tracking technology. The tracker monitors its own position and orientation by fusing measurements from an IMU (Inertial Measurement Unit), a daylight camera, and optional additional sensors. It is a low-SWaP (Size, Weight, and Power) device designed to provide head tracking to drive augmented reality in unstructured environments. It can instead be body-mounted, or potentially mounted on an autonomous vehicle, to monitor just a user's (or a vehicle's) position, allowing PNT to continue while GPS is unavailable. However, the tracker relies on a daylight camera, limiting its utility to daylight or lit indoor applications.
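As context for readers less familiar with visual-inertial navigation, the sketch below illustrates, in Python, the basic idea described above: high-rate inertial propagation whose drift is periodically corrected by position fixes derived from a camera. It is a deliberately simplified, loosely coupled loop, not Thales Visionix's tracker algorithm; the fixed blending gain, sample rates, and synthetic signals are all assumptions for illustration.

```python
# Minimal sketch of loosely coupled inertial/visual position fusion (illustrative only).
# The IMU propagates a position/velocity state at high rate; a slower visual position
# fix pulls the accumulated drift back toward the measurement with a fixed gain.
import numpy as np

DT_IMU = 0.01   # IMU sample period, s (assumed 100 Hz)
GAIN = 0.3      # blending gain for visual corrections (hand-tuned placeholder)

def fuse(imu_accel, visual_fixes):
    """Propagate position with IMU acceleration; correct with sparse visual fixes.

    imu_accel: (N, 3) accelerations, assumed already rotated to the navigation frame
               and gravity-compensated (this hides the attitude filter entirely).
    visual_fixes: dict mapping IMU sample index -> (3,) position measurement.
    Returns the (N, 3) fused position history.
    """
    pos = np.zeros(3)
    vel = np.zeros(3)
    history = np.empty((len(imu_accel), 3))
    for k, a in enumerate(imu_accel):
        # Inertial propagation (simple Euler integration).
        vel += a * DT_IMU
        pos += vel * DT_IMU
        # Complementary-style correction when a visual position fix is available.
        if k in visual_fixes:
            pos += GAIN * (visual_fixes[k] - pos)
        history[k] = pos
    return history

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 1000
    accel = rng.normal(0.0, 0.02, size=(n, 3))         # noisy "stationary" IMU
    fixes = {k: np.zeros(3) for k in range(0, n, 50)}   # visual fixes report no motion
    with_fixes = fuse(accel, fixes)
    imu_only = fuse(accel, {})
    print("final error with visual fixes:", np.linalg.norm(with_fixes[-1]))
    print("final error, IMU only        :", np.linalg.norm(imu_only[-1]))
```

Running the toy example shows the characteristic behavior the abstract relies on: unaided inertial integration drifts without bound, while even infrequent visual fixes keep the position error bounded.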
We investigated whether the navigation approach used by this motion tracker could also be used to navigate at night, or in low-light indoor scenarios, by replacing the daylight camera with a night vision camera. We developed experimental sensor heads that collect inertial measurements and daylight video from the existing sensor, as well as video from night vision cameras, so that each head spans multiple camera modalities. We collected video and inertial data while walking with these sensor heads through different visual environments (office, forest, garden, parking lot, desert, and a U.S. Army test site) under different lighting conditions (lights on, lights off, mid-day, late afternoon, early night, late night). We then compared the performance of various visual and visual-inertial navigation algorithms operating on monocular and stereo video from the different imaging modalities.
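The abstract does not state which accuracy metric the comparison uses; a common choice for pedestrian navigation runs is end-to-end drift expressed as a percentage of distance traveled against a reference trajectory. The short Python sketch below illustrates that metric with entirely hypothetical trajectories; it is not the evaluation code used in this work.

```python
# Illustrative drift metric: final position error as a percentage of distance traveled.
import numpy as np

def drift_percent(estimated, reference):
    """Return final position error as a percentage of the reference path length."""
    estimated = np.asarray(estimated, dtype=float)
    reference = np.asarray(reference, dtype=float)
    path_length = np.sum(np.linalg.norm(np.diff(reference, axis=0), axis=1))
    final_error = np.linalg.norm(estimated[-1] - reference[-1])
    return 100.0 * final_error / path_length

if __name__ == "__main__":
    # Hypothetical reference: a 100 m straight-line walk sampled at 1001 points.
    t = np.linspace(0.0, 100.0, 1001)
    ref = np.column_stack([t, np.zeros_like(t), np.zeros_like(t)])
    # Hypothetical estimate that drifts slowly in x and y.
    est = ref + np.column_stack([0.001 * t, 0.002 * t, np.zeros_like(t)])
    print(f"drift: {drift_percent(est, ref):.2f}% of distance traveled")
```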
This work provided insight into which combinations of imaging modality and algorithm are suitable for low-light navigation in each tested environment, and why. We identified trends in these results and implemented algorithm modifications to improve navigation accuracy. The presentation will review the results, the trends, and the modifications, and will offer several conclusions about the potential of visual-inertial sensors and algorithms to monitor a user's position, with low drift, in both lit and unlit environments. This will help inform discussion of what role visual-inertial navigation can play as an aiding source for GPS in providing Assured Positioning, Navigation, and Timing.
We will continue the quest for nighttime visual-inertial navigation accuracy through improved algorithms, additional imaging modalities, and integration into fielded systems. The presentation will suggest specific directions for continued research.


