Event-Based Vision and Factor Graph-Based Approach for Sensor Geolocation from Observations of an Orbiting Satellite
Kaleb Nelson, Clark Taylor, and Rachel Oliver, Air Force Institute of Technology
Location: Beacon A
This work proposes a novel method of self-positioning that leverages passive optical data of stars and satellites collected with an event camera. While prior work has used passive EO or IR cameras to perform self-positioning [1,2], to our knowledge this is the first work using event cameras [3,4]. Event cameras, otherwise known as neuromorphic cameras, are a new type of camera that, rather than reporting the “brightness” of an object like a traditional camera, reports changes in brightness. These changes are reported asynchronously for each pixel, leading to precise measurements of when movement occurs in the field of view. This novel method of sensing has several advantages over traditional cameras, including low power consumption and a much higher dynamic range. Of particular importance for self-positioning using celestial objects is the increased sensitivity to dim objects even against a bright background, which means stars and satellites may be detected during daytime with a much wider field of view than was previously achievable with traditional sensors.
In this paper, we will not evaluate the effectiveness of event cameras in comparison with traditional cameras, but will instead introduce an algorithm that uses event camera inputs to perform self-localization. The algorithm exploits the distinctive characteristics of event-based vision to discriminate between stellar and satellite objects within the captured optical data. The intended scenario relies on two sensors: a wide field of view sensor that captures where the expected satellite is in the sky (the satellite is identified using a rough estimated latitude and longitude together with its position calculated from its two-line element set, or TLE), and a second, narrow field of view sensor that captures the target satellite with more precision relative to the surrounding stars. This work focuses on the algorithm for processing the event-based vision data from the narrow field of view sensor, which stares at a patch of sky through which the target satellite will pass. The core premise of this approach lies in describing the relationship between the Earth-centric position of the satellite, calculated by propagating the orbit from a recent TLE, and its corresponding pixel location on the focal plane of the optical sensor.
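To make the TLE-to-pixel relationship concrete, the sketch below propagates a TLE with the SGP4 model and projects the resulting satellite position onto the focal plane through a simple pinhole model. It is a minimal illustration under stated assumptions, not the paper's implementation: the `sgp4` Python package, the low-precision GMST formula, the camera attitude `R_cam_eci` (assumed to come from the star-field solution), and the focal length and principal point values are all assumptions made for the sake of the example.

```python
"""Sketch: propagate a TLE and predict the satellite's pixel location (assumptions noted above)."""
import numpy as np
from sgp4.api import Satrec, jday

def geodetic_to_ecef(lat_deg, lon_deg, alt_m):
    """WGS-84 geodetic coordinates -> ECEF position in kilometers."""
    a, f = 6378.137, 1.0 / 298.257223563          # semi-major axis (km), flattening
    e2 = f * (2.0 - f)
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    N = a / np.sqrt(1.0 - e2 * np.sin(lat) ** 2)
    h = alt_m / 1000.0
    return np.array([(N + h) * np.cos(lat) * np.cos(lon),
                     (N + h) * np.cos(lat) * np.sin(lon),
                     (N * (1.0 - e2) + h) * np.sin(lat)])

def gmst_rad(jd, fr):
    """Greenwich mean sidereal time (radians), low-precision formula."""
    d = (jd - 2451545.0) + fr
    return np.radians((280.46061837 + 360.98564736629 * d) % 360.0)

def predict_pixel(tle1, tle2, lat_deg, lon_deg, alt_m, utc, R_cam_eci,
                  f_pix=8000.0, cx=640.0, cy=512.0):
    """Predicted (u, v) pixel of the satellite at time `utc` = (y, mo, d, h, mi, s)."""
    sat = Satrec.twoline2rv(tle1, tle2)
    jd, fr = jday(*utc)
    err, r_sat_teme, _ = sat.sgp4(jd, fr)         # TEME frame, km (treated as ECI here)
    if err != 0:
        raise RuntimeError(f"SGP4 error code {err}")

    # Site position: ECEF rotated through GMST gives an approximate ECI position.
    theta = gmst_rad(jd, fr)
    c, s = np.cos(theta), np.sin(theta)
    R_eci_ecef = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    r_site_eci = R_eci_ecef @ geodetic_to_ecef(lat_deg, lon_deg, alt_m)

    # Line of sight rotated into the camera frame, then pinhole projection.
    los_eci = np.asarray(r_sat_teme) - r_site_eci
    los_cam = R_cam_eci @ (los_eci / np.linalg.norm(los_eci))
    u = f_pix * los_cam[0] / los_cam[2] + cx
    v = f_pix * los_cam[1] / los_cam[2] + cy
    return u, v
```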
This approach capitalizes on the unique attributes of event-based vision by identifying and tracking stars and satellites within the visual field using a RANSAC (random sample consensus) based method that finds events which are linearly aligned in the spatiotemporal data. By utilizing known stellar positions, we employ a legacy celestial navigation approach to derive the right ascension and declination of a reference pixel, effectively orienting the focal plane relative to the celestial sphere. As the stars move across the pixels (due to the Earth's rotation), the reference pixel address is updated accordingly until a new star-tracking solution is provided. The satellite is identified in the event-based sensor data through its spatiotemporal velocity.
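The sketch below illustrates one way such a RANSAC step could look: each candidate track is parameterized by a constant pixel velocity, two randomly chosen events propose a velocity, and inliers are events that fall near the predicted (x, y) position at their timestamps. The array layout, thresholds, and the velocity-ratio test for separating the satellite from the star field are illustrative assumptions, not values from the paper.

```python
"""Sketch: RANSAC line fit to events in (x, y, t) space (assumptions noted above)."""
import numpy as np

def ransac_track(events, n_iters=500, inlier_tol=1.5, seed=None):
    """Fit one straight track to an (N, 3) array of (x_pixel, y_pixel, t_seconds).

    Returns the inlier mask and the apparent pixel velocity (vx, vy) in px/s.
    """
    rng = np.random.default_rng(seed)
    pts = np.asarray(events, dtype=float)
    best_mask = np.zeros(len(pts), dtype=bool)
    best_vel = np.zeros(2)
    for _ in range(n_iters):
        i, j = rng.choice(len(pts), size=2, replace=False)
        dt = pts[j, 2] - pts[i, 2]
        if abs(dt) < 1e-6:
            continue
        vel = (pts[j, :2] - pts[i, :2]) / dt                    # candidate velocity
        # Predict each event's (x, y) from its timestamp and score the residuals.
        pred = pts[i, :2] + np.outer(pts[:, 2] - pts[i, 2], vel)
        resid = np.linalg.norm(pts[:, :2] - pred, axis=1)
        mask = resid < inlier_tol
        if mask.sum() > best_mask.sum():
            best_mask, best_vel = mask, vel
    return best_mask, best_vel

def is_satellite(vel_px_per_s, star_rate_px_per_s, factor=5.0):
    """Illustrative test: stars drift near the sidereal rate, a satellite moves much faster."""
    return np.linalg.norm(vel_px_per_s) > factor * star_rate_px_per_s
```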
A factor graph framework is then employed to integrate the estimated latitude and longitude of the sensor, the satellite's position in the Earth-Centered Inertial (ECI) frame, the known sensor altitude, and the event camera’s observed pixel location of the satellite. This enables us to refine our estimate of the optical sensor's latitude and longitude.
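As a rough illustration of the estimation problem, the sketch below poses the same factors (a prior on the rough latitude/longitude and pixel reprojection residuals for each satellite observation at the known altitude) as a small nonlinear least-squares problem. A factor-graph library such as GTSAM is what the paper describes; `scipy.optimize.least_squares` is used here purely as a stand-in, `predict_pixel` refers to the hypothetical projection helper from the earlier sketch, and the noise values are placeholders.

```python
"""Sketch: refine latitude/longitude from pixel observations of the satellite (stand-in solver)."""
import numpy as np
from scipy.optimize import least_squares
from tle_projection import predict_pixel   # hypothetical module holding the earlier projection sketch

def residuals(x, obs, alt_m, lat0, lon0, prior_sigma_deg=1.0, pix_sigma=0.5):
    """Stacked, whitened residuals for the prior and pixel-observation factors."""
    lat, lon = x
    res = [(lat - lat0) / prior_sigma_deg,            # prior factor on latitude
           (lon - lon0) / prior_sigma_deg]            # prior factor on longitude
    for tle1, tle2, utc, R_cam_eci, u_meas, v_meas in obs:
        u, v = predict_pixel(tle1, tle2, lat, lon, alt_m, utc, R_cam_eci)
        res += [(u - u_meas) / pix_sigma,             # pixel reprojection factors
                (v - v_meas) / pix_sigma]
    return np.asarray(res)

def refine_position(obs, alt_m, lat0, lon0):
    """Refined (latitude, longitude) in degrees from a rough initial guess."""
    sol = least_squares(residuals, x0=[lat0, lon0],
                        args=(obs, alt_m, lat0, lon0))
    return sol.x
```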
We anticipate presenting results showcasing the efficacy of this approach in achieving accurate self-localization, even in challenging scenarios marked by high dynamic range and low light levels. This capability has significant implications for GPS-denied self-localization.
[1] Bowditch, N. (2024). American Practical Navigator (No. 9). National Geospatial-Intelligence Agency.
[2] Liu, C., Yang, F., & Ariyur, K. B. (2016). Interval-based celestial geolocation using a camera array. IEEE Sensors Journal, 16(15), 5964-5973.
[3] Gallego, G., et al. (2020). Event-based vision: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(1), 154-180.
[4] Chakravarthi, B., et al. (2024). Recent Event Camera Innovations: A Survey. arXiv preprint arXiv:2408.13627.