
Session B2b: Navigation of Uncrewed Aerial Vehicles

Fusing GNN-enhanced GNSS with Neural Radiance Fields for UAV Navigation
Adyasha Mohanty and Grace Gao, Stanford University
Location: Beacon A
Date/Time: Wednesday, Jan. 24, 10:40 a.m.

Global Navigation Satellite System (GNSS) receivers are commonly used sensors for Unmanned Aerial Vehicle (UAV) localization. However, GNSS-based localization degrades at low altitudes, where multipath interference and Non-Line-of-Sight (NLOS) signals reduce positioning accuracy. To address these challenges, we propose a sensor fusion framework that integrates Neural Radiance Fields (NeRFs) with Graph Neural Network (GNN)-enhanced GNSS positioning. NeRFs provide a compact, continuous, and detailed 3D representation of urban landscapes, offering photorealistic views that enable precise map matching and environmental reconstruction. In our framework, we train a NeRF model to accurately represent the environment in which the UAV is flying and use the discrepancy between UAV-captured images and NeRF-rendered scenes to compute visual corrections. These corrections are then combined with the GNSS corrections learned by the GNN to predict the UAV's absolute position. The proposed approach is validated on real-world UAV datasets collected in urban environments with emulated GNSS pseudorange measurements. The results demonstrate that our framework improves UAV localization accuracy over traditional GNSS- and camera-based positioning methods under different measurement noise conditions.
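The abstract does not specify how the visual and GNSS corrections are computed or combined, so the following is only a minimal sketch of the general idea under assumed forms: a visual correction obtained by linearized least squares on the photometric residual between the captured and NeRF-rendered images (the Jacobian of pixel intensities with respect to position is assumed given), and a weighted combination with a GNN-learned GNSS correction. All function names and the weighting scheme are hypothetical, not the authors' method.

```python
import numpy as np

def visual_correction(captured, rendered, jacobian):
    """Hypothetical visual correction from the image discrepancy.

    captured, rendered: image arrays of the same shape.
    jacobian: (num_pixels, 3) sensitivity of pixel intensities to position
              (assumed supplied, e.g., by differentiating the NeRF render).
    Returns a 3-vector position correction minimizing the photometric residual.
    """
    residual = (captured - rendered).ravel()
    # Linearized least-squares update: delta = argmin ||J*delta - r||^2
    delta, *_ = np.linalg.lstsq(jacobian, residual, rcond=None)
    return delta

def fuse_corrections(gnss_position, gnss_correction, vis_correction, w_vis=0.5):
    """Hypothetical fusion: weighted sum of the GNN-learned GNSS correction
    and the NeRF-derived visual correction, applied to the GNSS position."""
    return gnss_position + (1.0 - w_vis) * gnss_correction + w_vis * vis_correction
```

A weighted sum is the simplest stand-in here; in practice the fusion weights could themselves be learned or set from per-sensor noise estimates.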


