Abstract: Accurate and reliable localization, a crucial component of autonomous navigation, is often achieved by matching sensor data with georeferenced maps. This matching process requires map representations that sufficiently capture the environment's complexity while remaining efficient and scalable for real-time use. However, maps are often constructed under a single weather and lighting condition that may not match the distribution of conditions the autonomous vehicle must navigate through. In recent years, breakthrough techniques for generating realistic 3D reconstructions from images have emerged from the computer vision and graphics communities. Simultaneously, the same communities have developed generative solutions for editing images with language instructions, allowing distributional shifts such as weather and lighting to be simulated. In this work, we generate a high-quality 3D map from drone and phone imagery, combining multiple techniques from the recent 3D Gaussian Splatting (3DGS) literature. We implement a pose estimation framework between the 3DGS map and query images using learned feature extraction and matching. We use a language-guided diffusion model to edit real images of the scene and analyze the extent to which learned features handle these distributional shifts.
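Purely as illustration of the pose estimation step described in the abstract, the sketch below matches a query photograph against a rendering of the map and recovers the camera pose with PnP + RANSAC. It is a minimal sketch under stated assumptions: LoFTR (via kornia) stands in for the learned feature extractor and matcher, and a depth image rendered from the 3DGS map is assumed available; the function name estimate_pose and all of its inputs are illustrative, not the paper's implementation.

```python
# Illustrative sketch only: match a query image against a rendering of the
# 3DGS map, lift the rendered keypoints to 3D via the rendered depth, and
# recover the query camera pose with PnP + RANSAC. LoFTR stands in for the
# learned feature pipeline; the paper's exact extractor/matcher may differ.
import cv2
import numpy as np
import torch
import kornia.feature as KF

def estimate_pose(query_gray, render_gray, render_depth, K, T_wc_render):
    """query_gray, render_gray: HxW float32 grayscale images in [0, 1].
    render_depth: HxW metric depth rendered from the 3DGS map (assumed available).
    K: 3x3 camera intrinsics. T_wc_render: 4x4 camera-to-world pose of the render."""
    matcher = KF.LoFTR(pretrained="outdoor")
    to_t = lambda im: torch.from_numpy(im)[None, None]  # (1, 1, H, W) tensor
    with torch.no_grad():
        out = matcher({"image0": to_t(query_gray), "image1": to_t(render_gray)})
    kps_q = out["keypoints0"].numpy()  # matched pixels in the query image
    kps_r = out["keypoints1"].numpy()  # matched pixels in the map rendering

    # Back-project rendered keypoints to 3D map points: X_cam = d * K^-1 [u, v, 1]^T.
    uv = kps_r.round().astype(int)
    d = render_depth[uv[:, 1], uv[:, 0]]
    valid = d > 0  # keep only keypoints with valid rendered depth
    rays = np.linalg.inv(K) @ np.hstack([kps_r[valid], np.ones((valid.sum(), 1))]).T
    X_cam = (rays * d[valid]).T
    X_world = (T_wc_render[:3, :3] @ X_cam.T).T + T_wc_render[:3, 3]

    # 2D-3D PnP with RANSAC rejects outlier matches; rvec/tvec map world
    # points into the query camera frame (OpenCV convention).
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        X_world.astype(np.float64), kps_q[valid].astype(np.float64),
        K.astype(np.float64), None, reprojectionError=3.0)
    return ok, rvec, tvec, inliers
```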
Published in: Proceedings of the 38th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2025), September 8-12, 2025, Hilton Baltimore Inner Harbor, Baltimore, Maryland
Pages: 1975-1985
Cite this article: Neamati, Daniel, Dai, Adam, Partha, Mira, Legel, Lance, Gao, Grace, "Distributional Robustness of Learned Features for 3D Map Localization," Proceedings of the 38th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2025), Baltimore, Maryland, September 2025, pp. 1975-1985. https://doi.org/10.33012/2025.20446