Abstract: For safe urban navigation, we need a framework for GNSS-camera fusion that provides reliable positioning in the presence of measurement faults. Traditional fault mitigation methods either provide only point positioning estimates or detect faults and estimate the state sequentially. Our prior particle filtering framework performed state estimation and fault mitigation jointly under a single optimization framework, but it loosely coupled the GNSS and camera measurements. Furthermore, it derived the camera measurement likelihood via map-matching, which was computationally expensive at inference time, required access to a database of images, and was less robust to vision faults. In this work, we tightly couple GNSS and camera measurements within a particle filtering framework while mitigating measurement faults with an M-estimator. We also propose a data-driven method using Convolutional Neural Networks (CNNs) that is fast at inference time and more robust to vision faults. We validate our framework on a real-world urban driving dataset. Our method achieves lower positioning error than baseline methods under multiple GNSS and camera measurement faults.
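As a rough illustration only, and not the authors' implementation, the sketch below shows what one tightly coupled particle filter measurement update of this kind might look like: each particle's weight is updated directly from raw GNSS pseudorange residuals, down-weighted per channel by a Huber M-estimator to suppress faulty signals, and combined with a camera log-likelihood supplied by a learned model. All names and values here (huber_weight, sigma_rho, camera_log_lik) are illustrative assumptions.

```python
import numpy as np

def huber_weight(residual, k=1.345):
    # Huber M-estimator weight: 1 for small residuals, k/|r| for outliers,
    # so faulty measurements are down-weighted rather than hard-rejected.
    return k / np.maximum(np.abs(residual), k)

def measurement_update(particles, weights, pseudoranges, sat_positions,
                       camera_log_lik, sigma_rho=5.0):
    # particles: (N, 4) array of [x, y, z, receiver clock bias] hypotheses.
    # pseudoranges: (M,) raw GNSS pseudoranges; sat_positions: (M, 3).
    # camera_log_lik: callable scoring a particle state against the camera
    # measurement (e.g., the output of a CNN-based likelihood model).
    log_w = np.log(weights)
    for i, p in enumerate(particles):
        # Predicted pseudorange from the particle position to each satellite.
        pred = np.linalg.norm(sat_positions - p[:3], axis=1) + p[3]
        res = (pseudoranges - pred) / sigma_rho       # normalized residuals
        w = huber_weight(res)                         # robust per-channel weights
        log_w[i] += -0.5 * np.sum(w * res**2)         # robustified GNSS term
        log_w[i] += camera_log_lik(p)                 # tightly coupled vision term
    log_w -= log_w.max()                              # stabilize before exponentiating
    new_w = np.exp(log_w)
    return new_w / new_w.sum()
```

Because the pseudoranges enter the update directly, rather than as a pre-computed GNSS position fix, the filter remains informative even when too few satellites are visible for a standalone fix, which is the usual motivation for tight coupling.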
Published in:
Proceedings of the 34th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2021), September 20-24, 2021, Union Station Hotel, St. Louis, Missouri
Pages: 2646-2655
Cite this article:
Mohanty, Adyasha, Gao, Grace, "A Particle Filtering Framework for Tight GNSS-Camera Fusion using Convolutional Neural Networks," Proceedings of the 34th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2021), St. Louis, Missouri, September 2021, pp. 2646-2655. https://doi.org/10.33012/2021.17940