AI-Assisted Multi-Sensor Fusion for Enhanced Autonomous Vehicle Navigation

Jorge Morán, Philipp Bohlig, Robert Bensch, Luka Sachsse, Sai Parimi, Frieder Schmid, Julian Dey, Olena Horokh, Jan Fischer

Abstract: Autonomous vehicles demand navigation systems that deliver high levels of accuracy, availability, and integrity under all operating conditions. Urban environments pose particular challenges, with GNSS performance degraded by multipath, non-line-of-sight (NLOS) conditions, and intentional or unintentional interference. To meet these requirements, multi-sensor fusion has become the de facto approach, combining GNSS with inertial and perception sensors. Although many sensor fusion techniques have been proposed to address these issues, few integrate advanced artificial intelligence (AI) methods to enhance the performance and reliability of each subsystem. A core contribution of our work is therefore the integration of multiple AI approaches to improve the robustness and accuracy of the system in challenging urban scenarios. The system follows a semi-tightly coupled architecture: at the core, a tightly coupled multi-antenna GNSS/INS subsystem operates alongside two parallel modules based on LiDAR-inertial and visual-inertial SLAM. These subsystems are integrated in a federated Kalman filter, which loosely combines their outputs to improve reliability and robustness. AI modules are deployed at several levels of this framework to address limitations of conventional methods. For GNSS, three independent neural networks are employed to detect and mitigate multipath and NLOS signals, to identify and counteract jamming and spoofing, and to improve ambiguity resolution. Beyond GNSS enhancement, the paper addresses challenges related to the inertial navigation component: an AI-based adaptive calibration approach is proposed that refines IMU calibration and dynamically adjusts the confidence levels of pseudo-measurements used in state estimation.
The proposed system combines both elements in a Multitask Deep Neural Network (MTDNN) architecture that feeds additional sensor data (such as odometry, camera, and LiDAR information) into the network alongside the raw IMU signals. For LiDAR- and visual-based localization, an AI-based method detects and removes dynamic features, ensuring that pose estimation relies on static elements of the environment and improving accuracy in dense traffic scenarios. A distinctive aspect of this work is its focus on real-time implementation under resource constraints: the neural network architectures and SLAM modules have been optimized to run on embedded hardware, ensuring feasibility for deployment in automotive platforms.
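To make the federated architecture concrete, the sketch below shows one common way a master filter can loosely combine the outputs of several local filters (e.g. GNSS/INS, LiDAR-inertial, and visual-inertial): information-weighted fusion of each local state estimate and its covariance. This is a minimal illustration of the general federated-fusion idea, not the paper's actual implementation; the function name, the 2-D position states, and the assumption of negligible cross-correlation between local filters are all simplifications introduced here.

```python
import numpy as np

def fuse_local_filters(states, covariances):
    """Information-weighted fusion of local filter outputs.

    Combines state estimates x_i with covariances P_i from several
    local filters into one master estimate, weighting each estimate
    by its information matrix P_i^{-1}. Assumes the local estimates
    are uncorrelated, which a real federated filter must account for.
    """
    info_sum = np.zeros_like(covariances[0])
    info_state = np.zeros_like(states[0])
    for x, P in zip(states, covariances):
        P_inv = np.linalg.inv(P)
        info_sum += P_inv          # accumulate information matrices
        info_state += P_inv @ x    # accumulate information-weighted states
    P_fused = np.linalg.inv(info_sum)
    x_fused = P_fused @ info_state
    return x_fused, P_fused

# Example: three 2-D position estimates with differing confidence.
x_gnss = np.array([1.0, 2.0]); P_gnss = np.diag([0.5, 0.5])
x_lidar = np.array([1.2, 1.9]); P_lidar = np.diag([0.2, 0.2])
x_visual = np.array([0.9, 2.1]); P_visual = np.diag([1.0, 1.0])

x_fused, P_fused = fuse_local_filters(
    [x_gnss, x_lidar, x_visual],
    [P_gnss, P_lidar, P_visual],
)
```

Note how the fused covariance is smaller than any single local covariance, and the fused state is pulled toward the most confident (LiDAR-inertial) estimate, which is the behavior that lets the master filter down-weight a degraded subsystem.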
Published in: Proceedings of the 38th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2025)
September 8 - 12, 2025
Hilton Baltimore Inner Harbor
Baltimore, Maryland
Pages: 1024 - 1037
Cite this article: Morán, Jorge, Bohlig, Philipp, Bensch, Robert, Sachsse, Luka, Parimi, Sai, Schmid, Frieder, Dey, Julian, Horokh, Olena, Fischer, Jan, "AI-Assisted Multi-Sensor Fusion for Enhanced Autonomous Vehicle Navigation," Proceedings of the 38th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2025), Baltimore, Maryland, September 2025, pp. 1024-1037. https://doi.org/10.33012/2025.20235