Sensor fusion is critical in localization and positioning problems: it combines measurements from different sensors to improve estimation accuracy and increase system redundancy and integrity. The Kalman filter (KF) is one of the most commonly used algorithms in sensor fusion. As a model-based algorithm, the KF leverages known system models and provides reasonable sensor fusion solutions. However, KF performance is limited by noise model assumptions, tuning difficulties, and non-linearities in the system. Multi-sensor systems provide a large quantity of data, which makes learning-based algorithms feasible, and it has been shown that learning-based algorithms can achieve promising accuracy without requiring precise knowledge of the system, needing only a few generic assumptions about it. On the other hand, learning-based algorithms require substantial training effort and are susceptible to errors in the data. In this paper, we hybridize model-based and learning-based algorithms. A deep neural network is integrated into the KF structure and assists the KF in learning the optimal Kalman gain from data. We test the resulting "deep learning supported KF" and compare the results with the standard KF on simulated datasets of 2D linear motion and 2D circular motion. The hybrid algorithm performs on par with the KF when the KF is optimal (linear case) and improves position and velocity estimation when the KF models are known to deviate from the simulated motion (circular case).
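To make the hybridization idea concrete, the sketch below contrasts a standard KF update, where the Kalman gain is computed analytically from the covariance, with a hybrid update in which the gain is produced by a learned model. This is an illustrative reconstruction, not the paper's implementation: the constant-velocity model, the matrix values, and the `gain_net` hook (here a fixed placeholder map instead of a trained deep network) are all assumptions.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Standard KF prediction step."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kf_update_analytical(x, P, z, H, R):
    """Standard KF update with the analytically derived gain."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain from the model
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def kf_update_learned(x, P, z, H, gain_net):
    """Hybrid update: the gain comes from a learned model, not from P.

    In the paper a deep neural network plays the role of `gain_net`;
    here any callable mapping the innovation to a gain matrix works.
    """
    innovation = z - H @ x
    K = gain_net(innovation)             # data-driven Kalman gain
    x = x + K @ innovation
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Illustrative 2D constant-velocity model, state [px, py, vx, vy];
# matrix values are placeholders, not the paper's parameters.
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # position-only measurements
Q = 0.01 * np.eye(4)
R = 0.10 * np.eye(2)

# One predict/update cycle with the analytical gain.
x, P = kf_predict(np.zeros(4), np.eye(4), F, Q)
x, P = kf_update_analytical(x, P, np.array([1.0, 2.0]), H, R)

# Same measurement through the hybrid update, using a fixed
# placeholder "network" that returns 0.5 * H.T as the gain.
x2, P2 = kf_update_learned(np.zeros(4), np.eye(4),
                           np.array([1.0, 2.0]), H,
                           lambda innov: 0.5 * H.T)
```

The only structural change in the hybrid filter is the source of `K`; the rest of the KF recursion is untouched, which is what lets the hybrid match the standard KF in the linear case while adapting when the assumed models deviate from the true motion.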