
Session D1: Robotic and Indoor Navigation

Graphical Kalman Filter
Thomas Burgess and Boxian Dong, indoo.rs GmbH, Austria; Hans-Bernd Neuner, TU Wien, Austria
Location: Spyglass

This work introduces a Graphical Model Kalman Filter (GMKF) for mobile phone indoor navigation applications and shows that it outperforms the most commonly used approach in robotic and mobile applications, the Extended Kalman Filter (EKF).
Urban and Indoor Positioning Systems (IPS) for mobile phones cannot rely on Global Navigation Satellite Systems (GNSS) because of severe signal obstruction. Instead, it is common to use a hybrid approach that combines motion estimates, delivered as step events from an on-device Pedestrian Dead Reckoning (PDR) system, with position observations based on radio Received Signal Strength Indicator (RSSI) methods. The motion estimates are used as state transitions and the positions as observations in a Kalman Filter (KF) to produce a smooth combined trajectory. Because a regular KF cannot handle the nonlinear state transitions and observation models that arise in a typical IPS, an EKF is often employed instead, with the equations linearized around the current estimate. This is adequate for a well-behaved system. In practice, however, the EKF underperforms because localization is affected by systematic errors such as heading drift in PDR and frequent large position outliers in RSSI caused by noise and missed signals.
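As a point of reference for the baseline described above, a minimal EKF sketch for fusing PDR step events with RSSI position fixes might look as follows. The 2D state [x, y, heading], the control inputs, and all noise matrices are illustrative assumptions, not the models or values used in the paper.

```python
# Minimal EKF sketch for PDR + RSSI fusion (illustrative assumptions only).
import numpy as np

def ekf_predict(x, P, step_len, d_heading, Q):
    """Nonlinear PDR transition: advance one step along the updated heading."""
    px, py, h = x
    h_new = h + d_heading
    x_pred = np.array([px + step_len * np.cos(h_new),
                       py + step_len * np.sin(h_new),
                       h_new])
    # Jacobian of the transition, linearized at the current estimate
    F = np.array([[1.0, 0.0, -step_len * np.sin(h_new)],
                  [0.0, 1.0,  step_len * np.cos(h_new)],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, R):
    """RSSI position fix observes the first two state entries."""
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P
```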
Through simulations we show that information in the EKF state tends to decay quickly when the system is highly nonlinear and the observation noise is large. The next state is then dominated by the previous observation, making the system highly susceptible to observation outliers. An improvement over the vanilla EKF is the Adaptive Extended Kalman Filter (AEKF), which estimates the observation covariance and state degradation from historical observations and state information in the filter. This not only provides a more realistic estimate of the observation errors, but also improves the long-term dependencies of the filter. Because the transition add-on noise is determined from historical states, it can normally be kept lower than its counterpart in the EKF, which must be large enough to keep the filter stable; higher noise leads to a more rapid decay of information in the state. The applicability of the AEKF in these situations has been shown in previous work by the authors and others. However, the covariance estimation and the increased time dependency of the AEKF have limitations. The covariance estimation essentially acts as a low-pass filter that assumes constant covariance over time. This is often not the case in an IPS, where accuracy is location dependent and the motion is determined by the user. Furthermore, the additional time dependence is not enough for the IPS to recover from multiple outlier observations.
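One common way to realize the adaptive behavior described above is innovation-based covariance estimation over a sliding window. The sketch below is an assumption about how such an AEKF step could be implemented; the window length, clipping, and class interface are not taken from the paper.

```python
# Sketch of innovation-based adaptive estimation of the observation covariance
# (AEKF-style); window length and formulas are illustrative assumptions.
import numpy as np
from collections import deque

class AdaptiveObservationNoise:
    def __init__(self, window=20):
        self.innovations = deque(maxlen=window)

    def estimate_R(self, y, H, P):
        """Estimate R from recent innovations y = z - Hx and the predicted P."""
        self.innovations.append(np.outer(y, y))
        C = np.mean(self.innovations, axis=0)   # empirical innovation covariance
        R_hat = C - H @ P @ H.T                 # remove the state-predicted part
        # Keep the estimate positive definite by clipping its eigenvalues
        w, V = np.linalg.eigh(R_hat)
        return V @ np.diag(np.clip(w, 1e-6, None)) @ V.T
```

Because the estimate is an average over a window, it acts as the low-pass filter mentioned above: it adapts to slowly changing error levels but cannot follow abrupt, location-dependent changes in accuracy.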
To improve on the AEKF we introduce the GMKF. The GMKF increases accuracy by improving the long-term dependency of the filter and improves stability by limiting the update step size. By extending the Kalman observation from considering only the last state to also include several preceding states, the GMKF can retain information over a longer time. This is achieved by predicting the states surrounding the active state in each Kalman prediction-update cycle, propagating both backward and forward using the Kalman transitions and the active state. From the reconstructed surrounding states an observation is predicted, which together with the actual observations forms a graphical model. Through simulations we show that this method yields results similar to those of a backward-forward KF in a linear system. However, because of its simplifying Markov assumption, backward-forward filtering removes any dependency between non-adjacent states, whereas the GMKF preserves the full inter-state dependencies in the graphical model. This reconstructed graphical model is essentially the same as the graph constructed in graphSLAM.
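A rough sketch of the state reconstruction idea, as we read it, is shown below: starting from the active state, the transition is run backward through the recent step events to rebuild the preceding states, and an observation is predicted for every node of the local graph. Only the backward pass is sketched; the forward pass would apply the transition f from the reconstructed nodes. The function names f, f_inv, and h, and the assumption that the transition is invertible, are illustrative and not the authors' implementation.

```python
# Assumed sketch of reconstructing the states surrounding the active state.
# f(x, u) is the Kalman transition, f_inv(x, u) its (assumed) inverse,
# h(x) predicts an observation from a state.
def reconstruct_window(x_active, recent_controls, f, f_inv, h):
    states = [x_active]
    x = x_active
    for u in reversed(recent_controls):   # walk backward through past step events
        x = f_inv(x, u)                   # inverse transition
        states.insert(0, x)
    # Nodes of the local graphical model and their predicted observations
    return states, [h(s) for s in states]
```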
The GM-based observation construction poses a challenge for the Kalman update: it may converge to the wrong values if it escapes the linearized region of the EKF, and an ill-conditioned GM results in a very unstable filter. The iterative optimization used in graphSLAM could solve this problem, but it cannot estimate the covariance matrix analytically and is also too computationally expensive. Instead, the GMKF uses a Kalman update with a limited step size, which converges to the optimum of the local graphical model whenever the GM created by the filter is a good first guess of the final state. However, a poor filter state guess or observation outliers may cause the observations to drift away from the predictions. In those cases, the limited step size is achieved by adding artificial noise to the observation. This modified Kalman update keeps the filter stable in both cases. In the second case it actually improves the performance of the filter and makes it resilient to outliers. In the first case, which is rare and mostly caused by previous observations with correlated errors, the update causes the filter to converge slowly and weakens the long-term dependence inside the Kalman filter; this should be studied further in future work.
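A step-size limit realized through artificial observation noise could look roughly like the sketch below: if the state correction implied by a candidate update exceeds a bound, the observation covariance is inflated until the step shrinks below it. The bound, the inflation factor, and the iteration cap are assumptions, not the authors' values.

```python
# Sketch of a step-size-limited Kalman update via observation-noise inflation
# (max_step, inflate, and iters are illustrative assumptions).
import numpy as np

def limited_update(x, P, z, H, R, max_step=2.0, inflate=2.0, iters=10):
    for _ in range(iters):
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        dx = K @ y
        if np.linalg.norm(dx) <= max_step:
            break
        R = R * inflate   # add artificial observation noise, shrinking the step
    return x + dx, (np.eye(len(x)) - K @ H) @ P
```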
Evaluation and simulations show the advantage of the proposed Kalman filter over both the traditional EKF and the AEKF. Because of the introduced long-term dependence, the filter not only achieves better accuracy but is also completely immune to single outliers in the observations, and it can recover the true path to some extent even in the presence of correlated outliers (when the radio locations cluster around some wrong location). The step-size control of the Kalman update, based on manipulating the observation covariance, ensures filter stability, so that even in extremely ill-conditioned cases the filter converges toward the optimum.


