Title: Detection of Outliers in Navigation Sensor Measurements
Author(s): Sasha Draganov
Published in: Proceedings of IEEE/ION PLANS 2016
April 11 - 14, 2016
Hyatt Regency Hotel
Savannah, GA
Pages: 1001 - 1007
Cite this article: Draganov, Sasha, "Detection of Outliers in Navigation Sensor Measurements," Proceedings of IEEE/ION PLANS 2016, Savannah, GA, April 2016, pp. 1001-1007.
Abstract: Outlier detection, usually called measurement editing, is commonly used by data fusion algorithms. In a typical implementation, a measurement is accompanied by an estimate of its standard deviation. If the measurement residual exceeds some multiple of the standard deviation (e.g., 4), the editing algorithm rejects the measurement as an outlier. The standard approach provides no guidance for setting this threshold. A threshold that is too low rejects legitimate measurements, and the filter may get “stuck” in a wrong state. A threshold that is too high lets outliers in, degrading the quality of the solution. A modern navigation system integrates data from different sensors that have different error statistics, including the number and severity of outliers. A sensor-specific approach to treating outliers therefore becomes a necessity. For Gaussian statistics, large residuals are exponentially rare, and outliers are not an issue. Unfortunately, nature rarely follows Mr. Gauss; any hopes of salvaging the situation by invoking the Central Limit Theorem are crushed by the extremely slow convergence to a Gaussian at the tails. In practice, “fat tails” are quite common and are the root cause of solution errors due to outliers. In this paper, we present two new methods for detecting and treating outliers. These methods are consistent with the general philosophy of optimal fusion: process only the data that is needed, with weights that accurately reflect data error statistics. The first method uses the Pickands–Balkema–de Haan (PBdH) theorem to detect fat tails. For any particular sensor, we pre-process a large amount of data and estimate the statistics of the tail of the error distribution. We derived a formulation that translates the tail statistics into an actionable outlier-rejection algorithm and/or into a means of pre-processing measurements before they are fed into a navigation filter.
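A minimal sketch of the idea behind the first method, not the paper's actual formulation: conventional k-sigma measurement editing, plus a peaks-over-threshold fit of a Generalized Pareto tail. By the PBdH theorem, excesses over a high threshold are approximately GPD-distributed, so an estimated shape parameter greater than zero signals a fat tail. The function names, the 90th-percentile threshold choice, and the method-of-moments estimators are all illustrative assumptions.

```python
import numpy as np

def edit_measurement(residual, sigma, k=4.0):
    """Conventional measurement editing: accept only if |residual| <= k*sigma."""
    return abs(residual) <= k * sigma

def gpd_tail_fit(abs_residuals, quantile=0.90):
    """Peaks-over-threshold tail estimate (illustrative, not the paper's method).
    Fits a Generalized Pareto to excesses over a high threshold u using
    method-of-moments estimators for shape xi and scale beta."""
    u = np.quantile(abs_residuals, quantile)
    excesses = abs_residuals[abs_residuals > u] - u
    m, v = excesses.mean(), excesses.var(ddof=1)
    xi = 0.5 * (1.0 - m * m / v)        # xi > 0 indicates a fat (power-law) tail
    beta = 0.5 * m * (m * m / v + 1.0)  # GPD scale
    return u, xi, beta

# Synthetic residuals: Gaussian noise contaminated with a heavy-tailed component.
rng = np.random.default_rng(0)
r = np.concatenate([rng.normal(0.0, 1.0, 9000), 3.0 * rng.standard_t(2, 1000)])
u, xi, beta = gpd_tail_fit(np.abs(r))
print(f"threshold u={u:.2f}, shape xi={xi:.2f}, scale beta={beta:.2f}")
```

The estimated tail shape can then drive a sensor-specific editing threshold: the fatter the tail, the lower the multiple of sigma at which a residual should be rejected.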
In a simple case, the algorithm is similar to the conventional threshold for measurement editing; however, the magnitude of this threshold is now tailored to the statistics of measurement errors for the sensor in question. We processed real data from multiple navigation sensors to test this algorithm in practice. While some sensors are nearly outlier-free, others (e.g., a magnetic compass) are not. The measurement editing threshold for such sensors is significantly lower; for example, for a magnetic compass the optimal threshold is only approximately two standard deviations of the measurement noise. The second method uses pattern recognition in the data to detect faulty measurements. At each time epoch, the algorithm processes recent measurements from a brief rolling window. The application of the algorithm includes a training stage, where multiple sets of measurements in the window are collected and categorized. After the training has been completed, the algorithm can detect an outlier at epoch N by looking at the pattern of measurements at epochs (N-n), (N-n+1), …, N. This algorithm was implemented and tested in real time. The results show reliable detection of outliers on a sensor with a small form factor and limited computational resources. Finally, we present an approach that integrates the above two outlier detection algorithms. While they may appear unrelated, there is a way to combine them in a mathematically sensible way.
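The rolling-window method can be sketched as follows. This is a hypothetical stand-in for the paper's pattern-recognition stage: training collects demeaned windows of clean measurements, and a new window is flagged when its nearest-neighbor distance to the training patterns exceeds a tolerance. The class name, the demeaning step, and the distance rule are assumptions for illustration, not the published algorithm.

```python
import numpy as np

class WindowPatternDetector:
    """Flag epoch N as an outlier if the pattern of measurements at
    epochs (N-n), ..., N does not resemble any pattern seen in training."""

    def __init__(self, n=4, tol=0.5):
        self.n = n          # window covers n+1 epochs
        self.tol = tol      # nearest-neighbor distance tolerance
        self.patterns = None

    def train(self, series):
        """Collect demeaned windows of clean data as reference patterns."""
        s = np.asarray(series, dtype=float)
        windows = [s[i:i + self.n + 1] for i in range(len(s) - self.n)]
        self.patterns = np.array([w - w.mean() for w in windows])

    def is_outlier(self, window):
        """True if the window's shape is far from every training pattern."""
        w = np.asarray(window, dtype=float)
        w = w - w.mean()
        d = np.min(np.linalg.norm(self.patterns - w, axis=1))
        return bool(d > self.tol)

# Train on a clean, slowly varying signal; then inject a fault at the latest epoch.
rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0.0, 20.0, 500)) + rng.normal(0.0, 0.05, 500)
det = WindowPatternDetector(n=4, tol=0.5)
det.train(clean)
good = clean[100:105].copy()
bad = good.copy()
bad[-1] += 5.0  # faulty measurement at epoch N
print(det.is_outlier(good), det.is_outlier(bad))  # → False True
```

The training set and the nearest-neighbor search are both small, which is consistent with running such a detector on a sensor with limited computational resources.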