Improved Fusion of Visual Measurements Through Explicit Modeling of Outliers

C.N. Taylor

Abstract: With the recent proliferation of low-cost, lightweight, and high-resolution cameras, significant research has been conducted on utilizing visual sensors within navigation systems. Visual sensors, however, differ from more traditional sensors utilized within estimation systems because cameras do not directly sense the quantity (or its derivative/integral) to be estimated or fused. Instead, the luminance outputs of the camera sensor are processed algorithmically to obtain some other quantity that is used as an input to an estimator. Sample outputs of vision processing algorithms utilized in automated systems include optical flow, feature tracks, and object localization. Unfortunately, the outputs of these algorithms do not conform to the traditional model of the true measurement plus a Gaussian random process. Instead, each of these algorithms occasionally produces spurious outputs (outliers) that deviate significantly from the true value being observed. In addition, the probability of an outlier occurring and its effects on downstream algorithms are generally unknown. In this paper, we present an estimation approach that explicitly considers the probability of outliers and performs maximum likelihood estimation over a distribution with outliers. In addition to more accurately estimating the final state, the predicted uncertainty of the system is shown to be accurate.
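To illustrate the general idea of maximum likelihood estimation over a distribution with outliers, the following is a minimal sketch (not the paper's exact formulation): each measurement is modeled as drawn from a mixture of an inlier Gaussian and a broad uniform outlier component, and the state (here, a scalar mean) is chosen to maximize the resulting likelihood. The outlier probability `p_out`, inlier standard deviation `sigma`, and outlier range `out_range` are illustrative assumptions.

```python
import numpy as np

def outlier_aware_nll(mu, data, sigma=1.0, p_out=0.05, out_range=100.0):
    """Negative log-likelihood under a mixture model:
    p(z | mu) = (1 - p_out) * N(z; mu, sigma^2) + p_out * Uniform(out_range).
    The flat outlier term bounds the penalty any single
    spurious measurement can contribute."""
    gauss = np.exp(-0.5 * ((data - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    mixture = (1.0 - p_out) * gauss + p_out / out_range
    return -np.sum(np.log(mixture))

# Synthetic data: 95 inlier measurements around the true value 5.0,
# plus 5 gross outliers uniformly spread over [-50, 50].
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(5.0, 1.0, 95), rng.uniform(-50.0, 50.0, 5)])

# Coarse grid search over the scalar state (a stand-in for a real optimizer).
grid = np.linspace(-10.0, 20.0, 3001)
nlls = np.array([outlier_aware_nll(mu, data) for mu in grid])
mu_ml = float(grid[np.argmin(nlls)])

# Baseline: the sample mean, which a purely Gaussian noise model would
# yield and which the outliers drag away from the true value.
mu_naive = float(data.mean())
```

Because the uniform component caps each outlier's contribution to the log-likelihood, `mu_ml` stays near the true value of 5.0 even when gross outliers are present, whereas the Gaussian-only estimate `mu_naive` has no such protection.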
Published in: Proceedings of IEEE/ION PLANS 2012
April 24 - 26, 2012
Myrtle Beach Marriott Resort & Spa
Myrtle Beach, South Carolina
Pages: 512 - 517
Cite this article: Taylor, C.N., "Improved Fusion of Visual Measurements Through Explicit Modeling of Outliers," Proceedings of IEEE/ION PLANS 2012, Myrtle Beach, South Carolina, April 2012, pp. 512-517. https://doi.org/10.1109/PLANS.2012.6236921