
Session C3: Multisensor Integrated Systems and Sensor Fusion Technologies

CDGNSS-Enabled Online Sensor Calibration for Automated Vehicles
Nick Montalbano, Evan Srnka and Todd Humphreys, University of Texas at Austin
Location: Windjammer

A multiple-model-based approach to online calibration of inertial sensors, visible-light cameras, and radar sensors is presented that exploits dual-antenna carrier-phase-differential GNSS (CDGNSS) to yield a highly accurate calibration from a short window of data. Such automated, high-quality online calibration can significantly reduce the cost of deploying automated vehicles such as self-driving cars or unmanned aerial vehicles (UAVs). In effect, multi-antenna CDGNSS enables laboratory-quality calibration to be performed continuously during standard vehicle operation: the vehicle becomes the laboratory. Inertial measurement unit (IMU) calibration is a mature topic, and there are existing techniques for calibrating the intrinsic parameters (center point, focal length, and distortion pattern) and extrinsic parameters (mounting location and angle) of a camera. By performing multiple-model-based estimation on an enumerated set of possible IMU, camera, and radar models, and by exploiting centimeter-accurate knowledge of the vehicle position and sub-degree-accurate knowledge of the inter-antenna vector, sensor models can be determined quickly to within a bounded set. Feature detection, coupled with centimeter-accurate knowledge of the ground-truth motion, aids in calibration of the camera and radar units, enabling them to be calibrated during standard operation without a checkerboard pattern or fixed radar reflector.
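To make the partial-attitude role of the inter-antenna vector concrete, the sketch below (illustrative only; the function name and the 1 m east-north-up baseline are invented for this example, not taken from the abstract) shows how a dual-antenna CDGNSS baseline yields heading and pitch, while roll about the baseline axis remains unobservable:

```python
import numpy as np

def baseline_to_heading_pitch(baseline_enu):
    """Partial attitude (heading, pitch) from a dual-antenna CDGNSS
    baseline vector expressed in local east-north-up coordinates.
    Roll about the baseline axis is not observable from one baseline."""
    e, n, u = baseline_enu
    heading = np.arctan2(e, n)               # clockwise from north
    pitch = np.arctan2(u, np.hypot(e, n))    # elevation of the baseline
    return np.degrees(heading), np.degrees(pitch)

# A roughly 1 m baseline pointing north-east and slightly upward:
h, p = baseline_to_heading_pitch(np.array([0.707, 0.707, 0.01]))
```

A centimeter-level error on a 1 m baseline corresponds to roughly half a degree of angular error, which is the origin of the sub-degree attitude accuracy cited above.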

Active calibration of IMUs has been extensively studied from the perspective of state augmentation. The authors of [1] analytically determined necessary conditions on vehicle motion for determining the lever arm and biases of an IMU mounted on a car equipped with carrier-phase-differential GNSS. The authors of [2] studied the full nonlinear problem on a car with a single-antenna GNSS setup and derived a series of sufficient conditions for determining intrinsic parameters. The approach presented herein estimates biases and mounting errors from an enumerated set. A multiple-model filter can determine these biases and locations faster and more efficiently than conventional methods by extracting information from previously unused knowledge. For example, the vehicle's approximate size constrains the search space for sensor location, while an IMU's quality may yield a tighter range of initial bias estimates. Multiple-model estimation has been held back by its prohibitive computational cost compared with other adaptive estimation methods [3]; however, modern processing power is quickly compensating for this cost. [4] presents a modern comparison of a multiple-model estimator with a 15-state extended Kalman filter (EKF) and demonstrates gains in accuracy and convergence time, a clear indicator of the potential of this approach. Centimeter-accurate CDGNSS will provide partial attitude observability, further improving the rate of convergence of estimates for mounting angle and gyroscope bias.
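The multiple-model weighting idea can be sketched numerically. In the toy example below (all numbers are invented; the real filter bank would run full navigation filters, not scalar models), an enumerated grid of candidate accelerometer biases is weighted by the Gaussian likelihood of residuals between the IMU output and a CDGNSS-derived acceleration reference, and the correct hypothesis quickly dominates:

```python
import numpy as np

rng = np.random.default_rng(0)

bias_models = np.linspace(-0.5, 0.5, 11)   # enumerated candidate biases [m/s^2]
true_bias = 0.2                            # simulated true accelerometer bias
sigma = 0.05                               # residual noise std [m/s^2]
log_w = np.zeros_like(bias_models)         # log weight of each model

for _ in range(200):
    # residual = (IMU accel) - (CDGNSS-derived accel) = bias + noise
    residual = true_bias + rng.normal(0.0, sigma)
    # accumulate Gaussian log-likelihood of the residual under each model
    log_w += -0.5 * ((residual - bias_models) / sigma) ** 2

w = np.exp(log_w - log_w.max())
w /= w.sum()                               # normalized model probabilities
best = bias_models[np.argmax(w)]           # maximum-probability bias model
```

After 200 residuals the probability mass collapses onto the model nearest the true bias, illustrating why an enumerated set plus an accurate reference converges so quickly.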

Cars of the future will almost certainly come equipped with a plethora of cameras, ranging from backup and dashboard cameras to visible-light cameras for autonomous driving. Camera calibration today often involves photographing a checkerboard pattern; corner detection, coupled with knowledge of the true pattern, can be exploited to recover a camera's underlying distortion pattern [5]. Some algorithms expand on this concept, moving from checkerboards to circles with inscribed patterns [6]. However, these algorithms require laboratory testing and thus are not conducive to a plug-and-play sensor modality. Some effort has been made to move camera calibration into the field. The authors of [7] present an algorithm that performs feature extraction on images taken while driving and combines the extracted features with knowledge of the ground-truth motion to model camera distortion at the pixel level. They test with the KITTI data set and perform their calibration offline; it is possible to perform a similar calibration online with a lower-quality camera. [8] presents a hybrid EKF algorithm for online estimation of coupled camera and IMU states, but neglects the potential accuracy benefits of high-precision CDGNSS. CDGNSS will provide centimeter-accurate knowledge of ground-truth motion to accelerate model convergence. A multiple-model approach will further increase convergence speed.
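As an illustration of how known corner geometry constrains a distortion model, the sketch below uses a simplified one-parameter Brown-Conrady radial model with simulated checkerboard corners (the grid dimensions and the coefficient value are invented); residuals between observed and ideal corners determine the coefficient by linear least squares:

```python
import numpy as np

def distort(points, k1):
    """One-parameter Brown-Conrady radial distortion on normalized points."""
    r2 = np.sum(points**2, axis=1, keepdims=True)
    return points * (1.0 + k1 * r2)

# Ideal (undistorted) grid of checkerboard corners in normalized coordinates
xs, ys = np.meshgrid(np.linspace(-0.4, 0.4, 7), np.linspace(-0.3, 0.3, 5))
ideal = np.column_stack([xs.ravel(), ys.ravel()])

# Simulated camera with barrel distortion (k1 < 0)
observed = distort(ideal, k1=-0.25)

# Linear least squares for k1: observed - ideal = k1 * r^2 * ideal
r2 = np.sum(ideal**2, axis=1, keepdims=True)
A = (r2 * ideal).ravel()
b = (observed - ideal).ravel()
k1_hat = (A @ b) / (A @ A)
```

In the field the "ideal" corners are not given by a checkerboard but predicted from tracked features and CDGNSS-derived ego-motion, which is what makes in-situ calibration possible.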

Some automated vehicles will incorporate radar for locating obstacles that the vehicle must avoid. Mounting-angle errors can be a large source of estimation error for radar units: a mounting error of even one degree can produce erroneous obstacle locations well beyond tolerances reasonable for safety-of-life applications. Such mounting errors can be readily estimated during standard operation by tracking the relative motion of a static target through a maneuver and correlating those data against knowledge of the underlying motion, provided by CDGNSS or another source. This technique may encounter problems with detection of static background features to serve as radar reflectors; using a camera to perform image recognition to detect potential point reflectors, such as lampposts, will aid in radar calibration. Mounting location is only weakly observable in practice; a CDGNSS-based multiple-model approach, potentially coupled with feature recognition and tracking, will improve the accuracy of existing techniques and enable radar calibration to be performed during standard operation rather than inside a carefully controlled environment.
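The mounting-angle idea reduces to a simple residual average in the noiseless-geometry case. The sketch below (invented geometry and noise levels; a planar world with a single lamppost-like reflector and perfectly known ego-motion) recovers a one-degree mounting-yaw error from noisy radar bearings:

```python
import numpy as np

rng = np.random.default_rng(1)

true_mount_yaw = np.deg2rad(1.0)        # simulated 1-degree mounting error
reflector = np.array([20.0, 5.0])       # static point reflector, world frame

# Vehicle drives east along y = 0; position and heading known from CDGNSS
positions = np.column_stack([np.linspace(0.0, 10.0, 50), np.zeros(50)])

# True bearing to the reflector from each vehicle position
los = reflector - positions
true_bearing = np.arctan2(los[:, 1], los[:, 0])

# Radar reports the bearing offset by the mounting error, plus noise
measured = true_bearing + true_mount_yaw + rng.normal(0.0, np.deg2rad(0.2), 50)

# With ground-truth motion known, the mounting error falls out as the
# mean residual between measured and predicted bearings
yaw_hat_deg = np.rad2deg(np.mean(measured - true_bearing))
```

Averaging over many epochs and many reflectors suppresses the per-measurement bearing noise, which is why this works during ordinary driving.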

Algorithm validation will occur on two platforms, both currently under development. The Sensorium is a car-mounted sensor array that includes a MEMS IMU, a visible-light camera, a radar unit, and a dual-antenna CDGNSS configuration. Testing may also be performed on a CDGNSS-equipped quadrotor platform with a camera and IMU of middling quality. These platforms will enable algorithm verification on a variety of sensors across multiple configurations.

References:
[1] S. Hong, M. Lee, H.-H. Chun, S.-H. Kwon, and J. Speyer, “Observability of Error States in GPS/INS Integration,” IEEE Transactions on Vehicular Technology, vol. 54, no. 2, pp. 731–743, Apr. 2005.
[2] Y. Tang, Y. Wu, M. Wu, W. Wu, X. Hu, and L. Shen, “INS/GPS Integration: Global Observability Analysis,” IEEE Transactions on Vehicular Technology, vol. 58, no. 3, pp. 1129–1142, May 2008.
[3] C. Hide, T. Moore, and M. Smith, “Adaptive Kalman Filtering Algorithms for Integrating GPS and Low Cost INS,” Proceedings of the IEEE Position Location and Navigation Symposium (PLANS 2004), Apr. 2004.
[4] Q. M. Lam and J. L. Crassidis, “A Close Examination of Multiple Model Adaptive Estimation Vs Extended Kalman Filter for Precision Attitude Determination,” AIAA Guidance, Navigation, and Control (GNC) Conference, Aug. 2013.
[5] R. Szeliski, Computer Vision: Algorithms and Applications. Springer London, 2011.
[6] S. Daftry, M. Maurer, A. Wendel, and H. Bischof, “Flexible and User-Centric Camera Calibration using Planar Fiducial Markers,” Proceedings of the British Machine Vision Conference 2013, 2013.
[7] I. Krešo and S. Šegvić, “Improving the Egomotion Estimation by Correcting the Calibration Bias,” Proceedings of the 10th International Conference on Computer Vision Theory and Applications, 2015.
[8] M. Li, H. Yu, X. Zheng, and A. I. Mourikis, “High-fidelity sensor modeling and self-calibration in vision-aided inertial navigation,” 2014 IEEE International Conference on Robotics and Automation (ICRA), 2014.


