Abstract: The use of monocular cameras in combination with an IMU for Simultaneous Localization and Mapping (SLAM) has been widely studied as a particularly useful technique for autonomous navigation in GNSS-denied environments. Several formulations of SLAM algorithms have been proposed and analyzed in the recent literature. Regardless of the SLAM formulation used, the first step in processing the camera data is the extraction and matching of features across successive frames. The image coordinates of these matches are typically fed to a filter that fuses them with the position, velocity, and attitude computed by a strapdown navigation algorithm to generate optimal estimates of the position and orientation of the vehicle. The quality of the extracted feature matches determines how much information can be drawn from the camera and is therefore crucial for good overall navigation performance. This article compares the effectiveness of different types of point feature extraction and associated matching methods for vision-based navigation. The SIFT, SURF, and KLT methods of extracting point features are considered. Various metrics are used to compare the number and quality of the matches and to characterize their influence on navigation performance. The methods are also evaluated for their effect on navigational accuracy using an implementation of monocular SLAM. The results for a sample dataset illustrate the relative merits of the three point feature methods for vision-based navigation.
Published in: Proceedings of the 2012 International Technical Meeting of The Institute of Navigation, January 30 - February 1, 2012, Marriott Newport Beach Hotel & Spa, Newport Beach, CA
Pages: 915-921
Cite this article: Ma, Y., Rao, S., "Comparison of Point Features for Vision Based Navigation," Proceedings of the 2012 International Technical Meeting of The Institute of Navigation, Newport Beach, CA, January 2012, pp. 915-921.