Abstract: | Low cost inertial sensors are one potential method for positioning indoors. However, such sensors typically provide poor quality measurements that are only suitable for positioning for a few seconds. For inertial navigation to be useful, it is necessary to combine the sensors with measurements from other systems. This paper explores the integration of an Inertial Measurement Unit (IMU) with measurements from a computer vision algorithm for indoor pedestrian navigation. The concept is to make use of sensors that are already available in modern smartphones. It is assumed that a pedestrian user is walking with the mobile device held out in front of them with the camera pointing approximately towards the ground, so the camera captures images of the ground immediately in front of the user. The computer vision algorithm matches features between pairs of successive images. Typically, many of these features will fall on the ground plane. The relative positions of features on the ground plane are related by a homography which describes the rotation and translation of the camera between images. The robust BaySAC framework is used to simultaneously identify which features lie on the ground plane while estimating the homography relating the two views. From the homography, the camera’s orientation and 3-dimensional body frame translation relative to its previous position are computed. This information, along with measurements from other systems such as GPS when they are available, is used in a Kalman filter to aid the IMU and reduce the position drift. This paper describes the implementation of the combined computer vision and inertial navigation approach. A micro-electro-mechanical systems (MEMS) IMU is used along with a consumer grade digital camera to capture data. It is demonstrated that the drift of the inertial sensor is significantly reduced by incorporating measurements from the computer vision algorithm.
The algorithm is relatively computationally expensive; this paper therefore explores the computational requirements and identifies two methods that may be used to improve efficiency. A method of reducing the sample rate of the computer vision algorithm is demonstrated to provide a significant reduction in processor requirements with only a small reduction in positioning accuracy. |
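The ground-plane homography relationship described in the abstract can be illustrated with a small numerical sketch (this is not the paper's code; the intrinsics, camera motion, and camera height below are assumed values): image points of features on a plane with normal n at distance d, viewed by a camera undergoing rotation R and translation t, are related by H = K(R + t nᵀ/d)K⁻¹.

```python
import numpy as np

# Hypothetical sketch: points on the ground plane in two successive
# views are related by the planar homography H = K (R + t n^T / d) K^-1.
K = np.array([[500.0, 0, 320],
              [0, 500.0, 240],
              [0,     0,   1]])            # assumed camera intrinsics

theta = np.deg2rad(2.0)                    # small rotation between frames
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
t = np.array([0.05, 0.0, 0.01])            # small translation, metres
n = np.array([0.0, 0.0, 1.0])              # ground-plane normal (camera facing down)
d = 1.5                                    # assumed camera height above the ground

H = K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)

# A 3-D point lying on the plane (n.X = d) in the first camera frame
X = np.array([0.2, -0.1, d])
x1 = K @ X                                 # homogeneous projection, view 1
x2 = K @ (R @ X + t)                       # homogeneous projection, view 2

# The homography maps the view-1 pixel to the view-2 pixel (up to scale)
x2_pred = H @ x1
assert np.allclose(x2_pred[:2] / x2_pred[2], x2[:2] / x2[2])
```

In the paper's setting the motion is unknown: BaySAC estimates H robustly from matched features (rejecting off-plane matches), and R and t are then recovered by decomposing H; here the motion is known and the relation is simply verified.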
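The drift-reduction mechanism described in the abstract can be sketched with a deliberately simplified 1-D Kalman filter (not the paper's filter; the bias, rates, and noise values are assumed): integrating a biased accelerometer alone causes quadratic position drift, while fusing occasional vision-derived velocity measurements bounds it.

```python
import numpy as np

# Hypothetical 1-D sketch: an IMU-only position estimate drifts under an
# uncorrected accelerometer bias; fusing vision-derived velocity
# measurements in a Kalman filter greatly reduces the drift.
dt = 0.01                        # assumed IMU rate: 100 Hz
bias = 0.05                      # assumed accelerometer bias, m/s^2
F = np.array([[1, dt], [0, 1]])  # state: [position, velocity]
Q = np.diag([1e-6, 1e-4])        # process noise (assumed)
Hm = np.array([[0.0, 1.0]])      # vision measures velocity only
Rm = np.array([[1e-4]])          # measurement noise (assumed)

x = np.zeros(2); P = np.eye(2) * 1e-6
x_free = np.zeros(2)             # unaided INS for comparison
true_v = 0.0                     # stationary ground truth

for k in range(1000):            # 10 s of data
    accel = bias                 # measured accel = bias (true accel is zero)
    step = np.array([0.5 * dt**2, dt]) * accel
    x = F @ x + step             # aided INS mechanization
    x_free = F @ x_free + step   # unaided INS mechanization
    P = F @ P @ F.T + Q
    if k % 10 == 9:              # vision velocity update at 10 Hz (assumed)
        y = np.array([true_v]) - Hm @ x
        S = Hm @ P @ Hm.T + Rm
        Kg = P @ Hm.T @ np.linalg.inv(S)
        x = x + Kg @ y
        P = (np.eye(2) - Kg @ Hm) @ P

print(abs(x_free[0]), abs(x[0]))  # aided position error is far smaller
```

The same structure extends to the paper's full system, where the homography-derived rotation and body-frame translation (and GPS, when available) form the measurement updates for the IMU error states.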
Published in: |
Proceedings of the 24th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS 2011), September 20 - 23, 2011, Oregon Convention Center, Portland, Oregon |
Pages: | 1378 - 1385 |
Cite this article: | Hide, C., Moore, T., Botterill, T., "Low Cost IMU, GPS and Camera Integration for Handheld Indoor Positioning," Proceedings of the 24th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS 2011), Portland, OR, September 2011, pp. 1378-1385. |