
Session D1: Robotic and Indoor Navigation

Research of Kinect/IMU Integrated Navigation Based on Indoor Robot
Hang Guo, Xi Wen, Min Wan, Huixia Li, Nanchang University, China; Min Yu, Jiangxi Normal University, China
Location: Spyglass

In mobile robot navigation, obtaining stable and reliable positioning results is a key prerequisite for path planning. In recent years, the Kinect has increasingly been applied to robot obstacle avoidance [1], target reconstruction [2], target tracking [3], attitude control [4], and other tasks, owing to its favorable characteristics. The Kinect is a 3D camera developed by Microsoft that provides both RGB and depth information about a mobile robot's environment at low cost, making it a suitable replacement for conventional ultrasonic and laser radar as a range sensor. The depth values it acquires are continuous, information-rich, and only slightly affected by ambient light, so they are well suited to cost-constrained positioning applications that still have definite accuracy requirements.
As a visual sensor, the Kinect has two limitations: accuracy and speed. In terms of accuracy, because the spatial positions of feature points can only be estimated with uncertainty, visual positioning methods inevitably incur positioning error. Take the visual odometer as an example: the pose of the robot is estimated by accumulating the position change between the current frame and the previous one, so the estimation error of every frame accumulates and the positioning accuracy degrades as the number of frames grows [5]. In extreme cases, insufficient or overly strong external light can make feature point extraction and matching fail entirely, defeating purely visual autonomous navigation. In terms of speed, the large amount of image data per frame, the complexity of the processing algorithms, and the low degree of parallelism of the image pipeline limit the achievable processing rate, and thus the real-time response to positioning accuracy requirements. In contrast, an inertial measurement unit (IMU) offers real-time, simple, all-weather, fully autonomous navigation and can compensate for the accuracy and speed shortcomings of visual measurement, but it suffers from a serious problem of its own: accumulated positioning error.
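To make the visual odometer drift described above concrete, the following sketch (plain NumPy) chains noisy frame-to-frame pose estimates the way a visual odometer does and reports the resulting position error; the per-frame noise levels and the trajectory are purely illustrative assumptions, not figures from the paper.

    import numpy as np

    # Illustrative sketch: chaining noisy frame-to-frame estimates, as a
    # visual odometer does, accumulates error with the number of frames.
    # The 0.5 cm / 0.2 deg per-frame noise values are assumed for
    # illustration only.

    rng = np.random.default_rng(0)

    def rot2d(theta):
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s], [s, c]])

    true_pose = np.eye(3)   # homogeneous 2-D pose of the robot
    est_pose = np.eye(3)    # pose integrated from noisy per-frame deltas

    for frame in range(500):
        # True inter-frame motion: 2 cm forward with a slight turn.
        d_true = np.eye(3)
        d_true[:2, :2] = rot2d(0.002)
        d_true[:2, 2] = [0.02, 0.0]
        true_pose = true_pose @ d_true

        # Estimated motion = true motion + small per-frame error.
        d_est = np.eye(3)
        d_est[:2, :2] = rot2d(0.002 + rng.normal(0, np.deg2rad(0.2)))
        d_est[:2, 2] = d_true[:2, 2] + rng.normal(0, 0.005, size=2)
        est_pose = est_pose @ d_est

    drift = np.linalg.norm(est_pose[:2, 2] - true_pose[:2, 2])
    print(f"position drift after 500 frames: {drift:.2f} m")

Even though each per-frame error is small, the final position error grows with the number of frames, which is the degradation described in [5].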
Therefore, this paper investigates integrated navigation and positioning for an indoor mobile robot based on the Kinect and an IMU. To achieve high-precision positioning, feature points are first extracted and matched between the RGB images of the target frame and the reference frame obtained by the Kinect, and the Random Sample Consensus (RANSAC) algorithm is used to remove mismatched points; an absolute orientation algorithm then yields the Kinect attitude and offset (in this paper, the Kinect attitude represents the attitude of the robot), from which the trajectory of the mobile robot is obtained. The visual positioning result and the INS data are fused by a Kalman filtering algorithm to improve the self-positioning accuracy of the indoor mobile robot.
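As a rough illustration of this visual-positioning stage, the sketch below uses OpenCV and NumPy. ORB features, the fundamental-matrix RANSAC test, and the listed camera intrinsics are stand-in assumptions, since the abstract does not specify the detector, the RANSAC model, or the calibration; the absolute-orientation step is implemented with the standard SVD (Horn/Kabsch) solution.

    import cv2
    import numpy as np

    # Sketch of the visual-positioning stage, assuming two Kinect frames
    # (RGB + aligned depth, with depth in metres; raw Kinect depth is
    # typically millimetres) and assumed intrinsics fx, fy, cx, cy.

    fx = fy = 525.0            # assumed intrinsics (illustrative values)
    cx, cy = 319.5, 239.5

    def to_3d(u, v, z):
        """Back-project pixel (u, v) with depth z into camera 3-D space."""
        return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

    def relative_pose(rgb_ref, depth_ref, rgb_tgt, depth_tgt):
        # 1. Extract and match features between reference and target frame.
        orb = cv2.ORB_create(1000)
        kp1, des1 = orb.detectAndCompute(rgb_ref, None)
        kp2, des2 = orb.detectAndCompute(rgb_tgt, None)
        bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = bf.match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # 2. RANSAC removes mismatched pairs (here via a fundamental-matrix
        #    model; the paper does not state which model it fits).
        _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
        inlier = mask.ravel().astype(bool)

        # 3. Lift inlier pixels to 3-D using the depth images.
        P = np.array([to_3d(u, v, depth_ref[int(v), int(u)])
                      for u, v in pts1[inlier]])
        Q = np.array([to_3d(u, v, depth_tgt[int(v), int(u)])
                      for u, v in pts2[inlier]])
        ok = (P[:, 2] > 0) & (Q[:, 2] > 0)   # drop pixels with no depth
        P, Q = P[ok], Q[ok]

        # 4. Absolute orientation (Horn/Kabsch via SVD): R, t with Q ~ R P + t.
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:             # guard against a reflection
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cq - R @ cp
        return R, t                          # Kinect attitude and offset

Chaining the per-frame (R, t) pairs gives the visual-odometry trajectory that is subsequently fused with the INS data.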
Experimental results show that the proposed method improves the positioning accuracy and stability of the indoor mobile robot. The Kalman-filtered integrated positioning reduces the accumulated error of the visual odometer and corrects the IMU trajectory, thereby improving the positioning accuracy of the indoor robot. The shortcomings of this method are that the Kinect sensor has a limited shooting range and is easily affected by noise, which degrades the accuracy of the computed motion parameters, and that when the robot body shakes severely the attitude estimate contains large errors; both points require further research.
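For context on the fusion step, here is a minimal one-axis Kalman filter sketch in the same spirit: IMU accelerations drive the prediction and visual position fixes drive the correction. The state layout and all noise values are illustrative assumptions, not the paper's filter design.

    import numpy as np

    # Minimal sketch of the Kalman fusion idea: propagate a [position,
    # velocity] state with IMU acceleration, correct with visual fixes.
    # One axis only; all noise values are illustrative assumptions.

    dt = 0.02                            # assumed IMU period (50 Hz)
    F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition
    B = np.array([[0.5 * dt**2], [dt]])      # acceleration input
    H = np.array([[1.0, 0.0]])               # position is observed
    Q = np.diag([1e-4, 1e-3])                # process noise (IMU drift)
    Rm = np.array([[1e-2]])                  # measurement noise (visual)

    x = np.zeros((2, 1))                 # state: [position, velocity]
    P = np.eye(2)

    def predict(accel):
        """IMU mechanization: dead-reckon with measured acceleration."""
        global x, P
        x = F @ x + B * accel
        P = F @ P @ F.T + Q

    def update(z_pos):
        """Correct the drifting IMU solution with a visual position fix."""
        global x, P
        y = np.array([[z_pos]]) - H @ x
        S = H @ P @ H.T + Rm
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P

Running predict() at the IMU rate and update() whenever a visual fix arrives bounds the IMU's accumulated error while preserving its high-rate output, which is the complementary behavior the abstract describes.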
REFERENCES
[1] Cunha, J., Pedrosa, E., Cruz, C., et al., "Using a Depth Camera for Indoor Robot Localization and Navigation," DETI/IEETA, University of Aveiro, Portugal, 2011.
[2] Um, D., Ryu, D., Kal, M., et al., "Multiple Intensity Differentiation for 3-D Surface Reconstruction with Mono-Vision Infrared Proximity Array Sensor," IEEE Sensors Journal, vol. 11, no. 12, 2011, pp. 3352-3358.
[3] Noel, R. R., Salekin, A., Islam, R., et al., "A Natural User Interface Classroom Based on Kinect," IEEE Learning Technology, vol. 13, no. 4, 2011, pp. 59-61.
[4] Stowers, J., Hayes, M., and Bainbridge-Smith, A., "Altitude Control of a Quadrotor Helicopter Using Depth Map from Microsoft Kinect Sensor," 2011 IEEE International Conference on Mechatronics (ICM), April 2011, pp. 358-362.
[5] Amidi, O., Kanade, T., and Fujita, K., "A Visual Odometer for Autonomous Helicopter Flight," Robotics and Autonomous Systems, vol. 28, no. 2-3, 1999, pp. 185-193.


