
Session C2: Vision/Integrated Navigation Systems

Photogrammetric Visual Odometry with Unmanned Ground Vehicle using Low Cost Sensors
Paolo Dabove, Andrea Maria Lingua, Marco Piras, Politecnico di Torino, Italy
Location: Windjammer

Nowadays, there are several strong motivations to replace dangerous human activities with small unmanned systems, both terrestrial and aerial. An interesting example is the use of micro unmanned ground rovers in emergencies, to produce an early map of the scene or to deliver aid.
It is possible to maneuver these ground systems by remote control, but this method becomes problematic when the operational environment is hazardous or has limited accessibility.
A possible solution is an unmanned system in which driving is autonomous and is realized through the integration of several sensors for positioning, imaging, range detection, object detection, etc.
One of the most relevant challenges is navigation: where GNSS is available, autonomous driving is possible with meter-level accuracy, but where GNSS positioning is not available (urban canyons, indoors, harsh environments, jamming conditions), a valid alternative is visual odometry (VO). The problem of estimating a vehicle's motion from visual input alone dates back to the early 1980s and was first described by Moravec [1]. Over the years, monocular and stereo VO have progressed to reach positioning accuracies of about a few tens of decimeters. In this context, the authors have investigated and developed an innovative method for terrestrial rover navigation based on low-cost sensors. The algorithms, implemented in Matlab, cover the whole chain from data acquisition to processing. Particular attention was paid to feature extraction: this is the first step in any image-analysis procedure and is essential for many applications. As described in the literature, there are two main approaches to finding feature points and their correspondences: the first is to find features in one image and track them in the following images using local search techniques, such as correlation; the second is to independently detect features in all the images and match them based on some similarity metric between their descriptors. In this work we have followed the latter approach, using the SIFT operator for feature detection (see the sketch below). During the feature-detection step, the image is searched for salient keypoints that are likely to match well in other images. A local feature is an image pattern that differs from its immediate neighborhood in terms of intensity, color, or texture. For VO, point detectors such as corners or blobs are important because their position in the image can be measured accurately.
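As an illustration of this detect-and-match strategy, the following Python/OpenCV sketch detects SIFT keypoints in two consecutive frames and matches their descriptors with Lowe's ratio test. It is a minimal sketch only: the authors' pipeline is implemented in Matlab, and the image filenames and the 0.75 ratio threshold used here are assumptions.

    # Detect-and-match with SIFT (illustrative sketch, not the authors' code).
    import cv2

    img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # assumed filenames
    img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)  # keypoints + 128-D descriptors
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Features are detected independently in both images, then matched on
    # descriptor similarity; Lowe's ratio test rejects ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in raw if m.distance < 0.75 * n.distance]  # assumed ratio

    # Matched image coordinates, ready for relative-pose estimation.
    pts1 = [kp1[m.queryIdx].pt for m in good]
    pts2 = [kp2[m.trainIdx].pt for m in good]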
Tie-point detection has also been considered, especially when a low-cost camera is used: tie points allow an accurate navigation solution to be obtained and make it possible to constrain all the images considered for positioning.
The windows of tie points have therefore been analyzed, in order to extract the minimum number needed for a correct solution. Ground control points (GCPs) have been extracted manually from the orthophoto generated by a fast process. The positions of these GCPs are known with an accuracy of about 1-3 cm and are used to constrain the solution of the VO algorithm. One of the analyses carried out concerned the minimum number of GCPs to consider: there is no formula that gives this value, since it depends on the environment (the presence of repetitive structures and of natural details available during positioning and navigation) and on the dynamics (speed, angular velocities) of the vehicle. The positioning solution has been obtained with a 6-state Kalman filter coupled with a data-snooping technique, without using any external sensor (e.g., INS, GNSS, or DMI); a minimal sketch of such a filter is given below.
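The following Python sketch shows one possible form of such a filter, under explicit assumptions: six states (3-D position and velocity) with a constant-velocity model, position fixes as measurements, and a chi-square data-snooping test on the innovation to reject outliers. The sampling interval, noise levels, and rejection threshold are illustrative; the abstract does not specify the authors' values.

    # 6-state (position + velocity) Kalman filter with data snooping (sketch).
    import numpy as np

    dt = 0.1                                      # assumed frame interval [s]
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                    # position integrates velocity
    H = np.hstack([np.eye(3), np.zeros((3, 3))])  # measurements observe position
    Q = 1e-3 * np.eye(6)                          # assumed process noise
    R = 0.05**2 * np.eye(3)                       # assumed 5 cm measurement noise

    x = np.zeros(6)                               # initial state
    P = np.eye(6)                                 # initial covariance

    def kf_step(x, P, z):
        # Prediction with the constant-velocity model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Data snooping: reject the measurement if the normalized innovation
        # exceeds the chi-square threshold (3 DOF, ~1% significance, assumed).
        v = z - H @ x
        S = H @ P @ H.T + R
        if v @ np.linalg.solve(S, v) > 11.34:
            return x, P                           # outlier: skip the update
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ v
        P = (np.eye(6) - K @ H) @ P
        return x, P

    # Example: for z in vo_position_fixes: x, P = kf_step(x, P, z)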
The sensors have been installed on a micro ground rover (90×40×30 cm) and connected to a Raspberry Pi platform for data collection. The system was powered by a LiPo battery. The test sites were different indoor environments of our department, with different brightness and lighting conditions, different obstacle configurations, and different sizes.
The results show a very good performance in terms of positioning accuracy: the maximum difference between the developed solution and the reference coordinates is 60 cm, with a mean value of 38 cm.
These results are expected to have a significant impact, especially on autonomous navigation solutions.
[1] H. Moravec, “Obstacle avoidance and navigation in the real world by a seeing robot rover,” Ph.D. dissertation, Stanford Univ., Stanford, CA, 1980.


