Abstract: | We present a 3D reconstruction technique designed to support an autonomously navigated unmanned aerial system (UAS). The algorithm focuses on the 3D reconstruction of a scene using only a single moving camera and can be used to construct a point cloud model of unknown areas. The reconstruction, which produces a point cloud model, is a six-step process built on Speeded Up Robust Features (SURF) point matching and depth triangulation analysis. The first step is SURF extraction from each frame of video; a neighborhood-, magnitude-, and direction-dependent matching procedure then tracks feature points through subsequent frames. The distance a feature point travels, in pixels, becomes the feature's disparity, which can be translated into depth. The Cartesian depth coordinate, in the z direction, is determined from the disparity values, while the x and y coordinates are determined from the focal length information of the camera: the size of the image at a particular depth is determined, and the width and height (x and y directions) are computed for each SURF point. The final output is a point cloud, a collection of points accurately positioned within a model. With enough points, surfaces and textures can be added to create a realistic model. The accuracy of the reconstruction is measured by evaluating the density and precision of the point cloud for autonomous navigation and mapping tasks within unknown environments. An autonomous navigation control system uses the resulting visually reconstructed scene, centered at the current camera location, either to register its position with a location in a known 3D model, or for obstacle avoidance and area exploration while mapping an unknown environment. The presented reconstruction algorithm forms a foundation for computer vision self-positioning techniques within a known environment without the use of GNSS signals. The suitability of the reconstruction for mapping tasks can be evaluated using ground-truth measurements of actual objects. |
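As a rough illustration of the pipeline the abstract describes (SURF matching across consecutive frames, pixel disparity converted to depth, and x/y recovered from the camera's focal length), the minimal sketch below uses OpenCV and NumPy. The intrinsics (FOCAL_PX, CX, CY), the inter-frame baseline, and the ratio-test matching are assumptions chosen for illustration, not the authors' implementation.

```python
import numpy as np
import cv2

# Illustrative sketch only: the paper does not publish its exact formulas or
# parameters, so the camera intrinsics and inter-frame baseline below are
# hypothetical placeholders.
FOCAL_PX = 700.0          # focal length in pixels (assumed)
CX, CY = 320.0, 240.0     # principal point (assumed)
BASELINE_M = 0.10         # assumed camera translation between frames, meters


def match_surf_features(frame_a, frame_b, hessian_threshold=400):
    """Detect and match SURF features between two grayscale frames.

    SURF requires the opencv-contrib-python build (cv2.xfeatures2d). The
    neighborhood/magnitude/direction constraints described in the paper are
    approximated here with a simple Lowe ratio test.
    """
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold)
    kp_a, desc_a = surf.detectAndCompute(frame_a, None)
    kp_b, desc_b = surf.detectAndCompute(frame_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(desc_a, desc_b, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]

    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])
    return pts_a, pts_b


def disparity_to_point_cloud(pts_a, pts_b):
    """Back-project matched features into 3D with a pinhole camera model.

    Disparity is taken as the pixel distance each feature travels between
    frames; depth follows the standard z = f * B / d relation, and x, y are
    recovered from the focal length and principal point.
    """
    disparity = np.linalg.norm(pts_a - pts_b, axis=1)
    disparity = np.clip(disparity, 1e-6, None)   # guard against division by zero

    z = FOCAL_PX * BASELINE_M / disparity
    x = (pts_a[:, 0] - CX) * z / FOCAL_PX
    y = (pts_a[:, 1] - CY) * z / FOCAL_PX
    return np.column_stack([x, y, z])            # one 3D point per matched feature
```

Applied to every pair of consecutive video frames, the returned points can be accumulated into a single point cloud of the scene, which is the form of output the abstract describes.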
Published in: |
Proceedings of the 24th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS 2011), September 20 - 23, 2011, Oregon Convention Center, Portland, Oregon |
Pages: | 3596 - 3604 |
Cite this article: | Diskin, Yakov, Tompkins, R. Cortland, Youssef, Menatoallah, Asari, Vijayan K., "UAS Exploitation by 3D Reconstruction Using Monocular Vision," Proceedings of the 24th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS 2011), Portland, OR, September 2011, pp. 3596-3604. |