Real-Time Wide-Area Scene Reconstruction Based on Volume Fusion

Linhang Zhu, Hongyang Yu

Abstract: 3D reconstruction technology has a wide range of applications, including real-time robot navigation and positioning, virtual reality, geographic information systems, and medical imaging. Reconstruction methods include lidar-based scanning and visual-sensor-based approaches. Lidar provides more accurate distance and angle information but is considerably more expensive than visual sensors, so developing algorithms that improve the quality of 3D modeling on existing hardware is a meaningful problem. With the popularization of consumer-grade depth sensors, many reconstruction methods based on depth cameras have emerged; most need only the camera's RGB-D data stream to reconstruct a scene in real time. A typical algorithm first obtains depth measurements from different viewpoints by moving the sensor, then uses the color images to estimate the sensor's pose transformation, and finally accumulates the depth and color information into a single model through the resulting transformation matrices. The fused model is updated continuously as time passes and the sensor moves, yielding a 3D model of the environment. Such methods inevitably trade off reconstruction range, reconstruction quality, and voxel scale against data-processing speed; to reconstruct in real time while preserving accuracy and range, most current algorithms rely on expensive GPU hardware. This paper starts from the classic reconstruction algorithm KinectFusion, analyzes its remaining deficiencies in camera pose estimation, 3D model representation and storage, and surface extraction, proposes improvements on this basis, and verifies them in experiments.
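The volumetric fusion step described in the abstract, accumulating each depth frame into a single model through the estimated transformation matrix, can be sketched as a KinectFusion-style truncated signed distance function (TSDF) update. The function below is an illustrative assumption, not the authors' implementation: the voxel-grid layout, variable names, and the weighted running-average update rule are generic choices in the spirit of volume fusion.

```python
import numpy as np

def integrate_frame(tsdf, weights, depth, K, T_wc, vox_origin, vox_size, trunc):
    """Fuse one depth frame into a TSDF volume via a weighted running average.

    tsdf, weights : (X, Y, Z) arrays, updated in place
    depth         : (H, W) depth image in metres (0 = invalid pixel)
    K             : 3x3 pinhole camera intrinsic matrix
    T_wc          : 4x4 world-to-camera transform (the estimated sensor pose)
    vox_origin    : world coordinates of voxel (0, 0, 0)
    vox_size      : voxel edge length in metres
    trunc         : truncation distance of the signed distance function
    """
    X, Y, Z = tsdf.shape
    H, W = depth.shape
    # World coordinates of every voxel centre
    ix, iy, iz = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z),
                             indexing="ij")
    pts_w = vox_origin + vox_size * np.stack([ix, iy, iz], -1).reshape(-1, 3)
    # Transform voxels into the camera frame and project with the pinhole model
    pts_c = (T_wc[:3, :3] @ pts_w.T + T_wc[:3, 3:4]).T
    z = pts_c[:, 2]
    u = np.round(K[0, 0] * pts_c[:, 0] / z + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pts_c[:, 1] / z + K[1, 2]).astype(int)
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    # Signed distance along the viewing ray, truncated to [-trunc, trunc];
    # only voxels in front of (or just behind) the observed surface are fused
    sdf = d - z
    fuse = (valid & (d > 0) & (sdf > -trunc)).reshape(tsdf.shape)
    t_new = np.clip(sdf / trunc, -1.0, 1.0).reshape(tsdf.shape)
    w_old = weights[fuse]
    tsdf[fuse] = (tsdf[fuse] * w_old + t_new[fuse]) / (w_old + 1.0)
    weights[fuse] = w_old + 1.0
```

Running this over the RGB-D stream (with a per-frame pose from the tracking stage) realizes the continuous accumulation the abstract describes; the dense voxel grid here also makes the stated trade-off concrete, since memory and update cost grow cubically with reconstruction range at a fixed voxel scale.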
Published in: Proceedings of the 36th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2023)
September 11 - 15, 2023
Hyatt Regency Denver
Denver, Colorado
Pages: 67 - 72
Cite this article: Zhu, Linhang, Yu, Hongyang, "Real-Time Wide-Area Scene Reconstruction Based on Volume Fusion," Proceedings of the 36th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2023), Denver, Colorado, September 2023, pp. 67-72. https://doi.org/10.33012/2023.19195