PLV-GVINS: A Tightly Coupled Visual-Inertial GNSS State Estimator Using Points, Lines and Vanishing Points
Zongqi Yu, Gang Liu, Yiding Zhan, Xiaowei Cui, Mingquan Lu, Tsinghua University
In recent years, GNSS-VIO (Global Navigation Satellite System - Visual Inertial Odometry) systems have gained significant attention. GNSS can provide precise timestamps and globally drift-free measurements for VIO, while VIO can compensate for the performance deficiencies of GNSS in indoor and urban canyon scenarios.
In open-sky scenarios or during brief GNSS outages, GNSS-VIO positioning systems can achieve decimeter- to centimeter-level accuracy. However, during prolonged GNSS outages, GNSS-VIO inevitably suffers from long-term drift. To ensure robustness and high precision, researchers often rely on high-quality cameras that have low distortion but narrow fields of view and low resolution (752x480), and are expensive. Their high cost limits widespread practical application. Line features effectively capture the geometric structure of a scene and can therefore enhance the accuracy and robustness of the positioning system. However, for high-resolution industrial cameras with significant distortion, line feature extraction requires the images to be cropped and undistorted beforehand, which severely degrades the system's real-time performance and even its usability. Additionally, relying solely on point features fails to meet positioning needs in texture-sparse scenes. Therefore, exploiting visual features effectively and thoroughly is one of the critical directions for coping with prolonged GNSS signal outages.
In this paper, we propose a novel line extraction method that deeply integrates the camera's distortion parameters into the line extraction algorithm, allowing "curves" to be extracted directly from distorted images. This method accommodates the demanding imaging conditions of industrial cameras and enhances the system's real-time performance and compatibility.
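As a rough illustration of the underlying idea (this is not the authors' extractor; the intrinsics K, distortion coefficients dist, and the helper curve_is_projected_line below are illustrative assumptions using OpenCV's radial-tangential model), a candidate "curve" can be validated by passing only a handful of sampled edge points through the distortion model, rather than remapping the whole image, and testing their collinearity on the normalized plane:

import numpy as np
import cv2

# Illustrative pinhole intrinsics and radial-tangential distortion coefficients
# (placeholder values, not taken from the paper).
K = np.array([[458.0,   0.0, 376.0],
              [  0.0, 457.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.28, 0.07, 1e-4, 1e-5])  # k1, k2, p1, p2

def curve_is_projected_line(sample_pts_px, tol=1e-3):
    """Decide whether a chain of edge samples taken from the *distorted* image
    is the projection of a straight 3D line: undistort only those few samples
    and test their collinearity on the normalized image plane."""
    pts = np.asarray(sample_pts_px, dtype=np.float64).reshape(-1, 1, 2)
    norm = cv2.undistortPoints(pts, K, dist).reshape(-1, 2)  # normalized coords
    homo = np.hstack([norm, np.ones((len(norm), 1))])        # homogeneous points
    # Total-least-squares line fit: find n with n^T x = 0 minimizing the residuals.
    _, _, vt = np.linalg.svd(homo)
    n = vt[-1] / np.linalg.norm(vt[-1][:2])
    residuals = np.abs(homo @ n)  # point-to-line distances on the normalized plane
    return bool(residuals.max() < tol), norm

# Usage: samples traced along one detected "curve" in the distorted image.
samples = [(120.0, 80.5), (180.2, 96.1), (240.7, 110.9), (300.1, 124.8)]
accepted, normalized_pts = curve_is_projected_line(samples)
print("accepted as a line feature:", accepted)

The point of the sketch is that only the sparse samples pass through the distortion model, which is what allows line-like structure to be recovered from the raw distorted image without a full undistortion step.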
The main contributions of this paper are as follows:
• A fast "curve" extractor that directly extracts the true linear geometric structure from distorted images without the need for undistortion.
• Thorough exploitation of visual elements based on points, lines, and vanishing points to improve the overall accuracy of the system.
• A factor graph optimization algorithm that jointly optimizes the odometry states using point, line, and vanishing point features as well as GNSS pseudorange and carrier phase.
The state estimator consists of a front-end and a back-end. The front-end detects point, "curve," and vanishing point features, projects them onto the normalized plane, and sends them to the back-end. The back-end receives these visual features together with IMU accelerations and angular velocities, GNSS pseudorange, carrier phase, ephemeris data, and other information. After the corresponding residual factors are constructed for each type of sensor data, all data are jointly optimized within the graph optimization model. By fully leveraging visual feature elements and GNSS information, we obtain a more robust state estimate.
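Although the abstract does not spell out the exact residual definitions or weights, the joint optimization plausibly takes the standard tightly coupled sliding-window form (the notation below is an assumption in the style of VINS-Mono/GVINS, not the authors' own): a marginalization prior, IMU preintegration residuals, reprojection residuals of point, line, and vanishing-point features, and GNSS pseudorange and carrier-phase residuals are minimized jointly over the window states $\mathcal{X}$:

$$
\min_{\mathcal{X}} \Big\{ \|r_p - H_p \mathcal{X}\|^2
+ \sum_{k} \big\|r_{\mathcal{B}}\big(z_{b_{k+1}}^{b_k}, \mathcal{X}\big)\big\|_{P_{b_{k+1}}^{b_k}}^2
+ \sum_{(i,j)} \rho\big(\|r_{pt}(z_i^{c_j}, \mathcal{X})\|^2\big)
+ \sum_{(l,j)} \rho\big(\|r_{line}(z_l^{c_j}, \mathcal{X})\|^2\big)
+ \sum_{(v,j)} \rho\big(\|r_{vp}(z_v^{c_j}, \mathcal{X})\|^2\big)
+ \sum_{s} \|r_{pr}(z_s, \mathcal{X})\|^2
+ \sum_{s} \|r_{cp}(z_s, \mathcal{X})\|^2 \Big\},
$$

where $\rho(\cdot)$ denotes a robust kernel such as Huber.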
We conducted simulations and real-world experiments to validate the proposed method. The baselines for the simulation experiments were PL-VINS, GVINS, and GICI-LIB. Real-world experiments were carried out in three typical scenarios: an open outdoor environment, an outdoor street scene with buildings and trees, and a completely GNSS-denied indoor corridor. The results indicate that our state estimator outperforms other state-of-the-art GNSS-VIO fusion systems, performing well in all test environments and achieving centimeter- to sub-meter-level positioning accuracy.
The proposed method has several advantages:
• Compatibility: The algorithm adapts to various types of cameras, making it easy to deploy and apply.
• Low computation: Unlike other methods, line feature extraction does not require image undistortion, which significantly reduces computational cost.
• Robustness: The estimator effectively leverages visual features and tightly couples them with GNSS pseudorange and carrier phase, better handling situations with few or no visible satellites.