The reliability and robustness of an integrated positioning system are crucial in robotics and autonomous navigation. Integrating an inertial navigation system (INS) with Real-Time Kinematic (RTK) Global Navigation Satellite System (GNSS) positioning alone is insufficient, as integrated navigation performance inevitably degrades in urban areas due to extended GNSS outages. For this reason, multiple variants of visual odometry (VO) have been developed in the hope of correcting INS position, attitude, and bias estimates in challenging GNSS environments. However, state-of-the-art visual-inertial odometry (VIO) algorithms lack well-defined uncertainty estimates: their noise covariances are tuned heuristically, so data-dependent tuning is needed to achieve high performance, or even convergence, of the navigation filters. To tackle this issue, we propose Voronoi Diagram-aided Visual Inertial Odometry (VD-VIO), which uses Voronoi diagrams to exploit the impact of the image-feature distribution on the final positioning and navigation solution. To model this error, we engineer a feature set specifically designed to provide insight into VO performance. A multi-layer perceptron neural network (MLPNN) then produces positional and attitudinal confidence metrics, which are used to adjust the error covariance matrix coefficients dynamically. This enables better correction of IMU position, velocity, and attitude (PVA) and bias errors, improving the overall integrated navigation solution during GNSS outages. We illustrate the efficacy of the proposed method by comparing the VIO/GNSS system's performance with and without the learned uncertainties.
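The core mechanism described above, namely, using a learned confidence metric to dynamically adjust measurement-noise covariance in the navigation filter, can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the function name, the scalar-confidence simplification (the paper uses separate positional and attitudinal metrics predicted by an MLPNN), and the `R0 / confidence` scaling rule are all our assumptions.

```python
import numpy as np

def scaled_measurement_update(x, P, z, H, R0, confidence):
    """Kalman measurement update with confidence-scaled noise covariance.

    confidence in (0, 1]: low confidence inflates R (hypothetical
    R0 / confidence rule), down-weighting the VO measurement so it
    corrects the INS state less aggressively.
    """
    R = R0 / confidence                 # inflate noise when confidence is low
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ (z - H @ x)         # state correction from VO measurement
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

With this rule, a low-confidence VO measurement (e.g., poorly distributed image features) yields a smaller Kalman gain and a larger posterior covariance, so the filter trusts the INS prediction more during that epoch.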