Enhanced Camera-LiDAR Trilateration Using YOLO Detection and RANSAC Ranging

Travis W. Moleski and Jay P. Wilhelm

Abstract: GNSS-denied navigation presents significant challenges for uncrewed vehicles operating in urban canyons, caves, or forested environments. Existing localization methods often require specialized sensing hardware or provide navigation only in a local frame. Autonomous systems commonly carry LiDAR and RGB cameras for mapping, sensing, and obstacle avoidance, and these sensors can be leveraged for navigation by providing standalone or complementary localization solutions in global or local frames. Previous approaches correlated scanning LiDAR data with camera pixel coordinates to range unique visual landmarks, primarily using distinctively colored spherical markers for recognition. Camera-LiDAR trilateration-based positioning was enhanced by integrating YOLO-based object detection with LiDAR plane fitting using Random Sample Consensus (RANSAC). Ranging accuracy was evaluated at distances from 1.1 meters to 3.5 meters to assess performance as the true distance from the LiDAR increased. Three-dimensional position error was then computed to quantify the improvements in localization accuracy and variance gained by leveraging RANSAC for ranging, compared to extracting a single point on the landmark surface. Planar landmarks were detected, ranged, and analyzed for localization accuracy with and without plane fitting, using a three-dimensional scanning LiDAR with 128 channels of vertical resolution for ranging, a USB camera for detection, and Vicon motion capture for ground truth. RANSAC-based plane fitting improved ranging accuracy, reducing the average range error by 52.7% and the standard deviation by 55.7% compared to the centroid method. Positioning accuracy improved most in the vertical direction, where the average vertical error decreased by 88.5% and the standard deviation was reduced by 62.8%, consistent with the higher Vertical Dilution of Precision (VDOP) relative to the horizontal components.
The average Euclidean position error was reduced by 31.5% and the corresponding standard deviation by 60.5% across test locations using plane fitting instead of a single point on the landmark surface. Incorporating YOLO for detection and RANSAC for ranging increases the applicability of camera-LiDAR localization by enabling the use of common planar landmarks, rather than relying solely on uniquely colored spherical markers.
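The RANSAC ranging step described in the abstract can be illustrated with a minimal sketch: repeatedly sample three LiDAR points on the detected landmark, fit a candidate plane, keep the plane with the most inliers, and take the perpendicular sensor-to-plane distance as the range. The function names, iteration count, and inlier tolerance below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol=0.02, rng=None):
    """Fit a plane to 3-D points with RANSAC (illustrative sketch).

    Returns ((n, d), mask) where n is the unit normal and d the offset,
    so n @ p + d ~ 0 for inlier points p, and mask flags the inliers.
    """
    rng = np.random.default_rng(rng)
    best_plane, best_mask = None, None
    for _ in range(n_iters):
        # Sample three distinct points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (near-collinear) sample
            continue
        n /= norm
        d = -n @ p0
        # Inliers lie within tol of the plane (point-to-plane distance).
        mask = np.abs(points @ n + d) < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_plane, best_mask = (n, d), mask
    return best_plane, best_mask

def plane_range(sensor_origin, n, d):
    """Perpendicular distance from the sensor to the fitted plane."""
    return abs(n @ sensor_origin + d)
```

Using the whole inlier set rather than one surface point averages out per-return LiDAR noise and rejects stray returns near the landmark edges, which is consistent with the reported reduction in range error and variance.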
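The trilateration step, computing a 3-D position from ranges to known landmark locations, can be sketched as a linear least-squares problem: subtracting the first sphere equation ||x - a_0||^2 = r_0^2 from each remaining one cancels the quadratic term and leaves a linear system in x. The helper below is a generic textbook formulation under that assumption, not the paper's solver.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares trilateration from landmark positions and ranges.

    anchors: (k, 3) array of known landmark positions (k >= 4 for 3-D).
    ranges:  (k,) array of measured distances to each landmark.
    Subtracting the first sphere equation from the others linearizes
    ||x - a_i||^2 = r_i^2 into A x = b, solved in the least-squares sense.
    """
    a0, r0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With more than four landmarks the system is overdetermined and the least-squares solution averages the range errors; the geometry of the landmarks relative to the sensor sets the dilution of precision, which is why the vertical component benefits most when ranging noise shrinks.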
Published in: Proceedings of the 38th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2025)
September 8 - 12, 2025
Hilton Baltimore Inner Harbor
Baltimore, Maryland
Pages: 1964 - 1974
Cite this article: Moleski, Travis W., Wilhelm, Jay P., "Enhanced Camera-LiDAR Trilateration Using YOLO Detection and RANSAC Ranging," Proceedings of the 38th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2025), Baltimore, Maryland, September 2025, pp. 1964-1974. https://doi.org/10.33012/2025.20445