Jason H. Rife and Matthew McDermott, Tufts University

This paper quantifies a significant error source that limits the accuracy of LIDAR scan matching. Scan matching, used both in dead reckoning (also known as LIDAR odometry) and in mapping, computes the rotation and translation that best align a pair of point clouds. Perspective errors arise when a scene is viewed from different angles, so that different surfaces become visible or occluded from each point of view. Specifically, this paper models perspective errors for two objects representative of the urban landscapes in which LIDAR frequently operates: a cylindrical column and a dual-wall corner. For each object, we provide an analytical model of the perspective error for voxel-based LIDAR scan matching. We then analyze how perspective errors accumulate as a LIDAR-equipped vehicle moves past these objects.
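As background to the alignment step the abstract describes, the sketch below illustrates the core computation of scan matching under the simplest possible assumptions: two point clouds with known point-to-point correspondences and no occlusion, aligned with the classic Kabsch/SVD method. This is a generic illustration only; it is not the paper's voxel-based method, and the function name `align_point_clouds` is a hypothetical choice for this example.

```python
import numpy as np

def align_point_clouds(P, Q):
    """Estimate the rigid transform (R, t) that best maps cloud P onto
    cloud Q, assuming row i of P corresponds to row i of Q.

    Minimizes sum_i || R @ P[i] + t - Q[i] ||^2 via the Kabsch/SVD
    method. P and Q are (N, 3) arrays.
    """
    p_mean = P.mean(axis=0)
    q_mean = Q.mean(axis=0)
    # Cross-covariance of the centered clouds
    H = (P - p_mean).T @ (Q - q_mean)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t
```

In practice, scan matchers must first establish correspondences (e.g., iteratively, as in ICP, or per voxel in voxel-based methods), and it is at that stage that the perspective errors studied in the paper enter: surfaces visible in one scan may be occluded in the other, biasing the estimated transform.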