Sensor calibration is a fundamental task in robotics and indoor navigation research that relies on heterogeneous data, such as visual images and point clouds collected by Light Detection and Ranging (LiDAR) sensors. In this paper, we take advantage of our indoor navigation robotic platform to develop novel sensor fusion algorithms operating on different levels of features. The recording platform is equipped with a Velodyne HDL-64E LiDAR and a monocular camera. The dataset, named L2VCali, is composed of raw point clouds, images, and ground truth (GT) for both intrinsic and extrinsic calibration at each epoch. The data are collected in different types of indoor environments, including open areas, Manhattan-world rooms, and hallways. Results from state-of-the-art algorithms reveal that calibration accuracy degrades as indoor environments become complex and contain repetitive features. The dataset aims to serve as a benchmark for evaluating the robustness of calibration algorithms by providing both typical and challenging scenarios.