Vision-Based Navigation for Asteroid Explorer
Yu-Tse Hsieh, Shau-Shiun Jan, Kai-Wei Chiang, National Cheng Kung University, Taiwan
Planetary and asteroid exploration is an essential way to unveil the mysteries of space. To discover whether there is any life or water beyond Earth, we must take a closer look. Autonomous precision landing is one of the crucial steps for asteroid exploration and research. Traditionally, an inertial navigation system (INS) and light detection and ranging (LiDAR) are used; however, both of these sensors are expensive. Thus, the objective of this research is to develop a method that uses a low-cost mono-camera to provide vision-based navigation for an asteroid explorer during the descent phase. Terrain relative visual navigation (TRVN) is integrated with terrain absolute visual navigation (TAVN) to achieve precision landing on the asteroid surface. This research chooses the nearest celestial body, the Moon, as the simulation environment. The explorer landing process is based on the Planet and Asteroid Natural scene Generation Utility (PANGU); that is, this research uses PANGU to simulate the explorer's camera vision as the input of the algorithm. In recent years, visual odometry (VO) has developed rapidly, using a low-cost sensor, the camera, to provide the translation and rotation of a vehicle. In general, VO accumulates error arising from scale drift, rotation drift, and translation drift. In addition, a mono-camera cannot provide the scale factor relating its estimates to the real world. Hence, it is important to integrate monocular VO with an absolute navigation algorithm to reduce the accumulated error and estimate the scale factor.
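The scale ambiguity of a single camera follows directly from the pinhole projection model: scaling the entire scene and the camera translation by the same factor leaves every pixel unchanged, so no image pair can reveal the metric scale. The sketch below demonstrates this with NumPy; the intrinsic matrix, scene points, and translation are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical pinhole intrinsics (focal length 500 px, principal point 320/240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(points_cam):
    """Project Nx3 camera-frame points to Nx2 pixel coordinates."""
    uvw = points_cam @ K.T
    return uvw[:, :2] / uvw[:, 2:3]

# A toy scene and a camera translation between two frames.
scene = np.array([[0.5, 0.2, 4.0], [-0.3, 0.1, 5.0], [0.2, -0.4, 6.0]])
t = np.array([0.1, 0.0, 0.2])      # true inter-frame camera translation

pix_a = project(scene)             # first frame
pix_b = project(scene - t)         # second frame, after the camera moves by t

# Scale the whole scene AND the translation by any factor s:
# both images are pixel-identical, so the metric scale is unobservable.
s = 3.0
pix_a_scaled = project(s * scene)
pix_b_scaled = project(s * scene - s * t)

assert np.allclose(pix_a, pix_a_scaled)
assert np.allclose(pix_b, pix_b_scaled)
```

This is precisely why an absolute source such as TAVN is needed: it anchors the reconstruction to the moon-fixed frame and makes the scale factor observable.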
Consequently, this research first applies TRVN to obtain the explorer's relative position and velocity. Several pixels are selected from the input lunar surface image and tracked across several frames to estimate the transformation of the camera. A key frame (KF) is generated to store the tracked-point information, which is continually optimized to refine the transformation estimate. Secondly, this research presents a TAVN algorithm, in which a feature point database covering a large lunar surface area is created; it contains each feature point's descriptor and three-dimensional position in the moon-fixed coordinate frame. When a lunar surface image is given as input, the algorithm detects all feature points in the image and matches their descriptors against the database. The camera's absolute position in the moon-fixed frame is then derived from the feature points matched with the database and their pixel positions in the image. However, matching descriptors against the database takes time, so the absolute position is produced at a lower rate. Moreover, when the explorer descends below a certain height, the matched feature points in the camera's view may be insufficient to calculate the absolute position. In these situations, TRVN is essential to provide navigation for the explorer. The extended Kalman filter (EKF) is utilized in this study as the navigation engine to integrate TRVN and TAVN. The EKF provides integrated navigation solutions through loose coupling: it corrects TRVN's accumulated error and estimates the scale factor for TRVN by using the absolute position from TAVN. In addition, when the relative navigation tracking is lost, TAVN can provide positions for the explorer at a lower sampling rate. Conversely, while TAVN is still processing or the camera's view lacks sufficient matched feature points, TRVN provides measurements for the navigation engine to output the navigation solution.
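The loose-coupling idea above can be sketched with a one-dimensional toy EKF whose state holds the position and the unknown VO scale factor: TRVN's up-to-scale displacement drives the prediction, and a low-rate absolute TAVN fix drives the update. All numbers (noise levels, update rate, true scale) are illustrative assumptions, not values from the paper.

```python
import numpy as np

# State x = [position p, VO scale factor s]; TAVN observes position only.
x = np.array([0.0, 1.0])           # initial guess: p = 0 m, scale = 1
P = np.diag([1.0, 4.0])            # initial covariance
Q = np.diag([0.01, 1e-4])          # process noise
R = np.array([[0.05]])             # TAVN measurement noise
H = np.array([[1.0, 0.0]])         # measurement matrix: z = p

true_scale = 2.0                   # metres per VO unit (unknown to the filter)
p_true = 0.0

for k in range(1, 51):
    d_vo = 0.5                     # TRVN's up-to-scale displacement this step
    p_true += true_scale * d_vo    # truth moves 1.0 m per step

    # EKF prediction: p <- p + s * d_vo, s constant.
    F = np.array([[1.0, d_vo],
                  [0.0, 1.0]])
    x = np.array([x[0] + x[1] * d_vo, x[1]])
    P = F @ P @ F.T + Q

    # EKF update with an absolute TAVN fix every 5th step (lower rate).
    if k % 5 == 0:
        z = np.array([p_true])     # noiseless fix, for clarity
        y = z - H @ x              # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P

print(f"estimated scale = {x[1]:.3f} (true {true_scale})")
```

Because the position error grows in proportion to the scale error between absolute fixes, the cross-covariance built up during prediction lets each TAVN update pull the scale estimate toward its true value, mirroring how the loose coupling corrects TRVN drift.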
To conclude, this paper presents a mono-camera vision-based navigation method for an asteroid explorer that integrates TRVN and TAVN to achieve precision landing on the asteroid surface. The experimental results, including the positioning error statistics of TRVN, TAVN, and the integrated navigation engine, are evaluated in the paper. As the experimental results show, the proposed navigation algorithm improves the positioning performance.