Abstract: Unmanned aerial vehicles (UAVs) have been successfully employed in a wide variety of applications such as road surveying, precision agriculture, landslide monitoring, cultural heritage mapping, and pipeline monitoring. All of these applications require an accurate and stable navigation solution. Most commercially available UAVs currently rely on the integration of Global Navigation Satellite Systems (GNSS) and Inertial Navigation Systems (INS) to estimate position, velocity, and attitude. The small form factor of modern UAVs allows them to operate in challenging environments such as urban and natural canyons, where GNSS availability cannot be guaranteed; during GNSS signal outages the navigation solution deteriorates rapidly due to the drift of the inertial navigation solution. Additional aiding sensors are therefore crucial to limit the accumulated errors of the INS. Onboard cameras can support the navigation solution during GNSS outage periods, and a variety of monocular visual odometry (VO) techniques based on photogrammetric and Structure from Motion (SfM) approaches have been proposed to assist the navigation estimation process. The main problem with monocular VO is the loss of scale when neither external measurements nor a priori knowledge of the surrounding environment is available; moreover, the camera pose estimated by VO is prone to drift over time. This paper introduces a novel approach for estimating the navigation states of a UAV by integrating the visual information obtained from a monocular camera with Inertial Measurement Unit (IMU) observations via an Extended Kalman Filter (EKF). Most current monocular VO algorithms rely on a calibrated camera model and apply conventional photogrammetric and SfM approaches. While these approaches can estimate the relative rotation and translation by tracking image features and applying geometric constraints, they cannot estimate the motion scale using only the image features.
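The scale ambiguity described in the abstract can be illustrated numerically: scaling the scene depth and the inter-frame baseline by the same factor leaves the image projections unchanged, so no image-only method can recover the true scale. Below is a minimal sketch of this effect; the point cloud, the baseline `t`, and the normalized pinhole model are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def project(points_cam):
    """Normalized pinhole projection of 3-D points given in the camera frame."""
    return points_cam[:, :2] / points_cam[:, 2:3]

rng = np.random.default_rng(0)
points = rng.uniform([-2.0, -2.0, 4.0], [2.0, 2.0, 8.0], (20, 3))  # scene points
t = np.array([0.3, 0.0, 0.1])  # hypothetical camera translation between frames

images = []
for s in (1.0, 5.0):  # scale scene depth and baseline by the same factor s
    img1 = project(s * points)          # view from frame 1
    img2 = project(s * points - s * t)  # view from frame 2
    images.append((img1, img2))

# Both scaled worlds produce identical image measurements, so monocular
# features alone cannot distinguish them; an external aid (e.g. an IMU)
# is needed to fix the motion scale.
same = np.allclose(images[0][0], images[1][0]) and np.allclose(images[0][1], images[1][1])
print(same)  # True
```

This is why the paper fuses the up-to-scale visual measurements with IMU observations in an EKF: the inertial data provide the metric information that the monocular images lack.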
Proceedings of the 2017 International Technical Meeting of The Institute of Navigation
January 30 - February 2, 2017
Hyatt Regency Monterey
Pages: 856 - 865
Cite this article:
Mostafa, M.M., Moussa, A.M., El-Sheimy, Naser, Sesay, Abu B., "Optical Flow Based Approach for Vision Aided Inertial Navigation Using Regression Trees," Proceedings of the 2017 International Technical Meeting of The Institute of Navigation, Monterey, California, January 2017, pp. 856-865.