
Session B2a: Advancements in Navigation Algorithms

Visual Segmentation for Autonomous Aircraft Landing on Austere Runways
Alissa Owens, Clark Taylor, and Scott Nykl, Air Force Institute of Technology
Location: Beacon B

To enable fully autonomous aircraft, landing without human intervention must be achieved. While significant prior research has addressed autonomous landing of aircraft, these techniques generally assume one of two things: (1) the presence of GNSS such that real-time, high-precision position can be determined at all times, or (2) well-marked, predictable markings on a runway to enable high-quality localization of the landing aircraft from an on-board camera. In this paper, we introduce a capability that (a) does not depend on GNSS and (b) can be performed on “austere” runways, meaning runways without any specific markings. For example, a dirt or grass runway used by small aircraft will not have specific markers present, yet the ability to land on such runways could greatly expand the environments in which autonomous aircraft can be used.
To enable landing on an austere runway, we assume that imagery of the desired landing spot has been captured by satellite or previous aerial flights, as is commonly available in Google Earth. Using this previously captured imagery, a machine learning algorithm is trained to identify the runway in images captured by the incoming aircraft. This machine learning algorithm then outputs a “semantic segmentation,” or mask, identifying which pixels in the image correspond to the runway. Using a corresponding mask from prior, geo-registered satellite imagery, the pose of the incoming aircraft with respect to the satellite imagery can be determined, enabling full pose estimation of the aircraft. This pose estimate can then be used to land the aircraft.
More specifically, we use 3D simulation and modeling to demonstrate that only a single orthogonal, high-resolution satellite image is needed to train a machine learning model to detect surrounding features and outline the runway. A YOLO semantic segmentation model was trained to detect the austere runway using approximately 5,000 training images derived from that single orthogonal image. These images represented various seasonal conditions (summer, winter, fall, and spring), enabling robust detection across different environments. In the final paper, we will show results of successful segmentation of a dirt road captured by a small UAV, where the training images for that road were drawn entirely from previously captured overhead imagery.
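As a rough illustration of this training step, the sketch below assumes the Ultralytics YOLO segmentation API; the checkpoint, dataset configuration, and hyperparameters are placeholders rather than the values used in the paper.

```python
# Minimal sketch: fine-tuning a YOLO segmentation model on views rendered
# from a single orthogonal satellite image (Ultralytics API assumed; file
# names and hyperparameters are illustrative, not the authors' values).
from ultralytics import YOLO

# Start from a pretrained segmentation checkpoint and fine-tune on the
# runway dataset (pixels labeled "runway" vs. background).
model = YOLO("yolov8n-seg.pt")
model.train(
    data="austere_runway.yaml",  # hypothetical config pointing at ~5,000 rendered views
    epochs=100,
    imgsz=640,
)

# Inference on an incoming-aircraft frame: the result carries the binary
# mask marking which pixels belong to the runway.
results = model("approach_frame.jpg")
runway_mask = results[0].masks  # segmentation mask(s) for the detected runway
```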
Once the YOLO model has been successfully trained and the runway identified in an image, further analysis is required to compute the aircraft's relative position and orientation (pose). The inference output of the YOLO semantic segmentation model is a masked image in which the runway is white and the remainder of the scene is black. Jacobian images, which represent the partial derivatives of image residuals with respect to camera pose parameters, quantify how small changes in camera translation and rotation affect pixel movements, guiding iterative updates that refine the pose estimate. An Extended Kalman Filter (EKF) predicts the camera’s state (pose) from previous states, with updates from new observations, each being the next masked inference of the runway from the YOLO model. In this way, the Jacobian images allow the observed runway mask to be registered against the geo-registered reference mask as a function of the camera’s pose.
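The registration step can be pictured with the hedged sketch below, which refines a 6-DoF pose by iterating on the Jacobian of the mask residual with respect to the pose. The render_reference_mask callable, which would project the geo-registered satellite mask into a candidate camera view, is a hypothetical stand-in for the paper's rendering pipeline, and the finite-difference Jacobian stands in for the analytic Jacobian images.

```python
# Minimal sketch of mask registration: refine a 6-DoF camera pose by
# iterating on Jacobians of the image residual with respect to the pose.
# render_reference_mask is a hypothetical helper supplied by the caller.
import numpy as np

def refine_pose(pose, observed_mask, render_reference_mask, iters=10, eps=1e-3):
    """pose: 6-vector [x, y, z, roll, pitch, yaw]; masks: float arrays in [0, 1]."""
    pose = np.asarray(pose, dtype=float)
    for _ in range(iters):
        predicted = render_reference_mask(pose)
        r = (predicted - observed_mask).ravel()          # image residual

        # Jacobian image: d(residual)/d(pose), one column per pose parameter,
        # approximated here by finite differences.
        J = np.zeros((r.size, 6))
        for k in range(6):
            perturbed = pose.copy()
            perturbed[k] += eps
            J[:, k] = (render_reference_mask(perturbed) - predicted).ravel() / eps

        # Gauss-Newton step from the normal equations.
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
        pose = pose + delta
        if np.linalg.norm(delta) < 1e-6:
            break
    return pose
```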
These two methods combined (YOLO semantic segmentation and registration using Jacobian images) yield a complete system for estimating the pose of an aircraft with respect to an austere runway. Complete results from both simulation and real flight will be presented, demonstrating the pose accuracy of the full technique.
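A per-frame loop combining the two pieces might look like the sketch below, where a simple EKF with an identity measurement model carries the pose between frames. The filter design, noise parameters, and helper callables are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the combined segmentation + registration + EKF loop.
# segment_runway and register_mask are caller-supplied callables (e.g. a
# wrapper around the trained YOLO model and the refine_pose sketch above).
import numpy as np

def run_pose_pipeline(frames, segment_runway, register_mask, pose0):
    """Yield a refined 6-DoF pose estimate for each incoming camera frame."""
    pose = np.asarray(pose0, dtype=float)   # [x, y, z, roll, pitch, yaw]
    P = np.eye(6)                           # state covariance (illustrative)
    Q = np.eye(6) * 1e-2                    # process noise (assumed)
    R = np.eye(6) * 1e-1                    # measurement noise (assumed)

    for frame in frames:
        # Predict: carry the previous pose forward, inflating uncertainty.
        P = P + Q

        # Measure: segment the runway, then register the mask against the
        # geo-registered reference to obtain an observed pose.
        mask = segment_runway(frame)
        z = register_mask(pose, mask)

        # Update: identity measurement model, since the pose is observed directly.
        K = P @ np.linalg.inv(P + R)
        pose = pose + K @ (z - pose)
        P = (np.eye(6) - K) @ P

        yield pose
```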
