Visual Segmentation for Autonomous Aircraft Landing on Austere Runways

Alissa Owens, Clark Taylor, and Scott Nykl

Peer Reviewed

Abstract: To enable fully autonomous aircraft, landing without human intervention must be achieved. While significant previous research has been performed on autonomous landing of aircraft, these techniques generally assume one of two things: (1) the presence of GNSS systems such that real-time, high-precision position can be determined at all times or (2) well-marked, predictable markings on a runway to enable high-quality localization of the landing aircraft with respect to the runway from an on-board camera. In this paper, we introduce a capability that (a) does not depend on GNSS and (b) can be performed on “austere” runways, meaning runways without any specific markings. For example, a dirt or grass runway used by small aircraft will not have specific markers present, but the ability to land on such runways could greatly expand the environments in which autonomous aircraft can be used. To enable landing on an austere runway, we assume that imagery of the desired landing spot has been captured by satellite or previous aerial imagery (as commonly seen on Google Earth). Using this previously captured imagery, a machine learning algorithm is trained to identify the runway in images captured by the incoming aircraft. This machine learning algorithm then outputs a “semantic segmentation”, or mask, indicating which pixels in the image correspond to the runway. Using a corresponding mask from prior, geo-registered satellite imagery, the pose of the incoming aircraft with respect to the satellite imagery can be determined, enabling full pose estimation of the aircraft. This pose estimation can be used to land the aircraft. In this paper, we describe in more detail both (a) how the machine learning algorithm is trained to identify the pixels corresponding to the runway and (b) how this mask is used to determine the pose of the incoming aircraft. We present simulated results showing how effectively the semantic segmentation algorithm can be trained and also provide real results using imagery collected by a small UAV flying over a dirt road. Pose accuracy results of the complete algorithm are also presented for both simulated and real-world data.
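To make the mask-to-pose step of the abstract concrete, the following is a minimal sketch, not the authors' implementation: it assumes a segmentation model already trained on the prior imagery, a flat runway plane, known camera intrinsics K, and a hypothetical `match_to_world` helper that associates a runway-outline corner in the aircraft image with its geo-registered (East, North, 0) coordinates from the satellite mask. OpenCV's `solvePnP` stands in for whatever pose solver the paper actually uses.

```python
# Illustrative sketch only: pose of the incoming aircraft from a runway mask.
# Assumptions (not from the paper): flat runway, known intrinsics K, and a
# hypothetical match_to_world() correspondence helper.
import cv2
import numpy as np

def pose_from_runway_mask(air_mask, match_to_world, K):
    """Estimate camera pose from a predicted runway mask.

    air_mask:       HxW binary mask from the aircraft camera image, e.g.
                    air_mask = seg_model(image) > 0.5  (segmentation CNN
                    trained on prior satellite/aerial imagery).
    match_to_world: hypothetical function mapping an aircraft-image corner
                    (u, v) to its (E, N, 0) position in metres, available
                    because the satellite imagery is geo-registered.
    K:              3x3 camera intrinsic matrix.
    """
    # Use the runway outline as the feature shared by both masks.
    contours, _ = cv2.findContours(air_mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    outline = max(contours, key=cv2.contourArea)
    corners = cv2.approxPolyDP(outline, 5.0, True).reshape(-1, 2)

    # Build 2D-3D correspondences: image corners vs. geo-registered points.
    img_pts = corners.astype(np.float64)
    world_pts = np.array([match_to_world(p) for p in corners],
                         dtype=np.float64)

    # Planar PnP recovers camera rotation and translation with respect to
    # the geo-registered world frame (needs at least 4 corners).
    ok, rvec, tvec = cv2.solvePnP(world_pts, img_pts, K, None)
    if not ok:
        raise RuntimeError("PnP failed; masks may not correspond well")
    R, _ = cv2.Rodrigues(rvec)
    cam_position = -R.T @ tvec  # camera position in world coordinates
    return R, cam_position
```

Any real matching strategy between the airborne mask and the satellite mask would be more involved than corner association; the sketch is only meant to show how a geo-registered mask turns a segmentation output into a full pose estimate.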
Published in: Proceedings of the 2025 International Technical Meeting of The Institute of Navigation
January 27 - 30, 2025
Hyatt Regency Long Beach
Long Beach, California
Pages: 23 - 36
Cite this article: Owens, Alissa, Taylor, Clark, Nykl, Scott, "Visual Segmentation for Autonomous Aircraft Landing on Austere Runways," Proceedings of the 2025 International Technical Meeting of The Institute of Navigation, Long Beach, California, January 2025, pp. 23-36. https://doi.org/10.33012/2025.19972