GASx: Explainable Artificial Intelligence For Detecting GPS Spoofing Attacks
Zhengyang Fan, Xin Tian, Sixiao Wei, Dan Shen, Genshe Chen, Intelligent Fusion Technology, Inc.; Khanh Pham and Erik Blasch, Air Force Research Lab
Location: Seaview Ballroom
Date/Time: Thursday, Jan. 25, 11:48 a.m.
Peer Reviewed
Unmanned aerial systems rely heavily on the Global Positioning System (GPS) for navigation. However, GPS signals are subject to various threats, including spoofing attacks. While many machine learning methods have been successfully applied to detect spoofing attacks against such systems, the focus has mainly been on building accurate prediction models rather than on explaining the reasons behind their predictions. We believe that understanding the underlying factors that lead to a signal being classified as spoofed is crucial for gaining insight and effectively mitigating the effects of spoofing. In this paper, we propose a machine learning approach that incorporates explainable artificial intelligence techniques, specifically Shapley Additive Explanations (SHAP), to analyze why a signal is classified as spoofed. Our approach employs a tree-based ensemble model and achieves a high F1 score of 0.956 across three different types of spoofing attacks. By leveraging SHAP, our analysis uncovers distinctive characteristics associated with each type of spoofing, providing valuable insights into the factors that contribute to a signal being classified as spoofed.
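The abstract describes a two-step workflow: train a tree-based ensemble on receiver-level signal features, then use SHAP to attribute each classification to individual features. The sketch below illustrates that general pattern with the shap library; it is not the authors' code. The RandomForestClassifier, the feature names, and the synthetic data are assumptions standing in for the paper's actual ensemble, feature set, and GPS dataset.

```python
# Minimal sketch: tree-based ensemble + SHAP attributions for spoofing detection.
# All feature names and data here are hypothetical placeholders.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Hypothetical GPS receiver features; the paper's actual feature set may differ.
feature_names = ["carrier_to_noise", "doppler_offset", "pseudorange_residual",
                 "signal_power", "carrier_phase_delta"]

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, len(feature_names)))   # placeholder feature matrix
y = rng.integers(0, 4, size=1000)                 # 0 = authentic, 1-3 = spoofing types

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Tree-based ensemble classifier (the abstract does not name the specific ensemble).
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("macro F1:", f1_score(y_test, model.predict(X_test), average="macro"))

# SHAP values quantify how much each feature pushes a prediction toward a class.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# For multiclass models, SHAP returns one attribution matrix per class
# (as a list in older shap versions, or a 3-D array in newer ones);
# inspect the class corresponding to one spoofing type.
vals = shap_values[1] if isinstance(shap_values, list) else shap_values[:, :, 1]
shap.summary_plot(vals, X_test, feature_names=feature_names)
```

The summary plot ranks features by mean absolute SHAP value, which is the kind of per-attack-type characterization the abstract refers to when it says the analysis uncovers distinctive characteristics of each spoofing type.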