Abstract: | This work presents a three-dimensional control algorithm based on reinforcement learning that guides an attacking hunter drone capable of performing a global navigation satellite system (GNSS) repeater attack on the GNSS receiver of a target invader drone. Considering the mission and movement requirements of the hunter drone, a Q-learning algorithm was developed in which the table of possible state-action transitions is built from the directional actions the vehicle can take and the consequences of each action. The learning capability of the proposed algorithm arises from trial and error by the agent. The penalty is computed from the error between the invader's position and the position desired by the hunter for the attacked drone. The developed algorithm is tested using a software-in-the-loop (SITL) implementation based on the ArduPilot platform. SITL simulations are performed in a developed testbed that emulates operational scenarios in which an unmanned aerial vehicle (UAV) is hijacked and then controlled by an attacking UAV until it reaches the final position desired by the hunter, usually a secure area where the vehicle can be captured without being destroyed. Results, including error metrics and action time, are discussed for different mission scenarios. |
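The abstract describes a Q-learning scheme whose penalty is derived from the error between the invader's position and the hunter's desired position. The following is a minimal illustrative sketch of such a tabular Q-learning loop on a discretized 3D grid; the grid size, the six axis-aligned actions, the reward shaping, and the start/goal positions are assumptions made for this example, not the paper's exact design.

```python
import numpy as np

N = 5                      # discretized positions per axis (assumed)
ACTIONS = np.array([       # six axis-aligned unit moves (assumed action set)
    [ 1, 0, 0], [-1, 0, 0],
    [ 0, 1, 0], [ 0,-1, 0],
    [ 0, 0, 1], [ 0, 0,-1],
])
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2   # learning rate, discount, exploration

def state_index(pos):
    """Map a 3D grid position to a flat Q-table row index."""
    x, y, z = pos
    return (x * N + y) * N + z

def step(pos, a, goal):
    """Apply action a, clip to the grid, penalize distance to the goal."""
    nxt = np.clip(pos + ACTIONS[a], 0, N - 1)
    reward = -np.linalg.norm(nxt - goal)   # penalty grows with position error
    return nxt, reward

rng = np.random.default_rng(0)
Q = np.zeros((N**3, len(ACTIONS)))         # one row per state, one column per action
goal = np.array([N - 1, N - 1, 0])         # desired capture position (assumed)

for episode in range(300):
    pos = np.array([0, 0, N - 1])          # invader's initial position (assumed)
    for _ in range(50):
        s = state_index(pos)
        # epsilon-greedy action selection
        a = rng.integers(len(ACTIONS)) if rng.random() < EPS else int(np.argmax(Q[s]))
        nxt, r = step(pos, a, goal)
        s2 = state_index(nxt)
        # standard temporal-difference Q-learning update
        Q[s, a] += ALPHA * (r + GAMMA * np.max(Q[s2]) - Q[s, a])
        pos = nxt
        if np.array_equal(pos, goal):
            break

# Greedy rollout with the learned table: the drone should approach the goal
pos = np.array([0, 0, N - 1])
for _ in range(4 * N):
    pos, _ = step(pos, int(np.argmax(Q[state_index(pos)])), goal)
    if np.array_equal(pos, goal):
        break
```

Because each action is penalized by the resulting distance to the goal, the greedy policy learned from this table steers the state toward the capture position; the paper's SITL setup would replace this toy grid with the hijacked UAV's actual guidance loop.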
Published in: |
2020 IEEE/ION Position, Location and Navigation Symposium (PLANS), April 20-23, 2020, Hilton Portland Downtown, Portland, Oregon |
Pages: | 91 - 99 |
Cite this article: | Silva, Douglas L. da, Antreich, Felix, Coutinho, Olympio L., Machado, Renato, "Q-Learning Applied to Soft-Kill Countermeasures for Unmanned Aerial Vehicles (UAVs)," 2020 IEEE/ION Position, Location and Navigation Symposium (PLANS), Portland, Oregon, April 2020, pp. 91-99. https://doi.org/10.1109/PLANS46316.2020.9110222 |