Abstract: | This paper proposes a novel system for navigation under incomplete localization using reinforcement learning. Our system consists of two main modules: a Bayesian module and an action module. The Bayesian module updates the belief, a probability distribution over all locations, using the history of actions and observations. The action module is responsible for decision-making based on the current belief and operates in two stages. In the first stage, we use a Deep Q-Network (DQN) to move the agent from an ordinary place to a transportation hub, a place of relatively high localization certainty. In the second stage, actions are selected by weighting the shortest distance to the destination with the current belief. Our system can successfully solve large-scale navigation problems that general POMDP models cannot, due to their computational cost and the difficulty of model construction. Unlike traditional navigation methods, which passively receive localization results, our method has exploration capability: the actions selected by the DQN actively increase localization accuracy. We perform extensive experiments, and the results show that our model navigates robustly with a high success rate under incomplete localization. In the case of 30% localization uncertainty, the navigation success rate surpasses 80%. |
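The two modules described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: `update_belief` is a standard discrete Bayes-filter step (predict with the action's transition model, correct with the observation likelihood), and `select_action` is a hypothetical form of the stage-two rule, choosing the action that minimizes the belief-weighted shortest distance to the destination. All function names, the deterministic `successor` model, and the toy corridor are assumptions for illustration.

```python
def update_belief(belief, transition, obs_likelihood):
    """One discrete Bayes-filter step over all candidate locations.

    belief[s]          -- prior probability of being at location s
    transition[s][s']  -- P(s' | s, a) for the executed action a
    obs_likelihood[s]  -- P(o | s) for the received observation o
    """
    n = len(belief)
    # Prediction: propagate the belief through the action/motion model.
    predicted = [sum(transition[s_prev][s] * belief[s_prev] for s_prev in range(n))
                 for s in range(n)]
    # Correction: weight by the observation likelihood, then normalize.
    posterior = [obs_likelihood[s] * predicted[s] for s in range(n)]
    z = sum(posterior)
    return [p / z for p in posterior]


def select_action(belief, actions, successor, dist_to_goal):
    """Stage-two rule (hypothetical form): pick the action minimizing the
    belief-weighted shortest distance to the destination."""
    def expected_dist(a):
        return sum(b * dist_to_goal[successor(s, a)]
                   for s, b in enumerate(belief))
    return min(actions, key=expected_dist)


# Toy 4-cell corridor: goal at cell 3, shortest distances precomputed.
dist_to_goal = [3, 2, 1, 0]
successor = lambda s, a: max(0, min(3, s + (1 if a == "right" else -1)))
belief = [0.1, 0.7, 0.2, 0.0]
# Agent believed to be near cell 1, so moving right reduces expected distance.
action = select_action(belief, ["left", "right"], successor, dist_to_goal)
```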
Published in: |
2020 IEEE/ION Position, Location and Navigation Symposium (PLANS), April 20-23, 2020, Hilton Portland Downtown, Portland, Oregon |
Pages: | 1618 - 1624 |
Cite this article: | Xue, Wuyang, Ying, Rendong, Chu, Xiao, Miao, Ruihang, Qian, Jiuchao, Liu, Peilin, "Robust Navigation Under Incomplete Localization Using Reinforcement Learning," 2020 IEEE/ION Position, Location and Navigation Symposium (PLANS), Portland, Oregon, April 2020, pp. 1618-1624. |