Efficient Graph Neural Network Driven Recurrent Reinforcement Learning for GNSS Position Correction

Haoli Zhao, Jianhao Tang, Zhenni Li, Zhuoyu Wu, Shengli Xie, Zhaofeng Wu, Ming Liu, Banage T.G.S. Kumara

Abstract: With the wide application of the Global Navigation Satellite System (GNSS) in autonomous driving, the demand for high-precision positioning has increased dramatically in complex multipath environments. Conventional model-based methods are constrained by strict assumptions about noise models and can hardly capture complex environmental errors. In contrast, learning-based approaches have become an important direction for solving the high-precision positioning problem because they require only simple assumptions. However, current learning-based approaches face the following issues. The existing Graph Neural Network (GNN)-based method can hardly adapt to dynamically changing driving scenarios, since it treats positioning at each epoch discretely. On the other hand, existing Reinforcement Learning (RL)-based approaches ignore the relationships among multi-constellation satellites, resulting in an inadequate description of the driving correction environment's observations. In this paper, we construct a GNN-driven recurrent reinforcement learning method that considers the GNSS measurements of multi-constellation satellites and learns a real-time correction strategy in the dynamic driving environment. To establish a comprehensive positioning-correction environment, we construct a multi-constellation graph observation, with feature vectors built from the GNSS measurements of multi-constellation satellites and edges connecting satellites within and between constellations. To make more effective use of GNSS measurements, we employ a graph embedding module to process the multi-constellation graph inputs, extracting hidden topological features to form compact states that capture the relationships between multi-constellation satellites for the RL environment.
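To make the graph-observation idea concrete, the sketch below builds a toy multi-constellation graph and one graph-convolution embedding step. This is an illustrative assumption, not the paper's implementation: the per-satellite features, the intra-/inter-constellation edge weights (1.0 vs 0.5), and the layer sizes are all hypothetical.

```python
# Hypothetical sketch: multi-constellation graph observation + one
# GCN-style embedding layer (not the authors' actual architecture).
import numpy as np

def build_graph(constellations):
    """constellations: length-N list of constellation ids (e.g. 'GPS', 'BDS').
    Returns a dense symmetric adjacency with self-loops; edges within a
    constellation get weight 1.0, edges between constellations weight 0.5
    (assumed weighting, for illustration only)."""
    n = len(constellations)
    adj = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            w = 1.0 if constellations[i] == constellations[j] else 0.5
            adj[i, j] = adj[j, i] = w
    return adj

def gcn_embed(features, adj, weight):
    """One graph-convolution layer: degree-normalized neighbor
    aggregation followed by a linear map and ReLU."""
    deg = adj.sum(axis=1, keepdims=True)
    h = (adj / deg) @ features @ weight
    return np.maximum(h, 0.0)

rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 4))   # 6 satellites, 4 measurement features each
cons = ['GPS', 'GPS', 'BDS', 'BDS', 'GAL', 'GAL']
adj = build_graph(cons)
# Mean-pool node embeddings into one compact state vector for the RL agent.
state = gcn_embed(feats, adj, rng.normal(size=(4, 8))).mean(axis=0)
print(state.shape)  # (8,)
```

Mean pooling over nodes is one simple way to get a fixed-size state when the number of visible satellites varies between epochs; the paper's embedding module may pool differently.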
Finally, we construct a recurrent actor-critic RL model with a cumulative reward and a continuous action space to exploit historical information and achieve dynamic satellite positioning correction. The performance of the proposed model is validated on the Google Smartphone Decimeter Challenge (GSDC) dataset, with Android raw GNSS measurements, and on our GNSS dataset collected in Shanghai (SHGNSS), with base and rover GNSS measurements. The experimental results show that our algorithm outperforms state-of-the-art model-based and learning-based approaches, with better correction performance in both urban and semi-urban areas, e.g., a 26% improvement on the GSDC urban dataset and about a 10% improvement on the SHGNSS semi-urban dataset over the baseline.
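The recurrent actor-critic step can be sketched as follows: a GRU cell carries historical information across epochs, an actor head emits a bounded continuous 3-D position-correction action, and a critic head estimates the state value. This is a minimal assumed sketch, not the paper's model; all dimensions, the tanh action bound, and the class name are hypothetical.

```python
# Hypothetical sketch: recurrent actor-critic forward pass (GRU cell +
# continuous-action actor head + value head). Not the authors' model.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RecurrentActorCritic:
    def __init__(self, state_dim, hidden_dim, action_dim, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1
        d = state_dim + hidden_dim
        self.Wz = rng.normal(scale=s, size=(hidden_dim, d))  # update gate
        self.Wr = rng.normal(scale=s, size=(hidden_dim, d))  # reset gate
        self.Wh = rng.normal(scale=s, size=(hidden_dim, d))  # candidate state
        self.Wa = rng.normal(scale=s, size=(action_dim, hidden_dim))  # actor mean
        self.Wv = rng.normal(scale=s, size=(1, hidden_dim))           # critic value

    def step(self, state, h):
        x = np.concatenate([state, h])
        z = sigmoid(self.Wz @ x)
        r = sigmoid(self.Wr @ x)
        h_tilde = np.tanh(self.Wh @ np.concatenate([state, r * h]))
        h_new = (1.0 - z) * h + z * h_tilde      # GRU hidden update
        action = np.tanh(self.Wa @ h_new)        # bounded 3-D correction action
        value = float(self.Wv @ h_new)           # state-value estimate
        return action, value, h_new

model = RecurrentActorCritic(state_dim=8, hidden_dim=16, action_dim=3)
h = np.zeros(16)
for t in range(5):                               # roll over an epoch sequence
    state = np.full(8, 0.1)                      # placeholder graph-embedded state
    action, value, h = model.step(state, h)
print(action.shape)  # (3,)
```

The tanh output represents a deterministic action mean; a full continuous-action RL agent would typically sample from a Gaussian around this mean during training and optimize the cumulative reward described in the abstract.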
Published in: Proceedings of the 36th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2023)
September 11 - 15, 2023
Hyatt Regency Denver
Denver, Colorado
Pages: 216 - 230
Cite this article: Zhao, Haoli, Tang, Jianhao, Li, Zhenni, Wu, Zhuoyu, Xie, Shengli, Wu, Zhaofeng, Liu, Ming, Kumara, Banage T.G.S., "Efficient Graph Neural Network Driven Recurrent Reinforcement Learning for GNSS Position Correction," Proceedings of the 36th International Technical Meeting of the Satellite Division of The Institute of Navigation (ION GNSS+ 2023), Denver, Colorado, September 2023, pp. 216-230. https://doi.org/10.33012/2023.19313