Title: Smart Fusion of Multi-sensor Ubiquitous Signals of Mobile Device for Localization in GNSS-denied Scenarios
Author(s): Jichao Jiao, Zhongliang Deng, Fei Li, Lianming Xu
Published in: Proceedings of the 30th International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS+ 2017)
September 25 - 29, 2017
Oregon Convention Center
Portland, Oregon
Pages: 549 - 572
Cite this article: Jiao, Jichao, Deng, Zhongliang, Li, Fei, Xu, Lianming, "Smart Fusion of Multi-sensor Ubiquitous Signals of Mobile Device for Localization in GNSS-denied Scenarios," Proceedings of the 30th International Technical Meeting of The Satellite Division of the Institute of Navigation (ION GNSS+ 2017), Portland, Oregon, September 2017, pp. 549-572.
Abstract: 1. Objectives
Positioning in indoor and outdoor environments is becoming increasingly important and has attracted considerable commercial interest. As mobile devices, which integrate powerful sensors, computational chips, and memory, have become indispensable tools in daily life, seamless indoor and outdoor positioning is now used for pedestrian positioning, unmanned ground vehicles, unmanned aerial vehicles, mobile robots, and wearable equipment. Moreover, according to the requirements of the Federal Communications Commission (FCC), 5G communication networks must provide accurate 3D location information. Global navigation satellite systems (GNSS: GPS/BDS/GLONASS/Galileo) can support outdoor positioning services with high accuracy, but they cannot provide locations in indoor environments. Therefore, radio frequency (RF) signals, including wireless local area networks (WLAN), mobile communication base stations, and dedicated infrastructure (RFID/NFC, Bluetooth), are widely used for positioning in GNSS-denied scenarios. However, positioning accuracy is degraded by multipath and non-line-of-sight (NLOS) propagation. In addition, the camera and inertial sensors, including the tri-axis accelerometer, tri-axis gyroscope, tri-axis magnetometer, and barometer, are used in some urban canyons; however, their positioning error grows with time. Recently, magnetic field fingerprinting, an infrastructure-free technology, has been proposed for calculating a user's location. However, the resolution of the magnetic sensor integrated in a mobile device is low, which results in low positioning accuracy. Therefore, multi-modal positioning technologies are attracting much attention. To achieve high-accuracy positioning, various multi-modal positioning systems have been proposed. Wang and his colleagues proposed a visual-inertial fusion navigation system for aerial robotics based on an improved Kalman filter (KF).
Kuo and his co-authors used unmodified smartphones and slightly modified commercial LED luminaires to calculate indoor locations. By fusing magnetic and visual sensors with a particle filter, Liu and his colleagues built an infrastructure-free indoor positioning module for the smartphone. Fendy and his co-authors proposed a vision/GPS/IMU localization technique for multi-rotor vehicles. Li and his co-authors proposed an improved inertial/WiFi/magnetic fusion structure with a three-level quality-control mechanism for indoor navigation; they used the extended Kalman filter (EKF) to fuse the multi-sensor information. Wu and his co-authors proposed a particle filter that fuses the received signal strength of wireless local area networks (WLANs) with inertial sensor information for mobile robots. From these works we can see that multi-sensor fusion is an important factor in precise positioning systems, and that the EKF and the particle filter are the main approaches for fusing multi-modal information. However, RF-based positioning still cannot be achieved reliably in urban canyon areas because of limited communication range and sensor failures. Moreover, the complexity of fusion algorithms is a burden, since mobile devices have weak processing power and short battery life. Recently, neural-network-based technologies, especially deep learning, have become a useful tool for localization and navigation in complex environments with dynamic elements, which is an important milestone. Moreover, based on our previous research, images can support location-based services for users without wireless signal coverage, similar to inertial sensors. Feature matching is an important factor in vision-based positioning, and many scholars have shown that the convolutional neural network (CNN) is a powerful tool for extracting invariant features.
Therefore, inspired by the powerful performance of the CNN, we introduce an improved CNN combined with a particle filter in this paper.
2. Anticipated results
This paper focuses on the adaptive fusion of image, WiFi, magnetic, and inertial information, achieved with an improved particle filter. So that a user with a mobile device can obtain a positioning service with no restriction on device orientation in GNSS-denied regions, we leverage a CNN framework and a particle filter to build a two-level, high-accuracy positioning architecture that fuses image/WiFi/magnetic/inertial information. Moreover, we propose a new image type, named the RGB-WM image, that fuses 1D signals (WiFi/magnetic) with the RGB image. Note that the magnetic field, the image, and the inertial information are infrastructure-free and ubiquitous.
3. Key innovative steps and their significance
In this section, we introduce the approaches for RGB-WM image creation, image feature extraction based on the CNN, and multi-modal positioning signal fusion based on a new particle filter. The proposed particle filter, augmented with dynamic ubiquitous signals and image-based positioning information, enhances the robustness of the existing particle filter. To support seamless indoor and outdoor location-based services (LBS), this paper proposes a smart fusion architecture, based on deep learning, that combines the ubiquitous signals of a mobile device with integrated multi-modal sensors and can fuse vision/wireless/inertial information. The core of the fusion architecture is an improved four-layer deep neural network that integrates a CNN and an improved particle filter. First, inspired by the creation of the RGB-D image, we modulate the image gray level with the normalized magnetic field strength and scale the image intensity with the normalized WiFi signal strength, which yields an RGB-WM image.
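The paper does not publish the exact channel mapping, but the RGB-WM construction described above can be sketched roughly as follows. This is a minimal illustration, not the authors' formula: the normalization ranges, the additive gray-level shift, and the multiplicative intensity scaling are all our placeholder assumptions.

```python
import numpy as np

def make_rgb_wm(rgb, mag_strength, wifi_rssi,
                mag_range=(25.0, 65.0), rssi_range=(-90.0, -30.0)):
    """Fuse 1D WiFi/magnetic readings into an RGB image (hypothetical sketch).

    rgb          : H x W x 3 uint8 camera image
    mag_strength : magnetic field magnitude in microtesla (scalar)
    wifi_rssi    : WiFi received signal strength in dBm (scalar)
    """
    # Normalize both scalar signals to [0, 1] over assumed typical ranges.
    m = np.clip((mag_strength - mag_range[0]) / (mag_range[1] - mag_range[0]), 0, 1)
    w = np.clip((wifi_rssi - rssi_range[0]) / (rssi_range[1] - rssi_range[0]), 0, 1)

    img = rgb.astype(np.float32) / 255.0
    gray = img.mean(axis=2, keepdims=True)   # gray level of the image

    # Shift the gray level by the normalized magnetic strength and
    # scale the intensity by the normalized WiFi strength.
    fused = (img + m * gray) * (0.5 + 0.5 * w)
    return np.clip(fused * 255.0, 0, 255).astype(np.uint8)

# Example: a dummy 4x4 camera frame with one magnetic/WiFi sample.
frame = np.full((4, 4, 3), 128, dtype=np.uint8)
rgb_wm = make_rgb_wm(frame, mag_strength=48.0, wifi_rssi=-60.0)
print(rgb_wm.shape, rgb_wm.dtype)
```

The point of the construction is that a standard CNN can then consume the WiFi and magnetic measurements implicitly, through the modified pixel statistics, without any change to its input layer.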
Then, homogeneous features are extracted from the RGB-WM image with the improved CNN to achieve context awareness. Using this context information, we introduce a new particle filter for fusing the different information from the multi-modal sensors. To evaluate our proposed positioning architecture, we conducted extensive experiments in four different scenarios: our laboratory, the T3 terminal of Beijing Capital International Airport, a shopping mall, and the campus of our university.
4. Experimental results
We evaluated our proposed positioning architecture in these four scenarios. The experimental results imply that the TLSF is a powerful and energy-efficient seamless indoor and outdoor positioning algorithm. The results show that the precision and recall of the RGB-WM image features are 95.6% and 4.1%, respectively. Furthermore, the proposed infrastructure-free fusion architecture reduced the root mean square error (RMSE) of the locations by 13.3% to 55.2% in walking experiments with two smartphones under two motion conditions, which indicates the superior performance of our proposed image/WiFi/magnetic/inertial fusion architecture over the state of the art in these four localization scenarios. The ubiquitous positioning error of our proposed algorithm is less than 1.37 meters, which meets the requirements of complex GNSS-denied regions.
5. Conclusion
This paper presented a positioning method based on a mobile device for GNSS-denied scenarios. Experimental results show that the proposed positioning algorithm is superior to previous positioning algorithms in accuracy and stability. The proposed solution is a hybrid one, fusing multiple smartphone sensors with WLAN and image signals: the smartphone sensors measure the motion dynamics of the mobile user, the WLAN positioning mitigates the impact of RSSI variance, and the image information supplies visual context.
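The abstract does not specify the authors' fusion equations; purely as a rough illustration of the kind of filter involved, a generic bootstrap particle filter that combines independent WiFi, magnetic, and image-based position fixes in its weight update might look like the sketch below. The motion model, noise scales, and Gaussian likelihoods are our placeholder assumptions, not the paper's improved filter.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, step_vec, observations):
    """One predict/update/resample cycle of a bootstrap particle filter.

    particles    : N x 2 array of candidate 2D positions (meters)
    weights      : N normalized importance weights
    step_vec     : inertial (dead-reckoning) displacement for this step
    observations : dict of modality -> (position fix, noise sigma)
    """
    n = len(particles)
    # Predict: propagate particles with the inertial step plus process noise.
    particles = particles + step_vec + rng.normal(0.0, 0.3, size=(n, 2))

    # Update: multiply independent Gaussian likelihoods from each modality.
    for fix, sigma in observations.values():
        d = np.linalg.norm(particles - fix, axis=1)
        weights = weights * np.exp(-0.5 * (d / sigma) ** 2)
    weights = weights / weights.sum()

    # Resample when the effective sample size collapses below n/2.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights

# Toy run: true position near (2, 3); three noisy modality fixes.
particles = rng.uniform(0, 10, size=(500, 2))
weights = np.full(500, 1.0 / 500)
obs = {"wifi": (np.array([2.2, 3.1]), 3.0),
       "magnetic": (np.array([1.8, 2.9]), 2.0),
       "image": (np.array([2.0, 3.0]), 1.0)}
for _ in range(10):
    particles, weights = pf_step(particles, weights, np.zeros(2), obs)
estimate = np.average(particles, axis=0, weights=weights)
print(estimate)
```

Note how the multiplicative update lets a tight image-based fix dominate loose WiFi fixes automatically; the paper's augmented filter additionally adapts to dynamic availability of each signal, which this sketch does not model.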
Because this method uses only the built-in hardware and computational resources of a mobile device, the positioning solution presented here is more cost-efficient and easier to integrate with related applications and services than previously presented systems. This paper provides experimental results for a system that uses only the sensors available on a smartphone to deliver indoor positioning without any prior knowledge of floor plans, transmitter locations, radio signal strength databases, etc.