
Session A6: GNSS Integrity and Augmentation

Investigation into Satellite Clock and Ephemeris Errors Uncertainty and Predictability
Xinwei Liu, Juan Blanch, Todd Walter, Stanford University
Location: Beacon B
Date/Time: Thursday, Jan. 27, 3:48 p.m.

Peer Reviewed


I. INTRODUCTION
The Advanced Receiver Autonomous Integrity Monitoring (ARAIM) concept relies on conservative bounds of the nominal clock and ephemeris errors to achieve the required level of integrity. [1] To obtain a conservative bound, Blanch et al. provide a bounding algorithm for the residuals [2] [3] [4], which accounts for extreme non-faulted events, defined as errors below the standard threshold of 4.42 times the User Range Accuracy (URA) for GPS and 4.17 times the URA for Galileo. In this paper, we investigate the uncertainty and predictability of the Gaussian bounding parameters for the satellite clock and ephemeris errors.
To characterize the nominal error model, we inspect the bounding distribution in Section II. We further characterize the uncertainty and predictability of the error bound by exploring the stability of the bounding parameters. If the parameters are sufficiently stable, we can be confident that they will likely remain stable in the future and can be well characterized using service history. Taking a step further, we can run simulations that predict future error bounding models from the existing data. In Sections III and IV, we describe the bootstrap method for characterizing the uncertainty of the error bounding parameters and the training-validation method for characterizing the bounding uncertainty and simulating prediction. [2]
II. ERROR SERVICE HISTORY EXAMINATION
In this section, we examine the variation of the bounding parameters using 12 years of User Projected Error (UPE) data from GPS with a three-year time window. We apply the standard threshold and then a lower threshold of 3 times the URA to explore the validity of the standard threshold. Two years of UPE error bounding parameter histories from Galileo are also generated.
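As a rough illustration of this screening and windowing step, the sketch below (assuming hypothetical NumPy arrays of epochs, errors, and broadcast URA values, and using the sample standard deviation only as a placeholder for the Gaussian bounding parameter of [3] [4]) normalizes the errors by the URA, removes samples above a chosen threshold, and steps a three-year window through the record:

```python
import numpy as np

def windowed_bounding_history(t_years, errors, ura, threshold=4.42,
                              window=3.0, step=0.25):
    """Sliding-window history of a spread statistic on URA-normalized errors.

    t_years, errors, ura : hypothetical arrays of epochs (years), clock/ephemeris
    errors, and broadcast URA per epoch. `threshold` is 4.42 for the standard
    case or 3.0 for the lower one. The standard deviation is only a stand-in
    for the actual Gaussian bounding parameter computation.
    """
    norm = errors / ura                        # normalize each sample by its URA
    keep = np.abs(norm) <= threshold           # screen out candidate faulted events
    t, norm = t_years[keep], norm[keep]
    history = []
    for start in np.arange(t.min(), t.max() - window + 1e-9, step):
        w = norm[(t >= start) & (t < start + window)]
        if w.size:
            history.append((start, w.std()))   # (window start, placeholder theta)
    return np.array(history)
```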
As the results suggest, the theta of the IIA satellites exceeds the bound under the standard threshold, while all satellite bounding parameters stay below the bound under the lower threshold. In addition, abrupt jumps occur more often in the standard-threshold plot, indicating instability. For Galileo, one satellite's theta exceeds the bound under the standard threshold, and no satellite exceeds the bound under the lower threshold.
To investigate whether the instability and the exceedance of the standard-threshold bound are due to a lack of data, we aggregate the errors from all satellites to obtain enough data to stabilize the bounding parameters.
The results show that, although the normalized theta no longer exceeds 1 under the standard threshold, the instability persists in the standard-threshold plot, whereas the lower-threshold plot is stable. This indicates that the instability is likely induced by near-fault events, which requires further investigation.
To further explore the stability of the satellite errors, we employ two methods, the bootstrap method and the training-validation method, introduced in the following two sections.
III. BOOTSTRAP METHOD
The bootstrap is a resampling method used here to quantify the uncertainty in the bounding parameters. A typical bootstrap is executed according to the following algorithm (a code sketch follows the list):
1. Draw a sample of size n from the population
2. Draw a subsample of size n from the original sample using sampling with replacement
3. Calculate the statistic of interest, theta, from the subsample
4. Store theta
5. Repeat steps 2 through 4 m times and construct the distribution of theta
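A minimal sketch of this procedure is given below, assuming the URA-normalized errors are available as a NumPy array; the sample standard deviation is again only a placeholder for the actual bounding parameter computation:

```python
import numpy as np

def bootstrap_theta(sample, m=1000, statistic=np.std, rng=None):
    """Bootstrap distribution of a statistic (placeholder for theta).

    sample    : 1-D array of normalized clock/ephemeris errors (the original draw)
    m         : number of bootstrap replications
    statistic : function applied to each resample; the Gaussian bounding
                computation of the referenced papers would be substituted here
    """
    rng = np.random.default_rng() if rng is None else rng
    n = sample.size
    thetas = np.empty(m)
    for i in range(m):
        resample = rng.choice(sample, size=n, replace=True)  # step 2: with replacement
        thetas[i] = statistic(resample)                      # steps 3-4: compute and store
    return thetas                                            # step 5: distribution of theta
```

The spread of the returned values (for example, their standard deviation or a percentile interval) serves as the uncertainty estimate for the bounding parameter.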
In this case, the distribution of theta reflects the distribution of theta drawn from the population. The bootstrap is suitable for our problem, as it is regarded as one of the standard methods for investigating uncertainty within a data series. [7] In addition, the bootstrap does not require a model. Since we do not have an accurate model for the distribution of the residual bounding parameters, it is preferable to use the bootstrap rather than model-based simulation methods such as Monte Carlo simulation. [8] The bootstrap also makes few assumptions about the underlying distribution, which helps avoid modeling mistakes. [6] Finally, we have abundant data available, which benefits a resampling method.
IV. TRAINING VALIDATION METHOD
We also explore the training-validation method, a variation of the cross-validation method used in machine learning. We designate part of the data as the "training data," compute its bounding parameters, and compare them to the bounding parameters generated from the rest of the data, the validation data. The comparison examines how closely the training bounding parameters resemble the validation bounding parameters. This method can be viewed as a prediction simulation conducted on the available data, and it gives us insight into what would happen if we used all the available data to predict future bounding parameters.
The training-validation procedure is given by the following algorithm:
1. Divide the data into the training set and the validation set
2. Compute the bounding parameters from the validation set
3. Divide the training set into chunks of data
4. Randomly select the data chunks from the training set as training data
5. Compute the bounding parameters corresponding to the selected data as the training bounding parameters
6. Store the calculated parameters
7. Repeat steps 4 through 6 m times
8. Construct the distributions using the stored training and validation parameters
Specifically, this method is applied to investigate the stability and predictability of the satellite errors over time, using the same set of satellites for training and validation. The data are split in half in time to form the training and validation sets. The training set is further divided into 6-month units to preserve time correlation, and the training data are drawn from these units. The method can also examine the effect of training data length on the result; a code sketch is given below.
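A minimal sketch of this split, under the same assumptions as the previous sketches (time-tagged, URA-normalized errors; a placeholder statistic in place of the bounding computation; and, for illustration, the earlier half used as the training set), is:

```python
import numpy as np

def training_validation(t_years, errors, m=1000, unit=0.5, n_units=8,
                        statistic=np.std, rng=None):
    """Training-validation simulation of the bounding parameter.

    The record is split in half in time; here the later half is treated as the
    validation set. The training half is cut into `unit`-year chunks (0.5 = 6
    months) and, in each of the m replications, `n_units` chunks are drawn with
    replacement (e.g. n_units=8 for roughly 4 years of training data).
    """
    rng = np.random.default_rng() if rng is None else rng
    split = 0.5 * (t_years.min() + t_years.max())
    train = t_years < split
    theta_val = statistic(errors[~train])                  # step 2: validation parameter

    # Step 3: divide the training half into 6-month chunks to keep time correlation.
    t_tr, e_tr = t_years[train], errors[train]
    edges = np.arange(t_tr.min(), t_tr.max() + unit, unit)
    chunks = [e_tr[(t_tr >= a) & (t_tr < b)] for a, b in zip(edges[:-1], edges[1:])]
    chunks = [c for c in chunks if c.size > 0]

    thetas_train = np.empty(m)
    for i in range(m):
        picked = rng.choice(len(chunks), size=n_units, replace=True)             # step 4
        thetas_train[i] = statistic(np.concatenate([chunks[j] for j in picked])) # steps 5-6
    return thetas_train, theta_val                         # step 8 builds the distributions
```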
V. EXPERIMENT
The experiment is carried out for both methods using the 12 years of GPS clock and ephemeris errors provided by the Stanford GPS Lab. The results are normalized by the URA. In this experiment, the direct computation of bias and theta is applied to the MPE. For the URE, we calculate the smallest bounding bias over all users and select the theta that bounds all users.
1. Bootstrap
We take 6, 9, and 12 years of data from a satellite and apply the bootstrap to the theta parameter. We observe a bell-shaped distribution with a low standard deviation, indicating that the data are stable with little uncertainty. The simulation shows stability in the bounding parameter.
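For illustration, this experiment would reduce to something like the following call of the hypothetical bootstrap_theta sketch from Section III, with synthetic Gaussian data standing in for the actual normalized error records of one satellite:

```python
import numpy as np

rng = np.random.default_rng(0)
for years in (6, 9, 12):
    errs = rng.normal(0.0, 0.5, size=years * 365 * 24)   # placeholder hourly samples
    thetas = bootstrap_theta(errs, m=200, rng=rng)
    print(f"{years:>2d} yr: mean theta = {thetas.mean():.4f}, std = {thetas.std():.5f}")
```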
2. Training-Validation
The training-validation experiment is applied using the URE data. After selecting the training and validation data, we aggregate the errors across satellites separately for each set to avoid instability due to a lack of data.
We select training data sizes of 2, 4, and 5 years for the simulation, compute the ratio of the training to the validation bounding parameters, and center the histogram at 0, along with 1-CDF curves. 1000 simulation points are generated. The threshold is set to 3 times the URA.
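One plausible reading of this centering, reusing the training_validation sketch above with placeholder data, is to plot the training-to-validation parameter ratio minus one, so that a value of zero means the two parameters agree exactly (the ratio direction and centering convention are assumptions here, not taken from the paper):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder record: 12 years of synthetic URA-normalized errors, one per hour.
rng = np.random.default_rng(1)
t_years = np.linspace(0.0, 12.0, 12 * 365 * 24)
errors = rng.normal(0.0, 0.5, size=t_years.size)

thetas_train, theta_val = training_validation(t_years, errors, m=1000, n_units=8)
centered = thetas_train / theta_val - 1.0    # 0 = training and validation parameters agree

plt.hist(centered, bins=50, density=True)
plt.xlabel("training / validation bounding parameter - 1")
plt.ylabel("density")
plt.show()
```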
The closer the values are to 0, the better the prediction. In the results, the ratio values are close to 0 and have a low standard deviation, which indicates that the prediction method works well. We also observe that the majority of the data lies in the negative range, implying overestimation, which is needed to achieve conservative bounding.
As the training data size increases, the standard deviation decreases, indicating that more data stabilizes the parameter and reduces the uncertainty.
VI. CONCLUSION
In this study, we observe instability in the overbounding parameters in the time history plots when the standard threshold is used. The parameters are stabilized with a lower threshold, indicating that the fluctuating behavior is likely caused by near-fault events. The results of both the bootstrap and the training-validation simulations show promising stability with low standard deviations. This suggests that the bootstrap and training-validation methods may be useful for characterizing the uncertainty in the bounding parameters.

REFERENCES
[1] Navipedia, gssc.esa.int, 2021.
[2] Walter, T., Gunning, K., Eric Phelts, R., and Blanch, J. (2018) Validation of the Unfaulted Error Bounds for ARAIM. J Inst Navig, 65: 117– 133. doi: 10.1002/navi.214.
[3] Blanch, Juan, Liu, Xinwei, Walter, Todd, "Gaussian Bounding Improvements and an Analysis of the Bias-sigma Tradeoff for GNSS Integrity," Proceedings of the 2021 International Technical Meeting of The Institute of Navigation, January 2021, pp. 703-713.
[4] J. Blanch, T. Walter and P. Enge, "Gaussian Bounds of Sample Distributions for Integrity Analysis," in IEEE Transactions on Aerospace and Electronic Systems, vol. 55, no. 4, pp. 1806-1815, Aug. 2019, doi: 10.1109/TAES.2018.2876583.
[5] Weisstein, Eric W. "Bootstrap Methods." From MathWorld–A Wolfram Web Resource.
[6] Hesterberg, T., Monaghan, S., Moore, D. S., Clipson, A., and Epstein, R. (2005). "Bootstrap Methods and Permutation Tests," in Introduction to the Practice of Statistics, Ch. 14. New York: W. H. Freeman and Company.
[7] Efron, B., and Gong, G. (February 1983), "A Leisurely Look at the Bootstrap, the Jackknife, and Cross-Validation," The American Statistician.
[8] H. Barreto and F. M. Howland, Introductory Econometrics: Using Monte Carlo Simulation with Microsoft Excel. Cambridge: Cambridge Univ. Press, 2013.


