
### Abstract:

Interpolation is commonly used to increase the number of data points in a time-measurement series when data are missing or sampling intervals are long. This study aims to understand how much error is introduced by interpolation and how to evaluate its uncertainty. We take the portable cesium clock measurement procedure as an example to discuss the uncertainty of interpolation prediction. Two years of measurement data from a cesium clock (a 5071A with a high-performance tube) were employed in our experiments. We masked the measured data over a period to simulate a blind period, compared the values predicted by an interpolation method with the actual clock data, and calculated the root-mean-square (rms) of their differences to estimate the uncertainty. We designed and conducted three experiments. In the first, we used linear regression to predict the data and calculated the uncertainties at the midpoint of blind periods ranging from 1 hour to 30 hours. In the second, we again used linear regression but focused on the uncertainty at each time epoch within a blind period. In the last, we used simple linear interpolation to predict the data and calculated the uncertainty at each time epoch within a blind period. Our results show that the uncertainties of the linear regression prediction are slightly lower than those of the Allan deviation prediction. If simple linear interpolation between the last point of the pre-measurement data and the first point of the post-measurement data is used to estimate the value at each time epoch in a blind period, the uncertainties are much lower than those of the linear regression method, especially for epochs near the two ends of the blind period. The trends of the uncertainties in the three experiments are characterized in detail, and a best-fit equation is provided for each trend.
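The masking-and-RMS evaluation described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names are hypothetical, NumPy's least-squares fit stands in for the paper's linear regression, and the data arrays are synthetic clock-phase samples rather than the 5071A measurements.

```python
import numpy as np

def regression_rms(t, x, i0, i1):
    """RMS prediction error over the simulated blind period x[i0:i1],
    where each epoch is predicted by a least-squares line fitted to the
    pre-mask data x[:i0] (a stand-in for the paper's linear regression)."""
    slope, intercept = np.polyfit(t[:i0], x[:i0], 1)
    pred = slope * t[i0:i1] + intercept
    return float(np.sqrt(np.mean((pred - x[i0:i1]) ** 2)))

def interp_rms(t, x, i0, i1):
    """RMS prediction error over the simulated blind period x[i0:i1],
    where each epoch is predicted by simple linear interpolation between
    the last pre-mask point (i0-1) and the first post-mask point (i1)."""
    pred = np.interp(t[i0:i1], [t[i0 - 1], t[i1]], [x[i0 - 1], x[i1]])
    return float(np.sqrt(np.mean((pred - x[i0:i1]) ** 2)))
```

Repeating either estimate for blind periods of different lengths (e.g., 1 to 30 hours) and for each epoch position inside the blind period yields the uncertainty trends studied in the paper.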