In the October 2020 issue of Metrologia, two outwardly similar but sharply contradictory articles were published on the subject of the uncertainties inherent in Coordinated Universal Time (UTC) and its several real-time realizations, termed UTC(k), where k identifies the individual laboratory [1, hereafter M2020] and [2, hereafter PPH2020]. Although based on essentially identical matrix equations, their opposing conclusions can best be summarized as a disagreement over whether the systematic uncertainty of the difference between any one clock (or UTC realization) and the average of all clocks depends on the systematic uncertainty of every link used by the International Bureau of Weights and Measures (BIPM) to compute UTC. One would intuitively expect both the statistical (Type A) and systematic (Type B) uncertainty of UTC-UTC(k) to depend on the uncertainties of all the links, because UTC is a weighted and steered average of adjusted clocks whose readings are deemed synchronous using time transfer links that have unmodelled biases and noise. For systematic uncertainties there is no way to distinguish between a systematic offset in the clock readings of a laboratory k and a systematic unmodelled bias in the time transfer links. It follows that the uncertainty of each UTC-UTC(k), as well as of the difference between UTC and any clock in the system, depends on the uncertainties of all the links used to create UTC. One consequence, with today’s network topology, is that a laboratory connected to the pivot lab (PTB) through a poorly calibrated link will raise the uncertainties of all other laboratories, in rough proportion to the aberrant lab’s weight in UTC. This intuition is supported by M2020 as well as by the earliest papers [3,4] on the subject, which formulated the problem in terms of statistics and the law of propagation of uncertainties rather than by matrix manipulations. In 2017, the consequences of the early algorithms were explored in , which suggested some ways to ameliorate them.
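The weight-proportional propagation described above can be demonstrated numerically. In the following sketch, the number of labs, the weights, and the true clock offsets are all invented for illustration (the real UTC network has many more labs); UTC is formed as a weighted average of clock offsets measured against the pivot, and a bias injected on one lab’s link shifts the published UTC-UTC(k) of every other lab by that lab’s weight times the bias.

```python
# Toy 4-lab network; weights and true offsets are invented for illustration.
weights = [0.4, 0.3, 0.2, 0.1]       # lab 0 is the pivot
true_diff = [0.0, 12.0, -5.0, 8.0]   # true clock_k - clock_pivot, in ns

def published_offsets(link_bias):
    """UTC - clock_k as computed from (possibly biased) link measurements."""
    measured = [d + b for d, b in zip(true_diff, link_bias)]
    utc_minus_pivot = sum(w * m for w, m in zip(weights, measured))
    return [utc_minus_pivot - m for m in measured]

clean = published_offsets([0.0, 0.0, 0.0, 0.0])
biased = published_offsets([0.0, 0.0, 10.0, 0.0])  # 10 ns bias on lab 2's link
shift = [b - c for b, c in zip(biased, clean)]
# analytically: every other lab shifts by w_2 * 10 = 2 ns,
# and lab 2 itself by (w_2 - 1) * 10 = -8 ns
print(shift)
```

The shift of the unbiased labs is exactly the biased lab’s weight times its bias, which is the weight-proportional contamination described in the text.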
PPH2020 identifies no mistake in any of these previous papers, despite its differences with them. This paper identifies what can only be considered a mistake in PPH2020: after having incorporated the calibration measurements and their uncertainties into the time transfer measurement equations, PPH2020 computes the statistical uncertainties as if these equations provided new information. We show that had PPH2020 avoided this mistake, it would be consistent with every other paper published on the subject. In this work, we report a Monte Carlo simulation of UTC. Algorithms very similar to those employed by the BIPM and the timing labs to create UTC and the UTC(k) are used, with clock predictions (often incorrectly termed models, because models are used to generate the predictions) based on artificial data in which the clock readings are modelled as random walks with respect to a highly precise primary frequency standard (hereafter, a “perfect clock”) kept at the pivot lab. A simulation consists of varying the unmodelled link biases and observing the resulting variation in what an observer equipped with extremely precise, accurate, and unbiased optical fiber technology (hereafter, an “omniscient observer”) would determine UTC-UTC(k) to be. The simulations support all the previous papers except PPH2020. It has been argued that the BIPM should adopt PPH2020 because it is “unfair” to raise the systematic uncertainties of all labs on account of the large uncertainties of a few. We show how it is possible to enforce “metrological justice” without violating basic laws of statistics. We also provide a simplified version of the algorithm in M2020, along with a variety of cases in which PPH2020 plainly gives the incorrect result. This controversy is far from resolved. While we believe the reasoning presented here fully explains the controversy and points to its resolution, we welcome discussions with any interested parties.
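A drastically reduced version of such a Monte Carlo exercise can be sketched as follows. The lab count, weights, and link-bias uncertainties below are invented for illustration, and the clock random walks are omitted because, for a simple weighted average, they cancel exactly in the difference between the published and the true (omniscient-observer) UTC-UTC(k), leaving only the link-bias terms.

```python
import random
import statistics

random.seed(2)
N_LABS, N_RUNS = 4, 2000
weights = [0.25] * N_LABS            # illustrative equal weights
SIGMA_BIAS = [0.0, 1.0, 1.0, 10.0]   # ns; lab 3's link is poorly calibrated

def one_run():
    # Draw one realization of the unmodelled link biases (pivot link exact).
    bias = [random.gauss(0.0, s) for s in SIGMA_BIAS]
    # Error of published UTC-UTC(k) relative to the omniscient observer:
    # sum_i w_i * b_i - b_k (the clock random walks cancel in this difference).
    avg_bias = sum(w * b for w, b in zip(weights, bias))
    return [avg_bias - b for b in bias]

runs = [one_run() for _ in range(N_RUNS)]
spread = [statistics.stdev(run[k] for run in runs) for k in range(N_LABS)]
# Even lab 0, with a perfect link, inherits an uncertainty of roughly
# w_3 * 10 ns from lab 3's poor calibration.
print([round(s, 2) for s in spread])
```

Consistent with the propagation-of-uncertainty argument, the spread for the lab with a perfect link is dominated by the poorly calibrated lab’s weighted bias uncertainty, not by its own.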
Some such discussions took place during this conference, and this paper and its conclusions have been revised to address more specifically the concerns that were raised.