The earliest definitions of time and time-interval quantities were based on observed astronomical phenomena, such as apparent solar or lunar time; time as measured by clocks, and frequency as measured by devices, were therefore derived quantities. In contrast, frequency is now based on the properties of atoms, making time and time intervals themselves the derived quantities. Today's definition of time uses a combination of atomic and astronomical time, although their connection could be modified in the future to reconcile the divergence between the astronomical and atomic definitions. These are some of the observations made by Judah Levine, author of a riveting paper just published in EPJ H, which provides unprecedented insights into the nature of time and its historical evolution.
The earliest clocks appeared in Egypt, India, China, and Babylonia before 1500 BC and used the flow of water or sand to measure time intervals. The Babylonians were probably the first to use a base-60 numbering system, and we still employ the Egyptian practice of dividing the day into 24 hours, each hour into 60 minutes, and each minute into 60 seconds.
The definition of the length of the day therefore implicitly defines the length of the second, and vice versa. This link was an important consideration in the definition of the international time scale, UTC (Coordinated Universal Time). In fact, UTC combines atomic frequency-standard data with observations of the astronomical time-scale UT1, a combination that has both advantages and problems, as discussed in this paper. However, the rate of divergence between UTC and UT1 is estimated to be less than one minute per century.
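The implicit link between the day and the second, and the quoted divergence bound, can be made concrete with a little arithmetic. The sketch below (not from the paper; the per-day drift figure is derived here, assuming a Julian century of 36,525 days) shows both:

```python
# Back-of-the-envelope sketch of the day/second link and the divergence bound.

HOURS_PER_DAY = 24       # Egyptian division of the day
MINUTES_PER_HOUR = 60    # Babylonian base-60
SECONDS_PER_MINUTE = 60

# Dividing the day this way implicitly defines the second, and vice versa.
seconds_per_day = HOURS_PER_DAY * MINUTES_PER_HOUR * SECONDS_PER_MINUTE
print(seconds_per_day)  # 86400

# The article bounds the UTC-UT1 divergence at under one minute per century;
# spread over a Julian century (36,525 days), that is a tiny daily drift.
DAYS_PER_CENTURY = 36525
max_drift_s_per_day = 60 / DAYS_PER_CENTURY
print(f"{max_drift_s_per_day * 1000:.2f} ms/day")  # about 1.64 ms per day
```

So even at the quoted worst-case rate, the two time scales part company by only a couple of milliseconds per day, which is why occasional adjustments suffice to keep them aligned.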
Levine concludes that, as we move away from the everyday definitions of time and time interval towards a more uniform but more abstract realisation, applications that depend on stable frequencies and time intervals will play a more fundamental role than time itself.