The previous post looked into the common definition of Unix time as “the number of seconds since January 1, 1970 GMT” and why it’s not exactly true. It was true for a couple years before we started inserting leap seconds. Strictly speaking, Unix time is the number of non-leap seconds since January 1, 1970.
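A small Python sketch makes this concrete; Python's `datetime` module, like Unix time, ignores leap seconds, so the timestamp of midnight UTC on any date is a whole number of days past the epoch.

```python
from datetime import datetime, timezone

# Unix time pretends every day has exactly 86400 seconds, so the
# timestamp of midnight UTC is an exact multiple of 86400, even
# though 27 leap seconds had actually elapsed by the start of 2017.
t = datetime(2017, 1, 1, tzinfo=timezone.utc).timestamp()
print(t)          # 1483228800.0, i.e. 17167 * 86400
print(t % 86400)  # 0.0
```

If leap seconds were counted, that timestamp would be 27 seconds larger.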
This leads down the rabbit hole of how a second is defined. As long as a second is defined as 1/86400th of a day, and a day is the time it takes for the earth to rotate once on its axis, there’s no cause for confusion. But when you measure the rotation of the earth precisely enough, you can detect that the rotation is slowing down.
Days are getting longer
The rotation of the earth has been slowing down for a long time. A day was about 23½ hours when dinosaurs roamed the earth, and it is believed a day was about 4 hours after the moon formed. For most practical purposes a day is simply 24 hours. But for precision science you can’t have the definition of a second changing as the ball we live on spins slower.
This led to defining the second in terms of something more constant than the rotation of the earth, namely the oscillations of light waves, in 1967. And it led to tinkering with the calendar by adding leap seconds starting in 1972.
Cesium
You’ll hear that the second is defined in terms of vibrations of a cesium atom. But what exactly does that mean? What about the atom is vibrating? The second is not defined in terms of motions inside an atom, but by the frequency of the radiation produced by changes in an atom. Specifically, a second has been defined since 1967 as
the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom.
Incidentally, “cesium” is the American English spelling of the name of atomic element 55, and “caesium” is the IUPAC spelling.
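To get a feel for the number: 9,192,631,770 periods per second is a frequency of about 9.19 GHz, which puts the radiation in the microwave band. A quick arithmetic check:

```python
f_cs = 9_192_631_770  # Hz, the defining cesium-133 hyperfine frequency
c = 299_792_458       # m/s, speed of light (exact by definition)

print(f_cs / 1e9)      # about 9.19 GHz
print(100 * c / f_cs)  # wavelength about 3.26 cm, i.e. microwaves
```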
The definition of a second raises several questions. Why choose cesium? Why choose that number of periods? And what are hyperfine levels and all that? I’ll attempt to answer the first two questions and punt on the third.
OK, so why cesium? Do we use cesium because that’s what atomic clocks use? And if so, why do atomic clocks use cesium?
As I understand it, the first atomic clocks were based on cesium, though now some atomic clocks are based on hydrogen or rubidium. And one reason for using Cs-133 was that it was easy to isolate that particular isotope with high purity.
Backward compatibility
So why 9,192,631,770 periods? Surely if we’d started from this definition we’d go with something like 10,000,000,000 periods. Clearly the number was chosen for backward compatibility with the historical definition of a second, but that only approximately settles the question. The historical definition was fuzzy, which was the point of the new definition, so which variation on the historical definition was used for backward compatibility?
The time chosen for backward compatibility was basically the length of the year 1900. Technically, the number of periods was chosen so that a second would be
the fraction 1/31556925.9747 of the tropical year for 1900 January 0 at 12 hours ephemeris time.
Here “tropical year” means the year as measured by the cycle of the seasons, e.g. from one vernal equinox to the next. (A year measured against the “fixed stars” is a sidereal year, which is about 20 minutes longer.) The length of a year varies slightly, and that’s why they had to pick a particular one.
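As a sanity check, dividing that fraction out recovers the familiar length of the year in days:

```python
# Ephemeris seconds in the tropical year 1900, per the definition above.
seconds_per_year = 31_556_925.9747
days_per_year = seconds_per_year / 86_400
print(days_per_year)  # about 365.2422 days
```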
The astronomical definition was just motivation; it has been discarded now that 9,192,631,770 periods of a certain radiation is the definition. We would not change the definition of a second if an alien someday handed us a more accurate measurement of the tropical year 1900.
It gets weirder.
Time slows down in a gravity well, so a cesium clock will tick faster in orbit around Earth than on the ground, faster still in interplanetary space, then in interstellar space, and presumably in intergalactic space.
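How big are these effects? In the weak-field approximation, a clock at distance r from a mass M runs slow by a fraction of roughly GM/(rc²) relative to a clock far from that mass. A back-of-the-envelope sketch, using standard physical constants and ignoring smaller terms such as orbital velocity:

```python
GM_earth = 3.986004418e14    # m^3/s^2, gravitational parameter of Earth
GM_sun   = 1.32712440018e20  # m^3/s^2, gravitational parameter of the Sun
c        = 299_792_458       # m/s, speed of light
r_earth  = 6.371e6           # m, mean radius of Earth
r_orbit  = 1.496e11          # m, Earth-Sun distance

# Fractional rate deficit of a surface clock relative to one far from Earth:
earth_effect = GM_earth / (r_earth * c**2)  # about 7e-10
# Extra deficit from sitting in the Sun's gravity well at 1 AU:
sun_effect = GM_sun / (r_orbit * c**2)      # about 1e-8
print(earth_effect, sun_effect)
```

So a clock in interstellar space would run fast relative to one on Earth by roughly a part in 10^8, with the Sun's gravity well contributing most of the difference.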
I think the French (international?) standard may address the first couple of cases.