Re: [LEAPSECS] Internet-Draft on UTC-SLS
How delightful! A discussion about the design merits of actual
competing technical proposals!
One indication that the discussion is maturing is that someone with
strong opinions on one extreme of the issues can find aspects to
agree with and disagree with in each of the prior messages. I'll
even refrain from taking umbrage over the comment about "serious
timekeeping people" :-)
Markus Kuhn says:
> if you look at *any* form of PLL (circuit or software), then you
> will find that its very purpose is to implement "rubber seconds",
> that is to implement phase adjustments via low-pass filtered
> temporary changes in frequency.
An excellent observation. Almost by definition, a real-world clock
is meant to be compared with, and perhaps adjusted to, the time kept
by other clocks. This is certainly true of NTP.
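For the non-circuit-minded, here is a minimal toy sketch in C of that
idea - a first-order software loop that removes a phase offset by
temporarily biasing the clock's frequency. (The loop gain and the
names are invented for illustration; this is not NTP's actual clock
discipline algorithm.)

    #include <stdio.h>

    int main(void)
    {
        double offset = 1.0;   /* seconds our clock is ahead of the reference */
        double gain   = 0.05;  /* loop gain: fraction of offset removed per tick */

        for (int tick = 0; tick < 100; tick++) {
            double freq_adj = -gain * offset;  /* run slightly slow while ahead */
            offset += freq_adj;                /* each elapsed "second" is rubber */
            printf("tick %3d: offset %.6f s, rate %.4f\n",
                   tick, offset, 1.0 + freq_adj);
        }
        return 0;
    }

The phase error decays geometrically toward zero while the clock
never steps - exactly the low-pass filtered frequency adjustment
Markus describes.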
Ed Davies says:
> Appendix A argues against putting the adjustment interval after the
> leap second (method 4a) by pointing out that some time signals
> contain announcements of the leap second before it happens but not
> after.
Right, but we should separate the trigger of a timekeeping event from
its implementation. There are any number of ways - starting with a
simple semaphore - to arrange for the clock adjustment associated
with a leap second to occur at any arbitrary point (or range, as with
UTC-SLS) after the trigger.
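A minimal C11 sketch of that separation, with invented names (no real
API is being quoted): the announcement merely raises a flag, and the
adjustment - a step here, though it could as easily be a UTC-SLS-style
smear - happens whenever the implementation chooses to act on it.

    #include <stdatomic.h>

    static atomic_int leap_pending;      /* the "simple semaphore" */

    /* Trigger: the broadcast time signal announces a leap second. */
    void on_leap_announcement(void)
    {
        atomic_store(&leap_pending, 1);
    }

    /* Implementation: apply the adjustment at any later point (or
     * spread it over a range, as UTC-SLS does with its interval). */
    void maybe_apply_leap(double *clock_seconds)
    {
        if (atomic_exchange(&leap_pending, 0))
            *clock_seconds -= 1.0;   /* positive leap second: step back */
    }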
> I think a stronger argument against this method of adjustment is
> that during positive leap seconds UTC and UTC-SLS would be
> indicating different dates:
This may be a fact, but it does not by itself constitute an argument.
An argument would have to answer the question: so what?
A date+time expression (such as ISO 8601) forms one atomic unit. The
precise issue we have been debating over the last half dozen years is
that one cannot separate the date (units of days) from the time
(units of seconds) in a self-consistent fashion. (During a positive
leap second, for instance, the only self-consistent label is a time
of 23:59:60 attached to the old date.)
Look at it another way. We're dealing with issues of signal
processing. An allied field is image processing, and several folks
on this list have extensive experience with image processing
algorithms and standards. A key issue in building any image
processing application is to settle on a consistent point of view of
how the coordinates overlay the pixel grid. (Some might have it the
other way.) For instance, is the origin at the center of a pixel,
at the "lower left hand corner", or at the upper left-hand corner
(ULHC), or what?
Consider (just as an analogy - nobody bust a gut) a particular second
of time as a pixel - a day as a one-dimensional image vector 86400
pixels long. Does the day start at the beginning of a second, or at
its "middle", or at its end? I'm not posing this as some deeply
philosophical question - in practice the question is already
answered, of course. Or is it - in the case of a leap second? It is
entirely up to us to define (in a coherent fashion) our point of view
about whether the single day containing a leap second is 86401
seconds long, or rather whether each of the two days bracketing the
leap second is 86400.5 seconds long.
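Just to put numbers on the two points of view (trivial arithmetic,
nothing standardized here):

    #include <stdio.h>

    int main(void)
    {
        double leap = 1.0;                        /* a positive leap second  */
        double one_long_day  = 86400.0 + leap;    /* 86401 s in a single day */
        double two_half_days = 86400.0 + leap/2;  /* 86400.5 s in each of
                                                     two bracketing days     */
        printf("single-day view: %.1f s\n", one_long_day);
        printf("split view:      2 x %.1f s\n", two_half_days);
        return 0;
    }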
A bit of a mess? Yes. But there are only two ways to resolve the
conflicting definitions of the day and of the SI second when
implementing systems - reset one of the clocks on a quasi-periodic
schedule, or introduce rubber seconds. And really, aren't these
just different sides of the same coin?
> My objection is that you invent a new kind of seconds with new
> durations instead of sticking with the SI second that we know and
> love.
Perhaps we can all stop anthropomorphizing? There isn't much to
"love" about any units - SI, or otherwise. And another way to look
at this discussion is exactly as an exploration of the boundaries of
our ignorance regarding timekeeping. Having a definition of the SI
second is not the same as "knowing" it.
> 1000 seconds is an incredibly silly choice of number in an
> operational context. At the very least make it 15, 30 or 60 minutes.
I would tend to agree with this. The Babylonians must have their
kilogram of flesh. How about 10 minutes - 5 before midnight and 5
after?
> But mostly I object to it being a kludge rather than a solution.
Um - a kludge is a type of solution - a subclass, if you will. There
are certainly good solutions and bad solutions, but simply changing
our terms doesn't add anything to the discussion.
> I would far rather we tried to define a time API for POSIX to adopt
> that makes sense.
>
> By make sense I mean:
>
> o conforms to relevant international standards
> ie: recognizes the definition of leap seconds since
> for all intents and purposes we're probably stuck with
> the blasted things for another 20 years.
Bless you, my son.
> o is workable from a computer architecture point of view
> o caters to the needs of relevant user communities.
By all means!
> Here's a strawman:
>
> Use a 128 bit signed integer.
>
> Assign the top 120 bits as one integer field with
> resolution of 1/2^56 seconds.
>
> Assign the bottom 8 bits to an enum containing the
> timescale in question.
Like most strawmen, this won't survive through to the end of the
discussion, but it serves the purpose of priming the pump. (I can
mix metaphors with the best.)
> Assign different timescales very different
> numeric epochs:
> TAI: 1972-01-01 00:00:00 UTC
> UTC: MJD's epoch.
> UT1: Flamsteed's birthday?
> NTP: defined in RFC1305
Moving in the right direction.
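To see how the strawman might look on a real compiler, here is a
hedged sketch in C using GCC/Clang's __int128 extension. The UTC = 0
identifier is from the proposal itself; everything else (names, the
other enum values) is my own illustration.

    #include <stdint.h>

    /* 128-bit signed timestamp: the top 120 bits form a fixed-point
     * count of seconds with 56 fractional bits (resolution 2^-56 s,
     * about 13.9e-18 s); the bottom 8 bits name the timescale. */
    typedef __int128 timestamp128;

    enum timescale { TS_UTC = 0, TS_TAI = 1, TS_UT1 = 2, TS_NTP = 3 };

    /* Whole seconds are exactly the top 64 bits (120 - 56 = 64). */
    static inline int64_t whole_seconds(timestamp128 t)
    {
        return (int64_t)(t >> 64);
    }

    static inline enum timescale scale_of(timestamp128 t)
    {
        return (enum timescale)((uint64_t)t & 0xff);
    }

    /* Assemble a timestamp from whole seconds, a 56-bit fraction
     * (units of 2^-56 s), and a timescale tag. */
    static inline timestamp128 make_timestamp(int64_t sec, uint64_t frac56,
                                              enum timescale ts)
    {
        unsigned __int128 u =
              ((unsigned __int128)(uint64_t)sec << 64)
            | ((unsigned __int128)(frac56 & ((1ULL << 56) - 1)) << 8)
            | (unsigned __int128)ts;
        return (timestamp128)u;
    }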
> Advantages:
>
> Sufficient resolution to represent any likely physical
> measurement or realizable frequency for the foreseeable
> future (13.8e-18 seconds resolution).
Any guess at "likely physical measurements" is going to fall short
for some purposes. For one thing, one might want to represent
theoretical values in addition to experimental. That said, you are
likely correct for "our purposes".
> Extracting the whole second part can be done by accessing
> only the top 64 bits (which are enough to contain all
> of history and then some).
I like this feature.
> Conversion to/from NTP timestamps is trivial.
>
> Conversion to time_t is a matter of addition and extraction
> of the relevant 32 bits.
>
> The binary format makes for fast and efficient arithmetic.
>
> By assigning the UTC timescale an identifier of zero,
> the majority of implementations can disregard the
> multiple timescale aspect in total.
>
> Small platform implementations can use a smaller width,
> for instance 64 bits split 48/16 and easily transform
> to standard format by zero extension.
>
> High quality implementations will check the bottom 8 bits
> for identity and fail operations that mix but don't match
> timescales.
>
> Different epochs will make it painfully obvious when people
> mix but don't match timescales in low quality implementations.
These are all interesting goals that might be polished into
functional requirements.
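As one example of that polishing, the "mix but don't match" rule
could look like this (building on the __int128 sketch above; again,
the names are invented):

    #include <stdbool.h>
    #include <stdint.h>

    typedef __int128 timestamp128;   /* as in the earlier sketch */

    static bool same_scale(timestamp128 a, timestamp128 b)
    {
        return ((uint64_t)a & 0xff) == ((uint64_t)b & 0xff);
    }

    /* Difference in 2^-56 s units; refuses mixed timescales. */
    static bool ts_diff(timestamp128 a, timestamp128 b, __int128 *out)
    {
        if (!same_scale(a, b))
            return false;            /* mix but don't match: fail */
        *out = (a >> 8) - (b >> 8);  /* strip tags, then subtract */
        return true;
    }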
> Now, please show some backbone and help solve the problem rather
> than add to the general kludginess of computers.
Do you find this "tone of voice" productive when collaborating? :-)
It seems to me that we're discussing apples and oranges again.
Whatever the representation of time values - whatever the underlying
standards - there have to be mechanisms for implementing them on
particular platforms. I haven't heard any claim that UTC-SLS is
intended to serve all needs. Rather, we've heard the opposite. I
suspect I'm not alone in being suspicious of any overreaching
"solution" proffered for all timekeeping situations - that sounds
like the definition of a kludge.
Rob Seaman
NOAO
Received on Thu Jan 19 2006 - 09:03:05 PST