A recent visitor gave me a detailed account of a telecoms application
where UTC leap seconds can cause havoc, which I would like to share here
with you. In this example, the design ended up being vulnerable to UTC
leap seconds, in spite of the engineers being fully aware of all the
issues involved (leap seconds, UTC vs. TAI vs. GPS time, etc.); they
had to conclude that there was no sufficiently simple way around the
problem to be worth taking.
Background: The Digital Radio Mondiale standard (ETSI TS 101 980) defines
the new global broadcast format for long/medium/short-wave digital
radio. The modulation technique it uses is based on coded orthogonal
frequency division multiplexing (COFDM). In this technique, about ten
thousand data bits are packed together with error-correction information
and then sent through an inverse fast Fourier transform, in order to
generate a waveform that consists of about a hundred carrier signals,
each of which represents a handful of bits via its amplitude and phase.
The output of the FFT is then broadcast. The coding is arranged
carefully, such that if echoes or Doppler shifts disrupt or cancel out
a number of carrier frequencies, the payload audio data can still be
recovered fully intact. The high robustness of COFDM against echoes
makes it feasible to operate single-frequency networks. Several
transmitters spread over a large region broadcast essentially the same
waveform at the same
time in the same frequency band. Handling overlapping signals from
multiple transmitters on the same frequency is not very different from
handling long-distance echoes. Single-frequency networks offer highly
efficient use of the radio spectrum, as the requirement of keeping large
minimum distances between transmitters on the same frequency falls away.
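For readers who have not met OFDM before, here is a toy numerical
sketch in Python of the carrier-mapping idea described above. The
carrier count and the QPSK mapping are made-up illustration parameters,
not the actual DRM mode parameters:

  import numpy as np

  N_CARRIERS = 128   # illustration value only; real DRM modes differ

  rng = np.random.default_rng(0)
  bits = rng.integers(0, 2, size=2 * N_CARRIERS)  # 2 bits/carrier (QPSK)

  # Encode each bit pair in the amplitude and phase of one carrier,
  # here as a unit-magnitude QPSK constellation point.
  symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

  # The inverse FFT superimposes all carriers into one time-domain OFDM
  # symbol: the complex (I,Q) baseband waveform that gets broadcast.
  waveform = np.fft.ifft(symbols)

  # A receiver undoes this with a forward FFT; the error-correction
  # coding (omitted here) repairs carriers lost to echoes or Doppler.
  assert np.allclose(np.fft.fft(waveform), symbols)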
The transmitter infrastructure design supplied by a major manufacturer
of DRM broadcast equipment works like this:
At the central head end, where the broadcast signal arrives from the
studios, a Linux PC has a sound card that samples the studio signal at
48.0000000 kHz (or equivalently resamples an asynchronously arriving
digital studio signal in a polyphase filterbank). The head-end Linux PC
is connected to a Meinberg GPS receiver (a special GPS receiver
optimized for precision timing applications). The 10.00000000 MHz output
of the GPS receiver is used to derive the high-precision sampling clock
signal used by the soundcard. The serial port of the GPS receiver also
synchronizes the Linux clock via the usual xntpd driver software by
Dave Mills et al.
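The post does not describe the actual clock-derivation hardware, but
locking a 48 kHz sample clock to a 10 MHz reference is a simple exact
rational relationship, as this Python fragment checks:

  from fractions import Fraction

  # 48 kHz and 10 MHz are related by the exact ratio 3/625, so a
  # divide-by-625 / multiply-by-3 chain (or a fractional-N PLL) can
  # phase-lock the sound card's sample clock to the GPS 10 MHz output.
  ratio = Fraction(48_000, 10_000_000)
  assert ratio == Fraction(3, 625)
  assert 10_000_000 * ratio == 48_000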
Every 400 ms, the software on the PC takes 400 ms worth of sampled
audio and sends it through an MPEG Advanced Audio Coding (AAC)
compression algorithm. The PC knows, to within about a millisecond, the
UTC time corresponding to each received audio sample. It attaches to
each generated 400 ms compressed data packet a UTC timestamp that lies
a few seconds in the future: the time at which the packet must be
leaving the antennas of transmitters all over the region. The packets
are then
sent via low-cost asynchronous communication links (e.g., selected parts
of the Internet) to the various transmitter stations. There, they are
modulated and queued for delivery by a DSP board that is likewise
connected to a GPS receiver. When the scheduled transmission time
arrives for a packet, it is handed with high timing accuracy to the
digital-to-analog converter, which produces the analog complex (I,Q)
baseband signal that drives the high-power transmitter.
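In rough pseudocode terms, the timing contract between the head end and
each transmitter looks like the following Python sketch. The 5 s lead
time and all the names are illustrative assumptions; the post does not
give the actual values:

  import time

  LEAD_SECONDS  = 5.0   # the assumed "few seconds into the future"
  FRAME_SECONDS = 0.4   # 400 ms of audio per packet

  def head_end_stamp(capture_utc, aac_payload):
      # Attach the scheduled on-air UTC time to one compressed frame.
      return {"on_air_utc": capture_utc + LEAD_SECONDS,
              "payload": aac_payload}

  def transmitter_release(packet, now=time.time):
      # Hold the modulated packet until its scheduled UTC second; the
      # real DSP board does this with GPS-grade accuracy, not sleep().
      delay = packet["on_air_utc"] - now()
      if delay > 0:
          time.sleep(delay)
      # ... hand the samples to the digital-to-analog converter ...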
In order to keep the design simple, existing standard components such
as xntpd, the Linux kernel clock driver and commercially available GPS
clocks with UTC output were used, together with low-cost asynchronous
communication links. The result is a significantly more economical
transmitter infrastructure than what competing technologies could
offer.
The only problem with using such off-the-shelf components is that the
entire design is unable to transmit a signal during a leap second,
because packets are scheduled based on UTC!
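The root of the failure is easy to demonstrate: the POSIX timescale
that Linux and xntpd provide has no representation at all for an
inserted second 23:59:60, so there is no time_t value at which such a
packet could be scheduled. A minimal Python illustration:

  import calendar

  # The last UTC second of 1998 was the leap second 23:59:60, but POSIX
  # time_t steps straight from 23:59:59 to 00:00:00:
  before = calendar.timegm((1998, 12, 31, 23, 59, 59))
  after  = calendar.timegm((1999,  1,  1,  0,  0,  0))
  print(after - before)   # prints 1, although two UTC seconds elapsed;
                          # 23:59:60 has no time_t value of its own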
The engineers were fully aware of the problem, and they knew that TAI and
GPS time exist. In fact, the packet protocol format supports the use of
GPS timestamps. But there is no simple configuration option to make the
particular GPS receiver used output either of these, or to get Linux
and xntpd to run on a TAI or GPS timescale (which would have broken
other things, such as the correct local-time timestamping expected for
routine system administration), so this route was not practical. As a
result, nobody could be bothered to add a few person-months of work
simply to ensure that there is no 1-second disruption roughly once a
year.
[In fact, since short-wave transmitters frequently switch between
programmes at the full hour, in discussions the hope was expressed that
in practice nobody would notice.]
Either having a commonly used standard time without leap seconds (TI),
or having TAI widely supported in clocks and APIs, would have solved
the problem.
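For completeness: given a leap-second table, converting a POSIX/UTC
timestamp to a uniform TAI-based one is just a lookup plus an addition,
as in this Python sketch (a two-entry table excerpt only; a real
implementation would carry the full published list and keep it
updated):

  LEAP_TABLE = [            # (POSIX time it takes effect, TAI-UTC after)
      (867715200, 31),      # 1997-07-01
      (915148800, 32),      # 1999-01-01
  ]

  def utc_to_tai(posix_utc):
      # Scheduling on the resulting uniform timescale would remove the
      # leap-second gap, at the cost of maintaining this table.
      offset = 30           # TAI-UTC in force before the first entry
      for t, off in LEAP_TABLE:
          if posix_utc >= t:
              offset = off
      return posix_utc + offset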
Markus
--
Markus Kuhn, Computer Laboratory, University of Cambridge
http://www.cl.cam.ac.uk/~mgk25/ || CB3 0FD, Great Britain