Technologists look at ways in which electronic systems can tell the time reliably
Does it not seem strange that the one thing most software is missing is the concept of time? Even the typical real time operating system (RTOS) is curiously misnamed, as there is nothing in its structure that marks it out as being different from most standard time sharing operating systems.
Linux is as much a slave to the real time clock as almost every RTOS on the market. And there is no direct link in the canonical RTOS between the interrupt triggered by the real-time clock and the ability of tasks running on it to meet set deadlines. Perhaps that should come as no surprise: the kernel architecture of an operating system such as Linux is very close to that of an RTOS – the main differences lie in the details of the scheduling algorithms and the handling of memory.
In an RTOS, tasks can use strict pre-emption, where a running task can be suspended only by one of higher priority that is ready to run. The RTOS runs the code that makes that decision after handling an interrupt, which may come from external I/O or the regular clock tick itself. The architecture is designed to ensure that the most important task is never blocked when it needs to run – although this is not always possible to guarantee in practice. By doing so, the system is meant to finish its necessary work before key deadlines expire.
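That scheduling decision can be sketched in a few lines of C. This is an illustrative model, not the code of any particular RTOS: the task structure, priority convention (higher number means more urgent) and function names are assumptions made for the example.

```c
#include <stddef.h>

/* Illustrative model of strict priority pre-emption. Higher
 * priority number = more urgent (an assumption for this sketch). */
typedef struct {
    const char *name;
    int priority;
    int ready;      /* non-zero if the task can run */
} task_t;

/* Return the highest-priority ready task, or NULL if none is ready. */
static task_t *pick_next(task_t *tasks, size_t n)
{
    task_t *best = NULL;
    for (size_t i = 0; i < n; i++) {
        if (tasks[i].ready &&
            (best == NULL || tasks[i].priority > best->priority))
            best = &tasks[i];
    }
    return best;
}

/* After an interrupt wakes a task, pre-empt the running task only
 * if the woken one outranks it - otherwise keep running as before. */
static task_t *on_interrupt(task_t *current, task_t *woken)
{
    woken->ready = 1;
    if (current == NULL || woken->priority > current->priority)
        return woken;   /* pre-empt */
    return current;     /* no change */
}
```

Note that nothing here mentions a deadline: the kernel only ever compares priorities, which is exactly the gap the article goes on to describe.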
Architectures have been proposed that have some concept of deadline. For example, the MWave processor developed by IBM for multimedia processing swapped traditional pre-emption for deadline scheduling. This was meant to ensure that tasks with a pressing need to deliver data would be given time to execute before their deadline expired. However, such architectures remain a rarity.
Even the processors that run the software have no concept of time; a feature that is largely an accident of history. The first computer architectures to gain traction, such as the IBM System/360, did so in the world of batch data processing. Here, time was relevant only insofar as it was needed to estimate how long it would take to complete each job.
Microprocessors borrowed their fundamental architecture from these machines, first appearing in systems that had no need to hit hard deadlines. The real time clock in today's embedded systems and computers is something of an afterthought, although it does provide the source of the essential timer interrupt.
A growing number of researchers, along with one or two companies, believe this situation has to change and that embedded systems, at the very least, should have the concept of time baked into their structure, rather than simply using it as a performance metric. Prominent advocates include Professor Edward Lee of UC Berkeley. In turn, National Instruments has embraced a number of the concepts developed by Prof Lee's group in its LabVIEW development environment. Here, FPGAs in the target hardware are used to guarantee timing for certain threads of execution. The architecture developed by Professor David May for the XMOS processor also contains some features that Prof Lee thinks are important for timing-sensitive systems.
The US National Science Foundation has decided that more work needs to be done on time-sensitive embedded systems, announcing $4m of funding for research by a group of universities under the banner of the Roseline project. The universities will work on clocking techniques for embedded systems, as well as synchronisation protocols and control and sensing algorithms.
Prof Lee claims the lack of explicit timing in programming languages and operating systems makes embedded systems 'brittle'. He believes the only way to guarantee consistent timing for a given piece of code without external assistance is to use identical processors – which is a major problem when you consider the obsolescence factor. By making timing a first-class citizen in application design, segments of code can work with guaranteed timing across a variety of processor targets, reducing the amount of time needed to verify the system's function.
But timing is not completely alien to embedded systems. Systems that have to guarantee safe behaviour often use strict time slicing if they need to support multiple tasks because this prevents blocking of those needed to meet strict deadlines. Each piece of software in an ARINC 653 avionics system, for example, has its own dedicated, protected memory space and the executive software controlling the system provides a guaranteed time slice to each process.
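The time-slicing idea behind such systems can be shown with a small C sketch. This is not the ARINC 653 APEX API: the partition names, slice lengths and function names are assumptions chosen for illustration. The point is that each partition's window in a repeating major frame is fixed by a table, so no partition can block another's slot.

```c
#include <stddef.h>

/* Illustrative cyclic partition schedule (not the ARINC 653 API).
 * Each partition owns a fixed slice of a repeating major frame. */
typedef struct {
    const char *partition;
    unsigned slice_ms;      /* guaranteed window within the frame */
} slot_t;

static const slot_t schedule[] = {
    { "flight_control", 20 },
    { "navigation",     10 },
    { "displays",       10 },
};

#define NSLOTS (sizeof schedule / sizeof schedule[0])

/* Map a time offset to the partition that owns the processor then.
 * The schedule repeats every major frame, whatever the tasks do. */
static const char *owner_at(unsigned t_ms)
{
    unsigned frame_ms = 0;
    for (size_t i = 0; i < NSLOTS; i++)
        frame_ms += schedule[i].slice_ms;
    t_ms %= frame_ms;                   /* wrap into the major frame */
    for (size_t i = 0; i < NSLOTS; i++) {
        if (t_ms < schedule[i].slice_ms)
            return schedule[i].partition;
        t_ms -= schedule[i].slice_ms;
    }
    return NULL;                        /* unreachable */
}
```

Because ownership of the processor is a pure function of time, the executive can guarantee each partition its slice regardless of what any other partition does.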
Far from the world of safety critical processing, audio systems have relied on time triggered communications and processing. As well as early digital telephone systems, professional audio studio tools, such as the DSP networks inside Avid Pro Tools' TDM hardware, use strictly time-sliced processing and networking protocols. Each DSP works for a fixed length of time on a given task then, at the end, delivers its data to the network once its dedicated time slot rolls around.
A similar approach has driven the adoption of a time triggered form of Ethernet in automotive entertainment systems in order to avoid video and audio glitches. Now that it has the ability to tell the time, this version of Ethernet could begin to move into vehicle control systems. In Europe, time triggered networking specialist TTTech is partnering with networking giant Cisco to perform further work on timed Ethernet at a research centre in Berlin.
The synchronisation is likely to move outside the car using wireless networks. A team led by Professor Nick Maxemchuk at Columbia University has developed an experimental protocol for self driving cars that relies on scheduled messages. If the signals that control manoeuvres do not arrive before set deadlines, the cars abort the procedure so they can start afresh or deal with other vehicles. The use of deadlines and synchronisation not only makes it easier to check whether something has gone wrong in communications, but also to verify the system works as planned.
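The abort rule described above is simple enough to sketch in C. The structure and names here are hypothetical, not taken from the Columbia protocol; the sketch only captures the stated behaviour that a late message causes the manoeuvre to be abandoned rather than attempted.

```c
#include <stddef.h>

/* Hypothetical sketch of the deadline rule for a scheduled manoeuvre
 * message; names and fields are illustrative, not from the protocol. */
typedef enum { PROCEED, ABORT } decision_t;

typedef struct {
    long sent_at_us;
    long deadline_us;   /* absolute time by which it must arrive */
} manoeuvre_msg_t;

/* A message that is late - or never arrives at all - is treated the
 * same way: the manoeuvre is aborted so the vehicles can start afresh. */
static decision_t check_manoeuvre(const manoeuvre_msg_t *msg,
                                  long arrived_at_us)
{
    if (msg == NULL || arrived_at_us > msg->deadline_us)
        return ABORT;
    return PROCEED;
}
```

The attraction for verification is that the deadline turns a vague failure ("the message was slow") into a binary, checkable condition.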
An issue with distributed processing based on time is ensuring that every node's clock is synchronised. The IEEE1588 timing protocol, originally developed to provide a way to set clocks reliably within telecom networks, is branching out into areas as diverse as high energy physics and automotive communications.
Under IEEE1588, the master clock – the one with the more accurate timing of any pair – first sends a sync message that contains a timestamp (see fig 1). Shortly afterwards, it sends a follow up that contains a more precise timestamp of when the sync message actually left. The slave then sends a delay request; the master timestamps its arrival and returns that value in a fourth message. With a total of four timestamps – three of which were sent by the master – the slave derives the amount by which it needs to adjust its clock.
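The slave-side arithmetic can be written out directly. Label the four timestamps t1 (master's sync transmit time, refined by the follow-up), t2 (slave's sync receive time), t3 (slave's delay-request transmit time) and t4 (master's delay-request receive time). Assuming a symmetric path delay, the slave's offset is ((t2 − t1) − (t4 − t3)) / 2 and the one-way delay is ((t2 − t1) + (t4 − t3)) / 2. The sketch below shows only this calculation, not the rest of the protocol.

```c
#include <stdint.h>

/* Slave-side offset/delay calculation from the four IEEE 1588
 * timestamps, assuming the path delay is symmetric. */
typedef struct {
    int64_t offset_ns;  /* how far the slave clock is ahead of the master */
    int64_t delay_ns;   /* estimated one-way path delay */
} sync_result_t;

static sync_result_t ptp_offset(int64_t t1, int64_t t2,
                                int64_t t3, int64_t t4)
{
    sync_result_t r;
    int64_t ms = t2 - t1;           /* master-to-slave measurement */
    int64_t sm = t4 - t3;           /* slave-to-master measurement */
    r.offset_ns = (ms - sm) / 2;    /* path delay cancels out */
    r.delay_ns  = (ms + sm) / 2;    /* clock offset cancels out */
    return r;
}
```

For example, a slave running 100ns ahead over a 50ns link would measure ms = 150 and sm = −50, recovering both numbers exactly; asymmetry in the path is what limits the accuracy in practice.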
The protocol results in timing synchronisation accurate to tens of nanoseconds over a local area network. CERN has reduced this to 1ns across links of up to 10km using its White Rabbit extensions to IEEE1588.
As network protocols like this proliferate, we can expect to see APIs for them incorporated into software, finally giving it the ability to tell the time.