Embedded systems security: Connect to chaos
Chris Edwards discusses why embedded systems security is growing in importance.
The Chevy Volt will be the first car of its type: not because it is a hybrid electric/petrol vehicle, but because GM plans to give each one the company sells its own IP address.
The Volt will have no fewer than 100 microcontrollers running its systems from some 10 million lines of code. This makes some hackers very excited and Adriel Desautels, president of security analysis firm Netragard, very worried.
Before now, you needed physical access to reprogram the software inside a car: an 'air gap' protected vehicles from remote tampering. The Volt will have no such physical defence. Without some kind of electronic protection, Desautels sees cars such as the Volt and its likely competitors becoming 'hugely vulnerable 5000lb pieces of metal'.
Desautels adds: "We are taking systems that were not meant to be exposed to the threats that my team produces and plugging them into the internet. Some 14-year-old kid will be able to attack your car while you're driving.
"'Black hats' are poised and waiting for this car to come out."
Most of the systems attacked by hackers today are regular computers, but hackers have enjoyed breaking into other connected electronic systems for decades. As they join the internet alongside PCs and servers, a growing number of embedded systems are becoming targets. Annoying you by randomly switching the lights in your house on and off when your home automation system is penetrated is not the only concern. Fraudsters, counterfeiters and rogue manufacturers present a growing threat.
An embedded system may not even be able to trust what is directly attached to the processor bus or one of its I/O ports. A widely publicised attack on point of sale (POS) terminals in UK petrol stations in 2006 captured the personal identification numbers (PINs) of credit and debit cards. The following year, security researchers at the University of Cambridge showed how supposedly tamper proof terminals could be subverted by replacing some of the internal hardware with their own. Both episodes showed how vulnerable hardware can be if it is not designed to trust only other devices that have the right credentials.
The dodgy hardware need not be there to capture personal data: it might simply be an unapproved peripheral, such as a counterfeit compute or I/O card. The shift to China for manufacturing has led to an increase in hardware copying or overbuilding – in which a licensed subcontractor makes too many subsystems and sells the surplus on the black market. If the counterfeit parts themselves use lower quality components – which may themselves be counterfeit parts or devices that failed testing but were salvaged from the scrap bins – manufacturers can wind up with an expensive increase in warranty claims.
If you cannot trust the users or even the hardware in the system, who or what can you trust? At some point, you have to have some sort of trusted module in the system that can vouch for the integrity of at least some of the code. This is the idea behind the 'root of trust': a fundamental piece of software that you can verify because it has the right cryptographic key associated with it.
Root of trust methods demand that the first piece of code to run after system reset is held in the hardware itself. This runs a cryptographic hash, such as SHA-1, over the next piece of software to load, which will generally contain the system initialisation functions. That software, in turn, will analyse the next piece of code to load. Only if its hash matches the expected result will it be allowed to run. If it has been altered by someone tampering with the system, it should fail the test and be rejected. The Trusted Computing Group calls this process a 'measured boot'.
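To make the idea concrete, a minimal sketch of one such link in C might look like the following. It is only an illustration: OpenSSL's SHA1() routine stands in for whatever hash engine the device actually provides, and the expected digest is assumed to come from storage that the earlier, already trusted stage can rely on.

#include <string.h>
#include <openssl/sha.h>

typedef void (*entry_point_t)(void);

/* Measure the next boot stage and hand over control only if its hash
   matches the value recorded in trusted storage. */
int verify_and_launch(const unsigned char *image, size_t len,
                      const unsigned char expected[SHA_DIGEST_LENGTH],
                      entry_point_t entry)
{
    unsigned char digest[SHA_DIGEST_LENGTH];

    SHA1(image, len, digest);                 /* measure the next stage   */

    if (memcmp(digest, expected, SHA_DIGEST_LENGTH) != 0)
        return -1;                            /* tampered: refuse to run  */

    entry();                                  /* matched: hand over control */
    return 0;
}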
You can break the code down into relatively small chunks, but each one that runs is responsible for measuring the next, effectively generating a 'chain of trust'. Not all code needs to be protected in this way, but anything that is involved in trusted transactions has to be measured.
To minimise the amount of storage that needs to be protected against manipulation by malware, each successive measurement is hashed into the same memory location.
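That 'extend' operation is simple enough to sketch, again borrowing OpenSSL's SHA-1 purely for illustration: the stored digest and the new measurement are hashed together, so one protected location summarises every stage measured so far.

#include <stddef.h>
#include <openssl/sha.h>

/* new_digest = SHA1(old_digest || measurement), updated in place. */
void extend_measurement(unsigned char digest[SHA_DIGEST_LENGTH],
                        const unsigned char *measurement, size_t len)
{
    SHA_CTX ctx;

    SHA1_Init(&ctx);
    SHA1_Update(&ctx, digest, SHA_DIGEST_LENGTH);  /* previous value     */
    SHA1_Update(&ctx, measurement, len);           /* new measurement    */
    SHA1_Final(digest, &ctx);                      /* overwrite in place */
}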
You can use the chain of trust to guard against counterfeit hardware as well as corrupted or altered firmware. Software running within the trusted chain can interrogate other boards in a chassis by issuing challenges that only valid boards can respond to correctly. Printer manufacturers, among others, use this challenge-response technique to check that the ink cartridges sitting inside their hardware were made by them.
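The exchange itself is straightforward to sketch. In this hypothetical example, a host checks a peripheral by sending a fresh random challenge and expecting a keyed hash of it in return; only hardware that holds the shared secret can answer correctly. OpenSSL's HMAC-SHA1 routine stands in for the authentication silicon a real board would carry.

#include <string.h>
#include <openssl/hmac.h>
#include <openssl/rand.h>

#define CHALLENGE_LEN 16

/* The peripheral's side: answer the challenge with a keyed hash. */
static void peripheral_respond(const unsigned char *key, int key_len,
                               const unsigned char *challenge,
                               unsigned char *response, unsigned int *resp_len)
{
    HMAC(EVP_sha1(), key, key_len, challenge, CHALLENGE_LEN,
         response, resp_len);
}

/* The host's side: issue a fresh challenge and verify the reply.
   Returns 1 if the peripheral appears genuine, 0 otherwise. */
int host_check_peripheral(const unsigned char *key, int key_len)
{
    unsigned char challenge[CHALLENGE_LEN];
    unsigned char expected[EVP_MAX_MD_SIZE], reply[EVP_MAX_MD_SIZE];
    unsigned int expected_len, reply_len;

    if (RAND_bytes(challenge, sizeof(challenge)) != 1)
        return 0;                                  /* no good randomness */

    peripheral_respond(key, key_len, challenge, reply, &reply_len);

    HMAC(EVP_sha1(), key, key_len, challenge, CHALLENGE_LEN,
         expected, &expected_len);

    /* A production design would use a constant-time comparison here. */
    return reply_len == expected_len &&
           memcmp(reply, expected, expected_len) == 0;
}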
Stryker, which makes microsurgery equipment, makes sure that its cutting tips are used only once, for safety reasons, by fitting them with RFID tags that are invalidated by the tool after use. If a tip does not contain a valid tag, the tool will not start.
A measured boot is for naught if a hacker or worm can surreptitiously alter code after it has been checked. This sounds more difficult than it really is. Hackers have demonstrated time and again on internet servers and PCs the effectiveness of one particular technique for inserting code into a running program: the buffer overflow.
Hackers have used the buffer overflow exploit for more than 20 years: the Morris worm, the first internet-borne worm, used the technique in 1988. It is a prime example of how compilers can work against you if you are not careful.
A buffer overflow is devastatingly simple, especially if you have a system connected to a network. The hacker or virus writer deliberately creates oversized or malformed packets that burst through the area of memory set aside by the programmer to hold the data from incoming packets. Very often, these data areas, or buffers, will take the form of strings with a defined length. Trouble generally comes when a standard C or C++ library function such as strcpy() is used to pass data captured from a network packet into one of these buffers: strcpy() does not bother to perform any bounds checks before copying the string supplied by the hacker into the string that will be used by the program for further processing.
When the destination buffer is a local variable, the copying takes place on the stack. The compiler will allocate just enough space on the stack to hold whatever the programmer has defined as the size of the buffer. When the code in strcpy() runs, it dutifully cycles through until it finds the zero value that typically terminates a string in C or C++. If the incoming string happens to be larger than the space allocated to the buffer, strcpy() will simply overwrite other valuable data, such as the function's return address.
This is where the exploit bares its teeth. Usually, a corrupted stack results in an instant crash. However, the hacker will have, in the style of a Blue Peter demonstration, done this before and will make sure the replacement data that winds up in the stack will either point the return address to a piece of operating system code that spawns a process that can then be used to control the system, or steer execution to code inserted in the buffer itself.
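Stripped to its essentials, the vulnerable pattern looks something like this (the function and buffer names are invented for illustration):

#include <string.h>

void handle_packet(const char *packet_data)
{
    char buffer[64];             /* 64 bytes reserved on the stack */

    strcpy(buffer, packet_data); /* no bounds check: a longer 'string'
                                    tramples whatever follows the buffer,
                                    including the saved return address */

    /* ... further processing of buffer ... */
}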
There are many ways to defeat this kind of exploit. For example, strncpy() will only copy characters up to a defined limit, which should be the size of the buffer. The function can even be used as a way of detecting a buffer overflow attempt, since it will not insert the zero value terminator normally expected of a string when the source is too long to fit. This can be tested after the operation to help detect whether data coming into the system has been corrupted or deliberately malformed.
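The same routine, reworked along the lines just described (again, the names are invented): the copy is bounded and the missing terminator is used as a tell-tale that the input was too big.

#include <string.h>

int handle_packet_checked(const char *packet_data)
{
    char buffer[64];

    strncpy(buffer, packet_data, sizeof(buffer));

    if (buffer[sizeof(buffer) - 1] != '\0')
        return -1;               /* no terminator: input was oversized, reject it */

    /* ... further processing of the correctly terminated buffer ... */
    return 0;
}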
Another technique is to take a copy of the return address when the stack frame is created. When the function returns, code inserted by the compiler or a post-compilation tool can check the two addresses; execution can only proceed normally if they match.
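In practice the compiler or tool inserts the equivalent code automatically, but a rough, GCC-specific approximation of the idea is sketched below; in a real implementation the saved copy would need to live somewhere the overflow could not reach.

#include <stdlib.h>
#include <string.h>

void process_request(const char *data)
{
    void *saved_return = __builtin_return_address(0); /* copy taken on entry */
    char buffer[64];

    strncpy(buffer, data, sizeof(buffer) - 1);
    buffer[sizeof(buffer) - 1] = '\0';
    /* ... normal processing ... */

    if (__builtin_return_address(0) != saved_return)  /* compare before returning */
        abort();                  /* mismatch: the stack has been altered */
}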
However, as buffer overflows are still being used to corrupt internet-based systems, the message has not necessarily made it through to developers.
Datatype abuse can also trip up systems. For example, the default in C is to treat integers as signed, even if the developer only intended the variable to be used as unsigned. A hacker may try to use negative values to slip past a bounds check and force logic later in the program to go off in the wrong direction. There are plenty of other integer manipulation tricks that rely on similar overflow or underflow problems in software that has not been designed to defend against them.
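A hypothetical example of the trap: a length that arrives as a signed integer sails past the bounds check when it is negative, then turns into an enormous unsigned value by the time it reaches memcpy().

#include <string.h>

#define BUF_SIZE 64

void copy_field(char *dst, const char *src, int len)  /* len is signed */
{
    if (len > BUF_SIZE)            /* -1 is not greater than 64, so the check passes */
        return;

    memcpy(dst, src, (size_t)len); /* -1 now converts to a huge size_t */
}

/* The defence is to use an unsigned type for the length, or to reject
   negative values explicitly: if (len < 0 || len > BUF_SIZE) return;   */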
In principle, even if malicious code can be inserted into a running system, it is still possible to trap it before too much damage is done. In the chain of trust system, hashing is done in such a way that, if a piece of code is corrupted, it can be identified by checking its hash against the result recorded for it by the measured boot process. There is inevitably a performance hit if the system is continually scanning for corruption.
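A sketch of what that scanning might look like, reusing the boot-time measurements and, again, OpenSSL's SHA-1 as a stand-in: every pass re-hashes each protected region in full, which is where the performance cost comes from.

#include <string.h>
#include <openssl/sha.h>

struct protected_region {
    const unsigned char *start;                           /* start of code region */
    size_t               len;                             /* region length        */
    unsigned char        boot_digest[SHA_DIGEST_LENGTH];  /* boot-time hash       */
};

/* Returns the index of the first corrupted region, or -1 if all match. */
int scan_regions(const struct protected_region *regions, size_t count)
{
    unsigned char digest[SHA_DIGEST_LENGTH];
    size_t i;

    for (i = 0; i < count; i++) {
        SHA1(regions[i].start, regions[i].len, digest);
        if (memcmp(digest, regions[i].boot_digest, SHA_DIGEST_LENGTH) != 0)
            return (int)i;
    }
    return -1;
}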
The root of trust itself can come under attack. This generally demands physical access to the system, but specialists, such as cryptography researchers and the Cambridge team that showed the vulnerability of POS terminals, have demonstrated repeatedly how seemingly innocuous and subtle changes in temperature and emitted RF can be used, with the right statistical techniques, to uncover cryptographic keys that are stored in on-chip non-volatile memory and never leave the package.
Hardening cryptographic circuits against attack – for example, by putting false paths into the algorithms – is one way to protect against the problem. The other is to make sure the consequences of an attack only affect one system by not sharing keys between units. While more complex to achieve, this is likely to pay off in the long term as hackers become more aware of how they can break into systems through embedded hardware.
Don't think that, just because Windows is such a common target for attack, embedded hardware is safe for the moment. Companies that specialise in penetration testing of corporate IT networks regard devices such as printers and network switches as soft targets. And others are busily reverse engineering home automation devices to find out what is possible.
Apology
The original version of this article contained an assertion about former telecom engineer Barrett Brown that was incorrect. The author and New Electronics apologise unreservedly for the error.