To minimise the amount of extra data needed, the researchers store memory addresses in a 'tree'. Every address is randomly assigned to a path through the tree and when the chip requires data stored at a particular address, it also requests data from all other nodes on the same path.
When the chip writes a block of data to memory, it pushes it as far down the tree as it can, which means finding the last vacancy before the block's assigned path branches off from the path that was just read.
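The scheme the two paragraphs above describe can be sketched in software. This is a minimal, illustrative Python version, not the researchers' hardware design: the tree sizes, the names (`tree`, `position`, `stash`, `access`) and the use of an overflow dictionary for blocks that don't fit are all our own simplifications.

```python
import random

LEVELS = 4                  # tree with 2**LEVELS leaves (sizes illustrative)
BUCKET = 3                  # capacity of each tree node, in blocks
N_LEAVES = 2 ** LEVELS

tree = [[] for _ in range(2 * N_LEAVES - 1)]   # heap-indexed node buckets
position = {}                                  # address -> assigned leaf
stash = {}                                     # overflow blocks: addr -> value

def path(leaf):
    """Heap indices on the root-to-leaf path, root first."""
    node = leaf + N_LEAVES - 1
    nodes = [node]
    while node:
        node = (node - 1) // 2
        nodes.append(node)
    return nodes[::-1]

def access(addr, new_value=None):
    """Oblivious read (and optional update) of one address."""
    leaf = position.setdefault(addr, random.randrange(N_LEAVES))
    # Request *every* block on the assigned path, decoys included.
    for node in path(leaf):
        for a, v in tree[node]:
            stash[a] = v
        tree[node] = []
    value = stash.get(addr)
    if new_value is not None:
        stash[addr] = new_value
    # The block gets a fresh random path before it is written back.
    position[addr] = random.randrange(N_LEAVES)
    # Write back: push each held block as far down the just-read path as
    # possible -- to the last vacancy before its own path branches off.
    for node in reversed(path(leaf)):          # deepest node first
        for a in list(stash):
            if node in path(position[a]) and len(tree[node]) < BUCKET:
                tree[node].append((a, stash.pop(a)))
    return value
```

Reading a whole path on every access is what hides which single address the chip actually wanted; remapping the address afterwards ensures the next request for it looks unrelated to this one.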
In writing data, the chip has to follow the sequence of nodes in the path to conceal information. Previously, that meant sorting memory addresses according to their ultimate locations in the tree.
"Sort is not easy to do in hardware," says graduate student Chris Fletcher. "By the time you've sorted everything, you've taken a real performance hit."
To overcome this problem, the team has used an extra memory circuit, with storage slots that can be mapped onto the sequence of nodes in any path through the tree. Once a data block's final location is determined, it's stored at the corresponding slot in the circuit. All of the blocks are then read out in order.
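The slot-per-node idea can be illustrated in a few lines. This is a software analogy for what is really a hardware circuit, and it simplifies to one block per slot; the names `slots`, `place` and `flush` are ours, not the team's.

```python
LEVELS = 4
# One storage slot per node on a path: slot i corresponds to the node at
# depth i. No slot ordering ever has to be computed -- it is fixed.
slots = [None] * (LEVELS + 1)

def place(block, depth):
    """Drop a block straight into the slot for its target depth: O(1), no sort."""
    assert slots[depth] is None, "one block per slot in this simplified sketch"
    slots[depth] = block

def flush():
    """Stream the slots out in path order (root first), then clear them."""
    out = list(slots)
    for i in range(len(slots)):
        slots[i] = None
    return out
```

The point of the design is that a block is filed the moment its destination is known, and emitting the slots in index order automatically yields the node sequence of the path, so the expensive hardware sort disappears.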
Rather than writing data every time it reads data, the chip writes only on every fifth read; on the other reads, it discards the decoy data. When it does write, it has, on average, five extra blocks of data to store on the last path it read. There are generally enough vacancies in the tree to accommodate the extra blocks, and the system's protocols for pushing data as far down the tree as possible can handle the occasional logjam.
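The one-write-per-five-reads schedule can be sketched as a simple counter. The class and method names here are hypothetical, and the callback stands in for whatever eviction logic the chip actually uses:

```python
class LazyWriteback:
    """Sketch of a write-every-fifth-read policy (names are illustrative)."""
    PERIOD = 5

    def __init__(self):
        self.reads = 0
        self.pending = []          # real blocks waiting for the next write

    def on_read(self, real_block, write_back):
        self.reads += 1
        self.pending.append(real_block)
        if self.reads % self.PERIOD == 0:
            # One write per five reads: on average, five accumulated
            # blocks must find vacancies on the last path read.
            write_back(self.pending)
            self.pending = []
        # On the other four reads, decoy data is simply discarded.
        return real_block
```

Deferring writes this way trades a small backlog of pending blocks for a large cut in memory traffic, which is why the tree needs spare capacity to absorb the batched write-backs.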
According to the team, one of the advantages of its approach is that it can be added to existing chip designs, without much retooling. The extra layer of security can then be switched on and off as needed.