Can tweaking the analogue properties of digital circuits help to combat the effects of variability?
As semiconductor process technologies get ever smaller, the effects of stochastic variability become more pronounced. It's no surprise: because there are fewer atoms involved, the presence or absence of a small number of dopants can have large effects on device properties. Where variability was predictable at earlier process nodes, it is more random – or stochastic – at the leading edge.
It's a property that has been addressed by a number of projects, but largely from the perspective of predicting what might happen. Now, a project in progress at the University of York and supported by the EPSRC is trying to take that work to the device level. Leading the PAnDA – Programmable Analogue and Digital Array – project is Prof Andy Tyrrell, head of York's Intelligent Systems Group. "Our work comes from at least two, maybe three, strands. About four or five years ago, we were involved with the NanoCMOS project, along with the University of Glasgow and three other UK universities. We've now started to look at whether we could take variability aware device models from Glasgow, create standard cells and then use evolutionary computing models to optimise them to work better against a number of performance measures, including variability."
He said the second strand related to York's expertise with fpgas. "We've used conventional and unconventional fpgas for the last 10 to 15 years in our bio-inspired and evolutionary computing work. We will be using this expertise to develop a device designed with partial dynamic reconfiguration in mind."
Prof Tyrrell noted the third strand related to field programmable transistor arrays, allowing reconfiguration at the transistor level and, hence, the creation of analogue and digital devices. "We decided it would be interesting to build a new fpga structure in which device parameters could be tuned."
Dr Martin Trefzer is another project member, along with Dr James Walker. He said: "While fpgas are reconfigurable, we've added another layer that allows the analogue properties of the underlying fabric to be tweaked.
"With fpgas, higher level tools are needed to put together a system and our extra layer can be considered as a kind of post mapping optimisation stage."
Prof Tyrrell added: "One of the things we want to understand is how variability at low dimensions affects designs and how these effects can be mitigated. The PAnDA project will act as a research tool to investigate variability on real hardware. For example, a company might want to look at the effects of the next silicon generation. It could build a PAnDA device and experiment with silicon, rather than in simulation."
The project aims to create a novel fpga architecture based on configurable transistors (CTs), the size of which can be altered by configuring them in different ways. The PAnDA architecture (see fig 1) offers finer granularity than seen in conventional fpgas, providing access to the transistor and function levels. This, say the researchers, will enable properties to be tweaked at the analogue level, as well as at the digital level.
Alongside CTs, the PAnDA architecture features configurable analogue blocks (CABs), configurable logic blocks (CLBs), logic cells and interconnect. While some of these features are found in commercial fpgas, the PAnDA architecture adds CTs and CABs.
Dr Trefzer noted: "We are not modelling semiconductor devices at the atomic level. Rather, we are designing at the transistor level and aiming to create a commercial substrate. Our targets include creating an additional configuration layer and higher density."
The CT – the smallest reconfigurable element – is formed from seven pmos or nmos transistors connected in parallel. Each transistor can be turned on or off using a switch that connects it to a common gate. The state of each switch is controlled via configuration bits stored in a configuration sram.
The design is said to exploit the fact that cmos transistors of the same gate length, connected in parallel, are equivalent to a single transistor of that gate length whose width is the sum of the individual widths.
Dr Trefzer said: "Each block has a small amount of circuitry connecting to a number of transistors to allow devices with different geometries to be represented. We can alter transistor widths, and so the amount of current flowing. We can make the block smaller and faster, with lower current requirement, but more variability and vice versa. By configuring the width:length ratio, we can define device characteristics."
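To illustrate the scheme, the sketch below models a CT as a bank of seven parallel transistors, each enabled by a configuration bit. The branch widths are hypothetical values chosen for illustration, not figures published by the project.

```python
# Minimal sketch of a configurable transistor (CT): seven parallel
# devices of equal gate length, each switched in or out by one
# configuration bit. Branch widths are illustrative assumptions.

GATE_LENGTH_NM = 40                                   # the project's target node
BRANCH_WIDTHS_NM = [60, 60, 80, 80, 120, 120, 160]    # hypothetical branch widths

def effective_width(config_bits):
    """Parallel transistors of equal length behave as one device whose
    width is the sum of the enabled branch widths."""
    return sum(w for w, bit in zip(BRANCH_WIDTHS_NM, config_bits) if bit)

def width_length_ratio(config_bits):
    """First-order drive current scales with the W/L ratio."""
    return effective_width(config_bits) / GATE_LENGTH_NM

narrow = [1, 0, 0, 0, 0, 0, 0]      # low current, more mismatch variability
wide = [1, 1, 1, 1, 1, 1, 1]        # high current, mismatch averages out
print(width_length_ratio(narrow))   # 1.5
print(width_length_ratio(wide))     # 17.0
```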
The next level is the CAB, formed from four nmos CTs, four pmos CTs and configurable interconnect (see fig 2). A CAB features seven analogue I/O.
The CAB represents the basic building block from which an fpga fabric can be implemented. In this case, one CAB can be configured as three inverters, a buffer, a transmission gate, a logic gate, a latch or an sram.
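As a structural illustration only, the sketch below captures the CAB's composition and function set as described above; the class and method names are hypothetical, and the interconnect routing that realises each function is not modelled.

```python
from dataclasses import dataclass, field
from typing import List

# Functions the article says one CAB can implement.
CAB_FUNCTIONS = {"three inverters", "buffer", "transmission gate",
                 "logic gate", "latch", "sram"}

@dataclass
class CAB:
    # each CT is a list of seven configuration bits (see earlier sketch)
    nmos_cts: List[List[int]] = field(default_factory=lambda: [[0] * 7 for _ in range(4)])
    pmos_cts: List[List[int]] = field(default_factory=lambda: [[0] * 7 for _ in range(4)])
    analogue_io: int = 7
    function: str = "buffer"

    def configure(self, function: str) -> None:
        """Select which of the supported functions the CAB implements."""
        if function not in CAB_FUNCTIONS:
            raise ValueError(f"unsupported CAB function: {function}")
        self.function = function
```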
Higher up the structure are CLBs. Logic designs can be mapped at the CLB level in the same way as in commercial fpgas, which makes both architectures compatible. However, PAnDA devices allow the CLB's function set to be modified at the CAB level, which the team says may lead to more efficient and compact mappings.
Dr Trefzer admitted: "It's important for us to maximise the number of functions that can be realised by altering the configuration of the device. But it's also important for us to keep the amount of configuration circuitry as small as possible."
The project is also turning its attention towards design tools. "We know they'll be needed," said Prof Tyrrell, "and we'll be expanding some tools developed during the NanoCMOS project."
The researchers will be looking to map logic designs at the CLB level, but to extend that with the ability to allocate CAB functions automatically, using global optimisation techniques drawn from evolutionary computation. These techniques may also be suitable for device size optimisation at the CT level.
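To give a flavour of how such techniques might apply at the CT level, here is a minimal (1+1) evolutionary strategy over one CT's configuration bits. The cost function is a hypothetical stand-in trade-off (wider devices average out mismatch but draw more current), not a model published by the project.

```python
import random

# Minimal (1+1) evolutionary strategy sketch for CT sizing: mutate the
# seven configuration bits and keep the better of parent and child.
# The cost model below is an illustrative assumption.

N_BRANCHES = 7

def cost(bits, var_weight=1.0, power_weight=0.1):
    width = sum(bits) or 1                   # enabled branches as a width proxy
    variability = var_weight / width ** 0.5  # mismatch averages out as width grows
    power = power_weight * width             # a wider device draws more current
    return variability + power

def mutate(bits, rate=0.2):
    # flip each configuration bit with probability `rate`
    return [b ^ (random.random() < rate) for b in bits]

def optimise(generations=200):
    parent = [random.randint(0, 1) for _ in range(N_BRANCHES)]
    for _ in range(generations):
        child = mutate(parent)
        if cost(child) <= cost(parent):      # accept if no worse
            parent = child
    return parent, cost(parent)

print(optimise())
```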
Dr Trefzer said: "When you think about COTS fpgas, you think of CLBs and logic elements and that's where the design stops. In our case, we go down to smaller levels of granularity. We don't take standard cells and put them together. Instead, we add reconfigurability from the start."
A number of potential applications are being considered. One is to allow fabless companies early access to smaller technologies. "They could configure a device as an emulator," Dr Trefzer offered. He also noted that simulation is computationally intensive. "The device could be used as a hardware accelerator and, once the hardware is configured, the user would have their answer."
Although the project is targeting the 40nm node, Prof Tyrrell says that doesn't mean it's as small as the design will go. "It's as small as we can go for the money we have. Because we're interested in variability, the smaller we go, the more effective we think the device will be."
Prof Tyrrell said the first test chip will be taped out in June and he hopes to see silicon in October. "We will have another two silicon runs after that," he concluded.