When is an FPGA not an ASIC and not an SoC?
In the mid-1990s, when programmable logic devices – the forebears of today's FPGAs – were clinging to the bottom rung of the semiconductor ladder, few anticipated they would become a 'go-to' technology for consumer and industrial products.
Originally developed as a quick fix for design errors and for other housekeeping tasks – replacing a number of discrete logic devices with a single package – the PLD/FPGA now lives on the bleeding edge of semiconductor technology. FPGA developers adopt the latest processes in order to cram as much technology as possible onto the silicon, meeting what appears to be insatiable demand for more processing power, more DSP and more logic, but with the same or lower power consumption.
Back in the '90s, PLDs were positioned, to some extent, as ASIC replacements. But PLD and ASIC technology developed at the same pace, with the result that PLDs – and the FPGAs which succeeded them – continued to bite at the ankles of ASICs.
Now, the eye-watering cost of developing ASICs – at least at the forefront of process technology – combined with the need for guaranteed, high-volume demand, means they are the province of very rich companies – playing to the FPGA's flexibility.
However, look at the terminology being used and you don't see the acronym 'FPGA' appearing so frequently. Altera now calls its top-of-the-line devices SoCs; the differentiator is whether the device has a hard processor – if it does, it's an SoC. Xilinx, meanwhile, prefers terms such as 'second generation SoC' and 'all programmable device' to FPGA.
Let's assume the FPGA has, after more than a decade, finally vanquished the ASIC. Is there another technology around which might, in turn, bite at its ankles? Not obviously – and, with the technology tied to the leading edge, any pretender will need to be remarkably simple or remarkably well funded.