In three decades, dsp has become a pervasive technology
It's amazing to discover that engineers have been looking to develop technology that can imitate the human voice since the 1930s. The first such system – called the Voder – was demonstrated at the 1939 World's Fair in New York.
Developed by Bell Labs, the Voder created – or attempted to create – speech using a range of functions. Using a combination of wrist bars, foot pedals and gas discharge tubes, the operator could generate the various components of speech. Understandably, it was a complex machine.
Bell Labs was involved for obvious reasons: the need to compress voice for transmission over copper wires had already been identified, as had the need for encryption, and the company had developed the vocoder as part of this work.
But not much happened after that until the 1970s, when electronics technology began to be applied to the problem. Texas Instruments started exploring speech synthesis in 1976 as part of its development of bubble memory technology. This led to the launch in 1978 of Speak & Spell – famously used by E.T. in its attempts to phone home. It was the first time the human voice had been replicated using a single chip and, importantly, it launched digital signal processing technology into the consumer domain.
Since then, dsp technology has found wide application – everything from mobile phones to motor control to hard disk drives. As companies like TI took advantage of advances in process technology, performance increased, power consumption declined and the rest, as they say, is history.
Today, 30 years on from the launch of the first dsp chip, dsp has blended into the background somewhat; rather than being a discrete product, it is now an enabling technology. But even though it's not as visible as it once was, dsp isn't yesterday's technology by any stretch. As an MIT professor points out: 'There will always be interesting signals and we will always want to process them. Therefore, there will always be signal processing.'