30 years of dsp: From a point solution for powering a toy to a pervasive technology
The beginning of 2012 marked an important anniversary, not only for the electronics industry, but also for consumers around the world. The celebrations – even if they were muted – recognised the launch of Texas Instruments' first digital signal processor (dsp) as a commercial product.
Gene Frantz, TI's principal fellow, was intimately involved with the technology's development in the 1970s. But he said signal processing, as a discipline, had been around for much longer. "It grew out of university curiosity in the 1960s," he recalled, "although there was work being done at Bell Labs in the 1950s. Before that, a speech vocoding system was demonstrated at the World's Fair in 1939."
Frantz said the starting point at TI was the recognition that enough performance was available from electronics technology to do 'interesting things'. "In the 1960s, there were maybe eight or 10 universities around the world working on how to do speech processing in a digital format. Digital computers were available; all they had to do was to translate analogue signals into the digital domain. The breakthrough was the discovery – more accurately, the rediscovery – of the FFT."
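For readers who have never met it, the operation in question can be sketched in a few lines of modern code. The snippet below is illustrative only; NumPy and the signal parameters are our choices, not anything TI used. It shows what the FFT makes cheap: turning a block of sampled 'analogue' data into a frequency spectrum in a fraction of the arithmetic a direct transform would need.

```python
# Illustrative sketch: what the FFT makes practical. All parameters invented.
import numpy as np

fs = 8000                                   # sample rate in Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)               # 100ms of samples
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t)

spectrum = np.fft.rfft(x)                   # the fast Fourier transform
freqs = np.fft.rfftfreq(len(x), 1 / fs)     # frequency of each bin in Hz

# The two tones show up as the two largest peaks, near 440Hz and 1200Hz.
peaks = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(peaks))
```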
DSP technology also received a boost from continuing global tensions. Frantz said the 1970s could be typified as a decade in which the technology was pursued for military advantage. "But the technology that was being developed was expensive and difficult to use."
Nevertheless, Frantz and his fellow researchers followed a more commercial path and, in 1978, TI launched the Speak & Spell. "We put speech synthesis into a toy," Frantz noted. "Until then, it was not considered possible."
Frantz credited a couple of his colleagues with the achievement. "We got it done because we had Richard Wiggins on the team, who had learned a lot about speech synthesis algorithms, and Larry Brantingham. Between them, they put the circuit architecture together. My job was to get them in the same room."
The dsp in the Speak & Spell featured an algorithm called LPC10, which later became a secure telephony speech encoding standard. It used linear predictive coding to generate understandable speech, albeit with a very synthetic quality. "We had to make some compromises to get it onto a chip," Frantz added.
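The idea behind linear predictive coding can be sketched briefly. The snippet below is a minimal illustration of the technique, not the LPC10 standard or the Speak & Spell implementation: each frame of speech is modelled as an all-pole filter, so only a handful of predictor coefficients (plus pitch and gain) need to be stored, and speech is resynthesised by exciting that filter with a simple pulse train.

```python
# Minimal sketch of linear predictive coding (illustrative only).
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc_coefficients(frame, order=10):
    """Fit an all-pole model: predict each sample from the previous 'order' samples."""
    r = np.correlate(frame, frame, mode='full')[len(frame) - 1:]   # autocorrelation
    return solve_toeplitz((r[:order], r[:order]), r[1:order + 1])  # normal equations

fs = 8000                                     # sample rate in Hz (assumed)
t = np.arange(0, 0.03, 1 / fs)                # one 30ms 'voiced' frame
frame = np.sin(2 * np.pi * 150 * t) * np.hanning(len(t))

a = lpc_coefficients(frame, order=10)         # ten numbers describe the frame

# Resynthesis: drive the all-pole filter 1/A(z) with a 150Hz impulse train.
excitation = np.zeros(len(frame))
excitation[::fs // 150] = 1.0
synthetic = lfilter([1.0], np.concatenate(([1.0], -a)), excitation)
```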
Technology itself wasn't a particular challenge, in Frantz' opinion. "We designed that first dsp in pmos, which was a really slow technology. NMOS was faster, but it wasn't considered possible to use it for a dsp."
Frantz and his team broke the multiplier function down into 22 sequential stages. "We'd get an answer in one clock cycle," he continued, "but it was pipelined and most of the problem was setting up the pipeline so everything turned up at the right time."
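The point of the pipeline is worth spelling out: any one multiply takes 22 clocks to work its way through, but once the pipe is full a finished product emerges every clock. The toy simulation below is our illustration; only the stage count comes from the article.

```python
# Toy model of a 22 stage multiply pipeline (illustrative only).
from collections import deque

STAGES = 22
pipeline = deque([None] * STAGES, maxlen=STAGES)
operands = [(i, i + 1) for i in range(1, 41)]   # invented stream of 40 multiplies
results = []

for clock, pair in enumerate(operands + [None] * STAGES):
    finished = pipeline[-1]                     # value leaving the final stage
    if finished is not None:
        results.append((clock, finished))
    # Real hardware does partial work in every stage; here we simply carry the
    # completed product down the pipe to model latency and throughput.
    pipeline.appendleft(pair[0] * pair[1] if pair else None)

print(results[0])     # first product appears only after the 22 clock fill latency
print(len(results))   # ...but after that, one product per clock: 40 in total
```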
Frantz believes the significance of the achievement went beyond the fact that they had created the device. "Until then, only rich countries could afford advanced electronics technology. We moved it into the commercial world."
And the commercial world adopted dsp technology with seeming abandon; applications in the early 1980s included voice band modems, hard disk drives and 3d graphics.
"By the time we introduced the TMS32010 in 1982, customers weren't doing what we expected. We developed the device to extend speech processing, but our customers were doing everything but," he said. "Over the last 30 years, customers have continued to 'out innovate' us – and that's been fun. While it's our business to bring out devices that do more than we think, our customers do even better."
By the 1990s, dsp was being regarded as a 'solve all' technology. "It was a decade of expectation," Frantz claimed. "Now, dsp is an enabler in every embedded processor."
Fernando Mujica, director of TI's System Architectures Lab, said he got interested in signal processing whilst at university. "I've seen the evolution of applications and dsp isn't disappearing; it's implemented as accelerators. The algorithms are well known and stable, so they are perfect for hard coding. This frees programmable devices, including dsps, to do the 'fun things' for which we don't have solutions."
But what was there before the dsp? "There were special computing elements available, but you needed to work in microcode. Bit slice processors also offered useful features," said Frantz. "And there were plain old analogue circuits," Mujica added.
The arrival of the dsp was, in Frantz' opinion, a 'revolution'. "As much as anything, it was not because we invented multiplication; it was because we made the technology cheap enough to be used in applications where it hadn't been possible before."
Did much work have to be done to market dsp technology? "Yes and no," Frantz said. "Folks at universities were having a lot of fun inventing theories and showing how stuff could be done. As these guys graduated with PhDs, they formed small companies which were more than willing to work with us. And the university system was helping students to become our customers."
With the growing interest in dsp, the TI team started working with professors on textbooks outlining the fundamentals of dsp. "But the market wasn't big enough for us, unless we could get more people to use dsp," Frantz admitted. A third party network started to develop, offering development tools and, as Frantz noted, 'filling in the holes'.
Another strand of the marketing campaign was to target educators. "Originally, we targeted undergraduates," Frantz recalled. "Now, dsp is taught in high schools and signal processing is one of the first subjects studied in EE and by engineers in general. But we wanted to make sure students graduated knowing how to use dsps."
TI is now taking advantage of the Open Source community, not only to 'spread the word', but also to expand the technology. Mujica said it has a big multiplier effect. "The benefits of Moore's Law mean we can move to higher level programming languages. It was assembler in the early days, then the C compiler became good enough. Now, it's object oriented programming and C++. More people can take advantage of dsp because it is being abstracted from the hardware."
Frantz expanded: "Artists and musicians may not be technical people, but they're innovative and can use technology without needing a PhD in it, which was the case when we started out."
While there was widespread interest in dsp, understanding hadn't reached the same level. "I spoke to many who said they only needed speech recognition," Frantz noted. "I asked them whether they had anyone with a degree in signal processing. I didn't want to talk to those who didn't, because I would have been their dsp team and I couldn't afford to do that. At that time, if you didn't understand the technology, you couldn't use the results. Today, you can."
In 30 years, dsp has moved from discrete devices to the point where it provides functionality within other devices. Mujica believes this isn't a bad thing. "It's a pervasive technology today and continues to increase device performance. DSP is used more than ever, but it's 'under the hood' and the programming models keep the details hidden."
Algorithms disappear, replaced by hardware accelerators. "One of the good things about accelerators," said Frantz, "is they give the highest performance for the lowest power and lowest cost per function. They take the mystery away and bring all the advantages."
Reflecting, Frantz said the original dsp was a microcontroller with a multiplier. "The multiplier took up 25% of the die. Today, I ask professors what they would put in 25% of a modern die that would revolutionise industry. I don't tell them my answer, but it's accelerators; it's amazing how collapsing algorithms into accelerators has continued to revolutionise industries."
What landmarks does Frantz recall for dsp? "When we did dsp in the 1980s, the world was beginning to realise there were these things called pcs. One of the things which helped dsp along was that, for the first time, there was a development environment that wasn't based on minis or mainframes. Where a design seat cost $100k, it was now available for $5k.
"But a significant achievement was the floating point signal processor. When we introduced the TMS320C30, we were asked whether it was a success because it was 32bits, floating point or because it was C friendly. It was 'all of the above'. We changed the way signal processing code was written and how data sets were handled. People didn't have to worry about overflow or underflow and it ended up being a wonderful turning point."
Frantz also pointed to the development of the digital mobile phone as another significant point. "When I first looked at the GSM spec, I realised each part of the system was defined around how many TMS320C25s it took."
As dsp headed further into the consumer world, power consumption became an issue. The discussions that followed resulted in what became known as Gene's Law, a response to a customer's comment that 'if you can't get the power down, we'll find another vendor'.
"Early digital mobile phone customers were saying 'we need a device with double the performance'," Frantz observed. "Originally, we tried to explain why we couldn't get the power down, but decided to figure out a way to make it possible. Since then, power dissipation has become a vector of dsp performance."
Mujica pointed to the development of multicore dsps as critical. Frantz picked up the theme. "After the C30 had been out for a year or so, we couldn't find a customer using only one dsp; they were using multiple C30s. So we architected the C40, a single core with six comms ports to support links with neighbouring processors. We could do multiprocessing, but not on the same piece of silicon. Later, we did the C80, a risc processor tied to four signal processors through a crossbar switch; all on the same die. Some thought it impossible to program, but those who knew how to do it loved it."
Mujica added: "It was an important milestone. Multiprocessing is now at the heart of dsp performance, although there is still work to be done on programming tools."
Meanwhile, the gap between fixed and floating point performance and cost has all but closed. Mujica said: "There has been a lot of architectural innovation. Our latest KeyStone multicore dsp platform, with eight C66x cores, offers 320GMACs and 160GFLOPs. We can't wait to see what our customers come up with."
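Those headline figures can be sanity checked with back-of-envelope arithmetic. The per-core numbers and clock rate below are assumptions drawn from published C66x material, not from the interview.

```python
# Back-of-envelope check of the quoted figures. Assumed parameters (not from
# the interview): a 1.25GHz clock, with each core doing 32 16bit MACs and
# 16 single precision floating point operations per cycle.
cores = 8
clock_ghz = 1.25
macs_per_core_per_cycle = 32
flops_per_core_per_cycle = 16

print(cores * macs_per_core_per_cycle * clock_ghz, "GMACs")    # 320.0
print(cores * flops_per_core_per_cycle * clock_ghz, "GFLOPs")  # 160.0
```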
What does the future hold for the dsp? "There are a lot of things on our 'to do' list," Frantz admitted. "The world is moving from components to systems that use components. How do we integrate more components into one circuit? We're finding the real world hasn't changed; how do you put a system together where the analogue interface runs at 3 or 5 or 10V and the output is maybe 500V and in the middle is digital circuitry running at 1V?
"We're now looking at the analogue to information converter," Frantz continued. "It will need some kinds of sensor, an analogue front end and digital front end, but will output information, not data. Embedded processors will make decisions based on that information."
Will we still be talking about dsps in another 30 years? Frantz: "My friend Al Oppenheim (an MIT professor) says there will always be interesting signals and we will always want to process them. Therefore, there will always be signal processing."
Mujica had the final word. "Signal processing is a fundamental technology and will be around forever."