How creative can software be and might there be benefits beyond the arts?
The digital age – and social media in particular – has had a huge impact on the music industry. Consumers can download tracks instantly and share their preferences across a range of communities, and musicians no longer have to win a recording contract in order to make their work widely available.
This shift in consumer behaviour is affecting how music is produced. And one of the questions being asked is whether music can exist without being the product of a conscious, creative mind and, if so, what it might sound like.
Looking to answer this question, a team from Imperial College London is using a computer program influenced by evolutionary biology and the power of crowdsourcing to investigate the role of consumer selection.
The Darwin Tunes experiment is based around a genetic algorithm which maintains a population of 100 loops of music, each 8s long. Listeners audition the loops in batches of 20 and rate their 'likeability'; the algorithm then 'mates' the top ten, mingling their elements and creating two 'daughters' from each. The 20 new loops replace the rated batch as a new generation and the process continues. When the team asked users to rate loops from different generations, the volunteers consistently ranked the more evolved music as 'more appealing'.
The population is maintained by PerlGP, a grammar-based genetic programming system and, so far, Darwin Tunes has evolved through more than 2,500 generations.
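As a rough sketch of that selection scheme – not the team's PerlGP system, which evolves tree-structured genomes from a grammar – the rate-mate-replace cycle might look like the following Python, where the genome is a hypothetical parameter vector and `toy_rating` stands in for thousands of human listeners:

```python
import random

POP_SIZE, BATCH, PARENTS = 100, 20, 10   # figures quoted for the experiment
GENOME_LEN = 32                          # hypothetical: parameters for one 8s loop

def random_genome():
    return [random.random() for _ in range(GENOME_LEN)]

def mate(a, b):
    """Mingle two parents' elements (uniform crossover) with slight mutation."""
    child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]
    return [min(1.0, max(0.0, g + random.gauss(0, 0.02))) for g in child]

def step(population, rate):
    """One generation: rate a batch of 20, mate the top ten,
    and let their 20 'daughters' replace the batch."""
    idx = random.sample(range(len(population)), BATCH)
    idx.sort(key=lambda i: rate(population[i]), reverse=True)
    parents = [population[i] for i in idx[:PARENTS]]
    daughters = []
    for p in parents:                    # two 'daughters' per parent
        q = random.choice([g for g in parents if g is not p])
        daughters += [mate(p, q), mate(p, q)]
    for i, d in zip(idx, daughters):     # the new generation takes over
        population[i] = d

population = [random_genome() for _ in range(POP_SIZE)]
toy_rating = sum                         # stand-in for listeners' 'likeability' scores
for generation in range(2500):
    step(population, toy_rating)
```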
According to Matthias Mauch, one of the Imperial researchers, using genetic algorithms to create music is nothing new – but letting the technology loose on thousands of users is. "We've managed to pull together a few things that have been floating around – evolutionary algorithms for music, analysis tools from music information retrieval and evolutionary biology – and string them together to go beyond what each of these fields says about music."
The result, Mauch noted, is the evolution of music that can be shaped by user choice, although this is not necessarily what happens in reality. In the experiment, Mauch and his colleagues saw musical appeal increase rapidly and then level off. The team's theory was that the elements which make the best loops so listenable are difficult to pass on to the next generation. The study concluded that the ability to download, manipulate and distribute music via social networking has democratised the production of music and, by partitioning these selective forces, the team believes the analysis could illustrate the future dynamics of music in a digital culture.
If technology can help us understand the relationship between sound and its audience, what about musician and teacher? Using advances in physical modelling, Graham Percival, a PhD student in electrical engineering and electronics at the University of Glasgow, is teaching a computer to play the violin. And apparently it's getting quite good.
Physical modelling is a type of sound synthesis in which the waveforms of a sound are generated using equations and algorithms that simulate the physical source: for example, how external forces act upon the strings of a violin; how vibrations in the string travel to the bridge, setting the body vibrating; and how that vibration is transferred to the air, creating the sound waves we hear. "The first challenge is the physical modelling itself: do we know from physics exactly how a violin behaves? And the answer is no, but we have some fairly good approximations," said Percival. "The second problem is how do you control this thing?"
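Vivi rests on a far more detailed bowed-string model, but the flavour of physical modelling can be conveyed with the classic Karplus-Strong plucked-string algorithm, in which a delay line and an averaging filter stand in for the vibrating string (the 0.996 decay constant below is illustrative):

```python
import random
from collections import deque

def karplus_strong(frequency_hz, duration_s, sample_rate=44100):
    """Karplus-Strong synthesis: a delay line one period long, repeatedly
    averaged to mimic the energy a vibrating string loses over time."""
    period = int(sample_rate / frequency_hz)
    buf = deque(random.uniform(-1, 1) for _ in range(period))  # the 'pluck'
    out = []
    for _ in range(int(duration_s * sample_rate)):
        out.append(buf[0])
        # averaging adjacent samples acts as a low-pass filter, modelling
        # the faster decay of higher harmonics on a real string
        buf.append(0.996 * 0.5 * (buf[0] + buf[1]))
        buf.popleft()
    return out

samples = karplus_strong(440.0, 2.0)   # two seconds of a decaying A4
```

A bowed string is considerably harder, since the bow excites the string continuously through stick-slip friction – part of the reason control is, as Percival says, the second problem.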
Since professional musicians need thousands of hours of training to master their craft, Percival created a program which, he said, can be trained much like a human student. Vivi, the virtual violinist, performs intelligent control of a violin physical model, analysing the audio output and adjusting the physical inputs to the system using trained support vector machine (SVM) classifiers.
Human users classify a series of audio examples created using the physical model, thereby training the SVMs, which are then used to adjust Vivi's bowing parameters during performance. After basic training, Vivi can practise scales and pieces of music. The user can then identify parts of the audio which need improvement, and these corrections are used to retrain the SVMs and influence subsequent performances. "I'm hoping to teach the computer at the same rate that I would teach a human," said Percival, who is also an experienced violin tutor.
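A heavily simplified sketch of that control loop, assuming scikit-learn and invented feature values, labels and adjustment rule (Vivi's actual features and classes are Percival's own), might look like this:

```python
from sklearn import svm

# Hypothetical training data: features extracted from the physical model's
# audio output (say, spectral flatness and RMS level), each labelled with
# a human listener's judgement of the bowing.
features = [[0.82, 0.10], [0.31, 0.45], [0.05, 0.90]]
labels = ['scratchy', 'good', 'weak']

classifier = svm.SVC()
classifier.fit(features, labels)

def adjust_bow_force(force, audio_features, step=0.05):
    """Nudge the model's bow force according to the classifier's verdict."""
    verdict = classifier.predict([audio_features])[0]
    if verdict == 'scratchy':
        return force - step    # pressing too hard
    if verdict == 'weak':
        return force + step    # not pressing hard enough
    return force               # 'good': leave the bowing alone
```

Retraining then amounts to appending the user's corrections to the training set and calling `fit` again before the next performance.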
The motivation to develop Vivi was born out of the idea that musical creativity can be hindered by physical constraints. "One of the main reasons I'm looking at this is to allow musicians to create music, even if they're not physically able to do so," said Percival. His hope is that technology such as this could offer a way to create sound to those unable to do so through traditional means. "I view it as giving people more options," he added.
One challenge that Percival has encountered is making Vivi sound expressive. "The big difficulty in teaching a computer to play music is to get it to do things musically," he said. While software can follow a set of instructions, musicality invariably has to be added by the user.
Dr Simon Colton, a reader in computational creativity at Imperial College, believes it's quite easy to get software to generate music, poems and art. "But to get it to do so in a creative way is more difficult." He uses a 'creativity tripod' analogy as an external model for whether an audience will give creative credibility to a machine: if the computer can demonstrate the skill to produce the art, an appreciation of its context and imaginative behaviour, then it has succeeded. "If it can't do all those simultaneously, it's difficult for me to support the idea that the software is being creative independently," he added.
Dr Colton used his poetry generator as an example. If a poem is generated randomly by a computer, an audience can still read artistic meaning into it, but it wouldn't attribute this to the software. However, if the software can generate a commentary about its artistic choices, the audience is more likely to give it artistic merit, or at least the benefit of the doubt. "Our software these days should be generating commentary on everything it does," said Dr Colton.
Outside of music and art, Dr Colton suggests this technology could enable more creative computer programs that work alongside users, challenging them and inventing new ideas that aid their work. This level of creative problem solving could apply to many sectors. "The real paradigm shift will be when we expect software to be creative," he observed.
In terms of music, Dr Colton envisages a more advanced version of Genius, the feature in Apple's iTunes software which recommends music based on the user's taste. In the near term, such a system would be able to create a brand new piece of music written in the style of the artists the user enjoys. But the final version of Genius – which, Dr Colton quips, would actually be a genius – should create a truly new piece of music unlike anything the user has enjoyed previously, yet still a perfect match for their taste. "That, to me, is the difference between mere intelligence recommending other songs, down to generative music that is in a pastiche, down to creative generation of music," he said.
While computer programs may be creating new opportunities in music, new interfaces are also being developed which bridge the gap between artwork and pcb. One example is the Mute Synth, developed by artist John Richards of Dirty Electronics – a workshop ensemble where participants build their own instruments – and graphic designer Adrian Shaughnessy. The device combines sound synthesis with a sequencer/pulser and is controlled by the conductivity of the human body and by gesture.
Richards explained there was a lot of discussion about how to get the etched artwork to work in tandem with the functional circuit. "It has been interesting to see how the physicality of the Mute Synth has offered an alternative to the all-consuming MP3 and file sharing culture," he said.
Mute Synth uses a hex inverter, NAND gate and bilateral switch and, through tilting the instrument and pressing touch contacts that use the conductivity of the human body, different feedback loops are achieved. "Simple stuff in terms of electronics, but never in terms of sound," said Richards.
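This is not the Mute Synth's actual circuit, but the principle of a gate-plus-body feedback loop can be sketched numerically: an inverting gate charging a capacitor through the player's skin resistance becomes a relaxation oscillator, so a firmer touch (lower resistance) raises the pitch. All component values and thresholds below are illustrative:

```python
def gate_oscillator(duration_s, body_resistance_ohms, capacitance_f=1e-8,
                    sample_rate=44100):
    """Square wave from an inverter-with-feedback relaxation oscillator."""
    samples = []
    v, out = 0.0, 1                  # capacitor voltage (0..1), gate output
    dt = 1.0 / sample_rate
    rc = body_resistance_ohms * capacitance_f
    for _ in range(int(duration_s * sample_rate)):
        v += (out - v) * dt / rc     # capacitor charges toward the gate output
        if out and v > 0.67:         # upper Schmitt threshold: output flips low
            out = 0
        elif not out and v < 0.33:   # lower threshold: output flips high
            out = 1
        samples.append(1.0 if out else -1.0)
    return samples

# firmer touch -> lower skin resistance -> faster charging -> higher pitch
tone = gate_oscillator(1.0, body_resistance_ohms=150_000)
```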
The relationship between art and electronic components is one of the key themes that the Mute Synth explores, with Richards noting the similarities between the device's copper etching and screen printing techniques and certain aspects of conventional pcb manufacture. "Once I worked out how to design boards using cad programs and taking images and converting these to Gerber formats, a whole artistic realm opened up," he added. "Ultimately, I think I'm a bit of a closet designer and just love exploring how a circuit can be laid out and how this also appears visually."
Richards also described how he had spent a great deal of time programming in different languages to create complex generative systems for sound making, but found it simpler to use analogue electronics. He noted that using analogue and digital systems alongside each other in a hybrid system can lead to more interesting sonic results. "I'm really excited about embedded electronics and the different ranges of programmable ics," he said. "I think this will lead to more integrated design and create fully holistic devices where interface – the idea of some form of go-between or discrete input device – will become obsolete."
Instruments like the Mute Synth and the issues it evokes are, in many ways, bringing electronics to a wider audience. Could this device be seen as a way of attracting people to electronics and encouraging experimentation in design? "There is always incredible excitement from workshop participants when they are shown how electronics can be used to make sound and consequently music," said Richards. "I have been all over the world doing 'arty' electronics and occasionally come across a workshop participant who, in another life, would make a great electronic design engineer. It is all about planting that seed of creativity and exploring design in an open way."
The University of York has been using music to inspire the next generation of electronic engineers for around 20 years. Professor David Howard has been involved in several recent projects with this aim, such as engaging students with a 'Virtual Choir' at the recent UK Electronics Skills Foundation summer school. His approach to music technology involves not only teaching students about audio systems, but also immersing them fully in the electronic engineering and programming side.
Prof Howard suggests that science and the arts go hand in hand and that engineering students should be encouraged to take an arts subject further. Although many of the innovative electronic approaches to music being developed can be seen as focusing on the artistic side, the interplay between the two could yield exciting new technologies with wider application. It's also a great way to inspire the next generation to study the subject at a time when we're experiencing a shortage of engineering students. According to Prof Howard, "For me, creativity is part of engineering."