I, robot?
Researchers are tackling the challenges of building truly humanoid robots.
For decades, the electronics and computing industries have been trying to create machine intelligence that could match our own. Now, teams around the world are setting themselves what could be an even harder target: completely realistic avatars.
The term 'avatar' has mostly been used to describe computer-generated images of faces or bodies. But work is now taking several different routes, both hardware- and software-based, to build truly humanoid robots. If it succeeds, we could see remarkable 'creatures' emerge that are very like ourselves, talking and working with us.
Most projects are tackling not the whole target but a single aspect of it. For example, at the University of Sussex, Owen Holland is leading Eccerobot, in which his team is aiming to build 'anthropomimetic' robots whose internal workings mimic human anatomy, allowing the robot to move in a natural, human-like way.
While standard robots may mimic the external human form, their internal mechanisms are very different from those in humans – and their characteristics reflect this. This places limitations on the kinds of interactions such robots can engage in, on the knowledge they can acquire of their environment, and therefore on the nature of their cognitive engagement with the environment.
"Our aims are to build a robot with a human like skeleton and human like elastic muscles, to find out how to control it and to see how the human-like body influences the things the robot is able to do, and the ways in which it does them," Holland explains.
Plastic bones copy biological shapes and are moved by kiteline, a tough, tendon-like polyethylene, while elastic cords mimic the bounce of muscle. A complete sensor subsystem comprises proprioceptive, visual, audio/vibration, inertial and tactile units.
Creating the ultimate realistic avatar – one that both looks like a human and behaves in a human way – involves two fundamental elements: the cognitive/AI aspect and the physical/corporeal one. In a sense, Eccerobot fuses the two, as it studies how to control a human-like body using information from human-like sensors.
"I believe that having the right behaviour is going to be more important for those trying to build the perfect avatar than they realise," Holland says. "At the moment, they seem hung up on physical appearance, which is much easier. But even the best I've seen look like lifeless animated zombies – they twitch and wave limbs in a way that lets you know they're certainly not real."
A famous term in the avatar community is the 'uncanny valley', coined decades ago by the Japanese roboticist Masahiro Mori. His hypothesis holds that as a robot is made more humanlike in its appearance and motion, the emotional response from human beings becomes increasingly positive – until a point is reached at which the response suddenly turns to revulsion: the uncanny valley. Then, as appearance and motion become still less distinguishable from a human's, the emotional response turns positive again. The term captures the idea that a robot that is 'almost human' will seem stranger than one that is clearly a robot.
"In my view, the uncanny valley is as much to do with the lack of realistic movement as with the presence of a slightly imperfect appearance," Holland says.
Actuators are a major challenge for the Eccerobot team. The actuator subsystem consists of 80 individual actuators, one for each muscle. Each actuator, in turn, consists of a screwdriver motor, a gearbox, a spindle, a piece of kiteline as a tendon and an elastic component in the form of shock cord. The screwdriver motors produce torques of around 3Nm from 6V NiCd battery packs and their direction of rotation is electrically switchable.
Eccerobot's actuation motors draw very high currents in operation, so custom electronics are required to control them. Using microcontrollers and a CAN interface, it is possible to take a distributed approach to control, with much of the sensor and actuator preprocessing performed on the microcontrollers themselves to minimise communication overhead.
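The article leaves the bus protocol unspecified, but the shape of such a distributed scheme is easy to sketch. Below is a minimal, hypothetical host-side example using the python-can library: the host broadcasts tension setpoints, and each microcontroller node runs its own local loop and returns preprocessed sensor frames. The message IDs and payload layout are invented for illustration.

```python
# Minimal sketch of the host side of a CAN-distributed actuator bus.
# Message IDs and payload layout are hypothetical; Eccerobot's
# electronics are custom and not specified in the article.
import struct
import can

ACTUATOR_BASE_ID = 0x100   # hypothetical: one CAN ID per actuator node

def send_setpoint(bus: can.Bus, actuator: int, tension: float) -> None:
    """Send a tendon-tension setpoint; the node's own control loop does the rest."""
    payload = struct.pack("<f", tension)            # 4-byte little-endian float
    msg = can.Message(arbitration_id=ACTUATOR_BASE_ID + actuator,
                      data=payload, is_extended_id=False)
    bus.send(msg)

def read_preprocessed(bus: can.Bus, timeout: float = 0.01):
    """Receive a status frame already filtered/averaged on the microcontroller."""
    msg = bus.recv(timeout)
    if msg is None:
        return None
    actuator = msg.arbitration_id - ACTUATOR_BASE_ID
    (tension,) = struct.unpack("<f", msg.data[:4])
    return actuator, tension

if __name__ == "__main__":
    with can.Bus(interface="socketcan", channel="can0") as bus:
        send_setpoint(bus, actuator=12, tension=35.0)   # newtons, say
        print(read_preprocessed(bus))
```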
Even though building a physical object is very much the point of the Eccerobot project, the team is also producing a photorealistic and functionally accurate computer-based model of the robot.
"It enables us to try out ways of controlling it without actually having to power it up," Holland explains. "With powerful computers – we have 8Tflops in our laboratory – we can go much faster than real time and run several models in parallel, 24/7, which would be impossible with a single robot."
Eccerobot is not the only anthropomimetic robot project. Another is Kojiro, under development at Tokyo University's JSK Robotics Laboratory. One of Kojiro's main innovations is a flexible spine, which can bend in different directions to let the robot arch and twist its torso.
Kojiro features small, lightweight, high-performance DC motors measuring 16mm in diameter and 66mm in length, yet delivering 40W of power. The motors pull cables attached to specific locations on the body, simulating how muscles and tendons contract and relax. About 100 of these tendon-muscle structures combine to give the robot some 60 degrees of freedom, far more than could be achieved with motorised rotary joints.
The main drawback of a musculoskeletal system is that its nonlinearities make the robot's body hard to model precisely and difficult to control. To develop control algorithms for Kojiro, the JSK team is using an iterative learning process: it first attempts small moves, then gradually tweaks the control parameters until the robot can handle more complex movements.
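The article doesn't detail the learning rule, but the start-small-and-tweak idea can be sketched as plain hill-climbing over control parameters, with the move amplitude raised only once the smaller moves work. Everything below – the error function, the 'good' parameters, the update rule – is a placeholder, not the JSK team's actual method.

```python
# Hedged sketch of iterative tuning: try a small move, measure the
# error, nudge the parameters, and only then attempt larger moves.
import random

def trial_error(params: list[float], amplitude: float) -> float:
    """Placeholder for running the move on the robot (or its model)."""
    target = [0.8, -0.3, 0.5]                       # hypothetical 'good' parameters
    return amplitude * sum((p - t) ** 2 for p, t in zip(params, target))

def tune(params: list[float], amplitude: float, iters: int = 200) -> list[float]:
    best = trial_error(params, amplitude)
    for _ in range(iters):
        candidate = [p + random.gauss(0, 0.05) for p in params]
        err = trial_error(candidate, amplitude)
        if err < best:                              # keep only improvements
            params, best = candidate, err
    return params

if __name__ == "__main__":
    params = [0.0, 0.0, 0.0]
    for amplitude in (0.1, 0.5, 1.0):               # small moves first, then larger
        params = tune(params, amplitude)
        print(amplitude, [round(p, 2) for p in params])
```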
Apart from moving, one of the most important tasks the ultimate avatar will have to perform is talking to us – that is, passing the Turing Test, so that we cannot distinguish talking to it from talking to another person. This is now mostly a software problem: the computer processing hardware of the next five years or so will probably be sufficient.
One person aiming to create avatars capable of conversation is programmer Rollo Carpenter, best known for Jabberwacky, a chatterbot that spawned the avatars George and Joan, both winners of the Loebner Prize, the annual competition for Turing Test abilities. Jabberwacky's stated aim is to 'simulate natural human chat in an interesting, entertaining and humorous manner', and it was an early attempt at creating AI through human interaction.
His latest chatterbot avatar is Cleverbot which, like Jabberwacky, is built on the principle of learning to speak directly from users, contextually, and of sharing all its data – but with newer, fuzzier coding and a cleaner interface.
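That learn-from-users principle can be caricatured in a few lines: reply with whatever a human once said in the most similar stored context, then bank the human's new line for future reuse. Cleverbot's real matching is far fuzzier and deeper than this word-overlap sketch, and the seed data here is invented.

```python
# Caricature of a learn-from-users chatterbot: the bot replies with what
# a human once said in the most similar context, then learns the human's
# new line. Not Cleverbot's actual algorithm.
memory: dict[str, str] = {"hello": "hi there"}     # seed: context -> learned reply

def similarity(a: str, b: str) -> float:
    """Crude word-overlap score (Jaccard) between two utterances."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def respond(user_line: str) -> str:
    context = max(memory, key=lambda c: similarity(c, user_line))
    return memory[context]

def learn(prev_bot_line: str, user_line: str) -> None:
    memory[prev_bot_line] = user_line              # the user's reply becomes ours

if __name__ == "__main__":
    bot_line = respond("hello there")
    print("bot:", bot_line)
    learn(bot_line, "how are you today?")          # each user teaches the bot
```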
"Cleverbot has rapidly grown past its forerunner, to the point where 1.5m visitors a month talk on average for 15 minutes each," Carpenter says. "Many stay for hours and 1% return more than 100 times a month. Conversational AI truly has now become a form of entertainment. Frequently, people become convinced that Cleverbot is not AI at all, but a live chatroom in which they are randomly paired with someone, then switched occasionally. In a sense, therefore, it is passing a lesser 'online chat' variety of a Turing Test quite often."
A new Cleverbot app has been in the App Store for the iPhone/iPod Touch for about a month and will soon appear for the iPad, followed by other devices. It introduces an avatar of a new kind, with about 100 different emotional states: images represent how Cleverbot reacted to what you said and how it feels as it replies.
Another success is George, which is being used to teach English to Russians who have no access to native speakers. The city of St Petersburg has signed up to make it available free to every schoolchild.
Such developments make Carpenter determinedly optimistic about the future for realistic avatars. "Realism sufficient to satisfy us that the machine is interesting, empathetic and enjoyable cannot be far away," he says.
As well as natural body movement, another critical piece of the lifelike avatar jigsaw is the face, and one of the world leaders in creating natural-looking, computer-generated faces is Image Metrics, a company formed in Manchester in 2000 by a group of PhD students in computer vision. Since then, Image Metrics has provided facial animation for dozens of games, including Grand Theft Auto IV. It now has an office in Santa Monica to help it focus on its key target market – Hollywood.
Perhaps its most famous creation yet is Emily, an extraordinarily accurate animation of actress Emily O'Brien. Emily was produced using a new modelling technology that enables the most minute details of a facial expression to be captured and recreated, and she is considered one of the first animations to have crossed the uncanny valley. Conventionally, animating faces has meant placing dots on a face and monitoring their movement, but Image Metrics analyses the movements at the individual pixel level, enabling extremely subtle variations to be captured, such as the way the skin creases around the eyes. Powerful hardware has also been crucial, in the form of AMD's Radeon HD 4870 X2 chip, capable of 2.4Tflops.
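Image Metrics' pipeline is proprietary, but dense optical flow gives a feel for what per-pixel motion analysis means: a displacement vector for every pixel between two frames, with no marker dots required. A sketch using OpenCV's Farneback method follows, with hypothetical frame files standing in for real footage.

```python
# Generic per-pixel motion analysis via dense optical flow. This is a
# stand-in to illustrate the idea, not Image Metrics' proprietary method.
import cv2
import numpy as np

def per_pixel_motion(frame_a_path: str, frame_b_path: str) -> np.ndarray:
    a = cv2.imread(frame_a_path, cv2.IMREAD_GRAYSCALE)
    b = cv2.imread(frame_b_path, cv2.IMREAD_GRAYSCALE)
    # Farneback dense flow: one (dx, dy) vector per pixel.
    flow = cv2.calcOpticalFlowFarneback(a, b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return flow

if __name__ == "__main__":
    flow = per_pixel_motion("frame0.png", "frame1.png")  # hypothetical files
    magnitude = np.linalg.norm(flow, axis=2)
    print("subtlest motion:", magnitude.min(), "largest:", magnitude.max())
```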
A somewhat similar project at the US National Science Foundation aims to create a realistic avatar of the NSF's former director Alexander Schwarzkopf and combine this with AI techniques in natural language understanding and learning. The aim is to model gestures as well as realistic facial animation, ultimately enabling people to interact with the avatar, which has access to data about an upcoming NSF proposal.
"If you're really going to implement AI, the only way to do it is through some sort of embodiment of a human, and that's an avatar," says Avelino Gonzalez, electrical engineering professor on the project.
If Emily and Alexander are already entering the avatar hall of fame, so should Albert Einstein – or rather, the avatar of him built by Texas-based Hanson Robotics. This was a walking humanoid, built in collaboration with the KAIST Hubo group of Korea: KAIST built the walking body, while Hanson built the head using Frubber, a flexible, rubber-like skin material of its own creation. One of its latest avatar/robot models tracks faces and sound, and perceives and mimics the user's facial expressions.
No one knows quite where avatar development will lead, but one project is aiming to build avatars that achieve something even more extraordinary – immortality. The Terasem movement runs lifenaut.com, which it calls an 'immortality social networking website'. The hope is that, by storing vast amounts of data about ourselves, we can build an avatar into which we can be 'downloaded' when death threatens.
And who said technology has no soul?