It’s a type of AI software used to gauge human emotion through a wide range of components, from cameras that track facial movements to adhesive body sensors capable of detecting subtle shifts in the skin’s electrical activity.
The software is engineered to recognise, interpret and process human experiences and emotions in order to generate appropriate, tailored responses.
From tutoring students to treating patients with neurological disorders, the possibilities for affective computing are almost limitless. However, for affective computing to reach its full potential, it must first clear an array of ethical and legal hurdles.
The origin of affective computing can be traced back to Professor Rosalind W Picard’s 1995 paper ‘Affective Computing’ (Technical Report No. 321), published by the Perceptual Computing Section of the Massachusetts Institute of Technology (MIT) Media Lab.
Picard is the founder and director of the Affective Computing Research Group at the MIT Media Lab, as well as co-founder of the affective computing start-ups Affectiva and Empatica. In the paper she outlined what affective computing is and what it could be used for, proposing ‘new applications of affective computing to computer-assisted learning, perceptual information retrieval, arts and entertainment, and human health and interaction.’
Today, much of what Picard discussed has come to fruition, transforming into tangible software and hardware, and into an industry with significant economic value.
Emotionally intelligent AI
Picard described affective computing as ‘computing that relates to, arises from, or influences emotions.’ The concept of ‘engineering emotion’ is extremely difficult and complex, relying on vast quantities of data being fed to machine-learning systems, which are trained to recognise and interpret emotion.
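Conceptually, the recognition step is a classification problem: sensor-derived features go in, a predicted emotional state comes out. The sketch below is a minimal illustration of that idea using scikit-learn; the features, labels and class names are randomly generated stand-ins, not real sensor data.

```python
# Minimal sketch: emotion recognition framed as supervised classification.
# Feature vectors (e.g. facial landmark distances, voice pitch statistics)
# and labels are random placeholders; real systems train on huge datasets.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))        # 200 samples, 8 sensor-derived features
y = rng.integers(0, 3, size=200)     # 3 classes: 0=neutral, 1=happy, 2=stressed

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

With random labels the accuracy hovers near chance, which is the point: the model is only as good as the emotional ground truth it is trained on.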
From facial expressions and body posture to subtle shifts in the body’s electrical activity, humans have countless ways of expressing their emotions. To provide the software with the most accurate data, a variety of hardware types have been developed to monitor these different aspects of emotional expression.
One core category of hardware is cameras and other scanning and imaging devices that monitor and track subtle facial movements and even micro-expressions, as well as gestures and posture.
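As a rough illustration, the first stage of any such pipeline is simply locating a face in a camera frame. The sketch below uses the Haar cascade bundled with OpenCV; the webcam index is an assumption about the local setup, and a real expression model would then analyse the detected region.

```python
# Minimal sketch: detecting a face in one camera frame, the first stage of
# a facial-expression pipeline. Uses OpenCV's bundled Haar cascade; the
# camera index (0) is an assumption about the local setup.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)            # default webcam
ok, frame = cap.read()
cap.release()

if ok:
    grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)
    # Each detection is a bounding box a downstream expression model would crop.
    print(f"faces found: {len(faces)}")
```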
Through the use of high-end audio equipment, the hardware can detect variances and textures in users’ voices that indicate certain emotions. This is particularly useful where the technology cannot see or be physically near the user, such as over the phone. Insurance companies are even experimenting with voice-call analytics to detect how truthful someone is being with their claim handlers.
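A hedged sketch of how such vocal cues might be extracted is shown below, using the librosa audio library. The file name is hypothetical and the feature choices are illustrative only, not any insurer’s actual method.

```python
# Minimal sketch: extracting prosodic features often used as emotional cues
# in speech. 'call.wav' is a hypothetical recording; the features here are
# illustrative choices, not a production system's pipeline.
import librosa
import numpy as np

y, sr = librosa.load("call.wav", sr=None)           # hypothetical audio file

f0 = librosa.yin(y, fmin=80, fmax=400, sr=sr)       # pitch track (Hz)
rms = librosa.feature.rms(y=y)[0]                   # loudness per frame
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # timbral 'texture'

# Summary statistics like these could feed a classifier like the one above.
print(f"mean pitch: {np.nanmean(f0):.1f} Hz, loudness variance: {rms.var():.4f}")
```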
Other types of hardware include adhesive body sensors that record electroencephalogram (EEG) signals and ‘galvanic skin response’, monitoring the intensity of emotional change. Used in conjunction with other hardware types, the technology would be able to tell not only what you’re feeling but also how intensely.
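The sketch below illustrates one plausible way to estimate intensity from a galvanic skin response trace: smooth the signal, then treat the prominence of each peak as a proxy for how strong the response was. The synthetic signal, sample rate and thresholds are all assumptions for illustration.

```python
# Minimal sketch: estimating the intensity of emotional responses from a
# galvanic skin response (GSR) trace. The synthetic signal and thresholds
# are illustrative assumptions, not clinical values.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 32                                   # assumed sensor sample rate (Hz)
rng = np.random.default_rng(1)
t = np.arange(0, 60, 1 / fs)
gsr = 2 + 0.5 * np.exp(-((t - 30) ** 2) / 8) + 0.05 * rng.normal(size=t.size)

b, a = butter(2, 1.0, btype="low", fs=fs)  # low-pass filter to remove noise
smooth = filtfilt(b, a, gsr)

peaks, props = find_peaks(smooth, prominence=0.2)
# Peak prominence serves as a crude proxy for the intensity of each response.
for p, prom in zip(peaks, props["prominences"]):
    print(f"response at {t[p]:.1f}s, intensity {prom:.2f}")
```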
Significant advancements have also been made in the virtual reality (VR) space. This includes the development of VR gear such as head-mounted displays (HMDs) that capture physiological signals, which the system analyses and responds to in order to create a customised experience for the user.
Furthermore, ‘haptic’ technology can deliver specific stimuli, such as vibrations, to produce physical effects in virtual settings. Haptics have been used for years in gaming control pads, and they are now being used in conjunction with VR and HMDs to further enhance the realism of simulated experiences.
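As a toy illustration of the control side, the sketch below maps an estimated arousal level to a vibration command. The HapticDevice class and its vibrate method are hypothetical stand-ins for whatever SDK a given controller or HMD actually exposes.

```python
# Minimal sketch: mapping an estimated arousal level to a haptic vibration
# command. 'HapticDevice' and 'vibrate' are hypothetical stand-ins for a
# real controller SDK.
class HapticDevice:
    def vibrate(self, amplitude: float, duration_ms: int) -> None:
        print(f"vibrate(amplitude={amplitude:.2f}, duration_ms={duration_ms})")

def arousal_to_haptics(arousal: float, device: HapticDevice) -> None:
    """Scale vibration with arousal in [0, 1]; calmer states get gentler feedback."""
    level = max(0.0, min(1.0, arousal))
    device.vibrate(amplitude=0.2 + 0.8 * level, duration_ms=int(50 + 150 * level))

arousal_to_haptics(0.7, HapticDevice())
```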
The data produced by this hardware is processed by an array of sophisticated AI and machine-learning software applications. These programs run the complex algorithms that enable the systems they are part of to interpret and act on users’ emotional cues.
Real-life applications
For Slawomir J Nasuto, Professor of Cybernetics at the University of Reading, affective computing could be integrated with existing public sector infrastructure to fulfil a social purpose. For example, Nasuto suggests that schools could begin introducing some form of computerised tutoring, whereby technology is used to monitor the mental state of pupils. In response to emotional reactions, teaching methods and materials could be tailored to maximise learning potential.
Nasuto and his colleagues have been developing affective computing technology for use in care homes as a form of music therapy. The team are currently working on a brain-computer interface for affective-state modulation with music: a system designed to recognise a subject’s emotional state and modify a music stream accordingly, which would then play from a sound source. Nasuto believes this could be a form of therapy in its own right or be used in conjunction with other therapies.
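While the team’s actual system is far more sophisticated, the general closed-loop idea can be sketched as follows: estimate the listener’s affective state, then nudge a musical parameter such as tempo toward a calmer target. Everything in the sketch below is a hypothetical placeholder, not the University of Reading design.

```python
# Minimal sketch of a closed affective loop: estimate the listener's state,
# then adjust the music stream toward a calmer target. The estimator and
# 'player' here are hypothetical placeholders.
def estimate_arousal() -> float:
    """Stand-in for a classifier over EEG/physiological features, in [0, 1]."""
    return 0.8  # e.g. a highly stressed listener

def modulate(tempo_bpm: float, arousal: float, target: float = 0.3) -> float:
    """Lower the tempo while measured arousal exceeds the target state."""
    error = arousal - target
    return max(50.0, tempo_bpm - 10.0 * error)  # gentle proportional adjustment

tempo = 120.0
for step in range(3):                # each step = one re-estimation cycle
    tempo = modulate(tempo, estimate_arousal())
    print(f"step {step}: play stream at {tempo:.0f} bpm")
```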
Nasuto has also explored how a similar system could be deployed in intensive-care wards. Patients undergoing operations, especially after a traumatic event, can be highly stressed and face potential cognitive impairment post-operation. Nasuto believes that non-pharmacological interventions such as music may help to reduce a patient’s anxiety and may even enable clinicians to lower the doses of medication the patient receives.
Education and healthcare aren’t the only industries to adopt this type of technology. Companies such as Coca-Cola and LG have partnered with Realeyes, which operates technology used to measure, optimise and compare the effectiveness of advertising content. Realeyes is one of the notable innovators to have developed technology that can monitor consenting consumers’ emotions and attention levels during a campaign. The results are then fed back to the companies, which can use them to compare multiple assets instantly or benchmark them against previous campaigns.
The ethical dilemma
Whilst this technology is currently being tested on consenting audiences, Nasuto believes it will eventually be implemented in everyday marketing. It’s no stretch of the imagination to envisage that one day TV sets may have a camera and a microphone that pick up reactions to shows and commercials, to be monitored by the media industry. However, this could be problematic for the retail industry because, unlike education and healthcare, it would involve capturing personal data for commercial use.
With laws clamping down on data privacy issues, retail companies will struggle to gain not only social approval but also the legal right to deploy affective computing in marketing or advertising campaigns.
Another ethical challenge facing the affective computing industry is apprehension around engineering human emotion. Whilst Picard recognised the problems with emotionless computers, she also warned that computers could be capable of too much emotion.
“Without emotion, computers are not likely to attain creative and intelligent behaviour, but with too much emotion, we, the maker, may be eliminated by our creation.
“I have argued for a wide range of benefits if we build computers that recognise and express affect. The challenge in building computers that not only recognise and express affect, but which have emotion and use it in making decisions, is a challenge not merely of balance, but of wisdom and spirit.
“It is a direction into which we should proceed only with the utmost respect for humans, their thoughts, feelings, and freedom.”
Author details: Richard Johnson is Partner and European Patent Attorney at Mewburn Ellis