The rise of the robot interrogator: Experts say AIs will soon understand our emotions - and could do everything from giving therapy to quizzing terrorists
- Artificial intelligence (AI) has become increasingly good at reading emotion
- AI can now recognise faces, speech and even turn sketches into photos
- AI may be able to match humans in recognising emotions in a few decades
- An emotionally intelligent AI has potential benefits, be it to give someone a companion or to help us perform certain tasks, ranging from criminal interrogation to talking therapy
How would you feel about getting therapy from a robot?
Emotionally intelligent machines may not be as far away as they seem.
Over the last few decades, artificial intelligence (AI) has become increasingly good at reading emotional reactions in humans.
But reading is not the same as understanding.
If AI cannot experience emotions itself, can it ever truly understand us?
And, if not, is there a risk that we ascribe robots properties they don’t have?
The latest generation of AIs has come about thanks to an increase in the data available for computers to learn from, as well as improved processing power.
These machines are increasingly competitive in tasks that have always been perceived as human.
AI can now, among other things, recognise faces, turn face sketches into photos, recognise speech and play Go.
Recently, researchers have developed an AI that is able to tell whether a person is a criminal just by looking at their facial features.
The system was evaluated using a database of Chinese ID photos, and the results are jaw-dropping.
The AI mistakenly categorised innocents as criminals in only around 6% of the cases, while it was able to successfully identify around 83% of the criminals.
This leads to a staggering overall accuracy of almost 90%.
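The headline figure can be reproduced with simple confusion-matrix arithmetic. The sketch below is an illustration only: it takes the reported error rates at face value and assumes, hypothetically, that the test set contained equal numbers of criminals and non-criminals (a different class balance would shift the overall figure).

```python
# Reported error rates from the study
false_positive_rate = 0.06   # innocents wrongly flagged as criminals
true_positive_rate = 0.83    # criminals correctly identified

# Per-class accuracy
accuracy_non_criminal = 1 - false_positive_rate  # 0.94
accuracy_criminal = true_positive_rate           # 0.83

# Overall accuracy, assuming (hypothetically) balanced classes
overall = (accuracy_non_criminal + accuracy_criminal) / 2
print(f"{overall:.1%}")  # → 88.5%
```

This shows how a 6% false-positive rate and an 83% detection rate combine into an overall accuracy of "almost 90%" under a balanced test set.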
AI IN THE WORKPLACE
Starmind is an artificial intelligence software for the workplace, designed in Switzerland.
The AI assistant uses machine learning to answer employees' questions, working out who is best placed to provide an answer.
The algorithm understands and interprets questions before searching through previous interactions and building interconnected maps, which become increasingly sophisticated the more queries it is supplied with.
A number of major companies are already using the system, including finance giant UBS and big pharma company Bayer.
The system is based on an approach called “deep learning”, which has been successful in perceptive tasks such as face recognition.
Here, deep learning combined with a “face rotation model” allows the AI to verify whether two facial photos represent the same individual even if the lighting or angle changes between the photos.
Deep learning builds a “neural network”, loosely modelled on the human brain.
This is composed of hundreds of thousands of neurons organised in different layers.
Each layer transforms the input, for example a facial image, into a higher level of abstraction, such as a set of edges at certain orientations and locations.
This automatically emphasises the features that are most relevant to performing a given task.
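As a rough illustration of the layered transformation described above, the following sketch stacks a few layers, each a linear transform followed by a non-linearity. The names, layer sizes, and random weights are all placeholders: a real network like the one in the study learns its weights from data rather than drawing them at random.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, n_out):
    """One layer: linear transform + ReLU non-linearity.

    Weights are random placeholders; a trained network would
    have learned them so the outputs emphasise task-relevant
    features.
    """
    w = rng.standard_normal((x.shape[0], n_out))
    return np.maximum(0, w.T @ x)

pixels = rng.random(64)        # stand-in for a tiny flattened face image
edges = layer(pixels, 32)      # lower layer: edge-like features
parts = layer(edges, 16)       # middle layer: combinations of edges
face_code = layer(parts, 8)    # top layer: compact face representation
print(face_code.shape)         # (8,)
```

Each layer maps its input to a smaller, more abstract representation, mirroring the progression from raw pixels to oriented edges to higher-level facial features.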
Given the success of deep learning, it is not surprising that artificial neural networks can distinguish criminals from non-criminals – if there really are facial features that can discriminate between them.
The research suggests there are three.
One is the angle between the tip of the nose and the corners of the mouth, which was on average 19.6% smaller for criminals.
COULD YOU FALL IN LOVE WITH A ROBOT?
A recent survey found 21 per cent of British people would have sex with a droid, and one in three would go on a date.
The survey was done by VoucherCodesPro, which asked 2,816 sexually active Brits aged 18 and over which activities they would carry out with a cyborg.
Researchers asked those participants who said they would have sex with a robot why they would do it.
Seventy two per cent said they thought the robots 'would be very good at it' while 28 per cent said it would be a new experience.
The upper lip curvature was also on average 23.4% larger for criminals while the distance between the inner corners of the eyes was on average 5.6% narrower.
At first glance, this analysis seems to suggest that outdated views that criminals can be identified by physical attributes are not entirely wrong.
However, it may not be the full story.
It is interesting that two of the most relevant features are related to the lips, which are our most expressive facial features.
ID photos such as the ones used in the study are required to have neutral facial expression, but it could be that the AI managed to find hidden emotions in those photos.
These may be so minor that humans might have struggled to notice them.
It is difficult to resist the temptation to look at the sample photos displayed in the paper, which is yet to be peer-reviewed.
Indeed, a careful look reveals a slight smile in the photos of non-criminals: see for yourself.
But only a few sample photos are available so we cannot generalise our conclusions to the whole database.
This would not be the first time that a computer was able to recognise human emotions.
GOOGLE SETS UP AI ETHICS BOARD TO CURB THE RISE OF THE ROBOTS
Google has set up an ethics board to oversee its work in artificial intelligence.
The search giant has recently bought several robotics companies, along with DeepMind, a British firm creating software that tries to help computers think like humans.
One of its founders warned artificial intelligence is 'number one risk for this century,' and believes it could play a part in human extinction.
'Eventually, I think human extinction will probably occur, and technology will likely play a part in this,' DeepMind's Shane Legg said in a recent interview.
Among all forms of technology that could wipe out the human species, he singled out artificial intelligence, or AI, as the 'number 1 risk for this century.'
The ethics board, revealed by web site The Information, is to ensure the projects are not abused.
Neuroscientist Demis Hassabis, 37, founded DeepMind two years ago with the aim of trying to help computers think like humans.
The field of “affective computing” has been around for several years.
It is argued that, if we are to comfortably live and interact with robots, these machines should be able to understand and appropriately react to human emotions.
There is much work in the area, and the possibilities are vast.
For example, researchers have used facial analysis to spot struggling students in computer tutoring sessions.
The AI was trained to recognise different levels of engagement and frustration, so that the system could know when the students were finding the work too easy or too difficult.
This technology could be useful to improve the learning experience in online platforms.
AI has also been used to detect emotions from the sound of our voice, by a company called BeyondVerbal.
Its software analyses voice modulation and seeks specific patterns in the way people talk.
The company claims to be able to correctly identify emotions with 80% accuracy.
In the future, this type of technology might, for instance, help autistic individuals to identify emotions.
Sony is even trying to develop a robot able to form emotional bonds with people.
There is not much information about how they intend to achieve that, or what exactly the robot will do.
However, they mention that they seek to “integrate hardware and services to provide emotionally compelling experiences”.
The Amazon Echo's smart assistant Alexa is capable of voice interaction, music playback, making to-do lists, setting alarms, streaming podcasts and can be set up to control other smart devices
An emotionally intelligent AI has several potential benefits, be it to give someone a companion or to help us perform certain tasks, ranging from criminal interrogation to talking therapy.
But there are also ethical problems and risks involved.
Is it right to let a patient with dementia rely on an AI companion and believe it has an emotional life when it doesn’t?
And can you convict a person based on an AI that classifies them as guilty? Clearly not.
Instead, once a system like this is further improved and fully evaluated, a less harmful and potentially helpful use might be to trigger further checks on individuals considered “suspicious” by the AI.
So what should we expect from AI going forward?
Subjective topics such as emotions and sentiment are still difficult for AI to learn, partly because the AI may not have access to enough good data to analyse them objectively.
For instance, could AI ever understand sarcasm?
A given sentence may be sarcastic when spoken in one context but not in another.
Yet the amount of data and processing power continues to grow.
So, with a few exceptions, AI may well be able to match humans in recognising different types of emotions in the next few decades.
But whether an AI could ever experience emotions is a controversial subject.
Even if they could, there may well be emotions they could never experience, making it difficult for them to ever truly understand us.
Leandro Minku, Lecturer in Computer Science, University of Leicester
This article was originally published on The Conversation. Read the original article.