Can YOU spot the liar? Researchers develop online game to help AI crack down on racial biases by analyzing over a million faces

  • New effort aims to further understanding of lies based on facial and verbal cues
  • The ADDR (Automated Dyadic Data Recorder) framework matches people up 
  • Then, one is instructed to lie or tell the truth; exchange is recorded and analyzed
  • It hopes to minimize instances of racial and ethnic profiling by TSA officers at airports

Billions of dollars and years of study have been poured into research trying to discover if someone is lying or not.

Researchers from the University of Rochester are now using data science and an online crowdsourcing framework called ADDR (Automated Dyadic Data Recorder) to further understanding of deception based on facial and verbal cues.

By playing an online game, the researchers have already collected 1.3 million frames of facial expressions from 151 pairs of individuals in just a few weeks.


'Basically, our system is like Skype on steroids,' said Tay Sen, a PhD student in the lab.

The researchers have two people sign up on Amazon Mechanical Turk.

A video then assigns one person to be the describer and the other to be the interrogator. The describer is shown an image and told to remember as many details as possible.

They are then told to either lie or tell the truth to the interrogator, who is not aware of the instructions.

The interrogator asks a series of baseline questions not related to the image, in order to establish a 'personalized model'.

The questions included 'what did you wear yesterday?' — to provoke a mental state relevant to retrieving a memory — and 'what is 14 times 4?' — to provoke a mental state relevant to analytical thinking.

The questions provide a baseline 'normal' response, as the interviewee has no incentive to lie.


 'A lot of times people tend to look a certain way or show some kind of facial expression when they're remembering things,' Sen said. 'And when they are given a computational question, they have another kind of facial expression.'

The entire interaction is recorded on video for further analysis.

Data science allowed the researchers to quickly analyze everything in a variety of new ways.

They used automated facial feature analysis software to identify which action units were being used in a given frame, and to assign a numerical weight to each.

An unsupervised clustering technique was then used to find patterns automatically.
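The pipeline described above — per-frame vectors of action-unit weights, grouped by an unsupervised clusterer — can be sketched roughly as follows. The article does not name the software or algorithm used, so this is a toy illustration with hypothetical two-dimensional action-unit intensities and a minimal k-means implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 300 frames, each a vector of two facial action-unit
# (AU) intensities -- e.g. eye-muscle contraction and lip-corner pull --
# as a facial-analysis tool might emit. Frames are drawn around two
# imagined expression "centres" for illustration.
frames = np.vstack([
    rng.normal(loc=[0.2, 0.1], scale=0.05, size=(150, 2)),  # eyes contracted, mouth neutral
    rng.normal(loc=[0.8, 0.9], scale=0.05, size=(150, 2)),  # full smile engaging the eyes
])

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: returns (centroids, per-frame cluster labels)."""
    r = np.random.default_rng(seed)
    centroids = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each frame to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned frames.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

centroids, labels = kmeans(frames, k=2)
print(np.round(centroids, 2))
```

Each recovered centroid is an 'average expression'; in the study, the analogous clusters over real action-unit data were the five smile-related faces described below.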

'It told us there were basically five kinds of smile-related 'faces' that people made when responding to questions,' Sen said.

A version of the so-called Duchenne smile that extends to the muscles of the eye is most frequently associated with lying.

This is consistent with the 'Duping Delight' theory that 'when you're fooling someone, you tend to take delight in it,' Sen said.

In contrast, a sign of truthfulness was when honest witnesses contracted their eye muscles without smiling with their mouths at all.

'When we went back and replayed the videos, we found that this often happened when people were trying to remember what was in an image,' Sen said. 'This showed they were concentrating and trying to recall honestly.'

The researchers plan to further examine the data, as they realize there is far more for them to learn.

Ehsan Hoque, an assistant professor of computer science at the university, would like to dive deeper into the finding that interrogators unknowingly leak information when they are being lied to.

Interrogators show more polite smiles when they are hearing a falsehood. In addition, an interrogator is more likely to return the smile of a lying witness than that of a truth-teller.

Looking at the interrogators' data could reveal useful information and could have implications for how TSA officers are trained.

'In the end, we still want humans to make the final decision,' Hoque says.

'But as they are interrogating, it is important to provide them with some objective metrics that they could use to further inform their decisions.'
