【AI "Lie Detector": Spotting Lies Through Facial Expressions and Speech】

Researchers at the University of Rochester are using data science and ADDR, an online crowdsourcing framework, to better understand deception based on facial and verbal cues. Interactions are recorded through an online game: one player, the describer, describes an assigned image either truthfully or deceptively, while the other, the interrogator, asks a series of baseline questions unrelated to the image to establish a benchmark of truthful responses. Once a large body of video has been collected, tools such as automated facial-analysis software and unsupervised clustering are used to identify the facial features of liars, giving interrogators objective metrics to inform their judgments.

Can YOU spot the liar? Researchers develop online game to help AI crack down on racial biases by analyzing over a million faces

  • New effort aims to further understanding of lies based on facial and verbal cues
  • The ADDR (Automated Dyadic Data Recorder) framework matches people up 
  • Then, one is instructed to lie or tell the truth; exchange is recorded and analyzed
  • The work aims to minimize instances of racial and ethnic profiling by the TSA at airports

Billions of dollars and years of study have been poured into research trying to discover if someone is lying or not.

Researchers from the University of Rochester are now using data science and an online crowdsourcing framework called ADDR (Automated Dyadic Data Recorder) to further understanding of deception based on facial and verbal cues.

By playing an online game, the researchers have already collected 1.3 million frames of facial expressions from 151 pairs of individuals in just a few weeks.


'Basically, our system is like Skype on steroids,' said Tay Sen, a PhD student in the lab.

The researchers have two people sign up on Amazon Mechanical Turk.

A video then assigns one person to be the describer and the other to be the interrogator. The describer is shown an image and told to remember as many details as possible.

They are then told to either lie or tell the truth to the interrogator, who is not aware of the instructions.

The interrogator asks a series of baseline questions not related to the image, in order to establish a 'personalized model'.

The questions included 'What did you wear yesterday?' — to provoke a mental state relevant to retrieving a memory — and 'What is 14 times 4?' — to provoke a mental state relevant to analytical reasoning.

The questions provide a baseline 'normal' response, as the interviewee has no incentive to lie.
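The 'personalized model' idea can be sketched in code: use the facial-feature readings recorded during the baseline questions to estimate each person's own 'normal', then score later answers by how far they deviate from it. This is an illustrative sketch, not the study's actual method; the feature vectors and numbers below are made up.

```python
# Sketch: build a per-person baseline from facial-feature readings taken
# during the neutral questions, then score later frames against it.
# All values here are hypothetical, for illustration only.

def baseline_stats(readings):
    """Per-feature mean and population std across the baseline frames."""
    n = len(readings)
    dims = len(readings[0])
    means = [sum(r[i] for r in readings) / n for i in range(dims)]
    stds = []
    for i in range(dims):
        var = sum((r[i] - means[i]) ** 2 for r in readings) / n
        stds.append(var ** 0.5 or 1.0)  # guard against zero variance
    return means, stds

def z_scores(frame, means, stds):
    """How far a new frame deviates from the person's own baseline."""
    return [(frame[i] - means[i]) / stds[i] for i in range(len(frame))]

# Hypothetical two-feature intensities from three baseline frames:
baseline_frames = [[0.1, 0.5], [0.2, 0.4], [0.15, 0.45]]
means, stds = baseline_stats(baseline_frames)
print(z_scores([0.5, 0.45], means, stds))  # first feature deviates strongly
```

Because every interviewee gets their own baseline, the deviation scores compare a person to themselves rather than to a one-size-fits-all threshold, which is the point of establishing a 'normal' response first.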


 'A lot of times people tend to look a certain way or show some kind of facial expression when they're remembering things,' Sen said. 'And when they are given a computational question, they have another kind of facial expression.'

The entire interaction is recorded on video for further analysis.

Data science allowed the researchers to quickly analyze everything in a variety of new ways.

They used automated facial feature analysis software to identify which action units were being used in a given frame, and to assign a numerical weight to each.

An unsupervised cluster technique was then used to automatically find patterns.
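The pipeline described in the last two paragraphs — per-frame action-unit weights fed into an unsupervised clustering step — can be sketched with plain k-means. The article does not say which clustering algorithm was used, so this is an assumption, and the toy 'action-unit' vectors below are invented for illustration.

```python
# Sketch: cluster per-frame action-unit weight vectors with k-means.
# The study's actual algorithm and features are not specified here.
import random

def kmeans(points, k, iters=50, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each frame's weight vector to the nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Recompute each center as the mean of its members.
        for j, members in enumerate(clusters):
            if members:
                centers[j] = tuple(sum(col) / len(members)
                                   for col in zip(*members))
    return centers, clusters

# Toy action-unit vectors forming two well-separated groups:
frames = [(0.9, 0.1), (0.8, 0.2), (0.85, 0.15),
          (0.1, 0.9), (0.2, 0.8), (0.15, 0.85)]
centers, clusters = kmeans(frames, k=2)
print(sorted(len(c) for c in clusters))  # two groups of three
```

Run on the real data, the analogous step would surface recurring expression patterns — such as the five smile-related 'faces' the researchers report — without anyone labeling the frames in advance.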

'It told us there were basically five kinds of smile-related 'faces' that people made when responding to questions,' Sen said.

A version of the so-called Duchenne smile that extends to the muscles of the eye is most frequently associated with lying.

This is consistent with the 'Duping Delight' theory that 'when you're fooling someone, you tend to take delight in it,' Sen said.

One sign of truthfulness: honest witnesses would often contract their eye muscles without smiling at all with their mouths.

'When we went back and replayed the videos, we found that this often happened when people were trying to remember what was in an image,' Sen said. 'This showed they were concentrating and trying to recall honestly.'

The researchers plan to further examine the data, as they realize there is far more for them to learn.

Ehsan Hoque, an assistant professor of computer science at the university, would like to dive deeper into the fact that interrogators unknowingly leak information when they are being lied to.

Interrogators show more polite smiles when they know they are hearing a falsehood. In addition, an interrogator is more likely to return the smile of a lying witness than that of a truth-teller.

Looking at the interrogators' data could reveal useful information and could have implications for how TSA officers are trained.

'In the end, we still want humans to make the final decision,' Hoque says.

'But as they are interrogating, it is important to provide them with some objective metrics that they could use to further inform their decisions.'

Original article: http://www.dailymail.co.uk/sciencetech/article-5763591/Can-spot-liar-Play-online-game-AI-using-analyze-million-faces.html
