#Tech Headlines# [Neural networks boost AI: they will soon know what you are seeing and thinking]

Using neural network techniques, scientists have revealed a new way to decode brain activity: they found that brain activity can be translated into patterns of simulated neurons, allowing an AI to decode patterns in the brain and predict visual features from fMRI scans. Experiments are now under way to predict arbitrary object categories; if target objects can one day be identified with high probability, this will help us better understand human consciousness and let AI see what you are thinking about and looking at. http://www.looooker.com/?p=45109

The AI that can see into your imagination: Scientists reveal neural network-based technique to decode brain activity

  • The new method allows AI to decode patterns in the brain to predict objects
  • Found brain activity could be translated into patterns of simulated neurons
  • These patterns could then be used to predict visual features from fMRI scans

Scientists in Japan have developed an AI that can decode patterns in the brain to predict what a person is seeing or imagining.

In a new study, researchers used signal patterns derived from a deep neural network to predict visual features from fMRI scans.

Their ‘decoder’ was able to identify objects with a high degree of accuracy, and the researchers say the breakthrough could pave the way for more advanced ‘brain-machine interfaces.’

In a new study, researchers used signal patterns derived from a deep neural network to predict visual features from fMRI scans. Their ‘decoder’ was able to identify objects with a high degree of accuracy. An artist's impression is pictured

HOW IT WORKS

In the new approach, the researchers trained decoders to predict arbitrary object categories based on human brain activity.

Subjects were shown natural images from the online image database ImageNet, spanning 150 categories.

Then, the trained decoders were used to predict, from the brain scans, the visual features of objects, even objects that were not used in training.

The decoder took the neural network patterns and compared them with image data from a large database, according to the researchers.

This allowed it to identify objects with a high degree of accuracy.
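The training step described above can be sketched in code. This is a minimal illustration with synthetic data, not the authors' implementation: the paper's decoders are linear models fit from fMRI voxel patterns to DNN feature values, and ridge regression is used here as a simple stand-in for whichever linear method they used. All dimensions and arrays are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 500 fMRI voxels, 100 DNN feature units, 120 training trials.
n_voxels, n_features, n_trials = 500, 100, 120

# Synthetic stand-ins for the measured voxel responses (X) and the DNN
# features of the images the subject viewed (Y); the real data are not used here.
X = rng.standard_normal((n_trials, n_voxels))
W_true = rng.standard_normal((n_voxels, n_features))
Y = X @ W_true + 0.1 * rng.standard_normal((n_trials, n_features))

# Ridge regression in closed form: one linear decoder per DNN feature unit.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Y)

# Decode a DNN feature vector from a new brain activity pattern.
x_new = rng.standard_normal(n_voxels)
y_pred = x_new @ W
print(y_pred.shape)  # (100,)
```

The decoded vector `y_pred` is what gets compared against the feature vectors of candidate images or categories in the identification step.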

‘When we gaze at an object, our brains process these patterns hierarchically, starting with the simplest and progressing to more complex features,’ said team leader Yukiyasu Kamitani, of Kyoto University.

‘The AI we used works on the same principle.

‘Named "Deep Neural Network," or DNN, it was trained by a group now at Google.’

The researchers from Kyoto University built on the idea that a set of hierarchically-processed features can be used to determine an object category, such as ‘turtle’ or ‘leopard.’

Such category names allow computers to recognize the objects in an image, the researchers explain in a paper published to Nature Communications.


When shown the same image, the researchers found that the brain activity patterns from the human subject could be translated into patterns of simulated neurons in the neural network.

This could then be used to predict the objects.

‘We tested whether a DNN signal pattern decoded from brain activity can be used to identify seen or imagined objects from arbitrary categories,’ says Kamitani.

‘The decoder takes neural network patterns and compares these with image data from a large database.

The researchers from Kyoto University built on the idea that a set of hierarchically-processed features can be used to determine an object category, such as ‘turtle’ or ‘leopard.’ Such category names allow computers to recognize the objects in an image, the researchers explain

‘Sure enough, the decoder could identify target objects with high probability.’
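The identification step Kamitani describes, matching a decoded feature pattern against image data from a large database, can be sketched as follows. This is a hedged, synthetic-data illustration: the category names, feature dimensions, and the use of Pearson correlation as the similarity measure are assumptions for the example, not details confirmed by the article. Because candidate categories only need a feature vector, this scheme can cover categories the decoder was never trained on.

```python
import numpy as np

rng = np.random.default_rng(1)
n_features = 100

# Hypothetical category-average DNN features computed from a large image
# database, including categories never used to train the decoder.
category_features = {c: rng.standard_normal(n_features)
                     for c in ["turtle", "leopard", "airplane", "goldfish"]}

def identify(decoded, candidates):
    """Return the candidate category whose feature vector best matches
    the decoded features, using Pearson correlation as the similarity."""
    def corr(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(candidates, key=lambda c: corr(decoded, candidates[c]))

# A decoded pattern that is a noisy copy of the 'turtle' features should
# correlate most strongly with its own category.
decoded = category_features["turtle"] + 0.3 * rng.standard_normal(n_features)
print(identify(decoded, category_features))  # turtle
```

Identification succeeds here because correlation with the true category stays high despite the noise, while correlations with unrelated categories hover near zero.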

The experiment showed that the features of seen objects, calculated by the computational models, can be predicted from multiple brain areas, they explain in the paper.

And, the decoders could also be used to predict imagined objects.

The researchers also found that lower and higher visual areas in the brain were better at decoding the corresponding layers of the neural network.

They’re now hoping to refine the technique to improve the image identification accuracy.

Kamitani says: ‘Bringing AI research and brain science closer together could open the door to new brain-machine interfaces, perhaps even bringing us closer to understanding consciousness itself.’

Original article:

http://www.dailymail.co.uk/sciencetech/article-4559868/New-AI-decode-brain-activity-identify-objects.html
