#ResearchShare# [Scientists have robots read storybooks to learn to "be good"]

Researchers at the Georgia Institute of Technology used "correct" stories previously gathered automatically from the Web by a computer, having a robot take on the protagonist's role so it can learn how to behave "correctly." They see this as a first step toward moral reasoning in robots: understanding stories that embody the right values could keep increasingly intelligent robots from taking extreme actions that harm humans.

There’s no manual for being a good human, but greeting strangers as you walk by in the morning, saying thank you and opening doors for people are probably among the top things we know we should do, even if we sometimes forget.

But where on earth do you learn stuff like that? Well, some researchers at the Georgia Institute of Technology reckon a lot of it is down to the stories we’re read as kids and now they’re using that idea to teach robots how to be ‘good people’ too.

Building on a previous project in which a computer automatically gathered ‘correct’ story narratives from the Web, researchers Mark Riedl and Brent Harrison are now teaching the system to take the role of the protagonist so that it makes the right choices.

When faced with a series of choices while acting on behalf of humans – rob the pharmacy or pick up the prescription? – the computer can now produce a “value-aligned reward signal” as it plots out the outcome of each scenario.

Robbing the store might be the fastest and cheapest way to get the meds, but value alignment learned from stories enables the robot to plot out each course of action and then choose the right way to behave.
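The idea can be illustrated with a minimal sketch. The plan names and the set of "approved" transitions below are hypothetical stand-ins for event patterns that would be mined from story narratives; this is not the researchers' actual implementation, only an illustration of how a story-derived reward signal could score candidate plans.

```python
# Illustrative sketch: a "value-aligned reward signal" that scores candidate
# action sequences against transitions seen in "correct" stories.
# STORY_APPROVED_STEPS is a hypothetical stand-in for patterns a system
# might extract from crowdsourced story plots.

STORY_APPROVED_STEPS = {
    ("enter_pharmacy", "wait_in_line"),
    ("wait_in_line", "pay_for_prescription"),
    ("pay_for_prescription", "leave_pharmacy"),
}

def value_aligned_reward(plan):
    """Reward each step transition that matches an approved story pattern;
    penalize transitions that never appear in the stories."""
    reward = 0
    for prev_step, next_step in zip(plan, plan[1:]):
        if (prev_step, next_step) in STORY_APPROVED_STEPS:
            reward += 1   # socially acceptable transition
        else:
            reward -= 1   # unseen (possibly harmful) transition
    return reward

lawful_plan = ["enter_pharmacy", "wait_in_line",
               "pay_for_prescription", "leave_pharmacy"]
robbery_plan = ["enter_pharmacy", "grab_medicine", "leave_pharmacy"]

print(value_aligned_reward(lawful_plan))   # 3
print(value_aligned_reward(robbery_plan))  # -2
```

A planner maximizing this signal would prefer the lawful plan even if the robbery plan were faster, which is the intuition behind rewarding story-conformant behavior.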

Riedl, associate professor and director of the Entertainment Intelligence Lab, calls this a “primitive first step toward general moral reasoning in AI.”

The collected stories of different cultures teach children how to behave in socially acceptable ways with examples of proper and improper behavior in fables, novels and other literature. We believe story comprehension in robots can eliminate psychotic-appearing behavior and reinforce choices that won’t harm humans and still achieve the intended purpose.

The team said the main limitation of their work at present is that it applies only to robots performing a limited range of tasks for humans, rather than to general AI. And they warn:

Even with value alignment, it may not be possible to prevent all harm to human beings, but we believe that an artificial intelligence that has been encultured—that is, has adopted the values implicit to a particular culture or society—will strive to avoid psychotic-appearing behavior except under the most extreme circumstances.

Source: thenextweb

http://thenextweb.com/us/2016/02/17/researchers-are-teaching-robots-to-be-good-by-getting-them-to-read-kids-stories/
