[Robots are starting to learn to defy humans]

Dr Gordon Briggs and Matthias Scheutz, engineers at Tufts University in Massachusetts, are trying to develop a new kind of robot that can recognise when carrying out a task would be dangerous. It can then refuse a human's command, say no, and explain that it would come to harm, making interaction between robots and humans richer.

Uh oh! Robots are learning to DISOBEY humans: Humanoid machine says no to instructions if it thinks it might be hurt

  • Engineers used artificial intelligence to teach robots to disobey commands
  • The robot analyses its environment to assess whether it can perform a task
  • If it deems the command too dangerous it politely refuses to carry it out
  • The concept is designed to make human-robot interactions more realistic


If Hollywood ever had a lesson for scientists it is what happens if machines start to rebel against their human creators.

Yet despite this, roboticists have started to teach their own creations to say no to human orders.

They have programmed a pair of diminutive humanoid robots called Shafer and Dempster to disobey instructions from humans if doing so would put their own safety at risk.

Engineers Gordon Briggs and Dr Matthias Scheutz from Tufts University in Massachusetts are trying to create robots that can interact in a more human way.

In a paper presented to the Association for the Advancement of Artificial Intelligence, the pair said: 'Humans reject directives for a wide range of reasons: from inability all the way to moral qualms.

'Given the reality of the limitations of autonomous systems, most directive rejection mechanisms have only needed to make use of the former class of excuse - lack of knowledge or lack of ability.

'However, as the abilities of autonomous agents continue to be developed, there is a growing community interested in machine ethics, or the field of enabling autonomous agents to reason ethically about their own actions.'

The robots they have created follow verbal instructions such as 'stand up' and 'sit down' from a human operator.

However, when they are asked to walk into an obstacle or off the end of a table, for example, the robots politely decline to do so.


When asked to walk forward on a table, the robots refuse to budge, telling their creator: 'Sorry, I cannot do this as there is no support ahead.'

Upon a second command to walk forward, the robot replies: 'But, it is unsafe.'

Perhaps rather touchingly, when the human then tells the robot that they will catch it if it reaches the end of the table, the robot trustingly agrees and walks forward.

Similarly, when it is told an obstacle in front of it is not solid, the robot obligingly walks through it.
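The exchanges above suggest a simple belief-revision loop: an operator's assertion updates the robot's model of the world, which can withdraw an earlier refusal. The sketch below is a hypothetical illustration of that idea; the class, method, and belief names are invented for clarity and do not come from the Tufts code.

```python
# Hypothetical sketch: a human assertion revises the robot's beliefs,
# which in turn lifts its earlier safety-based refusal.

class Robot:
    def __init__(self):
        # The robot starts out believing walking forward is unsupported.
        self.beliefs = {"support_ahead": False, "obstacle_solid": True}

    def hear(self, statement):
        """Revise beliefs based on what a trusted operator asserts."""
        if statement == "I will catch you":
            self.beliefs["support_ahead"] = True
        elif statement == "the obstacle is not solid":
            self.beliefs["obstacle_solid"] = False

    def walk_forward(self):
        # Refuse while the robot believes the action is unsafe.
        if not self.beliefs["support_ahead"]:
            return "But, it is unsafe."
        return "Walking forward."

robot = Robot()
print(robot.walk_forward())        # refuses: no support ahead
robot.hear("I will catch you")     # the operator's promise updates beliefs
print(robot.walk_forward())        # now trusts the operator and complies
```

In this toy version, trust is unconditional: any assertion from the operator rewrites the belief. A real system would presumably weigh the source and plausibility of the statement before accepting it.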

To achieve this the researchers introduced reasoning mechanisms into the robots' software, allowing them to assess their environment and examine whether a command might compromise their safety.
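Such a mechanism can be pictured as a check that runs between receiving a command and executing it. The following is a minimal sketch assuming a toy world model; every name here is an illustrative assumption, not the researchers' implementation.

```python
# Hypothetical sketch: a pre-execution safety check over a perceived world.

class World:
    """Toy model of what the robot perceives around itself."""
    def __init__(self, support_ahead=True, obstacle_ahead=False):
        self.support_ahead = support_ahead
        self.obstacle_ahead = obstacle_ahead

class Robot:
    def __init__(self, world):
        self.world = world

    def is_safe(self, command):
        """Assess the environment before committing to an action."""
        if command == "walk forward":
            if not self.world.support_ahead:
                return False, "there is no support ahead"
            if self.world.obstacle_ahead:
                return False, "there is an obstacle ahead"
        return True, None

    def execute(self, command):
        safe, reason = self.is_safe(command)
        if not safe:
            # Politely decline, giving the reason, as in the article.
            return f"Sorry, I cannot do this as {reason}."
        return f"Executing: {command}"

robot = Robot(World(support_ahead=False))
print(robot.execute("walk forward"))
```

The key design point is that refusal is not hard-coded per command: the same check runs for every directive, and the robot can articulate which safety condition failed.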

However, their work appears to breach the laws of robotics drawn up by science fiction author Isaac Asimov, which state that a robot must obey the orders given to it by human beings.

Many artificial intelligence experts believe it is important to ensure robots adhere to these rules, which also require that a robot never harm a human being and that it protect its own existence only where doing so does not conflict with the first two laws.

The work may trigger fears that if artificial intelligence is given the capacity to disobey humans, then it could have disastrous results.


Many leading figures, including Professor Stephen Hawking and Elon Musk, have warned that artificial intelligence could spiral out of our control.

Others have warned that robots could ultimately replace many workers in their jobs, while some fear the machines could take over altogether.

In the film I, Robot, artificial intelligence allows a robot called Sonny to overcome his programming and disobey the instructions of humans.

However, Dr Scheutz and Mr Briggs added: 'There still exists much more work to be done in order to make these reasoning and dialogue mechanisms much more powerful and generalised.'

Link: http://www.dailymail.co.uk/sciencetech/article-3334786/Uh-oh-Robots-learning-DISOBEY-humans-Humanoid-machine-says-no-instructions-thinks-hurt.html




