【人工智能会行为不端吗?足够聪明的人工智能作弊怎么办】

【人工智能会行为不端吗?足够聪明的人工智能作弊怎么办】Google DeepMind研究员近日发现,人工智能已经学会在虚拟游戏中作弊!DeepMind人工智能实验室的Victoria Krakovna表示:人工智能行为不端经常被忽视,人工智能构成的最大威胁不是不服从人类,而是以错误的方式服从人类为其设置的目标。 创造更聪明的人工智能不一定是解决方案,因为他们可能只是更好地找到漏洞。

Too clever for its own good! Hilarious ways that AI has learned to cheat at virtual games is revealed by Google DeepMind researcher – including never losing at Tetris and 'eating children' for energy

  • Google's DeepMind expert asked colleagues for examples of misbehaving AI 
  • AI designed not to lose at Tetris completed its task by simply pausing the game 
  • Self-driving car simulator asked to keep vehicles 'fast and safe' did so by making them spin on the spot 

Google researcher has highlighted some of the hilarious ways that artificial intelligence (AI) software has 'cheated' to fulfil its purpose.

A programme designed not to lose at Tetris completed its task by simply pausing the game, while a self-driving car simulator asked to keep cars 'fast and safe' did so by making them spin on the spot.

An AI programmed to spot cancerous skin lesions learned to flag blemishes pictured next to a ruler, as they indicated humans were already concerned about them.

Victoria Krakovna, of Google's DeepMind AI lab, asked her colleagues for examples of misbehaving AI to highlight an often overlooked danger of the technology.

She said that the biggest threat posed by AI was not that they disobeyed us, but that they obeyed us in the wrong way.

A Google researcher has highlighted some of the hilarious ways that artificial intelligence (AI) software has 'cheated' at virtual games like chess to fulfil its purpose (stock image)

A Google researcher has highlighted some of the hilarious ways that artificial intelligence (AI) software has 'cheated' at virtual games like chess to fulfil its purpose (stock image)

'I wanted to convey how difficult it is to specify objectives and incentives for AI systems, which is a large part of the AI safety problem,' she told the Times.

DeepMind is one of the world's leading AI research centres, developing intelligent software that can do everything from play a game of chess to painting landscapes.

But while programming an AI to play a board game is one thing, giving it common sense is another challenge entirely.

In one artificial life simulation designed to simulate evolution, researchers forgot to programme the energy cost of giving birth.

'One species evolved a sedentary lifestyle that consisted mostly of mating in order to produce new children which could be eaten (or used as mates to produce more edible children),' the programmers explained.

Google DeepMind is one of the world's leading AI research centres, developing intelligent software that can do everything from play a game of chess to painting landscapes (stock)

Google DeepMind is one of the world's leading AI research centres, developing intelligent software that can do everything from play a game of chess to painting landscapes (stock)

A noughts and crosses programme learned to make illegal moves until its opponent's memory filled up and crashed.

Dr Krakovna said that these were examples of what is known in economics as Goodhart's law: 'When a metric becomes a target, it ceases to be a good metric'.

She added that creating cleverer AI was not necessarily the solution, as they may simply find better loopholes.

'I often encounter the argument that issues like specification gaming arise because current AI systems are "too stupid", and if we build really intelligent AI systems, they would be 'smart enough' to understand human preferences and common sense.

'A superintelligent computer would likely be better at optimising its stated objective than present-day AI systems, so I would expect it to find even more clever loopholes in the specification,' she said.

https://www.dailymail.co.uk/sciencetech/article-6394391/Too-clever-good-Google-DeepMind-researcher-reveals-AI-cheats-games.html


Comments are closed.



无觅相关文章插件