Google’s DeepMind AI system has shown an ability to adopt “highly aggressive” strategies when it senses it is about to lose, showing just how careful we need to be when designing AI.
Stephen Hawking, Elon Musk and many other experts have expressed deep concern about creating artificially intelligent machines.
While testing the willingness of DeepMind agents to cooperate with one another, Google researchers found that when an agent feels it is about to lose, it resorts to “highly aggressive” strategies to ensure that it wins.
The Google team ran 40 million turns of a computer game in which two DeepMind agents compete to gather more virtual apples than the other.
The team found that things went smoothly as long as apples remained plentiful. However, as soon as apples became scarce, the two AIs turned aggressive, using laser beams to knock each other out of the game and steal the apples.
Theoretically, if the AIs didn’t use their laser beams, they could still end up with an equal number of apples, an option that “less intelligent” DeepMind agents chose. However, when Google tested larger, more complex networks for DeepMind, sabotage, greed and aggression developed.
As Rhett Jones reports, the smaller the DeepMind networks used for the agents, the greater the likelihood of peaceful coexistence.
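The dynamic described above can be illustrated with a toy simulation. This is a hypothetical sketch, not DeepMind’s actual environment or learned policies: the rules, the `capacity` parameter (a loose stand-in for network size), and the zap-probability formula are all assumptions made for illustration.

```python
import random

def run_gathering(apples=100, capacity=1.0, turns=200, seed=0):
    """Toy version of the apple-gathering game (hypothetical rules).

    Two agents collect apples each turn. As apples grow scarce, each
    agent may fire a 'laser' that tags its rival out for a few turns.
    Higher `capacity` (standing in for a larger network) makes an agent
    act on the zap temptation more readily.
    Returns ([score_a, score_b], total_zaps).
    """
    rng = random.Random(seed)
    scores = [0, 0]
    frozen = [0, 0]   # turns each agent remains tagged out
    zaps = 0
    for _ in range(turns):
        if apples <= 0:
            break
        # 0.0 when apples are abundant, approaching 1.0 when depleted.
        scarcity = max(0.0, 1.0 - apples / 100)
        for i in (0, 1):
            if frozen[i]:
                frozen[i] -= 1
                continue
            # Zap temptation grows with scarcity, scaled by capacity.
            if rng.random() < scarcity * capacity * 0.5:
                zaps += 1
                frozen[1 - i] = 5   # rival tagged out for 5 turns
            elif apples > 0:
                apples -= 1
                scores[i] += 1
    return scores, zaps
```

Running this with a small `capacity` versus a large one reproduces the reported pattern in miniature: aggression stays rare while apples are abundant, and low-capacity agents coexist peacefully even under scarcity, while high-capacity agents rack up zaps.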
Comments on the Finding
“This model … shows that some aspects of human-like behaviour emerge as a product of the environment and learning,” said Joel Z Leibo, one of the team.
“Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action. The greed motivation reflects the temptation to take out a rival and collect all the apples oneself.”