
Google tested its DeepMind artificial intelligence in the "prisoner's dilemma"

It seems likely that artificial intelligence (AI) will be the harbinger of the next technological revolution. If AI develops to the point where it can learn, think and even "feel" without any human involvement, everything we know about the world will change almost overnight. An era of genuinely intelligent machines will have arrived.

DeepMind

That's why it's so interesting to keep track of the major milestones in AI development happening today, including the neural networks built by Google's DeepMind. These networks have already beaten human players at games, and a new Google study examines whether such an AI prefers aggressive or cooperative behavior.

The Google team created two relatively simple scenarios to test whether neural networks will work together or start destroying each other when resources run short.

Collection of resources

In the first scenario, called Gathering, two versions of DeepMind, red and blue, were tasked with harvesting green apples inside a confined space. But the researchers were interested in more than just who would gather the most. Both versions of DeepMind were armed with lasers they could fire at their opponent at any time to temporarily knock it out of the game. These conditions allowed two main outcomes: one version of DeepMind could repeatedly disable the other and collect all the apples, or the two could let each other gather roughly equal shares.
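
To make the incentive structure concrete, here is a minimal toy simulation, assuming a hand-written policy in which an agent zaps its rival whenever apples run low. The names, numbers and the zapping rule are invented for illustration; this is a sketch of the game's dynamics, not DeepMind's actual learned agents:

# Toy sketch of a Gathering-style game (hypothetical, not DeepMind's code).
# Two agents each step either gather an apple or "zap" the rival, freezing
# it for a few steps while apples slowly regrow.
import random

def run_gathering(respawn_rate, steps=1000, zap_freeze=5, cap=10):
    apples = cap                  # apples currently on the field
    score = [0, 0]                # apples collected by agent 0 and 1
    frozen = [0, 0]               # remaining freeze steps per agent
    zaps = 0
    for _ in range(steps):
        # Up to 3 apples regrow each step, more often when respawn_rate is high.
        apples = min(cap, apples + sum(random.random() < respawn_rate for _ in range(3)))
        for me, other in ((0, 1), (1, 0)):
            if frozen[me]:
                frozen[me] -= 1
                continue
            # Assumed policy: zap the rival when apples are scarce, else gather.
            if apples < 3 and not frozen[other]:
                frozen[other] = zap_freeze
                zaps += 1
            elif apples > 0:
                apples -= 1
                score[me] += 1
    return score, zaps

for rate in (0.9, 0.2):           # abundant vs. scarce regrowth
    score, zaps = run_gathering(rate)
    print(f"respawn={rate}: scores={score}, zaps fired={zaps}")

Under these assumptions, abundant regrowth leaves little reason to zap, while scarcity makes attacks frequent, mirroring the pattern the researchers reported.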

After running the simulation at least a thousand times, the Google researchers found that the DeepMind agents remained peaceful and cooperative as long as plenty of apples were left in the enclosed space. But as resources dwindled, the red and blue versions began attacking and disabling each other. The situation closely resembles the real life of most animals, including humans.

More importantly, the smaller and less "intelligent" neural networks consistently preferred cooperation, while the larger, more complex networks tended toward betrayal and selfishness across the series of experiments.

Hunting the "prey"

In the second scenario, called Wolfpack, the red and blue versions were tasked with tracking down an elusive "prey". Each could try to catch it alone, but it was more rewarding for them to do it together: it's much easier to corner prey when hunting in a pair.
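
As a rough back-of-the-envelope illustration of why pairing up pays, compare the expected reward of hunting alone with hunting as a pair. The capture probabilities and reward below are invented for the example, not figures from the study:

# Hypothetical payoff sketch for a Wolfpack-style hunt (numbers invented).
# A lone hunter rarely corners the prey; paired hunters succeed far more
# often, and each hunter present at the capture receives the reward.
P_SOLO, P_PAIR = 0.1, 0.6        # assumed capture probabilities
REWARD = 10                      # assumed reward per hunter at capture

solo_value = P_SOLO * REWARD     # expected reward hunting alone
pair_value = P_PAIR * REWARD     # expected reward per hunter in a pair

print(f"expected reward alone:   {solo_value:.1f}")
print(f"expected reward in pair: {pair_value:.1f}")
# Under these assumptions cooperation dominates: 6.0 > 1.0 per hunter.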

Although results were mixed for the smaller networks, the larger versions quickly realized that in this situation cooperation, rather than competition, was the more profitable strategy.

"The prisoner's dilemma"

So what do these two simple variants of the prisoner's dilemma show us? DeepMind learns that cooperation is best when a target must be tracked down together, but when resources are scarce, defection works well.
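
For readers who haven't met the game before, the classic prisoner's dilemma payoff matrix (standard textbook values, not numbers from the DeepMind study) captures exactly this tension. The short sketch below shows that defecting is the best response to either opponent move, even though mutual cooperation pays more than mutual defection:

# Classic prisoner's dilemma payoffs (textbook values, not from the study).
# Each entry maps (my move, opponent's move) -> my payoff.
PAYOFF = {
    ("cooperate", "cooperate"): 3,   # mutual cooperation
    ("cooperate", "defect"):    0,   # I am betrayed
    ("defect",    "cooperate"): 5,   # I betray
    ("defect",    "defect"):    1,   # mutual defection
}

def best_response(opponent_move):
    """Return the move that maximizes my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"), key=lambda m: PAYOFF[(m, opponent_move)])

for opp in ("cooperate", "defect"):
    print(f"if opponent {opp}s, best response is to {best_response(opp)}")
# Defection wins either way, even though (3, 3) beats (1, 1): that is the
# dilemma the Gathering and Wolfpack scenarios probe.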

Perhaps the most unsettling thing about these results is how closely the "instincts" of artificial intelligence resemble human ones, and we know all too well where those sometimes lead.
