Last year, Google’s DeepMind created a program in which AI agents compete for virtual apples. The agents began zapping each other with lasers as the apples became more scarce, and the more capable agents zapped more often. The World Economic Forum then posted a video of the experiment on its Facebook page.
This led many to believe it was proof that AI can, and will, develop greed and selfishness as it grows more intelligent: that AI agents will betray each other and, eventually, their creators. Or will they?
There is a big problem with such an inference. DeepMind specializes in deep learning algorithms, which by design take inspiration from the human brain. So these agents did precisely what we would do: grab resources as they become scarce, with the smarter agents wanting it all for themselves. Sound familiar? Read human history…
In fact, DeepMind went on to develop a second game to test predator-and-prey behavior, and the results again mirror human behavior: given strong enough survival-based incentives, the predators learn to work together to trap and share the prey.
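The incentive logic behind that result is simple enough to sketch. The toy calculation below is not DeepMind’s actual environment; the capture probabilities and reward values are made up purely to illustrate why a shared reward for a joint capture can make cooperation the more profitable choice for a self-interested agent.

```python
# Toy sketch of the incentive behind cooperative hunting.
# NOT DeepMind's environment: the probabilities and rewards below
# are hypothetical, chosen only to show how a group-capture bonus
# can make cooperation pay better than going it alone.

def expected_reward(p_capture: float, reward: float, share: int = 1) -> float:
    """Expected payoff per hunt for a single predator."""
    return p_capture * reward / share

# Hypothetical numbers: hunting alone rarely succeeds; hunting as a
# pair succeeds more often and earns a larger reward, split two ways.
solo = expected_reward(p_capture=0.10, reward=1.0, share=1)
pair = expected_reward(p_capture=0.60, reward=4.0, share=2)

print(f"expected reward hunting alone:   {solo:.2f}")  # 0.10
print(f"expected reward hunting as pair: {pair:.2f}")  # 1.20
# Once the shared payoff of a joint capture beats the lone payoff,
# reward-maximising agents learn to coordinate rather than compete.
```

Flip those numbers so that lone hunting pays better, and the same reward-maximising agents drift back toward competing instead.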
That result, again, follows predator behavior in the real world and, to a large extent, human nature as well. No matter what our egos tell us, history has proven that…