Last year, Google’s DeepMind created a program in which AI agents compete for virtual apples, zapping each other with lasers as the apples become scarcer or as the agents become more intelligent. The World Economic Forum then posted a video of it on its Facebook page.
This led many to believe it is proof that AI can and will develop greed and selfishness as it grows more intelligent, that the agents will betray each other and, eventually, their creators. Or will they?
There is a big problem with such an inference. DeepMind specializes in deep learning algorithms, which, by design, mimic the human brain. So that is precisely what these agents have done: grabbing a resource when it becomes scarce, with the smarter agents wanting it all for themselves. Sound familiar? Read human history…
In fact, DeepMind went on to develop a second game to test predator/prey behavior, and the results again mimicked human behavior. Given enough survival-based incentives, the predators will work together to trap and share the prey.
This again follows predator behavior in the real world and, to a large extent, human nature as well. No matter what our egos tell us, history has proven that human nature has not yet evolved far from basic instincts. We are still driven by greed and survival instincts, as individuals and as groups.
DeepMind has simply proven that its algorithms have successfully approximated human behavior (which they were designed to do in the first place). To say the AI can overtake human beings because of these two simulations is circular reasoning, a classic logical error.
The reality is, the current state of AI technology is still very far from being able to overtake humans. There are three simple and logical reasons for this.
AI limitation #1 — Data Availability
The current advancements in machine learning, in which software can learn and improve itself, are conditional on crunching a large amount of data for the specific task being learned. Take, for example, AI software being trained to classify different car models from images on the Internet. It can become better and better over time only if it is given a vast number of pictures to analyze, with a human operator giving it feedback on which guesses were right or wrong. But there is absolutely no possibility of the software somehow learning to design a working car by performing this task.
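The learning loop described above can be sketched in a few lines. This is a minimal illustration, not how a real image classifier is built: the "images" are invented two-number feature vectors, and the model is a simple perceptron. The point it demonstrates is the one made in the paragraph: the model only improves because every guess is checked against a human-supplied label.

```python
# Each hypothetical "image" is reduced to two made-up features,
# labelled 1 (say, sedan) or 0 (say, SUV) by a human.
examples = [([0.9, 0.2], 1), ([0.8, 0.3], 1), ([0.2, 0.9], 0), ([0.1, 0.8], 0)]

weights = [0.0, 0.0]
bias = 0.0

def predict(features):
    # Weighted sum of the features, thresholded into a 0/1 guess.
    s = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if s > 0 else 0

# Iterate over the labelled data many times; each wrong guess
# nudges the weights. Without labels, no learning happens at all.
for _ in range(20):
    for features, label in examples:
        error = label - predict(features)  # feedback: 0 if right, ±1 if wrong
        weights = [w + 0.1 * error * x for w, x in zip(weights, features)]
        bias += 0.1 * error

print([predict(f) for f, _ in examples])  # the model now fits its training set
```

Nothing in this loop could ever produce an ability the labels don't describe, which is exactly the limitation the paragraph points to.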
Current AI can also research and organize information if you feed it a database of textual content. Feed such software, say, car design and engineering textbooks, and it can highlight and summarize the key information. However, even if you combine this ability with the earlier one of recognizing car pictures, the software will still not be able to design and construct a new working automobile on its own. A human learner, given enough education and natural ability, can.
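To make the "highlight and summarize" ability concrete, here is a toy sketch of extractive summarization by word frequency, one of the simplest techniques in this family. The sample text is invented for illustration; real systems are far more sophisticated, but the principle is the same: the software surfaces existing sentences, it does not invent new designs.

```python
import re
from collections import Counter

# Hypothetical snippet standing in for a textbook.
text = ("The engine converts fuel into motion. "
        "The chassis supports the engine. "
        "Paint colour is a matter of taste.")

sentences = [s.strip() for s in text.split('.') if s.strip()]

# Count how often each content word appears, ignoring common filler words.
words = re.findall(r'[a-z]+', text.lower())
freq = Counter(w for w in words if w not in {'the', 'a', 'is', 'of', 'and', 'into'})

def score(sentence):
    # A sentence is "key" if its words are frequent across the whole text.
    return sum(freq[w] for w in re.findall(r'[a-z]+', sentence.lower()))

# Pick the single highest-scoring sentence as the "summary".
summary = max(sentences, key=score)
print(summary)
```

The program can only re-rank and extract what it was given, which is why summarizing car textbooks gets an AI no closer to building a car.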
AI limitation #2 — Computing power
The other practical limit on current AI technology, besides the availability of data, is computing power. The mathematics driving today’s AI software has been around for many decades. Back then, computers were not cheap or fast enough to make its potential applications realistic or useful. Now they are. Deep learning algorithms aren’t new! The computer chips driving them are.
For artificial computing power to come close to the abilities of the human brain, we would need to create a quantum computer, a technology that is still in its early stages. We cannot say for sure when quantum computers will become mainstream, but based on engineering principles we can say that the integrated circuit technology powering the computational abilities of conventional computers will never be able to replicate the human brain, much less exceed its intellectual capabilities.
AI limitation #3 — We cannot surpass what we do not yet fully understand
The computational mathematics that has fueled the current advancements in AI is the neural network, a structure inspired by the human brain. But modern science does not yet fully understand how the brain works. Without that understanding, no scientist could program an AI with the potential to surpass the human brain. Until neuroscience catches up, software will not be able to emulate the creative or adaptive abilities of the human mind, which are the true frontiers of human intelligence that can still defeat a machine.
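The "inspired by the brain" claim is worth unpacking, because the borrowed idea is tiny. An artificial "neuron" is just a weighted sum passed through an activation function, as in this minimal sketch (the input and weight values are arbitrary). Biological neurons are vastly more complex and far less understood, which is the gap the paragraph describes.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, then a sigmoid "firing" response in (0, 1).
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-activation))

# Arbitrary example values: two inputs, two weights, one bias.
out = neuron([1.0, 0.5], [0.4, -0.2], 0.1)
print(round(out, 3))  # a single number between 0 and 1
```

A deep network is millions of these simple units stacked together; nothing in the abstraction captures how a real brain learns, adapts, or creates.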
But what about AlphaGo? Didn’t it beat a human?
The reality is that misinformation is all over the internet, created to sensationalize so that posts go viral. The truth is more boring and creates less buzz.
This whole ‘AI will take over mankind’ hype started with AlphaGo, the Google DeepMind software that beat the world’s Go champion in March 2016.
AlphaGo was designed to do one specific task — beat Lee Sedol.
The reality is that AlphaGo was designed specifically to excel at playing Go so that it could beat Lee Sedol, the reigning champion from Korea. DeepMind went through many rounds of failures and versions. They even hired another grandmaster, a human one, to play AlphaGo repeatedly before the competition to figure out its weaknesses. That human coach was European champion Fan Hui. Incidentally, Fan was the referee in the match between AlphaGo and Lee.
So here’s the truth. DeepMind tried subjecting its AI to a general IQ test. The AI failed miserably compared to humans because it could not do abstract reasoning. Put simply, the AI couldn’t jump out of the patterns and tasks it was trained on, even when the variance was very small.
“Ultimately, the team’s AI IQ test shows that even some of today’s most advanced AIs can’t figure out problems we haven’t trained them to solve. That means we’re probably still a long way from general AI.”
And that… is the true reality of where AI is today: still very good only at the specific tasks it was modeled to do, and only after being trained on a large amount of data through many, many corrective iterations.
I don’t think Google DeepMind intended to mislead the world about the limitations of its AI. A documentary was made on the entire process the company went through to train AlphaGo and set up the publicity match against Lee. In it, a member of the team says, “AlphaGo is really a very, very simple program; it’s not anywhere close to full AI…”
“We’re really closer to a smart washing machine than Terminator. If you look at today’s AI, we are really very nascent. I am extremely excited and passionate about AI’s potential, but AI is still very limited in its power.”
— Fei-Fei Li, Director, Stanford AI Lab, speaking in the documentary “AlphaGo”
The media and internet simply got too carried away with what they didn’t fully understand. After all, sensational headlines and blogs capture more eyeballs.
PS: Since publishing, the debate has heated up, with Scott Jones providing an excellent point-by-point counter-argument to my three limitations above, to which I have responded with more supporting research and conceptual reasoning. I’m not trained in AI either professionally or academically, so I’d appreciate other PhDs and experts out there jumping in to contribute to the crowd wisdom too, so that Medium can truly become a reliable source of quality content instead of the usual internet ‘fluff’.