Google CEO Sundar Pichai has said that artificial intelligence is more profound for humanity than electricity. Andrew Ng, who founded Google Brain and now runs an AI startup, wrote: "If a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future."
Their enthusiasm for artificial intelligence is understandable. After decades of slow progress, AI has made remarkable strides. Today we can tell a voice-activated assistant like Alexa to "play music" or ask Facebook to tag our photos, and Google Translate is nearly as accurate as human translators for some language pairs. Over the past five years, billions of dollars in research funding and venture capital have flowed into AI. It has become one of the most popular computer science courses at MIT and Stanford, and in Silicon Valley newly minted AI specialists can command salaries as high as $500,000. All of this suggests that the era of artificial intelligence has arrived.
But there is much that AI still cannot do, and some of its shortcomings will not be solved quickly; they can easily bog the field down. Once you have seen these shortcomings, you cannot unsee them. Deep learning, now the dominant technique in artificial intelligence, is unlikely by itself to automate ordinary human activities.
To understand why modern AI is good at some things and bad at others, it helps to understand what deep learning actually is. Deep learning is math: a statistical method in which computers learn to classify data using neural networks. Such networks have inputs and outputs, a bit like the neurons in our own brains; they are said to be "deep" when they contain multiple hidden layers with many nodes and a multitude of connections. Deep learning uses an algorithm called backpropagation, or backprop, to adjust the weights between nodes so that inputs produce more correct outputs. In speech recognition, the phonemes c, a, and t should spell the word "cat"; in image recognition, a picture of a cat must not be labeled "dog." When a neural network is trained to recognize phonemes, images, or the relation of Latin to English using millions or billions of painstakingly labeled examples, this flavor of deep learning is called "supervised learning."
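To make the mechanics concrete, here is a minimal sketch of supervised learning with backpropagation, written for this translation rather than taken from the article. It trains a one-hidden-layer network in plain NumPy on a toy labeled dataset; the task, layer sizes, and learning rate are all illustrative assumptions.

```python
# Minimal supervised learning with backprop (illustrative sketch, not from
# the article). Only NumPy is required.
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled dataset: classify points by whether x + y > 1.
X = rng.random((200, 2))                                  # inputs
y = (X.sum(axis=1) > 1.0).astype(float).reshape(-1, 1)   # labels

# One hidden layer; "deep" networks simply stack more of these.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(2000):
    # Forward pass: inputs flow through the layers to an output.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass (backprop): propagate the output error back through
    # the network and nudge every weight so outputs become "more correct".
    grad_out = (p - y) / len(X)        # gradient of cross-entropy w.r.t. logit
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("training accuracy:", ((p > 0.5) == y).mean())
```

Every "supervised" system described in this article, from phoneme recognizers to image taggers, is a scaled-up version of this same loop: labeled examples in, weight adjustments out.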
Advances in deep learning are products of pattern recognition: neural networks memorize categories of things and more or less reliably recognize them when they are encountered again. But almost none of humanity's interesting cognitive problems are categorization problems. "People naively believe that if you take deep learning and scale it up with 100 times, or even 1,000 times, more data, neural networks will be able to do anything humans can do. But that's not the case," said François Chollet, a researcher at Google.
Gary Marcus, a professor of cognitive psychology at New York University and former director of Uber's AI lab, recently published a remarkable trilogy of essays critically assessing deep learning. Marcus argues that deep learning is not "a universal solvent, but one tool among many." Without new approaches, he worries, AI is running into a wall: the kinds of problems that pattern recognition cannot solve. Others in the field largely share his view, with the exception of Yann LeCun, director of AI research at Facebook, who dismisses the critique as "all wrong," while Geoffrey Hinton, an emeritus professor at the University of Toronto and a pioneer of backpropagation, has called the claim that deep learning is hitting such a wall "unproven."
Skeptics like Marcus say that deep learning is greedy, brittle, opaque, and shallow. The systems are greedy because they demand huge amounts of training data. They are brittle because, when given a "transfer test," confronting scenarios that differ from the examples used in training, they often behave unpredictably and break. They are opaque because, unlike traditional programs with their formally debuggable code, the parameters of a neural network can be interpreted only as a mathematics of weights; they are black boxes whose outputs cannot be explained, and this inexplicability raises doubts about their reliability. Finally, they are shallow because they are programmed with little innate knowledge and possess no common sense about the world or human psychology.
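The "brittle" criticism and the "transfer test" can be made concrete with a small sketch, again an illustrative example written for this translation rather than anything from the article. It uses scikit-learn's MLPRegressor: the network fits a function well inside the interval it was trained on, then fails on an input just outside it. The function, ranges, and network size are assumptions chosen for demonstration.

```python
# Brittleness under distribution shift (illustrative sketch).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Train only on x in [0, 1]; the target is a simple known function.
X_train = rng.random((500, 1))
y_train = np.sin(2 * np.pi * X_train).ravel()

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
net.fit(X_train, y_train)

# Inside the training distribution, the fit looks excellent...
x_in = np.array([[0.25]])
print("in-distribution: ", net.predict(x_in), "vs true", np.sin(2 * np.pi * 0.25))

# ...but a "transfer test" at x = 2.5, outside anything seen in training,
# shows the network learned a local pattern rather than the underlying rule.
x_out = np.array([[2.5]])
print("out-of-distribution:", net.predict(x_out), "vs true", np.sin(2 * np.pi * 2.5))
```

The in-distribution prediction lands near the true value, while the out-of-distribution one is far off: the network memorized the shape of the data it saw rather than the sine rule that generated it.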
These limitations mean that a lot of automation will prove more elusive than the AI hype suggests. As Pedro Domingos, a professor of computer science at the University of Washington, explains: "A self-driving car can drive millions of miles, but it will eventually encounter something new for which it has no experience. Or consider robot control: a robot can learn to pick up a bottle, but if it has to pick up a cup, it starts from scratch." In January, Facebook abandoned M, a text-based virtual assistant built on deep learning that never managed to deliver useful suggestions or natural-sounding language.
"We humans must have better learning algorithms in our heads than anything we've come up with for machines," Domingos says; we need to invent better methods of machine learning. Marcus's proposed remedy for AI is hybridization: combining deep learning with unsupervised learning techniques that do not depend so heavily on labeled training data, and with the logic-based, rule-driven approaches that dominated AI before deep learning's rise. Marcus claims that our best model of intelligence is ourselves, and that humans think in quite different ways. Young children can learn general rules of language from only a few examples because they are innately equipped to do so: "We are born knowing there are causal relationships in the world, that wholes can be made of parts, and that the world consists of places and objects that persist in space and time."
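The contrast with the supervised sketch earlier is easiest to see with an unsupervised example. Below is a minimal sketch, invented for this translation: k-means clustering, a classic unsupervised method, discovers groups in data that carries no labels at all. The synthetic two-cluster data and the choice of k = 2 are illustrative assumptions.

```python
# Unsupervised learning: k-means finds structure with no labels at all
# (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled data drawn from two hidden groups; the algorithm is never told
# which point came from which group.
data = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(100, 2)),
    rng.normal(loc=3.0, scale=0.5, size=(100, 2)),
])

k = 2
centers = data[rng.choice(len(data), k, replace=False)]

for _ in range(20):
    # Assign each point to its nearest center...
    dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # ...then move each center to the mean of its assigned points.
    centers = np.array([data[labels == j].mean(axis=0) for j in range(k)])

print("discovered cluster centers:\n", centers)  # near (0, 0) and (3, 3)
```

No painstakingly labeled examples are needed; the structure emerges from the data itself, which is why skeptics see unsupervised methods as one way past deep learning's greed for labels.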
Other researchers in the field see things differently. "We have been using the same basic paradigms in machine learning since the 1950s," says Pedro Domingos, "and we need some new ideas to reach a breakthrough." Chollet looks for inspiration in program synthesis: programs that automatically create other programs. Hinton's current research explores an idea he calls "capsules," which preserves backpropagation and deep learning but addresses some of their limitations.
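Program synthesis can also be illustrated with a toy sketch, an assumption-laden simplification written for this translation rather than anything from Chollet's research: enumerate tiny arithmetic expressions until one reproduces a set of input-output examples. The expression grammar and the hidden target f(x) = 2x + 1 are invented for illustration.

```python
# Toy program synthesis: search for a program that fits examples
# (illustrative sketch only).
import itertools

# Input-output examples the synthesized program must satisfy: f(x) = 2x + 1.
examples = [(0, 1), (1, 3), (2, 5), (5, 11)]

def candidates(max_depth):
    """Grow a pool of expressions by combining smaller ones."""
    exprs = ["x", "1", "2"]
    for _ in range(max_depth):
        exprs += [f"({a} {op} {b})"
                  for a, b in itertools.product(exprs, repeat=2)
                  for op in ("+", "*")]
    return exprs

def satisfies(expr):
    # eval is acceptable here only because this is a closed toy grammar.
    return all(eval(expr, {"x": x}) == y for x, y in examples)

# Search: report the first candidate program consistent with the examples.
for expr in candidates(max_depth=2):
    if satisfies(expr):
        print("synthesized program: f(x) =", expr)
        break
```

Real program-synthesis research uses far richer languages and smarter search, but the kernel is the same: the output is a program, not a set of weights, which makes the result inspectable in a way a neural network is not.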
"There are a lot of core problems in AI that are completely unsolved," Chollet said, "and for most of them, no solutions have even been proposed." But we must answer these questions, because there are many tasks that people do not want to do, like cleaning toilets or sorting pornography, and many that intelligent machines would do better, like discovering drugs to treat disease. And beyond those: things we simply cannot do ourselves, most of which we cannot yet imagine.
The original title of the article was "Greedy, Brittle, Opaque, and Shallow: The Downsides to Deep Learning."
This article is an abridged translation; for full details, please refer to the original text.