Google's AutoML has written its own machine learning models and outperformed professional developers. Where does that leave us humans? These robot students are learning a little too fast, aren't they?

Author | Zhou Xiang


Artificial intelligence (AI) is one of the hottest industries in the world, and the price of AI talent keeps rising with it. According to data from Lagou.com, in 2017 the average monthly salary offered for AI positions at large companies (those with more than 2,000 employees) was 25.2k RMB.

According to LinkedIn, by the end of the first quarter of 2017 there were more than 1.9 million AI technical professionals worldwide on its platform, with the United States leading at over 850,000. China, ranked seventh, had only about 50,000, a huge gap.

Spurred by high demand and high salaries, more and more people, whether they studied biology or materials science, want to switch into AI, which helps explain why every new online course launched by an AI celebrity like Andrew Ng triggers a rush of sign-ups.

Those of you planning a career change, though, may want to pause for a moment: if AI can already replace stenographers and translators, could it replace programmers in the future?

Five months ago, Google's AI designed a deep learning model that already beat those built by the engineers who created it. Now the system has gone a step further, outperforming human engineers on some complex tasks.

What do you think about that?

In the current wave of artificial intelligence, deep learning models are widely used in speech recognition, machine translation, image recognition and many other fields, with very good results. Designing a deep learning model, however, is a difficult and complicated process, because the search space of all possible model combinations is enormous; a typical 10-layer network, for example, can have on the order of 10^10 candidate architectures. Coping with numbers of that magnitude makes neural network design time-consuming and demands a great deal of accumulated prior knowledge from machine learning experts.

GoogleNet architecture: evolving from the original convolutional architectures, the design of this network took many years of careful experimentation and refinement.

In May, Google launched AutoML, a machine learning system that automates the design of machine learning models, aiming to make the process easier and more efficient in the face of a shortage of top AI talent. The system runs thousands of simulations to determine which aspects of its code could be improved, applies the changes, and keeps iterating until the goal is reached.

To test AutoML, Google ran it against its own models on two large datasets: CIFAR-10, which focuses on image recognition, and Penn Treebank, which focuses on language modeling. The experiments showed that AutoML's models performed as well as state-of-the-art models designed by machine learning experts. Somewhat awkwardly, some of those expert models were designed by members of the AutoML team itself, which means AutoML has, in a sense, outdone its own creators.

Five months later, AutoML has gone a step further. According to TheNextWeb, in one image recognition task AutoML's model achieved a record 82 percent accuracy. Even on some complex AI tasks its self-generated code beat human programmers: for example, it marked the positions of multiple objects in an image with 42 percent accuracy, compared with 39 percent for the software built by humans.

AutoML has gone further than many expected. So why are machines so good at designing deep learning models? Let's take a look at how AutoML works.


How does AutoML design the model?

As a leader in the field of AI, Google has in fact tried many approaches behind the scenes, including evolutionary algorithms and reinforcement learning, and both have shown promise. AutoML is the Google Brain team's creation built on reinforcement learning.

In the AutoML architecture there is an RNN called the controller, which proposes a model architecture called the child (sub-model). The child is then trained on a specific task so its quality can be evaluated, and that feedback is returned to the controller to improve its proposals in the next cycle. As shown below:

New architectures are generated, tested, and the feedback is passed to the controller to learn from; this process is repeated thousands of times, and eventually the controller learns to propose architectures that achieve better accuracy on the dataset.
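
To make the loop concrete, here is a heavily simplified Python sketch of the controller/child cycle. The search space, the scoring rule, and the policy update below are toy placeholders invented for illustration; Google's actual system uses a controller RNN trained with reinforcement learning and really trains each child model.

```python
import random

# Hypothetical search space: each "architecture" is just a list of layer choices.
OPS = ["conv3x3", "conv5x5", "maxpool", "identity"]

def sample_architecture(policy):
    """Controller step: sample a 10-layer child architecture from the current policy.
    Here the 'policy' is just per-op preference weights, not a real RNN."""
    return [random.choices(OPS, weights=policy)[0] for _ in range(10)]

def train_and_evaluate(arch):
    """Stand-in for training the child model and measuring validation accuracy.
    A toy scoring rule replaces real training on CIFAR-10 / Penn Treebank."""
    return sum(1 for op in arch if op != "identity") / len(arch) + random.uniform(0, 0.1)

def update_policy(policy, arch, reward):
    """Reinforcement-style update: nudge the preference of ops that appeared
    in high-reward architectures (a crude stand-in for policy gradients)."""
    return [p + reward * arch.count(op) * 0.01 for p, op in zip(policy, OPS)]

policy = [1.0] * len(OPS)          # start with uniform preferences
best_arch, best_reward = None, -1.0

for step in range(1000):           # the real system repeats this many thousands of times
    arch = sample_architecture(policy)
    reward = train_and_evaluate(arch)
    policy = update_policy(policy, arch, reward)
    if reward > best_reward:
        best_arch, best_reward = arch, reward

print(best_reward, best_arch)
```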

According to Gu Xiaofan, an AI engineer, AutoML in practice works in two parts:

  1. Meta-learning warm start: search the machine learning framework for efficient algorithms, and compute the similarity between datasets so that similar datasets can reuse similar hyperparameters.

  2. Hyperparameter optimization: algorithms include Hyperopt (TPE), SMAC (based on random forests) and Spearmint. Taking the hyperparameters as input and the model's accuracy as the objective, the tuner randomly samples some values and then searches greedily from there to optimize (a minimal sketch of this step follows below).
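
For the second part, the sketch below shows what TPE-based hyperparameter search looks like with the Hyperopt library mentioned above. The objective function is a synthetic stand-in invented for this example; in practice it would train a model with the sampled hyperparameters and return its validation loss.

```python
from hyperopt import fmin, tpe, hp, STATUS_OK, Trials

def objective(params):
    """Toy objective: pretend 'loss' is 1 - accuracy of a model trained with
    these hyperparameters. A synthetic formula stands in for real training."""
    lr, layers = params["lr"], params["layers"]
    fake_accuracy = 1.0 - (lr - 0.01) ** 2 * 100 - abs(layers - 3) * 0.05
    return {"loss": 1.0 - fake_accuracy, "status": STATUS_OK}

# Search space: continuous learning rate, discrete number of layers.
space = {
    "lr": hp.loguniform("lr", -7, 0),        # roughly 1e-3 ... 1
    "layers": hp.choice("layers", [1, 2, 3, 4, 5]),
}

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=50, trials=trials)

# Note: for hp.choice parameters, 'best' reports the index of the chosen option.
print("best hyperparameters found:", best)
```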

The two models below, designed by human experts on the left and AutoML on the right, are predictive models based on the Penn Treebank dataset.

According to the Google team, the way the machine chooses an architecture resembles how a human designs one, for instance combining the inputs with the previous hidden state. AutoML has a few surprises of its own, though: the machine-chosen architecture includes multiplicative combinations, such as the "elem_mult" node on the left of AutoML's model above. Such combinations are unusual in RNNs, probably because researchers had not seen a clear advantage in them. Interestingly, this approach has been proposed in recent work, where multiplicative combinations are argued to be an effective way to mitigate vanishing and exploding gradients. This suggests that machine-chosen architectures can genuinely help in exploring new neural network designs.
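
To see what a multiplicative combination means in code, here is a small NumPy toy contrasting the usual additive update of a recurrent cell with an element-wise multiplicative ("elem_mult"-style) update. This is an illustrative toy, not the cell AutoML actually discovered.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 8  # hidden size

# Shared projection matrices for a toy recurrent cell.
W_x = rng.normal(scale=0.1, size=(H, H))
W_h = rng.normal(scale=0.1, size=(H, H))

def additive_step(x, h):
    """Standard RNN-style update: add the projected input and hidden state."""
    return np.tanh(W_x @ x + W_h @ h)

def elem_mult_step(x, h):
    """Multiplicative update: combine the two projections element-wise,
    the kind of 'elem_mult' combination seen in the machine-chosen cell."""
    return np.tanh((W_x @ x) * (W_h @ h))

# Unroll both toy cells over a short random input sequence.
# Start from ones so the multiplicative path is not zeroed out at step one.
xs = rng.normal(size=(5, H))
h_add = np.ones(H)
h_mul = np.ones(H)
for x in xs:
    h_add = additive_step(x, h_add)
    h_mul = elem_mult_step(x, h_mul)

print("additive hidden state:      ", np.round(h_add, 3))
print("multiplicative hidden state:", np.round(h_mul, 3))
```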

It might also teach people something about why certain types of neural networks work well. For example, the architecture on the right of the image above has many channels through which the gradient can flow backwards, which may explain why LSTM RNNs perform better than standard RNNs.

AutoML is available at github.com/automl, and interested readers can try it out for themselves.



Will AutoML replace AI engineers?

AutoML has made significant progress in a short time, showing that using machines to design models is a promising direction. But is AutoML's ultimate goal to replace AI engineers?

Today, AI experts must still tweak the internal architecture of neural networks through intuition and trial and error. "A lot of what engineers do is essentially boring," says Roberto Calandra, a researcher at the University of California, Berkeley. "It's trying different configurations to see which [neural networks] work better." Calandra believes designing deep learning models will be an even tougher challenge in the future, because the problems to be solved are getting harder and the networks are getting deeper.

In theory, the time it would take AutoML to design a deep neural network would be negligible compared with that of a human expert, and the machine-designed model would be much better.

But that doesn’t mean AutoML will cut humans out of the process of developing AI systems.

In fact, AutoML’s main goal is to lower the barriers to machine learning and democratize AI. Even Google can’t claim to have enough AI talent, so it’s important for the industry to lower the bar and improve efficiency.

"Today, these [AI systems] are built by machine learning experts, and only a few thousand scientists in the world can do this," Google CEO Sundar Pichai said at an event last week. "We want to enable thousands of developers to do the same."

So while AutoML may not have inherited the theoretical and mathematical genius of Google’s top engineers, it could save time for, or inspire, AI engineers.

Gu agrees, arguing that AutoML is what real machine learning should look like: it automates work that today is done half by hand and relies on experience. AutoML could significantly lower the bar for machine learning in the future and turn it into a tool ordinary people can use.

The AutoML team will also analyze and test the architectures the machine designs in depth, which helps AI engineers re-examine their own understanding of those architectures. If Google succeeds, AutoML could lead to new types of neural networks and allow non-expert researchers to create neural networks for the benefit of everyone.

AutoML may not replace AI engineers, but with machines working so hard, there’s no excuse to be lazy!

In the comments section of this article, the editor spotted several remarks along the lines of: "As an AI engineer, I feel a lot of pressure. With machine learning developing like this, maybe I really will lose my job..."



References:

research.googleblog.com/2017/05/usi…

thenextweb.com/artificial-…

zhuanlan.zhihu.com/p/27792859

www.wired.com/story/googl…