The structure of a neural network has a decisive effect on its performance. At present, the mainstream way to find a network architecture is still trial and error, which has three problems:
- The network architecture is fixed during training, so training cannot improve the structure
- The search is costly in both time and computation
- The resulting networks are often redundant and costly to compute and store
To solve these problems, researchers at Princeton University have developed an algorithm that automatically generates neural networks by mimicking the learning process of the human brain. The algorithm starts from a seed architecture that resembles the brain of a newborn baby.
During training, neurons grow and form new connections according to the gradients obtained from back-propagation, much as a child's brain develops. Redundant connections and neurons are then pruned step by step according to the magnitude of their connection weights, finally yielding a network with high accuracy and low energy consumption. The resulting inference model is analogous to the adult brain.
Figure: Princeton researchers' algorithm automatically generates neural networks in a process similar to human brain development. Image from the paper (same below).
Automatically generating high-performance neural networks with industry-leading compression
The algorithm consists of two parts: gradient-based growth and magnitude-based pruning. The former performs a kind of "gradient descent" in architecture space by growing and connecting neurons; experiments show that this growth phase rapidly reduces the loss. The latter simplifies the network by pruning weak connections and retraining; experiments show that pruning effectively reduces storage and computational energy cost. In addition, dedicated growth and pruning rules are designed for convolutional layers.
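To make the two phases concrete, below is a minimal NumPy sketch of the bookkeeping involved: a boolean mask records which connections exist, growth activates dormant connections with large gradient magnitude, and pruning removes active connections with small weight magnitude. The toy layer size, the growth count k, and the pruning threshold are illustrative assumptions, not values from the paper; the actual NeST tool applies these steps to full (convolutional) networks and retrains after each change.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: one linear layer y = W x with squared-error loss,
# used only to illustrate the grow/prune bookkeeping (sizes are made up).
n_in, n_out, n_samples = 8, 4, 64
X = rng.normal(size=(n_samples, n_in))
W_true = rng.normal(size=(n_out, n_in))
Y = X @ W_true.T

W = np.zeros((n_out, n_in))           # weights of the seed architecture
mask = np.zeros_like(W, dtype=bool)   # which connections currently exist
mask[:, :2] = True                    # sparse "seed" connectivity

def gradient(W, mask):
    """dL/dW for L = 0.5 * mean ||x W^T - y||^2, evaluated for every potential connection."""
    pred = X @ (W * mask).T
    return (pred - Y).T @ X / n_samples

# Growth: activate the dormant connections with the largest gradient magnitude,
# i.e. the connections that would reduce the loss fastest if they existed.
g = gradient(W, mask)
dormant = ~mask
k = 8                                  # number of connections grown per step (assumed)
order = np.argsort(np.abs(g[dormant]))
grow = np.zeros(dormant.sum(), dtype=bool)
grow[order[-k:]] = True
mask[dormant] = grow                   # newly grown connections start at zero

# Training: plain gradient descent restricted to the active connections.
for _ in range(500):
    W -= 0.05 * gradient(W, mask) * mask

# Pruning: remove active connections whose trained weight magnitude is small;
# in the full algorithm the surviving network would then be retrained.
threshold = 0.05                       # magnitude threshold (assumed)
mask &= np.abs(W) > threshold
W *= mask

print("active connections after grow/prune:", mask.sum(), "of", mask.size)
```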
Experiments show that the algorithm automatically generates high-performance neural networks and achieves extremely efficient compression for linear classifiers, SVMs, and other seed architectures. On LeNet-5, it compresses the number of parameters by 74.3x and the number of floating-point operations by 43.7x; on AlexNet, the corresponding ratios are 15.7x and 4.6x. With no loss of accuracy, these compression ratios are the best reported to date.
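As a quick sanity check on what these ratios mean: a compression ratio is simply the baseline quantity divided by the compressed quantity, so a 74.3x parameter compression leaves roughly 1/74.3 ≈ 1.3% of the original weights. A tiny illustration using only the ratios reported above:

```python
# The reported ratios are baseline size divided by compressed size; reading them
# the other way around shows how little of each network survives.
for name, ratio in [("LeNet-5 parameters", 74.3), ("LeNet-5 FLOPs", 43.7),
                    ("AlexNet parameters", 15.7), ("AlexNet FLOPs", 4.6)]:
    remaining = 100.0 / ratio
    print(f"{name}: {remaining:.1f}% of the baseline remains")
```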
Three brain-inspired features that overcome the limitations of traditional neural network compression
The mainstream approach to neural network compression today is pruning alone. The Princeton researchers found that, under the same pruning procedure, a network that is relatively large before pruning remains relatively large after pruning. This points to a fundamental limitation of pruning-only methods: if the network is redundant to begin with, no amount of pruning can make it truly compact. Conversely, if the network is already compact before pruning, pruning is the icing on the cake and yields extremely efficient compression.
The researchers’ proposed algorithm is inspired by three features of the human brain:
1. Dynamic connections: The essence of human learning is the dynamic change of the connection pattern among neurons in the brain, whereas current neural network training only adjusts weights. The researchers therefore imitate this mechanism and dynamically adjust the connectivity of neurons during training according to the gradient, effectively performing gradient descent in architecture space. They show that this form of learning is more effective than weight training alone.
2. Quantitative evolution: The number of synapses in the human brain varies with age. From birth, the number of synaptic connections grows rapidly, peaks around the age of one, and then declines steadily. The researchers imitate this by having the network continuously create new neurons and connections during the growth phase to improve accuracy, and continuously remove redundant neurons and connections during the pruning phase to reduce energy consumption. The number of connections in the network as a function of training time closely matches the change in human synapse counts with age.
3. Sparsity: Only a small fraction of the neurons in the human brain are active at any given time, which is why the brain can operate at very low power (< 20 watts). Mainstream neural networks, in contrast, are still densely connected. To achieve sparsity, the researchers propose several pruning methods, as well as a partial-area convolution scheme for convolutional layers, greatly reducing the number of parameters and floating-point operations without loss of accuracy (a small illustration of the resulting savings follows this list).
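To illustrate the sparsity point, the sketch below compares a dense layer with a pruned version of it stored in sparse (CSR) form: the work and storage of the sparse layer scale with the number of surviving connections rather than with the dense size. The layer dimensions and the roughly 5% survival rate are assumptions for illustration only, not figures from the paper.

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(1)

# A dense layer versus its pruned counterpart: the pruned layer stores and
# multiplies only the surviving connections, which is where the storage and
# energy savings come from. Sizes and sparsity level are illustrative.
n_out, n_in = 512, 1024
W_dense = rng.normal(size=(n_out, n_in))

keep = rng.random(size=W_dense.shape) < 0.05   # pretend ~95% of connections were pruned
W_sparse = sparse.csr_matrix(W_dense * keep)   # store only the non-zero weights

x = rng.normal(size=n_in)
y_dense = W_dense @ x                          # n_out * n_in multiply-accumulates
y_sparse = W_sparse @ x                        # only W_sparse.nnz multiply-accumulates

print("dense multiply-accumulates: ", n_out * n_in)
print("sparse multiply-accumulates:", W_sparse.nnz)
print("weights stored after pruning:", W_sparse.nnz, "of", W_dense.size)
```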
At present, the researchers are extending this method to RNNs, LSTMs, and reinforcement learning, and are also studying how to apply it to online learning and lifelong learning.
NeST: A Neural Network Synthesis Tool Based on a Grow-and-Prune Paradigm
Abstract
Neural networks (NNs) have begun to exert a wide influence on various machine learning applications. However, the problem of finding the best NN architecture for a large application has remained unsolved for decades. The traditional approach relies on extensive trial and error, which is very inefficient, and the resulting architectures carry considerable redundancy. To address these problems, we propose a NN synthesis tool (NeST) that automatically generates very compact architectures for a given dataset. NeST starts from a seed NN architecture and iteratively adjusts it through gradient-based growth and magnitude-based pruning of neurons and connections. Experiments show that NeST yields accurate and very compact NNs over a wide range of seed architectures. For the LeNet-300-100 (LeNet-5) architecture on the MNIST dataset, we reduce network parameters by 34.1x (74.3x) and floating-point operations (FLOPs) by 35.8x (43.7x). For the AlexNet architecture on the ImageNet dataset, we reduce network parameters by 15.7x and FLOPs by 4.6x. All of these results are the current best for these architectures.
Originally published: 2017-11-14
Author: Wen Fei
This article comes from xinzhiyuan, a partner of the cloud community. For related information, follow the WeChat public account "AI_era".