
At Tesla AI Day, Andrej Karpathy shared some details of Tesla's vision-based FSD design. The talk is still well worth digesting. In the network architecture diagram, we can see that RegNet is the backbone of Tesla's network.

In an earlier article or shared video, I referred to this backbone as ResNet. At the time I assumed it was ResNet; I didn't know RegNet and had never even heard of it. A good backbone matters enormously for a multi-task network like Tesla's: it has to extract strongly expressive features from images to serve the downstream tasks. Since I wasn't familiar with this area, I wanted to learn more. It turned out not to be ResNet, but it is likewise the brainchild of Kaiming He and his colleagues at Facebook AI Research (FAIR), and I have to hand it to them.

But come to think of it, neither NAS nor RegNet-style design-space search is something a small company like ours can afford, let alone an individual. Still, we can follow in the footsteps of the big players and experience the results through the paper as we read, which is almost as satisfying as being there in person, and it also makes good material to show off in future conversations.

Network structures are becoming more and more complex. Before ResNet, you could take in a network at a glance; that is no longer the case. Networks also have more and more parameters, and organizing them effectively requires designers to run a great many experiments to find a good combination.

NAS (Neural Architecture Search) uses learning to find a better network structure. NAS usually operates within a given network space, so even if it finds the best structure in that space, it is only far better than the other networks in that same space. That may simply be because the network is especially suited to one particular task and lacks generalization ability. And if all the networks in the space do well, the credit belongs not to NAS but to the humans who designed a good subspace in the first place.

Since finding a good space is the precondition for finding a good network structure, we can shift from designing networks to designing network design spaces: in a good design space, good networks follow naturally. The basic idea is to sample models from the design space, then analyze the collected samples with classical statistical tools and use the resulting statistics as evaluation metrics for the design space.
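
To make this concrete, here is a minimal sketch of the idea: sample configurations from a design space, score each one, and summarize the space with statistics over the scores. The parameter ranges and the `evaluate` stand-in below are invented for illustration; in the paper, each sampled model is actually trained briefly on ImageNet.

```python
import math
import random
import statistics

def sample_config():
    """Draw one model configuration from a hypothetical design space.
    These ranges are made up for illustration, not the paper's."""
    return {
        "depth": random.randint(12, 28),          # number of blocks
        "width": random.choice([64, 128, 256]),   # channels per block
        "bottleneck_ratio": random.choice([1, 2, 4]),
    }

def evaluate(config):
    """Stand-in for a short ImageNet training run returning top-1 error (%).
    A synthetic score is used here so the sketch runs end to end."""
    base = 40.0 - 2.0 * math.log2(config["width"])
    return base - 0.1 * config["depth"] + random.uniform(-1.0, 1.0)

# Sample n models and summarize the design space with simple statistics.
n = 100
errors = [evaluate(sample_config()) for _ in range(n)]
print("mean error:", statistics.mean(errors))
print("best error:", min(errors))
```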

In fact, RegNet comes out of Ilija Radosavovic et al.'s interesting paper Designing Network Design Spaces, which explores a highly malleable network architecture. The architecture can be adapted to run well even on some mobile devices, and the width, depth, and resolution of the network can be controlled by adjusting a few parameters.

RegNet is not a single network structure but a design space. Despite the name, a network design space is defined not only by different model architectures but also by different parameters. Unlike NAS, which tries different network widths, depths, and resolutions across different architectures to find the best network, RegNet assembles network structures from just one existing block, such as a bottleneck block.
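
To give a feel for how a few parameters can pin down an entire family of networks, here is a sketch of the quantized linear width rule described in the RegNet paper: block widths follow a line u_j = w_0 + w_a * j and are then quantized to per-stage widths via a multiplier w_m. This is my paraphrase of the paper's parameterization, and the example parameter values are made up.

```python
import numpy as np

def regnet_widths(w_0, w_a, w_m, d, q=8):
    """Sketch of the RegNet quantized linear parameterization.
    w_0: initial width, w_a: slope, w_m: per-stage width multiplier,
    d: depth in blocks, q: widths are rounded to multiples of q."""
    u = w_0 + w_a * np.arange(d)                 # u_j = w_0 + w_a * j
    s = np.round(np.log(u / w_0) / np.log(w_m))  # stage index per block
    w = w_0 * np.power(w_m, s)                   # snap widths to stages
    return (np.round(w / q) * q).astype(int)     # multiples of q

# Hypothetical parameters; each choice of (w_0, w_a, w_m, d) is one network.
print(regnet_widths(w_0=24, w_a=36.0, w_m=2.5, d=13))
```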

In the design space, n models can be sampled and each trained briefly on ImageNet; the error EDF (empirical distribution function) of these models is then computed as an evaluation of the design space:


$$F(e) = \frac{1}{n} \sum_{i=1}^{n} \left[ e_i < e \right]$$
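
In code, F(e) is simply the fraction of the n sampled models whose error falls below the threshold e; a design space whose EDF curve rises faster puts more of its mass on low-error models. A minimal sketch, with hypothetical error values:

```python
import numpy as np

def error_edf(errors, e):
    """Empirical distribution function: fraction of models with error < e."""
    return float(np.mean(np.asarray(errors) < e))

errors = [34.2, 31.5, 36.8, 29.9, 33.1]  # hypothetical top-1 errors of n models
for e in (30, 32, 34, 36):
    print(f"F({e}) = {error_edf(errors, e):.2f}")
```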