An overview of the simulation capabilities of neural networks

Neural networks are powerful because of their simulation capabilities: in theory, a neural network can approximate any function with arbitrarily small error. This is the intuition behind the universal approximation theorem.

In other words, we can use a neural network to build any function, and therefore to realize any algorithm.

Here we use some visual examples to help you build intuition.

Simulation of a function of one variable

A straight line

This is the simplest case: a single neuron with no activation function can simulate it.


f(x) = wx + b

Any line can be simulated by adjusting the parameters w and b.
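
If you want to play with this in code, here is a minimal sketch (NumPy assumed; the helper name and the specific weight values are just for illustration):

```python
import numpy as np

def linear_neuron(x, w, b):
    """A single neuron with no activation function: f(x) = w*x + b."""
    return w * x + b

# Any line can be produced by choosing w (slope) and b (intercept):
x = np.linspace(-2.0, 2.0, 5)
print(linear_neuron(x, w=2.0, b=-1.0))  # [-5. -3. -1.  1.  3.]
```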

Step Function

We can use a single neuron with a sigmoid activation function to simulate it:

f(x) = \text{sigmoid}(wx + b)

As the parameter w increases, the neuron's output approaches the step function more and more closely.
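
Here is a minimal sketch of this sharpening effect (NumPy assumed; the weight values are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b):
    """f(x) = sigmoid(w*x + b); the step sits at x = -b/w."""
    return sigmoid(w * x + b)

x = np.array([-0.1, -0.01, 0.01, 0.1])
for w in (1.0, 10.0, 100.0, 1000.0):
    # b = 0 keeps the step at x = 0; larger w makes the transition sharper.
    print(w, neuron(x, w, b=0.0).round(3))
# As w grows, outputs approach 0 left of the step and 1 right of it.
```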

Rectangular pulse function

Let’s break it down into several steps:

  1. Use one neuron to simulate the left half of the function:


f_1(x) = \text{sigmoid}(w_1 x + b_1)

  2. Use another neuron to simulate the right half of the function (flipped upside down, which requires w_2 < 0):


f_2(x) = \text{sigmoid}(w_2 x + b_2)

  3. Use a third neuron to combine the outputs of the first two steps:


f_3(x, y) = \text{sigmoid}(w_{31} x + w_{32} y + b_3)

Here x and y stand for the outputs of f_1 and f_2. The result is a good approximation of the target function.
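
Putting the three neurons together in code (a minimal sketch with hand-picked weights, NumPy assumed): the third neuron's weights are large and its bias thresholds f_1 + f_2 at 1.5, so it fires only where both steps are on.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pulse(x, left=1.0, right=2.0, sharpness=50.0):
    """Rectangular pulse on [left, right] built from three sigmoid neurons."""
    f1 = sigmoid(sharpness * (x - left))    # rising step at x = left
    f2 = sigmoid(-sharpness * (x - right))  # falling step at x = right (w2 < 0)
    # The third neuron fires only where both steps are on (f1 + f2 ~ 2):
    return sigmoid(sharpness * (f1 + f2 - 1.5))

x = np.array([0.5, 1.5, 2.5])
print(pulse(x).round(3))  # ~ [0. 1. 0.]
```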

Other functions of one variable

Using rectangular pulse functions, we can approximate any other function of one variable: cover the domain with narrow pulses and scale each pulse by the function's value at its center, just like the principle of integration.
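
A sketch of that idea in code (NumPy assumed; pulse() is the same construction as above, and the pulse count and sharpness are arbitrary choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pulse(x, left, right, sharpness=100.0):
    """Rectangular pulse on [left, right] from three sigmoid neurons."""
    f1 = sigmoid(sharpness * (x - left))
    f2 = sigmoid(-sharpness * (x - right))
    return sigmoid(sharpness * (f1 + f2 - 1.5))

def approximate(target, x, lo=-np.pi, hi=np.pi, n_pulses=50):
    """Approximate `target` on [lo, hi] by a weighted sum of narrow pulses."""
    edges = np.linspace(lo, hi, n_pulses + 1)
    total = np.zeros_like(x, dtype=float)
    for a, b in zip(edges[:-1], edges[1:]):
        total += target((a + b) / 2) * pulse(x, a, b)
    return total

x = np.linspace(-np.pi, np.pi, 7)
print(np.abs(approximate(np.sin, x) - np.sin(x)).max())
# Prints a small maximum error (a few hundredths); more, narrower
# pulses would shrink it further.
```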

Simulation of functions of two variables

A plane

This is the simplest case: a single neuron with no activation function can simulate it.


f(x, y) = w_1 x + w_2 y + b

Any plane can be simulated by adjusting the parameters w_1, w_2, and b.
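
In code this is just a weighted sum with two inputs (a trivial sketch; the parameter values are arbitrary):

```python
def plane_neuron(x, y, w1=2.0, w2=-1.0, b=0.5):
    """f(x, y) = w1*x + w2*y + b: one linear neuron with two inputs."""
    return w1 * x + w2 * y + b

print(plane_neuron(1.0, 2.0))  # 2.0*1.0 - 1.0*2.0 + 0.5 = 0.5
```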

Step function of two variables

We can use a single neuron with a sigmoid activation function to simulate it:

f(x, y) = \text{sigmoid}(w_1 x + w_2 y + b)
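
A minimal sketch of this (NumPy assumed; the weights are illustrative and place the step along a line in the x-y plane):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step2d(x, y, w1=30.0, w2=0.0, b=-15.0):
    """f(x, y) = sigmoid(w1*x + w2*y + b), a step across the line
    w1*x + w2*y + b = 0."""
    return sigmoid(w1 * x + w2 * y + b)

# With w2 = 0 the step rises across the line x = 0.5, independent of y:
print(step2d(0.0, 0.3).round(3), step2d(1.0, 0.9).round(3))  # ~0.0 ~1.0
```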

Rectangular pulse function of two variables

Similar to the one-variable case, we build it up in steps:

  1. Use a neuron to simulate one edge of the pulse:


f_1(x, y) = \text{sigmoid}(w_{11} x + w_{12} y + b_1)

  2. Simulate the remaining edges the same way; combining each opposing pair of edges yields ridge-shaped functions along the x and y directions.

  3. Finally, combine the ridges with one more neuron to synthesize the rectangular pulse: a "tower" that is approximately 1 over the target rectangle and 0 elsewhere.

The final neural network structure is shown in the figure below:
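
In code, the same construction might look like this (a minimal sketch with hand-picked weights, NumPy assumed): four edge neurons, two per axis, feed an output neuron whose bias thresholds their sum at 3.5, so it fires only where all four edges are on.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tower(x, y, x0=0.0, x1=1.0, y0=0.0, y1=1.0, sharpness=50.0):
    """2-D rectangular pulse over [x0, x1] x [y0, y1] from five neurons."""
    ex0 = sigmoid(sharpness * (x - x0))   # "right of x0" edge
    ex1 = sigmoid(-sharpness * (x - x1))  # "left of x1" edge
    ey0 = sigmoid(sharpness * (y - y0))   # "above y0" edge
    ey1 = sigmoid(-sharpness * (y - y1))  # "below y1" edge
    # Output neuron fires only where all four edges are on (sum ~ 4):
    return sigmoid(sharpness * (ex0 + ex1 + ey0 + ey1 - 3.5))

print(tower(0.5, 0.5).round(3))  # ~1.0 (inside the rectangle)
print(tower(2.0, 0.5).round(3))  # ~0.0 (outside)
```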

Other functions of two variables

Using rectangular pulse functions of two variables, we can approximate any other function of two variables by covering the domain with weighted towers, just like the principle of integration.
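
A sketch of the two-variable version (NumPy assumed; tower() is the five-neuron construction from the previous section, and the grid size is an arbitrary choice):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tower(x, y, x0, x1, y0, y1, s=100.0):
    edges = (sigmoid(s * (x - x0)) + sigmoid(-s * (x - x1))
             + sigmoid(s * (y - y0)) + sigmoid(-s * (y - y1)))
    return sigmoid(s * (edges - 3.5))

def approximate2d(target, x, y, lo=0.0, hi=1.0, n=20):
    """Weighted sum of towers on an n-by-n grid over [lo, hi]^2."""
    g = np.linspace(lo, hi, n + 1)
    total = 0.0
    for x0, x1 in zip(g[:-1], g[1:]):
        for y0, y1 in zip(g[:-1], g[1:]):
            total += target((x0 + x1) / 2, (y0 + y1) / 2) \
                     * tower(x, y, x0, x1, y0, y1)
    return total

f = lambda x, y: x * y
print(abs(approximate2d(f, 0.3, 0.7) - f(0.3, 0.7)))  # small error
```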

Simulation of functions of n variables

The same principle applies; the n-dimensional picture is left to your imagination! 😥

The problem

Why do we need neural networks when we already have digital circuits and software algorithms?

Software programs built on top of digital circuits can simulate any function, so why invent artificial neural networks?

Reference software

For more content and an interactive version, please refer to the app:

Neural networks and deep learning