An overview
Neural networks are powerful because of their approximation capabilities: in theory, a neural network can approximate any function with arbitrarily small error.
In other words, we can use a neural network to build any function and thereby realize any algorithm.
We use some visual examples here to help you get some intuition.
Simulation of a function of one variable
A straight line
This is the simplest case: we can simulate it with a single neuron that has no activation function, f(x) = wx + b.
Any line can be simulated by adjusting the parameters w and b.
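As a minimal sketch (the function name and example weights are our own, not from the original), a single neuron without an activation function is just an affine map:

```python
import numpy as np

def linear_neuron(x, w, b):
    """A single neuron with no activation function: f(x) = w*x + b."""
    return w * x + b

# Example: the line y = 2x + 1
y = linear_neuron(np.array([0.0, 1.0, 2.0]), w=2.0, b=1.0)
print(y)  # [1. 3. 5.]
```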
Step Function
We use a neuron with a sigmoid activation function: f(x) = sigmoid(wx + b).
As the parameter w increases, the neuron's output gradually approaches the step function.
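A small numerical sketch of this sharpening effect (the specific weight values are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A single sigmoid neuron f(x) = sigmoid(w*x + b) with b = 0:
# as w grows, the output near x = 0 snaps toward a 0/1 step.
x = np.array([-0.1, 0.1])
for w in [1.0, 10.0, 100.0]:
    print(w, sigmoid(w * x))
```

With w = 100, the outputs at x = -0.1 and x = 0.1 are already nearly 0 and 1.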
Rectangular impulse function
Let’s break it down into several steps:
- Use one neuron to simulate the left half of the function.
- Use another neuron to simulate the right half of the function (upside down).
- Use a third neuron to combine the outputs of the first two steps.
The result is a good approximation of the objective function.
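The three steps above can be sketched as follows (the pulse edges and the steepness w=200 are illustrative choices of ours):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rect_pulse(x, left, right, w=200.0):
    """Approximate a rectangular pulse on [left, right]:
    one steep sigmoid step rising at `left`, minus a second one
    rising at `right` (the 'upside down' neuron), combined by an
    output neuron with weights +1 and -1."""
    return sigmoid(w * (x - left)) - sigmoid(w * (x - right))

x = np.array([0.0, 0.5, 1.0])
print(rect_pulse(x, left=0.25, right=0.75))  # ~0 outside, ~1 inside
```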
Other functions of one variable
Using the rectangular impulse function, we can easily approximate any other function of one variable, much like the principle of integration: sum many narrow, weighted pulses.
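A rough sketch of this "integration" idea, assuming the sigmoid-pair pulse construction from the previous section (the pulse count, steepness, and target function sin are our own illustrative choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def approximate(f, x, n_pulses=100, w=500.0, lo=0.0, hi=2 * np.pi):
    """Approximate f on [lo, hi] by a weighted sum of narrow
    rectangular pulses, each built from a pair of sigmoid neurons.
    This amounts to one hidden layer with 2*n_pulses neurons."""
    edges = np.linspace(lo, hi, n_pulses + 1)
    out = np.zeros_like(x)
    for a, b in zip(edges[:-1], edges[1:]):
        height = f((a + b) / 2)  # pulse height = f at the interval midpoint
        out += height * (sigmoid(w * (x - a)) - sigmoid(w * (x - b)))
    return out

x = np.linspace(0.5, 5.5, 7)
err = np.max(np.abs(approximate(np.sin, x) - np.sin(x)))
print(err)  # shrinks as the pulses get narrower and steeper
```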
Simulation of functions of two variables
The plane
This is the simplest case: we can simulate it with a single neuron that has no activation function, f(x, y) = w1x + w2y + b.
Any plane can be simulated by adjusting the parameters w1, w2, and b.
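In code, this is the two-input analogue of the straight-line neuron (the example weights are illustrative assumptions):

```python
def plane_neuron(x, y, w1, w2, b):
    """A neuron with no activation function: f(x, y) = w1*x + w2*y + b."""
    return w1 * x + w2 * y + b

print(plane_neuron(1.0, 2.0, w1=3.0, w2=-1.0, b=0.5))  # 3 - 2 + 0.5 = 1.5
```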
Step function of two variables
We use a neuron with a sigmoid activation function: f(x, y) = sigmoid(w1x + w2y + b).
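A quick numerical sketch (the weights and the dividing line x + y = 1 are illustrative assumptions): the soft step's boundary is the line w1*x + w2*y + b = 0, and large weights sharpen it into a binary step over the plane.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Step across the line x + y = 1, sharpened by large weights.
w1, w2, b = 100.0, 100.0, -100.0
print(sigmoid(w1 * 0.2 + w2 * 0.2 + b))  # well below the line -> ~0
print(sigmoid(w1 * 0.8 + w2 * 0.8 + b))  # well above the line -> ~1
```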
Rectangular impulse function of two variables
Similar to the one-variable case, we implement it in steps:
- Use a neuron to simulate one edge of the function.
- Then we can get the following function:
- Finally, the following function can be synthesized:
The final neural network structure is shown in the figure below:
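One common way to realize this construction is a "tower" function: build a 1-D rectangular pulse in each coordinate, then let an output sigmoid neuron fire only where both pulses are active. The rectangle [0.2, 0.8] × [0.2, 0.8], the steepness values, and the 1.5 threshold below are illustrative assumptions, not figures from the original:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rect(t, lo, hi, w=200.0):
    """1-D rectangular pulse from two sigmoid neurons (as in the
    one-variable case)."""
    return sigmoid(w * (t - lo)) - sigmoid(w * (t - hi))

def tower(x, y, w_out=50.0):
    """2-D 'tower': each rect handles one pair of opposite edges;
    the output sigmoid fires only where both rects are ~1, i.e.
    inside the rectangle [0.2, 0.8] x [0.2, 0.8]."""
    s = rect(x, 0.2, 0.8) + rect(y, 0.2, 0.8)
    return sigmoid(w_out * (s - 1.5))

print(tower(0.5, 0.5))  # inside the rectangle  -> ~1
print(tower(0.5, 0.9))  # outside the rectangle -> ~0
```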
Other functions of two variables
Using the rectangular impulse function of two variables, we can easily approximate any other function of two variables, much like the principle of integration.
Simulation of functions of n variables
The same principle applies; just imagine it! 😥
The problem
Why do we need neural networks when we have digital circuits and software algorithms?
Software programs built on top of digital circuits can simulate any function, so why invent artificial neural networks?
Reference software
For more content and an interactive version, please refer to the app:
Neural networks and deep learning