This is the second part of my notes on Andrew Ng's machine learning course, covering the principles and derivation of the forward propagation and back propagation algorithms in neural networks.
Fundamentals of Neural Networks
Introducing the concept
Artificial Neural Network, NN for short. A neural network is a mathematical model that simulates the structure of interconnected human neurons; the neurons are updated through forward and back propagation. In simple terms, a neural network is made up of a series of neural layers, each of which contains many neurons.
The neural network can be logically divided into three layers:
- Input Layer: the first layer; it receives the input features $x$.
- Output Layer: the last layer; it outputs the final prediction (hypothesis) $h$.
- Hidden Layers: the middle layers, which are not directly visible.
Features:
- Every neural network has input and output values
- How it is trained:
  - On massive data sets
  - Over thousands upon thousands of training iterations
  - By learning from mistakes: comparing the difference between the predicted answer and the true answer, and improving the prediction through back propagation
The following uses the simplest two-layer neural network as an example:
In the neural network above, the input feature vector is $x$, the weight parameter matrix is $W$, the bias parameter is $b$, and $a$ denotes the output of each neuron; the superscript in square brackets gives the layer index (the hidden layer is layer 1).
Formula:
Computational steps of neural network:
- First, for each node in the first layer of the network, compute $z^{[1]} = W^{[1]T}x + b^{[1]}$
- Then apply the activation function to get $a^{[1]} = g(z^{[1]})$
- Repeat the same two steps for the second layer to obtain the output $a^{[2]}$
- Finally, compute the loss $Loss(a^{[2]}, y)$ (a minimal NumPy sketch of these steps follows this list)
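As a concrete illustration, here is a minimal NumPy sketch of these four steps. The layer sizes, the random initialization, and the choice of sigmoid for the activation $g$ are assumptions made for this example; $W^{[1]}$ and $W^{[2]}$ are stored with one row per unit, so the transpose in the formulas is absorbed into the matrix layout.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Assumed sizes: 3 input features, 4 hidden units, 1 output unit
n_x, n_h, n_y = 3, 4, 1
rng = np.random.default_rng(0)
W1 = rng.standard_normal((n_h, n_x)) * 0.01   # W^[1]
b1 = np.zeros((n_h, 1))                       # b^[1]
W2 = rng.standard_normal((n_y, n_h)) * 0.01   # W^[2]
b2 = np.zeros((n_y, 1))                       # b^[2]

x = rng.standard_normal((n_x, 1))             # a single input feature vector
y = np.array([[1.0]])                         # its label

# Step 1: z^[1] for every node of the first layer
z1 = W1 @ x + b1
# Step 2: activation a^[1] = g(z^[1])
a1 = sigmoid(z1)
# Step 3: repeat for the second layer to get the output a^[2]
z2 = W2 @ a1 + b2
a2 = sigmoid(z2)
# Step 4: cross-entropy loss Loss(a^[2], y)
loss = -(y * np.log(a2) + (1 - y) * np.log(1 - a2))
print(loss.item())
```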
Vectorized calculation
Formula of the first layer:
Formula of the second layer:
Output layer:
This formula is very similar to the cost function in logistic regression.
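Written out from the definitions above (my reconstruction, following the standard per-layer form), these formulas are:

$$
z^{[1]} = W^{[1]T}x + b^{[1]}, \qquad a^{[1]} = g(z^{[1]})
$$

$$
z^{[2]} = W^{[2]T}a^{[1]} + b^{[2]}, \qquad a^{[2]} = \sigma(z^{[2]})
$$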
The forward propagation
The process of computing each layer's activation $a^{[l]}(z)$ step by step from the input, and finally obtaining the $Loss$ against the label value, is called Forward Propagation.
The forward propagation process of the above example is:
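Sketched as a chain (my own rendering, with the parameters each step uses written over the arrows):

$$
x \;\xrightarrow{\,W^{[1]},\,b^{[1]}\,}\; z^{[1]} \;\xrightarrow{\,g\,}\; a^{[1]} \;\xrightarrow{\,W^{[2]},\,b^{[2]}\,}\; z^{[2]} \;\xrightarrow{\,g\,}\; a^{[2]} \;\longrightarrow\; Loss(a^{[2]}, y)
$$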
Vectorization over multiple samples
With multiple samples, the input features are no longer a single vector but a matrix. The principle is essentially the same; the only caveat is that the matrix dimensions have to match.
Assuming there are $m$ training samples, the above formulas are updated as follows:
- For a single sample: $z^{[1](i)} = W^{[1]T}x^{(i)} + b^{[1]}$ (a vectorized NumPy sketch follows)
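A minimal NumPy sketch of the difference between looping over samples and stacking them column-wise into a matrix $X$; the sizes and the sigmoid activation are assumptions for the example:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Assumed sizes: 3 features, 4 hidden units, m = 5 samples
n_x, n_h, m = 3, 4, 5
rng = np.random.default_rng(0)
W1 = rng.standard_normal((n_h, n_x)) * 0.01
b1 = np.zeros((n_h, 1))
X = rng.standard_normal((n_x, m))     # column i is the sample x^(i)

# Per-sample loop: z^[1](i) = W^[1] x^(i) + b^[1]
A1_loop = np.zeros((n_h, m))
for i in range(m):
    z1_i = W1 @ X[:, [i]] + b1
    A1_loop[:, [i]] = sigmoid(z1_i)

# Vectorized over all m samples at once: Z^[1] = W^[1] X + b^[1]
Z1 = W1 @ X + b1                      # b^[1] broadcasts across the m columns
A1 = sigmoid(Z1)

print(np.allclose(A1, A1_loop))       # True: same result, no explicit loop
```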
The activation function
Why do you need a nonlinear activation function?
If the activation function is removed, or if the activation function is linear, then the composition of two linear functions is still linear. No matter how many layers the neural network has, all it computes is a linear function, and the hidden layers are useless.
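As a quick check with the two-layer notation above (no nonlinear activation between the layers):

$$
a^{[2]} = W^{[2]T}\left(W^{[1]T}x + b^{[1]}\right) + b^{[2]} = \left(W^{[2]T}W^{[1]T}\right)x + \left(W^{[2]T}b^{[1]} + b^{[2]}\right),
$$

which is again just a single linear function of $x$.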
Here are some common activation functions:
$\mathbf{Sigmoid}$ function
- The derivative $\sigma'(z) = \sigma(z)(1 - \sigma(z))$ is a very useful property
$\mathbf{tanh}$ function
- Obviously, the range is $[-1, 1]$.
$\mathbf{ReLU}$ function
$\mathbf{Leaky\ ReLU}$ function
Most widely used
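For reference, a small NumPy sketch of these activation functions and their derivatives; the 0.01 slope used for Leaky ReLU is a common default chosen for the example, not a value fixed by these notes:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def sigmoid_prime(z):               # sigma'(z) = sigma(z) * (1 - sigma(z))
    s = sigmoid(z)
    return s * (1 - s)

def tanh(z):
    return np.tanh(z)

def tanh_prime(z):                  # tanh'(z) = 1 - tanh(z)^2
    return 1 - np.tanh(z) ** 2

def relu(z):
    return np.maximum(0, z)

def relu_prime(z):
    return (z > 0).astype(float)

def leaky_relu(z, alpha=0.01):      # alpha: assumed slope for z < 0
    return np.where(z > 0, z, alpha * z)

def leaky_relu_prime(z, alpha=0.01):
    return np.where(z > 0, 1.0, alpha)

z = np.linspace(-3, 3, 7)
print(sigmoid(z), tanh(z), relu(z), leaky_relu(z), sep="\n")
```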
Neural network gradient descent mechanism
As we know, a neural network can contain multiple hidden layers, the neurons in each layer produce predictions, and the final error is computed at the output layer. So how do we optimize the final loss function $L(a^{[l]}, y)$?
Obviously, the gradient descent procedure used in traditional regression problems such as logistic regression cannot be applied directly here, because we need to account for the error layer by layer and optimize it layer by layer. Therefore, neural networks use the Back Propagation algorithm to optimize the error.
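The parameter update itself is still plain gradient descent; back propagation is only the way we obtain the gradients. With a learning rate $\alpha$ (symbol assumed here), every layer $l$ is updated as:

$$
W^{[l]} := W^{[l]} - \alpha\,\frac{\partial L}{\partial W^{[l]}}, \qquad b^{[l]} := b^{[l]} - \alpha\,\frac{\partial L}{\partial b^{[l]}}
$$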
Back propagation derivation
First, let’s review the loss function:
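For the binary output of this example, this is presumably the same cross-entropy loss as in logistic regression:

$$
L(a^{[2]}, y) = -\left(y\,\log a^{[2]} + (1 - y)\,\log\left(1 - a^{[2]}\right)\right)
$$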
We know that forward propagation computes the final value step by step from the input layer. Back propagation, as the name suggests, goes the other way: starting from the final loss function, it keeps taking partial derivatives backwards, layer by layer, towards the input. The way to do that is the chain rule.
From the previous section we know the forward propagation process:
Then the back propagation process is as follows:
Output layer:
The first layer (the hidden layer):
The input layer is not computed.
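Written out for the two-layer example with a sigmoid output (my reconstruction, using the shorthand $dz^{[l]}$ for $\partial L/\partial z^{[l]}$ and $\ast$ for element-wise multiplication):

$$
dz^{[2]} = a^{[2]} - y, \qquad dW^{[2]} = dz^{[2]}\,a^{[1]T}, \qquad db^{[2]} = dz^{[2]}
$$

$$
dz^{[1]} = W^{[2]T}dz^{[2]} \ast g'(z^{[1]}), \qquad dW^{[1]} = dz^{[1]}\,x^{T}, \qquad db^{[1]} = dz^{[1]}
$$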
We define the error of each layer as the vector $\delta^{(l)}$, where $l$ denotes the layer index and $L$ the total number of layers. From the above derivation we can get:
For $m$ samples:
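Presumably in the vectorized, averaged form (capital letters stack the $m$ samples column-wise, as in forward propagation):

$$
dZ^{[2]} = A^{[2]} - Y, \qquad dW^{[2]} = \frac{1}{m}\,dZ^{[2]}A^{[1]T}, \qquad db^{[2]} = \frac{1}{m}\sum_{i=1}^{m} dz^{[2](i)}
$$

$$
dZ^{[1]} = W^{[2]T}dZ^{[2]} \ast g'(Z^{[1]}), \qquad dW^{[1]} = \frac{1}{m}\,dZ^{[1]}X^{T}, \qquad db^{[1]} = \frac{1}{m}\sum_{i=1}^{m} dz^{[1](i)}
$$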
That is a fairly detailed derivation of back propagation; if you understand the chain rule, the principle of back propagation is easy to follow.
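Putting everything together, here is a minimal NumPy sketch of one gradient-descent iteration (forward pass, back propagation, parameter update) for the two-layer network. The layer sizes, the sigmoid activations, the random data, and the learning rate are all assumptions made for the example:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Assumed sizes: 3 features, 4 hidden units, 1 output, m = 8 samples
n_x, n_h, n_y, m = 3, 4, 1, 8
rng = np.random.default_rng(0)
W1 = rng.standard_normal((n_h, n_x)) * 0.01
b1 = np.zeros((n_h, 1))
W2 = rng.standard_normal((n_y, n_h)) * 0.01
b2 = np.zeros((n_y, 1))
X = rng.standard_normal((n_x, m))                 # columns are samples
Y = (rng.random((n_y, m)) > 0.5).astype(float)    # binary labels
alpha = 0.1                                       # assumed learning rate

for step in range(1000):
    # Forward propagation
    Z1 = W1 @ X + b1
    A1 = sigmoid(Z1)
    Z2 = W2 @ A1 + b2
    A2 = sigmoid(Z2)
    loss = -np.mean(Y * np.log(A2) + (1 - Y) * np.log(1 - A2))

    # Back propagation (chain rule, averaged over the m samples)
    dZ2 = A2 - Y
    dW2 = dZ2 @ A1.T / m
    db2 = dZ2.sum(axis=1, keepdims=True) / m
    dZ1 = (W2.T @ dZ2) * A1 * (1 - A1)            # sigma'(Z1) = A1 * (1 - A1)
    dW1 = dZ1 @ X.T / m
    db1 = dZ1.sum(axis=1, keepdims=True) / m

    # Gradient-descent update
    W1 -= alpha * dW1
    b1 -= alpha * db1
    W2 -= alpha * dW2
    b2 -= alpha * db2

print(loss)
```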