First, the concept of extreme learning machine

Extreme Learning Machine (ELM) is an algorithm for training single hidden layer neural networks, proposed by Huang Guangbin (Guang-Bin Huang).

The most distinctive feature of ELM is its speed: for traditional neural networks, especially single hidden layer feedforward neural networks (SLFNs), it trains much faster than traditional learning algorithms while maintaining comparable learning accuracy.

Second, the principle of extreme learning machine

ELM is a fast learning algorithm. For a single hidden layer neural network, ELM randomly initializes the input weights and hidden biases and then solves directly for the corresponding output weights.

(Adapted from Huang Guangbin's lecture slides)

For a single hidden layer neural network (see Figure 1), assume there are $N$ arbitrary samples $(X_i, t_i)$, where $X_i = [x_{i1}, x_{i2}, \ldots, x_{in}]^T \in \mathbb{R}^n$ and $t_i = [t_{i1}, t_{i2}, \ldots, t_{im}]^T \in \mathbb{R}^m$. A single hidden layer neural network with $L$ hidden layer nodes can be expressed as

$$\sum_{i=1}^{L} \beta_i \, g(W_i \cdot X_j + b_i) = o_j, \qquad j = 1, \ldots, N$$

where $g(x)$ is the activation function, $W_i = [w_{i,1}, w_{i,2}, \ldots, w_{i,n}]^T$ is the input weight vector, $\beta_i = [\beta_{i,1}, \beta_{i,2}, \ldots, \beta_{i,m}]^T$ is the output weight vector, and $b_i$ is the bias of the $i$-th hidden layer unit. $W_i \cdot X_j$ denotes the inner product of $W_i$ and $X_j$.
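To make the notation concrete, here is a minimal MATLAB sketch (my own illustration, not from the original post; the layer sizes and the sigmoid activation are assumptions) that evaluates this sum for one input sample:

% Forward pass of an SLFN for one sample X_j; all data here is made up.
n = 3; L = 30; m = 1;              % assumed layer sizes
W = 2*rand(L, n) - 1;              % input weights W_i, one per row
b = rand(L, 1);                    % hidden layer biases b_i
beta = rand(L, m);                 % output weights beta_i (normally solved for)
Xj = rand(n, 1);                   % one input sample
g = @(x) 1 ./ (1 + exp(-x));       % assumed activation: standard sigmoid
oj = beta' * g(W*Xj + b);          % o_j = sum_i beta_i * g(W_i . X_j + b_i)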

The goal of single hidden layer neural network learning is to minimize the output error, which can be expressed as

$$\sum_{j=1}^{N} \| o_j - t_j \| = 0,$$

that is, there exist $\beta_i$, $W_i$, and $b_i$ such that

$$\sum_{i=1}^{L} \beta_i \, g(W_i \cdot X_j + b_i) = t_j, \qquad j = 1, \ldots, N.$$

This can be represented in matrix form as

$$H \beta = T,$$

where $H$ is the output matrix of the hidden layer nodes, $\beta$ is the output weight matrix, and $T$ is the expected output:

$$H(W_1, \ldots, W_L, b_1, \ldots, b_L, X_1, \ldots, X_N) = \begin{bmatrix} g(W_1 \cdot X_1 + b_1) & \cdots & g(W_L \cdot X_1 + b_L) \\ \vdots & \ddots & \vdots \\ g(W_1 \cdot X_N + b_1) & \cdots & g(W_L \cdot X_N + b_L) \end{bmatrix}_{N \times L},$$

$$\beta = \begin{bmatrix} \beta_1^T \\ \vdots \\ \beta_L^T \end{bmatrix}_{L \times m}, \qquad T = \begin{bmatrix} t_1^T \\ \vdots \\ t_N^T \end{bmatrix}_{N \times m}.$$
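To see how $H$ is laid out, here is a small sketch (again my own, with toy dimensions) that builds the $N \times L$ matrix entry by entry:

% Build H(j, i) = g(W_i . X_j + b_i) for N samples and L hidden nodes.
N = 100; n = 3; L = 30;
X = rand(n, N);                    % samples stored as columns
W = 2*rand(L, n) - 1;              % random input weights
b = rand(L, 1);                    % random hidden biases
g = @(x) 1 ./ (1 + exp(-x));
H = zeros(N, L);
for j = 1:N
    for i = 1:L
        H(j, i) = g(W(i, :) * X(:, j) + b(i));
    end
end
% Vectorized equivalent (R2016b+ implicit expansion): H = g(X'*W' + b')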

In order to train the single hidden layer neural network, we hope to obtain $\hat{W}_i$, $\hat{b}_i$, and $\hat{\beta}$ such that

$$\| H(\hat{W}_1, \ldots, \hat{W}_L, \hat{b}_1, \ldots, \hat{b}_L) \hat{\beta} - T \| = \min_{W_i, b_i, \beta} \| H(W_1, \ldots, W_L, b_1, \ldots, b_L) \beta - T \|,$$

where $i = 1, \ldots, L$, which is equivalent to minimizing the loss function

$$E = \sum_{j=1}^{N} \left\| \sum_{i=1}^{L} \beta_i \, g(W_i \cdot X_j + b_i) - t_j \right\|^2.$$

Some traditional algorithms based on gradient descent can be used to solve such problems, but gradient-based learning algorithms need to adjust all parameters during every iteration. In the ELM algorithm, once the input weights $W_i$ and the hidden layer biases $b_i$ are randomly determined, the hidden layer output matrix $H$ is uniquely determined. Training the single hidden layer network is thus transformed into solving the linear system $H \beta = T$, and the output weights can be determined as

$$\hat{\beta} = H^{\dagger} T,$$

where $H^{\dagger}$ is the Moore-Penrose generalized inverse of the matrix $H$. Moreover, it can be proved that among all least-squares solutions, $\hat{\beta}$ has the smallest norm and is unique.
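Putting the pieces together, the whole training step is a single pseudoinverse solve. A minimal self-contained sketch (toy regression data; normalization and error analysis omitted):

% ELM training reduces to solving H * beta = T once W and b are fixed.
N = 100; n = 3; L = 30;
X = rand(n, N); T = sum(X, 1)';    % toy regression data, N x 1 target
W = 2*rand(L, n) - 1; b = rand(L, 1);
H = 1 ./ (1 + exp(-(X'*W' + b'))); % N x L hidden layer output matrix
beta_hat = pinv(H) * T;            % minimum-norm least-squares solution
Y = H * beta_hat;                  % fitted outputs, compare with T

Note that pinv(H)*T returns the minimum-norm solution; the backslash solve H\T would also minimize the residual, but it does not guarantee the smallest norm when H is rank-deficient.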

Three, the implementation steps of extreme learning machine

Step 1: Get the data.

Step 2: Preprocess the data, generally with min-max normalization, to eliminate the influence of the data's differing dimensions and orders of magnitude (see the mapminmax sketch after this list).

Step 3: Train the ELM model and compute the connection weights between the hidden layer and the output layer.

Step 4: Use the test input data to make predictions with the trained model.

Step 5: Perform error analysis and plot the predicted values against the actual values.
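For Step 2, here is a sketch of min-max normalization with MATLAB's mapminmax, the function the post mentions (the data here is invented; in the demo code below, normalization is skipped because the generated data shares a single scale):

% mapminmax rescales each row (feature) to [-1, 1] by default.
P = randi([1 100], 3, 50);             % raw training inputs (toy data)
[Pn, ps] = mapminmax(P);               % Pn: normalized data, ps: settings
P1 = randi([1 100], 3, 10);            % raw test inputs
P1n = mapminmax('apply', P1, ps);      % reuse the SAME settings on test data
% Predicted outputs can be mapped back with mapminmax('reverse', ..., ps).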

Four, part of the code

clear; close all; clc;
warning off
rng('default')

%% Build data
P = randi([1 10], 3, 100);     % training input samples (3 features x 100 samples)
T = sum(P, 1);                 % training output samples
P1 = randi([1 10], 3, 10);     % test input samples
T1 = sum(P1, 1);               % test output samples
% Normalization: mapminmax could be used here, but the generated data has
% no dimensional or order-of-magnitude differences, so it is skipped.

%% ELM training process
N = 3;                         % number of input layer neurons
L = 30;                        % number of hidden layer neurons
M = 1;                         % number of output layer neurons
IW = 2*rand(L, N) - 1;         % input-to-hidden connection weights, range (-1, 1)
B = rand(L, 1);                % hidden layer biases, range (0, 1)
tempH = IW * P + B;            % hidden layer pre-activation (implicit expansion)
h = 1 ./ (1 + exp(tempH));     % mapping function G(x) = 1/(1+exp(x)), hidden outputs
LW = pinv(h') * T';            % hidden-to-output weights via pseudoinverse

%% ELM prediction process
tempH1 = IW * P1 + B;          % hidden layer pre-activation for test data
h1 = 1 ./ (1 + exp(tempH1));   % hidden layer output for test data
ELM_OUT = (h1' * LW)';         % predicted outputs

%% Prediction error analysis
error = T1 - ELM_OUT;          % error = actual value - predicted value
disp('  No.   Expected   Predicted   Error')
disp([1:10; T1; ELM_OUT; error]')
disp('Sum of squared errors:')
sse = sum(error.^2)
disp('Mean squared error:')
mse = mean(error.^2)
disp('Mean absolute percentage error:')
mape = mean(abs(error) ./ T1)

figure
hold on
plot(1:10, T1, 'bo:', 'linewidth', 1.0)
plot(1:10, ELM_OUT, 'r*-.', 'linewidth', 1.0)
legend('expected', 'predicted')
xlabel('test sample number')
ylabel('index values')
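Two caveats on the code above. First, it maps the hidden layer with G(x) = 1/(1+exp(x)) rather than the more common sigmoid 1/(1+exp(-x)); since the input weights are drawn symmetrically from (-1, 1), either sign convention still yields a valid random feature map. Second, the line tempH = IW * P + B relies on MATLAB's implicit expansion (R2016b or later); on older versions, use repmat(B, 1, size(P, 2)) instead.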

