I. LSSVM introduction

LSSVM features:
1) Like SVM, it starts from a dual formulation, but it replaces the QP problem of standard SVM with a set of linear equations (a consequence of the equality constraints in the optimization problem), which simplifies the solution process and remains applicable to classification and regression in high-dimensional input spaces.
2) It essentially amounts to solving a linear matrix equation, and it connects to Gaussian processes, regularization networks, and Fisher discriminant analysis.
3) Sparse approximation (to overcome the loss of sparseness of the algorithm) and robust regression (robust statistics) can be used.
4) Bayesian inference can be applied.
5) It can be extended to unsupervised learning: kernel PCA or density clustering.
6) It can be extended to recurrent neural networks.

LSSVM for classification tasks

1) Optimization objectives
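In its standard form, the LSSVM primal problem for binary classification is

$\min_{w,b,e}\; J(w,e) = \frac{1}{2}w^{T}w + \frac{\gamma}{2}\sum_{i=1}^{N} e_i^{2}$

subject to $y_i\left(w^{T}\varphi(x_i)+b\right) = 1 - e_i,\; i=1,\dots,N$,

where $\varphi(\cdot)$ is the feature map, $e_i$ are error variables, and $\gamma$ is the regularization parameter.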



2) Lagrange multiplier method
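The corresponding Lagrangian, in its usual form, is

$L(w,b,e;\alpha) = J(w,e) - \sum_{i=1}^{N}\alpha_i\left\{ y_i\left(w^{T}\varphi(x_i)+b\right) - 1 + e_i \right\}$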



where $\alpha_i$ are the Lagrange multipliers, which also serve as the support values.

3) Solve the optimization conditions
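Setting the partial derivatives of the Lagrangian to zero gives the usual optimality conditions:

$\partial L/\partial w = 0 \;\Rightarrow\; w = \sum_{i}\alpha_i y_i \varphi(x_i)$
$\partial L/\partial b = 0 \;\Rightarrow\; \sum_{i}\alpha_i y_i = 0$
$\partial L/\partial e_i = 0 \;\Rightarrow\; \alpha_i = \gamma e_i$
$\partial L/\partial \alpha_i = 0 \;\Rightarrow\; y_i\left(w^{T}\varphi(x_i)+b\right) - 1 + e_i = 0$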



4) Solve the dual problem (as in standard SVM, $w$ and $e$ are eliminated rather than computed explicitly)
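Eliminating $w$ and $e$ from the optimality conditions leads to the well-known linear system in $\alpha$ and $b$:

$\begin{bmatrix} 0 & y^{T} \\ y & \Omega + \gamma^{-1}I \end{bmatrix}\begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ \mathbf{1} \end{bmatrix}$

where $\Omega_{ij} = y_i y_j K(x_i,x_j)$ and $K(\cdot,\cdot)$ is the kernel function.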



LSSVM obtains the optimization variables $\alpha$ and $b$ by solving the above linear system, which is simpler than solving a QP problem.
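As an illustration only, a minimal MATLAB sketch of solving this linear system with an RBF kernel could look as follows; the variable names X, y, gam and sig2 are assumptions for the example, not part of the toolbox code in the source-code section below.

% Minimal sketch: solve the LSSVM classification dual system directly.
% Assumes X (n x d training inputs), y (n x 1 labels in {-1,+1}),
% regularization gam and RBF bandwidth sig2 are already defined.
n     = size(X,1);
sqX   = sum(X.^2, 2);
D2    = bsxfun(@plus, sqX, sqX') - 2*(X*X');  % squared pairwise distances
K     = exp(-D2/(2*sig2));                    % RBF kernel matrix
Omega = (y*y').*K;
A     = [0, y'; y, Omega + eye(n)/gam];       % KKT matrix of the dual system
sol   = A \ [0; ones(n,1)];                   % one linear solve instead of a QP
b     = sol(1);
alpha = sol(2:end);
% decision for a new point x: sign(sum(alpha.*y.*kx) + b), where kx is the
% vector of kernel values between x and the training points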

5) Differences with standard SVM

A. Equality constraints are used instead of inequality constraints.
B. Because an equality constraint is attached to every sample point, no sparsity is enforced on the slack (error) variables, which is an important reason why LSSVM loses sparseness.
C. The problem is further simplified because only equality constraints and a least-squares problem need to be solved.

LSSVM for regression tasks

1) Problem description
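In the regression setting the primal problem is usually written as

$\min_{w,b,e}\; J(w,e) = \frac{1}{2}w^{T}w + \frac{\gamma}{2}\sum_{i=1}^{N} e_i^{2}$ subject to $y_i = w^{T}\varphi(x_i) + b + e_i$,

and the dual is again a linear system, $\begin{bmatrix} 0 & \mathbf{1}^{T} \\ \mathbf{1} & \Omega + \gamma^{-1}I \end{bmatrix}\begin{bmatrix} b \\ \alpha \end{bmatrix} = \begin{bmatrix} 0 \\ y \end{bmatrix}$, with $\Omega_{ij} = K(x_i,x_j)$.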



The disadvantages of LSSVM

Notice that $\alpha_i = \gamma e_i$, and in general $e_i \neq 0$, so $\alpha_i \neq 0$ for every training sample. Therefore all training samples act as support vectors, and the sparseness of the original SVM is lost. However, the training set can still be thinned by "pruning" based on the support values; this step can be regarded as a sparse-approximation operation.
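As a rough illustration of such pruning, the sketch below repeatedly drops the training points with the smallest $|\alpha_i|$ and retrains; solve_lssvm is a hypothetical helper standing in for retraining the model, and the 5% drop rate is an arbitrary choice for the example.

% Sketch: prune points with the smallest |alpha| and retrain (sparse approximation).
keep = true(n,1);
for it = 1:5
    [alpha, b] = solve_lssvm(X(keep,:), y(keep), gam, sig2);  % hypothetical retraining helper
    [~, idx]   = sort(abs(alpha));                            % least informative points first
    drop       = idx(1:ceil(0.05*numel(alpha)));              % drop 5% of kept points per pass
    keepIdx    = find(keep);
    keep(keepIdx(drop)) = false;                              % remove them from the training set
end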

II. Source code

%% Initialization
clc
close all
clear
format long
tic
%% Import data
data = xlsread('1.xlsx');
[row, col] = size(data);
x = data(:, 1:col-1);
y = data(:, col);
set = 1;                     % number of test samples
row1 = row - set;            % number of training samples
train_x = x(1:row1,:);
train_y = y(1:row1,:);
test_x = x(row1+1:row,:);    % prediction inputs
test_y = y(row1+1:row,:);    % prediction outputs
train_x = train_x';
train_y = train_y';
test_x = test_x';
test_y = test_y';
%% Data normalization
[train_x, minx, maxx, train_yy, miny, maxy] = premnmx(train_x, train_y);
test_x = tramnmx(test_x, minx, maxx);
train_x = train_x';
train_yy = train_yy';
train_y = train_y';
test_x = test_x';
test_y = test_y';
%% Parameter initialization
eps = 10^(-6);
%% LSSVM parameters
type = 'f';
kernel = 'RBF_kernel';
proprecess='proprecess';
lb = [0.01 0.02];      % lower bounds of parameters c and g
ub = [1000 100];       % upper bounds of parameters c and g
dim = 2;               % number of optimized parameters
SearchAgents_no = 20;  % number of search agents
Max_iter = 100;        % maximum number of iterations
n=10;      % Population size, typically 10 to 25
A=0.25;      % Loudness  (constant or decreasing)
r=0.5;      % Pulse rate (constant or decreasing)
% This frequency range determines the scalings
Qmin=0;         % Frequency minimum
Qmax=2;         % Frequency maximum
% Iteration parameters
tol = 10^(-10);    % Stop tolerance
Leader_pos=zeros(1,dim);
Leader_score=inf; %change this to -inf for maximization problems
%Initialize the positions of search agents
for i=1:SearchAgents_no
    Positions(i,1) =ceil(rand(1)*(ub(1)-lb(1))+lb(1));
    Positions(i,2) =ceil(rand(1)*(ub(2)-lb(2))+lb(2));
    Fitness(i)=Fun(Positions(i,:),train_x,train_yy,type,kernel,proprecess,miny,maxy,train_y,test_x,test_y);
    v(i,:) = rand(1,dim);   % initial velocities for the bat algorithm
end
[fmin,I]=min(Fitness);
best=Positions(I,:);
Convergence_curve=zeros(1,Max_iter);
t = 0;   % loop counter
% Start the iterations -- Bat Algorithm
% (main optimization loop omitted)

%% Plot the fitness curve
plot(Convergence_curve, 'LineWidth', 2);
title(['Fitness Curve of Gray Wolf Optimization Algorithm ', '(parameters c1=', num2str(Leader_pos(1)), ', c2=', num2str(Leader_pos(2)), ', max iterations=', num2str(Max_iter), ')'], 'FontSize', 13);
xlabel('Evolutionary generation'); ylabel('Error fitness');

bestc = Leader_pos(1);
bestg = Leader_pos(2);

RD = RD';   % RD and D are produced inside the omitted loop body
disp(['SVM prediction error optimized by Grey Wolf optimization algorithm = ', num2str(D)])
 
% figure
% plot(test_predict,':og')
% hold on
% plot(test_y,'-*')
% legend('Predicted output','Expected output')
% title('Network predictive output','fontsize',12)
% ylabel('Function output','fontsize',12)
% xlabel('samples','fontsize',12)
figure
plot(train_predict,':og')
hold on
plot(train_y,'-*')
legend('Predicted output','Expected output')
title('Grey Wolf Optimized SVM Network Prediction Output','fontsize',12)
ylabel('Function output','fontsize',12)
xlabel('samples','fontsize',12)
toc   % calculation time

III. Operation results



IV. Remarks

MATLAB version: R2014a