I. Introduction

Particle Swarm Optimization (PSO) is an evolutionary computation technique that grew out of the study of the predation behavior of bird flocks. Its basic idea is to find the optimal solution through cooperation and information sharing among the individuals of a swarm. The advantages of PSO are that it is simple and easy to implement and has few parameters to tune. It has been widely used in function optimization, neural network training, fuzzy system control, and other application areas of genetic algorithms.

Particle swarm optimization simulates a bird in a flock as a massless particle with only two properties: velocity and position. Velocity represents how fast the particle moves, and position represents the direction of its movement. Each particle searches the space for the optimal solution independently and records it as its current individual extremum (PBest); it shares this value with the other particles in the swarm, and the best of all individual extrema becomes the current global optimum (GBest) of the whole swarm. All particles then adjust their velocity and position based on the individual extremum they have found and the global optimum shared by the whole swarm.

[GIF in the original post: animation of the PSO search process.]

2. Update rule

PSO is initialized with a group of random particles (random solutions) and then finds the optimal solution by iteration. In each iteration, a particle updates itself by tracking the two "extreme values" (PBest, GBest); after finding them, it updates its velocity and position with the formulas below:

$$v_{id} = v_{id} + c_1 r_1 \,(p_{id} - x_{id}) + c_2 r_2 \,(p_{gd} - x_{id}) \tag{1}$$

where $x_{id}$ and $v_{id}$ are the position and velocity of particle $i$ in dimension $d$, $p_{id}$ is the particle's own best position (PBest), $p_{gd}$ is the swarm's best position (GBest), $c_1$ and $c_2$ are acceleration coefficients, and $r_1, r_2$ are uniform random numbers in $[0,1]$.

The first part of formula (1) is the memory term, which represents the influence of the magnitude and direction of the previous velocity. The second part is the self-cognition term, a vector pointing from the current point to the particle's own best point, indicating that the particle's motion draws on its own experience. The third part is the group-cognition term, a vector from the current point to the best point of the swarm, reflecting cooperation and knowledge sharing among particles. A particle is thus guided both by its own experience and by the best experience of its companions. Introducing an inertia weight $\omega$ into the memory term gives the standard form of PSO:

$$v_{id} = \omega\, v_{id} + c_1 r_1 \,(p_{id} - x_{id}) + c_2 r_2 \,(p_{gd} - x_{id}) \tag{2}$$

$$x_{id} = x_{id} + v_{id} \tag{3}$$

Formulas (2) and (3) are regarded as the standard PSO algorithm.

3. Process and pseudocode of the PSO algorithm
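The flowchart and pseudocode shown here in the original post were images. As a substitute, here is a minimal MATLAB sketch of the full PSO loop applied to a toy one-dimensional problem, using update rules (2) and (3); every name and parameter value in it (f, w, c1, c2, pbest, gbest, and so on) is an illustrative assumption, not part of the source code below.

% A minimal PSO sketch (illustrative only): minimize f(x) = x.^2 on [-10,10]
% with update rules (2) and (3). All names and values here are assumptions.
f = @(x) x.^2;                         % objective function
n = 20; maxgen = 100;                  % swarm size and number of generations
w = 0.8; c1 = 1.5; c2 = 1.7;           % inertia weight, acceleration coefficients
x = -10 + 20*rand(n,1);                % random initial positions
v = zeros(n,1);                        % initial velocities
pbest = x; pbest_val = f(x);           % individual extrema (PBest)
[gbest_val,idx] = min(pbest_val);      % global extremum (GBest)
gbest = x(idx);
for gen = 1:maxgen
    % formula (2): memory term + self-cognition term + group-cognition term
    v = w*v + c1*rand(n,1).*(pbest - x) + c2*rand(n,1).*(gbest - x);
    % formula (3): move every particle
    x = x + v;
    val = f(x);
    improved = val < pbest_val;        % update the individual extrema
    pbest(improved) = x(improved);
    pbest_val(improved) = val(improved);
    [cur,idx] = min(pbest_val);        % update the global extremum
    if cur < gbest_val
        gbest = pbest(idx); gbest_val = cur;
    end
end
fprintf('best x = %.4f, f(x) = %.6f\n', gbest, gbest_val);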

II. Source code

function chapter_PSO
close all;
clear;
clc;
format compact;

%% Data extraction
% Load the wine dataset: wine is a 178x13 attribute matrix and wine_labels a
% 178x1 vector of class labels (the .mat file name depends on your local copy)
load wine.mat;

% Plot the class labels and each of the 13 attributes of the test data
figure
subplot(3,5,1);
hold on
for run = 1:178
    plot(run,wine_labels(run),'*');
end
xlabel('samples','FontSize',10);
ylabel('class label','FontSize',10);
title('class','FontSize',10);
for run = 2:14
    subplot(3,5,run);
    hold on;
    str = ['attrib ',num2str(run-1)];
    for i = 1:178
        plot(i,wine(i,run-1),'*');
    end
    xlabel('samples','FontSize',10);
    ylabel('attribute value','FontSize',10);
    title(str,'FontSize',10);
end

%% Select the training and test sets
% Training set: samples 1-30 of class 1, 60-95 of class 2, 131-153 of class 3
train_wine = [wine(1:30,:); wine(60:95,:); wine(131:153,:)];
train_wine_labels = [wine_labels(1:30); wine_labels(60:95); wine_labels(131:153)];
% Test set: samples 31-59 of class 1, 96-130 of class 2, 154-178 of class 3
test_wine = [wine(31:59,:); wine(96:130,:); wine(154:178,:)];
test_wine_labels = [wine_labels(31:59); wine_labels(96:130); wine_labels(154:178)];

%% Data preprocessing
% Normalize the training and test sets to [0,1]
[mtrain,ntrain] = size(train_wine);
[mtest,ntest] = size(test_wine);
dataset = [train_wine;test_wine];
% mapminmax is MATLAB's min-max normalization function
[dataset_scale,ps] = mapminmax(dataset',0,1);
dataset_scale = dataset_scale';
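% Note: the settings structure 'ps' returned by mapminmax can be reused to
% scale new samples consistently with the training data, e.g. (new_wine is
% a hypothetical matrix of new samples):
%   new_scale = mapminmax('apply', new_wine', ps)';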

train_wine = dataset_scale(1:mtrain,:);
test_wine = dataset_scale( (mtrain+1):(mtrain+mtest),: );

%% Select the best SVM parameters c & g with PSO
[bestacc,bestc,bestg] = psoSVMcgForClass(train_wine_labels,train_wine);
% Print the selection result
disp('Print selection results');
str = sprintf('Best Cross Validation Accuracy = %g%% Best c = %g Best g = %g',bestacc,bestc,bestg);
disp(str);

%% Train the SVM with the best parameters
cmd = ['-c ',num2str(bestc),' -g ',num2str(bestg)];
model = svmtrain(train_wine_labels,train_wine,cmd);

%% SVM prediction
[predict_label,accuracy] = svmpredict(test_wine_labels,test_wine,model);
% Print the test set classification accuracy
total = length(test_wine_labels);
right = sum(predict_label == test_wine_labels);
disp('Printing test set classification accuracy');
str = sprintf('Accuracy = %g%% (%d/%d)',accuracy(1),right,total);
disp(str);

%% Actual vs. predicted classification of the test set
% As the figure shows, only three test samples are misclassified
figure;
hold on;
plot(test_wine_labels,'o');
plot(predict_label,'r*');
xlabel('test set sample','FontSize',12);
ylabel('class label','FontSize',12);
legend('Actual Test Set Classification','Predictive Test Set Classification');
title('Actual and predicted classification of the test set','FontSize',12);
grid on;
snapnow;

%% Subfunction psoSVMcgForClass.m
function [bestCVaccuarcy,bestc,bestg,pso_option] = psoSVMcgForClass(train_label,train,pso_option)
% Parameter initialization
if nargin == 2
    pso_option = struct('c1',1.5,'c2',1.7,'maxgen',200,'sizepop',20, ...
        'k',0.6,'wV',1,'wP',1,'v',5, ...
        'popcmax',10^2,'popcmin',10^(-1),'popgmax',10^3,'popgmin',10^(-2));
end
% c1: initially 1.5; PSO parameter, local search capability
% c2: initially 1.7; PSO parameter, global search capability
% maxgen: initially 200; maximum number of generations
% sizepop: initially 20; population size
% k: initially 0.6 (k in [0.1,1.0]); relation between velocity and position (V = k*X)
% wV: initially 1 (wV best in [0.8,1.2]); coefficient of the velocity in the velocity update formula
% wP: initially 1; coefficient of the velocity in the position update formula
% v: initially 5; number of folds for SVM cross validation
% popcmax: initially 100; upper bound of SVM parameter c
% popcmin: initially 0.1; lower bound of SVM parameter c
% popgmax: initially 1000; upper bound of SVM parameter g
% popgmin: initially 0.01; lower bound of SVM parameter g

Vcmax = pso_option.k*pso_option.popcmax;
Vcmin = -Vcmax;
Vgmax = pso_option.k*pso_option.popgmax;
Vgmin = -Vgmax;
eps = 10^(-3);

%% Generate the initial particles and velocities
for i = 1:pso_option.sizepop
    % Randomly generate positions and velocities
    pop(i,1) = (pso_option.popcmax-pso_option.popcmin)*rand+pso_option.popcmin;
    pop(i,2) = (pso_option.popgmax-pso_option.popgmin)*rand+pso_option.popgmin;
    V(i,1) = Vcmax*rands(1,1);
    V(i,2) = Vgmax*rands(1,1);

    % Compute the initial fitness: the negated cross-validation accuracy,
    % so that lower fitness is better
    cmd = ['-v ',num2str(pso_option.v),' -c ',num2str(pop(i,1)),' -g ',num2str(pop(i,2))];
    fitness(i) = svmtrain(train_label, train, cmd);
    fitness(i) = -fitness(i);
end

% Find the extreme values and extreme points
[global_fitness,bestindex] = min(fitness);   % global extremum
local_fitness = fitness;                     % initial individual extrema
global_x = pop(bestindex,:);                 % global extreme point
local_x = pop;                               % initial individual extreme points

% Average fitness of the population in each generation
avgfitness_gen = zeros(1,pso_option.maxgen);

%% Iterative optimization
for i = 1:pso_option.maxgen
    
    for j = 1:pso_option.sizepop
        % Velocity update
        V(j,:) = pso_option.wV*V(j,:) + pso_option.c1*rand*(local_x(j,:) - pop(j,:)) ...
            + pso_option.c2*rand*(global_x - pop(j,:));
        if V(j,1) > Vcmax
            V(j,1) = Vcmax;
        end
        if V(j,1) < Vcmin
            V(j,1) = Vcmin;
        end
        if V(j,2) > Vgmax
            V(j,2) = Vgmax;
        end
        if V(j,2) < Vgmin
            V(j,2) = Vgmin;
        end

        % Position update
        pop(j,:) = pop(j,:) + pso_option.wP*V(j,:);
        if pop(j,1) > pso_option.popcmax
            pop(j,1) = pso_option.popcmax;
        end
        if pop(j,1) < pso_option.popcmin
            pop(j,1) = pso_option.popcmin;
        end
        if pop(j,2) > pso_option.popgmax
            pop(j,2) = pso_option.popgmax;
        end
        if pop(j,2) < pso_option.popgmin
            pop(j,2) = pso_option.popgmin;
        end

        % Adaptive particle mutation
        if rand > 0.5
            k=ceil(2*rand);
            if k == 1
                pop(j,k) = (20-1)*rand+1;
            end
            if k == 2
                pop(j,k) = (pso_option.popgmax-pso_option.popgmin)*rand + pso_option.popgmin;
            end
        end

        % Fitness evaluation: negated cross-validation accuracy
        cmd = ['-v ',num2str(pso_option.v),' -c ',num2str(pop(j,1)),' -g ',num2str(pop(j,2))];
        fitness(j) = svmtrain(train_label, train, cmd);
        fitness(j) = -fitness(j);

        % Also train a model with the current parameters
        cmd_temp = ['-c ',num2str(pop(j,1)),' -g ',num2str(pop(j,2))];
        model = svmtrain(train_label, train, cmd_temp);

        if fitness(j) >= -65
            continue;
        end
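The listing above ends partway through the optimization loop, as it did in the original post. For reference, here is a hedged sketch of how psoSVMcgForClass is typically called with a custom pso_option; it assumes the libsvm MATLAB interface (svmtrain/svmpredict) is on the path and that train_wine and train_wine_labels were built as above. The option values are illustrative choices, not the post's.

% Illustrative call with custom options (smaller swarm and fewer generations
% for a quicker run); the values here are assumptions, not from the post.
pso_option = struct('c1',1.5,'c2',1.7,'maxgen',50,'sizepop',10, ...
    'k',0.6,'wV',1,'wP',1,'v',5, ...
    'popcmax',10^2,'popcmin',10^(-1),'popgmax',10^3,'popgmin',10^(-2));
[bestacc,bestc,bestg] = psoSVMcgForClass(train_wine_labels,train_wine,pso_option);
fprintf('CV accuracy = %.2f%%, c = %.4f, g = %.4f\n',bestacc,bestc,bestg);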

III. Operation results

IV. Remarks

MATLAB version: R2014a