I. Introduction

1. Concept of particle swarm optimization

Particle swarm optimization (PSO) is an evolutionary computation technique that originated from the study of the predation behavior of bird flocks. The basic idea of the algorithm is to find the optimal solution through cooperation and information sharing among the individuals in a group. Its advantages are simplicity and ease of implementation, with few parameters to adjust. It has been widely used in function optimization, neural network training, fuzzy system control, and other areas where genetic algorithms are applied.

2. Analysis of particle swarm optimization

2.1 Basic Ideas

Particle swarm optimization (PSO) simulates a bird in a flock as a massless particle with only two properties: velocity and position. Velocity represents how fast the particle moves, and position represents the direction in which it moves. Each particle searches for the optimal solution independently in the search space and records it as its current individual extreme value (pBest); this value is shared with the other particles in the swarm, and the best of all individual extreme values becomes the current global optimal solution (gBest) of the whole swarm. All particles then adjust their velocity and position according to their own current individual extreme value and the current global optimal solution shared by the swarm.
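Concretely, a swarm can be stored as a few arrays with one row per particle. The MATLAB sketch below only illustrates this layout; the array names (X, V, pbest, gbest) and the sizes are assumptions and do not come from the source code in section II.

% Illustrative data layout for a swarm of sizepop particles in a D-dimensional search space
D = 5; sizepop = 20;                  % example problem dimension and swarm size
X = 10*rand(sizepop, D) - 5;          % positions, one row per particle
V = 2*rand(sizepop, D) - 1;           % velocities, one row per particle
pbest = X;                            % best position found so far by each particle (pBest)
fitnesspbest = zeros(sizepop, 1);     % fitness of each individual best (filled in after evaluating the objective)
gbest = X(1, :);                      % best position found by the whole swarm (gBest); placeholder until evaluation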



2.2 Update Rules

PSO is initialized with a group of random particles (random solutions) and then searches for the optimal solution iteratively. In each iteration, every particle updates itself by tracking two “extreme values”: its own best position found so far (pBest) and the best position found by the whole swarm (gBest). After these two best values are found, the particle updates its velocity and position with the following formulas:

v_i = v_i + c1 * rand() * (pbest_i - x_i) + c2 * rand() * (gbest - x_i)        (1)
x_i = x_i + v_i

where x_i and v_i are the position and velocity of particle i, rand() is a random number in (0, 1), and c1 and c2 are learning factors, usually c1 = c2 = 2.
The first part of formula (1) is the memory term, which carries the influence of the magnitude and direction of the previous velocity. The second part is the self-cognition term, a vector pointing from the current position to the particle's own best position, meaning that this part of the motion comes from the particle's own experience. The third part is the group-cognition term, a vector pointing from the current position to the best position found by the whole swarm, reflecting cooperation and knowledge sharing among particles: each particle moves according to its own experience and the best experience of its companions. Introducing an inertia weight w into the velocity update of formula (1) yields the standard form of PSO:

v_i = w * v_i + c1 * rand() * (pbest_i - x_i) + c2 * rand() * (gbest - x_i)     (2)
x_i = x_i + v_i                                                                 (3)

Formulas (2) and (3) are regarded as the standard PSO algorithm.
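Written as code, one update of particle i maps term by term onto formula (2). The snippet below continues the data-layout sketch from section 2.1; the parameter values and the intermediate names memory, cognition and social are only illustrative.

% One velocity/position update for particle i (formulas (2) and (3))
i = 1;                                              % index of the particle being updated (example)
w = 0.8; c1 = 2; c2 = 2;                            % inertia weight and learning factors (example values)
memory    = w  * V(i,:);                            % memory term: previous velocity
cognition = c1 * rand * (pbest(i,:) - X(i,:));      % self-cognition term: pull toward the particle's own best
social    = c2 * rand * (gbest      - X(i,:));      % group-cognition term: pull toward the swarm's best
V(i,:) = memory + cognition + social;               % formula (2)
X(i,:) = X(i,:) + V(i,:);                           % formula (3)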

3. Process and pseudocode of PSO algorithm
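In outline, the procedure is: randomly initialize the particles and velocities, evaluate the fitness of every particle, and then, at each iteration, update every particle with formulas (2) and (3), clamp its velocity and position to their limits, and refresh pBest and gBest, until the maximum number of iterations is reached. The MATLAB sketch below follows this flow on a toy maximization problem; the objective function, bounds and parameter values are illustrative assumptions and are not the FAPSO/LDNP code listed in section II.

% Standard PSO (generic sketch; objective, bounds and parameters are illustrative)
fun = @(x) -sum(x.^2);                 % example objective to maximize (optimum at the origin)
D = 5; sizepop = 20; maxgen = 100;     % problem dimension, swarm size, iteration limit
w = 0.8; c1 = 2; c2 = 2;               % inertia weight and learning factors
Vlim = 1; Xlim = 5;                    % velocity and position limits

X = Xlim*(2*rand(sizepop,D)-1);        % step 1: random initial positions and velocities
V = Vlim*(2*rand(sizepop,D)-1);
fitness = zeros(sizepop,1);
for k = 1:sizepop, fitness(k) = fun(X(k,:)); end
pbest = X; fitnesspbest = fitness;                          % individual bests
[fitnessgbest,gidx] = max(fitness); gbest = X(gidx,:);      % global best

for gen = 1:maxgen
    for k = 1:sizepop
        % step 2: velocity (formula (2)) and position (formula (3)) updates, with clamping
        V(k,:) = w*V(k,:) + c1*rand*(pbest(k,:)-X(k,:)) + c2*rand*(gbest-X(k,:));
        V(k, V(k,:) >  Vlim) =  Vlim;   V(k, V(k,:) < -Vlim) = -Vlim;
        X(k,:) = X(k,:) + V(k,:);
        X(k, X(k,:) >  Xlim) =  Xlim;   X(k, X(k,:) < -Xlim) = -Xlim;
        % step 3: evaluate fitness and refresh pBest / gBest
        fitness(k) = fun(X(k,:));
        if fitness(k) > fitnesspbest(k)
            pbest(k,:) = X(k,:);  fitnesspbest(k) = fitness(k);
        end
        if fitness(k) > fitnessgbest
            gbest = X(k,:);  fitnessgbest = fitness(k);
        end
    end
end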

II. Source code

clear; clc; warning off
N=10;                 % total number of nodes (including power nodes)
R=16;                 % total number of branches
sizepop=10;           % particle swarm size
maxgen=200;           % maximum number of iterations
pop=pop_initial(sizepop,N,R);    % population initialization
Vmax=4; Vmin=-Vmax;              % upper and lower limits of the particle velocity
Sigmoid=@(x)1./(1+exp(-x));
Utility=FAPSO_LDNP(N,pop,Vmax,Vmin,maxgen,sizepop);   % call the FAPSO algorithm to solve the LDNP problem
plot(Utility)
grid on
xlabel('Number of iterations','fontsize',12)
ylabel('Return on investment','fontsize',12)
title('FAPSO iterative convergence diagram','fontsize',12)
function Utility=FAPSO_LDNP(D,pop,Vmax,Vmin,maxgen,popsize)
vmax=Vmax/4;
vmin=Vmin/4;
popmax=5;
popmin=-5;
c1=2;
c2=2;
w=0.8;        % fixed inertia weight
wmin=0.4;
k1=1.5;       % inertia weight adjustment parameters
k2=0.3;
% Generate the initial particles and velocities
for i=1:popsize
    pop0(i,:)=5*rands(1,D);          % randomly generate a particle
    V(i,:)=rands(1,D);
    fitness(i)=fun_LDNP(pop0(i,:));  % particle fitness
end
% Individual extreme values and the population extreme value
[bestfitness,bestindex]=max(fitness);
zbest=pop0(bestindex,:);             % global best position
gbest=pop0;                          % individual best positions
fitnessgbest=fitness;                % individual best fitness values
fitnesszbest=bestfitness;            % global best fitness value
% FA-improved grouping
Fav=sum(fitness)/popsize;            % F_av
C_index=find(fitness>=Fav);          % group C: the least fit particles
lc=length(C_index);
b_index=find(fitness<Fav);           % provisional group B: some of the better particles move to group A, the rest stay in group B
lb=length(b_index);
Fav_=0;
for i=1:lb
    Fav_=Fav_+fitness(b_index(i));
end
Fav_=Fav_/lb;                        % F_av'
A_index=find(fitness<Fav_);          % group A
la=length(A_index);
B_index=find(fitness>=Fav_ & fitness<Fav);   % group B
lb=length(B_index);
W=zeros(popsize,1);                  % allocate storage for the inertia weights
for i=1:popsize
    if sum(i==A_index)
        W(i)=w-(w-wmin)*abs(fitness(i)-Fav_)/(fitnesszbest-Fav_);
    elseif sum(i==B_index)
        W(i)=w;
    elseif sum(i==C_index)
        W(i)=1.5 - 1/(1+k1*exp(-k2*abs(fitnesszbest-Fav_)));
    end
end
for i=1:10
    [~,~,~]=jiedian(pop(:,:,i));
end
% Iterative optimization
for i=1:maxgen
    
    for j=1:popsize
        % Velocity update
        V(j,:) = W(j)*V(j,:) + c1*rand*(gbest(j,:)-pop0(j,:)) + c2*rand*(zbest-pop0(j,:));
        V(j,V(j,:)>vmax)=vmax;
        V(j,V(j,:)<vmin)=vmin;
        % Position update
        pop0(j,:)=pop0(j,:)+0.1*V(j,:);
        pop0(j,pop0(j,:)>popmax)=popmax;
        pop0(j,pop0(j,:)<popmin)=popmin;
        fitness(j)=fun_LDNP(pop0(j,:));
    end
    for j=1:popsize
        % Individual best update
        if fitness(j) > fitnessgbest(j)
            gbest(j,:) = pop0(j,:);
            fitnessgbest(j) = fitness(j);
        end
        % Global best update
        if fitness(j) > fitnesszbest
            zbest = pop0(j,:);
            fitnesszbest = fitness(j);
        end
    end 
    yy(i)=fitnesszbest;
    % Inertia weight update: FA-improved grouping
    Fav=sum(fitness)/popsize;        % F_av
    C_index=find(fitness>=Fav);      % group C: the least fit particles
    lc=length(C_index);
    b_index=find(fitness<Fav);       % provisional group B: some particles move to group A, the rest stay in group B
    lb=length(b_index);
    Fav_=0;
    for ii=1:lb
        Fav_=Fav_+fitness(b_index(ii));
    end
    Fav_=Fav_/lb;                    % F_av'
    A_index=find(fitness<Fav_);      % group A
    la=length(A_index);
    B_index=find(fitness>=Fav_ & fitness<Fav);   % group B
    lb=length(B_index);
    W=zeros(popsize,1);              % allocate storage for the inertia weights
    for ii=1:popsize
        if sum(ii==A_index)
            W(ii)=w-(w-wmin)*abs(fitness(ii)-Fav_)/(fitnesszbest-Fav_);
        elseif sum(ii==B_index)
            W(ii)=w;
        elseif sum(ii==C_index)
            W(ii)=1.5 - 1/(1+k1*exp(-k2*abs(fitnesszbest-Fav_)));
        end
    end    
end
Utility=yy;      % return the global best fitness recorded at each iteration

III. Operation results

Figure: FAPSO iterative convergence diagram (return on investment vs. number of iterations).

IV. Remarks

MATLAB version: 2014a. For the complete code or custom development, add QQ 1564658423.