I. Overview

The multi-objective particle swarm optimization (MOPSO) algorithm was proposed by Carlos A. Coello Coello et al. in 2004; see [1] for details. The aim is to extend particle swarm optimization (PSO), which handles only a single objective, to multiple objectives. Recall that the original single-objective PSO process is simple:

1. Initialize particle positions (usually drawn uniformly at random).
2. Compute fitness values (usually the objective function value, i.e. the quantity being optimized).
3. Initialize each particle's historical best pBest and find the global best gBest.
4. Update positions and velocities according to the velocity and position formulas.
5. Recompute fitness.
6. Update pBest and gBest according to the new fitness values.
7. Exit when the algorithm converges or the maximum number of iterations is reached.

The velocity update formula is as follows:

v_i(t+1) = w·v_i(t) + c1·r1·(pbest_i − x_i(t)) + c2·r2·(gbest − x_i(t))

where w is the inertia weight, c1 and c2 are the learning factors, and r1, r2 are random numbers drawn uniformly from [0, 1].
The right-hand side has three components. The first is the inertia term, the vector that continues the particle's previous motion. The second is the individual-cognition term, which pulls the particle towards the best position in its own history. The third is the social-cognition term, which pulls the particle towards the global best position.

With the velocity in hand, the position update follows naturally:

x_i(t+1) = x_i(t) + v_i(t+1)
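Putting the two update rules together, the single-objective loop outlined above might look like the following sketch (Python for illustration, not the article's MATLAB code; the sphere objective and all parameter values here are assumptions chosen for the demo):

```python
import random

def pso(f, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal single-objective PSO following the steps above (illustrative sketch)."""
    rng = random.Random(seed)
    # 1. initialize positions uniformly in [-5, 5] and zero velocities
    x = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    v = [[0.0] * dim for _ in range(n)]
    # 2-3. initial fitness, personal bests, and global best
    pbest = [p[:] for p in x]
    pbest_f = [f(p) for p in pbest]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # 4. inertia + cognitive + social components
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            # 5-6. recompute fitness, update pBest and gBest
            fi = f(x[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = x[i][:], fi
    return gbest, gbest_f

best, best_f = pso(lambda p: sum(t * t for t in p))  # minimize the sphere function
```

On the sphere function the swarm contracts quickly towards the origin, which is why the single-objective version needs no archive: one scalar comparison decides every pBest and gBest update.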
The above is the standard single-objective PSO algorithm. When it is applied to multiple objectives, the following problems arise:

1. How to choose pBest. For a single objective, updating pBest only requires comparing two fitness values and keeping the better one. With multiple objectives, comparing two particles does not always tell you which one is better. One particle is superior only if it is at least as good on every objective (and strictly better on at least one). If it is better on some objectives and worse on others, neither can be said to dominate the other.

2. How to choose gBest. With a single objective there is exactly one best individual in the population. With multiple objectives there are many Pareto-optimal individuals, yet in PSO each particle can follow only one leader. Which one should it follow?

MOPSO's answer to the first question is to pick one of the two candidates at random whenever they cannot be strictly compared. For the second question, MOPSO chooses a leader from the non-dominated set (the archive) based on crowding: it prefers particles in less densely populated regions, using a grid method.
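The dominance test and the randomized pBest rule can be sketched as follows (Python for illustration; every objective is assumed to be minimized, and `update_pbest` is a hypothetical helper name, not from the original code):

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized):
    a is no worse everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_pbest(pbest, new, rng=random):
    """MOPSO's pBest rule: keep whichever vector dominates;
    if the two are incomparable, pick one at random."""
    if dominates(new, pbest):
        return new
    if dominates(pbest, new):
        return pbest
    return new if rng.random() < 0.5 else pbest
```

For example, `update_pbest([2, 3], [1, 2])` always returns `[1, 2]` (it dominates), while `update_pbest([1, 3], [3, 1])` returns either vector with equal probability.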

MOPSO applies an adaptive grid both for selecting leaders and for maintaining the archive (which stores the current approximation of the Pareto-optimal front); see [2] for details. How is the leader chosen?

MOPSO selects, for each particle, a leader to follow from the archive. How? According to the grid division, suppose each grid cell contains some number of particles, with n_i particles in cell i. The probability of selecting a particle from cell i is inversely proportional to n_i; that is, the more crowded a cell is, the lower the selection probability. This encourages exploration of sparsely populated regions. How is the archive maintained?
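This density-based leader selection can be sketched as a roulette wheel over inverse cell occupancy (Python for illustration; the `cell_of` mapping, which assigns each archive member to its grid cell, and the exact 1/n_i weighting are assumptions based on the description above rather than the paper's exact fitness formula):

```python
import random
from collections import Counter

def select_leader(archive, cell_of, rng=random):
    """Pick a leader from the archive with probability inversely proportional
    to the occupancy of its grid cell: less crowded cells are favoured."""
    counts = Counter(cell_of(s) for s in archive)
    weights = [1.0 / counts[cell_of(s)] for s in archive]
    # roulette-wheel selection over the weights
    r = rng.random() * sum(weights)
    acc = 0.0
    for s, wgt in zip(archive, weights):
        acc += wgt
        if r <= acc:
            return s
    return archive[-1]
```

With three archive members sharing one cell and a fourth alone in another, the lone member is picked about half the time while each crowded member is picked about one sixth of the time, which is exactly the exploration bias the grid method is after.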

After the population is updated, how are solutions archived? MOPSO performs three rounds of screening. First, the new solutions are filtered by the dominance relation; dominated solutions are discarded and the rest are added to the archive. Second, the dominance relation is applied again within the archive, dominated solutions are removed, and the grid position of each archived particle is computed. Finally, if the archive exceeds its size threshold, the adaptive grid is used to evict particles from the most crowded cells until the threshold is met, and the grid is then recomputed.
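The three screening rounds might be sketched like this (Python for illustration; all objectives are assumed minimized, and both the `cell_of` mapping and the evict-from-the-fullest-cell rule are simplifications of the adaptive-grid truncation described above):

```python
from collections import Counter

def update_archive(archive, new_solutions, objs, capacity, cell_of):
    """Sketch of MOPSO archive maintenance: keep only mutually non-dominated
    solutions from the merged pool, then truncate the most crowded grid
    cells until the archive fits within `capacity`.
    `objs` maps a solution to its objective vector (all minimized);
    `cell_of` maps a solution to its grid-cell id."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    pool = archive + new_solutions
    # rounds 1-2: drop every solution dominated by some other pool member
    nd = []
    for i, s in enumerate(pool):
        if not any(dominates(objs(t), objs(s))
                   for j, t in enumerate(pool) if j != i):
            nd.append(s)
    # round 3: while over capacity, evict one member of the fullest cell
    while len(nd) > capacity:
        counts = Counter(cell_of(s) for s in nd)
        fullest = max(counts, key=counts.get)
        for idx, s in enumerate(nd):
            if cell_of(s) == fullest:
                del nd[idx]
                break
    return nd
```

Note that truncation only ever removes solutions from the most crowded cells, so the sparse ends of the front survive, which is what keeps the archived approximation of the Pareto front well spread.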

II. Source code

```matlab
clc; clear;
load data                        % P: value table, R: volume table, C: weight table
%% Problem parameters
objnum = size(P,1);              % number of items in each class
weight = 92;                     % total weight limit

%% Algorithm parameters
Dim   = 5;                       % particle dimension
xSize = 50;                      % population size
MaxIt = 200;                     % number of iterations
c1    = 0.8;                     % cognitive learning factor
c2    = 0.8;                     % social learning factor
wmax  = 1.2;                     % maximum inertia weight
wmin  = 0.1;                     % minimum inertia weight

x = unidrnd(4, xSize, Dim);      % particle initialization
v = zeros(xSize, Dim);           % velocity initialization

xbest = x;                       % personal best positions
gbest = x(1,:);                  % global best position

% particle fitness
px = zeros(1, xSize);            % particle value
rx = zeros(1, xSize);            % particle volume
cx = zeros(1, xSize);            % particle weight (constraint)

% best-value initialization
pxbest  = zeros(1, xSize);       % personal best: value
rxbest  = zeros(1, xSize);       % personal best: volume
cxbest  = zeros(1, xSize);       % personal best: weight
pxPrior = zeros(1, xSize);
rxPrior = zeros(1, xSize);
cxPrior = zeros(1, xSize);

for i = 1:xSize
    for j = 1:Dim                % accumulate over each class
        px(i) = px(i) + P(x(i,j), j);   % particle value
        rx(i) = rx(i) + R(x(i,j), j);   % particle volume
        cx(i) = cx(i) + C(x(i,j), j);   % particle weight
    end
end
pxbest = px; rxbest = rx; cxbest = cx;

%% Initial screening of non-dominated solutions
flj = []; fljx = []; fljNum = 0;
tol = 1e-7;                      % tolerance for comparing two reals
for i = 1:xSize
    flag = 0;                    % domination flag
    for j = 1:xSize
        if j ~= i
            if ((px(i)<px(j)) && (rx(i)>rx(j))) || ...
               ((abs(px(i)-px(j))<tol) && (rx(i)>rx(j))) || ...
               ((px(i)<px(j)) && (abs(rx(i)-rx(j))<tol)) || ...
               (cx(i)>weight)
                flag = 1;
                break;
            end
        end
    end
    if flag == 0                 % record non-dominated solution
        fljNum = fljNum + 1;
        flj(fljNum,1)  = px(i);
        flj(fljNum,2)  = rx(i);
        flj(fljNum,3)  = cx(i);
        fljx(fljNum,:) = x(i,:); % non-dominated solution position
    end
end

%% Main loop
for iter = 1:MaxIt
    w = wmax - (wmax - wmin)*iter/MaxIt;   % linearly decreasing inertia weight

    % select a particle from the non-dominated set as the global best
    s = size(fljx, 1);
    index = randi(s, 1, 1);
    gbest = fljx(index,:);

    %% Swarm update
    for i = 1:xSize
        % velocity update
        v(i,:) = w*v(i,:) + c1*rand(1,1)*(xbest(i,:) - x(i,:)) ...
                          + c2*rand(1,1)*(gbest - x(i,:));
        % position update, then map back to valid item indices
        x(i,:) = x(i,:) + v(i,:);
        x(i,:) = rem(x(i,:), objnum)/double(objnum);
        index1 = find(x(i,:) <= 0);
        if ~isempty(index1)
            x(i,index1) = rand(size(index1));
        end
        x(i,:) = ceil(4*x(i,:));
    end

    % recompute fitness
    pxPrior(:) = 0; rxPrior(:) = 0; cxPrior(:) = 0;
    for i = 1:xSize
        for j = 1:Dim
            pxPrior(i) = pxPrior(i) + P(x(i,j), j);   % value
            rxPrior(i) = rxPrior(i) + R(x(i,j), j);   % volume
            cxPrior(i) = cxPrior(i) + C(x(i,j), j);   % weight
        end
    end

    %% Update personal bests
    for i = 1:xSize
        % if the new position dominates the old best, replace it
        if ((px(i)<pxPrior(i)) && (rx(i)>rxPrior(i))) || ...
           ((abs(px(i)-pxPrior(i))<tol) && (rx(i)>rxPrior(i))) || ...
           ((px(i)<pxPrior(i)) && (abs(rx(i)-rxPrior(i))<tol)) || ...
           (cx(i)>weight)
            xbest(i,:) = x(i,:);
            pxbest(i) = pxPrior(i);
            rxbest(i) = rxPrior(i);
            cxbest(i) = cxPrior(i);
        end
        % if neither dominates the other, decide at random
        if ~( ((px(i)<pxPrior(i)) && (rx(i)>rxPrior(i))) || ...
              ((abs(px(i)-pxPrior(i))<tol) && (rx(i)>rxPrior(i))) || ...
              ((px(i)<pxPrior(i)) && (abs(rx(i)-rxPrior(i))<tol)) || ...
              (cx(i)>weight) ) ...
        && ~( ((pxPrior(i)<px(i)) && (rxPrior(i)>rx(i))) || ...
              ((abs(pxPrior(i)-px(i))<tol) && (rxPrior(i)>rx(i))) || ...
              ((pxPrior(i)<px(i)) && (abs(rxPrior(i)-rx(i))<tol)) || ...
              (cxPrior(i)>weight) )
            if rand(1,1) < 0.5
                xbest(i,:) = x(i,:);
                pxbest(i) = pxPrior(i);
                rxbest(i) = rxPrior(i);
                cxbest(i) = cxPrior(i);
            end
        end
    end
    px = pxPrior; rx = rxPrior; cx = cxPrior;

    %% Update the non-dominated solution set
    s = size(flj, 1);            % number of elements in the current set
    % first merge the non-dominated set with the personal bests
    pppx = zeros(1, s+xSize); rrrx = zeros(1, s+xSize); cccx = zeros(1, s+xSize);
    pppx(1:xSize) = pxbest;  pppx(xSize+1:end) = flj(:,1)';
    rrrx(1:xSize) = rxbest;  rrrx(xSize+1:end) = flj(:,2)';
    cccx(1:xSize) = cxbest;  cccx(xSize+1:end) = flj(:,3)';
    xxbest = zeros(s+xSize, Dim);
    xxbest(1:xSize,:)     = xbest;
    xxbest(xSize+1:end,:) = fljx;

    % screen the merged set for non-dominated solutions
    flj = []; fljx = []; k = 0; tol = 1e-7;
    for i = 1:xSize+s
        flag = 0;                % is this point dominated?
        for j = 1:xSize+s
            if j ~= i
                if ((pppx(i)<pppx(j)) && (rrrx(i)>rrrx(j))) || ...
                   ((abs(pppx(i)-pppx(j))<tol) && (rrrx(i)>rrrx(j))) || ...
                   ((pppx(i)<pppx(j)) && (abs(rrrx(i)-rrrx(j))<tol)) || ...
                   (cccx(i)>weight)          % dominated at least once
                    flag = 1;
                    break;
                end
            end
        end
        if flag == 0             % record non-dominated solution
            k = k + 1;
            flj(k,1)  = pppx(i);
            flj(k,2)  = rrrx(i);
            flj(k,3)  = cccx(i);
            fljx(k,:) = xxbest(i,:);   % non-dominated solution position
        end
    end
end
```

III. Operation results

IV. Note

For the complete code, add QQ 1575304183.