In the essay that introduced the particle swarm optimization algorithm for the grouped knapsack problem in detail, the main idea of the algorithm was already explained. If you have mastered how to solve the grouped knapsack problem with particle swarm optimization, it is easy to adapt it to finding the extremum of a single function. First, here is a recap of the general idea used there for the grouped knapsack:

  • Randomly generate a bunch of particles, each representing one state of the knapsack (which 3 items are selected); initially, every particle is taken as its own locally optimal particle.
  • Calculate the fitness value of each particle (i.e., the value, volume, and weight of the knapsack it represents), then select the non-inferior particles according to fitness.
  • In each iteration, randomly select one of the non-inferior particles as the global optimal particle and compute each particle's velocity from the update formula (which depends on the locally optimal particle and the global optimal particle), so that every particle moves (i.e., replaces the items it represents with some probability).
  • If a moved particle is better than its previous state, the corresponding locally optimal particle is replaced.
  • The non-inferior particles and the locally optimal particles are then pooled together, and the non-inferior particles are re-selected from the pool.
  • Duplicate non-inferior particles are removed, and the next iteration begins.

Now, to find the minimum value of a one-variable function, the above idea is modified as follows (a condensed code sketch appears after the list):

  • Randomly generate a bunch of particles, each representing an abscissa; initially, every particle is taken as its own locally optimal particle.
  • Calculate the fitness value of each particle (i.e., the ordinate it maps to), then select the initial non-inferior particles according to fitness.
  • In each iteration, randomly select one of the non-inferior particles as the global optimal particle and compute each particle's velocity from the update formula (which depends on the locally optimal particle and the global optimal particle), so that every particle moves (i.e., its abscissa shifts left or right).
  • If a moved particle is better than its previous state, the corresponding locally optimal particle is replaced.
  • The non-inferior particles and the locally optimal particles are then pooled together, and the non-inferior particles are re-selected from the pool.
  • Duplicate non-inferior particles are removed, and the next iteration begins.
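
To make the loop structure concrete before the seven-part walk-through below, here is a minimal self-contained MATLAB sketch. For brevity it uses a plain global-best PSO (a single best particle instead of the non-inferior set kept by the full code) and clamps positions to the interval instead of resampling them; the function, interval, and parameter values match the ones used later.

% minimal global-best PSO sketch (the full non-inferior-set version follows below)
f = @(x) sin(x.^2) - 2*sin(x) + x .* sin(x);
xsize = 50; ITER = 50; c1 = 0.8; c2 = 0.8; wmax = 1.0; wmin = 0.01;
x = -3 + 12 * rand(1, xsize);   % positions in [-3, 9]
v = zeros(1, xsize);            % velocities
bestx = x; besty = f(x);        % each particle's best position and fitness
[gbesty, gi] = min(besty); gbest = bestx(gi); % global best
for niter = 1 : ITER
    w = wmax - (wmax - wmin) * niter / ITER;  % shrinking inertia weight
    v = w * v + c1 * rand(1, xsize) .* (bestx - x) + c2 * rand(1, xsize) .* (gbest - x);
    x = min(max(x + v, -3), 9);               % move and clamp to [-3, 9]
    y = f(x);
    moved = y < besty;                        % update the particles that improved
    bestx(moved) = x(moved); besty(moved) = y(moved);
    [gbesty, gi] = min(besty); gbest = bestx(gi);
end
fprintf('minimum near x = %.4f, y = %.4f\n', gbest, gbesty);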

One: the extremum problem of a one-variable function

Suppose we have the following function:


$$y(x) = \sin(x^2) - 2\sin(x) + x \sin(x)$$

Plotting the function gives the figure below:
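
The curve can be reproduced with a short snippet; the same plotting commands reappear inside the iteration loop later:

% plot y(x) over a window slightly wider than the search interval
xx = -10 : 0.01 : 10;
yy = sin(xx.^2) - 2*sin(xx) + xx .* sin(xx);
plot(xx, yy, 'k');
xlabel('x'); ylabel('y'); title('y(x) = sin(x^2) - 2sin(x) + xsin(x)');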

How do we find the minimum on [-3, 9]? Let's start with an animation that shows how the particle swarm finds the minimum:

As can be seen from the figure, there are 50 particles, drawn as blue dots, and the non-inferior solution particles are drawn as red dots (the final minimum point must be among the non-inferior solution particles). Here there is only one non-inferior solution particle, so it is the minimum point we seek. You can see that the particle positions change with each iteration, moving toward the minimum.

The following is a detailed walk-through of how particle swarm optimization finds the minimum of a one-variable function. As in the previous essay, the complete MATLAB code is divided into seven parts.


Two: particle swarm optimization to find the extremum of a one-variable function

2.1 Initialization of input and fixed parameters

Note that the inertia weight bounds among the parameters can be set to whatever values you need; they affect the step size of each particle. The larger wmax is, the larger the particle steps are at the start of the iterations; the smaller wmin is, the smaller the particle steps are at the end of the iterations.

clear, clc, close all;

%% input parameter, fixed parameter initialization
xsize = 50;         % particle number
ITER = 50;         % Number of iterations
c1 = 0.8; c2 = 0.8;      % constants
wmax = 1.0; wmin = 0.01;  % inertia weight bounds
v = zeros(1, xsize); % particle velocity initialized

2.2 Initialization of particle swarm positions, fitness, best positions, and best fitness

Randomly generate the particle swarm x, where each particle represents an abscissa. Note that to generate uniformly distributed random numbers in [a, b], use the expression a + (b - a) * rand(1, 1).

The fitness is simply the ordinate corresponding to each particle.
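
As a quick sanity check of the sampling formula (a throwaway snippet, not part of the seven-part code), many samples drawn this way stay inside the interval:

% check: samples of a + (b - a) * rand stay inside [a, b]
a = -3; b = 9;
xs = a + (b - a) * rand(1, 100000);
fprintf('min = %.4f, max = %.4f\n', min(xs), max(xs)); % both inside [-3, 9]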

%% particle swarm position, fitness, best position, best fitness initialization
x = - 3 + 12 * rand(1, xsize); % Random particle swarm position generation (abscissa [-3, 9])

% particle swarm fitness
y = zeros(1, xsize); % particle swarm ordinate
for i = 1 : xsize
    y(i) = sin(x(i)^2) - 2*sin(x(i)) + x(i) * sin(x(i));
end

bestx = x; % best positions of the particles
besty = y; % best fitness of the particles

2.3 Initial screening of non-inferior solutions

The non-inferior solutions are screened for the first time here and then re-screened in every subsequent iteration. The screening condition is important and can be changed according to the constraints of the problem. The idea here is to check, for each particle, whether its fitness is at least as good as that of all the others (i.e., its ordinate is no larger).

%% initial filter non-inferior solution
cnt = 1;
for i = 1 : xsize
    fljflag = 1;
    for j = 1 : xsize
        if j ~= i
            if y(i) > y(j) % i is an inferior solution
                fljflag = 0;
                break;
            end
        end
    end 
    if fljflag == 1
        flj(cnt) = y(i); % non-inferior solution fitness
        fljx(cnt) = x(i); % non-inferior solution position
        cnt = cnt + 1;
    end
end
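
Since the fitness here is a single scalar, a particle is non-inferior exactly when its ordinate is no larger than anyone else's, i.e., when it attains the minimum fitness. The double loop above can therefore be collapsed into a vectorized filter (an equivalent sketch, not the author's code):

% equivalent vectorized screening: keep every particle attaining the minimum
mask = (y <= min(y));
flj  = y(mask);  % non-inferior solution fitness
fljx = x(mask);  % non-inferior solution positions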

2.4 Particle motion calculation

The particle velocity is calculated according to the following classical formula:


$$v^{i+1} = w v^i + c_1 r_1 (p_{local}^i - x^i) + c_2 r_2 (p_{global} - x^i)$$

where $w$ is the inertia weight, $c_1$ and $c_2$ are constants, $r_1$ and $r_2$ are random numbers drawn uniformly from $[0, 1]$, $p_{local}^i$ is the locally optimal particle, and $p_{global}$ is the global optimal particle. Note that there is only one global optimal particle, so it carries no superscript $i$.

The value of the inertia weight depends on the iteration number; here we use $w = w_{max} - (w_{max} - w_{min}) \cdot niter / ITER$. How the inertia weight is computed is also a focus of particle swarm optimization: when the inertia weight is large, particles move fast and their positions change quickly, and a well-chosen inertia weight makes the swarm converge to the global optimum faster!
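
For example, with $w_{max} = 1.0$, $w_{min} = 0.01$, and $ITER = 50$ as in the code below, the midpoint iteration $niter = 25$ gives $w = 1.0 - 0.99 \times 25 / 50 = 0.505$, so the step scale has roughly halved by the middle of the run.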

When updating each particle's position with its velocity, one issue is to keep the moved particle's abscissa from leaving our search interval $[-3, 9]$.

for niter = 1 : ITER % the iteration begins, and the particles begin to move
    xx = -10 : 0.01 : 10; % for plotting
    yy = sin(xx.^2) - 2*sin(xx) + xx .* sin(xx);
    plot(xx, yy, 'k');
    xlim([-10, 10]); ylim([-8, 10]);
    title('Particle swarm result'); xlabel('x'); ylabel('y');
    hold on;
    
    rnd = randi(length(flj), 1, 1);
    gbest = fljx(rnd); % particle global optimal solution
    
    %% particle motion calculation
    w = wmax - (wmax - wmin) * niter / ITER; % inertia weight updated
    r1 = rand(1, 1); r2 = rand(1, 1); % uniformly distributed random values in [0, 1]
    for i = 1 : xsize
       v(i) = w * v(i) + c1 * r1 * (bestx(i) - x(i)) + c2 * r2 * (gbest - x(i)); % particle velocity
       x(i) = x(i) + v(i);
       
       if x(i) > 9 || x(i) < -3 % if x leaves the range after moving, replace it with a new random position
          x(i) = -3 + 12 * rand(1, 1);
       end
    end
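
One design note: the code above draws r1 and r2 once per iteration and shares them across all particles. Many PSO formulations instead redraw them for every particle, which decorrelates the particles' moves; a sketch of that variant of the inner loop (boundary check omitted for brevity):

    % variant (sketch): redraw r1 and r2 for each particle
    for i = 1 : xsize
       r1 = rand(1, 1); r2 = rand(1, 1);
       v(i) = w * v(i) + c1 * r1 * (bestx(i) - x(i)) + c2 * r2 * (gbest - x(i));
       x(i) = x(i) + v(i);
    end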

2.5 Current particle swarm fitness, best positions, and best fitness

The particles have moved to new positions, so naturally the new particles are compared with the old ones. If a new particle's fitness is better than the old one's (its ordinate is smaller), the locally optimal particle's position is updated. Unlike the knapsack problem, where multiple conditions had to be checked, this is quite simple, and no other cases need to be considered.

    y_cur = zeros(1, xsize);                                 
    for i = 1 : xsize
        y_cur(i) = sin(x(i)^2) - 2 * sin(x(i)) + x(i) * sin(x(i));
    end
    
    for i = 1 : xsize
        if y_cur(i) < y(i) % if the current particle fitness is better
            bestx(i) = x(i); % particle best location updated
            besty(i) = y_cur(i);% particle optimum fitness updated
        end
    end
    
    y = y_cur; % new particles become old particles

2.6 Screening the non-inferior solutions after merging the best particle swarm positions and best fitness

This is basically the same as the first non-inferior-solution screening, except that a merge step is added: the locally optimal particles are merged with the non-inferior solution particles, and the non-inferior particles are then screened out of the merged set.

    %% merge the best particle swarm positions and fitness, then screen the non-inferior solutions
    bestxmerge = [bestx, fljx];
    ymerge = [besty, flj];

    n = length(flj);
    flj = [];
    fljx = [];
    cnt = 1;
    for i = 1 : xsize + n 
        fljflag = 1;
        for j = 1 : xsize + n
            if j ~= i
                if ymerge(i) > ymerge(j) % i is an inferior solution
                    fljflag = 0;
                    break;
                end
            end
        end 
        if fljflag == 1
            flj(cnt) = ymerge(i); % non-inferior solution fitness
            fljx(cnt) = bestxmerge(i); % non-inferior solution position
            cnt = cnt + 1;
        end
    end

2.7 Remove repeated non-inferior solutions

This step also matters. There are many ways to do it; just pick one of them and remove the duplicate non-inferior solutions.

    %% remove the repeated non-inferior solution
    issame = zeros(cnt - 1, 1);
    for i = 1 : cnt - 1
        for j = i + 1 : cnt - 1
            if ~issame(j)
                issame(j) = (abs(fljx(j) - fljx(i)) < 0.0001); % mark near-duplicates
            end
        end
    end
    flj(find(issame == 1)) = [];
    fljx(find(issame == 1)) = [];
            
    
    plot(bestxmerge, ymerge, 'bo'); % draw the best positions of the particle swarm
    plot(fljx, flj, 'ro'); % draw the positions of the non-inferior solutions
    pause(0.5);
    hold off;
end
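
If your MATLAB has uniquetol (R2015a or newer), the duplicate-removal block inside the loop could instead be written in one call; 'DataScale', 1 makes the tolerance absolute, matching the 0.0001 threshold above (a sketch, not the author's code):

    % alternative (sketch): deduplicate within an absolute tolerance via uniquetol
    [fljx, ia] = uniquetol(fljx, 0.0001, 'DataScale', 1);
    flj = flj(ia);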