Particle swarm optimization (PSO) is a numerical optimization algorithm based on swarm intelligence, proposed by social psychologist James Kennedy and electrical engineer Russell Eberhart in 1995. Since its birth, PSO has been improved in many respects. This part introduces the basic principle and process of the particle swarm optimization algorithm.
1.1 Particle swarm optimization
Particle swarm optimization (PSO) is a swarm intelligence algorithm inspired by the flocking of birds or the schooling of fish. It is used to solve nonlinear, non-convex, or combinatorial optimization problems in many fields of science and engineering.
1.1.1 Algorithm idea
Many birds are gregarious and form flocks for various reasons. Flocks may vary in size, appear in different seasons, and may even consist of different species that cooperate well together. More eyes and ears mean more opportunities to spot food and predators in time. A flock benefits the survival of its members in many ways:
Foraging: Sociobiologist E. O. Wilson noted that, at least in theory, individual members of a group can benefit from the discoveries and prior experience of other members in their search for food [1]. If a flock shares the same food source, certain species of birds will forage together in a non-competitive way. That way, more birds can take advantage of other birds' discoveries about food locations.
Defense against Predators: Flocks of birds have many advantages when it comes to protecting themselves from predators.
More ears and eyes mean more chances of spotting predators or any other potential danger;
A flock of birds may confuse or overwhelm a predator through mobbing or agile flight;
In a colony, warning each other can reduce the danger to any one bird.
Aerodynamics: When birds fly in groups, they often arrange themselves into specific shapes or formations. Depending on the number of birds in a flock, each bird produces different air currents as it flaps its wings, which results in varying wind patterns. These formations take advantage of different patterns, allowing birds in flight to use the air around them in the most energy-efficient way.
The development of particle swarm optimization draws on some of these advantages of bird flocking. However, to understand an important property of swarm intelligence and of PSO, it is worth mentioning some of the disadvantages as well. Traveling in groups also carries risks for birds. More ears and eyes mean more wings and mouths, which leads to more noise and movement; more predators can then locate the flock, posing a constant threat. A larger group also requires more food, which leads to more competition and can weed out some of the weaker birds. It should be noted that PSO does not mimic these disadvantages of flocking: no individuals are killed during the search, whereas in genetic algorithms some of the weaker individuals die out. In PSO, all individuals survive and strive to become stronger throughout the search. In particle swarm optimization, the improvement of potential solutions is the result of cooperation, while in evolutionary algorithms it results from competition. This concept distinguishes swarm intelligence from evolutionary algorithms. In short, in evolutionary algorithms a new population evolves with each iteration, whereas in swarm intelligence algorithms each generation contains individuals who make themselves better; an individual's identity does not change across iterations. Mataric [2] gave the following flocking rules:
Safe roaming: birds fly without colliding with each other or with obstacles;
Dispersion: each bird keeps a minimum distance from the others;
Aggregation: each bird also maintains a maximum distance from the others;
Homing: all birds have the potential to find a food source or nest.
In the design of particle swarm optimization, not all four rules are used to simulate the group behavior of birds. In the fundamental PSO model developed by Kennedy and Eberhart, the movement of agents does not follow the rules of safe roaming and dispersion; in other words, agents in PSO are allowed to get as close to each other as they like during the search. Aggregation and homing, however, are in effect in particle swarm optimization. Agents must fly within a specific region so that they maintain a maximum distance from any other agent, which means the search stays within or at the boundary of the search space throughout the whole process. The fourth rule, homing, means that any agent in the group may find the global optimum.
During the development of the PSO model, Kennedy and Eberhart proposed five basic principles for judging whether a group of agents is a group:
Proximity principle: the agent group should be able to perform simple space and time computations;
Quality principle: the agent group should be able to respond to quality factors in the environment;
Principle of diverse response: the agent group should not commit its activities along excessively narrow channels;
Stability principle: the agent group should not change its behavior mode every time the environment changes;
Adaptability principle: the agent group should be able to change its behavior mode when the computational cost is worthwhile.
Considering these five principles, Kennedy and Eberhart developed a PSO model for function optimization. PSO adopts a random search method and uses swarm intelligence to solve the problem; in other words, particle swarm optimization is a swarm-intelligence-based search algorithm. The search is carried out by a randomly generated set of possible solutions. This set is called a swarm, and each possible solution is called a particle. In PSO, the search of each particle is influenced by two kinds of learning: each particle learns from the other particles as well as from its own experience along the way. Learning from others is called social learning, while learning from one's own experience is called cognitive learning. As a result of social learning, a particle stores in its memory the best solution visited by any particle in the swarm, called gBest. Through cognitive learning, the particle stores in its memory the best solution it has visited itself so far, called pBest.
The change in direction and magnitude of any particle's movement is determined by a quantity called velocity, the rate of change of position with respect to time. For PSO, time is measured in iterations, so velocity can be defined as the rate of change of position with respect to iteration. Since the iteration counter increases in unit steps, the velocity v has the same dimension as the position x.
For a D-dimensional search space, the i-th particle of the swarm at time step t is represented by the D-dimensional vector $x_i^t = (x_{i1}^t, \ldots, x_{iD}^t)^T$, and its velocity by another D-dimensional vector $v_i^t = (v_{i1}^t, \ldots, v_{iD}^t)^T$. The best position visited so far by the i-th particle is denoted $p_i^t = (p_{i1}^t, \ldots, p_{iD}^t)^T$, and g is the index of the best particle in the swarm. The velocity and position of the i-th particle are updated by the following formulas respectively:

$$v_{id}^{t+1} = v_{id}^t + c_1 r_1 \left( p_{id}^t - x_{id}^t \right) + c_2 r_2 \left( p_{gd}^t - x_{id}^t \right) \tag{1}$$

$$x_{id}^{t+1} = x_{id}^t + v_{id}^{t+1} \tag{2}$$
where d = 1, 2, …, D indexes the dimension, i = 1, 2, …, S indexes the particles, and S is the swarm size. c1 and c2 are constants called the cognitive and social scaling parameters respectively, or simply the acceleration coefficients. r1 and r2 are random numbers drawn from the uniform distribution on [0, 1]. The two formulas above are applied separately to each dimension of each particle; the only link between the different dimensions of the problem space is introduced through the objective function, namely through gBest and pBest, the best positions found so far [3]. The flow of the PSO algorithm is as follows:
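Concretely, one iteration of equations (1) and (2) can be sketched in Python with NumPy. This is only an illustration of the update formulas; the function name `pso_step` and the array layout (one row per particle) are assumptions of this sketch, not part of the original text, whose reference implementation appears in MATLAB in Section II:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, c1=2.0, c2=2.0, rng=None):
    """One velocity/position update per equations (1) and (2).

    x, v   : (S, D) arrays of positions and velocities
    pbest  : (S, D) array of each particle's best-known position
    gbest  : (D,) array, best position found by the whole swarm
    c1, c2 : cognitive and social acceleration coefficients
    """
    if rng is None:
        rng = np.random.default_rng()
    r1 = rng.random(x.shape)   # fresh uniform [0,1) numbers, drawn
    r2 = rng.random(x.shape)   # independently per particle and per dimension
    v_new = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # eq. (1)
    x_new = x + v_new                                          # eq. (2)
    return x_new, v_new
```

Note that each dimension is updated independently, as the text states; the coupling between dimensions enters only when the objective function is evaluated to refresh pBest and gBest.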
1.1.3 Interpretation of the update equation
The right side of the velocity update equation (1) consists of three parts [3]:
The first is the previous velocity, v_id^t, which can be thought of as a momentum term: it stores the direction of the previous motion and prevents the particle from drastically changing direction.
The second is the cognitive or ego component, through which the particle's current position is drawn towards its own best position, so that throughout the search the particle remembers its best position and avoids wandering. Note that p_id^t − x_id^t is a vector pointing from x_id^t to p_id^t, which attracts the current position towards the particle's best position. The order of the two terms cannot be swapped, otherwise the current position would be pushed away from the best position.
The third is the social component, which is responsible for sharing information across the swarm. Through this term, the particle moves towards the best individual in the swarm, i.e. each individual learns from the others. Again, the correct order is p_gd^t − x_id^t.
It can be seen that the cognitive scaling parameter c1 adjusts the maximum step length towards the particle's own best position, while the social scaling parameter c2 adjusts the maximum step length towards the globally best particle. Figure 2 shows the typical geometry of a particle's movement in two dimensions.
FIG. 2 Geometric illustration of particle movement during particle swarm optimization
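The three components of equation (1) can be made concrete with a short numeric example. The positions, velocities, and best points below are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(42)
x     = np.array([0.0, 0.0])   # current position of one particle
v     = np.array([0.5, -0.2])  # previous velocity
pbest = np.array([1.0, 1.0])   # this particle's best position so far
gbest = np.array([2.0, 0.5])   # the swarm's best position so far
c1, c2 = 2.0, 2.0

momentum  = v                                 # part 1: keeps the previous direction
cognitive = c1 * rng.random(2) * (pbest - x)  # part 2: pulls toward pbest (order: pbest - x)
social    = c2 * rng.random(2) * (gbest - x)  # part 3: pulls toward gbest
v_next = momentum + cognitive + social        # equation (1)
x_next = x + v_next                           # equation (2)
```

Because pbest and gbest here both lie in the positive quadrant relative to x, the cognitive and social terms are component-wise non-negative, i.e. they attract rather than repel, which is exactly why the order of subtraction cannot be reversed.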
It can be seen from the update equations that Kennedy and Eberhart's PSO design follows the five basic principles above. Particle swarm optimization proceeds over a series of time steps in D-dimensional space. At any time step the population follows the guiding directions of gBest and pBest, i.e. the population responds to quality factors and thus follows the quality principle. Since the uniformly distributed random numbers r1 and r2 appear in the velocity update equation, the new position is randomly placed between pBest and gBest, which reflects the principle of diverse response. In PSO, random motion occurs only when the swarm receives good information from gBest, reflecting the stability principle. The population changes when gBest changes, and therefore follows the principle of adaptability.

1.2 Parameters in Particle Swarm Optimization

The convergence speed and optimization ability of any population-based algorithm are affected by its parameter choices. In general, it is not possible to make universal recommendations for the parameter settings of such algorithms, because the parameters depend strongly on the problem at hand. However, available theoretical and/or experimental studies give general ranges of parameter values. As with other population-based search algorithms, parameter tuning of general PSO is always challenging due to the random factors r1 and r2 in the search process. The basic version of PSO requires very few parameters. This chapter discusses only the parameters of the basic version of PSO introduced in [4].
One basic parameter is the population size, which is usually set empirically based on the number of decision variables and the complexity of the problem. The general recommendation is 20 to 50 particles.
The other parameters are the scaling factors c1 and c2. As mentioned earlier, these determine the step size of a particle in the next iteration; in other words, c1 and c2 determine the particle's velocity. In the basic version of PSO, c1 = c2 = 2 is chosen. In this case the growth of the particle's velocity is uncontrolled, which favors faster convergence but hinders thorough exploitation of the search space. If we set c1 = c2 > 0, the particle is attracted towards the average of pBest and gBest. A setting with c1 > c2 favors multimodal problems, while c2 > c1 favors unimodal problems. The smaller the values of c1 and c2, the smoother the particle trajectories; the larger the values, the more abrupt the particle motion and the greater the acceleration. Researchers have also proposed adaptive acceleration coefficients [5].

The stopping criterion is a parameter not only of PSO but of any population-based metaheuristic. The commonly used criterion is based on a maximum number of function evaluations or iterations, which is proportional to the running time of the algorithm. A more efficient criterion is based on the search ability of the algorithm: if an algorithm does not significantly improve the solution within a certain number of iterations, the search should be stopped.
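Putting the parameter discussion together, a minimal end-to-end PSO loop might look as follows in Python. The sphere objective, the swarm size of 30, the velocity clamp, and the stagnation-based stopping rule are illustrative assumptions layered on the basic algorithm, not prescriptions from the text:

```python
import numpy as np

def pso(f, dim, n_particles=30, c1=2.0, c2=2.0, vmax=2.0,
        max_iter=500, patience=50, tol=1e-8, seed=0):
    """Minimize f with basic PSO; stop on max_iter or stagnation."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))  # random initial swarm
    v = np.zeros((n_particles, dim))
    pbest = x.copy()                                # per-particle best positions
    pbest_val = np.apply_along_axis(f, 1, x)
    g_idx = np.argmin(pbest_val)
    gbest, gbest_val = pbest[g_idx].copy(), pbest_val[g_idx]
    stall = 0
    for _ in range(max_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # eq. (1)
        v = np.clip(v, -vmax, vmax)   # velocity clamp (an added assumption)
        x = x + v                     # eq. (2)
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val     # cognitive learning: update pbest
        pbest[better], pbest_val[better] = x[better], vals[better]
        if pbest_val.min() < gbest_val - tol:  # social learning: update gbest
            g_idx = np.argmin(pbest_val)
            gbest, gbest_val = pbest[g_idx].copy(), pbest_val[g_idx]
            stall = 0
        else:
            stall += 1
            if stall >= patience:     # stagnation-based stop criterion
                break
    return gbest, gbest_val
```

With c1 = c2 = 2 and no inertia weight, raw velocities tend to grow without bound, which is why early PSO implementations also clamped velocity; the `patience` counter implements the more efficient stopping criterion described above.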
II. Source code
function [xOpt,fval,exitflag,output,population,scores] = ...
pso(fitnessfcn,nvars,Aineq,bineq,Aeq,beq,LB,UB,nonlcon,options)
% Find the minimum of a function using Particle Swarm Optimization.
%
% This is an implementation of Particle Swarm Optimization algorithm using
% the same syntax as the Genetic Algorithm Toolbox, with some additional
% options specific to PSO. Allows code-reusability when trying different
% population-based optimization algorithms. Certain GA-specific parameters
% such as cross-over and mutation functions will not be applicable to the
% PSO algorithm. Demo function included, with a small library of test
% functions. Requires Optimization Toolbox.
%
% New features will be added regularly until this is made redundant by an
% official MATLAB PSO toolbox.
%
% Author: S. Samuel Chen. Version 1.312.
% Available from http://www.mathworks.com/matlabcentral/fileexchange/25986
% Distributed under BSD license. First published in 2009.
%
% Syntax:
% psodemo
% pso
% x = pso(fitnessfcn,nvars)
% x = pso(fitnessfcn,nvars,Aineq,bineq)
% x = pso(fitnessfcn,nvars,Aineq,bineq,Aeq,beq)
% x = pso(fitnessfcn,nvars,Aineq,bineq,Aeq,beq,LB,UB)
% x = pso(fitnessfcn,nvars,Aineq,bineq,Aeq,beq,LB,UB,nonlcon)
% x = pso(fitnessfcn,nvars,Aineq,bineq,Aeq,beq,LB,UB,nonlcon,options)
% x = pso(problem)
% [x, fval] = pso(...)
% [x, fval,exitflag] = pso(...)
% [x, fval,exitflag,output] = pso(...)
% [x, fval,exitflag,output,population] = pso(...)
% [x, fval,exitflag,output,population,scores] = pso(...)
%
% Description:
% psodemo
% Runs a demonstration of the PSO algorithm using test function specified
% by user.
%
% pso
% Runs a default demonstration using Rosenbrock's banana function.
%
% x = pso(fitnessfcn,nvars)
% Runs the particle swarm algorithm with no constraints and default
% options. fitnessfcn is a function handle, nvars is the number of
% parameters to be optimized, i.e. the dimensionality of the problem. x is
% a 1xnvars vector representing the coordinates of the global optimum
% found by the pso algorithm.
%
% x = pso(fitnessfcn,nvars,Aineq,bineq)
% Linear inequality constraints, such that Aineq*x <= bineq. Aineq is a
% matrix of size nconstraints x nvars, while bineq is a column vector of
% length nconstraints.
%
% x = pso(fitnessfcn,nvars,Aineq,bineq,Aeq,beq)
% Linear equality constraints, such that Aeq*x = beq. 'Soft' or 'penalize'
% boundaries only. If no inequality constraints exist, set Aineq and bineq
% to [].
%
% x = pso(fitnessfcn,nvars,Aineq,bineq,Aeq,beq,LB,UB)
% Lower and upper bounds defined as LB and UB respectively. Both LB and UB,
% if defined, should be 1 x nvars vectors. Use empty arrays [] for Aineq,
% bineq, Aeq, or beq if no linear constraints exist.
%
% x = pso(fitnessfcn,nvars,Aineq,bineq,Aeq,beq,LB,UB,nonlcon)
% Non-linear constraints. Nonlinear inequality constraints in the form c(x)
% <= 0 have now been implemented using 'soft' boundaries, or
% experimentally, using 'penalize' constraint handling method. See the
% Optimization Toolbox documentation for the proper syntax for defining
% nonlinear constraints. Use empty arrays [] for Aineq, bineq, Aeq, beq,
% LB, or UB if they are not needed.
%
% x = pso(fitnessfcn,nvars,Aineq,bineq,Aeq,beq,LB,UB,nonlcon,options)
% Default algorithm parameters replaced with those defined in the options
% structure:
% Use >> options = psooptimset('Param1','value1','Param2','value2',...) to
% generate the options structure. Type >> psooptimset with no input or
% output arguments to display a list of available options and their
% default values.
%
% NOTE: the swarm will perform better if the PopInitRange option is defined
% so that it closely bounds the expected domain of the feasible region.
% This is automatically done if the problem has both lower and upper bound
% constraints, but not for linear and nonlinear constraints.
%
% NOTE 2: If options.HybridFcn is to be defined, and if it is necessary to
% pass a non-default options structure to the hybrid function, then the
% options structure may be included in a cell array along with the pointer
% to the hybrid function. Example:
% >> % Let's say that we want to use fmincon to refine the result from PSO:
% >> hybridoptions = optimset(@fmincon) ;
% >> options.HybridFcn = {@fmincon, hybridoptions} ;
%
% NOTE 3:
% Perez and Behdinan (2007) demonstrated that the particle swarm is only
% stable if the following conditions are satisfied:
% Given that C0 = particle inertia
% C1 = options.SocialAttraction
% C2 = options.CognitiveAttraction
% 1) 0 < (C1 + C2) < 4
% 2) (C1 + C2)/2 - 1 < C0 < 1
% If conditions 1 and 2 are satisfied, the system will be guaranteed to
% converge to a stable equilibrium point. However, whether or not this
% point is actually the global minimum cannot be guaranteed, and its
% acceptability as a solution should be verified by the user.
%
% x = pso(problem)
% Parameters imported from problem structure.
%
% [x,fval] = pso(...)
% Returns the fitness value of the best solution.
%
% [x,fval,exitflag] = pso(...)
% Returns information on outcome of pso run. This should match exitflag
% values for GA where applicable, for code reuseability between the two
% toolboxes.
%
% [x,fval,exitflag,output] = pso(...)
% The structure output contains more detailed information about the PSO
% run. It should match output fields of GA, where applicable.
%
% [x,fval,exitflag,output,population] = pso(...)
% A matrix population of size PopulationSize x nvars, with the locations of
% particles across the rows. This may be used to save the final positions
% of all the particles in a swarm.
%
% [x,fval,exitflag,output,population,scores] = pso(...)
% Final scores of the particles in population.
%
% Bibliography
% J Kennedy, RC Eberhart, YH Shi. Swarm Intelligence. Academic Press, 2001.
%
% SM Mikki, AA Kishk. Particle Swarm Optimization: A Physics-Based
% Approach. Morgan & Claypool, 2008.
%
% RE Perez and K Behdinan. "Particle swarm approach for structural
% design optimization." Computers and Structures, Vol. 85:1579-1588, 2007.
%
% See also:
% PSODEMO, PSOOPTIMSET, PSOBINARY.
if ~nargin % Rosenbrock's banana function by default, as a demonstration
fitnessfcn = @(x) 100*(x(2)-x(1)^2)^2 + (1-x(1))^2 ;
nvars = 2 ;
LB = [-1.5, -2] ;
UB = [2, 2] ;
options.PopInitRange = [[-2;4],[-1;2]] ;
options.PlotFcns = {@psoplotbestf,@psoplotswarmsurf} ;
options.Generations = 200 ;
options.DemoMode = 'on' ;
options.KnownMin = [1 1] ;
options.OutputFcns = {} ;
options.ConstrBoundary = 'penalize' ;
options.UseParallel = 'never' ;
elseif isstruct(fitnessfcn)
nvars = fitnessfcn.nvars ;
Aineq = fitnessfcn.Aineq ;
bineq = fitnessfcn.bineq ;
Aeq = fitnessfcn.Aeq ;
beq = fitnessfcn.beq ;
LB = fitnessfcn.LB ;
UB = fitnessfcn.UB ;
nonlcon = fitnessfcn.nonlcon ;
if ischar(nonlcon) && ~isempty(nonlcon)
nonlcon = str2func(nonlcon) ;
end
options = fitnessfcn.options ;
fitnessfcn = fitnessfcn.fitnessfcn ;
elseif nargin < 2
msg = 'PSO requires at least two input arguments' ;
error('%s, or a problem structure. Type >> help pso for details.',msg)
end % if ~nargin
if ~exist('options','var') % Set default options
options = struct ;
end % if ~exist
options = psooptimset(options) ;
options.Verbosity = 1 ; % For options.Display == 'final' (default)
if strcmpi(options.Display,'off')
options.Verbosity = 0 ;
elseif strncmpi(options.Display,'iter',4)
options.Verbosity = 2 ;
elseif strncmpi(options.Display,'diag',4)
options.Verbosity = 3 ;
end
if ~exist('Aineq','var'), Aineq = [] ; end
if ~exist('bineq','var'), bineq = [] ; end
if ~exist('Aeq','var'), Aeq = [] ; end
if ~exist('beq','var'), beq = [] ; end
if ~exist('LB','var'), LB = [] ; end
if ~exist('UB','var'), UB = [] ; end
if ~exist('nonlcon','var'), nonlcon = [] ; end
% Check for swarm stability
% -------------------------------------------------------------------------
if options.SocialAttraction + options.CognitiveAttraction >= 4 && ...
        options.Verbosity > 2
msg = 'Warning: Swarm is unstable and may not converge ' ;
msg = [msg 'since the sum of the cognitive and social attraction'] ;
msg = [msg ' parameters is greater than or equal to 4.'] ;
msg = [msg ' Suggest adjusting options.CognitiveAttraction and/or'] ;
sprintf('%s options.SocialAttraction.',msg)
end
% -------------------------------------------------------------------------
% Check for constraints and bit string population type
% -------------------------------------------------------------------------
if strncmpi(options.PopulationType,'bitstring',2) && ...
        (~isempty([Aineq,bineq]) || ~isempty([Aeq,beq]) || ...
        ~isempty(nonlcon) || ~isempty([LB,UB]))
Aineq = [] ; bineq = [] ; Aeq = [] ; beq = [] ; nonlcon = [] ;
LB = [] ; UB = [] ;
if options.Verbosity > 2
msg = sprintf('Constraints will be ignored') ;
msg = sprintf('%s for options.PopulationType ''bitstring''',msg) ;
warning('%s',msg) ;
end
end
% -------------------------------------------------------------------------
% Change this when nonlcon gets fully implemented:
% -------------------------------------------------------------------------
if ~isempty(nonlcon) && strcmpi(options.ConstrBoundary,'reflect')
if options.Verbosity > 2
msg = 'Non-linear constraints don''t have ''reflect'' boundaries' ;
msg = [msg, ' implemented.'] ;
warning('pso:main:nonlcon', ...
    '%s Changing options.ConstrBoundary to ''penalize''.',msg)
end
options.ConstrBoundary = 'penalize' ;
end
% -------------------------------------------------------------------------
% Is options.PopInitRange reconcilable with LB and UB constraints?
% -------------------------------------------------------------------------
% Resize PopInitRange in case it was given as one range for all dimensions
if size(options.PopInitRange,1) ~= 2 && size(options.PopInitRange,2) == 2
% Transpose PopInitRange if user provides nvars x 2 matrix instead
options.PopInitRange = options.PopInitRange' ;
elseif size(options.PopInitRange,2) == 1 && nvars > 1
% Resize PopInitRange in case it was given as one range for all dim
options.PopInitRange = repmat(options.PopInitRange,1,nvars) ;
elseif size(options.PopInitRange,2) ~= nvars
msg = 'Number of dimensions of options.PopInitRange does not' ;
msg = sprintf('%s match nvars. PopInitRange should be a',msg) ;
error('%s 2 x 1 or 2 x nvars matrix.',msg) ;
end
% Check initial population with respect to bound constraints
% Is this really desirable? Maybe there are some situations where the user
% specifically does not want a uniform inital population covering all of
% LB and UB?
if ~isempty(LB) || ~isempty(UB)
options.LinearConstr.type = 'boundconstraints' ;
if isempty(LB), LB = -inf*ones(1,nvars) ; end
if isempty(UB), UB = inf*ones(1,nvars) ; end
LB = reshape(LB,1,[]) ; UB = reshape(UB,1,[]) ;
options.PopInitRange = ...
    psocheckpopulationinitrange(options.PopInitRange,LB,UB) ;
end
% -------------------------------------------------------------------------
% Check validity of VelocityLimit
% -------------------------------------------------------------------------
if all(~isfinite(options.VelocityLimit))
options.VelocityLimit = [] ;
elseif isscalar(options.VelocityLimit)
options.VelocityLimit = repmat(options.VelocityLimit,1,nvars) ;
elseif ~isempty(length(options.VelocityLimit)) && ...
~isequal(length(options.VelocityLimit),nvars)
msg = 'options.VelocityLimit must be either a positive scalar' ;
error('%s, or a vector of size 1xnvars.',msg)
end % if isscalar
options.VelocityLimit = abs(options.VelocityLimit) ;
% -------------------------------------------------------------------------
% Setup for parallel computing
% -------------------------------------------------------------------------
if strcmpi(options.UseParallel,'always')
if strcmpi(options.Vectorized,'on')
if options.Verbosity > 2
msg = 'Both ''Vectorized'' and ''UseParallel'' options have ' ;
msg = [msg 'been set. The problem will be computed locally '] ;
warning('%s using the ''Vectorized'' computation method.',msg) ;
end
elseif isempty(ver('distcomp')) % Check for toolbox installed
if options.Verbosity > 2
msg = 'Parallel computing toolbox not installed. Problem' ;
warning('%s will be computed locally instead.',msg) ;
end
options.UseParallel = 'never' ;
else
poolalreadyopen = false ;
if isempty(gcp('nocreate'))
poolobj = parpool ;
addAttachedFiles(poolobj,which(func2str(fitnessfcn))) ;
else
poolalreadyopen = true ;
end
end
end
III. Operation results
IV. Notes
MATLAB version: R2014a