I. Introduction to the Invasive Weed Optimization (IWO) Algorithm (with project report)

1 IWO definition
IWO is a random search algorithm proposed by A. R. Mehrabian et al. in 2006, derived from the way weeds evolve and colonize in nature. IWO mimics the basic processes of a weed invasion: spatial diffusion of seeds, growth, reproduction, and competitive exclusion, and it has strong robustness and adaptability.

The IWO algorithm is an efficient random intelligent optimization algorithm. It uses the fittest individuals in the population to guide evolution: offspring are scattered around their parents according to a normal distribution whose standard deviation changes dynamically over the generations, and the best individuals are then retained through competition. The algorithm thus balances population diversity against selection strength.

2 IWO search and performance
Compared with other evolutionary algorithms, IWO explores a larger search space and achieves better performance. Compared with GA, the IWO algorithm is simple and easy to implement, needs no genetic operators, and converges simply and effectively to the optimal solution of the problem, making it a powerful intelligent optimization tool.

3 IWO algorithm implementation steps
3.1 Initialization
Distribute an initial population of weeds at random positions over the search space; the number of weeds is adjusted according to the actual problem.
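As a rough illustration, this step takes only a few lines of MATLAB (a minimal sketch; the values of N, dim, lb, and ub are placeholders, not the project's settings):

% Minimal initialization sketch; N, dim, lb, ub are illustrative placeholders
N = 30; dim = 10;                  % initial number of weeds, problem dimension
lb = 0; ub = 1;                    % search-space bounds
X = lb + rand(N, dim).*(ub - lb);  % random positions uniformly spread over the space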

3.2 Progeny reproduction
The parents distributed across the search space produce the next generation of seeds according to their fitness values: individuals with higher fitness produce more seeds, and individuals with lower fitness produce fewer.
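In the standard IWO this mapping is a linear interpolation between a minimum and a maximum seed count; a hedged sketch (fitness, smin, and smax below are illustrative placeholders):

% Linear seed-count rule; fitness, smin, smax are illustrative placeholders
fitness = rand(1, 30);                     % placeholder colony fitness values
smin = 1; smax = 5;                        % fewest/most seeds per weed
fmin = min(fitness); fmax = max(fitness);  % worst and best fitness in the colony
nseed = floor(smin + (fitness - fmin)./(fmax - fmin + eps).*(smax - smin));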

3.3 Spatial diffusion
The offspring are distributed around their parent according to a fixed rule: the positions follow a normal distribution whose mean is the parent's position and whose standard deviation shrinks with the generation number.
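In the original IWO formulation the standard deviation decays from an initial value to a final value under a nonlinear modulation index n; a sketch of one seed placement (all parameter values below are illustrative assumptions):

% Standard deviation decays with the generation number; values are illustrative
maxgen = 300; iter = 50; n = 3;              % generation counters, modulation index
sigma_init = 3; sigma_final = 0.01;          % initial and final standard deviation
sigma = ((maxgen - iter)/maxgen)^n*(sigma_init - sigma_final) + sigma_final;
parent = rand(1, 2); lb = 0; ub = 1;         % placeholder parent and bounds
seed = parent + sigma*randn(1, 2);           % normally distributed around the parent
seed = min(max(seed, lb), ub);               % clamp the seed to the search space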

3.4 Competitive selection
When the number of individuals after a round of reproduction exceeds the population upper limit, offspring and parents are ranked together by fitness, and the individuals with low fitness are eliminated.
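This amounts to sorting parents and offspring together and truncating to the population limit; a minimal sketch (X, fitness, and Pmax below are illustrative placeholders):

% Truncation selection; X, fitness, Pmax are illustrative placeholders
X = rand(60, 2); fitness = rand(1, 60);    % colony after a round of reproduction
Pmax = 50;                                 % population upper limit
[~, idx] = sort(fitness, 'descend');       % rank parents and offspring together
keep = idx(1:min(Pmax, numel(fitness)));
X = X(keep, :); fitness = fitness(keep);   % low-fitness weeds are eliminated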

II. Partial source code

% IWO
% CPSO
% IIWO
%% Clear
clc;
%% Network parameters
L = 20;          % region side length
V = 24;          % number of nodes
Rs = 2.5;        % sensing radius
Rc = 5;          % communication radius
Re = 0.05;       % sensing error
data = 1;        % discretization granularity
%% Basic parameters
N = 30;          % population size
dim = 2*V;       % dimension
maxgen = 300;    % maximum number of iterations
ub = L;
lb = 0;
X = rand(N, dim).*(ub - lb) + lb;
for i = 1:N
    fitness(i) = fun(X(i, :), L, Rs, Re, data);
end
%% Run the three algorithms
[bestvalue_IWO, gbest_IWO, Curve_IWO] = IWO(N, maxgen, Rs, Re, L, data, X, fitness, lb, ub, dim);
[bestvalue_CPSO, gbest_CPSO, Curve_CPSO] = CPSO(N, maxgen, Rs, Re, L, data, X, fitness, lb, ub, dim);
[bestvalue_IIWO, gbest_IIWO, Curve_IIWO] = IIWO(N, maxgen, Rs, Re, L, data, X, fitness, lb, ub, dim);
%% Plot the comparison
figure;
t = 1:maxgen;
plot(t, Curve_IWO, 'ro-', t, Curve_CPSO, 'kx-', t, Curve_IIWO, 'bp-', 'LineWidth', 1.5, 'MarkerSize', 7, 'MarkerIndices', 1:20:maxgen);
legend('IWO', 'CPSO', 'IIWO');
xlabel 'Number of iterations';
ylabel 'Coverage';

function Rcov = fun(Position, L, Rs, Re, data)
%% Coverage ratio of a deployment
x = Position(1:2:end);    % x coordinates of the nodes
y = Position(2:2:end);    % y coordinates of the nodes
lamda1 = 0.1;             % sensing attenuation coefficient
lamda2 = 0.1;
epsilon1 = 2;
epsilon2 = 2;
N = length(x);            % total number of nodes
% Discretize the monitored region into grid points
m = 0:data:L;
n = 0:data:L;
k = 1;
for i = 1:numel(m)
    for j = 1:numel(n)
        p(k, :) = [m(i), n(j)];
        k = k+1;
    end
end
Total = size(p, 1);
% Compute the coverage probability of each grid point
for j = 1:size(p, 1)
    C = zeros(N, 1);
    for i = 1:N
        dij = sqrt((p(j, 1)-x(i))^2+(p(j, 2)-y(i))^2);
        if Rs-Re >= dij
            C(i) = 1;
        elseif Rs+Re > dij
            C(i) = exp(-lamda1*(dij-(Rs-Re))^epsilon1/((Rs+Re-dij)^epsilon2+lamda2));
        end
    end
    P(j) = 1-prod(1-C);
    % Coverage threshold
    if P(j) < 0.75
        P(j) = 0;
    end
end
Rcov = sum(P(1:end))/Total;
end

function [fitnessgbest, gbest, yy] = CPSO(N, maxgen, Rs, Re, L, data, X, fitness, lb, ub, dim)
c1 = 2;                      % social cognitive parameter
c2 = 2;                      % self-recognition parameter
Vmax = 2;                    % maximum speed
Vmin = -2;                   % minimum speed
u = 4;                       % chaos coefficient (logistic map)
w = 0.8;                     % inertia weight
V = rand(N, dim).*(Vmax - Vmin) + Vmin;
%% Individual and global optima
[bestfitness, bestindex] = max(fitness);
gbest = X(bestindex, :);     % global best position
zbest = X;                   % individual best positions
fitnessgbest = bestfitness;  % global best fitness value
fitnesszbest = fitness;      % individual best fitness values
%% Show the initial result
x = gbest(1:2:end);
y = gbest(2:2:end);
disp('Initial position:');
for i = 1:dim/2
    disp([num2str(x(i)), ' ', num2str(y(i))]);
end
disp(['Initial coverage:', num2str(fitnessgbest)]);
% Plot the initial coverage
figure;
for i = 1:dim/2
    axis([0 L 0 L]);         % limit the coordinate range
    sita = 0:pi/100:2*pi;    % angle in [0, 2*pi]
    hold on;
    fill(x(i)+Rs*cos(sita), y(i)+Rs*sin(sita), 'b');
    plot(x(i)+Rs*cos(sita), y(i)+Rs*sin(sita), 'b');
end
plot(x, y, 'r+');
title 'Initial deployment';
%% Iterative optimization
for i = 1:maxgen
    
    for j = 1:N
        % Velocity update
        V(j, :) = w*V(j, :) + c1*rand*(zbest(j, :) - X(j, :)) + c2*rand*(gbest - X(j, :));
        V(j, V(j, :) > Vmax) = Vmax;
        V(j, V(j, :) < Vmin) = Vmin;
        % Position update
        X(j, :) = X(j, :) + V(j, :);
        X(j, X(j, :) > ub) = ub;
        X(j, X(j, :) < lb) = lb;
        fitness(j) = fun(X(j, :), L, Rs, Re, data);
    end
    for j = 1:N
        % Individual best update
        if fitness(j) > fitnesszbest(j)
            zbest(j, :) = X(j, :);
            fitnesszbest(j) = fitness(j);
        end
        % Global best update
        if fitness(j) > fitnessgbest
            gbest = X(j, :);
            fitnessgbest = fitness(j);
        end
    end
    %% Chaotic optimization of the best particle position
    Y(1, :) = (gbest - lb)/(ub - lb);
    % Logistic chaotic map
    for t = 1:N-1
        for e = 1:dim
            Y(t+1, e) = u*Y(t, e)*(1-Y(t, e));
        end
    end
    Y = Y.*(ub-lb)+lb;
    for j = 1:N
        fit(j) = fun(Y(j, :), L, Rs, Re, data);
    end
    % Find the best chaotic feasible solution
    [ybestfitness, ybestindex] = max(fit);
    ran = 1 + fix(rand()*N);        % random index in 1..N
    X(ran, :) = Y(ybestindex, :);
    fitness(ran) = ybestfitness;
    % Store the best solution of each generation in the yy array
    yy(i) = fitnessgbest;
    disp(['At iteration ', num2str(i), ', the best fitness is ', num2str(yy(i))]);
end
%% Show the final result
x = gbest(1:2:end);
y = gbest(2:2:end);
disp('Optimal location:');
for i = 1:dim/2
    disp([num2str(x(i)), ' ', num2str(y(i))]);
end
disp(['Optimal coverage:', num2str(yy(end))]);
%% Plot the iteration curve
figure;
plot(yy, 'r', 'LineWidth', 2);
title('Algorithm Training Process', 'FontSize', 12);
xlabel('Number of iterations', 'FontSize', 12);
ylabel('Network Coverage', 'FontSize', 12);
figure
for i = 1:dim/2
    axis([0 L 0 L]);         % limit the coordinate range
    sita = 0:pi/100:2*pi;    % angle in [0, 2*pi]
    hold on;
    fill(x(i)+Rs*cos(sita), y(i)+Rs*sin(sita), 'k');
end
plot(x, y, 'r+');
title('CPSO optimized coverage');
end
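The IWO and IIWO functions called by the main script are not included in this excerpt. For orientation, a minimal IWO implementation consistent with the call signature above might look like the following; this is only a sketch, and the parameter values (smin, smax, sigma_init, sigma_final, n, Pmax) are illustrative assumptions rather than the project's actual settings:

function [bestvalue, gbest, Curve] = IWO(N, maxgen, Rs, Re, L, data, X, fitness, lb, ub, dim)
% Minimal IWO sketch matching the call in the main script above.
% All parameter values below are illustrative assumptions, not the project's.
smin = 1; smax = 5;                      % fewest/most seeds per weed
sigma_init = L/10; sigma_final = 0.05;   % diffusion standard deviations
n = 3;                                   % nonlinear modulation index
Pmax = N;                                % colony size upper limit
Curve = zeros(1, maxgen);
for iter = 1:maxgen
    % Standard deviation decays over the generations
    sigma = ((maxgen - iter)/maxgen)^n*(sigma_init - sigma_final) + sigma_final;
    fmin = min(fitness); fmax = max(fitness);
    seeds = []; seedfit = [];
    for i = 1:size(X, 1)
        % Seed count grows linearly with fitness
        ns = floor(smin + (fitness(i) - fmin)/(fmax - fmin + eps)*(smax - smin));
        for s = 1:ns
            child = X(i, :) + sigma*randn(1, dim);  % normal diffusion around parent
            child = min(max(child, lb), ub);        % clamp to the search space
            seeds = [seeds; child];                               %#ok<AGROW>
            seedfit = [seedfit, fun(child, L, Rs, Re, data)];     %#ok<AGROW>
        end
    end
    % Competitive exclusion: rank parents and offspring, keep the best Pmax
    X = [X; seeds]; fitness = [fitness, seedfit];
    [~, idx] = sort(fitness, 'descend');
    keep = idx(1:min(Pmax, numel(fitness)));
    X = X(keep, :); fitness = fitness(keep);
    Curve(iter) = fitness(1);            % best coverage in this generation
end
bestvalue = fitness(1);
gbest = X(1, :);
end

The improved IIWO variant would modify details of this loop (for example, the diffusion step), but its source is not shown in the excerpt.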


III. Operation results

IV. MATLAB version and references

1 MATLAB version: 2014a

2 References
[1] Yang Baoyang, Yu Jizhou, Yang Shan. Intelligent Optimization Algorithm and Its MATLAB Example (2nd Edition) [M]. Publishing House of Electronics Industry, 2016.
[2] Zhang Yan, Wu Shuigen. MATLAB Optimization Algorithm Source Code [M]. Tsinghua University Press, 2017.
[3] Zhou Pin. Design and Application of MATLAB Neural Network [M]. Tsinghua University Press, 2013.
[4] Chen Ming. MATLAB Neural Network Principle and Examples of Fine Solution [M]. Tsinghua University Press, 2013.
[5] Fang Qingcheng. MATLAB R2016a Neural Network Design and Application: 28 Case Analyses [M]. Tsinghua University Press, 2018.