A brief introduction to the A* algorithm

0. Introduction

With the development of modern technology, aircraft types have multiplied and their applications have become increasingly specialized: the DJI PS-X625 UAV is used for plant protection, the BAOji X8 UAV for street-scene shooting and patrol monitoring, and the White Shark MIX underwater UAV for underwater rescue. An aircraft's performance is determined mainly by its internal flight control system and its external path planning. If path selection is left entirely to an operator steering the UAV by remote control during a task, high psychological and technical demands are placed on that operator, and human error risks damaging the aircraft; planning the flight path in advance avoids this risk. Measurement accuracy, reasonable path planning, and in-flight stability and safety all place ever higher requirements on the aircraft's integrated control system. UAV route planning is the problem of designing an optimal route that lets the UAV complete a specific flight task while avoiding obstacles and threat areas along the way. Common path planning algorithms are shown in Figure 1.

Figure 1 Common path planning algorithms

This paper mainly studies UAV flight path planning in the cruise stage. Assuming the UAV maintains constant altitude and speed in flight, path planning reduces to a two-dimensional planning problem. Among path planning algorithms, the A* algorithm is simple and easy to implement. Building on the basic A* algorithm, a new and easy-to-understand improved A* algorithm for UAV flight path planning is proposed.
1. Traditional A* algorithm

In the traditional A* algorithm, the planning area is rasterized and node expansion is restricted to the intersections of grid lines, so two consecutive path segments usually meet at an angle at a grid intersection. To smooth such a corner, magnify the two angled path segments, take one planning point on each segment as a tangent point, find the center of the circle tangent to both segments, draw the arc between the two tangent points, determine the central angle corresponding to that arc, and compute the arc length L with

L = Rα

where R is the radius of the tangent circle and α (in radians) is the central angle corresponding to the arc between the tangent points.
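The corner-smoothing arc length above can be sketched as follows. This is a minimal illustration (the original listing is MATLAB; this sketch is in Python), and `arc_length` is a hypothetical helper, not part of the original code: it takes the two grid points adjacent to the corner and the tangent-circle radius R, and returns the central angle α and the arc length L = Rα.

```python
import math

def arc_length(p_prev, corner, p_next, R):
    """Central angle and arc length of the radius-R circle tangent to
    both path segments meeting at `corner` (hypothetical helper)."""
    # Unit directions pointing away from the corner along each segment.
    v1 = (p_prev[0] - corner[0], p_prev[1] - corner[1])
    v2 = (p_next[0] - corner[0], p_next[1] - corner[1])
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    cos_t = (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)
    theta = math.acos(max(-1.0, min(1.0, cos_t)))  # interior angle at the corner
    alpha = math.pi - theta                        # central angle of the smoothing arc
    return alpha, R * alpha                        # L = R * alpha
```

For a right-angle corner the central angle is π/2, so a radius-2 circle yields an arc of length π.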

2. Introduction to the bat optimization algorithm (BA)

The Bat Algorithm (BA) is a swarm-intelligence heuristic search algorithm proposed by Professor Yang in 2010, and an effective method for searching for global optima. It is an iterative optimization technique: it initializes a group of random solutions, searches for the optimum through iteration, and generates new local solutions around the current best solution by random flight, thereby strengthening the local search. Compared with other algorithms, BA performs strongly in accuracy and effectiveness, and it has few parameters to adjust.
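The iteration around the current best solution can be sketched with the standard BA position/velocity update rules (Yang, 2010); this is a hedged Python sketch, and the frequency bounds `f_min`/`f_max` are assumed defaults, not values from this article:

```python
import numpy as np

rng = np.random.default_rng(0)

def ba_step(x, v, best, f_min=0.0, f_max=2.0):
    """One BA update per bat: x and v are (n_bats, dim) arrays and
    `best` is the current global best position."""
    beta = rng.random((x.shape[0], 1))      # uniform draw in [0, 1]
    freq = f_min + (f_max - f_min) * beta   # per-bat pulse frequency
    v_new = v + (x - best) * freq           # velocity pulled toward the best
    x_new = x + v_new                       # position update
    return x_new, v_new
```

Note that bats already sitting at the global best receive no pull, which is what concentrates the search around the current optimum.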

3. Introduction to differential evolution algorithm

1 introduction

Under the action of heredity, selection, and mutation, organisms in nature undergo survival of the fittest and evolve continuously from lower to higher forms. It has long been noted that this evolutionary law of survival of the fittest can be modeled to form optimization algorithms, and in recent years evolutionary algorithms have attracted extensive attention. Differential Evolution (DE) is an evolutionary computing technique [1] proposed by R. Storn and K. Price in 1995. It was originally conceived for fitting Chebyshev polynomials and was later found to be an effective technique for solving complex optimization problems in general. DE is an optimization algorithm based on swarm-intelligence theory: an intelligent optimization search generated by cooperation and competition among individuals in a population. Compared with classical evolutionary computation, it retains the population-based global search strategy while adopting real-number coding, a simple difference-based mutation operation, and a "one-to-one" competitive survival strategy, which reduces the complexity of the genetic operations. At the same time, DE's memory of the current search state lets it adjust its search strategy dynamically, giving it strong global convergence ability and robustness. Because it does not need problem-specific characteristic information, it is suitable for complex optimization problems that conventional mathematical programming methods find difficult or impossible to solve [2-5]. Therefore, as an efficient parallel search algorithm, differential evolution has significant academic and engineering value in both theory and application.
At present, differential evolution has been applied in many fields, such as artificial neural networks, power systems, mechanical design, robotics, signal processing, bioinformatics, economics, modern agriculture, and operations research. However, although DE has been widely studied, its research results remain scattered and unsystematic compared with those of other evolutionary algorithms, and no significant theoretical breakthrough has yet been made.

2 Theory of the Differential Evolution Algorithm

2.1 Principle of the Differential Evolution Algorithm

Differential evolution is a stochastic heuristic search algorithm that is easy to use and has strong robustness and global optimization ability. From a mathematical point of view it is a random search algorithm; from an engineering point of view it is an adaptive iterative optimization process. Besides good convergence, DE is very easy to understand and implement: it contains only a few control parameters, and their values can remain unchanged throughout the iterative process. DE is a self-organizing minimization method that requires very little input from the user. Its key idea differs from traditional evolutionary methods: traditional methods use a predetermined probability distribution function to perturb vectors, whereas DE's self-organizing scheme perturbs an existing vector with two randomly selected distinct vectors from the population, and every vector in the population is perturbed in this way. Because these perturbations of population vectors can be carried out independently, the method is inherently parallel. If a new vector yields a lower cost than its predecessor, it replaces the predecessor.
Like other evolutionary algorithms, DE operates on a population of candidate solutions, but its reproduction scheme is different: it generates a new parameter vector by adding the weighted difference of two population members to a third member, an operation called "mutation". The parameters of the mutated vector are then mixed with those of a predetermined target vector, according to certain rules, to produce a trial vector; this is called "crossover". Finally, if the trial vector's cost is lower than the target vector's, the trial vector replaces the target vector in the next generation; this is called "selection". Every member of the population serves once as the target vector, so the next generation has the same number of competitors. The best parameter vector of each generation is evaluated during evolution to record the minimization process. By generating new individuals through such random difference perturbations, the search is guided toward the global optimum with very good convergence [6-7].
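The mutation/crossover/selection cycle described above can be sketched as follows. This is a minimal DE/rand/1/bin implementation in Python, not the article's MATLAB code; `sphere` is a hypothetical test objective, and the defaults F = 0.5 and CR = 0.9 follow the parameter guidance given later in the article:

```python
import numpy as np

rng = np.random.default_rng(1)

def de_minimize(f, bounds, NP=30, F=0.5, CR=0.9, gens=200):
    """Basic DE/rand/1/bin: mutation, binomial crossover, one-to-one selection."""
    D = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = lo + rng.random((NP, D)) * (hi - lo)        # random initial population
    cost = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(NP):
            # Mutation: v = x_r1 + F * (x_r2 - x_r3), r1..r3 distinct from i
            r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
            v = pop[r1] + F * (pop[r2] - pop[r3])
            v = np.clip(v, lo, hi)                    # boundary absorption
            # Binomial crossover: take each parameter from v with probability CR
            mask = rng.random(D) < CR
            mask[rng.integers(D)] = True              # guarantee at least one mutated gene
            u = np.where(mask, v, pop[i])
            # One-to-one selection against the target vector
            cu = f(u)
            if cu <= cost[i]:
                pop[i], cost[i] = u, cu
    return pop[cost.argmin()], cost.min()

def sphere(x):                                        # hypothetical test objective
    return float(np.sum(x * x))
```

For example, `de_minimize(sphere, [(-5, 5)] * 3)` converges to a point near the origin with a cost close to zero.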

2.2 Characteristics of the Differential Evolution Algorithm

In the roughly twenty years since it was proposed, differential evolution has been widely studied and successfully applied. The algorithm has the following characteristics:

(1) Simple structure, easy to use. DE performs its genetic operations mainly through the differential mutation operator, which involves only vector addition and subtraction, so it is easy to implement. The algorithm uses probabilistic transition rules rather than deterministic rules. It also has few control parameters, whose influence on performance has been studied and distilled into practical guidelines, making it easy for users to choose good parameter settings for their problem.

(2) Excellent performance. DE has good reliability, high efficiency, and robustness. For large-scale, nonlinear, non-differentiable continuous problems its solving efficiency is better than that of other evolutionary methods, and many researchers continue to improve it further.

(3) Adaptability. The differential mutation operator can use a fixed constant, or it can adjust the mutation step size and search direction automatically according to the objective function, improving search quality.

(4) Inherent parallelism. DE can search cooperatively, using each individual's local information and the population's global information to guide further search, and it converges faster under the same accuracy requirement.

(5) Generality. The algorithm operates directly on structured objects, independent of problem-specific information and without restrictions on the objective function.
Differential evolution is very simple to operate and easy to program, and it is especially effective for high-dimensional function optimization problems.

3 Types of Differential Evolution Algorithms

3.1 Basic Differential Evolution Algorithm

The operating procedure of the basic differential evolution algorithm is as follows [8]: (1) initialization; (2) mutation; (3) crossover; (4) selection; (5) boundary condition handling.

Initialization. The differential evolution algorithm uses NP real-valued parameter vectors of dimension D as the population of each generation, with each individual expressed as x_i = (x_{i,1}, x_{i,2}, ..., x_{i,D}), i = 1, 2, ..., NP. When a component violates a boundary constraint, one common remedy is to regenerate it randomly within the feasible region; another is boundary absorption, i.e., setting any value that exceeds a boundary constraint to the adjacent boundary value.

3.2 Other Forms of the Differential Evolution Algorithm

The above is the most basic procedure of the differential evolution algorithm. In practical applications several variants have also been developed, distinguished by the notation DE/x/y/z. Here x specifies whether the base vector to be mutated is chosen at random ("rand") or as the best individual ("best"); y is the number of difference vectors used; and z indicates the crossover scheme, the crossover described earlier being denoted "bin". In this notation the basic differential evolution strategy is written DE/rand/1/bin; other forms also exist [5].

3.3 Improved Differential Evolution Algorithm: Adaptive Differential Evolution

As an efficient parallel optimization algorithm, differential evolution has developed rapidly and many improved versions have appeared. One example, introduced below, is a differential evolution algorithm with an adaptive operator [9].

4. Differential evolution algorithm flow

Differential evolution adopts real-number coding, a simple difference-based mutation operation, and a "one-to-one" competitive survival strategy. The specific steps are as follows:

(1) Determine the control parameters of the differential evolution algorithm and the specific strategy to adopt. The control parameters include the population size, mutation operator, crossover operator, maximum number of generations, termination condition, etc.
(2) Randomly generate the initial population; set the generation counter k = 1.
(3) Evaluate the initial population, i.e., compute the objective function value of each individual.
(4) Check whether the termination condition or the maximum number of generations has been reached; if so, stop and output the best individual at that point as the solution; otherwise, continue.
(5) Perform mutation and crossover, handle boundary conditions, and obtain the temporary population.
(6) Evaluate the temporary population, computing the objective function value of each of its individuals.
(7) Perform "one-to-one" selection between each individual of the temporary population and the corresponding individual of the original population to obtain the new population.
(8) Set k = k + 1 and go to step (4).

The operation flow of the differential evolution algorithm is shown in Figure 3.1.

Control parameters have a great influence on any global optimization algorithm, and there are empirical rules for selecting the control variables of the differential evolution algorithm.

Population size NP. In general, the larger the population size NP, the more individuals there are, the better the population diversity, and the stronger the search ability, but the computational cost also grows, so NP cannot be made arbitrarily large. Empirically, a reasonable choice of NP is between 5D and 10D, and NP ≥ 4 must be satisfied so that the differential evolution algorithm has enough distinct vectors for mutation.

Mutation operator F. The mutation operator F ∈ [0, 2] is a real constant factor that determines the amplification of the difference vector. If F is small, the algorithm may converge prematurely; as F increases, so does the ability to escape local optima. However, when F > 1, rapid convergence to the optimum becomes difficult, because the population converges poorly when the perturbation of the difference vector exceeds the distance between two individuals. Current studies show that values of F below 0.4 or above 1 are only occasionally effective, and F = 0.5 is usually a good initial choice. If the population converges prematurely, F or NP should be increased.

Crossover operator CR. The crossover operator CR is a real number in [0, 1] that controls the probability that a parameter of the trial vector comes from the randomly selected mutant vector rather than from the original vector. The larger CR is, the more likely crossover occurs. CR = 0.1 is a good conservative choice, but a larger CR usually accelerates convergence; to see whether a quick solution is possible, try CR = 0.9 or CR = 1.0 first.

Maximum number of generations G. The maximum number of generations G is the parameter that defines the end condition of the differential evolution algorithm: once the specified number of generations is reached, the algorithm stops and outputs the best individual of the current population as the optimal solution. G is generally set between 100 and 500.

The maximum number of generations can serve as the termination condition, and other criteria can be added; typically the program also terminates when the objective function value falls below a threshold, usually chosen as 10^-6. Among the above parameters, F and CR, like NP, are held constant during the search. In general F and CR affect the convergence speed and robustness of the search, and their optimal values depend not only on the characteristics of the objective function but also on NP. Suitable values of F, CR, and NP can usually be found by trial and error over a few test runs with different values.
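The rules of thumb above (NP between 5D and 10D with NP ≥ 4, F = 0.5 as an initial choice, G between 100 and 500, termination when the objective drops below 10^-6) can be collected into a small helper. This is a Python sketch with hypothetical names; the chosen defaults CR = 0.9 and G = 200 are illustrative picks within the stated ranges:

```python
def default_de_params(D):
    """Default DE control parameters following the rules of thumb in the
    text (hypothetical helper): NP in [5D, 10D] with NP >= 4, F = 0.5,
    CR = 0.9 for fast convergence, G in [100, 500], threshold 1e-6."""
    NP = max(4, 5 * D)
    return {"NP": NP, "F": 0.5, "CR": 0.9, "G": 200, "vtr": 1e-6}

def should_stop(gen, best_cost, G=200, vtr=1e-6):
    """Termination test: stop when the generation cap G is reached or the
    best objective value drops below the value-to-reach threshold vtr."""
    return gen >= G or best_cost < vtr
```

The `should_stop` check would replace step (4) of the flow above when both criteria are in use.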

4. Part of the source code

clear all;
clc;
close all;
tic
global DEM safth hmax scfitness;
a=load('XYZmesh.mat'); DEM=a; safth=60;   % terrain DEM and safe flight height above terrain

hmax=max(max(DEM.Z));
hmin=min(min(DEM.Z));
%DEM.Z=randint(size(DEM.Z,1),size(DEM.Z,2),[0 100]);
ganum=100;
dnanum=50;
dnalength=50;
childrennum=dnanum;

np=zeros(1,dnanum);

nsort1=zeros(1,dnanum);
nsort2=zeros(1,dnanum);
nsort3=zeros(1,dnanum);

fitness1=zeros(dnanum,3);
fitness2=zeros(dnanum,3);
fitness3=zeros(dnanum,3);

scfitness=zeros(dnanum,3);

congestion=zeros(1,dnanum);
startpoint=[1,1,100];
goalpoint=[101,101,100];
startpoint(3)=DEM.Z(startpoint(1),startpoint(2))+safth;
goalpoint(3)=DEM.Z(goalpoint(1),goalpoint(2))+safth;
dna1=zeros(dnanum,dnalength+1,3);
dna2=zeros(dnanum,dnalength+1,3);
dna3=zeros(dnanum,dnalength+1,3);
totalfitness=zeros(3,ganum+1,3);  % 3 x (ganum+1) x 3 matrix: min/max/mean of each of the three objectives per generation

for i=1:1:dnanum
    for j=2:1:dnalength
        for k=1:1:3
         %dna1(i,j,k)=randint(1,1,[1 101]);
        dna1(i,j,1)=j*100/dnalength; 
        dna2(i,j,1)=j*100/dnalength; 
        dna3(i,j,1)=j*100/dnalength; 
        dna1(i,j,2)=j*100/dnalength+randint(1,1,[0 20])-10;
        dna2(i,j,2)=j*100/dnalength+randint(1,1,[0 20])-20;
        dna3(i,j,2)=j*100/dnalength+randint(1,1,[0 20])+15; % key point of the improvement: each population is biased toward a different corridor
        if dna1(i,j,2)<1, dna1(i,j,2)=1;
        elseif dna1(i,j,2)>101, dna1(i,j,2)=101;
        end
        if dna2(i,j,2)<1, dna2(i,j,2)=1;
        elseif dna2(i,j,2)>101, dna2(i,j,2)=101;
        end
        if dna3(i,j,2)<1, dna3(i,j,2)=1;
        elseif dna3(i,j,2)>101, dna3(i,j,2)=101;
        end
        %dna1(i,j,k)=floor(100*j/dnalength);
        %dna1(i,j,k)=j;
        end    
        dna1(i,j,3)=DEM.Z(dna1(i,j,1),dna1(i,j,2))+safth; 
        dna2(i,j,3)=DEM.Z(dna2(i,j,1),dna2(i,j,2))+safth; 
        dna3(i,j,3)=DEM.Z(dna3(i,j,1),dna3(i,j,2))+safth; 
%         dna1(i,j,3)=randint(1,1,[safth safth+hmax]);
        %dna1(i,j,3)=safth+hmax;
    end
    for k=1:1:3
    dna1(i,1,k)=startpoint(k);
    dna1(i,dnalength+1,k)=goalpoint(k);
    dna2(i,1,k)=startpoint(k);
    dna2(i,dnalength+1,k)=goalpoint(k);
    dna3(i,1,k)=startpoint(k);
    dna3(i,dnalength+1,k)=goalpoint(k);
    end
end

for n=1:1:ganum
n   % display the current generation number

fitness1=NSGA2_fitness(dna1);
fitness2=NSGA2_fitness(dna2);
fitness3=NSGA2_fitness(dna3);
scfitness=fitness1;

mmin(1)=min(fitness1(:,1));
mmin(2)=min(fitness1(:,2));
mmin(3)=min(fitness1(:,3));
mmax(1)=max(fitness1(:,1));
mmax(2)=max(fitness1(:,2));
mmax(3)=max(fitness1(:,3));

totalfitness(1,n,:)=mmin;
totalfitness(2,n,:)=mmax;
totalfitness(3,n,:)=(mmin+mmax)/2;
nsort1=NSGA2_SORT(fitness1);
nsort2=NSGA2_SORT(fitness2);
nsort3=NSGA2_SORT(fitness3);

children1=NSGA2_chlidren(dna1,childrennum,nsort1);
children2=NSGA2_chlidren(dna2,childrennum,nsort2);
children3=NSGA2_chlidren(dna3,childrennum,nsort3);

children_out1=NSGA2_cross(children1);
children_out2=NSGA2_cross(children2);
children_out3=NSGA2_cross(children3);

%children1=children_out1;

children1=NSGA2_variation(children_out1);
children2=NSGA2_variation(children_out2);
children3=NSGA2_variation(children_out3);


combinedna1(1:1:dnanum,:,:)=dna1(1:1:dnanum,:,:);
combinedna2(1:1:dnanum,:,:)=dna2(1:1:dnanum,:,:);
combinedna3(1:1:dnanum,:,:)=dna3(1:1:dnanum,:,:);

combinedna1(dnanum+1:1:dnanum+childrennum,:,:)=children1(1:1:dnanum,:,:);
combinedna2(dnanum+1:1:dnanum+childrennum,:,:)=children2(1:1:dnanum,:,:);
combinedna3(dnanum+1:1:dnanum+childrennum,:,:)=children3(1:1:dnanum,:,:);

childrenfitness1=NSGA2_fitness(children1); 
childrenfitness2=NSGA2_fitness(children2); 
childrenfitness3=NSGA2_fitness(children3); 

combinefitness1(1:1:dnanum,:)=fitness1(1:1:dnanum,:);
combinefitness2(1:1:dnanum,:)=fitness2(1:1:dnanum,:);
combinefitness3(1:1:dnanum,:)=fitness3(1:1:dnanum,:);

combinefitness1(dnanum+1:1:dnanum+childrennum,:)=childrenfitness1(1:1:dnanum,:);
combinefitness2(dnanum+1:1:dnanum+childrennum,:)=childrenfitness2(1:1:dnanum,:);
combinefitness3(dnanum+1:1:dnanum+childrennum,:)=childrenfitness3(1:1:dnanum,:);


dna1=NSGA2_BESTN(combinedna1,combinefitness1,dnanum);
dna2=NSGA2_BESTN(combinedna2,combinefitness2,dnanum);
dna3=NSGA2_BESTN(combinedna3,combinefitness3,dnanum);

end
mmin(1)=min(fitness1(:,1));
mmin(2)=min(fitness1(:,2));
mmin(3)=min(fitness1(:,3));

mmax(1)=max(fitness1(:,1));
mmax(2)=max(fitness1(:,2));
mmax(3)=max(fitness1(:,3));

totalfitness(1,ganum+1,:)=mmin;
totalfitness(2,ganum+1,:)=mmax;
totalfitness(3,ganum+1,:)=(mmin+mmax)/2;

nsort1=NSGA2_SORT(fitness1);
nsort2=NSGA2_SORT(fitness2);
nsort3=NSGA2_SORT(fitness3);

fitness1=NSGA2_fitness(dna1);
fitness2=NSGA2_fitness(dna2);
fitness3=NSGA2_fitness(dna3);

bestnum=5;

bestdna1=zeros(bestnum,dnalength+1,3);
bestdna2=zeros(bestnum,dnalength+1,3);
bestdna3=zeros(bestnum,dnalength+1,3);

bestdna1=NSGA2_RESULTN(dna1,fitness1,bestnum);
bestdna2=NSGA2_RESULTN(dna2,fitness2,bestnum);
bestdna3=NSGA2_RESULTN(dna3,fitness3,bestnum);

bestfitness1=NSGA2_fitness(bestdna1);
bestfitness2=NSGA2_fitness(bestdna2);
bestfitness3=NSGA2_fitness(bestdna3);

nsort1=NSGA2_SORT(bestfitness1);
nsort2=NSGA2_SORT(bestfitness2);
nsort3=NSGA2_SORT(bestfitness3);

resultnum=0;

for i=1:1:bestnum
    if nsort1(1,i)==1
       resultnum=resultnum+1;
       resultdna1(resultnum,:,:)=bestdna1(i,:,:);
       resultdnafitness1(resultnum,:)=bestfitness1(i,:);
    end
    if nsort2(1,i)==1
       resultnum=resultnum+1;
       resultdna2(resultnum,:,:)=bestdna2(i,:,:);
       resultdnafitness2(resultnum,:)=bestfitness2(i,:);
    end
    if nsort3(1,i)==1
       resultnum=resultnum+1;
       resultdna3(resultnum,:,:)=bestdna3(i,:,:);
       resultdnafitness3(resultnum,:)=bestfitness3(i,:);
    end
end

resultdnafitness1(:,1)=resultdnafitness1(:,1)/min(resultdnafitness1(:,1));
resultdnafitness1(:,2)=resultdnafitness1(:,2)/min(resultdnafitness1(:,2));
resultdnafitness1(:,3)=resultdnafitness1(:,3)/min(resultdnafitness1(:,3));

resultdnafitness2(:,1)=resultdnafitness2(:,1)/min(resultdnafitness2(:,1));
resultdnafitness2(:,2)=resultdnafitness2(:,2)/min(resultdnafitness2(:,2));
resultdnafitness2(:,3)=resultdnafitness2(:,3)/min(resultdnafitness2(:,3));

resultdnafitness3(:,1)=resultdnafitness3(:,1)/min(resultdnafitness3(:,1));
resultdnafitness3(:,2)=resultdnafitness3(:,2)/min(resultdnafitness3(:,2));
resultdnafitness3(:,3)=resultdnafitness3(:,3)/min(resultdnafitness3(:,3));

resultnum
resultdnafitness1
resultdnafitness2
resultdnafitness3


% figure(1);
% mesh(DEM.X,DEM.Y,DEM.Z);
% dna1(i,j,3)=DEM.Z(dna1(i,j,1),dna1(i,j,2))+safth; 
% dna2(i,j,3)=DEM.Z(dna2(i,j,1),dna2(i,j,2))+safth; 
% dna3(i,j,3)=DEM.Z(dna3(i,j,1),dna3(i,j,2))+safth; 
% axis([0 100 0 100 hmin hmax*2]);
% colormap jet;
% grid off;
% xlabel('x/m');
% ylabel('y/m');
% zlabel('z/m');
% hold on;

5. Operation results

MATLAB version and references

1 MATLAB version: 2014a

[1] Yang Baoyang, Yu Jizhou, Yang Shan. Intelligent Optimization Algorithm and Its MATLAB Example (2nd Edition) [M]. Publishing House of Electronics Industry, 2016.
[2] Zhang Yan, Wu Shuigen. MATLAB Optimization Algorithm Source Code [M]. Tsinghua University Press, 2017.
[3] Wu Qian, Luo Jinbiao, Gu Xiaoqun, Zeng Qing.
[4] Deng Ye, Jiang Xiangju. Optimization Algorithm of UAV 3D Trajectory Planning Based on Improved PSO [J]. Journal of Ordnance Equipment Engineering, 2021, 42(08).
[5] Ma Yunhong, Zhang Heng, Qi Legrong, He Jianliang. Trajectory Planning Algorithm of Quadrotor UAV Based on Improved Artificial Potential Field Method [J]. Sensors and Microsystems, 2021, 40(07).
[6] 3D UAV Path Planning Based on Improved A* Algorithm [J]. Electronics Optics & Control, 2019, 26(10).
[7] Jiao Yang. Research on 3D Path Planning of UAV Based on Improved Ant Colony Algorithm [J]. Ship Electronic Engineering, 2019, 39(03).