I. Introduction
The Genetic Algorithm (GA) is a branch of evolutionary computing. It is a computational model that simulates the mechanisms of heredity and natural selection in Darwinian biological evolution, and it searches for optimal solutions by imitating the natural evolutionary process. The algorithm is simple, general-purpose, robust, and well suited to parallel processing.
2 Characteristics and applications of genetic algorithms
The genetic algorithm is a robust search algorithm that can be used for complex system optimization. Compared with traditional optimization algorithms, it has the following characteristics:
(1) It operates on an encoding of the decision variables. Traditional optimization algorithms usually compute directly with the actual values of the decision variables, whereas a genetic algorithm operates on some encoded form of them. This encoding lets us borrow the concepts of chromosome and gene from biology, imitate the genetic and evolutionary mechanisms of organisms in nature, and conveniently apply genetic operators during the optimization.
(2) It uses fitness directly as the search information. A traditional optimization algorithm not only needs the value of the objective function, but its search is often constrained by the continuity of the objective function and may even require that the derivative of the objective function exist in order to determine the search direction. A genetic algorithm uses only a fitness value transformed from the objective function value to decide where to search next, with no derivatives or other auxiliary information. Using the objective value or individual fitness directly can still concentrate the search on the regions of the search space with higher fitness, which improves search efficiency.
(3) It uses search information from multiple points and therefore has implicit parallelism. A traditional optimization algorithm usually starts from a single initial point in the solution space and iterates toward the optimum. A single point carries little search information, so the search is inefficient and may stall in a local optimum. A genetic algorithm instead starts the search for the optimal solution from an initial population of many individuals. Selection, crossover, and mutation applied to this population produce a new generation that carries a large amount of population information; this information helps the search avoid unnecessary points, reduces the risk of getting trapped in a local optimum, and lets the search gradually approach the global optimum.
(4) It uses probabilistic search rather than deterministic rules. Traditional optimization algorithms often use deterministic search: the transition from one search point to the next follows a fixed direction and rule, which may prevent the search from reaching the optimum and limits the algorithm's range of application. A genetic algorithm is an adaptive search technique whose selection, crossover, mutation, and other operations are carried out probabilistically. This increases the flexibility of the search, allows convergence to the optimal solution with high probability, and gives good global optimization ability. However, parameters such as the crossover probability and mutation probability also affect the results and efficiency of the search, so choosing the parameters of a genetic algorithm well is an important problem in its application.
To sum up, because the overall search strategy and optimization method of a genetic algorithm do not rely on gradient information or other auxiliary knowledge, and only require evaluating the objective function and the corresponding fitness function that guides the search, genetic algorithms provide a general framework for solving complex system problems.
They do not depend on the specific domain of the problem and are robust to the type of problem, so they are widely used in many fields, including function optimization, combinatorial optimization of production scheduling problems, automatic control, robotics, image processing (image restoration, image edge feature extraction, ...), artificial life, genetic programming, and machine learning.
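Characteristic (2) above, searching with fitness values alone and no derivatives, can be illustrated with fitness-proportional (roulette-wheel) selection. The following Python sketch is illustrative only and not part of the original article; the population, fitness values, and function name are made-up examples:

```python
import random

def roulette_select(population, fitness, k):
    """Fitness-proportional selection: each individual is picked with
    probability fitness[i] / sum(fitness); only fitness values are used,
    never a derivative of the objective function."""
    total = sum(fitness)
    chosen = []
    for _ in range(k):
        r = random.uniform(0, total)   # spin the wheel
        acc = 0.0
        for ind, fit in zip(population, fitness):
            acc += fit
            if acc >= r:               # the slice containing r wins
                chosen.append(ind)
                break
    return chosen

# Illustrative population: fitter strings occupy larger slices of the wheel.
pop = ["00101", "11010", "10011", "01100"]
fit = [1.0, 4.0, 2.0, 3.0]
parents = roulette_select(pop, fit, 4)
print(parents)
```

Individuals with higher fitness are sampled more often, yet every individual keeps a nonzero chance of selection, which is what makes the search probabilistic rather than deterministic (characteristic (4)).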
3 Simple genetic algorithm
The Simple Genetic Algorithm (SGA) uses only the selection, crossover, and mutation operators. Its evolutionary process is simple, and it is the basis of other genetic algorithms.
3.1 The basic flow of a genetic algorithm
(1) Randomly generate an initial population of individuals encoded with a certain length (the length is related to the required precision of the problem).
(2) Evaluate each individual with the fitness function; individuals with high fitness are selected to take part in the genetic operations, and individuals with low fitness are eliminated.
(3) Form a new generation of the population from the individuals produced by the genetic operations (replication, crossover, mutation), and repeat until the stopping criterion is met (e.g. the generation counter GEN reaches a preset value).
(4) Take the best individual in the final generation as the result of the genetic algorithm.
Here GEN is the current generation number, M is the population size, and i is the index of an individual within the population.
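The flow above can be sketched as a minimal SGA loop. This is a Python illustration, not the article's MATLAB program; the OneMax objective (maximize the number of 1-bits), the parameter values, and the function name are all assumptions made for the example:

```python
import random

def run_sga(fitness_fn, n_bits=16, pop_size=30, max_gen=100, pc=0.8, pm=0.02):
    """Minimal simple-GA loop: random initial population -> evaluate ->
    select -> crossover -> mutate -> repeat until GEN reaches max_gen."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness_fn)
    for gen in range(max_gen):                    # stop criterion: generation limit
        fits = [fitness_fn(ind) for ind in pop]
        best = max([best] + pop, key=fitness_fn)  # remember the best individual
        total = sum(fits)
        def pick():                               # fitness-proportional selection
            r, acc = random.uniform(0, total), 0.0
            for ind, fit in zip(pop, fits):
                acc += fit
                if acc >= r:
                    return ind
            return pop[-1]
        nxt = []
        while len(nxt) < pop_size:
            a, b = pick()[:], pick()[:]
            if random.random() < pc:              # single-point crossover
                cut = random.randint(1, n_bits - 1)
                a, b = a[:cut] + b[cut:], b[:cut] + a[cut:]
            for child in (a, b):                  # bit-flip mutation
                for i in range(n_bits):
                    if random.random() < pm:
                        child[i] = 1 - child[i]
            nxt += [a, b]
        pop = nxt[:pop_size]
    return best

# Toy objective: maximize the number of 1-bits (OneMax).
random.seed(1)
best = run_sga(sum)
print(sum(best), "ones out of 16")
```

The loop mirrors steps (1)-(4): random initialization, fitness evaluation, probabilistic selection, genetic operations, and the best individual as the result.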
3.2 Components of the basic genetic algorithm
The basic genetic algorithm (SGA) consists of the coding scheme, the fitness function, the genetic operators (selection, crossover, mutation), and the running parameters.
3.2.1 Coding
(1) Binary coding. The length of the binary string is related to the required precision of the problem; the coding must be able to represent every individual in the solution space. Advantages: encoding and decoding are simple, and crossover and mutation are easy to implement. Disadvantage: the strings can become long.
(2) Other coding methods: Gray code, floating-point coding, symbolic coding, multi-parameter coding, etc.
3.2.2 Fitness function
The fitness function should effectively reflect the gap between each chromosome and the chromosome of the optimal solution of the problem.
3.2.3 Selection operator
3.2.4 Crossover operator
Crossover forms two new individuals by exchanging some genes between a pair of chromosomes in a certain way. It is the main characteristic that distinguishes genetic algorithms from other evolutionary algorithms and is the main method of generating new individuals. Before crossover, the individuals in the population must be paired, usually by random pairing. Common crossover schemes: single-point crossover, two-point crossover (and multi-point crossover; the more crossover points, the more likely the individual's structure is disrupted), uniform crossover, and arithmetic crossover.
3.2.5 Mutation operator
Mutation replaces the gene values at some loci of an individual's chromosome string with other alleles of those loci, thus forming a new individual.
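The crossover and mutation operators of 3.2.4 and 3.2.5 can be sketched on binary chromosomes as follows. This Python illustration is not from the article; the function names and the fixed cut points are assumptions made for the example:

```python
import random

def single_point_crossover(a, b, cut):
    """Exchange the tails of two chromosomes after position `cut`,
    producing two new individuals (3.2.4)."""
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def two_point_crossover(a, b, lo, hi):
    """Exchange the middle segment between positions lo and hi; more
    cut points disrupt an individual's structure more."""
    return a[:lo] + b[lo:hi] + a[hi:], b[:lo] + a[lo:hi] + b[hi:]

def bit_flip_mutation(chrom, pm, rng=random):
    """Replace the gene at each locus with its other allele (0 <-> 1)
    with probability pm (3.2.5)."""
    return [1 - g if rng.random() < pm else g for g in chrom]

a = [1, 1, 1, 1, 1, 1]
b = [0, 0, 0, 0, 0, 0]
print(single_point_crossover(a, b, 3))  # ([1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1])
print(two_point_crossover(a, b, 2, 4))  # ([1, 1, 0, 0, 1, 1], [0, 0, 1, 1, 0, 0])
```

Note that these operators assume binary coding; for permutation codings such as TSP tours, naive tail exchange creates duplicate cities, which is why the source listing below repairs duplicates after swapping a segment.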
In terms of the ability to generate new individuals during the run of a genetic algorithm, crossover is the main method of producing new individuals and determines the algorithm's global search ability, while mutation is only an auxiliary method of producing new individuals, yet it is an indispensable step because it determines the algorithm's local search ability. Together, the crossover and mutation operators carry out the global and local search of the search space, giving the genetic algorithm good search performance on optimization problems.
3.2.6 Running parameters
4 Basic principles of genetic algorithms
4.1 Schema theorem
4.2 Building blocks
A schema of low order, short defining length, and fitness above the average fitness of the population is called a building block. Building block hypothesis: under the action of genetic operators such as selection, crossover, and mutation, building blocks can be spliced together to form coding strings of higher fitness. The building block hypothesis expresses the basic idea of solving problems with genetic algorithms: better solutions can be produced by directly joining building blocks together.
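The schema quantities used above, order and defining length, can be computed directly. A small Python sketch (the helper names are assumptions; `*` marks a wildcard position in the schema):

```python
def order(schema):
    """Order o(H): the number of fixed (non-*) positions in the schema."""
    return sum(c != '*' for c in schema)

def defining_length(schema):
    """Defining length d(H): the distance between the first and last fixed
    positions. Building blocks are low-order, short schemata."""
    fixed = [i for i, c in enumerate(schema) if c != '*']
    return fixed[-1] - fixed[0] if len(fixed) > 1 else 0

# '1***0' fixes positions 0 and 4: order 2, defining length 4.
print(order("1***0"), defining_length("1***0"))  # 2 4
# '*11**' fixes adjacent positions 1 and 2: order 2, defining length 1.
print(order("*11**"), defining_length("*11**"))  # 2 1
```

A short defining length matters because single-point crossover can only break a schema when the cut falls between its outermost fixed positions.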
II. Source code
%%%%%%%%%%%%%%%%%%%%% Genetic algorithm for the TSP %%%%%%%%%%%%%%%%%%%%%
clear all;              % clear all variables
close all;              % close all figure windows
clc;                    % clear the command window
C=[1304 2312;3639 1315;4177 2244;3712 1399;3488 1535;3326 1556;...
   3238 1229;4196 1044;4312 790;4386 570;3007 1970;2562 1756;...
   2788 1491;2381 1676;1332 695;3715 1678;3918 2179;4061 2370;...
   3780 2212;3676 2578;4029 2838;4263 2931;3429 1908;3507 2376;...
   3394 2643;3439 3201;2935 3240;3140 3550;2545 2357;2778 2826;...
   2370 2975];
N=size(C,1);            % size of the TSP instance, i.e. the number of cities
D=zeros(N);             % distance matrix between any two cities
%%%%%%%%%%%%%%%%%%%%% distance between any two cities %%%%%%%%%%%%%%%%%%%%%
for i=1:N
    for j=1:N
        D(i,j)=((C(i,1)-C(j,1))^2+(C(i,2)-C(j,2))^2)^0.5;
    end
end
NP=200;                 % population size
G=1000;                 % maximum number of generations
f=zeros(NP,N);          % population
F=[];                   % mating pool
for i=1:NP
    f(i,:)=randperm(N); % randomly generate the initial population
end
R=f(1,:);               % stores the best tour found so far
len=zeros(NP,1);        % stores the tour lengths
fitness=zeros(NP,1);    % stores the normalized fitness values
gen=0;
%%%%%%%%%%%%%%%%%%%%% genetic algorithm main loop %%%%%%%%%%%%%%%%%%%%%
while gen<G
    %%%%% tour length of each individual %%%%%
    for i=1:NP
        len(i,1)=D(f(i,N),f(i,1));
        for j=1:(N-1)
            len(i,1)=len(i,1)+D(f(i,j),f(i,j+1));
        end
    end
    maxlen=max(len);
    minlen=min(len);
    %%%%% update the shortest tour %%%%%
    rr=find(len==minlen);
    R=f(rr(1,1),:);
    %%%%% normalized fitness %%%%%
    for i=1:length(len)
        fitness(i,1)=1-((len(i,1)-minlen)/(maxlen-minlen+0.001));
    end
    %%%%% selection %%%%%
    nn=0;
    for i=1:NP
        if fitness(i,1)>=rand
            nn=nn+1;
            F(nn,:)=f(i,:);
        end
    end
    [aa,bb]=size(F);
    while aa<NP
        nnper=randperm(nn);
        A=F(nnper(1),:);
        B=F(nnper(2),:);
        %%%%% crossover: swap a segment of width W, repairing duplicates %%%%%
        W=ceil(N/10);       % width of the exchanged segment
        p=unidrnd(N-W+1);   % random start position of the segment
        for i=1:W
            x=find(A==B(p+i-1));
            y=find(B==A(p+i-1));
            temp=A(p+i-1); A(p+i-1)=B(p+i-1); B(p+i-1)=temp;
            temp=A(x); A(x)=B(y); B(y)=temp;
        end
        %%%%% mutation: swap two randomly chosen cities %%%%%
        p1=floor(1+N*rand());
        p2=floor(1+N*rand());
        while p1==p2
            p1=floor(1+N*rand());
            p2=floor(1+N*rand());
        end
        tmp=A(p1); A(p1)=A(p2); A(p2)=tmp;
        tmp=B(p1); B(p1)=B(p2); B(p2)=tmp;
        F=[F;A;B];
        [aa,bb]=size(F);
    end
    % The tail of the generation loop was truncated in the source; a minimal
    % reconstruction: keep NP individuals and advance the generation counter.
    f=F(1:NP,:);
    F=[];
    gen=gen+1;
end
III. Operation result
IV. Note
MATLAB version: 2014a