I. Overview
A Genetic Algorithm (GA) is a branch of evolutionary computing: a computational model that simulates the natural selection and genetic mechanisms of Darwinian biological evolution, and a method that searches for an optimal solution by mimicking the natural evolutionary process. The algorithm is simple, general, robust, and well suited to parallel processing.
2 Characteristics and applications of genetic algorithms

A genetic algorithm is a robust search algorithm that can be used to optimize complex systems. Compared with traditional optimization algorithms, it has the following characteristics:

(1) It operates on an encoding of the decision variables. Traditional optimization algorithms usually work directly with the actual values of the decision variables, whereas a genetic algorithm operates on some encoded form of them. This encoding lets us borrow the concepts of chromosome and gene from biology, imitate the genetic and evolutionary mechanisms of organisms in nature, and conveniently apply genetic operators during the optimization.

(2) It uses fitness directly as its search information. A traditional optimization algorithm not only needs the value of the objective function, but its search is often constrained by continuity of the objective function and may even require that the derivative of the objective function exist in order to determine the search direction. A genetic algorithm uses only the fitness value derived from the objective function to decide where to search next, with no need for derivatives or other auxiliary information. Using the objective value or individual fitness directly also concentrates the search on regions of the search space with higher fitness, improving search efficiency.

(3) It uses search information from multiple points, giving it implicit parallelism. A traditional optimization algorithm typically starts from a single initial point in the solution space and iterates toward the optimum. A single point carries little search information, so efficiency is low and the search may stagnate in a local optimum. A genetic algorithm instead starts the search from an initial population of many individuals rather than a single one. Selection, crossover, and mutation applied to this population produce a new generation that carries a large amount of information about the search space. This information helps the algorithm avoid searching unnecessary points, escape local optima, and gradually approach the global optimum.

(4) It uses probabilistic search rather than deterministic rules. Traditional optimization algorithms often use deterministic search methods: the transition from one search point to the next follows a fixed direction and rule, which may prevent the search from reaching the optimum and limits the algorithm's range of application. A genetic algorithm is an adaptive search technique whose selection, crossover, and mutation operations are all carried out probabilistically, which makes the search more flexible; it converges to the optimal solution with high probability and has good global optimization ability. However, parameters such as the crossover probability and mutation probability also affect the results and efficiency of the search, so choosing the parameters of a genetic algorithm is an important practical problem.

To sum up, because the overall search strategy and optimization method of a genetic algorithm do not rely on gradient information or other auxiliary knowledge, requiring only the objective function and a corresponding fitness function to guide the search, the genetic algorithm provides a general framework for solving complex system problems.
It does not depend on the specific domain of the problem and is robust to many problem types, so it is widely used in a variety of fields, including function optimization, combinatorial optimization (e.g. production scheduling), automatic control, robotics, image processing (image restoration, edge feature extraction, etc.), artificial life, genetic programming, and machine learning.
3 The simple genetic algorithm

The Simple Genetic Algorithm (SGA) uses only the selection, crossover, and mutation operators. Its evolution process is simple, and it forms the basis of other genetic algorithms.
3.1 The basic flow of genetic algorithms

(1) Randomly generate a number of initial individuals encoded as strings of a certain length (the length is related to the accuracy required of the solution).
(2) Evaluate each individual with the fitness function; select individuals with high fitness to take part in the genetic operations and eliminate individuals with low fitness.
(3) Apply the genetic operations (replication, crossover, mutation) to the selected individuals to form a new generation, and repeat until the stop criterion is met (e.g. the generation counter GEN >= ?).
(4) Take the best individual produced in the offspring as the result of the genetic algorithm.

Here GEN is the current generation number, M the population size, and i the index of an individual.
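The flow above can be sketched in a few lines of Python. This is a minimal illustration, not the author's MATLAB program: the toy objective f(x) = -(x - 3)^2, the parameter values, and all names are illustrative assumptions.

```python
import random

CHROMO_SIZE = 8        # bits per individual; length depends on required accuracy
POP_SIZE = 20
MAX_GEN = 50           # stop criterion: GEN >= MAX_GEN
CROSS_RATE = 0.7
MUTATE_RATE = 0.05

def fitness(bits):
    """Decode the binary string to an integer and score it; higher is better."""
    x = int("".join(map(str, bits)), 2)
    return -(x - 3) ** 2

def evolve():
    # (1) random initial population
    pop = [[random.randint(0, 1) for _ in range(CHROMO_SIZE)]
           for _ in range(POP_SIZE)]
    for gen in range(MAX_GEN):
        # (2) evaluate and select: keep the fitter half, duplicate it
        pop.sort(key=fitness, reverse=True)
        pop = pop[:POP_SIZE // 2] + [ind[:] for ind in pop[:POP_SIZE // 2]]
        # (3) crossover: swap gene tails of paired individuals at a random point
        for i in range(0, POP_SIZE - 1, 2):
            if random.random() < CROSS_RATE:
                point = random.randint(1, CHROMO_SIZE - 1)
                pop[i][point:], pop[i + 1][point:] = pop[i + 1][point:], pop[i][point:]
        # (3) mutation: flip one random bit with small probability
        for ind in pop:
            if random.random() < MUTATE_RATE:
                ind[random.randrange(CHROMO_SIZE)] ^= 1
    # (4) the best individual of the final generation is the result
    return max(pop, key=fitness)

best = evolve()
```

With these settings the loop usually converges toward x = 3 (fitness 0) within the 50 generations, though, being a probabilistic search, this is not guaranteed on every run.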
3.2 Composition of the basic genetic algorithm

The basic genetic algorithm (SGA) consists of the encoding, the fitness function, the genetic operators (selection, crossover, mutation), and the running parameters.

3.2.1 Encoding
(1) Binary encoding. The length of the binary string is related to the accuracy required of the solution; the encoding must be able to represent every individual in the solution space. Advantages: encoding and decoding are simple to perform, and the genetic and crossover operations are easy to implement. Disadvantage: the string can become long.
(2) Other encoding methods: Gray code, floating-point encoding, symbolic encoding, multi-parameter encoding, etc.

3.2.2 Fitness function
The fitness function should effectively reflect the gap between each chromosome and the chromosome of the problem's optimal solution.

3.2.3 Selection operator

3.2.4 Crossover operator
Crossover forms two new individuals by exchanging some genes between a pair of chromosomes in a certain way. Crossover is the main characteristic distinguishing genetic algorithms from other evolutionary algorithms and is the primary method of generating new individuals. Before crossover, individuals in the population must be paired, usually at random. Common crossover schemes: single-point crossover, two-point crossover (and multi-point crossover; the more crossover points, the more likely the individual's structure is destroyed), uniform crossover, and arithmetic crossover.

3.2.5 Mutation operator
Mutation replaces the gene values at some loci of an individual's chromosome string with other alleles of those loci, thereby forming a new individual.
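A common choice for the selection operator (3.2.3) is fitness-proportionate ("roulette-wheel") selection, which matches the cumulative fitness table built by the rank function in the source code below: each individual is chosen with probability proportional to its fitness. This Python sketch and its example values are illustrative, not taken from the original program.

```python
import random
from collections import Counter
from itertools import accumulate

def roulette_select(population, fitness_values):
    table = list(accumulate(fitness_values))   # cumulative fitness table
    r = random.uniform(0, table[-1])           # spin the wheel
    for individual, threshold in zip(population, table):
        if r <= threshold:
            return individual
    return population[-1]

pop = ["A", "B", "C"]
fits = [1.0, 2.0, 7.0]                         # "C" wins about 70% of spins
counts = Counter(roulette_select(pop, fits) for _ in range(1000))
```

Fitter individuals are selected more often, but low-fitness individuals still get an occasional chance, which preserves diversity in the population.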
In terms of generating new individuals during the run of a genetic algorithm, crossover is the main method and determines the algorithm's global search ability, while mutation is only an auxiliary method; it is nevertheless an essential step, since it determines the algorithm's local search ability. Together, the crossover and mutation operators carry out the global and local search of the search space, allowing the genetic algorithm to complete the optimization process with good search performance.
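The two operators contrasted above can be sketched directly on binary strings: single-point crossover as the main (global) generator of new individuals, and bit-flip mutation as the auxiliary (local) one. Function names and the mutation rate are illustrative.

```python
import random

def single_point_crossover(parent1, parent2):
    # pick an interior crossover point and swap the tails
    point = random.randint(1, len(parent1) - 1)
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

def bit_flip_mutation(chromosome, rate=0.1):
    # replace each locus with its other allele with probability `rate`
    return [bit ^ 1 if random.random() < rate else bit for bit in chromosome]

c1, c2 = single_point_crossover([0] * 8, [1] * 8)
```

Crossing an all-zeros string with an all-ones string makes the exchange visible: each child contains a block of genes from each parent, while mutation only perturbs individual loci.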
3.2.6 Running parameters

4 Basic principles of genetic algorithms

4.1 The schema (pattern) theorem

4.2 Building blocks
A schema of low order and short defining length whose fitness is higher than the population's average fitness is called a gene block or building block. The building block hypothesis states that individual building blocks can be spliced together, through the action of genetic operators such as selection, crossover, and mutation, to form coding strings of higher fitness. The hypothesis illustrates the basic idea of solving problems with genetic algorithms: better solutions can be produced by directly joining building blocks together.
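For reference, the schema theorem of 4.1 can be stated in its standard (Holland) form, where m(H, t) is the number of instances of schema H in generation t, f(H) its average fitness, f-bar the population's average fitness, delta(H) its defining length, o(H) its order, l the string length, and p_c, p_m the crossover and mutation probabilities:

```latex
m(H,\,t+1) \;\ge\; m(H,\,t)\,\frac{f(H)}{\bar{f}}
\left[\,1 \;-\; p_c\,\frac{\delta(H)}{l-1} \;-\; o(H)\,p_m\,\right]
```

Schemata that are short, low-order, and above average in fitness (the building blocks of 4.2) therefore receive an increasing number of trials in successive generations.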
II. Source code
function mutation(pop_size, chromo_size, mutate_rate)
global pop;
global aim;
global q;
for i = 1:pop_size
    if rand < mutate_rate
        mutate_pos = round(rand*chromo_size);   % random mutation position
        if mutate_pos == 0
            continue;
        end
        pop_tem = pop;
        valid = 1;
        pop_tem(i,mutate_pos) = round(rand()*50);   % mutate the selected gene
        if ( pop_tem(i,1)>=aim(1,1) && pop_tem(i,4)>=aim(1,1) && pop_tem(i,7)>=aim(1,1) ) || ( pop_tem(i,2)>=aim(1,2) && pop_tem(i,5)>=aim(1,2) && pop_tem(i,8)>=aim(1,2) ) || ( pop_tem(i,3)>=aim(1,3) && pop_tem(i,6)>=aim(1,3) && pop_tem(i,9)>=aim(1,3) )
            valid = 0;
        elseif ( pop_tem(i,1)<=aim(1,1) && pop_tem(i,4)<=aim(1,1) && pop_tem(i,7)<=aim(1,1) ) || ( pop_tem(i,2)<=aim(1,2) && pop_tem(i,5)<=aim(1,2) && pop_tem(i,8)<=aim(1,2) ) || ( pop_tem(i,3)<=aim(1,3) && pop_tem(i,6)<=aim(1,3) && pop_tem(i,9)<=aim(1,3) )
            valid = 0;
        elseif ( sqrt((pop_tem(i,1)-aim(1,1))^2+(pop_tem(i,2)-aim(1,2))^2+(pop_tem(i,3)-aim(1,3))^2) + sqrt((pop_tem(i,4)-aim(1,1))^2+(pop_tem(i,5)-aim(1,2))^2+(pop_tem(i,6)-aim(1,3))^2) + sqrt((pop_tem(i,7)-aim(1,1))^2+(pop_tem(i,8)-aim(1,2))^2+(pop_tem(i,9)-aim(1,3))^2) )/3 > q
            valid = 0;
        end
        if valid == 1
            pop = pop_tem;   % keep the mutation only if the individual stays valid
        end
    end
end
clear i;
clear mutate_pos;
clear pop_tem;
function crossover(pop_size, chromo_size, cross_rate)
global pop;
global aim;
global q;
for i = 1:2:pop_size-1   % pair adjacent individuals (i, i+1)
    if rand < cross_rate
        cross_pos = round(rand * chromo_size);   % random crossover position
        if cross_pos == 0 || cross_pos == 1
            continue;
        end
        pop_tem = pop;
        for j = cross_pos:chromo_size   % swap the gene tails of the pair
            temp = pop_tem(i,j);
            pop_tem(i,j) = pop_tem(i+1,j);
            pop_tem(i+1,j) = temp;
        end
        valid = 1;
        if ( pop_tem(i,1)>=aim(1,1) && pop_tem(i,4)>=aim(1,1) && pop_tem(i,7)>=aim(1,1) ) || ( pop_tem(i,2)>=aim(1,2) && pop_tem(i,5)>=aim(1,2) && pop_tem(i,8)>=aim(1,2) ) || ( pop_tem(i,3)>=aim(1,3) && pop_tem(i,6)>=aim(1,3) && pop_tem(i,9)>=aim(1,3) )
            valid = 0;
        elseif ( pop_tem(i,1)<=aim(1,1) && pop_tem(i,4)<=aim(1,1) && pop_tem(i,7)<=aim(1,1) ) || ( pop_tem(i,2)<=aim(1,2) && pop_tem(i,5)<=aim(1,2) && pop_tem(i,8)<=aim(1,2) ) || ( pop_tem(i,3)<=aim(1,3) && pop_tem(i,6)<=aim(1,3) && pop_tem(i,9)<=aim(1,3) )
            valid = 0;
        elseif ( sqrt((pop_tem(i,1)-aim(1,1))^2+(pop_tem(i,2)-aim(1,2))^2+(pop_tem(i,3)-aim(1,3))^2) + sqrt((pop_tem(i,4)-aim(1,1))^2+(pop_tem(i,5)-aim(1,2))^2+(pop_tem(i,6)-aim(1,3))^2) + sqrt((pop_tem(i,7)-aim(1,1))^2+(pop_tem(i,8)-aim(1,2))^2+(pop_tem(i,9)-aim(1,3))^2) )/3 > q
            valid = 0;
        elseif ( pop_tem(i+1,1)>=aim(1,1) && pop_tem(i+1,4)>=aim(1,1) && pop_tem(i+1,7)>=aim(1,1) ) || ( pop_tem(i+1,2)>=aim(1,2) && pop_tem(i+1,5)>=aim(1,2) && pop_tem(i+1,8)>=aim(1,2) ) || ( pop_tem(i+1,3)>=aim(1,3) && pop_tem(i+1,6)>=aim(1,3) && pop_tem(i+1,9)>=aim(1,3) )
            valid = 0;
        elseif ( pop_tem(i+1,1)<=aim(1,1) && pop_tem(i+1,4)<=aim(1,1) && pop_tem(i+1,7)<=aim(1,1) ) || ( pop_tem(i+1,2)<=aim(1,2) && pop_tem(i+1,5)<=aim(1,2) && pop_tem(i+1,8)<=aim(1,2) ) || ( pop_tem(i+1,3)<=aim(1,3) && pop_tem(i+1,6)<=aim(1,3) && pop_tem(i+1,9)<=aim(1,3) )
            valid = 0;
        elseif ( sqrt((pop_tem(i+1,1)-aim(1,1))^2+(pop_tem(i+1,2)-aim(1,2))^2+(pop_tem(i+1,3)-aim(1,3))^2) + sqrt((pop_tem(i+1,4)-aim(1,1))^2+(pop_tem(i+1,5)-aim(1,2))^2+(pop_tem(i+1,6)-aim(1,3))^2) + sqrt((pop_tem(i+1,7)-aim(1,1))^2+(pop_tem(i+1,8)-aim(1,2))^2+(pop_tem(i+1,9)-aim(1,3))^2) )/3 > q
            valid = 0;
        end
        if valid == 1
            pop = pop_tem;   % keep the crossover only if both children stay valid
        end
    end
end
clear i;
clear j;
clear temp;
clear valid;
clear cross_pos;
clear pop_tem;
function initilize(pop_size, chromo_size)
global pop;
global aim;
global q;
valid = 0;
for i = 1:pop_size
    for l = 1:10000000   % retry until a valid individual is produced
        valid = 1;
        for j = 1:chromo_size
            pop(i,j) = round(rand()*50);
        end
        if ( pop(i,1)>=aim(1,1) && pop(i,4)>=aim(1,1) && pop(i,7)>=aim(1,1) ) || ( pop(i,2)>=aim(1,2) && pop(i,5)>=aim(1,2) && pop(i,8)>=aim(1,2) ) || ( pop(i,3)>=aim(1,3) && pop(i,6)>=aim(1,3) && pop(i,9)>=aim(1,3) )
            valid = 0;
        elseif ( pop(i,1)<=aim(1,1) && pop(i,4)<=aim(1,1) && pop(i,7)<=aim(1,1) ) || ( pop(i,2)<=aim(1,2) && pop(i,5)<=aim(1,2) && pop(i,8)<=aim(1,2) ) || ( pop(i,3)<=aim(1,3) && pop(i,6)<=aim(1,3) && pop(i,9)<=aim(1,3) )
            valid = 0;
        elseif ( sqrt((pop(i,1)-aim(1,1))^2+(pop(i,2)-aim(1,2))^2+(pop(i,3)-aim(1,3))^2) + sqrt((pop(i,4)-aim(1,1))^2+(pop(i,5)-aim(1,2))^2+(pop(i,6)-aim(1,3))^2) + sqrt((pop(i,7)-aim(1,1))^2+(pop(i,8)-aim(1,2))^2+(pop(i,9)-aim(1,3))^2) )/3 > q
            valid = 0;
        end
        if valid == 1
            break;
        end
    end
end
clear i;
clear j;
clear l;
clear valid;
function rank(pop_size, chromo_size)
global fitness_value;
global fitness_table;
global fitness_avg;
global best_fitness;
global best_individual;
global best_generation;
global pop;
global G;
for i = 1:pop_size
    fitness_table(i) = 0.;
end
min_idx = 1;
temp = 1;
temp1(chromo_size) = 0;
% selection sort: order individuals by ascending fitness
for i = 1:pop_size
    min_idx = i;
    for j = i+1:pop_size
        if fitness_value(j) < fitness_value(min_idx)
            min_idx = j;
        end
    end
    if min_idx ~= i
        temp = fitness_value(i);
        fitness_value(i) = fitness_value(min_idx);
        fitness_value(min_idx) = temp;
        for k = 1:chromo_size
            temp1(k) = pop(i,k);
            pop(i,k) = pop(min_idx,k);
            pop(min_idx,k) = temp1(k);
        end
    end
end
% cumulative fitness table (used for roulette-wheel selection)
for i = 1:pop_size
    if i == 1
        fitness_table(i) = fitness_table(i) + fitness_value(i);
    else
        fitness_table(i) = fitness_table(i-1) + fitness_value(i);
    end
end
fitness_avg(G) = fitness_table(pop_size)/pop_size;
% record the best individual found so far
if fitness_value(pop_size) > best_fitness
    best_fitness = fitness_value(pop_size);
    best_generation = G;
    for j = 1:chromo_size
        best_individual(j) = pop(pop_size,j);
    end
end
clear i;
clear j;
clear k;
clear min_idx;
clear temp;
clear temp1;
III. Operation result

IV. Note

Version: MATLAB R2014a