I. Introduction to TSP

The Traveling Salesman Problem (TSP) is one of the most famous problems in mathematics. Suppose a traveling salesman wants to visit n cities and must choose the route he takes. The constraint on the route is that he may visit each city exactly once and must return to the starting city. The goal is to find, among all such routes, the one with the minimum total distance.

Mathematical model of TSP
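As a minimal sketch of the standard model (the notation below is assumed here, since the original omits the formula): let d(i, j) denote the distance between cities i and j, and let a tour be a permutation π of the cities 1, …, n. The problem is then

min over all permutations π of: d(π(1), π(2)) + d(π(2), π(3)) + … + d(π(n-1), π(n)) + d(π(n), π(1)),

that is, minimize the total length of the closed loop that visits every city exactly once and returns to the start.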

II. Introduction to genetic algorithm

1. Introduction
2. Genetic algorithm theory
2.1 Biological basis of genetic algorithm
2.2 Theoretical basis of genetic algorithm
2.3 Basic concepts of genetic algorithm
2.4 Standard genetic algorithm
2.5 Characteristics of genetic algorithm
2.6 Improvement direction of genetic algorithm
3. Genetic algorithm flow
4. Key parameters

III. Introduction to simulated annealing algorithm

The idea of the simulated annealing (SA) algorithm was first proposed by Metropolis et al. in 1953; Kirkpatrick first used simulated annealing to solve combinatorial optimization problems in 1983 [1]. Simulated annealing is a stochastic optimization algorithm based on a Monte Carlo iterative solution strategy. Its starting point is the similarity between the annealing process of solid materials in physics and general combinatorial optimization problems, and its purpose is to provide an effective approximate solution algorithm for problems of NP (non-deterministic polynomial) complexity. It overcomes the tendency of other optimization procedures to fall into local minima and their dependence on initial values.

Simulated annealing is a general-purpose optimization algorithm and an extension of local search. It differs from local search in that it accepts, with a certain probability, inferior solutions in the neighborhood whose objective values are larger; in theory it is a global optimization algorithm. It exploits the similarity between solving an optimization problem and the annealing of a physical system: by using the Metropolis algorithm and appropriately controlling the temperature-decrease process, it realizes simulated annealing and thereby solves the global optimization problem [2]. Applied to a minimization problem, the step length of the updating process is proportional to a corresponding parameter, which plays the role of temperature. As in metal annealing, the temperature is initially raised very high for faster minimization and is then lowered slowly until the system stabilizes.

At present simulated annealing is in its heyday, and both theoretical and applied research on it are very active topics [3-7]. In particular, it has been widely used in engineering: production scheduling, control engineering, machine learning, neural networks, pattern recognition, image processing, structural optimization of discrete/continuous variables, and so on. It can effectively solve combinatorial optimization problems and complex function optimization problems that conventional optimization methods find difficult.

Simulated annealing has very strong global search performance because, compared with common optimization search methods, it adopts many unique methods and techniques. It requires essentially no knowledge of the search space or other auxiliary information: one only defines a neighborhood structure, selects adjacent solutions within that neighborhood, and evaluates them with the objective function. It does not use deterministic rules but guides its search direction through changing probabilities; the probability it uses is merely a tool for steering the search toward regions of better solutions. Therefore, although it appears to be a blind search method, it actually has a clear search direction.

2 Theory of simulated annealing algorithm

The simulated annealing algorithm is based on the similarity between the process of solving an optimization problem and the physical annealing process: the objective function to be optimized corresponds to the internal energy of the metal, the state space of the independent variables corresponds to the state space of the metal's internal-energy configurations, and solving the problem means searching for a state that minimizes the objective function value. Using the Metropolis algorithm and appropriately controlling the temperature-decrease process realizes simulated annealing and thereby solves the global optimization problem [8].

2.1 Physical annealing process

The core idea of simulated annealing is very similar to a principle of thermodynamics. At high temperature, a large number of molecules in a liquid move relatively freely with respect to one another. If the liquid cools slowly, the thermal mobility of the atoms is lost, and large numbers of atoms can often arrange themselves into a pure crystal, perfectly ordered in all directions over distances of millions of atomic spacings. For such a system the crystal is the lowest-energy state, and all slowly cooled systems can naturally reach it. If, however, the liquid metal is cooled rapidly, it does not reach this state but ends up in a polycrystalline or amorphous state of higher energy. The essence of the process is therefore to cool slowly, buying enough time for the large number of atoms to redistribute themselves before they lose mobility, which is necessary for the system to reach a low-energy state. Simply put, the physical annealing process consists of the following parts: a heating process, an isothermal process, and a cooling process.

The purpose of the heating process is to strengthen the thermal motion of the particles so that they deviate from their equilibrium positions. When the temperature is high enough, the solid melts into a liquid, which eliminates any non-uniform state that may previously have existed in the system and allows the subsequent cooling process to start from an equilibrium state. Melting is associated with an increase in the system's energy, which also grows as the temperature rises.

According to physics, for a closed system that stays at constant temperature while exchanging heat with its surroundings, the spontaneous change of the system state always proceeds in the direction of decreasing free energy; when the free energy reaches its minimum, the system is in equilibrium.

The purpose of the cooling process is to weaken the thermal motion of the particles so that the system gradually tends toward order and its energy gradually decreases, yielding a low-energy crystal structure.

2.2 Principle of simulated annealing

The simulated annealing algorithm is derived from the principle of solid annealing: heat a solid to a sufficiently high temperature and then let it cool slowly. As the temperature rises, the particles inside the solid become disordered and the internal energy increases; as the solid cools slowly, the particles become ordered, reaching an equilibrium state at every temperature and finally a ground state at room temperature, where the internal energy is minimal. The similarity between the simulated annealing algorithm and the metal annealing process is shown in Table 7.1. According to the Metropolis criterion, the probability that a particle tends to equilibrium at temperature T is exp(-ΔE/T), where E is the internal energy at temperature T and ΔE is its change. Mapping solid annealing onto a combinatorial optimization problem, the internal energy E is simulated by the objective function value and the temperature T becomes the control parameter, which yields the simulated annealing algorithm for combinatorial optimization: starting from an initial solution x0 and an initial value of the control parameter T, repeat for the current solution the iteration "generate a new solution → compute the objective-function difference → accept or discard", gradually reducing the value of T. The current solution when the algorithm terminates is the approximate optimal solution; the whole procedure is a heuristic random search based on the Monte Carlo iterative solution method. The annealing process is controlled by the cooling schedule, including the initial value of the control parameter T and its decay factor K, the number of iterations at each value of T, and the stopping condition.

2.3 Idea of the simulated annealing algorithm

The main idea of simulated annealing is to perform a random walk (i.e., randomly select points) in the search interval and use the Metropolis sampling criterion to make the walk gradually converge to the optimal solution. Temperature is the key control parameter of the Metropolis algorithm, and its value can be regarded as controlling how quickly the random process moves toward a local or global optimum. Metropolis is an effective importance-sampling method. Its rule is: when the system changes from energy state E1 to energy state E2, the change is accepted with probability 1 if E2 < E1, and with probability exp(-(E2 - E1)/T) otherwise. After a sufficient number of iterations the system gradually tends to a stable distribution. In this importance sampling, a downhill new state is always accepted (local optimization), while an uphill one (global exploration) is accepted with a certain probability. Starting from some initial solution and a given value of the control parameter, the simulated annealing algorithm obtains a relatively optimal solution of the combinatorial optimization problem after a large number of solution transformations. The value of the control parameter T is then reduced and the Metropolis algorithm is executed again; when T approaches zero, the global optimal solution of the combinatorial optimization problem is finally obtained. The value of the control parameter must decay slowly. Since temperature is the key control parameter of the Metropolis algorithm, simulated annealing can be viewed as an iteration of Metropolis runs under decreasing T.
At the beginning T is large and even poor deteriorating solutions may be accepted; as T decreases, only mildly deteriorating solutions are accepted; finally, as T tends to 0, no deteriorating solution is accepted at all. At infinitely high temperature the system immediately reaches the uniform distribution and accepts all proposed transformations. The smaller the decay of T, the longer it takes T to reach its end point; on the other hand, the Markov chain length can be reduced to shorten the time needed to reach the quasi-equilibrium distribution.
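As a minimal MATLAB sketch of the Metropolis acceptance rule described above (the function and variable names here are illustrative, not taken from the original code):

function accept = metropolis_accept(E_old, E_new, T)
% Metropolis criterion: always accept an improving move; accept a
% deteriorating move with probability exp(-dE/T), which shrinks as T drops.
dE = E_new - E_old;
if dE < 0
    accept = true;               % downhill move: always accept
else
    accept = rand < exp(-dE/T);  % uphill move: accept with probability exp(-dE/T)
end
end

At high T the probability exp(-dE/T) is close to 1 for most moves, so the search roams freely; as T falls, only moves with small dE survive the test, and the walk settles into a minimum.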

2.4 Characteristics of the simulated annealing algorithm

The simulated annealing algorithm has a wide range of application and a high probability of reaching the global optimal solution, and it is simple and easy to implement. Its search strategy helps it avoid being trapped in local optima and improves the reliability of finding the global optimum. Simulated annealing is very robust because, compared with ordinary optimization search methods, it adopts many unique methods and techniques, mainly the following:

(1) Accepting deteriorating solutions with a certain probability. Simulated annealing introduces not only appropriate stochastic factors but also the natural mechanism of the physical annealing process. This mechanism allows the algorithm, during iteration, to accept not only points that make the objective value "better" but also, with a certain probability, points that make it "worse". The states appearing during the iteration are generated randomly, a later state is not required to be superior to the earlier one, and the acceptance probability gradually decreases as the temperature drops. Many traditional optimization algorithms are deterministic: the transfer from one search point to the next follows fixed transfer rules and relations. Such determinism often leaves the search point far from the optimum and thus limits the algorithm's range of application. Simulated annealing searches probabilistically, which makes the search process more flexible.

(2) Introducing algorithm control parameters. A control parameter analogous to the annealing temperature is introduced, which divides the optimization process into stages and determines the criterion for random state selection at each stage; the acceptance function is given by the Metropolis algorithm as a simple mathematical model. Simulated annealing has two important steps. First, under each value of the control parameter, a neighboring random state is generated from the previous iteration point, and the acceptance criterion determined by the control parameter decides whether this new state is chosen, forming a random Markov chain of a certain length. Second, the control parameter is slowly decreased and the acceptance criterion tightened until the control parameter approaches zero and the state chain stabilizes at the optimal state of the optimization problem, which improves the reliability of the global optimum found by the algorithm.

(3) Requiring little of the objective function. Traditional search algorithms need not only the objective value but also auxiliary information such as derivatives of the objective to determine the search direction; when this information is unavailable, such algorithms fail. The simulated annealing algorithm needs no auxiliary information: it only defines a neighborhood structure, selects adjacent solutions within it, and evaluates them with the objective function.

2.5 Improvement directions of the simulated annealing algorithm

Improving the search efficiency of simulated annealing while guaranteeing a required optimization quality is the main content of its improvement [9-10]. Feasible schemes include: choosing an appropriate initial state; designing a suitable state generation function whose candidate states, according to the needs of the search process, cover the whole space or a local region; designing an efficient annealing schedule; improving the temperature-control scheme; using a parallel search structure; and designing a suitable termination criterion. In addition, simulated annealing can be improved by adding steps. The main methods are as follows:

(1) Adding a memory function. To avoid losing the best solution encountered so far because of the probabilistic acceptance step, a storage step can be added to keep the best state found so far.

(2) Adding a temperature-raising or reheating process. Raising the temperature appropriately during the run can reactivate the acceptance probabilities of the states so as to adjust the current state of the search, preventing the algorithm from stagnating at a local minimum.

(3) For each current state, adopting a multiple-sampling strategy and accepting the best state in the region with a certain probability, instead of the single-comparison scheme of the standard simulated annealing algorithm.

(4) Combining with other search algorithms (such as genetic algorithms or immune algorithms) to exploit their advantages and improve the efficiency and quality of the solution.

3. Simulated annealing algorithm flow

The generation and acceptance of a new solution in simulated annealing can be divided into three steps:

(1) A generation function produces a new solution in the solution space from the current solution. To simplify the subsequent computation and acceptance test and to reduce running time, the new solution is usually obtained by a simple transformation of the current solution. Note that the transformation used to generate new solutions determines the neighborhood structure of the current solution and therefore has some influence on the choice of the cooling schedule.

(2) Whether the new solution is accepted is judged by an acceptance criterion, most commonly the Metropolis criterion: if ΔE < 0, accept X' as the new current solution X; otherwise, accept X' as the new current solution with probability exp(-ΔE/T).

(3) When the new solution is accepted, it replaces the current solution: one only needs to apply the part of the transformation corresponding to the new solution and update the objective value. The current solution has then completed one iteration, and the next round can start from it. If the new solution is discarded, the next round continues from the original current solution.

The solution obtained by simulated annealing is independent of the initial state (the starting point of the iteration) and is asymptotically convergent; it has been proved to be an optimization algorithm that converges to the global optimum with probability 1. A simulated annealing implementation consists of three parts: the solution space, the objective function, and the initial solution. The specific flow of the algorithm is as follows [8]:

(1) Initialization: set the initial temperature T (sufficiently large), the initial solution state x0 (the starting point of the iteration), and the number of iterations L at each value of T.
(2) For k = 1, ..., L, perform steps (3) to (6).
(3) Generate a new solution X'.
(4) Compute the increment ΔE = E(X') - E(X), where E(X) is the evaluation function.
(5) If ΔE < 0, accept X' as the new current solution; otherwise, accept X' as the new current solution with probability exp(-ΔE/T).
(6) If the termination condition is met, output the current solution as the optimal solution and end the program.
(7) Gradually decrease T toward 0, then go to step (2).

The simulated annealing algorithm flow is shown in Figure 7.1.

4 Key parameters

Simulated annealing delivers high solution quality and is general and easy to implement. However, to obtain the optimal solution it usually requires a high initial temperature and a sufficient number of samples, so its optimization time is often too long. From the structure of the algorithm, the state generation function, the initial temperature, the temperature-decay function, the Markov chain length, and the stopping criterion are the main components that directly affect the optimization result.

State generation function. The state generation function should be designed so that the candidate solutions it produces cover the whole solution space as far as possible. In general it consists of two parts: the way candidate solutions are generated, and the probability distribution over them. The generation scheme is determined by the nature of the problem; candidates are usually generated with a certain probability within the neighborhood structure of the current state.

Initial temperature. The temperature T plays a decisive role in the algorithm and directly controls the course of annealing. By the random-move acceptance criterion, the larger the initial temperature, the greater the probability of obtaining a high-quality solution, with a Metropolis acceptance rate of about 1; but an initial temperature that is too high lengthens the computation. One approach is to sample a group of states uniformly and use the variance of their objective values as the initial temperature.

Temperature-decay function. The temperature-decay function is the temperature update rule used in the outer loop. The most common choice is exponential decay, T(n+1) = K × T(n), where 0 < K < 1 is a constant very close to 1.

Markov chain length L. The Markov chain length is the number of iterative optimization steps at a fixed temperature. The selection principle is: given the decay function of the parameter T, L should be chosen so that quasi-equilibrium can be restored at every value of the control parameter; typically L is 100 to 1000.

Stopping criterion. The stopping rule determines when the algorithm ends. One can simply set a final temperature Tf and terminate when T = Tf. The convergence theory of simulated annealing requires T to tend to zero, which is impractical. Commonly used stopping criteria include: setting a threshold final temperature, setting a threshold number of iterations, or stopping the search when the best value found remains unchanged for a number of consecutive steps.
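To tie the flow and the key parameters together, here is a minimal generic MATLAB skeleton of the loop above. The objective E, the neighbor generator new_state, and all parameter values are assumptions chosen for illustration; it also keeps the best state seen so far, the memory improvement mentioned in Section 2.5.

% Generic simulated annealing skeleton (illustrative names and values).
E = @(x) sum(x.^2);                       % assumed objective function
new_state = @(x) x + 0.1*randn(size(x));  % assumed neighborhood move
x = randn(1, 5);                          % initial solution x0
T = 100;                                  % initial temperature (sufficiently large)
T_final = 1e-3;                           % threshold final temperature
K = 0.95;                                 % decay factor, 0 < K < 1, close to 1
L = 200;                                  % Markov chain length per temperature
x_best = x; E_best = E(x);                % memory of the best state so far
while T > T_final
    for k = 1:L                           % inner loop at fixed temperature
        x_new = new_state(x);
        dE = E(x_new) - E(x);
        if dE < 0 || rand < exp(-dE/T)    % Metropolis acceptance criterion
            x = x_new;
            if E(x) < E_best              % keep the best solution ever seen
                x_best = x; E_best = E(x);
            end
        end
    end
    T = K * T;                            % exponential temperature decay
end
disp(E_best);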

IV. Part of the source code

function f = Genetic_algorithm(D, mute, Pm)
% Simple genetic algorithm for the TSP over the Chinese provincial capitals
% (including Taipei, Hong Kong and Macao)
%% Default parameter settings
if ~exist('mute', 'var'), mute = 0; end  % whether to print progress messages
if ~exist('Pm', 'var'), Pm = 0.2; end    % mutation probability; larger values converge more slowly but usually give better solutions
if ~exist('D', 'var')
    data = xlsread('Provincial capital longitude and latitude coordinates.xlsx');
    C = data;
    [LA1, LA2] = meshgrid(C(:, 2));
    [LO1, LO2] = meshgrid(C(:, 1));
    R = distance(LA1, LO1, LA2, LO2, almanac('earth', 'wgs84'));
    D = R;  % pairwise great-circle distance matrix
end
%% Genetic algorithm parameter settings
rng(1);    % control random number generation (seed)
n = size(D, 1);
N = 100;   % population size (must be even)
TOL = 20;  % stop when no better solution is found for TOL consecutive generations
%% Initial feasible solutions
% The tour returns to the first city by default, so the last visited city
% need not be stored in the solution.
solutions = zeros(N, n);
fs = zeros(N, 1);  % fitness (tour lengths)
for i = 1:N
    solutions(i, :) = [1, randperm(n - 1) + 1];  % N random tours, each starting from city 1
    fs(i) = TSP_distance(D, solutions(i, :));
end
Pu = max(fs) - fs + 1;  % add 1 in case the entry for max(fs) is zero
P = Pu / sum(Pu);       % shorter tours get a higher selection probability
cumP = cumsum(P);       % cumulative probabilities for roulette-wheel selection
best = min(fs);
avg = mean(fs);
rate = best / avg;
if ~mute
    disp('The shortest path length in the initial population is:');
    disp(best);
    disp('The average path length in the initial population is:');
    disp(avg);
end
%% Evolution
tol = 0;
count = 0;
while 1
    count = count + 1;
    if ~mute
        fprintf('Current iteration %d\n', count);
    end
    % Selection: roulette wheel
    parents = zeros(size(solutions));
    for i = 1:N
        index = sum(cumP <= rand) + 1;
        parents(i, :) = solutions(index, :);
    end
    assert(mod(N, 2) == 0);  % N must be even, otherwise error
    % Crossover: every two parents produce two children. Child 1 takes the first
    % half of parent 1's chromosome and the rest from parent 2 in order;
    % child 2 is built symmetrically.
    new_solutions = zeros(size(solutions));
    for i = 1:N/2
        p1 = parents(2*i - 1, :);
        p2 = parents(2*i, :);
        middle = ceil(n / 2);
        s1 = p1(1:middle);
        res1 = setdiff(p2, s1, 'stable');  % genes in p2 but not in s1, i.e. p2 \ s1
        s1 = [s1, res1];
        s2 = p2(1:middle);
        res2 = setdiff(p1, s2, 'stable');
        s2 = [s2, res2];
        new_solutions(2*i - 1, :) = s1;
        new_solutions(2*i, :) = s2;
    end
    % Mutation: pick a random city and swap it with the first city
    for i = 1:N
        if rand < Pm
            temp = randperm(n - 1) + 1;
            k = temp(1);
            new_solutions(i, [1, k]) = new_solutions(i, [k, 1]);
        end
    end
    solutions = new_solutions;
    for i = 1:N
        fs(i) = TSP_distance(D, solutions(i, :));
    end
    best_new = min(fs);
    avg = mean(fs);
    rate_new = best_new / avg;
    if ~mute
        disp('The shortest path length is:');
        disp(best_new);
        disp('The average path length is:');
        disp(avg);
    end
    tol = tol + 1;  % reset to 0 below whenever this round finds a better solution
    if best_new < best || rate_new > rate
        best = best_new;
        rate = rate_new;
        tol = 0;
    end
    if tol >= TOL
        break
    end
    if count > 5000
        break
    end
end
%% Output
[f, index] = min(fs);
solution = solutions(index, :);
if ~mute
    fprintf('The final optimal path found:\n');
    disp(solution);  % the tour finally returns to city 1
    disp('Path length is:');
    disp(f);
end
end

function f = simulated_annealing(D, mute, MAX_ITER)
% Simple simulated annealing for the TSP over the Chinese provincial capitals
% (including Taipei, Hong Kong and Macao)
%% Default parameter settings
if ~exist('mute', 'var'), mute = 0; end           % whether to print progress messages
if ~exist('MAX_ITER', 'var'), MAX_ITER = 50; end  % maximum state-exchange attempts per temperature
if ~exist('D', 'var')
    data = xlsread('Provincial capital longitude and latitude coordinates.xlsx');
    C = data;
    [LA1, LA2] = meshgrid(C(:, 2));
    [LO1, LO2] = meshgrid(C(:, 1));
    R = distance(LA1, LO1, LA2, LO2, almanac('earth', 'wgs84'));
    D = R;
end
%% Simulated annealing parameter settings
rng(0);
n = size(D, 1);
T_range_factor = exp(0:-0.1:-5);      % temperature range coefficients
solution = [1, randperm(n - 1) + 1];  % one random tour starting from city 1
f = TSP_distance(D, solution);        % total length of the current tour
if ~mute
    disp('The initial path is:');
    disp(solution);
    disp('Path length is:');
    disp(f);
end
P = generate_neighbors(solution);     % neighborhood P of the initial solution
if ~mute
    fprintf('There are %d solutions in the neighborhood.\n', numel(P));
end
% The temperature range varies dynamically with the quality of the initial
% solution (the worse the initial solution, the more exploration is needed).
Tmax = f;
T_range = T_range_factor * Tmax;      % try to start high enough and finish low enough
first = zeros(1, MAX_ITER);
final = zeros(1, MAX_ITER);
for t = T_range                       % outer loop over decreasing temperatures
    for i = 1:MAX_ITER                % inner loop at a fixed temperature
        index = randi(numel(P), 1);   % random integer between 1 and |P|
        neighbor = P{index};          % pick a random solution in P
        f_neighbor = TSP_distance(D, neighbor);  % length of that solution
        Pt = exp(-(f_neighbor - f) / t);         % transition probability
        if Pt > 1, Pt = 1; end        % Pt > 1 means f_neighbor < f
        if Pt > rand
            f = f_neighbor;
            solution = neighbor;
            P = generate_neighbors(solution);
        end
        % record the transition probabilities of the first and last outer rounds
        if ~mute && t == T_range(1)
            first(i) = Pt;
        end
        if ~mute && t == T_range(end)
            final(i) = Pt;
        end
    end
    if ~mute && t == T_range(1)
        fprintf('The median transition probability at the initial temperature is %f\n', median(first));
    end
    if ~mute && t == T_range(end)
        fprintf('The median transition probability at the final temperature is %f\n', median(final));
    end
end
if ~mute
    fprintf('The final optimal path found:\n');
    disp(solution);
    disp('Path length is:');
    disp(f);
end
end
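Both listings call two helper functions, TSP_distance and generate_neighbors, which are not included in this excerpt of the source. As a hypothetical sketch of what they might look like (these implementations are assumptions, not the author's originals): TSP_distance returns the length of the closed tour, and generate_neighbors returns the tours obtained by swapping each pair of positions after the first.

function f = TSP_distance(D, solution)
% Total length of the closed tour: the sum of consecutive legs plus the
% return leg from the last city back to the first. (Assumed implementation.)
n = numel(solution);
f = D(solution(n), solution(1));
for i = 1:n-1
    f = f + D(solution(i), solution(i+1));
end
end

function P = generate_neighbors(solution)
% Neighborhood of a tour: all tours obtained by swapping two positions
% (excluding position 1, so every neighbor keeps starting from city 1).
% (Assumed implementation.)
n = numel(solution);
P = {};
for i = 2:n-1
    for j = i+1:n
        neighbor = solution;
        neighbor([i, j]) = neighbor([j, i]);
        P{end+1} = neighbor; %#ok<AGROW>
    end
end
end

With these helpers on the path, calling Genetic_algorithm() or simulated_annealing() with no arguments loads the coordinate spreadsheet and runs with the default parameters shown above.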




V. Operation results

VI. MATLAB version and references

1. MATLAB version: 2014a

[1] Yang Baoyang, Yu Jizhou, Yang Shan. Intelligent Optimization Algorithm and Its MATLAB Example (2nd Edition) [M]. Publishing House of Electronics Industry, 2016.
[2] Zhang Yan, Wu Shuigen. MATLAB Optimization Algorithm Source Code [M]. Tsinghua University Press, 2017.