I. Introduction to VRP
1 Basic principles of VRP

The Vehicle Routing Problem (VRP) is one of the most important research problems in operations research. VRP concerns the route planning between a supplier and K points of sale, and can be summarized as follows: for a set of delivery points and receiving points, an organization dispatches a number of vehicles and arranges appropriate driving routes so that the vehicles pass through all points in an orderly manner, subject to specified constraints (e.g. demand and quantity of goods, delivery and pickup times, vehicle capacity limits, mileage limits, driving-time limits), while striving to achieve certain goals (e.g. shortest total empty-travel mileage, lowest total transportation cost, arrival within given times, smallest number of vehicles used). (Figure: schematic diagram of a VRP instance.) A minimal sketch of the mileage objective follows at the end of this subsection.

2 Problem attributes

The characteristics of the vehicle routing problem are relatively complex and generally comprise four attributes:
(1) Address characteristics: the number of depots, the demand types, and the operation requirements.
(2) Vehicle characteristics: the number of vehicles, load-weight constraints, load-variety constraints, route constraints, and working-time constraints.
(3) Other characteristics of the problem.
(4) The objective function, which may be to minimize the total cost, minimize the maximum operating cost, or maximize just-in-time operation.
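To make the mileage objective concrete, here is a minimal illustrative sketch in MATLAB (not from the original post; the function name, the depot index, and the distance matrix D are assumptions) of the total route length for a single vehicle:

function L = RouteLength(route, D, depot)
% Illustrative single-vehicle mileage objective: the vehicle leaves the
% depot, visits the customers in 'route' in order, and returns to the depot.
% route : vector of customer indices
% D     : symmetric distance matrix over all nodes (depot included)
% depot : index of the depot node in D
seq = [depot route depot];
L = 0;
for k = 1:numel(seq)-1
    L = L + D(seq(k), seq(k+1));   % add each leg of the tour
end
end

A full VRP objective would sum this quantity over all vehicles and add penalty terms for any violated constraints.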
3 Common problem variants

(1) Traveling Salesman Problem (TSP).

(2) Capacitated Vehicle Routing Problem (CVRP). The basic model is difficult to extend to other VRP scenarios, and it does not reveal the execution path of each specific vehicle, so the model has been improved continually.

(3) Vehicle Routing Problem with Time Windows (VRPTW). Here the demand points impose requirements on vehicle arrival times: VRPTW adds time-window constraints on the customers to be visited. In VRPTW the cost function includes, in addition to the driving cost, the waiting time caused by arriving at a customer early and the service time required by the customer. Vehicles must satisfy not only the constraints of the VRP but also the time-window constraints of the demand points. Time-window constraints come in two types. One is the hard time window: vehicles must arrive within the window; early arrivals must wait, while late arrivals are rejected. The other is the soft time window: a vehicle need not arrive within the window, but arrival outside it is penalized. The biggest difference between the two is that the soft time window replaces waiting and rejection with a penalty (a sketch of such a penalty follows this list). Model 2 (see the 2017 paper "A Generalized Formulation for Vehicle Routing Problems") is formulated with two-dimensional decision variables.

(4) Pickup and delivery problem.

(5) Multi-depot vehicle routing problem (references: Lim 2005, a genetic algorithm for the multi-depot VRP; Renaud 1996). Since the vehicles are homogeneous, the modeling here does not include a vehicle dimension in the variables.

(6) Vehicle routing problem with precedence constraints.

(7) Vehicle routing problem with compatibility constraints.

(8) Vehicle routing problem with stochastic demand.
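The soft time window in (3) is easy to state as code. The following is an illustrative sketch only (the function name, window bounds, and weights are assumptions, not part of the original post):

function c = SoftTimeWindowCost(t, a, b, wEarly, wLate)
% Illustrative soft time-window penalty: arrival inside [a,b] costs
% nothing; early arrival is penalized in proportion to the waiting time,
% late arrival in proportion to the delay.
% t              : arrival time at the customer
% [a, b]         : desired service window
% wEarly, wLate  : penalty weights per unit of earliness/lateness
c = wEarly*max(a - t, 0) + wLate*max(t - b, 0);
end

Making wLate large enough (and treating early arrival as pure waiting) recovers the behavior of a hard time window.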
4 Solution methods

(1) Exact mathematical analysis; (2) human-computer interaction; (3) cluster first, route second; (4) route first, cluster second; (5) savings or insertion heuristics (see the savings sketch after this list); (6) improvement or exchange heuristics; (7) mathematical-programming approximation; (8) heuristic algorithms.
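As an example of the savings idea in method (5), the classic Clarke-Wright saving obtained by serving customers i and j on one route instead of two separate depot round trips is s(i,j) = d(0,i) + d(0,j) − d(i,j), where 0 denotes the depot. A minimal sketch, assuming the first row/column of the distance matrix is the depot (names are illustrative, not from the original post):

function S = SavingsMatrix(D)
% Clarke-Wright savings sketch: D is an (n+1)x(n+1) symmetric distance
% matrix whose first row/column is the depot; S(i,j) is the distance
% saved by merging the routes that serve customers i and j.
n = size(D,1) - 1;
S = zeros(n);
for i = 1:n
    for j = 1:n
        if i ~= j
            S(i,j) = D(1,i+1) + D(1,j+1) - D(i+1,j+1);
        end
    end
end
end

The savings heuristic then merges routes in decreasing order of S(i,j) as long as capacity and the other constraints remain satisfied.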
5 Comparison between VRP and VRPTW
II. Introduction to the simulated annealing algorithm
1 Overview

The idea of the Simulated Annealing (SA) algorithm was first proposed by Metropolis et al. in 1953; Kirkpatrick et al. first used simulated annealing to solve combinatorial optimization problems in 1983 [1]. Simulated annealing is a stochastic optimization algorithm based on the Monte Carlo iterative solution strategy. Its starting point is the similarity between the annealing process of solid materials in physics and general combinatorial optimization problems, and its purpose is to provide an effective approximate solution algorithm for problems of NP (non-deterministic polynomial) complexity. It overcomes the tendency of other optimization procedures to fall into local minima and their dependence on initial values.

Simulated annealing is a general optimization algorithm and an extension of local search. It differs from local search in that it accepts, with a certain probability, inferior solutions in the neighborhood whose objective values are worse; theoretically, it is a global optimization algorithm. The algorithm is based on the similarity between solving an optimization problem and the annealing of a physical system: it uses the Metropolis algorithm and appropriately controls the temperature-decrease process to realize simulated annealing, thereby solving the global optimization problem [2]. Simulated annealing is an optimization process applicable to minimization problems, in which the step length of each update is proportional to a corresponding parameter that plays the role of temperature. As in metal annealing, the temperature is first raised high for faster minimization and then lowered slowly for stability.

At present simulated annealing is in its heyday, and both its theory and its applications are very active research topics [3-7]. In particular, it has been widely used in engineering: production scheduling, control engineering, machine learning, neural networks, pattern recognition, image processing, structural optimization with discrete/continuous variables, and so on. It can effectively solve combinatorial optimization problems and complex function optimization problems that conventional optimization methods find difficult. Simulated annealing has very strong global search performance because, compared with common optimization search methods, it adopts several unique techniques. It needs essentially no knowledge of the search space or other auxiliary information: one only defines a neighborhood structure, selects adjacent solutions within that structure, and evaluates them with the objective function. It uses no deterministic rules but guides its search through changing probabilities, and the probability it uses is merely a tool to steer the search toward regions of better solutions. So although it appears to be a blind search method, it actually has a clear search direction.
2 Theory of the simulated annealing algorithm

The process by which simulated annealing solves an optimization problem is based on its similarity to the physical annealing process: the objective function corresponds to the internal energy of the metal, the independent variables of the optimization problem correspond to the state space of the metal's internal-energy states, and solving the problem amounts to searching for a state that minimizes the objective function value. The Metropolis algorithm, together with appropriate control of the temperature-decrease process, realizes simulated annealing and thereby solves the global optimization problem [8].

2.1 Physical annealing process

The core idea of the simulated annealing algorithm closely resembles a principle of thermodynamics. At high temperature, a large number of molecules in a liquid move relatively freely with respect to one another. If the liquid is cooled slowly, this thermal mobility is gradually lost, and large numbers of atoms can often arrange themselves into a pure crystal, perfectly ordered in all directions over distances of millions of atomic spacings. For such a system, the crystal is the lowest-energy state, and all slowly cooled systems can naturally reach it. If the liquid metal is cooled rapidly, however, it does not reach this state, ending instead in a polycrystalline or amorphous state of higher energy. The essence of the process is therefore to cool slowly, buying enough time for the large number of atoms to redistribute themselves before they lose their mobility; this is necessary for the system to reach a low-energy state. Simply put, the physical annealing process consists of three parts: a heating process, an isothermal process, and a cooling process.
The purpose of the heating process is to enhance the thermal motion of particles and make them deviate from the equilibrium position. When the temperature is high enough, the solid will melt into liquid, thus eliminating the non-uniform state that may have existed in the system and allowing the subsequent cooling process to start with an equilibrium state. The melting process is related to the energy increase of the system, and the energy of the system also increases with the increase of temperature.
According to the knowledge of physics, for a closed system with constant temperature while exchanging heat with the surrounding environment, the spontaneous change of the system state is always in the direction of reducing the free energy: when the free energy reaches the minimum, the system reaches an equilibrium state.
The purpose of the cooling process is to weaken the thermal motion of the particles so that the system gradually tends toward order and its energy gradually decreases, thereby obtaining a low-energy crystal structure.
2.2 Principle of simulated annealing

The simulated annealing algorithm is derived from the principle of solid annealing: the solid is heated to a sufficiently high temperature and then allowed to cool slowly. As the temperature rises, the particles inside the solid become increasingly disordered and the internal energy increases; as the solid cools slowly, the particles become ordered again, reaching an equilibrium state at each temperature and finally, at room temperature, the ground state, where the internal energy is reduced to a minimum. The similarity between the simulated annealing algorithm and the metal annealing process is shown in Table 7.1. According to the Metropolis criterion, the probability that the particles tend to equilibrium at temperature T is exp(−ΔE/T), where E is the internal energy at temperature T and ΔE is its change. Using solid annealing to simulate a combinatorial optimization problem, the internal energy E is mapped to the objective function value and the temperature T becomes the control parameter, which yields the simulated annealing algorithm for combinatorial optimization: starting from an initial solution x0 and an initial value of the control parameter T, repeat for the current solution the iteration "generate a new solution → compute the objective-function difference → accept or discard", while gradually decreasing T; the current solution when the algorithm terminates is the approximate optimal solution. This is a heuristic random search process based on the Monte Carlo iterative solution method. The annealing process is controlled by the cooling schedule, which comprises the initial value of the control parameter T and its decay factor K, the number of iterations L at each value of T, and the stopping condition.

2.3 Idea of the simulated annealing algorithm

The main idea of simulated annealing is to perform a random walk (i.e., to select points randomly) over the search interval and, using the Metropolis sampling criterion, let the walk gradually converge to an optimal solution. Temperature is the key control parameter of the Metropolis algorithm, and its magnitude can be regarded as controlling how fast the random process moves toward a local or global optimum. Metropolis sampling is an effective importance-sampling method. Its rule is: when the system changes from one energy state E1 to another state E2, the change is accepted with probability p = exp(−(E2 − E1)/T); after a sufficient number of iterations, the system gradually tends to a stable distribution. Under this sampling rule, a downhill new state is accepted (local refinement), while an uphill one (global exploration) is accepted with a certain probability. Starting from some initial solution, after a large number of solution transformations at a given control-parameter value, simulated annealing obtains a relatively optimal solution of the combinatorial optimization problem for that value. The control parameter T is then reduced and the Metropolis algorithm is executed again; when T approaches zero, the global optimal solution of the combinatorial optimization problem is finally obtained. The value of the control parameter must decay slowly. Simulated annealing can thus be regarded as repeated iterations of the Metropolis algorithm while the control parameter T is decreased.
At the beginning T is large, so even strongly deteriorating solutions may be accepted; as T decreases, only mildly deteriorating solutions are accepted; finally, as T approaches 0, no deteriorating solutions are accepted at all. At infinitely high temperature the system immediately reaches a uniform distribution, accepting all proposed transformations. The smaller the decay of T per step, the longer it takes T to reach its end point; the Markov chain length can, however, be reduced to shorten the time needed to reach the quasi-equilibrium distribution.
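The acceptance rule described above is compact enough to write directly. A minimal, self-contained MATLAB sketch (the function and variable names are illustrative):

function accept = MetropolisAccept(delta, T)
% Metropolis criterion: an improving move (delta <= 0) is always
% accepted; a deteriorating move (delta > 0) is accepted with
% probability exp(-delta/T), which shrinks as T decreases.
% delta : change in objective value (new cost minus current cost)
% T     : current temperature
if delta <= 0
    accept = true;
else
    accept = rand <= exp(-delta/T);
end
end

This is the same rule implemented inline in the inner loop of the code in Section III.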
2.4 Characteristics of the simulated annealing algorithm

Simulated annealing has a wide application range, high reliability in finding the global optimum, a simple structure, and is easy to implement. Its search strategy helps avoid the trap of local optima and improves the reliability of reaching the global optimum. The algorithm is very robust because, compared with ordinary optimization search methods, it adopts several unique mechanisms, mainly the following:

(1) Accepting deteriorating solutions with a certain probability. Simulated annealing introduces not only appropriate stochastic factors but also the natural mechanism of the physical annealing process. This enables it to accept, during iteration, not only points that make the objective value "better" but also, with a certain probability, points that make it "worse". The states appearing during iteration are generated randomly, a later state is not required to be superior to the former one, and the acceptance probability gradually decreases as the temperature drops. Many traditional optimization algorithms are deterministic: the transfer from one search point to the next follows fixed transfer rules and relations. Such determinism often prevents the search from reaching the optimum and limits the algorithm's scope of application. Simulated annealing searches probabilistically, which increases the flexibility of the search process.

(2) Introducing algorithm control parameters. Control parameters analogous to the annealing temperature are introduced, dividing the optimization into stages and determining the criterion for random state acceptance at each stage; the Metropolis algorithm provides the acceptance function with a simple mathematical model. Simulated annealing has two important steps. First, at each control-parameter value, a neighboring random state is generated from the previous iterate, the acceptance criterion determined by the control parameter decides whether the new state is chosen, and a random Markov chain of a certain length is formed. Second, the control parameter is decreased slowly and the acceptance criterion tightened until the control parameter approaches zero and the state chain stabilizes at the optimum of the problem, which improves the reliability of the global optimum found by the algorithm.

(3) Few requirements on the objective function. Traditional search algorithms need not only objective values but also auxiliary information such as derivatives to determine the search direction; when such information is unavailable, these algorithms fail. Simulated annealing needs no auxiliary information: it only defines a neighborhood structure, selects adjacent solutions within it, and evaluates them with the objective function.
2.5 Improvement directions for the simulated annealing algorithm

Improving the search efficiency of simulated annealing while guaranteeing the required optimization quality is the main content of its improvement [9-10]. Feasible schemes include: choosing an appropriate initial state; designing a suitable state-generation function so that the candidate states show the required spatial dispersion or locality according to the needs of the search; designing an efficient annealing schedule; improving the temperature-control scheme; using a parallel search structure; and designing a suitable termination criterion. In addition, simulated annealing can be improved by adding steps. The main methods are:

(1) Adding a memory function. To avoid losing the best solution encountered so far because of the probabilistic acceptance step during the search, a storage step can be added to record the best state found to date.

(2) Adding a heating or reheating process. Raising the temperature at appropriate moments during the run re-activates the acceptance probability of the states, adjusting the current state of the search and preventing the algorithm from stagnating at a local minimum (a sketch follows this list).

(3) Adopting multiple sampling per state: for each current state, multiple candidate searches are used and the best state in the region is accepted with probability, instead of the single comparison of the standard simulated annealing algorithm.

(4) Combining with other search algorithms (e.g., genetic algorithms, immune algorithms) to exploit their advantages and improve the efficiency and quality of the solution.
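Improvement (2) can be sketched as a small helper. This is illustrative only; the stall counter and the parameters stallMax and reheatFactor are assumptions, not part of the original post:

function [T, stallCount] = Reheat(T, T0, stallCount, stallMax, reheatFactor)
% Illustrative reheating step: if the best cost has not improved for
% stallMax consecutive outer iterations, raise the temperature back to
% reheatFactor*T0 so that deteriorating moves can be accepted again and
% the search can escape a local minimum.
if stallCount >= stallMax
    T = reheatFactor * T0;   % e.g. reheatFactor = 0.5
    stallCount = 0;          % restart the stall counter
end
end

In the main loop of Section III this would be called once per outer iteration, incrementing stallCount whenever BestCost(it) equals BestCost(it-1).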
3 Simulated annealing algorithm flow

The generation and acceptance of a new solution in simulated annealing proceed in three steps:

(1) A generation function produces a new solution in the solution space from the current solution. To ease subsequent computation and acceptance and to reduce running time, the new solution is usually produced by a simple transformation of the current solution. Note that the transformation used determines the neighborhood structure of the current solution and therefore influences the choice of the cooling schedule.

(2) Whether the new solution is accepted is judged by an acceptance criterion, most commonly the Metropolis criterion: if ΔE < 0, accept X′ as the new current solution X; otherwise, accept X′ as the new current solution X with probability exp(−ΔE/T).

(3) When the new solution is accepted, it replaces the current solution: it suffices to apply the transformation corresponding to the new solution and to update the objective-function value. The current solution has then completed one iteration, and the next round can start from it. If the new solution is discarded, the next round continues from the original current solution.

The solution obtained by simulated annealing is independent of the initial state (the starting point of the iteration) and is asymptotically convergent; the algorithm has been proved to converge to the global optimum with probability 1. Simulated annealing consists of three parts: the solution space, the objective function, and the initial solution. The specific procedure is as follows [8]:

(1) Initialization: set the initial temperature T (sufficiently large), the initial solution state x0 (the starting point of the iteration), and the number of iterations L at each value of T.
(2) For k = 1, …, L, do steps (3) to (6).
(3) Generate a new solution X′.
(4) Compute the increment ΔE = E(X′) − E(X), where E(·) is the evaluation function.
(5) If ΔE < 0, accept X′ as the new current solution; otherwise accept X′ as the new current solution with probability exp(−ΔE/T).
(6) If the termination condition is met, output the current solution as the optimal solution and end the program.
(7) T is decreased gradually; while T → 0 has not yet been reached, go to step (2).

The simulated annealing flow is shown in Figure 7.1.

4 Key parameters

Simulated annealing delivers high-quality solutions, is quite general, and is easy to implement. However, to obtain the optimal solution the algorithm usually requires a fairly high initial temperature and a sufficient number of samples, so its running time is often long. In terms of the algorithm's structure, the new-state generation function, the initial temperature, the temperature-decay function, the Markov chain length, and the stopping criterion are the main components that directly affect the optimization result.

State generation function
The state generation function should be designed so that the generated candidate solutions spread over the entire solution space as far as possible.
In general, the state generation function consists of two parts: the way candidate solutions are generated, and the probability distribution used to generate them. The generation mode is determined by the nature of the problem; candidates are usually generated, with a certain probability, within the neighborhood structure of the current state.

Initial temperature
The temperature T plays a decisive role in the algorithm: it directly controls the course of the annealing. By the acceptance criterion for random moves, the higher the initial temperature, the higher the probability of obtaining a high-quality solution, with a Metropolis acceptance rate of about 1. But an excessively high initial temperature increases the computation time. To this end, one can sample a group of states uniformly and use the variance of their objective values as the initial temperature (a sketch follows at the end of this section).

Temperature-decay function
The temperature-decay function is the temperature update function used to modify the temperature value in the outer loop. The most commonly used update is exponential decay, T(n+1) = K × T(n), where 0 < K < 1 is a constant very close to 1.

Selection of the Markov chain length L
The Markov chain length is the number of iterative optimization steps performed under isothermal conditions. The selection principle is that, given that the decay function of the control parameter T has been chosen, L should be large enough to restore quasi-equilibrium at each value of the control parameter; typically L is between 100 and 1000.

Algorithm stopping criterion
The stopping rule determines when the algorithm ends. One can simply set a final temperature Tf and terminate the algorithm when T ≤ Tf. However, the convergence theory of simulated annealing requires T to tend to zero, which is impractical. Commonly used stopping criteria include: setting a threshold on the final temperature, setting a threshold on the number of iterations, or stopping the search when the best value remains unchanged over consecutive steps.
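The sampling rule for the initial temperature can be sketched by reusing the helper names from the code in Section III (CreateRandomSolution, CostFunction, model); how they are combined here is an assumption, not code from the original post:

% Illustrative initial-temperature estimate: sample random solutions
% uniformly and use the spread of their costs to set T0.
nSamples = 50;
sampleCost = zeros(nSamples,1);
for s = 1:nSamples
    sampleCost(s) = CostFunction(CreateRandomSolution(model)); % first output is the cost
end
T0 = std(sampleCost);  % the text suggests the variance; either choice fixes the scale of T0
K = 0.98;              % exponential decay constant, 0 < K < 1, close to 1
% outer-loop update, once per temperature stage: T = K*T;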
III. Some source code (MATLAB)
%
clc;
clear;
close all;
%% Problem Definition
model=SelectModel(); % Select Model of the Problem (SelectModel, MyCost, CreateRandomSolution and PlotSolution are helper functions from the original project, not listed in this post)
model.eta=0.1;
CostFunction=@(q) MyCost(q,model); % Cost Function
%% SA Parameters
MaxIt=1200; % Maximum Number of Iterations
MaxIt2=80; % Maximum Number of Inner Iterations
T0=100; % Initial Temperature
alpha=0.98; % Temperature Damping Rate
%% Initialization
% Create Initial Solution
x.Position=CreateRandomSolution(model);
[x.Cost, x.Sol]=CostFunction(x.Position);
% Update Best Solution Ever Found
BestSol=x;
% Array to Hold Best Cost Values
BestCost=zeros(MaxIt,1);
% Set Initial Temperature
T=T0;
%% SA Main Loop
for it=1:MaxIt
for it2=1:MaxIt2
% Create Neighbor
xnew.Position=CreateNeighbor(x.Position);
[xnew.Cost, xnew.Sol]=CostFunction(xnew.Position);
if xnew.Cost<=x.Cost
% xnew is better, so it is accepted
x=xnew;
else
% xnew is not better, so it is accepted conditionally
delta=xnew.Cost-x.Cost;
p=exp(-delta/T);
if rand<=p
x=xnew;
end
end
% Update Best Solution
if x.Cost<=BestSol.Cost
BestSol=x;
end
end
% Store Best Cost
BestCost(it)=BestSol.Cost;
% Display Iteration Information
if BestSol.Sol.IsFeasible
FLAG=' *';
else
FLAG='';
end
disp(['Iteration ' num2str(it) ': Best Cost = ' num2str(BestCost(it)) FLAG]);
% Reduce Temperature
T=alpha*T;
% Plot Best Solution Found So Far
figure(1);
PlotSolution(BestSol.Sol,model);
pause(0.01);
end
%% Results
figure;
plot(BestCost,'LineWidth',2);
xlabel('Iteration');
ylabel('Best Cost');
grid on;
%% Neighborhood Operators (in R2014a, save these as separate .m files; scripts cannot contain local functions before R2016b)
function qnew=CreateNeighbor(q)
m=randi([1 3]);
switch m
case 1
% Do Swap
qnew=Swap(q);
case 2
% Do Reversion
qnew=Reversion(q);
case 3
% Do Insertion
qnew=Insertion(q);
end
end
function qnew=Swap(q)
n=numel(q);
i=randsample(n,2);
i1=i(1);
i2=i(2);
qnew=q;
qnew([i1 i2])=q([i2 i1]);
end
function qnew=Reversion(q)
n=numel(q);
i=randsample(n,2);
i1=min(i(1),i(2));
i2=max(i(1),i(2));
qnew=q;
qnew(i1:i2)=q(i2:-1:i1);
end
function qnew=Insertion(q)
n=numel(q);
i=randsample(n,2);
i1=i(1);
i2=i(2);
if i1<i2
qnew=[q(1:i1-1) q(i1+1:i2) q(i1) q(i2+1:end)];
else
qnew=[q(1:i2) q(i1) q(i2+1:i1-1) q(i1+1:end)];
end
end
IV. Operation results
V. MATLAB version and references
1 MATLAB version: R2014a
2 References:
[1] Yang Baoyang, Yu Jizhou, Yang Shan. Intelligent Optimization Algorithms and Their MATLAB Examples (2nd Edition) [M]. Publishing House of Electronics Industry, 2016.
[2] Zhang Yan, Wu Shuigen. MATLAB Optimization Algorithm Source Codes [M]. Tsinghua University Press, 2017.