Neural network – support vector machine
The Support Vector Machine (SVM) was first proposed by Cortes and Vapnik in 1995. It offers distinct advantages for small-sample, nonlinear, and high-dimensional pattern recognition, and it generalizes to other machine learning problems such as function fitting.
Second, the grey Wolf algorithm
1 Introduction:
The Grey Wolf Optimizer (GWO) is a swarm intelligence optimization algorithm proposed in 2014 by Mirjalili et al. of Griffith University, Australia. The algorithm is an optimization search method inspired by the predation behavior of grey wolves; it converges strongly, has few parameters, and is easy to implement. In recent years it has attracted wide attention from scholars and has been successfully applied to shop scheduling, parameter optimization, image classification, and other fields.
2 Algorithm principle:
Grey wolves are gregarious canids at the top of the food chain, and they adhere to a rigid social dominance hierarchy, as shown in the figure:
**The first layer of the social hierarchy:** the leader of the pack, marked as α. The α wolf is mainly responsible for decisions about activities such as hunting, and the time and place of resting and sleeping. Because the other wolves must obey the α wolf's orders, the α wolf is also known as the dominant wolf. Notably, the α wolf is not necessarily the strongest wolf in the pack, but in terms of management ability the α wolf must be the best.
The second tier of the social hierarchy: the β wolf. It obeys the α wolf and assists the α wolf in making decisions. When the α wolf dies or grows old, the β wolf is the best candidate to become the new α. Although the β wolf must obey the α wolf, it has control over the wolves in the other social tiers.
The third tier of the social hierarchy: the δ wolf. It obeys the α and β wolves and dominates the rest of the hierarchy. δ wolves are generally composed of juvenile wolves, sentinel wolves, hunting wolves, old wolves, and nursing wolves.
The fourth tier of the social hierarchy: the ω wolves, which usually must obey all other wolves in the social hierarchy. Although ω wolves may not seem to play much of a role in the pack, without them the pack would face internal problems such as cannibalism.
The GWO optimization process includes social stratification, tracking, encircling, and attacking of gray wolves, as described below.
1) Social hierarchy: when designing GWO, the first step is to build the grey wolf social hierarchy model. The fitness of each individual in the population is calculated, and the three grey wolves with the best fitness are marked as α, β, and δ; the remaining wolves are labeled ω. In other words, the social hierarchy of the grey wolf pack, from high to low, is α, β, δ, and ω. The GWO optimization process is guided mainly by the best three solutions in each generation of the population (i.e., α, β, and δ).
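As a sketch of this ranking step (in Python rather than the article's MATLAB; the function name is my own), the α, β, and δ wolves are simply the three lowest-fitness individuals:

```python
def rank_wolves(fitness):
    """Return the indices of the alpha, beta, and delta wolves:
    the three individuals with the best (lowest) fitness."""
    order = sorted(range(len(fitness)), key=lambda i: fitness[i])
    return order[0], order[1], order[2]

# toy population of five wolves
print(rank_wolves([3.2, 0.5, 7.1, 1.4, 2.8]))  # (1, 3, 4)
```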
2) Encircling prey (Encircling Prey): when grey wolves search for prey, they gradually approach the prey and encircle it. The mathematical model of this behavior is as follows:

D = | C ∘ X_p(t) − X(t) |
X(t+1) = X_p(t) − A ∘ D
A = 2a ∘ r1 − a
C = 2 r2

where t is the current iteration number; ∘ represents the Hadamard (element-wise) product; A and C are synergy coefficient vectors; X_p represents the position vector of the prey; and X(t) represents the position vector of the current grey wolf. During the entire iteration, a decreases linearly from 2 to 0, and r1 and r2 are random vectors in [0, 1].
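The encircling update can be sketched as follows (a Python illustration in the scalar case; the article's code is MATLAB, and the optional r1/r2 arguments are mine, added to make the example deterministic):

```python
import random

def encircle(xp, x, a, r1=None, r2=None):
    """One scalar encircling-prey update.
    xp: prey position, x: current wolf position, a: control parameter (2 -> 0)."""
    r1 = random.random() if r1 is None else r1
    r2 = random.random() if r2 is None else r2
    A = 2 * a * r1 - a      # A is a random value in [-a, a]
    C = 2 * r2              # C is a random value in [0, 2]
    D = abs(C * xp - x)     # distance to the (randomly weighted) prey
    return xp - A * D       # new wolf position

# with r1 = 0.5, A = 0, so the wolf lands exactly on the prey
print(encircle(1.0, 5.0, 2.0, r1=0.5, r2=0.5))  # 1.0
```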
3) Hunting
Grey wolves can identify the location of potential prey (the optimal solution), and the search process is completed under the guidance of the α, β, and δ wolves. However, the solution-space characteristics of many problems are unknown, and grey wolves cannot determine the exact location of the prey (the optimal solution). To simulate the search behavior of grey wolves (candidate solutions), it is assumed that α, β, and δ have a strong ability to identify the location of potential prey. Therefore, during each iteration, the best three wolves of the current population (α, β, and δ) are retained, and the other search agents (including ω) update their positions based on the leaders' location information. The mathematical model of this behavior can be expressed as follows:

D_α = | C1 ∘ X_α − X |,  D_β = | C2 ∘ X_β − X |,  D_δ = | C3 ∘ X_δ − X |
X1 = X_α − A1 ∘ D_α,  X2 = X_β − A2 ∘ D_β,  X3 = X_δ − A3 ∘ D_δ
X(t+1) = (X1 + X2 + X3) / 3

where X_α, X_β, and X_δ represent the position vectors of α, β, and δ in the current population; X represents the position vector of the grey wolf; and D_α, D_β, and D_δ represent the distances between the current candidate wolf and the best three wolves. When |A| > 1, grey wolves scatter across different areas to search for prey; when |A| < 1, they concentrate their search on one or a few areas of prey.
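The hunting update for a single coordinate can be sketched like this (an illustrative Python version; the `rnd` hook is mine, used to make the example deterministic):

```python
import random

def hunt_step(x, x_alpha, x_beta, x_delta, a, rnd=random.random):
    """Update one coordinate of wolf x, guided by the alpha, beta, delta wolves."""
    guided = []
    for leader in (x_alpha, x_beta, x_delta):
        A = 2 * a * rnd() - a          # coefficient in [-a, a]
        C = 2 * rnd()                  # coefficient in [0, 2]
        D = abs(C * leader - x)        # distance to this leader
        guided.append(leader - A * D)  # X1, X2, X3
    return sum(guided) / 3.0           # X(t+1) = (X1 + X2 + X3) / 3

# with rnd always 0.5, A = 0 and each move lands on its leader,
# so the wolf moves to the centroid of alpha, beta, delta
print(hunt_step(0.0, 3.0, 6.0, 9.0, a=2.0, rnd=lambda: 0.5))  # 6.0
```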
As can be seen from the figure, the final position of the candidate solution falls inside the random circle defined by α, β, and δ. In general, α, β, and δ first predict the approximate position of the prey (the latent optimum), and then the other candidate wolves randomly update their positions near the prey under the guidance of the current best three wolves.
4) Attacking prey (Attacking Prey): in constructing the attacking-prey model, per the formulas in 2), the decrease of a shrinks the range of A as well. In other words, A is a random vector on the interval [-a, a] (note: in the original author's first paper this is [-2a, 2a]), where a decreases linearly over the iterations. When A lies in [-1, 1], the next position of the search agent can be anywhere between the current grey wolf and the prey.
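The linear decay of a, and the resulting shrinkage of A, can be sketched as follows (an illustrative Python snippet; the function name is my own):

```python
def a_schedule(t, max_iter):
    """Control parameter a decreases linearly from 2 to 0 over the run,
    so A = 2*a*r1 - a shrinks from the range [-2, 2] toward 0."""
    return 2 - t * (2.0 / max_iter)

# once a < 1, |A| < 1 is guaranteed, so late iterations favor
# attacking (exploitation) over dispersed searching
for t in (0, 2, 4):
    print(a_schedule(t, max_iter=4))  # 2.0, then 1.0, then 0.0
```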
5) Searching for prey (Search for Prey): grey wolves rely mainly on the information of α, β, and δ to find prey. They first disperse to search for information about the prey's location, and then converge to attack it. In the dispersed model, |A| > 1 forces the search agent away from the prey, which enables GWO to perform a global search. The other search coefficient in GWO is C. From the formulas in 2), the vector C consists of random values in the interval [0, 2]; this coefficient provides a random weight for the prey, increasing (|C| > 1) or decreasing (|C| < 1) its influence. This helps GWO exhibit random search behavior during optimization and avoid falling into local optima. Notably, C does not decrease linearly: C remains random throughout the iterations, which helps the algorithm escape local optima, and this becomes especially important in the later stages of the iterations.
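The search coefficient C, in contrast to A, is drawn fresh on every use and never decayed; a minimal Python sketch (function name mine, the `rnd` hook added for determinism):

```python
import random

def search_coeff(rnd=random.random):
    """C ~ Uniform[0, 2] at every iteration; it is never decayed.
    C > 1 strengthens the prey's pull, C < 1 weakens it."""
    return 2 * rnd()

random.seed(1)
samples = [search_coeff() for _ in range(1000)]
print(all(0.0 <= c <= 2.0 for c in samples))  # True
```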
Three, code
% Grey Wolf Optimizer
function [Alpha_score,Alpha_pos,Convergence_curve]=GWO(SearchAgents_no,Max_iter,lb,ub,dim,fobj)

% initialize alpha, beta, and delta positions
Alpha_pos=zeros(1,dim);
Alpha_score=inf; % change this to -inf for maximization problems

Beta_pos=zeros(1,dim);
Beta_score=inf; % change this to -inf for maximization problems

Delta_pos=zeros(1,dim);
Delta_score=inf; % change this to -inf for maximization problems

% Initialize the positions of search agents
Positions=initialization(SearchAgents_no,dim,ub,lb);

Convergence_curve=zeros(1,Max_iter);

l=0; % Loop counter

% Main loop
while l<Max_iter
    for i=1:size(Positions,1)

        % Return back the search agents that go beyond the boundaries of the search space
        Flag4ub=Positions(i,:)>ub;
        Flag4lb=Positions(i,:)<lb;
        Positions(i,:)=(Positions(i,:).*(~(Flag4ub+Flag4lb)))+ub.*Flag4ub+lb.*Flag4lb;

        % Calculate objective function for each search agent
        fitness=fobj(Positions(i,:));

        % Update Alpha, Beta, and Delta
        if fitness<Alpha_score
            Alpha_score=fitness; % Update alpha
            Alpha_pos=Positions(i,:);
        end

        if fitness>Alpha_score && fitness<Beta_score
            Beta_score=fitness; % Update beta
            Beta_pos=Positions(i,:);
        end

        if fitness>Alpha_score && fitness>Beta_score && fitness<Delta_score
            Delta_score=fitness; % Update delta
            Delta_pos=Positions(i,:);
        end
    end

    a=2-l*((2)/Max_iter); % a decreases linearly from 2 to 0

    % Update the Position of search agents including omegas
    for i=1:size(Positions,1)
        for j=1:size(Positions,2)

            r1=rand(); % r1 is a random number in [0,1]
            r2=rand(); % r2 is a random number in [0,1]

            A1=2*a*r1-a; % Equation (3.3)
            C1=2*r2;     % Equation (3.4)

            D_alpha=abs(C1*Alpha_pos(j)-Positions(i,j)); % Equation (3.5)-part 1
            X1=Alpha_pos(j)-A1*D_alpha;                  % Equation (3.6)-part 1

            r1=rand();
            r2=rand();

            A2=2*a*r1-a; % Equation (3.3)
            C2=2*r2;     % Equation (3.4)

            D_beta=abs(C2*Beta_pos(j)-Positions(i,j)); % Equation (3.5)-part 2
            X2=Beta_pos(j)-A2*D_beta;                  % Equation (3.6)-part 2

            r1=rand();
            r2=rand();

            A3=2*a*r1-a; % Equation (3.3)
            C3=2*r2;     % Equation (3.4)

            D_delta=abs(C3*Delta_pos(j)-Positions(i,j)); % Equation (3.5)-part 3
            X3=Delta_pos(j)-A3*D_delta;                  % Equation (3.6)-part 3

            Positions(i,j)=(X1+X2+X3)/3; % Equation (3.7)
        end
    end

    l=l+1;
    Convergence_curve(l)=Alpha_score;
end
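As a cross-check on the MATLAB listing above, here is a compact Python re-implementation of the same loop (illustrative only: the sphere objective, bounds, population size, and seed are my own choices, and positions are plain lists rather than MATLAB matrices):

```python
import random

def gwo(fobj, dim, lb, ub, n_agents=20, max_iter=100, seed=0):
    """Minimal Grey Wolf Optimizer: minimizes fobj over [lb, ub]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_agents)]
    alpha = beta = delta = [0.0] * dim           # leader positions
    a_score = b_score = d_score = float("inf")   # leader fitness values
    for t in range(max_iter):
        for x in pos:
            for j in range(dim):                 # clamp to the search bounds
                x[j] = min(max(x[j], lb), ub)
            f = fobj(x)
            if f < a_score:                      # update alpha / beta / delta
                a_score, alpha = f, x[:]
            elif f < b_score:
                b_score, beta = f, x[:]
            elif f < d_score:
                d_score, delta = f, x[:]
        a = 2 - t * (2.0 / max_iter)             # a decreases linearly 2 -> 0
        for x in pos:                            # move every wolf toward the leaders
            for j in range(dim):
                new = 0.0
                for leader in (alpha, beta, delta):
                    A = 2 * a * rng.random() - a
                    C = 2 * rng.random()
                    D = abs(C * leader[j] - x[j])
                    new += leader[j] - A * D
                x[j] = new / 3.0                 # average of the three guided moves
    return a_score, alpha

# minimize the 2-D sphere function f(v) = v1^2 + v2^2 over [-5, 5]^2
best, best_pos = gwo(lambda v: sum(c * c for c in v), dim=2, lb=-5.0, ub=5.0)
```

With these parameters the best score shrinks toward zero within a hundred iterations; swap in your own `fobj` and bounds to experiment.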
5. References:
The book “MATLAB Neural Network 43 Case Analysis”
For the complete code download or simulation consulting: www.cnblogs.com/ttmatlab/p/…