I. Introduction to the Whale Optimization Algorithm and LSTM
This section introduces the Whale Optimization Algorithm (WOA), which simulates the social behavior of humpback whales and models their bubble-net hunting strategy.
1.1 The inspiration

Whales are considered the largest mammals in the world: an adult whale can be 30 meters long and weigh 180 tons. There are seven main species of these giant mammals, including killer, minke, sei, humpback, right, fin and blue whales. Whales are often thought of as carnivores that never sleep, because they must return to the ocean surface to breathe; in fact, only half of their brain sleeps at a time. Whales have human-like cells in certain areas of the brain, called spindle cells. In humans these cells are responsible for judgment, emotion and social behavior; in other words, spindle cells are part of what sets humans apart from other living things. Whales have twice as many of these cells as adult humans, which is a main reason for their high intelligence and emotional capacity. Whales have been shown to think, learn, judge, communicate and even become emotional much as humans do, though at a far lower level of intelligence. Whales (mainly killer whales) have also been observed to develop their own dialects.

Another interesting point is the social behavior of whales: they can live alone or in groups, but they are mostly observed in groups. Some of them (killer whales, for example) live in a family for their entire life cycle. One of the largest baleen whales is the humpback whale, an adult of which is almost the size of a school bus. Their favorite prey are krill and small fish. Figure 1 shows this mammal.

The most interesting thing about humpback whales is their special hunting method, known as bubble-net feeding. Humpback whales prefer to hunt krill or small fish close to the surface. This foraging is accomplished by creating distinctive bubbles along a circular or figure-"9"-shaped path, as shown in Figure 1. Until 2011, this behavior was known only from surface observations.
Researchers have since studied this behavior using tag sensors, capturing 300 tagged bubble-net feeding events in nine humpback whales. They identified two bubble-related maneuvers, which they named upward-spirals and double-loops. In the former, the humpback dives about 12 meters down, starts creating a spiral of bubbles around its prey, and swims up toward the surface. The latter consists of three distinct stages: the coral loop, the lobtail (slapping the tail against the water surface) and the capture loop; the details are not covered here. What matters is that bubble-net feeding is a special behavior unique to humpback whales, and the whale optimization algorithm simulates the spiral bubble-net feeding strategy to perform optimization.
1.2 Mathematical Model and Optimization Algorithm

1.2.1 Encircling prey

Humpback whales can recognize the location of prey and encircle it. Since the position of the optimum in the search space is not known a priori, the WOA algorithm assumes that the current best candidate solution is the target prey or is close to the optimum. Once the best search agent is defined, the other search agents try to update their positions towards it. This behavior is expressed by the following equations:

D = |C * X*(t) - X(t)|        (1)
X(t+1) = X*(t) - A * D        (2)

where t is the current iteration, X* is the position vector of the best solution obtained so far, X is the position vector of a search agent, and the coefficient vectors A and C are computed as

A = 2a * r - a        (3)
C = 2 * r             (4)

with r a random vector in [0, 1] and a decreased linearly from 2 to 0 over the course of the iterations.

Figure 2a illustrates the idea behind equation (2) for a 2D problem: the position (X, Y) of a search agent can be updated according to the position (X*, Y*) of the current best solution. By adjusting the vectors A and C, different places around the best agent can be reached relative to the current position; the possible update positions in 3D space are shown in Figure 2b. Thanks to the random vector r, any position in the search space between the key points shown in Figure 2 can be reached, so equation (2) allows any search agent to update its position in the neighborhood of the current best solution, which simulates the whales' encircling of prey. The same concept extends to an n-dimensional search space. Note that both plots in Figure 2 are drawn for a = 1 and C = 1.

Figure 2: 2D and 3D position vectors and their possible next positions

1.2.2 Bubble-net attacking method (exploitation phase)

Two approaches were designed to model the bubble-net behavior of humpback whales. The first is the shrinking encircling mechanism, achieved by decreasing the value of a in equation (3). Note that this also narrows the fluctuation range of A: A is a random value in the interval [-a, a], and a decreases from 2 to 0 as the iterations progress.
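As a concrete illustration, the encircling update of equations (1)-(4) can be sketched in Python with NumPy. This is a minimal sketch with my own variable names, not the original code:

```python
import numpy as np

def encircle(X, X_best, a, rng):
    """One encircling-prey step, equations (1)-(4).

    X      : current position of a search agent (1-D array)
    X_best : best position found so far (the "prey")
    a      : control parameter, decreased linearly from 2 to 0
    """
    r1 = rng.random(X.shape)
    r2 = rng.random(X.shape)
    A = 2 * a * r1 - a           # equation (3): A is random in [-a, a]
    C = 2 * r2                   # equation (4): C is random in [0, 2]
    D = np.abs(C * X_best - X)   # equation (1): distance to the prey
    return X_best - A * D        # equation (2): new position around the prey

rng = np.random.default_rng(0)
X_new = encircle(np.array([5.0, -3.0]), np.zeros(2), a=1.0, rng=rng)
```

With a = 0 the coefficient A vanishes and the agent lands exactly on X_best, which is why decreasing a tightens the circle around the prey.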
When A takes random values in [-1, 1], the new position of a search agent can be anywhere between its original position and the position of the current best agent. Figure 3a shows the possible positions from (X, Y) towards (X*, Y*) in 2D space that can be reached with 0 <= A <= 1; this mechanism essentially encircles the prey.

The second approach is the spiral position update. As shown in Figure 3b, it first computes the distance between the whale at (X, Y) and the prey at (X*, Y*), then creates a spiral equation between the two positions to imitate the helix-shaped movement of humpback whales:

X(t+1) = D' * e^(bl) * cos(2*pi*l) + X*(t)        (5)

where D' = |X*(t) - X(t)| is the distance between the whale and the prey (the best solution obtained so far), b is a constant defining the shape of the logarithmic spiral, and l is a random number in [-1, 1].

Figure 3: the bubble-net search mechanism implemented in WOA: (a) shrinking encircling mechanism, (b) spiral position update

Note that humpback whales swim around their prey within a shrinking circle and along a spiral-shaped path simultaneously. To model these simultaneous behaviors, assume there is a 50% probability of choosing between the shrinking encircling mechanism and the spiral model when updating a whale's position during optimization:

X(t+1) = X*(t) - A * D                        if p < 0.5
X(t+1) = D' * e^(bl) * cos(2*pi*l) + X*(t)    if p >= 0.5        (6)

where p is a random number in [0, 1].

1.2.3 Search for prey (exploration phase)

Besides the bubble-net method, humpback whales also search for prey randomly, and this too is modeled with the vector A. Humpback whales search randomly according to each other's positions, so A with random values greater than 1 or less than -1 is used to force a search agent away from a reference whale. In contrast to the exploitation phase, the position of a search agent in the exploration phase is updated using a randomly chosen search agent rather than the best one found so far. This mechanism, together with |A| > 1, emphasizes exploration and allows the WOA algorithm to perform a global search:

D = |C * X_rand - X|        (7)
X(t+1) = X_rand - A * D     (8)

where X_rand is a random position vector chosen from the current population (a random whale).
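The spiral update of equation (5) is equally short in Python. Again this is only a sketch with my own naming:

```python
import numpy as np

def spiral_update(X, X_best, l, b=1.0):
    """Spiral position update, equation (5).

    l : random number in [-1, 1]
    b : constant defining the shape of the logarithmic spiral
    """
    D = np.abs(X_best - X)  # D': distance between whale and prey
    return D * np.exp(b * l) * np.cos(2 * np.pi * l) + X_best

# with l = 0 the exponential and cosine are both 1, so the
# update reduces to X_best + |X_best - X|
X_new = spiral_update(np.array([1.0, 4.0]), np.zeros(2), l=0.0)
```

Note that when the whale already sits on the prey (X equals X_best), D is zero and the whale stays put, regardless of l.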
Some possible positions around a particular solution when |A| > 1 are shown in Figure 4.

Figure 4: the exploration mechanism in WOA (here X* is a randomly selected search agent)

The WOA algorithm starts from a set of randomly initialized solutions. At each iteration, search agents update their positions with respect to either a randomly chosen agent or the best solution obtained so far. The parameter a is decreased from 2 to 0 over the iterations to shift gradually from exploration to exploitation. A random search agent is chosen when |A| > 1, while the best solution is chosen when |A| < 1 to update an agent's position. Depending on the value of p, WOA switches between the circular and the spiral movement. Finally, the algorithm terminates when the termination criterion is satisfied. The pseudocode of WOA is shown in Figure 5.

Figure 5: WOA pseudocode

1.3 Code analysis

Once the principle above is understood, the code itself holds no difficulties; the following walks through how the key ideas are implemented. The problem to optimize is minimizing the sum of squares of thirty numbers.

(1) Parameter initialization. At the start, only the number of agents and the maximum number of iterations need to be set; the other algorithm parameters depend on the current iteration count and are therefore set inside the loop.
```matlab
SearchAgents_no = 30;  % number of search agents
Max_iteration = 500;   % maximum number of iterations
```

(2) Population initialization. Randomly initialize the position of each agent in every dimension and make sure the values lie within the bounds.

```matlab
Positions = rand(SearchAgents_no, dim) .* (ub - lb) + lb;
```
(3) Population evaluation. Evaluate the objective value of each agent in the population; if an agent is better than the current best solution, set it as the best solution.
```matlab
for i = 1:size(Positions, 1)
    % evaluate the objective value of each agent
    fitness = fobj(Positions(i, :));
    % update the best solution so far
    if fitness < Leader_score  % for a maximization problem use ">" here
        Leader_score = fitness;
        Leader_pos = Positions(i, :);
    end
end
```
(4) Set the algorithm parameters that depend on the iteration count.
```matlab
a = 2 - t * (2 / Max_iter);      % equation (3): a decreases linearly from 2 to 0
a2 = -1 + t * (-1 / Max_iter);   % a2 decreases linearly from -1 to -2; used to compute l
```
(5) Update the position of each dimension of each agent.
```matlab
% Update the position of search agents
for i = 1:size(Positions, 1)
    r1 = rand();                % random number in [0,1]
    r2 = rand();                % random number in [0,1]
    A = 2 * a * r1 - a;         % equation (3)
    C = 2 * r2;                 % equation (4)
    b = 1;                      % constant in equation (5)
    l = (a2 - 1) * rand + 1;    % random number l in equation (5)
    p = rand();                 % probability p in equation (6)
    for j = 1:size(Positions, 2)
        if p < 0.5
            if abs(A) >= 1
                rand_leader_index = floor(SearchAgents_no * rand() + 1);
                X_rand = Positions(rand_leader_index, :);
                D_X_rand = abs(C * X_rand(j) - Positions(i, j));      % equation (7)
                Positions(i, j) = X_rand(j) - A * D_X_rand;           % equation (8)
            elseif abs(A) < 1
                D_Leader = abs(C * Leader_pos(j) - Positions(i, j));  % equation (1)
                Positions(i, j) = Leader_pos(j) - A * D_Leader;       % equation (2)
            end
        elseif p >= 0.5
            distance2Leader = abs(Leader_pos(j) - Positions(i, j));   % equation (5)
            Positions(i, j) = distance2Leader * exp(b .* l) .* cos(l .* 2 * pi) + Leader_pos(j);
        end
    end
end
```
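For readers who prefer Python, the whole loop above can be condensed into a NumPy sketch of WOA applied to the same example problem (minimizing the sum of squares of thirty numbers). This is a paraphrase of the MATLAB code with my own function and parameter names, not a reference implementation:

```python
import numpy as np

def woa(fobj, dim=30, lb=-100.0, ub=100.0, n_agents=30, max_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.random((n_agents, dim)) * (ub - lb) + lb   # population initialization
    leader_pos, leader_score = None, np.inf
    for t in range(max_iter):
        # evaluate every agent and keep the best one (the "leader")
        for i in range(n_agents):
            X[i] = np.clip(X[i], lb, ub)
            f = fobj(X[i])
            if f < leader_score:
                leader_score, leader_pos = f, X[i].copy()
        a = 2 - t * (2 / max_iter)       # a decreases linearly from 2 to 0
        a2 = -1 + t * (-1 / max_iter)    # a2 decreases linearly from -1 to -2
        for i in range(n_agents):
            r1, r2 = rng.random(), rng.random()
            A, C = 2 * a * r1 - a, 2 * r2
            b, l, p = 1.0, (a2 - 1) * rng.random() + 1, rng.random()
            if p < 0.5:
                if abs(A) >= 1:          # exploration: follow a random whale
                    X_rand = X[rng.integers(n_agents)]
                    X[i] = X_rand - A * np.abs(C * X_rand - X[i])
                else:                    # exploitation: encircle the leader
                    X[i] = leader_pos - A * np.abs(C * leader_pos - X[i])
            else:                        # spiral update around the leader
                D = np.abs(leader_pos - X[i])
                X[i] = D * np.exp(b * l) * np.cos(2 * np.pi * l) + leader_pos
    return leader_pos, leader_score

best_pos, best_score = woa(lambda x: float(np.sum(x ** 2)))
```

Unlike the MATLAB version, this sketch draws one A and C per agent rather than reusing them per dimension, which keeps the update vectorized without changing the character of the search.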
2. LSTM overview

2.1 LSTM control flow

The control flow of an LSTM is that of an ordinary recurrent network: data flowing through the cells is processed during forward propagation. The difference lies in the structure of the LSTM cell and the computations inside it. This sequence of operations gives the LSTM the ability to choose what information to keep and what to forget. The operations may look a little complicated at first, but we will walk through them step by step.
2.2 Core concepts

The core concepts of LSTM are the cell state and the "gate" structures. The cell state acts as a pathway that carries information along the sequence; you can think of it as the network's "memory". In principle, the cell state can carry information all the way through the sequence, so information from early time steps can still reach the cells of later time steps, which overcomes the short-term-memory problem. Information is added to or removed from the cell state through gates, which learn during training which information to keep and which to forget.
2.3 Sigmoid

The gate structures contain the sigmoid activation function. The sigmoid function is similar to tanh, except that it squashes values to between 0 and 1 rather than between -1 and 1. This makes it easier to update or forget information: any number multiplied by 0 becomes 0, so that information is removed; any number multiplied by 1 stays itself, so that information is fully preserved. The network thus learns which data to forget and which to keep.

2.4 LSTM gate structures

An LSTM has three kinds of gates: the forget gate, the input gate, and the output gate.

2.4.1 Forget gate

The forget gate decides which information should be discarded and which retained. The previous hidden state and the current input are both passed to a sigmoid function, whose output lies between 0 and 1: values closer to 0 mean the information should be discarded, and values closer to 1 mean it should be retained.

2.4.2 Input gate

The input gate is used to update the cell state. First, the previous hidden state and the current input are passed through a sigmoid function, which squashes the result to between 0 and 1 to decide which information to update (0 means unimportant, 1 means important). Second, the previous hidden state and the current input are passed through a tanh function to create a vector of new candidate values. Finally, the sigmoid output is multiplied by the tanh output, which decides which of the candidate values are important enough to keep.

2.4.3 Cell state

The next step is computing the cell state. First, the previous cell state is multiplied element-wise by the forget vector; if an entry is multiplied by a value close to zero, that information is dropped from the new cell state.
The result is then added element-wise to the output of the input gate, writing the new information the network has discovered into the cell state. This yields the updated cell state.

2.4.4 Output gate

The output gate determines the value of the next hidden state, which contains information about previous inputs. First, the previous hidden state and the current input are passed to a sigmoid function; then the newly updated cell state is passed through tanh. The tanh output is multiplied by the sigmoid output to decide what information the hidden state should carry. The hidden state is the output of the current cell, and the new cell state and new hidden state are passed on to the next time step.

To recap: the forget gate decides which relevant information from earlier steps to keep; the input gate decides which information in the current input is worth adding; and the output gate determines the next hidden state.
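The gate arithmetic described above can be collected into a single-time-step sketch in Python with NumPy. The weight layout and names here are hypothetical, chosen only for illustration; real frameworks pack the gates differently:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step. W maps gate name -> matrix acting on [h_prev; x]."""
    z = np.concatenate([h_prev, x])
    f = sigmoid(W['f'] @ z + b['f'])   # forget gate: what to keep from c_prev
    i = sigmoid(W['i'] @ z + b['i'])   # input gate: which candidates to add
    g = np.tanh(W['g'] @ z + b['g'])   # candidate values
    o = sigmoid(W['o'] @ z + b['o'])   # output gate: what the new h reveals
    c = f * c_prev + i * g             # new cell state
    h = o * np.tanh(c)                 # new hidden state
    return h, c

# tiny usage example with random (hypothetical) weights
rng = np.random.default_rng(0)
nx, nh = 3, 4
W = {k: rng.standard_normal((nh, nh + nx)) for k in 'figo'}
b = {k: np.zeros(nh) for k in 'figo'}
h, c = lstm_step(rng.standard_normal(nx), np.zeros(nh), np.zeros(nh), W, b)
```

Because o lies in (0, 1) and tanh(c) lies in (-1, 1), every entry of the hidden state h is bounded by 1 in magnitude, while the cell state c itself is unbounded, which is what lets it accumulate long-range information.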
II. Partial source code
Insert the code snippet here.
III. Run results
IV. MATLAB version
MATLAB version: 2014a