Whale Optimization Algorithm (WOA)
This paper introduces a nature-inspired meta-heuristic optimization algorithm, the Whale Optimization Algorithm (WOA), which simulates the social behavior of humpback whales and models their bubble-net hunting strategy.
1.1 Inspiration
Whales are considered the largest mammals in the world. An adult whale can be 30 meters long and weigh 180 tons. There are seven main species of these giant mammals, including killer, minke, sei, humpback, right, fin, and blue whales. Whales are often thought of as predators that never sleep, because they must return to the ocean surface to breathe; in fact, only half of their brain sleeps at a time. Whales have cells in certain areas of the brain similar to those of humans, called spindle cells. These cells are responsible for judgment, emotion, and social behavior in humans; in other words, spindle cells are part of what sets us apart from other living things. Whales have twice as many of these cells as adult humans, which is a main reason for their high intelligence and emotional capacity. Whales have been shown to think, learn, judge, communicate, and even become emotional like humans, although at a much lower level of intelligence. Whales (mainly killer whales) have also been observed to develop their own dialects. Another interesting point is the social behavior of whales: they can live alone or in groups, but most are observed in groups. Some of them (killer whales, for example) live in a family for their entire life cycle. One of the largest baleen whales is the humpback whale; an adult is almost as big as a school bus. Their favorite prey are krill and small fish. Figure 1 shows this mammal.
Figure 1. Bubble-net feeding behavior of humpback whales
The most interesting thing about humpback whales is their special hunting method. This foraging behavior is known as the bubble-net feeding method. Humpback whales prefer to hunt krill or small fish close to the surface. It has been observed that this foraging is done by creating distinctive bubbles along a circle or a "9"-shaped path, as shown in Figure 1. Until 2011, this behavior had only been investigated from surface observations. Researchers have since studied it using tag sensors, capturing 300 tag-derived bubble-net feeding events from nine humpback whales. They identified two bubble-related maneuvers, which they named upward-spirals and double-loops. In the former, the humpback dives down about 12 meters, then starts creating bubbles in a spiral around its prey and swims up toward the surface. The latter consists of three distinct stages: the coral loop, lobtailing (slapping the tail against the water), and the capture loop. I won't go into details here. The key point is that bubble-net feeding is a special behavior unique to humpback whales, and the Whale Optimization Algorithm mimics the spiral bubble-net feeding strategy to perform optimization.
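Concretely, WOA translates this behavior into two position-update rules (the standard equations from the original WOA formulation by Mirjalili and Lewis). In the shrinking-encircling move, each whale moves toward the best solution found so far, \(\vec{X}^*\):

\[
\vec{D} = \left| \vec{C} \cdot \vec{X}^*(t) - \vec{X}(t) \right|, \qquad
\vec{X}(t+1) = \vec{X}^*(t) - \vec{A} \cdot \vec{D},
\]

where \(\vec{A} = 2\vec{a} \cdot \vec{r} - \vec{a}\) and \(\vec{C} = 2\vec{r}\), with \(\vec{a}\) decreasing linearly from 2 to 0 over the iterations and \(\vec{r}\) a random vector in \([0, 1]\). The spiral bubble-net move imitates the helix-shaped path:

\[
\vec{X}(t+1) = \vec{D}' \cdot e^{bl} \cdot \cos(2\pi l) + \vec{X}^*(t), \qquad
\vec{D}' = \left| \vec{X}^*(t) - \vec{X}(t) \right|,
\]

where \(b\) is a constant defining the shape of the logarithmic spiral and \(l\) is a random number in \([-1, 1]\). Each whale chooses between the two moves with probability 0.5 per iteration, and whenever \(|\vec{A}| \ge 1\) the encircling move targets a randomly chosen whale instead of the best one, which gives the algorithm its exploration ability.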
2. LSTM model
Long Short-Term Memory (LSTM) is a special kind of RNN designed to mitigate the vanishing- and exploding-gradient problems that arise during backpropagation through time. By introducing a gate mechanism, it gives the model the long-term memory that a plain RNN lacks. The structure of the LSTM model is as follows:
Specifically, one neuron in the LSTM model contains one cell state and three gate mechanisms. The cell state is the key to the LSTM model; it acts like memory and is the memory space of the model. The cell state changes over time, and the recorded information is determined and updated by the gates. A gate is a way of selectively letting information through, implemented with a sigmoid function and element-wise multiplication. The sigmoid output lies between 0 and 1, and the element-wise multiplication determines how much of each component is allowed to pass. When the sigmoid outputs 0, the information is discarded; when it outputs 1, the information is passed through completely (that is, fully remembered) [2].
The LSTM has three gates to protect and control the cell state: the forget gate, the input (update) gate, and the output gate.
The cell state is like a conveyor belt. It runs straight down the entire chain with only a few minor linear interactions, so information can easily flow along it unchanged.
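Putting the pieces together, the standard LSTM cell update can be written as (\(\sigma\) is the sigmoid function and \(\odot\) denotes element-wise multiplication):

\[
\begin{aligned}
f_t &= \sigma(W_f [h_{t-1}, x_t] + b_f) && \text{(forget gate)} \\
i_t &= \sigma(W_i [h_{t-1}, x_t] + b_i) && \text{(input/update gate)} \\
\tilde{c}_t &= \tanh(W_c [h_{t-1}, x_t] + b_c) && \text{(candidate state)} \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t && \text{(cell state)} \\
o_t &= \sigma(W_o [h_{t-1}, x_t] + b_o) && \text{(output gate)} \\
h_t &= o_t \odot \tanh(c_t) && \text{(hidden state)}
\end{aligned}
\]

The additive form of the cell-state update \(c_t\) is exactly the "conveyor belt": gradients can flow through it without repeatedly passing through squashing nonlinearities, which is what mitigates the vanishing-gradient problem.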
%% WOA-LSTM demo
clc; clear all; close all;

% Generate sample data. Assign your own load data to the variable "data"
% (data must be a row vector; leave a comment if anything is unclear).
num = 100;
x = 1:num;
db = 0.1;
data = abs(0.5.*sin(x) + 0.5.*cos(x) + db.*rand(1, num));
data1 = data;

%% The first 90% of the sequence is used for training, the last 10% for testing
numTimeStepsTrain = floor(0.9*numel(data));
dataTrain = data(1:numTimeStepsTrain+1);
dataTest = data1(numTimeStepsTrain+1:end);
numTimeStepsTest = numel(dataTest) - 1;
YTest = dataTest(2:end);          % ground truth for the test segment

%% Data preprocessing: standardize the training data to zero mean and unit variance
mu = mean(dataTrain);
sig = std(dataTrain);
dataTrainStandardized = (dataTrain - mu)/sig;
XTrain = dataTrainStandardized(1:end-1);
YTrain = dataTrainStandardized(2:end);

%% Create the LSTM regression network, specifying the number of hidden units
% Sequence prediction: one-dimensional input, one-dimensional output
numFeatures = 1;
numResponses = 1;
numHiddenUnits = 20*3;
layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits)
    fullyConnectedLayer(numResponses)
    regressionLayer];

%% WOA parameters: search bounds for the learning rate
lb = 0.001;            % lower bound of the learning rate
ub = 0.1;              % upper bound of the learning rate
Max_iter = 20;         % number of WOA iterations (example value)
t = 0;                 % iteration counter

%% Main loop (the WOA population update goes here; see the sketch below)
while t < Max_iter
    t = t + 1;
end

%% Compare the predictions with the test data
figure(1)
subplot(2,1,1)
plot(YTest, 'gs-', 'LineWidth', 2)
hold on
plot(YPred_best, 'ro-', 'LineWidth', 2)
hold off
legend('Observed', 'Forecast')
xlabel('Time')
title('Forecast with Updates')
subplot(2,1,2)
stem(YPred_best - YTest)

figure(2)
plot(dataTrain(1:end-1))
hold on
idx = numTimeStepsTrain:(numTimeStepsTrain+numTimeStepsTest);
plot(idx, [data(numTimeStepsTrain) YPred_best], '.-')
hold off
xlabel('Time')
ylabel('Data')
title('Forecast')
legend('Observed', 'Forecast')

figure(3)
plot(1:Max_iter, Convergence_curve, 'bo-');
title('Error-cost curve after whale optimization')
xlabel('Number of iterations')
ylabel('Error fitness')
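The main loop above is only a stub; the original post omits the WOA update steps. Below is a minimal sketch of what the loop body could look like for this one-dimensional search over the learning rate, following the standard WOA equations given earlier. The helper woaFitness is hypothetical: it is assumed to train the LSTM with the candidate learning rate (e.g., via trainingOptions('adam', 'InitialLearnRate', lr) and trainNetwork) and return an error measure such as the test RMSE.

% Minimal WOA loop sketch; assumes lb, ub, Max_iter from above and a
% hypothetical fitness helper woaFitness(lr).
SearchAgents = 10;                                  % number of whales (example value)
Positions = lb + (ub - lb).*rand(SearchAgents, 1);  % initial candidate learning rates
Leader_pos = Positions(1);
Leader_score = inf;
Convergence_curve = zeros(1, Max_iter);
t = 0;
while t < Max_iter
    % Evaluate every whale and keep the best one as the leader
    for i = 1:SearchAgents
        Positions(i) = max(min(Positions(i), ub), lb);  % clamp to [lb, ub]
        fitness = woaFitness(Positions(i));             % hypothetical helper (see above)
        if fitness < Leader_score
            Leader_score = fitness;
            Leader_pos = Positions(i);
        end
    end
    a = 2 - t*(2/Max_iter);          % a decreases linearly from 2 to 0
    % Update every whale's position
    for i = 1:SearchAgents
        r1 = rand(); r2 = rand();
        A = 2*a*r1 - a;
        C = 2*r2;
        b = 1;                       % spiral shape constant
        l = 2*rand() - 1;            % random number in [-1, 1]
        p = rand();
        if p < 0.5
            if abs(A) >= 1           % exploration: move relative to a random whale
                rand_idx = randi(SearchAgents);
                D_rand = abs(C*Positions(rand_idx) - Positions(i));
                Positions(i) = Positions(rand_idx) - A*D_rand;
            else                     % exploitation: encircle the leader
                D_leader = abs(C*Leader_pos - Positions(i));
                Positions(i) = Leader_pos - A*D_leader;
            end
        else                         % spiral bubble-net move around the leader
            dist2Leader = abs(Leader_pos - Positions(i));
            Positions(i) = dist2Leader*exp(b*l)*cos(2*pi*l) + Leader_pos;
        end
    end
    t = t + 1;
    Convergence_curve(t) = Leader_score;  % record best fitness so far
end
% After the loop, train the LSTM once more with the best learning rate
% (Leader_pos) to obtain the final predictions YPred_best used in the plots.

Because the search space here is one-dimensional, each whale is a single scalar learning rate; to tune several hyperparameters at once (e.g., the number of hidden units as well), Positions simply becomes a matrix with one column per parameter and the same update rules apply element-wise.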