Parameter selection plays an important role in a prediction model's generalization ability and prediction accuracy. For a least squares support vector machine (LSSVM) with a radial basis function (RBF) kernel, the parameters of interest are the penalty factor and the kernel parameter, and the choice of these two directly affects the machine's learning and generalization ability. To improve the prediction results of the LSSVM, the grey wolf optimization algorithm is used to optimize its parameters and build a software aging prediction model. Experiments show that the model is effective in predicting software aging.
During long-term continuous operation, defects left in a software system can cause memory leaks, accumulated rounding errors, unreleased file locks and similar phenomena, leading to performance degradation and even crashes. These software aging phenomena not only reduce system reliability but can also endanger life and property. To reduce the harm caused by software aging, it is particularly important to predict the aging trend and apply anti-aging strategies before aging occurs [1].
Many research institutions at home and abroad, such as Bell Labs, IBM, Nanjing University, Wuhan University [2] and Xi'an Jiaotong University [3], have carried out in-depth research on software aging and achieved notable results. Their main research direction is to find the best execution time for the software anti-aging strategy by predicting the software aging trend.
In this paper, the Tomcat server is taken as the research object: Tomcat's operation is monitored, system performance parameters are collected, and a least squares support vector machine software aging prediction model based on the grey wolf optimization algorithm is established. The model predicts the running state of the software and determines when to apply the anti-aging policy.
1. Least squares support vector machine
The Support Vector Machine (SVM) was proposed by Cortes and Vapnik [4]. Based on VC dimension theory and the structural risk minimization principle, SVM handles problems such as small sample size, nonlinearity, high dimensionality and local minima well.
When the number of training samples is large, the quadratic programming problem of SVM becomes complex and model training takes too long. Suykens et al. [5] therefore proposed the Least Squares Support Vector Machine (LSSVM). Building on SVM, they replaced the inequality constraints with equality constraints, turning the quadratic programming problem into a system of linear equations. This largely avoids the heavy computation of SVM and reduces the difficulty of training. In recent years, LSSVM has been widely used in regression estimation and nonlinear modeling and has achieved good prediction results.
In this paper, the radial basis function (RBF) kernel is used as the kernel of the LSSVM model. The parameters of an RBF-kernel LSSVM mainly involve the penalty factor C and the kernel parameter σ. The grey wolf optimization algorithm is used to optimize these two parameters of the LSSVM.
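To make the role of C and σ concrete, the following is a minimal NumPy sketch of LSSVM regression with an RBF kernel (the function names are illustrative, not from the paper): training reduces to solving one linear system in the dual variables α and the bias b, which is exactly the simplification over SVM's quadratic program described above.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """RBF kernel matrix K[i, j] = exp(-||A_i - B_j||^2 / (2 * sigma^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, C, sigma):
    """Solve the LSSVM linear system  [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0                      # top row: sum of alphas equals zero
    A[1:, 0] = 1.0                      # bias column
    A[1:, 1:] = K + np.eye(n) / C       # penalty factor C acts as ridge term
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]              # bias b, dual weights alpha

def lssvm_predict(X_train, alpha, b, X_new, sigma):
    """Prediction f(x) = sum_i alpha_i K(x, x_i) + b."""
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```

A larger C fits the training data more tightly (less regularization), while σ controls how far each training point's influence extends.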
2. Grey Wolf optimization algorithm
In 2014, Mirjalili et al. [6] proposed the Grey Wolf Optimizer (GWO), which seeks the optimal value by simulating the social hierarchy and predation strategy of grey wolves in nature. The GWO algorithm has attracted much attention for its fast convergence, few tuning parameters, and strong performance on function optimization problems. It outperforms particle swarm optimization, differential evolution and gravitational search in global search and convergence, and is widely used in feature subset selection, surface wave parameter optimization and other fields.
2.1 Principle of the grey wolf optimization algorithm
Grey wolves sustain the pack through cooperation among individuals, and during hunting in particular they follow a strict pyramid-shaped social hierarchy. The highest-ranked wolf is α; the remaining wolves are labeled β, δ and ω in turn, and they hunt cooperatively.
In the pack, the α wolf leads the hunt and is responsible for decision-making and management of the whole pack. The β and δ wolves are the next-fittest groups; they help the α wolf manage the pack and share decision-making power during hunting. The remaining individuals are defined as ω wolves, which assist α, β and δ in attacking the prey.
2.2 Description of the grey wolf optimization algorithm
The GWO algorithm imitates the hunting behavior of wolves and divides the hunt into three stages: encircling, hunting and attacking. Capturing the prey corresponds to finding the optimal solution. Assume the search space is V-dimensional and the grey wolf population X consists of N individuals, X = [X1, X2, …, XN]. Each wolf Xi (1 ≤ i ≤ N) has a position Xi = [Xi1, Xi2, …, XiV] in the V-dimensional space. The distance between a wolf's position and the prey's position is measured by fitness: the smaller the distance, the greater the fitness. The optimization process of the GWO algorithm is as follows.
2.2.1 Encircling
First the prey is encircled. The distance between the grey wolf and the prey during this stage is expressed by the model:

D = |C · Xp(m) − X(m)|,  X(m+1) = Xp(m) − A · D    (1)
where Xp(m) is the prey position after the m-th iteration, X(m) is the grey wolf position, D is the distance between the grey wolf and the prey, and A and C are the convergence factor and oscillation factor respectively, calculated as:

A = 2a · r1 − a,  C = 2 · r2    (2)

where the control parameter a decreases linearly from 2 to 0 over the iterations, and r1, r2 are random vectors uniformly distributed in [0, 1].
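As a small illustration, the convergence factor A and oscillation factor C can be sampled per iteration as follows (a NumPy sketch; the function name and signature are illustrative, not from the paper):

```python
import numpy as np

def gwo_coefficients(m, max_iter, dim, rng):
    """Convergence factor A and oscillation factor C at iteration m."""
    a = 2 - 2 * m / max_iter          # control parameter: decreases linearly from 2 to 0
    r1 = rng.random(dim)              # random vector in [0, 1]
    r2 = rng.random(dim)
    A = 2 * a * r1 - a                # convergence factor, ranges over [-a, a]
    C = 2 * r2                        # oscillation factor, ranges over [0, 2]
    return A, C
```

Because a shrinks with the iteration count, |A| also shrinks on average, which is what shifts the pack from exploration to exploitation.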
2.2.2 Hunting
The optimization process of the GWO algorithm locates the prey according to the positions of α, β and δ. The ω wolves, guided by α, β and δ, update their own positions based on the current best search agents, and the prey position is re-estimated from the updated α, β and δ positions. As the prey flees, each wolf's position changes accordingly; the update at this stage is described by:

Dα = |C1 · Xα − X|,  Dβ = |C2 · Xβ − X|,  Dδ = |C3 · Xδ − X|
X1 = Xα − A1 · Dα,  X2 = Xβ − A2 · Dβ,  X3 = Xδ − A3 · Dδ
X(m+1) = (X1 + X2 + X3) / 3    (3)
2.2.3 Attacking
The wolves attack and capture the prey, i.e. obtain the optimal solution. This process is realized by the decrease of a in Formula (2). When |A| < 1, the wolves close in on the prey, narrowing the search area and performing a local search; when |A| > 1, the wolves scatter away from the prey, expanding the search area and performing a global search.
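Putting the three stages together, the whole optimizer can be sketched as a short NumPy routine (an illustrative minimization sketch; the function name, signature and defaults are assumptions, not the paper's implementation):

```python
import numpy as np

def gwo(fitness, dim, lb, ub, n_wolves=30, max_iter=200, seed=0):
    """Minimize `fitness` over the box [lb, ub]^dim with the Grey Wolf Optimizer."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    alpha = beta = delta = np.zeros(dim)            # leader positions
    a_score = b_score = d_score = np.inf            # leader fitness values
    X = lb + rng.random((n_wolves, dim)) * (ub - lb)  # random initial pack

    for m in range(max_iter):
        # rank the pack: alpha is best, then beta, then delta
        for i in range(n_wolves):
            s = fitness(X[i])
            if s < a_score:
                a_score, b_score, d_score = s, a_score, b_score
                alpha, beta, delta = X[i].copy(), alpha, beta
            elif s < b_score:
                b_score, d_score = s, b_score
                beta, delta = X[i].copy(), beta
            elif s < d_score:
                d_score, delta = s, X[i].copy()
        a = 2 - 2 * m / max_iter                    # decreases linearly from 2 to 0
        # every wolf moves toward the average of three leader-guided candidates
        for i in range(n_wolves):
            cand = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - X[i])
                cand += leader - A * D
            X[i] = np.clip(cand / 3.0, lb, ub)      # keep wolves inside the bounds
    return alpha, a_score
```

For the LSSVM model in this paper, `fitness` would be a cross-validation error evaluated at candidate (C, σ) pairs, with `lb` and `ub` bounding the two parameters.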
clear; clc; close all;
format long
load WNDSPD                                  % wind speed data set

%% GWO-SVR training and test data
input_train  = [560.318,1710.53; 562.267,1595.17; 564.511,1479.78; … ];  % training inputs
input_test   = [558.317,1825.04; 675.909,1712.89; 793.979,1600.35; … ];  % test inputs
output_train = [235,175; 235,190; 235,205; … ];                          % training targets
output_test  = [235,160; 250,175; 265,190; … ];                          % test targets

X  = input_train;  Y  = output_train;
Xt = input_test;   Yt = output_test;

%% Use the grey wolf algorithm to select the best SVR parameters
SearchAgents_no = 60;                        % number of wolves
Max_iteration   = 500;                       % maximum number of iterations
dim = 2;                                     % two parameters to optimize: C and g
lb  = [0.1,0.1];                             % lower bounds
ub  = [100,100];                             % upper bounds

Alpha_pos   = zeros(1,dim);                  % initialize the alpha wolf position
Alpha_score = inf;                           % alpha objective value (use -inf for maximization)
Beta_pos    = zeros(1,dim);                  % initialize the beta wolf position
Beta_score  = inf;                           % beta objective value (use -inf for maximization)
Delta_pos   = zeros(1,dim);                  % initialize the delta wolf position
Delta_score = inf;                           % delta objective value (use -inf for maximization)

Positions = initialization(SearchAgents_no,dim,ub,lb);
Convergence_curve = zeros(1,Max_iteration);
l = 0;                                       % iteration counter

% [output_test_pre,acc,~] = svmpredict(output_test',input_test',model_gwo_svr);  % SVM prediction and accuracy
% test_pre = mapminmax('reverse',output_test_pre',rule2);
% test_pre = test_pre';

%% LSSVM model with the optimized parameters
gam   = [bestc bestc];                       % regularization parameter
sig2  = [bestg bestg];                       % RBF kernel width
model = initlssvm(X,Y,type,gam,sig2,kernel); % model initialization
model = trainlssvm(model);                   % training
Yp    = simlssvm(model,Xt);                  % regression prediction

plot(1:length(Yt),Yt,'r+:',1:length(Yp),Yp,'bo:')
title('+ real value, o predicted value')

% err_pre = wndspd(104:end) - test_pre;
% figure('Name','residuals on the test data')
% plot(err_pre,'*-');
% figure('Name','original vs. forecast')
% plot(test_pre,'*r-'); hold on; plot(wndspd(104:end),'bo-');
% legend('prediction','original')

MAE  = mymae(wndspd(104:end),test_pre)
MSE  = mymse(wndspd(104:end),test_pre)
MAPE = mymape(wndspd(104:end),test_pre)
toc                                          % display the run time