The principle of the BP neural network is illustrated below with a reference example that predicts highway passenger and freight volume:

clear; clc;
TestSamNum = 20;        % number of learning samples
ForcastSamNum = 2;      % number of forecast samples
HiddenUnitNum = 8;      % hidden layer size
InDim = 3;              % input layer size
OutDim = 2;             % output layer size
% Raw data
% Number of people
sqrs = [20.55 22.44 25.37 27.13 29.45 30.10 30.96 34.06 36.42 38.09 39.13 39.99 ...
        41.93 44.59 47.30 52.89 55.73 56.76 59.17 60.63];
sqjdcs = [0.6 0.75 0.85 0.9 1.05 1.35 1.45 1.6 1.7 1.85 2.15 2.2 2.25 2.35 2.5 2.6 ...
          2.7 2.85 2.95 3.1];
% Road area
sqglmj = [0.09 0.11 0.11 0.14 0.20 0.23 0.23 0.32 0.34 0.36 0.36 0.38 0.49 ...
          0.56 0.59 0.59 0.67 0.69 0.79];
% Highway passenger volume (unit: ten thousand people)
glkyl = [5126 6217 7730 9145 10460 11387 12353 15750 18304 19836 21024 19490 20433 ...
         22598 25107 33442 36836 40548 42927 43462];
% Highway freight volume (unit: ten thousand tons)
glhyl = [1237 1379 1385 1399 1663 1714 1834 4322 8132 8936 11099 11203 10524 11115 ...
         13320 16762 18673 20724 20803 21804];
p = [sqrs; sqjdcs; sqglmj];   % input data matrix
t = [glkyl; glhyl];           % target data matrix
[SamIn, minp, maxp, tn, mint, maxt] = premnmx(p, t);  % normalize the original samples (inputs and outputs)
SamOut = tn;                  % output samples
MaxEpochs = 50000;            % maximum number of training epochs
lr = 0.05;                    % learning rate
E0 = 1e-3;                    % target error
rng('default');
W1 = rand(HiddenUnitNum, InDim);   % initialize the weights between the input layer and the hidden layer
B1 = rand(HiddenUnitNum, 1);       % initialize the thresholds of the hidden layer
W2 = rand(OutDim, HiddenUnitNum);  % initialize the weights between the hidden layer and the output layer
B2 = rand(OutDim, 1);              % initialize the thresholds of the output layer
ErrHistory = zeros(MaxEpochs, 1);
for i = 1 : MaxEpochs
    HiddenOut = logsig(W1*SamIn + repmat(B1, 1, TestSamNum));  % hidden layer output
    NetworkOut = W2*HiddenOut + repmat(B2, 1, TestSamNum);     % output layer network output
    Error = SamOut - NetworkOut;   % difference between desired output and network output
    SSE = sumsqr(Error);           % energy function (sum of squared errors)
    ErrHistory(i) = SSE;
    if SSE < E0
        break;
    end
    % The six lines below are the core of the BP network: at every step the weights
    % (thresholds) are adjusted dynamically according to the principle of negative
    % gradient descent of the energy function.
    Delta2 = Error;
    Delta1 = W2' * Delta2 .* HiddenOut .* (1 - HiddenOut);
    dW2 = Delta2 * HiddenOut';
    dB2 = Delta2 * ones(TestSamNum, 1);
    dW1 = Delta1 * SamIn';
    dB1 = Delta1 * ones(TestSamNum, 1);
    % revise the weights and thresholds between the hidden layer and the output layer
    W2 = W2 + lr*dW2;
    B2 = B2 + lr*dB2;
    % revise the weights and thresholds between the input layer and the hidden layer
    W1 = W1 + lr*dW1;
    B1 = B1 + lr*dB1;
end
HiddenOut = logsig(W1*SamIn + repmat(B1, 1, TestSamNum));
NetworkOut = W2*HiddenOut + repmat(B2, 1, TestSamNum);
a = postmnmx(NetworkOut, mint, maxt);  % restore the network outputs to the original scale
x = 1990:2009;                         % time axis
newk = a(1, :);                        % network output: passenger volume
newh = a(2, :);                        % network output: freight volume
subplot(2, 1, 1);
plot(x, newk, 'r-o', x, glkyl, 'b--+');
legend('Network output passenger volume', 'Actual passenger volume');
xlabel('Year'); ylabel('Passenger volume / ten thousand people');
subplot(2, 1, 2);
plot(x, newh, 'r-o', x, glhyl, 'b--+');
legend('Network output freight volume', 'Actual freight volume');
xlabel('Year'); ylabel('Freight volume / ten thousand tons');
% use the trained network to forecast
pnew = [73.39 75.55; 3.9635 4.0975; 0.9880 1.0268];  % data for 2010 and 2011 (one column per year)
pnewn = tramnmx(pnew, minp, maxp);                   % normalize the new inputs with the same mapping
HiddenOut = logsig(W1*pnewn + repmat(B1, 1, ForcastSamNum));
anewn = W2*HiddenOut + repmat(B2, 1, ForcastSamNum);
anew = postmnmx(anewn, mint, maxt);
disp('Predicted values:');
disp(anew);
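Written out, the six "core" lines above perform one step of negative-gradient descent on the squared-error energy function, with logsig hidden units and a linear output layer. A sketch of the update, with learning rate \(\eta\) (i.e. lr; the constant factor from the 1/2 in the energy is absorbed into \(\eta\)):

\[ h = \mathrm{logsig}(W_1 x + b_1), \qquad y = W_2 h + b_2, \]
\[ \delta_2 = t - y, \qquad \delta_1 = (W_2^{\top}\delta_2)\odot h\odot(1-h), \]
\[ W_2 \leftarrow W_2 + \eta\,\delta_2 h^{\top}, \quad b_2 \leftarrow b_2 + \eta\,\delta_2\mathbf{1}, \quad W_1 \leftarrow W_1 + \eta\,\delta_1 x^{\top}, \quad b_1 \leftarrow b_1 + \eta\,\delta_1\mathbf{1}, \]

where multiplying by the all-ones vector \(\mathbf{1}\) sums the per-sample deltas over the 20 training samples; these are exactly the Delta2, Delta1, dW2, dB2, dW1 and dB1 lines in the code.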

   

Recently I have been looking into how to predict sales volumes. Searching online turns up many forecasting models, such as regression models, time series models and the GM(1,1) model, but when applied to the actual work data the prediction accuracy of these models was not high. Searching further, I found that neural network models can be used for prediction, often combined with time series methods or SVM (support vector machines) in hybrid models. In this article, working with real data, the commonly used BP neural network algorithm is chosen; its principle is explained in many places online, so it is not repeated here. Referring to the blog post "bp neural network traffic prediction Matlab source code" (the reference code quoted above) and using MATLAB 2016a, the code below and the final prediction are given.

clc

clear all

close all

% BP neural network prediction code

% load the input and output data

load C:\Users\amzon\Desktop\p.txt;

load C:\Users\amzon\Desktop\t.txt;

% save the data into the MATLAB working directory

save p.mat;

save t.mat; % Notice that t must be a row vector

% variable assignment: p is the input, t is the output

p=p;

t=t;

% data normalization using the mapminmax function; values are normalized to [-1,1]

% the function is used as follows: [y,ps] = mapminmax(x,ymin,ymax),

% where ymin and ymax give the target range, [-1,1] by default

% it returns the normalized values y and the mapping structure ps, which is used later to de-normalize the results

[p1,ps]=mapminmax(p);

[t1,ts]=mapminmax(t);
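For reference, a minimal sketch of the three mapminmax calling forms used in this post: normalizing, applying a stored mapping to new data, and reversing a mapping (x and xnew here are hypothetical matrices with samples in columns):

[xn, psx] = mapminmax(x);                  % normalize x to [-1,1]; psx stores the mapping
xnew_n    = mapminmax('apply', xnew, psx); % normalize new data with the SAME mapping
x_back    = mapminmax('reverse', xn, psx); % undo the mapping to recover the original values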

% split into training, validation and test data; typically 70% of the samples are randomly chosen for training,

% 15% for validation and 15% for testing. The usual call is:

%[trainInd,valInd,testInd] = dividerand(Q,trainRatio,valRatio,testRatio)

[trainInd, valInd, testInd] = dividerand(size(p,2), 0.7, 0.15, 0.15); % one random split, so inputs and targets stay paired

trainsample.p = p1(:,trainInd); valsample.p = p1(:,valInd); testsample.p = p1(:,testInd);

trainsample.t = t1(:,trainInd); valsample.t = t1(:,valInd); testsample.t = t1(:,testInd);

% build a BP (back-propagation) neural network with the newff function; its general usage is

%net = newff(minmax(p), [number of hidden neurons, number of output neurons], {hidden layer transfer function, output layer transfer function}, 'training function'), where p is the input data and t is the output data

%tf denotes a transfer function; by default 'tansig' is the hidden layer transfer function

% and the purelin function is the transfer function of the output layer

% other transfer functions are listed below; if the prediction is poor, you can adjust them

%TF1 = 'tansig'; TF2 = 'logsig';

%TF1 = 'logsig'; TF2 = 'purelin';

%TF1 = 'logsig'; TF2 = 'logsig';

%TF1 = 'purelin'; TF2 = 'purelin';

TF1='tansig'; TF2='purelin';

net = newff(minmax(p1), [10 size(t1,1)], {TF1 TF2}, 'traingdm'); % network creation: 10 hidden neurons, output layer sized to the targets
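As a side note, in newer toolbox versions (including 2016a) feedforwardnet is the recommended replacement for the obsolete newff. A minimal sketch, assuming the normalized data p1 and t1 from above:

net2 = feedforwardnet(10, 'traingdm'); % 10 hidden neurons, gradient descent with momentum
net2 = configure(net2, p1, t1);        % size the input and output layers from the data
net2 = train(net2, p1, t1);            % train the network
y2 = net2(p1);                         % network outputs for the training inputs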

% Network parameter Settings

net.trainParam.epochs=10000; % maximum number of training epochs

net.trainParam.goal=1e-7; % training goal (target error)

net.trainParam.lr=0.01; % learning rate; keep it small: a large value speeds up convergence at first but causes oscillation near the optimum and can prevent convergence

net.trainParam.mc=0.9; % momentum factor, default 0.9

net.trainParam.show=25; % show training progress every 25 iterations

% choose the training function

% net.trainFcn = 'traingd'; % gradient descent algorithm

% net.trainFcn = 'traingdm'; % gradient descent with momentum

% net.trainFcn = 'traingda'; % gradient descent with adaptive learning rate

% net.trainFcn = 'traingdx'; % gradient descent with momentum and adaptive learning rate

% (preferred algorithm for large networks)

% net.trainFcn = 'trainrp'; % RPROP (resilient BP) algorithm, smallest memory requirements

% conjugate gradient algorithms

% net.trainFcn = 'traincgf'; % Fletcher-Reeves update

% net.trainFcn = 'traincgp'; % Polak-Ribiere update, memory requirements slightly larger than Fletcher-Reeves

% net.trainFcn = 'traincgb'; % Powell-Beale restarts, memory requirements slightly larger than Polak-Ribiere

% (preferred algorithm for large networks)

%net.trainFcn = 'trainscg'; % scaled conjugate gradient, memory requirements like Fletcher-Reeves but much less computation than the three algorithms above

% net.trainFcn = 'trainbfg'; % BFGS quasi-Newton algorithm, more computation and memory than the conjugate gradient algorithms, but faster convergence

% net.trainFcn = 'trainoss'; % one-step secant algorithm, computation and memory requirements smaller than BFGS but slightly larger than the conjugate gradient algorithms

% (preferred algorithm for medium networks)

%net.trainFcn = 'trainlm'; % Levenberg-Marquardt algorithm, largest memory requirements, fastest convergence

% net.trainFcn = 'trainbr'; % Bayesian regularization algorithm

% five representative algorithms: 'traingdx', 'trainrp', 'trainscg', 'trainoss', 'trainlm'

% here 'trainlm' (Levenberg-Marquardt) is generally chosen for training

net.trainFcn='trainlm'; % note: changing trainFcn resets net.trainParam to the new function's defaults, so restore the settings

net.trainParam.epochs=10000; net.trainParam.goal=1e-7; net.trainParam.show=25;

[net,tr]=train(net,trainsample.p,trainsample.t);
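The training record tr returned by train can be used to reproduce the performance and training-state figures shown further below; plotperform and plottrainstate are standard toolbox plotting functions:

figure, plotperform(tr)     % mean squared error versus epoch for the training/validation/test sets
figure, plottrainstate(tr)  % gradient and validation checks versus epoch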

% run the simulation, generally with the sim function

[normtrainoutput,trainPerf]=sim(net,trainsample.p,[],[],trainsample.t); % network output on the training data

[normvalidateoutput,validatePerf]=sim(net,valsample.p,[],[],valsample.t); % network output on the validation data

[normtestoutput,testPerf]=sim(net,testsample.p,[],[],testsample.t); % network output on the test data

% reverse-normalize the results to obtain the fitted values

trainoutput=mapminmax('reverse',normtrainoutput,ts);

validateoutput=mapminmax('reverse',normvalidateoutput,ts);

testoutput=mapminmax('reverse',normtestoutput,ts);

% reverse-normalize the target data to recover the original values

trainvalue=mapminmax('reverse',trainsample.t,ts); % actual training targets

validatevalue=mapminmax('reverse',valsample.t,ts); % actual validation targets

testvalue=mapminmax('reverse',testsample.t,ts); % actual test targets

% make a prediction: pnew holds the new input data

pnew = [313 256 239]'; % new input data, one column per sample

pnewn=mapminmax('apply',pnew,ps); % normalize the new inputs with the same mapping ps

anewn=sim(net,pnewn);

anew=mapminmax('reverse',anewn,ts);

% error on the training data

errors=trainvalue-trainoutput;

% plotregression fitting figure

figure,plotregression(trainvalue,trainoutput)

% error figure

figure,plot(1:length(errors),errors,'-b')

title('Error variation chart')

% error value normality test

figure,hist(errors); % frequency histogram

figure,normplot(errors); % Q-Q plot

[muhat,sigmahat,muci,sigmaci]=normfit(errors); % parameter estimates: mean, standard deviation, and their 95% confidence intervals

[h1,sig,ci]= ttest(errors,muhat); % hypothesis test

figure, ploterrcorr(errors); % Draw autocorrelation graph of error

figure, parcorr(errors); % Draw partial correlation graph

After running, the result is as follows:

Result analysis figures of the BP neural network:

Gradient and mean squared error of the training data

Gradient and number of training epochs for the validation data

Normality test of the residuals (Q-Q plot)

 

 

Searching online, I also found that neural networks can be created through the Neural Network Toolbox GUI. The general steps are as follows:

1: Type nntool at the command line, or find Neural Net Fitting under the APPS tab and open it; you will see the following interface

 

 

 

2: Import the input data and output data (the MATLAB example dataset is used in this article)

3: Set the proportions for randomly splitting the samples into the three data sets; the defaults are usually fine

   




4: Set the number of hidden layer neurons



 



5: Select the training algorithm; the default is usually fine. Once selected, click the Train button to run



 





6: Evaluate the results: the smaller the MSE and the closer R is to 1, the better the training. The second figure shows the network parameter settings and the final results; the closer R in the fit plot is to 1, the better the model fits



 



 



 



 

The final result diagram

7: If the resulting model does not meet your requirements, repeat the above steps until you get the accuracy you want



8: Save the final data and its fitted values; opening them lets you inspect the fit. A script equivalent of this GUI workflow is sketched below.
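For anyone who prefers scripting, here is a minimal sketch of what the fitting app does (it mirrors the script the app can generate; x and y stand for your input and target matrices with one column per sample, and fitnet, the 70/15/15 split and plotregression are standard toolbox calls):

net = fitnet(10, 'trainlm');        % function-fitting network, 10 hidden neurons, Levenberg-Marquardt
net.divideParam.trainRatio = 0.70;  % 70% of the samples for training
net.divideParam.valRatio   = 0.15;  % 15% for validation
net.divideParam.testRatio  = 0.15;  % 15% for testing
[net, tr] = train(net, x, y);       % train; the training window reports MSE and R
ypred = net(x);                     % network outputs
perf  = perform(net, y, ypred);     % mean squared error
figure, plotregression(y, ypred)    % the closer R is to 1, the better the fit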



Finally, drawing on online references and the MATLAB help, here is a list of neural-network-related functions; I hope it is useful.

Graphical user interface functions: nnstart – neural network start GUI; nctool – neural network classification tool; nftool – neural network fitting tool; nntraintool – neural network training tool; nprtool – neural network pattern recognition tool; ntstool – neural network time series tool; nntool – graphical user interface for the Neural Network Toolbox; view – view a neural network.

Network creation functions: cascadeforwardnet – cascade-forward neural network; competlayer – competitive layer; distdelaynet – distributed delay neural network; elmannet – Elman neural network; feedforwardnet – feedforward neural network; fitnet – function-fitting neural network; layrecnet – layer-recurrent neural network; linearlayer – linear layer; lvqnet – learning vector quantization (LVQ) neural network; narnet – nonlinear autoregressive time series network; narxnet – nonlinear autoregressive time series network with external input; newgrnn – design a generalized regression neural network; newhop – design a Hopfield network; newlind – design a linear layer; newpnn – design a probabilistic neural network; newrb – design a radial basis network; newrbe – design an exact radial basis network; patternnet – pattern recognition neural network; perceptron – perceptron; selforgmap – self-organizing map; timedelaynet – time delay neural network.

Using networks: network – create a custom neural network; sim – simulate a neural network; init – initialize a neural network; adapt – let a neural network adapt; train – train a neural network; disp – display a neural network's properties; display – display a neural network's name and properties; adddelay – add a delay to a neural network's response; closeloop – convert a network's open feedback loops to closed loops; formwb – form bias and weight values into a single vector; getwb – get all network weights and biases as a single vector; noloop – remove open and closed feedback loops from a network; openloop – convert a network's closed feedback loops to open loops; removedelay – remove a delay from a neural network's response; separatewb – separate biases and weights from a weight/bias vector; setwb – set all network weights and biases with a single vector.

Simulink support: gensim – generate a Simulink block to simulate a neural network; setsiminit – set the initial conditions of a neural network Simulink block; getsiminit – get the initial conditions of a neural network Simulink block; neural – neural network Simulink block library.

Training functions: trainb – batch training with weight and bias learning rules; trainbfg – BFGS quasi-Newton backpropagation; trainbr – Bayesian regularization backpropagation; trainbu – unsupervised batch training with weight and bias learning rules; trainbuwb – batch training with unsupervised weight and bias learning rules; trainc – cyclical order incremental weight/bias training; traincgb – conjugate gradient backpropagation with Powell-Beale restarts; traincgf – conjugate gradient backpropagation with Fletcher-Reeves updates; traincgp – conjugate gradient backpropagation with Polak-Ribiere updates; traingd – gradient descent backpropagation; traingda – gradient descent with adaptive learning rate backpropagation; traingdm – gradient descent with momentum; traingdx – gradient descent with momentum and adaptive learning rate backpropagation; trainlm – Levenberg-Marquardt backpropagation; trainoss – one-step secant backpropagation; trainr – random order incremental weight/bias training; trainrp – RPROP backpropagation; trainru – unsupervised random order weight/bias training; trains – sequential order incremental weight/bias training; trainscg – scaled conjugate gradient backpropagation.

Plotting functions: plotconfusion – plot the classification confusion matrix; ploterrcorr – plot the autocorrelation of error time series; ploterrhist – plot an error histogram; plotfit – plot a function fit; plotinerrcorr – plot the input-to-error time series cross-correlation; plotperform – plot network performance; plotregression – plot a linear regression; plotresponse – plot a dynamic network's time series response; plotroc – plot the receiver operating characteristic; plotsomhits – plot self-organizing map sample hits; plotsomnc – plot self-organizing map neighbor connections; plotsomnd – plot self-organizing map neighbor distances; plotsomplanes – plot self-organizing map weight planes; plotsompos – plot self-organizing map weight positions; plotsomtop – plot self-organizing map topology; plottrainstate – plot training state values; plotwb – plot weight and bias values.

Lists of other functions used by neural networks: nnadapt – adaptation functions; nnderivative – derivative functions; nndistance – distance functions; nndivision – data division functions; nninitlayer – layer initialization functions; nninitnetwork – network initialization functions; nninitweight – weight initialization functions; nnlearn – learning functions; nnnetinput – net input functions; nnperformance – performance functions; nnprocess – processing functions; nnsearch – line search functions; nntopology – topology functions; nntransfer – transfer functions; nnweight – weight functions; nndemos – Neural Network Toolbox demonstrations; nndatasets – Neural Network Toolbox datasets; nntextdemos – Neural Network Design textbook demonstrations; nntextbook – Neural Network Design textbook information.