I. Overview of the BP neural network

The BP (Back Propagation) network, proposed in 1986 by a group of scientists led by Rumelhart and McClelland, is a multi-layer feedforward network trained with the error back-propagation algorithm, and it remains one of the most widely used neural network models. A BP network can learn and store a large number of input-output mappings without requiring the mathematical equations describing those mappings to be specified in advance. For a long time in the history of artificial neural networks there was no effective algorithm for adjusting the connection weights of hidden layers; only when the error back-propagation algorithm (the BP algorithm) was proposed was the weight-adjustment problem of multi-layer feedforward networks for approximating nonlinear continuous functions successfully solved.

A BP neural network learns by the error back-propagation algorithm, which consists of two phases: forward propagation of information and backward propagation of error. Each neuron in the input layer receives input information from the outside world and passes it to the neurons in the middle (hidden) layer. The middle layer is the internal information-processing layer responsible for transforming the information; depending on the required processing capacity, it can be designed with a single hidden layer or with multiple hidden layers. The last hidden layer passes the information to the output-layer neurons, which process it further and output the result, completing one forward-propagation pass. When the actual output does not match the expected output, the error back-propagation phase begins: the error is propagated back through the hidden layers to the input layer, layer by layer, and the weights of each layer are adjusted by gradient descent on the error. This alternation of forward information propagation and backward error propagation, during which the weights of every layer are repeatedly adjusted, is the learning and training process of the neural network. It continues until the error produced by the network falls to an acceptable level or a preset number of training iterations is reached.
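The two phases can be illustrated with a minimal MATLAB sketch of a single training step for a network with one sigmoid hidden layer and a linear output layer. This is a sketch only; the variable names and sizes (x, t, W1, b1, W2, b2, eta) are chosen for illustration and mirror, but do not reproduce, the full program in Section II.

% ---- one sample and randomly initialized weights (illustrative sizes) ----
x  = rand(24,1);                        % one input sample (24 features)
t  = [1; 0; 0; 0];                      % expected (target) output
W1 = rands(25,24);  b1 = rands(25,1);   % input -> hidden weights and biases
W2 = rands(4,25);   b2 = rands(4,1);    % hidden -> output weights and biases
eta = 0.1;                              % learning rate

% ---- forward propagation of information ----
h = 1./(1+exp(-(W1*x + b1)));           % hidden-layer output (sigmoid activation)
y = W2*h + b2;                          % output-layer output (linear)

% ---- backward propagation of error ----
e      = t - y;                         % output error
deltaH = (W2'*e) .* h .* (1-h);         % error propagated back through the sigmoid
W2 = W2 + eta*e*h';       b2 = b2 + eta*e;        % gradient-descent weight updates
W1 = W1 + eta*deltaH*x';  b1 = b1 + eta*deltaH;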

A BP neural network model consists of four parts: an input-output model, an activation (transfer) function model, an error-calculation model, and a self-learning (weight-adjustment) model.
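One common set of choices for these four sub-models can be sketched compactly in MATLAB; the symbol names are illustrative, and the program in Section II uses the same sigmoid activation together with a learning rate xite and a momentum coefficient alfa.

f    = @(x) 1./(1+exp(-x));                 % activation function model: sigmoid
node = @(W,x,b) f(W*x + b);                 % input-output model: node output O = f(W*x + b)
errE = @(t,y) 0.5*sum((t - y).^2);          % error-calculation model: E = 1/2*sum((t - y).^2)
% self-learning model: weight change from learning rate eta, gradient, and momentum on the previous change
dW   = @(eta,grad,alfa,dW_prev) eta*grad + alfa*dW_prev;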





Figure: BP neural network model and its basic principle



Figure 3: BP_PID algorithm flow

II. Source code

%% Clear workspace
clc
clear

%% Training/prediction data extraction and normalization
% Load the four classes of speech signals
load data1 c1
load data2 c2
load data3 c3
load data4 c4

% Merge the four feature-signal matrices into one matrix
data(1:500,:)=c1(1:500,:);
data(501:1000,:)=c2(1:500,:);
data(1001:1500,:)=c3(1:500,:);
data(1501:2000,:)=c4(1:500,:);

% Random permutation of the 2000 samples
k=rand(1,2000);
[m,n]=sort(k);

% Input features (columns 2:25) and class labels (column 1)
input=data(:,2:25);
output1=data(:,1);

% Convert the 1-dimensional labels into 4-dimensional one-hot outputs
output=zeros(2000,4);
for i=1:2000
    switch output1(i)
        case 1
            output(i,:)=[1 0 0 0];
        case 2
            output(i,:)=[0 1 0 0];
        case 3
            output(i,:)=[0 0 1 0];
        case 4
            output(i,:)=[0 0 0 1];
    end
end

% 1500 randomly chosen training samples, 500 test samples
input_train=input(n(1:1500),:)';
output_train=output(n(1:1500),:)';
input_test=input(n(1501:2000),:)';
output_test=output(n(1501:2000),:)';

% Input data normalization
[inputn,inputps]=mapminmax(input_train);

%% Network structure initialization
innum=24;       % number of input nodes
midnum=25;      % number of hidden nodes
outnum=4;       % number of output nodes

% Weight and bias initialization
w1=rands(midnum,innum);
b1=rands(midnum,1);
w2=rands(midnum,outnum);
b2=rands(outnum,1);

w2_1=w2; w2_2=w2_1;
w1_1=w1; w1_2=w1_1;
b1_1=b1; b1_2=b1_1;
b2_1=b2; b2_2=b2_1;

xite=0.1;        % learning rate
alfa=0.01;       % momentum coefficient
loopNumber=10;   % number of training epochs
I=zeros(1,midnum);
Iout=zeros(1,midnum);
FI=zeros(1,midnum);
dw1=zeros(innum,midnum);
db1=zeros(1,midnum);

%% Network training
E=zeros(1,loopNumber);
for ii=1:loopNumber
    E(ii)=0;
    for i=1:1:1500
        %% Network forecast output (forward propagation)
        x=inputn(:,i);
        % hidden layer output
        for j=1:1:midnum
            I(j)=inputn(:,i)'*w1(j,:)'+b1(j);
            Iout(j)=1/(1+exp(-I(j)));
        end
        % output layer output
        yn=w2'*Iout'+b2;

        %% Weight and bias correction (error back propagation)
        % compute the error
        e=output_train(:,i)-yn;
        E(ii)=E(ii)+sum(abs(e));

        % calculate the rate of weight change
        dw2=e*Iout;
        db2=e';

        for j=1:1:midnum
            S=1/(1+exp(-I(j)));
            FI(j)=S*(1-S);
        end
        for k=1:1:innum
            for j=1:1:midnum
                dw1(k,j)=FI(j)*x(k)*(e(1)*w2(j,1)+e(2)*w2(j,2)+e(3)*w2(j,3)+e(4)*w2(j,4));
                db1(j)=FI(j)*(e(1)*w2(j,1)+e(2)*w2(j,2)+e(3)*w2(j,3)+e(4)*w2(j,4));
            end
        end

        % gradient-descent update with momentum
        w1=w1_1+xite*dw1'+alfa*(w1_1-w1_2);
        b1=b1_1+xite*db1'+alfa*(b1_1-b1_2);
        w2=w2_1+xite*dw2'+alfa*(w2_1-w2_2);
        b2=b2_1+xite*db2'+alfa*(b2_1-b2_2);

        w1_2=w1_1; w1_1=w1;
        w2_2=w2_1; w2_1=w2;
        b1_2=b1_1; b1_1=b1;
        b2_2=b2_1; b2_1=b2;
    end
end

%% Speech feature signal classification on the test set
inputn_test=mapminmax('apply',input_test,inputps);
fore=zeros(4,500);
for i=1:500
    % hidden layer output
    for j=1:1:midnum
        I(j)=inputn_test(:,i)'*w1(j,:)'+b1(j);
        Iout(j)=1/(1+exp(-I(j)));
    end
    fore(:,i)=w2'*Iout'+b2;
end

%% Result analysis
% determine the predicted class from the network output
output_fore=zeros(1,500);
for i=1:500
    output_fore(i)=find(fore(:,i)==max(fore(:,i)));
end

% BP network prediction error
error=output_fore-output1(n(1501:2000))';

% plot predicted vs. actual speech categories
figure(1)
plot(output_fore,'r')
hold on
plot(output1(n(1501:2000))','b')
legend('predicted speech category','actual speech category')

% plot the classification error
figure(2)
plot(error)
title('BP network classification error','fontsize',12)
xlabel('speech signal','fontsize',12)
ylabel('classification error','fontsize',12)
%print -dtiff -r600 1-4

% count the misclassified samples per class
k=zeros(1,4);
for i=1:500
    if error(i)~=0
        [b,c]=max(output_test(:,i));
        switch c
            case 1
                k(1)=k(1)+1;
            case 2
                k(2)=k(2)+1;
            case 3
                k(3)=k(3)+1;
            case 4
                k(4)=k(4)+1;
        end
    end
end
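Note that the script assumes four MAT-files (data1.mat through data4.mat) are on the MATLAB path, each containing a matrix (c1 through c4) with at least 500 rows and 25 columns, where column 1 holds the class label (1-4) and columns 2:25 hold the 24 speech features; mapminmax and rands require the Neural Network Toolbox. A quick sanity check of this assumed data layout (a hypothetical helper, not part of the original script):

% hypothetical check of the expected data layout for one class file
load data1 c1
assert(size(c1,1)>=500 && size(c1,2)==25, 'expected at least 500 rows and 25 columns');
assert(all(ismember(c1(:,1),1:4)), 'column 1 should hold the class label (1-4)');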

III. Operation results