I. Theory
For complex nonlinear system problems, the initial weights of a BP network depend on the designer's experience and on repeated experiments over the sample space, which easily causes a series of problems such as slow convergence, network instability, and entrapment in local optima. Combining the BP neural network with a genetic algorithm can, in theory, map any nonlinear system and reach the global optimum, forming a more effective nonlinear inversion method. This article explains how the genetic algorithm optimizes the BP neural network, starting from the genetic algorithm and the BP neural network respectively. The goal is a quick, practical introduction to applying BP neural networks; after reading, you should be able to call the GA_BP algorithm to process data.
The main contents of this article are: the principle of the BP neural network; an introduction to the genetic algorithm; and an interpretation of the GA_BP demonstration code. The first step in understanding the BP neural network is to understand neural networks in general, because the genetic algorithm is added as an improvement to the neural network; besides the genetic algorithm, wavelet algorithms and ant colony algorithms can also be used for this purpose. The genetic algorithm supplies the initial weights and thresholds of the optimal individual. Let's start with neural network algorithms.
Artificial neural networks can be classified in the following two ways.
1.1.1 Classification by learning strategy: supervised learning networks (the main type); unsupervised learning networks; hybrid learning networks; associative learning networks; optimization application networks.
1.1.2 Classification by connection scheme: feed-forward networks; recurrent networks; reinforcement networks.
1.2 Learning algorithms: (1) Hebb learning rule; (2) Delta learning rule; (3) gradient descent learning rule; (4) Kohonen learning rule (SOM); (5) error back-propagation learning rule (BP); (6) probabilistic learning rule (GA); (7) competitive learning rules (SOM, ART, CPN).
1.3 BP neural network

The BP neural network training algorithm: (1) initialize the network settings; (2) propagate the input forward; (3) propagate the error backward; (4) adjust the network weights and neuron biases; (5) test the stopping condition.

In detail, what is the definition of a BP neural network? Consider this sentence: a multilayer feed-forward network trained by an error back-propagation algorithm. The idea of BP is to use the error at the output to estimate the error of the layer preceding the output layer, then use that layer's error to estimate the error of the layer before it, and so on, obtaining error estimates for all layers. The error estimate here can be understood as a partial derivative; according to it we adjust the connection weights of each layer, then recompute the output error with the adjusted weights, until the output error reaches the desired value or the number of iterations exceeds the set maximum.

The word "error" is used a lot; does this algorithm really revolve around error? Yes: what BP propagates is the "error", and the purpose of the propagation is to obtain the estimated error of every layer. Its learning rule is: use the steepest descent method and, through back-propagation (i.e. layer by layer, from the output back toward the input), continually adjust the weights and thresholds of the network until the global error is minimized. Its learning essence is the dynamic adjustment of each connection weight.

A BP network is composed of an input layer, a hidden layer, and an output layer, and the hidden layer can have one or more layers. The advantage of a BP network is that it can learn and store a large number of input-output relations without the mathematical relation being specified in advance. So how does it learn? BP uses a differentiable activation function to describe the relation between a layer's input and output, and the S-type (sigmoid) function is often used as the activation function. Consider a three-layer M × K × N BP network with an S-type transfer function. By back-propagating the error function E = (1/2) Σ (Ti − Oi)², where Ti is the expected output and Oi is the computed output of the network, the network weights and thresholds are continually adjusted to minimize E. A common rule of thumb for the hidden-layer size is k = sqrt(n + m) + A, where n is the number of neurons in the input layer, m is the number of neurons in the output layer, and A is a constant in [1, 10] (a small MATLAB sketch of these pieces follows).

The supervised BP learning algorithm is then:
1. Forward propagation to obtain the output-layer error E: input layer receives the samples => each hidden layer => output layer.
2. Decide whether to back-propagate: if the output-layer error does not meet expectations, go to back-propagation.
3. Error back-propagation: distribute the error over each layer => modify the weights of each layer's units, until the error is reduced to an acceptable level.

The algorithm is simple to explain; the mathematics shows the real face of BP. Suppose the network has an input layer with N neurons, a hidden layer with P neurons, and an output layer with Q neurons.
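As a minimal MATLAB sketch of these pieces (the variable names and example sizes are my own for illustration, not from the source program below):

% S-type (sigmoid) activation used between layers
f = @(x) 1 ./ (1 + exp(-x));

% Squared-error function: T is the expected output, O the network output
E = @(T, O) 0.5 * sum((T - O).^2);

% Rule-of-thumb hidden-layer size for n inputs and m outputs, A in [1, 10]
n = 3; m = 1; A = 5;            % example values (assumed)
k = round(sqrt(n + m) + A);     % k = sqrt(n+m) + A hidden neurons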
With these variables defined, the calculation proceeds as follows:
1. Initialize the weights and thresholds with random numbers in (-1, 1); set the precision ε and the maximum number of iterations.
2. Randomly select the k-th input sample and its expected output, and repeat the following steps until the error meets the requirement:
3. Compute the input and output of each neuron in the hidden layer.
4. Compute the partial derivative of the error function E with respect to each output-layer neuron, using the expected output, the actual output, and the output layer's input.
5. Compute the partial derivative of the error function with respect to each hidden-layer neuron, using the sensitivity of the following layer (here, the output layer) δo(k) and that layer's connection weights W.
6. Use the partial derivatives from step 4 to correct the connection weights of the output layer.
7. Use the partial derivatives from step 5 to correct the connection weights of the hidden layer.
8. Compute the global error over all samples: E = (1/(2m)) Σ_k Σ_o (T_o(k) − O_o(k))², for m samples and Q output units.
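Condensed into code, one iteration of these steps might look like the following MATLAB sketch (a toy illustration with assumed layer sizes and learning rate lr, not the source program below):

% One BP iteration for an N-P-Q network (illustrative sketch)
N = 3; P = 5; Q = 1; lr = 0.1;
W1 = rand(P, N) * 2 - 1;  b1 = rand(P, 1) * 2 - 1;  % init in (-1, 1)
W2 = rand(Q, P) * 2 - 1;  b2 = rand(Q, 1) * 2 - 1;
x = rand(N, 1);  t = rand(Q, 1);   % one sample input and expected output

sig = @(z) 1 ./ (1 + exp(-z));

% steps (3): forward propagation through hidden and output layers
h = sig(W1 * x + b1);
o = sig(W2 * h + b2);

% steps (4)-(5): sensitivities (partial derivatives of E)
deltaO = (o - t) .* o .* (1 - o);          % output-layer sensitivity
deltaH = (W2' * deltaO) .* h .* (1 - h);   % hidden-layer sensitivity

% steps (6)-(7): weight and threshold corrections by steepest descent
W2 = W2 - lr * deltaO * h';   b2 = b2 - lr * deltaO;
W1 = W1 - lr * deltaH * x';   b1 = b1 - lr * deltaH;

% step (8): error for this sample
E = 0.5 * sum((t - o).^2);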
1.1 Purpose of the genetic algorithm: a neural network has many parameters, and you do not know in advance which parameter values train with the highest efficiency and recognition rate. The genetic algorithm can then be used to optimize the network's parameters with the recognition rate as the objective function. The essential difference between the genetic neural network algorithm and the plain neural network algorithm lies in the learning method, i.e. the model optimization method: the former learns the network weights with a genetic algorithm, while the latter mostly learns the weights with the back-propagation (BP) algorithm, and the two are very different. It can be understood as follows. 1) Genetic algorithm: the genetic algorithm is an evolutionary algorithm that simulates the process of biological evolution in nature. Individuals evolve, and only high-quality individuals (those with the smallest, or largest, objective function value, depending on the problem) enter the next generation to reproduce, generation after generation. The genetic algorithm (GA) can solve highly nonlinear optimization problems that conventional optimization algorithms handle poorly, and it is widely used across many fields. Differential evolution, ant colony optimization, particle swarm optimization, and so on are all evolutionary algorithms; they differ in the biological population they simulate.
2) The genetic algorithm in more detail: the genetic algorithm (GA) is a parallel stochastic search optimization method that simulates the natural genetic mechanism and biological evolution (a search algorithm with a "survive + test" iterative process). Based on the natural-evolution principle of "selection, survival of the fittest", it encodes the optimization parameters into series; according to the chosen fitness function and the genetic operations of selection, crossover, and mutation, individuals are screened so that individuals with good fitness values are retained and individuals with poor fitness are weeded out; the new population both inherits the information of the previous generation and surpasses it. This repeats until the stopping condition is satisfied. Each individual in a population is a solution to the problem, called a "chromosome"; a chromosome is a string of symbols, such as a binary string. Fitness (the fitness function) measures the quality of a chromosome.

The basic operations of the genetic algorithm are:
Selection: select individuals from the old population into the new population with a certain probability. The probability of being selected is related to the individual's fitness value; the better the fitness, the greater the chance of selection (a roulette-wheel sketch follows below).
Crossover: an information-exchange idea; two individuals are selected, and parts of their chromosomes are swapped and recombined to produce a new, excellent individual.
Mutation: a chromosome position mutates with a low probability (usually between 0.001 and 0.01).

The genetic algorithm has the characteristics of efficient heuristic search and parallel computation, and it is applied to function optimization, combinatorial optimization, and production scheduling. The basic elements of the algorithm are: 1. the chromosome coding method; 2. the fitness function; 3. the genetic operations (selection, crossover, mutation); 4. the running parameters (population size M, number of generations G, crossover probability Pc, mutation probability Pm).
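As promised above, here is a minimal sketch of roulette-wheel selection in MATLAB (the fitness values and variable names are assumed for illustration):

% Roulette-wheel selection: probability proportional to fitness
fitness = [1.2 0.4 3.1 0.9 2.4];   % assumed fitness values (larger = better)
M = numel(fitness);                % population size
p = fitness / sum(fitness);        % selection probability p_i for individual i
cumP = cumsum(p);                  % cumulative distribution ("the wheel")

newIdx = zeros(1, M);
for i = 1:M
    r = rand;                                  % spin the wheel
    newIdx(i) = find(cumP >= r, 1, 'first');   % pick the individual hit
end
% newIdx now lists the individuals copied into the new population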
The elements of GA_BP are as follows. 1. Population initialization: the individual coding method is real-number coding; each individual is a real-number string composed of four parts: the input-to-hidden connection weights, the hidden-layer thresholds, the hidden-to-output connection weights, and the output-layer thresholds. An individual thus contains all the weights and thresholds of the neural network, so with a fixed network structure, each individual defines a neural network with definite structure, weights, and thresholds. 2. Fitness function: the initial weights and thresholds of the BP network are taken from the individual; after training the BP network on the training data, the prediction error of the system output serves as the fitness. 3. Selection: genetic algorithms offer roulette-wheel selection, tournament selection, and other methods. With roulette-wheel selection, i.e. a selection strategy proportional to fitness, each individual i is assigned a selection probability pi proportional to its fitness. 4. Crossover: because individuals use real-number coding, the crossover operation uses the real-number crossover method. 5. Mutation: the j-th gene aij of the i-th individual is selected for the mutation operation (a sketch of the encoding, crossover, and mutation follows below).

1.2.1 Application of the genetic algorithm in neural networks: when neural network design uses the genetic algorithm, its application is mainly reflected in three aspects: network learning, network structure design, and network analysis.
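Below is a hedged sketch of the real-number encoding, crossover, and mutation just described (the sizes, probabilities, and names are assumptions for illustration, not the article's source):

% Individual length for an n-p-m network: input->hidden weights, hidden
% thresholds, hidden->output weights, output thresholds
n = 3; p = 5; m = 1;
len = n*p + p + p*m + m;
pop = rand(20, len) * 2 - 1;       % 20 real-coded individuals in (-1, 1)

% Real-number crossover of gene j between individuals k and l:
% a_kj = a_kj*(1-b) + a_lj*b, with b in (0, 1)
k = 1; l = 2; j = randi(len); b = rand;
tmp       = pop(k, j);
pop(k, j) = pop(k, j) * (1 - b) + pop(l, j) * b;
pop(l, j) = pop(l, j) * (1 - b) + tmp * b;

% Mutation: with low probability Pm, re-draw gene j of individual i
Pm = 0.01; i = randi(20); j = randi(len);
if rand < Pm
    pop(i, j) = rand * 2 - 1;      % new gene value in (-1, 1)
end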
1.2.1.1 Application of the genetic algorithm in network learning. In a neural network, the genetic algorithm can be used for network learning, where it works in two ways. Optimization of learning rules: the genetic algorithm automatically optimizes the neural network's learning rules so as to improve the learning rate. Optimization of weights: the global optimization and implicit parallelism of the genetic algorithm improve the speed of network weight optimization.

1.2.1.2 Application of the genetic algorithm in network design. To design an excellent neural network structure, the first task is to solve the coding problem of the network structure; then the optimal structure can be obtained by selection, crossover, and mutation. There are three main coding methods. Direct coding: the neural network structure is represented directly as a binary string; in the genetic algorithm, the "chromosome" is essentially a mapping of the neural network, and optimizing the "chromosome" optimizes the network. Parameterized coding: an abstract coding that includes the number of network layers, the number of neurons in each layer, the interconnection pattern of the layers, and other information; the evolved, optimized "chromosomes" are then analyzed to produce the network structure. Reproduction-and-growth coding: this method does not encode the neural network structure directly in the "chromosome"; instead, it encodes simple growth grammar rules into the "chromosome", then modifies these rules via the genetic algorithm and generates a neural network suited to the problem. This method is consistent with biological growth and evolution in nature.

1.2.1.3 Application of the genetic algorithm in network analysis. The genetic algorithm can be used to analyze neural networks. Because of its distributed storage, it is difficult to understand a neural network's function directly from its topology; the genetic algorithm can analyze the network's function, properties, and state.

Although the genetic algorithm applies to many fields and shows great potential and broad prospects, it still has many problems to be studied and various current deficiencies. First, the convergence rate drops when there are many variables with a large value range, or with no given range. Second, it can find the neighborhood of the optimal solution, but it cannot pinpoint the location of the optimum precisely. Finally, there is no quantitative method for choosing the genetic algorithm's parameters. The basic mathematical theory of the genetic algorithm needs further study; its advantages and disadvantages relative to other optimization techniques, and the reasons for them, need theoretical proof; hardware-based genetic algorithms should also be studied, as should general-purpose programming frameworks and forms for genetic algorithms.

Summary: both the genetic algorithm and the neural network are important algorithms in the field of computational intelligence. The genetic algorithm borrows the genetic mechanism of natural selection and survival of the fittest to find the optimal solution of a problem. The neural network is a mathematical or computational model that simulates the structure and function of biological neural networks, in which large numbers of cells form a network.
By adjusting the structure of the neural network with the genetic algorithm, the network can acquire a dynamic structure and become more intelligent. Using the genetic algorithm to adjust the weights achieves faster convergence and, to a greater extent, avoids getting trapped in local optima. In short, applying the genetic algorithm to the neural network improves the neural network's performance.
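Putting the pieces together, the overall GA_BP flow can be outlined as follows. This is a sketch assuming the Neural Network Toolbox (feedforwardnet, configure, and sim are toolbox functions), while pop, G, M, n, p, m, Pin, and T are assumed to be defined as in the sketches above; it is not the source program below:

% GA_BP outline: the GA evolves initial weights/thresholds of the BP net;
% BP training then starts from the best individual found
net = feedforwardnet(p);          % p hidden neurons
net = configure(net, Pin, T);     % Pin: training inputs, T: targets

err = zeros(1, M);
for gen = 1:G                     % G generations, M individuals each
    for i = 1:M
        w = pop(i, :);            % decode one real-coded individual
        net.IW{1,1} = reshape(w(1 : n*p), p, n);
        net.b{1}    = w(n*p+1 : n*p+p)';
        net.LW{2,1} = reshape(w(n*p+p+1 : n*p+p+p*m), m, p);
        net.b{2}    = w(end-m+1 : end)';
        err(i) = sum(sum(abs(sim(net, Pin) - T)));  % fitness: prediction error
    end
    % selection, crossover, and mutation update pop here (see sketches above)
end
[~, best] = min(err);             % the best individual seeds final BP training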
II. Source code
function varargout = interface(varargin)
% INTERFACE M-file for interface.fig
% INTERFACE, by itself, creates a new INTERFACE or raises the existing
% singleton*.
%
% H = INTERFACE returns the handle to a new INTERFACE or the handle to
% the existing singleton*.
%
% INTERFACE('CALLBACK',hObject,eventData,handles,...) calls the local
% function named CALLBACK in INTERFACE.M with the given input arguments.
%
% INTERFACE('Property','Value',...) creates a new INTERFACE or raises the
% existing singleton*. Starting from the left, property value pairs are
% applied to the GUI before interface_OpeningFunction gets called. An
% unrecognized property name or invalid value makes property application
% stop. All inputs are passed to interface_OpeningFcn via varargin.
%
% *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
% instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES
% Copyright 2002-2003 The MathWorks, Inc.
% Edit the above text to modify the response to help interface
% Last Modified by GUIDE v2.5 13-Mar-2006 11:25:28

% Begin initialization code - DO NOT EDIT
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% define the global variables
global OriginImg ;   % the original image
global SegmentImg ;  % the segmented image
global threshold ;   % the fixed threshold used to segment the image
global nnet ;        % the neural network
global P ;           % the original samples (network inputs)
global T ;           % the target samples
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
gui_Singleton = 1;
gui_State = struct('gui_Name', mfilename, ...
'gui_Singleton', gui_Singleton, ...
'gui_OpeningFcn', @interface_OpeningFcn, ...
'gui_OutputFcn', @interface_OutputFcn, ...
'gui_LayoutFcn', [], ...
'gui_Callback', []);
if nargin && ischar(varargin{1})
gui_State.gui_Callback = str2func(varargin{1});
end
if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
%initialize variables
threshold = 80 ;
% End initialization code - DO NOT EDIT
% --- Executes just before interface is made visible.
function interface_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% varargin command line arguments to interface (see VARARGIN)
% Choose default command line output for interface
handles.output = hObject;
% Update handles structure
guidata(hObject, handles);
global OriginImg;
global SegmentImg ;
OriginImg = imread('image\a.bmp');
SegmentImg = OriginImg ;
set(gcf,'CurrentAxes',handles.h1);
imshow(OriginImg);
set(gcf,'CurrentAxes',handles.h2);
imshow(SegmentImg);
% UIWAIT makes interface wait for user response (see UIRESUME)
% uiwait(handles.figure1);
% --- Outputs from this function are returned to the command line.
function varargout = interface_OutputFcn(hObject, eventdata, handles)
% varargout cell array for returning output args (see VARARGOUT);
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% Get default command line output from handles structure
varargout{1} = handles.output;
function edit1_Callback(hObject, eventdata, handles)
% hObject handle to edit1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% Hints: get(hObject,'String') returns contents of edit1 as text
% str2double(get(hObject,'String')) returns contents of edit1 as a double
% --- Executes during object creation, after setting all properties.
function edit1_CreateFcn(hObject, eventdata, handles)
% hObject handle to edit1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called
% Hint: edit controls usually have a white background on Windows.
% See ISPC and COMPUTER.
if ispc
set(hObject,'BackgroundColor','white');
else
set(hObject,'BackgroundColor',get(0,'defaultUicontrolBackgroundColor'));
end
function edit2_Callback(hObject, eventdata, handles)
% hObject handle to edit2 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% Hints: get(hObject,'String') returns contents of edit2 as text
% str2double(get(hObject,'String')) returns contents of edit2 as a double
% --- Executes during object creation, after setting all properties.
function edit2_CreateFcn(hObject, eventdata, handles)
% hObject handle to edit2 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called
% Hint: edit controls usually have a white background on Windows.
% See ISPC and COMPUTER.
if ispc
set(hObject,'BackgroundColor','white');
else
set(hObject,'BackgroundColor',get(0,'defaultUicontrolBackgroundColor'));
end
function edit3_Callback(hObject, eventdata, handles)
% hObject handle to edit3 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% Hints: get(hObject,'String') returns contents of edit3 as text
% str2double(get(hObject,'String')) returns contents of edit3 as a double
% --- Executes during object creation, after setting all properties.
function edit3_CreateFcn(hObject, eventdata, handles)
% hObject handle to edit3 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called
% Hint: edit controls usually have a white background on Windows.
% See ISPC and COMPUTER.
if ispc
set(hObject,'BackgroundColor','white');
else
set(hObject,'BackgroundColor',get(0,'defaultUicontrolBackgroundColor'));
end
function edit4_Callback(hObject, eventdata, handles)
% hObject handle to edit4 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% Hints: get(hObject,'String') returns contents of edit4 as text
% str2double(get(hObject,'String')) returns contents of edit4 as a double
% --- Executes during object creation, after setting all properties.
function edit4_CreateFcn(hObject, eventdata, handles)
% hObject handle to edit4 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles empty - handles not created until after all CreateFcns called
% Hint: edit controls usually have a white background on Windows.
% See ISPC and COMPUTER.
if ispc
set(hObject,'BackgroundColor','white');
else
set(hObject,'BackgroundColor',get(0,'defaultUicontrolBackgroundColor'));
end
III. Operation results
IV. Remarks
MATLAB version: 2014a