1. Introduction

Pattern recognition studies the automatic processing and interpretation of patterns by solving mathematical models on a computer. Among pattern recognition methods, template matching is the simplest, and its mathematical model is easy to establish. Performing pattern recognition on digital images through template matching helps us understand how mathematical models are applied to digital images.

2 Template matching algorithm

2.1 Similarity measure matching

The basic idea of template matching is simple: take the known template and compare it with a region of the same size in the original image. At the start, the top-left corner of the template is aligned with the top-left corner of the image. The template is compared with the region of the same size that it covers; it is then moved to the next pixel, where the same comparison is repeated, and so on. After every position has been visited, the position with the smallest difference is the object we are looking for.
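The exhaustive procedure above can be sketched as follows (a Python/NumPy sketch for illustration, an assumption on my part since the article's own code is MATLAB; the function name is made up):

```python
import numpy as np

def match_template_bruteforce(S, T):
    """Slide template T one pixel at a time over search image S and
    return the (i, j) top-left corner with the smallest total difference."""
    n1, n2 = S.shape
    m1, m2 = T.shape
    best, best_pos = np.inf, (0, 0)
    for i in range(n1 - m1 + 1):
        for j in range(n2 - m2 + 1):
            sub = S[i:i + m1, j:j + m2]      # subgraph covered by the template
            diff = np.sum((sub - T) ** 2)    # squared-difference measure
            if diff < best:
                best, best_pos = diff, (i, j)
    return best_pos
```

At the true match position the difference is zero, so that position wins the comparison.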

What is described above is the similarity measure approach to matching; its operation on a computer is illustrated in Figure 2. Suppose template T is superimposed on search image S and translated across it. The part of S covered by the template is called the subgraph S_{i,j}, where (i, j) are the coordinates of the subgraph's upper-left pixel in S, called the reference point. As can be seen from Figure 2, if S is N×N and T is M×M, the range of (i, j) is 1 ≤ i, j ≤ N − M + 1. We can now compare the content of T with S_{i,j}; if they are identical, the difference between them is zero. The similarity of T and S_{i,j} can therefore be measured by formulas (1) and (2) below:

D(i, j) = ΣΣ [S_{i,j}(m, n) − T(m, n)]²   (1)

D(i, j) = ΣΣ S_{i,j}(m, n)² − 2 ΣΣ S_{i,j}(m, n) T(m, n) + ΣΣ T(m, n)²   (2)

In equation (2), the third term is the total energy of the template, a constant independent of (i, j). The first term is the energy of the subgraph covered by the template, which changes slowly with (i, j). The second term, the cross-correlation between the subgraph and the template, changes with (i, j) and reaches its maximum when T matches S_{i,j}. Therefore the following correlation function (3) can be used as the similarity measure:

R(i, j) = ΣΣ S_{i,j}(m, n) T(m, n) / [ √(ΣΣ S_{i,j}(m, n)²) · √(ΣΣ T(m, n)²) ]   (3)

When the angle between the vectors formed by T and S_{i,j} is 0, that is, when S_{i,j}(m, n) = k·T(m, n) for some constant k, R(i, j) = 1; otherwise R(i, j) < 1. Clearly, the larger R(i, j) is, the more similar the template T and the subgraph S_{i,j} are, and the point (i, j) that maximizes R is the matching point we are looking for.
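The normalized correlation measure can be sketched like this (again a Python/NumPy sketch, not the article's MATLAB code; by the Cauchy-Schwarz inequality R equals 1 exactly where the subgraph is a scaled copy of the template):

```python
import numpy as np

def ncc(sub, T):
    """Normalized correlation R between a subgraph and template T."""
    num = np.sum(sub * T)
    den = np.sqrt(np.sum(sub ** 2)) * np.sqrt(np.sum(T ** 2))
    return num / den

def match_ncc(S, T):
    """Return the reference point maximizing R(i, j), and the maximum R."""
    n1, n2 = S.shape
    m1, m2 = T.shape
    best, best_pos = -np.inf, (0, 0)
    for i in range(n1 - m1 + 1):
        for j in range(n2 - m2 + 1):
            r = ncc(S[i:i + m1, j:j + m2], T)
            if r > best:
                best, best_pos = r, (i, j)
    return best_pos, best
```

Because R is normalized by the subgraph energy, a brightness-scaled copy of the template still scores R = 1, which the raw difference D(i, j) would miss.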

2.2 Sequential similarity detection algorithm

Matching by the correlation method is computationally very expensive, because the template must be correlated at (N − M + 1)² reference positions, and all of that work is wasted everywhere except at the matching point. For this reason the Sequential Similarity Detection Algorithm (SSDA) was proposed. Its key points are as follows. In a digital image, SSDA uses formula (6) to compute the dissimilarity m(i, j) of image f(x, y) at point (i, j) as the matching measure:

m(i, j) = ΣΣ | f(i + u − 1, j + v − 1) − t(u, v) |   (6)

In the formula, (i, j) is not the central coordinate of the template but its upper-left coordinate, and the sum runs over the template, whose size is N × M. If a pattern consistent with the template exists at (i, j), m(i, j) is small; otherwise it is large. In particular, when the subgraph under the template is completely inconsistent with it, the running sum of absolute gray-level differences over the overlapped pixels grows sharply. Therefore, if the accumulated sum of absolute gray differences exceeds a certain threshold, we conclude that no pattern consistent with the template exists at this position, suspend the calculation of the remaining pixels under the template, and move on to the next position to compute m(i, j). Because most positions are abandoned after only a few additions, the computation time is greatly shortened and the matching speed improved.

Following this idea, the SSDA algorithm can be improved further by dividing the template's movement over the image into two stages: coarse retrieval and fine retrieval. In the coarse stage, instead of moving the template one pixel at a time, the template is overlapped with the image every few pixels and the matching measure is computed, which yields the approximate range within which the sought pattern lies. Then, within this range, the template is moved one pixel at a time, and the matching measure determines the pattern's location. In this way the number of template-matching computations is reduced, the computation time shortened, and the matching speed improved, though there is a risk of missing the best position in the image.
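The early-abandonment idea of SSDA can be sketched as follows (a Python sketch under the same assumptions as before; the threshold value is problem-dependent):

```python
import numpy as np

def ssda_match(S, T, threshold):
    """Accumulate |gray difference| pixel by pixel at each position and
    abandon the position as soon as the running sum exceeds the threshold."""
    n1, n2 = S.shape
    m1, m2 = T.shape
    best, best_pos = np.inf, None
    for i in range(n1 - m1 + 1):
        for j in range(n2 - m2 + 1):
            acc = 0.0
            for u in range(m1):
                for v in range(m2):
                    acc += abs(S[i + u, j + v] - T[u, v])
                    if acc > threshold:
                        break            # abandon this position early
                else:
                    continue
                break
            if acc <= threshold and acc < best:
                best, best_pos = acc, (i, j)
    return best_pos
```

At a grossly mismatched position the sum exceeds the threshold after only a few pixels, so most of the (N − M + 1)² positions cost almost nothing.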

2.3 Correlation algorithm

The correlation of two functions can be expressed by formula (7):

f(x, y) ∘ h(x, y) ⇔ F*(u, v) H(u, v)   (7)

where F* is the complex conjugate of F, and F(u, v) and H(u, v) are the Fourier transforms of f(x, y) and h(x, y) respectively. The correlation theorem is analogous to the convolution theorem, and convolution is the link between spatial-domain filtering and frequency-domain filtering. An important use of correlation is matching. In matching, f(x, y) is an image containing an object or region. To determine whether f contains an object or region of interest, let h(x, y) be the image of that object (often called the image template). If the match succeeds, the correlation of the two functions reaches its maximum at the point where h finds its corresponding location in f. From this analysis it follows that the correlation algorithm can be carried out in two ways: in the spatial domain or in the frequency domain.
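The frequency-domain route of formula (7) can be sketched with NumPy's FFT (a Python sketch, not the article's code; this computes circular cross-correlation, so the peak of the result marks the displacement at which h best aligns with f):

```python
import numpy as np

def correlate_fft(f, h):
    """Circular cross-correlation of f with h computed in the frequency
    domain: corr = IFFT( conj(FFT(f)) * FFT(h) ), per the correlation theorem."""
    F = np.fft.fft2(f)
    H = np.fft.fft2(h)
    return np.real(np.fft.ifft2(np.conj(F) * H))
```

For large templates this replaces the O(N²M²) spatial sum with a few O(N² log N) transforms, which is why the frequency-domain form matters in practice.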

2.4 Sorting algorithm

The algorithm consists of two steps. In step 1, the gray values of the real-time image are sorted by amplitude and arranged in column form, then binary- or ternary-encoded; following the binary ordering, the real-time image is transformed into an ordered set of binary arrays {Cn, n = 1, 2, …, N}. This process is called amplitude-sorting preprocessing. In step 2, the binary sequences are correlated with the reference image sequentially, from coarse to fine, until the matching points are determined. Due to space constraints, no example is listed here.

This hierarchical search algorithm follows directly from the way people look for things: first roughly, then in detail. For example, to find the location of Zhaoqing on a map of China, one first finds Guangdong Province; this is called coarse correlation. Then, within that region, the exact location of Zhaoqing is determined; this is called fine correlation. Clearly, the location of Zhaoqing can be found quickly in this way, because the time needed to search regions outside Guangdong Province is saved. This method is called the sequential-decision method of hierarchical search, and hierarchical algorithms built on this idea achieve high search speed. For the sake of space, only the general idea is given here.
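The coarse-then-fine idea can be sketched as a two-stage search (a Python sketch under the same assumptions as the earlier snippets; `step` controls the coarse stride and is a free parameter):

```python
import numpy as np

def coarse_to_fine(S, T, step=4):
    """Stage 1: test every `step`-th position (coarse correlation).
    Stage 2: one-pixel search around the coarse winner (fine correlation)."""
    n1, n2 = S.shape
    m1, m2 = T.shape

    def sad(i, j):  # sum of absolute differences at reference point (i, j)
        return np.sum(np.abs(S[i:i + m1, j:j + m2] - T))

    # coarse stage: stride of `step` pixels
    coarse = min(((sad(i, j), (i, j))
                  for i in range(0, n1 - m1 + 1, step)
                  for j in range(0, n2 - m2 + 1, step)), key=lambda t: t[0])
    ci, cj = coarse[1]

    # fine stage: full-resolution search in a small neighborhood of the winner
    best, best_pos = np.inf, (ci, cj)
    for i in range(max(0, ci - step), min(n1 - m1, ci + step) + 1):
        for j in range(max(0, cj - step), min(n2 - m2, cj + step) + 1):
            d = sad(i, j)
            if d < best:
                best, best_pos = d, (i, j)
    return best_pos
```

As the text warns, if the coarse stride is too large the true match can fall outside every coarse neighborhood, so the stride must be chosen relative to the template size.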

The essence of pattern matching is applied mathematics. The template matching process is as follows: (1) digitize the image and read out the pixel value of each point in order; (2) preprocess the data by substituting it into the established mathematical model; (3) select an appropriate algorithm for pattern matching; (4) list the coordinates of the matched image, or display them directly in the original image. In template matching, the most important thing is how to establish the mathematical model, which is the core of correct matching.

2. Source code

function varargout = IdentifyEnglish(varargin)
% IDENTIFYENGLISH MATLAB code for IdentifyEnglish.fig
%      IDENTIFYENGLISH, by itself, creates a new IDENTIFYENGLISH or raises the existing
%      singleton*.
%
%      H = IDENTIFYENGLISH returns the handle to a new IDENTIFYENGLISH or the handle to
%      the existing singleton*.
%
%      IDENTIFYENGLISH('CALLBACK',hObject,eventData,handles,...) calls the local
%      function named CALLBACK in IDENTIFYENGLISH.M with the given input arguments.
%
%      IDENTIFYENGLISH('Property','Value',...) creates a new IDENTIFYENGLISH or raises the
%      existing singleton*.  Starting from the left, property value pairs are
%      applied to the GUI before IdentifyEnglish_OpeningFcn gets called.  An
%      unrecognized property name or invalid value makes property application
%      stop.  All inputs are passed to IdentifyEnglish_OpeningFcn via varargin.
%
%      *See GUI Options on GUIDE's Tools menu.  Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help IdentifyEnglish

% Last Modified by GUIDE v2.5 05-May-2019 16:46:08

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @IdentifyEnglish_OpeningFcn, ...
                   'gui_OutputFcn',  @IdentifyEnglish_OutputFcn, ...
                   'gui_LayoutFcn',  [], ...
                   'gui_Callback',   []);

if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT


% --- Executes just before IdentifyEnglish is made visible.
function IdentifyEnglish_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to IdentifyEnglish (see VARARGIN)

% Choose default command line output for IdentifyEnglish
handles.output = hObject;

% Update handles structure
guidata(hObject, handles);

% UIWAIT makes IdentifyEnglish wait for user response (see UIRESUME)
% uiwait(handles.figure1);
axis([0 240 0 240]);

% --- Outputs from this function are returned to the command line.
function varargout = IdentifyEnglish_OutputFcn(hObject, eventdata, handles) 
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;
clc;

% --- Executes on button press in pushbuttonSave.
function pushbuttonSave_Callback(hObject, eventdata, handles)
% hObject    handle to pushbuttonSave (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
[f, p] = uiputfile({'*.bmp'}, 'Save image file'); % open the save-file dialog
str = strcat(p, f);                               % concatenate path and file name
px = getframe(handles.axes1);                     % capture the axes content as a movie frame
CurImg = frame2im(px);                            % convert the movie frame to image data
imwrite(CurImg, str, 'bmp');                      % write the image to disk


% --- Executes on mouse press over figure background, over a disabled or
% --- inactive control, or over an axes background.
function figure1_WindowButtonDownFcn(hObject, eventdata, handles)
% hObject    handle to figure1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global ButtonDown pos1
if strcmp(get(gcf,'SelectionType'),'normal')
    ButtonDown = 1;
    pos1 = get(handles.axes1,'CurrentPoint');
    % disp(pos1);
end

function [feature] = GetFeature(Img)
%GETFEATURE  Extract a grid feature vector from a character image
Img = rgb2gray(Img);            % convert to grayscale
[rows,cols] = find(Img==0);     % locate the black (character) pixels
top = min(rows);
buttom = max(rows);
left = min(cols);
right = max(cols);
x = floor((right - left)/5);  % grid cell width
y = floor((buttom - top)/5);  % grid cell height (assumed symmetric to x; this line is missing from the listing)

k = 1;
Vec = zeros(1, 5*5);          % 5-by-5 grid feature vector
for i = top:y:top+(5-1)*y
    for j = left:x:left+(5-1)*x
        Vec(k) = sum(sum(Img(i:i+y-1, j:j+x-1))); % count white pixels in this grid cell
        k = k + 1;
    end
end

feature = Vec; % assumed output assignment; `feature` is never set in the listing
end

3. Operation results

4. Note

Version: MATLAB 2014a