I. Algorithm overview

  1. Different types of corners

In the real world, corners correspond to corners of objects, road intersections, T-junctions, and so on. From the perspective of image analysis, a corner point can be defined in the following two ways:

A corner is the intersection point of two edges; equivalently, a corner is a feature point whose neighborhood contains two dominant directions. The first definition requires encoding the image edges, which depends heavily on image segmentation and edge extraction, involves considerable difficulty and computation, and is likely to fail once the local shape of the target changes. Early methods of this type include those of Rosenfeld and Freeman; later ones include the CSS (curvature scale space) method.

Methods based on image gray level instead detect corners by computing local curvature and gradients directly, avoiding the drawbacks of the first class. They mainly include the Moravec operator, the Förstner operator, the Harris operator, and the SUSAN operator.



This article mainly introduces the principle of the Harris corner detection algorithm. The Shi-Tomasi algorithm, proposed by Jianbo Shi and Carlo Tomasi, is another well-known corner detector. It was originally designed for the tracking problem, where it measures the similarity of two images, and it can be viewed as an improvement on the Harris algorithm; OpenCV implements it as the function goodFeaturesToTrack(). Another well-known corner detector is SUSAN, short for Smallest Univalue Segment Assimilating Nucleus. SUSAN uses a circular template centered on a nucleus pixel: it compares the gray value of the nucleus with the values of the other pixels inside the circular template and counts how many of them are similar to the nucleus; when this count falls below a certain threshold, the nucleus is taken to be a corner. SUSAN can be regarded as a simplification of the Harris idea, and it is both simple and efficient; OpenCV exposes a closely related detector through the function FAST().
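For orientation, the three detectors mentioned above are also available as ready-made functions if MATLAB's Computer Vision Toolbox is installed. The short sketch below only illustrates how they are typically called on the test image used later (hall1.jpg); it is not part of the homography code in section II.

% Sketch: built-in detectors from the Computer Vision Toolbox (assumed to be installed)
I = rgb2gray(imread('hall1.jpg'));           % test image, same file name as in the source code below

cornersHarris = detectHarrisFeatures(I);     % Harris-Stephens corner response
cornersShi    = detectMinEigenFeatures(I);   % Shi-Tomasi minimum-eigenvalue score
cornersFast   = detectFASTFeatures(I);       % FAST segment test (a SUSAN-like idea)

figure; imshow(I); hold on;
plot(cornersHarris.selectStrongest(100));    % overlay the 100 strongest Harris corners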

  2. Harris corner detection

    2.1 Basic Principles

    The human eye usually recognizes a corner within a small local region or window. If shifting this window by a small amount in any direction produces a large change in the gray level inside the window, the window contains a corner. If the gray level of the image in the window does not change no matter which direction the window is moved, there is no corner in the window. If the gray level changes greatly when the window moves in one direction but hardly changes in the other directions, the image inside the window is probably a straight edge segment.
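A rough numerical illustration of this window test is sketched below (the window size, the shift range, and the pixel position are arbitrary choices, not values from the source code): moving a small window around a pixel and summing the squared gray-level differences gives a response that is flat in uniform regions, has a valley along an edge, and has a single sharp minimum only at the unshifted position at a corner.

% Minimal sketch of the shifted-window test
I  = double(rgb2gray(imread('hall1.jpg')));   % assumed test image
y0 = 120; x0 = 100; r = 5;                    % window centre (row, col) and half-size, chosen arbitrarily
win = I(y0-r:y0+r, x0-r:x0+r);                % reference window
E = zeros(5);                                 % SSD for shifts dy, dx in -2..2
for dy = -2:2
    for dx = -2:2
        shifted = I(y0+dy-r:y0+dy+r, x0+dx-r:x0+dx+r);
        E(dy+3, dx+3) = sum(sum((shifted - win).^2));
    end
end
disp(E)   % nearly constant: flat region; low along one line: edge; minimum only at the centre: corner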



    For an image $I(x,y)$, the self-similarity at a point $(x,y)$ after a translation $(\Delta x,\Delta y)$ can be measured by the autocorrelation function

$$c(x,y;\Delta x,\Delta y)=\sum_{(u,v)\in W(x,y)} w(u,v)\,\bigl(I(u,v)-I(u+\Delta x,v+\Delta y)\bigr)^{2}$$

where $W(x,y)$ is a window centered on the point $(x,y)$ and $w(u,v)$ is a weighting function, which can be a constant or a Gaussian.

Approximating the shifted image by a first-order Taylor expansion, $I(u+\Delta x,v+\Delta y)\approx I(u,v)+I_x(u,v)\Delta x+I_y(u,v)\Delta y$, the autocorrelation function reduces to a quadratic form

$$c(x,y;\Delta x,\Delta y)\approx\begin{bmatrix}\Delta x & \Delta y\end{bmatrix}M(x,y)\begin{bmatrix}\Delta x\\ \Delta y\end{bmatrix},\qquad M(x,y)=\sum_{(u,v)\in W(x,y)} w(u,v)\begin{bmatrix}I_x^{2} & I_xI_y\\ I_xI_y & I_y^{2}\end{bmatrix},$$

so the local autocorrelation surface is approximately an ellipse whose shape is determined by the eigenvalues $\lambda_1,\lambda_2$ of $M(x,y)$.
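The entries of $M(x,y)$ can be computed for every pixel at once from the image gradients and a Gaussian weighting, which is exactly what the Harris function in section II does; the fragment below is only a condensed sketch of that computation, with an illustrative filter size and sigma.

% Condensed sketch: entries of M(x,y) for all pixels
I  = double(rgb2gray(imread('hall1.jpg')));   % assumed test image
dx = [-1 0 1; -1 0 1; -1 0 1];  dy = dx';     % simple derivative masks
Ix = conv2(I, dx, 'same');  Iy = conv2(I, dy, 'same');
g  = fspecial('gaussian', 7, 1.4);            % Gaussian weighting w(u,v) (illustrative size/sigma)
Ix2 = conv2(Ix.^2,  g, 'same');               % windowed sum of Ix^2
Iy2 = conv2(Iy.^2,  g, 'same');               % windowed sum of Iy^2
Ixy = conv2(Ix.*Iy, g, 'same');               % windowed sum of Ix*Iy
% at each pixel, M = [Ix2 Ixy; Ixy Iy2]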







The relationship between the eigenvalues $\lambda_1,\lambda_2$ of $M(x,y)$ and corners, edges (lines), and flat regions in the image can be divided into three cases:

An edge (line) in the image: one eigenvalue is large and the other is small, $\lambda_1\gg\lambda_2$ or $\lambda_2\gg\lambda_1$. The autocorrelation function is large in one direction and small in the other directions.

A flat region in the image: both eigenvalues are small and approximately equal; the autocorrelation function is small in every direction.

A corner in the image: both eigenvalues are large and approximately equal, and the autocorrelation function increases in every direction.
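In practice the two eigenvalues are combined into a single corner response; Harris and Stephens define it directly from the entries of $M(x,y)$, and this is the quantity the source code in section II stores in R11, with $k$ typically chosen between 0.04 and 0.06:

$$R=\det M-k\,(\operatorname{trace}M)^{2}=\lambda_1\lambda_2-k\,(\lambda_1+\lambda_2)^{2}$$

A large positive $R$ indicates a corner, a large negative $R$ an edge, and a small $|R|$ a flat region; corners are kept where $R$ exceeds a threshold and is a local maximum (non-maximum suppression).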



II. Source code

%only for RGB image homography

clc;
clear all;
close all
f = 'hall';
ext = 'jpg';
img1 = imread([f '1.' ext]);
img2 = imread([f '2.' ext]);

if size(img1,3) == 1   % check whether the input is an RGB image
    fprintf('error, only for RGB images\n');
    return;
end
img1Dup = rgb2gray(img1);   % gray-scale copy of img1
img1Dup = double(img1Dup);
img2Dup = rgb2gray(img2);   % gray-scale copy of img2
img2Dup = double(img2Dup);

% use Harris on both images to find corners

[locs1] = Harris(img1Dup);
[locs2] = Harris(img2Dup);


% use NCC to find correspondences between the two images
[matchLoc1 matchLoc2] =  findCorr(img1Dup,img2Dup,locs1, locs2);

% use RANSAC to find homography matrix
[H inlierIdx] = estHomography(img1Dup,img2Dup,matchLoc2',matchLoc1');
 H  %#ok
[imgout]=warpTheImage(H,img1,img2);
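The helper functions findCorr, estHomography, and warpTheImage are not reproduced in this post. Purely as an indication of what the NCC matching step does, the hypothetical sketch below pairs each corner of the first image with the corner of the second image whose surrounding patch has the highest normalized cross-correlation; the function name, patch size, and acceptance threshold are illustrative choices, not the author's implementation.

function [matchLoc1, matchLoc2] = nccMatchSketch(im1, im2, locs1, locs2)
% Hypothetical NCC matcher (not the original findCorr)
w = 7; thr = 0.9;                            % half patch size and NCC threshold (illustrative)
matchLoc1 = []; matchLoc2 = [];
for i = 1:size(locs1,1)
    r1 = locs1(i,1); c1 = locs1(i,2);
    if r1<=w || c1<=w || r1>size(im1,1)-w || c1>size(im1,2)-w, continue; end
    p1 = im1(r1-w:r1+w, c1-w:c1+w);  p1 = p1 - mean(p1(:));
    best = -Inf; bestJ = 0;
    for j = 1:size(locs2,1)
        r2 = locs2(j,1); c2 = locs2(j,2);
        if r2<=w || c2<=w || r2>size(im2,1)-w || c2>size(im2,2)-w, continue; end
        p2 = im2(r2-w:r2+w, c2-w:c2+w);  p2 = p2 - mean(p2(:));
        ncc = sum(p1(:).*p2(:)) / (norm(p1(:))*norm(p2(:)) + eps);   % normalized cross-correlation
        if ncc > best, best = ncc; bestJ = j; end
    end
    if bestJ > 0 && best > thr
        matchLoc1 = [matchLoc1; locs1(i,:)];     %#ok<AGROW>
        matchLoc2 = [matchLoc2; locs2(bestJ,:)]; %#ok<AGROW>
    end
end
end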
% Harris detector
% The code calculates the Harris feature points (FPs).
%
% When you execute the code, the test image file is opened
% and you have to select with the mouse the region where you
% want to find the Harris points; the code will then print
% and display the feature points in the selected region.
% (In the version below, the interactive region selection is
% commented out and the whole image is used.)
% You can control the number of FPs by changing the variables
% max_N & min_N
% A. Ganoun

function [locs] = Harris(frame)
% I=rgb2gray(frame);
% I =double(I);
I=frame;
%****************************
% imshow(frame);
% 
% waitforbuttonpress;
% point1 = get(gca,'CurrentPoint');   % button down detected
% rectregion = rbbox;                 % return figure units
% point2 = get(gca,'CurrentPoint');   % button up detected
% point1 = point1(1,1:2);             % extract col/row min and max
% point2 = point2(1,1:2);
% lowerleft = min(point1, point2);
% upperright = max(point1, point2); 
% ymin = round(lowerleft(1));    % round to the nearest integers
% ymax = round(upperright(1));
% xmin = round(lowerleft(2));
% xmax = round(upperright(2));
% 
% 
% %***********************************
% Aj=6;
% cmin=xmin-Aj; cmax=xmax+Aj; rmin=ymin-Aj; rmax=ymax+Aj;
 min_N=350; max_N=450;


%%%%%%%%%%%%%% Interest Points %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
sigma=1.4; Thrshold=20; r=4; 
dx = [-1 0 1; -1 0 1; -1 0 1];   % the derivative mask (horizontal gradient)
    dy = dx';
    %%%%%% 
    Ix = conv2(I, dx, 'same');   
    Iy = conv2(I, dy, 'same');
    g = fspecial('gaussian', round(5*sigma), sigma);   % Gaussian filter (7x7 for sigma = 1.4)
    
    %%%%% 
    Ix2 = conv2(Ix.^2, g, 'same');  
    Iy2 = conv2(Iy.^2, g, 'same');
    Ixy = conv2(Ix.*Iy, g,'same');
    %%%%%%%%%%%%%%
    k = 0.04;
    R11 = (Ix2.*Iy2 - Ixy.^2) - k*(Ix2 + Iy2).^2;
    R11=(1000/max(max(R11)))*R11;  %make the largest one to be 1000
   
    R=R11;
   
    sze = 2*r+1;                  
    MX = ordfilt2(R, sze^2, ones(sze));   % non-maximum suppression: local maxima of R
    R11 = (R==MX) & (R>Thrshold);
    count = sum(sum(R11(5:size(R11,1)-5, 5:size(R11,2)-5)));   % corners away from the border
    
    
    loop=0;  %use adaptive threshold here
    while (((count<min_N)||(count>max_N))&&(loop<30))
        if count>max_N
            Thrshold=Thrshold*1.5;
        elseif count < min_N
            Thrshold=Thrshold*0.5;
        end
        
        R11 = (R==MX)&(R>Thrshold); 
        count = sum(sum(R11(5:size(R11,1)-5, 5:size(R11,2)-5)));
        loop=loop+1;
    end
    
    
    R = R*0;
    R(5:size(R11,1)-5, 5:size(R11,2)-5) = R11(5:size(R11,1)-5, 5:size(R11,2)-5);   % ignore corners on the image boundary
    [r1,c1] = find(R);
    PIP = [r1,c1];    % interest point locations [row, col]
    locs = PIP;
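A minimal way to try the Harris function above on a single image and display the detected points (the file name matches the test image assumed by the main script):

img  = imread('hall1.jpg');            % assumed test image
gray = double(rgb2gray(img));
locs = Harris(gray);                   % each row of locs is [row, col] of a detected corner
figure; imshow(img); hold on;
plot(locs(:,2), locs(:,1), 'r+');      % plot expects (x, y) = (col, row)
title('Harris corners');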

III. Operation results

IV. Remarks

MATLAB version: R2014a