I. Overview

1 Basic concepts

(1) What is image segmentation?

Image segmentation refers to the technique and process of dividing an image into regions with distinct characteristics and extracting the objects of interest. The characteristics can be gray level, color, texture, and so on. A target may correspond to a single region or to several regions.

(2) Classification

Amplitude segmentation – divides the regions according to different amplitudes;

Edge detection – divides the regions according to edges;

Region segmentation – divides the regions according to different shapes.

Gray-level image segmentation is usually based on two basic properties of image brightness: discontinuity and similarity. Pixels inside a region generally share gray-level similarity, while the boundaries between regions generally show gray-level discontinuity. The segmentation methods derived from this are discontinuity detection, threshold segmentation, and region segmentation.

2 Point detection and line detection

Segmentation that exploits the discontinuity between regions involves:

Discontinuity detection – detects points, lines, and edges

Edge assembly – assembles the detected edges into region boundaries

Threshold processing – in edge detection, features that distinguish the different regions must be defined; the cut-off point of these feature values is the threshold.

(1) Point detection
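The original figure with the point-detection mask is not reproduced here. Below is a minimal MATLAB sketch using a common Laplacian-type point-detection mask; the test image 'coins.png' and the threshold rule are illustrative assumptions, not part of the original post.

%Point detection sketch (illustrative): a pixel is flagged as an isolated
%point when the magnitude of the mask response exceeds a threshold T.
I = im2double(imread('coins.png'));   %any grayscale test image (assumption)
w = [-1 -1 -1; -1 8 -1; -1 -1 -1];    %Laplacian-type point-detection mask
R = imfilter(I, w, 'replicate');
T = 0.9 * max(abs(R(:)));             %example threshold choice (assumption)
points = abs(R) > T;
figure, imshow(points);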



(2) Line detection

Line detection requires consideration of directionality
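The directional masks from the original figure are not shown. As an illustration, here is a sketch with the four standard line-detection masks (horizontal, +45 degrees, vertical, -45 degrees), keeping the strongest absolute response at each pixel; 'coins.png' is again only a placeholder image.

%Line detection sketch (illustrative): apply the four standard directional
%masks and keep the strongest absolute response at each pixel.
I   = im2double(imread('coins.png'));
wH  = [-1 -1 -1;  2  2  2; -1 -1 -1];   %horizontal lines
w45 = [-1 -1  2; -1  2 -1;  2 -1 -1];   %+45 degree lines
wV  = [-1  2 -1; -1  2 -1; -1  2 -1];   %vertical lines
wM  = [ 2 -1 -1; -1  2 -1; -1 -1  2];   %-45 degree lines
R = cat(3, imfilter(I,wH), imfilter(I,w45), imfilter(I,wV), imfilter(I,wM));
lineStrength = max(abs(R), [], 3);      %strongest directional response
figure, imshow(lineStrength, []);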





3 Edge Detection

(1) Distinguish between “edge” and “line”

Edge – the dividing line between image regions that have different characteristics

Line – a pair of closely spaced edges whose narrow intermediate region shares the same image characteristics

(2) What is edge detection?

An “edge” is a place where the image gray level jumps;

edge detection is a form of image segmentation based on the spatial discontinuity of pixel gray values in a gray-level image.

(3) Image edge classification



(4) Description of image edges

Direction

Amplitude

Along the edge direction the amplitude changes gently, while along the direction perpendicular to the edge it changes sharply.

(5) Criterion of image edge

The most common approach to edge detection is to detect discontinuities in brightness, which are characterized mathematically by the first- and second-order derivatives.



At an edge, the magnitude of the first derivative of the image is large and forms a peak near the edge; the magnitude and polarity of the peak reflect the strength and direction of the edge.

The second derivative of the image is zero at the edge (a zero crossing), with two peaks of opposite sign on either side of the zero; the magnitude and sign of these peaks reflect the strength and direction of the edge.
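A tiny 1-D MATLAB sketch of this criterion (an ideal step edge is assumed for illustration): the first difference forms a single peak at the edge, and the second difference forms a pair of opposite peaks with a zero crossing between them.

x  = [zeros(1,10) ones(1,10)];   %ideal step edge (assumption)
d1 = diff(x);                    %first difference: one peak at the edge
d2 = diff(x, 2);                 %second difference: +1/-1 pair around a zero crossing
figure, subplot(3,1,1), stem(x),  title('signal');
subplot(3,1,2), stem(d1), title('first difference');
subplot(3,1,3), stem(d2), title('second difference');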

(6) First-order differential edge detection operators

Gradient operator

Roberts operator

Sobel operator

Prewitt operator
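As an illustration of the first-order operators listed above, here is a minimal Sobel sketch; the test image is an assumption.

%Sobel gradient sketch: gradient magnitude from two directional masks.
I  = im2double(imread('coins.png'));
hy = fspecial('sobel');            %emphasizes horizontal edges (vertical derivative)
hx = hy';                          %emphasizes vertical edges (horizontal derivative)
Gy = imfilter(I, hy, 'replicate');
Gx = imfilter(I, hx, 'replicate');
gradMag = sqrt(Gx.^2 + Gy.^2);     %gradient magnitude
figure, imshow(gradMag, []);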

(7) Second-order differential edge detection operators

Laplacian operator

LoG operator (Marr operator / Mexican hat operator)

DOG operator

Canny edge detection

(7.1) Canny edge detection

7.1.1 Basic principle of Canny edge detection

Image edge detection must satisfy two conditions: (1) it must effectively suppress noise; (2) the optimal approximation operator must be obtained from the product of the signal-to-noise ratio and the localization measure.

7.1.2 What is “optimal edge detection”?

Good detection – the algorithm should identify as many of the actual edges in the image as possible;

Good localization – the marked edges should be as close as possible to the edges in the actual image;

Single response – each edge should be detected only once.

7.1.3 Canny edge detection algorithm steps

1. Denoising – smooth the image with a Gaussian filter;

2. Gradient computation – use finite differences of the first-order partial derivatives to compute the gradient magnitude and direction;

3. Non-maximum suppression of the gradient magnitude (ensures accurate localization and thin edges);

4. Edge detection and linking with a double-threshold algorithm (ensures the accuracy of edge identification).

(8) Edge detection in MATLAB

BW=edge(I,method,threshold,direction,'nothinning');
BW=edge(I,method,threshold,direction,sigma);

Where I is the image to be detected and BW is the output result; method is the type of detection operator ('canny', 'log', 'prewitt', 'roberts', 'sobel', 'zerocross', etc.);

threshold specifies the threshold; direction specifies the direction in which edges are detected; 'nothinning' indicates that the edge image should not be thinned;

sigma specifies the standard deviation of the Gaussian kernel used by the LoG and Canny operators.
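A short usage sketch of edge with the operators above; the image file and parameter values are only illustrative assumptions.

I   = imread('coins.png');                 %any grayscale image (assumption)
BW1 = edge(I, 'sobel');                    %Sobel with automatic threshold
BW2 = edge(I, 'log', [], 2);               %LoG with sigma = 2, automatic threshold
BW3 = edge(I, 'canny', [0.04 0.10], 1.5);  %Canny with [low high] thresholds and sigma
figure, subplot(1,3,1), imshow(BW1), title('Sobel');
subplot(1,3,2), imshow(BW2), title('LoG');
subplot(1,3,3), imshow(BW3), title('Canny');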

4 Image segmentation based on gray threshold





4.1.4 Threshold selection with the Otsu algorithm

Also called the maximum between-class variance method, this is an adaptive way of determining the threshold. Based on the gray-level characteristics of the image, the image is divided into background and target. The larger the between-class variance of background and target, the greater the difference between the two parts of the image. When part of the target is misclassified as background, or part of the background is misclassified as target, this difference becomes smaller. Therefore, the segmentation that maximizes the between-class variance has the lowest probability of misclassification.
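A minimal sketch of Otsu thresholding with the Image Processing Toolbox; the input file name is an assumption.

I     = imread('coins.png');     %any grayscale image (assumption)
level = graythresh(I);           %Otsu threshold, normalized to [0,1]
BW    = im2bw(I, level);         %binarize (im2bw is available in R2014a)
figure, imshow(BW);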

4.2 Multi-threshold segmentation

First, the image is partitioned using a set of thresholds related to pixel position;

then the global threshold method is applied to segment the image within each part.
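A related sketch with two global thresholds, using the Image Processing Toolbox functions multithresh and imquantize (available from R2012b); the image is again a placeholder.

I      = imread('coins.png');
levels = multithresh(I, 2);        %two Otsu-style thresholds -> three classes
labels = imquantize(I, levels);    %label each pixel 1, 2 or 3
figure, imshow(label2rgb(labels)); %visualize the three classes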

II. Source code

clc;
clear all;
close all;

%Read Input Retina Image
inImg = imread('Input.bmp');
dim = ndims(inImg);
if(dim == 3)
    %Input is a color image
    inImg = rgb2gray(inImg);
end
%Note: in R2014a a function cannot be defined inside a script, so the code
%below should be saved in its own file, VesselExtract.m.
function bloodVessels = VesselExtract(inImg, threshold)

%Kirsch's Templates
h1 = [ 5 -3 -3;
       5  0 -3;
       5 -3 -3]/15;
h2 = [-3 -3  5;
      -3  0  5;
      -3 -3  5]/15;
h3 = [-3 -3 -3;
       5  0 -3;
       5  5 -3]/15;
h4 = [-3  5  5;
      -3  0  5;
      -3 -3 -3]/15;
h5 = [-3 -3 -3;
      -3  0 -3;
       5  5  5]/15;
h6 = [ 5  5  5;
      -3  0 -3;
      -3 -3 -3]/15;
h7 = [-3 -3 -3;
      -3  0  5;
      -3  5  5]/15;
h8 = [ 5  5 -3;
       5  0 -3;
      -3 -3 -3]/15;

%Spatial Filtering by Kirsch's Templates
t1=filter2(h1,inImg);
t2=filter2(h2,inImg);
t3=filter2(h3,inImg);
t4=filter2(h4,inImg);
t5=filter2(h5,inImg);
t6=filter2(h6,inImg);
t7=filter2(h7,inImg);
t8=filter2(h8,inImg);

s=size(inImg);
bloodVessels=zeros(s(1),s(2));
temp = zeros(1,8);
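The listing ends here and appears to be truncated. Below is a minimal completion sketch under the assumption that the vessel map is the pixel-wise maximum of the eight Kirsch responses, thresholded with the threshold argument; it is not part of the original listing.

%Completion sketch (assumption): keep the strongest of the eight
%directional responses at each pixel, then threshold.
for i = 1:s(1)
    for j = 1:s(2)
        temp = [t1(i,j) t2(i,j) t3(i,j) t4(i,j) ...
                t5(i,j) t6(i,j) t7(i,j) t8(i,j)];
        bloodVessels(i,j) = max(temp);   %strongest directional response
    end
end
bloodVessels = bloodVessels > threshold; %keep responses above the threshold
end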

III. Operation results

IV. Remarks

MATLAB version: R2014a