1.1 Gray threshold segmentation method

Gray threshold segmentation is one of the most commonly used parallel region techniques and the most widely applied method in image segmentation. Threshold segmentation is essentially the following transformation from an input image f to an output image g:

g(i, j) = L,  if f(i, j) ≥ T
g(i, j) = 0,  if f(i, j) < T

where T is the threshold value: g(i, j) = L for pixels belonging to the object, and g(i, j) = 0 for pixels belonging to the background.

It can be seen that the key of a threshold segmentation algorithm is determining the threshold value. If an appropriate threshold is chosen, the image can be segmented accurately. If the threshold is too high, too many target pixels are misclassified as background; conversely, if it is too low, too many background pixels are misclassified as target [7]. Once the threshold is determined, comparing it against the gray value of each pixel and assigning the pixel to a class can be carried out in parallel over all pixels, and the segmentation result directly yields the image regions.
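The transformation above can be sketched in a few lines. This is a minimal NumPy illustration (the GUI code later in this document is MATLAB; Python is used here only for illustration, the function name `threshold_segment` and the toy pixel values are hypothetical, and L is taken as 255 as in 8-bit images):

```python
import numpy as np

def threshold_segment(f, T, L=255):
    """Binarize image f: pixels with gray value >= T become the object value L,
    all others become the background value 0."""
    return np.where(f >= T, L, 0).astype(np.uint8)

# Toy 3x3 "image": a bright object region on a dark background (hypothetical values).
f = np.array([[10,  20, 200],
              [15, 210, 220],
              [12,  18,  25]], dtype=np.uint8)
g = threshold_segment(f, T=128)
```

Every pixel is processed independently, which is why this family of methods is called a parallel region technique.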

Threshold segmentation relies on one assumption: the histogram of the image has obvious bimodal or multi-modal peaks, and the threshold is selected at the bottom of a valley. The method therefore works very well for images with large contrast between target and background, and the resulting regions are non-overlapping and bounded by closed, connected boundaries.

Threshold segmentation methods are mainly divided into two kinds, global and local, and the threshold segmentation methods currently in use are developed on this basis, such as the minimum error method, the maximum correlation method, the maximum entropy method, the moment preserving method, and Otsu's maximum between-class variance method; the most widely used is Otsu's maximum between-class variance method.

Various thresholding techniques have been developed, including global thresholding, adaptive thresholding, optimal thresholding, and so on. Global thresholding segments the whole image with a single threshold and is suitable for images with obvious contrast between background and foreground; the threshold is determined from the whole image: T = T(f). However, this approach considers only the gray value of each pixel and generally ignores spatial characteristics, so it is very sensitive to noise. Commonly used global threshold selection methods include the peak-valley method, the minimum error method, the maximum between-class variance method, and the maximum entropy automatic threshold method.

In many cases the contrast between object and background is not the same everywhere in the image, making it difficult to separate them with a single uniform threshold. In such cases, different thresholds can be used according to the local features of the image: the image is divided into several sub-regions, each with its own threshold, or the threshold is selected dynamically at each point according to some neighborhood. The threshold is then adaptive. Threshold selection must be determined for the specific problem, generally through experiments. For a given image, the best threshold can be found by analyzing the histogram; for example, when the histogram clearly shows two peaks, the midpoint between them can be selected as the threshold.
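As a sketch of the adaptive idea, the following hypothetical NumPy snippet thresholds each pixel against the mean of its local neighborhood; the window size `win` and offset `c` are illustrative parameters, not taken from the source:

```python
import numpy as np

def adaptive_mean_threshold(f, win=3, c=0):
    """Set each pixel to 255 if it exceeds the mean of its win x win
    neighborhood minus c, otherwise to 0 (edge-padded neighborhoods)."""
    pad = win // 2
    fp = np.pad(f.astype(float), pad, mode='edge')
    out = np.zeros(f.shape, dtype=np.uint8)
    for i in range(f.shape[0]):
        for j in range(f.shape[1]):
            local_mean = fp[i:i+win, j:j+win].mean()
            out[i, j] = 255 if f[i, j] > local_mean - c else 0
    return out

# Toy image: one locally bright pixel in a flat background (hypothetical values).
f = np.array([[10, 10, 10, 10],
              [10, 50, 10, 10],
              [10, 10, 10, 10],
              [10, 10, 10, 10]], dtype=np.uint8)
g = adaptive_mean_threshold(f)
```

Only the pixel that stands out from its own neighborhood survives, regardless of any global threshold.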

The advantages of threshold segmentation are simple computation, high efficiency and speed, and easy implementation; it has been widely used in applications where efficiency matters (such as hardware implementations). It is effective for segmenting images with high contrast between target and background, and always yields non-overlapping regions with closed, connected boundaries. However, it is not suitable for multi-channel images whose feature values are weakly correlated, and it is difficult to obtain accurate results when the image has no obvious gray-level difference or the gray-value ranges of the objects overlap heavily. In addition, because the threshold is determined mainly from the gray-level histogram and the spatial relations between pixels are rarely considered, boundary information is easily lost when the background is complex, when several research targets overlap within the same background, when the image is very noisy, or when the gray values of target and background are nearly the same. The segmentation results obtained with a fixed threshold are then inaccurate and incomplete, and further precise localization is required.

1.2 Region-based segmentation method

Region growing and split-and-merge are two typical serial region techniques, in which the processing of each subsequent step of the segmentation is determined by the results of the previous steps.

(1) Region growing

The basic idea of region growing is to gather pixels with similar properties into a region. Specifically, a seed pixel is first found for each region to be segmented as the starting point of growth; then the pixels in the neighborhood of the seed that have the same or similar properties as the seed (judged by some predetermined growth or similarity criterion) are merged into the seed's region. These newly merged pixels are treated as new seeds, and the process continues until no more pixels satisfying the criterion can be included.

Region growing requires selecting a set of seed pixels that correctly represent the desired region, determining a similarity criterion for growth, and formulating a condition or criterion for stopping growth. The similarity criterion can be based on gray level, color, texture, gradient, or other characteristics. The selected seed can be a single pixel or a small area containing several pixels. Most growth criteria use local properties of the image; criteria can be formulated according to different principles, and the choice of criterion affects the growing process.

The advantages of region growing are simple computation and good segmentation of uniform connected objects; it also gives satisfactory results on images without prior knowledge, such as complex scenes with hard-to-define objects and natural scenery. Wu H S et al. proposed using the vector composed of the mean and standard deviation of lung cancer cell images as the segmentation feature and applied a region growing algorithm to segment lung cancer cell texture images, achieving good results [10]. Its disadvantage is that seed points must be chosen manually. Although its noise resistance is better than edge-based and histogram-based segmentation, it is still sensitive to noise, which may produce cavities in the region. In addition, it is a serial algorithm: when the target is large, segmentation is slow, so efficiency should be considered when designing the algorithm. Moreover, an improperly chosen tolerance introduced in the computation leads to misjudgment, and the method is susceptible to interference from overlapping internal structures of the target. Segmentation based on region growing is therefore generally suitable for cell images with smooth, non-overlapping edges.
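The growing procedure described above can be sketched as a breadth-first search from the seed. In this minimal NumPy illustration, the 4-neighborhood, the gray-level tolerance `tol`, and the toy image are all assumptions made for the example:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from `seed`: 4-neighbours whose gray value differs from
    the seed value by at most `tol` are merged; newly accepted pixels then
    act as new seeds, until no more pixels satisfy the criterion."""
    h, w = img.shape
    seed_val = int(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    q = deque([seed])
    while q:
        i, j = q.popleft()
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not mask[ni, nj]:
                if abs(int(img[ni, nj]) - seed_val) <= tol:
                    mask[ni, nj] = True
                    q.append((ni, nj))
    return mask

# Toy image: a bright 2x2 object touching a dark region (hypothetical values).
img = np.array([[100, 102,  10],
                [101, 103,  12],
                [ 11,  12,  13]], dtype=np.uint8)
mask = region_grow(img, seed=(0, 0), tol=10)
```

Because each step depends on the pixels accepted in the previous step, the algorithm is inherently serial, as the text notes.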

(2) Region splitting and merging

Region growing starts from one or more pixels and finally obtains the whole region, achieving target extraction. Splitting and merging is almost the reverse process: starting from the whole image, it splits repeatedly to obtain sub-regions and then merges the foreground regions to extract the target. The underlying assumption is that the foreground region consists of interconnected pixels; therefore, if an image is split down to the pixel level, each pixel can be judged as foreground or not, and once all pixels or sub-regions have been judged, the foreground regions or pixels are combined to obtain the foreground target. The most commonly used scheme of this kind is quadtree decomposition. Let R represent the entire square image area and P represent the logical predicate. The steps of the basic split-and-merge algorithm are as follows:

① For any region Ri, if P(Ri) = FALSE, split it into four non-overlapping equal quadrants;

② For any two adjacent regions Ri and Rj (which may be of different sizes, i.e. at different levels), merge them if P(Ri ∪ Rj) = TRUE;

③ When no further splitting or merging is possible, stop.

The key to the split-and-merge method is the design of the split and merge criteria. The method segments complex images well, but the algorithm is complicated and computationally expensive, and splitting may destroy region boundaries.
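The splitting step ① can be sketched with a simple homogeneity predicate P(R) = (max(R) − min(R) ≤ tol). This minimal NumPy example performs only the recursive quadtree split; the merge step ② is omitted for brevity, and the tolerance and toy image are assumptions:

```python
import numpy as np

def quadtree_split(img, tol=10, min_size=1):
    """Recursively split a square image region while the homogeneity predicate
    P(R) = (max(R) - min(R) <= tol) is FALSE; return the homogeneous leaf
    regions as (row, col, size) tuples."""
    leaves = []
    def split(r, c, size):
        block = img[r:r+size, c:c+size]
        if size <= min_size or int(block.max()) - int(block.min()) <= tol:
            leaves.append((r, c, size))        # P(R) = TRUE: keep as a leaf
        else:
            half = size // 2                   # P(R) = FALSE: split into 4 quadrants
            for dr, dc in ((0, 0), (0, half), (half, 0), (half, half)):
                split(r + dr, c + dc, half)
    split(0, 0, img.shape[0])
    return leaves

# Toy 4x4 image: dark left half, bright right half (hypothetical values).
img = np.array([[0, 0, 200, 200],
                [0, 0, 200, 200],
                [0, 0, 200, 200],
                [0, 0, 200, 200]], dtype=np.uint8)
regions = quadtree_split(img, tol=10)
```

The whole image is inhomogeneous, so it is split once into four homogeneous 2×2 quadrants; a merge pass would then fuse the two dark and the two bright quadrants.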

1.3 Edge based segmentation method

An important approach to image segmentation is edge detection, i.e. detecting abrupt changes in gray level or structure that mark the end of one region and the beginning of another; such a discontinuity is called an edge. Different regions of an image have different gray levels, with obvious edges at their boundaries, and this can be exploited to segment the image. Because the gray values of pixels at an edge are discontinuous, edges can be detected by taking derivatives: for a step edge, its position corresponds to an extreme point of the first derivative and a zero crossing of the second derivative. Differential operators are therefore commonly used for edge detection [11].

Commonly used first-order differential operators include the Roberts, Prewitt, and Sobel operators; second-order differential operators include the Laplacian and Kirsch operators. In practice, the various differential operators are usually represented as small region templates.
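As an illustration of representing a differential operator as a small region template, the following sketch applies the 3×3 Sobel templates by sliding-window correlation (pure NumPy, no padding; the toy step-edge image is hypothetical):

```python
import numpy as np

# 3x3 Sobel templates for the horizontal and vertical gray-level derivatives.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def correlate3(img, k):
    """Valid (no-padding) 3x3 sliding-window correlation of template k with img."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * k)
    return out

def sobel_magnitude(img):
    """Gradient magnitude from the two Sobel template responses."""
    gx = correlate3(img.astype(float), SOBEL_X)
    gy = correlate3(img.astype(float), SOBEL_Y)
    return np.hypot(gx, gy)

# Toy image with a vertical step edge between columns 1 and 2 (hypothetical).
img = np.zeros((4, 4))
img[:, 2:] = 100.0
mag = sobel_magnitude(img)
```

The response is large exactly along the step edge and zero in flat regions, which is what makes thresholding the gradient magnitude a usable edge map.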

Algorithm idea (Otsu's maximum between-class variance method):

Suppose an image has L gray levels [1, 2, …, L]. The number of pixels with gray level i is n_i, so the total number of pixels is N = n_1 + n_2 + … + n_L. For convenience of discussion, we use the normalized gray-level histogram and regard it as the probability distribution of the image:

p_i = n_i / N,  p_i ≥ 0,  Σ_{i=1}^{L} p_i = 1

Now suppose we divide the pixels into two classes C0 and C1 (background and target, or vice versa) by a threshold at gray level k: C0 contains the pixels with gray levels [1, …, k], and C1 contains the pixels with gray levels [k+1, …, L]. Then the probability of occurrence of each class is given by:

ω_0 = Pr(C0) = Σ_{i=1}^{k} p_i = ω(k)
ω_1 = Pr(C1) = Σ_{i=k+1}^{L} p_i = 1 − ω(k)

as well as the average gray level of each class:

μ_0 = Σ_{i=1}^{k} i·p_i / ω_0 = μ(k) / ω(k)
μ_1 = Σ_{i=k+1}^{L} i·p_i / ω_1 = (μ_T − μ(k)) / (1 − ω(k))

where

ω(k) = Σ_{i=1}^{k} p_i  and  μ(k) = Σ_{i=1}^{k} i·p_i

are respectively the cumulative occurrence probability and the average gray level (first-order cumulative moment) of gray levels 1 through k, and

μ_T = μ(L) = Σ_{i=1}^{L} i·p_i

is the average gray level of the whole image. It is easy to verify that, for any choice of k:

ω_0 μ_0 + ω_1 μ_1 = μ_T,  ω_0 + ω_1 = 1

The within-class variances of the two classes are given by:

σ_0² = Σ_{i=1}^{k} (i − μ_0)² p_i / ω_0
σ_1² = Σ_{i=k+1}^{L} (i − μ_1)² p_i / ω_1

This requires a second-order cumulative moment.

In order to evaluate how "good" a threshold (at gray level k) is, we introduce the discriminant criteria used in discriminant analysis to measure class separability:

λ = σ_B² / σ_W²,  κ = σ_T² / σ_W²,  η = σ_B² / σ_T²

where:

σ_W² = ω_0 σ_0² + ω_1 σ_1²
σ_B² = ω_0 (μ_0 − μ_T)² + ω_1 (μ_1 − μ_T)²
σ_T² = Σ_{i=1}^{L} (i − μ_T)² p_i

According to Equation (9), it can be concluded that:

σ_B² = ω_0 ω_1 (μ_1 − μ_0)²

These three formulas are respectively the within-class variance, the between-class variance, and the total variance of the gray levels. Our problem is then reduced to an optimization problem: find the threshold k that maximizes an objective function such as the one given in Equation (12).

This idea is based on the assumption that a good threshold separates the gray levels into two well-distinguished classes; conversely, the threshold that best divides the image into two classes at the gray level is the optimal threshold.

The discriminant criteria given above are maximized by searching over k. The three criteria are in fact equivalent to one another: κ = λ + 1 and η = λ/(λ + 1), because the following basic relation always holds:

σ_W² + σ_B² = σ_T²
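The derivation above translates directly into an implementation: compute the between-class variance σ_B²(k) for every candidate k from the cumulative sums ω(k) and μ(k), and take the argmax. A minimal NumPy sketch (0-based gray levels 0…255, as is usual in practice; the bimodal toy image is hypothetical):

```python
import numpy as np

def otsu_threshold(img, L=256):
    """Exhaustively search the threshold k that maximizes the between-class
    variance sigma_B^2(k) = w0*w1*(mu1 - mu0)^2, which is equivalent to
    maximizing the criterion eta = sigma_B^2 / sigma_T^2."""
    hist = np.bincount(img.ravel(), minlength=L).astype(float)
    p = hist / hist.sum()                  # normalized histogram p_i
    omega = np.cumsum(p)                   # w(k): cumulative probability
    mu = np.cumsum(p * np.arange(L))       # mu(k): first-order cumulative moment
    mu_T = mu[-1]                          # mean gray level of the whole image
    # Between-class variance for every k; guard against empty classes (w = 0 or 1).
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b2 = (mu_T * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b2[~np.isfinite(sigma_b2)] = 0.0
    return int(np.argmax(sigma_b2))

# Bimodal toy image: half the pixels at level 50, half at level 200 (hypothetical).
img = np.concatenate([np.full(100, 50), np.full(100, 200)]).astype(np.uint8)
k = otsu_threshold(img)
```

On this clearly bimodal histogram the method places the threshold at the lower mode, so pixels with level ≤ k form one class and the rest the other, consistent with the valley-between-peaks assumption stated earlier.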

The differential operation is realized by convolving a template with the image. These operators are sensitive to noise and are only suitable for relatively simple images. Since edges and noise are both gray-level discontinuities and both appear as high-frequency components in the frequency domain, direct differentiation cannot overcome the influence of noise; the image should therefore be smoothed before edge detection with a differential operator.

The Roberts operator is good for segmenting low-noise images with steep edges. The Laplacian operator is isotropic. Both the Roberts and Laplacian operators greatly amplify noise and worsen the signal-to-noise ratio. The Prewitt and Sobel operators are better suited to segmenting noisier images with gradual gray-level changes. The LoG and Canny operators are second-order and first-order differential operators with built-in smoothing and give better edge detection: the LoG operator applies the Laplacian to compute the second derivative of a Gaussian, while the Canny operator uses the first derivative of a Gaussian and achieves a good balance between noise suppression and edge detection. The Marr algorithm smooths images with heavy noise and detects edges better than the operators above, but the smoothing also reduces image contrast [7]. The Kirsch algorithm binarizes the gradient image with a suitable threshold so that target and background pixels fall below the threshold while most edge points lie above it; to improve performance, the watershed algorithm can be combined with it for accurate segmentation [1].

The Hough transform is a common method that uses the global characteristics of the image to detect the target contour directly and connect edge pixels into a closed region boundary. When the shape of the region is known in advance, the Hough transform conveniently connects discontinuous boundary pixels into the boundary curve. Its main advantage is that it is little affected by noise and by gaps in the curve.
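A minimal sketch of the Hough transform for straight lines: each edge pixel votes for all parameter pairs (ρ, θ) satisfying ρ = x·cosθ + y·sinθ, and peaks in the accumulator correspond to boundary lines. The toy edge mask and the 1-degree θ resolution are assumptions for the example:

```python
import numpy as np

def hough_lines(edge_mask, n_theta=180):
    """Accumulate votes in (rho, theta) space for every edge pixel; peaks in
    the accumulator correspond to straight boundary segments."""
    h, w = edge_mask.shape
    diag = int(np.ceil(np.hypot(h, w)))            # max possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_mask)
    for x, y in zip(xs, ys):
        # rho for this pixel at every theta, shifted so indices are non-negative
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, diag

# Toy edge mask: a horizontal line of edge pixels at y = 2 (hypothetical).
mask = np.zeros((5, 5), dtype=bool)
mask[2, :] = True
acc, diag = hough_lines(mask)
rho_idx, theta_idx = np.unravel_index(np.argmax(acc), acc.shape)
```

All five collinear edge pixels vote for the same (ρ, θ) cell, so even if the line in the image were broken by gaps, the peak would still be found; this is the noise and discontinuity robustness the text describes.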

For images with complex grayscale changes and rich details, it is difficult for edge detection operators to completely detect edges, and once there is noise interference, the direct processing effect of the above operators is not ideal. There are few examples of this method being used to segment microscopic images, because many textures or particles in microscopic images will mask the true edges. Although it can be improved by relevant algorithms, the results are not very good.

Principle of the fitting operator (parameter model matching): a parametric edge model is fitted to the local gray values of the image, and edge detection is then performed on the fitted model. Advantages and disadvantages: this kind of operator not only detects edges but also smooths noise, and handles noisy, highly textured cell images well; however, because the parameter model records more edge structure information, the computational overhead is high, the algorithm is complex, and it places strong requirements on the edge type.

Among the above three approaches, the most common problem with edge-based segmentation is that edges appear where there is no real boundary, and no edge appears where an actual boundary exists; this is caused by image noise or unsuitable information in the image [24]. Segmentation based on region growing often suffers from non-optimal parameter settings, yielding either too many or too few regions. Thresholding is the simplest segmentation process, with low computational cost and high speed; it uses a constant brightness value, the threshold, to separate objects from the background.

 

function varargout = ImageDivision(varargin)
% IMAGEDIVISION M-file for ImageDivision.fig
%      IMAGEDIVISION, by itself, creates a new IMAGEDIVISION or raises the existing
%      singleton*.
%
%      H = IMAGEDIVISION returns the handle to a new IMAGEDIVISION or the handle to
%      the existing singleton*.
%
%      IMAGEDIVISION('CALLBACK',hObject,eventData,handles,...) calls the local
%      function named CALLBACK in IMAGEDIVISION.M with the given input arguments.
%
%      IMAGEDIVISION('Property','Value',...) creates a new IMAGEDIVISION or raises the
%      existing singleton*.  Starting from the left, property value pairs are
%      applied to the GUI before ImageDivision_OpeningFcn gets called.  An
%      unrecognized property name or invalid value makes property application
%      stop.  All inputs are passed to ImageDivision_OpeningFcn via varargin.
%
%      *See GUI Options on GUIDE's Tools menu.  Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help ImageDivision

% Last Modified by GUIDE v2.5 26-Aug-2013 17:11:44

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @ImageDivision_OpeningFcn, ...
                   'gui_OutputFcn',  @ImageDivision_OutputFcn, ...
                   'gui_LayoutFcn',  [] , ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT


% --- Executes just before ImageDivision is made visible.
function ImageDivision_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to ImageDivision (see VARARGIN)

% Choose default command line output for ImageDivision
handles.output = hObject;

% Update handles structure
guidata(hObject, handles);

% UIWAIT makes ImageDivision wait for user response (see UIRESUME)
% uiwait(handles.figure1);


% --- Outputs from this function are returned to the command line.
function varargout = ImageDivision_OutputFcn(hObject, eventdata, handles) 
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;


% --------------------------------------------------------------------
function file_Callback(hObject, eventdata, handles)
% hObject    handle to file (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)


% --------------------------------------------------------------------
function openfile_Callback(hObject, eventdata, handles)
% hObject    handle to openfile (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global x;
[name,path]=uigetfile('*.*','');
file=[path,name];
axes(handles.axes1);
x=imread(file);
imshow(x);
if size(x,3)==3     % convert to grayscale only when the input is RGB
    x=rgb2gray(x);
end
handles.img=x;
guidata(hObject,handles);


% --------------------------------------------------------------------
function savefile_Callback(hObject, eventdata, handles)
% hObject    handle to savefile (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
set(handles.axes2,'HandleVisibility','ON');
axes(handles.axes2);
[filename,pathname]=uiputfile({'*.jpg';'*.bmp';'*.tif';'*.*'},'save image as');
file=strcat(pathname,filename);
i=getimage(gca);
imwrite(i,file);
set(handles.axes2,'HandleVisibility','OFF');


% --------------------------------------------------------------------
function exit_Callback(hObject, eventdata, handles)
% hObject    handle to exit (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
clc;
close all;


% --------------------------------------------------------------------
function preprocessing_Callback(hObject, eventdata, handles)
% hObject    handle to preprocessing (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)


% --------------------------------------------------------------------
function HistEnhance_Callback(hObject, eventdata, handles)
% hObject    handle to HistEnhance (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
set(handles.axes2,'HandleVisibility','ON');
axes(handles.axes2);
I=handles.img;
I_hist=histeq(I);
imshow(I_hist);
set(handles.axes2,'HandleVisibility','OFF');


% --------------------------------------------------------------------
function LiSmooth_Callback(hObject, eventdata, handles)
% hObject    handle to LiSmooth (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
set(handles.axes2,'HandleVisibility','ON');
axes(handles.axes2);
I=handles.img;
I_aver=filter2(fspecial('average',3),I)/255;
imshow(I_aver);
set(handles.axes2,'HandleVisibility','OFF');


% --------------------------------------------------------------------
function LogTrans_Callback(hObject, eventdata, handles)
% hObject    handle to LogTrans (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
set(handles.axes2,'HandleVisibility','ON');
axes(handles.axes2);
I=handles.img;
I_log=double(I);
I_log=log(I_log+1);
imshow(mat2gray(I_log));
set(handles.axes2,'HandleVisibility','OFF');


% --------------------------------------------------------------------
function EdgeDetection_Callback(hObject, eventdata, handles)
% hObject    handle to EdgeDetection (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)


% --------------------------------------------------------------------
function Edge_Sobel_Callback(hObject, eventdata, handles)
% hObject    handle to Edge_Sobel (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
set(handles.axes2,'HandleVisibility','ON');
axes(handles.axes2);
I=handles.img;
BW1=edge(I,'sobel');
imshow(BW1);
set(handles.axes2,'HandleVisibility','OFF');


% --------------------------------------------------------------------
function Edge_Prewitt_Callback(hObject, eventdata, handles)
% hObject    handle to Edge_Prewitt (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
set(handles.axes2,'HandleVisibility','ON');
axes(handles.axes2);
I=handles.img;
BW2=edge(I,'prewitt');
imshow(BW2);
set(handles.axes2,'HandleVisibility','OFF');


% --------------------------------------------------------------------
function Edge_Canny_Callback(hObject, eventdata, handles)
% hObject    handle to Edge_Canny (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
set(handles.axes2,'HandleVisibility','ON');
axes(handles.axes2);
I=handles.img;
BW3=edge(I,'canny');
imshow(BW3);
set(handles.axes2,'HandleVisibility','OFF');
