I. Introduction to morphological detection

1 Image analysis and preprocessing

Captured images contain random disturbances, so the image carries a certain amount of noise. To eliminate this irrelevant information, the image is preprocessed before detection.

1.1 Grayscale conversion

To reduce the amount of computation, the three-channel RGB image must first be converted to a single-channel grayscale image. The weighted-average method is adopted, in which the psychological grayscale formula assigns each channel a weight according to the sensitivity of the human eye to the three primary colors:

Gray = 0.299R + 0.587G + 0.114B    (1)

In Formula (1), R, G and B are the values of the red, green and blue channels respectively. The grayscale result is shown in Figure 1(a).
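As an illustration of Formula (1), here is a minimal MATLAB sketch of the weighted-average conversion; the variable name rgb_img and the use of mixed1.jpg (the test image from the source-code section below) are assumptions, and the built-in rgb2gray applies the same standard weighting.

% Weighted-average grayscale conversion (Formula (1))
rgb_img = imread('mixed1.jpg');                     % assumed RGB test image
R = double(rgb_img(:,:,1));                         % red channel
G = double(rgb_img(:,:,2));                         % green channel
B = double(rgb_img(:,:,3));                         % blue channel
gray_img = uint8(0.299*R + 0.587*G + 0.114*B);      % weights follow human-eye sensitivity
% Equivalent built-in call: gray_img = rgb2gray(rgb_img);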

1.2 Smoothing

To avoid mistaking background for defects as far as possible, the image is smoothed. Smoothing blurs the defect boundaries, but it helps to suppress background interference. Mean filtering is adopted for denoising:

f(x,y) = (1/(mn)) Σ_{(s,t)∈Sxy} g(s,t)    (2)

In Formula (2), m and n are the length and width of the chosen filter kernel, g is the input image, f is the filtered image, and Sxy is the set of pixel locations covered by the kernel centered at (x,y). The smoothing result is shown in Figure 1(b). The drawback of mean filtering is that details such as edges are lost; therefore, once seed points have been found, region growing is performed on the unsmoothed image to locate the defect boundaries.

Figure 1 Mean filtering
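A minimal sketch of the mean-filtering step, assuming a 3-by-3 kernel (m = n = 3) and the gray_img variable from the previous sketch:

% Mean filtering (Formula (2)) with an m-by-n averaging kernel
m = 3; n = 3;                                       % assumed kernel size
avg_kernel = fspecial('average', [m n]);            % every coefficient equals 1/(m*n)
smooth_img = imfilter(gray_img, avg_kernel, 'replicate');   % replicate the border to avoid darkened edges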

2 Algorithm principle

2.1 Threshold segmentation

Threshold segmentation is the simplest and most basic image segmentation method; its performance is relatively stable, its computational cost is small and it runs fast. It mainly includes global, local and adaptive threshold segmentation. The algorithm is based on a threshold T: pixels whose gray value is greater than T form the foreground, and the remaining pixels form the background. The transformation function is:

f(x,y) = 1 if g(x,y) > T, and f(x,y) = 0 otherwise    (3)

In Formula (3), T is the threshold, g(x,y) is the gray value of pixel (x,y) in the original image, and f(x,y) is the gray value of pixel (x,y) after segmentation. The threshold segmentation result is shown in Figure 2.

Figure 2 Threshold segmentation results
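A sketch of the global thresholding of Formula (3); the threshold value used here is an assumption and would in practice be tuned to the images (or chosen automatically with Otsu's method):

% Global threshold segmentation (Formula (3)): foreground where the gray value exceeds T
T = 100;                                            % assumed threshold value
bw = smooth_img > T;                                % logical mask: 1 = foreground, 0 = background
% Automatic alternative using Otsu's method:
% bw = im2bw(smooth_img, graythresh(smooth_img));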

2.2 Morphological operations

Mathematical morphology, usually shortened to morphology, works through neighborhood operations: a logical operation is carried out between a structuring element and the corresponding pixels of the image. The result is determined mainly by the size and shape of the structuring element and by the rule of the logical operation. Morphological operations include dilation, erosion, the morphological gradient, the top-hat and black-hat transforms, and the opening and closing operations; all of them are built from the two basic operations, erosion and dilation.

Erosion removes boundary points of a contour and makes the boundary shrink inward; it is mainly used to thin the target contour of a binary image and to remove noise:

A ⊖ B = { (x,y) | B(x,y) ⊆ A }    (4)

In Formula (4), A is the original image and B is the structuring element. An origin is first defined for B. When the origin of B is moved to pixel (x,y) of image A, the gray value of (x,y) is set to 1 if every pixel of A covered by a 1-pixel of B is also 1; otherwise it is set to 0. Erosion is illustrated in Figure 3.

Dilation is the opposite of erosion: it makes the boundary expand outward, and it is mainly used to fill gaps left after segmentation and to connect adjacent, disconnected contours. The formula is:

A ⊕ B = { (x,y) | B(x,y) ∩ A ≠ ∅ }    (5)

In Formula (5), A is the original image and B is the structuring element. An origin is first defined for B. When the origin of B is moved to pixel (x,y) of image A, the gray value of (x,y) is set to 1 if at least one 1-pixel of B coincides with a 1-pixel of A; otherwise it is set to 0.
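A small sketch of erosion and dilation applied to the binary mask bw from the thresholding sketch; the 3-by-3 square structuring element is an assumed choice:

% Erosion (Formula (4)) and dilation (Formula (5)) with a square structuring element
se = strel('square', 3);                            % assumed 3x3 structuring element B
eroded  = imerode(bw, se);                          % boundaries shrink inward, small noise disappears
dilated = imdilate(bw, se);                         % boundaries expand outward, nearby contours connect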

The opening operation is erosion followed by dilation of the eroded result; it is mainly used for denoising and for counting objects. The formula is:

(A ⊖ B) ⊕ C    (6)

In Formula (6), A is the original image, and B and C are structuring elements. The effect of the opening operation is illustrated in Figure 4, and Figure 5 shows the opening result; a MATLAB sketch of the opening operation, combined with the seed-candidate step of Section 2.3.1, is given at the end of that subsection.

Figure 4 Effect of the opening operation
Figure 5 Morphological opening result

2.3 Region growing

The idea of region growing is to merge a neighborhood (4-neighborhood, 8-neighborhood, etc.) into a single region. A seed point is first required as the starting point of growth; the pixels in the seed point's neighborhood that satisfy the similarity criterion are merged into the seed region. The pixels of this region then serve as new seed points, and growth continues until no more points satisfy the criterion. When growth stops, all pixels reached from the seed point form the grown region. The segmentation result is therefore determined by the initial seed point and the similarity criterion.

2.3.1 Seed point selection and detection

After threshold segmentation and morphological processing, the center of each contour in the binary image is taken as a candidate seed point. If a candidate seed point lies well inside a defect, the depth values along any direction through it show a high-low-high pattern. The detection template is designed as shown in Figure 6, and the depth changes of the seed point along the 0°, 45°, 90° and 135° directions are computed to judge whether such a high-low-high change is present.

Figure 6 Detection template

The mean gray values of the R pixels on the left and right sides of the seed point in the detection template and the grayscale change in each direction are computed by Formulas (7) to (9); Formula (10) then decides whether the depth-shape change Sw occurs. In Formula (10), I(u) is the gray value of the u-th pixel in the detection template, w = 1, 2, 3, 4 denotes the 0°, 45°, 90° and 135° directions respectively, Mw is the minimum gray value on the two sides of direction w, and T1 is the threshold for the morphological change. A candidate seed point that does not satisfy the depth-shape criterion is discarded.
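Continuing the sketches above, opening the mask and taking the centroid of each remaining region as a candidate seed point could look as follows; the structuring element and the 8-connectivity are assumptions, and the depth-change test of Formulas (7) to (10) is not reproduced here.

% Opening (Formula (6)): erosion followed by dilation, removes small foreground specks
opened = imopen(bw, strel('square', 3));            % assumed structuring element
% Candidate seed points: the center (centroid) of each remaining connected region
cc = bwconncomp(opened, 8);                         % 8-connected foreground regions
props = regionprops(cc, 'Centroid');
seed_candidates = round(cat(1, props.Centroid));    % one [x y] row per candidate seed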

2.3.2 Growth process

The specific region-growing procedure is as follows (a MATLAB sketch of this procedure is given after the experimental-process description below):
(1) Place the seed point coordinates into the seed point set seeds.
(2) Pop one seed point from the set and test the similarity criterion on the pixels in its 8-neighborhood; the points that satisfy the criterion are treated as new seed points and placed into seeds.
(3) Store the popped seed point into the seed set S.
(4) If the seed point set is empty, go to step (5); if there are still elements in the set, go back to step (2).
(5) Generate an image I with the same height and width as the input image and all pixel values set to 0.
(6) Set the pixels of image I at the coordinates stored in S to 255 to obtain the segmented image I'.

The similarity criterion used for growth is:

|gray(seed) - gray(8_n)| < Thresh    (11)

In Formula (11), gray(seed) is the gray value of the seed point in the current round, gray(8_n) is the gray value of each point in its 8-neighborhood, and Thresh is the chosen threshold. The region-growing result is shown in Figure 7.

Figure 7 Region-growing results

3 Experimental process

Image segmentation is the process of dividing an image into meaningful foreground and background according to preset rules. Region growing is a good segmentation algorithm, provided that suitable seed points can be found; a single segmentation algorithm on its own easily runs into exactly this difficulty. Here, morphology and threshold segmentation are combined to find suitable seed points, which helps the region-growing algorithm complete the segmentation task and reach the required segmentation quality. The segmentation procedure is shown in Figure 8.

Figure 8 Segmentation flowchart

The input image is first converted to a single-channel grayscale image and then filtered to remove noise and smooth it. An appropriate threshold is chosen for threshold segmentation, and the opening operation is applied to remove the smaller foreground regions left after segmentation. The centers of the remaining foreground regions are used as seed points, region growing starts from them, and the foreground that meets the requirements is finally obtained.
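To make the six steps and Formula (11) concrete, here is the compact region-growing sketch referred to above. It is written as a standalone function file (region_grow.m); the [x y] seed format, the single-seed call and the Thresh value in the example are assumptions.

% Region growing with the similarity criterion of Formula (11)
% Example call (assumed values): I_seg = region_grow(gray_img, seed_candidates(1,:), 10);
function I_out = region_grow(gray_img, seed_xy, Thresh)
gray_img = double(gray_img);
[M, N] = size(gray_img);
I_out = zeros(M, N, 'uint8');                       % segmented image, grown region set to 255
visited = false(M, N);                              % pixels already popped as seeds
seeds = seed_xy;                                    % seed point set, one [x y] row per seed
while ~isempty(seeds)
    p = seeds(1,:); seeds(1,:) = [];                % pop one seed point
    x = p(1); y = p(2);
    if visited(y, x), continue; end
    visited(y, x) = true;
    I_out(y, x) = 255;                              % store the popped seed in the grown region
    for dx = -1:1                                   % scan the 8-neighborhood
        for dy = -1:1
            nx = x + dx; ny = y + dy;
            if nx < 1 || nx > N || ny < 1 || ny > M || visited(ny, nx)
                continue;
            end
            if abs(gray_img(ny, nx) - gray_img(y, x)) < Thresh   % Formula (11)
                seeds(end+1, :) = [nx ny];          %#ok<AGROW> grow from this neighbor next
            end
        end
    end
end
end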

II. Some source code

% Defect extraction: preprocessing, Sobel gradient, thresholding and morphology
clear all, close all;
I = imread('mixed1.jpg');
I = rgb2gray(I);
[M,N] = size(I);
figure(1); subplot(131), imshow(I); title('Raw grayscale image');
medfilt_I = midfilt(I,3);                  % 3x3 median filtering (midfilt is a user function; medfilt2(I,[3 3]) is the built-in equivalent)
subplot(132), imshow(medfilt_I); title('Median filtered image');
P = adapthisteq(medfilt_I,'NumTiles',[4 4]); P1 = imadjust(P); P2 = midfilt(P1,3);   % contrast enhancement, brightness adjustment and a second median smoothing
subplot(133), imshow(P2); title('Enhanced contrast and brightness image');
H = im2double(P2); g_gradient = SobelFilter(H);   % convert to double and compute the Sobel gradient
figure(2), subplot(221); imshow(g_gradient); title('Gradient value');
th = 0.997;                                % gradient threshold
g_T = uint8(zeros(M,N)); ind = find(g_gradient > th); g_T(ind) = 255;   % set pixels whose gradient exceeds the threshold to the highest gray level
subplot(222); imshow(g_T); title({'Target profile extracted from gradient value';'(there are also large-gradient areas around the image)'});
lap = uint8(zeros(M,N));                   % initialize a black image of the same size
for i = 2:M-1
    for j = 2:N-1
        lap(i,j) = g_T(i,j);               % copy g_T into lap everywhere except the image border
    end
end
subplot(223), imshow(lap); title('Defect target profile');
se90 = strel('line',3,90); se0 = strel('line',3,0);   % two linear structuring elements
BW2 = imdilate(lap,[se90 se0]);            % dilate the target contour lap
subplot(224); imshow(BW2); title('Dilated defect profile after repair');
J = imfill(BW2,'holes');                   % fill the interior of the contours
figure(3); subplot(121), imshow(I); title('Raw grayscale image');
subplot(122); imshow(J); title('Extracted binary image of defect');
[L,m] = bwlabel(J,8); status = regionprops(L,'all');   % label the 8-connected regions in J and measure their attributes
if max(lap(:))                             % proceed only if some edge pixels were extracted
    point_defect_number = 0;               % initialize the defect counters
    line_defect_number = 0;
    block_defect_number = 0;
    point_area_max = 200;                  % maximum area of a point-defect region
    line_area_max = 3000;                  % maximum area of a line-defect region
    for i = 1:m
        if status(i).Area < point_area_max   % point defect
            point_defect_number = point_defect_number + 1;
            rectangle('Position',status(i).BoundingBox,'Curvature',[1 1],'EdgeColor','r');
        elseif status(i).Area < line_area_max && (status(i).MajorAxisLength/status(i).MinorAxisLength > 20)   % line defect
            line_defect_number = line_defect_number + 1;
            rectangle('Position',status(i).BoundingBox,'Curvature',[1 1],'EdgeColor','g');
        else                                 % block defect
            block_defect_number = block_defect_number + 1;
            rectangle('Position',status(i).BoundingBox,'Curvature',[1 1],'EdgeColor','b');
        end
    end
    hold off;
    fprintf('Test result:\n point defect number: %d\n', point_defect_number);
    fprintf('Line defect number: %d\n', line_defect_number);
    fprintf('Block defect number: %d\n', block_defect_number);
 else
  fprintf('Test result:\n no defect\n');
end
% Sobel gradient of image f
function g = SobelFilter(f)
len = 1;                                    % pad width: one row/column on each side
f_pad = padarray(f,[len,len]);              % zero-pad the image border
Lx = [-1 -2 -1; 0 0 0; 1 2 1];              % Sobel template for the x direction
Ly = [-1 0 1; -2 0 2; -1 0 1];              % Sobel template for the y direction
gx = conv2(f_pad, Lx, 'valid');             % gradient components via 2-D convolution
gy = conv2(f_pad, Ly, 'valid');
g  = sqrt(gx.^2 + gy.^2);                   % gradient magnitude, same size as f
end


III. Operation results

IV. MATLAB version and references

1 MATLAB version: 2014a

2 References
[1] Cai Limei. MATLAB Image Processing: Theory, Algorithms and Case Analysis [M]. Tsinghua University Press, 2020.
[2] Yang Dan, Zhao Haibin, Long Zhe. Detailed Explanation of MATLAB Image Processing Examples [M]. Tsinghua University Press, 2013.
[3] Zhou Pin. MATLAB Image Processing and Graphical User Interface Design [M]. Tsinghua University Press, 2013.
[4] Liu Chenglong. Proficient in MATLAB Image Processing [M]. Tsinghua University Press, 2015.