“This is the 21st day of my participation in the Gwen Challenge in November. See details: The Last Gwen Challenge in 2021”
clc: clear the contents of the Command Window without affecting any variables in the workspace
close: close the current figure window
close all: close all figure windows
clear: clear all variables in the workspace
clear all: clear all variables, functions, and MEX-files from the workspace
Lecture 2
1. Define the file path
% Place the image files in the current folder (the current working directory); otherwise the full path must be given when reading a file.
2. Image reading and output
(1) Imread function
Function: reads image files of various types, such as BMP, GIF, JPEG, PNG, RAS, etc. Call format: A = imread(filename, FMT). filename is the name of the image file (grayscale or color); if the file is not in the current directory or on the MATLAB path, the full path must be given. FMT is the file extension specifying the file type. A is the returned image data matrix.
(2) imshow function
Function: displays an image. Call formats: imshow(I,n): display grayscale image I with n gray levels (an integer, 256 by default). imshow(I,[LOW HIGH]): display grayscale image I using the specified gray range [LOW HIGH]; values below LOW are shown as black and values above HIGH as white (256 gray levels by default). imshow(RGB): display the true-color image RGB. imshow(BW): display the binary image BW. imshow(X,map): display an indexed image, where X is the index data matrix and map is its color map. imshow filename: display the image specified by filename; if the file contains multiple frames, the first one is shown, and the file must be in MATLAB's current directory.
(3) Imwrite function
Function: saves an image to a file. Call format: imwrite(A, 'filename', FMT): A is the image data matrix to save, filename is the file name, and FMT is the file format. imwrite(X, map, 'filename', FMT): X is the data matrix of an indexed image and map is its color map.
(4) WHOS function
Function: displays information about the variables currently in memory; followed by a variable name, it shows information about that specific variable. Call format: whos; % list all current variables. whos imgxx; % show information about the variable imgxx.
(5) Figure function
Function: new graphics display window; Call format: figure;
(6) Subplot function
Function: display several plots in one figure window. Call format: subplot(m,n,k); for example, subplot(1,3,1) creates a 1-row, 3-column layout and selects the first position, as in the sketch below.
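A minimal illustration of the 1×3 layout (img1, img2, img3 are placeholders for images already read into the workspace):
figure;
subplot(1,3,1), imshow(img1), title('first');
subplot(1,3,2), imshow(img2), title('second');
subplot(1,3,3), imshow(img3), title('third');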
Example 1: Read image, display, output
img1=imread('orange.jpg');
img2=imread('house1.jpg');
imshow(img1);
imshow(img2);
imwrite(img1,'orangenew.jpg');
imwrite(img2,'housenew.jpg');
Example 2: transform to gray image, 0~1 gray image, RGB2HSV, RGB2YCBCR
img3=rgb2gray(img1);
img4=rgb2gray(img2);
img5=im2double(img2);
P1=rgb2hsv(img1);
P2=rgb2ycbcr(img1);
Example 3: the realization of image inversion
img6=255-img3;
imshow(img6);
Example 4: Binary image
img6=im2double(img3);
bw=zeros(size(img6));
bw(img6>0.4)=1;       % threshold at 0.4
imshow(bw);
Example 5: Implement blue-green channel interchange
A=imread('house1.jpg');
R=A(:,:,1);    % red component
G=A(:,:,2);    % green component
B=A(:,:,3);    % blue component
% swap the green and blue channels
P1=A;
P1(:,:,3)=G;
P1(:,:,2)=B;
subplot(121),imshow(A),title('Original image');
subplot(122),imshow(P1),title('Green2Blue image');
Example 6: Double the blue component
A=imread('house1.jpg');
A1=A;
R=A1(:,:,1);
G=A1(:,:,2);
B=A1(:,:,3);
A1(:,:,1)=R;
A1(:,:,2)=G;
A1(:,:,3)=2*B;     % double the blue component
subplot(121),imshow(A),title('Original image');
subplot(122),imshow(A1),title('DoubleBlue image');
Example 7: Drawing grayscale image histogram
% Draw the histogram
img2=imread('123123.jpg');
img1=rgb2gray(img2);
figure;
imhist(img1);
axis tight;     % draw the histogram with a fixed number of intervals
% Exercise: read the pictures house1 and orange, convert them to grayscale, and compare the gray histograms of the two images.
Lecture 3_1 Image basic Operations
Review:
(1) RGB2HSV function
Function: converts an RGB image to an HSV image. Call format: img1 = rgb2hsv(RGB). RGB is an RGB color image (a 3-D matrix); the result is a 3-D HSV matrix whose planes are H, S, and V in order, with all values in the range [0,1].
(2) RGB2yCBCR function
img2 = rgb2ycbcr(RGB): converts an RGB image to a YCbCr image.
(3) RGB2Gray function
Function: grayscales a color image. Call format: img3 = rgb2gray(RGB): converts the true-color RGB image to a grayscale image. NEWMAP = rgb2gray(MAP): converts the color map of an indexed image to a grayscale color map.
Functions and experiments in this section:
(1) IMTransform function
Function: geometric transformation of an image. Call format: B = imtransform(A, TFORM, INTERP, param1, val1, param2, val2, ...): applies a spatial transformation to image A; TFORM is the structure generated by the maketform or cp2tform function; INTERP is the interpolation method, one of 'nearest', 'bilinear', 'bicubic'. T = maketform(TRANSFORMTYPE, ...): generates a transformation structure; TRANSFORMTYPE is the transformation type, which can be 'affine' (affine transform), 'projective' (projective transform), 'custom' (user-defined), 'box' (with additional parameters), or 'composite' (multiple transforms applied in sequence).
Experiment:
Image=imread('lotus.jpg');
deltax=20; deltay=20;
T=maketform('affine',[1 0 0;0 1 0;deltax deltay 1]);
NewImage1=imtransform(Image,T,'XData',[1 size(Image,2)],'YData',[1,size(Image,1)],'FillValues',255);
NewImage2=imtransform(Image,T,'XData',[1 size(Image,2)+deltax],'YData',[1,size(Image,1)+deltay],'FillValues',255);
subplot(131),imshow(Image),title('original');
subplot(132),imshow(NewImage1),title('Canvas size invariant translation');
subplot(133),imshow(NewImage2),title('Canvas Size enlargement and Translation');
(2) Imrotate
Function: image rotation. B = imrotate(A, ANGLE, METHOD, BBOX): A is the image to be rotated; ANGLE is the rotation angle in degrees (counterclockwise positive, clockwise negative); METHOD is the interpolation method, one of 'nearest', 'bilinear', 'bicubic' (default 'nearest'); BBOX specifies the size of the returned image: 'crop' makes B the same size as A by cropping the rotated image, while 'loose' (the default) makes B large enough to contain the entire rotated image.
Experiment 2: Image rotation
bird=imread('bird.jpg');
graybird=rgb2gray(bird);
Newgray1=imrotate(graybird,15);
Newgray2=imrotate(graybird,15,'bilinear');
figure;
subplot(121),imshow(Newgray1),title('Rotation 15° (nearest interpolation)');
subplot(122),imshow(Newgray2),title('Rotation 15° (bilinear interpolation)');
imwrite(Newgray1,'rotatebird1.jpg');
imwrite(Newgray2,'rotatebird2.jpg');
(3) Imresize function
Function: image scaling. Call format: B = imresize(A, SCALE, METHOD): returns image B, the original image A scaled by the factor SCALE; B = imresize(A, [NUMROWS NUMCOLS], METHOD): scales A so that B has NUMROWS rows and NUMCOLS columns; if one of them is NaN, MATLAB computes it automatically so that the aspect ratio is preserved; [Y, NEWMAP] = imresize(X, MAP, SCALE, METHOD): scales the indexed image X.
Experiment 3: Image scaling
bird1=imread('bird.jpg');
graybird=rgb2gray(bird1);
Newgray3=imresize(graybird,2.5,'nearest');
Newgray4=imresize(graybird,2.5,'bilinear');
figure;
subplot(121),imshow(Newgray3),title('Magnification 2.5x (nearest interpolation)');
subplot(122),imshow(Newgray4),title('Magnification 2.5 times (bilinear interpolation)');
imwrite(Newgray3,'scale1.jpg');
imwrite(Newgray4,'scale2.jpg');
(4) Fliplr function
B=fliplr(X) : realize the left and right flip of the two-dimensional matrix X along the vertical axis.
(5) Flipud function
B= flipud(X) : flips the two-dimensional matrix X up and down.
(6) Flipdim function
B = flipdim(X,DIM): flips matrix X along the specified dimension DIM: 1 flips up-down (along the rows), 2 flips left-right (along the columns).
Experiment 4: Image mirroring and Mosaic
Image2=imread('lotus.bmp');
HImage=flipdim(Image2,2);     % horizontal mirror
VImage=flipdim(Image2,1);     % vertical mirror
CImage=flipdim(HImage,1);     % central mirror
[h,w]=size(Image2);
NewImage=zeros(2*h,2*w,3);
NewImage=[Image2 HImage;VImage CImage];
figure,imshow(NewImage);
(7) IMadd function
C=imadd(A,B): adds two images. 1) If both A and B are images, they must have the same size; if B is a scalar, C is image A with that value added to every pixel (fractional parts are rounded). 2) If a sum exceeds 255, C is clipped (truncated) to 255; to avoid truncation, C can be stored as uint16, i.e. C=imadd(A,B,'uint16').
Code:
Back=imread('desert.jpg');
Foreground=imread('car.jpg');
result1=imadd(Foreground,-100);
result2=imadd(Back,Foreground);
result3=imadd(Back,result1);
imwrite(result1,'jiabiaoliang.jpg');
imwrite(result2,'jiabeijing.jpg');
imwrite(result3,'jiabiaoliangjiabeijing.jpg');
subplot(221),imshow(Foreground),title('Original target map');
subplot(222),imshow(result1),title('Original target graph plus scalar');
subplot(223),imshow(result2),title('Original object with background');
subplot(224),imshow(result3),title('Add scalar graph to overlay background');
(8) Imsubtract function
Function: subtracts one image from another. Call format: C=imsubtract(A,B): if a difference is less than 0 it is set to 0; the requirements on A and B are the same as for imadd. C=imabsdiff(A,B): takes the absolute value of the difference.
Code:
Back=imread('hallback.bmp');
Foreground=imread('hallforeground.bmp');
result=imabsdiff(Foreground,Back);
imwrite(result,'xiangjian.jpg');
subplot(131),imshow(Back),title('background');
subplot(132),imshow(Foreground),title('Foreground view');
subplot(133),imshow(result),title('Graph subtraction');
(9) Immultiply function
C=immultiply(A,B): multiplies two images element by element.
Code:
Back=im2double(imread('bird.jpg'));
Templet=im2double(imread('birdtemplet.bmp'));
result=immultiply(Templet,Back);
imwrite(result,'xiangcheng.jpg');
% imwrite(result2,'jianbeijing.jpg');
% imwrite(result3,'jianbiaoliangjiabeijing.jpg');
subplot(131),imshow(Back),title('background');
subplot(132),imshow(Templet),title('template');
subplot(133),imshow(result),title('Image multiplication');
(10) imdivide function
C=imdivide(A,B): divides image A by image B element by element.
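No experiment is listed for imdivide in the notes; the following is a minimal sketch. The file names reuse those from the immultiply example above, and the eps offset is an added assumption to avoid division by zero where the template is black:
Back=im2double(imread('bird.jpg'));
Templet=im2double(imread('birdtemplet.bmp'));
result=imdivide(Back,Templet+eps);    % eps avoids dividing by exactly zero
subplot(131),imshow(Back),title('background');
subplot(132),imshow(Templet),title('template');
subplot(133),imshow(result,[]),title('Image division');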
Lecture 3_2 Image Conversion (Fourier Transform)
(1) FFT2 function
Function: two-dimensional fast Fourier transform. Call format: ImgDFT = fft2(X). abs(fft2(X)) returns the Fourier amplitude spectrum (the frequency spectrum), which is the main object of frequency-domain image enhancement; it represents the weight of each sinusoidal (cosine) plane wave in the superposition. The amplitude spectrum directly reflects the frequency content and is the main basis of frequency-domain filtering.
(2) fftShift function
Function: moves the zero-frequency component to the center of the spectrum. In the output of fft2 the spectrum is arranged in the order produced by the computation, not centered on the zero frequency, so the zero frequency lies at the corners of the spectrum matrix and the amplitude-spectrum image shows four bright corners; fftshift rearranges the spectrum so that the zero frequency is at the center.
Experiments: L3_E1_2, L3_E2, L3_E3, L3_E5, L3_E7
L3_E1_2:
clear,clc,close all;
Image=rgb2gray(imread('block.bmp'));
scale=imresize(Image,0.5,'bilinear');
rotate=imrotate(Image,30,'bilinear','crop');
tform=maketform('affine',[1 0 0;0 1 0;20 20 1]);
trans=imtransform(Image,tform,'XData',[1 size(Image,2)],'YData',[1,size(Image,1)]);
Originaldft=abs(fftshift(fft2(Image)));
Scaledft=abs(fftshift(fft2(scale)));
Rotatedft=abs(fftshift(fft2(rotate)));
Transdft=abs(fftshift(fft2(trans)));
figure,imshow(Image),title('original');
figure,imshow(scale),title('proportion');
figure,imshow(rotate),title('rotation');
figure,imshow(trans),title('translation');
figure,imshow(Originaldft,[]),title('original DFT');
figure,imshow(Scaledft,[]),title('proportion DFT');
figure,imshow(Rotatedft,[]),title('rotation DFT');
figure,imshow(Transdft,[]),title('sliding DFT');
% imwrite(scale,'scale.jpg');
% imwrite(rotate,'rotate.jpg');
% imwrite(trans,'trans.jpg');
L3_E2:
Image=rgb2gray(imread('cameraman.jpg'));
scale=imresize(Image,0.5,'bilinear');
rotate=imrotate(Image,30,'bilinear','crop');
tform=maketform('affine',[1 0 0;0 1 0;50 70 1]);
trans=imtransform(Image,tform,'XData',[1 size(Image,2)],'YData',[1,size(Image,1)]);
subplot(2,2,1); imshow(Image),title('original');
subplot(2,2,2); imshow(scale),title('proportion');
subplot(2,2,3); imshow(rotate),title('rotation');
subplot(2,2,4); imshow(trans),title('translation');
Originaldft=abs(fft2(Image));
Scaledft=abs(fft2(scale));
Rotatedft=abs(fft2(rotate));
Transdft=abs(fft2(trans));
figure;
subplot(2,2,1); imshow(Originaldft,[]),title('original DFT');
subplot(2,2,2); imshow(Scaledft,[]),title('proportion DFT');
subplot(2,2,3); imshow(Rotatedft,[]),title('rotation DFT');
subplot(2,2,4); imshow(Transdft,[]),title('translation DFT');
Originaldft2=abs(fftshift(fft2(Image)));
Scaledft2=abs(fftshift(fft2(scale)));
Rotatedft2=abs(fftshift(fft2(rotate)));
Transdft2=abs(fftshift(fft2(trans)));
figure;
subplot(2,2,1); imshow(Originaldft2,[]),title('original DFT2');
subplot(2,2,2); imshow(Scaledft2,[]),title('proportion DFT2');
subplot(2,2,3); imshow(Rotatedft2,[]),title('rotation DFT2');
subplot(2,2,4); imshow(Transdft2,[]),title('translation DFT2');
L3_E3:
Image=rgb2gray(imread('cameraman.jpg'));
scale=imresize(Image,0.5,'bilinear');
rotate=imrotate(Image,30,'bilinear','crop');
tform=maketform('affine',[1 0 0;0 1 0;50 70 1]);
trans=imtransform(Image,tform,'XData',[1 size(Image,2)],'YData',[1,size(Image,1)]);
Originaldft=log(1+abs(fftshift(fft2(Image))));
Scaledft=log(1+abs(fftshift(fft2(scale))));
Rotatedft=log(1+abs(fftshift(fft2(rotate))));
Transdft=log(1+abs(fftshift(fft2(trans))));
figure;
subplot(2,2,1); imshow(Originaldft,[]),title('original DFT');
subplot(2,2,2); imshow(Scaledft,[]),title('proportion DFT');
subplot(2,2,3); imshow(Rotatedft,[]),title('rotation DFT');
subplot(2,2,4); imshow(Transdft,[]),title('translation DFT');
L3_E5:
Image=imread('peppers.jpg');
grayIn=rgb2gray(Image);            % grayscale the color image
[h,w]=size(grayIn);
DFTI=fftshift(fft2(grayIn));
cf=30;
HDFTI=DFTI;
HDFTI(h/2-cf:h/2+cf,w/2-cf:w/2+cf)=0;    % zero out the low frequencies
grayrecovery1=uint8(abs(ifft2(ifftshift(HDFTI))));
LDFTI=zeros(h,w);
LDFTI(h/2-cf:h/2+cf,w/2-cf:w/2+cf)=DFTI(h/2-cf:h/2+cf,w/2-cf:w/2+cf);    % keep only the low frequencies
grayrecovery2=uint8(abs(ifft2(ifftshift(LDFTI))));
subplot(131),imshow(Image),title('original');
subplot(132),imshow(grayrecovery1),title('High-pass filtering');
subplot(133),imshow(grayrecovery2),title('Low pass filtering');
%imwrite(grayrecovery1,'highpass.jpg');
%imwrite(grayrecovery2,'lowpass.jpg');
L3_E7:
Image=imread('desert.jpg');
grayIn=rgb2gray(Image);            % grayscale the color image
[h,w]=size(grayIn);
DCTI=dct2(grayIn);                 % DCT transform
cf=60;
FDCTI=zeros(h,w);
FDCTI(1:cf,1:cf)=DCTI(1:cf,1:cf);
grayOut=uint8(abs(idct2(FDCTI)));
subplot(121),imshow(Image),title('original');
subplot(122),imshow(grayOut),title('Compress reconstruction');
%imwrite(grayOut,'peppersidct.jpg');
whos('DCTI');
Lecture 3_3 Image Transformation (2d Discrete Cosine Transform)
(1) DCT2 function
Function: two-dimensional discrete cosine transform. Call format: ImgDCTI = dct2(X)
(2) Idct2 function
Function: two-dimensional inverse discrete cosine transform. Call format: ImgIDCTI = idct2(X)
Experiments: L3_E6, L3_E7
L3_E6:
Image=imread('desert.jpg');
grayI=rgb2gray(Image);
subplot(131); imshow(Image);       % display the original image
subplot(132); imshow(grayI);       % display the grayscale image
DCTI=dct2(grayI);                  % compute the cosine transform
ADCTI=abs(DCTI);
top=max(ADCTI(:));
bottom=min(ADCTI(:));
ADCTI=(ADCTI-bottom)/(top-bottom)*100;
subplot(133); imshow(ADCTI);
%imwrite(ADCTI,'cameramandct.jpg');
L3_E7:
Image=imread('desert.jpg');
grayIn=rgb2gray(Image);            % grayscale the color image
[h,w]=size(grayIn);
DCTI=dct2(grayIn);                 % DCT transform
cf=60;
FDCTI=zeros(h,w);
FDCTI(1:cf,1:cf)=DCTI(1:cf,1:cf);
grayOut=uint8(abs(idct2(FDCTI)));
subplot(121),imshow(Image),title('original');
subplot(122),imshow(grayOut),title('Compress reconstruction');
%imwrite(grayOut,'peppersidct.jpg');
whos('DCTI');
Lecture 4_1 Image enhancement
Function review:
Im2double function
If the input is of type uint8, uint16, or logical, im2double normalizes its values to the range 0 to 1. double, by contrast, simply changes the class of a variable to double while keeping the same values. To avoid overflow during image processing (image channel data are uint8, which easily overflows in calculations), the image is usually converted to double-precision floating point first. im2uint8 maps values back: inputs less than 0 become 0 and inputs greater than 1 become 255.
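A quick illustration of the difference between double and im2double (the gray values are chosen purely for illustration):
A = uint8([0 64 128 255]);
double(A)       % 0  64  128  255        (same values, class double)
im2double(A)    % 0  0.2510  0.5020  1   (values divided by 255, i.e. normalized to [0,1])
im2uint8(im2double(A))    % back to 0  64  128  255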
(1) Linear gray-level transformation enhancement – piecewise gray-level enhancement
J = imadjust(imageI, [LOW_IN; HIGH_IN], [LOW_OUT; HIGH_OUT], GAMMA). GAMMA is a scalar: if GAMMA<1 the image is brightened; if GAMMA>1 it is darkened; the default (GAMMA=1) gives a linear mapping.
Experiment: L4_1
Image=im2double(rgb2gray(imread('lotus.bmp')));   % convert to double
[h,w]=size(Image);
NewImage1=zeros(h,w);
NewImage2=zeros(h,w);
NewImage3=Image;
a=30/256; b=100/256; c=75/256; d=200/256;          % parameter settings
for x=1:w
    for y=1:h
        if Image(y,x)<a
            NewImage1(y,x)=Image(y,x)*c/a;
        elseif Image(y,x)<b
            NewImage1(y,x)=(Image(y,x)-a)*(d-c)/(b-a)+c;   % piecewise linear transformation
        else
            NewImage1(y,x)=(Image(y,x)-b)*(1-d)/(1-b)+d;
        end
        if Image(y,x)>a && Image(y,x)<b
            NewImage3(y,x)=(Image(y,x)-a)*(d-c)/(b-a)+c;
        end
    end
end
NewImage2=imadjust(Image,[a;b],[c;d]);   % truncated grayscale transform
imwrite(Image,'gray_lotus.bmp');
imwrite(NewImage1,'lotus1.bmp');
imwrite(NewImage2,'lotus2.bmp');
imwrite(NewImage3,'lotus3.bmp');
imshow(Image); title('Original Lotus image');
figure; imshow(NewImage1); title('Piecewise linear grayscale transformation image');
figure; imshow(NewImage2); title('Truncated gray scale transformation image');
figure; imshow(NewImage3); title('Image with high and low gray levels unchanged');
(2) Histogram equalization
Principle: histogram equalization is a method used in image processing to enhance contrast, especially when the useful data of the image have gray values that are fairly close together. It redistributes brightness more evenly over the histogram, enhancing local contrast without affecting the overall contrast; it does this by effectively spreading out the most frequently used gray values. Disadvantage: histogram equalization does not select which data to process, so it may increase the contrast of background noise while decreasing the contrast of the useful signal. The basic idea is to transform the gray-level histogram of the original image into a uniform distribution, enlarging the dynamic range of the pixel gray values and enhancing the contrast. The mapping function from the original image {X} to the processed image {Y} must satisfy two conditions: (1) the mapping function is monotonically increasing; (2) the range of the mapping function does not exceed the dynamic gray range of the original image. Function: the histogram-equalization enhancement function histeq() works by transforming the image histogram. Call format: J = histeq(imageI, N), where N is the number of gray levels after equalization.
Experiment: L4_2 and L4_3; change the number of gray levels to see the effect
L4_2:
Image=rgb2gray(imread('couple.bmp'));
histgram=imhist(Image);            % image histogram
[h,w]=size(Image);
NewImage1=zeros(h,w);
NewImage2=zeros(h,w);
s=zeros(256,1);
s(1)=histgram(1);
for t=2:256
    s(t)=s(t-1)+histgram(t);       % cumulative histogram: the new gray values
end
for x=1:w
    for y=1:h
        NewImage1(y,x)=s(Image(y,x)+1)/(w*h);
    end
end
NewImage2=histeq(Image,256);       % MATLAB built-in equalization
imwrite(NewImage2,'Hcouple.bmp');
imshow(Image); title('original');
figure; imhist(Image); title('Histogram of a couple gray image'); axis tight;
figure; imshow(NewImage1); title('Image after global histogram equalization');
figure; imhist(NewImage1); title('Histogram of image after global histogram equalization'); axis tight;
figure; imshow(NewImage2); title('Image after global Equalization of Matlab functions');
figure; imhist(NewImage2); title('Histogram of image after global equalization of Matlab functions');
axis tight;
L4_3:
image=rgb2gray(imread('couple.bmp'));
newimage1=histeq(image,16);
imshow(newimage1);
newimage2=histeq(image,32);
newimage3=histeq(image,64);
figure;
subplot(2,2,1); imshow(image); title('original');
subplot(2,2,2); imshow(newimage1); title('16 levels');
subplot(2,2,3); imshow(newimage2); title('32 levels');
subplot(2,2,4); imshow(newimage3); title('64 levels');
(3) pseudo-color enhancement
Function: maps the different gray levels of a grayscale image to different colors, linearly or nonlinearly, in order to enhance the image.
Experiment: L4_4
Image=double(imread('Brain.jpg'));
[height,width]=size(Image);
NewImage=zeros(height,width,3);
L=255;
for i=1:height
for j=1:width
if Image(i,j)<=L/4
NewImage(i,j,1)=0;
NewImage(i,j,2)=4*Image(i,j);
NewImage(i,j,3)=L;
else if Image(i,j)<=L/2
NewImage(i,j,1)=0;
NewImage(i,j,2)=L;
NewImage(i,j,3)=-4*Image(i,j)+2*L;
else if Image(i,j)<=3*L/4
NewImage(i,j,1)=4*Image(i,j)-2*L;
NewImage(i,j,2)=L;
NewImage(i,j,3)=0;
else
NewImage(i,j,1)=L;
NewImage(i,j,2)=-4*Image(i,j)+4*L;
NewImage(i,j,3)=0;
end
end
end
end
end
figure;
imshow(uint8(NewImage));
title('Grayscale - Colour transformation false colour enhanced Image');
imwrite(uint8(NewImage),'huicai.bmp');
(4) Frequency domain enhancement – homomorphic filtering enhancement
Principle: an image f(x,y) can be expressed as the product of an illumination component i(x,y) and a reflectance component r(x,y). The basic idea of homomorphic filtering: linear filtering is used to separate additively combined signals, but for non-additively combined signals homomorphic techniques are used instead; the nonlinear (multiplicative or convolutional) combination is first transformed into an additive one by a suitable mathematical operation (a logarithm for the multiplicative case), the result is processed by linear filtering, and finally the image is recovered by the inverse operation.
Function: imfilter ()
J = imfilter(imageI, W, option1, option2, ...). W is the template (a 2-D array) used for the filtering operation. option1, option2, ... are optional: 'replicate' means the virtual boundary is padded by repeating the nearest edge pixels (alternatives are 'circular' and 'symmetric'); 'conv' means the filtering is done by convolution ('corr', correlation, is the default).
Function: fspecial ()
H = fspecial(type, parameters). type selects the filter template: in this experiment 'gaussian' is the Gaussian template; others include 'log', 'laplacian', etc. parameters configures the parameters of the chosen filter type, such as size and standard deviation. For the 'gaussian' filter used here: H = fspecial('gaussian', hsize, sigma) returns a Gaussian low-pass filter of size hsize with standard deviation sigma; the default hsize is [3 3] and the default sigma is 0.5.
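The notes do not include an experiment for homomorphic filtering itself. The following is a minimal sketch of the idea using the two functions above; the file name 'office.jpg', the Gaussian size/sigma, and the gains gammaL/gammaH are assumptions, not values from the course material:
Image = im2double(rgb2gray(imread('office.jpg')));   % assumed RGB input image
logI = log(Image + eps);                   % multiplicative model -> additive (log domain)
lp = fspecial('gaussian', 15, 5);          % Gaussian low-pass to estimate the illumination
illum = imfilter(logI, lp, 'replicate', 'conv');
gammaL = 0.5; gammaH = 2.0;                % attenuate illumination, boost reflectance
logOut = gammaL*illum + gammaH*(logI - illum);
result = mat2gray(exp(logOut) - eps);      % back to the spatial domain, normalized for display
figure, subplot(121), imshow(Image), title('original');
subplot(122), imshow(result), title('homomorphic filtering');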
Lecture 4_2 The image is smooth
I. Spatial domain image smoothing method:
(1) Add noise
Function: artificially add noise to the image
Function: imnoise ()
Call form: h=imnoise(imageI, TYPE, PARAMETERS) TYPE: indicates the added noise TYPE. PARAMETERS refers to the corresponding PARAMETERS of noise, which can represent noise density, etc.
Experiment: L4_6
Image = mat2gray( imread('original_pattern.jpg') ,[0 255]);
noiseIsp=imnoise(Image,'salt & pepper',0.1);   % add salt-and-pepper noise, density 0.1
imshow(noiseIsp,[0 1]); title('Salt and pepper noise image');
noiseIg=imnoise(Image,'gaussian');   % add Gaussian noise, default mean 0, variance 0.01
figure; imshow(noiseIg,[0 1]); title('Gaussian noise image');
(2) Mean filtering
Fspecial () function: h= fSpecial (type, parameters) type: Specify the type of filter. In this experiment, ‘average’ refers to the mean filter, and parameters is the configuration of parameters related to the selected filter type, in this case, the size of the filter.
Function: filter2 ()
J = filter2(h, imageI): filters imageI with the 2-D filter h (for example, a template created by fspecial).
Experiment: L4_7
Image=imread('Letters-a.jpg');
noiseI=imnoise(Image,'gaussian');    % add Gaussian noise
subplot(221),imshow(Image),title('original');
subplot(222),imshow(noiseI),title('Gaussian noise image');
FILTER1=fspecial('average',3);
FILTER2=fspecial('average',7);
result1=filter2(FILTER1,noiseI);     % 3×3 mean filter
result2=filter2(FILTER2,noiseI);     % 7×7 mean filter
subplot(223),imshow(uint8(result1)),title('3×3 mean filtering');
subplot(224),imshow(uint8(result2)),title('7×7 mean filtering ');
%imwrite(noiseI,'gLetter.bmp');
%imwrite(uint8(result1),'gLetter1.bmp');
%imwrite(uint8(result2),'gLetter2.bmp');
(3) Gaussian filtering
Function: imfilter ()
J = imfilter(imageI, W, option1, option2, ...). W is the template (a 2-D array) used for the filtering operation. option1, option2, ... are optional: 'replicate' means the virtual boundary is padded by repeating the nearest edge pixels (alternatives are 'circular' and 'symmetric'); 'conv' means the filtering is done by convolution ('corr', correlation, is the default).
Function: fspecial ()
H = fspecial(type, parameters). type selects the filter template: in this experiment 'gaussian' is the Gaussian template; others include 'log', 'laplacian', etc. parameters configures the parameters of the chosen filter type, such as size and standard deviation. For the 'gaussian' filter used here: H = fspecial('gaussian', hsize, sigma) returns a Gaussian low-pass filter of size hsize with standard deviation sigma; the default hsize is [3 3] and the default sigma is 0.5.
Experiment: L4_8
Image=imread('Letters-a.jpg');
sigma1=0.6; sigma2=10; r=3;                  % parameter settings
NoiseI=imnoise(Image,'gaussian');            % add noise
gausFilter1=fspecial('gaussian',7,sigma1);
gausFilter2=fspecial('gaussian',7,sigma2);
result1=imfilter(NoiseI,gausFilter1,'conv');
result2=imfilter(NoiseI,gausFilter2,'conv');
imshow(Image); title('original');
figure; imshow(NoiseI); title('Gaussian noise image');
figure; imshow(result1); title('sigma1=0.6 Gaussian filtering');
figure; imshow(result2); title('sigma2=10 Gaussian filtering');
%imwrite(uint8(NoiseI),'gr.bmp');
%imwrite(uint8(result1),'gr1.bmp');
%imwrite(uint8(result2),'gr2.bmp');
(4) Median filtering
Function: medfilt2 ()
Call format: J = medfilt2(imageI, [M N]): imageI is the input image and [M N] is the filter size (default 3×3).
Experiment: L4_9
Image=rgb2gray(imread('lotus.bmp'));
noiseI=imnoise(Image,'salt & pepper',0.1);
imshow(noiseI),title('Salt and pepper noise image');
result1=medfilt2(noiseI);          % 3×3 median filtering
figure,imshow(uint8(result1)),title('3×3 median filtering');
result2=medfilt2(noiseI,[5 5]);    % 5×5 median filtering
figure,imshow(uint8(result2)),title('5×5 median filtering');
II. Frequency domain image smoothing method:
round: round to the nearest integer; fix: truncate the fractional part (round toward zero); floor: round down (the largest integer less than or equal to x); ceil: round up (the smallest integer greater than or equal to x).
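A quick check of the four functions (the values are chosen purely for illustration):
x = [-2.6 -1.5 1.5 2.6];
round(x)   % -3  -2   2   3
fix(x)     % -2  -1   1   2
floor(x)   % -3  -2   1   2
ceil(x)    % -2  -1   2   3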
(1) Ideal low-pass filtering
Ideal low-pass filter: H(u,v) = 1 if D(u,v) <= D0, and H(u,v) = 0 otherwise.
Here D0 is the radius of the passband and D(u,v) is the distance from the point (u,v) to the center of the spectrum.
The results obtained with this low-pass filter are shown below. Low-pass filters remove the high-frequency components, so the image is blurred. Because the transition of the ideal low-pass filter is too steep, ringing occurs.
Experiment: L4_10
Image=imread('lena.bmp');
imshow(Image);
FImage=fftshift(fft2(double(Image)));    % Fourier transform and spectrum shifting
[N,M]=size(FImage);
g=zeros(N,M);
r1=floor(M/2); r2=floor(N/2);
figure;
imshow(log(abs(FImage)+1),[]),title('Fourier spectrum');
d0=[5 11 45 68];
for i=1:4
for x=1:M
for y=1:N
d=sqrt((x-r1)^2+(y-r2)^2);
if d<=d0(i)
h=1;
else
h=0;
end
g(y,x)=h*FImage(y,x);
end
end
g= real(ifft2(ifftshift(g)));
figure,imshow(uint8(g)),title(['Ideal low-pass filter D0=',num2str(d0(i))]);
end
(2) Butterworth low-pass filtering
Butterworth low-pass filter: H(u,v) = 1 / (1 + (D(u,v)/D0)^(2n)).
As before, D0 is the radius of the passband and n is the order of the Butterworth filter. As the order increases, the ringing becomes more and more obvious.
Experiment: L4_11
Image=imread('lena.bmp');
Image=imnoise(Image,'gaussian');         % add Gaussian noise
imshow(Image);
FImage=fftshift(fft2(double(Image)));    % Fourier transform and spectrum shifting
[N,M]=size(FImage);
g=zeros(N,M);
r1=floor(M/2); r2=floor(N/2);
figure;
imshow(log(abs(FImage)+1),[]),title('Fourier spectrum');
d0=30;
n=[1 2 3 4 5 6];
for i=1:6
for x=1:M
for y=1:N
d=sqrt((x-r1)^2+(y-r2)^2);
h=1/(1+(d/d0)^(2*n(i)));
g(y,x)=h*FImage(y,x);
end
end
g=ifftshift(g);
g=real(ifft2(g));
figure,imshow(uint8(g)),title(['Butterworth low-pass filtering n=',num2str(n(i))]);
end
(3) Exponential low-pass filtering
Exponential low-pass filter: H(u,v) = exp(-0.5*(D(u,v)/D0)^n).
D0 is the radius of the passband. The transition of the exponential low-pass filter is very smooth, so there is no ringing.
Experiment: L4_12
Image=imread('lena.bmp');
Image=imnoise(Image,'gaussian');         % add noise
imshow(Image);
FImage=fftshift(fft2(double(Image)));    % Fourier transform and spectrum shifting
[N,M]=size(FImage);
g=zeros(N,M);
r1=floor(M/2); r2=floor(N/2);
figure;
imshow(log(abs(FImage)+1),[]),title('Fourier spectrum');
d0=[20 40 80 120];
n=2;
for i=1:4
for x=1:M
    for y=1:N
        d=sqrt((x-r1)^2+(y-r2)^2);
        h=exp(-0.5*(d/d0(i))^n);
        g(y,x)=h*FImage(y,x);
    end
end
g=ifftshift(g);
g=real(ifft2(g));
figure,imshow(uint8(g)),title(['Exponential low-pass filter D0=',num2str(d0(i))]);
end
Lecture 4_3 Image sharpening
1. Image sharpening method in space domain (first-order differential operator) :
Sharpening principle: for an image f(x,y), the gradient, described mathematically, is a vector defined as ∇f = [∂f/∂x, ∂f/∂y]^T.
The gradient has two important properties: ① the gradient points in the direction of the maximum rate of change of f(x,y); ② the magnitude (modulus) of the gradient vector is |∇f| = sqrt((∂f/∂x)^2 + (∂f/∂y)^2).
G[f(x,y)] is called the gradient of image f(x,y), and is in effect the gradient image of f(x,y). Mathematically, the gradient is the increase of f(x,y) per unit distance along the direction of its maximum rate of change. For digital images, differences can be used to approximate the derivatives. Three common difference (differential) operators are given below:
(1) Gradient algorithm
In discrete image processing, the magnitude of the gradient is usually what is used, so the magnitude is customarily just called the "gradient". The first-order partial derivatives are approximated by first-order differences, i.e. gx = f(x+1,y) - f(x,y), gy = f(x,y+1) - f(x,y); the corresponding templates are:
To simplify the computation, the gradient is approximated as grad(f) = |gx| + |gy|.
Experiment: L4_13
Image=im2double(rgb2gray(imread('qtds.jpg')));
subplot(131),imshow(Image),title('Original image');
[h,w]=size(Image);
edgeImage=zeros(h,w);
for x=1:w-1
for y=1:h-1
edgeImage(y,x)=abs(Image(y,x+1)-Image(y,x))+abs(Image(y+1,x)-Image(y,x));
end
end
subplot(132),imshow(edgeImage),title('Gradient image');
sharpImage=Image+edgeImage;
subplot(133),imshow(sharpImage),title('Sharpen image');
% imwrite(edgeImage,'grad.jpg');
(2) Roberts gradient algorithm
gx = f(x,y) - f(x+1,y+1), gy = f(x+1,y) - f(x,y+1); the corresponding templates are:
Edge detection function: edge()
Usage: BW = edge(I, TYPE, PARAMETERS), where I is a grayscale or binary image; TYPE specifies the detection operator; the function returns a binary image BW in which 1 marks edges and 0 everything else; PARAMETERS are the parameters of the chosen operator. BW = edge(I) finds edges in the input image I and returns a binary image BW with 1 at edge pixels and 0 elsewhere; by default edge uses the Sobel method. (edge does not accept binary input directly, so use BW2 = double(BW1) to force a binary image to double first.) BW = edge(I, method) detects edges using the algorithm specified by method. BW = edge(I, method, threshold) returns only the edges stronger than threshold. BW = edge(I, method, threshold, direction) specifies the direction of the edges to detect: the Sobel and Prewitt methods can detect vertical and/or horizontal edges, while the Roberts method can detect edges at 45 degrees, at 135 degrees, or both. This syntax is valid only when method is 'Sobel', 'Prewitt', or 'Roberts'.
Function: imfilter ()
Call form: g = imfilter(f, w, filtering_mode, boundary_options, size_options) where f is the input image, w is the filter mask, and G is the filtered image. Filtering_mode is used to specify whether correlation or convolution is used during filtering. Boundary_options are used to handle boundary zero filling problems, the size of which is determined by the size of the filter. Specific parameter options are shown in the following table:
Parameter / Option / Description:
filtering_mode: 'corr' — filtering is done using correlation (the default); 'conv' — filtering is done using convolution.
boundary_options: X — the input image boundary is extended by padding with the value X (no quotes), default 0; 'replicate' — the boundary is extended by copying the outer boundary values; 'symmetric' — the boundary is extended by mirror-reflecting the image across its border.
size_options: 'full' — the output image is the same size as the extended (padded) image; 'same' — the output image is the same size as the input image, obtained by limiting the offsets of the filter mask center to points inside the original image (the default).
Experiment: L4_14
Image=im2double(rgb2gray(imread('three.jpg')));
figure,imshow(Image),title('Original image');
BW= edge(Image,'roberts');
figure,imshow(BW),title('Edge detection');
H1=[1 0; 0 -1];
H2=[0 1;-1 0];
R1=imfilter(Image,H1);
R2=imfilter(Image,H2);
edgeImage=abs(R1)+abs(R2);
figure,imshow(edgeImage),title('Robert gradient image ');
sharpImage=Image+edgeImage;
figure,imshow(sharpImage),title('Robert sharpen the image ');
% imwrite(BW,'robertBW.jpg');
(3) Sobel algorithm
gx = f(x+1,y-1) - f(x-1,y-1) + 2f(x+1,y) - 2f(x-1,y) + f(x+1,y+1) - f(x-1,y+1)
gy = f(x-1,y+1) - f(x-1,y-1) + 2f(x,y+1) - 2f(x,y-1) + f(x+1,y+1) - f(x+1,y-1)
BW = edge(I, TYPE, PARAMETERS);
Function: imfilter() — see the previous section for usage
Call form: g = imfilter(f, w, filtering_mode, boundary_options, size_options)
The Sobel template can be rotated to expand to eight templates, as shown in textbook P174.
Experiment: L4_15
Image=im2double(rgb2gray(imread('three.jpg')));
figure,imshow(Image),title('Original image');
BW= edge(Image,'sobel');
figure,imshow(BW),title('Edge detection');
H1=[-1 -2 -1;0 0 0;1 2 1];
H2=[-1 0 1;-2 0 2;-1 0 1];
R1=imfilter(Image,H1);
R2=imfilter(Image,H2);
edgeImage=abs(R1)+abs(R2);
figure,imshow(edgeImage),title('Sobel gradient image ');
sharpImage=Image+edgeImage;
figure,imshow(sharpImage),title('Sobel Sharpen images');
% imwrite(BW,'SobelBW.jpg');
% imwrite(sharpImage,'Sobelsharp2.jpg');
2. Frequency domain image sharpening method
(1) Ideal high-pass filtering
Ideal high-pass filter: H(u,v) = 0 if D(u,v) <= D0, and H(u,v) = 1 otherwise.
Here D0 is the radius of the stopband and D(u,v) is the distance from the point (u,v) to the center of the spectrum.
The high-pass filter transfer function and its radial profile are shown below.
Using this filter to process the image, you get the result shown below. Like the ideal low pass filter, the image obtained has obvious ringing phenomenon.
Experiment: L4_17
Image=imread('lena.bmp');
figure,imshow(Image),title('Original image');
FImage=fftshift(fft2(double(Image)));
figure,imshow(log(abs(FImage)),[]),title('Fourier spectrum');
[N,M]=size(FImage);
g=zeros(N,M);
m=floor(M/2);
n=floor(N/2);
d0=[5 11 45 68];
for i=1:4
for x=1:M
for y=1:N
d=sqrt((x-m)^2+(y-n)^2);
if d<=d0(i)
h=0;
else
h=1;
end
g(y,x)=h*FImage(y,x);
end
end
g=ifftshift(g);
g=real(ifft2(g));
figure,imshow(uint8(g)),title(['Ideal high-pass filter D0=',num2str(d0(i))]);
% imwrite(uint8(g),'Ideal high-pass filter D0=.jpg');
end
(2) Butterworth high-pass filtering
Butterworth high-pass filter: H(u,v) = 1 / (1 + (D0/D(u,v))^(2n)).
The Butterworth high-pass filter can also adjust its transition characteristics by changing the order n; the effect is better and the ringing is not obvious.
Experiment: L4_18
clear,clc,close all;
Image=imread('lena.bmp');
figure,imshow(Image);
FImage=fftshift(fft2(double(Image)));
[N,M]=size(FImage);
gbhpf=zeros(N,M);
r1=floor(M/2); r2=floor(N/2);
d0=35; d1=75; n=3;
for x=1:M
for y=1:N
d=sqrt((x-r1)^2+(y-r2)^2);
bh=1/(1+(d0/d)^(2*n));
gbhpf(y,x)=bh*FImage(y,x);
end
end
gbhpf=uint8(real(ifft2(ifftshift(gbhpf))));
figure,imshow(gbhpf),title(['Butterworth high-pass filter D0=',num2str(d0)]);
% imwrite(gbhpf,'Butterworth high-pass filter.jpg ');
gbs=gbhpf+Image;
figure,imshow(gbs),title('Butterworth High-pass Filtering Sharpening ');
%imwrite(gbs,'Butterworth high-pass Filtering sharpening. JPG ');
(3) Exponential high-pass filtering
Exponential high-pass filter function: H(u,v) = exp(-(D0/D(u,v))^n).
The effect is slightly worse than the Butterworth high-pass filter, and the ringing is not obvious.
Experiment: L4_19
clear,clc,close all;
Image=rgb2gray(imread('panda.jpg'));
figure,imshow(Image);
FImage=fftshift(fft2(double(Image)));
[N,M]=size(FImage);
gehpf=zeros(N,M);
r1=floor(M/2); r2=floor(N/2);
d0=35; d1=75; n=3;
for x=1:M
for y=1:N
d=sqrt((x-r1)^2+(y-r2)^2);
eh=exp(-(d0/d)^n);
gehpf(y,x)=eh*FImage(y,x);
end
end
gehpf=uint8(real(ifft2(ifftshift(gehpf))));
figure,imshow(gehpf),title(['Exponential high-pass filter D0=',num2str(d0)]);
% imwrite(gehpf,'Exponential high-pass filter.jpg');
ges=gehpf+Image;
figure,imshow(ges),title('Exponential High-pass Filtering Sharpening');
% imwrite(ges,'Exponential high-pass filtering sharpened. JPG');
(4) Trapezoidal high-pass filtering
Trapezoidal high-pass filter function: H(u,v) = 0 for D(u,v) <= D0; H(u,v) = (D(u,v)-D0)/(D1-D0) for D0 < D(u,v) <= D1; H(u,v) = 1 for D(u,v) > D1.
It produces slight ringing, is simple to compute, and is commonly used.
Experiment: L4_20
clear,clc,close all;
Image=rgb2gray(imread('three.jpg'));
figure,imshow(Image);
FImage=fftshift(fft2(double(Image)));
[N,M]=size(FImage);
gbhpf=zeros(N,M); gehpf=zeros(N,M); gthpf=zeros(N,M);
r1=floor(M/2); r2=floor(N/2);
d0=35; d1=75; n=3;
for x=1:M
for y=1:N
d=sqrt((x-r1)^2+(y-r2)^2);
if d>d1
th=1;
elseif d>d0
th=(d-d0)/(d1-d0);
else
th=0;      % D(u,v) <= D0: fully attenuated
end
gthpf(y,x)=th*FImage(y,x);
end
end
gthpf=uint8(real(ifft2(ifftshift(gthpf))));
figure,imshow(gthpf),title(['Trapezoidal high-pass filter D0=',num2str(d0)]);
% imwrite(gthpf,'Trapezoidal high-pass filter.jpg');
gts=gthpf+Image;
figure,imshow(gts),title('Sharpening trapezoidal High-pass Filtering');
% imwrite(gts,'Sharpening of trapezoidal high-pass filter. JPG');
Lecture 5 Image Restoration
1. Frequency domain image restoration method
Matlab operation basis:
(1) The relation and difference between * and .*
1. When multiplying a scalar by a scalar or a scalar by a matrix, there is no difference between the two, for example: a*b = a.*b; a*B = a.*B; B*a = B.*a (lowercase letters denote scalars and uppercase letters denote matrices, similarly below). 2. For matrix-by-matrix multiplication, * is ordinary matrix multiplication and requires the number of columns of the first matrix to equal the number of rows of the second; .* multiplies the corresponding elements of the two matrices and requires them to have the same number of rows and columns.
(2) The difference between/and./.
1. For two scalars there is no difference: a/b = a./b. 2. For a scalar and a matrix, it matters whether the scalar is the numerator or the denominator: A/b = A./b divides every element of matrix A by the scalar b, but to divide a scalar by every element of a matrix you must use ./ (a./B). 3. For matrix by matrix, A/B can roughly be thought of as A*inv(B) (explicit inversion is strongly discouraged); A./B divides the corresponding elements of A by those of B, so A and B must have the same number of rows and columns.
(3) A^2 and A.^2
- A squared is equal to A times A, matrix by matrix.
- A.^2 is the square of each element of matrix A as the new matrix.
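A small sanity check of these operators (the values are chosen purely for illustration):
A = [1 2; 3 4];
B = [5 6; 7 8];
A*B       % matrix product:       [19 22; 43 50]
A.*B      % element-wise product: [ 5 12; 21 32]
A./B      % element-wise division
A^2       % A*A = [ 7 10; 15 22]
A.^2      % element-wise square = [1 4; 9 16]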
(4) Conv2 (A,B)
C=conv2(A,B): 2-D convolution of matrices A and B. C=conv2(Hcol,Hrow,A): first convolves each column of A with the vector Hcol and then each row with the vector Hrow. C=conv2(...,'shape'): returns the part of the convolution specified by 'shape' ('full', 'same', or 'valid'). If [Ma,Na]=size(A) and [Mb,Nb]=size(B), then for the full convolution size(C)=[Ma+Mb-1, Na+Nb-1].
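The output size is easy to verify directly (a minimal illustration):
A = ones(4,5);      % Ma=4, Na=5
B = ones(3,3);      % Mb=3, Nb=3
C = conv2(A,B);
size(C)             % returns [6 7], i.e. [Ma+Mb-1, Na+Nb-1]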
Frequency domain restoration:
(1) Inverse filtering restoration
Inverse filtering transfer function: M(u,v) = 1/H(u,v), so the restored spectrum is F(u,v) = G(u,v)/H(u,v), where G(u,v) is the spectrum of the degraded image and H(u,v) is the degradation function.
Practice L5_1
clear,clc,close all;
Image=im2double(rgb2gray(imread('flower.jpg')));
window=15;
[n,m]=size(Image);
n=n+window-1;
m=m+window-1;
h=fspecial('average',window);
BlurI=conv2(h,Image);
BlurandnoiseI=imnoise(BlurI,'salt & pepper',0.001);
figure,imshow(Image),title('Original Image');
figure,imshow(BlurI),title('Blurred Image');
figure,imshow(BlurandnoiseI),title('Blurred Image with noise');
imwrite(BlurI,'Blurred.jpg');
imwrite(BlurandnoiseI,'blurrednoise.jpg');
h1=zeros(n,m);
h1(1:window,1:window)=h;
H=fftshift(fft2(h1));
H(abs(H)<0.0001)=0.01;      % avoid division by (near) zero
M=H.^(-1);
r1=floor(m/2); r2=floor(n/2);
d0=sqrt(m^2+n^2)/20;
for u=1:m
for v=1:n
d=sqrt((u-r1)^2+(v-r2)^2);
        if d>d0
            M(v,u)=0;
        end
    end
end
G1=fftshift(fft2(BlurI));
G2=fftshift(fft2(BlurandnoiseI));
f1=ifft2(ifftshift(G1./H));
f2=ifft2(ifftshift(G2./H));
f3=ifft2(ifftshift(G2.*M));
result1=f1(1:n-window+1,1:m-window+1);
result2=f2(1:n-window+1,1:m-window+1);
result3=f3(1:n-window+1,1:m-window+1);
figure,imshow(abs(result1),[]),title('Filtered Image1');
figure,imshow(abs(result2),[]),title('Filtered Image2');
figure,imshow(abs(result3),[]),title('Filtered Image3');
imwrite(abs(result1),'Filteredimage1.jpg');
imwrite(abs(result2),'Filteredimage2.jpg');
imwrite(abs(result3),'Filteredimage3.jpg');
% clear,clc;
% f=[1 1 1 1 1 1 1 1];
% h=[1 1 1];
% f1=[f 0 0];
% h1=[h 0 0 0 0 0 0 0];
% g1=conv(h,f);
% G1=fft(g1);
% H1=fft(h1);
% M=H1.^(-1);
% M(5:10)=0;
% F1=G1./H1;
% f2=ifft(F1);
% F2=G1.*M;
% f3=ifft(F2);
(2) Wiener filtering restoration
J = deconvwnr(I,PSF,NSR)
Deconvolves image I using the Wiener filter algorithm, returning deblurred image J. Image I can be an N-dimensional array. PSF is the point-spread function with which I was convolved. NSR is the noise-to-signal power ratio of the additive noise. NSR can be a scalar or a spectral-domain array of the same size as I. Specifying 0 for the NSR is equivalent to creating an ideal inverse filter.
Practice L5_2
clear,clc,close all;
Image=im2double(rgb2gray(imread('flower.jpg')));
subplot(221),imshow(Image),title('Original Image');
LEN=21;
THETA=11;
PSF=fspecial('motion', LEN, THETA);
BlurredI=imfilter(Image, PSF, 'conv', 'circular');
noise_mean = 0;
noise_var = 0.0001;
BlurandnoisyI=imnoise(BlurredI, 'gaussian', noise_mean, noise_var);
subplot(222), imshow(BlurandnoisyI),title('Simulate Blur and Noise');
imwrite(BlurandnoisyI,'sbn.jpg');
estimated_nsr = 0;
result1= deconvwnr(BlurandnoisyI, PSF, estimated_nsr);
subplot(223),imshow(result1),title('Restoration Using NSR = 0');
imwrite(result1,'runsr0.jpg');
estimated_nsr = noise_var / var(Image(:));
result2 = deconvwnr(BlurandnoisyI, PSF, estimated_nsr);
subplot(224),imshow(result2),title('Restoration Using Estimated NSR');
imwrite(result2,'ruensr.jpg');
Spatial domain restoration:
(1) R_L restoration
Function: deconvlucy ()
J = deconvlucy(I,PSF,NUMIT,DAMPAR,WEIGHT,READOUT)
NUMIT (optional) is the number of iterations (default is 10).
DAMPAR(Optional) is an array that specifies the threshold deviation of the resulting image from image I (according to the standard deviation of Poisson noise) below which damping occurs. For pixels that deviate from their original value within the DAMPAR value, iteration is suppressed. This suppresses noise in those pixels and preserves necessary image detail elsewhere. The default value is 0 (undamped).
WEIGHT(optional) is assigned to each pixel to reflect the quality of the camera’s shot. A bad pixel is assigned a zero weight to exclude the pixel. Instead of giving weight to good pixels, you can adjust the weights based on the number of flat field corrections. The default is an array of units of the same size as the input image I.
READOUT (optional) is an array (or value) corresponding to additional noise (for example, background, foreground noise) and READOUT camera noise variance. READOUT must be in units of images. The default value is 0.
Practice L5_3
clear,clc,close all;
I=im2double(rgb2gray(imread('flower.jpg')));
figure,imshow(I),title('Original image');
PSF=fspecial('gaussian',7,10);     % Gaussian low-pass filter, size [7 7], standard deviation 10
V=0.0001;                           % variance of the additive Gaussian noise
IF1=imfilter(I,PSF);                % blur the image
BlurredNoisy=imnoise(IF1,'gaussian',0,V);    % add Gaussian noise
figure,imshow(BlurredNoisy),title('Gaussian blur plus noise image');
WT=zeros(size(I));                  % weight matrix
WT(5:end-4,5:end-4)=1;
% restore with different parameters
J1=deconvlucy(BlurredNoisy,PSF);
J2=deconvlucy(BlurredNoisy,PSF,50,sqrt(V));
J3=deconvlucy(BlurredNoisy,PSF,100,sqrt(V),WT);
figure,imshow(J1),title('10 iterations');
figure,imshow(J2),title('50 iterations');
figure,imshow(J3),title('100 iterations');
(2) Blind deconvolution restoration
Function: deconvblind()
[J,PSF] = deconvblind(I,INITPSF,NUMIT,DAMPAR,WEIGHT,READOUT)
NUMIT (optional) is the number of iterations (default is 10). DAMPAR (optional) is an array that specifies the threshold deviation of the resulting image from image I (according to the standard deviation of Poisson noise) below which damping occurs. For pixels that deviate from their original value within the DAMPAR value, iteration is suppressed. This suppresses noise in those pixels and preserves necessary image detail elsewhere. The default value is 0 (undamped). WEIGHT (optional) is assigned to each pixel to reflect the quality of the camera’s shot. A bad pixel is assigned a zero weight to exclude the pixel. Instead of giving weight to good pixels, you can adjust the weights based on the number of flat field corrections. The default is an array of units of the same size as the input image I. READOUT (optional) is an array (or value) corresponding to additional noise (for example, background, foreground noise) and READOUT camera noise variance. READOUT must be in units of images. The default value is 0.
Practice L5_4
clear,clc,close all;
I=im2double(rgb2gray(imread('flower.jpg')));
PSF=fspecial('gaussian',7,10);     % Gaussian low-pass filter, size [7 7], standard deviation 10
V=0.0001;                           % variance of the additive Gaussian noise
IF1=imfilter(I,PSF);                % blur the image
BlurredNoisy=imnoise(IF1,'gaussian',0,V);    % add Gaussian noise
WT=zeros(size(I));
WT(5:end-4,5:end-4)=1;
INITPSF=ones(size(PSF));
[J,P]=deconvblind(BlurredNoisy,INITPSF,20,10*sqrt(V),WT);
subplot(221),imshow(BlurredNoisy),title('Gaussian blur plus noise image');
subplot(222),imshow(PSF,[]),title('True PSF');
subplot(223),imshow(J),title('Deblurred Image');
subplot(224),imshow(P,[]),title('Recovered PSF');
imwrite(J,'DeblurredI.jpg');
Lecture 6 Image Segmentation
Matlab Basic Knowledge:
(1), imhist ()
imhist(I): directly displays the gray histogram of image I (256 gray levels by default); imhist(I,n): displays the histogram with the specified number of gray levels n; [count,x]=imhist(I) returns the histogram data, where count is the number of pixels at each gray level and x is the vector of gray levels; x can also be specified in imhist(I,x), and the corresponding histogram can be drawn with stem(x,count).
(2) stem ()
stem(Y) plots the data sequence Y as stems rising from the x-axis, terminated with circles at the data values. stem(X,Y) plots the data values Y at the x-positions given in X. Additional name-value options (for example 'Marker', 'LineWidth', and the 'filled' flag) control the appearance of the stem lines and markers, as used in the experiments below.
(3) Hold on & hold off
hold on keeps the current axes and image from being refreshed, so graphics drawn afterwards are added to the same figure (several plots coexist); hold off removes this property, so a new plot replaces the existing one, i.e. it turns the hold state off.
(4) graythresh ()
The graythresh(image) function takes an image as input and returns a threshold as output; it finds a suitable threshold for the image using the maximum between-class variance (Otsu) method. Passing this threshold to im2bw (which converts a grayscale image to a binary image) turns the original image into a binary image.
(5) im2BW and imbinarize
bw = imbinarize(g); or bw = im2bw(g);. To convert an image to a binary image, the imbinarize function is used, whose argument must be a two-dimensional grayscale image. Both functions convert an image to a binary image; apart from the different argument types, both can also take a threshold in [0,1]: bw = im2bw(g, level); or bw = imbinarize(g, level);. If level is omitted in im2bw, the default is 0.5. imbinarize(g) without a level is equivalent to im2bw(g, T) with T = graythresh(g).
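For example (a minimal sketch; g is any grayscale image already in the workspace):
T = graythresh(g);          % Otsu threshold, a scalar in [0,1]
bw1 = im2bw(g, T);          % older function
bw2 = imbinarize(g, T);     % newer function; the two results should match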
First, threshold segmentation method
(1) Gray histogram threshold selection
Principle: assume the gray histogram of an image is bimodal, with peaks at gray levels Ta and Tb (heights Ha and Hb); this indicates that the brighter and darker regions of the image can be separated well. Taking the valley between the two peaks as the threshold Th gives a good binarization result. Calculation steps: textbook P245 (program code).
Practice L6_1
clear,clc,close all;
Image=rgb2gray(imread('lotus.jpg'));
figure;
imshow(Image),title('Original image');
figure;
imhist(Image);
hist1=imhist(Image);
hist2=hist1;
iter=0;
while 1
[is,peak]=Bimodal(hist1);
if is==0
hist2(1)=(hist1(1)*2+hist1(2))/3;
for j=2:255
hist2(j)=(hist1(j-1)+hist1(j)+hist1(j+1))/3;
end
hist2(256)=(hist1(255)+hist1(256)*2)/3;
hist1=hist2;
iter=iter+1;
if iter>700
break;
end
else
break;
end
end
[trough,pos]=min(hist1(peak(1):peak(2)));
thresh=pos + peak(1);
figure;
stem(1:256,hist2,'Marker','none');
hold on
stem([thresh,thresh],[0,trough],'filled','LineWidth',1);
hold off
result=zeros(size(Image));
result(Image>thresh)=1;
figure;
imshow(result),title('Thresholding based on bimodal histograms');
%imwrite(result,'bilotus1.jpg');
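The loop above calls a helper function Bimodal that is not listed in the notes. A minimal sketch of a plausible implementation, saved as Bimodal.m (assumed behavior: it returns is=1 and the positions of the two peaks when the smoothed 256-bin histogram has exactly two local maxima, otherwise is=0):
function [is,peak]=Bimodal(histgram)
% Judge whether a 256-bin histogram is bimodal and return the peak positions.
count=0;
peak=zeros(1,2);
for j=2:255
    if histgram(j-1)<histgram(j) && histgram(j)>histgram(j+1)
        count=count+1;
        if count>2
            is=0;          % more than two peaks: not bimodal yet
            return;
        end
        peak(count)=j;
    end
end
if count==2
    is=1;                  % exactly two peaks found
else
    is=0;
end
end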
(2) Threshold selection based on pattern classification
① OTSU classification
The Otsu method is optimal in the sense that it maximizes the between-class variance. The basic idea: classes that are properly separated by a threshold should be distinct in their pixel gray values; conversely, the threshold that gives the best separation between classes in terms of gray values is the optimal threshold. Calculation steps: textbook P247.
Practice L6_2
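The notes do not include the code for L6_2. The following is a minimal sketch using MATLAB's built-in Otsu threshold (graythresh) rather than the step-by-step textbook implementation; the file name lotus.jpg is assumed from the neighboring experiments:
clear,clc,close all;
Image=rgb2gray(imread('lotus.jpg'));
T=graythresh(Image);          % Otsu threshold in [0,1]
result=im2bw(Image,T);        % binarize with the Otsu threshold
figure,imshow(Image),title('Original image');
figure,imshow(result),title('Otsu thresholding');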
② Maximum entropy method
Principle: Entropy is an important concept in information theory. This method is often used in the field of data compression. Entropy is a statistical measure used to determine the amount of information contained in a random data source. When the probability of all gray values in the image is the same, the entropy is maximum.
Calculation steps: textbook P248.
Practice L6_3
clear,clc,close all;
Image=rgb2gray(imread('lotus.jpg'));
figure,imshow(Image),title('Original image');
hist=imhist(Image);
bottom=min(Image(:))+1;
top=max(Image(:))+1;
J=zeros(256,1);
for t=bottom+1:top-1
po=sum(hist(bottom:t));
pb=sum(hist(t+1:top));
ho=0;
hb=0;
    for j=bottom:t
        ho=ho-log(hist(j)/po+0.01)*hist(j)/po;
    end
    for j=t+1:top
        hb=hb-log(hist(j)/pb+0.01)*hist(j)/pb;
    end
    J(t)=ho+hb;
end
[maxJ,pos]=max(J(:));
result=zeros(size(Image));
result(Image>pos)=1;
figure,imshow(result);
% imwrite(result,'lotus1shang.jpg');
Second, boundary segmentation
(1) Hough transform detects lines
Definition: the Hough transform is a feature-extraction technique in image processing that can recognize geometric shapes in an image. It maps feature points in image space into a parameter space where they vote, and obtains the set of points fitting a given shape by detecting local maxima of the accumulated votes. The classical Hough transform detects straight lines; it was later extended to arbitrary shapes, most commonly circles and ellipses, and it is robust to noise and deformation. Implementation steps: 1. obtain the edge information of the image; 2. for each edge point, draw a line in the k-b parameter space; 3. whenever a line passes through a cell, increment that cell's value by 1; 4. traverse the k-b space and find the local maxima, whose coordinates (k, b) are the slopes and intercepts of the possible lines in the original image. (The Hough transform is expensive in computation and storage.) Circles can be detected in the same way: the equation becomes (x-a)^2 + (y-b)^2 = r^2, so the Hough parameter space becomes three-dimensional; if the radius of the circle is known, it reduces to a two-dimensional parameter space. [H, theta, rho] = hough(BW) computes the standard Hough transform (SHT) of the binary image BW. The function returns rho (the distance from the origin to the line along a vector perpendicular to the line) and theta (the angle, in degrees, between the x-axis and that vector).
peaks = houghpeaks(H,numpeaks) peaks = houghpeaks(H,numpeaks,Name,Value)
(1) Peaks = Houghpeaks (H,numpeaks) locates peaks in the Hough transform matrix, H, generated by the hough function. numpeaks specifies the maximum number of peaks to identify. The function returns peaks A matrix that holds the row and column coordinates of the peaks. (2) Peaks = Houghpeaks (H, Numpeaks,Name,Value) controls aspects of the operation using name-value pair arguments. Parameter Meaning The matrix numpeaks obtained by H-Hough transform defines the number of peaks to be recognized. The default value is 1. “Threshold” can be considered as the minimum value of peaks, such as 0.2* Max (Max (H)).
lines = houghlines(BW,theta,rho,peaks); lines = houghlines(___,Name,Value,...). (1) lines = houghlines(BW, theta, rho, peaks) extracts the line segments in image BW associated with particular bins of the Hough transform. theta and rho are the vectors returned by the function hough. peaks is the matrix returned by houghpeaks containing the row and column coordinates of the Hough-transform bins to search for line segments. The return value lines is a struct array whose length equals the number of merged line segments found. (2) lines = houghlines(___,Name,Value,...) extracts line segments in image BW with the specified name-value parameters affecting the operation. The return value lines, a struct array, contains the found lines, and its length equals the number of merged line segments. Each element of the struct array has the following fields:
Calculation steps: textbook P256 (program code).
Exercise L6_4_1, L6_4_2:
L6_4_1:
Image=rgb2gray(imread('houghsource.bmp'));
bw=edge(Image,'canny');
figure,imshow(bw);
[h,t,r]=hough(bw,'RhoResolution',0.5,'ThetaResolution',0.5);
figure,imshow(h,[],'XData',t,'YData',r,'InitialMagnification','fit');
xlabel('\theta'),ylabel('\rho');
axis on,axis normal,hold on;
P=houghpeaks(h,2);
x=t(P(:,2));
y=r(P(:,1));
plot(x,y,'s','color','r');
lines=houghlines(bw,t,r,P,'FillGap',5,'MinLength',7);
figure,imshow(Image);
hold on;
max_len=0;
for i=1:length(lines)
xy=[lines(i).point1;lines(i).point2];
plot(xy(:,1),xy(:,2),'LineWidth',2,'Color','g');
plot(xy(1,1),xy(1,2),'x','LineWidth',2,'Color','y');   % mark the start of the segment
plot(xy(2,1),xy(2,2),'x','LineWidth',2,'Color','r');   % mark the end of the segment
end
L6_4_2:
img = imread('line.jpg');
subplot(221), imshow(img), title('original image');
img_gray = rgb2gray(img);
BW = edge(img_gray,'canny');        % the Canny edges of the image
subplot(223), imshow(BW), title('image edge');
[H,Theta,Rho] = hough(BW);          % the theta and rho of the transformed space
subplot(222), imshow(H,[],'XData',Theta,'YData',Rho,'InitialMagnification','fit'),...
title('rho\_theta space and peaks');
xlabel('\theta'), ylabel('\rho');
axis on, axis normal, hold on;
P = houghpeaks(H,5,'threshold',ceil(0.3*max(H(:))));   % locate the top 5 peaks
x = Theta(P(:,2));
y = Rho(P(:,1));
plot(x,y,'*','color','r');
(2) Boundary tracking
In the MATLAB Image Processing Toolbox there are two functions for boundary tracing: bwtraceboundary and bwboundaries. A. B = bwboundaries(BW) traces the outer boundaries of objects and the boundaries of holes inside objects; it can also trace the outermost objects (parents) and the objects completely enclosed by them (children). BW must be a binary image in which nonzero pixels belong to objects and zero pixels are background. The return value B is a cell array with P elements, where P is the number of objects and holes; each cell contains a Q-by-2 array, where Q is the number of boundary pixels of the corresponding object, and each row holds the row and column coordinates of a boundary pixel. B. B = bwboundaries(BW,CONN) specifies the connectivity used when tracing parent and child boundaries. CONN can be 8 or 4, meaning 8-connected or 4-connected; if not specified, 8-connected is the default. B is a P-by-1 cell array, P being the number of connected components; each element of B is a Q-by-2 matrix whose rows are the coordinates of the boundary pixels of that component, with the first column the row (y) coordinate and the second column the column (x) coordinate, so boundary(:,2) is x and boundary(:,1) is y. C. B = bwboundaries(BW,CONN,OPTIONS) accepts an option string: with 'noholes', only object boundaries (no hole boundaries) are traced; by default both object and hole boundaries are traced.
Practice L6_5
Image=im2bw(imread('algae.jpg'));
Image=1-Image;    % bwboundaries targets white regions; here the targets are dark, so invert the image
[B,L]=bwboundaries(Image);
figure,imshow(L),title('Divided area');
hold on;
for i=1:length(B)
boundary=B{i};
plot(boundary(:,2),boundary(:,1),'m','LineWidth',2);
end