
I. Principle

The video sequence captured by a camera is continuous in time. If there is no moving object in the scene, successive frames change very little; if there is a moving object, there are significant changes between successive frames.

The temporal (inter-frame) difference method is based on this observation. Because a target moves through the scene, its image occupies different positions in different frames. The algorithm performs a difference operation on two or three consecutive frames: corresponding pixels are subtracted and the absolute value of the gray-level difference is examined. When the absolute value exceeds a certain threshold, the pixel is judged to belong to a moving target, thereby realizing target detection.

[Figure 2-2 Flow of the two-frame difference method]

1. Two-frame difference method

The flow of the two-frame difference method is shown in Figure 2-2. Denote the images of frame n and frame n−1 in the video sequence as F_n and F_{n−1}, and the gray values of their corresponding pixels as F_n(x, y) and F_{n−1}(x, y). According to Formula 2.13, the gray values of corresponding pixels are subtracted and the absolute value is taken to obtain the difference image D_n:

D_n(x, y) = |F_n(x, y) − F_{n−1}(x, y)|    (2.13)

Set a threshold T and binarize the difference image pixel by pixel according to Equation 2.14 to obtain the binary image R_n′. Points with gray value 255 are foreground (moving-target) points, and points with gray value 0 are background points. Connectivity analysis of the image R_n′ finally yields an image R_n containing the complete moving target.

R_n′(x, y) = 255,  if D_n(x, y) > T
R_n′(x, y) = 0,    otherwise    (2.14)
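The two-frame pipeline described above (Eq. 2.13 difference, Eq. 2.14 binarization) can be sketched in a few lines. This is a minimal illustration, not the article's MATLAB code: the tiny frames and the threshold T below are made-up example data, and plain Python lists stand in for grayscale images.

```python
def frame_difference(prev, curr, T=30):
    """Return the binary image R_n': 255 where |F_n - F_{n-1}| > T, else 0."""
    return [[255 if abs(c - p) > T else 0 for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

# Frame n-1: static background; frame n: a bright 2x2 "target" has appeared.
f_prev = [[10, 10, 10, 10],
          [10, 10, 10, 10],
          [10, 10, 10, 10]]
f_curr = [[10, 10, 10, 10],
          [10, 200, 200, 10],
          [10, 200, 200, 10]]

r = frame_difference(f_prev, f_curr, T=30)
# The four changed pixels are marked foreground (255); all others stay background (0).
```

A connectivity analysis (e.g. connected-component labeling) would then group the 255-valued pixels into the final target region R_n.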

2. Three-frame difference method

The two-frame difference method is suitable for scenes where the target moves slowly. When the target moves fast, its position differs greatly between adjacent frames, and subtracting the two frames cannot recover the complete moving target. The three-frame difference method was therefore proposed on the basis of the two-frame difference method.

[Figure 2-3 Flow of the three-frame difference method]

Figure 2-3 shows the flow of the three-frame difference method. Denote the images of frames n+1, n, and n−1 in the video sequence as F_{n+1}, F_n, and F_{n−1}, and the gray values of their corresponding pixels as F_{n+1}(x, y), F_n(x, y), and F_{n−1}(x, y). Difference images D_{n+1} and D_n are obtained according to Equation 2.13. The two difference images are then combined with a logical AND (intersection) according to Equation 2.15 to obtain the image D_n′, which is thresholded and connectivity-analyzed to finally extract the moving target.

D_n′(x, y) = D_{n+1}(x, y) ∩ D_n(x, y)    (2.15)
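The intersection step of Eq. 2.15 can be sketched as follows. Again this is an illustration with made-up data, not the article's code: a 1×4 "frame" row and a bright pixel sliding right stand in for a fast-moving target; only pixels that changed in both adjacent frame pairs survive the AND.

```python
def binary_diff(a, b, T=30):
    """Binary difference image per Eqs. 2.13-2.14 (1 = changed, 0 = unchanged)."""
    return [[1 if abs(x - y) > T else 0 for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]

def three_frame_diff(f_prev, f_curr, f_next, T=30):
    d_n  = binary_diff(f_curr, f_prev, T)   # D_n   from |F_n   - F_{n-1}|
    d_n1 = binary_diff(f_next, f_curr, T)   # D_n+1 from |F_n+1 - F_n|
    # Eq. 2.15: keep a pixel only if it is foreground in BOTH difference images.
    return [[a & b for a, b in zip(r1, r2)] for r1, r2 in zip(d_n, d_n1)]

# A bright target (200) sliding right over a dark background (10).
f1 = [[200, 10, 10, 10]]
f2 = [[10, 200, 10, 10]]
f3 = [[10, 10, 200, 10]]

mask = three_frame_diff(f1, f2, f3)
# mask == [[0, 1, 0, 0]]: only the target's position in the middle frame remains,
# so the "ghost" positions from frames n-1 and n+1 are suppressed.
```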

In the inter-frame difference method, the choice of the threshold T is very important. If T is too small, the noise in the difference image cannot be suppressed; if T is too large, part of the target information in the difference image may be masked. Moreover, a fixed threshold T cannot adapt to lighting changes in the scene. To address this, a method has been proposed that adds an illumination-sensitive term to the judgment condition, modifying it as:

|F_n(x, y) − F_{n−1}(x, y)| > T + λ · (1/N_A) · Σ_{(x,y)∈A} |F_n(x, y) − F_{n−1}(x, y)|    (2.16)

Here N_A is the total number of pixels in the region A to be detected, λ is the illumination suppression coefficient, and A can be set to the entire frame. The added term represents the change of illumination over the whole frame. If the lighting in the scene varies little, the value of this term tends to zero. If the lighting changes significantly, the term grows markedly, adaptively raising the right-hand side of Equation 2.16; the judgment then yields no moving target, which effectively suppresses the influence of lighting changes on the detection result.
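The adaptive judgment of Eq. 2.16 can be sketched as below. The frames, T, and λ are made-up illustration data (λ = 2 is an arbitrary choice, not a value from the article), and A is taken as the whole frame.

```python
def adaptive_foreground(prev, curr, T=20, lam=2.0):
    """Flag pixels per Eq. 2.16: threshold = T + lam * mean |F_n - F_{n-1}| over A."""
    diffs = [abs(c - p) for prow, crow in zip(prev, curr) for p, c in zip(prow, crow)]
    mean_change = sum(diffs) / len(diffs)   # (1/N_A) * sum over A of |F_n - F_{n-1}|
    thresh = T + lam * mean_change          # adaptive right-hand side of Eq. 2.16
    return [[255 if abs(c - p) > thresh else 0 for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

# Case 1: a global brightness jump of +25 (illumination change, no motion).
# A fixed T=20 would flag every pixel, but the adaptive threshold rises to
# 20 + 2*25 = 70, so nothing is flagged.
f_prev = [[10] * 4, [10] * 4]
f_curr = [[35] * 4, [35] * 4]
no_motion = adaptive_foreground(f_prev, f_curr)

# Case 2: the same brightness jump plus one genuinely moving bright pixel;
# its difference (190) still exceeds the raised threshold, so it alone is flagged.
f_curr2 = [[200, 35, 35, 35], [35] * 4]
motion = adaptive_foreground(f_prev, f_curr2)
```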

 

3. Comparison of the two-frame and three-frame difference methods

Figure 2-5 shows experimental results of moving-target detection with the inter-frame difference methods on the lab (selfie) sequence. Panel (b) shows the detection result of the two-frame difference method, and panel (c) that of the three-frame difference method. In the lab sequence the target moves quickly, so its position differs markedly between frames. The target detected by the two-frame difference method exhibits a "ghosting" (double-image) phenomenon, while the three-frame difference method detects a relatively complete moving target.

[Figure 2-5 Detection results of the inter-frame difference methods on the lab sequence: (b) two-frame, (c) three-frame]

In summary, the inter-frame difference method is simple in principle and computationally cheap, so it can quickly detect moving targets in a scene. However, the experimental results show that the detected target is incomplete and contains internal "holes": where the target's position changes little between adjacent frames, the overlapping part of the target in the two frames produces no difference and cannot be detected. The frame difference method is therefore rarely used alone for target detection and is usually combined with other detection algorithms.

II. Source code

function varargout = facedetecion(varargin)
% FACEDETECION MATLAB code for facedetecion.fig
%      FACEDETECION, by itself, creates a new FACEDETECION or raises the existing
%      singleton*.
%
%      H = FACEDETECION returns the handle to a new FACEDETECION or the handle to
%      the existing singleton*.
%
%      FACEDETECION('CALLBACK',hObject,eventData,handles,...) calls the local
%      function named CALLBACK in FACEDETECION.M with the given input arguments.
%
%      FACEDETECION('Property','Value',...) creates a new FACEDETECION or raises the
%      existing singleton*.  Starting from the left, property value pairs are
%      applied to the GUI before facedetecion_OpeningFcn gets called.  An
%      unrecognized property name or invalid value makes property application
%      stop.  All inputs are passed to facedetecion_OpeningFcn via varargin.
%
%      *See GUI Options on GUIDE's Tools menu.  Choose "GUI allows only one
%      instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help facedetecion

% Last Modified by GUIDE v2.5 01-May-2017 19:18:42

% Begin initialization code - DO NOT EDIT
gui_Singleton = 1;
gui_State = struct('gui_Name',       mfilename, ...
                   'gui_Singleton',  gui_Singleton, ...
                   'gui_OpeningFcn', @facedetecion_OpeningFcn, ...
                   'gui_OutputFcn',  @facedetecion_OutputFcn, ...
                   'gui_LayoutFcn',  [] , ...
                   'gui_Callback',   []);
if nargin && ischar(varargin{1})
    gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
    [varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
    gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

% --- Executes just before facedetecion is made visible.
function facedetecion_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
% varargin   command line arguments to facedetecion (see VARARGIN)

% Choose default command line output for facedetecion
handles.output = hObject;

% Update handles structure
guidata(hObject, handles);

% UIWAIT makes facedetecion wait for user response (see UIRESUME)
% uiwait(handles.figure1);

% --- Outputs from this function are returned to the command line.
function varargout = facedetecion_OutputFcn(hObject, eventdata, handles)
% varargout  cell array for returning output args (see VARARGOUT);
% hObject    handle to figure
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure
varargout{1} = handles.output;

% --- Executes on button press in pushbutton1.
function pushbutton1_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton1 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global myvideo myvideo1;
[fileName, pathName] = uigetfile('*.*', 'Please select a video');  % file dialog
if (fileName)
    fileName = strcat(pathName, fileName);
    fileName = lower(fileName);  % consistent lowercase
else
    % J = 0;
    msgbox('Please select a video');
    return;
end
% boxInserter = vision.ShapeInserter('BorderColor','Custom','CustomBorderColor',[255 0 0]);
% videoOut = step(boxInserter, videoFrame, bbox);
myvideo = VideoReader(fileName);
nFrames = myvideo.NumberOfFrames
vidHeight = myvideo.Height
vidWidth = myvideo.Width
mov(1:nFrames) = struct('cdata', zeros(vidHeight, vidWidth, 3, 'uint8'), 'colormap', []);
B_K = read(myvideo, 1);
axes(handles.axes1);
imshow(B_K);

% --- Executes on button press in pushbutton2.
function pushbutton2_Callback(hObject, eventdata, handles)
% hObject    handle to pushbutton2 (see GCBO)
% eventdata  reserved - to be defined in a future version of MATLAB
% handles    structure with handles and user data (see GUIDATA)
global myvideo myvideo1;
nFrames = myvideo.NumberOfFrames
vidHeight = myvideo.Height
vidWidth = myvideo.Width
mov(1:nFrames) = struct('cdata', zeros(vidHeight, vidWidth, 3, 'uint8'), 'colormap', []);
faceDetector = vision.CascadeObjectDetector();
% videoFileReader = vision.VideoFileReader(fileName);
% videoFrame = step(videoFileReader);

III. Operation results