This is the 17th day of my participation in the Gengwen Challenge.

OpenCV is a popular C++ computer vision library for real-time processing of computer vision problems, and it covers many modules in the field. In Python, the OpenCV library is often used for image processing.

This article shows how to implement basic motion detection with OpenCV in Python 3:

Inter-frame difference method

Target tracking is the process of locating a moving target in camera video, and it has a very wide range of applications. Real-time target tracking is an important task in many computer vision applications, such as surveillance, perception-based user interfaces, augmented reality, and assisted driving.

At present there are many methods for video target tracking; the inter-frame difference method is one of them.

The inter-frame difference method obtains the contour of a moving object by computing the difference between two adjacent frames in a video sequence. It applies well to scenes with multiple moving objects and camera movement. When abnormal object motion occurs in the monitored scene, there is an obvious difference between frames: subtracting two frames gives the absolute value of the brightness difference at each pixel, and comparing that value against a threshold reveals the motion characteristics of the sequence and whether any object is moving.

The implementation code

The following code uses the inter-frame difference method to implement a basic motion detection function.

import cv2
import numpy as np

camera = cv2.VideoCapture(0)  # parameter 0 selects the first camera
# Check whether the camera opened successfully
if camera.isOpened():
    print('Open')
else:
    print('Camera is not open')

size = (int(camera.get(cv2.CAP_PROP_FRAME_WIDTH)),
        int(camera.get(cv2.CAP_PROP_FRAME_HEIGHT)))
print('size:' + repr(size))

es = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 4))
kernel = np.ones((5, 5), np.uint8)
background = None

while True:
    # Read each frame of the video stream
    grabbed, frame_lwpCV = camera.read()
    # Convert the frame to grayscale, then apply Gaussian filtering.
    # Every input video contains noise from natural vibration, lighting
    # changes or the camera itself; smoothing it avoids false detections
    # during motion tracking.
    gray_lwpCV = cv2.cvtColor(frame_lwpCV, cv2.COLOR_BGR2GRAY)
    gray_lwpCV = cv2.GaussianBlur(gray_lwpCV, (21, 21), 0)

    # Use the first frame as the background for the entire input
    if background is None:
        background = gray_lwpCV
        continue

    # For every later frame, compute its difference from the background,
    # threshold the result to a black-and-white image, and dilate it so
    # that holes and imperfections are smoothed out
    diff = cv2.absdiff(background, gray_lwpCV)
    diff = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
    diff = cv2.dilate(diff, es, iterations=2)

    # Find the contours of moving objects in the thresholded image
    contours, hierarchy = cv2.findContours(diff.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        # Only contours larger than a given area are drawn, so minor
        # changes are ignored. For a camera with constant illumination
        # and low noise, the minimum contour size need not be set.
        if cv2.contourArea(c) < 1500:
            continue
        (x, y, w, h) = cv2.boundingRect(c)
        cv2.rectangle(frame_lwpCV, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow('contours', frame_lwpCV)
    cv2.imshow('dis', diff)
    key = cv2.waitKey(1) & 0xFF
    # Press 'L' to exit the loop
    if key == ord('L'):
        break


camera.release()
cv2.destroyAllWindows()

The output is:

Results analysis

According to the results, using inter-frame difference method for target tracking has the following advantages and disadvantages.

  • Advantages: the algorithm is simple to implement, with low program complexity and fast execution; it adapts well to dynamic environments and is insensitive to changes in scene lighting.

  • Disadvantages: it cannot extract the complete region of a moving object, only its outline; and the performance of the algorithm depends heavily on the chosen frame interval and segmentation threshold.
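The threshold dependence noted above is easy to demonstrate. In this sketch, one hand-made difference image (the pixel values are invented for illustration) is segmented at two different thresholds, producing very different "motion" regions:

```python
import numpy as np

# A hypothetical inter-frame difference image
diff = np.array([[ 5, 15, 30],
                 [40, 55, 70],
                 [ 0,  8, 90]], dtype=np.uint8)

# Segment the same difference image at two thresholds and count
# how many pixels each one classifies as motion
results = {}
for t in (10, 60):
    mask = (diff > t).astype(np.uint8) * 255
    results[t] = int(mask.sum() // 255)
    print(f'threshold={t}: {results[t]} motion pixels')
```

A low threshold (10) flags six pixels, a high threshold (60) only two, so a poorly chosen threshold either admits noise or misses genuine motion.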

A series of related articles will follow this month; please stay tuned.