OpenCV
Introduction
OpenCV is a cross-platform computer vision library that runs on Linux, Windows, Android, and macOS. It is lightweight and efficient: it consists of a series of C functions and a small number of C++ classes. It also provides interfaces for Python, Ruby, MATLAB and other languages, and implements many common algorithms in image processing and computer vision.
Installation
In the code that follows we will use two OpenCV packages, which we can install directly with pip:
pip install opencv-python
And the other:
pip install opencv-contrib-python
Once installed, we are ready to learn how to use OpenCV.
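As a quick sanity check (optional, and my suggestion rather than part of the original tutorial), we can verify that both packages were installed correctly; the face module used later in this article is provided by opencv-contrib-python:

import cv2
# Print the installed OpenCV version
print(cv2.__version__)
# cv2.face comes from opencv-contrib-python; False here means the contrib package is missing
print(hasattr(cv2, 'face'))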
Processing images with OpenCV
Image processing
Reading and displaying an image
Reading and displaying an image are the most basic image operations, and they are very simple in OpenCV. The code is as follows:
import cv2
# fetch image
im = cv2.imread('test.png')
# Display the image; the first parameter is the window name
cv2.imshow('img', im)
# Wait for keyboard input
cv2.waitKey(0)
# Destroy all windows
cv2.destroyAllWindows()
After importing the cv2 module, we use the imread function to read the image and the imshow function to display it. Because imshow only shows the window for an instant, we add waitKey(), which takes a value in milliseconds; passing 0 means waiting for keyboard input indefinitely. Since OpenCV is implemented in C/C++, we also need to destroy the windows ourselves (similar to free).
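One detail worth knowing (a small addition of mine, not in the original code): imread does not raise an error when the file cannot be found, it simply returns None, so it is worth checking the result before calling imshow. A minimal sketch, assuming the same test.png:

import cv2
im = cv2.imread('test.png')
# imread returns None instead of raising an error when the file is missing or unreadable
if im is None:
    raise FileNotFoundError('test.png could not be read')
cv2.imshow('img', im)
# Wait at most 5000 ms for a key press instead of waiting forever
cv2.waitKey(5000)
cv2.destroyAllWindows()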
Grayscale conversion
Simply put, grayscale conversion turns a color image into a black-and-white one. In face recognition, we usually convert the image to grayscale before recognizing it.
import cv2
# fetch image
im = cv2.imread('test.png')
# Grayscale conversion
grey = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
# Save image
cv2.imwrite('im.png', grey)
# Destroy all windows
cv2.destroyAllWindows()
For grayscale conversion we use the cv2.cvtColor function. The first parameter is an ndarray object (the image), and the second is a constant from cv2. As the constant name suggests, OpenCV generally stores images as BGR rather than RGB.
We then use the imwrite function to save the converted grayscale image.
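As an aside (an alternative I'm adding, not what the code above does), imread can also read an image as grayscale directly by passing a flag, which skips the separate cvtColor step:

import cv2
# Read the image directly as a single-channel grayscale image
grey = cv2.imread('test.png', cv2.IMREAD_GRAYSCALE)
# Save the grayscale image
cv2.imwrite('im.png', grey)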
Drawing graphics
Then let’s look at how to draw with OpenCV:
import cv2
# fetch image
im = cv2.imread('test.jpg')
# draw graph
cv2.rectangle(im, (10, 10), (50, 50), (0, 255, 0), 3)
cv2.circle(im, (100, 100), 50, (0, 0, 255), 2)
# display image
cv2.imshow('im', im)
cv2.waitKey(0)
cv2.destroyAllWindows()
Above we draw a rectangle and a circle. Let’s look at two methods:
"""
im: ndarray object (the image)
p1: coordinates of the upper-left corner of the rectangle
p2: coordinates of the lower-right corner of the rectangle
color: color in BGR format
thickness: outline width
"""
rectangle(im, p1, p2, color, thickness)
"""
im: ndarray object (the image)
center: center coordinates
radius: radius of the circle
color: color in BGR format
thickness: outline width
"""
circle(im, center, radius, color, thickness)
These are just the parameters we will be using here.
In the face detection that follows, we will use these drawing functions to frame the detected face, as in the sketch below.
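A face frame is usually easier to read with a text label next to it. Here is a minimal sketch using cv2.putText; the coordinates and the 'face' label are just illustrative values I chose:

import cv2
im = cv2.imread('test.jpg')
# Frame a region and write a label just above it
cv2.rectangle(im, (50, 50), (150, 150), (0, 255, 0), 2)
cv2.putText(im, 'face', (50, 45), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 1)
cv2.imshow('im', im)
cv2.waitKey(0)
cv2.destroyAllWindows()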
Face detection
Getting the feature data
To perform face detection, we first need to download OpenCV's feature data. Go to the official OpenCV website and download the corresponding file.
Once downloaded, double-click to run it. In the opencv\sources\data\haarcascades directory you will find many feature files; we usually use haarcascade_frontalface_default.xml as the feature data. I copied the feature file directly into the project root directory. You can do the same, or use the absolute path.
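If you installed OpenCV with pip, there is also a shortcut (a convenience I'm pointing out, available in recent opencv-python builds): the Haar cascade files already ship with the package, and cv2.data.haarcascades gives their directory, so no separate download is needed:

import cv2
# Build the path to the cascade file bundled with the pip package
cascade_path = cv2.data.haarcascades + 'haarcascade_frontalface_default.xml'
face_detector = cv2.CascadeClassifier(cascade_path)
# empty() returns True if the cascade file failed to load
print(face_detector.empty())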
Detecting faces
The steps of face detection are as follows:
- Read the image
- Grayscale conversion
- Get the face detector
- Detect faces
- Traverse the detected faces
Let’s write the above steps in code:
import cv2
# fetch image
im = cv2.imread('face.jpg')
# Grayscale conversion
grey = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
# Get face detector
face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
# Face detection
faces = face_detector.detectMultiScale(grey)
# Walk through the faces detected
for x, y, w, h in faces:
    # Draw a rectangle around the face area
    cv2.rectangle(im, (x, y), (x+w, y+h), (0, 255, 0), 2)
# display image
cv2.imshow('im', im)
cv2.waitKey(0)
cv2.destroyAllWindows()
We use cv2.CascadeClassifier to load the feature data, and then the detectMultiScale method to detect faces, which returns the face data. We read face.jpg and draw rectangles around the detected faces.
In testing, faces can be detected under normal conditions, but there are also quite a few false detections.
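If false detections are a problem, detectMultiScale accepts a few tuning parameters that are worth experimenting with. The values below are common starting points I'm suggesting, not definitive settings:

import cv2
im = cv2.imread('face.jpg')
grey = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
# scaleFactor: image pyramid step, larger is faster but coarser
# minNeighbors: neighboring detections required to keep a face, larger is stricter
# minSize: ignore candidate faces smaller than this
faces = face_detector.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))
print(len(faces), 'face(s) detected')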
Training data
Training mainly requires two kinds of data: face labels and face data. To make it easy to obtain the labels, we first prepare some pictures:
There are 40 pictures: the first 20 are Hu Ge and the last 20 are Robert Downey Jr. We will use these pictures as training material. We detect the faces in them, capture the face data, and use the file name directly as the label, so anything with a label greater than 20 is Robert Downey Jr. and anything with a smaller label is Hu Ge.
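So the assumption here (my reading of the naming scheme; adjust it to your own files) is that the pictures are simply named by number, for example 1.jpg through 40.jpg, and the number before the dot is used directly as the label:

# Hypothetical file names: '15.jpg' would be Hu Ge, '33.jpg' would be Robert Downey Jr.
file = '15.jpg'
label = int(file.split('.')[0])
print(label)  # 15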
The training data steps are roughly as follows:
- Get the face data and label data
- Get the trainer (recognizer)
- Train the data
- Save the training data
The code is as follows:
import cv2
import os
import numpy
# The root directory of the face images
root = './face/'

def getFacesAndLabels():
    # Used to store face data
    faces = []
    # Used to store label data
    labels = []
    # Get the face detector
    face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
    # Get the image file names
    files = os.listdir(root)
    for file in files:
        # Read the image
        im = cv2.imread(root + file)
        # Grayscale conversion
        grey = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
        # Face detection
        face = face_detector.detectMultiScale(grey)
        for x, y, w, h in face:
            # Set the label (the number in the file name)
            labels.append(int(file.split('.')[0]))
            # Set the face data
            faces.append(grey[y:y+h, x:x+w])
    return faces, labels

# Get the face data and labels
faces, labels = getFacesAndLabels()
# Get the training object (recognizer)
recognizer = cv2.face.LBPHFaceRecognizer_create()
# Train the data
recognizer.train(faces, numpy.array(labels))
# Save the training data
recognizer.write('./trainer.yml')
The main work here is getting the face data and the face labels. We get the training object with the cv2.face.LBPHFaceRecognizer_create() function, which is provided by the opencv-contrib-python module. We call the train function to train on the data, and use write to save the trained model. After that we can perform face recognition.
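One small check that may be worth adding to the script above (my own suggestion, not part of the original): make sure the face and label lists line up and are not empty before training, otherwise train fails with a rather unhelpful error:

# Sanity checks before calling recognizer.train (continuing the training script above)
assert len(faces) > 0, 'No faces were detected in the training images'
assert len(faces) == len(labels), 'Each face sample needs exactly one label'
recognizer.train(faces, numpy.array(labels))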
Face recognition
Once the data has been trained, we can begin recognizing faces. The steps are roughly as follows:
- Read the training data
- Read the image
- Detect faces
- Match the face data
Let's implement it in code:
import cv2
# Get the training object (recognizer)
recognizer = cv2.face.LBPHFaceRecognizer_create()
# Load the training data
recognizer.read('trainer.yml')
# fetch image
im = cv2.imread('Tony.jpg')
# Grayscale conversion
grey = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
# Get the face detector
face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
# Face detection
face = face_detector.detectMultiScale(grey)
for x, y, w, h in face:
    # Match the face
    label, confidence = recognizer.predict(grey[y:y+h, x:x+w])
    # Judge the recognition result by the label
    if confidence <= 60:
        if label < 20:
            print('Hu Ge')
        elif label > 20:
            print('Robert Downey Jr.')
    else:
        print('Face not matched')
We use the read method to load the training data. Reading the image and detecting the faces has already been covered; we then use recognizer.predict to match the face. This method returns two values: the face label and a confidence value. The lower the confidence value, the better the match. We then use the label to determine who the person is.
Here’s the picture I used to identify it:
Robert Downey Jr.
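To make the result easier to inspect, we could also draw the name onto the image instead of just printing it. Here is a rough, self-contained sketch based on the recognition code above; the names, the label split at 20, and the threshold of 60 are simply the values used in this article:

import cv2
# Load the trained model
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.read('trainer.yml')
im = cv2.imread('Tony.jpg')
grey = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
for x, y, w, h in face_detector.detectMultiScale(grey):
    label, confidence = recognizer.predict(grey[y:y+h, x:x+w])
    # A lower confidence value means a better match
    name = 'unknown'
    if confidence <= 60:
        name = 'Hu Ge' if label < 20 else 'Robert Downey Jr.'
    cv2.rectangle(im, (x, y), (x+w, y+h), (0, 255, 0), 2)
    cv2.putText(im, name, (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
cv2.imshow('result', im)
cv2.waitKey(0)
cv2.destroyAllWindows()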
That completes face recognition. Since there is little training data, the recognition accuracy is also low; you can use more pictures for training. The code for this article is available at:
Github.com/IronSpiderM…