Editor's note from Lei Feng: this article shows how to do face detection with OpenCV. Before you read on, here are a few things to note:

  • I recommend reading through this article before running the code — you need to understand what the code does. Getting it to run successfully is not the goal; being able to fix it when something breaks in a new task is.

  • Make sure you are using OpenCV v2.

  • You need a webcam

OpenCV

OpenCV is the most popular computer vision library. Originally written in C and C++, it now supports Python as well.

It uses machine learning algorithms to search for faces within an image. For something as complex as a face, there is no single simple test that can conclude whether a face is present; thousands of feature matches are required. The algorithm breaks the face-detection task into thousands of smaller tasks, each of which is easy to handle. These tasks are known as classifiers.

For something like a face, you might need no fewer than 6,000 classifiers, each of which must produce a successful match (with some error tolerance, of course) for a face to be detected. But there's a problem: for face detection, the algorithm starts at the top-left corner of the image and moves across it in small blocks of data, constantly asking, "Is this a face?" With more than 6,000 checks per block, that adds up to millions of calculations, and the computer is likely to keep you waiting.
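To get a feel for why this is expensive, here is a rough back-of-the-envelope sketch in plain Python. The image size, window size, and stride here are made-up illustrative numbers, not anything OpenCV actually uses:

```python
# Rough cost estimate for brute-force detection: slide a window across
# the image and run every feature check at every position.
IMAGE_W, IMAGE_H = 640, 480   # hypothetical image size
WINDOW = 24                   # hypothetical detection-window side length
STEP = 4                      # hypothetical window stride
FEATURES = 6000               # feature checks per window, as mentioned above

positions_x = (IMAGE_W - WINDOW) // STEP + 1
positions_y = (IMAGE_H - WINDOW) // STEP + 1
windows = positions_x * positions_y
total_checks = windows * FEATURES

print("windows to examine:", windows)            # 17825
print("feature checks, no cascade:", total_checks)  # 106950000
```

Even at a single scale, that is over a hundred million feature evaluations for one modest image, which is why a smarter strategy is needed.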

OpenCV uses cascades to avoid this. What is a cascade? The best answer is already in the dictionary: a waterfall, or a series of waterfalls.

Like a waterfall, an OpenCV cascade breaks the face-detection problem into several stages. For each block of data, it performs a rough, quick check. If the block passes, a slightly more careful test is run, and so on. The algorithm has 30 to 50 of these stages, or cascades, and only a block that passes every stage is judged to contain a face. The advantage is that most blocks are rejected in the first few stages, so the algorithm never has to test all 6,000 features on them. This saves an enormous amount of time and allows face detection in real time, as opposed to a "normal process" that could take hours.
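The early-rejection idea can be sketched in a few lines of plain Python. This is a toy illustration of the principle, not OpenCV's actual classifier — the stage functions and block fields here are invented for the example:

```python
def cascade_detect(block, stages):
    """Run cheap stages first; bail out on the first failure."""
    checks_run = 0
    for stage in stages:
        checks_run += 1
        if not stage(block):
            return False, checks_run   # rejected early; later stages skipped
    return True, checks_run

# Toy stages, ordered cheapest first (all names/fields are hypothetical).
stages = [
    lambda b: b["brightness"] > 10,        # very cheap first check
    lambda b: b["has_edges"],              # slightly more careful
    lambda b: b["eye_like_regions"] >= 2,  # most expensive, runs last
]

# A block of flat sky fails immediately: only 1 of 3 stages ever runs.
sky = {"brightness": 5, "has_edges": False, "eye_like_regions": 0}
print(cascade_detect(sky, stages))   # (False, 1)

# A face-like block passes every stage.
face = {"brightness": 80, "has_edges": True, "eye_like_regions": 2}
print(cascade_detect(face, stages))  # (True, 3)
```

The payoff is exactly the one described above: uninteresting blocks cost almost nothing, so nearly all of the work is spent only on blocks that might actually be faces.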

Cascades in practice

It may sound complicated in theory, but in practice it is very simple. The cascades are just a series of XML files containing OpenCV data. You initialize your code with the cascade you want, and it does the work for you.

Because face detection is so common, OpenCV ships with a series of built-in cascades that can detect everything from eyes to hands to legs. There are even cascades for non-human objects. For example, if you run a fruit store that sells bananas and want to catch people stealing bananas, someone has built an algorithm for exactly that scenario!

Install OpenCV

First, you need to find the correct installation instructions for your operating system.

I found installing OpenCV to be the hardest part. If you run into strange, unexplained errors, it could be library conflicts, 32- vs 64-bit compatibility issues, and so on. From personal experience, it is easiest to start from scratch with a Linux virtual machine and install OpenCV there.

Once installed, you can open a Python session and type the following code to test if it works:

$ python

>>> import cv2

>>>

If no errors appear, you can move on to the next section.

Understand the code

The source code is available for download from the repository. Remember to grab the face_detect.py script, the abba.png image, and haarcascade_frontalface_default.xml. Now let's break down the code.

import sys
import cv2

# Get user supplied values
imagePath = sys.argv[1]
cascPath = sys.argv[2]

We pass the image and the cascade name as command-line arguments. We'll use the ABBA photo along with the default face-detection cascade provided by OpenCV.

# Create the haar cascade

faceCascade = cv2.CascadeClassifier(cascPath)

Now we create the cascade and initialize it with our face cascade. This loads the cascade into memory, so it is ready to use. Remember, the cascade is just an XML file that contains the data needed to detect faces.

# Read the image

image = cv2.imread(imagePath)

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

Here we read the image and convert it to grayscale.

# Detect faces in the image

faces = faceCascade.detectMultiScale(
    gray,
    scaleFactor=1.1,
    minNeighbors=5,
    minSize=(30, 30),
    flags=cv2.cv.CV_HAAR_SCALE_IMAGE
)

This function detects the actual faces and is the core part of the code, so let's go over its options.

detectMultiScale is a general function for detecting objects. Since we are calling it on the face cascade, that is what it detects. The first option is the grayscale image.

The second is scaleFactor. Since some faces are closer to the camera, they appear larger than the faces in the back; the scale factor compensates for this.
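One way to picture what scaleFactor does: the detector searches the image over a series of window sizes, each one scaleFactor times larger than the last. Here is a simplified sketch of that idea in plain Python — it is not OpenCV's internal code, and the size range is made up for illustration:

```python
def search_scales(min_size, max_size, scale_factor=1.1):
    """List the window sizes a multi-scale search would try."""
    sizes = []
    size = float(min_size)
    while size <= max_size:
        sizes.append(int(size))
        size *= scale_factor   # each pass grows the window by scale_factor
    return sizes

# scaleFactor=1.1 tries many closely spaced sizes: thorough but slower.
print(search_scales(30, 100, 1.1))  # 13 sizes
# scaleFactor=1.2 tries fewer sizes: faster, but coarser.
print(search_scales(30, 100, 1.2))  # 7 sizes
```

This is also why, later in the article, raising the scale factor changes the results: a coarser sweep can skip the window sizes at which spurious matches occurred.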

The detection algorithm uses a moving window to detect objects. minNeighbors defines how many objects must be detected near the current one before the algorithm declares a face found, and minSize gives the minimum size of each window.
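The minNeighbors idea can be sketched in plain Python: a candidate detection only counts if enough overlapping candidate windows agree with it. This is a simplification of OpenCV's actual rectangle-grouping logic, with made-up coordinates:

```python
def overlaps(a, b):
    """True if two (x, y, w, h) rectangles intersect."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def filter_detections(candidates, min_neighbors):
    """Keep candidates supported by at least min_neighbors overlapping ones."""
    kept = []
    for rect in candidates:
        neighbors = sum(1 for other in candidates
                        if other is not rect and overlaps(rect, other))
        if neighbors >= min_neighbors:
            kept.append(rect)
    return kept

# Five near-identical hits around a real face, plus one stray hit elsewhere.
candidates = [(100, 100, 30, 30), (102, 101, 30, 30), (99, 98, 30, 30),
              (101, 103, 30, 30), (98, 100, 30, 30), (400, 50, 30, 30)]
print(filter_detections(candidates, min_neighbors=4))  # stray hit is dropped
```

A real face tends to trigger many overlapping windows, while a spurious match usually triggers just one, so requiring a few "neighbors" filters out a lot of false positives.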

I used values that are commonly seen in this field. In practice, you will experiment with different values for the window size, scale factor, and so on until you find one that works best.

When the function thinks it has found a face, it returns a list of rectangles. Next, we loop over every place it thinks it found something.

print "Found {0} faces!".format(len(faces))

# Draw a rectangle around the faces

for (x, y, w, h) in faces:

    cv2.rectangle(image, (x, y), (x+w, y+h), (0, 255, 0), 2)   

The function returns four values for each face: the x and y coordinates of the rectangle, plus its width and height. Using these values and the built-in rectangle() function, we draw the box.

cv2.imshow("Faces found", image)

cv2.waitKey(0)

Finally, we display the image and wait for the user to press a key.

Checking the results

Let's test it against the ABBA photo:

$ python face_detect.py abba.png haarcascade_frontalface_default.xml

No problem. Try another photo.

Those aren't faces. Let's try again. After adjusting the parameters, I found that setting scaleFactor to 1.2 got rid of the false positives.

What happened? Well, the first photo was taken from close up with a high-quality camera, while the second was taken from farther away, probably with a mobile phone. This is why scaleFactor had to be adjusted. As I said, you need to tune the algorithm to the actual scenario, case by case, to avoid false positives.

A note from Lei Feng: since this is based on machine learning, the results will never be 100% accurate. You will get good results in most cases, but the algorithm will occasionally be wrong.

The final code is here.

With a webcam

What if you want to use a webcam? OpenCV grabs each frame from the camera, and you can then process each frame for face detection. This takes some processing power, but my five-year-old laptop handled it just fine.

Via RealPython


This is a copyrighted Lei Feng net article; unauthorized reproduction is prohibited.