Environment to prepare

Python version: 3.7
OS: Windows or Linux
Editor: PyCharm or VS Code

Install dependencies

Install CMake

pip install CMake

Install Visual Studio 2017 (download it from Microsoft's official website)

During installation, make sure to check the Python and C++ support components

Install dlib

pip install dlib

Install face_recognition

pip install face_recognition

Install openCV

 pip install opencv-python
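
Once everything is installed, a quick sanity check is to import the three libraries and print their versions. This is just a minimal sketch, assuming the standard __version__ attributes these packages expose:

import dlib
import cv2
import face_recognition

# If any of these imports fail, the corresponding package was not installed correctly
print("dlib:", dlib.__version__)
print("OpenCV:", cv2.__version__)
print("face_recognition:", face_recognition.__version__)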

Try to read a face

Here I try to read a photo of a celebrity

Image link: p3-juejin.byteimg.com/tos-cn-i-k3… From Baidu Baike

I saved it to the img directory in my project and named it huge.jpg

import face_recognition

# Read pictures and recognize faces
img = face_recognition.load_image_file("./img/huge.jpg")
face_locations = face_recognition.face_locations(img)
print(face_locations)

If it recognizes a face, it should print face location data like the output described below

dlib's face_locations can detect faces and extract some characteristic data. We don't yet know what it represents or how to use it, so let's analyze it next

Face_locations returns the following data:
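
The original screenshot of the output is not reproduced here; for a single face the printed result is a list containing one tuple of four integers, roughly like this (the numbers are illustrative and depend on the image):

[(98, 408, 307, 199)]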

Since the return value is a list, we can guess: does each element correspond to one face in the image?

To test this guess, I used an image containing multiple faces, which I named huge2.jpg

import face_recognition
# Read pictures and recognize faces

img = face_recognition.load_image_file("./img/huge2.jpg")
face_locations = face_recognition.face_locations(img)
print(face_locations)

The following results can be obtained
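
Again the screenshot is not reproduced; with multiple faces in the picture, the output contains one tuple per detected face, roughly like this (values are illustrative):

[(171, 409, 439, 141), (166, 792, 434, 524)]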

In fact, we can roughly guess what these four numbers mean: they should be the pixel positions of the face's bounding edges, so that a box can be drawn around it, listed clockwise starting from the top, i.e. (top, right, bottom, left).
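
As a quick check of that guess, we can unpack the first result; a minimal sketch, assuming face_locations came from the code above:

# Each entry is (top, right, bottom, left)
top, right, bottom, left = face_locations[0]
print("width:", right - left, "height:", bottom - top)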

This is the approximate effect (guess)

Draw face tag image

We can use OpenCV to draw this bounding box

import face_recognition
import cv2

# Read pictures and recognize faces
img = face_recognition.load_image_file("./img/huge2.jpg")
face_locations = face_recognition.face_locations(img)
# print(face_locations)

# Read the image with OpenCV
img = cv2.imread("./img/huge2.jpg")

# Use OpenCV to display the original image
cv2.namedWindow("raw")
cv2.imshow("raw", img)

# Traverse face information
for face in face_locations:
    print(face)
    # Top edge of the face
    top = face[0]
    # Right edge of the face
    right = face[1]
    # Bottom edge of the face
    bottom = face[2]
    # Left edge of the face
    left = face[3]

    # upper left
    start = (left, top)
    # lower right
    end = (right, bottom)

    # Pick any color (OpenCV uses BGR order, each value 0~255)
    color = (255, 80, 80)
    # The thickness of the line
    thickness = 3
    # Draw a rectangle (image, top-left corner, bottom-right corner, color, thickness)
    cv2.rectangle(img, start, end, color, thickness)

# Display the result of drawing analysis
cv2.namedWindow("analysis")
cv2.imshow("analysis", img)

# Wait until the user presses any key before closing, otherwise the windows would close in a flash
cv2.waitKey(0)
cv2.destroyAllWindows()

So we now have the source code for a most basic “face recognition” effect. But this only detects where a face is; it does not yet extract the face's feature information, so let's continue improving it

Who are you?

If I had to pick the most important capability of face recognition, it would be telling us who the user is. But no matter how powerful a face recognition model is, it cannot name a person out of thin air. Yet with the face-recognition clock-in systems we see all the time, people only need to stand in front of the camera and the system knows who they are. How is this done?

It is actually very simple: reference comparison. The model is connected to a database of faces, which we will call the face base library. The base library already holds an original photo of you, so the face recognition model compares the freshly captured photo against each photo in the library, measures the similarity, and guesses who you are. That's right, it is a guess, so there is no 100 percent match, only as much similarity as possible

So let’s start matching faces

Load face object data

import face_recognition
import cv2

# Submit faces for comparison
unknown_image = face_recognition.load_image_file("img/face1.jpg")
# Print face data
print("Face object:".len(unknown_image))
print(unknown_image)

Here I import a photo of myself, and let’s see what the imported image looks like using face_recognition
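
We can also check the array's basic structure; a small sketch, where the shape and dtype in the comments are just example values:

# The loaded image is a NumPy array with shape (height, width, 3) for an RGB image
print(unknown_image.shape)   # e.g. (800, 600, 3)
print(unknown_image.dtype)   # uint8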

As you can see, it’s a very long array, a matrix to be exact. Let’s take a look at the source code description of this method

def load_image_file(file, mode='RGB'):
    """
    Loads an image file (.jpg, .png, etc) into a numpy array

    :param file: image file name or file object to load
    :param mode: format to convert the image to. Only 'RGB' (8-bit RGB, 3 channels) and
        'L' (black and white) are supported.
    :return: image contents as numpy array
    """
    im = PIL.Image.open(file)
    if mode:
        im = im.convert(mode)
    return np.array(im)

In other words: it loads an image file (.jpg, .png, etc.) into a NumPy array. The file parameter is the image file name or file object to load; mode is the format to convert the image to, and only 'RGB' (8-bit RGB, 3 channels) and 'L' (black and white) are supported; the return value is the image contents as a NumPy array.

From the return value we can infer that, after loading an image, we get a NumPy array that should contain a lot of valuable data. For example, we can extract the four edges of the face from this object, just as in the previous section

import face_recognition
import cv2

# Submit faces for comparison
unknown_image = face_recognition.load_image_file("img/face1.jpg")
# Print face data
# print(" Face object: ", len(Unknown_image))
# print(unknown_image)

# Get the four edges of the face object
face_locations = face_recognition.face_locations(unknown_image)
print(face_locations)

Get portrait feature data

Now let's try to extract the feature values of the portrait from this image object

import face_recognition
import cv2

# Submit faces for comparison
unknown_image = face_recognition.load_image_file("img/face1.jpg")
# Print face data
# print(" Face object: ", len(Unknown_image))
# print(unknown_image)

# Get the four edges of the face object
face_locations = face_recognition.face_locations(unknown_image)
print("Portrait edges:", face_locations)

# Portrait feature values (encodings)
face_encodings = face_recognition.face_encodings(unknown_image, face_locations)
print(face_encodings)

The following results can be obtained
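
The screenshot of this output is not reproduced here; the result is a list with one 128-element NumPy array per face, looking roughly like this (values are illustrative):

[array([-0.104,  0.071,  0.032, ..., -0.051,  0.002,  0.064])]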

Let’s take a look at the source code description for this method

def face_encodings(face_image, known_face_locations=None, num_jitters=1, model="small"):
    """
    Given an image, return the 128-dimension face encoding for each face in the image.

    :param face_image: The image that contains one or more faces
    :param known_face_locations: Optional - the bounding boxes of each face if you already know them.
    :param num_jitters: How many times to re-sample the face when calculating encoding.
        Higher is more accurate, but slower (i.e. 100 is 100x slower)
    :param model: Optional - which model to use. "large" (default) or "small" which only returns 5 points but is faster.
    :return: A list of 128-dimensional face encodings (one for each face in the image)
    """
    raw_landmarks = _raw_face_landmarks(face_image, known_face_locations, model)
    return [np.array(face_encoder.compute_face_descriptor(face_image, raw_landmark_set, num_jitters)) for raw_landmark_set in raw_landmarks]


""" Given an image, return a 128-dimensional face encoding each face in the image. :param face_image: an image that contains one or more faces: Param known_face_Locations: Optional - if you already know the bounding box of each face. :param num_jitters: The number of times a face is resampled when calculating the code. Higher and more accurate, but slower (i.e. 100 is 100 times slower) : Parametric model: Optional - Which model to use. "Large" (default) or "small", only returns 5 points, but faster. :return: a list of 128-dimensional face codes (one for each face in the image) """


It is clear that this returns a 128-dimensional feature value representing the whole face, and that we can trade speed for accuracy by adjusting num_jitters. We can also see that the input image may contain one or more faces, which means the returned list may contain more than one encoding, one per index
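
A quick way to confirm the 128 dimensions, as a minimal sketch that reuses the face1.jpg example from above:

import face_recognition

unknown_image = face_recognition.load_image_file("img/face1.jpg")
# num_jitters > 1 re-samples the face for a more accurate (but slower) encoding
encodings = face_recognition.face_encodings(unknown_image, num_jitters=2)

print("Faces encoded:", len(encodings))
print("Dimensions per encoding:", len(encodings[0]))  # 128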

We will now try to match two similar portraits to see if it can tell whether they are the same person

I compared the original single-face image (huge.jpg) with another single-face image, huge3.png. To keep them distinct, huge3.png is used as the unknown input image, and the other image is used for comparison

import face_recognition
import cv2

# submit a face for comparison (pretend we don't know who it is)
unknown_image = face_recognition.load_image_file("img/huge3.png")

# A known face (Hu Ge's photo)
huge_img = face_recognition.load_image_file("img/huge.jpg")

# Unknown face object
unknown_face_locations = face_recognition.face_locations(unknown_image)
# Unknown portrait eigenvalue (default first face because there is only one face)
unknown_face_encoding = face_recognition.face_encodings(unknown_image, unknown_face_locations)[0]

# Hu Ge portrait feature value (directly input the image to extract the feature function, because the comparison of photos must have only one face)
huge_face_encoding = face_recognition.face_encodings(huge_img)

# Use the comparison function: pass the known encodings, the unknown encoding, and a tolerance to see if they are the same person
results = face_recognition.compare_faces(huge_face_encoding, unknown_face_encoding, tolerance=0.4)
print(results)

We can run this and get a result: since huge_face_encoding is a list with one encoding, the output is a one-element list such as [True] if the two photos are judged to be the same person within the tolerance, or [False] otherwise

We use a new method called compare_faces to get the comparison results. Let’s look at the source code description

def compare_faces(known_face_encodings, face_encoding_to_check, tolerance=0.6):
    """
    Compare a list of face encodings against a candidate encoding to see if they match.

    :param known_face_encodings: A list of known face encodings
    :param face_encoding_to_check: A single face encoding to compare against the list
    :param tolerance: How much distance between faces to consider it a match. Lower is more strict.
        0.6 is typical best performance.
    :return: A list of True/False values indicating which known_face_encodings match the face encoding to check
    """
    return list(face_distance(known_face_encodings, face_encoding_to_check) <= tolerance)
""" The list of facial codes is compared to the candidate codes to see if they match. :param known_face_encodings: list of known face codes :param face_encoding_to_check: single face codes compared with the list: Parameter tolerance: whether the distance between faces matches. The lower, the stricter. 0.6 is typically the best performance. :return: a list of True/False values indicating which known_face_encoding matches the face encoding to be checked """
Copy the code

We can see that the first parameter is a list of face encodings, i.e. many face feature values: this is the face base library we talked about earlier. Then, when a user scans their face, the comparison tells us which known encodings are True. The tolerance expresses how much difference we are willing to accept while still considering it the same person: the higher the tolerance, the more lenient the match.

At this point we have completed the comparison of features between the two portraits
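
Going one step further, here is a minimal sketch of the face base library idea described earlier: compare one unknown face against several known people and pick the closest match. The file paths and person names are placeholders for whatever photos you have:

import face_recognition

# Face base library: one known photo per person (paths and names are placeholders)
known_people = {
    "Hu Ge": "img/huge.jpg",
    "Me": "img/face1.jpg",
}

known_names = []
known_encodings = []
for name, path in known_people.items():
    image = face_recognition.load_image_file(path)
    # Assume each base-library photo contains exactly one face
    known_encodings.append(face_recognition.face_encodings(image)[0])
    known_names.append(name)

# The face we want to identify
unknown_image = face_recognition.load_image_file("img/huge3.png")
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# Measure the distance to every face in the base library and pick the closest one
distances = face_recognition.face_distance(known_encodings, unknown_encoding)
best_index = distances.argmin()

# Lower tolerance is stricter; 0.4 matches the value used above
if distances[best_index] <= 0.4:
    print("This is probably", known_names[best_index])
else:
    print("No match found in the face base library")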