1. The development of face recognition

Looking back at the history of face recognition, we can roughly divide it into four stages:

  1. 1964-1990, the first stage: studies of facial features for recognition; automatic recognition was not yet achieved
  2. 1991-1997, the second stage: research on hand-crafted algorithms for automatic recognition
  3. 1998-2014, the third stage: a focus on robustness to factors such as illumination and pose
  4. 2015 to now, the fourth stage: Internet-scale application, with mature technology widely promoted and deployed

This technology has quietly turned our lives upside down!

Imagine walking into the office on a Monday morning, clocking in at a face-recognition machine, then opening Meituan to order a hot cup of coffee. Thinking back on the fun weekend with your friends, you habitually unlock your phone with your face, edit and upload the photos, and Facebook instantly @s your friends in those pictures.



(Source: Face recognition principle)

2. Face recognition based on face_recognition

face_recognition makes this cool capability easy to implement!

face_recognition bills itself as the world's simplest face recognition library: with Python or its command-line tools you can extract, recognize, and manipulate faces. It provides a rich API (see the documentation for details). Here we use the python-Flask framework to wrap its interfaces and build a small face recognition server.

The core code is as follows:

import glob
import os

import face_recognition
from flask import abort, render_template


def faceMatch(name):
    if name is None:
        abort(404)
    name = './uploadimg/' + name
    unknown_image = face_recognition.load_image_file(name)
    res = []
    if unknown_image is None:
        res.append("position fail!")
    else:
        try:
            unknown_encoding = face_recognition.face_encodings(unknown_image)[0]
            WSI_MASK_PATH = './orisample/'  # source face library
            wsi_mask_paths = glob.glob(os.path.join(WSI_MASK_PATH, '*.jpg'))
            for pic in wsi_mask_paths:
                known_image = face_recognition.load_image_file(pic)
                # Encode the known face
                known_encoding = face_recognition.face_encodings(known_image)[0]
                # Compare the image to be recognized against the source library
                results = face_recognition.compare_faces([known_encoding], unknown_encoding, tolerance=0.4)
                if results[0] == True:
                    res.append(pic.split('/')[2] + " It's a picture of me!")
                else:
                    res.append(pic.split('/')[2] + " not")
        except IndexError:  # no face found in an image
            res.append("position fail!")
    return render_template('facematch.html', url=res, name=res)

Interpretation of the code (source address):

  • After importing face_recognition, call the face_encodings interface to obtain each image's encoding
  • Then use the compare_faces interface to compare the two encodings and predict whether they belong to the same person
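Under the hood, compare_faces simply measures the Euclidean distance between two 128-dimensional encodings and checks it against the tolerance. A minimal numpy sketch, with toy vectors standing in for real face encodings:

```python
import numpy as np

def compare_faces(known_encodings, unknown_encoding, tolerance=0.4):
    """Mirrors what face_recognition.compare_faces does: a Euclidean
    distance between 128-d encodings, thresholded by the tolerance."""
    distances = np.linalg.norm(np.array(known_encodings) - unknown_encoding, axis=1)
    return (distances <= tolerance).tolist()

# toy 128-d vectors standing in for real face encodings
a = np.zeros(128)
b = np.full(128, 0.01)   # close to a
c = np.full(128, 0.10)   # far from a
print(compare_faces([b, c], a, tolerance=0.4))  # [True, False]
```

A smaller tolerance makes matching stricter; the library default is 0.6, while the article's code uses 0.4 to reduce false positives.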

Effect demonstration:

The example above is a simple python-flask encapsulation of the face_recognition interfaces; see the complete code.

3. Face recognition principles

Note: readers interested only in face recognition applications can skip this chapter.

Face recognition can generally be divided into four steps:

  • Locating the face
  • Extracting facial features
  • Parameterizing the facial features
  • Comparing against the source database to recognize the face

3.1 Locating the face

Face localization uses the HOG (Histogram of Oriented Gradients) algorithm: by replacing raw pixels with their gradient changes, it captures the main structural features of a face.

We first convert the image to black and white to remove the influence of color, and then mark each pixel with an arrow pointing in the direction of the gradient around it:



(Source: Face recognition principle)

Repeat this operation for every pixel, and we finally obtain the arrow-marked image (HOG algorithm implementation code):

The purpose of converting the image to its HOG representation is to ignore the influence of color and brightness, accurately locate the position of the face in the image, and so prepare for facial feature extraction in the next step.
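As a toy illustration of the idea (not the library's actual HOG implementation), the per-pixel gradients that HOG aggregates into orientation histograms can be computed with numpy:

```python
import numpy as np

def gradient_field(gray):
    """Per-pixel gradient magnitude and direction, the raw material of HOG.
    gray: 2-D array of grayscale intensities."""
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    direction = np.degrees(np.arctan2(gy, gx)) % 180  # unsigned orientation
    return magnitude, direction

# a toy 4x4 "image" with a vertical edge down the middle
img = np.array([[0, 0, 9, 9]] * 4)
mag, ang = gradient_field(img)
print(mag[1])  # the columns at the edge get the strongest response
```

HOG then divides the image into small cells and histograms these directions, which is why the result is robust to brightness changes: only the direction of change matters, not the absolute pixel values.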

3.2 Extracting facial features

Locating the face raises a new question: a face looks different when viewed from different angles. All else being equal, if both the source image and the image to be recognized are frontal views, prediction will be more accurate.

So a simple transformation is applied to obtain an approximately frontal view of the face to be recognized.
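The alignment step above is typically an affine transform estimated from a few facial landmarks. A numpy sketch with hypothetical landmark coordinates, solving for the 2x3 matrix that maps detected points onto canonical frontal positions:

```python
import numpy as np

def affine_from_points(src, dst):
    """Solve the 2x3 affine matrix M that maps three source landmarks
    (e.g. both eyes and the nose tip) onto canonical frontal positions."""
    A = np.hstack([src, np.ones((3, 1))])  # 3x3: [x, y, 1] per landmark
    M = np.linalg.solve(A, dst).T          # 2x3 affine matrix
    return M

# hypothetical landmark coordinates: detected (tilted) -> canonical frontal
detected  = np.array([[38.0, 52.0], [74.0, 48.0], [56.0, 70.0]])
canonical = np.array([[30.0, 40.0], [70.0, 40.0], [50.0, 65.0]])
M = affine_from_points(detected, canonical)

# applying M to a detected landmark reproduces its canonical position
p = np.append(detected[0], 1.0)
print(M @ p)  # ≈ [30. 40.]
```

In practice this matrix would be applied to the whole image (e.g. with cv2.warpAffine) so that the eyes and nose land in standard positions before feature extraction.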

Next, we extract the features of the face in the image, much as art students mark the key points of a composition before drawing.

(Credit: Brandon Amos, CMU)

3.3 Parameterization of facial features

The following is the focus of this article!

The machine converts every face image into a 128-dimensional parametric identifier: an encoding of 128 measurements.

So what rules does the machine follow to parameterize an image? By training a deep convolutional neural network, it learns the rules that map an image to its 128 measurements.

Training method:

  • Load a face image of a known person and compute its encoding, #1
  • Load another image of the same person and compute encoding #2
  • Load an image of a different person and compute encoding #3

We then continually adjust the network's parameters so that #1 and #2 grow ever closer while #1 and #3 grow ever farther apart. After millions upon millions of iterations, the neural network finally learns the optimal parameterization rules, achieving the goal of face recognition.
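The training objective sketched above is commonly known as the triplet loss. A minimal numpy illustration, with toy 128-dimensional encodings standing in for real network outputs:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss: pull #1 (anchor) toward #2 (positive, same person)
    and push it away from #3 (negative, a different person)."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)

# toy 128-d encodings standing in for images #1, #2, #3
rng = np.random.default_rng(0)
anchor   = rng.normal(size=128)
positive = anchor + 0.01 * rng.normal(size=128)  # near-duplicate, same person
negative = rng.normal(size=128)                  # unrelated person
print(triplet_loss(anchor, positive, negative))  # 0.0: already well separated
```

When the loss is zero, the triplet is already correctly ordered; training only updates the network on triplets where the negative is still too close to the anchor.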

3.4 Comparing with the source database to recognize the face

Given the 128 measurements, we compare them against the source database and find the closest match; that completes the recognition. With a simple SVM classifier, the comparison takes only a few milliseconds.
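A sketch of that comparison step with scikit-learn's SVM, using randomly generated vectors as stand-ins for real encodings (the names and data here are hypothetical):

```python
import numpy as np
from sklearn import svm

# toy 128-d "encodings" for two known people (hypothetical data)
rng = np.random.default_rng(1)
center_a, center_b = rng.normal(size=128), rng.normal(size=128)
X = np.vstack([center_a + 0.05 * rng.normal(size=(10, 128)),
               center_b + 0.05 * rng.normal(size=(10, 128))])
y = ["alice"] * 10 + ["bob"] * 10

# train a linear SVM on the known encodings
clf = svm.SVC(kernel="linear")
clf.fit(X, y)

# classifying an unknown encoding close to person A takes milliseconds
probe = center_a + 0.05 * rng.normal(size=128)
print(clf.predict([probe])[0])
```

In a real system, X would hold the face_encodings of every labeled image in the source library, and probe would be the encoding of the image to be recognized.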

4. Advanced: automatic face censoring in video

First there was Huo, then Li Yundi; the post-production editors of a hit variety show have been kept busy mosaicking faces out. Here we help those editors take the pressure off by letting a program apply the censoring automatically!

Here's a short clip from the original show:

The censored video looks like this:

This is implemented mainly with FFmpeg and OpenCV (complete code source: Jack Cui); no need to reinvent the wheel:

def mask_video(input_video, output_video, mask_path='mask.jpg'):
    # Load the mask image
    mask = cv2.imread(mask_path)
    # Open the video to read its properties
    cap = cv2.VideoCapture(input_video)
    # Property indexes for FPS, width, and height
    CV_CAP_PROP_FPS = 5
    CV_CAP_PROP_FRAME_WIDTH = 3
    CV_CAP_PROP_FRAME_HEIGHT = 4
    v_fps = cap.get(CV_CAP_PROP_FPS)
    v_width = cap.get(CV_CAP_PROP_FRAME_WIDTH)
    v_height = cap.get(CV_CAP_PROP_FRAME_HEIGHT)
    # Set the parameters for writing the output mp4
    size = (int(v_width), int(v_height))
    fourcc = cv2.VideoWriter_fourcc('m', 'p', '4', 'v')
    out = cv2.VideoWriter(output_video, fourcc, v_fps, size)

    # Encode the known face
    known_image = face_recognition.load_image_file("lyd2.png")
    known_encoding = face_recognition.face_encodings(known_image)[0]
    # Re-open the video for frame-by-frame processing
    cap = cv2.VideoCapture(input_video)
    while cap.isOpened():
        ret, frame = cap.read()
        if ret:
            # Face detection: each location is (top, right, bottom, left)
            face_locations = face_recognition.face_locations(frame)
            # Check every detected face
            for (top, right, bottom, left) in face_locations:
                unknown_image = frame[top-50:bottom+50, left-50:right+50]
                if unknown_image is None:
                    break
                try:
                    unknown_encoding = face_recognition.face_encodings(unknown_image)[0]
                except IndexError:  # no face found in the cropped region
                    break
                # Compare with the known face
                results = face_recognition.compare_faces(
                    [known_encoding], unknown_encoding)
                # It's the target face: apply the mask
                if results[0] == True:
                    mask = cv2.resize(mask, (right - left, bottom - top))
                    frame[top:bottom, left:right] = mask
            # Write the processed frame
            out.write(frame)
        else:
            break
    cap.release()
    out.release()

Code interpretation:

  • With OpenCV, open the video stream (cap = cv2.VideoCapture(input_video)) and read it frame by frame (while cap.isOpened())
  • Run face_recognition.compare_faces() on each frame; when the target face is recognized, resize the mask with cv2.resize() and paste it over the face region
  • Save the image stream by writing the video frame by frame with out.write(frame)
  • The new video is silent; the audio can be extracted from the original with FFmpeg and merged back in
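For the final audio step, one possible approach (an assumption, not the article's original code) is to copy the audio track from the source clip into the masked video with a single FFmpeg command, built here from Python:

```python
def merge_audio_cmd(original, silent_masked, output):
    """Build an ffmpeg command that takes the video stream from the
    masked (silent) file and the audio stream from the original clip,
    copying both without re-encoding."""
    return ["ffmpeg", "-y",
            "-i", silent_masked,   # input 0: masked video, no audio
            "-i", original,        # input 1: original clip with audio
            "-c", "copy",
            "-map", "0:v:0",       # video from input 0
            "-map", "1:a:0",       # audio from input 1
            output]

# hypothetical file names; run with subprocess.run(cmd, check=True)
cmd = merge_audio_cmd("input.mp4", "masked.mp4", "final.mp4")
print(" ".join(cmd))
```

Because `-c copy` avoids re-encoding, the merge completes in roughly the time it takes to copy the file.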

5. Problems and solutions

5.1 face_recognition has lower recognition accuracy for children and Asian faces

The official explanation is a lack of training data; the solution is to expand the corresponding training set. In addition, when localization of a particular face is poor, targeted deep learning can also improve recognition accuracy.

5.2 Faces change dynamically in video, leaving the censoring incomplete

When the video is long, the faces in it change considerably. In the demonstration we clearly found that distant camera angles and weak fill light hurt recognition, so we locate the face frame by frame and resize the mask adaptively. Manual assistance and correction are still needed for the best results.

5.3 Commercializing face recognition

Security, transportation, finance, and smart buildings are the common application fields, but the technology's natural Internet character can give birth to more creative products, such as the popular face-swapping videos and pet mood detectors. As the Internet of Things spreads, many more interesting face recognition applications will emerge.

6. Summary

This article used face_recognition to demonstrate how to build a face recognition service, and introduced the principles behind it: face localization, feature extraction, and deep learning. Finally, OpenCV was used to process the video frame by frame and apply the mask, making Li's face "disappear" from the show.

References:

face_recognition open source project (portal)

Modern Face Recognition with Deep Learning, Adam Geitgey

PyTorch Deep Learning Tutorial, Jack Cui