Master Dlib Face Recognition Development Tutorial
GitHub address –> github.com/xiaosongshi…
1. Background
Dlib is an open-source machine learning toolkit written in C++ with a Python development interface, comparable in spirit to TensorFlow and PyTorch. Its support for face feature extraction is particularly good: it ships with pretrained face feature extraction models ready for developers to use, which makes Dlib well suited to face recognition projects.
The face recognition discussed here is mainly face verification: given two face photos as input, the system outputs 0 or 1, indicating whether they show the same person. Face recognition development can be roughly divided into (1) building a face feature model and (2) using that feature model for verification (strictly speaking, steps such as face alignment also belong to stage 1). With Dlib we can use a pretrained face feature extraction model directly, so the main work becomes the verification step.
Face verification is really a similarity computation: photos of the same person yield high similarity, photos of different people yield lower similarity. Either cosine similarity or Euclidean distance can be used. Cosine similarity measures the angle between the feature vectors, while Euclidean distance measures their squared difference; both can represent the similarity (or distance) between two features.
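As a quick illustration (not part of the original tutorial code), here is how the two measures can be computed with NumPy on two toy vectors:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between the vectors: 1.0 means identical direction.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean_distance(a, b):
    # Square root of the sum of squared differences: 0.0 means identical.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.linalg.norm(a - b))

u = [1.0, 0.0, 1.0]
v = [1.0, 0.0, 0.0]
print(cosine_similarity(u, v))   # ≈ 0.707
print(euclidean_distance(u, v))  # 1.0
```

Higher cosine similarity means more alike; lower Euclidean distance means more alike.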
2. Environment construction
Installation is covered in my blog post [Deep Learning Tools] · Minimal Dlib Face Recognition Library Installation. A few points to note:
The post covers installation on Windows 10; other platforms can follow the same steps.
- Install Miniconda
Install the Dlib library with the conda command. Either Miniconda or Anaconda works; I use Miniconda because it is simple and takes little disk space. I recommend using the Tsinghua mirror: download the installer for your platform and install with python==3.6.
- To install dlib, be sure to open CMD as administrator (use sudo on Linux/Mac) and execute:
conda install -c conda-forge dlib
- The imageio library is also required; install it with:
conda install imageio
3. Development
1. Face detection and marking
face_test.py
import dlib
from imageio import imread

detector = dlib.get_frontal_face_detector()
win = dlib.image_window()

path = "f1.jpg"
img = imread(path)

# Detect faces; dets holds one rectangle per detected face
dets = detector(img)
print('%d faces detected' % len(dets))
for i, d in enumerate(dets):
    print('- %d: Left %d Top %d Right %d Bottom %d'
          % (i, d.left(), d.top(), d.right(), d.bottom()))

# Show the image with the detection boxes overlaid
win.clear_overlay()
win.set_image(img)
win.add_overlay(dets)
dlib.hit_enter_to_continue()
The code is straightforward: read the photo with imread, then run the detector. The result dets is a list with one entry per detected face; each entry exposes .left(), .top(), .right(), and .bottom(), giving the four borders of the face box. Finally, win.add_overlay(dets) draws the marked boxes on top of the original image.
The original photo
The output image
In fact, we can use this detector to build a simple application that counts the number of faces in pictures or videos.
2. Face feature point extraction
Face feature point extraction builds on the detection code from section 1.
import dlib
from imageio import imread

detector = dlib.get_frontal_face_detector()
win = dlib.image_window()

# Pretrained 68-point facial landmark model
predictor_path = 'shape_predictor_68_face_landmarks.dat'
predictor = dlib.shape_predictor(predictor_path)

path = "f2.jpg"
img = imread(path)
dets = detector(img)
print('%d faces detected' % len(dets))
for i, d in enumerate(dets):
    print('- %d: Left %d Top %d Right %d Bottom %d'
          % (i, d.left(), d.top(), d.right(), d.bottom()))
    shape = predictor(img, d)
    # Coordinates of landmark points 0 and 1
    print('Part 0: {}, Part 1: {}'.format(shape.part(0), shape.part(1)))

win.clear_overlay()
win.set_image(img)
win.add_overlay(dets)
win.add_overlay(shape)
dlib.hit_enter_to_continue()
This code adds a shape predictor on top of face_test.py, detecting 68 landmark points on each face; these appear as the blue lines in the output image.
The original image
The output image
Note that running this code requires the model file shape_predictor_68_face_landmarks.dat, which I have put on my GitHub for you to download and use.
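If you want to work with the landmarks numerically rather than just display them, the predictor's result can be converted to a coordinate array. This helper is my own sketch, not part of the original code:

```python
import numpy as np

def shape_to_points(shape):
    # shape is the full_object_detection returned by dlib.shape_predictor;
    # shape.parts() yields 68 points, each with .x and .y attributes.
    return np.array([(p.x, p.y) for p in shape.parts()], dtype=int)

# Usage inside the detection loop above:
# points = shape_to_points(shape)
# print(points.shape)   # (68, 2)
```

The resulting (68, 2) array is convenient for drawing, alignment, or further geometry.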
3. Face recognition verification
Building on step 2, we go further and extract each face as a feature vector, so that faces can be verified by comparing feature vectors. The comparison here uses Euclidean distance (normalized in the code below).
face_recognition.py
import dlib
from imageio import imread
import numpy as np

detector = dlib.get_frontal_face_detector()

predictor_path = 'shape_predictor_68_face_landmarks.dat'
predictor = dlib.shape_predictor(predictor_path)

# Pretrained ResNet face recognition model
face_rec_model_path = 'dlib_face_recognition_resnet_model_v1.dat'
facerec = dlib.face_recognition_model_v1(face_rec_model_path)

def get_feature(path):
    img = imread(path)
    dets = detector(img)
    print('%d faces detected' % len(dets))
    # Here we assume each image contains exactly one face
    shape = predictor(img, dets[0])
    face_vector = facerec.compute_face_descriptor(img, shape)
    return face_vector

def distance(a, b):
    # Squared Euclidean distance, normalized by the mean squared norm
    a, b = np.array(a), np.array(b)
    sub = np.sum((a - b) ** 2)
    add = (np.sum(a ** 2) + np.sum(b ** 2)) / 2.
    return sub / add

path_lists1 = ["f1.jpg", "f2.jpg"]
path_lists2 = ["Zhao Liying photo.jpg", "Zhao Liying test.jpg"]

feature_lists1 = [get_feature(path) for path in path_lists1]
feature_lists2 = [get_feature(path) for path in path_lists2]

print("feature 1 shape", feature_lists1[0].shape)

out1 = distance(feature_lists1[0], feature_lists1[1])
out2 = distance(feature_lists2[0], feature_lists2[1])
print("diff distance is", out1)
print("same distance is", out2)
The output
feature 1 shape (128, 1)
diff distance is 0.254767715912
same distance is 0.0620976363391
We can see that each face is extracted as a 128-dimensional vector, which you can think of as a point in a 128-dimensional coordinate system (just as x, y, z make up three dimensions, 128 dimensions are made up of 128 axes). What we do next is compute the distance between two feature vectors and set a suitable threshold: a distance below the threshold is judged to be the same person. Running this code requires the file dlib_face_recognition_resnet_model_v1.dat, which I have put on my GitHub for everyone to use.
From the test results above, the threshold should lie somewhere between 0.25 (different people) and 0.06 (the same person). I set it to 0.09 initially; choosing it properly requires much more data, and the criterion is to minimize misclassification.
Let's set the threshold to 0.09 and test whether the system can distinguish different people. Add the following code to face_recognition.py:
def classifier(a, b, t=0.09):
    # Same person if the distance is at most the threshold t
    return distance(a, b) <= t

print("F1 is Zhao Liying", classifier(feature_lists1[0], feature_lists2[1]))
print("F2 is Zhao Liying", classifier(feature_lists1[1], feature_lists2[1]))
print("Zhao Liying photo.jpg is Zhao Liying test.jpg",
      classifier(feature_lists2[0], feature_lists2[1]))
The output
F1 is Zhao Liying False
F2 is Zhao Liying False
Zhao Liying photo.jpg is Zhao Liying test.jpg True
As shown above, the basic face discrimination function works. To make it practical, the threshold and code need further tuning; the criterion is to choose a threshold that minimizes misclassification.
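The tuning loop described above can be sketched as a small search over candidate thresholds; the labeled distance samples below are made up purely for illustration (in practice you would collect many same-person and different-person pairs):

```python
def best_threshold(samples, candidates):
    # samples: list of (distance, same_person) pairs with ground-truth labels.
    # Returns the candidate threshold with the fewest misclassified pairs.
    def errors(t):
        return sum((d <= t) != same for d, same in samples)
    return min(candidates, key=errors)

# Illustrative data only: two same-person and two different-person distances
samples = [(0.05, True), (0.07, True), (0.20, False), (0.30, False)]
candidates = [i / 100 for i in range(1, 31)]
print(best_threshold(samples, candidates))  # 0.07
```

With real data, any threshold between the largest same-person distance and the smallest different-person distance gives zero error; a mid-point such as 0.09 leaves margin on both sides.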
Hope this helps
GitHub address –> github.com/xiaosongshi…