This article mainly shows how to obtain the similarity of two faces; the remaining examples follow the official documentation. I'm still learning, so if there are any mistakes, please point them out.

Face-api.js repository: github.com/justadudewh…

Example code: github.com/TheKiteRunn… — clone it, install, and run; if it helps you, a star would be appreciated. The examples are built on face-api.js.

My CSDN address: blog.csdn.net/c_kite/arti…

1. The result

Clone my example code and run it; the reference image used for comparison is reference.jpg in the images folder.

2. Development process

2.1. Install face-api.js

Get face-api.js: npm i face-api.js

If you want to run it on the Node side as a Windows user, you will run into a bunch of pitfalls. Here are some solutions (a minimal Node-side setup is sketched after this list):

  1. node-gyp installation instructions for Windows users: github.com/nodejs/node…
  2. TensorFlow.js Node.js Windows troubleshooting: github.com/tensorflow/…
  3. If downloading libtensorflow fails with an error like:
     events.js:173 throw er; // Unhandled 'error' event ^ Error: connect ETIMEDOUT 172.217.160.80:443
     the files being downloaded are defined by these constants:
     const BASE_URI = 'storage.googleapis.com/tensorflow/… ';
     const CPU_DARWIN = 'cpu-darwin-x86_64-1.12.0.tar.gz';
     const CPU_LINUX = 'cpu-linux-x86_64-1.12.0.tar.gz';
     const GPU_LINUX = 'gpu-linux-x86_64-1.12.0.tar.gz';
     const CPU_WINDOWS = 'cpu-windows-x86_64-1.12.0.zip';
     const GPU_WINDOWS = 'gpu-windows-x86_64-1.12.0.zip';
  4. If the installation hangs at "node-pre-gyp install --fallback-to-build", the canvas package is downloading a file from github.com/node-gfx/no… If there is no response for a long time, it is probably stuck on this download; you may need a proxy or a more reliable network connection.
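For reference, here is a minimal sketch of running face-api.js on the Node side, following the Node.js usage described in the face-api.js README. It assumes @tensorflow/tfjs-node and canvas installed successfully; the ./models and image paths are hypothetical.

// Node-side sketch (assumes @tensorflow/tfjs-node and canvas installed successfully)
require('@tensorflow/tfjs-node')               // use the native TensorFlow backend
const canvas = require('canvas')
const faceapi = require('face-api.js')

// face-api.js expects browser globals; patch them with the node-canvas implementations
const { Canvas, Image, ImageData } = canvas
faceapi.env.monkeyPatch({ Canvas, Image, ImageData })

async function run() {
  // './models' is a hypothetical local folder containing the downloaded model files
  await faceapi.nets.ssdMobilenetv1.loadFromDisk('./models')
  const img = await canvas.loadImage('./images/reference.jpg')
  const detections = await faceapi.detectAllFaces(img)
  console.log(detections.length)
}

run()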

2.2. Load model

After installing the package, you need to load the appropriate model for each feature you want to use: one model for face detection, one for expression recognition, one for face recognition, and so on. If you call an API directly without loading its model first, you will get a hint similar to the following:

Uncaught (in promise) Error: FaceLandmark68Net - load model before inference
// This means the FaceLandmark model has not been loaded yet

How to load a model:

await faceapi.loadTinyFaceDetectorModel('the model address')
// equivalent to
await faceapi.nets.tinyFaceDetector.load('the model address')

All the models can be found in the repository: github.com/justadudewh… You can host them on your own static server.
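For example, here is a minimal sketch of loading several models in parallel; the '/models' path is an assumption, point it at wherever you host the weight files:

// load all the models the demo needs in parallel
// '/models' is a hypothetical path on your own static server
await Promise.all([
  faceapi.nets.tinyFaceDetector.load('/models'),
  faceapi.nets.faceLandmark68Net.load('/models'),
  faceapi.nets.faceRecognitionNet.load('/models')
])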

2.3. Face detection

Detect all faces in videos or pictures

const detections = await faceapi.detectAllFaces(input)

Detect the face with the highest confidence score in the image

const detection = await faceapi.detectSingleFace(input)

By default, detectAllFaces and detectSingleFace use the SSD MobileNet V1 face detector. You can choose a different face detector by passing the corresponding options object:

// my demo uses the TinyFaceDetector here
const detections = await faceapi.detectAllFaces(input, new faceapi.TinyFaceDetectorOptions())
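Both detectors also take tuning parameters. A small sketch using the options exposed by face-api.js; the values shown are only examples:

// TinyFaceDetector: smaller and faster, slightly less accurate
const tinyOptions = new faceapi.TinyFaceDetectorOptions({
  inputSize: 416,      // size the input is resized to; smaller is faster
  scoreThreshold: 0.5  // minimum confidence for a detection to be kept
})

// SSD MobileNet V1: more accurate, heavier
const ssdOptions = new faceapi.SsdMobilenetv1Options({
  minConfidence: 0.5   // minimum confidence for a detection to be kept
})

const detections = await faceapi.detectAllFaces(input, tinyOptions)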

2.4. Detect 68 face landmarks

After face detection, we can also predict the 68 facial landmarks of each detected face, as shown below:

const detectionsWithLandmarks = await faceapi.detectAllFaces(input).withFaceLandmarks()
// or
const detectionWithLandmarks = await faceapi.detectSingleFace(input).withFaceLandmarks()
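To visualize the landmarks, recent versions of face-api.js also ship built-in drawing helpers (my demo uses its own drawLandmarks helper instead). A sketch assuming an overlay canvas stacked on top of the input element:

// assume <canvas id="overlay"> is positioned over the video/image
const canvas = document.getElementById('overlay')
const displaySize = { width: input.width, height: input.height }
faceapi.matchDimensions(canvas, displaySize)

// rescale the detections to the display size before drawing
const resized = faceapi.resizeResults(detectionsWithLandmarks, displaySize)
faceapi.draw.drawDetections(canvas, resized)
faceapi.draw.drawFaceLandmarks(canvas, resized)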

2.5. Calculate face descriptors

Note: Be sure to call the functions in the order described in the blog post

After face detection and face marker prediction, the face descriptor for each face can be calculated:

const results = await faceapi.detectAllFaces(input).withFaceLandmarks().withFaceDescriptors()
// or
const result = await faceapi.detectSingleFace(input).withFaceLandmarks().withFaceDescriptor()
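The descriptor itself is a 128-dimensional feature vector (a Float32Array in face-api.js). A quick sketch of inspecting the result from the call above:

// 'result' is the return value of detectSingleFace(...).withFaceLandmarks().withFaceDescriptor()
if (result) {
  console.log(result.descriptor.length)                   // 128
  console.log(result.descriptor instanceof Float32Array)  // true
}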

2.6. Calculate the similarity of the two faces

To perform face recognition, you can use faceapi.FaceMatcher to compare the reference face descriptor with the query face descriptor:

// create the reference image and compute its descriptor
// (make sure the image has actually finished loading before running detection on it)
const imgEle = document.createElement('img');
imgEle.src = '/reference.jpg'
const reference = await faceapi.detectSingleFace(imgEle, options).withFaceLandmarks().withFaceDescriptor()

// detect the face currently in the video and compute its descriptor
const result = await faceapi.detectSingleFace(videoEl, options).withFaceLandmarks().withFaceDescriptor()
if (result) {
    const faceMatcher = new faceapi.FaceMatcher(result)
    drawLandmarks(videoEl, $('#overlay').get(0), [result], withBoxes)

    if (reference) {
        // compare the reference descriptor against the face from the video
        const bestMatch = faceMatcher.findBestMatch(reference.descriptor)
        console.log(bestMatch)
    }
}

Under the hood, this calculates the Euclidean distance between the face feature vectors (descriptors). So, as shown in the screenshot, the smaller the _distance is, the better the two faces match. The threshold can be set to 0.4: below 0.4 counts as a successful match, while above 0.4 is a failure.
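If you want the matcher itself to apply that threshold, the FaceMatcher constructor accepts a distance threshold as its second argument. A sketch reusing the reference and result from the block above; the label 'me' is just an example:

// label the reference descriptor and build a matcher with a 0.4 distance threshold
const labeled = new faceapi.LabeledFaceDescriptors('me', [reference.descriptor])
const faceMatcher = new faceapi.FaceMatcher([labeled], 0.4)

// anything farther than 0.4 from every labeled descriptor is reported as 'unknown'
const bestMatch = faceMatcher.findBestMatch(result.descriptor)
console.log(bestMatch.label, bestMatch.distance)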

If you already have two face feature vectors, you can also use the face-api.js API to calculate the Euclidean distance directly:

const dist = faceapi.euclideanDistance([0, 0], [0, 10])
console.log(dist) // 10

Definition of Euclidean distance: a commonly used distance metric, it is the straight-line distance between two points in m-dimensional space. In two dimensions it is the ordinary distance between two points, d = sqrt((x1 - x2)^2 + (y1 - y2)^2); in three dimensions, d = sqrt((x1 - x2)^2 + (y1 - y2)^2 + (z1 - z2)^2).
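As a sanity check, here is a small sketch computing the same distance by hand for vectors of any dimension; the helper name euclideanDistance is just for illustration:

// plain implementation of the formula above for m-dimensional vectors
function euclideanDistance(a, b) {
  let sum = 0
  for (let i = 0; i < a.length; i++) {
    sum += (a[i] - b[i]) ** 2
  }
  return Math.sqrt(sum)
}

console.log(euclideanDistance([0, 0], [0, 10]))   // 10, same as faceapi.euclideanDistance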