This article was first published on my blog: www.hansuku.com

This template is open source on GitHub; stars and feedback are welcome: github.com/Hansuku/bla…

Feature

Based on the WeChat camera API, this template uses BlazeFace to detect faces in each returned camera frame.

What is BlazeFace? It is Google's official face detection model, lightweight and easy to run on mobile devices.

How to run the demo

Download the repository and import the project into WeChat developer tools. For the AppID you can use a test account. The code is fairly simple, with comments in the key places.

How to incorporate BlazeFace into your project

Apply for the TensorFlow.js plugin via the link, then declare it in app.json, changing the version number to the latest one:

{... "Plugins ": {"tfjsPlugin": {"version": "0.1.0", "provider":" wx6AFed118D9e81DF9 "}}... }Copy the code

Initialize the project with npm init (skip this if you already have a package.json), add the following to dependencies, and run npm i:

"@tensorflow-models/blazeface": {"@tensorflow-models/ blazeFace ": "2.0.1", "@tensorflow/ TFJS-backend ": "2.0.1", "@tensorflow/ TFJS-backend-webgl ": "2.0.1", "@tensorflow/ TFJs-back-CPU ": "2.0.1", "fetch - wechat" : "0.0.3}"Copy the code

Or you can directly run npm i @tensorflow-models/blazeface @tensorflow/tfjs-converter @tensorflow/tfjs-core @tensorflow/tfjs-backend-webgl @tensorflow/tfjs-backend-cpu fetch-wechat

Register tf in the mini program's entry file app.js:

const fetchWechat = require('fetch-wechat');
const tf = require('@tensorflow/tfjs-core');
const webgl = require('@tensorflow/tfjs-backend-webgl');
const plugin = requirePlugin('tfjsPlugin');

App({
  onLaunch: function () {
    plugin.configPlugin({
      // polyfill fetch function
      fetchFunc: fetchWechat.fetchFunc(),
      // inject tfjs runtime
      tf,
      // inject webgl backend
      webgl,
      // provide webgl canvas
      canvas: wx.createOffscreenCanvas()
    });
    ...

Go to WeChat developer tools – menu bar – Tools – Build npm, and remember to check "Use npm module" in the local settings. Normally everything would now be ready, but since BlazeFace is written by Google and its model files are hosted on TF Hub, depending on your network it is very likely that you will not be able to download the BlazeFace model. You can try this link: tfhub.dev/tensorflow/…

If you cannot access the model.json file, or find that you cannot use it later, open the \miniprogram_npm\@tensorflow-models\blazeface\index.js file and scroll to around line 12:

// const BLAZEFACE_MODEL_URL = 'https://tfhub.dev/tensorflow/tfjs-model/blazeface/1/default/1';
async function load({ maxFaces = 10, inputWidth = 128, inputHeight = 128, iouThreshold = 0.3, scoreThreshold = 0.75 } = {}) {
  // const blazeface = await tfconv.loadGraphModel(BLAZEFACE_MODEL_URL, { fromTFHub: true });
  const blazeface = await tfconv.loadGraphModel('https://cdn.hansuku.com/tensorflow/model.json');
  const model = new face_1.BlazeFaceModel(blazeface, inputWidth, inputHeight, maxFaces, iouThreshold, scoreThreshold);
  return model;
}

You can find the model.json and group1-shard1of1.bin files under the model directory of this project. Upload these two files to your OSS and keep them in the same directory (model.json references the .bin weights file by relative path), then point the model.json address above to your OSS.

Then you can have fun writing code.

Usage

import * as blazeface from '@tensorflow-models/blazeface';
let tensorModel = null

Page({
  async onReady () {
    tensorModel = await blazeface.load()
  },
})

This loads the BlazeFace model. I recommend declaring the tensorModel object outside the Page scope, as above, so that you can use it in later methods.
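blazeface.load also accepts the same options you saw in the index.js snippet earlier. A minimal sketch of passing them (the values here are purely illustrative, not recommendations):

// inside an async method such as onReady; option names match
// the load() signature shown earlier, values are illustrative
tensorModel = await blazeface.load({
  maxFaces: 1,          // only look for a single face
  scoreThreshold: 0.8   // discard low-confidence detections early
})

With the model loaded, pass each camera frame to it: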

// frame comes from the camera's onCameraFrame callback (see the sketch below)
const imgData = { data: new Uint8Array(frame.data), width: frame.width, height: frame.height }
const returnTensors = false
const flipHorizontal = false
const annotateBoxes = false
const predictions = await tensorModel.estimateFaces(imgData, returnTensors, flipHorizontal, annotateBoxes)

The tensorModel.estimateFaces method accepts HTMLImageElement, HTMLVideoElement, canvas, and ImageData objects, but in a mini program, as you know, we can only pass in canvas and ImageData objects. (We'll just have to make do.)
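For completeness, here is a minimal sketch of where the frame object above can come from, assuming your WXML has a camera component; wx.createCameraContext and onCameraFrame are the standard mini program camera APIs:

// assumes a <camera> component in the page's WXML
const cameraContext = wx.createCameraContext()
let busy = false
const listener = cameraContext.onCameraFrame(async (frame) => {
  if (!tensorModel || busy) return // wait for the model; skip frames while inferring
  busy = true
  const imgData = { data: new Uint8Array(frame.data), width: frame.width, height: frame.height }
  // args: input, returnTensors, flipHorizontal, annotateBoxes
  const predictions = await tensorModel.estimateFaces(imgData, false, false, false)
  // ...do something with predictions...
  busy = false
})
listener.start()

The busy flag is a simple way to keep slow inferences from piling up as new frames arrive.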

The meanings of the other parameters:
  • returnTensors: whether to return raw tensor data
  • flipHorizontal: whether the image needs to be flipped horizontally. This is mainly for the front camera on mobile, where the captured image is mirrored; set it according to your actual needs
  • annotateBoxes: whether to return facial landmark data; if disabled, the returned data will not contain the landmarks field
The return value:

predictions is an array: if its length is 0, no faces were detected; otherwise the length is the number of faces detected. Each element describes a face's position in the current frame as offsets from the origin (0, 0), in the following format:

[
  {
    topLeft: [232.28, 145.26],
    bottomRight: [449.75, 308.36],
    probability: [0.998],
    landmarks: [
      [295.13, 177.64], // right eye
      [382.32, 175.56], // left eye
      [341.18, 205.03], // nose
      [345.12, 250.61], // mouth
      [252.76, 211.37], // right ear
      [431.20, 204.93]  // left ear
    ]
  }
]
  • topLeft: the upper-left vertex of the current face's position
  • bottomRight: the lower-right vertex of the current face's position
  • landmarks: the positions of the facial features, in the same order as the comments above (PS: you can draw these on a canvas to reproduce the BlazeFace demo image from the official website; see the sketch after this list)
  • probability: recognition confidence, from 0 to 1; the higher, the more confident. It can be used to filter out false positives, for example only treating a detection above 0.9 as a face
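
To make the last two bullets concrete, here is a minimal sketch of filtering by probability and drawing the results, assuming ctx is a canvas 2D rendering context you have obtained elsewhere in the page:

predictions
  .filter(p => p.probability[0] > 0.9) // keep only confident detections
  .forEach(p => {
    const [x1, y1] = p.topLeft
    const [x2, y2] = p.bottomRight
    ctx.strokeRect(x1, y1, x2 - x1, y2 - y1) // face bounding box
    // draw a small square on each landmark (eyes, nose, mouth, ears)
    p.landmarks.forEach(([x, y]) => ctx.fillRect(x - 2, y - 2, 4, 4))
  })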