I am participating in the "Spring Festival Creative Submission Contest"; see: Spring Festival Creative Submission Contest.

Preface

With the Spring Festival approaching, people would normally pay New Year's visits to relatives and friends. During the epidemic, however, gathering and visiting are discouraged, so sending New Year's greetings by phone has become the norm. Simply saying "Happy New Year, Kung Hei Fat Choi" doesn't fully express our wishes, so many people send cute and funny emoji stickers. To convey my sincere wishes, I wanted to use face recognition to swap the face in a sticker and synthesize a new one. Below is the resulting effect; it's quite fun, isn't it? The online demo and source code links are at the end of the article.

face-api.js

Although face recognition technology has been around for a long time, I had never worked with it, so I looked into technical solutions before building this. As a front-end developer who can only write a simple back-end, I wanted to rely on front-end technology as much as possible, and I found the face-api.js library. Looking at the features it provides, it was exactly what I needed, so I downloaded the official demo to study it. After cloning the project, go to the examples/examples-browser directory and run yarn or npm install to install the dependencies, then run npm run start to start the project and open it at localhost. Among the examples, faceExtraction is the face detection and extraction example we want, so the next step is to learn from it.

Upload a picture and get the face image

The official example uses jQuery + Node. I chose to build the project with Vite + Vue 3 + TS instead, and the development environment worked fine; however, there were errors when packaging, and I saw some errors related to WebGL.

After creating the project, add a control for uploading pictures; you can use a UI component library or write your own. In this example we only need the image's File object. The image is not actually uploaded to a server, so just grab it in the change event, then get an Image object with the faceapi.bufferToImage(imgFile) method.

// * Upload the image
const handleChange = async (file) => {
  const imgFile = file.file;
  isUpload.value = true; // show the upload loading state
  const img = await faceapi.bufferToImage(imgFile);
  imgUrl.value = img.src;
  isUpload.value = false;
  updateResults();
};
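
If you write the control yourself instead of using a UI component library, a minimal sketch could be a native file input whose change handler wraps the raw File to match the { file } shape that handleChange above expects. The input element id used here is an assumption.

// A sketch, assuming a native <input type="file" id="uploadInput" accept="image/*"> element
const uploadInput = document.getElementById('uploadInput');
uploadInput.addEventListener('change', (e) => {
  const rawFile = e.target.files && e.target.files[0];
  if (!rawFile) return;
  // Wrap the raw File so it matches the { file } shape that handleChange expects
  handleChange({ file: rawFile });
});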

Then you can call the face detection and extraction APIs to get the faces. The face detection model must be loaded before extracting face images, so the helper below returns the corresponding model object based on the detector that is selected.

import * as faceapi from 'face-api.js';

const SSD_MOBILENETV1 = 'ssd_mobilenetv1';       // SSD MobileNet V1 detection model
const TINY_FACE_DETECTOR = 'tiny_face_detector'; // Tiny face detector model

let selectedFaceDetector = SSD_MOBILENETV1;

export const getCurrentFaceDetectionNet = () => {
  if (selectedFaceDetector === SSD_MOBILENETV1) {
    return faceapi.nets.ssdMobilenetv1;
  }
  if (selectedFaceDetector === TINY_FACE_DETECTOR) {
    return faceapi.nets.tinyFaceDetector;
  }
};

export const isFaceDetectionModelLoaded = () => {
  console.log(getCurrentFaceDetectionNet());
  // The model is considered loaded once its params are available
  return !!getCurrentFaceDetectionNet().params;
};

Following the official example, the ssd_mobilenetv1 model is used here, as described in the face-api.js documentation on npm.
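
The getFaceDetectorOptions() helper used below is not shown in this article; modeled on the official example, a sketch could look like the following, where the minConfidence, inputSize, and scoreThreshold values are illustrative.

// A sketch of getFaceDetectorOptions, modeled on the official example
const minConfidence = 0.5;   // ssd_mobilenetv1: minimum detection confidence
const inputSize = 512;       // tiny_face_detector: input size (must be divisible by 32)
const scoreThreshold = 0.5;  // tiny_face_detector: minimum score threshold

export const getFaceDetectorOptions = () => {
  return selectedFaceDetector === SSD_MOBILENETV1
    ? new faceapi.SsdMobilenetv1Options({ minConfidence })
    : new faceapi.TinyFaceDetectorOptions({ inputSize, scoreThreshold });
};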

So when the application loads, we want to load the face detection model, using the return value of isFaceDetectionModelLoaded() to determine whether it has already been loaded. The face model weight files also need to be deployed along with the project.

onMounted(() => {
  // ! Initialize: load the face detection model
  if (!isFaceDetectionModelLoaded()) {
    getCurrentFaceDetectionNet().load();
  }
});
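
If the model weight files are deployed under a dedicated directory, loading can also be done explicitly with loadFromUri; the /models path below is an assumption about where the files are served.

// A sketch: explicitly load the model weights from a served directory (path is an assumption)
onMounted(async () => {
  if (!isFaceDetectionModelLoaded()) {
    await getCurrentFaceDetectionNet().loadFromUri('/models');
  }
});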

After the model is loaded, the face images can be extracted. The faceImages we get back is an array of canvas elements.

// * Get the face images
const updateResults = async () => {
  const isFace = isFaceDetectionModelLoaded();
  console.log(isFace);
  if (!isFace) {
    return;
  }
  const inputImgEl = document.getElementById('inputImg');
  const options = getFaceDetectorOptions();
  const detections = await faceapi.detectAllFaces(inputImgEl, options);
  // * Get the face data (an array of canvas elements)
  const faceImages = await faceapi.extractFaces(inputImgEl, detections);

  if (faceImages.length > 0) {
    displayExtractedFaces(faceImages);
  } else {
    message.warning('No face recognized');
    // Clear the previous results when no face is found
    const facesContainer = document.getElementById('facesContainer');
    facesContainer.innerHTML = '';
    mergeUrl.value = '';
  }
};
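
As a side note, if only one face is expected in the uploaded picture, face-api.js also provides detectSingleFace; a sketch of that variant, using the same inputs as above, could be:

// A sketch: single-face variant using faceapi.detectSingleFace
const detection = await faceapi.detectSingleFace(inputImgEl, options);
if (detection) {
  // extractFaces still expects an array of detections
  const faceImages = await faceapi.extractFaces(inputImgEl, [detection]);
  displayExtractedFaces(faceImages);
}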

Once you have the data, you can render the faces on the page. Because they are canvas elements, v-html cannot be used to render them, so the DOM append method is used here to add them to the page.

// * Render the face images
const displayExtractedFaces = (faceImages) => {
  const facesContainer = document.getElementById('facesContainer');
  facesContainer.innerHTML = '';
  faceImages.forEach((canvas) => facesContainer.append(canvas));
  convertCanvasToImage(faceImages[0]);
};
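
convertCanvasToImage is referenced above but not shown in the article; based on the canvas-to-data-URL conversion described later, a sketch might be:

// A sketch of convertCanvasToImage: turn the extracted face canvas into an Image
// via a Base64 data URL (how the result is stored afterwards is an assumption)
const convertCanvasToImage = (faceCanvas) => {
  const image = new Image();
  image.src = faceCanvas.toDataURL('image/png');
  return image;
};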

Use a face image and a background image to create a composite image

There are many ways to combine two images into one; here I draw both onto a canvas. Start by creating a new canvas and drawing the background onto it. Note that the background image must be drawn first and the face image second: the face image is relatively small, and if it were drawn first it would be covered by the background.

  // * Create the canvas
  const canvas = document.createElement('canvas');

  // * Create the background image
  const bgImg = new Image();
  bgImg.crossOrigin = 'Anonymous'; // set crossOrigin before src so it applies to the request
  bgImg.src = bg;

  bgImg.onload = () => {
    const width = bgImg.width;
    const height = bgImg.height;
    // * Set the canvas width and height to match the background image
    canvas.width = width;
    canvas.height = height;
    // ! Draw the background image first, then the face, so the face is not covered
    canvas.getContext('2d').drawImage(bgImg, 0, 0, width, height);
  };

Next, place the face image. Since the face is a canvas element, it first needs to be converted into a Base64 data URL.

  // Here `canvas` is the extracted face canvas; convert it to an Image via a Base64 data URL
  const image = new Image();
  const url = canvas.toDataURL('image/png');
  image.src = url;

After converting it to an Image object, you can use canvas.getContext('2d').drawImage() to draw the face onto the composite canvas, adjusting its position and size so that it covers the head in the background image.
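
The article does not show the code for this step; a rough sketch is below. The variable names (faceImage for the Image created from the face canvas, canvas for the composite canvas created earlier) and the position and size values are assumptions to be adjusted for the actual background template.

// A sketch: draw the face onto the composite canvas once the Image has loaded
faceImage.onload = () => {
  const ctx = canvas.getContext('2d');
  // Position and size are illustrative; tune them so the face covers the head in the background
  ctx.drawImage(faceImage, 120, 40, 160, 160);
  // Assumption: store the composited result as a data URL for preview and download
  downloadUrl.value = canvas.toDataURL('image/png');
};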

Implement image download

The image has already been composited onto the canvas, and we know how to convert a canvas to a URL, so downloading is easy.

const downloadImg = () => {
  console.log(downloadUrl.value);
  if (!downloadUrl.value) {
    message.warning('No picture to download!');
    return;
  }
  const url = downloadUrl.value;          // * Get the image address
  const a = document.createElement('a');  // * Create an <a> element
  const event = new MouseEvent('click');  // * Simulate a mouse click event
  a.download = 'happy-newyear';           // * Set the download attribute of the <a> element
  a.href = url;                           // * Assign the image URL to the <a> element's href
  a.dispatchEvent(event);                 // * Trigger the click event to start the download
};

At this point, extracting the face and compositing a new emoji is complete. Send it to your friends and relatives along with your red envelopes.

Improving the project

The current project can only handle a single uploaded image, and there is only one template. I hope to add the following features later; feel free to make suggestions in the comments, and interested friends are welcome to join the development.

  • Support uploading multiple pictures and compositing all of the faces into a family photo.

  • Support uploading a custom background.

  • Support exporting GIF images.

Conclusion

Although this small project is simple, it still took a lot of time from technology selection to completion. Face recognition technology is now very mature and is already part of our daily lives; it is a technique worth learning. Through this project I got my foot in the door. Even though I stood on the shoulders of giants, I still gained experience and broadened my horizons.

Source code and online demo

Because the face recognition model files are relatively large and the server's performance is mediocre, the model download is slow; after the page appears, you need to wait a while before the face recognition feature can be used. Online experience address

Source code address