Recently I needed to hook a face recognition feature up to the Web side, built mainly on `navigator.mediaDevices` and Canvas. Here is a summary.
The page layout
The `video` tag displays the live camera stream, and the `canvas` tag is used to capture individual frames from that stream.
```html
<video id="video" width="100%" height="200" autoPlay>
  <track default kind="captions" srcLang="en" src="" />
  Your browser does not support the video tag.
</video>
<canvas id="canvas" width="420px" height="200px" />
```
Calling the Camera
```typescript
setupCamera = () => {
  const video: any = document.getElementById('video');
  // Ask the browser for camera access
  if (navigator.mediaDevices && navigator.mediaDevices.getUserMedia) {
    navigator.mediaDevices
      .getUserMedia({ video: { width: 420, height: 200 } })
      .then((stream) => {
        // Pipe the camera stream into the video element
        video.srcObject = stream;
        video.play();
      })
      .catch((err) => {
        console.log(`${err.name}: ${err.message}`);
      });
  }
};
```
Note that an iframe cannot access the camera directly. You need to set its `allow` attribute to `'microphone; camera; midi; encrypted-media;'`.
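For example, embedding the capture page in an iframe might look like this (the `src` URL is a placeholder):

```html
<iframe
  src="https://example.com/face-capture"
  allow="microphone; camera; midi; encrypted-media;"
></iframe>
```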
Use canvas to get each frame
Here the Base64 image produced by the canvas needs to be turned into a Blob object; the conversion itself will be explained in the next article.
```typescript
takePhoto = () => {
  // Draw the current video frame onto the canvas to take a snapshot
  const canvas: any = document.getElementById('canvas');
  const context = canvas.getContext('2d');
  const video = document.getElementById('video');
  context.drawImage(video, 0, 0, 420, 200);
  const data = canvas.toDataURL();
  // Convert the Base64 data URI into a Blob (covered in the next article)
  const newBlob = this.dataURItoBlob(data);
};
```
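The `dataURItoBlob` conversion used above can be sketched as follows. This is a minimal illustrative implementation, not necessarily the one from the follow-up article: it splits the data URI into its MIME header and Base64 payload, decodes the payload into bytes, and wraps them in a Blob.

```typescript
// Convert a Base64 data URI (e.g. from canvas.toDataURL()) into a Blob.
function dataURItoBlob(dataURI: string): Blob {
  const [header, base64] = dataURI.split(',');
  // Pull the MIME type out of the header, e.g. "data:image/png;base64"
  const mimeMatch = header.match(/data:(.*?);base64/);
  const mime = mimeMatch ? mimeMatch[1] : 'application/octet-stream';
  // Decode the Base64 payload into a binary string, then into raw bytes
  const binary = atob(base64);
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return new Blob([bytes], { type: mime });
}
```

The Blob can then be appended to a `FormData` object and uploaded to the recognition backend.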