Background

Compared with Web development, native apps are often cited as having two advantages: 1. The interactive experience is smoother. 2. Hardware support is friendlier. As HTML5 evolves, we can do more and more things in the browser that used to require native code, such as video recording.

The demo shows a red ball rolling back and forth on a canvas. Click the "Start recording" button and the video starts recording; then click the "Download video" button, recording stops, and a WebM-format video file is downloaded locally.

Technical implementation

Here I share a complete solution covering front-end recording + server-side storage + download (the demo above is a simplified version with no server storage step). The development process can be divided into three steps: 1. record the video; 2. upload it for persistent storage; 3. download the video. The front-end flow is summarized in the figure below.


The implementation steps are described in detail below.

Record video

First of all, some research turned up an HTML5 API well suited to recording: MediaRecorder. An excerpt from the official documentation:

The MediaStream Recording API provides an interface for easily recording media; it is used by calling the MediaRecorder() constructor to create an instance. A new MediaRecorder object is created for a specified MediaStream. Supported configuration options include setting the container's MIME type (for example "video/webm" or "video/mp4") and the audio and video bit rates, or both.

As the introduction above shows, as long as you can obtain a MediaStream for the object to be recorded (this article records a canvas; sources that can currently provide a MediaStream include video, audio, and camera input), recording is easy to achieve.

Then I found the canvas API below, which can be used to get the MediaStream we need from the canvas:

```javascript
MediaStream = canvas.captureStream(frameRate);
```

This completes the recording pipeline: canvas → MediaStream → MediaRecorder → video. Sample code for the entire recording is as follows:

```javascript
let allChunks = [];
let canvas = document.getElementById("canvasId");
let stream = canvas.captureStream(60); // record at 60 FPS
let recorder = new MediaRecorder(stream, {
  mimeType: 'video/webm; codecs=vp9',
});
// Callback fired as recorded canvas data becomes available
recorder.ondataavailable = e => {
  allChunks.push(e.data);
};
recorder.start(10);
```

Another thing to note is that we record video in WebM format; MP4 is currently not supported even in the latest version of Chrome. You can check support through the following API:

```javascript
MediaRecorder.isTypeSupported('video/webm'); // true
MediaRecorder.isTypeSupported('video/mp4');  // false
```
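Building on the check above, a small helper can probe a list of candidate containers and fall back gracefully. The helper and candidate list here are illustrative sketches, not part of the original demo:

```javascript
// Return the first MIME type the predicate reports as supported.
// In the browser you would pass MediaRecorder.isTypeSupported as the
// predicate; the helper itself is a hypothetical convenience.
function pickSupportedMimeType(candidates, isSupported) {
  for (const type of candidates) {
    if (isSupported(type)) {
      return type;
    }
  }
  return ''; // empty: let MediaRecorder choose its own default
}

// Browser usage (sketch):
// const mimeType = pickSupportedMimeType(
//   ['video/webm; codecs=vp9', 'video/webm', 'video/mp4'],
//   MediaRecorder.isTypeSupported
// );
// const recorder = new MediaRecorder(stream, mimeType ? { mimeType } : {});
```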

Upload & Persistent storage

Uploading is relatively straightforward: just upload the allChunks data mentioned above. From a business perspective I had two concerns: 1. For long recordings allChunks grows very large, and uploading it in one piece is time-consuming. 2. If something unexpected happens during recording (a refresh, or navigating away via a link), the video cannot be saved.

For these two scenarios I simply adopted Blob slice uploading: once recording starts, a timer sends the incremental video slice to the back end every 10 seconds. The front-end sample code is as follows:

```javascript
function sendNewChunks() {
  let start = 0;
  let iterationIndex = 0;
  sendTimer = window.setInterval(() => {
    let allBlob = new Blob(allChunks);
    let newBlob = allBlob.slice(start, allBlob.size);
    start = allBlob.size;
    if (newBlob.size > 0) {
      iterationIndex++;
      // HTTP POST the newBlob object here
    }
  }, 10000);
}
```
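The incremental-slice bookkeeping in the timer callback above can be factored into a small testable function using the same Blob APIs (Blob is also global in Node 18+). The `createChunkSender` name and the `uploadChunk` callback are illustrative stand-ins for the HTTP POST step:

```javascript
// Returns a "tick" function: each call slices off only the bytes recorded
// since the previous call and hands them to uploadChunk. Nothing is sent
// when no new data has arrived.
function createChunkSender(allChunks, uploadChunk) {
  let start = 0;
  return () => {
    const allBlob = new Blob(allChunks);
    const newBlob = allBlob.slice(start, allBlob.size);
    start = allBlob.size;
    if (newBlob.size > 0) {
      uploadChunk(newBlob);
    }
  };
}

// Browser wiring (sketch; '/upload' is a hypothetical endpoint):
// const tick = createChunkSender(allChunks,
//   blob => fetch('/upload', { method: 'POST', body: blob }));
// sendTimer = window.setInterval(tick, 10000);
```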

In my scenario the back end uses Node.js, so the Node layer can call the Alibaba Cloud OSS API directly to upload:

```javascript
let result = await client.put('object-name', Buffer.from('hello world'));
```

One thing to note here is that the OSS Node.js API requires the upload data in Buffer format. After trying various approaches, the flow that worked end to end was: first convert the Blob video slice into a Uint8Array on the front end, pass it to Node.js, and then convert it into Buffer format via Buffer.from(dataArray).
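The conversion chain just described can be sketched as follows; the `toOssBuffer` helper name is an assumption, and the transport between browser and Node is omitted:

```javascript
// Browser side (sketch): turn the recorded Blob slice into a plain
// byte array before sending it to Node.
// const dataArray = new Uint8Array(await newBlob.arrayBuffer());

// Node side: rebuild a Buffer that the OSS SDK accepts.
// Buffer.from copies the bytes out of any TypedArray.
function toOssBuffer(dataArray) {
  return Buffer.from(dataArray);
}
```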

Download the video

The download scheme is for the back end to continuously push small video slices to the front end over WebSocket; the front end then stitches the slices together and lets the browser turn the in-memory data into a downloadable file.

  • Why push slices over WebSocket instead of downloading them directly via a request?
    • If the back end stitched the whole video together and returned it in a single request, the request callback would risk timing out when the video is too large.
    • WebSocket makes it easier to implement a streaming download, and to show corresponding progress UI on the front end (a Thunder-style download experience).
  • Why let the browser produce the file?
    • I tried joining the Buffer-format slices stored in OSS on the back end, but did not find a suitable library. On the front end, window.URL.createObjectURL makes it simple to turn a cached Blob object into a downloadable file in the browser; the code is as follows:
```javascript
function downloadByBlob(blobObj) {
  const link = document.createElement('a');
  link.style.display = 'none';
  const downloadUrl = window.URL.createObjectURL(blobObj);
  link.href = downloadUrl;
  link.download = 'test.webm';
  document.body.appendChild(link);
  link.click();
  link.remove();
}
```
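Assembling the WebSocket slices before handing them to downloadByBlob can be sketched as below. The WebSocket URL and the "done" end-of-stream signal are illustrative assumptions, not the article's actual protocol:

```javascript
// Concatenate the received binary slices into one Blob with the right
// container MIME type, ready for downloadByBlob.
function assembleSlices(slices, mimeType) {
  return new Blob(slices, { type: mimeType });
}

// Browser wiring (sketch):
// const slices = [];
// const ws = new WebSocket('wss://example.com/video-download');
// ws.binaryType = 'arraybuffer';
// ws.onmessage = e => {
//   if (e.data === 'done') {
//     downloadByBlob(assembleSlices(slices, 'video/webm'));
//   } else {
//     slices.push(e.data);
//   }
// };
```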

Finally, some related knowledge points that I consulted along the way are listed below; I hope they are helpful to you!

Related knowledge points

    • Blob (Binary Large Object) represents the data content of a binary file. A Blob is an immutable, raw, file-like object. (The File interface is based on Blob: it inherits Blob's functionality and extends it to support files on the user's system.)
    • WebM is an open, royalty-free media file format proposed by Google. The WebM video format is a container format developed on the basis of the Matroska (MKV) container format; it holds VP8 video tracks and Ogg Vorbis audio tracks. Google has released its VP8 video coding technology under a BSD-like open-source license; Ogg Vorbis is already an open format.
    • Binary array correlation
      • Usage: the ArrayBuffer, TypedArray, and DataView objects are JavaScript's interface for manipulating binary data.
      • Differences: an ArrayBuffer represents raw binary data; a TypedArray is a view with a fixed element type; a DataView is a view whose element type is chosen at each read or write.
      • Purpose: these objects were originally designed for WebGL, the communication interface between the browser and the graphics card. To support the massive, real-time data exchange between JavaScript and the graphics card, the data must be binary rather than traditional text. Passing a 32-bit integer as text forces both the JavaScript side and the graphics card to convert formats, which is very time-consuming. With a mechanism to manipulate bytes directly, as in C, a 4-byte 32-bit integer can be fed to the graphics card in binary form, greatly improving script performance.
    • Compatibility correlation
      • MediaRecorder and MediaStream are still Working Draft APIs, so not all browsers support them. Chrome (49.0+) and Firefox (25.0+) have good support; for details, follow the links above and scroll to the bottom of the page to see the compatibility tables.
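The relationship between ArrayBuffer, TypedArray, and DataView described in the list above can be seen in a few lines; the byte values are arbitrary:

```javascript
// One raw 4-byte buffer, viewed two different ways.
const buffer = new ArrayBuffer(4);

// TypedArray: a view with a fixed element type (four unsigned bytes).
const bytes = new Uint8Array(buffer);
bytes[0] = 0x12;
bytes[1] = 0x34;
bytes[2] = 0x56;
bytes[3] = 0x78;

// DataView: the type and endianness are chosen per read.
const view = new DataView(buffer);
const bigEndian = view.getUint32(0, false);   // 0x12345678
const littleEndian = view.getUint32(0, true); // 0x78563412
```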