Animation series:

  • Web frame animation solution – APNG principle and implementation
  • Web frame animation solution – Lottie-Web source code anatomy
  • Web frame animation solution – WebGL transparent video rendering

Preface

As front-end developers, we all receive animation requirements sooner or later. Regular animations are fairly easy to implement, but with complex frame animations a CSS implementation easily ends up in the classic situation where the designer's mockup looks great and what we ship falls far short of it.

You might think of GIFs, but GIFs often have jagged edges that don't meet a designer's standards for polish. So we need animation solutions that reproduce the design 100% while keeping the animation smooth and performant. This article focuses on the APNG approach.

APNG (Animated Portable Network Graphics) is an animation format that extends the PNG format with support for animated images, 24-bit color, and 8-bit alpha transparency, which means animations have noticeably better quality.

Let’s take a look at the comparison between APNG and GIF:

APNG and GIF files are similar in size, but the APNG is much sharper than the GIF and free of dithering artifacts, thanks to its 24-bit color and 8-bit alpha transparency. Let's look at the main principles and usage of APNG.

1. APNG data format

1.1 PNG

Before looking at the APNG data format, it helps to understand the PNG data format first; after all, APNG is an extension of PNG. The PNG data format is as follows:

It is mainly divided into four parts:

  • PNG Signature is the file identifier used to verify that a file is a PNG. Its content is fixed:

    0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a
    // This sequence is the basis for the check later: if the first 8 bytes
    // match it, the file is a PNG
  • IHDR is the header chunk, containing basic information about the PNG image, such as its width and height

  • IDAT is the image data chunk, the core of the file, storing the actual image data

  • IEND is the trailer chunk, marking the end of the image
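All of these blocks (chunks) share one binary layout: a 4-byte big-endian length, a 4-byte ASCII type, the chunk data, and a 4-byte CRC. The parser code quoted later in this article calls a parseChunks helper that walks this layout but never shows it; the sketch below is a hypothetical reconstruction consistent with its call signature (type, bytes, off, length), not the library's actual code.

```javascript
// Walk PNG chunks: 4-byte big-endian length + 4-byte type + data + 4-byte CRC.
// The callback may return false to stop early (as parser.js does after acTL).
function parseChunks(bytes, callback) {
    let off = 8; // skip the 8-byte PNG signature
    let type;
    do {
        const length =
            (bytes[off] << 24) | (bytes[off + 1] << 16) |
            (bytes[off + 2] << 8) | bytes[off + 3];
        type = String.fromCharCode(...bytes.subarray(off + 4, off + 8));
        const res = callback(type, bytes, off, length);
        off += 12 + length; // 4 (length) + 4 (type) + data + 4 (CRC)
        if (res === false) break;
    } while (type !== 'IEND' && off < bytes.length);
}
```

Running it over a minimal PNG-shaped buffer visits the chunk types in file order.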

1.2 APNG

After understanding the PNG data format, let’s look at the APNG data format. As shown below:

As you can see, APNG adds three chunk types on top of PNG: acTL, fcTL, and fdAT.

  • acTL: the animation control chunk. It must precede the first IDAT chunk and tells the parser that this is an animated APNG; it contains the total number of frames and the number of loops. This is also the chunk to check when deciding whether a file is in APNG format.

  • fcTL: the frame control chunk, an ancillary chunk in the PNG spec. Every frame must have one. It contains the sequence number of the current frame and the frame's width and height.

  • fdAT: the frame data chunk, with the same meaning as IDAT: image data. Unlike IDAT, there is one per frame, since an animation has multiple frames. As the figure shows, the image data of the first frame is still stored in IDAT; from the second frame on it lives in fdAT. Because the first frame keeps the standard PNG layout, browsers that do not support APNG can degrade to a still image and show only the first frame.

To get a better feel for the APNG data format, interested readers can use the APNGb tool below to generate their own APNG animations. The following demo was generated from 4 clock images.

Effect: (if it does not animate here, see the demo above)

2. Performance

Having learned APNG's data format, and looking at the demo above, we can see that a clock animation stores 4 full frames of clock image data, which suggests an APNG file must be large; with dozens of frames, unimaginably so. A slow-loading animation makes for a bad user experience, and at that point the animation is pointless.

But the APNG designers were aware of this problem, so inter-frame optimization is built in:

As the 4 frames above show, the dial can be reused across frames. When generating an APNG, the encoder therefore computes the differences between frames and stores only the diff for each frame, instead of storing the whole frame. As shown below, frames 2, 3, and 4 no longer contain the dial.

The optimized APNG sizes are as follows; frames 2, 3, and 4 are much smaller than frame 1.

But this raises a question: how are frames 2, 3, and 4 drawn? How does the decoder know which pixels to reuse? This will be answered later.

3. apng-canvas source code analysis

In everyday use, an APNG is used like any other image, which is very simple:

<img src="xxx.png" />

But there are two problems with using the img tag directly:

  1. Compatibility: APNG browser support is acceptable nowadays; whether it is compatible enough depends on each team's user base.
  2. A significant bug: when Safari on iOS plays an APNG, the actual number of loops is loop + 1. For example, if an APNG has 10 frames and loop is 2, a total of 30 frames will be shown. This is a problem if the animation is meant to play only once.

So the apng-canvas library is generally recommended. It requires the following browser features to run:

  • Canvas
  • Typed Arrays
  • Blob URLs
  • requestAnimationFrame
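The library ships its own support-test.js for this. As a rough illustration of the idea (a sketch, not the library's actual check), detecting these four features can look like:

```javascript
// Returns true only when all four features apng-canvas needs are present.
// In a non-browser environment this safely returns false instead of throwing.
function checkSupport() {
    try {
        return typeof Uint8Array !== 'undefined' &&          // Typed Arrays
            typeof requestAnimationFrame === 'function' &&   // requestAnimationFrame
            typeof URL !== 'undefined' &&
            typeof URL.createObjectURL === 'function' &&     // Blob URLs
            typeof document !== 'undefined' &&
            !!document.createElement('canvas').getContext;   // Canvas
    } catch (e) {
        return false;
    }
}
```

In a non-browser environment, every missing global simply makes the function return false.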

Next, let's look at how the apng-canvas library plays an APNG correctly. There are three steps:

  1. Parse the APNG data format (follow the APNG picture format in Section 1.2).
  2. Sort out the parsed APNG data.
  3. Each frame is drawn by requestAnimationFrame according to the interval length of each frame.

The apng-canvas/src directory structure:

├─ animation.js      // APNG animation logic
├─ crc32.js          // CRC check used during decoding
├─ loader.js         // APNG download
├─ main.js           // entry point
├─ parser.js         // decoding
└─ support-test.js   // compatibility check

3.1 Parsing APNG Data format

The decoding process is as follows:

The APNG file is loaded using XMLHttpRequest; see src/loader.js, which needs no further explanation.

The decoding logic lives mainly in src/parser.js. First, the APNG resource is downloaded as an ArrayBuffer, and the binary data is inspected to verify that the file format is PNG and, further, APNG.
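The byte-level reads rely on small helpers used throughout parser.js: readDWord, readWord, and readByte. Their implementations are not shown in this article; functionally they read big-endian unsigned integers from the byte array, roughly as follows (a sketch based on PNG's big-endian integer encoding):

```javascript
// Big-endian readers over a Uint8Array; PNG stores all integers big-endian.
function readDWord(bytes, off) { // 32-bit unsigned
    return ((bytes[off] << 24) | (bytes[off + 1] << 16) |
            (bytes[off + 2] << 8) | bytes[off + 3]) >>> 0;
}
function readWord(bytes, off) { // 16-bit unsigned
    return (bytes[off] << 8) | bytes[off + 1];
}
function readByte(bytes, off) { // 8-bit unsigned
    return bytes[off];
}
```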

Verifying the PNG format means checking the PNG Signature chunk, as mentioned in section 1.1. The key implementation is as follows:

const bufferBytes = new Uint8Array(buffer);
const PNG_SIGNATURE_BYTES = new Uint8Array([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);
for (let i = 0; i < PNG_SIGNATURE_BYTES.length; i++) {
    if (PNG_SIGNATURE_BYTES[i] !== bufferBytes[i]) {
        reject('Not a PNG file (invalid file signature)');
        return;
    }
}

Verifying the APNG format means checking whether the file contains a chunk of type acTL, as mentioned in section 1.2. The code reads each chunk of the file in order, obtains its type, and checks it:

let isAnimated = false;
parseChunks(bufferBytes, (type) => {
    if (type === 'acTL') {
        isAnimated = true;
        return false;
    }
    return true;
});

if (!isAnimated) {
    reject('Not an animated PNG');
    return;
}

The process of decoding and organizing each frame's data is shown in the code below. parseChunks reads each chunk in turn, and depending on the chunk type, the relevant data (width and height, position, byte ranges) is extracted and stored.

let preDataParts = [],      // other ancillary chunks
    postDataParts = [],     // the IEND chunk
    headerDataBytes = null; // the IHDR chunk

const anim = new Animation();
let frame = null; // the frame currently being parsed

parseChunks(bufferBytes, (type, bytes, off, length) => {
    let delayN,
        delayD;
    switch (type) {
        case 'IHDR':
            headerDataBytes = bytes.subarray(off + 8, off + 8 + length);
            anim.width = readDWord(bytes, off + 8);   // canvas width
            anim.height = readDWord(bytes, off + 12); // canvas height
            break;
        case 'acTL':
            anim.numPlays = readDWord(bytes, off + 8 + 4); // The number of cycles
            break;
        case 'fcTL':
            if (frame) anim.frames.push(frame); // Last frame data
            frame = {}; // A new frame
            frame.width = readDWord(bytes, off + 8 + 4); // Width of the current frame
            frame.height = readDWord(bytes, off + 8 + 8); // Height of the current frame
            frame.left = readDWord(bytes, off + 8 + 12); // From the left side of the canvas
            frame.top = readDWord(bytes, off + 8 + 16); // From the top of the canvas
            delayN = readWord(bytes, off + 8 + 20);
            delayD = readWord(bytes, off + 8 + 22);
            if (delayD === 0) delayD = 100;
            frame.delay = 1000 * delayN / delayD; // The duration of the current frame
            anim.playTime += frame.delay; // Total playback time
            frame.disposeOp = readByte(bytes, off + 8 + 24);
            frame.blendOp = readByte(bytes, off + 8 + 25);
            frame.dataParts = [];
            break;
        case 'fdAT':
            // Image data
            if (frame) frame.dataParts.push(bytes.subarray(off + 8 + 4, off + 8 + length));
            break;
        case 'IDAT':
            // Image data
            if (frame) frame.dataParts.push(bytes.subarray(off + 8, off + 8 + length));
            break;
        case 'IEND':
            postDataParts.push(subBuffer(bytes, off, 12 + length));
            break;
        default:
            preDataParts.push(subBuffer(bytes, off, 12 + length));
    }
});

if (frame) anim.frames.push(frame); // Push the final frame
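One detail worth calling out from the fcTL branch above: the APNG spec stores each frame's delay as the fraction delay_num / delay_den in seconds, and a denominator of 0 must be treated as 100. Isolated into a small function:

```javascript
// Convert an fcTL (delay_num, delay_den) pair into milliseconds.
// Per the APNG spec, a denominator of 0 means 1/100-second units.
function frameDelayMs(delayN, delayD) {
    if (delayD === 0) delayD = 100;
    return 1000 * delayN / delayD;
}
```

So frameDelayMs(1, 1) is 1000 ms, and frameDelayMs(2, 0) is 20 ms.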

After each frame's width, height, position, and play time have been processed, the frame's dataParts are assembled in sequence into a standalone PNG image resource. The URL created by createObjectURL is stored on the frame for later drawing. Part of the code is omitted here; see the source if you are interested.

const url = URL.createObjectURL(new Blob(bb, { type: 'image/png' }));
frame.img = document.createElement('img');
frame.img.src = url;
frame.img.onload = function () {
    URL.revokeObjectURL(this.src);
    createdImages++;
    if (createdImages === anim.frames.length) { // decoding complete
        resolve(anim);
    }
};

The decoding details can be dry; if you want to dig deeper, see the NetEase Cloud Music column article on APNG decoding. Here the goal is mainly to clarify the overall flow.

3.2 Sorting out parsed APNG data

As you can see from section 3.1, the parsed data is stored in anim.frames in sequence. The result of the previously mentioned clock case parsing is as follows:

anim.frames = [
    // frame 1
    {
        blendOp: 0,
        delay: 1000,   // duration of this frame
        disposeOp: 0,
        height: 150,   // height
        img: img,      // image data for this frame
        left: 0,       // offset from the left of the canvas
        top: 0,        // offset from the top of the canvas
        width: 150     // width
    },
    // frame 2
    {
        blendOp: 1,
        delay: 1000,
        disposeOp: 0,
        height: 58,
        img: img,
        left: 46,
        top: 31,
        width: 73
    },
    // frame 3
    {
        blendOp: 1,
        delay: 1000,
        disposeOp: 2,
        height: 66,
        img: img,
        left: 46,
        top: 53,
        width: 73
    },
    // frame 4
    {
        blendOp: 1,
        delay: 1000,
        disposeOp: 0,
        height: 30,
        img: img,
        left: 31,
        top: 53,
        width: 89
    }
]

The above 4 frames of data correspond to the following 4 pictures respectively. As mentioned above, these are the optimized results:

Notice that only the first frame's width, height, left, and top cover the full canvas; frames 2, 3, and 4 describe smaller regions, because they have been optimized down to diffs by the encoder.
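Given frames in this shape, the playTime accumulated during parsing is just the sum of the per-frame delay values; for the clock's 4 frames of 1000 ms each, that is one 4-second loop:

```javascript
// Total play time of one loop is the sum of per-frame delays (in ms).
function totalPlayTime(frames) {
    return frames.reduce((total, f) => total + f.delay, 0);
}

// The 4 clock frames above each have delay: 1000
const clockFrames = [{ delay: 1000 }, { delay: 1000 }, { delay: 1000 }, { delay: 1000 }];
// totalPlayTime(clockFrames) → 4000
```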

So what do the blendOp and disposeOp fields mean? You may have noticed they were left uncommented above. These two fields are the answer to the question raised in section 2 (how to draw frames 2, 3, and 4, and how to know which pixels to reuse). The drawing step below makes this clear.

3.3 Draw each frame

APNG is drawn via requestAnimationFrame, which repeatedly calls renderFrame to draw each frame. The image, width, height, and position of each frame were obtained in the previous section. requestAnimationFrame normally fires at 60 fps (roughly every 16.7 ms), but each frame has its own display duration. So instead of drawing on every requestAnimationFrame tick, the code computes a nextRenderTime from the frame delays (with playTime, the total loop time, used to skip whole loops) and draws only when that time is reached, avoiding useless redraws that would hurt performance. The code is as follows:

const renderFrame = function (now) {
    if (nextRenderTime === 0) nextRenderTime = now;
    while (now > nextRenderTime + ani.playTime) nextRenderTime += ani.playTime;
    nextRenderTime += frame.delay;
};

const tick = function (now) {
    while (played && nextRenderTime <= now) renderFrame(now);
    if (played) requestAnimationFrame(tick);
};
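Stripped of the canvas details, the scheduling problem the code above solves is: given the elapsed time, which frame should currently be visible? As a pure-function sketch (not the library's code):

```javascript
// Which frame index should be visible after elapsedMs, given per-frame delays?
function frameIndexAt(elapsedMs, delays) {
    const loopTime = delays.reduce((a, b) => a + b, 0); // one full loop
    let t = elapsedMs % loopTime;
    for (let i = 0; i < delays.length; i++) {
        if (t < delays[i]) return i;
        t -= delays[i];
    }
    return 0; // unreachable given the modulo above
}
```

For delays of [100, 200] ms: at 50 ms frame 0 is visible, at 150 ms frame 1, and at 350 ms the animation has wrapped around to frame 0 again.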

The specific drawing is implemented using Canvas 2D API.

const renderFrame = function (now) {
    const f = fNum++ % ani.frames.length;
    const frame = ani.frames[f];
 
    if (prevF && prevF.disposeOp === 1) { // Clear the base map of the previous frame
        ctx.clearRect(prevF.left, prevF.top, prevF.width, prevF.height);
    } else if (prevF && prevF.disposeOp === 2) { // Revert to the base drawing before the last frame
        ctx.putImageData(prevF.iData, prevF.left, prevF.top);
    } // 0 is drawn directly

    const {
        left, top, width, height,
        img, disposeOp, blendOp
    } = frame;
    prevF = frame;
    prevF.iData = null;
    if (disposeOp === 2) { // Store the current drawing base map, which can be used to restore the data before the next frame is drawn
        prevF.iData = ctx.getImageData(left, top, width, height);
    }
    if (blendOp === 0) { // Clears the base map of the current frame area
        ctx.clearRect(left, top, width, height);
    }

    ctx.drawImage(img, left, top); // Draw the current frame picture

    // The drawing time of the next frame
    if (nextRenderTime === 0) nextRenderTime = now;
    nextRenderTime += frame.delay; // delay indicates the interval between frames
};

From the drawing code above, we can see that the two fields blendOp and disposeOp determine whether previously drawn frame data is reused. The two fields take the following values:

  • disposeOp: specifies what is done to the buffer before the next frame is drawn
    • 0: do nothing; the next frame's data is rendered onto the specified region of the canvas without clearing it
    • 1: clear the current frame's region of the canvas to fully transparent before rendering the next frame
    • 2: restore the current frame's region of the canvas to the previous frame's result before rendering the next frame
  • blendOp: specifies what is done to the buffer before the current frame is drawn
    • 0: clear the frame's region, then draw
    • 1: draw directly without clearing, compositing the image over what is already there

The drawing process of 4 frames of the corresponding clock is as follows:

  • Frame 1:

    • blendOp: 0 (clear the frame's region first, then draw)
    • disposeOp: 0 (leave the canvas untouched for the next frame)
  • Frame 2:

    • blendOp: 1 (draw directly over the region without clearing, compositing the image on top)
    • disposeOp: 0 (leave the canvas untouched for the next frame)
  • Frame 3:

    • blendOp: 1 (draw directly over the region without clearing, compositing the image on top)
    • disposeOp: 2 (restore the frame's region to the previous frame's result before rendering the next frame; the fourth image overlaps where the second image's red hand was, so after frame 3 the canvas must revert to the state left by frame 2)
  • Frame 4:

    • blendOp: 1 (draw directly over the region without clearing, compositing the image on top)
    • disposeOp: 0 (leave the canvas untouched for the next frame)

This concludes the walkthrough of how apng-canvas draws an APNG. Interested readers can explore the source code further.

4. Check APNG compatibility

In practice, you can check whether the browser supports APNG by using the following methods:

(function () {
    "use strict";
    var apngTest = new Image(),
        ctx = document.createElement("canvas").getContext("2d");
    apngTest.onload = function () {
        ctx.drawImage(apngTest, 0, 0);
        self.APNG = (ctx.getImageData(0, 0, 1, 1).data[3] === 0);
    };
    apngTest.src = "data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAACGFjVEwAAAABAAAAAcMq2TYAAAANSURBVAiZY2BgYPgPAAEEAQB9ssjfAAAAGmZjVEwAAAAAAAAAAQAAAAEAAAAAAAAAAAD6A+gBAbNU+2sAAAARZmRBVAAAAAEImWNgYGBgAAAABQAB6MzFdgAAAABJRU5ErkJggg==";
    // frame 1 (skipped on apng-supporting browsers): [0, 0, 0, 255]
    // frame 2: [0, 0, 0, 0]
}());
  1. Load a Base64-encoded image of 1×1 pixel containing 2 frames of data; the frames differ only in their last (alpha) value.

    // frame 1 (skipped on apng-supporting browsers): [0, 0, 0, 255]
    // frame 2: [0, 0, 0, 0]
  2. Draw it on a canvas and read back the pixel data with getImageData(), specifically the alpha channel data[3] (range 0–255). A browser that does not support APNG degrades to showing only the first frame, so data[3] equals 255. A browser that supports APNG ends up displaying frame 2, so data[3] equals 0, which indicates APNG support.

5. To summarize

  1. This article covered APNG usage, performance, pitfalls, compatibility detection, and an analysis of the apng-canvas source; it is mainly a summary of the author's personal learning.

  2. In practice, Safari on iOS automatically adds 1 to loop, so APNG is not suitable there for animations that should play only once.

  3. APNG files can be large since they store multiple frames, so the format is best suited to small animations. Where appropriate, you can render a static image underneath and replace it once the APNG has loaded; in that case the user at least sees the first frame while waiting.

  4. apng-canvas decoding is CPU-intensive; parsing while the page is rendering increases blocking time. The author experimented with moving the parsing into a Web Worker, which saved roughly 100 ms.
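A hedged sketch of that Web Worker idea (browser-only; the worker file name apng-worker.js and the parse function passed in are hypothetical stand-ins for the parser.js entry point):

```javascript
// Offload APNG parsing to a Web Worker when available; otherwise fall back
// to parsing on the main thread. `parseOnMainThread` stands in for the
// parser.js entry point.
function parseInWorker(buffer, parseOnMainThread) {
    if (typeof Worker === 'undefined') {
        // No Web Worker support (e.g. a non-browser environment): parse inline
        return Promise.resolve(parseOnMainThread(buffer));
    }
    return new Promise((resolve, reject) => {
        const worker = new Worker('apng-worker.js'); // hypothetical worker file
        worker.onmessage = (e) => { resolve(e.data); worker.terminate(); };
        worker.onerror = reject;
        // Transfer the ArrayBuffer (zero-copy) instead of structured-cloning it
        worker.postMessage(buffer, [buffer]);
    });
}
```

The worker itself would run the parsing code and post the result back with postMessage.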

6. Reference materials

The images and related information are from the following resources:

  • gist.github.com/eligrey/175…
  • wiki.mozilla.org/APNG_Specif…
  • littlesvr.ca/apng/system…
  • github.com/davidmz/apn…
  • Implementation principle of Web APNG playback
  • APNG those things