Yang Caifang
Preface
In developing Cloud Music's live streaming features, there is a recurring need to play animations. Each use case is a little different, and most small animations use the APNG format.
If the animation is displayed on its own, you can use an img tag to show the APNG directly, but there are compatibility bugs: some browsers do not support APNG playback, and on some Android models the animation fails to loop.
If you need to combine the APNG animation with other DOM elements animated via CSS3 animations, the APNG must be preloaded and its playback controlled. Preloading prevents APNG parsing time from throwing the two out of sync, and playback control lets users hook into moments such as successful parsing or the end of playback.
The apng-canvas library helps solve these problems. It uses Canvas to draw APNG animations, which covers more browsers, smooths over browser differences, and makes APNG playback easy to control. Below, we introduce in detail how APNG works, how apng-canvas implements it, and how WebGL rendering was added on top of apng-canvas.
Introduction to APNG
APNG (Animated Portable Network Graphics) is a bitmap animation format that extends the PNG format with support for animated images. With 24-bit true-color images and 8-bit alpha transparency, its animations are of better quality. APNG remains backward compatible with traditional PNG: when a decoder does not support APNG playback, the default image is displayed instead.
Besides APNG, common animation formats include GIF and WebP. Comparing browser compatibility, file size, and image quality gives the results below (the sizes shown are for one sample image; comparisons for other solid-color or colorful images can be found under "GIF vs APNG vs WebP" — in most cases APNG is smaller). Overall APNG comes out ahead, which is why we chose it.
APNG structure
APNG is an extension of the PNG format, so let's first look at how a PNG file is structured.
PNG structure composition
A PNG file mainly consists of a PNG Signature, IHDR, IDAT, IEND, and some auxiliary chunks. The PNG Signature verifies that the file format is PNG. IHDR is the header chunk, containing basic image information such as width and height. IDAT chunks store the actual image data; a PNG file may have one or more of them. IEND is the trailer chunk, marking the end of the image. Auxiliary chunks sit after IHDR and before IEND, and the PNG specification imposes no ordering restrictions on them.
The PNG Signature block is 8 bytes in size and contains the following contents:
0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a
The composition of each of the other blocks is basically as follows:
Four bytes give the length of the data, four bytes give the chunk type, the next length bytes hold the data (this part is absent when the length is 0), and the last four bytes are the CRC checksum.
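As an illustrative sketch (not code from apng-canvas; the helper name is hypothetical), the length and type of a chunk can be read from a Uint8Array like this:

```javascript
// Read one chunk header from a PNG byte stream (illustrative sketch).
// `bytes` is a Uint8Array, `off` is the chunk's starting offset.
function readChunkHeader(bytes, off) {
    // 4 bytes: big-endian data length
    const length = ((bytes[off] << 24) | (bytes[off + 1] << 16)
        | (bytes[off + 2] << 8) | bytes[off + 3]) >>> 0;
    // 4 bytes: ASCII chunk type
    const type = String.fromCharCode(
        bytes[off + 4], bytes[off + 5], bytes[off + 6], bytes[off + 7]
    );
    // Total chunk size = 4 (length) + 4 (type) + length (data) + 4 (CRC)
    return { length, type, size: 12 + length };
}
```

For example, the header of a standard IHDR chunk (data length 13) yields a total chunk size of 25 bytes.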
APNG structure
APNG adds three kinds of blocks, acTL, fcTL and fdAT, on the basis of PNG, whose composition structure is shown in the figure below:
- acTL: the animation control chunk, containing the number of frames and the number of loops (0 means loop forever)
- fcTL: the frame control chunk, an auxiliary chunk in the PNG specification; it contains the current frame's sequence number, image width and height, horizontal and vertical offsets, frame delay, and rendering modes (dispose_op and blend_op). Each frame has exactly one fcTL chunk.
- fdAT: the frame data chunk, containing a sequence number plus image data; compared with IDAT, it only adds the sequence number. Each frame can have one or more fdAT chunks. The sequence numbers of fdAT and fcTL chunks are shared and are used to detect, and optionally correct, ordering errors in the APNG.
The IDAT chunk holds the default image that APNG displays for backward compatibility. If an fcTL chunk appears before IDAT, the IDAT data is also treated as the first frame of the animation (as in the structure above). If there is no fcTL before IDAT, the first frame comes from the first fdAT, as shown in the figure below:
APNG playback is mainly controlled per frame through fcTL: dispose_op and blend_op determine how each frame is drawn.
- dispose_op specifies what to do to the output buffer before the next frame is drawn:
  - 0: render the new image data onto the canvas region directly, without clearing it
  - 1: clear the current frame's region of the canvas to the default background before rendering the next frame
  - 2: restore the current frame's region of the canvas to the previous frame's result before rendering the next frame
- blend_op specifies the operation on the buffer before the current frame is drawn:
  - 0: clear the current region before drawing
  - 1: draw into the current region without clearing, compositing the image over what is there
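The dispose modes can be illustrated with a toy one-row, single-channel "canvas". This is my own sketch of the semantics above, not code from apng-canvas or the spec:

```javascript
// Apply a dispose_op to a 1-D "canvas" (array of pixel values) before
// the next frame is drawn. `region` marks the current frame's area;
// `prevSnapshot` is the region's content saved before this frame was drawn.
function applyDispose(canvas, region, disposeOp, prevSnapshot) {
    const out = canvas.slice();
    if (disposeOp === 1) {
        // Clear the frame's region (modeled here as zeroing it out)
        for (let x = region.left; x < region.left + region.width; x++) out[x] = 0;
    } else if (disposeOp === 2) {
        // Restore the region to what it held before this frame
        for (let x = region.left; x < region.left + region.width; x++) {
            out[x] = prevSnapshot[x - region.left];
        }
    }
    // disposeOp === 0: leave the buffer untouched
    return out;
}
```

With a canvas of [9, 9, 9, 9] and a region of left 1, width 2: disposeOp 1 clears the middle to [9, 0, 0, 9], disposeOp 2 with snapshot [5, 6] restores [9, 5, 6, 9], and disposeOp 0 leaves the canvas unchanged.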
Apng-canvas implementation principle
With the APNG structure understood, we can analyze how apng-canvas is implemented, which splits into two parts: decoding and drawing.
APNG decoding
APNG decoding process is shown in the figure below:
First, the APNG resource is downloaded as an ArrayBuffer, and the binary data is manipulated through typed-array views. The file is then checked to confirm it is a PNG, and then an APNG. Next, each APNG chunk is split out, processed, and stored in turn. Finally, a standalone PNG is rebuilt for each frame from the PNG signature, header chunk, other auxiliary chunks, the frame's image data chunks, and the trailer chunk, and the resulting image resources are loaded. This process requires the browser to support Typed Arrays and Blob URLs.
The APNG file resource is downloaded through XMLHttpRequest; the implementation is straightforward and not repeated here.
Verify PNG format
Verifying the PNG Signature chunk means comparing the first eight bytes of the file, byte by byte, against the expected signature. The key implementation is as follows:
const bufferBytes = new Uint8Array(buffer); // buffer is the downloaded ArrayBuffer resource
const PNG_SIGNATURE_BYTES = new Uint8Array([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);
for (let i = 0; i < PNG_SIGNATURE_BYTES.length; i++) {
    if (PNG_SIGNATURE_BYTES[i] !== bufferBytes[i]) {
        reject('Not a PNG file (invalid file signature)');
        return;
    }
}
Verify the APNG format
Verifying the APNG format means determining whether the file contains a chunk of type acTL, so every chunk in the file must be read in turn to obtain its type and other data. Chunks are read according to the basic chunk layout described above, and the process is shown in the figure below:
off starts at 8, the size of the PNG Signature in bytes, and each chunk is then read in order. First, 4 bytes are read to obtain the chunk's data length, and 4 more bytes are read to obtain the chunk type. A callback function is then executed to process the chunk's data. Whether to continue reading the next chunk is decided from the callback's return value res, the chunk type, and the offset off (res indicates whether to keep reading; the default undefined means continue). If reading continues, off is advanced by 4 + 4 + length + 4 to the start of the next chunk and the loop repeats; otherwise reading ends. The key code is as follows:
const parseChunks = (bytes, callback) => {
    let off = 8;
    let res, length, type;
    do {
        length = readDWord(bytes, off);
        type = readString(bytes, off + 4, 4);
        res = callback(type, bytes, off, length);
        off += 12 + length;
    } while (res !== false && type !== 'IEND' && off < bytes.length);
};
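As a usage sketch, parseChunks can be exercised against a hand-built byte stream, with stand-in implementations of readDWord and readString (their real signatures in apng-canvas may differ):

```javascript
// Stand-in helpers matching how parseChunks uses them (assumed behavior):
// readDWord reads a big-endian 32-bit integer, readString reads ASCII text.
const readDWord = (bytes, off) =>
    ((bytes[off] << 24) | (bytes[off + 1] << 16) | (bytes[off + 2] << 8) | bytes[off + 3]) >>> 0;
const readString = (bytes, off, length) =>
    String.fromCharCode(...bytes.subarray(off, off + length));

const parseChunks = (bytes, callback) => {
    let off = 8; // skip the PNG signature
    let res, length, type;
    do {
        length = readDWord(bytes, off);
        type = readString(bytes, off + 4, 4);
        res = callback(type, bytes, off, length);
        off += 12 + length; // length + type + data + CRC
    } while (res !== false && type !== 'IEND' && off < bytes.length);
};

// 8 dummy signature bytes, then one empty IEND chunk (length 0 + 'IEND' + CRC).
const stream = new Uint8Array([
    0, 0, 0, 0, 0, 0, 0, 0,
    0, 0, 0, 0, 0x49, 0x45, 0x4e, 0x44, 0, 0, 0, 0,
]);
const seen = [];
parseChunks(stream, (type) => { seen.push(type); });
// seen is now ['IEND'], and the loop stopped at the trailer chunk
```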
parseChunks is called from the start of the file, and the callback returns false to stop reading as soon as a chunk with type === 'acTL' is found. The key code is as follows:
let isAnimated = false;
parseChunks(bufferBytes, (type) => {
    if (type === 'acTL') {
        isAnimated = true;
        return false;
    }
    return true;
});
if (!isAnimated) {
    reject('Not an animated PNG');
    return;
}
Process each chunk by type
The detailed structure of the core type blocks in the APNG structure is shown below:
Each chunk is read in turn by calling parseChunks, and each chunk type is processed and stored separately according to the data it contains and the corresponding offsets and byte sizes. The sequence number (sequence_number) is skipped when processing fcTL and fdAT chunks; sequence-number errors are apparently not handled. The key implementation is as follows:
let preDataParts = [],      // Store other auxiliary chunks
    postDataParts = [],     // Store the IEND chunk
    headerDataBytes = null; // Store the IHDR chunk
const anim = new Animation();
let frame = null; // Store the frame being parsed
parseChunks(bufferBytes, (type, bytes, off, length) => {
    let delayN,
        delayD;
    switch (type) {
        case 'IHDR':
            headerDataBytes = bytes.subarray(off + 8, off + 8 + length);
            anim.width = readDWord(bytes, off + 8);
            anim.height = readDWord(bytes, off + 12);
            break;
        case 'acTL':
            anim.numPlays = readDWord(bytes, off + 8 + 4); // Number of loops
            break;
        case 'fcTL':
            if (frame) anim.frames.push(frame); // Store the previous frame
            frame = {}; // A new frame
            frame.width = readDWord(bytes, off + 8 + 4);
            frame.height = readDWord(bytes, off + 8 + 8);
            frame.left = readDWord(bytes, off + 8 + 12);
            frame.top = readDWord(bytes, off + 8 + 16);
            delayN = readWord(bytes, off + 8 + 20);
            delayD = readWord(bytes, off + 8 + 22);
            if (delayD === 0) delayD = 100;
            frame.delay = 1000 * delayN / delayD;
            anim.playTime += frame.delay; // Accumulate the total playback time
            frame.disposeOp = readByte(bytes, off + 8 + 24);
            frame.blendOp = readByte(bytes, off + 8 + 25);
            frame.dataParts = [];
            break;
        case 'fdAT':
            if (frame) frame.dataParts.push(bytes.subarray(off + 8 + 4, off + 8 + length));
            break;
        case 'IDAT':
            if (frame) frame.dataParts.push(bytes.subarray(off + 8, off + 8 + length));
            break;
        case 'IEND':
            postDataParts.push(subBuffer(bytes, off, 12 + length));
            break;
        default:
            preDataParts.push(subBuffer(bytes, off, 12 + length));
    }
});
if (frame) anim.frames.push(frame); // Store the last frame
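One detail worth noting in the fcTL handling above: the frame delay is the fraction delay_num / delay_den in seconds, and a denominator of 0 is treated as 100. A self-contained sketch of that computation (the function name is mine):

```javascript
// Compute a frame delay in milliseconds from the fcTL delay fields.
// A delay_den of 0 is interpreted as 100, as in the code above.
function frameDelayMs(delayNum, delayDen) {
    if (delayDen === 0) delayDen = 100;
    return 1000 * delayNum / delayDen;
}

// e.g. frameDelayMs(1, 10) → 100 ms; frameDelayMs(5, 0) → 50 ms
```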
Assemble the PNG
After the chunks are split out, the PNG for each frame can be assembled. Loop through anim.frames and, for each frame, concatenate the shared chunks PNG_SIGNATURE_BYTES, headerDataBytes, preDataParts, the frame's dataParts, and postDataParts in order into a PNG image resource (bb). The image URL created with createObjectURL is stored on the frame for later drawing.
const url = URL.createObjectURL(new Blob(bb, { type: 'image/png' }));
frame.img = document.createElement('img');
frame.img.src = url;
frame.img.onload = function () {
    URL.revokeObjectURL(this.src);
    createdImages++;
    if (createdImages === anim.frames.length) { // Decoding complete
        resolve(anim);
    }
};
At this point decoding is complete, and animation resource preloading can be implemented by calling APNG.parseURL: the resource is loaded once after page initialization, and calling it again at render time directly returns the cached parsing result for drawing.
const url2promise = {};
APNG.parseURL = function (url) {
    if (!(url in url2promise)) {
        url2promise[url] = loadUrl(url).then(parseBuffer);
    }
    return url2promise[url];
};
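The pattern generalizes: caching the Promise itself, rather than the resolved value, means concurrent callers share the same in-flight request. A minimal standalone sketch, with loadResource as a hypothetical stand-in for the load-and-parse step:

```javascript
// Cache the Promise (not the resolved value) so that concurrent and
// repeated callers share one download/parse per URL.
const cache = {};
function cachedParse(url, loadResource) {
    if (!(url in cache)) {
        cache[url] = loadResource(url);
    }
    return cache[url];
}

// Usage: both calls return the same Promise, and the loader runs once.
let calls = 0;
const load = (url) => { calls++; return Promise.resolve(`parsed:${url}`); };
const p1 = cachedParse('a.png', load);
const p2 = cachedParse('a.png', load);
// p1 === p2, calls === 1
```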
APNG drawing
Once APNG decoding is complete, the animation can be drawn and played according to the animation control chunk and the frame control chunks. requestAnimationFrame is used to draw each frame onto the canvas in sequence. apng-canvas uses Canvas 2D rendering.
const tick = function (now) {
    while (played && nextRenderTime <= now) renderFrame(now);
    if (played) requestAnimationFrame(tick);
};
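The while loop is what keeps playback on schedule: if the page stalls, nextRenderTime can fall several frames behind now, and renderFrame runs repeatedly until the schedule catches up. A pure-function sketch of that catch-up logic (names are mine):

```javascript
// Count how many frames must be rendered to catch up to `now`, given the
// last scheduled render time and a fixed per-frame delay in milliseconds.
function framesToCatchUp(nextRenderTime, now, delay) {
    let count = 0;
    while (nextRenderTime <= now) {
        nextRenderTime += delay;
        count++;
    }
    return { count, nextRenderTime };
}

// e.g. framesToCatchUp(0, 95, 30) → { count: 4, nextRenderTime: 120 }
```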
Canvas 2D drawing mainly uses the Canvas 2D APIs drawImage, clearRect, getImageData, and putImageData.
const renderFrame = function (now) {
    // fNum records the total number of frames played across loops
    const f = fNum++ % ani.frames.length;
    const frame = ani.frames[f];
    // The animation has finished
    if (!(ani.numPlays === 0 || fNum / ani.frames.length <= ani.numPlays)) {
        played = false;
        finished = true;
        if (ani.onFinish) ani.onFinish(); // Added by the author to hook into the end of playback
        return;
    }
    if (f === 0) {
        // Clear the entire animation area before drawing the first frame
        ctx.clearRect(0, 0, ani.width, ani.height);
        prevF = null; // The previous frame
        if (frame.disposeOp === 2) frame.disposeOp = 1;
    }
    if (prevF && prevF.disposeOp === 1) { // Clear the previous frame's region
        ctx.clearRect(prevF.left, prevF.top, prevF.width, prevF.height);
    } else if (prevF && prevF.disposeOp === 2) { // Restore the region to its state before the previous frame was drawn
        ctx.putImageData(prevF.iData, prevF.left, prevF.top);
    } // disposeOp 0: draw directly
    const {
        left, top, width, height,
        img, disposeOp, blendOp
    } = frame;
    prevF = frame;
    prevF.iData = null;
    if (disposeOp === 2) { // Save the current region so it can be restored before the next frame
        prevF.iData = ctx.getImageData(left, top, width, height);
    }
    if (blendOp === 0) { // Clear the current frame's region
        ctx.clearRect(left, top, width, height);
    }
    ctx.drawImage(img, left, top); // Draw the current frame image
    // Schedule the next frame
    if (nextRenderTime === 0) nextRenderTime = now;
    nextRenderTime += frame.delay; // delay is the interval between frames
};
WebGL drawing
Besides Canvas 2D, WebGL can also be used for rendering, and its rendering performance is better than Canvas 2D's. However, WebGL has no API that draws an image directly, and the drawing code is relatively complex, so this article does not show the image-drawing code itself. For a WebGL equivalent of the drawImage API, see WebGL-DrawImage, two-dimensional matrices, and similar references. The key points of the drawing approach the author chose are described below.
Since WebGL has no APIs like getImageData and putImageData to read back or copy the current canvas's image data, multiple textures are created when WebGL is initialized, and the variable glRenderInfo records the historically rendered texture data.
// The number of textures
const textureLens = ani.frames.filter(item => item.disposeOp === 0).length;
// Historically rendered texture data
const glRenderInfo = {
    index: 0,
    frames: {},
};
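A quick usage check of that texture count against a hypothetical frame list:

```javascript
// Frames whose disposeOp is 0 stay visible under later frames, so each
// needs its own texture slot; the frame list below is hypothetical.
const frames = [
    { disposeOp: 0 }, { disposeOp: 1 }, { disposeOp: 0 }, { disposeOp: 2 },
];
const textureLens = frames.filter((item) => item.disposeOp === 0).length;
// textureLens === 2
```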
Each frame is rendered using multiple textures based on glRenderInfo.frames, and glRenderInfo is updated as rendering proceeds.
const renderFrame = function (now) {
    ...
    let prevClearInfo;
    if (f === 0) {
        glRenderInfo.index = 0;
        glRenderInfo.frames = {};
        prevF = null;
        prevClearInfo = null;
        if (frame.disposeOp === 2) frame.disposeOp = 1;
    }
    if (prevF && prevF.disposeOp === 1) { // The previous frame's region needs to be cleared
        const prevPrevClear = glRenderInfo.frames[glRenderInfo.index].prevF;
        prevClearInfo = [
            ...(prevPrevClear || []),
            prevF,
        ];
    }
    if (prevF && prevF.disposeOp === 0) { // Advance to a new texture slot; otherwise the previous frame's texture is replaced
        glRenderInfo.index += 1;
    }
    // disposeOp === 2
    glRenderInfo.frames[glRenderInfo.index] = { // Update glRenderInfo
        frame,
        prevF: prevClearInfo, // Used to clear the previous frame's region
    };
    prevF = frame;
    prevClearInfo = null;
    // Draw the images; region clearing happens inside the glDrawImage helper
    Object.entries(glRenderInfo.frames).forEach(([key, val]) => {
        glDrawImage(gl, val.frame, key, val.prevF);
    });
    ...
};
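The index bookkeeping above can be reproduced as a toy walk over the frames' disposeOp values: a frame that follows a disposeOp 0 frame gets a new texture slot (the previous frame must stay visible underneath it), while other frames overwrite the current slot. This is my own simplified model, not code from the article's implementation:

```javascript
// Map each frame's disposeOp to the texture slot index it would render into,
// incrementing the slot whenever the *previous* frame had disposeOp 0.
function textureIndices(disposeOps) {
    let index = 0;
    let prevOp = null;
    return disposeOps.map((op) => {
        if (prevOp === 0) index += 1;
        prevOp = op;
        return index;
    });
}

// e.g. textureIndices([0, 0, 1, 0]) → [0, 1, 2, 2]
```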
Summary
This article introduced the structure of APNG, how it is decoded, and how to render it with Canvas 2D and WebGL. I hope it is helpful to you; discussion is welcome.
References
- Animated PNG graphics
- apng-canvas
- APNG those things
- Binary array
This article is published by the NetEase Cloud Music front-end team. Unauthorized reproduction is prohibited. We hire front-end, iOS, and Android engineers all year round. If you're ready for a career change and you love cloud music, join us at grp.music-fe(at)corp.netease.com!