Author: Sheng Hai
Preface
Recently, while developing a library of Canvas-related utility functions, I needed to write test cases to ensure that each iteration does not change the final rendering result. After searching online with the keyword "Canvas E2E test", I found plenty of E2E testing approaches from other developers, but none of them actually verified the rendered output. Later, approaching the problem from another angle, I studied the E2E test cases in the three.js source code on GitHub and wrote up this article.
Background of exploration
After browsing most of the online articles about front-end E2E testing, I found that they boil down to two approaches:

- Pure data validation in a Node environment
  - Record the Canvas API calls and verify the execution log, e.g. jest-mock-canvas (a minimal sketch of this idea follows the list)
- Comparison-based verification in a browser environment
  - Requires real rendering results from a browser, e.g. Cypress
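To make the limitation of the first approach concrete, here is a hand-rolled spy illustrating the log-based idea. This is not jest-mock-canvas's actual API, just a sketch of the principle behind it: record which Canvas methods were called, without rendering anything.

```js
// A hand-rolled spy illustrating log-based verification (NOT the real
// jest-mock-canvas API): every method call on the fake context is recorded.
function createRecordingContext() {
  const calls = [];
  const ctx = new Proxy({}, {
    get(target, prop) {
      // every property access yields a recorder function
      return (...args) => calls.push({ method: prop, args });
    },
  });
  return { ctx, calls };
}

const { ctx, calls } = createRecordingContext();
ctx.fillRect(0, 0, 100, 100); // code under test "draws"

// This only proves fillRect was called with these arguments; it says
// nothing about the pixels that would actually appear on screen.
console.assert(calls[0].method === 'fillRect');
```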
Clearly, the first approach never verifies the actual rendered result, while the second is heavyweight and depends on a real browser, which makes later automated integration awkward.

After some reflection, I suspected my search approach was wrong: Canvas E2E testing ought to have a reasonably stable and smooth solution. Otherwise, how could the large Canvas-based visualization projects in the open-source community guarantee the quality of their rendered output?
E2E scheme of Three.js
When it comes to the community's large Canvas-based web projects, Three.js comes to mind first: it performs WebGL 3D rendering on top of Canvas.

Looking at the official three.js repository on GitHub (github.com/mrdoob/thre… /test/e2e), its E2E testing has two main steps:
1. Establish the correct expectation: create screenshots of the known-good rendering.
   - Start a static-file HTTP server and load the three.js example locally.
   - Visit the example with a headless browser and save a snapshot of the correct 3D output.
   - Shut down the HTTP server.
2. Compare differences before and after each iteration:
   - Start a static-file HTTP server and load the three.js example locally.
   - Visit the iterated example with a headless browser and diff its pixels against the known-good image. If more than 0.5% of the pixels differ, the test case fails.
   - Shut down the static HTTP server.
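For reference, at the 400 × 400 viewport used in the code below, a 0.5% tolerance means at most 400 × 400 × 0.005 = 800 differing pixels.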
NPM packages involved
- puppeteer: provides the headless browser
- jimp: provides image processing
- pixelmatch: provides pixel-level image comparison
- serve-handler: provides static file serving
Implement E2E test cases
1. Create the screenshot of the expected correct result
```js
const path = require('path');
const http = require('http');
const jimp = require('jimp');
const puppeteer = require('puppeteer');
const serveHandler = require('serve-handler');

const port = 3001;
const width = 400;
const height = 400;
const snapshotPicPath = path.join(__dirname, 'snapshot', 'expect.png');

async function main() {
  // serve the local example files statically
  const server = http.createServer((req, res) => serveHandler(req, res, {
    public: path.join(__dirname, '..', 'src'),
  }));
  server.listen(port, async () => {
    try {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.setViewport({ width: width, height: height });
      await page.goto(`http://127.0.0.1:${port}/index.html`);
      // capture the rendered page and persist it as the expected baseline
      const buf = await page.screenshot();
      await browser.close();
      server.close();
      (await jimp.read(buf)).scale(1).quality(100).write(snapshotPicPath);
      console.log('create snapshot of screen success!');
    } catch (err) {
      server.close();
      console.error(err);
      process.exit(-1);
    }
  });
  process.on('SIGINT', () => process.exit(1));
}

main();
```
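Running this script once (for example with `node test/screenshot.js`, assuming that file location) writes `snapshot/expect.png`; committing that file to the repository pins the expected baseline for later test runs.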
2. Perform the E2E test
```js
const fs = require('fs');
const path = require('path');
const http = require('http');
const assert = require('assert');
const jimp = require('jimp');
const pixelmatch = require('pixelmatch');
const puppeteer = require('puppeteer');
const serveHandler = require('serve-handler');

const port = 3001;
const width = 400;
const height = 400;
const snapshotPicPath = path.join(__dirname, 'snapshot', 'expect.png');

let server = null;
let browser = null;

describe('E2E Testing', function() {
  before(function(done) {
    // serve the local example files statically
    server = http.createServer((req, res) => serveHandler(req, res, {
      public: path.join(__dirname, '..', 'src'),
    }));
    server.listen(port, done);
    process.on('SIGINT', () => process.exit(1));
  });

  it('testing...', function(done) {
    this.timeout(1000 * 60);
    const expectDiffRate = 0.005; // allow at most 0.5% of pixels to differ
    new Promise(async (resolve) => {
      browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.setViewport({ width: width, height: height });
      await page.goto(`http://127.0.0.1:${port}/index.html`);
      const buf = await page.screenshot();
      const actual = (await jimp.read(buf)).scale(1).quality(100).bitmap;
      const expected = (await jimp.read(fs.readFileSync(snapshotPicPath))).bitmap;
      // reuse the actual bitmap as pixelmatch's diff output buffer
      const diff = actual;
      const failPixel = pixelmatch(expected.data, actual.data, diff.data, actual.width, actual.height);
      const failRate = failPixel / (width * height);
      resolve(failRate);
    }).then((failRate) => {
      assert.ok(failRate < expectDiffRate);
      done();
    }).catch(done);
  });

  after(function() {
    browser && browser.close();
    server && server.close();
  });
});
```
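Since the case uses `describe`/`it` with `before`/`after` hooks and `this.timeout`, it is written for a Mocha-style runner; assuming the file lives at `test/e2e.test.js`, it can be run with `npx mocha test/e2e.test.js`.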
3. Improve E2E testing to visualize differences
- Use jimp to save the pixels marked as different by the match into an image, making the differences easy to inspect.
- The first image shows the expected correct result.
- The second image, with a blue circle, is the incorrect result.
- The third image compares the expected and incorrect results, with red pixels marking the differences.
The modified E2E test code is as follows
```js
const fs = require('fs');
const path = require('path');
const http = require('http');
const assert = require('assert');
const jimp = require('jimp');
const pixelmatch = require('pixelmatch');
const puppeteer = require('puppeteer');
const serveHandler = require('serve-handler');

const port = 3001;
const width = 400;
const height = 400;
const snapshotPicPath = path.join(__dirname, 'snapshot', 'expect.png');
const diffPicPath = path.join(__dirname, 'snapshot', 'diff.png');

let server = null;
let browser = null;

describe('E2E Testing', function() {
  before(function(done) {
    server = http.createServer((req, res) => serveHandler(req, res, {
      public: path.join(__dirname, '..', 'src'),
    }));
    server.listen(port, done);
    process.on('SIGINT', () => process.exit(1));
  });

  it('testing...', function(done) {
    this.timeout(1000 * 60);
    const expectDiffRate = 0.005; // allow at most 0.5% of pixels to differ
    new Promise(async (resolve) => {
      browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.setViewport({ width: width, height: height });
      await page.goto(`http://127.0.0.1:${port}/index.html`);
      const buf = await page.screenshot();
      const actual = (await jimp.read(buf)).scale(1).quality(100).bitmap;
      const expected = (await jimp.read(fs.readFileSync(snapshotPicPath))).bitmap;
      // reuse the actual bitmap as pixelmatch's diff output buffer
      const diff = actual;
      const failPixel = pixelmatch(expected.data, actual.data, diff.data, actual.width, actual.height);
      const failRate = failPixel / (width * height);
      if (failRate >= expectDiffRate) {
        // persist the marked-up diff so the failure can be inspected visually
        (await jimp.read(diff)).scale(1).quality(100).write(diffPicPath);
        console.log(`create diff image at: ${diffPicPath}`);
      }
      resolve(failRate);
    }).then((failRate) => {
      assert.ok(failRate < expectDiffRate);
      done();
    }).catch(done);
  });

  after(function() {
    browser && browser.close();
    server && server.close();
  });
});
```
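As a side note, `pixelmatch` also accepts an options object as a sixth argument, e.g. `pixelmatch(expected.data, actual.data, diff.data, width, height, { threshold: 0.1 })`; raising the per-pixel color threshold can absorb minor anti-aliasing noise before it counts toward the fail rate.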
The revised pipeline is shown in the flowchart below.

[Flowchart: the complete E2E test pipeline]
Other points to note
- With the same Canvas code on different operating systems, the screenshots generated by Puppeteer will have a small number of differing pixels. To run E2E tests across multiple systems, keep separate E2E baselines per system (a sketch of one way to do this follows).
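One possible way to keep per-OS baselines (a sketch; the path layout is my assumption, not part of the original setup) is to resolve the snapshot path by `process.platform`, so each system compares against its own screenshot:

```js
const path = require('path');

// Resolve the snapshot per operating system, e.g.
// snapshot/darwin/expect.png, snapshot/linux/expect.png, snapshot/win32/expect.png
const snapshotPicPath = path.join(
  __dirname,
  'snapshot',
  process.platform, // 'darwin', 'linux', 'win32', ...
  'expect.png'
);
```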
Thinking about front-end visual quality testing
Having gone through all of the above and studied the full three.js test suite, I would summarize the quality verification of front-end visualization as covering the following three aspects:
- Code quality verification
  - Unit tests: fine-grained validation of modules
  - Code coverage for the unit tests
  - Common tools/NPM modules: Jest, Mocha, etc.
- Visual verification
  - E2E tests: basic pixel-level verification of rendered results
  - Common tools/NPM modules: puppeteer + pixelmatch, jest-image-snapshot
- Visual performance verification
  - Benchmarks: performance verification of the code (a sketch follows this list)
  - Common tools/NPM modules: benchmark.js
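For the benchmarking aspect, here is a minimal benchmark.js sketch (the body of the measured function is a placeholder for whatever Canvas code is being profiled):

```js
const Benchmark = require('benchmark');

const suite = new Benchmark.Suite();

suite
  .add('draw scene', function () {
    // ... the Canvas rendering code under test goes here ...
  })
  .on('cycle', function (event) {
    // prints e.g. "draw scene x 1,234 ops/sec ±0.50% (90 runs sampled)"
    console.log(String(event.target));
  })
  .run();
```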
Afterword
- After this round of exploration, I searched for approaches similar to puppeteer + pixelmatch and found the Jest plugin jest-image-snapshot, which provides comparable capabilities.
- In my spare time I have since added test cases like these to my hobby H5 image-processing toy project (github.com/chenshenhai…), and they run successfully on GitHub Actions, so I no longer have to worry about the quality of the rendered image-processing results. If you're interested, take a look at its test cases.