Preface

Currently, in Halo front-end application quality monitoring, Lighthouse is used as a tool for qualitative inspection of first-screen performance problems. In the process of using Lighthouse, we extended its original capabilities through the Lighthouse Plugin mechanism to detect performance problems on both the Web side and the mini program side, and accumulated some practical experience. We hope this article can share that experience and be of some help to you.

Lighthouse overview

Lighthouse is an open source automated auditing tool that analyzes the performance metrics of Web applications and web pages with built-in audit modules, and provides first-screen scores and best-practice guidance.

Usage

There are four ways to use it; see the official instructions (github.com/GoogleChrom…) for details. How you start it matters, so choose the way that suits your own scenario.
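As a quick illustration, here is a minimal sketch of the Node API usage (assuming the lighthouse and chrome-launcher packages are installed; the URL is a placeholder):

const fs = require('fs');
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

(async () => {
  // Launch a headless Chrome that Lighthouse drives over the DevTools protocol
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const options = { logLevel: 'info', output: 'html', port: chrome.port };
  const runnerResult = await lighthouse('https://example.com', options);

  // runnerResult.report is the HTML report; runnerResult.lhr is the raw result object
  fs.writeFileSync('report.html', runnerResult.report);
  console.log('Performance score:', runnerResult.lhr.categories.performance.score * 100);
  await chrome.kill();
})();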

Module implementation

The overall architecture is shown below; details can be found in the Lighthouse module implementation. The Lighthouse Plugin mainly involves three modules, namely Gatherers, Audits, and Categories.

The detection report

The Lighthouse detection report displays five types of information by default:

  • Performance
  • Progressive Web App (PWA)
  • Accessibility
  • Best Practices
  • SEO (search engine optimization)

You can display only some of these categories by passing a different config.
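For example, a minimal custom config (an illustrative sketch; onlyCategories is a standard Lighthouse setting) that keeps only the Performance and SEO categories could look like this:

// custom-config.js
module.exports = {
  extends: 'lighthouse:default',
  settings: {
    // Run only these two categories; the other three are skipped
    onlyCategories: ['performance', 'seo'],
  },
};

It can then be passed to Lighthouse via --config-path=./custom-config.js on the command line, or as the third argument of the Node API.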

Getting started with Lighthouse Plugin development

Refer to the official documentation for a simple example. Developing a plugin can be divided into the following two steps:

First, create an NPM package containing plugin.js and package.json:

plugin.js:

module.exports = {
  // Here you can add your own audits (audit entries)
  audits: [{ path: 'lighthouse-plugin-example/test-audit.js' }],
  // Here you can set the category that groups the audit results
  category: {
    title: 'title',
    description: 'description',
    auditRefs: [{ id: 'test-id', weight: 1 }],
  },
};

package.json:

{
  "name": "lighthouse-plugin-example",
  "main": "plugin.js",
  "peerDependencies": { "lighthouse": "^5.6.0" },
  "devDependencies": { "lighthouse": "^5.6.0" }
}

Note: Lighthouse should be declared in peerDependencies to avoid it being downloaded twice.

Since the previous example references a new audit entry, we also need to create that file:

const { Audit } = require('lighthouse');

class TestAudit extends Audit {
  static get meta() {
    return {
      // Meta information, such as the title, failure description,
      // and the artifacts this audit depends on
      requiredArtifacts: ['devtoolsLogs'],
    };
  }

  static audit(artifacts) {
    // Lighthouse runs a series of gatherers; each gatherer collects its own
    // target information and exposes it to audits through `artifacts`
    return { score: 1 };
  }
}

module.exports = TestAudit;

Using the Lighthouse Plugin

If you use the command line, pass the plugin via the --plugins flag and run directly (the URL here is a placeholder):

lighthouse https://example.com --plugins=lighthouse-plugin-example

If you run Lighthouse as an NPM package, change the Lighthouse configuration to:

module.exports = {
  extends: 'lighthouse:default',
  plugins: ['lighthouse-plugin-example'],
  settings: {
    // ...
  },
};
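When running programmatically, a custom config like the one above is passed as the third argument of the lighthouse() call. A minimal sketch (the config path is hypothetical):

const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');
const config = require('./custom-config.js'); // the config shown above

(async () => {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  // Flags go second; the custom config (with the plugin enabled) goes third
  const { lhr } = await lighthouse('https://example.com', { port: chrome.port }, config);
  console.log(Object.keys(lhr.categories)); // the plugin's category should appear here
  await chrome.kill();
})();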

The Lighthouse Plugin in action on projects

One of the plans for Halo front-end quality monitoring was to use Lighthouse to run qualitative checks on first-screen performance across multiple terminals. The browser side is naturally supported; the difficulty is that mini programs cannot run directly in a browser, so Lighthouse cannot inspect them as-is. Fortunately, there are a host of open source tools, including Taro and AntMove, that can transform mini programs into browser-side applications. As a result, Lighthouse can be integrated into our engineering architecture, using the same toolchain to perform qualitative checks on both the browser side and the mini program side.

Not manufacturing requirements out of thin air is fundamental for the technical team of a commercial company. Under that overall goal, we had to consider how to check the performance of mini programs: can traditional Web page checks simply be moved over? The answer is no; Lighthouse's default set of rules cannot be copied directly onto mini programs. For mini programs running on the browser side, metrics such as FMP, FCI, and TTI can be compared horizontally, but the reliability of those results still needs to be examined. In addition, we need a set of analysis methods and tools that match the technical characteristics of mini programs, which means developing a corresponding Lighthouse Plugin with its own Gatherers, Audits, and Categories.

This screenshot shows some of the detection items for the Alipay mini program. On top of the traditional Web-side checks, many mini-program-specific checks are added (such as setData frequency and data volume, API call counts, synchronous/asynchronous calls, etc.). Lighthouse does not cover these indicators out of the box.

Customizing a plugin for the mini program

Taking a plugin that audits the number of Native API calls in the Alipay mini program as an example, the flow can be summarized as:

Business code injects collection logic + Gatherer collects data + Audit consumes data and calculates results

Injecting collection logic into business code

Define a log collection object, window.$$nativeCall; each call behavior is recorded by calling window.$$nativeCall.push:

window.$$nativeCall = {
  calls: [],
  push: function (type, callInfo) {
    this.calls.push({ timestamp: Date.now(), type, callInfo });
  },
};

function $$myProxy(my) {
  return new Proxy(my, {
    get(target, key) {
      const keyValue = target[key];
      // If the property is a function, wrap it so that its calls are reported
      if (typeof keyValue === 'function') {
        return $$myProxyFn(keyValue, target, key);
      }
      // Report an access to an API attribute
      window.$$nativeCall.push('api-attr-called', JSON.stringify({
        key: `my.${key}`,
      }));
      return keyValue;
    },
  });
}

function $$myProxyFn(fn, that, key) {
  return (function (...args) {
    // Report that an API method was called
    window.$$nativeCall.push('api-method-called', JSON.stringify({
      key: `my.${key}`,
      args,
    }));
    try {
      const result = fn.apply(this, args);
      // Report a successful API method call
      window.$$nativeCall.push('api-method-success-called', JSON.stringify({
        key: `my.${key}`,
        args,
        result,
      }));
      return result; // keep the original return value
    } catch (err) {
      // Report an API method call that threw an exception
      window.$$nativeCall.push('api-method-error-called', JSON.stringify({
        key: `my.${key}`,
        args,
        result: err,
      }));
      throw err; // keep the original error behavior
    }
  }).bind(that);
}

The sample code is fairly simple; the key point is that all call information is stored in window.$$nativeCall. This code needs to be injected into the mini program's business code by the build toolchain so that every Native API call in the mini program code is intercepted.
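How the snippet gets injected depends on your build setup. As one hypothetical illustration (not necessarily the toolchain used here), a webpack build could prepend it to every emitted chunk with BannerPlugin:

// webpack.config.js — hypothetical injection via webpack's BannerPlugin
const fs = require('fs');
const webpack = require('webpack');

module.exports = {
  // ...the rest of the browser-side mini program build config
  plugins: [
    new webpack.BannerPlugin({
      raw: true, // inject as executable code, not as a comment
      // native-call-collect.js contains the window.$$nativeCall snippet above
      banner: fs.readFileSync('./inject/native-call-collect.js', 'utf8'),
    }),
  ],
};

After injection, the global `my` object also needs to be replaced with `$$myProxy(my)` before any business code runs, so that every call goes through the Proxy.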

Customizing the audit module

Before performing audit analysis, we need a Gatherer to get the information collected by window.$$nativeCall.

const { Gatherer } = require('lighthouse');
class CustomNativeCall extends Gatherer {
  async afterPass(passContext) {
    const { driver } = passContext;
    const $$nativeCall = await driver.evaluateAsync('window.$$nativeCall');
    if ($$nativeCall) {
      return $$nativeCall.calls;
    }
    return [];
  }
}
module.exports = CustomNativeCall;

With the gatherer written, we now need the processing logic of the Audit. Here we take the count of Native API method calls (api-method-called) as an example:

const { Audit } = require('lighthouse');

class ApiMethodCalledAudit extends Audit {
  static get meta() {
    return {
      // plugin.js references this audit item by its id
      id: 'api-method',
      // title, description, and other meta fields omitted here
      requiredArtifacts: ['CustomNativeCall'],
    };
  }

  static audit(artifacts) {
    const { CustomNativeCall } = artifacts;
    const apiMap = {};
    CustomNativeCall.forEach((log) => {
      const {
        timestamp, // the timestamp is not used in this audit item
        type,
        callInfo,
      } = log;
      if (type === 'api-method-called') {
        const { key } = JSON.parse(callInfo);
        // Take the API method name and count the call
        apiMap[key] = apiMap[key] || 0;
        apiMap[key] += 1;
      }
    });
    const result = Object.keys(apiMap).reduce((res, apiKey) => {
      res.push({
        api: apiKey,
        count: apiMap[apiKey],
      });
      return res;
    }, []);
    return {
      score: 1,
      // Generate the detail table shown in the report
      details: Audit.makeTableDetails(ApiMethodCalledAudit.getHeadings(), result),
      // The headline shown with the result
      displayValue: `${result.length} Native API methods were called`,
    };
  }

  static getHeadings() {
    return [
      { itemType: 'text', key: 'api', text: 'API name' },
      { itemType: 'numeric', key: 'count', text: 'call count' },
    ];
  }
}

module.exports = ApiMethodCalledAudit;

Finally, we reference api-method-called-audit.js in our plugin.js:

module.exports = {
  audits: [{ path: 'lighthouse-plugin-example/api-method-called-audit.js' }],
  category: {
    title: 'title',
    auditRefs: [{ id: 'api-method', weight: 1 }],
  },
};

At this point, the whole pipeline of data reporting, collection, and consumption is connected.

The following is a summary of some of the log types and audit modules defined in our actual projects; an example of a collected record is sketched after the log type list.

Log types

  • api-attr-called: counts the number of times an API attribute is accessed
  • api-method-error-called: counts the number of API method calls that threw exceptions
  • api-method-called: counts the number of times an API method is called
  • api-method-success-called: counts the number of successful API method calls and their elapsed time
  • set-data: counts the number of setData calls and the data size
  • set-data-success: counts the elapsed time of setData calls
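For reference, given the push() implementation shown earlier, a single collected record (hypothetical example values) looks like this:

// One record in window.$$nativeCall.calls (hypothetical example values)
const exampleRecord = {
  timestamp: 1640966400000,  // Date.now() at call time
  type: 'api-method-called', // one of the log types above
  callInfo: '{"key":"my.getStorage","args":[{"key":"token"}]}',
};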

Audit items

  • api-async-same-args-called: an asynchronous API method is called with the same input arguments
  • api-attr-called: API attribute accesses
  • api-deprecated-called: deprecated API calls
  • api-duplicate-called: an API is called repeatedly (20 times in a row)
  • api-error-called: API calls that threw exceptions
  • api-long-time-called: API calls that take too long (more than 1000 ms)
  • api-method-called: API method calls
  • api-sync-called: synchronous API method calls
  • page-node-used: document node complexity
  • set-data-called: setData is called too frequently (more than 20 times/second); see the sketch after this list
  • set-data-size: the setData payload exceeds the upper limit (256 KB)
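As an example of how such items consume the collected logs, here is a minimal sketch of what a set-data-called audit could look like. This is an assumption on our part, not the project's actual implementation: the 20 calls/second threshold comes from the list above, and setData calls are assumed to flow through the same CustomNativeCall gatherer under the set-data log type.

const { Audit } = require('lighthouse');

const CALLS_PER_SECOND_LIMIT = 20;

class SetDataCalledAudit extends Audit {
  static get meta() {
    return {
      id: 'set-data-called',
      title: 'setData is called at a reasonable rate',
      failureTitle: 'setData is called too frequently',
      description: `setData should not be called more than ${CALLS_PER_SECOND_LIMIT} times per second.`,
      requiredArtifacts: ['CustomNativeCall'],
    };
  }

  static audit(artifacts) {
    // Bucket setData calls into one-second windows and find the peak rate
    const buckets = {};
    artifacts.CustomNativeCall
      .filter((log) => log.type === 'set-data')
      .forEach(({ timestamp }) => {
        const second = Math.floor(timestamp / 1000);
        buckets[second] = (buckets[second] || 0) + 1;
      });
    const peak = Math.max(0, ...Object.values(buckets));
    return {
      score: peak > CALLS_PER_SECOND_LIMIT ? 0 : 1,
      displayValue: `peak rate: ${peak} calls/second`,
    };
  }
}

module.exports = SetDataCalledAudit;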

Category

We divided audit items into three categories:

  • performance

It mainly includes performance-related audit items, such as the number of setData calls, the amount of data set in a single setData call, and the number of page nodes. Optimizing these items brings relatively direct performance improvements.

  • container

It mainly contains container-related audit items, such as mini program API calls (exceptions, repeats, elapsed time, identical inputs) and mini program native attribute accesses. These audit items are directly related to the mini program runtime environment.

  • best-practice

It includes best-practice-related audit items, such as converting images to WebP, disallowing deprecated API calls, request exception handling, and converting synchronous API calls to asynchronous ones; these are related to the officially recommended way of development.

Plugin usage

Taking the NPM package as an example, we change the Lighthouse configuration to:

module.exports = {
  extends: 'lighthouse:default',
  plugins: [
    // The container category is introduced here; performance and
    // best-practice can also be introduced if needed
    'lighthouse-plugin-miniprogram/plugins/container',
  ],
  passes: [{
    passName: 'defaultPass',
    gatherers: [
      'lighthouse-plugin-miniprogram/gatherers/custom-native-call',
    ],
  }],
  settings: {
    // ...
  },
};

The results

Here we demonstrate with the audit items implemented so far, running them in the same way as in the example above.

Conclusion

From the perspective of actual business, this article briefly introduced how Lighthouse is applied in Halo front-end application quality monitoring. We are still exploring how to make good use of Lighthouse for mini programs, and we welcome your ideas and feedback.

References

  • Lighthouse – developers (developers.google.com/web/tools/l…)
  • Lighthouse Playtest Insider (juejin.cn/post/684490…)
  • Web Performance Optimization Map (github.com/berwin/Blog…)
  • Chrome DevTools Protocol (chromedevtools.github.io/devtools-pr…)
  • Lighthouse – Architecture (github.com/GoogleChrom…)