
We divide performance optimization into the following two directions, which helps with structured thinking and systematic analysis.

  1. Load performance. How to get resources from the server to the browser faster: HTTP-level optimizations and reducing resource size are all aimed at improving load performance.
  2. Rendering performance. How to render resources in the browser faster: reducing reflow and repaint, using requestIdleCallback (rIC), and so on are all aimed at improving rendering performance.

Core performance metrics and Performance APIs

  • LCP: load performance. Largest Contentful Paint should complete within 2.5 seconds.
  • FID: interaction performance. First Input Delay should be within 100ms.
  • CLS: page stability. Cumulative Layout Shift has to be accumulated manually and should stay below 0.1.

Calculation and Collection

  • web-vitals

To collect these core metrics from each user's browser, you can gather them with the web-vitals library and report them to your analytics system via sendBeacon.

import { getCLS, getFID, getLCP } from 'web-vitals'

function sendToAnalytics(metric) {
  const body = JSON.stringify(metric);
  navigator.sendBeacon('/analytics', body);
}

getCLS(sendToAnalytics);
getFID(sendToAnalytics);
getLCP(sendToAnalytics);

Faster transmission: CDN

Distributing resources to a CDN's edge nodes lets users fetch the content they need from a nearby node, greatly shortening the transmission distance over the wire, so that users around the world get a good network experience when opening the site.

Faster transmission: HTTP/2

Several features of HTTP/2 contribute to its faster transmission:

  1. Multiplexing: the browser can send N requests in parallel over a single connection.
  2. Header compression: smaller request and response overhead.
  3. Request prioritization: critical requests complete faster.

Most sites are served over HTTP/2 today; you can check the protocol in the browser DevTools Network panel.

Because HTTP/2 requests can run in parallel, eliminating HTTP/1.1 head-of-line blocking, the following optimizations become obsolete:

  1. Resource concatenation, e.g. https://shanyue.tech/assets??index.js,interview.js,report.js
  2. Domain sharding.
  3. Image sprites: combining many small images into one large image.

Faster transmission: Take full advantage of HTTP caching

A good resource caching strategy reduces back-to-origin requests for the CDN and reduces the number of requests the browser has to send. Either way, a repeat visit to the site is a better experience.

  • Caching strategy
    • Strong caching: bundled resources with a hash in the filename (e.g. /build/a3b4c8a8.js)
    • Negotiated caching: bundled resources without a hash (e.g. /index.html)
  • Bundle splitting
    • Avoids a one-line code change invalidating the cache of the entire bundle (see the sketch below)
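
A minimal sketch of what this looks like with webpack (assuming webpack 5; the vendor grouping below is just one possible way to split):

// webpack.config.js: a minimal sketch, assuming webpack 5
module.exports = {
  output: {
    // A content hash in the filename enables long-term (strong) caching:
    // the URL only changes when the content changes
    filename: '[name].[contenthash].js',
  },
  optimization: {
    splitChunks: {
      // Put third-party code in its own chunk so a one-line change in
      // business code does not invalidate the vendor bundle's cache
      chunks: 'all',
      cacheGroups: {
        vendor: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors',
        },
      },
    },
  },
}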

Faster transmission: Reduced HTTP requests and loads

Compress and optimize a site’s resources to reduce HTTP load.

  • Regular size optimization of JS/CSS/image resources; this is a big topic, discussed separately below
  • Small-image optimization: inline small images as Data URIs to reduce the number of requests
  • Lazy loading of images (see the sketch after this list)
    • New API: IntersectionObserver
    • New attribute: loading="lazy"
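
A minimal sketch of both approaches (the data-src attribute name is just a convention assumed for this example):

// Lazily load images that carry a data-src attribute
const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      const img = entry.target
      img.src = img.dataset.src   // start loading the real image
      observer.unobserve(img)     // stop watching once it has been triggered
    }
  }
})

document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img))

// Or, with no JavaScript at all, the native attribute:
// <img src="photo.jpg" loading="lazy" width="360" height="240">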

Smaller size: gzip/Brotli

Compression is effective for text resources such as JS, CSS, and HTML, but has little effect on images.

  • gzip compresses files with the LZ77 algorithm and Huffman coding; the more repetition in a file, the more space can be saved.
  • brotli compresses files with the LZ77 algorithm, Huffman coding, and 2nd-order context modeling; it is a more advanced algorithm with better performance and a higher compression ratio than gzip.

You can check whether a site has compression enabled by looking at the Content-Encoding response header in the browser. At present, Zhihu and Nuggets have fully enabled Brotli compression.

# Request Header
Accept-Encoding: gzip, deflate, br

# gzip
Content-Encoding: gzip

# brotli
Content-Encoding: br
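
A quick way to get a feel for the difference is to compress the same text resource with Node's built-in zlib module (a rough sketch; in practice compression is usually enabled in nginx or via server middleware, and bundle.js is just an assumed filename):

const fs = require('fs')
const zlib = require('zlib')

// Compare gzip and brotli on the same text resource
const input = fs.readFileSync('./bundle.js')

const gzipped = zlib.gzipSync(input, { level: 9 })
const brotlied = zlib.brotliCompressSync(input)

console.log('original:', input.length)
console.log('gzip:    ', gzipped.length)
console.log('brotli:  ', brotlied.length)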

Smaller size: Minification and obfuscation tools

Terser is the de facto standard tool for minifying and obfuscating JavaScript resources.

It compresses code with strategies such as:

  1. Replacing long variable names with short ones
  2. Removing whitespace and newlines
  3. Pre-computing constant expressions: const a = 24 * 60 * 60 * 1000 -> const a = 86400000
  4. Removing unreachable code
  5. Removing unused variables and functions

You can try code compression online in the Terser REPL.
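
To run it programmatically rather than in the REPL, a minimal sketch with the terser package looks like this (the options shown are just the common defaults):

const { minify } = require('terser')

// Some sample code to minify
const code = 'function add(first, second) { return first + second; }\nconsole.log(add(24 * 60 * 60, 1000));'

minify(code, { compress: true, mangle: true }).then((result) => {
  // result.code holds the minified source: constants folded, names shortened, whitespace removed
  console.log(result.code)
})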

  1. swc is another tool for minifying JavaScript. It exposes the same API as terser, and because it is written in Rust it has higher performance.
  2. html-minifier-terser is a tool for minifying HTML.

Smaller size: Smaller JavaScript

For smaller JavaScript, two approaches have already been covered:

  1. gzip/brotli
  2. terser (minify)

Here are a few more things to consider:

  1. Route-based lazy loading, so the entire application bundle does not have to load up front (see the sketch after this list)
  2. Tree shaking: unused exports are removed in production builds
  3. browserslist/babel: keeping the browserslist up to date results in fewer polyfills and a smaller bundle
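
A minimal sketch of route-based lazy loading with React.lazy and dynamic import (the route components and paths are hypothetical):

import React, { lazy, Suspense } from 'react'

// Each dynamic import becomes its own chunk and is only fetched
// when the corresponding route actually renders
const Home = lazy(() => import('./routes/Home'))
const Report = lazy(() => import('./routes/Report'))

export default function App({ path }) {
  return (
    <Suspense fallback={<div>Loading...</div>}>
      {path === '/report' ? <Report /> : <Home />}
    </Suspense>
  )
}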

One more question:

How do I analyze and optimize the JavaScript bundle size of my current project? It's much easier if you use webpack.

  1. Use webpack-bundle-analyzer to analyze bundle size
  2. Replace some libraries with smaller alternatives, e.g. moment -> dayjs
  3. Import some libraries on demand, e.g. import lodash -> import lodash/get (see the example after this list)
  4. Use versions of libraries that support tree shaking, e.g. import lodash -> import lodash-es
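
For example, with lodash the difference looks roughly like this (how much is actually saved depends on your bundler and lodash version):

// Before: the whole lodash package ends up in the bundle
import _ from 'lodash'

// On demand: only the get module is bundled
import get from 'lodash/get'

// Tree-shakable: lodash-es ships ES modules, so unused exports can be dropped
import { get as getFromEs } from 'lodash-es'

const user = { profile: { name: 'shanyue' } }
console.log(_.get(user, 'profile.name'), get(user, 'profile.name'), getFromEs(user, 'profile.name'))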

Smaller size: Smaller images

In general, WebP is smaller than JPEG/PNG, and AVIF is another level smaller than WebP.

For seamless compatibility, use picture/source with fallbacks:

<picture>
  <source srcset="img/photo.avif" type="image/avif">
  <source srcset="img/photo.webp" type="image/webp">
  <img src="img/photo.jpg" width="360" height="240">
</picture>

  1. Better sizing: when the page only needs to display an image at 100px × 100px, resize it down to 100px × 100px
  2. Better compression: images can be compressed appropriately ahead of time, e.g. with sharp (see the sketch below)
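
A minimal sketch of resizing and converting an image with sharp (the file names and quality value are just examples):

const sharp = require('sharp')

// Resize to the actual display size and convert to WebP with moderate quality
sharp('photo.jpg')
  .resize(100, 100)
  .webp({ quality: 75 })
  .toFile('photo-100.webp')
  .then((info) => console.log('written:', info.size, 'bytes'))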

Render optimization: Critical rendering path

The following five steps make up the critical rendering path:

  1. HTML -> DOM: parse the HTML into the DOM
  2. CSS -> CSSOM: parse the CSS into the CSSOM
  3. DOM/CSSOM -> Render Tree: merge the DOM and CSSOM into the render tree
  4. Render Tree -> Layout: determine the position of each node in the render tree
  5. Layout -> Paint: paint each node in the browser

Rendering optimization is largely about optimizing the critical rendering path.

preload/prefetch

preload/prefetch adjust HTTP request priorities so that critical requests get a faster response.

<link rel="prefetch" href="style.css" as="style">
<link rel="preload" href="main.js" as="script">

  1. preload loads resources needed by the current route and has a high priority. Generally, bundle-splitting and code-splitting resources are preloaded.
  2. prefetch has a low priority; resources are loaded when the browser is idle. It is used to load resources for other routes, for example prefetching the route resources of a link when that link appears on the page. (By default, Next.js applies lazy loading + prefetch to links: when a link appears on the page, it automatically prefetches the route resources the link points to.)

dns-prefetch is used to pre-resolve DNS for a host address.

<link rel="dns-prefetch" href="//shanyue.tech">

Render optimization: Debouncing and throttling

  1. Debouncing: prevents jitter. Each event within the interval resets the timer, so the handler only runs once the events stop, which prevents a burst of accidental triggers from firing the handler repeatedly. The implementation centers on clearTimeout. Debouncing is like waiting for an elevator: every time another person gets in, you wait a little longer. A typical business scenario is preventing duplicate submissions when the login button is clicked several times. (See the sketch after this list.)
  2. Throttling: controls the rate. The handler can fire at most once per unit of time, similar to a server-side rate limit. The implementation centers on resetting the timer (timer = null) after it fires. Throttling is like a traffic light: one batch passes with each green light.
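
A minimal sketch of both in plain JavaScript (simplified versions; in practice you would usually take them from lodash or a similar library):

// Debounce: run fn only after `wait` ms have passed without another call
function debounce(fn, wait) {
  let timer = null
  return function (...args) {
    clearTimeout(timer)   // a new call resets the countdown
    timer = setTimeout(() => fn.apply(this, args), wait)
  }
}

// Throttle: run fn at most once per `wait` ms
function throttle(fn, wait) {
  let timer = null
  return function (...args) {
    if (timer) return     // still inside the current window
    timer = setTimeout(() => {
      fn.apply(this, args)
      timer = null        // open the window for the next call
    }, wait)
  }
}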

Both debouncing and throttling can drastically reduce the number of renders; in React you can also use hooks such as use-debounce to avoid re-renders.

import React, { useState } from 'react';
import { useDebounce } from 'use-debounce';

export default function Input() {
  const [text, setText] = useState('Hello');
  // Render once a second, greatly reducing the frequency of re-rendering
  const [value] = useDebounce(text, 1000);

  return (
    <div>
      <input
        defaultValue={'Hello'}
        onChange={(e) => { setText(e.target.value); }}
      />
      <p>Actual value: {text}</p>
      <p>Debounce value: {value}</p>
    </div>
  );
}

Render optimization: Virtual list optimization

This is another classic topic: keep a virtual list in the viewport (rendering only a dozen or so items), listen to the scroll position, and update which items the virtual list shows as the viewport moves.

In React, the commonly used libraries are:

  1. react-virtualized
  2. react-window
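
A minimal sketch with react-window's FixedSizeList (the sizes and row content are arbitrary):

import React from 'react'
import { FixedSizeList } from 'react-window'

// Only the rows visible inside the 400px-high viewport (plus a small overscan)
// are actually mounted, even though itemCount is 10000
const Row = ({ index, style }) => <div style={style}>Row {index}</div>

export default function List() {
  return (
    <FixedSizeList height={400} width={300} itemCount={10000} itemSize={35}>
      {Row}
    </FixedSizeList>
  )
}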

Render optimization: request and resource caching

In some front-end apps, a request is sent when the page loads; when you navigate away from the route and back again, the same request is sent again, and the page re-renders after every request.

However, most of these repeated requests are unnecessary; caching the API responses appropriately will optimize rendering.

  1. Assign a key to each GET API
  2. Cache the API response by key; when the same request is repeated, serve it from the cache

For example, with react-query, which implements exactly this kind of key-based request cache:

import { useQuery } from 'react-query'

function Example() {
  // The cache key is ['users', userId], e.g. users:10086
  const { isLoading, data } = useQuery(['users', userId], () => fetchUserById(userId))
}

Web Worker

Take an example:

How do you achieve high-performance, real-time code compilation and transformation purely in the browser?

  1. Babel Repl

Using a traditional Javascript implementation would take too much time and block the main thread, potentially causing the page to stall.

It is much more efficient to delegate this work to a separate thread via a Web Worker; essentially all in-browser code compilation is done in a Web Worker.
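
A minimal sketch of offloading an expensive transform to a worker (the file name compile.worker.js and the transform function are hypothetical):

// main.js: the UI thread stays responsive while the worker does the heavy lifting
const worker = new Worker('compile.worker.js')

worker.postMessage({ source: 'const add = (a, b) => a + b' })
worker.onmessage = (event) => {
  console.log('compiled output:', event.data)
}

// compile.worker.js
// self.onmessage = (event) => {
//   const output = expensiveTransform(event.data.source) // e.g. a Babel transform
//   self.postMessage(output)
// }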

WASM

  1. JavaScript has relatively low performance
  2. C++/Rust have high performance
  3. WASM lets you write code in C++/Rust and run it in the JavaScript environment

Take an example:

How do you achieve high-performance image compression purely in the browser?

With JavaScript alone this is basically very hard: neither its performance nor its ecosystem is suited to image compression.

Using WASM is essentially borrowing another language's ecosystem.

  1. libavif: an AVIF codec library written in C
  2. libwebp: a WebP codec library written in C
  3. mozjpeg: a JPEG codec library written in C
  4. oxipng: a PNG optimization library written in Rust

Thanks to WASM, the ecosystems of these other languages can be brought into the browser to build a high-performance offline image compression tool.

If you want to see such a tool in action, check out Squoosh.
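
For reference, loading a compiled WASM module in the browser looks roughly like this (the file name image-codec.wasm and its exported encode function are entirely hypothetical; real projects such as Squoosh wrap this in generated glue code):

// Fetch, compile and instantiate a wasm module in one step
WebAssembly.instantiateStreaming(fetch('image-codec.wasm'))
  .then(({ instance }) => {
    // Call a function exported from the C/Rust side
    const result = instance.exports.encode()
    console.log('wasm result:', result)
  })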