This paper analyzes optimization along the following dimensions:

  • Build optimization
  • Static resource optimization
  • Network layer optimization
  • Caching
  • Render layer optimization

Build optimization

1. tree shaking

The built JS code contains only the modules that are actually imported and executed; modules that are never referenced or executed are removed to reduce bundle size.

Webpack performs tree shaking by default when mode is production.

Note: tree shaking only works with ES2015 module syntax (import and export).
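Conceptually, tree shaking is a reachability analysis over the module graph, which can be sketched in a few lines (module and export names below are made up for illustration; real bundlers work on the parsed AST):

```js
// Each export lists the other exports it references; tree shaking is then a
// reachability walk starting from the exports the entry actually uses.
const graph = {
  'index#main': ['module-a#func2'],
  'module-a#func1': ['lodash#isArray'],
  'module-a#func2': [],
  'module-b#func3': [],
  'lodash#isArray': [],
}

function reachable(graph, roots) {
  const seen = new Set()
  const stack = [...roots]
  while (stack.length) {
    const id = stack.pop()
    if (seen.has(id)) continue
    seen.add(id)
    for (const dep of graph[id] || []) stack.push(dep)
  }
  return seen
}

// only index#main and module-a#func2 survive; func1, func3 and the lodash
// helper that only func1 pulls in can all be dropped
const kept = reachable(graph, ['index#main'])
```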

1.1 Tree Shaking for Webpack

Webpack’s tree shaking implementation is still imperfect: it does not eliminate modules that have (or may have) side effects

Example 🌰 :

```js
// index.js
import { func2, func3 } from './module-a'
func2()
```

```js
// module-a.js
import lodash from 'lodash-es'
export * from './module-b'
export const func1 = function(value) {
  return lodash.isArray(value)
}
export const func2 = function() {
  console.log('this is func2')
  return 123123
}
```

```js
// module-b.js
export const func3 = () => {
  console.log('The func3 method of module B')
}
```

1.2 Webpack-deep-scope-plugin Optimizing tree shaking for Webpack

This plugin fills in the gaps in webpack’s own tree shaking, eliminating unused code through scope analysis.

```js
const WebpackDeepScopeAnalysisPlugin = require('webpack-deep-scope-plugin')
  .default
...
plugins: [
  new WebpackDeepScopeAnalysisPlugin(),
],
```

2. CSS tree shaking

Tree shaking for CSS is implemented with purgecss-webpack-plugin together with mini-css-extract-plugin

Add the following code to the CSS:

```css
.red {
  color: red
}

.blue {
  color: blue
}
```

Then reference it in the js file:

```js
import './index.css'

var div = document.createElement('div')
div.className = 'red'

console.log(div)
```

Ideally, the emitted file would contain **.red** and not the unused **.blue**, configured as follows:

```js
const glob = require('glob')
const path = require('path')
const TerserJSPlugin = require('terser-webpack-plugin')
const OptimizeCSSAssetsPlugin = require('optimize-css-assets-webpack-plugin')
const MiniCssExtractPlugin = require('mini-css-extract-plugin')
const PurgecssPlugin = require('purgecss-webpack-plugin')

optimization: {
  minimizer: [new TerserJSPlugin({}), new OptimizeCSSAssetsPlugin({})],
},
module: {
  rules: [
    {
      test: /\.css$/,
      use: [MiniCssExtractPlugin.loader, 'css-loader'],
    },
  ],
},
plugins: [
  new MiniCssExtractPlugin({
    filename: '[name].css',
  }),
  // CSS tree shaking
  new PurgecssPlugin({
    paths: glob.sync(`${path.join(__dirname, '/src/**/*')}`, { nodir: true }),
  }),
]
```

mini-css-extract-plugin extracts the CSS into a single file; purgecss-webpack-plugin then analyzes it and filters out unreferenced CSS. The end result is as follows (see section 6 for the compression plugins and configuration):
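Conceptually, what the plugin does can be sketched in a few lines (a naive regex version handling only simple single-class rules; the real PurgeCSS parses CSS properly):

```js
// Collect every word-like token that appears in the source files, then drop
// CSS rules whose class selector never shows up among those tokens.
function purge(css, content) {
  const tokens = new Set(content.match(/[\w-]+/g) || [])
  return (css.match(/[^{}]+\{[^{}]*\}/g) || [])
    .filter(rule => {
      const m = rule.match(/\.([\w-]+)/)
      return !m || tokens.has(m[1])
    })
    .map(r => r.trim())
    .join('\n')
}

const css = '.red { color: red } .blue { color: blue }'
const js = "div.className = 'red'"
// purge(css, js) keeps .red and drops the unused .blue
```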

3. Scope Hoisting

Scope Hoisting makes the code files generated by webpack smaller and faster; it is sometimes translated as “scope promotion”.

The advantages of enabling Scope Hoisting are as follows:

  1. The bundle gets smaller, because the per-module function wrappers generate a lot of boilerplate code.

  2. The code has lower memory overhead at run time, because fewer function scopes are created.

```js
plugins: [
  // enable Scope Hoisting
  new webpack.optimize.ModuleConcatenationPlugin()
]
```

This is enabled by default in mode Production

Here are two files for verification in the development environment:

```js
// index.js
import a from './module-a'
console.log(a)
```

```js
// module-a.js
export default 'module-a'
```

Without Scope Hoisting, each file module is wrapped in its own function scope when packaged, with the following result:

When enabled, both modules are precompiled into the same module:
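The difference can be sketched in plain JS (a conceptual illustration, not webpack’s actual runtime):

```js
// Without Scope Hoisting: every module is wrapped in its own function and
// resolved through a tiny runtime require at execution time.
const moduleRegistry = {
  './module-a': function (exports) { exports.default = 'module-a' },
}
function webpackRequire(id) {
  const exports = {}
  moduleRegistry[id](exports)
  return exports
}
const withoutHoisting = webpackRequire('./module-a').default

// With Scope Hoisting: the modules are concatenated into a single scope at
// build time, so the wrapper function and the runtime lookup disappear.
const module_a_default = 'module-a' // inlined from module-a.js
const withHoisting = module_a_default

// both forms produce the same value; the hoisted one is smaller and faster
```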

4. Code splitting

  1. Third party libraries packaged separately: Since the contents of third party libraries are largely unchanged, they can be separated from the business code to maximize the browser’s caching mechanism and reduce requests.
  2. On-demand loading: webpack supports defining split points, which are loaded on demand via require.ensure / import()

4.1 Use Entry to separate code

e.g.

```js
// index.js
import _ from 'lodash'
console.log(_.cloneDeep({a: 'index'}))
```

```js
// module.js
import _ from 'lodash'
console.log(_.cloneDeep({a: 'module'}))
```

Here are two files that both reference the lodash third-party package. If you package them directly, both output files will contain the lodash code:

You can use optimization.splitChunks to remove the duplication:

```js
optimization: {
  splitChunks: {
    chunks: 'all'
  }
}
```

After deduplication, lodash is generated as a separate chunk:

4.2 Dynamic Import

For dynamic code splitting, use **import()** dynamic loading, or the webpack-specific require.ensure

```js
// index.js
function getLodash() {
  return import(/* webpackChunkName: "lodash" */ 'lodash').then(({ default: _ }) => {
    return _.cloneDeep({a: 'index'})
  }).catch(e => 'error occurred')
}

getLodash().then(res => {
  console.log(res)
})
```

Here we use import() to load lodash dynamically and set the chunk name with a magic comment. You can see that lodash is split into a separate file:

See Code Splitting and SplitChunksPlugin for more details

5. Optimize common resource packs

Extract framework packages such as Vue, React, UI libraries, and SDKs from the bundle and load them from a CDN instead:

```js
externals: {
  vue: 'Vue',
}
```

6. Code compression

  • HTML compression: HtmlWebpackPlugin
  • JS compression: UglifyJsPlugin / TerserWebpackPlugin
  • CSS compression: css-loader’s built-in compression, together with ExtractTextPlugin / MiniCssExtractPlugin
  • Add a hash to the CSS file names

6.1 HTML compression

e.g.

```js
const HtmlWebpackPlugin = require('html-webpack-plugin')
plugins: [
  new HtmlWebpackPlugin({
    minify: {
      collapseWhitespace: true,
      removeComments: true,
    }
  }),
]
```

You can see that the HTML file is compressed:

In HtmlWebpackPlugin 3.x, setting minify to true is equivalent to passing {}, so the options need to be configured manually. In 4.x, minify defaults to:

```js
{
  collapseWhitespace: true,
  removeComments: true,
  removeRedundantAttributes: true,
  removeScriptTypeAttributes: true,
  removeStyleLinkTypeAttributes: true,
  useShortDoctype: true
}
```

For details, see github.com/jantimon/ht… ; for the available minify options, see html-minifier

6.2 JS compression

By default, webpack 4.x uses TerserWebpackPlugin for JS compression; the older UglifyjsWebpackPlugin is no longer recommended

Here’s what you need to know about TerserWebpackPlugin and UglifyJsPlugin:

  • TerserWebpackPlugin uses terser under the hood
  • UglifyJsPlugin uses uglify-js under the hood
  • uglify-js does not support ES6; you need uglify-es instead, but uglify-es is no longer maintained
  • terser retains the API and compatibility of uglify-js@3 and uglify-es

In webpack 3.x, the configuration is as follows:

```js
const UglifyJsPlugin = require('uglifyjs-webpack-plugin')
plugins: [
  new UglifyJsPlugin(),
]
```

In webpack 4.x, minimize defaults to true in production mode:

```js
optimization: {
  minimize: true,
}
```

Before compression:

After the compression:

You can also override the default compression by adding a TerserWebpackPlugin instance to minimizer:

```js
optimization: {
  minimize: true,
  minimizer: [
    new TerserPlugin({
      test: /\.js(\?.*)?$/i,
    }),
  ],
},
```

More Configuration Items

6.3 CSS compression

Webpack 5 will ship a built-in CSS minimizer; in 4.x, optimize-css-assets-webpack-plugin has to be configured manually

Manually configuring minimizer overrides the default configuration, so you need to add TerserJSPlugin back to keep JS compression

The MiniCssExtractPlugin extracts CSS into a separate file with the following configuration:

```js
const TerserJSPlugin = require('terser-webpack-plugin')
const MiniCssExtractPlugin = require('mini-css-extract-plugin')
const OptimizeCSSAssetsPlugin = require('optimize-css-assets-webpack-plugin')

optimization: {
  minimizer: [
    // compress JS
    new TerserJSPlugin({}),
    // compress CSS
    new OptimizeCSSAssetsPlugin({}),
  ],
},
plugins: [
  new MiniCssExtractPlugin({
    filename: '[name].css',
    chunkFilename: '[id].css'
  })
],
module: {
  rules: [
    {
      test: /\.css$/,
      use: [
        MiniCssExtractPlugin.loader,
        'css-loader'
      ]
    }
  ]
}
```

7. Use preload and prefetch

Configure preload and prefetch in import dynamic loading

Preload:

```js
import(/* webpackPreload: true */ 'Modal')
```

Prefetch:

```js
import(/* webpackPrefetch: true */ 'Modal')
```

The difference between the two:

  • Preload Chunk starts loading in parallel when the parent chunk loads. Prefetch Chunk starts loading after the parent chunk finishes loading.
  • Preload Chunk has medium priority and is downloaded immediately. Prefetch Chunk downloads while the browser is idle.
  • A preloaded chunk is needed by the parent chunk immediately; a prefetched chunk is expected to be needed at some point in the future.
  • Browser support varies.

preload:

prefetch:

For more information

8. Chunk naming

Use chunkhash instead of hash, so that the cache remains valid as long as the chunk does not change

Use names instead of ids

By default, each module.id is incremented based on the resolve order; that is, when the resolve order changes, the ids change with it
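A possible webpack 4 configuration for the points above (a sketch using webpack’s built-in plugins):

```js
output: {
  // chunkhash only changes when the chunk's content changes
  filename: '[name].[chunkhash:8].js',
},
plugins: [
  // derive module ids from module paths instead of incrementing numbers,
  // so reordering modules does not invalidate the cache
  new webpack.HashedModuleIdsPlugin(),
  new webpack.NamedChunksPlugin(),
],
```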

Webpack cache

Static resource optimization

1.Gzip -> Brotli

Brotli compression algorithm has many characteristics, the most typical are the following two:

  • For common Web resource content, Brotli provides a 17-25% performance improvement over Gzip;
  • When Brotli compression level is 1, the compression rate is higher than when Gzip compression level is 9 (the highest).

Support for Brotli can be found on Caniuse

1.1 Case of no compression

1.2 GZIP level 9 compression

1.3 Brotli Level 11 compression

1.4 Choose the appropriate compression timing

Dynamic compression happens on the fly: the user makes the request, the content is compressed (while the user waits), and the compressed content is served.

Static compression happens ahead of time: assets are compressed on disk before the user ever requests them. When a request arrives, no compression occurs; the precompressed asset is served straight from disk.

Webpack provides compression-webpack-plugin and brotli-webpack-plugin

Gzip:

```js
const CompressionWebpackPlugin = require('compression-webpack-plugin')
new CompressionWebpackPlugin({
  asset: '[path].gz[query]',
  algorithm: 'gzip',
  test: /\.(js|html|css|svg)$/,
  threshold: 10240,
  minRatio: 0.8
})
```

Brotli:

```js
const BrotliPlugin = require('brotli-webpack-plugin')
new BrotliPlugin({
  asset: '[path].br[query]',
  test: /\.(js|html|css|svg)$/,
  threshold: 10240,
  minRatio: 0.8
})
```

1.5 nginx.conf

```nginx
gzip  on;
gzip_vary               on;
gzip_min_length         1024;
gzip_buffers            128 32k;
gzip_comp_level         9;
gzip_http_version       1.1;
gzip_proxied            expired no-cache no-store private auth;
gzip_types              text/plain text/css text/xml application/xml application/json text/javascript application/javascript application/x-javascript;

brotli on;
brotli_types text/plain text/css text/xml application/xml application/json text/javascript application/javascript application/x-javascript;
brotli_static off;
brotli_comp_level 11;
brotli_buffers 16 8k;
brotli_window 512k;
brotli_min_length 20;
```

2. Image optimization

2.1 Choose an appropriate picture format

2.2 Image Compression

Compression tool:

  • Compress JPEG, PNG tinypng.com/
  • Compressed SVG iconizr.com/

Images can also be compressed at build time with imagemin-webpack-plugin

tinypng

imagemin

Dependencies: url-loader, file-loader, imagemin-webpack-plugin

```js
const path = require('path')
const ImageminPlugin = require('imagemin-webpack-plugin').default
module.exports = {
  ...
  module: {
    rules: [
      {
        test: /\.(jpe?g|png|gif|svg)$/i,
        use: [
          {
            loader: 'url-loader',
            options: {
              limit: 8192,
              name: '[name].[hash:8].[ext]',
              outputPath: 'imgs/',
            },
          },
        ],
      },
    ],
  },
  plugins: [
    new ImageminPlugin({ test: /\.(jpe?g|png|gif|svg)$/i }),
  ],
}
```

It is not recommended to compress images with webpack at build time: it adds significantly to the packaging time, and imagemin’s compression is not nearly as good as tinypng’s

2.3 Image segmentation for large background images (H2)

Cut a large background image into several smaller images and put them together using HTML.

Take advantage of browser concurrent loading (H2)

2.4 Base64 / font/Sprite

| | Network requests | Cache | Flexibility | Other |
| --- | --- | --- | --- | --- |
| Sprite | fewer | good | poor | 1. Distorts when enlarged 2. The same icon in different colors is duplicated |
| iconfont | fewer | good | good | Behaves like an ordinary font: size, color, opacity can all be set; small files |
| base64 | none | poor | good | 1. Inflates the CSS and JS 2. Increases the size of the image data |
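Inlining as base64 is what url-loader does for files under its size limit; a minimal sketch:

```js
// Turn a binary buffer into a data URI so no separate request is needed.
function toDataURI(buf, mime) {
  return `data:${mime};base64,${buf.toString('base64')}`
}

const png = Buffer.from([0x89, 0x50, 0x4e, 0x47]) // PNG magic bytes
const uri = toDataURI(png, 'image/png')
// → "data:image/png;base64,iVBORw=="
```

Note the trade-off shown in the table: the base64 form is roughly a third larger than the raw bytes and cannot be cached independently of the CSS/JS it is embedded in.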

2.5 Responsive images

Responsive images refer not only to typography and layout of images, but also to loading different images based on device size.

Image:

The srcset attribute on the img tag tells the browser to load different images at different screen widths

The image-size breakpoints are set via the sizes attribute on the img tag, which defines the size at which the image should display under different media conditions.
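A minimal srcset/sizes sketch (file names are hypothetical):

```html
<img src="photo-800.jpg"
     srcset="photo-480.jpg 480w, photo-800.jpg 800w, photo-1200.jpg 1200w"
     sizes="(max-width: 600px) 480px, 800px"
     alt="example">
```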

Picture:

The browser iterates over the source elements until it finds one that matches the current environment, then applies that source’s srcset to the img
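A minimal picture sketch (file names are hypothetical):

```html
<picture>
  <source media="(max-width: 600px)" srcset="photo-small.jpg">
  <source media="(min-width: 601px)" srcset="photo-large.jpg">
  <img src="photo-fallback.jpg" alt="example">
</picture>
```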

Responsive image breakpoints generator

2.6 Use a small, cacheable favicon.ico

favicon.ico is generally stored at the root of the site, and browsers try to request this file whether or not a link to it is set on the page.

So make sure this icon:

  • Exists (avoid 404);
  • As small as possible, preferably less than 1K;
  • Set a long expiration time.

3. Use the CDN

The response speed for users requesting resources (visiting the site) from different regions varies greatly. To improve the user experience, we add a layer between users and servers: the CDN. The idea of a Content Delivery Network (CDN) is to distribute the content of the origin site to the network edge nodes closest to users, so that users can obtain the content they need nearby, improving response speed.

When the user initiates an HTTP request, the CDN routes it to an edge node server. The edge node checks whether it holds the requested data; if not, it fetches the data from the origin.

Network layer optimization

A complete HTTP request process

DNS Resolution (T1) -> Establish TCP Connection (T2) -> Send request (T3) -> Wait for server to return first byte (TTFB) (T4) -> receive data (T5)

  • Queueing: Requests are queued.
  • Stalled: time the request was stalled before it could be sent;
  • Proxy negotiation: The time spent connecting to the Proxy server
  • DNS Lookup: DNS Lookup.
  • Initial Connection: indicates the time of establishing a TCP connection, which is equal to the period from the client sending a request to the end of the TCP handshake.
  • SSL (included in HTTPS connection) : Time spent to complete the SSL handshake.
  • Request sent: The time when an HTTP Request is sent (from before the first byte to after the last byte)
  • Waiting (TTFB): Time To First Byte, the time between sending the request and receiving the first byte of the server’s response. Usually the longest phase; affected by factors such as line quality and distance to the server.
  • Content Download: The time between the first byte of the response received and the last byte received is the Download time.

The main factors that affect an HTTP request

  • Bandwidth – Network speed
  • delay
    • Browser blocking: Take Chrome as an example. The browser can only have six connections to the same domain name at a time. If the number of requests exceeds this limit, the requests will be blocked
    • DNS query: The browser needs to know the IP address of the target server to establish a connection. The system that resolves domain names into IP addresses is called DNS. This can often be done with DNS caching results to reduce this time.
    • TCP connection: HTTP is based on TCP. The browser can set up a connection only after the third handshake. However, these connections cannot be reused, resulting in three handshakes and slow startup for each request. The effect of three-way handshake is more obvious in the case of high latency, and the effect of slow startup is more significant in the case of large file requests.

1. Reduce blocking time

1.1 Possible causes of blocking

  • There are higher priority requests.
  • There are already six TCP connections open for this origin, which is the limit. Applies to HTTP/1.0 and HTTP/1.1 only.
  • The browser is briefly allocating space in the disk cache

1.2 Reasonable Request Merge and Split (http1.1)

In Chrome, for example, the maximum number of concurrent HTTP/1.1 requests is 6 (per domain name); requests beyond that must wait

** For large resources: ** Merging or not has no significant effect on load time, but splitting resources makes better use of the browser cache, and does not invalidate all resource caches due to an update of one resource, whereas after merging resources, updates of any resource invalidate all resource caches. In addition, the domain name sharding technology can be used to split resources into different domain names, which can not only reduce the server pressure, but also reduce the impact of network jitter.

** For small resources: ** Consolidated resources tend to load faster, but in the case of good network bandwidth, because the upgrade time is measured in ms, the benefit is negligible. If the network latency is large and the server response speed is slow, some benefits can be obtained. However, in a network scenario with high latency, network round-trip times may increase after resource consolidation, which affects the load time.

1.3 Upgrading to HTTP2

2. Reduce the DNS query time

2.1 Procedure for DNS Query

The first DNS search process of a web site is as follows: Browser cache > system cache > local hosts files > Router cache > ISP DNS cache > Recursive search

2.2 Adding DNS Cache to the Server

DNS caches are now available on most servers

2.3 the DNS prefetch

  1. Enable DNS preresolution:

If the browser supports DNS pre-resolution, it will pre-resolve domains found on the page even without the tag below.

```html
<meta http-equiv="x-dns-prefetch-control" content="on"> <!-- set content="off" to turn it off -->
```
  2. Force resolution of a specific host name:

```html
<link rel="dns-prefetch" href="//domain.com">
```

Taobao’s DNS pre-resolution

Note: Use dns-prefetch with caution. Repeated DNS prefetch on multiple pages increases the number of repeated DNS queries.

It is important to note that while using DNS Prefetch can speed up page parsing, it should not be abused, as some developers have pointed out that disabling DNS Prefetch could save 10 billion DNS queries per month.

2.4 Reducing DNS Queries

  1. Use the Connection: keep-alive feature to establish persistent connections; multiple requests can then be made on the same connection without repeated domain name resolution
  2. Place resources under the same domain name, making use of the DNS cache

3. Reduce the TCP connection time

  1. Establish keep-alive connections
  2. Enable OCSP stapling (Online Certificate Status Protocol)
  3. Upgrade to http2

4. Reduce the Request time

4.1 Reduce cookie use

Each HTTP request carries a cookie by default. If the cookie is too large, the transmission will slow down

4.2 the cookie isolation

For fetching some static resources (such as images), cookies are unnecessary, so try to serve image resources from a cookie-free domain to avoid sending the main domain’s cookies with every request

5. Reduce TTFB time

  • cdn
  • Server performance
  • .

6. Reduce download time

  • To reduce the response data
  • Using the cache

7. Upgrade to HTTP2

Http1.1 vs. HTTP2 channel model

Advantages of HTTP2 over HTTP1.1

  • Binary protocol

HTTP2 uses a binary transfer protocol, while HTTP1.1 transfers text; binary transfer is much more efficient.

  • Multiplexing

Multiple requests can be sent simultaneously on the same TCP connection without blocking

  • Header compression

With HTTP1.1, every request carries a large amount of redundant header information, wasting a lot of bandwidth.

  • Server push

It allows the Web server to send some resources to the client ahead of time before receiving the browser’s request

  • Http2 is also backward-compatible with http1.x

http2.akamai.com/demo

Other 8.

8.1 Avoid empty SRC, href

```html
<img src="" />
```

```js
var img = new Image()
img.src = ""
```

Although the src attribute is an empty string, the browser still makes an HTTP request to the server:

  • IE sends a request to the directory where the page resides.
  • Safari, Chrome, and Firefox send requests to the page itself;
  • Opera does nothing.

The consequences of an empty src request are not trivial:

  • Causes unexpected traffic load on the server, especially when page views are high;
  • Wastes server computing resources;
  • May cause errors.

The empty href attribute presents a similar problem: when the user clicks an empty link, the browser also sends an HTTP request to the server. The default behavior of an empty link can be prevented with JavaScript.

8.2 Reducing redirection

Each redirection requires a new HTTP request

8.3 Avoid 404

HTTP requests are expensive, and returning invalid responses (such as 404 not found) is unnecessary, degrading the user experience, and unhelpful. Some sites have cool 404 pages with prompts that help improve the user experience, but still waste server resources. Especially bad is when an external script returns a 404, which not only blocks other resources from downloading, but the browser tries to parse the 404 page as JavaScript, consuming even more resources.

8.4 Upgrading IPv4 to IPv6

Because IPv4 is running out and major mobile networks are rapidly adopting IPv6 (the US has already reached the 50% threshold for IPv6 usage), it’s a good idea to update your DNS to IPv6 for the future. You can have IPv6 and IPv4 running at the same time, as long as you make sure you have dual-stack support on your network. After all, IPv6 is not backward compatible. Studies show that IPv6’s built-in NDP and routing optimizations can improve web site load times by 10 to 15 percent.

Caching

1. The HTTP cache

When a client requests a resource from the server, the request reaches the browser cache first; if the browser holds a still-valid copy of the requested resource, it is fetched directly from the browser cache rather than from the origin server.

Common HTTP caches can only cache resources in response to GET requests, not other types of responses, so the subsequent request caches refer to GET requests.

HTTP caching always starts with the second request. On the first request for a resource, the server returns the resource along with its cache parameters in the response headers. On the second request, the browser checks whether those parameters hit the strong cache: if so, it uses the local copy directly (status 200); otherwise it adds the cache parameters to the request headers and sends the request to the server to check whether the negotiated cache is hit.

1.1 strong cache

When the strong cache is in effect (i.e., Cache-Control’s max-age has not expired, or the Expires time has not passed), the browser uses its cached data directly and sends no request to the server; the HTTP status code is 200. This is the fastest way to load a page and performs well, but during this window, if the resource changes on the server, the page will not see it, because no request is sent. This is a situation we often hit in development: you change a style, refresh the page, and nothing happens because of the strong cache, hence the habitual Ctrl + F5. The header attributes associated with the strong cache are Pragma (HTTP1.0), Cache-Control (HTTP1.1), and Expires.
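The browser-side freshness check can be sketched as follows (a simplification that ignores other Cache-Control directives; times are milliseconds since epoch):

```js
// A response is fresh while its age is within max-age, falling back to
// the absolute Expires time when max-age is absent.
function isFresh(response, now) {
  if (response.maxAge != null) {
    return (now - response.date) / 1000 < response.maxAge
  }
  if (response.expires != null) {
    return now < response.expires
  }
  return false // no strong-cache headers: go ask the server
}

// cached at t=0 with Cache-Control: max-age=60
const res = { date: 0, maxAge: 60 }
// fresh at t=30s, stale at t=61s
```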

1.2 Negotiated Cache

When the server’s first response carries no Cache-Control or Expires header, or sets them to no-cache, the browser negotiates with the server on the second request: it asks whether the resource has been modified. If the resource on the server has not changed, a 304 status code is returned, telling the browser it may use its cached data, which reduces the server’s data transfer load. If the resource has been updated, a 200 status code is returned along with the updated resource and its cache information. The header attributes associated with the negotiated cache are ETag / If-None-Match and Last-Modified / If-Modified-Since; the request and response headers come in pairs.

The negotiated cache works like this: when the browser sends a request to the server for the first time, the response headers include the validators ETag and Last-Modified, where ETag is a hash value and Last-Modified is the last modification time in GMT format. On the second request, the browser sends the ETag value back in the If-None-Match request header and the Last-Modified value in If-Modified-Since. If the server returns a 304 status code, the requested resource has not been modified and the browser can take the data directly from its cache; otherwise the server returns the data directly.

1.3 Summary

2. Browser cache

  • Storage
    • LocalStorage
    • SessionStorage
    • IndexedDB
    • Web SQL
    • Cookies
  • Cache
    • Cache Storage – service worker
    • Application Cache – Offline Cache

Render layer optimization

1. Prevent blocking rendering

  • The CSS is preloaded in the header
  • The js file is placed at the bottom to prevent blocking parsing
  • Some JS that do not change the DOM and CSS use defer and async properties to tell the browser that it can load asynchronously without blocking parsing

2. Reduce redrawing and reflow

Repaint and reflow cannot be avoided entirely in actual development; we can only minimize such behavior as much as possible

  • Reduce DOM manipulation
  • Optimize the DOM structure so that elements likely to trigger reflow are laid out outside the document flow
  • Set dimensions on img tags
  • Perform batched DOM operations offline (display: none), triggering only one reflow
  • Use transform for deformation and displacement, which avoids reflow

3. Render on the server

Next, Nuxt, etc.

Skeleton screen for the home page

4. Improve code quality

HTML:

  • Optimize the DOM hierarchy; a tree that is too deep increases DOM construction time and makes deep node lookups in JS expensive
  • Add a meta tag with the document encoding, so the browser can parse it easily
  • Do not scale images in HTML

CSS:

  • Reduce CSS nesting levels and choose appropriate selectors
  • Inline critical above-the-fold CSS with a style tag
  • Avoid using @import
  • Avoid using CSS expressions
  • Avoid using filters
  • Use 3D syntax for animations to enable GPU acceleration

Efficiency sorting of CSS selectors:

1. Id selector (#myId) 2. Class selector (.myClassName) 3. Tag selector (div, h1, p) 4. Adjacent sibling selector (h1 + p) 5. Child selector (ul > li) 6. Descendant selector (li a) 7. Wildcard selector (*) 8. Attribute selector (a[rel="external"]) 9. Pseudo-class selector (a:hover, li:nth-child)

Js:

  • Don’t manipulate the DOM too frequently; batch updates in requestAnimationFrame
  • Instead of changing an element’s style directly with JS, change it by toggling a class name
  • Cache DOM nodes that are accessed multiple times in variables, to avoid the cost of repeated lookups
  • Clean up unused timers
  • Add debounce and throttle to high-frequency events
  • Lazy loading, preloading, placeholder images
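A minimal throttle sketch for the high-frequency-event point (the injectable `now` parameter is an addition made here so the behavior can be tested deterministically; debounce is analogous but resets a timer instead):

```js
// Invoke fn at most once per `wait` milliseconds, dropping calls in between.
function throttle(fn, wait, now = Date.now) {
  let last = -Infinity
  return function (...args) {
    const t = now()
    if (t - last >= wait) {
      last = t
      return fn.apply(this, args)
    }
  }
}

// usage: window.addEventListener('scroll', throttle(onScroll, 100))
```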

Quick checklist

  1. js/css tree shaking
  2. Proper code splitting and merging
  3. Code compression, pre-gzip or Brotli to reduce the server dynamic compression time
  4. Select a suitable format for the image and compress it
  5. Upgrade HTTP to HTTP2
  6. Static resources use CDN

References

Front-End Performance Checklist 2019 [PDF, Apple Pages, MS Word]

Volume reduced by 80%! Unleash the true potential of Webpack tree-shaking

Why adaptive images

HTTP request merge vs HTTP parallel request

X-DNS-Prefetch-Control

Yahoo’s 35 rules for front-end performance optimization