Preface

On the topic of front-end performance optimization, we can’t optimize for optimization’s sake. Follow a process:

  • Why front-end performance tuning, and what exactly does it mean?

  • How to evaluate the front-end performance of a website?

  • What are the directions and points of performance optimization?

Why/what is it

Why do we do performance tuning? Performance is critical to the front end and plays an important role in the success of a business. High-performance sites attract more users than low-performance sites, and performance is an important driver of conversion.

According to some studies, for every 100-millisecond decrease in home page loading speed, conversion based on that page increases by 1.11% and average annual revenue increases by $380,000. Every 100-millisecond decrease in payment page loading speed increases session-based conversion by 1.55% and average annual revenue by $530,000.

So front-end performance affects the overall revenue of the company, and at the same time a high-performance website gives users a better experience.

What exactly do we mean by performance? In fact, performance is relative: a site may be fast for one user but slow for another. It is also possible that a page finishes loading quickly but responds to interaction slowly. So we need metrics to measure the performance of a website.

Over the past few years, members of the Chrome team (in collaboration with the W3C Web Performance Working Group) have been working on a new set of standardized APIs and metrics to more accurately measure a user's web performance experience.

Several important indicators:

First Contentful Paint (FCP): measures the time from when the page starts loading until any part of the page's content is rendered on screen.

Largest Contentful Paint (LCP): measures the time from when the page starts loading until the largest text block or image element finishes rendering on screen.

First Input Delay (FID): measures the time from the first time a user interacts with your site (for example, clicking a link, tapping a button, or using a custom JavaScript-driven control) until the browser is actually able to respond to that interaction.

Time to Interactive (TTI): measures the time from when a page starts loading until it is visually rendered, its initial scripts (if any) have loaded, and it can reliably respond quickly to user input.

Cumulative Layout Shift (CLS): measures the cumulative score of all unexpected layout shifts that occur between when a page starts loading and when its lifecycle state becomes hidden.
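To make these metrics concrete, here is an illustrative sketch of how a CLS-style score accumulates. Note this is a simplification: real CLS groups shifts into session windows and reports the worst window; this version just sums unexpected shifts. The entry shape mirrors the browser's `layout-shift` performance entries.

```javascript
// Illustrative sketch of how a CLS-style score accumulates. In the browser the
// entries would come from:
//   new PerformanceObserver(cb).observe({ type: "layout-shift", buffered: true });
// Here we fold a pre-collected list instead.
function cumulativeLayoutShift(entries) {
    return entries
        .filter((e) => !e.hadRecentInput) // shifts right after user input are expected, so excluded
        .reduce((sum, e) => sum + e.value, 0);
}

// Two unexpected shifts plus one caused by user input:
const clsScore = cumulativeLayoutShift([
    { value: 0.08, hadRecentInput: false },
    { value: 0.02, hadRecentInput: true },
    { value: 0.05, hadRecentInput: false }
]);
console.log(clsScore.toFixed(2)); // "0.13"
```

In real pages you would not compute this by hand; libraries such as web-vitals report the standardized values.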

Assessment

How do you know whether a site needs performance optimization? You can use Lighthouse for performance checks (it can run against headless Chrome, for example driven by Puppeteer), which tests your site in several key areas: performance, accessibility, best practices, and how your site performs as a progressive web application.

As an example, take a look at a Lighthouse report for Taobao (touch-screen version), a well-known company's site.

Evaluation and Webpack integration

Performance budgets can be incorporated into the Webpack build process. After the build step, Webpack outputs a color-coded list of assets and their sizes; anything over budget is highlighted in yellow.

Incorporate Performance Support into your build process
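As a sketch of what such a budget might look like, Webpack's built-in performance option lets the build warn or fail when assets exceed a size (the thresholds below are illustrative, not prescribed values):

```javascript
// webpack.config.js — a minimal performance-budget sketch.
// The thresholds below are illustrative; pick numbers that match your own budget.
module.exports = {
    // ...
    performance: {
        hints: "warning",          // report over-budget assets ("error" fails the build instead)
        maxEntrypointSize: 250000, // budget per entry point, in bytes
        maxAssetSize: 250000       // budget per emitted asset, in bytes
    }
};
```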

Performance optimization

Now that we’re ready to work, let’s summarize where front-end performance can be optimized.

We will optimize starting from the development stage, then move to the network level after launch, and finally summarize performance optimization at the rendering level.

Development –> Network –> Render

The development phase

We know that the development phase is mainly about webpack performance optimization.

Webpack performance optimization

The performance optimization of Webpack mainly starts from two aspects: packaging time and packaging volume.

1. Upgrade webpack

The first thing to do is upgrade Webpack to the latest version. Webpack 5 ships with many built-in plugins and optimizations, so upgrading is a good choice in itself.

2. Narrow search/reduce file processing

We know that Webpack scans various files and hands them to the matching loaders for conversion. But files in node_modules are already compiled, so there is no need to scan them, nor any third-party libraries introduced into the project; these files can be regarded as mature and do not need loader processing.

  1. include/exclude are usually configured on each loader. The src directory usually serves as the source directory, so you can do the following; of course, the include/exclude values can be modified as required.
export default {
    // ...
    module: {
        rules: [{
            exclude: /node_modules/,
            include: /src/,
            test: /\.js$/,
            use: "babel-loader"
        }]
    }
};
  2. Use externals to extract public libraries that will not change, and import the corresponding CDN script in the HTML.
<script src="https://code.jquery.com/jquery-3.1.0.js" integrity="sha256-slogkvB1K3VOkzAI8QITxV3VzpOnkeNVsKvtkYLMjfk=" crossorigin="anonymous"></script>
module.exports = {
  //...
  externals: {
    jquery: 'jQuery',
  },
};

externals serves two functions here:

  • Instead of packing imported packages into the bundle, obtain these external dependencies at runtime.

  • Expose the global jQuery variable name. Some code may access methods via a variable named jQuery; exposing it globally avoids errors.

  3. Use the DllPlugin to pack third-party packages in advance; see the DllPlugin section of the Webpack documentation.

    The DllPlugin basically packs infrequently changing libraries into a separate file and generates a manifest file (for example react.manifest.json) that records the mapping between each third-party library and its position in the packaged file, so the next build skips the time-consuming reading, compiling, and converting of those libraries.
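A minimal sketch of a DllPlugin setup, assuming a React project; the file names and paths are illustrative:

```javascript
// webpack.dll.config.js — a minimal DllPlugin sketch; names and paths are illustrative.
const path = require("path");
const webpack = require("webpack");

module.exports = {
    entry: { react: ["react", "react-dom"] }, // libraries that rarely change
    output: {
        path: path.resolve(__dirname, "dll"),
        filename: "[name].dll.js",
        library: "[name]_dll" // global variable the dll bundle exposes
    },
    plugins: [
        new webpack.DllPlugin({
            name: "[name]_dll", // must match output.library
            path: path.resolve(__dirname, "dll/[name].manifest.json") // → dll/react.manifest.json
        })
    ]
};

// The main webpack.config.js then points at the prebuilt bundle:
//   new webpack.DllReferencePlugin({ manifest: require("./dll/react.manifest.json") })
```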

3. Targeted search

Configure resolve to improve the file search speed

alias maps module paths, extensions lists file suffixes that can be omitted in imports, and noParse skips parsing files that have no dependencies. Configuring alias and extensions is usually sufficient.

export default {
    // ...
    resolve: {
        alias: {
            "#": AbsPath(""), // root directory shortcut
            "@": AbsPath("src"), // src directory shortcut
            swiper: "swiper/js/swiper.min.js" // module import shortcut
        },
        extensions: ["js", "ts", "jsx", "tsx", "json", "vue"] // suffixes that can be omitted in import paths
    }
};
4. Caching

Configure loader caches so that only modified files are recompiled. Most loaders and plugins provide cache options, such as babel-loader and eslint-webpack-plugin.

import EslintPlugin from "eslint-webpack-plugin";

export default {
    // ...
    module: {
        rules: [{
            // ...
            test: /\.js$/,
            use: [{
                loader: "babel-loader",
                options: { cacheDirectory: true }
            }]
        }]
    },
    plugins: [
        new EslintPlugin({ cache: true })
    ]
};
5. Multithreading

The advantage of multithreading is using a multi-core CPU to process files concurrently. We know that JS/Node is single-threaded, so how can we use a multi-core CPU to handle a large number of files?

const HappyPack = require("happypack");

module.exports = {
    // ...
    module: {
        rules: [
            { test: /\.js$/, use: "happypack/loader?id=js" }, // id=js refers to the HappyPack instance below
            { test: /\.css$/, use: "happypack/loader?id=css" }
        ]
    },
    plugins: [
        new HappyPack({
            id: "css",
            use: ["style-loader", "css-loader"]
        }),
        new HappyPack({
            id: "js",
            use: [{ // use is an array, in the same format as a rule's use
                loader: "babel-loader",
                options: {
                    presets: ["@babel/preset-env", "@babel/preset-react"]
                }
            }]
        })
    ]
};
6. Code reduction/compression

We want to ship as little code as possible at launch. We can eliminate unused code, and extract common parts so that the same code is not packaged repeatedly into different files, inflating the total size.

export default {
    // ...
    mode: "production"
};

In Webpack, setting the mode to production is enough for tree shaking to take effect, provided the business code is written with the ESM specification: use import to import modules and export to export them.
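A tiny illustration of why ESM matters for tree shaking (the file names are hypothetical): static import/export lets the bundler prove which exports are unused.

```javascript
// math.js — ESM named exports let the bundler see what is unused.
export const square = (x) => x * x;
export const cube = (x) => x * x * x; // never imported anywhere → dropped in production builds

// index.js
import { square } from "./math.js";
console.log(square(4)); // only `square` ends up in the bundle
```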

Compress HTML/CSS/JS code, and compress fonts/images/audio/video as well, to reduce the bundle size more effectively.

  • optimize-css-assets-webpack-plugin: compresses CSS code
  • uglifyjs-webpack-plugin: compresses ES5 JS code
  • terser-webpack-plugin: compresses ES6 JS code

Webpack v4+ uses splitChunks instead of CommonsChunkPlugin for code splitting; see the official documentation for details.

export default {
    // ...
    optimization: {
        runtimeChunk: { name: "manifest" }, // split out the webpack runtime
        splitChunks: {
            cacheGroups: {
                common: {
                    minChunks: 2,
                    name: "common",
                    priority: 5,
                    reuseExistingChunk: true, // reuse existing chunks
                    test: AbsPath("src")
                },
                vendor: {
                    chunks: "initial",
                    name: "vendor", // chunk name
                    priority: 10, // priority
                    test: /node_modules/ // regex matching the files to include
                }
            }, // cache groups
            chunks: "all" // which modules to split: all, async, or initial
        } // code splitting
    }
};
7. Load on demand

Package each routing page / triggered feature as a separate file and load it only when it is used, to reduce the burden on first-screen rendering. The more features a project has, the larger the bundle, and the slower the first screen renders.

const Login = () => import(/* webpackChunkName: "login" */ "../../views/login");
const Logon = () => import(/* webpackChunkName: "logon" */ "../../views/logon");

// babel.config.js
{
    // ...
    "plugins": [
        // ...
        "@babel/plugin-syntax-dynamic-import"
    ]
}

The network layer

1. Reduce HTTP requests

A complete HTTP request consists of a DNS lookup, a TCP handshake, the client sending the request, the server responding, and the browser waiting for the response.

Noun explanation:

  • Queueing: time spent in the request queue.
  • Stalled: time between establishing the TCP connection and actually transmitting data, including proxy negotiation time.
  • Proxy negotiation: time spent negotiating with a proxy server.
  • DNS Lookup: time spent performing the DNS lookup. Every distinct domain on the page requires its own lookup.
  • Initial Connection/Connecting: time taken to establish the connection, including TCP handshakes/retries and SSL negotiation.
  • SSL: time spent completing the SSL handshake.
  • Request sent: time taken to send the network request, usually about a millisecond.
  • Waiting (TTFB): time from sending the request until the first byte of the response is received.
  • Content Download: time taken to receive the response data (13.05 ms in this example).

As you can see from this example, the percentage of time actually downloading data is 13.05/204.16 = 6.39%.

When you merge small files into larger ones, the time spent on these fixed overheads stays roughly the same, but the share of actual download time increases. The higher that share, the better the HTTP utilization, and the higher the efficiency.

This is why it is recommended to combine multiple small files into one large file to reduce the number of HTTP requests.
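The arithmetic can be sketched directly; the merged-file figure below is a hypothetical extrapolation (ten identical files, overhead paid once), not a measurement:

```javascript
// Back-of-the-envelope arithmetic for the example above: each request pays a
// fixed overhead (queueing, DNS, TCP, TTFB, ...); only download time scales
// with payload size.
function downloadEfficiency(downloadMs, totalMs) {
    return downloadMs / totalMs;
}

// One small file: 13.05 ms of download inside a 204.16 ms request.
const oneFile = downloadEfficiency(13.05, 204.16); // ≈ 0.0639 → 6.39%

// Hypothetical merge of ten such files: the overhead is paid once,
// while the download time grows tenfold.
const overheadMs = 204.16 - 13.05;
const merged = downloadEfficiency(13.05 * 10, overheadMs + 13.05 * 10); // ≈ 0.406 → 40.6%
console.log(oneFile.toFixed(4), merged.toFixed(3));
```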

2. Use HTTP2

Why use HTTP/2, and what advantages does it have over HTTP/1.1?

Faster parsing

When parsing HTTP/1.1 requests, the server must keep reading bytes until it encounters the CRLF delimiter. HTTP/2 requests are less troublesome, because HTTP/2 is a frame-based protocol and every frame has a field indicating its length.
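The difference can be sketched with a toy parser. Here a 4-digit decimal length prefix stands in for HTTP/2's real 9-byte binary frame header, so this is purely illustrative:

```javascript
// Toy contrast between delimiter scanning (HTTP/1.1) and length-prefixed frames (HTTP/2).
function readLine(buf, start = 0) {
    // HTTP/1.1 style: scan until the CRLF delimiter is found.
    const end = buf.indexOf("\r\n", start);
    return end === -1 ? null : buf.slice(start, end);
}

function readFrame(buf, start = 0) {
    // HTTP/2 style: the frame announces its own payload length up front.
    const length = parseInt(buf.slice(start, start + 4), 10);
    return buf.slice(start + 4, start + 4 + length);
}

readLine("GET / HTTP/1.1\r\nHost: a.com\r\n"); // "GET / HTTP/1.1"
readFrame("0005hello0003foo");                 // "hello" (the next frame starts at offset 9)
```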

Multiplexing

With HTTP/1.1, if you initiate multiple requests at the same time, you must establish multiple TCP connections, because one TCP connection can only handle one HTTP request at a time.

With HTTP/2, multiple requests can share one TCP connection; this is called multiplexing. Each request has a unique stream ID so that its data can be kept separate and reassembled correctly.

Header compression

HTTP/2 uses “header tables” on both the client and server sides to track and store previously sent key-value pairs, rather than sending the same data with each request and response.

When the server receives a request, it builds the same table. When the client sends the next request, if the headers are the same, it can simply send the header indices, e.g. 62 63 64.

The server looks up the previously created table and restores the indices to the full headers.

Index   Header name   Value
62      Header1       foo
63      Header2       bar
64      Header3       bat
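A toy model of this index table (illustrative only: real HPACK also has a fixed static table for indices 1–61 and Huffman-coded literals):

```javascript
// Toy model of HTTP/2 header indexing: both sides keep a table of previously
// seen name/value pairs and send small indices instead of full headers.
class HeaderTable {
    constructor() {
        this.entries = new Map(); // index -> [name, value]
        this.lookup = new Map();  // "name: value" -> index
        this.nextIndex = 62;      // real HPACK reserves 1–61 for the static table
    }
    // Encoder side: return an index if known, otherwise store and send a literal.
    encode(name, value) {
        const key = `${name}: ${value}`;
        if (this.lookup.has(key)) return { index: this.lookup.get(key) };
        const index = this.nextIndex++;
        this.entries.set(index, [name, value]);
        this.lookup.set(key, index);
        return { literal: [name, value], index };
    }
    // Decoder side: restore the full header from an index.
    decode(index) {
        return this.entries.get(index);
    }
}

const table = new HeaderTable();
table.encode("header1", "foo"); // first time: sent as a literal, stored at index 62
table.encode("header2", "bar"); // stored at index 63
table.encode("header1", "foo"); // now only index 62 needs to be sent
```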
Priority

HTTP/2 can assign a higher priority to urgent requests, which the server can process first.

Flow control

With fixed bandwidth, if one request consumes a lot of traffic, less is left for the others. HTTP/2 provides precise control over how much traffic each stream consumes, so that high-priority streams can be processed quickly.

  3. Optimize images
Use icon fonts (iconfont) instead of image icons

An icon font turns icons into a font. You use it just like text and can set properties such as font size and color, which is very convenient. Icon fonts are also vector graphics, so they do not distort when scaled. Another advantage is that the generated files are extremely small.

References:

  • fontmin-webpack
  • Iconfont- Alibaba vector icon library
Select the correct compression level

image-webpack-loader can be used to compress images:

{
    test: /\.(png|jpe?g|gif|svg)(\?.*)?$/,
    use: [
        {
            loader: "url-loader",
            options: {
                limit: 10000, // images smaller than 10000 bytes are inlined as base64
                name: utils.assetsPath("img/[name].[hash:7].[ext]")
            }
        },
        // compress images
        {
            loader: "image-webpack-loader",
            options: { bypassOnDebug: true }
        }
    ]
}
  • Preferred vector formats: Vector images are resolution and scale independent, making them ideal for a multi-device and high-resolution world.
  • Shrink and compress SVG assets: XML markup generated by most drawing applications often contains unnecessary metadata that can be removed. Make sure your server is configured to apply GZIP compression to SVG assets.
  • Prefer WebP to older raster formats: WebP images are typically much smaller than older images.
  • Choose the best raster image format: Determine your functional requirements and choose a format suitable for each specific asset.
  • Experiment with the best quality settings for raster formats: don't be afraid to turn down the "quality" settings; the results are usually very good and the byte savings are significant.
  • Remove unnecessary image metadata: Many raster images contain unnecessary metadata about assets: geographic information, camera information, and so on. Use appropriate tools to strip this data.
  • Provide scaled images: resize images so that the "display" size is as close as possible to the image's "natural" size. Pay special attention to large images, as they account for the most overhead when resized!
  • Automation, automation, automation: Invest in automation tools and infrastructure to ensure that all your image assets are always optimized.
Replace animated GIFs with videos to speed up page loading

The cost savings between GIF and video can be significant.

Animated GIFs have three key features that video needs to replicate:

  • They play automatically.

  • They loop continuously (usually; looping can be disabled).

  • They are silent.

    Fortunately, you can recreate these behaviors using the <video> element:

<video autoplay loop muted playsinline></video>

For broader compatibility, provide multiple source formats:

<video autoplay loop muted playsinline>  
    <source src="my-animation.webm" type="video/webm">  
    <source src="my-animation.mp4" type="video/mp4">
</video>

Image processing also provides the following optimization reference links:

Serve images with correct dimensions

Use WebP images

Lazy loading image

Lazy-loaded video

An image strategy is a cheap but effective way to optimize performance: the savings from a single image can rival those of all the build-level strategies combined.

  4. Caching

Caching here mainly means browser caching, which is the lowest-cost performance optimization strategy. Browser caching can significantly improve page load speed and reduce bandwidth consumption.
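As a hedged sketch of a typical browser-caching policy (the header values and the content-hash naming convention are illustrative assumptions, not prescriptions): hashed static assets can be cached "forever" because a content change produces a new file name, while HTML should always revalidate.

```javascript
// Illustrative policy: pick a Cache-Control header per resource type.
function cacheControlFor(path) {
    if (/\.[0-9a-f]{8,}\.(js|css|png|jpe?g|webp|woff2?)$/.test(path)) {
        return "public, max-age=31536000, immutable"; // content hash in the name → safe to cache a year
    }
    if (path.endsWith(".html") || path === "/") {
        return "no-cache"; // always revalidate with the server (ETag / Last-Modified)
    }
    return "public, max-age=3600"; // everything else: cache for an hour
}

cacheControlFor("/static/app.3f8e9a1b.js"); // "public, max-age=31536000, immutable"
cacheControlFor("/index.html");             // "no-cache"
```

In practice this logic lives in your web server or CDN configuration rather than in application code.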

Use a CDN for static resources

For CDN principles and advantages, see "What is a CDN? What are the advantages of using a CDN?"

Rendering level

Performance optimization at the rendering level is about making code parse and execute faster, and reducing repaints and reflows.

Browser rendering process

  1. Parse the HTML to generate a DOM tree.
  2. Parse the CSS to generate a CSSOM rule tree.
  3. Combine the DOM tree and the CSSOM rule tree into a render tree.
  4. Traverse the render tree to run layout, calculating each node's position and size.
  5. Paint each node of the render tree to the screen.

Based on these steps, here are some optimization points:

  • CSS strategy: based on CSS rules
  • DOM strategy: based on DOM operations
  • Blocking strategy: based on script loading
  • Reflow/repaint strategy: based on reflow and repaint
  • Async update strategy: based on asynchronous updates
CSS strategy
  • Avoid nesting rules more than three levels deep
  • Avoid adding extra selectors to ID selectors
  • Avoid using tag selectors instead of class selectors
  • Avoid wildcard selectors; declare rules only for the target nodes
  • Avoid duplicate matching and duplicate definitions; pay attention to inheritable properties
DOM strategy
  • Cache computed DOM properties

  • Avoid excessive DOM manipulation

  • Use DocumentFragment to batch DOM operations

  • Use event delegation. Event delegation takes advantage of event bubbling: a single handler manages all events of a given type. Most mouse and keyboard events bubble and are suitable for event delegation, which saves memory.

Blocking strategy
  • The script strongly depends on the DOM or other scripts: set defer

  • The script has no strong dependency on the DOM or other scripts: set async

  • Use Web Workers. A Web Worker runs on a separate worker thread, independent of the main thread, so it can perform tasks without interfering with the user interface.

Reflow/repaint strategy
  • Cache computed DOM properties
  • Merge style changes via classes instead of changing styles line by line
  • Use display to show/hide DOM nodes, taking them "offline" while changing them
  • Use Flexbox instead of older layout models
  • Animate with changes to the transform and opacity properties
  • Use requestAnimationFrame for visual changes
Asynchronous update strategy
  • When modifying the DOM in asynchronous tasks, batch the changes into a microtask
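A sketch of that pattern (framework-agnostic; in real code the flushed callbacks would write to the DOM): multiple state changes within one tick are coalesced and applied once, in a microtask.

```javascript
// Coalesce repeated update requests into a single microtask flush.
const pending = new Set();
let scheduled = false;

function scheduleUpdate(applyFn) {
    pending.add(applyFn);
    if (!scheduled) {
        scheduled = true;
        queueMicrotask(flush); // runs after the current synchronous code, before rendering
    }
}

function flush() {
    for (const apply of pending) apply();
    pending.clear();
    scheduled = false;
}

// Three logical updates, but the deduplicated work runs once, in one batch.
let layoutWrites = 0;
const writeLayout = () => { layoutWrites++; };
scheduleUpdate(writeLayout);
scheduleUpdate(writeLayout); // same function → deduplicated by the Set
queueMicrotask(() => console.log(layoutWrites)); // 1
```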

References:

  • 24 suggestions for front-end performance optimization
  • Nine strategies and six metrics for advanced front-end performance optimization | four years of NetEase practice – Juejin
  • Fast load times