
Preface

Performance optimization is essential front-end knowledge. It is something everyone needs to know: it comes up constantly in day-to-day work and is a frequent interview topic. So how do you optimize front-end performance? There are seven broad approaches

The seven approaches are: reducing the number of requests, reducing resource size, optimizing network connections, optimizing resource loading, reducing repaint and reflow, using better-performing APIs, and optimizing the build

Reduce the number of requests

[File merging]

If you do not merge files, the following three risks exist:

  1. Extra requests have to be made between files, adding n-1 network round trips of latency
  2. Packet loss affects more of the requests
  3. Connections may be dropped when passing through a proxy server

However, file merging has its own problems

  1. First screen rendering problem
  2. Cache invalidation

Therefore, for file merging, the following improvements are suggested:

  1. Merge common libraries into a shared file
  2. Merge the remaining files separately for each page

[Image processing]

  1. CSS sprites

CSS sprites were once a very popular technique. They reduce the number of HTTP requests by combining several images into a single image; the downside is that when the combined image is large, loading it all at once can be slow. With the rise of icon fonts and SVG, this technique has gradually faded away

  2. Base64

Embedding image content directly in the HTML (or CSS) as a Base64 data URI reduces the number of HTTP requests. However, because Base64 encoding uses eight bits of character data to represent six bits of information, the encoded size is roughly 33% larger than the original; see the sketch after this list

  3. Use icon fonts instead of images
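As an illustration of the Base64 approach from item 2, a minimal sketch (the encoded string below is truncated and purely illustrative):

<!-- A small icon inlined as a Base64 data URI: no extra HTTP request,
     but the markup is roughly a third larger than the binary image -->
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAA..." alt="icon">

This only pays off for small images; for large images the roughly 33% size penalty and the loss of separate caching outweigh the saved request.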

[Reduce redirects]

Avoid redirects as much as possible. A redirect delays the transfer of the entire HTML document: nothing is rendered and no components are downloaded until the HTML document arrives, which degrades the user experience

If you must redirect, for example from HTTP to HTTPS, use a 301 permanent redirect rather than a 302 temporary redirect. With a 302, every visit to the HTTP URL is redirected to the HTTPS page again; with a 301, after the first redirect the browser goes straight to the HTTPS page on subsequent visits to the HTTP URL
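In practice the redirect is usually configured in the web server or CDN; a minimal Node.js sketch of the idea (the port and host handling are simplified):

// Answer every plain-HTTP request with a 301 pointing at the HTTPS URL
const http = require('http')

http.createServer((req, res) => {
  res.writeHead(301, { Location: 'https://' + req.headers.host + req.url })
  res.end()
}).listen(80)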

[Using cache]

When a strong cache set via Cache-Control or Expires is in effect, no request is sent to the server until the cache expires. When the strong cache expires, the browser sends a request using the Last-Modified or ETag negotiation headers. If the resource has not changed, the server returns a 304 response and the browser keeps using the local copy; if the resource has been updated, the server returns the new resource with a 200 response
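A minimal Node.js sketch of a strong cache header on a fingerprinted asset (the file name, path, and max-age are illustrative):

// With Cache-Control set like this, the browser will not re-request this URL for a year
const http = require('http')
const fs = require('fs')

http.createServer((req, res) => {
  if (req.url === '/static/app.3f2a1c.js') {
    res.writeHead(200, {
      'Content-Type': 'application/javascript',
      'Cache-Control': 'public, max-age=31536000'
    })
    fs.createReadStream('./dist/app.3f2a1c.js').pipe(res)
  } else {
    res.writeHead(404)
    res.end()
  }
}).listen(8080)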

[Not using CSS @import]

CSS @import causes additional requests, and the imported stylesheet cannot start downloading until the importing stylesheet has been fetched and parsed; prefer link tags

[Avoid empty src and href]

An a tag with an empty href points to the current page address

A form with an empty action attribute submits to the current page address

Reduce resource size

[Compression]

  1. HTML compression

HTML compression removes characters that are meaningful in the source file but are not displayed in the page, such as spaces, tabs, and newlines

  2. CSS compression

CSS compression includes invalid code removal and CSS semantic consolidation

  3. JS compression and obfuscation

JS compression and obfuscation include removing invalid characters and comments, shrinking and optimizing code semantics, and reducing readability for code protection

  4. Image compression

For photographic images, lossy compression discards some relatively insignificant color information

[WebP]

Android can use images in WebP format. WebP has a better compression algorithm and produces smaller files at the same visual quality, typically more than 25% smaller than JPG or PNG. It supports both lossless and lossy compression, alpha transparency, and animation

[Enable Gzip]

Gzip encoding over HTTP is a technique used to improve the performance of web applications. High-traffic websites often use gzip compression to make pages feel faster. It usually refers to a feature of the web server that compresses page content before sending it to the visitor's browser. Plain-text content can generally be compressed to around 40% of its original size
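In practice gzip is usually enabled in the web server or CDN configuration; as a minimal Node.js sketch of the idea (the file path is a placeholder):

// Gzip the response only when the client advertises support for it
const http = require('http')
const zlib = require('zlib')
const fs = require('fs')

http.createServer((req, res) => {
  const acceptsGzip = /\bgzip\b/.test(req.headers['accept-encoding'] || '')
  const source = fs.createReadStream('./index.html')

  if (acceptsGzip) {
    res.writeHead(200, { 'Content-Type': 'text/html', 'Content-Encoding': 'gzip' })
    source.pipe(zlib.createGzip()).pipe(res)
  } else {
    res.writeHead(200, { 'Content-Type': 'text/html' })
    source.pipe(res)
  }
}).listen(8080)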

Optimize network connections

[Using CDN]

CDN stands for Content Delivery Network. Based on real-time information such as network traffic, the connection and load status of each node, the distance to the user, and response time, it redirects the user's request to the service node closest to the user. Its purpose is to let users fetch the content they need from a nearby node, relieving network congestion and improving the response speed of the website

[Using DNS pre-resolution]

When the browser accesses a domain name, it must resolve it via DNS to obtain the IP address. The lookup goes through the browser cache, the system cache, the router cache, the ISP (carrier) DNS cache, and then the root, top-level, and authoritative DNS servers until the IP address is found

DNS prefetch resolves, in advance and according to rules defined by the browser, domain names that may be used later, so the results are already in the system cache; this shortens DNS resolution time and speeds up page loads

The method is to add a few link tags inside the head tag:

<link rel="dns-prefecth" href="https://www.google.com">
<link rel="dns-prefecth" href="https://www.google-analytics.com">
Copy the code

Because these lookups run in parallel and do not block page rendering, resolving DNS for these domains ahead of time shortens resource load time

[Parallel connection]

Under HTTP/1.1, Chrome allows at most six concurrent requests per domain. Spreading resources across multiple domains increases the number of concurrent requests

[Persistent connection]

Persistent connections, established with Keep-Alive in HTTP/1.0 or by default in HTTP/1.1, reduce latency and connection-establishment overhead, keep connections warm, and reduce the potential number of open connections

[Pipelined connection]

With HTTP/2, a single connection can be multiplexed: multiple resources are transferred concurrently over one connection, so there is no longer any need to add domains just to increase the number of concurrent connections

Optimize resource loading

[Resource loading location]

Optimize where resources are placed and when they are loaded so that page content is displayed, and functionality becomes available, as quickly as possible

  1. Put CSS files in the head, so styles are loaded before the page is rendered

  2. Put JS files at the bottom of the body, so the page is rendered before scripts are loaded and executed

  3. JS files that the page and its layout depend on, such as babel-polyfill.js and flexibility.js, go in the head

  4. Try not to put style tags and script tags in the middle of the body

[Resource loading timing]

  1. Asynchronous script tag

Defer: the script is downloaded asynchronously and executed only after HTML parsing is complete. In practice the effect is similar to putting the script at the bottom of the body

Async: the script is downloaded asynchronously and executed as soon as the download finishes
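A small illustration of the two attributes (the file names are placeholders):

<!-- defer: downloaded in parallel, executed only after HTML parsing finishes, in document order -->
<script defer src="/js/app.js"></script>

<!-- async: downloaded in parallel, executed as soon as it arrives; execution order is not guaranteed -->
<script async src="/js/analytics.js"></script>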

  2. Load modules on demand

In a system with complex business logic, such as an SPA, only the business modules required by the current route should be loaded

Loading on demand is a great way to optimize a web page or application. The code is split at logical breakpoints, and new chunks are loaded only when the running code actually needs them. This speeds up the initial load and reduces the total weight, since some chunks may never be loaded at all

Webpack provides two similar techniques. The preferred one is the import() syntax that follows the ECMAScript proposal; the second is the webpack-specific require.ensure
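A minimal sketch of on-demand loading with import(); the button id, module path, and renderChart export are placeholders:

// The report module is fetched as a separate chunk only when the user actually asks for it
document.getElementById('report-btn').addEventListener('click', () => {
  import(/* webpackChunkName: "report" */ './report/chart.js')
    .then((module) => module.renderChart())
    .catch((err) => console.error('Failed to load the report chunk', err))
})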

  3. Resource preloading with preload and prefetch

Preload lets the browser fetch the specified resources in advance and execute them when needed, which speeds up loading of the current page

Prefetch tells the browser which resources the next page is likely to use, so they can be fetched in idle time and the next navigation is faster
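A small illustration (the URLs are placeholders):

<!-- preload: a resource the current page will definitely need, fetched early at high priority -->
<link rel="preload" href="/fonts/main.woff2" as="font" type="font/woff2" crossorigin>

<!-- prefetch: a resource the next page is likely to need, fetched at low priority when idle -->
<link rel="prefetch" href="/js/next-page.js">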

  4. Lazy loading and preloading of resources

Lazy loading means resources are loaded with a delay, or only when certain conditions are met

Resource preloading is used to load resources required by users in advance to ensure good user experience

Both lazy loading and preloading are off-peak operations: they avoid loading while the browser is busy and fetch resources when the browser is idle, which makes better use of the network

Reduce repaint and reflow

[Style settings]

  1. Avoid deep selectors and other complex selectors to improve CSS matching efficiency
  2. Avoid CSS expressions, which are a powerful but dangerous way to set CSS properties dynamically. The problem with CSS expressions is how frequently they are evaluated: they are recomputed not only when the page is rendered or resized, but also when it is scrolled and even when the mouse moves
  3. Give elements an appropriate height or min-height; otherwise, when an element's dynamic content loads, the page will jump or shift, causing reflow
  4. Give images explicit dimensions. If an image has no dimensions, on first load the space it occupies grows from zero to its full size, pushing surrounding content around and triggering reflow
  5. Do not use table layout: a small change can cause the entire table to be laid out again, and rendering a table usually takes about three times as long as equivalent elements
  6. If something can be done with CSS, use CSS instead of JS

[Rendering layer]

1. Separate elements that need to be repainted many times into their own render layer (for example with position: absolute) to limit the repainted area

2. For animated elements, use hardware acceleration to avoid repaint and reflow, as in the sketch below
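A minimal CSS sketch of the idea (the class name is a placeholder); transform and will-change let the browser composite the element on its own layer instead of reflowing the page:

/* Animate with transform instead of top/left so the change stays in the compositing step */
.floating-badge {
  position: absolute;        /* take the element out of normal flow */
  will-change: transform;    /* hint the browser to promote it to its own layer */
  transition: transform 0.3s ease;
}

.floating-badge:hover {
  transform: translate3d(0, -4px, 0);  /* GPU-composited movement, no reflow */
}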

[DOM optimization]

  1. Cache the DOM
const div = document.getElementById('div')

Because DOM queries are time-consuming, cache the node in a variable when it will be used several times instead of querying it each time

  2. Reduce DOM depth and node count

The more elements an HTML page has and the deeper they are nested, the longer the browser takes to parse the DOM and paint it, so keep the DOM as small and as flat as possible.

  3. Batch DOM operations

Because DOM manipulation is time-consuming and can cause reflow, avoid frequent DOM operations by batching them: build the markup as a string first, then update the DOM once with innerHTML, as in the sketch below
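A minimal sketch, assuming a #list container and an items array:

// Build the markup as a string first, then touch the DOM exactly once
const items = ['apple', 'banana', 'cherry']
const html = items.map((name) => `<li>${name}</li>`).join('')
document.getElementById('list').innerHTML = html  // a single update, a single reflow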

  4. Batch style changes

Change element styles in batches by toggling a class or by using the element's style.cssText property, as shown below
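A small sketch of both approaches (the element id and class name are placeholders):

const box = document.getElementById('box')

// Option 1: apply several declarations in one assignment instead of one by one
box.style.cssText = 'width: 200px; height: 100px; background: #eee;'

// Option 2: declare the styles in a CSS class and just toggle it
box.classList.add('highlighted')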

  5. Manipulate the DOM in memory

Use a DocumentFragment object so that DOM operations happen in memory rather than on the live page

  6. Update DOM elements offline

For operations such as appendChild, use a DocumentFragment to assemble the DOM offline and insert it into the page once it is complete, or hide the element with display: none, perform the operations while it is out of the document flow, and then show it again; a combined sketch follows
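A minimal sketch of both techniques (the #list container is a placeholder):

const list = document.getElementById('list')

// DocumentFragment: every append happens in memory; only the final insert touches the page
const fragment = document.createDocumentFragment()
for (let i = 0; i < 100; i++) {
  const li = document.createElement('li')
  li.textContent = `item ${i}`
  fragment.appendChild(li)
}
list.appendChild(fragment)  // a single insertion, a single reflow

// display: none: take the element out of the flow, mutate it, then show it again
list.style.display = 'none'
list.removeChild(list.firstElementChild)
list.style.display = ''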

  7. Separate DOM reads and writes

The browser renders lazily: several consecutive DOM modifications may trigger only one render. But if you read layout information immediately after modifying the DOM, the browser must render to return a correct value, forcing an extra reflow. Therefore, keep DOM writes and DOM reads separate, as in the sketch below
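A small sketch contrasting the two patterns (the element ids are placeholders):

const a = document.getElementById('a')
const b = document.getElementById('b')

// Bad: reading layout right after each write forces a synchronous reflow every time
// a.style.width = '100px'; console.log(a.offsetWidth)
// b.style.width = '100px'; console.log(b.offsetWidth)

// Better: group all the writes, then all the reads
a.style.width = '100px'
b.style.width = '100px'
console.log(a.offsetWidth, b.offsetWidth)  // layout is computed once here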

  8. Event delegation

Event delegation means registering the event listener on a parent element. Because events on child elements bubble up to the parent node, the parent's listener can handle the events of many child elements in one place

With event delegation you reduce memory usage, improve performance, and lower code complexity
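A minimal sketch, assuming a ul#menu whose li items carry a data-id attribute:

// One listener on the parent handles clicks on every current and future <li>
document.getElementById('menu').addEventListener('click', (event) => {
  const item = event.target.closest('li')
  if (item) {
    console.log('clicked item', item.dataset.id)
  }
})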

  9. Debouncing and throttling

Use function throttling or debouncing to limit how often a handler fires, as in the sketch below
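A minimal hand-rolled debounce sketch (the 200 ms delay is illustrative); in real projects the lodash debounce and throttle helpers are commonly used instead:

// Debounce: run fn only after `delay` ms have passed without another call
function debounce(fn, delay) {
  let timer = null
  return function (...args) {
    clearTimeout(timer)
    timer = setTimeout(() => fn.apply(this, args), delay)
  }
}

// The resize handler now runs once the user has stopped resizing for 200 ms
window.addEventListener('resize', debounce(() => {
  console.log('recalculate layout')
}, 200))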

  10. Clean up the environment

Remove object references, clear timers, remove event listeners, keep variable scope as small as possible, and release memory promptly

Use better-performing APIs

  1. Use the right selector

The performance order of the selectors is shown below. Try to choose the selector with the best performance

  1. ID selector (#myId)
  2. Class selector (.myClassName)
  3. Tag selector (div, h1, p)
  4. Adjacent sibling selector (h1 + p)
  5. Child selector (ul > li)
  6. Descendant selector (li a)
  7. Universal selector (*)
  8. Attribute selector (a[rel="external"])
  9. Pseudo-class selector (a:hover, li:nth-child)
  2. Use requestAnimationFrame instead of setTimeout and setInterval

If you want to make visual changes at the start of each frame, only requestAnimationFrame currently guarantees that timing. A callback scheduled with setTimeout or setInterval may run in the middle or at the end of a frame, causing the frame to be dropped. A sketch follows
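A minimal animation sketch (the element id and distance are placeholders):

// Move a box 200px to the right, updating once per frame
const box = document.getElementById('box')
let offset = 0

function step() {
  offset += 2
  box.style.transform = `translateX(${offset}px)`
  if (offset < 200) {
    requestAnimationFrame(step)  // schedule the next update at the start of the next frame
  }
}

requestAnimationFrame(step)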

  3. Use IntersectionObserver for lazy loading of images in the viewport

The traditional approach listens to the scroll event and calls getBoundingClientRect to determine whether an element is visible; even with throttling, this triggers reflow. IntersectionObserver does not have that problem, as in the sketch below
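A minimal sketch, assuming img tags that keep the real URL in a data-src attribute:

// Swap in the real image URL only when the <img> enters the viewport
const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      const img = entry.target
      img.src = img.dataset.src   // start loading the real image
      obs.unobserve(img)          // each image only needs to be handled once
    }
  })
})

document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img))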

  4. Use a Web Worker

A basic feature of client-side JavaScript is that it is single-threaded: the browser cannot run two event handlers at the same time, nor fire a timer while a handler is running. Web Worker is the multithreading solution provided by HTML5. Computation-heavy code can be moved into a Web Worker so it does not block the user interface, which makes the API very useful for complex calculations and data processing
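A minimal two-file sketch (the file name heavy-work.js and the message shape are placeholders):

// main.js: hand heavy computation to a worker so the UI thread stays responsive
const worker = new Worker('heavy-work.js')
worker.postMessage({ numbers: Array.from({ length: 1e6 }, (_, i) => i) })
worker.onmessage = (event) => {
  console.log('sum computed off the main thread:', event.data)
}

// heavy-work.js: runs inside the worker thread
self.onmessage = (event) => {
  const sum = event.data.numbers.reduce((acc, n) => acc + n, 0)
  self.postMessage(sum)
}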

However, keep browser compatibility in mind when using these newer APIs

Webpack optimization

[Extract common code]

With the CommonsChunkPlugin, the generated shared bundle can be loaded once up front and then served from the cache. This speeds things up because the browser quickly pulls the common code from the cache instead of downloading a larger bundle every time a new page is visited

Webpack 4 removed CommonsChunkPlugin in favour of two new configuration options: optimization.splitChunks and optimization.runtimeChunk

Setting optimization.splitChunks.chunks: "all" enables the default code-splitting configuration, as in the sketch below
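A minimal webpack 4 configuration sketch (the entry and output paths are placeholders):

// webpack.config.js
const path = require('path')

module.exports = {
  entry: './src/index.js',
  output: {
    filename: '[name].[chunkhash].js',
    path: path.resolve(__dirname, 'dist')
  },
  optimization: {
    splitChunks: { chunks: 'all' },  // extract shared and vendor code into separate chunks
    runtimeChunk: 'single'           // keep the webpack runtime in its own small chunk
  }
}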

[Dynamic import and on-demand loading]

Webpack offers two ways of separating code through inline function calls to modules: the preferred one is the import() syntax that follows the ECMAScript proposal; the second is the webpack-specific require.ensure

[Eliminate useless code]

Tree shaking is the term for removing dead code from a JavaScript bundle. It relies on the static structure of the ES2015 module system, i.e. import and export. The term and concept were popularized by the ES2015 module bundler rollup

JS tree shaking is done through UglifyJS and CSS tree shaking is done through Purify CSS

[Long cache optimization]

  1. Use chunkhash instead of hash, so the cache stays valid as long as the chunk itself does not change

  2. Use module names instead of module ids

Each module.id is assigned incrementally based on the default resolution order, so when the resolution order changes, the ids change with it

Two plugins solve this problem. The first, NamedModulesPlugin, uses the module's path instead of a numeric identifier; it makes the output readable during development but takes longer to execute. The second, HashedModuleIdsPlugin, is recommended for production builds. A configuration sketch follows
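A minimal webpack 4 sketch of this long-term caching setup (the paths are placeholders):

// webpack.config.js
const path = require('path')
const webpack = require('webpack')

module.exports = {
  entry: './src/index.js',
  output: {
    // chunkhash: the file name only changes when the chunk's content changes
    filename: '[name].[chunkhash].js',
    path: path.resolve(__dirname, 'dist')
  },
  plugins: [
    // Stable module ids based on hashed module paths, recommended for production
    new webpack.HashedModuleIdsPlugin()
  ]
}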

[Public code inline]

Inline manifest.js into the HTML file using the html-webpack-inline-chunk-plugin plugin