As the Internet develops, web app projects keep getting bigger: requirements and features multiply, and bundle sizes grow with them. Page load speed, rendering smoothness, and user experience are among the criteria by which a product is judged, so performance optimization is essential work for front-end developers. Below we cover it from three angles: engineering, loading, and code optimization. The core idea of performance optimization is to make things "small" first.

engineering

packaging

Resource compression

Code compression reduces code volume, saves bandwidth, and improves download speed. An online compression tool typically does two things:

  1. Compression: reduces the size of JavaScript files by removing all comments, tabs, newlines, and useless whitespace from the code.
  2. Obfuscation: renames variables and functions to meaningless names, to deter prying into or stealing the JavaScript source.
Webpack compression configuration:

```js
const UglifyJsPlugin = require('uglifyjs-webpack-plugin');
const OptimizeCSSAssetsPlugin = require('optimize-css-assets-webpack-plugin');

module.exports = {
  optimization: {
    minimizer: [ // configure minimizers and their options
      new UglifyJsPlugin({
        cache: true,
        parallel: true,
        sourceMap: true // set to true if you want JS source maps
      }),
      new OptimizeCSSAssetsPlugin({})
    ]
  }
};
```

tree shaking

Tree shaking can be understood as "shaking" our JS files with a tool so that the code they don't actually need falls out. It is one category of performance optimization.

  1. Webpack 4 enables tree shaking by setting the mode:

```js
module.exports = {
  mode: 'production'
};
```

Note: Webpack's built-in tree shaking does not analyze modules for side effects; you can declare side-effect-free modules yourself via the `sideEffects` field in package.json.

  2. webpack-deep-scope-plugin: this plugin is designed to fill the gaps in Webpack 4's tree shaking, eliminating unused code through scope analysis (see the online demo)

  3. Webpack 5 tree shaking:

```js
module.exports = {
  optimization: {
    usedExports: true,       // identify unused exports
    minimize: true,          // remove unused code from the bundle
    concatenateModules: true // merge all modules into one function where possible
  }
};
```
  4. eslint-plugin-you-dont-need-momentjs

If you're using ESLint, you can install a plugin that helps you identify places in your codebase where Moment.js isn't (or might not be) needed.

Loading on demand

First-screen load time matters. For SPA projects, loading large files on demand is appropriate. For example, a nearly 1 MB JS file of Chinese provinces, cities, and counties does not need to be loaded with the first screen; it should load only when the user clicks the selector, and not at all otherwise. This reduces the number and duration of HTTP requests on the first screen.
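This pattern can be sketched with a dynamic `import()`. The module name `./china-regions.js` below is hypothetical; a Node built-in stands in for it here so the snippet runs anywhere:

```javascript
// Load a heavy module only when first needed, caching the promise so
// repeated triggers don't request it again.
let regionsPromise = null;

function loadRegions() {
  if (!regionsPromise) {
    // In a real app: regionsPromise = import('./china-regions.js');
    regionsPromise = import('node:path'); // stand-in module for this demo
  }
  return regionsPromise;
}
```

Wire `loadRegions()` to the selector's first click or focus event; bundlers such as webpack emit a separate chunk for the dynamically imported file.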

Code splitting

As the Internet develops, web app projects keep growing, and large file size is one of the problems that hurts performance; for mobile devices especially, it is a disaster. It also makes no sense to load code up front that is only needed in a specific context, which leads to the concept of loading on demand. Code splitting was created to address these situations: splitting large files at different granularities, so that generated files stay small and can be loaded on demand.

Webpack code splitting

Webpack implements code splitting in three ways

  1. Multiple entries packed separately
  2. De-weight, remove public modules and third-party libraries
  3. Dynamic loading
clean-css / UnCSS

clean-css is a fast, efficient CSS optimizer; UnCSS Online ("Simply UnCSS your styles online!") strips unused styles from your stylesheets.

Load optimization

For load optimization, we first need to clarify several important browser performance metrics (see the Performance API):

  1. TTFB (Time To First Byte): the time at which the browser receives the first byte of the response
  2. FP (First Paint): the first point after navigation at which the page looks different from the content before navigation. When the browser starts rendering the page, the white screen ends; if a background color (e.g. `background: #ddd`) is set, you will see that color appear
  3. FCP (First Contentful Paint): the point at which the first piece of content (text, image, etc.) is painted
  4. FMP (First Meaningful Paint): the point at which the page's "main content" is painted (a custom, heuristic metric)
  5. DCL (DOMContentLoaded): fires when the HTML has been loaded and parsed; L (onLoad) fires when all of the page's resources have finished loading
  6. LCP (Largest Contentful Paint): the point at which the largest visible content element begins to appear on the page
  7. CLS (Cumulative Layout Shift): measures how often users experience unexpected layout shifts
  8. TBT (Total Blocking Time): the sum of the blocking time of all long tasks between FCP and TTI
  9. TTI (Time to Interactive): the point at which the page becomes reliably interactive

performance timing

`performance.timing` exposes a series of key time points covering the network, load, and parse phases. The meaning of each key time point:

  1. navigationStart: the Unix timestamp at which the unload event of the previous page in the browser window fired; this is the first measurement point
  2. unloadEventStart: the Unix timestamp when the previous page's unload event fired, if the previous page belongs to the same domain as the current page
  3. unloadEventEnd: the Unix timestamp when the callback of the previous page's unload event finished, if the previous page belongs to the same domain as the current page
  4. redirectStart: the Unix timestamp at the start of the first HTTP redirect
  5. redirectEnd: the Unix timestamp at the end of the last HTTP redirect
  6. fetchStart: the Unix timestamp when the browser is ready to fetch the document via an HTTP request, before it checks the local cache
  7. domainLookupStart: the Unix timestamp at the start of the DNS lookup; with a persistent connection, or when the information comes from the local cache, it equals fetchStart
  8. domainLookupEnd: the Unix millisecond timestamp at the end of the DNS lookup; with a persistent connection, or when the information comes from the local cache, it equals fetchStart
  9. connectStart: the Unix millisecond timestamp when the browser starts establishing the connection to the server; with a persistent connection it equals fetchStart
  10. connectEnd: the Unix millisecond timestamp when the connection between the browser and the server is established, meaning all handshake and authentication steps are complete; with a persistent connection it equals fetchStart
  11. secureConnectionStart: the Unix millisecond timestamp when the browser starts the secure-connection handshake with the server; 0 if the current page does not require a secure connection
  12. requestStart: the Unix millisecond timestamp when the browser issues the HTTP request to the server (or starts reading the local cache)
  13. responseStart: the Unix millisecond timestamp when the browser receives the first byte from the server (or reads it from the local cache)
  14. responseEnd: the Unix millisecond timestamp when the browser receives the last byte from the server (or reads it from the local cache, or when the HTTP connection is closed, if that happens first)
  15. domLoading: the Unix millisecond timestamp when parsing of the current page's DOM begins (i.e. document.readyState becomes "loading" and the corresponding readystatechange event fires)
  16. domInteractive: the Unix millisecond timestamp when parsing of the current page's DOM finishes and embedded resources start loading (i.e. document.readyState becomes "interactive" and the corresponding readystatechange event fires)
  17. domContentLoadedEventStart: the Unix millisecond timestamp when the current page's DOMContentLoaded event fires (i.e. the DOM has been parsed and scripts that run at that point are about to start)
  18. domContentLoadedEventEnd: the Unix millisecond timestamp when all scripts that need to run at that point have finished executing
  19. domComplete: the Unix millisecond timestamp when the current page's DOM is fully generated (i.e. document.readyState becomes "complete" and the corresponding readystatechange event fires)
  20. loadEventStart: the Unix millisecond timestamp when the callback of the current page's load event starts; 0 if the event has not yet fired
  21. loadEventEnd: the Unix millisecond timestamp when the callback of the current page's load event finishes; 0 if the event has not yet fired
The time period for a web page to load

Key indicators of performance calculation

  1. Time to start fetching: fetchStart - navigationStart
  2. Redirect time: redirectEnd - redirectStart
  3. DNS lookup time: domainLookupEnd - domainLookupStart
  4. TCP connection time: connectEnd - connectStart
  5. Request time: responseEnd - requestStart
  6. DOM tree parsing time: domComplete - domInteractive
  7. White screen time: responseStart - navigationStart
  8. DOM ready time: domContentLoadedEventEnd - navigationStart
  9. onload time (total load time): loadEventEnd - navigationStart
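These indicators can be read off a `performance.timing` snapshot with a small helper (a sketch; in the browser you would pass `performance.timing`):

```javascript
// Compute key load indicators from a performance.timing-like object
// (all fields are Unix millisecond timestamps).
function computeMetrics(t) {
  return {
    dns: t.domainLookupEnd - t.domainLookupStart,
    tcp: t.connectEnd - t.connectStart,
    request: t.responseEnd - t.requestStart,
    whiteScreen: t.responseStart - t.navigationStart,
    domReady: t.domContentLoadedEventEnd - t.navigationStart,
    onload: t.loadEventEnd - t.navigationStart
  };
}

// In the browser: computeMetrics(performance.timing)
```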

For optimization times, see web.dev

network

QPS (queries per second) measures how much traffic a query server handles in a given period. QPS = concurrency / average response time. Estimating QPS ahead of time lets you provision for a rainy day.
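The formula in code (a hypothetical helper; average response time is expressed in seconds):

```javascript
// QPS = concurrency / average response time (in seconds).
function computeQps(concurrency, avgResponseSeconds) {
  return concurrency / avgResponseSeconds;
}

// e.g. 100 concurrent requests at a 50 ms average response time
// sustain computeQps(100, 0.05) = 2000 queries per second.
```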

Common network-level optimizations:

  1. Enable CDN acceleration (browsers allow only around 5 concurrent connections per domain)
  2. Save cookie bandwidth; spare the primary domain's connections
  3. Optimize page response speed: lazy-load page content, cache static resource files (localStorage)
  4. nginx: enable gzip compression, ETag / Expires caching, reverse proxying, and load balancing

(html-webpack-plugin) preload / prefetch / preconnect / dns-prefetch (pre-resolution)

Pre-resolving DNS

DNS

The Domain Name System (DNS) is a service of the Internet. As a distributed database that maps domain names and IP addresses to each other, it makes it easier for people to access the Internet.

dns-prefetch: DNS resolution is the process of turning a domain name into an IP address. It is usually fast, but it can introduce delay. Most browsers cache resolution results appropriately and pre-resolve new domain names that appear on a page, but not all browsers do this. To help the others, you can add dns-prefetch hints to the page's HTML to tell the browser to pre-resolve specific domain names.

A typical DNS resolution takes 20-120 milliseconds, so reducing the time and frequency of DNS resolution is a good optimization

DNS Resolution Mode

When the browser resolves a site's domain name for the first time, the DNS lookup searches in order: browser cache → system cache → router cache → ISP DNS cache → recursive search.

```html
<!-- turn on DNS pre-resolution -->
<meta http-equiv="x-dns-prefetch-control" content="on" />
<!-- pre-resolve a specific domain -->
<link rel="dns-prefetch" href="https://www.baidu.com" />
```

CDN

CDN stands for Content Delivery Network: an intelligent virtual network built on top of the existing network. Relying on edge servers deployed in many locations, with a central platform handling load balancing, content distribution, and scheduling, it lets users fetch content from a nearby node, reducing network congestion and improving response speed and cache hit ratio. The key technologies of CDN are content storage and content distribution.

Cache refresh

After the content of the source site is updated, the CDN tenant can submit a refresh request to force the expiration of the cache content specified on the CDN node. When the user visits again, the CDN node will get the updated content back to the user and cache the latest resources on the node. (In simple terms, it is to delete the CDN cache on each node. When users get files, they can directly go back to the source to get files.)

Cache warming

After a cache pre-warming request for specified resources is submitted, those resources are distributed from the origin to CDN nodes ahead of time. When a user makes a request, the resource is served directly from the CDN node, effectively reducing the back-to-origin rate. (In simple terms: push files from the origin onto each CDN node in advance, so users fetch the latest files directly from the CDN.)

HTTP

keep-alive

In early HTTP/1.0, a new connection was created for every HTTP request, and creating a connection costs resources and time. To reduce resource consumption and shorten response times, connections needed to be reused. Later HTTP/1.0 and HTTP/1.1 introduced a connection-reuse mechanism: adding Connection: keep-alive to the HTTP request header tells the other side not to close the connection after the response completes, so the same connection can be used for subsequent communication. HTTP/1.0 requires Connection: keep-alive in the request header to keep a persistent connection; HTTP/1.1 supports persistent connections by default.

http2.0

HTTP/2 significantly improves web performance: on top of full semantic compatibility with HTTP/1.1, it further reduces network latency, achieving low latency and high throughput. For front-end developers it also reduces optimization work (see the HTTP/2 request demo).

Http2.0 advantage

  1. Binary framing: Add a binary framing layer between the application layer (HTTP/2) and the transport layer (TCP or UDP). HTTP/2 divides all transmitted information into smaller messages and frames and encodes them in binary format.
  2. Header compression: HTTP/2 uses the HPACK algorithm designed specifically for header compression.
  3. Multiplexing: Multiplexing allows multiple request-response messages to be sent simultaneously over a single HTTP/2 connection
  4. Request priority: After splitting the HTTP message into many individual frames, performance can be further optimized by optimizing the interlacing and transmission order of these frames
  5. Server push: Server push is a mechanism for sending data before the client requests it

Nginx

Nginx is a high-performance HTTP and reverse proxy Web server that also provides IMAP/POP3/SMTP services.

The cache

Strong cache

Expires: headers let you exploit the browser cache to improve page performance and avoid many unnecessary HTTP requests on subsequent visits. The web server uses the Expires header to tell the client it may use the current copy of a component until the specified time. Expires has a big flaw: it uses a fixed absolute time, which requires the server and client clocks to be strictly in sync, and when that date arrives the server must set a new one.

Cache-Control: introduced in HTTP/1.1, it uses max-age to specify how long a component may be cached, counted from the time of the request. Within max-age the browser uses the cache; beyond it, it sends a request. This removes the limitations of Expires.

Note: Cache-Control (version 1.1) takes precedence over Expires (version 1.0)
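The precedence rule above can be sketched as a small freshness check (a simplified model; real browsers also honor directives such as no-cache, no-store, and s-maxage):

```javascript
// Strong-cache freshness: Cache-Control max-age (relative, HTTP/1.1)
// takes precedence over Expires (absolute, HTTP/1.0).
function isFreshInCache(entry, nowMs) {
  if (entry.maxAge != null) {
    return nowMs - entry.requestTimeMs < entry.maxAge * 1000;
  }
  if (entry.expiresMs != null) {
    return nowMs < entry.expiresMs;
  }
  return false; // no strong-cache headers: fall through to negotiation
}
```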

Negotiate the cache

ETag: a data signature. A resource's content has a unique signature (e.g. SHA-1); if the resource changes, the signature changes. Used with the If-Match or If-None-Match request headers.

Last-Modified: the last modification time (accurate to the second), used with If-Modified-Since or If-Unmodified-Since; browsers usually send these automatically.

Browser Cache Flowchart

Open the gzip

GZIP stands for GNU zip. It is the more common of the two compression methods defined by the HTTP/1.1 protocol, and most client browsers support this compression format.

```nginx
gzip on;
gzip_static on;    # serve pre-compressed .gz files for static resources
gzip_vary on;      # add "Vary: Accept-Encoding" to the response headers
gzip_comp_level 5; # (recommended) compression level: 1 = fastest, lowest ratio; 9 = slowest, highest ratio (faster transfer but more CPU)
gzip_min_length 0; # default 0 compresses every response; a value above ~1K is advised, since compressing smaller responses can make them larger
gzip_types text/plain application/x-javascript text/css application/xml text/javascript application/x-httpd-php image/jpeg image/gif image/png;
```

Load balancing

```nginx
upstream web_mgrsys {
    server 127.0.0.1:8090 weight=10;
    server 127.0.0.1:3000 weight=3;
}

# inside a server/location block:
proxy_pass http://web_mgrsys;
```

Refer to the happy Scavenger documentation

preload

  1. Prefetch: tells the browser which resources the next page may use, not the current page, so its loading priority is very low; its effect is to speed up loading of the next page

  2. Preload: a declarative hint that makes the browser load a specified resource ahead of time (without executing it after loading) and execute it when needed

    1. Separates loading from execution: it blocks neither rendering nor the document's onload event
    2. Loads specified resources (such as fonts) ahead of time, avoiding the flash of fallback text while a font downloads late

async & defer

The blue line represents network reads, the red line represents execution time, and the green line represents HTML parsing.

  1. Async: The loading and rendering of subsequent document elements will take place in parallel with the loading and execution of script.js (asynchronous)
  2. Defer: the loading of subsequent document elements proceeds in parallel (asynchronously) with the loading of script.js, but script.js executes only after all elements have been parsed and before the DOMContentLoaded event fires; multiple defer scripts execute in document order.

Code optimization

Yahoo's performance rules

Whether at work or in interviews, web front-end performance optimization matters, so where should we start? You can follow Yahoo's 34 rules for front-end optimization; a later addition makes it 35. The rules are grouped into categories, giving optimization a clear direction.

js

Js optimization scheme

  1. Simplify JS code; make good use of data structures and algorithms
  2. Avoid memory leaks: beware global variables, long-lived closures, and uncleared timers
  3. Promptly clean up events bound to detached DOM elements

css

Reflow (rearrangement)

When a DOM node's layout-related property changes, the browser recalculates the layout and re-renders the affected elements; this process is called reflow.

Repaint

When a property that affects a DOM element's appearance but not its layout changes (such as color), the browser repaints the element; this process is called repaint. A reflow always triggers a repaint, but not the other way around.

Rearrange trigger mechanism
  1. Add or remove visible DOM elements
  2. Element position change
  3. The size of the element itself changes
  4. Content change
  5. The page renderer is initialized
  6. The browser window size changes
Redraw trigger mechanism
  1. Background color style modification

Redraw & rearrange optimization

  1. For DOM nodes that need modification, batch the changes and insert them into the page once using a DocumentFragment
  2. Prefer CSS3 animations, enable GPU acceleration for animation, and hand rendering calculations to the GPU
  3. For elements that need to be rearranged several times, set position to absolute or fixed; the element leaves the document flow, so its changes do not affect other elements

CSS Optimization

  1. Reducing the complexity of CSS hierarchies Reduces the nesting of CSS hierarchies
  2. Reduce render blocking
  3. Use CSS3 animation to enable 3D acceleration and reduce animation rearrangement and redrawing of the page
  4. contain: layout
  5. Hide elements with display: none
  6. For elements that need to be rearranged multiple times, set position to absolute or fixed, so the element leaves the document flow and its changes do not affect other elements

html

  1. Iframe lazy loading
  2. HTMLMinifier: The HTMLMinifier is a highly tested, javascript-based HTML minifier.
  3. Reduce node transition nesting
  4. Put CSS and JavaScript in external files where possible

font

Font optimization mode

font-display controls how text renders while a custom font loads. auto is the default: typical browser behavior, where text using the custom font is hidden until the font has loaded. The other values:

  1. Block: hide the text until the font finishes downloading (block period of about 3s), then display it
  2. Swap: show the fallback font immediately and swap in the custom font once it is ready
  3. Fallback: a very short block period; if the font is not ready soon, the fallback is shown and the custom font may not be swapped in at all
  4. Optional: like fallback, but the browser may decide not to use the custom font at all (useful on mobile)

Images

  1. TinyJPG (tinyjpg.com): online image compression

Image imagemin

Progressive images: a baseline JPEG is processed in a single pass, left to right and top to bottom. When a JPEG is under about 10 KB, it is best saved as baseline (estimated 75% chance of being smaller).

A progressive JPEG, when transfer takes a long time, renders in multiple passes, going from blurry to sharp (the effect resembles interlaced GIFs loading over the network).

Image optimization NPM dependency package

  1. progressive-image: a dead simple progressive image module for vanilla JavaScript and Vue.js 1.0+ & 2.0+ (progressive-image demo)

  2. ImageMagick: a powerful, stable, open-source toolset and development kit that can read, write, and process image files in over 89 formats, including the popular TIFF, JPEG, GIF, PNG, PDF, and PhotoCD formats. With ImageMagick you can generate images dynamically for web applications, and also resize, rotate, sharpen, reduce colors, or add special effects to an image (or group of images), saving the result in the same or another format. All of this can also be done from C/C++, Perl, Java, PHP, Python, or Ruby. ImageMagick also provides a high-quality 2D toolkit with partial SVG support; its main focus is performance, reducing bugs, and providing a stable API and ABI.

  3. jpeg-recompress: compress JPEGs by re-encoding to the smallest quality that keeps perceived visual quality the same, while making sure Huffman tables are optimized

  4. imagemin: seamlessly minify images

  5. yall.js: yet another lazy loader

  6. Blazy: a lazy-loading and multi-serving image script (demo)