Why performance tuning? How important is performance optimization? A website's performance has a major impact on user retention and conversion rates. Put bluntly, improving a site's performance can directly increase its revenue.
Classification of performance optimizations
Front-end performance optimization can be divided into two categories:
- Load time optimization;
- Runtime optimization;
For example, compressing files and serving static resources from a CDN are load-time optimizations; unbinding event listeners promptly and reducing DOM operations are runtime optimizations.
Gidlin’s rule: you can only solve a problem well once you understand it. So before doing any performance optimization, it is a good idea to first look at the site's loading and runtime performance.
Manual inspection
Checking loading performance
The loading performance of a website depends mainly on its white-screen time and first-screen time.
- White-screen time: the time from entering the URL to when the page first starts displaying content;
- First-screen time: the time from entering the URL to when the content above the fold is fully rendered;
To get the white-screen time, place the following code before the closing `</head>` tag:

```html
<script>
  // White-screen time: milliseconds elapsed since navigation started
  console.log(new Date().getTime() - performance.timing.navigationStart);
</script>
```
To get the first-screen time, execute `new Date().getTime() - performance.timing.navigationStart` in the `window.onload` event; this gives the time at which the first screen finished loading.
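As a sketch, the arithmetic can be kept in a tiny pure helper, with the browser-only wiring shown in comments (this assumes the legacy `performance.timing` API; newer code uses `PerformanceNavigationTiming` instead):

```javascript
// Sketch: elapsed time since navigation start, kept as a pure helper
// so the arithmetic is explicit and testable.
function elapsedSince(navigationStartMs, nowMs) {
  return nowMs - navigationStartMs;
}

// In the browser (legacy Navigation Timing API):
// window.onload = function () {
//   const t = performance.timing;
//   console.log('first screen:', elapsedSince(t.navigationStart, Date.now()), 'ms');
// };
```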
Checking running performance
Here we use Chrome’s developer tools to see runtime performance.
Open the website, press F12 to open the developer tools, and select the Performance panel. Click the gray dot in the upper-left corner; when it turns red, recording has started. Now use the site the way a user would. When you are done, click Stop to see a report of the site's runtime performance. Red blocks indicate dropped frames; green means the frame rate was high and the page was smooth.
With the Performance panel open, press Esc to bring up a drawer, then click the three dots on its left and enable the Rendering tab. One option highlights areas being repainted, and another displays frame-rendering information. After checking them, interact with the page and you can see the rendering changes in real time.
Check with tools
Chrome's Lighthouse tool
This tool gives performance ratings to websites.
Performance optimization at load time
1. Reduce HTTP requests
A complete HTTP request goes through DNS lookup, the TCP handshake, the browser sending the request, the server receiving and processing it and sending back a response, and the browser receiving that response. Here is a breakdown of an example request:
Noun explanation:
- Queueing: time spent in the request queue;
- Stalled: time between the TCP connection being established and the data actually being transmitted, including proxy negotiation time;
- Proxy negotiation: time spent negotiating with a proxy server;
- DNS Lookup: time spent performing the DNS lookup; every distinct domain on the page requires its own lookup;
- Initial connection: time taken to establish the connection, including TCP handshakes/retries and negotiating SSL;
- SSL: time spent completing the SSL handshake;
- Request sent: time taken to send the network request;
- Waiting (TTFB): time from sending the request to receiving the first byte of the response;
- Content Download: time taken to receive the response data.
You can see that the actual data download took only 13.05 ms of the 204.16 ms total, or 6.39%. To increase this ratio, merge many small files into larger ones, thus reducing the number of HTTP requests.
2. Use HTTP2
Fast parsing speed
HTTP/2 transmits data in binary format rather than HTTP/1's text format, and binary protocols are more efficient to parse. HTTP/1 requests and responses consist of a start line, headers, and an entity body, separated by newlines in plain text. HTTP/2 splits requests and responses into smaller frames, which are binary encoded. In HTTP/2, all communication with a domain is done over a single connection that can carry any number of bidirectional streams.
Multiplexing
In HTTP/1, making multiple requests concurrently requires multiple TCP connections, and to control resources the browser limits a single domain to 6-8 TCP connections. In HTTP/2, parallel streams no longer depend on multiple TCP connections:
- All communication under the domain name is done over a single TCP connection
- A single connection can host any number of two-way data streams
- Streams are sent as messages; a message consists of one or more frames, which can be sent out of order because they are reassembled using the stream identifier in each frame's header
This feature greatly improves performance.
Priority
In HTTP/2, each request can carry a 31-bit priority value, where 0 indicates the highest priority and higher values indicate lower priority. A server receiving such requests can process the higher-priority ones first.
Server push
A powerful new feature of HTTP/2 is the server's ability to send multiple responses to a single client request.
The server can proactively push other resources while sending the page's HTML, rather than waiting for the browser to parse to the appropriate location and issue a request. For example, the server can push JS and CSS files to the client before the client discovers it needs them while parsing the HTML.
The server can push proactively, but the client retains the right to refuse: if the server pushes a resource the browser has already cached, the browser can reject it by sending RST_STREAM. Server push also obeys the same-origin policy, so the server cannot push arbitrary third-party resources to the client.
3. Use server-side rendering
Client-side rendering: get the HTML file, download the JavaScript files as needed, run them, generate the DOM, and render.
Server-side rendering: The server returns the HTML file, and the client only needs to parse the HTML.
- Advantages: Fast first screen rendering, SEO friendly;
- Disadvantages: more complex build setup and deployment, and heavier server load.
4. Use CDN for static resources
A content delivery network (CDN) is a group of web servers distributed across multiple geographic locations. As we all know, the farther a server is from the user, the higher the latency. A CDN solves this by deploying servers in many locations, bringing users closer to a server and shortening request times.
5. Place the CSS at the top and the javascript file at the bottom
CSS and JS files placed in the head tag block rendering: if they take a long time to load and parse, the page stays blank. JS files should therefore go at the bottom, loading after the HTML has been parsed.
CSS, on the other hand, goes in the head because putting it at the bottom would first show the user an unstyled, "ugly" page; placing CSS in the head avoids that flash of unstyled content.
6. Use the font icon iconfont instead of the image icon
An icon font turns icons into a font: you use it like text and can set properties such as font-size and color, which is very convenient. Icon fonts are also vector graphics, so they do not distort when scaled. Another advantage is that the generated files are extremely small.
Compressing font files
Using the fontmin-webpack plugin to compress font files can reduce the font size even further.
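A sketch of wiring the plugin into webpack (the option shown follows the plugin's README and should be treated as an assumption):

```javascript
// webpack.config.js (sketch): assumes the fontmin-webpack plugin is installed
const FontminPlugin = require('fontmin-webpack');

module.exports = {
  plugins: [
    new FontminPlugin({
      autodetect: true // subset the font to the glyphs actually used
    })
  ]
};
```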
7. Make good use of cache and do not load the same resources repeatedly
To keep users from re-requesting files on every visit, we can control caching with the Expires or max-age headers. Expires sets an absolute time; before that time, the browser uses the cache instead of requesting the file. max-age sets a relative time, so it is recommended over Expires.
- max-age: sets how long the cache may be stored; after that period it is considered stale. Before it expires, the browser reads the file from cache directly without making a new request;
- no-cache: the client may cache the resource, but must validate its freshness with the server before each use.
8. Compress files
Compressing files reduces download time and improves the user experience.
In Webpack you can use the following plug-ins for compression:
- JavaScript: UglifyJsPlugin
- CSS: MiniCssExtractPlugin
- HTML: HtmlWebpackPlugin
Gzip compression can also be used; the browser signals support by including gzip in the Accept-Encoding request header. Of course, the server must also support this feature.
Webpack configuration:
Download the plugin
```shell
npm install compression-webpack-plugin --save-dev
```
Configuration:
```javascript
const CompressionPlugin = require('compression-webpack-plugin');

module.exports = {
  plugins: [new CompressionPlugin()]
};
```
9. Image optimization
- Image lazy loading
With lazy loading, images on the page are given no real path initially; the real image is loaded only when it scrolls into the visible area. For a site with many images, loading them all at once greatly hurts the user experience. See the linked article for a full lazy-loading implementation of web front-end images.
- Reduce image quality
Take JPG images, for example: the difference between 100% quality and 90% quality is usually imperceptible, especially when used as background images.
There are two ways to compress: through an online compression site, or through the webpack loader image-webpack-loader:
```shell
npm i -D image-webpack-loader
```
Webpack configuration
```javascript
{
  test: /\.(png|jpe?g|gif|svg)(\?.*)?$/,
  use: [
    {
      loader: 'url-loader',
      options: {
        limit: 10000,
        name: utils.assetsPath('img/[name].[hash:7].[ext]')
      }
    },
    {
      loader: 'image-webpack-loader',
      options: {
        bypassOnDebug: true
      }
    }
  ]
}
```
- Use CSS3 effects instead of images whenever possible
Use CSS effects such as gradients and shadows instead, as their code size is usually much smaller than an image's.
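The lazy-loading idea above can be sketched with `IntersectionObserver` (the `data-src` attribute convention for placeholder images is an assumption):

```javascript
// Sketch: load the real image only when it scrolls into the viewport.
// Assumes placeholders like <img data-src="real.jpg"> in the page.
function lazyLoadImages(root = document) {
  const observer = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (entry.isIntersecting) {
        const img = entry.target;
        img.src = img.dataset.src; // swap in the real image
        observer.unobserve(img);   // each image only needs loading once
      }
    }
  });
  root.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));
  return observer;
}
```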
10. Load code on demand, extract third-party library code, reduce redundant code
- Generate file names according to file contents, and import components dynamically to achieve on-demand loading
Webpack's output.filename option supports a [contenthash] placeholder, which creates a hash from the file's contents. When the file changes, so does its contenthash, and with it the file name, so only changed files are re-downloaded.
```javascript
output: {
  filename: '[name].[contenthash].js',
  chunkFilename: '[name].[contenthash].js',
  path: path.resolve(__dirname, '../dist'),
},
```
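The on-demand loading itself is usually done with dynamic `import()`, which webpack splits into a separate chunk. A minimal sketch (the importer is passed in as a function so the logic is independent of the bundler; the `./chart` module name is hypothetical):

```javascript
// Sketch: load a module only when it is actually needed.
function loadOnDemand(importer) {
  return importer().then((mod) => mod.default || mod);
}

// In a real app, e.g. on a button click:
// button.addEventListener('click', () => {
//   loadOnDemand(() => import('./chart')).then((chart) => chart.render());
// });
```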
- Extracting third-party libraries
Since third-party libraries are generally stable and rarely change, it is a good choice to split them into their own bundle for long-term caching, using the cacheGroups option of webpack 4's SplitChunksPlugin.
```javascript
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'async',
      minSize: 30000,
      cacheGroups: {
        vendors: {
          test: /[\\/]node_modules[\\/]/,
          priority: -10
        },
        default: {
          minChunks: 2,
          priority: -20,
          reuseExistingChunk: true
        }
      }
    }
  }
}
```
Cache groups are probably the most interesting feature of SplitChunksPlugin. With the default settings, modules from the node_modules folder are bundled into a chunk called vendors, and all modules referenced at least twice are assigned to the default chunk. Priorities can be adjusted with the priority option.
reuseExistingChunk: whether to reuse an existing chunk. When true, if the current chunk contains modules that have already been extracted, no new chunk is created.
See the referenced article for code-splitting examples with SplitChunksPlugin, and the official SplitChunksPlugin documentation for details.