Sites under development often run into performance bottlenecks and slow page loads. How do we troubleshoot and solve these problems step by step in day-to-day development?
In a fast-paced world, slowness is intolerable.
1. Why is it "slow"?
"Slow" shows up in many forms, for example:
- Poor user experience: the page just feels slow
- JavaScript executes slowly
- API responses are slow
- Resources load slowly
- The browser renders slowly
- ...
We can only look for causes on our own side; after all, the user's phone network speed is out of our control~
2. Troubleshooting methods
If we dive straight in without a plan, we'll be miserable for days.
Let's walk through troubleshooting from several angles.
2.1 Technology selection
Technology selection matters a great deal in day-to-day front-end development. Why bring it up here? Because mis-selection happens frequently.
In today's world of front-end engineering, lightweight frameworks are slowly being forgotten. Not every business scenario calls for an engineered framework, and React/Vue are not lightweight.
Complex frameworks are designed to solve complex business problems.
For simple scenarios such as H5 promotional pages or PC display pages, native JavaScript plus a few lightweight plugins is often the better fit.
Multi-page apps are not all bad. Choosing the right technology for the business at hand is important, and something every front-end developer should reflect on.
Getting this wrong is often a key cause of a sluggish page.
2.2 Network
Our old friend the Network panel should be familiar to every front-end developer, so let's start there.
From the panel we can see some information:
- Size of each requested resource
- Duration of each resource request
- Number of resource requests
- API response time
- Number of API calls
- API payload size
- API response status
- The waterfall chart
What is a waterfall chart?
The waterfall chart is the column of cascading bars in the Network panel screenshot above.
It shows how the browser loads resources and renders them into the page. Each row in the chart is a separate browser request; the taller the chart, the more requests the page makes while loading. The width of each row represents how long the browser took to request and download that resource. The waterfall is mainly used to analyze the network link.
What the waterfall colors mean:
- DNS Lookup [dark green] – Before the browser can talk to a server, it must perform a DNS lookup to turn the domain name into an IP address. There is very little you can do at this stage, and fortunately not every request has to go through it.
- Initial Connection [orange] – A TCP connection must be established before the browser can send the request. This should only happen in the first few rows of the waterfall; otherwise it is a performance problem (more on that later).
- SSL/TLS Negotiation [purple] – If the page loads resources over a secure protocol such as SSL/TLS, this is the time the browser spends establishing the secure connection. SSL/TLS negotiation has become more common now that Google uses HTTPS as a search ranking factor.
- Time To First Byte (TTFB) [green] – TTFB is the time for the browser's request to reach the server, plus the server's processing time, plus the time for the first byte of the response to travel back to the browser. We use this metric to judge whether the web server is underpowered or whether a CDN is needed.
- Downloading [blue] – The time the browser spends downloading the resource. The longer this segment, the larger the resource. Ideally you control it by controlling the size of the resource.
So besides the length of the waterfall, how can we tell if a waterfall is healthy?
- First, reduce the load time of each resource, i.e. reduce the width of the waterfall. The narrower the waterfall, the faster the site.
- Second, reduce the number of requests, i.e. reduce the height of the waterfall. The shorter the waterfall, the better.
- Finally, speed up rendering by optimizing the order of resource requests. This moves the green "Start Render" line to the left; the further left it goes, the better.
In this way, we can investigate the “slow” problem from the perspective of network.
2.3 webpack-bundle-analyzer
The bundles generated by a project build are compressed, and webpack-bundle-analyzer is a tool for analyzing what ends up inside them.
Let's take a look at what it can do, as shown in the diagram below:
As you can see from the image, our bundle has been split reasonably well. The larger a module's area, the larger its share of the bundle; those modules are the ones worth paying attention to and optimizing first.
The information it surfaces includes:
- All the modules bundled into each package
- Each module's size, and its size after gzip
It is worth checking which modules end up in your bundles. Use webpack-bundle-analyzer to spot modules that are unused or unreasonably large, then optimize them to shrink the bundle and cut the load time.
Installation:
```bash
# npm
npm install --save-dev webpack-bundle-analyzer
# yarn
yarn add -D webpack-bundle-analyzer
```
Usage (as a webpack plugin):
```js
const BundleAnalyzerPlugin = require('webpack-bundle-analyzer').BundleAnalyzerPlugin;

module.exports = {
  plugins: [
    new BundleAnalyzerPlugin()
  ]
};
```
After the build finishes, a browser window pops up showing the report described above.
2.4 Performance
Chrome's Performance panel. First, a link to the official documentation: Performance.
It can capture data on many aspects of a page and is the tool used most often in performance investigations. If you want to dig deeper, the official documentation is worth reading.
Let's look at how "slow" shows up in the Performance panel and what information it gives us. First, a screenshot of the Performance panel.
Several indicators can be read from the figure above:
- Is the FCP/LCP time too long?
- Are there too many concurrent requests?
- Are requests being sent in the wrong order?
- Is JavaScript execution too slow?
These are the metrics we need to focus on, and of course Performance can do much more than that.
Remember how to read these metrics; we'll walk through how to analyze and optimize them next.
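Beyond reading the panel by hand, some of these metrics can also be collected in code. Here is a minimal sketch of my own (not from the panel walkthrough above) that uses the standard PerformanceObserver API to log FCP and LCP:
```js
// Log First Contentful Paint (FCP) once it is reported.
const paintObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.name === 'first-contentful-paint') {
      console.log('FCP:', entry.startTime, 'ms');
    }
  }
});
paintObserver.observe({ type: 'paint', buffered: true });

// Log Largest Contentful Paint (LCP) candidates; the last one reported is the final LCP.
const lcpObserver = new PerformanceObserver((list) => {
  const entries = list.getEntries();
  console.log('LCP candidate:', entries[entries.length - 1].startTime, 'ms');
});
lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });
```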
2.5 Packet capture
What if, for business reasons, you don't have access to the debugging tools above? We can use a packet capture tool to inspect the page's traffic; the indicators we examined with the Chrome tools can also be obtained this way.
Here I recommend the packet capture tool Charles. There are plenty of tutorials online; you can search for them yourself.
3. Optimization methods
Here we'll talk about how to optimize the indicators above and address the various "slow" situations.
3.1 Tree shaking
Tree shaking is an important part of webpack build optimization. It removes unused code from the project and relies on ES module syntax.
Take the everyday use of lodash as an example:
```js
import _ from 'lodash'
```
With the import above, the entire lodash package ends up in our bundle at build time.
```js
import _isEmpty from 'lodash/isEmpty';
```
With the import above, only isEmpty is pulled into the bundle at build time.
This alone can greatly reduce the size of the package, so pay attention to how you import third-party libraries in daily development.
How do I enable tree shaking?
Webpack 4.x supports tree shaking by default. For how to use tree shaking in Webpack 2.x: portal
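As a rough sketch of my own (not configuration from this article): in webpack 4.x, tree shaking mostly comes down to building in production mode with ES module imports, plus telling webpack which exports are actually used:
```js
// webpack.config.js – production mode enables usedExports analysis and dead-code elimination
module.exports = {
  mode: 'production',
  optimization: {
    usedExports: true, // mark exports that are never imported so the minifier can remove them
  },
};
```
In package.json you can additionally set "sideEffects": false (or list files with real side effects, such as CSS imports) so webpack can safely drop entire unused modules.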
3.2 Split chunks
Split chunks, i.e. splitting the build output into multiple bundles.
Webpack 4 does smart chunk splitting for you without any configuration. Entry files are packaged into main.js, and third-party packages larger than 30KB, such as echarts, xlsx, and dropzone, are packaged into separate bundles.
Pages or components that we configure to load asynchronously become separate chunks and are also packaged into their own bundles.
Its built-in code-splitting strategy is roughly:
- The new chunk is shared between modules, or comes from node_modules
- The new chunk is larger than 30KB before compression
- The number of concurrent requests when loading chunks on demand is no more than 5
- The number of concurrent requests during initial page load is no more than 3
You can change the configuration based on your project environment. The configuration code is as follows:
```js
splitChunks({
  cacheGroups: {
    vendors: {
      name: `chunk-vendors`,
      test: /[\\/]node_modules[\\/]/,
      priority: -10,
      chunks: 'initial',
    },
    dll: {
      name: `chunk-dll`,
      test: /[\\/]bizcharts|[\\/]\@antv[\\/]data-set/,
      priority: 15,
      chunks: 'all',
      reuseExistingChunk: true,
    },
    common: {
      name: `chunk-common`,
      minChunks: 2,
      priority: -20,
      chunks: 'all',
      reuseExistingChunk: true,
    },
  },
})
```
Projects that are not on Webpack 4.x can still split chunks through on-demand (lazy) loading, which keeps our bundles small and improves loading performance.
On-demand loading has long been one of the important ways to split chunks; a minimal sketch follows the link below.
Here is a very good article on the topic: How does webpack implement on-demand loading
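As mentioned, here is a minimal sketch of my own (module and element names are illustrative): a dynamic import that webpack turns into a separate chunk, downloaded only when it is needed:
```js
// './chart' is a hypothetical local module; webpack splits it into its own chunk.
async function showChart() {
  const { renderChart } = await import(/* webpackChunkName: "chart" */ './chart');
  renderChart(document.getElementById('chart'));
}

// The chunk is only fetched the first time the user asks for the chart.
document.getElementById('show-chart-btn').addEventListener('click', showChart);
```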
3.3 Splitting libraries out of the bundle
This is different from the chunk splitting in 3.2. Take React as an example: the bundle no longer contains react, react-dom, react-router, and so on.
That is because we split these libraries out; instead of being built into the bundle, they are loaded from a CDN. Let me give an example.
Assume the original bundle is 2MB and is pulled in a single request. We split it into the business bundle (1MB) plus the React family loaded from a CDN (1MB).
From this perspective alone, the 1+1 pattern already pulls resources faster.
There is another angle: with full redeployments, the whole bundle is re-downloaded on every release, which wastes resources. The React family served from the CDN can sit behind a strong cache, so after a full deployment users only need to re-download the 1MB business bundle. That saves server resources and speeds up loading.
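A minimal sketch of how this is typically wired up (my assumption of a common setup, not this project's exact config): mark the libraries as webpack externals and load them from the CDN via script tags, so the bundle only references the globals they expose:
```js
// webpack.config.js – the import 'react' resolves to the global React provided by the CDN script
module.exports = {
  externals: {
    react: 'React',
    'react-dom': 'ReactDOM',
  },
};
```
The matching CDN script tags then go into the HTML template, before the bundle's own script tag.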
Note: during local development it is best not to load resources such as React from the CDN; frequent refreshes while developing put unnecessary pressure on the CDN service.
3.4 gzip
Configuring gzip compression on the server can greatly reduce the size of transferred resources.
Nginx configuration:
```nginx
http {
  gzip on;
  gzip_buffers 32 4K;
  gzip_comp_level 6;
  gzip_min_length 100;
  gzip_types application/javascript text/css text/xml;
  gzip_disable "MSIE [1-6]\.";
  gzip_vary on;
}
```
Once configured, you can confirm it by checking for Content-Encoding: gzip in the response headers.
3.5 Image compression
Image compression is an important part of development. Our company's image-hosting tool has a built-in compression feature, so images can be compressed and then uploaded to the CDN directly.
If your company doesn't have such a tool, how do we compress images? Here are a few I use frequently:
- Zhitu (智图) – the official site is hard to find via search, but it's free, supports batch compression, and is easy to use
- TinyPNG – free, batch, fast
- Fireworks – compress pixels and dimensions yourself, so you control the trade-off
- Ask the UI designer to compress the images and send them to you
Image compression is a routine technique. Because of device pixel ratios, the images delivered by UI are usually @2x or @4x, so compression is very necessary.
3.6 Image slicing
If the page has a large rendering, say a high-fidelity effect image, and the UI designer is standing over you with a knife refusing to let you compress it, consider slicing the image instead.
It is suggested that a single slice not exceed 100KB; after slicing, the pieces are stitched back together with layout, so the image can load efficiently.
Note that each slice must have an explicit height set, otherwise the layout will collapse on slow networks.
3.7 Sprite
In southern China these are called 雪碧图 ("Sprite", after the soft drink) and in the north 精灵图 ("sprite image"); an interesting naming quirk.
When a site has many small images, it is worth combining them into one large image and then showing the needed piece via the CSS background position.
What are the benefits of this? Let's start with a general rule.
For example, if a page has 10 small images under the same CDN domain, the browser needs 10 requests to pull them, split into two rounds of concurrency; the second round only starts after the first returns.
If you combine the 10 small images into one large image, a single request pulls down what used to take 10. That reduces server pressure, concurrency, and the number of requests.
Here is an example of Sprite.
3.8 CDN
CDN stands for content delivery network. Where an origin server is centralized, a CDN is "decentralized".
Many assets in a project are placed on the CDN: static files, audio, video, JS resources, images. So why does a CDN make resources load faster?
Here's a simple analogy:
We used to have to go to the railway station to buy train tickets; now we can buy them at the ticket agent downstairs.
Think about it for a moment; it's the same idea.
So it is recommended to put static resources on a CDN as much as possible to speed up loading.
3.9 Lazy loading
Lazy loading (also called on-demand loading) of images on long pages is a great way to optimize page performance.
Resources outside the viewport are not loaded until the page scrolls to where they are needed.
It reduces the load on the server and is typically suited to image-heavy, long-page scenarios.
How do we use lazy loading? Two common options are below, followed by a minimal sketch.
- Lazy loading of images
- layzr.js
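As a rough illustration of my own (not taken from the plugins above), native lazy loading can be built with IntersectionObserver: keep the real URL in a data-src attribute and only assign it to src when the image approaches the viewport:
```js
// Lazily load every <img data-src="..."> as it nears the viewport.
const images = document.querySelectorAll('img[data-src]');

const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target;
    img.src = img.dataset.src; // start the real download
    obs.unobserve(img);        // each image only needs to be loaded once
  }
}, { rootMargin: '200px' });   // begin loading a little before the image is visible

images.forEach((img) => observer.observe(img));
```
Modern browsers also support the native loading="lazy" attribute on img tags, which covers many of the same cases without any script.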
3.10 iconfont
Icon fonts are a popular choice nowadays. Using them has several benefits:
- Vector
- Lightweight
- Easy to modify
- No extra image requests
As mentioned above, if you use icon fonts instead of sprite images, you don't need extra requests at all; the icons can be built straight into the bundle.
The precondition is that the UI team cooperates: lean toward icon fonts in the designs, hand over the assets in advance, and maintain a proper icon-font library.
3.11 Moving logic later
Deferring non-critical logic is a common optimization. Take opening an article site as an example.
Without deferring anything, the request order looks like this:
The main content of the page is the article itself. If the request for the article is sent late, the article will inevitably render late; request blocking and other conditions can delay the response further, and with heavy concurrency it only gets slower. The situation shown in the figure happened in our own project.
Obviously we should move the main "fetch article" request forward and push the non-essential request logic back. That way the main content renders as early as possible, which feels much faster.
The optimized order looks like this:
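In code, a minimal sketch of my own of the idea (the endpoints and render/report helpers are hypothetical, not this project's API):
```js
// Hypothetical helpers standing in for the real rendering / logging code.
function renderArticle(article) { document.querySelector('#article').innerHTML = article.html; }
function renderRecommendations(list) { /* render the sidebar recommendations */ }
function reportAnalytics(event, data) { navigator.sendBeacon('/log', JSON.stringify({ event, ...data })); }

async function openArticlePage(articleId) {
  // Critical request first: get the article on screen as early as possible.
  const article = await fetch(`/api/articles/${articleId}`).then((res) => res.json());
  renderArticle(article);

  // Non-critical logic is moved later, after the main content has rendered.
  fetch(`/api/articles/${articleId}/recommendations`)
    .then((res) => res.json())
    .then(renderRecommendations);
  reportAnalytics('article_view', { articleId });
}
```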
In day-to-day development, keep an eye on which logic can be deferred and prioritize the main flow; this can greatly improve the user experience.
3.12 Algorithm complexity
In scenarios with a large amount of data, pay attention to algorithm complexity.
See the article "Complexity analysis of JavaScript algorithms" on this topic.
If your code takes too long to execute, consider reducing its complexity.
Whether to trade time for space or space for time should be decided by the business scenario; a small example follows.
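For instance, a classic space-for-time trade (an illustrative example of my own): replacing a nested lookup that is O(n·m) with a Map built in one pass, which is O(n + m):
```js
// O(n·m): for every order we scan the whole user list.
function attachUsersSlow(orders, users) {
  return orders.map((order) => ({
    ...order,
    user: users.find((u) => u.id === order.userId),
  }));
}

// O(n + m): build a Map once, then each lookup is O(1) – a little memory for a lot of time.
function attachUsersFast(orders, users) {
  const userById = new Map(users.map((u) => [u.id, u]));
  return orders.map((order) => ({ ...order, user: userById.get(order.userId) }));
}
```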
3.13 Component rendering
Take React as an example: don't over-split components, and do control when components re-render, especially the render of deeply nested components.
As always, there are several ways to optimize component rendering:
- Lifecycle control – e.g. React's shouldComponentUpdate to decide whether a component re-renders
- The officially provided PureComponent (shallow comparison of props and state)
- Control the props injected into a component
- Give list components a stable, unique key
Unnecessary rendering is a huge waste of performance.
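A minimal sketch of my own of the first two approaches (shouldComponentUpdate, PureComponent, and the equivalent React.memo for function components):
```jsx
import React from 'react';

// Class component: only re-render when the prop we care about actually changes.
class Row extends React.Component {
  shouldComponentUpdate(nextProps) {
    return nextProps.item !== this.props.item;
  }
  render() {
    return <li>{this.props.item.title}</li>;
  }
}

// PureComponent does the same via a shallow comparison of props and state.
class PureRow extends React.PureComponent {
  render() {
    return <li>{this.props.item.title}</li>;
  }
}

// For function components, React.memo provides the shallow comparison.
const MemoRow = React.memo(function MemoRow({ item }) {
  return <li>{item.title}</li>;
});
```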
3.14 Node middleware
Middleware here mainly means functions that encapsulate the details of handling an HTTP request. An HTTP request usually involves a lot of work: logging, IP filtering, query-string and request-body parsing, cookie handling, permission validation, parameter validation, exception handling, and so on. For a web application you don't want to deal with all those details yourself, so middleware is introduced to simplify them and isolate this infrastructure from the business logic, letting us focus on business development and improving efficiency.
Node middleware can also be used to aggregate requests and reduce their number on the client side; this approach is very practical.
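As a minimal sketch of my own (Express, Node 18+ for the built-in fetch, hypothetical upstream URLs): one aggregation endpoint fans out to several services on the server, so the browser pays for a single round trip:
```js
const express = require('express');
const app = express();

app.get('/api/home', async (req, res) => {
  try {
    // Fan out to the upstream services in parallel on the server side.
    const [user, feed, notices] = await Promise.all([
      fetch('http://user-service/profile').then((r) => r.json()),
      fetch('http://feed-service/list').then((r) => r.json()),
      fetch('http://notice-service/unread').then((r) => r.json()),
    ]);
    // The client receives everything in one response.
    res.json({ user, feed, notices });
  } catch (err) {
    res.status(502).json({ message: 'upstream error' });
  }
});

app.listen(3000);
```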
3.15 Web Worker
Web Workers give JavaScript a multi-threaded environment: the main thread can create Worker threads and hand some tasks to them. While the main thread runs, Worker threads run in the background without interfering with it; when a Worker finishes its computation, it sends the result back to the main thread. The benefit is that when computation-heavy or high-latency tasks are taken on by Worker threads, the main thread (usually responsible for UI interaction) stays smooth and is not blocked or slowed down.
Using Web Workers appropriately can optimize heavy computation. Here is Ruan Yifeng's introductory article: portal
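A minimal sketch of my own (file names are illustrative): the main thread hands a heavy computation to a worker and receives the result via messages:
```js
// main.js – keep the UI thread free while the worker crunches numbers
const worker = new Worker('worker.js');
worker.postMessage({ numbers: Array.from({ length: 1e7 }, (_, i) => i) });
worker.onmessage = (event) => {
  console.log('sum from worker:', event.data);
  worker.terminate();
};

// worker.js – runs on a separate thread
self.onmessage = (event) => {
  const sum = event.data.numbers.reduce((acc, n) => acc + n, 0);
  self.postMessage(sum);
};
```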
3.16 Caching
There is a lot to say about caching; I'll cover it in detail in a separate article, so feel free to follow along – the more attention it gets, the faster I'll write it.
Making good use of strong caching, negotiated caching, dynamic caching and other mechanisms can greatly improve the user experience and page load speed. Caching has become an indispensable part of development.
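As a small sketch of my own of a typical setup (Express; paths and times are illustrative): fingerprinted static assets get a long strong cache, while HTML is always revalidated so new releases are picked up:
```js
const express = require('express');
const path = require('path');
const app = express();

// Hashed assets (e.g. main.8f3a2c.js) are safe to cache for a year – strong cache.
app.use('/static', express.static(path.join(__dirname, 'dist/static'), {
  maxAge: '365d',
  immutable: true,
}));

// HTML uses negotiated caching: the browser revalidates on every visit (ETag / 304).
app.get('/', (req, res) => {
  res.set('Cache-Control', 'no-cache');
  res.sendFile(path.join(__dirname, 'dist/index.html'));
});

app.listen(3000);
```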
4. The end
Above is a collection of troubleshooting approaches and solutions for slow page loads that I've encountered in real business development. It will keep growing over time; if your project is likely to run into slow-loading problems, feel free to like and bookmark this~
Recommended previous articles:
- How to Build an Exception Capture Platform | Scene Reenactment
- How to improve team Productivity through UI Intelligent Code Generation
- Front-end Code Quality Optimization Communication
- Talk to you about how to optimize the front-end performance