1. The importance of front-end performance

In interviews, front-end performance optimization is a required knowledge point, yet in real projects we rarely focus on it. Is it really not important?

If we could halve the back-end response time, the overall response time would only drop by 5-10%. But if we focus on front-end performance and cut its time in half, the overall response time drops by 40-45%.

Improving the front end usually takes less time and fewer resources than reducing back-end latency, yet it makes a bigger difference.

Only 10% to 20% of end-user response time is spent downloading the HTML document; the remaining 80% to 90% is spent downloading all the other components in the page.

2. Locating the problem

2.1 Technical choices

In day-to-day front-end development, technical choices matter a great deal. Why bring this up? Because poor choices happen so often.

In the current era of heavy front-end engineering, lightweight approaches are slowly being forgotten. Not every business scenario suits a heavily engineered framework, and React/Vue are not lightweight.

Complex frameworks are designed to solve complex business problems.

For simple business scenarios such as H5 pages or PC display pages, native JavaScript plus a few lightweight plugins is often the better fit.

Multi-page apps are not all downsides. Choosing the right technology for each business is very important and something every front-end developer should reflect on.

Getting this wrong is a key cause of lag and jank.

2.2 The Network panel

The Network panel is an old friend that every front-end developer knows well. Let’s look at it first.

From the panel we can see some information:

  • Size of each requested resource
  • Duration of each requested resource
  • Number of requested resources
  • Interface (API) response duration
  • Number of interface calls
  • Interface payload size
  • Interface response status
  • The waterfall chart

What is a waterfall diagram?

The waterfall chart is the Waterfall column on the right of the panel shown above.

A waterfall chart is a cascading diagram that shows how the browser loads resources and renders them into a web page. Each line in the chart is a separate browser request; the taller the chart, the more requests were made while loading the page. The width of each line represents the time the browser took to make the request and download the resource. It focuses on analysing the network link.

Waterfall diagram color description:

  • DNS Lookup [dark green] – Before the browser can communicate with the server, a DNS lookup must convert the domain name into an IP address. There is very little you can do at this stage, and fortunately not all requests need to go through it.

  • Initial Connection [orange] – A TCP connection must be established before the browser can send a request. This should only occur in the first few lines of the waterfall; otherwise it signals a performance issue (more on this later).

  • SSL/TLS Negotiation [purple] – If your page loads resources over a secure protocol such as SSL/TLS, this is the time the browser spends establishing the secure connection. Now that Google uses HTTPS as one of its search ranking factors, SSL/TLS negotiation is becoming more common.

  • Time To First Byte (TTFB) [green] – TTFB is the time for the browser's request to reach the server, plus the server's processing time, plus the time for the first byte of the response to reach the browser. We use this metric to determine whether your web server is underperforming or whether you need a CDN.

  • Downloading [blue] – This is the time the browser spends downloading the resource. The longer this phase, the larger the resource. Ideally, you control this duration by controlling the size of the resource.

So, apart from its overall length, how can we tell whether a waterfall is healthy?

  • First, reduce load times for all resources. That is to reduce the width of the waterfall diagram. The narrower the waterfall, the faster the site.

  • Second, reducing the number of requests means reducing the height of the waterfall diagram. The lower the waterfall, the better.

  • Finally, optimize the resource request order to speed up rendering time. From the image, this is moving the green “Start rendering” line to the left. The farther this line moves to the left, the better.

In this way, we can investigate the “slow” problem from the network perspective.

2.3 webpack-bundle-analyzer

The bundles generated by a project build are minified and hard to read. webpack-bundle-analyzer is a bundle analysis tool.

Let’s take a look at what it does, shown in the diagram below:

From the figure above you can see how our bundle breaks down: the larger a module's area, the larger its size, and the more it is worth focusing on for optimization.

The information it surfaces includes:

  • Displays all modules bundled into the package
  • Displays each module's size, raw and gzipped

Checking what actually ends up in the bundle is essential. Use webpack-bundle-analyzer to spot useless or oversized modules, then optimize them to reduce the bundle size and loading time.

Installation

# npm
npm install --save-dev webpack-bundle-analyzer

# yarn
yarn add -D webpack-bundle-analyzer

Usage (as a webpack plugin)

const BundleAnalyzerPlugin = require('webpack-bundle-analyzer').BundleAnalyzerPlugin;

module.exports = {
  plugins: [new BundleAnalyzerPlugin()]
};

When the build finishes, a window automatically opens showing the information above.

2.4 Performance

Chrome has a built-in Performance panel. Here is the official documentation portal: Performance

It can capture data on many aspects of a page and is used in most performance investigations. If you want to know more about it, I suggest reading the official documentation.

Next, let’s talk about how to investigate “slowness” in the Performance panel and what information it gives us. First, here is a screenshot of the Performance panel.

Several indicators can be analyzed from the figure above:

  • Is the FCP/LCP time too long?
  • Are too many requests fired concurrently?
  • Request order: are requests initiated in the wrong order?
  • JavaScript execution: is JavaScript executing too slowly?

These metrics are what we need to focus on, but performance doesn’t stop there.

Remember how to capture these metrics, and then learn to interpret them, for example as sketched below.
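As a minimal sketch of how the same indicators can be collected in code (standard PerformanceObserver API; the logging is only illustrative), FCP and LCP can be read like this:

// Read FCP once it is reported.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.name === 'first-contentful-paint') {
      console.log('FCP:', entry.startTime, 'ms');
    }
  }
}).observe({ type: 'paint', buffered: true });

// LCP can change as larger elements render; the last entry is the current candidate.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  console.log('LCP candidate:', entries[entries.length - 1].startTime, 'ms');
}).observe({ type: 'largest-contentful-paint', buffered: true });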

2.5 PerformanceNavigationTiming

To obtain the response time of each navigation phase, the interface we use is PerformanceNavigationTiming.

PerformanceNavigationTiming provides methods and properties for storing and retrieving timing metrics about the browser's document navigation events. For example, this interface can be used to determine how long it takes to load or unload a document.

function showNavigationDetails() {
  const [entry] = performance.getEntriesByType("navigation");
  console.table(entry.toJSON());
}

Using this function, we can get the response time of each phase, as shown in the figure below:

Parameter descriptions

  • navigationStart – time the navigation (load) started
  • redirectStart – time the redirect started (if an HTTP redirect occurred and every redirect is same-origin with the current document, returns the fetchStart of the redirect; otherwise 0)
  • redirectEnd – time the redirect ended (if an HTTP redirect occurred and every redirect is same-origin with the current document, returns the time the last redirect finished receiving data; otherwise 0)
  • fetchStart – time the browser starts fetching the resource; if there is a cache, the time it starts reading the cache
  • domainLookupStart – time the DNS lookup starts; if no DNS request is made (keep-alive, cache, etc.), returns fetchStart
  • domainLookupEnd – time the DNS lookup ends; if no DNS request is made, same as domainLookupStart
  • connectStart – time the TCP connection starts being established; if keep-alive, cached, etc., returns domainLookupEnd
  • secureConnectionStart – if TLS or SSL is used, the time the handshake starts
  • connectEnd – time the TCP connection is completed; if keep-alive, cached, etc., same as connectStart
  • requestStart – time the request is sent
  • responseStart – time the server starts responding
  • domLoading – time DOM parsing/rendering starts
  • domInteractive – time the document becomes interactive (DOM parsing finished)
  • domContentLoadedEventStart – time the DOMContentLoaded event starts firing
  • domContentLoadedEventEnd – time the DOMContentLoaded event finishes
  • domComplete – time DOM rendering is complete
  • loadEventStart – time the load event is fired; returns 0 if it has not fired
  • loadEventEnd – time the load event finishes; returns 0 if it has not finished
  • unloadEventStart – time the unload event is fired
  • unloadEventEnd – time the unload event finishes

For web performance, the time metrics we typically derive from these are:

  • DNS resolution time: domainLookupEnd - domainLookupStart
  • TCP connection time: connectEnd - connectStart
  • White-screen time: domContentLoadedEventEnd - navigationStart
  • Full page load time: loadEventEnd - navigationStart

Based on these time parameters, we can determine which phases have an impact on performance.
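As a minimal sketch building on the earlier snippet (note that PerformanceNavigationTiming timestamps are relative to startTime, which is 0 and plays the role navigationStart did in the older performance.timing API):

function reportNavigationTimings() {
  const [entry] = performance.getEntriesByType("navigation");
  if (!entry) return; // API not supported or entry not available yet

  console.table({
    dns: entry.domainLookupEnd - entry.domainLookupStart,          // DNS resolution time
    tcp: entry.connectEnd - entry.connectStart,                    // TCP connection time
    ttfb: entry.responseStart - entry.requestStart,                // time to first byte
    whiteScreen: entry.domContentLoadedEventEnd - entry.startTime,
    fullLoad: entry.loadEventEnd - entry.startTime,
  });
}

// Measure after the load event so loadEventEnd is populated.
window.addEventListener("load", () => setTimeout(reportNavigationTimings, 0));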

2.6 Packet capture

What about business scenarios where the debugging tools above are not available? We can use a packet capture tool to inspect the page's traffic; the indicators we check with Chrome's tools can also be obtained this way.

Here I recommend the packet capture tool Charles.

2.7 Performance testing tools

2.7.1 Pingdom

2.7.2 Load Impact

2.7.3 WebPageTest

2.7.4 Octa Gate Site Timer

2.7.5 Free the Speed

3. Optimization

There are many kinds of front-end optimization, mainly covering three areas: network optimization (reducing the cost of fetching resources over the network), code optimization (the speed at which scripts are parsed and executed after loading), and framework optimization (choosing a better-performing framework, e.g. by comparing benchmarks).

3.1 Tree shaking

Tree shaking is an important part of webpack build optimization. It is used to remove unused code from our project and relies on ES module syntax.

Take everyday use of Lodash as an example.

import _ from 'lodash'

If we import the Lodash library as above, the entire Lodash package is inserted into our bundle when we build.

import _isEmpty from 'lodash/isEmpty';

If we import the Lodash library as above, only the isEmpty method is pulled in and inserted into our bundle when we build.

This greatly reduces the size of our package. So when referencing third-party libraries, pay attention to how you import them.

How to turn on tree shaking

Tree shaking is supported by default in webpack 4.x. For using tree shaking in webpack 2.x, see this portal.
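A minimal configuration sketch, assuming webpack 4+ and ES module source code (production mode enables the relevant optimizations; the sideEffects hint lives in package.json):

// webpack.config.js
module.exports = {
  mode: 'production',      // enables usedExports and minification, which drop dead code
  optimization: {
    usedExports: true,     // mark exports that are never imported
  },
};

// package.json: "sideEffects": false (or a whitelist of files with side effects)
// tells webpack that unused modules can be removed safely.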

3.2 splitChunks

That is, splitting the code into separate chunks (code splitting).

Without any configuration, webpack 4 is smart enough to split code for you. Files the entry depends on are packaged into main.js, and third-party packages larger than 30 KB, such as echarts, xlsx and dropzone, are packaged into separate bundles.

Other pages or components that we set up to load asynchronously become chunks, which are packaged into individual bundles.

Its built-in code splitting strategy looks like this:

  • Whether the new chunk is shared between modules or comes from node_modules
  • Whether the new chunk is larger than 30 KB before compression
  • Whether the number of concurrent requests for on-demand chunks is at most five
  • Whether the number of concurrent requests on initial page load is at most three

You can change the configuration according to your project environment. The configuration code is as follows:

splitChunks({
  cacheGroups: {
    vendors: {
      name: `chunk-vendors`,
      test: /[\\/]node_modules[\\/]/,
      priority: -10,
      chunks: 'initial',
    },
    dll: {
      name: `chunk-dll`,
      test: /[\\/]bizcharts|[\\/]\@antv[\\/]data-set/,
      priority: 15,
      chunks: 'all',
      reuseExistingChunk: true
    },
    common: {
      name: `chunk-common`,
      minChunks: 2,
      priority: -20,
      chunks: 'all',
      reuseExistingChunk: true
    },
  }
})

Projects not yet on webpack 4.x can still split their code through on-demand loading, which spreads the bundle out and improves loading performance.

On-demand loading was also an important code-splitting technique in the past.

Here’s a great article: How webpack implements on-demand loading. A sketch of the idea follows.
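A minimal sketch of on-demand loading with a dynamic import(); the '#report-btn' element and './report' module are hypothetical, and webpack emits the imported module as a separate chunk that only loads on click:

const button = document.querySelector('#report-btn');

button.addEventListener('click', async () => {
  // Fetched and parsed only when the user actually needs it.
  const { renderChart } = await import(/* webpackChunkName: "report" */ './report');
  renderChart();
});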

3.3 Unbundling

This is different from the chunk splitting in 3.2. Here, libraries such as react, react-dom and react-router are pulled out of the bundle entirely.

These libraries are no longer bundled together with our code; they are served from a CDN instead. An example will make this clearer.

Suppose the original bundle is 2 MB and is pulled in one request. After unbundling, the split bundle (1 MB) and the React family of libraries on the CDN (1 MB) are pulled in two concurrent requests.

From this perspective, the 1+1 mode pulls resources faster.

Looked at another way, with full redeployments the bundle has to be re-downloaded after every deploy, which wastes resources. The React libraries on the CDN can hit the strong cache, so even after a full deploy only the remaining 1 MB bundle needs to be fetched again, saving server resources and improving loading speed.

Note: during local development it is better not to pull react and other resources from a CDN; frequent refreshes during development would add load to the CDN service, so use local copies instead.

3.4 gzip

After gzip compression is configured on the server, resource sizes can be greatly reduced.

Nginx configuration:

http {
  gzip on;
  gzip_buffers 32 4K;
  gzip_comp_level 6;
  gzip_min_length 100;
  gzip_types application/javascript text/css text/xml;
  gzip_disable "MSIE [1-6]\.";
  gzip_vary on;
}

After the configuration takes effect, you can verify it in the response headers.

3.5 Image Compression

This is an important part of development. Our company's in-house image-hosting tool has compression built in, and images are uploaded to the CDN right after compression.

If your company doesn't have an image-hosting tool, how can you compress images? Here are a few approaches I use:

  • Zhitu image compression (the official site is hard to find via Baidu; free, batch, easy to use)
  • TinyPNG (free, batch, fast)
  • Fireworks: reduce pixel density and dimensions by hand (do it yourself and control the scale)
  • Ask the UI designer to compress the images and send them to you

Image compression is a routine technique: because of device pixel ratios, the images the UI designer provides are usually @2x or @4x, so compression is very necessary.

3.6 Image slicing

If the page has a showcase image, such as a photorealistic rendering, and the designer absolutely will not let you compress it, consider slicing the image.

It is suggested that a single image should not exceed 100 KB. After slicing, the pieces are stitched back together through the layout, which improves image loading efficiency.

One thing to note: each slice must be given an explicit height, otherwise the layout will collapse when the network is slow.

3.7 Sprites

In some parts of China this is called 精灵图, in others 雪碧图; an interesting naming quirk. In English it is simply a sprite sheet.

If your site has many small images, merge them into one larger image and display each piece via the CSS background position.

What are the benefits? First, a quick rule of thumb:

For example, if a page has 10 small images under the same CDN domain, the browser must issue 10 requests, but the per-domain concurrency limit splits them into two batches; the second batch is only initiated after the first batch comes back.

If you merge the 10 small images into one large image, a single request pulls down all 10. This reduces server pressure, concurrency, and the number of requests.

A Sprite example is attached.

3.8 CDN

CDN stands for content delivery network. Where the origin server is centralized, a CDN is “decentralized”.

Many things in a project are placed on the CDN, such as static files, audio, video, JS resources, and images. So why does using a CDN make resources load faster?

Here’s a simple example:

In the past you could only buy train tickets at the railway station; later you could buy them at the ticket office downstairs.

Get the idea?

Therefore, it is recommended to put static resources on the CDN to speed up resource loading.

3.9 Lazy loading

Lazy loading, also known as deferred loading, means delaying the loading of images on long pages, and it is a great way to optimize web page performance.

Resources outside the visible area are not loaded until the viewport scrolls to where they are needed.

Reduces server load and is usually used in business scenarios with many images and long pages.

How do you use lazy loading?

  • Lazy image loading
  • layzr.js
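A minimal sketch of image lazy loading with IntersectionObserver; it assumes <img data-src="..."> placeholders, and libraries such as layzr.js wrap the same idea:

// Swap data-src into src only when the image approaches the viewport.
const lazyImages = document.querySelectorAll('img[data-src]');

const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach((entry) => {
    if (!entry.isIntersecting) return;
    const img = entry.target;
    img.src = img.dataset.src;       // start the real download
    img.removeAttribute('data-src');
    obs.unobserve(img);              // each image only needs to load once
  });
}, { rootMargin: '200px' });         // start loading a little before it scrolls into view

lazyImages.forEach((img) => observer.observe(img));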

3.10 iconfont

Icon fonts are a popular approach nowadays. Using icon fonts has several benefits:

  • Vector based
  • Lightweight
  • Easy to modify
  • No extra image requests

Like the sprite sheet above, replacing small images with icon fonts removes the need for separate image requests; the icons can go straight into the bundle.

The prerequisite is cooperation from the UI designers: favour icon fonts in the design, provide the assets in advance, and build up a good icon-font library.

3.11 Logic backward shift

Logic backward shift, i.e. deferring non-critical logic, is a common optimization. Take opening an article website as an example.

This is the request order without any logic deferral:

The main content of the page is the article itself. If the request for the article is issued late, the article is rendered late, because request blocking and other conditions can delay its response; with many concurrent requests it gets even slower. This is exactly what happened in our project.

Obviously we should move the main “fetch article” request forward and push the non-critical request logic back. This renders the main content as early as possible and feels much faster.

The optimized order looks like this.

In everyday development, keep logic deferral in mind and prioritise the main logic; it can greatly improve the user experience.
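A minimal sketch of the idea; the endpoints and element are hypothetical, and the point is that the main request fires first while non-critical work is deferred until the browser is idle:

async function renderPage() {
  // 1. Critical path: fetch and render the article immediately.
  const article = await fetch('/api/article/123').then((r) => r.json());
  document.querySelector('#article').textContent = article.body;

  // 2. Non-critical: recommendations and reporting can wait.
  const deferred = () => {
    fetch('/api/recommendations');                 // sidebar data
    fetch('/api/report-view', { method: 'POST' }); // analytics beacon
  };
  if ('requestIdleCallback' in window) {
    requestIdleCallback(deferred);
  } else {
    setTimeout(deferred, 2000); // simple fallback
  }
}

renderPage();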

3.12 Algorithm Complexity

In application scenarios with a large amount of data, attention should be paid to the algorithm complexity.

For this, you can refer to the article on complexity analysis of JavaScript algorithms.

If your code takes too long to execute, consider whether you should optimize the complexity.

When choosing between trading time for space or space for time, weigh the trade-off against the business scenario, as in the sketch below.
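A minimal sketch of trading space for time with hypothetical article/user data: a Map index built once replaces a nested scan, turning an O(n*m) pass into roughly O(n + m):

function attachAuthors(articles, users) {
  // Space for time: index users by id once...
  const userById = new Map(users.map((u) => [u.id, u]));
  // ...so each article lookup is O(1) instead of scanning the users array.
  return articles.map((a) => ({ ...a, author: userById.get(a.authorId) }));
}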

3.13 Component rendering

Taking React as an example, I won't go deep into component splitting here; the key is to control component rendering, especially re-renders of deeply nested components.

The usual ways to optimize component rendering are:

  • Lifecycle control – for example React's shouldComponentUpdate to decide whether a component re-renders.
  • Official API – PureComponent.
  • Control the props passed into a component.
  • Give components a stable, unique key.

Unnecessary rendering is a huge waste of performance.
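A minimal sketch of the first two points; UserCard and Counter are hypothetical components:

import React from 'react';

// Re-renders only when the `user` prop changes (shallow comparison),
// the function-component counterpart of PureComponent.
const UserCard = React.memo(function UserCard({ user }) {
  return <div>{user.name}</div>;
});

// Class component: skip rendering when the relevant prop is unchanged.
class Counter extends React.Component {
  shouldComponentUpdate(nextProps) {
    return nextProps.count !== this.props.count;
  }
  render() {
    return <span>{this.props.count}</span>;
  }
}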

3.14 Node middleware


Middleware is essentially a method that encapsulates the details of HTTP request handling. An HTTP request usually involves a lot of work, such as logging, IP filtering, query-string parsing, request-body parsing, cookie handling, permission validation, parameter validation, and exception handling, but a web application should not have to deal with all of these details directly. Introducing middleware simplifies and isolates these infrastructure details from the business logic, letting us focus on business development and improving development efficiency.

Node middleware can also be used to merge requests and reduce the number of round-trips from the browser, which is very practical.
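A minimal sketch using Express and Node 18+ (for the built-in fetch); the /api/home route and upstream URLs are hypothetical, and the point is that the browser makes one request while the server fans out to the upstream services:

const express = require('express');
const app = express();

app.get('/api/home', async (req, res) => {
  try {
    // Fan out to the upstream services in parallel on the server side.
    const [user, articles] = await Promise.all([
      fetch('http://user-service.internal/user').then((r) => r.json()),
      fetch('http://article-service.internal/articles').then((r) => r.json()),
    ]);
    res.json({ user, articles }); // one response instead of two client round-trips
  } catch (err) {
    res.status(502).json({ error: 'upstream failure' });
  }
});

app.listen(3000);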

3.15 Web Worker

The role of a Web Worker is to create a multithreaded environment for JavaScript, allowing the main thread to create Worker threads and assign some tasks to the latter. While the main thread is running, the Worker thread is running in the background, and the two do not interfere with each other. Wait until the Worker thread completes the calculation task, and then return the result to the main thread. The benefit of this is that some computation-intensive or high-latency tasks are carried by Worker threads, and the main thread (usually responsible for UI interaction) will be smooth and not blocked or slowed down.

Used judiciously, Web Workers can optimize complex computing tasks. For an introduction, see Ruan Yifeng's article: Portal
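A minimal sketch; 'heavy-worker.js' is a hypothetical file name and the summation stands in for any computation-heavy task:

// main.js: offload the heavy computation so the UI thread stays responsive.
const worker = new Worker('heavy-worker.js');

worker.postMessage({ numbers: Array.from({ length: 1e6 }, (_, i) => i) });
worker.onmessage = (event) => {
  console.log('sum from worker:', event.data);
  worker.terminate(); // free the thread when done
};

// heavy-worker.js: runs off the main thread.
self.onmessage = (event) => {
  const sum = event.data.numbers.reduce((acc, n) => acc + n, 0);
  self.postMessage(sum);
};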

3.16 Caching

The principle of caching is: faster read/write storage media + less I/O + less CPU computation = performance optimization. The first law of performance optimization is to prioritize caching.

The main means of caching are: browser cache, CDN, reverse proxy, local cache, distributed cache, database cache.

3.17 GPU rendering

Almost every web page involves some CSS animation. Simple animations usually have very little impact on performance, but with more complex animations performance issues can become very prominent.

Chrome, FireFox, Safari, Internet Explorer 9+, and the latest version of Opera all support GPU acceleration, which turns on when they detect that some CSS rule has been applied to a DOM element in a page.

Even when we don't want to apply a real 3D transformation to an element, we can still engage the 3D engine. For example, we can turn on GPU acceleration with transform: translateZ(0).

Apply this only to the elements that actually need to be animated; using it indiscriminately just to turn on hardware acceleration is not reasonable.

3.18 Ajax caching

To improve page response speed and user experience, the requested URL and the returned response can be cached after a successful Ajax call. The next time the same request is sent (same URL and parameters), the data is taken directly from the cache.

When making Ajax requests, prefer the GET method where possible so the browser's client-side cache can be used, speeding up the request.
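A minimal sketch of an in-memory cache on top of GET requests; cachedGet is a hypothetical helper keyed by URL (including query parameters):

const ajaxCache = new Map();

function cachedGet(url) {
  if (ajaxCache.has(url)) return ajaxCache.get(url); // reuse the earlier result
  const promise = fetch(url).then((res) => res.json());
  ajaxCache.set(url, promise); // caching the promise also dedupes concurrent calls
  return promise;
}

// Both calls resolve from a single network request.
cachedGet('/api/articles?page=1');
cachedGet('/api/articles?page=1');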

3.19 Resource Hints

Resource Hints are a great way to optimize performance and reduce page load times, giving users a smoother user experience.

Modern browsers use a number of predictive optimization techniques to anticipate user behavior and intentions, such as preconnect, resource prefetch, and prerender.

Resource Hints:

  • The list of resources the current page needs to fetch
  • Resources predicted based on the current page or application state, the user's historical behavior, or the session

There are many ways to implement Resource Hints: dns-prefetch, subresource, preload, prefetch, preconnect, prerender, and even localStorage-based caching.
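A minimal sketch of injecting hints from JavaScript (they are usually written directly as <link> tags in the HTML head; the hosts and paths below are placeholders):

function addHint(rel, href, as) {
  const link = document.createElement('link');
  link.rel = rel;
  link.href = href;
  if (as) link.as = as;
  if (as === 'font') link.crossOrigin = 'anonymous'; // fonts must be fetched with CORS
  document.head.appendChild(link);
}

addHint('dns-prefetch', 'https://cdn.example.com'); // resolve DNS ahead of time
addHint('preconnect', 'https://cdn.example.com');   // DNS + TCP + TLS ahead of time
addHint('preload', '/fonts/icons.woff2', 'font');   // high-priority fetch for this page
addHint('prefetch', '/js/next-page.chunk.js');      // low-priority fetch for a likely next page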

3.20 SSR

With SSR, rendering is done on the server and the fully rendered HTML page is sent to the client over HTTP; such apps are often described as “isomorphic” or “universal”. If your project has a large number of detail pages with particularly frequent navigation between them, server-side rendering is recommended.

Besides SEO, server-side rendering (SSR) is often used to optimize the first screen, speeding up the first paint and improving the user experience. However, it places demands on the server, transfers more data over the network, and consumes part of the server's computing resources.

Vue’s Nuxt.js and React’s Next.js are both server-side rendering solutions.

3.21 UNPKG

UNPKG is a site that provides CDN acceleration for npm packages, so you can write relatively fixed dependencies into the HTML template to improve page performance. First, declare these dependencies as externals so that webpack does not bundle them from node_modules. Configure them as follows:

externals: { 'react': 'React' }

Second, you need to reference these dependencies in the HTML template. This step uses html-webpack-plugin. Here's an example:

<% if (htmlWebpackPlugin.options.node_env === 'development') { %>
  <script src="https://unpkg.com/[email protected]/umd/react.development.js"></script>
<% } else { %>
  <script src="https://unpkg.com/[email protected]/umd/react.production.min.js"></script>
<% } %>

This template needs node_env injected so that you get friendlier error messages during development. You can also choose a more automated library to handle this process, such as webpack-cdn-plugin or dynamic-cdn-webpack-plugin.

4. Summary

There are other common optimizations I haven't listed, such as putting stylesheets at the top, putting scripts at the bottom, reducing repaints, loading on demand, and modularization. There are many techniques; the key is prescribing the right remedy for the problem.

This article borrows from many experts' summaries. I hope that fellow beginners like me can always keep a beginner's mind.