preface

Front-end development is a kind of GUI development, but it has its own particularity, which lies in the word "dynamic". In traditional GUI development, both desktop and mobile client applications must be downloaded first, and only then do they run on the local operating system. The front end is different: it is delivered dynamically and incrementally. Front-end applications are typically loaded in real time rather than downloaded in advance, and this creates a problem: what most affects front-end performance is often not computation or rendering, but loading speed, which directly impacts user experience and retention.

Lara Swanson, author of Designing for Performance, wrote in a 2014 article called "Web Performance is User Experience" that "faster page loading builds user trust and leads to more visits. Most users expect pages to load in less than two seconds, and after three seconds close to 40% of users will leave your site."

It is worth mentioning that GUI development does share one common concern: perceived performance. Perceived performance does not mean performance optimization in absolute terms; it serves the fundamental purpose of user experience, because in GUI development the pursuit of absolute performance is mostly meaningless.

For example, suppose an animation already runs at 60 frames per second and you push it to 120 with some brilliant algorithm. This does nothing for your KPI, because the optimization itself is meaningless: hardly anyone but a few superhuman outliers can tell 60 frames from 120. Conversely, suppose the first screen of a website takes 4s to load and you do no performance optimization in the technical sense, but simply add a loading graphic from your designer. That is still a very meaningful optimization, because a good loading indicator reduces user anxiety and makes users feel they have not waited too long. That is performance optimization at the user-experience level.

Therefore, we want to emphasize that even without any actual performance gain, improving the user experience through design still counts as performance optimization, because GUI development faces the user directly: giving the user the illusion of speed is also performance optimization. After all, if the user thinks it is fast, it is fast…


Article outline

  1. First screen loading optimization
  2. Route jump loading optimization

1. First screen loading

First screen loading is the most discussed topic: on the one hand, first-screen loading performance on the Web front end is generally poor; on the other hand, first-screen loading speed really matters.

To interpret the key points of the first screen from the user-experience perspective: as a user, what is our mental process after entering a URL?

When we hit Enter, our first question is: "Is it happening?" This question remains until we see the first painted element on the page, at which point we can be sure the request is valid (and not blocked…). Then comes the second question: "Is it useful?" If the page shows only assorted out-of-order elements, it is incomprehensible: the page has started loading, but it has no value to the user until text content, interactive buttons, and other meaningful elements finish loading and the page can be understood. When the user then tries to interact with the page, the third question appears: "Is it usable?" The first screen is not truly loaded until the user can successfully interact with the page.

During the waiting period between the first question and the second question, a blank screen appears, which is the key to optimization.

1.1 Definition of white screen

No matter how much we optimize performance, the first screen will always show a blank period, because of the nature of front-end technology.

So let's define the white screen first, so that we can compute the white-screen time consistently, because people calculate it differently. Some say it is the interval from First Paint (FP) to First Contentful Paint (FCP); I personally disagree. I prefer to define it as the time from the route change (the moment the user presses Enter) to First Contentful Paint (the moment the first content becomes visible). Judging by the user's psychology, once Enter is pressed the request has been launched, and until the first element is painted the user is anxious: he does not know whether the request will be answered (is the site down?) or how long the response will take. This is the user's first waiting period.

White-screen time = firstPaint - performance.timing.navigationStart
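As a sketch (the helper name is ours, not a standard API), the white-screen duration can be read from the Paint Timing API, whose entries are already relative to `navigationStart`:

```javascript
// Hypothetical helper: pick the first-paint entry out of a list of
// PerformanceEntry-like objects and return its start time in ms.
// Paint Timing entries are already relative to navigationStart,
// so no extra subtraction is needed when reading them this way.
function whiteScreenTime(paintEntries) {
  const fp = paintEntries.find((e) => e.name === 'first-paint');
  return fp ? fp.startTime : null;
}

// In a browser (assuming the Paint Timing API is supported):
// whiteScreenTime(performance.getEntriesByType('paint'));
```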

Take the WebApp version of Weibo (one of Weibo's few conscientious products) as an example. According to Lighthouse (Google's website auditing tool), its white-screen load time is about 2s, which is a very good result.


1.2 Analysis of White Screen Loading

In modern front-end development we tend to bundle applications with webpack or a similar packager. Without optimization, this often produces huge chunks, some around 5M (my first webpack 1.x build came out at 8M), and such chunks are load-speed killers.

Browsers generally limit concurrent requests. Chrome, for example, allows six concurrent requests per domain, which means the first six requests must complete before later ones can proceed, and this also affects how fast our resources load.
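To see why the cap matters, here is a minimal sketch (all names are ours) of a concurrency limiter like the per-host queue browsers apply: requests beyond the limit wait until a slot frees up.

```javascript
// Runs the given async task factories with at most `limit` in flight,
// preserving result order -- a toy model of the browser's per-host queue.
async function runLimited(tasks, limit) {
  const results = [];
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const idx = next++; // claim the next task synchronously
      results[idx] = await tasks[idx]();
    }
  }
  const workers = Array.from(
    { length: Math.min(limit, tasks.length) },
    worker
  );
  await Promise.all(workers);
  return results;
}
```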

Of course, network and bandwidth affect loading speed at every stage, and the white screen is no exception.

1.3 Performance optimization of white screen

Let’s take a look at what happens during the white screen time:

  1. Press Enter: the browser parses the URL, performs a DNS query, gets back an IP address, and sends an HTTP(S) request to that address
  2. The server returns the HTML; the browser starts parsing it, triggering requests for the JS and CSS resources it references
  3. The JS loads and executes, calling various functions to build the DOM and render it into the root node, until the first visible element is produced
1.3.1 Loading hints

If you're using a webpack-based front-end framework, your index.html file will look something like this:

	<div id="root"></div>

We render the entire bundle into this root node. How? React, for example, calls `React.createElement()` over and over, which is time-consuming. While the JS is being fetched and executed, the screen stays blank even though the HTML has already loaded, so why not add a loading hint during that window to improve the user experience?

Yes. We usually already use a webpack plugin called html-webpack-plugin, whose configuration lets us inject a loading graphic into the generated HTML.

Webpack configuration:

const HtmlWebpackPlugin = require('html-webpack-plugin')
const loading = require('./render-loading') // Pre-designed loading markup

module.exports = {
  entry: './src/index.js',
  output: {
    path: __dirname + '/dist',
    filename: 'index_bundle.js'
  },
  plugins: [
    new HtmlWebpackPlugin({
      template: './src/index.html',
      loading: loading
    })
  ]
}
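By default html-webpack-plugin evaluates the template with access to its options, so the `loading` object configured above can be injected into the page. A sketch of `src/index.html`, assuming (our assumption) that `loading` exposes `html` and `css` strings, as a hand-rolled `render-loading` module might:

```html
<!-- src/index.html (sketch): the placeholders are EJS-style
     expressions that html-webpack-plugin evaluates at build time -->
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <style><%= htmlWebpackPlugin.options.loading.css %></style>
  </head>
  <body>
    <!-- The loading markup shows immediately; the app replaces it
         once the bundle executes and renders into #root -->
    <div id="root"><%= htmlWebpackPlugin.options.loading.html %></div>
  </body>
</html>
```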
1.3.2 (pseudo) server rendering

So, since there is a wait between HTML loading and JS execution, why not render on the server side? The HTML returned would already contain the complete DOM structure, sparing the client all the JS-driven DOM creation work, and it is SEO-friendly too.

Both Vue and React support server-side rendering, and nuxt.js and next.js are popular frameworks for it, but migrating is expensive for applications that already use client-side rendering.

Google has developed a library called Puppeteer, which is essentially a headless browser. With it you can drive all kinds of browser operations from code: for example, you can use Node to save HTML as a PDF, or simulate clicks and form submissions on the back end. Naturally, you can also run the browser to obtain the first screen's HTML structure.

prerender-spa-plugin is a plugin based on this principle: it emulates a browser environment locally and pre-executes our bundled files, so the first screen's HTML can be captured ahead of time; in production we then return that pre-rendered HTML.

1.3.3 Enabling HTTP/2

We have seen that after obtaining the HTML we must parse it from top to bottom, and related resources can only be requested once their script tags are parsed. Moreover, due to the browser's concurrency limit, at most six requests can be in flight at once.

HTTP/2 is a very good solution here; its own mechanisms make it fast enough:

  1. HTTP/2 communicates in binary frames, whereas HTTP/1.x communicates in text, which makes parsing more efficient
  2. HTTP/2 supports multiplexing: for the same domain only one TCP connection needs to be established, and requests and responses travel over it as two-way communication, while HTTP/1.x needs a connection per concurrent request
  3. HTTP/2 compresses headers, which saves the network traffic they occupy; HTTP/1.x carries a large amount of redundant header information with every request, wasting bandwidth

For example, in the two requests shown below, request one sends all header fields while request two sends only the fields that differ, which reduces redundant data and overhead

  4. HTTP/2 supports server push: normally we parse the HTML and only then request the CSS and JS it references, but HTTP/2 can push those resources directly without waiting for a request, which saves a great deal of round-trip time

We can use this site to test whether HTTP/2 is enabled

I have run my own tests: on a fast network with a high-performance device, HTTP/2 shows no obvious advantage over HTTP/1.1, but the worse the network and the device, the bigger the quality jump HTTP/2 delivers when loading. You could say HTTP/2 was born for the mobile Web, while on a fiber-connected, high-performance PC its advantage is less obvious.
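On the server side, enabling HTTP/2 is often a one-line change. A minimal nginx sketch (the domain and certificate paths are placeholders; browsers only speak h2 over TLS):

```nginx
server {
    listen 443 ssl http2;              # "http2" upgrades this TLS listener
    server_name example.com;           # placeholder domain

    ssl_certificate     /etc/nginx/certs/example.pem;  # placeholder paths
    ssl_certificate_key /etc/nginx/certs/example.key;

    root /var/www/app;                 # static front-end assets
}
```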

1.3.4 Enabling Browser Cache

Since HTTP requests are so costly, can we avoid them, or reduce the payload they carry, to improve performance?

Taking advantage of the browser cache is a great way to minimize HTTP requests, so let's first review HTTP caching.

Let’s start by listing the cache-related request response headers.

  • Expires

A response header giving the expiration time of the resource.

  • Cache-Control

A request/response header; the cache-control field precisely controls the caching policy.

  • If-Modified-Since

A request header: the browser tells the server when its cached copy of the resource was last modified.

  • Last-Modified

A response header: the server tells the browser when the resource was last modified.

  • Etag

A response header: the server gives the browser an identifier for the resource.

  • If-None-Match

A request header: the browser sends the identifier of its cached resource back to the server.

Fields used for pairing:

  • If-Modified-Since and Last-Modified
  • Etag and If-None-Match
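A minimal sketch of how the paired fields negotiate (the `negotiate` helper is ours, not from any framework): the server compares the request's validators against its stored copy and answers 304 when the cache is still fresh.

```javascript
// Decide whether a conditional request can be answered with 304.
// `resource` is the server-side copy: { etag, lastModified, body }.
function negotiate(requestHeaders, resource) {
  const tag = requestHeaders['if-none-match'];
  if (tag && tag === resource.etag) {
    return { status: 304, body: null };   // ETag matched: reuse the cache
  }
  const since = requestHeaders['if-modified-since'];
  if (since && new Date(since) >= new Date(resource.lastModified)) {
    return { status: 304, body: null };   // not modified since the cached copy
  }
  return { status: 200, body: resource.body }; // send the full resource
}
```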

When there is no local cache it looks like this:










In general, a WebApp contains our own code plus third-party libraries. Our code changes often, while third-party libraries rarely change except for major version upgrades, so the two should be bundled separately. We can then give the third-party bundle a long cache lifetime, so that library code is not requested over and over.

So how do you extract third-party libraries? In webpack 4.x, the SplitChunksPlugin replaces the CommonsChunkPlugin for common-module extraction; we can configure SplitChunksPlugin to split the bundle.

SplitChunksPlugin configuration is as follows:

optimization: {
    splitChunks: {
      chunks: "initial",            // chunk type, one of: "initial" | "all" (the default) | "async" (dynamically loaded)
      minSize: 0,                   // minimum size, default 0
      minChunks: 1,                 // minimum number of chunks sharing a module, default 1
      maxAsyncRequests: 1,          // maximum number of async requests, default 1
      maxInitialRequests: 1,        // maximum number of initial requests, default 1
      name: () => {},               // this option also accepts a function
      cacheGroups: {                // cache groups inherit the splitChunks configuration, but test, priority and reuseExistingChunk can only be configured on a cache group
        priority: 0,                // priority of the cache group
        vendor: {                   // the key is the name of the cache group
          chunks: "initial",        // one of: "initial" | "all" | "async" (the default)
          test: /react|lodash/,     // modules matching this rule are extracted into the chunk
          name: "vendor",           // name of the extracted chunk, used for caching
          minSize: 0,
          minChunks: 1,
          enforce: true,
          reuseExistingChunk: true  // reuse an existing chunk instead of creating a new one
        }
      }
    }
}

SplitChunksPlugin has many configuration options; see the official documentation to learn how to configure it in depth.

If we just want to extract third-party libraries, we can configure it simply like this:

   splitChunks: {
      chunks: 'all',                    // 'initial', 'async' or 'all'
      minSize: 30000,                   // minimum size for a new chunk to be created
      maxAsyncRequests: 5,              // maximum number of parallel requests when loading on demand
      maxInitialRequests: 3,            // maximum number of initial requests
      automaticNameDelimiter: '~',      // delimiter for generated chunk names
      name: true,
      cacheGroups: {
        vendor: {
          name: "vendor",
          test: /[\\/]node_modules[\\/]/, // extract third-party libraries
          chunks: "all",
          priority: 10                  // priority
        },
        common: {                       // extract the remaining shared code
          minChunks: 2,                 // modules imported twice or more are extracted
          name: 'common',               // name of the extracted chunk
          chunks: 'all',
          priority: 5
        }
      }
    },

And that seems to be it? No, this configuration still has big problems:

  1. Can we really just crudely bundle all third-party libraries together? Of course that is a problem: with every library in one bundle, the chunk changes whenever we upgrade any library or introduce a new one, so the chunk is too volatile to cache for long. Moreover, to improve page load speed the first priority is to reduce the amount of code the page depends on. React, Vue, and Redux are base libraries the whole application, home page included, must rely on, but special-purpose libraries such as d3.js or three.js do not need to load on the first screen, so we should separate the application's base libraries from the rest.
  2. When a chunk sits in the strong cache but the code on the server has changed, how do we notify the client? As the diagram above shows, when a resource hits the cache the browser reads it locally without asking the server, so what if the server code has changed by then? The answer is to not cache index.html (HTML pages in the webpack era are tiny anyway, so caching them gains little), so that each release ships fresh script references, and to enable chunkhash, which generates a new hash when a chunk's content changes and keeps the old one when it does not. That way, when index.html loads its script resources, an unchanged chunkhash hits the cache, while a changed one makes the browser fetch the new resource from the server.

The following configuration shows how to split the libraries apart: the React base libraries, the utility libraries such as lodash, and the Echarts chart library each get their own chunk

      cacheGroups: {
        reactBase: {
          name: 'reactBase',
          test: (module) => {
            return /react|redux/.test(module.context);
          },
          chunks: 'initial',
          priority: 10,
        },
        utilBase: {
          name: 'utilBase',
          test: (module) => {
            return /rxjs|lodash/.test(module.context);
          },
          chunks: 'initial',
          priority: 9,
        },
        uiBase: {
          name: 'chartBase',
          test: (module) => {
            return /echarts/.test(module.context);
          },
          chunks: 'initial',
          priority: 8,
        },
        commons: {
          name: 'common',
          chunks: 'initial',
          priority: 2,
          minChunks: 2,
        }
      }

We then hash the chunks. As the figure below shows, after we change the code related to chunk2, no other chunk changes; only chunk2's hash changes

  output: {
    filename: mode === 'production' ? '[name].[chunkhash:8].js' : '[name].js',
    chunkFilename: mode === 'production' ? '[id].[chunkhash:8].chunk.js' : '[id].js',
    path: getPath(config.outputPath)
  }

With the HTTP cache plus webpack's hash-based strategy, our front-end project now takes full advantage of caching. But there is a reason webpack needs the fabled "webpack configuration engineer": webpack itself is full of dark corners. For instance, what happens if the chunk2-related code removes a dependency, or imports a new one that already exists elsewhere in the project?

Our natural expectation is that only chunk2 changes. In fact a large number of unrelated chunks get new hashes, which invalidates our cache policy. The figure below shows the hashes after the change: the ones circled in red all changed, even though we only touched chunk2-related code. Why is this?





I recommend reading this article as an extension to webpack Hash caching

1.4 FMP (First Meaningful Paint)

After the white screen ends, the page starts rendering, but it may still contain only meaningless elements: drop-down buttons, out-of-order fragments, navigation, and so on. They are parts of the page, but not what the user came for. What is meaningful? For a search-engine user it is the complete search results; for a Weibo user, the posts on the timeline; for a Taobao user, the product page.

So between FCP and FMP the page is being painted but remains meaningless as a whole, and users are still waiting anxiously. Worse, out-of-order or flickering elements may appear during this period, which badly hurts the experience, so this is where we optimize for perceived experience. A Skeleton screen is a good idea, and skeletons are now widely used. The point of a Skeleton is to lay out placeholders for the elements about to be rendered, avoiding flashes of content and signaling to the user that rendering is imminent, which reduces anxiety.

Weibo’s Skeleton, for example, has done well






Skeleton



vue-skeleton-webpack-plugin

With vue-cli 3, for example, it can be configured directly in vue.config.js

// Import the plugin
const path = require('path');
const SkeletonWebpackPlugin = require('vue-skeleton-webpack-plugin');

module.exports = {
	// Refer to the official documentation for other configuration options
	configureWebpack: (config) => {
		config.plugins.push(new SkeletonWebpackPlugin({
			webpackConfig: {
				entry: {
					app: path.join(__dirname, './src/Skeleton.js'),
				},
			},
			minimize: true,
			quiet: true,
		}));
	},
	// Extract the skeleton screen's CSS and inline it directly into the HTML to improve loading speed
	css: {
		extract: true,
		sourceMap: false,
		modules: false
	}
}

Then prepare the basic Vue skeleton file; see the plugin's documentation for the details.

1.5 TTI (Time To Interactive)

When meaningful content has rendered and the user tries to interact with the page, the page may not actually be ready: it only appears loaded, while the JavaScript is still executing intensively.

We see a lot of script execution even after the page is almost rendered


Between the time a page is rendered and the time it becomes interactive, most of the cost goes to JavaScript parsing and execution. Two things determine how fast that happens:

  1. JavaScript Script volume
  2. JavaScript execution speed itself

We covered part of the size problem in the previous section: SplitChunksPlugin reduces volume, but there are other techniques, which we cover next.

1.5.1 Tree Shaking

Tree Shaking has been around for a long time. Rollup, the de facto standard bundler for JS base libraries, pioneered it; after React switched its build to Rollup, its bundle size dropped by about 30%.

Tree Shaking removes code that is never used from the bundle. Without it, a lot of code that is defined but never called still ships to, and executes on, the user's client, which is a performance killer. Tree Shaking relies on the static structure of ES6 modules to analyze and eliminate this dead code.

Since webpack 4.x, Tree Shaking is enabled by default in production mode, so it is an out-of-the-box technology. That does not mean it actually takes effect, though, because there are plenty of pitfalls.

Pitfall 1: as mentioned, Tree Shaking requires ES6 modules. If the code uses a dynamic module system like CommonJS, Tree Shaking is disabled, yet Babel transforms ES modules to CommonJS by default, so we must turn that off manually. Pitfall 2: Tree Shaking relies on ESM to analyze the application, but many libraries expose only ES5 builds for compatibility, which makes Tree Shaking ineffective on them. So prefer the ESM builds of libraries where they exist, for example `import { debounce } from 'lodash-es'` instead of `import { debounce } from 'lodash'`.
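For the first pitfall, the fix is a one-line Babel setting: tell preset-env to leave ES module syntax alone so the bundler can see the static imports. A sketch of `babel.config.js`:

```js
// babel.config.js (sketch): `modules: false` stops Babel from
// transforming `import`/`export` into CommonJS, which would
// otherwise disable webpack's tree shaking.
module.exports = {
  presets: [
    ['@babel/preset-env', { modules: false }],
  ],
};
```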

1.5.2 Dynamic loading of polyfill

Polyfills exist for browser compatibility, so it should be the browser on the client, not the developer, that decides whether a polyfill is needed. Yet for a long time developers have bundled polyfills in, which in many cases makes users load unnecessary code.

The solution is very simple: include `<script src="https://cdn.polyfill.io/v2/polyfill.min.js"></script>` directly. For Vue developers it is even friendlier, as the template generated by vue-cli now ships with this reference.

The principle is that the service identifies each browser by its User-Agent, from which the server can infer the operating system and version, CPU type, browser and version, rendering engine, language, plugins, and so on, and uses this information to decide which polyfills need to be served. Developers can inspect the User-Agent in the browser's network panel.

1.5.3 Dynamically loading ES6 Code

Since polyfills can be loaded dynamically, can ES5 and ES6+ bundles be served selectively too? Yes, but what is the point? Is ES6 faster?

First, after a new standard ships, browser vendors focus their optimization effort on it while optimization of the old standard plateaus, so ES6 performance will keep improving even if you simply program for the future. Second, we usually write ES6+ and let Babel or TypeScript emit the ES5 release, and tool-translated code usually performs worse than equivalent handwritten code; the performance comparison site shows the same thing, and although the translators keep improving, translated code still regresses, with class-heavy code degrading significantly. Finally, translated code is bloated: the translator uses many clever tricks to turn ES6 into ES5, which hugely increases volume, so shipping ES6 also means a smaller bundle.

So how do you serve ES6 code selectively? The secret is the script tag's `type="module"` and `nomodule` attributes.
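A common way to do this (file names below are placeholders) is the `type="module"` / `nomodule` pair: browsers that understand ES modules load the modern bundle and ignore `nomodule`, while legacy browsers skip `type="module"` scripts and fall back to the ES5 bundle.

```html
<!-- Modern browsers: load the ES6+ bundle, ignore the nomodule one -->
<script type="module" src="app.es2015.js"></script>
<!-- Legacy browsers: don't understand type="module", load this instead -->
<script nomodule src="app.es5.js"></script>
```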

Bundle size comparison

Execution time comparison


1.5.4 Splitting Code at the Route Level

We extracted third-party libraries with SplitChunksPlugin above, but the first-screen load still contains a lot of redundant code. For example, if our home page is a login screen, the code it actually needs is very simple:

  1. Framework base libraries such as Vue Redux, etc
  2. Parts of the UI framework form components and button components and so on
  3. A simple layout component
  4. A little more logic and style

The login screen needs minimal code, so why not load only the login screen's code? Beyond the base framework and UI library we only need the current page's code, and that requires Code Splitting. What we have to do is actually very simple: add the plugin-syntax-dynamic-import plugin to Babel so that dynamic `import()` can be used inside functions.

For Vue you can import routes like this

export default new Router({
  routes: [
    {
      path: '/',
      name: 'Home',
      component: Home
    },
    {
      path: '/login',
      name: 'login',
      component: () => import('@components/login')
    }
  ]
})

Your login page will then be bundled separately. With React, the built-in React.lazy() lets you dynamically load routes and components, much as in Vue. React.lazy() does not support server-side rendering yet, in which case you can use React Loadable instead.

2. Route jump loading optimization

A route is really just a big component. People tend to focus on first-screen loading and ignore the loading between routes, but loading between routes matters just as much: if it is too slow, the user experience suffers.

Note that in many cases the first screen actually loads faster than a route jump and is easier to optimize.

For example, jumping from the home page of Shimo Docs looks like this:












This is not because Shimo is badly engineered. For this kind of application, optimizing the jump into the working page is much harder than optimizing the first screen, because the working page contains far more code and complexity than a marketing site.

During that load we saw over 6000ms of JavaScript parsing and execution

2.1 Lazy Component loading

Code Splitting works not only at the route level but also at the component level, and the method is similar. The advantage of component-level splitting is that the page initially renders only the necessary components, while the rest are loaded on demand.

Take a Dropdown: we don't need to render its drop-down Menu when the page first renders, because the Menu is only needed after the Dropdown is clicked.

Route segmentation vs component segmentation





Our demo looks like this:

Let’s compare the loading of resources with and without component splitting (no compression in the development environment)

Without component splitting, we see one very large chunk, because besides our own code it contains the antd components, the Echarts charts, and parts of the React framework

With component splitting, the initial page's volume drops significantly, and so does the amount loaded on route jumps, which means faster loading

In fact, component splitting works much like route splitting, using the lazy + Suspense pattern for lazy component loading

// Dynamically load the chart component
const Chart = lazy(() => import(/* webpackChunkName: 'chart' */ './charts'))

// Modal component containing the chart
const ModalEchart = (props) => (
  <Modal
    title="Basic Modal"
    visible={props.visible}
    onOk={props.handleOk}
    onCancel={props.handleCancel}
  >
    <Chart />
  </Modal>
)

2.2 Component Preloading

Lazy loading reduced the resources needed for the page's initial render and improved load performance, but it created a new problem. In the last demo the initial page shrank from 3.9M to 1.7M, so the page loads fast, but the component now loads slowly.

The reason is that the remaining ~2M of resources all fall on the chart component (thanks to Echarts' size), so when we click the menu to open the chart there is a 1-2s loading delay, as shown below:

Can we load the chart ahead of time to avoid the long delay before it renders? This technique is called component preloading.

The principle is also very simple: trigger the loading of the chart's resources while the user's mouse is still hovering. By the time the user actually clicks, loading has usually finished, and the chart renders smoothly with no delay.

/**
 * @param {*} factory  loader for the lazily loaded component
 * @param {*} next     loader for the component that should be preloaded
 */
function lazyWithPreload(factory, next) {
  const Component = lazy(factory);
  Component.preload = next;
  return Component;
}

// ...

// Then trigger preloading in the component's event handler
const preloadChart = () => {
  Modal.preload()
}

The demo address

2.3 keep-alive

Vue developers should be familiar with the keep-alive API: it keeps a component alive even after the page switches away, saving the component instance in memory so the cached instance can be reused when the page needs to render it again.

This API risks memory leaks if a large number of instances are kept in memory and never destroyed, so take care to clean up in the deactivated hook
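A minimal Vue sketch (component names are placeholders): wrapping `router-view` in `keep-alive` caches route components, and the `include` whitelist keeps the set of cached instances bounded so memory does not grow without limit.

```html
<!-- Only components whose `name` matches the include list are cached -->
<keep-alive include="Dashboard,Editor">
  <router-view></router-view>
</keep-alive>
```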

React, however, has no equivalent. The official issue makes clear that no similar API will be added, but suggests two do-it-yourself approaches:

  1. Use global state management tools such as Redux for state caching
  2. Using `style={{display: 'none'}}` to control visibility

Looking at these two suggestions: Redux is verbose enough already, and the extra work and complexity of caching state through a global store like Redux is not worth the cost. Controlling display is simple but crude, and we lose a lot of room for manipulation, such as animation.

react-keep-alive solves this problem to some extent. It uses the React Portals API to mount cached components into the DOM outside the root node, remounts them to the proper place when they need to be restored, and offers an extra lifecycle method, componentWillUnactivate, for cleanup on deactivation.


Summary

Of course, there are many common performance optimizations that we haven’t covered:

  1. Lazy image loading, a technique that has been around since prehistoric times, from jQuery days through the modern frameworks
  2. Resource compression, now enabled automatically by virtually every reverse proxy
  3. CDN: few web products ship without one, especially since the rise of cloud vendors has made CDNs very cheap
  4. Domain convergence or divergence, of limited value under HTTP/2, since a single connection per domain can simply be multiplexed
  5. Sprite images, a very old technique, also of limited use under HTTP/2
  6. CSS first, JS last, which suited the pre-bundler era and is now largely handled by packaging tools
  7. And others…

This article focused on performance optimization in the front-end loading phase, and in many places only points in a direction. Real optimization still has to be analyzed and mined within an actual project, according to its specific situation, to reach the best result.


Reference links:

  1. Reference performance indicators
  2. The Tree Shaking principle
  3. Component preloading
  4. http2
  5. Deploy es6
  6. Tree-shaking performance optimization practices
  7. Caching strategies