Performance optimization is a double-edged sword. On the good side, it can make a website noticeably faster; on the bad side, it can be tricky to configure, and there are a lot of rules to keep track of. Some performance tuning rules do not apply to every scenario and should be used with caution, so please read this article with a critical eye.

References for each optimization suggestion are given at the end of that recommendation or at the end of the article.

1. Reduce HTTP requests

A complete HTTP request goes through DNS lookup, the TCP handshake, the browser sending the HTTP request, the server receiving and processing the request and sending back the response, and the browser receiving the response. Let's look at a specific example to help understand HTTP:

This is an HTTP request with a requested file size of 28.4KB.

Terminology:

  • Queueing: Time in the request queue.
  • Stalled: The delay between the establishment of the TCP connection and the actual transfer of data, including proxy negotiation time.
  • Proxy negotiation: Time taken to connect to the Proxy server for negotiation.
  • DNS Lookup: The time taken to perform a DNS Lookup, which is required for each different domain on the page.
  • Initial Connection/Connecting: Time taken to set up a Connection, including TCP handshakes/retries and SSL negotiation.
  • SSL: Time taken to complete the SSL handshake.
  • Request sent: Time taken to send the network request, usually a fraction of a millisecond.
  • Waiting (TTFB): The time from when the request is sent until the first byte of the response is received.
  • Content Download: Time taken to receive response data.

As you can see from this example, actual data download accounts for only 13.05 / 204.16 = 6.39% of the total time: the smaller the file, the smaller this proportion, and the larger the file, the higher it is. This is why it is recommended to combine multiple small files into one larger file, reducing the number of HTTP requests.
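
The Resource Timing API exposes the same phase breakdown from JavaScript, so you can measure this yourself. Below is a minimal sketch, not from the original article; note that cross-origin resources report zeroed timings unless the server sends a Timing-Allow-Origin header.

```js
// Break each resource's load time into the phases described above.
performance.getEntriesByType('resource').forEach((entry) => {
  console.log(entry.name, {
    dns: entry.domainLookupEnd - entry.domainLookupStart,
    tcp: entry.connectEnd - entry.connectStart,
    ttfb: entry.responseStart - entry.requestStart,     // waiting for the first byte
    download: entry.responseEnd - entry.responseStart,  // content download
    total: entry.duration
  })
})
```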

References:

  • understanding-resource-timing

2. Use HTTP2

HTTP2 has several advantages over HTTP1.1:

Faster parsing

When the server parses an HTTP1.1 request, it must continually read in bytes until it encounters the delimiter CRLF. Parsing HTTP2 requests is less cumbersome because HTTP2 is a frame-based protocol, and each frame has a field that represents the frame length.

Multiplexing

For HTTP1.1 to make multiple requests at the same time, multiple TCP connections must be established, because a TCP connection can only handle one HTTP1.1 request at a time.

With HTTP2, multiple requests can share one TCP connection; this is called multiplexing. Each request/response pair is represented by a stream and identified by a unique stream ID. Multiple requests and responses can be sent out of order over the TCP connection and reassembled at the destination using the stream ID.

Header compression

HTTP2 provides header compression.

For example, there are two requests:

:authority: unpkg.zhimg.com
:method: GET
:path: /[email protected]/dist/zap.js
:scheme: https
accept: */*
accept-encoding: gzip, deflate, br
accept-language: zh-CN,zh;q=0.9
cache-control: no-cache
pragma: no-cache
referer: https://www.zhihu.com/
sec-fetch-dest: script
sec-fetch-mode: no-cors
sec-fetch-site: cross-site
user-agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36

:authority: zz.bdstatic.com
:method: GET
:path: /linksubmit/push.js
:scheme: https
accept: */*
accept-encoding: gzip, deflate, br
accept-language: zh-CN,zh;q=0.9
cache-control: no-cache
pragma: no-cache
referer: https://www.zhihu.com/
sec-fetch-dest: script
sec-fetch-mode: no-cors
sec-fetch-site: cross-site
user-agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36

As you can see from the above two requests, there is a lot of data duplication. If you can store the same headers and send only the different parts between them, you can save a lot of traffic and speed up the request time.

HTTP/2 uses “header tables” on both the client and server to track and store previously sent key-value pairs. The same data is no longer sent on each request and response.

Let’s look at a simplified example. Suppose the client sends the following headers in order:

Header1:foo
Header2:bar
Header3:bat

When a client sends a request, it creates a table based on the header value:

| Index | Header name | Value |
| ----- | ----------- | ----- |
| 62 | Header1 | foo |
| 63 | Header2 | bar |
| 64 | Header3 | bat |

If the server receives the request, it creates a table in the same way. When the client sends the next request, if the header is the same, it can directly send a header block like this:

62 63 64

The server looks up the table it created earlier and restores these indexes to the full headers.

Priority

HTTP2 can assign a higher priority to more urgent requests, and the server can process those requests first after receiving them.

Flow control

Because the bandwidth of a TCP connection (determined by the network bandwidth between client and server) is fixed, when multiple requests run concurrently, one request taking more of the bandwidth means another gets less. Flow control allows the traffic of individual streams to be controlled precisely.

Server push

A powerful new feature of HTTP2 is that the server can send multiple responses to a single client request. In other words, in addition to the response to the original request, the server can push additional resources to the client without the client explicitly requesting them.

For example, when the browser requests a website, in addition to returning the HTML page, the server can push the resources referenced in that HTML page ahead of time, based on their URLs.
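
As an illustration only (server push is configured differently on every server), here is a minimal sketch using Node's built-in http2 module; the certificate and file paths are assumptions.

```js
const http2 = require('http2')
const fs = require('fs')

// Hypothetical certificate paths, for illustration only.
const server = http2.createSecureServer({
  key: fs.readFileSync('server.key'),
  cert: fs.readFileSync('server.crt')
})

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/') {
    // Push style.css before the client asks for it.
    stream.pushStream({ ':path': '/style.css' }, (err, pushStream) => {
      if (err) return
      pushStream.respondWithFile('style.css', { 'content-type': 'text/css' })
    })
    stream.respondWithFile('index.html', { 'content-type': 'text/html' })
  }
})

server.listen(8443)
```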

Now many websites have started to use HTTP2, such as Zhihu:

h2 refers to the HTTP2 protocol and http/1.1 refers to the HTTP1.1 protocol.

References:

  • HTTP2 profile
  • Half an hour to understand HTTP, HTTPS, and HTTP2

3. Use server-side rendering

Client-side rendering: get the HTML file, download the JavaScript files as needed, run them, generate the DOM, and then render.

Server-side rendering: The server returns the HTML file, and the client simply parses the HTML.

  • Advantages: fast first-screen rendering and good SEO.
  • Disadvantages: more troublesome to configure, and it increases the computing load on the server.

I will use Vue SSR as an example to briefly describe the SSR process.

Client rendering process

  1. Visit a client-rendered website.
  2. The server returns an HTML file containing the resource import statements and `<div id="app"></div>`.
  3. The client requests the resources from the server over HTTP, and once all necessary resources have loaded, it executes `new Vue()` to instantiate and render the page.

Server-side rendering process

  1. Visit a server-side rendered website.
  2. The server checks which resource files the current routing component needs, fills the contents of those files into the HTML, performs any Ajax requests to prefetch data and fills that in as well, and then returns the HTML page.
  3. When the client receives the HTML page, it can render it immediately. Meanwhile the page also loads its resources, and once all necessary resources have loaded, `new Vue()` executes to instantiate and take over the page.

As you can see from these two processes, the difference is in the second step: a client-rendered site returns an HTML shell directly, while a server-side rendered site renders the page first and then returns the finished HTML file.

What is the benefit of this? A faster time-to-content.

Suppose your site needs to load four files, a, b, c, and d, before rendering is complete, and each file is 1 MB.

Doing the math: the client-rendered site needs to load the four files plus the HTML file to finish rendering the home page, a total of roughly 4 MB (ignoring the HTML file size). The server-rendered site only needs to load a single rendered HTML file, and that file is usually not very large, generally a few hundred KB (the HTML file my personal SSR blog loads is about 400 KB). This is why server-side rendering is faster.
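
For reference, here is a minimal server-side rendering sketch in the spirit of the Vue SSR guide cited below (the Express wiring and port are assumptions, not the full vue-ssr-demo setup):

```js
const Vue = require('vue')
const express = require('express')
const renderer = require('vue-server-renderer').createRenderer()

const server = express()

server.get('*', (req, res) => {
  const app = new Vue({
    data: { url: req.url },
    template: '<div id="app">Visited URL: {{ url }}</div>'
  })

  // Render the component tree to an HTML string on the server.
  renderer.renderToString(app, (err, html) => {
    if (err) {
      res.status(500).end('Internal Server Error')
      return
    }
    res.end(`<!DOCTYPE html><html><body>${html}</body></html>`)
  })
})

server.listen(8080)
```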

References:

  • vue-ssr-demo
  • Vue.js Server side rendering guide

4. Use a CDN for static resources

A content delivery network (CDN) is a group of web servers distributed across several geographical locations. As we all know, the further a server is from the user, the higher the latency. A CDN solves this problem by deploying servers in multiple locations, bringing users closer to a server and thus reducing request latency.

Principle of CDN

When a user visits a website without a CDN, the process looks like this:

  1. To resolve a domain name to an IP address, the browser sends a request to the local DNS.
  2. The local DNS sends requests to the root server, the top-level domain name server, and the authoritative name server in turn to obtain the IP address of the web server.
  3. The local DNS sends the IP address back to the browser, and the browser sends a request to the IP address of the web server to obtain resources.

If the site the user is visiting has a CDN deployed, the process looks like this:

  1. To resolve a domain name to an IP address, the browser sends a request to the local DNS.
  2. The local DNS sends requests to the root server, the top-level domain name server, and the authoritative name server in turn to obtain the IP address of the Global Server Load Balancing system (GSLB).
  3. The local DNS then sends a request to the GSLB. The GSLB determines the user's location based on the local DNS's IP address, picks a server load balancing system (SLB) close to the user, and returns that SLB's IP address to the local DNS.
  4. The local DNS sends the IP address of the SLB back to the browser, and the browser sends a request to the SLB.
  5. Based on the resources and addresses requested by the browser, the SLB selects the optimal cache server and sends its address back to the browser.
  6. The browser then redirects to the cache server based on the address sent by the SLB.
  7. If the cache server has the resource the browser needs, it sends it back to the browser. If not, it requests the resource from the origin server, sends it to the browser, and caches it locally.

References:

  • What is a CDN? What are the advantages of using CDN?
  • CDN principle analysis

5. Place CSS at the head of the file and JavaScript at the bottom

CSS and JS files placed in the head tag block rendering (CSS blocks rendering but does not block DOM parsing). If they take a long time to load and parse, the page stays blank. So JS files should go at the bottom, loading and executing after the HTML has been parsed.

So why is the CSS file in the header?

The CSS file goes in the head because if the HTML loaded first and the CSS only arrived later, the user would briefly see an unstyled, "ugly" page.

In addition, it is not impossible to put JS files in the head: just add the defer attribute to the script tag so that the file downloads in parallel and its execution is deferred until the HTML has been parsed.

6. Use icon fonts instead of image icons

An icon font packages icons into a font, so they can be used like text: you can set properties such as font-size and color, which is very convenient. Icon fonts are also vector-based, so they do not distort when scaled. Another advantage is that the generated files are extremely small.

Compressing font files

Use the fontmin-webpack plugin to compress font files (thanks to a fellow front-end developer for the tip).

References:

  • fontmin-webpack
  • Iconfont- Alibaba vector icon library

7. Make good use of the cache and do not load the same resource repeatedly

To keep the browser from re-requesting files it has already downloaded every time a user visits the site, we can control caching with the Expires or max-age response headers. Expires sets an absolute time before which the browser uses its cached copy instead of requesting the file again. max-age sets a relative time, which is why it is recommended over Expires.
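
As a sketch of what this looks like in practice (assuming an Express server and a one-year lifetime for fingerprinted assets; both are assumptions, not a universal recommendation):

```js
const express = require('express')
const path = require('path')
const app = express()

// Fingerprinted static assets can be cached for a long time.
app.use(express.static('dist', { maxAge: '365d' }))

// The HTML itself should revalidate, so users pick up new resource URLs after a deploy.
app.get('/', (req, res) => {
  res.setHeader('Cache-Control', 'no-cache')
  res.sendFile(path.join(__dirname, 'dist/index.html'))
})

app.listen(3000)
```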

But this creates a problem. What happens when the file is updated? How do I tell the browser to request a new file?

You can change the URL of the resource referenced in the page, so the browser actively abandons the cached copy and loads the new resource.

The specific method is to tie the resource URL to the file's content, so that the URL changes only when the file content changes; this gives precise, file-level cache control. What relates to the file's content? A digest (hash) of the file: the digest corresponds one-to-one with the file content, which gives a cache-control basis accurate to the granularity of a single file.

References:

  • Webpack + Express implements accurate file caching
  • Webpack – cache
  • Zhang Yunlong — How to develop and deploy front-end code in large companies?

8. Compress files

Compressed files can reduce file download time, so that users have a better experience.

Thanks to the growth of WebPack and Node, it is now very convenient to compress files.

In webpack, you can use the following plugins for compression (a configuration sketch follows the list):

  • JavaScript: UglifyPlugin
  • CSS: MiniCssExtractPlugin
  • HTML: HtmlWebpackPlugin
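
A production-mode configuration sketch using those plugins; the option values and the template path are illustrative assumptions (in webpack 4+ the JS minifier is usually wired through optimization.minimizer):

```js
const UglifyJsPlugin = require('uglifyjs-webpack-plugin')
const MiniCssExtractPlugin = require('mini-css-extract-plugin')
const HtmlWebpackPlugin = require('html-webpack-plugin')

module.exports = {
  mode: 'production',
  optimization: {
    // Minify JavaScript.
    minimizer: [new UglifyJsPlugin()]
  },
  module: {
    rules: [
      // Extract CSS into its own files instead of injecting it from JS.
      { test: /\.css$/, use: [MiniCssExtractPlugin.loader, 'css-loader'] }
    ]
  },
  plugins: [
    new MiniCssExtractPlugin({ filename: '[name].[contenthash].css' }),
    // Generate index.html and collapse whitespace in it.
    new HtmlWebpackPlugin({
      template: './public/index.html',
      minify: { collapseWhitespace: true }
    })
  ]
}
```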

In fact, we can do better by using gzip compression. The browser declares support by including gzip in the Accept-Encoding request header; of course, the server also has to support it.

Gzip is by far the most popular and effective compression method. For example, the app.js file I built with Vue was 1.4 MB, but compressed with gzip it was only 573 KB, a reduction of nearly 60%.

Here is how to configure gzip for webpack and Node.

Install the plugins

npm install compression-webpack-plugin --save-dev
npm install compression

Webpack configuration

const CompressionPlugin = require('compression-webpack-plugin');

module.exports = {
  plugins: [new CompressionPlugin()],
}

Node configuration

const compression = require('compression')
app.use(compression())
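
Putting it together, a minimal complete Express app might look like this (the port and static directory are assumptions):

```js
const express = require('express')
const compression = require('compression')

const app = express()

// gzip responses; register this before the routes and static middleware.
app.use(compression())
app.use(express.static('dist'))

app.listen(3000)
```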

9. Image optimization

(1) Lazy-load images

In the page, you don’t set a path for the image, and only load the actual image when it appears in the browser’s viewable area. This is lazy loading. For websites with many images, loading all the images at once will have a great impact on the user experience, so you need to use image lazy loading.

First, you can set the image so that it does not load when the page is not visible:

<img data-src="https://avatars0.githubusercontent.com/u/22117876?s=460&u=7bd8f32788df6988833da6bd155c3cfbebc68006&v=4">

When the page is visible, use JS to load the image:

const img = document.querySelector('img')
img.src = img.dataset.src

That loads the image; see the reference below for the full code.
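
As an alternative sketch (not the article's code), IntersectionObserver can perform the visibility check without scroll handlers; it assumes the same data-src convention as the snippet above:

```js
// Load each image only when it enters the viewport.
const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      const img = entry.target
      img.src = img.dataset.src // swap in the real URL
      obs.unobserve(img)        // stop watching once it has loaded
    }
  })
})

document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img))
```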

References:

  • Implementation principle of lazy image loading on the Web front end

(2) Responsive images

The advantage of responsive images is that the browser automatically loads the appropriate images based on the screen size.

Implemented with the picture element

<picture>
	<source srcset="banner_w1000.jpg" media="(min-width: 801px)">
	<source srcset="banner_w800.jpg" media="(max-width: 800px)">
	<img src="banner_w800.jpg" alt="">
</picture>

Implemented with @media

@media (min-width: 769px) {
  .bg {
    background-image: url(bg1080.jpg);
  }
}
@media (max-width: 768px) {
  .bg {
    background-image: url(bg768.jpg);
  }
}

(3) Resize images appropriately

For example, suppose you have a 1920 × 1080 image that is shown to the user as a thumbnail, with the full image displayed only when the user hovers over it. If the full-size image is downloaded up front and the user never hovers over the thumbnail, the time spent downloading it is wasted.

So we can optimize by using two images. Initially only the thumbnail is loaded, and the larger image is loaded when the user hovers over it. Another option is to lazy-load the large image: after everything else has loaded, manually set the large image's src to trigger its download.
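
A small sketch of the first option; the element id and the data-thumb/data-full attributes are made up for illustration:

```js
const img = document.querySelector('#preview') // hypothetical element
img.src = img.dataset.thumb                    // show the small image first

// Download the full-size image only if the user actually hovers.
img.addEventListener('mouseenter', () => {
  const full = new Image()
  full.src = img.dataset.full
  full.onload = () => { img.src = img.dataset.full }
}, { once: true })
```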

(4) Reduce image quality

For JPG images, for example, the difference between 100% quality and 90% quality is usually imperceptible, especially for background images. When I slice a background image in Photoshop, I often export it as a JPG compressed to 60% quality, and you can hardly tell the difference.

There are two ways to compress images: the webpack plugin image-webpack-loader, or an online compression service.

The image-webpack-loader configuration is shown below.

npm i -D image-webpack-loader

Webpack configuration

{
  test: /\.(png|jpe?g|gif|svg)(\?.*)?$/,
  use: [
    {
      loader: 'url-loader',
      options: {
        limit: 10000, /* images smaller than 10000 bytes are automatically inlined as base64 references */
        name: utils.assetsPath('img/[name].[hash:7].[ext]')
      }
    },
    /* compress the image */
    {
      loader: 'image-webpack-loader',
      options: {
        bypassOnDebug: true
      }
    }
  ]
}

(5) Use CSS3 effects instead of images where possible

Many images can be drawn with CSS effects (gradients, shadows, and so on), and in those cases CSS3 is the better choice, because the code is typically a tiny fraction of the image's size, often dozens of times smaller.

References:

  • Img images are used in Webpack

(6) Use WebP images

WebP's advantage lies in its better image compression algorithm, which produces smaller files while keeping image quality that looks the same to the eye. It supports both lossless and lossy compression, alpha transparency, and animation, and converting from JPEG and PNG gives consistently good, stable results.

References:

  • What are the advantages of WebP over PNG and JPG?

10. Use webpack to load code on demand, extract third-party libraries, and reduce redundant ES6-to-ES5 helper code

Lazy loading, or loading on demand, is a great way to optimize a web page or app. This approach essentially splits your code at logical breakpoints and then loads a new block only once the user has done something that requires it (or is about to). This speeds up the application's initial load and reduces its overall weight, since some blocks may never be loaded at all.

Generate file names from the file content, and dynamically import components with import() to achieve on-demand loading

You can do this by configuring the filename option of output. One of the placeholders available for filename is [contenthash], which creates a unique hash based on the file's content; when the content changes, [contenthash] changes too.

output: {
  filename: '[name].[contenthash].js',
  chunkFilename: '[name].[contenthash].js',
  path: path.resolve(__dirname, '../dist'),
},
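
The on-demand part is done with dynamic import(): webpack turns each import() call into a separate chunk, named with the contenthash pattern above, that is only downloaded when needed. A sketch using lazily loaded Vue Router routes (the component paths and chunk names are assumptions):

```js
import Vue from 'vue'
import VueRouter from 'vue-router'

Vue.use(VueRouter)

// Each route component becomes its own chunk and is only
// fetched the first time its route is visited.
const Home = () => import(/* webpackChunkName: "home" */ './views/Home.vue')
const About = () => import(/* webpackChunkName: "about" */ './views/About.vue')

export default new VueRouter({
  routes: [
    { path: '/', component: Home },
    { path: '/about', component: About }
  ]
})
```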

Extracting third-party libraries

Third-party libraries are generally stable and do not change often, so it is better to extract them into a separate bundle that can be cached long-term. This uses the cacheGroups option of webpack 4's splitChunks.

optimization: {
  runtimeChunk: {
    name: 'manifest' // split webpack's runtime code into a separate chunk
  },
  splitChunks: {
    cacheGroups: {
      vendor: {
        name: 'chunk-vendors',
        test: /[\\/]node_modules[\\/]/,
        priority: -10,
        chunks: 'initial'
      },
      common: {
        name: 'chunk-common',
        minChunks: 2,
        priority: -20,
        chunks: 'initial',
        reuseExistingChunk: true
      }
    }
  }
},
  • test: Controls which modules this cache group matches. If omitted, all modules are selected by default. Accepted value types: RegExp, String, and Function.
  • priority: The extraction weight; a larger number means a higher priority. A module may satisfy several cacheGroups, and it is extracted into the one with the highest priority.
  • reuseExistingChunk: Whether to reuse an existing chunk. If true and the modules contained in the current chunk have already been extracted, no new chunk is generated.
  • minChunks (default 1): The minimum number of times a code block must be referenced before it is split out.
  • chunks (default async): Accepted values are initial, async, and all.
  • name (the name of the packaged chunk): A string or a function (a function lets you customize the name based on conditions).

Reduce redundant ES6-to-ES5 helper code

Babel’s translated code needs some helper functions to perform the same function as the original code, such as:

class Person {}

Will be converted to:

"use strict";

function _classCallCheck(instance, Constructor) {
  if (!(instance instanceof Constructor)) {
    throw new TypeError("Cannot call a class as a function");
  }
}

var Person = function Person() {
  _classCallCheck(this, Person);
};

Here, _classCallCheck is a helper function, and if you have classes declared in multiple files, there will be multiple such helpers.

The @babel/runtime package declares all the helper functions that may be needed, and the @babel/plugin-transform-runtime plugin makes every file that needs a helper import it from @babel/runtime instead:

"use strict";

var _classCallCheck2 = require("@babel/runtime/helpers/classCallCheck");

var _classCallCheck3 = _interopRequireDefault(_classCallCheck2);

function _interopRequireDefault(obj) {
  return obj && obj.__esModule ? obj : { default: obj };
}

var Person = function Person() {
  (0, _classCallCheck3.default)(this, Person);
};

Here the classCallCheck helper is no longer declared inline; it is referenced from helpers/classCallCheck in @babel/runtime.

Installation

npm i -D @babel/plugin-transform-runtime @babel/runtime

Usage in the .babelrc file

"plugins": [
        "@babel/plugin-transform-runtime"
]

References:

  • Babel 7.1 describes the transform-Runtime polyfill env
  • Lazy loading
  • The Vue route is loaded lazily
  • Webpack cache
  • Learn about webPack4’s splitChunk plugin step by step

11. Reduce redrawings and rearrangements

Browser Rendering process

  1. Parse the HTML to generate a DOM tree.
  2. Parse CSS to generate CSSOM rule trees.
  3. Parse JS, manipulate DOM tree and CSSOM rule tree.
  4. The DOM tree is combined with the CSSOM rule tree to generate the render tree.
  5. Traverse the render tree to perform layout, calculating the position and size of each node.
  6. The browser sends data about all the layers to the GPU, which synthesizes the layers and displays them on the screen.

Rearrangement

Changing the position or size of a DOM element causes the browser to regenerate the render tree, a process called rearrangement.

Redraw

Drawing the nodes of the render tree to the screen is called redrawing. Not every change causes a rearrangement; changing a font color, for example, only triggers a redraw. Remember: a rearrangement always leads to a redraw, but a redraw does not lead to a rearrangement.

Both rearrange and redraw operations are expensive because the JavaScript engine thread and the GUI rendering thread are mutually exclusive, and only one of them can work at a time.

What operations cause rearrangements?

  • Add or remove visible DOM elements
  • Element position change
  • Element size change
  • Content change
  • Browser window size changed

How to reduce rearrangements and redraws?

  • When modifying styles from JavaScript, it is best not to write style properties directly; instead, switch the element's class to change its style.
  • If you need to perform a series of operations on a DOM element, you can take it out of the document flow and put it back when you are done. Hiding the element (display: none) or using a DocumentFragment is recommended for this (see the sketch after this list).
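
A small sketch of the second point: building nodes in a DocumentFragment and inserting them in one go, so the list causes a single rearrangement instead of one per item (the list data is made up for illustration):

```js
const list = document.querySelector('ul')
const fragment = document.createDocumentFragment()
const items = ['apple', 'banana', 'pineapple']

// Build all the nodes off-document first...
items.forEach((text) => {
  const li = document.createElement('li')
  li.textContent = text
  fragment.appendChild(li)
})

// ...then insert them in a single operation, triggering one rearrangement.
list.appendChild(fragment)
```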

12. Use event delegates

Event delegation takes advantage of event bubbling: a single event handler manages all events of a given type. Most mouse and keyboard events, since they bubble, are suitable for event delegation, and using it saves memory.

<ul>
  <li>apple</li>
  <li>banana</li>
  <li>pineapple</li>
</ul>

// good
document.querySelector('ul').onclick = (event) => {
  const target = event.target
  if (target.nodeName === 'LI') {
    console.log(target.innerHTML)
  }
}

// bad
document.querySelectorAll('li').forEach((e) => {
  e.onclick = function() {
    console.log(this.innerHTML)
  }
}) 

13. Be aware of program locality

Well-written computer programs often have good locality: they tend to reference data items that are near recently referenced items, or the recently referenced items themselves. This tendency is called the principle of locality. Programs with good locality run faster than programs with poor locality.

Locality usually takes two different forms:

  • Temporal locality: In a program with good temporal locality, a memory location that has been referenced once is likely to be referenced many times in the near future.
  • Spatial locality: In a program with good spatial locality, if a memory location is referenced once, the program is likely to reference a nearby memory location in the near future.

Example of temporal locality

function sum(arry) {
	let i, sum = 0
	let len = arry.length

	for (i = 0; i < len; i++) {
		sum += arry[i]
	}

	return sum
}

In this example, the variable sum is referenced in each iteration of the loop, so it has good temporal locality.

Example of spatial locality

A program with good spatial locality

// A two-dimensional array
function sum1(arry, rows, cols) {
	let i, j, sum = 0

	for (i = 0; i < rows; i++) {
		for (j = 0; j < cols; j++) {
			sum += arry[i][j]
		}
	}
	return sum
}

A program with poor spatial locality

// A two-dimensional array
function sum2(arry, rows, cols) {
	let i, j, sum = 0

	for (j = 0; j < cols; j++) {
		for (i = 0; i < rows; i++) {
			sum += arry[i][j]
		}
	}
	return sum
}

Looking at the two spatial-locality examples above: accessing an array's elements sequentially, one after another as in the first example, is called a reference pattern with step size 1. Visiting every kth element of an array is called a reference pattern with step size k. In general, spatial locality decreases as the step size increases.

What is the difference between the two examples? The first scans the array row by row: it finishes one row and then moves on to the next. The second scans the array column by column: after reading one element it jumps straight to the element in the same column of the next row.

Arrays are stored in memory in row order. As a result, the example that scans the array row by row has a step size of 1 and good spatial locality, while the other example has a step size of rows, which gives poor spatial locality.

The performance test

Operating environment:

  • cpu: i5-7400
  • Browser: Chrome 70.0.3538.110

A two-dimensional array of length 9000 (each sub-array also has length 9000) was used to test spatial locality 10 times, and the times (in milliseconds) were averaged. The results are as follows:

The examples used are the two spatial locality examples described above

| Step size 1 | Step size 9000 |
| ----------- | -------------- |
| 124 | 2316 |

Based on the test results above, the step-size-1 version runs an order of magnitude faster than the step-size-9000 version.

Conclusion:

  • Programs that refer to the same variable repeatedly have good temporal locality
  • For a program with a reference pattern of step size k, the smaller the step size is, the better the spatial locality is. Programs that jump around in memory with long strides have poor spatial locality

References:

  • Deep understanding of computer systems

14. if-else vs. switch

The more conditions there are, the more it makes sense to use switch instead of if-else.

if (color == 'blue') {
} else if (color == 'yellow') {
} else if (color == 'white') {
} else if (color == 'black') {
} else if (color == 'green') {
} else if (color == 'orange') {
} else if (color == 'pink') {
}

switch (color) {
    case 'blue':

        break
    case 'yellow':

        break
    case 'white':

        break
    case 'black':

        break
    case 'green':

        break
    case 'orange':

        break
    case 'pink':

        break
}

In cases like this, switch is the better choice. If the color value is pink, the if-else chain has to make 7 comparisons, while switch only needs one. switch also reads better.

In terms of when to use each: once there are more than two condition values, switch is better. However, if-else can do things switch cannot; for example, when a branch depends on multiple conditions at once, switch cannot be used.

15. A lookup table

When there are a lot of conditional statements and switch and if-else are not the best options, try lookup tables. Lookup tables can be built using arrays and objects.

switch (index) {
    case '0':
        return result0
    case '1':
        return result1
    case '2':
        return result2
    case '3':
        return result3
    case '4':
        return result4
    case '5':
        return result5
    case '6':
        return result6
    case '7':
        return result7
    case '8':
        return result8
    case '9':
        return result9
    case '10':
        return result10
    case '11':
        return result11
}

You can convert this switch statement into a lookup table

const results = [result0,result1,result2,result3,result4,result5,result6,result7,result8,result9,result10,result11]

return results[index]

If the conditional statement is not a value but a string, you can use an object to create a lookup table

const map = {
  red: result0,
  green: result1,
}

return map[color]

16. Avoid page jams

60 FPS and the device refresh rate

Most devices currently have a screen refresh rate of 60 times per second. Therefore, if there is an animation or gradient effect in the page, or if the user is scrolling the page, the rate at which the browser renders the animation or each frame of the page also needs to be consistent with the refresh rate of the device screen.

The budget for each frame is just over 16 milliseconds (1 second / 60 = 16.66 ms). In reality the browser has housekeeping work of its own, so all of your work needs to finish within about 10 milliseconds. If this budget is exceeded, the frame rate drops and content judders on screen. This phenomenon is commonly called jank, and it hurts the user experience.

Suppose you modify the DOM in JavaScript, triggering style changes, a rearrangement, a redraw, and finally painting to the screen. If any of these steps takes too long, the whole frame takes too long to render and the average frame rate drops. If a frame takes 50 ms, the frame rate is 1s / 50ms = 20 FPS, and the page looks janky.

For some long running JavaScript, we can use timers to slice and delay execution.

for (let i = 0, len = arry.length; i < len; i++) {
	process(arry[i])
}

If process() is expensive, or the array has too many elements, or both, the loop above may run for too long; in that case, try splitting the work into chunks.

const todo = arry.concat()
setTimeout(function() {
  process(todo.shift())
  if (todo.length) {
    setTimeout(arguments.callee, 25)
  } else {
    callback(arry)
  }
}, 25)

If you’re interested in learning more, check out High-performance JavaScript chapter 6 and Efficient Front-end: Web Efficient Programming and Optimization Practices Chapter 3.

References:

  • Rendering performance

17. Use requestAnimationFrame for visual changes

From point 16, we know that most devices have a screen refresh rate of 60 times per second, which means the average time per frame is 16.66 milliseconds. When animating in JavaScript, it is best to start at the beginning of each frame. The only way to ensure that JavaScript runs at the beginning of the frame is to use the requestAnimationFrame.

/**
 * If run as a requestAnimationFrame callback, this
 * will be run at the start of the frame.
 */
function updateScreen(time) {
  // Make visual updates here.
}

requestAnimationFrame(updateScreen);

If we animate with setTimeout or setInterval, the callback will run at some point in the frame, perhaps right at the end, which can often cause us to lose frames and get stuck.

References:

  • Optimize JavaScript execution

18. Use Web Workers

A Web Worker runs on a separate worker thread, independent of the main thread, so it can perform tasks without interfering with the user interface. A worker can send messages to the JavaScript code that created it by posting them to an event handler specified by that code (and vice versa).

Web Worker is suitable for long-running scripts that work with pure data or have nothing to do with the browser UI.

Creating a new worker is simple: just specify the URI of the script to run in the worker thread (main.js):

var myWorker = new Worker('worker.js');
// You can send a message to the worker via the postMessage() method and the onMessage event.
first.onchange = function() {
  myWorker.postMessage([first.value,second.value]);
  console.log('Message posted to worker');
}

second.onchange = function() {
  myWorker.postMessage([first.value,second.value]);
  console.log('Message posted to worker');
}

In the worker, we can write an event handler that responds when the message is received (worker.js):

onmessage = function(e) {
  console.log('Message received from main script');
  var workerResult = 'Result: ' + (e.data[0] * e.data[1]);
  console.log('Posting message back to main script');
  postMessage(workerResult);
}

The onmessage handler executes as soon as the message is received, and the message itself is available in the handler as the event's data property. Here we simply multiply the two numbers and use postMessage() again to pass the result back to the main thread.

Back on the main thread, we use onmessage again to respond to the message the worker sent back:

myWorker.onmessage = function(e) {
  result.textContent = e.data;
  console.log('Message received from worker');
}

Here we take the data of the message event and set it to the textContent of the result, so the user can see the result of the operation directly.

Note that inside a worker you cannot directly manipulate DOM nodes, nor use the default methods and properties of the window object. However, many things under window are still available, including WebSockets, IndexedDB, and data storage mechanisms such as Firefox OS's dedicated Data Store API.

References:

  • Web Workers

19. Use bit operations

Numbers in JavaScript are stored in a 64-bit format following the IEEE-754 standard, but bitwise operations convert them to a signed 32-bit format. Even with that conversion, bitwise operations are much faster than the equivalent mathematical and Boolean operations.

Modulo

Since the lowest bit of an even number is 0 and that of an odd number is 1, the modulo operation can be replaced with a bitwise AND.

if (value % 2) {
  // odd
} else {
  // even
}

// bitwise version
if (value & 1) {
  // odd
} else {
  // even
}
Truncating to an integer
~~10.12 // 10
~~10 // 10
~~'1.5' // 1
~~undefined // 0
~~null // 0
A bitmask
const a = 1
const b = 2
const c = 4
const options = a | b | c

With these options defined, you can use the bitwise AND operator to check whether a, b, or c is present in options.

// Whether option b is among the options
if (b & options) {
	...
}

20. Do not override native methods

No matter how optimized your JavaScript code is, it can’t beat native methods. Because native methods are written in a low-level language (C/C++) and are compiled into machine code as part of the browser. Use native methods when they are available, especially mathematical and DOM operations.

21. Reduce the complexity of CSS selectors

(1) Browsers read selectors from right to left.

Here is an example:

#block .text p {
	color: red;
}
  1. Find all P elements.
  2. Find out if the element in result 1 has a parent element with the class name text
  3. Find out if the element in result 2 has a parent element with ID block

(2). CSS selector priority

Inline > ID selector > Class selector > Tag selector

Based on these two points, we can draw some conclusions:

  1. The shorter the selector, the better.
  2. Try to use higher-priority selectors, such as ID and class selectors.
  3. Avoid wildcard characters (*).

Finally, as far as I can tell, there is usually no need to optimize CSS selectors, because the performance difference between the fastest and the slowest selectors is very small.

References:

  • CSS selector performance
  • Optimizing CSS: ID Selectors and Other Myths

22. Use Flexbox instead of the older layout model

In early CSS layouts we positioned elements absolutely, relatively, or with floats. Now we have a new layout model, Flexbox, whose advantage is better performance than the earlier approaches.

The screenshot below shows the layout overhead of using a float on 1300 boxes:

Then let’s recreate this example with flexbox:

Now, for the same number of elements and the same visual appearance, the layout takes much less time (3.5ms and 14ms, respectively, in this case).

Flexbox compatibility is a bit of a problem though, not all browsers support it, so use it with caution.

Browser compatibility:

  • Chrome 29+
  • Firefox 28+
  • Internet Explorer 11
  • Opera 17+
  • Safari 6.1+ (prefixed with -webkit-)
  • Android 4.4+
  • iOS 7.1+ (prefixed with -webkit-)

References:

  • Use Flexbox instead of the older layout model

23. Use the Transform and opacity properties to animate

In CSS, changes to the transform and opacity properties do not trigger rearrangement or redrawing; they can be handled by the compositor alone.

References:

  • Use the Transform and opacity property changes to animate

24. Use rules wisely to avoid over-optimization

There are two main types of performance optimization:

  1. Load time optimization
  2. Runtime optimization

Of the above 23 recommendations, the first 10 are load-time optimizations and the last 13 are run-time optimizations. Generally you do not need to apply all 23 rules; it is best to make targeted adjustments based on your site's user base, saving effort and time.

You have to find the problem before you can solve it, or you can’t start. So it’s a good idea to investigate the loading and running performance of your site before tuning.

Checking loading Performance

How well a website loads depends on the white screen time and the first screen time.

  • White-screen time: the time from entering the URL until the page starts to display.
  • First-screen time: the time from entering the URL until the first screen of the page is fully rendered.

To get the white-screen time, place the following script just before `</head>`.

<script>
	new Date() - performance.timing.navigationStart
</script>

To get the first-screen time, run `new Date() - performance.timing.navigationStart` in the window.onload event.
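
A sketch combining both measurements (using onload for the first-screen time is an approximation that assumes the above-the-fold content is ready by then):

```js
// White-screen time: run this inline, just before </head>.
const whiteScreen = new Date() - performance.timing.navigationStart

// First-screen time (approximate): measure in window.onload.
window.onload = () => {
  const firstScreen = new Date() - performance.timing.navigationStart
  console.log({ whiteScreen, firstScreen })
}
```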

Checking runtime performance

With Chrome’s developer tools, you can see how your site performs at runtime.

Open the website, press F12 and select the Performance panel, then click the gray dot in the top-left corner; it turns red to indicate that recording has started. Use the site for a while, then click Stop to see a performance report for that period. Red blocks indicate dropped frames; green means the FPS is good. For details on how to use the Performance panel, please search online, as space here is limited.

By checking the loading and running performance, you should have a good idea of the performance of your site. Use the above 23 tips to optimize your website as much as possible. Go!

References:

  • performance.timing.navigationStart

Other References

  • Why is performance important
  • High-performance website construction guide
  • Authoritative Guide to Web Performance
  • High-performance JavaScript
  • Efficient Front end: Web efficient programming and optimization practices

More articles to come; you are welcome to follow.