Abstract: Performance is one of the most important aspects of the user experience.
- Author: Sailing in the waves
Reproduced by Fundebug with authorization; copyright belongs to the original author.
Introduction
The Internet has a famous "eight-second rule": users get impatient when a web page takes more than eight seconds to load, and they give up if it takes too long. Most users expect a page to load in under 2 seconds, and in fact every extra second of loading time can cost you roughly 7% of your users. The "8 seconds" is not a hard limit; it simply shows website developers how much loading time matters. So how can we optimize page performance and speed up page loading? That is the main question this article discusses. Performance optimization is a broad topic with no single standard answer, and it is not easy to cover every aspect. This article focuses on just a few key points; below are the common approaches to performance tuning that I have summarized:
1. Resource compression and merging
This mainly covers the following: HTML compression, CSS compression, JS compression and obfuscation, and file merging. Resource compression removes unnecessary characters, such as carriage returns and spaces, from files. When you write code in an editor you use indentation and comments, which undoubtedly make the code concise and readable, but they also add extra bytes to the files that are served.
1. HTML compression
HTML compression removes characters that are meaningful in the source text but are not rendered in HTML, such as spaces, tabs, and newlines, as well as other content with no display value, such as HTML comments.
How to compress HTML:
- Use online sites for compression (usually not used during development)
- Use the html-minifier tool for Node.js (see the sketch after this list)
- Compress HTML when the back-end template engine renders it
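As a minimal sketch of the Node.js approach (assuming the html-minifier package is installed from npm; the sample HTML string and option set are purely illustrative), compression can be done programmatically:

const { minify } = require('html-minifier');

const html = '<div>   <p title="demo">  Hello   world  </p>   <!-- a comment --> </div>';

const result = minify(html, {
    collapseWhitespace: true,   // strip spaces, tabs and newlines between tags
    removeComments: true        // strip HTML comments
});

console.log(result);            // e.g. <div><p title="demo">Hello world</p></div>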
2. CSS compression
CSS compression is essentially the removal of invalid code and the consolidation of semantically redundant rules.
How to compress CSS:
- Use online sites for compression (usually not used during development)
- Use the html-minifier tool
- Use clean-css to compress the CSS (see the sketch below)
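A minimal sketch of the clean-css approach (assuming the clean-css package is installed; the sample CSS is illustrative):

const CleanCSS = require('clean-css');

const css = '/* button styles */ .btn {   color: #ffffff;   margin: 0px; }';

const output = new CleanCSS().minify(css);

console.log(output.styles);     // minified CSS, roughly: .btn{color:#fff;margin:0}
console.log(output.stats);      // size statistics: originalSize, minifiedSize, efficiency, ...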
3. JS compression and obfuscation
JS compression and obfuscation mainly involve the following:
- Removing invalid characters
- Removing comments
- Shortening and optimizing code semantics
- Code protection (the code logic is deliberately scrambled, reducing its readability, which is the point of obfuscation)
How to compress and obfuscate JS:
- Use online sites for compression (usually not used during development)
- Use the html-minifier tool
- Use UglifyJS2 to compress and obfuscate JS (see the sketch below)
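A minimal sketch of JS compression and obfuscation. The article names UglifyJS2; this sketch uses the current uglify-js package, whose API differs slightly, and the sample code is illustrative only:

const UglifyJS = require('uglify-js');

const code = 'function addNumbers(firstValue, secondValue) {\n' +
             '    // this comment will be removed\n' +
             '    var temporaryResult = firstValue + secondValue;\n' +
             '    return temporaryResult;\n' +
             '}';

const result = UglifyJS.minify(code, {
    compress: true,   // dead-code elimination, expression simplification, ...
    mangle: true      // rename (obfuscate) local identifiers to short names
});

if (result.error) throw result.error;
console.log(result.code);   // roughly: function addNumbers(n,r){return n+r}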
In practice, CSS compression and JS compression/obfuscation bring much larger gains than HTML compression, because a typical site has far more CSS and JS code than HTML, so the traffic saved by compressing CSS and JS is very noticeable. For large companies, HTML compression is optional, but CSS compression and JS compression/obfuscation are a must!
4. Merge files
Compared with merged requests, unmerged requests have the following disadvantages:
- Extra request round trips are inserted between files, adding n-1 network delays
- They are more heavily affected by packet loss
- Connections may drop out of keep-alive mode: when requests pass through proxy servers the connection may be disconnected, so the keep-alive state cannot be maintained the whole time
Compressing and merging CSS and JS reduces the number of HTTP requests a site makes, but merging files brings problems of its own: slower first-screen rendering and broader cache invalidation. So how do we deal with this? Merge common libraries into their own file, and merge the files of different pages separately.
How to merge files:
- Use an online site for file consolidation
- Merge files with Node.js build tools such as gulp or FIS3 (see the sketch below)
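A minimal gulpfile.js sketch of merging JS files with gulp and gulp-concat (both assumed installed; the src/js/ and dist/ directory layout is just an example):

const { src, dest } = require('gulp');
const concat = require('gulp-concat');

// concatenate every JS file under src/js into a single dist/bundle.js
function mergeScripts() {
    return src('src/js/*.js')
        .pipe(concat('bundle.js'))
        .pipe(dest('dist/'));
}

exports.default = mergeScripts;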
2. Asynchronous loading of non-core code
1. Ways of asynchronous loading
There are three ways to load a script asynchronously: async, defer, and dynamically creating a script tag.
① async
- async is a new attribute in HTML5 and requires browser support (Chrome, Firefox, IE 10+)
- The async attribute specifies that the script is executed asynchronously as soon as it is available
- The async attribute applies only to external scripts
- When there are multiple scripts, this method does not guarantee that they execute in order
<script type="text/javascript" src="xxx.js" async="async"></script>
② defer
- Compatible with all browsers
- The defer attribute specifies that script execution is deferred until the document has finished parsing
- When there are multiple scripts, this method guarantees that all scripts with the defer attribute execute in order
- If a script does not change the content of the document, you can add the defer attribute to its script tag to speed up document processing (see the example below)
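Analogous to the async example above, a deferred external script is written like this:

<script type="text/javascript" src="xxx.js" defer="defer"></script>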
③ Create a script tag dynamically
Before defer and async existed, the way to load a script asynchronously was to create the script element dynamically: wait for window.onload to make sure the page has loaded, then insert the script tag into the DOM, as follows:
function addScriptTag(src){
    var script = document.createElement('script');
    script.setAttribute("type", "text/javascript");   // set the script type
    script.src = src;
    document.body.appendChild(script);                // appending the tag starts the download and execution
}
window.onload = function(){
    addScriptTag("js/index.js");
}
2. Differences between defer and async
1) defer scripts execute after the HTML has been parsed; if there are several, they execute in the order in which they were loaded
2) async scripts execute immediately after they finish loading; if there are several, the execution order is unrelated to the load order (see the snippet below)
(The original article illustrates this with a diagram in which the blue line represents the network download of a script, the red line its execution time, and the green line HTML parsing.)
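To make the difference concrete, here is a small illustrative snippet (the file names are placeholders): a.js and b.js are guaranteed to run in that order once parsing finishes, while c.js and d.js each run as soon as they finish downloading, in whichever order that happens.

<!-- defer: executed after HTML parsing, in document order -->
<script src="a.js" defer></script>
<script src="b.js" defer></script>
<!-- async: each script executes as soon as it finishes downloading; order is not guaranteed -->
<script src="c.js" async></script>
<script src="d.js" async></script>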
3. Use the browser cache
For Web applications, caching is a great way to improve page performance while reducing server stress.
Browser cache types
1. Strong cache: no request is sent to the server; the resource is read directly from the local cache. In the Network panel of the Chrome console you can see that such a request returns status code 200 and that its Size column shows "from disk cache" or "from memory cache".
Related headers:
Expires: the expiration time in the response header. If the browser reloads the resource before this time, the strong cache is hit. The value is an absolute time written as a GMT-format string, e.g. Expires: Thu, 21 Jan 2018 23:39:02 GMT
Cache-Control: configures the cache with a relative time, in seconds. When set to max-age=300, for example, the strong cache is hit if the resource is loaded again within 5 minutes of the time the request correctly returned (a time the browser also records), e.g. Cache-Control: max-age=300
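As a minimal sketch (plain Node.js, no framework; the file name, port, and max-age value are purely illustrative), a server might set both headers like this:

const http = require('http');
const fs = require('fs');

http.createServer(function (req, res) {
    if (req.url === '/logo.png') {
        // strong cache: HTTP/1.1 relative expiry plus an HTTP/1.0 absolute-time fallback
        res.setHeader('Cache-Control', 'max-age=300');
        res.setHeader('Expires', new Date(Date.now() + 300 * 1000).toUTCString());
        res.setHeader('Content-Type', 'image/png');
        fs.createReadStream('./logo.png').pipe(res);
    } else {
        res.statusCode = 404;
        res.end();
    }
}).listen(3000);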
In a nutshell: Expires is a product of HTTP/1.0, Cache-Control of HTTP/1.1, and when both exist Cache-Control takes precedence over Expires. In environments that do not support HTTP/1.1, Expires is still useful, so today it mainly lives on as a compatibility fallback. A strong cache decides whether to use the cached copy purely on whether a certain time or period has passed; it does not care whether the file on the server has been updated, which means the loaded file may not be the latest content on the server. How, then, do we know whether the server-side content has been updated relative to the client's copy? This is where the negotiation cache strategy comes in.
2. Negotiation cache: a request is sent to the server, and the server decides from certain request headers whether the negotiation cache is hit. Note that the negotiation cache must be used together with Cache-Control.
Related headers:
① Last-Modified and If-Modified-Since: when a resource is requested for the first time, the server returns the resource to the client and includes the resource's last modification time in the response headers in the form "Last-Modified: <GMT time>", for example:
Last-Modified: Fri, 22 Jul 2016 01:47:00 GMT
The client keeps this value as a tag for the resource. On the next request for the same resource, it sends the value back to the server in the If-Modified-Since request header so the server can check it. If the value matches the resource's current modification time on the server, the resource has not been modified: the server returns status code 304 with an empty body, saving the data transfer. If the two times differ, the server sends the resource again with a 200 status code, just like on the first request. This way, resources are not sent to the client repeatedly, yet the client still receives the latest version whenever the server-side copy changes. A 304 response is usually much smaller than the static resource itself, which saves network bandwidth.
But Last-Modified has some disadvantages:
ⅰ. Some servers cannot obtain an exact modification time
ⅱ. A file's modification time may change even though its content has not
Since caching based on the file modification time alone is not sufficient, can the caching policy be decided directly from whether the file content has changed? That is exactly what ETag and If-None-Match do.
② ETag and If-None-Match: ETag is a response header returned by the server the last time the resource was loaded; it is a unique identifier of the resource, and it is regenerated whenever the resource changes. The next time the browser requests that resource, it puts the previously returned ETag value into the If-None-Match request header. The server only needs to compare the If-None-Match value sent by the client with the resource's current ETag to decide whether the resource has changed relative to the client's copy. If the server finds that the ETags do not match, it sends the new resource (including a new ETag) to the client in an ordinary 200 response; if they match, it returns 304 to tell the client to keep using its local cache.
A comparison between the two:
First, in accuracy, ETag is superior to Last-Modified. Last-Modified has a granularity of one second: if a file changes several times within the same second, its Last-Modified value does not actually change, whereas the ETag changes every time, guaranteeing accuracy. In addition, if the server is load-balanced, the Last-Modified values generated by different servers may be inconsistent.
Second, in performance, ETag is inferior to Last-Modified, because Last-Modified only needs to record a timestamp, while ETag requires the server to compute a hash value with some algorithm. Third, when validating, the server gives priority to ETag.
Caching mechanism
The strong cache takes precedence over the negotiation cache. If the strong cache (Expires and Cache-Control) is in effect, the cached copy is used directly; if not, the negotiation cache (Last-Modified/If-Modified-Since and ETag/If-None-Match) is used, and the server decides whether it is hit. If the negotiation cache misses, the cache is invalid: the result is fetched again and stored in the browser cache. If it hits, the server returns 304 and the cached copy continues to be used.
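To make the flow concrete, here is a minimal sketch of a negotiated cache based on ETag/If-None-Match in plain Node.js (the MD5-based ETag, file name, and port are illustrative; in practice a framework or static-file middleware usually handles this):

const http = require('http');
const fs = require('fs');
const crypto = require('crypto');

http.createServer(function (req, res) {
    const body = fs.readFileSync('./index.js');                            // resource being served
    const etag = crypto.createHash('md5').update(body).digest('hex');      // identifier of the current content

    if (req.headers['if-none-match'] === etag) {
        res.statusCode = 304;    // content unchanged: empty body, client keeps its cached copy
        res.end();
        return;
    }

    res.setHeader('ETag', etag);
    res.setHeader('Cache-Control', 'no-cache');     // always revalidate with the server before reuse
    res.setHeader('Content-Type', 'application/javascript');
    res.end(body);                                   // 200 with the full content and the new ETag
}).listen(3000);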
The impact of user behavior on browser caching
1. Address-bar navigation and link jumps are normal user behavior and go through the browser caching mechanism as described above.
2. On an F5 refresh, the browser sets max-age to 0, skipping the strong-cache check and going straight to the negotiation-cache check.
3. On a Ctrl+F5 refresh, both the strong cache and the negotiation cache are skipped and resources are pulled directly from the server.
To learn more about caching, see "Thoroughly understand the browser caching mechanism" in the references.
4. Use a CDN
For large web applications, the pursuit of speed does not stop at the browser cache, because the browser cache only speeds up the second and later visits. To accelerate the first visit we must optimize at the network level, and the most common means is CDN (Content Delivery Network) acceleration. By caching static resources (JavaScript, CSS, images, and so on) on CDN nodes close to the user and on the same network operator, we not only improve the user's access speed but also save the origin server's bandwidth and reduce its load.
How does a CDN speed things up?
The CDN provider deploys nodes across regions and caches the accelerated content at the edge of the network, so users in different places reach the nearest CDN node over the nearest network route. When a request arrives at a CDN node, the node checks whether its cached content is still valid; if it is, it responds to the user immediately with the cached content, speeding up the response. If the node's cache has expired, it fetches the latest resource from our origin (content source) server according to the service configuration, returns it to the user, and caches it for subsequent users. So as long as one user in a region has loaded the resource and the CDN has built up a cache, every later user in that region benefits.
5. Pre-resolve DNS
Resource preloading is another performance optimization technique: it lets us tell the browser in advance that certain resources are likely to be needed in the future.
DNS pre-resolution tells the browser that we may fetch resources from a particular domain in the future, so that when the browser actually uses a resource on that domain the DNS lookup can complete as quickly as possible. For example, if we may later fetch images or audio from example.com, we can add the following in the <head> tag at the top of the document:
<link rel="dns-prefetch" href="//example.com">
When we then request a resource from that domain, we no longer have to wait for DNS resolution. This technique is especially useful for third-party resources. A single line of code tells supporting browsers to pre-resolve the domain, so that by the time the browser actually requests a resource on it the DNS lookup is already done, saving valuable time. Note that browsers automatically enable DNS prefetching for the href of <a> tags, so domains that already appear in <a> tags do not need a manual <link> in the <head>. However, this automatic prefetching does not work on HTTPS pages and has to be forced on with a meta tag; the restriction prevents eavesdroppers from inferring the host names of hyperlinks displayed on an HTTPS page from DNS prefetch requests. The following line forces DNS prefetching back on for <a> tags:
<meta http-equiv="x-dns-prefetch-control" content="on">
If you find this article helpful, please feel free to like and follow my GitHub blog. Thank you very much!
References
- Front-end performance optimization – Resource preloading
- CDN deployment for front-end engineering
- Issue 887. Summary of browser caching mechanisms
- Thoroughly understand the browser caching mechanism