Preface
Performance optimization has always been a hot topic in front-end development, and it is a necessary skill for any good front-end developer. This article summarizes common front-end optimization techniques under four general directions: reducing the number of HTTP requests, reducing the size of a single request, rendering optimization, and resource loading optimization, with many smaller techniques under each. (See the table of contents for details.)
Reduce the number of HTTP requests
1. Browser cache policy
The browser caching mechanism has four layers, listed here in the order of priority in which they are checked when a resource is requested:
- Memory Cache: the cache stored in memory. It is the first cache the browser tries to hit, and the fastest kind in terms of efficiency. Browsers are "frugal": Base64 images can almost always be found in the memory cache, which can be seen as the browser "protecting itself" from repeated rendering costs. Small JS and CSS files also have a good chance of being written to memory; by contrast, large JS and CSS files do not get this treatment, because memory is limited and they tend to be written straight to disk.
- Service Worker Cache: a Service Worker is a JavaScript thread independent of the main thread. It is detached from the browser window and therefore cannot access the DOM directly. This independence keeps the Service Worker's "personal behavior" from interfering with page performance. This "behind-the-scenes worker" can implement offline caching, message push, network proxying, and other features. The offline cache we build with a Service Worker is called the Service Worker Cache (a minimal sketch follows this list).
- HTTP Cache: divided into the strong cache and the negotiated cache. The strong cache has higher priority; the negotiated cache is consulted only when the strong cache misses. The strong cache is controlled with the Expires and Cache-Control fields in the HTTP headers: when a request is made again, the browser decides from expires and cache-control whether the target resource "hits" the strong cache, and if so, it takes the resource straight from its own cache without talking to the server. The negotiated cache depends on communication between server and browser: the browser asks the server about the cached copy to decide whether to re-download the complete response or use the locally cached resource. If the server indicates the resource is Not Modified, the request is redirected to the browser cache, and the status code of the network request is 304.
- Push Cache: the cache that HTTP/2 provides in the Server Push phase. It is the last line of defense: the browser only asks the Push Cache if the Memory Cache, HTTP Cache, and Service Worker Cache all miss. The Push Cache exists only for the duration of the session and is released when the session ends. Different pages can share the same Push Cache as long as they share the same HTTP/2 connection.
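To make the Service Worker Cache concrete, here is a minimal offline-cache sketch; the cache name and file list are placeholders:

```js
// sw.js: a minimal Service Worker offline cache
const CACHE_NAME = 'static-v1'; // placeholder cache name

self.addEventListener('install', (event) => {
  // Pre-cache a fixed list of static assets at install time (placeholder paths)
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) =>
      cache.addAll(['/index.html', '/main.css', '/main.js'])
    )
  );
});

self.addEventListener('fetch', (event) => {
  // Answer from the Service Worker Cache first, fall back to the network
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});
```

The page registers the worker with navigator.serviceWorker.register('/sw.js').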
2. CDN
A CDN has two core functions: caching and back-to-origin. "Caching" means we copy a resource onto the CDN servers; "back-to-origin" means that when the CDN finds it does not have the resource (usually because the cached copy has expired), it turns to the origin server (or its upstream server) to fetch it. A CDN is usually used for static resources: JS, CSS, images, and other assets that need no computation by the business server. Users fetch them from a closer, better-placed server, which speeds up access and reduces load on the origin. In addition, the CDN domain must be different from the main business domain; otherwise, cookies set under the shared domain would travel with every static request, wasting bandwidth. Putting the CDN under a separate domain neatly avoids these unnecessary cookies.
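In practice, pointing static assets at the CDN domain is often a build setting. A minimal webpack sketch, assuming a hypothetical CDN domain:

```js
// webpack.config.js: emit asset URLs on a separate, cookie-free CDN domain
// (cdn.example.com is a placeholder)
module.exports = {
  output: {
    publicPath: 'https://cdn.example.com/assets/',
  },
};
```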
3. Image processing
- Sprites: a once-popular technique that combines several of a site's images into a single image. It reduces the number of HTTP requests, but when the combined image is large, loading it in one go can be slow. With the rise of icon fonts and SVG, the technique has gradually faded from the scene.
- Base64 image encoding: embedding an image's content in the HTML in Base64 form saves an HTTP request. However, because Base64 uses an 8-bit character to carry 6 bits of information, the encoded size is roughly 33% larger than the original (see the sketch after this list).
- Font icons: an icon font maps each icon to a Unicode code point; the browser looks up the glyph in the font file by that code point, so an icon is delivered as text instead of as an image.
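The ~33% Base64 overhead mentioned above is easy to verify with Node, assuming a hypothetical icon.png:

```js
// Base64 carries 6 bits of information per 8-bit character,
// so the output is roughly 4/3 the size of the input
const fs = require('fs');

const raw = fs.readFileSync('icon.png'); // hypothetical image file
const base64 = raw.toString('base64');

console.log(raw.length);    // e.g. 3000 bytes
console.log(base64.length); // e.g. 4000 characters, about 33% larger
```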
4. Merge files
Combine the JS and CSS shared across pages into one larger file each, and bundle the remaining JS and CSS separately according to the needs of each page.
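Today this is usually the bundler's job. A minimal webpack sketch with placeholder entry paths:

```js
// webpack.config.js: one bundle per page, shared modules split into a common chunk
module.exports = {
  entry: {
    home: './src/home.js',   // placeholder paths
    about: './src/about.js',
  },
  optimization: {
    // Modules required by several pages are extracted and downloaded only once
    splitChunks: { chunks: 'all' },
  },
};
```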
5. Reduce redirection
Avoid redirects as much as possible. When a page is redirected, delivery of the entire HTML document is delayed; nothing renders and no components download until the HTML arrives, degrading the user experience. If you must redirect, such as from HTTP to HTTPS, use a 301 permanent redirect rather than a 302 temporary one. With a 302, every visit to the HTTP URL goes through the redirect again; with a 301, after the first redirect from HTTP to HTTPS, the browser returns the HTTPS page directly on every later visit to the HTTP URL.
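A minimal Node sketch of the HTTP-to-HTTPS case, with a placeholder domain:

```js
// Answer every plain-HTTP request with a permanent 301 to HTTPS,
// so the browser caches the redirect and skips it on later visits
const http = require('http');

http
  .createServer((req, res) => {
    res.writeHead(301, { Location: 'https://example.com' + req.url });
    res.end();
  })
  .listen(80);
```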
Reduce the size of a single request
6. CSS compression, image compression, Gzip compression, JS obfuscation, etc.
CSS compression is straightforward: stripping whitespace and the like. Image compression mainly reduces file size; without affecting the look and feel, some imperceptible colors can be dropped, and WebP images are another option. Gzip compression mainly targets HTML files: it encodes the repeated parts of the HTML once and reuses them many times. JS processing ranges from simple compression (removing whitespace), to uglification (shortening variable names), to obfuscation or encryption of the code.
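The "encode the repeats once, reuse them many times" idea behind Gzip is easy to see with Node's built-in zlib:

```js
// Highly repetitive HTML compresses extremely well under Gzip
const zlib = require('zlib');

const html = '<li>item</li>'.repeat(1000);
const gzipped = zlib.gzipSync(html);

console.log(html.length);    // 13000 bytes of markup
console.log(gzipped.length); // far smaller, because the repeated fragment is encoded once
```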
Rendering optimization
7. Optimize CSS selectors
CSS selectors, such as #myul li {}, are matched from right to left, so it pays to optimize selectors, mainly in the following ways:
- Avoid the wildcard *; select only the elements you need.
- Use tag selectors less. Where possible, use a class selector instead of something like #dataList li {}.
- Lean on properties that can be inherited, to avoid matching and defining them repeatedly.
- Don't gild the lily: ID and class selectors should not be dragged down by redundant qualifiers. Error: .dataList#title. Correct: #title.
- Reduce nesting. Descendant selectors have the highest overhead, so keep selector depth to a minimum (no more than three levels) and use classes to target each element whenever possible.
8. Reduce backflow and redraw times
- Backflow: When we make changes to the DOM that result in a change in the DOM’s geometry (such as changing the width or height of an element, or hiding an element), the browser recalculates the element’s geometry (which also affects the geometry and position of other elements), and then draws the calculated results. This process is called backflow (also known as rearrangement).
- Redraw: When we make changes to the DOM that result in a style change without affecting its geometry (such as changing the color or background color), the browser doesn’t have to recalculate the element’s geometry and simply draw a new style for the element (skipping the backflow shown above). This process is called redrawing.
Redrawing does not necessarily involve backflow, but backflow always leads to redrawing.
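One common way backflow sneaks in is alternating reads and writes of layout properties, which forces a synchronous backflow on every pass; batching the reads and writes avoids it. A small sketch, assuming a hypothetical element with id box:

```js
const el = document.getElementById('box'); // hypothetical element

// Bad: each offsetWidth read after a style write forces a synchronous backflow
for (let i = 0; i < 10; i++) {
  el.style.width = el.offsetWidth + 1 + 'px';
}

// Better: read once, compute in JS, write once (a single backflow)
let width = el.offsetWidth;
for (let i = 0; i < 10; i++) {
  width += 1;
}
el.style.width = width + 'px';
```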
9. Reduce DOM operations
As shown above, DOM changes tend to trigger backflow and redraw, so we need to reduce DOM manipulation. Consider the following code:
```js
for (var count = 0; count < 10000; count++) {
  // Each iteration calls the DOM interface again to look up the container element: extra overhead
  document.getElementById('container').innerHTML += ' I am a small test ';
}
```
Evolution 1
```js
// Get container only once
let container = document.getElementById('container');
for (let count = 0; count < 10000; count++) {
  container.innerHTML += ' I am a small test ';
}
```
Evolution 2 builds on the fact that JS runs much faster than DOM operations: the core idea behind reducing DOM manipulation is to let JS shoulder some of the work for the DOM.
```js
// Reduce unnecessary DOM changes
let container = document.getElementById('container');
let content = '';
for (let count = 0; count < 10000; count++) {
  // Do the string work on the JS side first
  content += ' I am a small test ';
}
// When the content is ready, touch the DOM once
container.innerHTML = content;
```
Evolution 3 uses DOM fragments. The DocumentFragment interface represents a minimal document object that has no parent. It is used as a lightweight version of Document to store well-formed, or potentially non-well-formed, fragments of XML. Because a DocumentFragment is not part of the real DOM tree, changes to it do not cause backflow of the DOM tree or bring any performance impact.
```js
let container = document.getElementById('container');
// Create a DocumentFragment to act as an off-DOM container
let content = document.createDocumentFragment();
for (let count = 0; count < 10000; count++) {
  // span elements can be created with the DOM API as usual
  let oSpan = document.createElement('span');
  oSpan.innerHTML = ' I am a small test ';
  content.appendChild(oSpan);
}
// When the content is ready, trigger a single change to the real DOM
container.appendChild(content);
```
Evolution 4: what about rendering tens of thousands of items without freezing the screen? To render that much data without janking the page, don't render every entry at once; render one batch of DOM nodes at a time, and use requestAnimationFrame to schedule a batch roughly every 16 ms.
```js
setTimeout(() => {
  // Insert 100000 rows in total
  const total = 100000;
  // Insert 20 rows per batch
  const once = 20;
  // Number of batches needed
  const loopCount = total / once;
  let countOfRender = 0;
  let ul = document.querySelector('ul');

  function add() {
    // Build the batch in a fragment so the inserts do not cause repeated backflow
    const fragment = document.createDocumentFragment();
    for (let i = 0; i < once; i++) {
      const li = document.createElement('li');
      li.innerText = Math.floor(Math.random() * total);
      fragment.appendChild(li);
    }
    ul.appendChild(fragment);
    countOfRender += 1;
    loop();
  }

  function loop() {
    if (countOfRender < loopCount) {
      window.requestAnimationFrame(add);
    }
  }

  loop();
}, 0);
```
10. Use event delegates
Event delegation means registering the event listener on a parent element. Because events on child elements propagate up to the parent node through event bubbling, the parent's listener can handle events from multiple child elements in one place. Event delegation reduces memory usage, improves performance, and lowers code complexity.
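A minimal sketch, assuming a hypothetical <ul id="list"> full of <li> items:

```js
const list = document.getElementById('list'); // hypothetical parent element

// One listener on the parent replaces one listener per <li>
list.addEventListener('click', function (e) {
  // e.target is the element actually clicked; the event bubbled up to the <ul>
  if (e.target.tagName === 'LI') {
    console.log('clicked item:', e.target.innerText);
  }
});
```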
11. Throttling and anti-shaking
When the user scrolls, the scroll event fires our listener for every bit of scrolling the user does. Function execution is expensive, and responding to an event this frequently causes a lot of unnecessary page computation. Events that are likely to fire frequently therefore need further optimization: throttling and debouncing are essential.
- Function throttling: the event fires frequently, but the code executes at most once per specified time period.
- Function debouncing: the event fires frequently, but the code executes only once, after the event has stopped firing for the specified time period.
```js
// Throttling: execute at most once every `time` milliseconds
function throttle(fn, time) {
  var last = 0;
  return function () {
    var context = this;
    var now = Date.now();
    if (now - last >= time) {
      fn.apply(context, arguments);
      last = now;
    }
  };
}

// Debouncing: execute only after `time` milliseconds have passed without a new call
function debounce(fn, time) {
  var timeId;
  return function () {
    var context = this;
    var args = arguments;
    clearTimeout(timeId);
    timeId = setTimeout(function () {
      fn.apply(context, args);
    }, time);
  };
}
```
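Typical usage, with placeholder element and handler names:

```js
// Scroll handling runs at most once every 100 ms;
// the search runs only after the user stops typing for 300 ms
window.addEventListener('scroll', throttle(onScroll, 100));
searchInput.addEventListener('input', debounce(onSearch, 300));
```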
Resource loading optimization
12. Resource preloading and lazy loading
- Lazy loading delays the loading of resources, or loads certain resources only when certain conditions are met (see the sketch after this list).
- Preloading loads resources the user will need ahead of time, speeding up page loads and ensuring a good user experience.
Lazy loading and preloading are both off-peak operations: they avoid work while the browser is busy and load resources when the browser is idle, optimizing network performance.
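A minimal lazy-image sketch using IntersectionObserver, assuming <img data-src="..."> placeholders for below-the-fold images:

```js
// Swap in the real URL only when an image approaches the viewport
const images = document.querySelectorAll('img[data-src]');

const observer = new IntersectionObserver((entries, obs) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src; // load the resource now that it is needed
      obs.unobserve(img);        // each image only needs loading once
    }
  });
});

images.forEach((img) => observer.observe(img));
```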
13. Optimize the reference position of CSS and JS files
- CSS files go in the head: load the styles first, then render the page.
- JS files go at the bottom of the body: render the page first, then load the scripts.
- Try not to put style and script tags in the middle of the body.
- JS files that the page structure and layout depend on, such as babel-polyfill.js and flexibility.js, go in the head.