Today we look at performance optimization from the perspective of "from URL to page display". The previous two articles covered what happens during the process from entering a URL to the page being displayed, but did not highlight the performance optimization points along the way. Of course, without knowing what happens in between, it is hard to understand why each optimization works, so if you have not read the previous two notes, it is best to read them together with this one.

What happens between entering a URL and presenting a page?

Rendering Flow of Browser Pages

To prepare this article, I bought the Nuggets booklet Front-End Performance Optimization Principles and Practices (PS: input requires output). It uses the classic interview question "What happens from the time you enter the URL to the time the page loads?" as an entry point, and approaches the topic from two important knowledge dimensions: the network level and the rendering level. The former involves DNS (the domain name resolution system), IP addressing, TCP connections, HTTP requests and responses, and so on; the latter involves process and thread concepts, the DOM tree, cascading styles, reflow and repaint, and compositing. The booklet includes a performance-optimization mind map, which I share with you here.

Network layer performance optimization

Regarding network-level optimization, the first thing we encounter is resource requesting and loading. As for DNS resolution, IP addressing, and TCP connections, these belong to the network infrastructure, and there is little the front end can do to optimize them. Resource requesting and loading, however, can be optimized in many ways, such as source-code compression and build optimization, which is usually done together with webpack engineering. Webpack optimization mainly targets two problems:

  • The webpack build process takes too much time
  • The webpack output is too bulky

Common optimizations for Webpack

  • Build common dependency libraries separately with the DllPlugin
  • Use HappyPack to change Loader execution from a single process to multiple processes
  • Use tree-shaking to remove dead code at build time
  • Use caching to speed up secondary builds
  • Load on demand; the core API is require.ensure(dependencies, callback, chunkName)
  • Enable Gzip compression, negotiated via the request header accept-encoding: gzip

I believe there are other webpack optimization solutions, welcome to add oh ~

Image optimization

We often say that page optimization starts from key resources, but a key optimization that is easy to overlook is image optimization. You may not agree with this view, though anyone working in e-commerce probably does, because e-commerce is essentially about images. Either way, image optimization is an important part of front-end performance work. When we think about image optimization, it is essentially a trade-off between image size and quality: we sacrifice some image quality in exchange for experience and performance.

We are familiar with several formats of pictures:

  • JPEG/JPG: lossy compression, small size, fast loading, does not support transparency
  • PNG-8 and PNG-24: lossless compression, high quality, larger size, supports transparency
  • SVG: a text format, small size, no distortion at any scale, good compatibility
  • Base64: text encoding embedded in the document, a solution for small icons
  • WebP: supports both lossy and lossless compression, the all-rounder

I believe you have used the formats above in your work. In fact there is much more to dig into in image optimization, and without combining it with in-depth work it is hard to form deep insights of your own. This is also why performance optimization is not easy to learn: front-end technology is complex and changes rapidly, and the knowledge is not systematic, so it is hard to get started.

Cache optimization

Next comes the loading optimization of the page's non-image resources, also known as resource cache optimization. On the one hand it reduces network I/O consumption; on the other it improves access speed. Browser caching is a simple and effective way to optimize front-end performance.

Browser caching mechanism

There are four parts to the browser caching mechanism, listed here in the priority order in which they are checked when a resource is requested:

1. Memory Cache
2. Service Worker Cache
3. HTTP Cache
4. Push Cache

  • MemoryCache refers to the cache that exists in memory. In terms of priority, it is the first cache that the browser tries to hit. It is the fastest type of cache in terms of efficiency.

  • A Service Worker is a JavaScript thread independent of the main thread. It is detached from the browser window and therefore cannot access the DOM directly. This independence keeps the Service Worker's "personal behavior" from interfering with page performance. This "behind-the-scenes worker" can help us implement offline caching, message push, network proxying, and other features. The offline cache we implement with a Service Worker is called the Service Worker Cache. PS: Note that Service Workers have a protocol requirement: they must run over HTTPS.

  • HTTP Cache is one of the most familiar caching mechanisms in daily development. It is divided into strong cache and negotiated cache. The strong cache has a higher priority. The negotiation cache is enabled only when the strong cache fails to be matched.

Implementation of strong caching: from Expires (HTTP/1.0) to Cache-Control (HTTP/1.1). Negotiated caching: from Last-Modified to ETag. With negotiated caching, the browser asks the server about the cached copy to decide whether to re-initiate the request and download the complete response, or fetch the cached resource locally.
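The decision logic behind strong caching can be sketched as a pure function. This is an illustrative simplification of what the browser does, not the full RFC algorithm; `isFresh` and its parameters are made-up names for demonstration:

```javascript
// Decide whether a cached response is still fresh (simplified sketch).
// Cache-Control: max-age takes precedence over the older Expires header.
function isFresh(cachedAtMs, nowMs, headers) {
  const cacheControl = headers['cache-control'] || ''
  const match = cacheControl.match(/max-age=(\d+)/)
  if (match) {
    // HTTP/1.1 strong cache: relative lifetime in seconds
    return (nowMs - cachedAtMs) / 1000 < Number(match[1])
  }
  if (headers['expires']) {
    // HTTP/1.0 strong cache: absolute expiry time
    return nowMs < Date.parse(headers['expires'])
  }
  // No strong-cache headers: fall through to negotiated caching
  return false
}

console.log(isFresh(0, 5000, { 'cache-control': 'max-age=10' }))  // true
console.log(isFresh(0, 15000, { 'cache-control': 'max-age=10' })) // false
```

Only when this check fails does the browser fall back to the negotiated cache and send a conditional request (If-Modified-Since / If-None-Match).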

Here’s an authoritative flow chart for HTTP caching:

  • Push Cache refers to the cache that HTTP/2 provides in the Server Push phase. Since this knowledge is relatively new and I have not used it at work, I will not take many notes on it.

Browser Local Storage Web Storage

Finally, the browser's local storage. Before HTML5 there were only cookies, which were meant to store session state. As the technology developed, localStorage and sessionStorage arrived to meet richer page data caching needs. The difference between them lies in the life cycle and scope of Web Storage:

  • Life cycle: localStorage is persistent local storage; data stored in it never expires, and the only way to make it disappear is to delete it manually. sessionStorage is temporary, session-level local storage; when the session ends (the page is closed), the stored content is released as well.

  • Scope: Local Storage, Session Storage, and Cookie all follow the same origin policy. However, the special point of Session Storage is that even if two pages under the same domain name are not opened in the same browser window, their Session Storage content cannot be shared.
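Since Web Storage only stores strings, rich page data usually goes through JSON. Here is a minimal sketch of a wrapper; `makeStore` is a made-up helper, and any object with `getItem`/`setItem` (such as `window.localStorage` or `window.sessionStorage` in the browser) can back it:

```javascript
// Thin JSON wrapper over a Web Storage-like backend
function makeStore(storage) {
  return {
    set(key, value) {
      // Web Storage only holds strings, so serialize first
      storage.setItem(key, JSON.stringify(value))
    },
    get(key) {
      const raw = storage.getItem(key)
      return raw === null ? null : JSON.parse(raw)
    }
  }
}

// In the browser you would write: const store = makeStore(window.localStorage)
```

Swapping the backend between localStorage and sessionStorage changes only the life cycle of the data, not the calling code.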

Besides the familiar cookie, localStorage, and sessionStorage, there is also the less commonly used browser database IndexedDB. Since I have not used it at work, I do not have much feel for it, but it is worth a look, and you need to be aware of it in case your data is complex and needs to be stored locally in a browser database. See Ruan Yifeng's blog post "IndexedDB primer for Browser Databases".

Optimization of rendering layers

After combing through network-level optimization, let's look at rendering-level optimization. First, review the workflow chart for each stage of the browser rendering process.

Among them, we focus on the DOM produced by the HTML interpreter, the computed style properties produced by the CSS interpreter, the layout module, the paint module, and the compositing module, because these are the places where we can make relevant optimizations. We will go through the optimization points from these five aspects one by one.

The optimization of the DOM

DOM optimization is familiar ground: reduce the nesting depth of DOM nodes and reduce DOM operations, so as to avoid reflowing and repainting the render tree.

  • Reflow (rearrangement): when we make changes to the DOM that change its geometry (such as changing the width or height, or hiding an element), the browser has to recalculate the element's geometry (which also affects the geometry and position of other elements) and then redraw the result.
  • Repaint (redraw): when we make changes to the DOM that change its style but do not affect its geometry (such as changing the color or background color), the browser does not have to recalculate the element's geometry; it simply draws the new style for the element, skipping the reflow step.
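The distinction above can be made concrete with a small classification helper. The property lists here are illustrative and deliberately incomplete, and `costOfChange` is a made-up name for demonstration:

```javascript
// Geometry-affecting properties force layout recalculation (reflow + repaint);
// purely visual properties only need a repaint. Lists are illustrative, not exhaustive.
const REFLOW_PROPS = new Set(['width', 'height', 'margin', 'padding', 'display', 'top', 'left'])
const REPAINT_ONLY_PROPS = new Set(['color', 'background-color', 'visibility', 'outline'])

function costOfChange(property) {
  if (REFLOW_PROPS.has(property)) return 'reflow + repaint'
  if (REPAINT_ONLY_PROPS.has(property)) return 'repaint only'
  return 'depends on the property'
}

console.log(costOfChange('width')) // 'reflow + repaint'
console.log(costOfChange('color')) // 'repaint only'
```

The practical takeaway: when you can express a visual change with a repaint-only (or composite-only) property, the browser does far less work.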

The optimization of CSS

CSS optimization is mainly reflected in writing conventions, the loading order of CSS resources, and CSS animations.

Style rules of the CSS

The first thing to know is that the CSS engine matches each rule against the stylesheet from right to left. With that in mind, when writing styles avoid using the wildcard * or bare element tags as the rightmost (key) selector, and prefer class selectors. In addition, nest appropriately rather than many layers deep (at most three levels), and use classes to target each tag element wherever possible.

The CSS block

According to the page rendering flow chart above, we know that CSS loading blocks will affect the page rendering, that is:

CSS is the resource that blocks rendering. It needs to be downloaded to the client as soon as possible to reduce the first rendering time.

For CSS blocking, the booklet's summary boils down to two things: first, put CSS in the head tag as early as possible; second, enable a CDN to speed up static resource loading.

CSS animations

CSS animation optimization mainly targets the compositing stage. For example, a transform animation skips the reflow and repaint steps of the rendering pipeline and goes directly to compositing, because transform is a property that requires neither layout nor paint. Compositing is much more efficient than reflow and repaint.

The optimization of JS

As for how JS affects page rendering, the main issues are blocking and DOM manipulation, and the optimization goal is again to reduce reflow and repaint.

The loading method of JS

  • Normal mode
<script src="index.js"></script>

We usually place the js file at the end of the body tag, based on browser rendering principles, to avoid blocking.

  • Async mode
<script async src="index.js"></script>

In async mode, JS does not block the browser from doing anything else. It loads asynchronously, and when it finishes loading, the JS script executes immediately.

  • Defer mode
<script defer src="index.js"></script>

In the defer mode, JS loads are asynchronous and execution is deferred. When the entire document has been parsed and the DOMContentLoaded event is about to be triggered, the JS files that have been tagged defer will start executing in sequence.

Optimizing DOM manipulation in JS

Minimize DOM manipulation in JS to avoid excessive rendering. For example:

let container = document.getElementById('container')
let content = ''
for (let count = 0; count < 10000; count++) {
  // operate on the string first
  content += ' I am a small test '
}
// when the content is ready, trigger the DOM change once
container.innerHTML = content

In addition, you can use a DocumentFragment to take pressure off the DOM and reduce DOM operations.

let container = document.getElementById('container')
// create a DocumentFragment object as an off-DOM container
let content = document.createDocumentFragment()
for (let count = 0; count < 10000; count++) {
  // span elements can now be created with the DOM API
  let oSpan = document.createElement('span')
  oSpan.innerHTML = ' I am a small test '
  content.appendChild(oSpan)
}
// when the content is ready, trigger the change to the real DOM once
container.appendChild(content)

Avoid rearrangement and redraw in JS

  • Sometimes we need to calculate an element's layout multiple times. Read the values once, do the math at the JS level, and write back to the DOM once:

const el = document.getElementById('el')
// read once, then calculate at the JS level
let offLeft = el.offsetLeft, offTop = el.offsetTop
for (let i = 0; i < 10; i++) {
  offLeft += 10
  offTop += 10
}
// apply the results to the DOM once
el.style.left = offLeft + "px"
el.style.top = offTop + "px"
  • Avoid changing styles line by line; use a class name to merge styles

// changing styles line by line
const container = document.getElementById('container')
container.style.width = '100px'
container.style.height = '200px'
container.style.border = '10px solid red'
container.style.color = 'red'

// better: merge the styles under one class name
container.classList.add('basic_style')
  • The DOM can be operated on "offline": set display: none, manipulate its properties, then set display: block again. This is also a good optimization when you have to make frequent property changes.

  • Flush queues: Browsers are not that simple

// How many times does this code make the browser reflow or repaint?
let container = document.getElementById('container')
container.style.width = '100px'
container.style.height = '200px'
container.style.border = '10px solid red'
container.style.color = 'red'

Will the snippet above make the browser reflow or repaint four times? We can try it ourselves, but it will not, because modern browsers are clever. The browser knows that if every DOM operation triggered a reflow or repaint in real time, performance would be unsustainable. So it maintains a flush queue of its own, fills it with the reflow and repaint tasks we trigger, and flushes them when the queue has accumulated enough tasks, or a certain time interval has passed, or it "has to". So even though we made four DOM changes above, only one Layout and one Paint are triggered.
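The flush-queue idea can be mimicked in plain JS. This is a toy sketch of the batching principle, not the browser's actual implementation; `makeBatcher` and the task strings are made-up names for demonstration:

```javascript
// Collect style-write tasks and apply them in one pass,
// the way the browser batches reflow/repaint work.
function makeBatcher(applyAll) {
  const queue = []
  return {
    write(task) { queue.push(task) },  // enqueue instead of applying immediately
    flush() {                          // one combined application ("one Layout, one Paint")
      const tasks = queue.splice(0)
      if (tasks.length > 0) applyAll(tasks)
    }
  }
}

const batches = []
const batcher = makeBatcher(tasks => batches.push(tasks))
batcher.write('width: 100px')
batcher.write('height: 200px')
batcher.write('color: red')
batcher.flush()
console.log(batches.length) // 1 — three writes collapsed into one application
```

The real browser also flushes "when it has to", for example when your JS reads a layout property like offsetHeight, which is why interleaving reads and writes defeats the batching.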

Summary

When it comes to performance optimization, its vastness is palpable. Here I have only studied performance optimization from the "from URL to page display" perspective, and I am sure much of this content only scratches the surface. Specific business scenarios are certainly more complex and changeable, so front-end performance optimization is an intricate, comprehensive test of one's working ability. In any case, performance optimization is a long road. Be happy, and be in pain.