This article discusses front-end performance optimization from a loading perspective.

Note that the mind map below should be read starting from quadrants 1 and 4.

1. Optimization of loading volume

We can’t control the user’s network conditions, so if we want the page to load fast the rule is simple: don’t load what doesn’t need to be loaded, and load as little as possible of what does.

  • Merging code files is important in the HTTP/1.1 era.
  • Minification (e.g. UglifyJS) and compression are important at all times. Brotli has a higher compression ratio and is already supported by Chrome (browser support can be checked at http://caniuse.com) as well as by some cloud storage vendors in China. If the browser supports it, you will see br in the Accept-Encoding request header, and a Brotli-compressed response is returned with Content-Encoding: br (see the first sketch after this list).
  • Image compression and whether to do lossy compression should be decided according to the actual business situation.
  • Lazy-load images. The benefit is especially obvious under HTTP/1.1, and loading every image up front is a waste of bandwidth and server resources. The basic idea is to start with a placeholder image (or background image) and load the real image when it scrolls into the viewport (see the second sketch after this list).
  • Serve images at different sizes for different screen resolutions; this works much better when the image server supports on-the-fly resizing and cropping.
  • Preferring a pure-CSS implementation over an image, where possible, is generally (but not always) the right call.
  • Reduce cookie transmission. Most of the time images, CSS, JS, and other static resources do not need cookies, so serving them from a separate domain reduces the cookies sent with each request.
  • Choose the right image format (see the mind map). WebP has all the advantages of a newer format, but there are browser-compatibility issues. If the image server supports format conversion, the front end can use JS to detect whether WebP is supported and splice the corresponding parameter onto the image URL, so the same image URL can serve different formats depending on what the front end requests (see the third sketch after this list).
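
As a first sketch, here is roughly what the Brotli negotiation described above can look like on the server side. This is a minimal illustration assuming a Node.js server written in TypeScript; the payload is made up, and in practice this is usually delegated to a reverse proxy or CDN.

```ts
// Minimal Brotli content-negotiation sketch (Node.js + TypeScript).
// The JSON payload is a placeholder; real servers usually compress static assets.
import * as http from 'http';
import * as zlib from 'zlib';

const body = JSON.stringify({ hello: 'world' });

http.createServer((req, res) => {
  const acceptEncoding = String(req.headers['accept-encoding'] ?? '');
  if (acceptEncoding.includes('br')) {
    // The browser advertised Brotli support via Accept-Encoding: br,
    // so answer with a Brotli-compressed body and Content-Encoding: br.
    res.writeHead(200, { 'Content-Type': 'application/json', 'Content-Encoding': 'br' });
    res.end(zlib.brotliCompressSync(Buffer.from(body)));
  } else {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(body);
  }
}).listen(3000);
```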
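
The second sketch shows the lazy-loading idea, assuming each img tag starts with a placeholder src and carries the real URL in a data-src attribute (both attribute names are just conventions):

```ts
// Minimal image lazy-loading sketch (TypeScript, browser).
// Assumes markup like <img src="placeholder.png" data-src="real.jpg">.
const lazyImages = document.querySelectorAll<HTMLImageElement>('img[data-src]');

const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;        // still outside the viewport
    const img = entry.target as HTMLImageElement;
    img.src = img.dataset.src!;                 // swap the placeholder for the real image
    obs.unobserve(img);                         // each image only needs to load once
  }
}, { rootMargin: '200px' });                    // start loading slightly before it scrolls in

lazyImages.forEach((img) => observer.observe(img));
```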
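
The third sketch shows WebP detection and URL splicing. The format=webp query parameter is hypothetical; use whatever parameter your image server actually understands.

```ts
// WebP detection and URL splicing sketch (TypeScript, browser).
// The format=webp parameter is hypothetical; adapt it to your image server.
function supportsWebP(): boolean {
  const canvas = document.createElement('canvas');
  canvas.width = canvas.height = 1;
  // Browsers that can encode WebP return a data URL with this prefix.
  return canvas.toDataURL('image/webp').startsWith('data:image/webp');
}

function imageUrl(baseUrl: string): string {
  if (!supportsWebP()) return baseUrl;
  const separator = baseUrl.includes('?') ? '&' : '?';
  return `${baseUrl}${separator}format=webp`;
}

// Usage: img.src = imageUrl('https://img.example.com/banner.jpg');
```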

2. Cache optimization

  • Optimization at the HTTP protocol level (see the mind map) mainly involves the request and response headers that control the browser’s caching and freshness-validation behavior (see the first sketch after this list).
  • From the build-engineering point of view, resources that rarely change should be packaged separately so that they can be cached long term (see the second sketch after this list). And then there is the Service Worker; PWA is hot these days.
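
The first sketch below illustrates the freshness-validation headers with a tiny Node.js handler; the hard-coded ETag is purely for illustration (real servers derive it from the resource content):

```ts
// HTTP caching and freshness-validation sketch (Node.js + TypeScript).
// The ETag is hard-coded for illustration; real servers derive it from the content.
import * as http from 'http';

const ETAG = '"v1"';

http.createServer((req, res) => {
  // Revalidation: the browser sends If-None-Match with the ETag it cached earlier.
  if (req.headers['if-none-match'] === ETAG) {
    res.writeHead(304);                      // Not Modified: keep using the cached copy
    res.end();
    return;
  }
  res.writeHead(200, {
    'Cache-Control': 'max-age=600',          // fresh for 10 minutes without revalidation
    ETag: ETAG,
    'Content-Type': 'text/plain',
  });
  res.end('cacheable response body');
}).listen(3000);
```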
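
The second sketch expresses the "package rarely-changing resources separately" idea as a webpack configuration. This assumes webpack 5; the vendors group name and the content-hash file naming are common conventions, not requirements.

```ts
// webpack.config.ts sketch: split rarely-changing vendor code into its own bundle
// and name output files by content hash so unchanged files stay cached long term.
import type { Configuration } from 'webpack';

const config: Configuration = {
  output: {
    filename: '[name].[contenthash].js',     // the URL changes only when the content changes
  },
  optimization: {
    splitChunks: {
      cacheGroups: {
        vendors: {
          test: /[\\/]node_modules[\\/]/,    // third-party code changes far less often
          name: 'vendors',
          chunks: 'all',
        },
      },
    },
  },
};

export default config;
```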

3. Loading distance

It is worth noting that this is not physical distance but distance at the network-topology level: your network distance to your next-door neighbor might be 3,000 kilometers.

  • CDNs are used by everyone now, but consider this question: is a CDN necessarily fast? The answer is no. It’s possible that your broadband operator routes you badly, so a visit from Beijing ends up on a CDN node in Xizang, but most of the time it is fine.
  • If your page is used inside your company’s app, having the app pre-load the resources for you works very well: when the page needs to open it can be served directly from local storage, which is very fast.

4. Loading sequence

  • Web pages keep getting bigger, but the first screen is what the user sees first. If you can render the first screen very quickly and let the other resources load slowly afterwards, that is great.
  • BigPipe is a response-priority optimization that uses Transfer-Encoding: chunked at the HTTP level. It was originally proposed by Facebook for pages that are split into many chunks, with each chunk’s data retrieved separately. Why not do it with JS on the front end? Because single-threaded JS would block. @i5ting, also known as Wolverine, has a project on GitHub called BigView, a BigPipe wrapper based on Node.js; @Puhua contributed a lot of code. (A minimal chunked-response sketch follows this list.)
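
Here is a minimal sketch of the chunked-transfer idea behind BigPipe, assuming Node.js; the two pagelets and their delays stand in for real data fetches. Node switches to Transfer-Encoding: chunked automatically when no Content-Length is set.

```ts
// BigPipe-style chunked response sketch (Node.js + TypeScript).
// Node uses Transfer-Encoding: chunked automatically when no Content-Length is set.
import * as http from 'http';

const delay = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

http.createServer(async (_req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/html; charset=utf-8' });

  // Flush the page skeleton immediately so the first screen can render.
  res.write('<html><body><div id="pagelet-a">loading...</div><div id="pagelet-b">loading...</div>');

  await delay(100);  // stand-in for fetching pagelet A's data
  res.write('<script>document.getElementById("pagelet-a").innerHTML = "A is ready";</script>');

  await delay(300);  // stand-in for fetching pagelet B's data
  res.write('<script>document.getElementById("pagelet-b").innerHTML = "B is ready";</script>');

  res.end('</body></html>');
}).listen(3000);
```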

5. Load link optimization

We know that DNS resolution is a very tedious process that may require exchanging information with remote servers many times before the IP is obtained; if any link in that chain is slow, it seriously affects how quickly the page opens. In addition, if your broadband provider’s DNS server is bad, the page may not open at all. What’s worse, DNS can be hijacked, and then you don’t even know what the user is actually opening.

Integrating HTTPDNS in your own app lets you fetch the IP with a single HTTP request and cache it on the app side, which can reduce DNS resolution time to essentially zero, and that is a very cool thing. That’s why I keep pushing the app side to adopt HTTPDNS (a sketch follows below).
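
Here is a sketch of what an HTTPDNS lookup with an app-side cache might look like. The endpoint URL and the response shape are hypothetical, since every HTTPDNS provider defines its own API.

```ts
// Hypothetical HTTPDNS lookup with an in-memory cache (TypeScript sketch).
// The endpoint and response shape are made up; substitute your provider's API.
interface DnsEntry { ip: string; expiresAt: number; }

const dnsCache = new Map<string, DnsEntry>();

async function resolveHost(host: string): Promise<string> {
  const cached = dnsCache.get(host);
  if (cached && cached.expiresAt > Date.now()) {
    return cached.ip;                                   // cache hit: zero resolution time
  }
  // A single plain HTTP(S) request replaces the whole recursive DNS lookup.
  const res = await fetch(`https://httpdns.example.com/resolve?host=${host}`);
  const { ip, ttl } = (await res.json()) as { ip: string; ttl: number };
  dnsCache.set(host, { ip, expiresAt: Date.now() + ttl * 1000 });
  return ip;
}

// Usage in an app's networking layer:
//   const ip = await resolveHost('www.example.com');
//   ...then connect to that IP directly, sending the original host name in the Host header.
```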

If all browsers could use HTTPDNS, the front end would be much happier. Quick question: who can push this forward?

How HTTP/1.1 differs from HTTP/2

HTTP/2 brings multiplexing to loading, and many cloud storage vendors already support HTTP/2.

In the HTTP/1.1 era we wanted fewer requests, and browsers limited the number of connections to the same domain, so we spread resources across multiple domains (machines).

In the HTTP/2 era the number of requests is no longer a problem, but only resources under the same domain name can be multiplexed, so domains should be appropriately consolidated. As a side benefit, reusing the same connection also largely sidesteps the TCP slow-start problem (see the sketch below).
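
As a small illustration of multiplexing, the sketch below issues several requests over a single HTTP/2 session using Node.js; example.com is a placeholder and must be replaced with a server that actually speaks HTTP/2.

```ts
// HTTP/2 multiplexing sketch (Node.js + TypeScript): several requests share one connection.
import * as http2 from 'http2';

const session = http2.connect('https://example.com');   // one TCP + TLS connection
let remaining = 3;

for (const path of ['/styles.css', '/app.js', '/logo.png']) {
  const req = session.request({ ':path': path });        // each request is an independent stream
  req.on('response', (headers) => {
    console.log(path, headers[':status']);
  });
  req.resume();                                           // drain the body; only the status matters here
  req.on('close', () => {
    if (--remaining === 0) session.close();               // all streams done: close the connection
  });
}

// All three streams are interleaved over the same connection instead of opening
// (and slow-starting) three separate TCP connections.
```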

Summary

If there are any inaccuracies or omissions, please feel free to point them out and discuss.