preface
We talk about front-end performance optimization every day and recite optimization checklists every day, yet we often don't know the principles behind them or the knowledge they rest on. So I asked myself some questions:
- 1. Why do we do front-end performance optimization?
- 2. What are the performance optimization criteria?
- 3. We talk about performance optimization every day. What exactly are we talking about?
- 4. From what angles should we talk about performance optimization?
- 5. What is the rationale behind these optimizations?
- 6. What should I consider when doing performance optimization?
Once I asked myself these questions, the outline for relearning performance optimization became clear, so let's work through them one by one.
1. Why do we do front-end performance optimization?
Throughout a front-end career we hear the words "performance" and "experience" over and over again, and as we slowly level up from rookie, grinding away at the daily work, we hear these two words more and more often.
And many people only know that this stuff comes up in interviews, so they start to memorize, memorize, memorize, without ever thinking about why we do performance optimization in the first place, what kind of optimization should be done at which stage of a project, or how to balance maintainability against optimization. These are all questions we need to think about, and with enough thinking the answers reveal themselves. This is also what I want to tell myself: before doing something, think about the essence behind it, instead of staying on the surface and repeating what others say.
So why do we do performance tuning? Which projects must be optimized? Which projects can sacrifice some optimization points in exchange for stability and maintainability?
We know that the most important thing for a web site is its users: with users you have a business. For example, if you run an e-commerce site, you certainly want as many users as possible, because only then will more people browse your goods, spend money on your site, and buy things, which is what produces revenue. And to get more users you have to rely on third-party tools, such as search engines, to promote your site; a search engine will evaluate your site's performance, and that can affect your ranking.
Seen this way, what we call performance optimization is really about retaining users and acquiring users. Based on that, you can decide, for your current project, which specific optimizations the site needs in addition to the general ones, rather than grabbing a checklist off the Internet and applying it blindly.
With that in mind, we need to find some criteria, and locate the performance bottlenecks, in order to achieve the goal of optimization.
2. What are the performance optimization criteria?
Remember that Amazon once did a study and found that a site loses 1% of its sales for every 100ms of added latency. So how slow can we afford to be and still call the site good? We need a standard.
Understanding performance indicators
As mentioned above, every project's situation is different and we cannot all do as well as Amazon, so most websites follow a standard: if we hit these performance targets, we consider the site acceptable, and then make targeted optimizations at the high-frequency interaction points.
Measuring page load performance is a difficult task, so Google Developers has been working with the community to build Progressive Web Metrics (PWM's).
What are PWM’s and why do we need them?
This is where a bit of browser history comes in. To illustrate it fully, let's start at the beginning. Long, long ago we had two main events for measuring performance:
DOMContentLoaded – triggered when the initial HTML document has been completely loaded and parsed and scripts have run, without waiting for stylesheets, images, and subframes to finish loading. See the MDN DOMContentLoaded event.
load – triggered when the page and all of its dependent resources have fully loaded and the user can use the page or application.
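As a minimal sketch, you can log these two milestones yourself with ordinary event listeners (the timings via performance.now() are just for illustration):

```javascript
// Minimal sketch: logging the two classic milestones ourselves.
document.addEventListener('DOMContentLoaded', () => {
  // HTML parsed, DOM built; stylesheets and images may still be loading.
  console.log('DOMContentLoaded at', Math.round(performance.now()), 'ms');
});

window.addEventListener('load', () => {
  // Every dependent resource (CSS, images, subframes) has finished loading.
  console.log('load at', Math.round(performance.now()), 'ms');
});
```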
For example, on Juejin you can see the DOMContentLoaded and Load times at the bottom of the DevTools Network panel.
In today's world of complex interaction and complex page content, you'll find that DOMContentLoaded and Load no longer reflect the user experience the way they used to. How well they reflect it depends heavily on the complexity of your page, the difficulty of the interactions, the amount of animation, and so on. Take Bilibili and Juejin as examples.
You'll notice that Bilibili takes much longer to reach the Load event than Juejin, yet Bilibili feels just as smooth to use.
The problem with DOMContentLoaded on today's web pages is that it does not include the time it takes to parse and execute JavaScript, which can be very long if the script files are large. On mobile devices, for example, tracing the timeline under a throttled 3G network shows it can take around ten seconds to reach the Load point. On the other hand, the Load event fires too late to help us analyze the page's performance bottlenecks. So can we rely on these indicators? What exactly do they tell us? And the main question: from the moment the page starts loading to the moment it finishes, how does the user actually feel about the process?
When you refresh the Bilibili page, you'll find the experience is very good. Besides a good design, it has many performance optimizations behind it, for example fast first-screen loading, lazy loading of content below the fold, and good use of caching.
Having said all that, what exactly are PWM's?
PWM's are a set of metrics designed to help detect performance bottlenecks. Beyond load and DOMContentLoaded, they give developers much more detailed information about the page loading process.
In fact, in Google Chrome we can use DevTools to see the timings of these metrics. Let's take Bilibili as the example again.
First, open the Performance panel and click the reload button.
This way we can see the key rendering milestones.
First Paint (FP)
This is the first metric the developer tools give us: FP. It represents the point in time when the page is first painted, in other words, the first moment the user sees something other than a blank screen (FP also has another meaning, functional programming, but not here). The FP event is triggered when the graphics layer is first painted, not when text, images, or canvas content is painted, so it is a tricky point from which to judge performance. That's why Google gives us another one.
First Contentful Paint (FCP)
This is the point in time when the user first sees some "content" painted on the page. Unlike FP, it must be actual content: the first text, the first SVG, the first canvas drawing, and so on.
The FCP event is triggered when elements such as text (not counting text still waiting for a web font to load), images, or canvas are painted. In practice, the gap between FP and FCP can range from a few milliseconds to several seconds; you can even see the difference in the screenshot above.
So if your content takes too long to paint, it means your resource files may be too large or the network is dragging. FCP really does reflect some real problems with web performance.
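As a minimal sketch, both FP and FCP can be read from the standard Paint Timing API with a PerformanceObserver:

```javascript
// Minimal sketch: reading FP and FCP from the Paint Timing API.
const paintObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.name is 'first-paint' or 'first-contentful-paint'
    console.log(entry.name, Math.round(entry.startTime), 'ms');
  }
});
paintObserver.observe({ type: 'paint', buffered: true });
```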
Largest Contentful Paint (LCP)
LCP is a newer performance metric. It focuses on user experience and is easier to understand and reason about than many existing metrics. It differs from the now-abandoned FMP (First Meaningful Paint): FMP tried to identify when "meaningful" content was painted, but deciding what counts as meaningful was left to heuristics in the browser and developer tools, which was controversial and did not reliably pinpoint performance problems.
Discussions in the W3C Web Performance Working Group and research at Google found that a more accurate and simpler way to measure when a page's main content is visible is to look at when the element with the largest painted area starts rendering. The largest piece of content is most likely the main content, so LCP is a logical replacement for FMP.
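A minimal sketch of observing LCP in the browser (the entry type name comes from the standard Largest Contentful Paint API):

```javascript
// Minimal sketch: tracking LCP candidates with PerformanceObserver.
const lcpObserver = new PerformanceObserver((list) => {
  const entries = list.getEntries();
  // The last candidate reported before user input is the final LCP value.
  const last = entries[entries.length - 1];
  console.log('LCP candidate:', Math.round(last.startTime), 'ms', last.element);
});
lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });
```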
Now let’s look at this picture again
The picture above shows, for each point in time, a rendering snapshot of the page, while the blue line shows the page's memory footprint. From that line you can clearly see when garbage collection (GC) kicks in; if the blue line only ever climbs, a memory leak has most likely occurred. This view helps you pinpoint such problems very well.
If we click on the Main track, we can see all the long tasks on the current page, how long each one takes, and the order in which they execute. We know that JS execution and rendering are mutually exclusive, and the diagram also makes it clear when JS runs, when rendering runs, and which one yields to the other. This diagram has a fancy name too: the flame chart.
Look at it carefully and you'll see little red triangles. That is the Google tooling telling us that the performance there is not up to par. Through them you can identify the key performance bottlenecks and optimization points.
lighthouse
A lot of people will say: this isn't my website, why would I dig through its internals, I just want to see how it performs overall! Google also provides a tool for this: Lighthouse. It used to be a separate download, and it is now integrated into the developer tools.
Taking Bilibili again, if you run it in the developer tools it returns a pile of data. Let's just focus on the Performance section: first there is an overall score, then the timing of each metric, which we can analyze one by one.
- First Contentful Paint – the time until the first text or first image is rendered
- Time to Interactive – the time until the page can reliably respond to user input
- Speed Index – how quickly the visible content of the page is filled in
- Total Blocking Time – the total time between first seeing content and being able to interact with it during which the main thread was blocked
Here we'll focus on just these four points, especially the first and the third. Bilibili's First Contentful Paint is acceptable; if it showed red it would be over the threshold. Google's recommended Speed Index is about 4 seconds, and Bilibili's 4.3 seconds is not too bad.
The next step is to look at the items Lighthouse says need optimizing, which span HTTP, JS, and CSS.
network
Looking at First Contentful Paint, we find that its cost has two main components: rendering time and network loading time.
Let's open the Network panel and look at the loading diagram of the page's resources. This diagram also has a very professional name: the waterfall chart.
It intuitively shows the loading time and order of the site's resources. There are two ways to read it: horizontally and vertically. Reading a row horizontally, we can see exactly how a single resource is loaded.
We can see that actually downloading the resource is the last step. Before it comes waiting time: a request may queue for 5 milliseconds, or be blocked for 200 milliseconds because the browser has hit its limit on concurrent requests. Then sending the request took 0.18 milliseconds. The TTFB segment is the back-end processing time plus the network transfer time before we can start downloading.
Reading vertically, I can see the order in which resources load, so I may be able to move some of the slower requests earlier or run them in parallel to achieve my optimization goal.
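As a minimal sketch, a similar breakdown (queueing, TTFB, download) can also be read programmatically from the standard Navigation Timing API; the split below is approximate:

```javascript
// Minimal sketch: breaking down network time with the Navigation Timing API.
const [nav] = performance.getEntriesByType('navigation');
if (nav) {
  console.log('queueing + DNS + TCP:', Math.round(nav.requestStart - nav.startTime), 'ms');
  console.log('TTFB (server + network):', Math.round(nav.responseStart - nav.requestStart), 'ms');
  console.log('content download:', Math.round(nav.responseEnd - nav.responseStart), 'ms');
}
```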
RAIL
With these concepts in mind, let’s take a look at the Chrome team’s proposed RAIL model.
RAIL stands for Response, Animation, Idle, and Load.
- Response – processing of a user input should complete within 50ms
- Animation – produce one frame roughly every 10ms
- Idle – maximize main-thread idle time
- Load – content should be loaded and interactive within 5s
Of course, this is only a reference. In a complex project we can only do our best to approach it; it is hard to fully achieve, because beyond the complexity of the project itself there are many things we cannot control. For example, whether your site runs over HTTP/2 may not be up to you. And to hit the Idle target, in theory the data-massaging JS work that the back end can do should never be pushed to the front end; in reality, in my career I have basically been massaging data every day.
3. We talk about performance optimization every day. What exactly are we talking about?
We talk about performance optimization every day, and then one day I looked back and asked: when we talk about it, are we really just talking about those checklist items? No. Behind real performance optimization sits a deep understanding of how HTTP works, how caching works, how browsers work, how toolchain optimizations work, and how front-end frameworks work. So what we call performance optimization is actually a deep, bottomless pool: it demands a complete knowledge system and rich experience, not just a memorized list of optimization points.
So when we talk about performance tuning here, we won't just recite those familiar tricks; we'll dig into the principles behind them and then summarize them into a plan. Now, let's take them one at a time.
4. From what angles should we talk about performance optimization?
Optimizing from the perspective of how the browser works
I've written an article before about how browsers work, which covers the details.
As you can see above, the process breaks down into these steps.
The whole process converts a URL into a bitmap: first the browser sends a request to the server, the server returns HTML, the browser parses the HTML, builds the DOM tree, computes CSS properties, performs layout, and finally paints the bitmap, which is then displayed through operating system or hardware APIs.
Among these steps are layout (reflow) and paint, two very important stages of the browser's critical rendering path, and also the most resource-hungry. Much of our performance optimization can happen in these two steps.
Layout and Drawing
Layout is concerned with the geometry of elements: width, height, and position. Let's look at which operations trigger layout, so that we can avoid them in our code and thereby improve performance:
- Adding or removing elements
- Manipulating styles
- display: none
- Reading offsetLeft, scrollTop, clientWidth, and similar properties
- Moving an element's position
- Resizing the browser window or changing the font size
Let's go back to Bilibili's flame chart. The purple part is layout. During layout there is a classic problem called layout thrashing (layout jitter), which makes the page feel very sluggish. So-called layout thrashing is caused by repeatedly forcing layout in quick succession, as in the sketch below.
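A minimal sketch of the anti-pattern (the .item selector is a made-up example):

```javascript
// Anti-pattern: alternating reads and writes forces a synchronous layout
// (reflow) on every iteration, which is what produces layout thrashing.
const items = document.querySelectorAll('.item'); // hypothetical elements
items.forEach((item) => {
  const width = item.offsetWidth;        // read: forces layout
  item.style.width = width / 2 + 'px';   // write: invalidates layout again
});
```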
How can we optimize performance at this stage without compromising results?
1. Avoid reflow
For example, to move an element we can use CSS animation and let the compositing step handle it, or use a virtual DOM to apply the smallest possible set of layout changes.
2. Separate reads from writes
This uses the browser API requestAnimationFrame: read layout data in the current frame and defer the writes to the next frame, which achieves read/write separation. There is a library in the community, fastdom, that solves exactly this problem.
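A minimal sketch of the idea (fastdom automates this pattern; the .item selector is a made-up example):

```javascript
// Sketch of read/write separation: batch all reads first, then do the
// writes in the next frame via requestAnimationFrame.
const items = document.querySelectorAll('.item'); // hypothetical elements
const widths = [];

items.forEach((item) => {
  widths.push(item.offsetWidth);             // phase 1: reads only, one layout
});

requestAnimationFrame(() => {
  items.forEach((item, i) => {
    item.style.width = widths[i] / 2 + 'px'; // phase 2: writes only
  });
});
```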
Painting, by contrast, only affects the appearance and style of elements, not the layout, for example background-color.
As shown in the figure above, the green part is the paint step. To improve efficiency the browser uses a compositor thread for this stage. It is similar to layers in Photoshop: the browser also promotes some boxes onto their own layers, so that modifying one layer does not affect the layout and painting of the rest of the page.
As shown in the figure above, this is the compositing process: it still triggers style calculation, but instead of a repaint it goes through the compositor.
So how do we rely on compositing as much as possible and avoid repainting?
- Use the will-change property to promote an element to its own layer
- If the page needs animation, use CSS3 animations driven by compositor-friendly properties such as transform and opacity
3. Throttle high-frequency events. In complex page interactions such as dragging, scrolling, and rapid clicking, events fire at a very high frequency, far above 60Hz, so we use debounce and throttle functions to slow down how often we handle them. The principle is simple: a timer delays or spaces out the event callbacks, as in the sketch below.
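Minimal sketches of both helpers, built on plain timers (onScroll is a placeholder handler name):

```javascript
// Debounce: run only after the events have gone quiet for `delay` ms.
function debounce(fn, delay) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delay);
  };
}

// Throttle: run at most once per `interval` ms.
function throttle(fn, interval) {
  let last = 0;
  return function (...args) {
    const now = Date.now();
    if (now - last >= interval) {
      last = now;
      fn.apply(this, args);
    }
  };
}

// Usage sketch: window.addEventListener('scroll', throttle(onScroll, 100));
```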
4. Use browser APIs to reduce page jank
We know that React 16 has the Fiber architecture, which under the hood uses the browser API requestIdleCallback to minimize page jank. So why not also consider requestIdleCallback, requestAnimationFrame, and similar APIs when solving jank problems ourselves?
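A small sketch of the requestIdleCallback idea: chunk non-urgent work into the browser's idle periods (buildLowPriorityTasks is a hypothetical task queue):

```javascript
// Sketch: run low-priority work only when the browser has spare time.
const tasks = buildLowPriorityTasks(); // hypothetical queue of small functions

function runWhenIdle(deadline) {
  // Keep working while this frame still has idle time left.
  while (deadline.timeRemaining() > 0 && tasks.length > 0) {
    const task = tasks.shift();
    task();
  }
  if (tasks.length > 0) {
    requestIdleCallback(runWhenIdle); // reschedule the rest
  }
}

requestIdleCallback(runWhenIdle);
```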
How to write high-quality optimized code
In today's framework world, React, Vue, and Angular split the landscape three ways, but inside a framework's programming paradigm we tend to ignore performance at the level of our own code, assuming the framework authors have taken care of it, as with React Fiber. What we forget is that a framework must first be maintainable and general-purpose, so what it really guarantees is that we can still get decent performance without manual optimization, not the best possible performance. In essence it is still manipulating the DOM; the framework just does it for you while you describe your intent.
Here’s an example:
As shown above, a recent Vue project of mine had a serious performance bottleneck. After the first render there are two very long long tasks blocking page rendering, and the page feels very janky. The essential reason is table rendering: rendering large amounts of data is very prone to performance bottlenecks. Although Vue has a virtual DOM and a diff algorithm, they are not free and bring considerable overhead of their own. This is where manual optimization is needed, for example adding virtual scrolling.
The next step is to understand, from the bottom up, where the cost of executing JS actually goes.
As you can see above, the app.js file takes over 700 milliseconds just to compile and parse.
Looking further, we find that besides compiling and parsing scripts and GC (garbage collection), a lot of time still goes into executing the code itself. So for this step we have several approaches:
- Reduce resource size: compress the code and use tree shaking to cut dead weight
- Split the code and load it on demand, so that JS that isn't needed yet never blocks the page (see the sketch after this list)
- Avoid large inline scripts, since the browser cannot apply its parsing optimizations to them; keep them small
- Write code that plays well with the browser's engine
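As a quick illustration of the code-splitting point, a sketch using a dynamic import() (the module and element ids are made up):

```javascript
// Sketch: load a heavy module on demand with a dynamic import(), so bundlers
// like webpack split it into its own chunk and the main bundle stays small.
document.getElementById('show-chart').addEventListener('click', async () => {
  const { renderChart } = await import('./heavy-chart.js'); // hypothetical module
  renderChart(document.getElementById('chart'));
});
```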
Let's break these down one by one. The first three are fairly self-explanatory, so let's focus on how to write browser-friendly code.
We know that the JS engine in the browser is V8, and under the hood V8 does a lot of work for us. For example, just as TCP streams data, V8 has script streaming: it starts pre-parsing a script before the download has even finished. It also does things like bytecode caching and lazy parsing. What we should do is cater to the engine, for example:
- 1. Keep operands of the same type when adding; mixing types costs extra work.
- 2. Functions are lazily parsed by default; wrap a function declaration in parentheses when you need it parsed immediately.
- 3. Code that calls the same method repeatedly runs faster than code that calls a different method each time (this is why abstraction and encapsulation matter).
- 4. Always instantiate your object properties in the same order, avoid hidden-class changes, and try to avoid adding members after instantiation (see the sketch after this list).
- 5. Avoid reading beyond the length of an array, and avoid type coercion.
- 6. At the HTML level, minimize iframes, avoid table layout, avoid deeply nested nodes, and reference scripts as external files.
- 7. At the CSS level, reduce the render blocking caused by CSS loading, and use GPU-accelerated rendering.
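A small sketch of point 4, showing why stable object shapes matter to V8 (Point is a made-up constructor for illustration):

```javascript
// Good: every Point is created with the same properties in the same order,
// so instances share one hidden class and property access stays fast.
function Point(x, y) {
  this.x = x;
  this.y = y;
}
const a = new Point(1, 2);
const b = new Point(3, 4);   // a and b share the same hidden class

// Bad: adding members after instantiation changes the object's shape,
// forcing a hidden-class transition and de-optimizing access.
const c = new Point(5, 6);
c.z = 7;                     // c now has a different shape than a and b
```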
How to optimize resources to reduce TTFB and rendering time
Honestly, the code-level optimizations listed above bring fairly small gains. If you look carefully at well-known open-source libraries such as Vue or Element, they don't apply every one of those optimizations, because sacrificing a little performance for maintainability is perfectly reasonable. When writing code there is a balance to strike: you can give up some of these optimizations for maintainability's sake, but you should know why, so that you can still use them in an interview or on a future project.
Resource-level optimization, by contrast, is visible everywhere and its effect can be measured precisely: compressing and merging file resources, choosing the image format the browser parses fastest, lazy-loading images that don't need to appear immediately, and asking whether custom fonts will hurt performance.
We know that at the network protocol level, the smaller the resource, the shorter the transfer time, so we need to optimize at the file resource level. In fact, no matter how the techniques change, they always follow these two principles:
- Reduce the number of HTTP requests
- Reduce the size of each requested resource
Around these two points a great many optimization techniques have accumulated. I'll mention a few that many veteran front-end developers will never forget, even though faster networks and better tooling have mostly consigned them to history: CSS sprites, using Gulp to compress and merge HTML, CSS, and JS resources, and using imagemin to shrink images. Today's engineering tooling largely hides these problems from us, so we can focus on development and describing intent. Still, there are some things we should pay attention to:
- 1. For large images, JPG is usually the most appropriate choice: high compression ratio, good quality, and a size that stays reasonable.
- 2. Resources such as images that are not needed for the first screen should be lazy-loaded; no matter how far the toolchain advances, this is an optimization we must do ourselves (see the sketch after this list).
- 3. If you use custom fonts, use the font-display property so the text degrades gracefully and appears first even if the font resource is slow to load.
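A minimal sketch of point 2, lazy-loading images with IntersectionObserver (the .lazy class and data-src attribute are assumptions about the markup):

```javascript
// Sketch: only swap in the real image URL once the image scrolls into view.
// Markup assumption: <img class="lazy" data-src="real-image.jpg">
const io = new IntersectionObserver((entries, observer) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src;  // start the real download only when visible
      observer.unobserve(img);
    }
  });
});

document.querySelectorAll('img.lazy').forEach((img) => io.observe(img));
```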
Use the tool chain to optimize at the build step
We've talked about quite a few optimization approaches in this article, but none of them can get around making good use of the toolchain, and neither can we: used properly, tooling lets us balance maintainability with better performance. And once we mention the toolchain, we can't avoid the new generation of build tools: webpack, Rollup, and so on. Today we'll look at webpack, the old workhorse among build tools.
Its original purpose is to act as a middleman so developers can use new language features while browsers can still run the output. But thanks to its powerful plugin and loader mechanisms, it also performs optimizations for us. For details, see my earlier article on webpack optimization, which tackles large bundles, long build times, and slow rebuilds.
Optimization at the transport level: the network protocol layer
This is one of the most profitable classes of optimization. The techniques above have limited leverage; at the transport level you often get twice the result for half the effort.
Enable gzip
We know our code is minified, but it is not actually compressed for transfer the way an archive is, and that is where gzip comes in. Let's see what gzip is.
Gzip is the name of a family of file compressors, and gzip encoding over HTTP is a technique for improving the performance of web applications. High-traffic web sites often use gzip compression to make pages feel faster: a module installed on the web server compresses page content before transmitting it to the visitor's browser. Plain-text content can generally be compressed to around 40 percent of its original size, so it transfers quickly and the page you clicked displays sooner. Of course this adds some load on the server, and most common servers ship with a module for it.
Not every browser supports gzip. The client declares its support through the Accept-Encoding request header, which lists the compression methods the browser accepts. On the server you configure the compression method, the file types to compress, and the compression level. When the client sends a request, the server parses the request header; if the client supports gzip, the requested resource is compressed and returned in the response with a header indicating gzip was used, and the browser decompresses it accordingly.
It is also very simple to turn on: web servers such as Nginx and Node can enable it with a small amount of configuration.
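A minimal sketch of enabling it in a Node service, assuming Express and the community compression middleware (nginx has an equivalent gzip directive):

```javascript
// Sketch: gzip responses from a Node/Express server.
const express = require('express');
const compression = require('compression');

const app = express();
app.use(compression());           // compresses responses when the client
                                  // sends Accept-Encoding: gzip
app.use(express.static('dist'));  // serve the built front-end assets

app.listen(3000);
```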
Enable keep-alive
The intent of keep-alive in the HTTP protocol is to reuse a connection over a short period of time, so that multiple requests and responses can be carried over the same connection.
In general, a web page has many components. Besides the text content there are static resources such as JS, CSS, and images, and sometimes asynchronous AJAX requests; only after all of them load do we see the complete page. But a page may reference dozens of JS and CSS files and hundreds of images, and if every single resource required creating a new connection and then closing it, the cost would be far too high.
In this context we want connections to be reused over a short period: while loading the same page, reuse the connection as much as possible. That is exactly what the keep-alive property of the HTTP protocol is for.
Since HTTP/1.1, keep-alive has been enabled by default, so we usually don't need to pay special attention to it.
HTTP cache
The diagram above is a flowchart of HTTP caching. In my opinion, no other optimization compares to caching for the experience it gives: it saves the cost of re-fetching static file resources and makes repeat visits far better. And it follows only two rules:
- 1. Strong (mandatory) caching: the server tells the browser a cache lifetime, and within that lifetime subsequent requests use the local cache directly.
- 2. Negotiated caching: the ETag and Last-Modified stored with the cached response are sent to the server on the next request; the server validates them, returns a 304 status code, and the browser uses its cached copy directly.
We can also connect caching with the toolchain to give users an even better experience. For example, when webpack bundles, it can detect which files changed: only changed files get a new hash in their file names while the rest stay the same, so after a deployment users only re-download the files that actually changed. That reduces resource downloads and gets us closer to the best possible performance.
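A sketch of such a webpack output configuration, assuming webpack 5:

```javascript
// Sketch: content-hashed file names so unchanged chunks stay cached
// by the browser after a deploy; only changed files get new names.
module.exports = {
  output: {
    filename: '[name].[contenthash].js',
    chunkFilename: '[name].[contenthash].js',
  },
  optimization: {
    // keep module ids stable so hashes don't change unnecessarily
    moduleIds: 'deterministic',
  },
};
```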
service worker
A service worker is a middleman between the server and the browser. If a service worker is registered for the site, it can intercept all of the site's requests and decide how to handle them (you write the decision logic yourself): if a request needs to go to the server it is forwarded, and if the cache can serve it directly, the cached response is returned without ever touching the server. This greatly improves the browsing experience.
It has two characteristics
- 1. Speed up repeat visits
- 2. Offline support
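A minimal sketch of the idea, assuming the worker script lives at /sw.js (a hypothetical path):

```javascript
// In your page code: register the service worker.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js'); // hypothetical path
}

// In sw.js: answer repeat requests from the cache first,
// falling back to the network when there is no cached copy.
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => {
      return cached || fetch(event.request);
    })
  );
});
```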
Unfortunately, as of 2021 its compatibility is still in doubt, so it is not widely adopted.
Use SSR to reduce the browser's load and speed up first-screen rendering
SSR is actually a very old technique; it has existed for a long time, but the popularity of Vue and React reinvented traditional SSR so that the front end could also join the SSR wave. The principle is very simple: the first-screen content is assembled into an HTML string on the server, and the client only has to parse it. This cuts down JS execution time on the client and renders the page quickly, achieving the performance goal.
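A minimal sketch of the idea, assuming Vue 2 with vue-server-renderer and Express (the component here is a stand-in, not a real app):

```javascript
// Sketch: render the first screen to an HTML string on the server.
const express = require('express');
const Vue = require('vue');
const { createRenderer } = require('vue-server-renderer');

const app = express();
const renderer = createRenderer();

app.get('*', (req, res) => {
  const vm = new Vue({
    data: { url: req.url },
    render(h) {
      return h('div', `First-screen content for ${this.url}`); // stand-in content
    },
  });
  renderer.renderToString(vm, (err, html) => {
    if (err) return res.status(500).end('render error');
    // The client receives ready-made HTML and only has to parse it.
    res.end(`<!DOCTYPE html><html><body>${html}</body></html>`);
  });
});

app.listen(8080);
```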
conclusion
With all of the above worked through, the last two questions posed at the beginning of the article become clear as well, and the underlying principles have been laid out in broad strokes.
In fact, every one of these directions holds knowledge worth digging into deeply, and that depth constantly reminds us how much we still don't know. Yet I often find that many people really do stay on the surface: they enjoy the dividends and the illusion the Internet brings, mistakenly believe they are strong, and always love to point fingers. After thinking for a long time, I asked myself: when the over-heated Internet boom passes, what will I have left? That I can use the Vue and React APIs?
So I've recorded this article and will slowly conquer each of these directions, in the hope that it adds a brick to your knowledge system. If there are mistakes, please criticize and correct me!