The significance of front-end performance optimization
The Internet is developing rapidly: websites now carry more content, offer more features, and look better than ever. The trade-off is that the more a page contains, the slower it loads, and users, of course, want pages to load as fast as possible. So the job of the front-end engineer keeps getting more challenging: only by continually optimizing our sites' performance can we keep up with users' expectations.
Performance is the backbone of websites and applications. If your site performs poorly, users will stop visiting it, so performance is directly tied to the site's bottom line. Users are what matter most to a website, because users bring business. Take an e-commerce site: more users means more product views and more orders, and the share of visitors who place an order is the conversion rate; the more people order, the more profit the business makes. You also want enough visitors to generate advertising revenue, or to partner with other platforms to sell goods. Furthermore, today's search engines evaluate website performance, and high-performance sites appear nearer the top of results, so a slow site falls behind in search rankings. In short, a site's performance directly affects its user experience, traffic, search ranking, and conversion rate, which is why performance optimization is a core concern of every front-end engineer.
Important performance metrics and optimization goals
Performance metrics are the criteria we refer to when optimizing a website, standards the industry and our predecessors have already summarized for us. What we need to do is stand on the shoulders of giants and optimize against these metrics. Let's visit Taobao's page to see what the important metrics look like: open the page, open the console, click the Lighthouse tab, and generate the following performance report:
The score on this run is 48. Every run is affected by network conditions, so results can differ from run to run; it is therefore recommended to test a page several times and take the average. Now let's analyze the results. The metrics we care about are First Contentful Paint, Speed Index, and Time to Interactive. First Contentful Paint marks the time when the first piece of content (text or an image) is painted, that is, when the screen stops being blank. Here it takes 1.6s and shows green, meaning it is doing well and gives the user a good experience. The current standard for Speed Index is 4s; Taobao's Speed Index is 2.8s, indicating the page renders very quickly. Time to Interactive marks when the user can start interacting. Click the Network tab to view the status of each request.
Here I'm interested in the TTFB metric, which measures how long it takes from sending a request to receiving the response. These are network-loading metrics, but we should also care about the interaction experience after the page has loaded: interactive responses should be fast enough, animation should be smooth enough (a high enough frame rate), and asynchronous requests should be fast enough (data returned within 1s; if that is not possible, add a loading animation to preserve the user experience).
RAIL measurement model
The RAIL model is a measurement standard proposed by Google to quantify website performance and guide our optimization work. Each letter has a specific meaning:
R: Response (the experience of the site responding to user input)
A: Animation (whether animation is smooth, i.e. 60 frames per second)
I: Idle (whether the browser has enough idle time; this complements Response)
L: Load (page load time)
RAIL's goal: make a good user experience the goal of performance optimization.
RAIL evaluation criteria:
Response: event handling should complete within 50ms.
Animation: produce a frame every 10ms.
Idle: maximize the browser's idle time.
Load: content should load and become interactive within 5s.
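The Response criterion can be checked at runtime: the browser's Long Tasks API reports main-thread work that blocks for more than 50ms. A minimal sketch (the logging is illustrative):

```js
// Log every main-thread task that blocks for more than 50ms,
// i.e. work that would break RAIL's 50ms Response budget.
const longTaskObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`Long task: ${Math.round(entry.duration)}ms`, entry);
  }
});
longTaskObserver.observe({ entryTypes: ['longtask'] });
```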
Several performance measurement tools: Chrome DevTools for development, debugging, and performance analysis; Lighthouse for an overall quality assessment; WebPageTest for testing from multiple locations with comprehensive performance reports.
Commonly used performance measurement APIs
Web standards APIs
TTFB, Time to Interactive, and the other key time points mentioned above are exposed by browsers through standardized APIs, so we can obtain these timing values and other performance-related data directly.
Critical time points
The browser exposes an important object, performance, from which much of the timing data can be retrieved. One of its methods, getEntriesByType, returns, for the navigation type, all the important time points associated with network loading. Take Time to Interactive as an example:
```js
let timing = performance.getEntriesByType('navigation')[0];
let tti = timing.domInteractive - timing.fetchStart;
```
Formulas for other important time points:

- DNS resolution time: domainLookupEnd - domainLookupStart
- TCP connection time: connectEnd - connectStart
- SSL connection time: connectEnd - secureConnectionStart
- Network request time (TTFB): responseStart - requestStart
- DOM parsing time: domInteractive - responseEnd
- Resource loading time: loadEventStart - domContentLoadedEventEnd
- First byte time: responseStart - domainLookupStart
- Blank-screen time: responseEnd - fetchStart
- First interactive time: domInteractive - fetchStart
- DOM Ready time: domContentLoadedEventEnd - fetchStart
- Full page load time: loadEventStart - fetchStart
- HTTP header size: transferSize - encodedBodySize
- Redirect count: performance.navigation.redirectCount
- Redirect time: redirectEnd - redirectStart
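As a sketch, several of these formulas can be computed in one go from the navigation entry (the metric names in the object are mine):

```js
// Compute a few of the timing formulas above from the navigation entry
const t = performance.getEntriesByType('navigation')[0];
const metrics = {
  dns: t.domainLookupEnd - t.domainLookupStart,
  tcp: t.connectEnd - t.connectStart,
  ttfb: t.responseStart - t.requestStart,
  domReady: t.domContentLoadedEventEnd - t.fetchStart,
  fullLoad: t.loadEventStart - t.fetchStart,
};
console.table(metrics);
```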
Network APIs
Browsers provide an API for reading the user's network state, so we can load resources according to the current connection: when the network is good, load HD images; when it is poor, load smaller, lower-resolution images instead, giving users a better experience.
```js
// Monitor the user's network state
let connection = navigator.connection || navigator.mozConnection || navigator.webkitConnection;
let type = connection.effectiveType;

function updateConnectionStatus() {
  console.log("connection type changed from " + type + " to " + connection.effectiveType);
  type = connection.effectiveType; // remember the new state for the next change
}

connection.addEventListener('change', updateConnectionStatus, false);
```
Page visibility state (UI APIs)
A practical example is to monitor if a user leaves the page.
```js
// Fall back to the prefixed event name on older WebKit browsers
let vEvent = 'visibilitychange';
if (document.webkitHidden !== undefined) {
  vEvent = 'webkitvisibilitychange';
}

function visibilityChanged() {
  if (document.hidden || document.webkitHidden) {
    // The page is not visible
    console.log("web page is hidden");
  } else {
    // The page is visible
    console.log("web page is visible");
  }
}

document.addEventListener(vEvent, visibilityChanged, false);
```
These are some of the commonly used APIs. If you are interested, see developer.mozilla.org/zh-CN/docs/… for more. Now let's move on to how to actually optimize performance. I've divided the discussion into several modules: rendering optimization, code optimization, resource optimization, build optimization, and transfer and loading optimization.
Rendering optimization
In this section we discuss how browsers render pages. The rendering process has many parts; here I focus on the critical rendering path. Only by understanding the steps rendering goes through can you optimize them. Once web resources are loaded and the scripts and CSS are parsed, the browser has to understand them and draw the page: that is the rendering process, and what it does can be observed through performance analysis.
The image above lists the tasks keeping the Main thread busy. You will commonly see repeated tasks such as Recalculate Style, Layout, and Paint; these are the important stages of the rendering process. In fact, browsers go through five key steps to render a page, which together form the critical rendering path. Whether on first load or on a later style change, these five steps are what put the page in front of the user. When JavaScript makes a visual change, such as adding a DOM element, the browser recalculates styles, working out from selector matching which elements' CSS has changed. Then comes layout, placing elements at the size and position their styles specify. Next is paint, drawing the elements onto the page. The final step is compositing: for efficiency, browsers do not draw all elements on a single layer; they draw them on separate layers and then composite the layers together into the finished page.
What does the browser do once it gets resources back from the server? HTML, CSS, and JS are essentially text, and a computer cannot work with raw text directly, so the first step is for the parser to translate that text into data structures it can understand. Take HTML: the text is first broken into individual characters. HTML marks its tags with '<>', so the parser can recognize character sequences as meaningful tags; these are eventually converted into node objects linked into a tree structure, the HTML DOM tree. The following figure illustrates the nesting relationships described by the HTML.
The CSS part works the same way: when the parser encounters a reference to an external stylesheet, it downloads the resource first and processes its text, identifying each rule, determining which nodes each style applies to, and storing those relationships in a tree structure, as shown below:
So the browser's first step after fetching resources is to build two object models: the DOM tree and the CSSOM tree.
Step 2: merge the two trees to build the render tree, so the browser knows what will ultimately be drawn on the page.

The result of the merge is shown below: elements with display: none are removed, leaving only the nodes that will actually appear on the page.

With this tree, the browser knows what size each node is and where to draw it. Next, let's look at the browser's layout and paint process.
Layout and drawing
Layout and paint are the two most important steps in the critical rendering path, and also the most expensive, so first understand what they do, and then look at how to reduce or even avoid them. Layout computes the exact position and size of each node based on the box model. Paint is the process of turning each node into pixels. The critical rendering path always runs in full on the first visit; afterwards it can be triggered by animations or by user interactions that change the page visually. So can we avoid triggering layout and paint in certain situations? Yes, because some style changes trigger neither. If a change does not affect an element's size or position (height, offsets, and so on), for example changing a background color or a shadow, layout is not triggered. And some animations can be GPU-accelerated: they composite directly, with no layout and no repaint. Operations that trigger reflow (layout):
- Adding/removing elements
- Changing styles
- display: none;
- Reading offsetLeft, scrollTop, clientWidth, etc.
- Moving elements
- Resizing the browser window or changing the font size
For example, change the image size to see what happens.
```js
let cards = document.getElementsByClassName('MuiCardMedia-root');
const update = () => {
  cards[0].style.width = '800px'; // note: the unit belongs inside the string
};
window.addEventListener('load', (e) => {
  update();
});
```
Open DevTools, select the Performance tab, and run an analysis. You can see that a style recalculation and a layout happen after the load event.
This example only changes one element's size and does not affect other elements, so the cost is relatively small. But a reflow can affect not only the element itself but also the positions of other elements; imagine the cost of the cascade it triggers. It can even make the page stall. Therefore, avoid operations that cause reflow to keep page performance up.
Sometimes, though, reflow is unavoidable, and when it is, layout thrashing can appear. Let's look at layout thrashing through an example: change the width of every image on the page in an endless loop.
```js
window.addEventListener('load', (e) => {
  let cards = document.getElementsByClassName('MuiCardMedia-root');
  const update = (timestamp) => {
    for (let i = 0; i < cards.length; i++) {
      // Reading offsetTop right after writing width forces a synchronous layout
      cards[i].style.width = ((Math.sin(cards[i].offsetTop + timestamp / 1000) + 1) * 500) + 'px';
    }
    window.requestAnimationFrame(update);
  };
  window.requestAnimationFrame(update);
});
```
You'll notice that every image on the page is animated (imagine it), but none of the animations are smooth: the page is very janky. Now let's see what the performance analysis says. A huge amount of Layout is happening, with warnings that forced reflow (Forced reflow) occurred.
The problem is in the for loop. The browser tries to defer layout-affecting writes to improve layout performance, but some operations cannot be deferred, such as reading a layout property like offsetTop: the browser must run the latest layout calculation to guarantee an up-to-date result. So before width is assigned, the browser is forced to compute offsetTop, and only then does the write to width happen. The loop interleaves reads and writes continuously, and every read forces an immediate layout recalculation, producing continuous forced reflow and making the page's layout thrash and stutter. How to avoid this: (1) avoid reflow; (2) separate reads from writes, doing all the reads in a batch and then all the writes in a batch.
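Before reaching for a library, the read/write separation can be sketched by hand: read all the layout values first, then write all the styles. (A minimal sketch; the variable names match the earlier example.)

```js
const update = (timestamp) => {
  // Phase 1: batch all reads (layout queries) first
  const tops = [];
  for (let i = 0; i < cards.length; i++) {
    tops[i] = cards[i].offsetTop;
  }
  // Phase 2: batch all writes; no read follows a write, so no forced reflow
  for (let i = 0; i < cards.length; i++) {
    cards[i].style.width = ((Math.sin(tops[i] + timestamp / 1000) + 1) * 500) + 'px';
  }
  window.requestAnimationFrame(update);
};
```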
The FastDom library does exactly this: it puts read operations in measure() and write operations in mutate(). Address: github.com/wilsonpage/…
```js
const update = (timestamp) => {
  for (let i = 0; i < cards.length; i++) {
    // Reads are batched in measure, writes in mutate
    fastdom.measure(() => {
      let top = cards[i].offsetTop;
      fastdom.mutate(() => {
        cards[i].style.width = ((Math.sin(top + timestamp / 1000) + 1) * 500) + 'px';
      });
    });
  }
  window.requestAnimationFrame(update);
};
window.addEventListener('load', (e) => {
  window.requestAnimationFrame(update);
});
```
With FastDom, the page animation no longer stutters, and the performance analysis shows no forced reflow warnings.
Compositing threads and layers
Compositing is the last step in the critical rendering path and is closely tied to rendering efficiency. Browsers composite in order to render more efficiently: the page is split into different layers, so when a visual change happens it only affects one layer, the other layers are untouched, and drawing gets done faster. The compositor thread's job is to take the page's layers, have them drawn, and recompose them. So how does a page get split into layers, and by what rules? By default this is left to the browser, which analyzes how elements affect one another; if an element affects others too much, the browser promotes it to its own layer. The benefit is that when that element changes, only its layer needs redrawing and the rest of the layout is unaffected. Besides the browser's default rules, we can also promote elements to their own layers explicitly. You can use DevTools to inspect how a page is split into layers.
Check Layers to see the page split into two layers; hovering over a layer highlights its area on the page. The following properties trigger only compositing, neither layout nor repaint:
```css
transform: translate(npx, npx);
transform: scale(n);
transform: rotate(ndeg);
opacity: 0 ... 1;
```
When we use the above properties, if the elements involved are promoted to their own layer, their visual changes will trigger only compositing, not layout or repaint.
The home page of the Kaola shopping site promotes the cards shown above into separate layers, because the cards animate on mouse hover and those animations should not trigger layout or repaint. The animation is a scale transform implemented with transform.
You can see that hovering only triggers Composite Layers, which greatly improves efficiency. Splitting layers clearly can improve a page's performance, but more layers are not always better: layers have their own cost, and too many can hurt performance. So promote only the specific elements that need it for the desired effect.
Reduce repaint
As mentioned above, some properties do not trigger repaint. Take transform as an example: rotate the image when the card is clicked.
```jsx
spin = () => {
  this.setState({ spinning: true });
};

render() {
  /* Pick the class according to the spinning state */
  let cardClass = this.state.spinning
    ? this.props.classes.cardSpinning
    : this.props.classes.card;
  return (
    <div className={this.props.classes.root}>
      {/* Add a click handler to start the spin */}
      <MaterialUICard className={cardClass} onClick={this.spin}>
        <CardMedia
          component='img'
          className={this.props.classes.media}
          image={this.props.image}
          title={this.props.title}
          height='200'
        />
      </MaterialUICard>
    </div>
  );
}

const styles = theme => ({
  root: { margin: theme.spacing(1) },
  card: { width: 300 },
  cardSpinning: { width: 300, animation: '3s linear 1s infinite running rotate' },
  media: { height: 200, width: 300, objectFit: 'cover' }
});
```

```css
@keyframes rotate {
  0% { transform: rotate(0deg); }
  100% { transform: rotate(360deg); }
}
```
Effect: Click on the image to start rotation.
Run a performance analysis covering the time before and after clicking the image.

Before the click there are no running tasks; the main thread is basically idle.
After the click, the animation starts and the main thread gets busy: styles are recalculated first, then the layer tree is updated, followed by compositing. No layout or paint tasks occur, which is exactly the desired effect. Now change the animation to modify the width property directly, and the page will certainly perform layout and repaint.
```css
@keyframes rotate {
  0% { width: 300px; }
  100% { width: 600px; }
}
```
For the analysis, open Rendering and check Paint Flashing: repainted elements are highlighted in green, as in the figure above. The elements' layouts affect one another, so the cost of re-running the critical rendering path grows. The takeaway from this example: use transform instead of directly modifying properties that affect layout and paint.
To promote an element animated with transform or opacity onto its own layer, set willChange to 'transform' on it (here on root), so the browser knows the element should be promoted to a separate layer.
```js
root: {
  margin: theme.spacing(1),
  willChange: 'transform'
},
```
Debouncing high-frequency events
Some events fire very frequently, even more often than the frame rate. Scroll and touch events, for example, can fire multiple times within a single frame, which makes the browser respond to them multiple times per frame; if their handlers are expensive, the frame's workload becomes heavy. In practice we don't need to handle them multiple times in one frame. With scrolling, for instance, we only care where the scroll ends up; handling every intermediate event just creates excess work, and when a frame's work cannot finish within 16ms, the page stutters. requestAnimationFrame solves this; to see why, first look at the lifecycle of a frame.
An event fires and JS triggers a visual change, at which point a frame is about to start; rAF callbacks are invoked before Layout and Paint. In other words, requestAnimationFrame is the place to do work just before layout and painting, which greatly improves efficiency. Moreover, requestAnimationFrame is scheduled so that it tries to fire before every paint, achieving up to 60fps. A ticking flag limits the firing frequency, so a high-frequency event runs not at its own rate but at requestAnimationFrame's scheduled rate.
```js
// Without batching, this work on every pointermove causes jank
function changeWidth(rand) {
  let cards = document.getElementsByClassName('MuiCardMedia-root');
  for (let i = 0; i < cards.length; i++) {
    cards[i].style.width = ((Math.sin(rand / 1000) + 1) * 500) + 'px';
  }
}

let ticking = false;
window.addEventListener('pointermove', (e) => {
  let pos = e.clientX;
  // If a frame is already scheduled, skip: at most one update per frame
  if (ticking) return;
  ticking = true;
  // Wrap the work in requestAnimationFrame so it runs once, right before paint
  window.requestAnimationFrame(() => {
    changeWidth(pos);
    ticking = false;
  });
});
```
Code optimization
This module looks at what we can do at the code level to improve page performance. Our code consists of JavaScript, HTML, CSS, images, text, and so on, and JavaScript is the most expensive of them: besides the cost of loading, JS must also be parsed and compiled and then executed, and both are time-consuming.
As the figure shows, suppose a JS file and an image are both around 170KB. In theory their network load times are the same, but the JS must then be compiled, parsed, and executed: compilation takes 2s and execution 1.5s, whereas decoding the image takes 0.064s and drawing it 0.028s. So JS is expensive, and loading JS can also block the page and hurt interactivity. Although parsing and compilation depend on the browser's engine, we can still help the process along at the code level. Two approaches: code splitting with on-demand loading, and tree shaking. And from the parsing and execution side: reduce the main thread's workload, avoid long tasks, avoid inline scripts over 1KB, and use rAF for scheduling.
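On-demand loading is usually done with dynamic import(), which bundlers such as webpack split into a separate chunk automatically. A minimal sketch (the module path and element id are hypothetical):

```js
// Load the chart module only when the user actually opens the report tab;
// webpack emits './chart.js' as its own chunk and fetches it on demand.
document.getElementById('report-tab').addEventListener('click', async () => {
  const { renderChart } = await import('./chart.js');
  renderChart();
});
```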
Working with V8 to optimize code
V8 is Chrome's JS engine. Its compilation pipeline is as follows:
After fetching a JS script, V8 parses it into an abstract syntax tree. The interpreter then works out what the code means, and the optimizing compiler applies optimizations before turning the code into machine code to run. But not every optimization holds up: when the compiler discovers that an optimization it made no longer applies, it deoptimizes, removing the earlier optimization, which actually reduces efficiency. So code-level optimization means trying to satisfy the compiler's optimization conditions: write code the way the compiler wants it, and avoid patterns that cause deoptimization. Here's an example:
```js
const { performance, PerformanceObserver } = require('perf_hooks');

const add = (a, b) => a + b;
const num1 = 1;
const num2 = 2;

performance.mark('start');
for (let i = 0; i < 1000000; i++) {
  add(num1, num2);
}
add(num1, 'str'); // changing the argument type forces V8 to deoptimize add
for (let i = 0; i < 1000000; i++) {
  add(num1, num2);
}
performance.mark('end');

const observer = new PerformanceObserver((list) => {
  console.log(list.getEntries()[0]);
});
observer.observe({ entryTypes: ['measure'] });
performance.measure('measure1', 'start', 'end');
```
Test result: running time 48.203922ms.
Remove the add(num1, 'str') line and test again: the running time drops to 27.781225ms, a clear speedup. Before that call, every invocation added two numbers, so the argument types were perfectly stable; after that many calls, add was optimized during compilation. But when add is then executed with a string as the second argument, the optimized logic no longer applies, so V8 has to undo the optimization it made, and that deoptimization introduces the delay.
Function optimization
The V8 engine parses functions lazily, meaning a function body is only parsed when the function is actually called. If a function isn't parsed, no syntax tree is created for it and no heap memory is allocated for it, which is a big performance win. Parsing comes in two flavors: lazy parsing, the default, and eager parsing. Lazy parsing undeniably improves performance, but sometimes a function needs to run immediately. The problem: if a function is to be executed right away, the default lazy parse still happens at its declaration, and when the engine then discovers the function must run immediately, it performs an eager parse, so the same function gets lazily parsed and then eagerly parsed, reducing efficiency. So we need a way to tell the parser up front that a function will be executed immediately, letting it go straight to eager parsing. How do you tell the parser in your code? Simply wrap the function in a pair of parentheses:
```js
const add = ((a, b) => a + b);
const num1 = 1;
const num2 = 2;
add(num1, num2);
```
But in a real project, JS gets minified, and minification strips the parentheses, so there is no way to notify the parser. The optimize-js plugin (github.com/nolanlawson…) can put the parentheses back for us.
Object optimization
- Initialize object members in the same order, avoiding hidden-class churn.
```js
class RectArea { // HC0
  constructor(l, w) {
    this.l = l; // HC1
    this.w = w; // HC2
  }
}
const rect1 = new RectArea(3, 4);
const rect2 = new RectArea(5, 6); // same order, so the hidden classes are reused
```
JavaScript is a weakly typed language, so we don't declare variable types when writing code; the compiler, however, ultimately needs to determine them, so during parsing it assigns each variable an appropriate type. There are as many as 21 such types, known as hidden classes, and all of the compiler's subsequent optimizations are based on them. Hidden classes were introduced by V8 to speed up property lookup: instead of a dynamic query, each object gets a hidden class (HC) recording its structure, and every creation, change, or deletion of a property changes the corresponding HC. When RectArea is declared, the first hidden class HC0 is created; assigning this.l creates HC1, and assigning this.w creates HC2. Creating one object thus creates three hidden classes, which the compiler optimizes around, and when further objects are created in the same order, those hidden classes are reused. The next example shows what happens when hidden classes cannot be reused, and why initializing objects in the same order matters.
```js
const car1 = { color: 'red' }; // HC0
car1.seats = 4;                // HC1
const car2 = { seats: 2 };     // HC2: cannot reuse HC0
car2.color = 'blue';           // HC3 has to be created again
```
Creating car1 creates the first hidden class, HC0; adding the seats property to car1 creates HC1. Then we create car2, which starts with a seats property. It cannot reuse HC0, because HC0's property is color while car2 starts by declaring seats, so a new hidden class HC2 must be created. Why can't HC1 be reused either? HC1 describes not just the seats property but color as well, and order matters: underneath, a hidden class has a descriptor array recording the order in which properties were declared (their index positions), so HC1 describes car1 as a whole. When we then add color to car2, HC3 is created. As you can see, when creating car2, none of the hidden classes built for car1 could be reused.
- Avoid adding new properties after instantiation.
```js
const car1 = { color: 'red' }; // color is an in-object property
car1.seats = 4;                // seats becomes a normal/fast property; avoid appending like this
```
The color property exists when car1 is created, so it is a built-in, in-object property. The appended seats property is a so-called normal/fast property, stored in a separate property store that the parser can only reach indirectly through the descriptor array, so looking it up is slower than looking up the object's own properties. So try not to append properties this way. It is also advisable to use the newer ES specifications where possible: they were designed with these issues in mind and help you write more standard code that the engine can optimize better.
- Use Array rather than array-like objects whenever possible.
Array-like objects resemble arrays; an example is the arguments object holding a function's argument information. It is an object, but its properties can be accessed by index and it has a length property, so it looks like an array, yet it has none of the array methods, such as forEach, which is why it is called array-like. V8 heavily optimizes real arrays for great performance, but it does not apply the same optimizations to array-like objects. You can iterate an array-like object indirectly via Array.prototype.forEach.call, but it is not as efficient as iterating a real array. The recommendation is to convert the array-like object to a real array first and then iterate it. Conversion has its own cost, but V8's experiments concluded that converting first and then iterating is still more efficient than iterating the array-like object directly, so we had better follow that advice.
```js
// Convert the array-like object to a real array, then iterate
const arr = Array.prototype.slice.call(arrObj, 0);
arr.forEach((val, index) => {
  console.log(val);
});
```
- Avoid reading beyond the length of an array (out-of-bounds reads).
```js
function foo(array) {
  // Bug: <= runs one step past the last index
  for (let i = 0; i <= array.length; i++) {
    if (array[i] > 1000) {
      console.log(array[i]);
    }
  }
}
foo([3, 4, 5]);
```
Problems with this code:

- When i equals array.length (3 here), array[3] is read out of bounds: the array has no such element.
- That makes undefined get compared with 1000.
- A property not found on the object is then looked up along the prototype chain, causing extra overhead.
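The fix is simply to keep the loop index inside the bounds; a corrected version of the function above:

```js
function foo(array) {
  // < instead of <= : never read past the last element
  for (let i = 0; i < array.length; i++) {
    if (array[i] > 1000) {
      console.log(array[i]);
    }
  }
}
foo([3, 4, 5]);
```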
- Avoid element-kind transitions.
```js
const array = [1, 2, 3];
// The elements are all small integers, so V8 can recognize that,
// assign the array the PACKED_SMI_ELEMENTS kind, and optimize for it.
array.push(1.1);
// The kind degrades to PACKED_DOUBLE_ELEMENTS: the earlier optimizations
// are invalidated, costing the compiler extra work.
```
The compiler can tell that the array's elements are integers, mark the array as receiving integers by assigning it the PACKED_SMI_ELEMENTS kind, and make a series of optimizations based on that. Once 1.1 is pushed, however, those optimizations are invalidated, the array is reassigned the PACKED_DOUBLE_ELEMENTS kind, and the compiler incurs extra overhead.
HTML optimization
There is less room for optimization in HTML, but note the following:
- Reduce the use of iframes, or lazy-load them so they don't block the parent document's load.
- Compress whitespace.
- Avoid deeply nested nodes, which consume extra memory when the DOM tree is built.
- Avoid table layouts; they are very expensive.
- Delete invalid content to reduce resource size.
- Keep CSS and JS in external files where possible.
- Remove attributes that just restate element defaults.
CSS optimization
- Reduce CSS blocking of rendering:
  (1) download the CSS as early as possible and parse it as soon as possible;
  (2) reduce the size of the CSS, loading only what the first screen or current page needs and deferring the rest.
- Use the GPU to render animations.
- Use the contain property.

contain is a feature developers can use to communicate with the browser. contain: layout tells the browser that the child elements inside the box have no layout relationship with anything outside it: changes to outside elements do not affect the elements in the box, and vice versa. This can reduce reflow and re-layout calculations. Example: codepen.io/rachelandre…
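A minimal sketch of the declaration (the class name is hypothetical):

```css
/* Tell the browser that layout inside this widget is independent
   of the rest of the page */
.sidebar-widget {
  contain: layout;
}
```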
Resource optimization
Compression and merging
Compression and merging reduce the number of HTTP requests and the size of the requested resources. The smaller a resource is, the faster it loads and the sooner it reaches the user.
HTML compression
- Use an online tool for compression
- Use npm tools such as html-minifier
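As a sketch of the npm route, html-minifier exposes a minify function (the options shown are illustrative):

```js
// Node script: minify an HTML string with html-minifier (installed via npm)
const { minify } = require('html-minifier');

const result = minify('<p  title="intro"   id="demo">hello   world</p>', {
  collapseWhitespace: true,     // drop redundant whitespace
  removeAttributeQuotes: true,  // strip quotes where HTML allows it
});
console.log(result);
```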
CSS compression
- Use an online tool for compression
- Use npm tools such as clean-css
JS compression
- Use an online tool for compression
- Use Webpack to compress JS at build time
CSS & JS merging
Image optimization
The following figure summarizes the image optimization options.
- Choose the right format
Each image format has its own strengths and weaknesses, and specific formats have clear advantages in particular scenarios.

Image format comparison
- JPEG/JPG

Pros: JPG is a compressed format that reduces the image's size while keeping color reproduction very good; in other words, a high compression ratio with good color retention. At 50% compression, image quality stays at around the 60% level.

Cons: edges become jagged after compression. If an image emphasizes texture or edges (logo or icon design), JPG is a poor fit: it can make a logo's edges look noticeably rough.

Use case: when large images need to be shown at good quality, JPG is the first choice, e.g. a first-screen carousel image.
- PNG

Pros: makes up for JPG's weaknesses; texture and edges come out very well.

Cons: more detail retained means a relatively large file size.

Use case: small icons or logos.
- WebP

Pros: the same quality as PNG with a higher compression ratio.

Cons: not yet a universal standard; it was proposed by Google, and some browsers may not support it.
- Size your images appropriately

Don't ship oversized images to the client; anything beyond what is displayed is wasted network transfer. Serve images at the size actually needed.
- Adapt to different screens

Produce images in different sizes to fit different screen sizes.
- Compression
- Prioritize critical images

Load important images (such as first-screen images) first.
- Lazy-load the rest

Load some images up front, and load the remaining images as the user scrolls down the page.
- Take care with tools
Image loading optimization
Lazy loading
- Native image lazy loading

Add loading="lazy" to the img tag.

This requires browser support, and customization and extensibility are limited, so third-party plugins are usually used for lazy loading.
- Third-party image lazy loading

Related plugins include verlok/lazyload and yall.js, among others.
Use progressive images
A progressive image goes from blurry to sharp, from low resolution to full resolution, so the user sees the complete picture from the start and the experience is better. Ask the UI designers to produce progressive images during development.
Use responsive images
On devices with different screen sizes, an appropriately sized image gives users the best visual experience. Use the srcset attribute on the img tag to declare a series of images at different widths for different screens, as follows:
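The original markup isn't reproduced here; a sketch of the kind of tag described (the file names are hypothetical, and the candidate widths follow the discussion below):

```html
<img src="photo-800.jpg"
     srcset="photo-800.jpg 800w,
             photo-1400.jpg 1400w,
             photo-1800.jpg 1800w"
     sizes="100vw"
     alt="responsive image example">
```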
Based on the screen, the browser downloads the single most appropriate image and shows it to the user. The choice is driven by sizes, set here to 100vw, meaning the image spans 100% of the viewport width. For example, on a screen 1440 wide (more than 1400 but less than 1800), the browser picks the 1800w image.
Font optimization
The most common font problem: before a web font finishes downloading, the browser either hides the text or renders it with a fallback, causing a flash. There are two cases:
Flash of Invisible Text (FOIT): the text goes from invisible to visible.

Flash of Unstyled Text (FOUT): the text first renders in a default style and is then re-rendered in the intended font, which also causes a flash.
Neither problem is entirely avoidable, because downloading a font takes time, and the browser must choose: wait for the font before displaying any text, or show fallback text first and re-render once the font arrives. We generally prefer the latter, showing the user something immediately even at the cost of a flash. The font-display property controls this browser behavior; although relatively new, it is supported by almost all browsers.
font-display has five possible values: auto / block / swap / fallback / optional.
The figure describes the behavior of block / swap / fallback / optional very clearly.
block: text is hidden at first; after about 3s, the font is used if it has downloaded, otherwise the fallback font is shown instead.

swap: the fallback font is shown from the start and swapped out once the required font downloads.

fallback: an optimization of block; after about 100ms, either the downloaded font or the fallback is shown.

optional: after about 100ms, if the font has not downloaded, the fallback is used, and it is not swapped even once the font arrives.
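In CSS, the property sits inside the @font-face rule; a minimal sketch (the font name and URL are hypothetical):

```css
@font-face {
  font-family: 'MyWebFont';
  src: url('/fonts/mywebfont.woff2') format('woff2');
  /* Show fallback text immediately, swap in the web font when it arrives */
  font-display: swap;
}
```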
Build optimization
Optimizing the webpack configuration
Tree-shaking
Tree-shaking is a way of reducing resource size. A lot of code is not actually needed in the final production bundle, so it can be removed before packaging. The precondition for tree-shaking is that the code is modular, written with ES6 import and export. webpack 4 provides two modes, development and production; production mode enables the TerserPlugin by default, and tree-shaking is performed together with it. The rough idea: start from the entry file, the root of the tree, and see what the entry references; then keep analyzing what each referenced package or module itself references. Everything that is needed is kept, while things that are imported but never used get shaken off, so the final bundles contain only the code that really runs. Tree-shaking has its limits: it requires ES6 module syntax, and some code has side effects, such as adding or modifying properties in the global scope, that no export captures; shake that out and the code breaks. We can tell webpack what tree-shaking must not remove by adding a sideEffects field to package.json listing the files to keep. For example, CSS imports are not written in an ES-module way and could be shaken off; listed under sideEffects, the CSS code is not removed.
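A sketch of the package.json field (the paths are hypothetical):

```json
{
  "name": "my-app",
  "sideEffects": [
    "*.css",
    "./src/polyfills.js"
  ]
}
```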
Finally, note the effect of the Babel configuration. Presets are commonly used: a preset bundles a set of common Babel plugins so they can be enabled in one go, the most common being preset-env. There is one pitfall: during transformation, the preset converts ES6 module syntax to another module format, but tree-shaking needs ES6 modules preserved, so add a modules option set to false, telling the preset not to convert to another module syntax. That is what lets tree-shaking work.
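A sketch of that Babel setting:

```js
// babel.config.js: keep ES module syntax so the bundler can tree-shake
module.exports = {
  presets: [
    ['@babel/preset-env', { modules: false }],
  ],
};
```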
Webpack dependency optimization
Optimizing dependencies can speed up webpack's packaging, and there are two ways to do it. The first is the noParse option: "no parse" simply tells webpack to skip parsing certain large libraries. A referenced third-party utility library may be large, not written in a modular way, have no external dependencies, and be entirely self-contained; libraries like that do not need to be parsed.
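A sketch of the option, assuming lodash is the library in question:

```js
// webpack.config.js: tell webpack not to recursively parse lodash
module.exports = {
  module: {
    noParse: /lodash/,
  },
};
```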
Setting noParse in the module section to match lodash tells webpack that recursively parsing lodash is unnecessary. The second way to speed up builds is the DllPlugin, which avoids repeatedly building libraries that don't change. For example, react and react-dom are referenced throughout the project; if they can be extracted into something like a dynamic link library, each build no longer rebuilds them and simply references the fixed, previously built artifacts. Create a webpack.dll.config.js whose entry contains the libraries to turn into the dynamic link library, and generate the library's manifest (description file) with webpack.DllPlugin.
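A sketch of such a config (paths and names are illustrative):

```js
// webpack.dll.config.js: bundle react/react-dom once into a dll
const path = require('path');
const webpack = require('webpack');

module.exports = {
  mode: 'production',
  entry: {
    react: ['react', 'react-dom'],
  },
  output: {
    filename: '[name].dll.js',
    path: path.resolve(__dirname, 'dll'),
    library: '[name]_dll',      // global name the dll exposes
  },
  plugins: [
    new webpack.DllPlugin({
      name: '[name]_dll',       // must match output.library
      path: path.resolve(__dirname, 'dll', '[name].manifest.json'),
    }),
  ],
};
```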
Run the webpack.dll.config.js build to generate the dynamic link file (react.dll) and its manifest; with these two files, the build process can be optimized.
Then add the DllReferencePlugin to webpack.config.js, pointing it at the manifest generated above; that tells webpack where to find the dynamic link file during the build.
Running the webpack.config.js build shows the packaging time dropping from 8159ms before the optimization to 5428ms after.
Webpack code splitting
For a large web application, packing everything into one file is inefficient and unacceptable. The bundle needs to be broken into several smaller bundles/chunks: split big files into smaller ones to load, and load the more important files first to shorten first-screen load time and improve the user experience. The first way is to add extra entries in entry so the project is split into different bundles; it is straightforward but has to be added and maintained by hand. The second way uses the splitChunks plugin to extract common code and separate business code from third-party libraries.
Extract third-party libraries into their own bundle by matching the libraries imported from node_modules. After the build, the actual business logic in app.bundle.js shrinks dramatically: app.bundle.js goes from 101 KiB before the optimization to 2.33 KiB after, and a vendor.bundle.js holding all the dependencies is also emitted into the build directory.
Extract common code by matching the files under src.
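A sketch of both cache groups together (names and thresholds are illustrative):

```js
// webpack.config.js: vendor code from node_modules, shared code from src
module.exports = {
  optimization: {
    splitChunks: {
      cacheGroups: {
        vendor: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendor',
          chunks: 'all',
        },
        common: {
          test: /[\\/]src[\\/]/,
          name: 'common',
          chunks: 'all',
          minSize: 0,    // extract even tiny shared modules
          minChunks: 2,  // must be shared by at least two chunks
        },
      },
    },
  },
};
```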
In both entry files, index.jsx and index1.jsx, I imported method A from a.js and called it. After packaging, common.bundle.js appears in the build directory: the code shared by the two entry files has been extracted.
Webpack resource compression
(1) TerserPlugin compresses JS.
(2) mini-css-extract-plugin extracts CSS; optimize-css-assets-webpack-plugin compresses it.
(3) html-webpack-plugin's minify option compresses HTML; minify is enabled by default in production mode.
Webpack persistent cache
Caching improves the experience of returning visitors by speeding up network loads, so managing the cache well is critical. The persistent-cache scheme gives every packaged resource file a unique hash computed from the file's contents. Because a hash is a discrete, unique value, if a file's contents don't change, its hash doesn't change either; once the contents do change, only the affected files' hashes change. This property enables incremental updates: we get full use of the browser cache while site updates roll out smoothly without hurting the user experience.
webpack has long supported persistent caching: it computes each file's hash while generating the bundle and interpolates the hash into the file name, for example via the filename option when extracting CSS.
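A sketch of content-hashed file names in the config:

```js
// webpack.config.js: put a content hash in every emitted file name
const MiniCssExtractPlugin = require('mini-css-extract-plugin');

module.exports = {
  output: {
    filename: '[name].[contenthash].bundle.js',
  },
  plugins: [
    new MiniCssExtractPlugin({
      filename: '[name].[contenthash].css',
    }),
  ],
};
```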
The emitted files now carry hash values in their names.
Transfer and loading optimization
Enable Gzip compression
Gzip compresses network resources, reducing the size of resource files in transit; compression ratios can reach up to 90%. Gzip can compress resources dynamically, in real time, during network transmission. Let's use Nginx as the example for configuring gzip compression. First deploy the webpack output to Nginx: copy the packed files to the /Users/lanxiang/dereck folder, open the nginx.conf configuration file, and change the root path. The test page can then be accessed normally.
Note the two main JS files, vendor and app, whose sizes are 233kB and 8.7kB respectively.
Then modify the configuration file to enable gzip by adding the following inside the http block:
(1) gzip on: enable gzip.
(2) gzip_min_length 1k: only compress files of at least 1K.
(3) gzip_comp_level 6: compression level from 1 to 9; the higher the level, the better the ratio and the higher the CPU cost. 6 is a good trade-off.
(4) gzip_types: the file types to compress; text-based files compress best.
(5) gzip_static on: serve precompressed static resources directly.
(6) gzip_vary on: add a Vary header to the response to tell the client whether gzip compression was used.
(7) gzip_buffers 4 16k: buffer settings that tune the compression process.
(8) gzip_http_version 1.1: apply gzip for HTTP/1.1.
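Put together, the directives above look like this in nginx.conf (the gzip_types list is an illustrative choice of text-based MIME types):

```nginx
http {
  gzip on;
  gzip_min_length 1k;
  gzip_comp_level 6;
  gzip_types text/plain text/css application/javascript application/json;
  gzip_static on;
  gzip_vary on;
  gzip_buffers 4 16k;
  gzip_http_version 1.1;
}
```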
With gzip compression enabled, the vendor and app files shrink to 72.9kB and 3.7kB respectively.
Enabling Keep-Alive
HTTP Keep-Alive reuses TCP connections: after a TCP connection to the server is established, subsequent requests do not need to establish one again, which greatly reduces network overhead for sites with heavy request volume. It is enabled by default from HTTP/1.1 onward. The time a request consumes while a site loads is not just content download time; there is plenty of setup time too. Initial Connection, for example, is the time spent establishing the TCP connection. Why does only the first request show Initial Connection? Because with Keep-Alive enabled, communicating with the server normally needs just one connection.
The second request has no Initial Connection phase because it reuses the TCP connection from the first request.

You can check the response headers to see whether Keep-Alive is enabled.
Two parameters relate to Keep-Alive, and they are usually tuned to the site's actual request and user volume. The following example shows how to configure them in Nginx.
keepalive_timeout sets the timeout: after the client establishes a TCP connection, the server tries to keep it open, but if the client has not used it within the timeout, the server closes it. We set the timeout to 65s, so the connection is closed if the client has not used it for 65 seconds; keepalive_timeout 0 disables Keep-Alive. The second, important parameter, keepalive_requests, is how many requests a client may send over one TCP connection. We set 100, meaning the server closes the connection after 100 requests, so the 101st request must establish a new TCP connection.
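In nginx.conf the two settings described look like this:

```nginx
http {
  keepalive_timeout 65;    # close an idle connection after 65s (0 disables keep-alive)
  keepalive_requests 100;  # allow at most 100 requests per connection
}
```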
HTTP resource caching
The main purpose of HTTP caching is to speed up resource loading on repeat visits and improve the user experience.
Cache scheme:
- Cache-Control/Expires
- Last-Modified + If-Modified-Since
- Etag+If-None-Match
Add the caching directives in the location blocks of the Nginx configuration file.
We don't want HTML cached, so it gets Cache-Control "no-cache"; the other two directives there are for compatibility with older HTTP versions, since HTTP/1.0 does not implement Cache-Control. JS and CSS should be cached: set expires to 7d, so for seven days they are served from the browser cache.
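A sketch of those location blocks (the original's exact compatibility headers are not shown here, so this only illustrates the two rules described):

```nginx
location ~* \.html$ {
  add_header Cache-Control "no-cache";  # always revalidate HTML
}

location ~* \.(js|css)$ {
  expires 7d;  # serve JS/CSS from the browser cache for seven days
}
```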
How does the server know whether the resource the client holds has changed? Through the ETag, a unique identifier for the file resource generated on the server side; on the first request, the client is told what the resource's identifier is.

On later requests the client asks whether the ETag still matches via If-None-Match. If it does not match, the latest resource is fetched from the server; if it matches, the server returns 304 and the client reads the file from its cache. Nginx enables the ETag mechanism by default.

Last-Modified + If-Modified-Since works like ETag + If-None-Match but is based on timestamps and therefore limited by time precision, and client and server clocks may not be in sync, so this caching scheme is not recommended.
Service workers
A service worker establishes an intermediate layer between the client and the server where resources can be stored, so the client can access those resources without contacting the server.
Using service workers speeds up repeat visits and makes pages available offline, just like native apps. Here's how to configure service workers in webpack. We need an asset-manifest.json, generated with webpack-manifest-plugin, which defines the resources to cache; precache-manifest.*.js contains the cached files' version information.
Then use the workbox-webpack-plugin to generate service-worker.js
workbox.precaching.precacheAndRoute caches the listed resources immediately after the service worker registers successfully.
swUrl points to the service-worker.js generated by workbox-webpack-plugin; registering it activates the feature.
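A sketch of the registration step on the page (the URL is whatever path the build emits the worker at):

```js
// Register the generated service worker once the page has loaded
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    const swUrl = '/service-worker.js'; // emitted by workbox-webpack-plugin
    navigator.serviceWorker.register(swUrl)
      .then((reg) => console.log('Service worker registered, scope:', reg.scope))
      .catch((err) => console.log('Service worker registration failed:', err));
  });
}
```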