Preface

When we go to an interview, there is a high probability that we will be asked: Have you tried to optimize the performance of the project? Or what do you know about performance tuning? You may hear this question like this:

It sounds familiar, but it is hard to answer clearly; we can usually only list a few scattered points and struggle to give a coherent answer. This article gives you a brief overview of the main aspects of front-end performance optimization.

First, merging and compressing resources

Web front-end application development and deployment process:

The process from entering a URL to the page being displayed:

Some potential performance optimizations during requests:

  • Can DNS query time be reduced through caching?
  • How can the network request take the path through the nearest network environment?
  • Can the same static resource be cached?
  • Can the size and number of HTTP requests be reduced?
  • Can server-side rendering be used?

Bottom line: In-depth understanding of the HTTP request process is at the heart of front-end performance optimization.

The core of optimization

  • Reduce the number of HTTP requests;
  • Reduce the size of the requested resource;

Google homepage case study

  • HTML compression;
  • CSS compression;
  • JS compression and obfuscation;
  • File merging;
  • Enabling gzip;

1. HTML compression

HTML compression means compressing characters that are meaningful in the text file but are not displayed in HTML, including spaces, tabs, line breaks, etc.; some other meaningful characters, such as HTML comments, can also be removed;

A simple calculation:

Google's traffic accounts for about 40% of the entire Internet, and Internet traffic was expected to reach 1.3ZB (1ZB = 10^9 TB) in 2016, so Google's 2016 traffic was roughly 1.3ZB * 40%. If Google saved one byte per 1MB of requests, it could save nearly 500TB of traffic per year.

How to compress HTML

  • Use online sites for compression;
  • The html-minifier tool provided for nodejs (see the sketch after this list);
  • Back-end template engine rendering compression;
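As a rough illustration, here is a minimal Node sketch using the html-minifier package (the file names and the chosen options are assumptions, not a recommended configuration):

// minify-html.js: compress an HTML file with html-minifier (sketch)
const fs = require('fs');
const { minify } = require('html-minifier');

const source = fs.readFileSync('./index.html', 'utf8');   // hypothetical input file
const result = minify(source, {
  collapseWhitespace: true,   // strip spaces, tabs and line breaks
  removeComments: true        // strip HTML comments
});
fs.writeFileSync('./index.min.html', result);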

2. CSS compression

Divided into two parts:

  • Removing invalid code;
  • CSS semantic merging;

How to compress CSS

  • Use online sites for compression;
  • Use html-minifier to compress the CSS inside HTML;
  • Use clean-css to compress CSS;
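A similar sketch for the clean-css package (the input string is just an example):

const CleanCSS = require('clean-css');

const output = new CleanCSS().minify('a { color: #ff0000; /* brand red */ }  a { margin: 0px; }');
console.log(output.styles);   // the minified CSS, with comments and extra whitespace removed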

3. JS compression and obfuscation (uglification)

Include:

  • Removing invalid characters (spaces, carriage returns, etc.);
  • Removing comments;
  • Simplifying and optimizing the code's semantics;
  • Code protection (if the code is not processed, the client can read it directly and spot vulnerabilities);

How to compress and obfuscate JS

  • Use an online website for compression: tool.oschina.net/jscompress/
  • Use html-minifier to compress the JS inside HTML;
  • Use uglify-js2 to compress JS;
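A sketch with the current uglify-js package (newer than the uglify-js2 mentioned above; the API shown is uglify-js 3's):

const UglifyJS = require('uglify-js');

const code = 'function add(first, second) { /* sum */ return first + second; }';
const result = UglifyJS.minify(code);   // compresses and mangles by default
if (result.error) throw result.error;
console.log(result.code);               // e.g. "function add(n,r){return n+r}"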

4. File merging

Benefits of file merging:

The one on the left shows an HTTP persistent connection (keep-alive) without merged requests: a.js, b.js and c.js must be fetched with three separate requests. The one on the right shows a persistent connection with merged requests: a single request for the merged file a-b-c.js brings back all three files.

Non-merge requests have the following disadvantages:

  • The uplink requests inserted between files add N-1 network delays;
  • The packet loss problem is more serious: packet loss may occur on every request, so reducing the number of requests effectively reduces packet loss;
  • keep-alive has problems of its own: the connection can be dropped when passing through a proxy server;

Problems with file merging

  • First-screen rendering problem: if the page only relies on a.js for rendering, then because of file merging it has to wait until the merged a-b-c.js is fetched before rendering can continue, which slows down first-screen rendering. This is mostly the case when using modern front-end frameworks such as Vue;
  • Cache invalidation problem: if any one of the files in the merged a-b-c.js changes (e.g. a.js), the cache of the entire merged file is invalidated, which would not happen without file merging;

Usage advice

  • Common library merge: Merge common component library files that do not change often;
  • Merge the JS of different pages separately: for example, in a single-page application (SPA), the JS file of a specific page is only needed when the route switches to that page;

How do I merge files

  • Use online sites for file consolidation;
  • Use nodejs to implement file merging;
  • Front-end build tools such as webpack can also do this well;
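A minimal webpack sketch of the two pieces of advice above: page-specific entries are bundled separately, while common libraries are pulled into one shared file (the entry paths and names are made up):

// webpack.config.js (sketch)
module.exports = {
  entry: {
    home: './src/pages/home.js',      // hypothetical page entries
    detail: './src/pages/detail.js'
  },
  output: { filename: '[name].[contenthash].js' },
  optimization: {
    splitChunks: {
      cacheGroups: {
        vendors: {
          test: /[\\/]node_modules[\\/]/,   // common libraries that rarely change
          name: 'vendors',
          chunks: 'all'
        }
      }
    }
  }
};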

Second, image-related optimization

The lossy compression process, taking a JPG image as an example:

  • Color space conversion: from the RGB color space to another color space;
  • Resampling: the color transform distinguishes high-frequency from low-frequency information;
  • DCT processing: high-frequency color samples are compressed more aggressively, so the compression gain is relatively large;
  • Then the data is quantized;
  • Finally, the data is encoded;

Finally get JPEG-Compressed Image Data, that is, the real display of JPG pictures. Although this is lossy compression, in many cases the lost data does not affect the display;

The difference between png8/png24/png32

  • png8: 256 colors + supports transparency;
  • png24: 2^24 colors + does not support transparency;
  • png32: 2^24 colors plus an alpha channel + supports transparency;

Common business scenarios for pictures in different formats

  • jpg: lossy compression, high compression ratio, does not support transparency; application: most business scenarios that do not require transparent images;
  • png: supports transparency, good browser compatibility; application: most business scenarios that require transparent images;
  • webp (introduced by Google in 2010): better compression, but has compatibility problems in iOS webviews; application: everything on Android;
  • svg: vector graphics, embedded as code, relatively small, used for relatively simple image styles; application: for example logos and iconfont;

1. Image compression

According to the real picture situation, abandon some relatively insignificant color information, compress the picture; Take tinypng.com/, an online compression site

2. CSS sprites

Reduce the number of HTTP requests by combining the images used on the site into a single image. The principle: set the display area to the size of the icon and position the sprite image so that the desired icon sits in that area (at the top-left corner). Disadvantage: the combined image is relatively large, so a single load is slow.

For example, Sprite pictures on Tmall: In many cases, not all the small ICONS are placed in one Sprite picture, but will be properly split. There are fewer Sprite scenes now.

The website that automatically generates the Sprite pattern: www.spritecow.com/ Select the corresponding icon in the Sprite pattern and the corresponding style will be generated.

3. In-line images (Image inline)

Embed the image content in HTML to reduce the number of HTTP requests on a website. It is often used for small ICONS and background images. The inline image of the webpage is written as follows:

 <img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAA..." alt="">

Here is a url to convert: image to DataUrI: tool.c7sky.com/datauri/

Disadvantages:

  • Browsers do not cache inline image resources.
  • Poor compatibility: only supported in IE8 and above;
  • For images larger than about 1000KB, base64 encoding increases the image size and slows down the overall download of the page;

So it depends on the scenario, but the benefit of inlining images to reduce HTTP requests is significant. For example, during development, images smaller than 4KB or 8KB can be automatically inlined into the HTML by the build tool. In such cases the size increase from inlining is a better trade-off than the extra HTTP requests.
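One common way to automate this with a build tool is webpack's url-loader; a sketch (the 8KB limit matches the threshold mentioned above, the rest is an assumption):

// webpack.config.js (sketch): images under 8KB become base64 data URIs,
// larger ones are emitted as normal files
module.exports = {
  module: {
    rules: [
      {
        test: /\.(png|jpe?g|gif)$/i,
        use: [{ loader: 'url-loader', options: { limit: 8192 } }]
      }
    ]
  }
};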

4. Vector graphics: SVG and iconfont

Using iconfont to solve the icon problem

This method should be used wherever possible, for example with the Alibaba vector icon library:

You can download it in any format:

You can see the obvious difference in size:

Using SVG vector graphics

SVG stands for Scalable Vector Graphics. SVG uses XML format to define images. The following is an example:

Online conversion website: www.bejson.com/convert/ima…

5.webp

The advantage of WebP lies in its better image compression algorithm, which produces smaller files with image quality that is visually indistinguishable. It supports both lossless and lossy compression, alpha transparency and animation. Its conversion results from JPEG and PNG are excellent, stable and uniform. There are no compatibility problems on Android, so it is recommended there.

The following is the picture of taobao home page request:

As you can see, a large number of the images are requested in WebP format. The .jpg_.webp suffix means that WebP is used if the browser supports it; otherwise the JPG format is used.
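A rough sketch of the idea (this is not Taobao's actual code): detect WebP support once on the client, then append the WebP suffix only when it is supported:

// feature-detect WebP via canvas.toDataURL, then choose the image variant
function supportsWebp() {
  var canvas = document.createElement('canvas');
  if (!canvas.getContext || !canvas.getContext('2d')) return false;
  return canvas.toDataURL('image/webp').indexOf('data:image/webp') === 0;
}

var url = 'item.jpg_180x180.jpg';   // hypothetical image URL
if (supportsWebp()) {
  url += '_.webp';                  // browsers that support WebP get the WebP version
}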

The following is the picture of the homepage of station B, which basically adopts webP format:

JPG format and WebP format have a significant difference in compression rate of the same image:

Images can be converted to WebP via an online site: zhitu.isux.us/

Static files such as pictures can be stored in the CDN server, allowing the CDN server to convert the pictures to Webp format in batches.

Third, browser rendering engine and blocking

1. Main module of rendering

Version 1:

Version 2:

A rendering engine mainly includes: HTML parser, CSS parser, javascript engine, layout module, drawing module:

  • HTML parser: a parser that interprets HTML documents; its main job is to turn HTML text into a DOM tree;
  • CSS parser: computes the style information for each element object in the DOM, providing the infrastructure for layout;
  • JavaScript engine: JavaScript code can modify both the content of a web page and its CSS information; the JavaScript engine interprets JavaScript code and, through the DOM interface and the CSS tree interface, modifies page content and style information, thereby changing the rendering result;
  • Layout: once the DOM is created, WebKit combines the element objects with their style information, calculates their size, position and other layout information, and forms an internal representation model that expresses all of this;
  • Paint (drawing module): after layout, the graphics library is used to draw each node of the web page into an image result;

2. Rendering process

The entire process of the browser rendering the page: the browser parses the document from top to bottom.

  1. When the browser sees HTML tags, it calls the HTML parser to parse them into tokens (a token is a serialization of tag text) and build a DOM tree (a block of memory that holds tokens and establishes relationships between them). At the very beginning of DOM generation (Bytes → characters), requests for CSS, images, and JS are made in parallel, whether they are in the HEAD tag or not.

    Note: initiating a js file download request does not require DOM processing to that script node;

  2. Encountering the style/link tag calls the parser to process the CSS tag and build the CSS style tree;

  3. When script tags are encountered, javascript parsers are called to process script tags, bind events, modify DOM tree /CSS tree, etc.

  4. Combine THE DOM and CSS trees into a Render Tree.

  5. Layout: Calculates the position of each node in the screen according to the styles and dependencies of each node in the render tree;

  6. Painting: based on the calculated results (the nodes to be displayed, their CSS and their position information), the content is drawn to the screen via the graphics card;

The DOM, CSSOM, and Render Tree can all be updated several times after the first Painting. For example, Layout and Painting can be repeated when JS changes the DOM or CSS properties. In addition to DOM and CSSOM updates, Layout and Painting are also called to update the page after the image is downloaded.

Supplement:

  • HTML may reference a lot of external CSS and JS resources, which the browser loads concurrently. However, browsers limit the number of concurrent requests to the same domain name, i.e. per-domain concurrency is limited.
  • Therefore, most resources are usually hosted on CDN servers, and 3 to 4 CDN domain names are configured. This prevents the situation where, with only one CDN domain name, the browser's concurrency limit for external resources is reached and many resources cannot be requested concurrently. So more than one CDN domain name should be configured.

3. CSS blocking

Only external CSS introduced via link blocks:

  • Styles in a style tag:

    • Parsed by the HTML parser;
    • Do not block browser rendering (a "flash of unstyled content" may occur);
    • Do not block DOM parsing;
  • External CSS introduced via link (recommended):

    • Parsed by the CSS parser;
    • Blocks browser rendering: because rendering waits until the CSS is loaded, the whole rendering process is styled, and this blocking prevents the "flash of unstyled content";
    • Blocks the execution of subsequent JS: this is easy to understand; JS often performs DOM operations that may change CSS styles, and in practice those changes often depend on styles set by previously introduced CSS, so CSS blocks JS execution;
    • Does not block DOM parsing;
  • Core optimization idea: load external CSS as quickly as possible:

    • Use CDN nodes to speed up external resources;
    • Compress CSS (using a bundler such as webpack or gulp);
    • Reduce the number of HTTP requests by merging multiple CSS files;
    • Optimize the stylesheet code;

4.jsblocking

  • Blocking DOM parsing:

    The reason: The browser does not know the content of the subsequent script. If the following DOM is parsed first and the subsequent JS deletes all the DOM, the browser will have done no work. The browser cannot predict the specific operations in the script, such as document.write, and simply stop all operations. The browser then proceeds to parse the DOM; You can solve this problem by adding defer and async properties to the Script tag to asynchronously import the JS file.

  • Blocking page rendering:

    Reason: JavaScript can also modify DOM styles, so the browser waits for the script to finish before continuing, to avoid wasted work;

  • Blocks the execution of subsequent JS:

    Reason: Js is executed sequentially so that dependencies can be maintained. For example, jQuery must be introduced before bootstrap is introduced.

  • Does not block loading of resources:

    This does not contradict the above, because it is impossible to block the loading of other resources by loading one JS file. In this case, the browser preloads subsequent resources;

5. To summarize

  • CSS parsing and JS execution are mutually exclusive: CSS parsing blocks JS execution, and JS execution blocks CSS parsing.

  • CSS blocking and JS blocking do not block the browser from loading external resources (images, videos, styles, scripts, etc.).

    This is because the browser is always in a “send out first” mode. Anything that involves a web request, be it an image, a style, or a script, will be sent out first to get the resource. It is up to the browser to decide when the resource will be used locally. This is obviously very efficient;

  • Both WebKit and Firefox have been optimized for pre-parsing. As the JS script executes, other threads in the browser parse the rest of the document to find and load additional resources that need to be loaded over the network. In this way, resources can be loaded on parallel connections, increasing overall speed. Note that the pre-parser does not modify the DOM tree

Fourth, lazy loading and preloading

1. Lazy loading

The method of requesting an image resource after the image enters the viewable area is called image lazy loading. It is suitable for business scenes with many pictures and long pages, such as e-commerce;

Lazy loading functions:

  • Reduce loading of invalid resources:

    For example, if a website has 10 pages of images and users only view the first page of images, there is no need to load all 10 pages of images.

  • Loading too many resources concurrently can block JS loading and affect normal use of the site:

    Because the browser has a maximum concurrency limit for a certain host name, if the CDN where the image resources reside and the CDN where the static resources reside are the same, the concurrent loading of too many images will block the subsequent concurrent loading of JS files.

Lazy loading implementation principle:

Listen to the onScroll event to judge the position of the visible area:

Image loading depends on src. You can first add a custom attribute to all lazily loaded static resources to store the real URL: for an image, define a data-src attribute to store the actual image address, while src points to a loading image or a placeholder. Then, when the resource enters the viewport, replace the src value with the real URL stored in data-src.

<img src="" class="image-item" alt="" lazyload="true" data-src="TB27YQvbm_I8KJjy0FoXXaFnVXa_!!400677031.jpg_180x180xzq90.jpg_.webp">

Lazy loading instance

You can use the element's getBoundingClientRect().top to determine whether its current position is inside the viewport, or check whether its distance from the top of the document (offsetTop minus scrollTop) is less than the viewport height:
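A minimal sketch of this approach (the class name and the data-src attribute match the example tag above; a real implementation would also throttle the scroll handler):

// replace the placeholder src with the real URL once the image enters the viewport
function lazyLoad() {
  var images = document.querySelectorAll('img.image-item[data-src]');
  var viewportHeight = window.innerHeight || document.documentElement.clientHeight;
  images.forEach(function (img) {
    if (img.getBoundingClientRect().top < viewportHeight) {
      img.src = img.getAttribute('data-src');   // swap in the real URL
      img.removeAttribute('data-src');          // avoid loading it twice
    }
  });
}
window.addEventListener('scroll', lazyLoad);    // in practice, throttle this handler
lazyLoad();                                     // load images already visible on first paint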

For example,

For example, the home page of mobile Taobao:

When you are about to scroll to the image you want to display, you can see that there is a lazyload attribute on the image:

2. Preloading

Preloading is the opposite of lazy loading: lazy loading delays the loading of static resources we need, while preloading requests static resources such as images in advance, before they are used, so that when they are needed they can be loaded directly from the cache, improving the user experience.

The role of preloading:

  • Request resources in advance to improve loading speed: You only need to read the resources requested in advance from the browser cache.

  • Maintain page dependencies: WebGL pages, for example, rely on 3D models, which are necessary resources for page rendering. Rendering a page without all the resources loaded can be a very bad experience.

    Therefore, page rendering dependencies are often maintained by preloading, such as after loading the 3D model that WebGL pages depend on. In this way, the rendering process will not have any obstacles, with a better user experience;

Preloaded instances

For example, in the nine squares raffle business, each prize has a selected state and an unselected state, which is actually a combination of two pictures. Since the selection process of each prize is a moment, it requires a high efficiency of switching between selected and unselected pictures. If the selected pictures are not preloaded, it is obviously too late.

Therefore, in fact, all the selected patterns and corresponding pictures in the nine grid need to be preloaded, so that we can instantly read the selected pictures from the cache in the lottery process, so as not to affect the display of the lottery effect.
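A minimal preloading sketch for this scenario (the image paths are hypothetical):

// request the "selected" prize images up front so the raffle animation
// can read them from the cache instantly
var selectedImages = [
  '/static/prize-0-selected.png',   // hypothetical paths
  '/static/prize-1-selected.png'
];
selectedImages.forEach(function (url) {
  var img = new Image();
  img.src = url;   // the browser downloads and caches the image immediately
});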

In addition, there are animations for website login or activities, which are not displayed until each frame of the animation is fully preloaded.

Fifth, repaint and reflow

1. CSS layers

When browsers render a page, they divide the page into layers, large and small, with one or more nodes on each layer. When rendering the DOM, what the browser actually does is:

1. Get the `DOM` and split it into multiple layers;
2. Recalculate the style for the nodes of each layer (`Recalculate Style`: style recalculation);
3. Generate the graphics and position for each node (`Layout`: reflow and relayout);
4. Draw and fill each node into its layer bitmap (`Paint Setup` and `Paint`: repaint);
5. Upload the layers as textures to the GPU;
6. Composite the layers onto the page to generate the final screen image (`Composite Layers`: layer composition);

2. Conditions for creating layers

  • Has a 3D or perspective transform CSS property;

  • A <canvas> node with a 3D (WebGL) context or an accelerated 2D context;

  • Mixed plug-ins (such as Flash);

  • Elements with CSS3 animations or accelerated CSS filters, for example:

    • transform, such as translateZ(0)
    • opacity
    • filter
    • will-change: declares which properties are about to change so the browser can optimize

3. Repaint

Repaint is browser behavior triggered by changes in the appearance of an element, such as background-color, outline, and so on. These properties do not affect layout; they only affect the appearance and style of elements, causing DOM elements to be re-rendered. This process is called repaint.

It is important to note that repainting is done per layer, so if an element of a layer needs to be repainted, the entire layer is repainted. For example, if a layer contains many nodes and there is a GIF image among them, every frame of the GIF causes the other nodes of the entire layer to be repainted when the layer's final bitmap is regenerated.

Therefore, there is a special technique of forcing the GIF into its own layer (translateZ(0) or translate3d(0,0,0)); the same applies to CSS3 animations (fortunately, in most cases the browser creates layers for the nodes of CSS3 animations).

So: if a DOM element that is frequently repainted is a separate layer, repaint and reflow only affect that layer. In principle, you should avoid creating new layers, because they increase the cost of the Composite Layers step. Therefore, only create a separate layer for DOM elements that are frequently repainted or reflowed.

Properties that trigger only a repaint

// Some attributes
color
border-style
border-radius
visibility
text-decoration
background
background-image
background-position
background-repeat
background-size
outline-color
outline
outline-style
outline-width
box-shadow

4. Reflow

When part (or all) of the render tree needs to be rebuilt because of changes in element size, layout, visibility, etc., this is called reflow;

  • Reflow is needed whenever the layout or geometry of the page changes;
  • Reflow always causes repaint, but repaint does not necessarily cause reflow;

Properties that trigger relayout (reflow)

  • Box model related properties: width, height, padding, margin, display, border-width, border, min-height;
  • Position and float related properties: top, bottom, left, right, position, float, clear;
  • Text and structure related properties: text-align, overflow-y, font-weight, overflow, font-family, line-height, vertical-align, white-space, font-size;

Frequent repaint and reflow trigger frequent UI rendering; during rendering the JS thread is blocked, so JS execution slows down.

5. Common operations that trigger reflow

  • Adding, deleting or modifying DOM nodes;
  • Moving the position of a DOM node;
  • Modifying CSS styles;
  • Resizing the window (mobile does not have this problem, because mobile zooming does not affect the layout viewport (vw/vh));
  • Modifying the page's default font;
  • Reading certain DOM element properties (width, height, etc.);

Note: display: none triggers reflow, while visibility: hidden only triggers repaint, because no position change occurs;

6. Sample

Case 1: the Taobao carousel

Performance, a Chrome debugging tool, can be used to observe the repaint and reflow process caused by the carousel on Taobao's home page:

Update Layer Tree: reflow and relayout:

Paint redraw:

Composite Layers:

Case 2: Player

Use the Layers option of the Chrome debug tool to view the Layers and the reasons for the new Layers:

The DOM element of the video tag is constantly redrawn during the video playback, so it’s a good idea to limit it to one layer so that it only involves redrawing that layer and doesn’t affect other layer elements.

Do not abuse layers, or they will cost you a lot of performance during layer reorganization!

Add transform:translateZ(0) to the global Style (*) in the HTML tag to trigger a new layer:

You can also create a new layer by adding the will-change: transform property.

Check the layer situation again, you can see that there are a lot of layers on the home page.

7. Practical optimization points

If we need to improve the performance of animations or other node renderings, all we need to do is reduce the amount of work the browser needs to do at runtime:

  • Calculating the style of each node (Recalculate Style: style recalculation);
  • Generating the graphics and position of each node (Layout: reflow and relayout);
  • Filling each node into its layer (Paint Setup and Paint: repaint);
  • Compositing the layers onto the page (Composite Layers: layer composition);

1. Use translate instead of top and similar properties to change position;

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Document</title>
  <style>
    #box{
      /* method 1 */
      position: relative;
      top: 0;

      /* method 2 */
      /* transform: translateY(0); */

      width: 200px;
      height: 200px;
      background-color: pink;
    }
  </style>
</head>
<body>
  <div id="box"></div>
  <script>
    setTimeout(() => {
      document.getElementById("box").style.top = '100px'
      // document.getElementById("box").style.transform = 'translateY(100px)'
    }, 2000);
  </script>
</body>
</html>

Changing the square's position with the top property triggers repaint and reflow (Layout):

Changing the square's position with transform: translate does not cause repaint or reflow:

For example, some websites have floating windows that drift from left to right. Since these are implemented with timers, the window's position changes every 100ms. In this case, using transform instead of top/left removes ten reflows per second, which is very helpful for page speed.

2. Use opacity instead of visibility:

  • Using visibility does not trigger reflow, but it still triggers repaint;

  • Using opacity directly triggers both repaint and reflow (this is how the GPU is designed at the low level!);

  • When opacity is used on its own layer, neither repaint nor reflow is triggered.

    Reason: when the opacity changes, the GPU simply lowers the alpha value of the previously drawn texture to achieve the effect, without an overall repaint. The prerequisite, however, is that the element with opacity must be its own separate layer.

3. Combine multiple changes to a DOM element's style properties into one operation:

  • Pre-define classes, then modify the DOM's className to apply the styles;
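A small sketch of the difference (the element id and class name are made up):

// instead of setting several style properties one by one ...
var box = document.getElementById('box');
box.style.width = '200px';
box.style.height = '200px';
box.style.margin = '10px';

// ... define the styles in a CSS class beforehand and switch the className once
box.className = 'box-expanded';   // hypothetical pre-defined class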

4. Take the DOM offline before modifying it:

  • Since elements with display: none are not in the render tree, operations on hidden elements do not cause reflow of other elements. If you want to perform complex operations on an element, hide it first and display it again after the operations are complete; this triggers only 2 reflows: once when hiding and once when showing;
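A small sketch of taking an element offline before heavy modification (the element id is made up):

var list = document.getElementById('list');
list.style.display = 'none';      // 1st reflow: the element leaves the render tree
// ... perform many DOM operations on the hidden element here ...
list.style.display = 'block';     // 2nd reflow: the element re-enters the render tree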

5. Do not use DOM node attribute values as loop variables inside a loop

The browser flushes the queue when it requests certain style information, such as:

  • offsetTop, offsetLeft, offsetWidth, offsetHeight;
  • scrollTop/Left/Width/Height;
  • clientTop/Left/Width/Height;
  • width.height;

To return the most accurate value, the browser needs to flush its internal queue, because operations in the queue may affect these values; even when there are none, the browser still forces the render queue to flush. This defeats the render queue's batching and causes excessive reflow, so store such property values in a variable instead of reading them from the DOM every time.
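A small sketch of the difference (items and container are hypothetical elements):

var container = document.getElementById('container');
var items = document.querySelectorAll('.item');

// bad: reading offsetWidth in every iteration forces the browser to flush the render queue
for (var i = 0; i < items.length; i++) {
  items[i].style.width = container.offsetWidth / 3 + 'px';
}

// better: read the value once, store it in a variable, and reuse it
var colWidth = container.offsetWidth / 3;
for (var j = 0; j < items.length; j++) {
  items[j].style.width = colWidth + 'px';
}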

6. Do not use table layout:

  • Because even a small change causes the whole table to be relaid out, use div layout as much as possible;

7. Enable GPU hardware acceleration:

The principle: the browser detects certain specific CSS properties, and when a DOM element has them, it enables GPU hardware acceleration for that element. For example, transform: translateZ(0) and transform: translate3d(0, 0, 0) both enable hardware acceleration. Hardware acceleration should not be abused either, because it leads to too many layers, which costs a lot of performance when the layers are composited.

8. Selection of animation speed:

  • Because every change in the animation causes redraw and reflow, balance the number of animation frames (smoothness) and the number of reflows as appropriate for the business scenario;

9. Create a new layer for animated elements and raise the animated element's z-index;

10. Use document fragments (DocumentFragment); Vue uses this approach to improve performance

If we wanted to add 10,000 li elements to a ul without using a document fragment, we would need to append 10,000 times, which would cause the page to reflow repeatedly and consume resources:

// Without a document fragment: the ul is already in the document,
// so each append below can trigger a reflow
var oUl = document.createElement("ul");
document.body.appendChild(oUl);
for (var i = 0; i < 10000; i++) {
    var oLi = document.createElement("li");
    oUl.appendChild(oLi);
}

We can introduce the createDocumentFragment() method, which creates a document fragment. 10000 Li will be inserted into the document fragment first, and then added to the document at once. That is, the document fragment acts as a temporary repository, which greatly reduces DOM operations:

// Create a document fragment first
var oFragment = document.createDocumentFragment();

// Create a ul tag
var oUl = document.createElement("ul");
for (var i = 0; i < 10000; i++) {
  // Create the li tag
  var oLi = document.createElement("li");
  // Append it to the document fragment first
  oFragment.appendChild(oLi);
}
// Add the document fragment to the ul tag
oUl.appendChild(oFragment);
// Add the ul tag to the body tag
document.body.appendChild(oUl);

11. If CSS effects or animations can be handled by the compositing thread, use will-change to tell the rendering engine in advance to prepare a separate layer for that element;

12. Use virtual DOM;

13. Create animations using requestAnimationFrame: details below.

8. Request animation frames (requestAnimationFrame)

window.requestAnimationFrame(): this method tells the browser to call the specified function before the next repaint:

  • Argument: the method takes a callback that is called before the browser's next repaint;

    The callback is automatically passed one argument, a DOMHighResTimeStamp, which indicates the time at which requestAnimationFrame() started firing callbacks;

  • Return value: a non-zero integer, also known as the request ID, which uniquely identifies the entry in the callback list and has no other meaning;

window.cancelAnimationFrame(requestID): cancels an animation frame callback previously scheduled with window.requestAnimationFrame(). requestID is the ID returned by the earlier call to window.requestAnimationFrame().

Usage

  • When CSS3 animations cannot be used, use this method instead of timer-based animation;
  • Because the callback runs before repaint, the animation frequency matches the browser's refresh rate; there is no flicker and the animation stays smooth;

The sample

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Document</title>
  <style>
    #box{
      height: 200px;
      width: 200px;
      background-color: pink;
    }
  </style>
</head>
<body>
  <div id="box"></div>
  <script>
    let i = 0
    // Get the request ID
    let id = requestAnimationFrame(move)

    function move(){
      i++
      document.getElementById('box').style.transform = `translateX(${i}px)`
      // Call requestAnimationFrame recursively to update the request ID
      id = requestAnimationFrame(move)
    }

    setTimeout(() => {
      // stop animation after 2s
      cancelAnimationFrame(id)
    }, 2000);
  </script>
</body>
</html>

Sixth, debounce and throttle

1. Debounce

  • Concept: when a function is triggered continuously, only the last trigger within the specified time takes effect; the earlier ones do not;
  • Implementation: a timer;
  • Application: in a search box, send the query request only after the user has finished typing;

Code implementation

function debounce(fn, delay) {
    var timer = null
    return function () {
        // Clear the previous delay timer
        clearTimeout(timer)
        // Start a new delay timer
        timer = setTimeout(() => {
            fn.call(this)
        }, delay)
    }
}

Debouncing reduces the number and frequency of triggered events and can be an optimization in some cases. For example, for a search box on a site whose core business is not search, we generally wait until the user has finished typing before sending the query request, in order to reduce the pressure on the server. For the core business of a search site like Baidu, the servers are powerful enough, so no debouncing is applied.
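A hypothetical usage of the debounce function above for the search-box scenario (the element id and query function are made up):

// send the query only after the user has stopped typing for 500ms
document.getElementById('search-input').addEventListener('input', debounce(function () {
  sendQuery(this.value);   // hypothetical function that fires the search request
}, 500));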

2. Function throttling

  • Concept: when a function is triggered repeatedly, it executes the first time and then only executes again after the set interval has passed, thereby controlling the execution frequency;
  • Implementation: a timer and a flag;
  • Application: in a game, the maximum frequency of a character's attack action is fixed; no matter how fast the player clicks, it cannot exceed this frequency;

Code implementation

/* Throttle: fn is the function to be throttled, delay is the specified interval */
function throttle(fn, delay){
    // Record the last time the function was executed
    var lastTime = 0
    return function(){
        // Record the time the function is triggered this time
        var nowTime = new Date().getTime()
        // If the time since the last execution exceeds the specified interval, execute the function
        if(nowTime - lastTime > delay){
            // Bind this
            fn.call(this)
            // Update the recorded time
            lastTime = nowTime
        }
    }
}

Seventh, browser storage

1.Cookie

  • Cookie translates to "cookie"; it was invented by former Netscape employee Lou Montulli in March 1993;
  • A Cookie is plain text, contains no executable code, and is passed between the web server and the browser along with user requests;
  • Cookies essentially belong to HTTP. Because HTTP is stateless, the server cannot tell which client a request comes from, even when several requests come from the same client, so Cookies were introduced to maintain client state (e.g. each account has its own shopping cart state).

How Cookies are generated

  • Client generation:

    With the document.cookie property in JavaScript, you can create, maintain, and delete cookies; Setting the value of the document.cookie property does not delete all cookies stored in the page, it simply creates or modifies the cookies specified in the string.

  • Server-side generation:

A web server creates a Cookie by adding a Set-Cookie field to the HTTP response header. Adding the HttpOnly attribute to this field prevents JavaScript from accessing the Cookie, which helps protect it from cross-site scripting (XSS) attacks.

Drawbacks of Cookies

  • Security: since Cookies are transmitted in plain text over HTTP, the data they contain can be read by others, leading to tampering, theft and similar problems;

  • Size limitation: The size of cookies is limited to about 4KB, which is obviously not the ideal choice if you want to do a lot of storage.

  • ** Increases traffic: ** Since cookies are bound to the server corresponding to the domain name, each Request for the same domain name will have a Cookie in the Request Header.

    • On the one hand, it increases the request time to the server;
    • On the other hand, in most cases the Cookie information is not needed, so traffic is wasted; every request to the same domain carries up to 4KB of extra traffic, which is a big drain for a large site.

Therefore, use cookies with caution. Do not store important and sensitive data in cookies.

Cookie performance optimization

Separate the domain name of the CDN server that stores static resources from the domain name of the master site. This saves a lot of traffic by eliminating the need to carry cookies every time a static file is requested.

For example,

For example, when you log in to Baidu, there is a Set-Cookie field in the response header, where BDUSS is the string identifying the user's login state:

The HTTPonly attribute in set-cookie indicates that js scripts are forbidden to access cookies, which can prevent XSS attacks to a certain extent.

The Cookie has been planted to the domain name Domain:.baidu.com, and the HttpOnly attribute has been set:

Each subsequent browser Request carries this Cookie information in the Request Headers. After refreshing the page, you can see that the request header carries the Cookie information BDUSS:

This way the server knows that this is the user that has logged in.

However, not all requests need to carry Cookie information, such as Youku:

You can see that cookies are also carried with requests to the index.css file, but this is not necessary and will result in wasted traffic.

The solution is the one mentioned above: keep the CDN domain name separate from the main site's domain name;

This is how Baidu solved it:

The main site is baidu.com, but the static resources are served from a separate CDN domain, and those requests carry no Cookie information in the request header:

Setting and getting Cookies

Setting cookies is simple: the key and value values are joined by an equal sign:

document.cookie = "userName=zhangsan"

Open the Application option to view the current Cookie and you can see that the Cookie has been changed:

Get cookies:

document.cookie

Remark:

  • Static resources should not carry Cookies;
  • Cookies are generally set by the back end; the front end rarely writes them directly;
  • Cookies are divided into persistent and session level;
  • Cookies are generally used to store the session ID for communicating with the server;

2.Web Storage

  • Web Storage is divided into SessionStorage and LocalStorage, designed for local storage in the client browser; the space is much larger than a Cookie, generally 5-10M;
  • The browser exposes this local storage mechanism through the Window.sessionStorage and Window.localStorage properties;

LocalStorage

LocalStorage is designed by HTML5 specifically to store browser information:

  • Size is around 5~10M;
  • Used only on the client; it does not communicate with the server;
  • The interface is well encapsulated and provides a JS API for reading and writing;
  • As a browser-side local cache scheme, it can be used directly as a local cache to speed up page rendering;

For example,

For example, you can view the data stored in LocalStorage in Taobao through the Application option of Chrome debugging tool:

This data persists even when the page is closed, as long as it is not manually cleared. When you need to use images, JS/CSS files and other resources, you do not need to re-request the server, but can directly use the cache in LocalStorage, this is the advantage of LocalStorage cache;

Cookie, on the other hand, stores data to be brought to the server, such as user login status, statistics and other data:

Setting and getting LocalStorage

LocalStorage provides a relatively simple API in the form of keys and values.

Set through:

localStorage.setItem("key", "value")

LocalStorage:

When obtaining:

localStorage.getItem("key")

Other methods

// This method takes a key name as an argument and deletes the key name from the store.
localStorage.removeItem('key');
	
// Calling this method clears the store of all key names
localStorage.clear();

SessionStorage

SessionStorage is used to store session-level information in the browser. When the tab is closed, the data stored in SessionStorage is cleared, whereas LocalStorage is not.

  • Size is around 5~10M;
  • Used only on the client; it does not communicate with the server;
  • The interface is well encapsulated;
  • It can preserve form information: for example, when a form page is refreshed, the information entered before the refresh can be written to SessionStorage so that it is not lost; another scenario is a paginated form, where moving forward or back does not lose information saved in SessionStorage;

Setting and getting SessionStorage

The method for setting SessionStorage is similar to the method for setting LocalStorage:

// set
sessionStorage.setItem("key", "value")

// get
sessionStorage.getItem("key")

You can confirm in the Application panel that SessionStorage has been modified successfully.

Other methods

// This method takes a key name as an argument and deletes the key name from the store.
sessionStorage.removeItem('key');
	
// Calling this method clears the store of all key names
sessionStorage.clear();

3.IndexedDB

IndexedDB is a browser-provided API for storing large amounts of structured data on the client. The API uses indexes to enable high-performance searches of data. While WebStorage is useful for storing small amounts of data (using the key/value approach), IndexedDB performs better for storing larger amounts of structured data.

Applications of IndexedDB

  • Create an offline version of the application.

IndexedDB objects can be printed in a browser:
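A minimal sketch of the IndexedDB API (the database and object store names are made up):

// open (or create) a database and write one structured record
var request = indexedDB.open('demo-db', 1);

request.onupgradeneeded = function (event) {
  // runs when the database is created or its version changes
  var db = event.target.result;
  db.createObjectStore('articles', { keyPath: 'id' });
};

request.onsuccess = function (event) {
  var db = event.target.result;
  var tx = db.transaction('articles', 'readwrite');
  tx.objectStore('articles').put({ id: 1, title: 'performance notes' });
  tx.oncomplete = function () { db.close(); };
};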

4.PWA

PWA (Progressive Web Apps) is a new model (standard) of Web Apps, which does not specifically refer to a cutting-edge technology or a single knowledge point. As can be seen from the English abbreviation, this is a progressive Web App, which gradually enhances user experience through a series of new Web features and excellent UI interaction design.

Requirements of a PWA

  • Reliable: provides basic page access even without a network, never showing a "not connected to the Internet" page;
  • Fast: optimized page rendering and network data access;
  • Engaging: the app can be added to the mobile home screen and, like a native app, offers full-screen mode, push notifications and other features;

5.Service Worker

The Service Worker is a script that makes the browser run in the background, independent of the current web page. Opens the door for features that don’t rely on pages or user interaction. Future features include push messages, background synchronization, and geofencing. The ability to intercept and process web requests, including programmatically managing cached responses, will be the first major feature to roll out.

That is, the Service Worker can help the browser perform large-scale operations without blocking the main thread.

Applications of Service Worker

  • Using its ability to intercept and handle network requests to implement offline applications;
  • Using the fact that the Service Worker runs in the background and can communicate with the page to perform large-scale background data processing;
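A minimal sketch of registering a Service Worker and serving requests from its cache (the /sw.js path is an assumption; real offline apps also pre-cache resources during the install event):

// in the page: register the Service Worker script
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js').then(function (registration) {
    console.log('Service Worker registered, scope:', registration.scope);
  }).catch(function (err) {
    console.log('Service Worker registration failed:', err);
  });
}

// in sw.js: answer requests from the cache first, fall back to the network
self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});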

The Service Worker usage flow

The sample

The Service Workers information of Taobao can be viewed through the Application option of Chrome debugging tool:

When we refresh the Taobao web page, check the Network option, we can find from the size bar of the request file that a large number of files are requested from the Service Worker cache:

In this way, the Service Worker’s cache can be used to optimize the site’s performance.

Taking the same JS file requested by taobao as an example, it took 7ms to load it from the Service Worker:

It took 100ms to request the same file from the server after forcing the refresh with Ctrl + F5:

This is the performance advantage of using Service workers. Since resources are being read from the local cache, there is a significant increase in resource read speed and overall performance.

Eighth, HTTP caching strategy

1. An introduction to caching

  • Cache definition:

The browser stores the data requested by the user on the local disk; when the user needs the same data again, it can be fetched directly from the browser without sending another request.

  • Benefits of caching:

    • Reduce the number of requests;
    • Save bandwidth and avoid wasting unnecessary network resources.
    • Reduce server stress;
    • Improve web page loading speed, improve user experience;

2. Cache related header fields

You can view requests for resources by using the Network option in the Chrome debugger:

Be careful not to check the box in the figure, otherwise some requests will be filtered;

The Cache-Control field

The server can control the caching policy between the client and the server through the Cache-Control field in the HTTP header, which has the following attributes:

max-age

This field specifies the maximum duration of the cache. The following is an image from Taobao:

The client does not make a request to the server until the time specified by the max-age attribute expires, but reads the image directly from the cache. In the figure above, you can see that the browser reads the image resource directly from the ServiceWorker’s cache.

The Expires field can also specify the expiration date of the Cache, but this is HTTP1.0 and has a lower priority than the max-age attribute of the cache-Control field in HTTP1.1.

s-maxage

Generally speaking, there are two kinds of cache devices: browser (client) and CDN server;

  • The browser is a private-type cache device: only that browser can use the cached resource;
  • The CDN server is a public-type cache device: it caches resources from the origin server, and the cache can be accessed by any user;

s-maxage has a higher priority than Expires and max-age and is used to specify how long a resource stays fresh on public cache devices (such as a CDN). As shown below, after this field is set for a resource, the browser neither uses its own cache nor requests the resource from the origin server, but requests it from the public cache device (e.g. the CDN server):

private

The server side can use this property to specify that a resource can only be cached by the browser (client) and not by the proxy caching server (CDN).

public

The server can use this attribute to specify a resource that can be cached by the browser or the proxy cache server.

no-cache

The no-cache attribute specifies that the browser must send a request to the server to confirm the freshness of the cache resource before deciding whether to use the cache. As shown below:

no-store

This property specifies that the browser skips the cache and requests resources from the server again, regardless of whether the cache resource has expired. The no-store attribute is rarely used.

The Expires field

This is the HTTP1.0 specification; The value is an absolute time GMT(Greenwich Mean Time) format time string, such as Mon, 10 Jun 2015 21:31:12 GMT;

This field specifies the expiration time of the browser cache resource. Before the expiration time, the browser can read data directly from the local cache without sending a request to the server again, which is a strong cache. Compared to max-age and S-Maxage, the Expires field has the lowest priority and is invalid in the presence of these two attributes.

A field that identifies resource changes

Last-Modified/If-Modified-Since

These are fields that identify when the resource was last updated, based on a caching mechanism negotiated between the client and the server. The last-Modified field is in the Response header, and the if-Modified-since field is in the Request header, both of which are used in conjunction with the cache-Control field.

When a resource on the server changes, the last-Modified field value is updated synchronously. When the Expires field or max-age attribute Expires, the client carries the if-Modified-since field in the request header. Compare with the last-Modified field value of the server-side resource:

  • Case 1: if the two are equal, the resource has not changed since the time in Last-Modified; the server returns status code 304; this is the negotiated cache;
  • Case 2: if the two are not equal, the resource has been updated; the server returns the latest resource and the latest Last-Modified value, with status code 200;

For example,

The following figure shows the response with status code 304:

  • The if-modified-since field is Mon, 23 Mar 2020 18:14:15 GMT

  • The last-Modified field in the response header is Mon, 23 Mar 2020 18:14:15 GMT:

If the two values are equal, the resource has not changed. Therefore, the server returns the status code 304, which is the negotiated cache. The browser continues to use the local cache.

The value of the if-modified-since field is the last-Modified field value of the Last response resource on the server side;

Disadvantages of Last-Modified

  • Some files change periodically, but their content does not change (only the modification time); we do not want the client to think the file was modified and GET it again;
  • Some files change very frequently, e.g. more than once per second (changing N times within 1s); the granularity If-Modified-Since can detect is one second, so such modifications cannot be detected (for example, data that is updated every few ms);
  • Some servers cannot obtain the exact modification time.

Hence there is

Etag/If-None-Match

The Etag field, a standard in HTTP1.1, is a hash value that uniquely identifies the server-side resource. This field exists in the response header. This is used with if-none-match and cache-control fields in the request header.

The Etag value changes as soon as the resource on the server side changes. Compared with the last-Modified field, it has a higher priority and is more effective. When the Expires value or the max-age value in the cache-Control field Expires, the client will carry the if-none-match field in the request header. This field is the Etag value of the last response resource on the server side and is compared with the Etag value of the latest resource on the server side:

  • Case 1: if the values of the two fields are equal, the resource has not changed; the server does not resend the resource and returns status code 304; this is the negotiated cache;
  • Case 2: if the values of the two fields are not equal, the resource on the server has changed; the server returns the latest resource along with its Etag value, with status code 200;

For example,

The following figure shows the response with status code 304:

  • The if-none-match field in the request header is 2DA25D4039… :

  • The Etag field in the response header is 2DA25D4039… :

If the two values are equal, the resource has not changed. Therefore, the server returns the status code 304, which is the negotiated cache. The browser continues to use the local cache.

Conclusion:

  • Etag allows more accurate cache control, because the Etag is a unique identifier of the resource on the server, generated automatically by the server or by the developer;
  • Last-Modified and ETag can be used together. Because Etag has a higher priority, the server first compares Etag with If-None-Match; only if they are consistent does it go on to compare Last-Modified with If-Modified-Since, and then finally decide whether to return status code 304.
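As a rough illustration of the negotiation described above, here is a simplified Node.js sketch of a server answering conditional requests (the resource file is hypothetical, and this is not how any particular site implements it):

const http = require('http');
const fs = require('fs');
const crypto = require('crypto');

http.createServer((req, res) => {
  const body = fs.readFileSync('./index.css');                    // hypothetical resource
  const etag = crypto.createHash('md5').update(body).digest('hex');
  const lastModified = fs.statSync('./index.css').mtime.toUTCString();

  // simplified check: ETag first, Last-Modified as a fallback
  if (req.headers['if-none-match'] === etag ||
      req.headers['if-modified-since'] === lastModified) {
    res.writeHead(304);                                           // negotiated cache hit
    return res.end();
  }

  res.writeHead(200, {
    'Cache-Control': 'max-age=60',
    'ETag': etag,
    'Last-Modified': lastModified
  });
  res.end(body);
}).listen(3000);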

3. Cache policy

Classification of cache

  • Strong cache:

    • Instead of sending a request to the server, the data is fetched directly from the local cache;
    • The status code of the requested resource is:200 ok(from memory cache);
  • Negotiation cache:

    • Send a request to the server, the server will determine whether the negotiation cache is matched according to the resources in the request header.
    • If a hit, the status code is returned304Notify the browser to read the resource from the cache;

Strong cache vs. negotiated cache

  • Strong cache: the resource is fetched from the cache; status code 200 (from cache); no request is sent to the server, the resource is taken directly from the cache;
  • Negotiated cache: the resource is fetched from the cache; status code 304 (Not Modified); a request is sent to the server, which returns information used to decide whether the cache can still be used;

Hierarchical cache strategy

The bottom layer: 200 status

  • This layer is controlled by the Expires/ cache-Control field:

    • 1. Expires (valid in HTTP/1.0) is an absolute time;
    • 2. Cache-Control (valid in HTTP/1.1) is a relative time;

    When both are present, cache-Control overwrites Expires. As long as the fields are not invalid, the browser will use the local Cache directly, which is a strong Cache.

  • There are two types of cache: memory cache and disk cache:

As you can see, it takes no time to read the cache from the memory cache and some time to read the cache from the disk cache.

Whether the relative or the absolute time applies depends on the server's settings: when the server uses Atime (last access time), the two are equivalent; when the server uses Mtime (absolute modification time), Expires counts from when the resource was created, while max-age counts from when the request is made.

The following is an example of strong cache in Taobao. The status code is 200, and the image resources are read from the browser’s memory cache, so the request time is 0ms:

The middle layer: 304 status

  • This layer is controlled by Last-Modified/Etag. When the layer below fails or the user presses refresh/F5, the browser sends a request to the server; if the resource on the server has not been updated, it returns status code 304; this is the negotiated cache;

The following figure shows the negotiated cache with the status code 304. As long as the status code is 304, it belongs to the negotiation cache:

The top layer: 200 status

  • When the browser has no cache at all, the layer below fails, or the user presses Ctrl + F5 to force a refresh, the browser requests the latest resource directly from the server;

As shown below:

The impact of user behavior on caching

  • Entering the URL in the address bar: Expires/Cache-Control effective; Last-Modified/Etag effective;
  • Following a page link: Expires/Cache-Control effective; Last-Modified/Etag effective;
  • Opening a new window: Expires/Cache-Control effective; Last-Modified/Etag effective;
  • Going back or forward: Expires/Cache-Control effective; Last-Modified/Etag effective;
  • F5 refresh: Expires/Cache-Control invalid; Last-Modified/Etag effective;
  • Ctrl + F5 forced refresh: Expires/Cache-Control invalid; Last-Modified/Etag invalid;

Cache policy process analysis

As shown in the figure, the flow chart shows the process of server side using caching strategy when processing resources:

  • First, the server determines whether the resource can be reused; if not, it adds the no-store attribute to the Cache-Control field;
  • If the resource can be reused, does it require strong consistency? If so, the no-cache attribute is added to the Cache-Control field, which requires the client or caching proxy server to confirm the freshness of the resource with the server first, regardless of whether the cached resource has expired; this is the negotiated cache;
  • The server then specifies whether web proxies are allowed to cache the resource (e.g. CDN server caches). If allowed, the public attribute is added to Cache-Control and s-maxage specifies how long the resource stays fresh on the proxy cache servers; if not allowed, the private attribute is added, meaning only the client browser may cache the resource, and max-age sets the cache lifetime;
  • Then, in the client browser, strong caching or negotiated caching is chosen depending on the situation.

Ninth, server-side performance optimization

1. CDN servers

Definition

Websites typically have all their servers in one place, and as the user base grows, companies must deploy content on multiple geographically different servers. To reduce the time of HTTP requests, we should place a lot of static resources closer to the user.

Content Delivery Networks (CDN) is one of them. CDN is a group of Web servers distributed in different geographical locations or network segments, which are used to publish content to users more efficiently.

The basic idea

  • Avoid as far as possible on the Internet may affect the speed and stability of data transmission bottlenecks and links, so that the content transmission faster and more stable;
  • By placing node servers everywhere in the network, a layer of intelligent virtual network is formed on the basis of the existing Internet.
  • The CDN system can redirect a user's request in real time to the nearest service node, based on network traffic, each node's connections and load, the distance to the user and the response time.

The infrastructure

The simplest CDN network consists of a DNS server and several cache servers:

  • 1. When the user clicks a content URL on a website page, it is resolved by the local DNS system; the DNS system ultimately hands the resolution of the domain name over, via a CNAME, to the CDN's dedicated DNS server;

DNS resolution is not necessarily responded by the DNS server, but generally read from the cache. Such as computer cache, browser cache, router cache, operator cache and so on. If it is not found in the cache, search the DNS level by level: local DNS-> permission DNS-> top-level DNS-> root DNS. There are only 13 root DNS servers in the world.

  • 2. The DNS server of the CDN returns the IP address of the global load balancing device of the CDN to the user.

  • 3. The user initiates a content URL access request to the global load balancer of the CDN.

  • 4. The CDN global load balancing device selects a regional load balancing device of the region to which the user belongs according to the USER’s IP address and the content URL requested by the user, and tells the user to send a request to this device.

  • 5. The regional load balancing device will select an appropriate cache server for users to provide services based on the following criteria:

    • According to the userIPAddress, determine which server is closest to the user;
    • According to the user’s requestURLTo determine which server has the content required by the user;
    • Query the current load of each server to determine which server has service capability.

    Based on the comprehensive analysis of the above conditions, the CDN regional load balancer returns the IP address of a CDN cache server to the CDN global load balancer.

  • 6. The CDN global load balancer returns the IP address of the server to the user.

  • 7. The user initiates a request to the CDN cache server, and the cache server responds to the request and transmits the content required by the user to the user terminal; If the CDN cache server does not have the content that the user wants, but the zone equalization device still allocates it to the user, then the CDN server will request the content from its upper level cache server until the source server traced to the website pulls the content locally.

Application scenarios

  • Web site/application acceleration:

    For the distribution of a large number of static resources of a site or application, it is recommended to separate dynamic and static content: dynamic files can be handled by cloud servers (ECS), while static resources such as images, HTML, CSS and JS files are stored on CDN servers. This effectively speeds up content loading and makes it easier to distribute images and short videos.

  • Mobile application acceleration:

    Mobile APP update file (APK file) distribution, mobile APP images, pages, short videos, UGC and other content optimization to accelerate distribution. Provides the httpDNS service to avoid DNS hijacking and obtain real-time and accurate DNS resolution results, shortening user access time and improving user experience.

  • Video and audio on demand/large file download distribution acceleration;

  • Live video acceleration;

conclusion

To put it simply, the CDN server is equivalent to the warehouses of SF Express distributed all over the country. The main warehouse will deliver the express to these sub-warehouses, and users can pick up the goods nearby, thus speeding up the speed.

In addition, the CDN server also has many advanced functions, such as preventing DDOS attacks, which will not be expanded here;

2. SSR (Server-Side Rendering)

Websites built on modern frameworks such as Vue and React often face first-screen rendering problems.

The problem Vue rendering faces

When rendering the first screen, app.js (the bundled vue.js) must be downloaded and parsed before the page can be rendered.

Optimization scheme

  • Compile templates at build time: put template compilation in the build step, not in the browser;
  • Data-independent prerendering (Prerender);
  • Server-side rendering: move part of the browser-side computation to the server;

This is usually optimized with server-side rendering (SSR). SSR uses the server's superior computing power to handle part of the page-rendering work on the server side. The following is the flow chart of server-side rendering (SSR):

Server-side rendering solves the first-screen rendering problem. Depending on business needs, the rendering work can be split appropriately between the client and the server, making full use of the computing power on both sides to achieve performance optimization.
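A minimal SSR sketch using Vue 2's vue-server-renderer package (a toy example under assumed versions, not a production setup):

const Vue = require('vue');
const { createRenderer } = require('vue-server-renderer');
const http = require('http');

const renderer = createRenderer();

http.createServer((req, res) => {
  const app = new Vue({
    data: { msg: 'rendered on the server' },
    // a render function is used so the runtime-only build suffices
    render(h) { return h('div', { attrs: { id: 'app' } }, this.msg); }
  });

  // renderToString does the rendering work on the server, so the browser
  // receives ready-to-display HTML for the first screen
  renderer.renderToString(app, (err, html) => {
    if (err) { res.statusCode = 500; return res.end('SSR error'); }
    res.setHeader('Content-Type', 'text/html');
    res.end('<!DOCTYPE html><html><body>' + html + '</body></html>');
  });
}).listen(8080);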
