Front-End Performance Checklist 2021[1]


https://www.smashingmagazine….


Front-end performance optimization (I): preparation [2]


Front-end performance optimization (II): resource optimization [3]


Front-end performance optimization (III): build optimization [4]

Load key JavaScript asynchronously using defer

defer: the script is downloaded asynchronously and executed only after the HTML has been parsed. async: the script is downloaded asynchronously and executed as soon as it arrives (provided all earlier synchronous work has completed). If the script downloads very quickly, for example straight from the cache, it will still block HTML parsing; and multiple async scripts execute in an unpredictable order. Using defer is therefore recommended.

It is not recommended to use both defer and async, as async takes precedence over defer.
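A minimal illustration of the two attributes (the file names are only examples):

<!-- defer: downloads in parallel, executes in document order after HTML parsing -->
<script src="app.js" defer></script>
<!-- async: downloads in parallel, executes as soon as it arrives (order not guaranteed) -->
<script src="analytics.js" async></script>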

Use IntersectionObserver and Priority Hints to load expensive components

Native lazy loading (Chromium only) is already available for images and iframes: add the loading attribute to the element. The resource loads once the element is within a certain distance of the viewport. That threshold depends on several factors, from the type of image resource being fetched to the type of network connection. Experiments with Chrome on Android showed that on 4G, 97.5% of below-the-fold images that were lazy-loaded were fully loaded within 10ms of becoming visible; even on a slower 2G network, 92.6% of below-the-fold images were fully loaded within 10ms. As of July 2020, Chrome has significantly tightened the distance-from-viewport thresholds for lazy loading to better match developer expectations: the threshold is 1250px when the network is good (e.g. 4G) and 2500px when it is poor (e.g. 3G).

Otherwise, the best way to implement lazy loading is the Intersection Observer API: it asynchronously detects whether an element is visible within an ancestor element or the viewport (usually the scrolling parent) and lets us react to that asynchronously.

To support all browsers, we can combine Hybrid Lazy Loading [5] with IntersectionObserver [6].

<img data-src="lazy.jpg" loading="lazy" alt="Lazy image">
<script>
(function() {
  const images = document.querySelectorAll("[loading=lazy]");
  if ("loading" in HTMLImageElement.prototype) {
    // Native lazy loading is supported: just swap in the real source
    images.forEach(function(img) {
      img.setAttribute("src", img.getAttribute("data-src"));
    });
  } else {
    // Fall back to IntersectionObserver
    const config = { /* rootMargin, threshold, ... */ };
    let observer = new IntersectionObserver(function(entries, self) {
      entries.forEach(entry => {
        if (entry.isIntersecting) {
          const img = entry.target;
          img.setAttribute("src", img.getAttribute("data-src"));
          self.unobserve(img);
        }
      });
    }, config);
    images.forEach(image => { observer.observe(image); });
  }
})();
</script>

For more information on lazy loading, read Google’s Fast Load Times [7].

In addition, we can use the importance attribute [8] (Priority Hints) on a node to adjust a resource's priority. It can be used on elements such as <link>, <img> and <script>, and takes the following values:

● high: the resource may be prioritized, if the browser's heuristics do not prevent it.
● low: the resource's priority may be lowered, if the browser's heuristics allow it.
● auto: the default; the browser decides which priority to apply to the resource.
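A small sketch of how the hint is applied (note that in current Chrome releases the attribute eventually shipped as fetchpriority rather than importance):

<!-- Hint that the hero image matters more than a below-the-fold widget script -->
<img src="hero.jpg" importance="high" alt="Hero">
<script src="carousel.js" importance="low" defer></script>
<link rel="preload" href="banner.css" as="style" importance="low">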

Priority Hints behave slightly differently depending on the network stack. With HTTP/1.x, the only way the browser can express resource priority is by delaying when the request is issued. Lower-priority requests therefore only reach the network after the higher-priority ones, assuming higher-priority requests are queued. If there are none, the browser may still hold back some lower-priority requests if it expects a higher-priority request to appear soon (for example, while the <head> of the document is still open and key resources may still be discovered in it).

With HTTP/2, the browser may still defer some low-priority requests, but in addition to that, it can set the stream priority of its resources to a lower level, allowing the server to better prioritize the resources it sends down.

To view resource loading priority in Chrome's DevTools, open the Network panel, right-click the column header and check Priority.

At this point, you can see your resource load priority.

Load images progressively

A low-quality or even blurry image can be loaded first and then replaced with the full-quality version as the page continues to load, using BlurHash or LQIP (low-quality image placeholder) techniques. Whether this improves the user experience is debatable, but it does increase the Time To First Meaningful Paint. We can even automate it by using SQIP to create a low-quality version of the image as an SVG placeholder, or by using a CSS linear gradient as a gradient image placeholder.

● BlurHash [9]: a site that turns an uploaded image into a blurry placeholder.
● LQIP [10]: uses a low-quality image for the initial page load and swaps in the high-quality image once the page has loaded.
● SQIP [11]: helps create lower-quality versions of images as SVG placeholders.

How do you do that? We can use the Intersection Observer API. Of course, if the browser doesn’t support the intersection observer, you can use it with a polyfill or some library file.

// https://calendar.perfplanet.com/2017/progressive-image-loading-using-intersection-observer-and-sqip/
<img class="js-lazy-image" src="dog.svg" data-src="dog.jpg">

// Get all of the images that are marked up to lazy load
const images = document.querySelectorAll('.js-lazy-image');
const config = {
  // If the image gets within 50px in the Y axis, start the download.
  rootMargin: '50px 0px',
  threshold: 0.01
};

// The observer for the images on the page
let observer = new IntersectionObserver(onIntersection, config);
images.forEach(image => {
  observer.observe(image);
});

function onIntersection(entries) {
  // Loop through the entries
  entries.forEach(entry => {
    // Are we in viewport?
    if (entry.intersectionRatio > 0) {
      // Stop watching and load the image
      observer.unobserve(entry.target);
      preloadImage(entry.target);
    }
  });
}

// Assumed helper (not shown in the original snippet): swap data-src into src.
function preloadImage(image) {
  image.src = image.dataset.src;
}

Deferred rendering with content visibility

Using content-visibility: auto, when the container is outside the viewport, we can prompt the browser to skip the layout of the children.

footer {
  /* Only render when in viewport */
  content-visibility: auto;
  contain-intrinsic-size: 1000px;
  /* 1000px is an estimated height for sections that are not rendered yet. */
}

Note that content-visibility: auto behaves like overflow: hidden, but you can compensate by using padding-left and padding-right instead of the default margin-left: auto, margin-right: auto and a declared width. The padding lets elements overflow the content box into the padding box without leaving the box model as a whole and getting cut off.

body > .row {
  padding-left: calc((100% - var(--contentWidth)) / 2);
  padding-right: calc((100% - var(--contentWidth)) / 2);
}

It's also worth checking out the CSS contain property. It lets the developer declare an element and its contents to be as independent as possible from the rest of the DOM tree, so the browser can recalculate layout, style, paint, size, or any combination of them for a limited area of the DOM rather than the entire page, which can greatly improve performance. It takes the following values:

● layout: nothing outside the element can affect the layout inside it, and vice versa. This lets the browser potentially reduce the amount of work needed to lay out the page, and it may defer or deprioritize that work if the contained element is off-screen or obscured. One caveat: although the element's children cannot affect other elements on the page, they do affect the element itself, so if children are added or removed, the element's size may change and in turn affect the rest of the page.
● paint: tells the browser that no descendant of the element will be painted outside the element's bounds. If a descendant's box is partially clipped by the containing element's border, that part is not painted; if it lies entirely outside, it is not painted at all. This is similar to overflow: hidden, but overflow: hidden on its own does not give the browser the same opportunity to reduce or skip paint work.
● size: tells the browser it can lay out the page without considering any descendants. The contained element must have explicit height and width applied, or it collapses to a zero-pixel box. Layout only needs to consider the element itself, because descendants cannot influence its size; they are skipped entirely, as if the element had no descendants at all.
● style: properties that affect both the element and its descendants stay scoped to the element.
● content: shorthand for layout and paint.
● strict: a combination of layout, paint and size.
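A minimal sketch (the class names are illustrative) of applying containment to isolated widgets:

/* Isolate a self-contained widget so its internal changes do not force
   layout or paint work on the rest of the page. */
.chat-widget {
  contain: content; /* layout + paint, per the list above */
}

/* strict adds size containment, so explicit dimensions are required,
   otherwise the box collapses to zero. */
.sidebar-card {
  contain: strict;
  width: 300px;
  height: 400px;
}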

Use decoding="async" to defer decoding

Using decoding="async" gives the browser permission to decode the image off the main thread, avoiding the impact on the user of the CPU time spent decoding the image.

<img decoding="async" ... />

For off-screen images, we can first show a placeholder and then, when the image enters the viewport, use IntersectionObserver to trigger the network call and download the image in the background. We can also defer rendering until the image has been decoded with img.decode(), or simply download the image if the Image Decode API is not available.

// Image loading with predecoding
// -------------------------------------
const img = new Image();
img.src = "bigImage.jpg";
img.decode().then(() => {
    document.body.appendChild(img);
}).catch(() => {
    throw new Error('Could not load/decode big image.');
});

Generate and serve critical CSS

Critical CSS is the CSS for the first visible part of the page (above-the-fold CSS), and it is usually inlined into the <head> of the HTML. Because of caching, keeping critical CSS (and other important resources) in a separate file on the root domain can sometimes even have advantages over inlining. Note: with HTTP/2, critical CSS could be stored in a separate CSS file and delivered via server push without bloating the HTML, but server push has enough cross-browser pitfalls and race conditions to make it a hassle.

We can use CriticalCSS or Critical to generate the critical CSS, and use Webpack's Critters plugin to inline it and lazy-load the rest.

If you are still using loadCSS to load all of your CSS asynchronously, it is no longer necessary: using media="print" tricks the browser into loading the CSS asynchronously, and switching the media back on load applies it to the screen environment as soon as it arrives.

<!-- Via Scott Jehl. https://www.filamentgroup.com/lab/load-css-simpler/ -->
<!-- Load CSS asynchronously, with low priority -->
<link rel="stylesheet" href="full.css" media="print" onload="this.media='all'" />

Try to reorganize your CSS rules

CSS is so critical to performance for the following reasons:

● The browser cannot render the page until it has built a "render tree";
● The render tree is a combination of DOM and CSSOM;
● DOM is HTML plus all blocking JavaScript that needs to be done on it;
● CSSOM is all CSS rules that apply to the DOM;
● Using the async and defer properties makes it easy to unblock your JavaScript;
● Making CSS asynchronous is much more difficult;
● So a good rule of thumb to keep in mind is that pages render only as fast as the slowest stylesheet.

If we could split a single complete CSS file into its own media queries, it would look like this:

<link rel="stylesheet" href="all.css" media="all" />
<link rel="stylesheet" href="small.css" media="(min-width: 20em)" />
<link rel="stylesheet" href="print.css" media="print" />

The browser downloads all the files, but only those that meet the requirements of the current context will block rendering.

Also, avoid using @import in CSS files, because the imported file cannot start downloading until the CSS file that references it has been downloaded and parsed.

A CSS <link> blocks page rendering, and a script placed after the link has to wait until the CSS has been downloaded and parsed (i.e. the CSSOM has been built) before it can run; it also blocks asynchronously loaded JS. The best approach is to place JavaScript that does not depend on CSS before the stylesheet link.
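A small illustration of the ordering (file names are only examples):

<!-- CSS-independent work placed before the stylesheet: it runs immediately
     instead of waiting for styles.css to download and the CSSOM to be built. -->
<script>
  window.appStartTime = performance.now(); // reads no styles
</script>
<link rel="stylesheet" href="styles.css">
<!-- A script placed after the link blocks until the CSS has been parsed,
     which is only what you want when it actually reads computed styles. -->
<script>
  console.log(getComputedStyle(document.documentElement).fontSize);
</script>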

Inlined critical CSS does not benefit from the browser cache. We can solve this with a service worker: give the inline style element an id so it can be found quickly from JavaScript, then use the Cache API to store its contents in the local browser cache (with a text/css content type) for use on subsequent pages. To avoid inlining on subsequent pages and to reference the cached resource externally instead, we set a cookie on the first visit to the site.

// https://www.filamentgroup.com/lab/inlining-cache.html
<style id="css">
.header { background: #09878; }
h1 { font-size: 1.2em; /* ... */ }
h2 { margin: 0; }
/* ... */
</style>
<script>
if ("caches" in window) {
  var css = document.getElementById("css").innerHTML;
  caches.open('static').then(function(cache) {
    cache.put("site.css", new Response(
      css, { headers: { 'Content-Type': 'text/css' } }
    ));
  });
}
</script>

It's worth noting that dynamic styles can also be expensive, though usually only when you rely on hundreds of composed styled components being rendered simultaneously. So if you use CSS-in-JS, make sure your CSS-in-JS library optimizes execution when styles do not depend on theme or props, and avoid over-composing styled components. For those interested, read Aggelos Arvanitakis' The Unseen Performance Costs of Modern CSS-in-JS Libraries in React Apps [14].

Consider making your components connection-aware

If the user has enabled Save-Data mode, the browser sends the Save-Data request header to the server so the server can return less content. The header does nothing by itself, but the service provider or site owner can act on it:

● Google Chrome may apply interventions, such as deferring external scripts and lazy-loading iframes and images, to improve performance for users who opt into "lite" mode and have poor network connections.
● Site owners can offer lighter versions of their applications, for example by reducing image quality, serving server-side-rendered pages, or reducing the amount of third-party content.
● ISPs can transcode HTTP images to reduce their final size.

Of course, in addition to the user’s initiative to open, as a developer, we can also judge whether to return the user “lite” content according to the user’s current network state. It can be obtained by using the Network Information API, and the values are: slow-2g, 2g, 3g, or 4g.

navigator.connection.effectiveType

For easier control, we can also intercept requests in a service worker.

"use strict";

self.addEventListener('fetch', function (event) {
  // Check if the current connection is 2G or slow 2G
  if (/slow-2g|2g/.test(navigator.connection.effectiveType)) {
    // Check if the request is for an image
    if (/\.(jpg|png|gif|webp)$/.test(event.request.url)) {
      // Return a lightweight placeholder instead of the image
      event.respondWith(
        fetch('placeholder.svg', { mode: 'no-cors' })
      );
    }
  }
});

Consider making your components device-memory-aware

In addition to the network status, we should also consider the memory status of the device. Use the Device Memory API, navigator.deviceMemory, to get an idea of how much RAM the Device has (in GB), rounded to the nearest power of two.
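A rough sketch (the thresholds and names are illustrative, not from the original article) combining the Device Memory and Network Information APIs to decide whether to serve a "lite" experience:

// Fall back to reasonable defaults where the APIs are unavailable.
const memory = navigator.deviceMemory || 4;                 // in GB
const connection = navigator.connection || {};
const effectiveType = connection.effectiveType || '4g';

const serveLite =
  memory <= 2 ||
  /slow-2g|2g|3g/.test(effectiveType) ||
  connection.saveData === true;

if (serveLite) {
  // e.g. load low-resolution images and skip heavy third-party widgets
}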

Warm up the connection to speed up delivery

There are several resource hints you should know:

● dns-prefetch: performs a DNS lookup in the background.
● preconnect: asks the browser to start the connection handshake (DNS, TCP, TLS) in the background.
● prefetch: asks the browser to request a resource that may be needed soon.
● preload: fetches a resource for the current page without executing it.
● prerender: prompts the browser to build an entire page in the background for the next navigation (deprecated: between the huge memory and bandwidth footprint and complications such as multiple registrations skewing click-through and ad-impression analytics, it proved hard to get right).
● NoState Prefetch: like prerender, it fetches resources ahead of time, but it does not execute JavaScript or pre-render any part of the page.
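A few examples of how these hints are declared (the origins and file names are placeholders):

<!-- Only resolve DNS for an origin we might use -->
<link rel="dns-prefetch" href="https://cdn.example.com">
<!-- Warm up the full handshake for an origin we will definitely use -->
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<!-- Fetch a critical resource for the current page without executing it -->
<link rel="preload" href="/fonts/main.woff2" as="font" type="font/woff2" crossorigin>
<!-- Hint a resource that the next navigation will probably need -->
<link rel="prefetch" href="/next-page.js" as="script">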

For more about preload vs. prefetch, read Preload, Prefetch And Priorities In Chrome, which also details the circumstances under which a request may be made twice.

● Preload is a declarative fetch that forces the browser to request the resource without blocking the document's onload event; prefetch is a hint that a resource may be required, and the browser decides whether and when to load it.
● Preload is used when you have high confidence that the resource will be used on the current page; prefetch is typically for resources that may be used on a future navigation, across navigation boundaries.
● Preload is an early-fetch instruction to the browser for resources the page needs (key scripts, web fonts, hero images); prefetch is used slightly differently, for the user's future navigations (for example between views or pages), where the fetched resources and requests need to persist across the navigation. If page A initiates a prefetch for critical resources required by page B, the critical resource and navigation requests can complete in parallel; if we used preload instead, it would be cancelled as soon as page A unloads.
● Browsers have four types of caches: HTTP cache, memory cache, Service Worker cache and push cache. Both preload and prefetch are stored in the HTTP cache.

If you are interested, also look into Early Hints and Priority Hints.

Use the service worker to optimize performance

We have also seen the use of service workers in many places, so let’s go into more detail here.

A Service Worker is a script that the browser runs in the background, independently of the web page. Its core capability is intercepting and handling network requests, including programmatically managing a cache of responses, which makes offline experiences possible.

(1) Things to note

● It is a JavaScript worker, so it cannot directly access the DOM, localStorage or window. A Service Worker communicates with the pages it controls by responding to messages sent via the postMessage interface, and those pages can manipulate the DOM if needed.
● A Service Worker is a programmable network proxy that lets you control how network requests from the page are handled.
● Service workers are stopped when not in use and restarted the next time they are needed, so you cannot rely on global state inside the fetch and message handlers. If there is information you need to keep across restarts, the Service Worker can use the IndexedDB API.
● Service workers make extensive use of Promises.

(2) Life cycle

The Service Worker’s life cycle is completely independent of the web page, as follows:

● Registration.
● Installation: static assets can be cached here. If all files are cached successfully, the Service Worker is installed; if any file fails to download or cache, the install step fails and the Service Worker is not activated (i.e. not installed).
● Activation: a great opportunity to clean up old caches.
● Control: the Service Worker controls all pages in its scope, although the page that registered it must be reloaded before it is controlled for the first time. Once the worker is in control, it is in one of two states: terminated to save memory, or handling fetch and message events, which fire when a network request or message is issued from the page.

(3) Prerequisites

● Service Worker is supported by Chrome, Firefox and Opera. ● During development, the Service Worker can be used through localhost, but if you want to deploy the Service Worker on the website, you need to set HTTPS on the server.

(4) Usage

// 1. Register (after registering, you can check whether the service worker is enabled
// by going to chrome://inspect/#service-workers and looking for your site.)
if ('serviceWorker' in navigator) {
  window.addEventListener('load', function() {
    // Here /sw.js is in the root of the domain, so the service worker's scope is the entire origin.
    // In other words, the Service Worker will receive fetch events for everything on this domain.
    // If we registered the Service Worker file at /example/sw.js, it would only see fetch events
    // for pages whose URL starts with /example/ (i.e. /example/page1/, /example/page2/).
    navigator.serviceWorker.register('/sw.js').then(function(registration) {
      // Registration was successful
      console.log('ServiceWorker registration successful with scope: ', registration.scope);
    }, function(err) {
      // Registration failed :(
      console.log('ServiceWorker registration failed: ', err);
    });
  });
}

// 2. Install (sw.js)
self.addEventListener('install', function(event) {
  // 1. Open the cache
  // 2. Cache the files
  // 3. Confirm that all required assets have been cached
  event.waitUntil(
    caches.open(CACHE_NAME)
      .then(function(cache) {
        // urlsToCache: array of files
        return cache.addAll(urlsToCache);
      })
      .then(() => {
        // `skipWaiting()` forces the waiting ServiceWorker to become the
        // active ServiceWorker, triggering the `onactivate` event.
        // Together with `Clients.claim()` this allows a worker to take effect
        // immediately in the client(s).
        self.skipWaiting();
      })
  );
});

// 3. Cache and return requests (sw.js)
self.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request)
      .then(function(response) {
        // Cache hit - return response
        if (response) {
          return response;
        }
        // IMPORTANT: Clone the request. A request is a stream and
        // can only be consumed once. Since we are consuming this
        // once by cache and once by the browser for fetch, we need
        // to clone the request.
        var fetchRequest = event.request.clone();
        return fetch(fetchRequest).then(function(response) {
          // Check if we received a valid response, i.e. a same-origin request.
          // This means that requests for third-party assets are not added to the cache either.
          if (!response || response.status !== 200 || response.type !== 'basic') {
            return response;
          }
          // IMPORTANT: Clone the response. A response is a stream
          // and because we want the browser to consume the response
          // as well as the cache consuming the response, we need
          // to clone it so we have two streams.
          var responseToCache = response.clone();
          caches.open(CACHE_NAME)
            .then(function(cache) {
              cache.put(event.request, responseToCache);
            });
          return response;
        });
      })
  );
});

Note: any registrations and caches created from an incognito window are cleared when that window is closed.

(5) Updating the Service Worker

To update a Service Worker, the following happens:
● Update the sw.js file. When a user visits your site, the browser tries to re-download the script that defines the Service Worker in the background. If there is even a byte's difference from the currently used file, it is treated as a new Service Worker.
● The new Service Worker starts and its install event fires.
● At this point the old Service Worker still controls the current pages, so the new Service Worker enters a waiting state.
● When all currently open pages of the site are closed, the old Service Worker is terminated and the new one takes control.
● Once the new Service Worker takes control, its activate event fires. (A common task in the activate callback is cache management: if you cleared old caches during the install step instead, the old Service Worker still controlling the pages would suddenly be unable to serve files from those caches.)
// Delete any caches (old caches) that are not defined in the cache allowlist.
self.addEventListener('activate', function(event) {
  var cacheAllowlist = ['pages-cache-v1', 'blog-posts-cache-v1'];
  event.waitUntil(
    caches.keys().then(function(cacheNames) {
      return Promise.all(
        cacheNames.map(function(cacheName) {
          if (cacheAllowlist.indexOf(cacheName) === -1) {
            return caches.delete(cacheName);
          }
        })
      ).then(() => {
        // `claim()` sets this worker as the active worker for all clients that
        // match the worker's scope and triggers an `oncontrollerchange` event for
        // the clients.
        return self.clients.claim();
      });
    })
  );
});

(6) What optimization can be done?

1. Small HTML payloads

The HTML is split into two parts: on the first visit a complete HTML document is served, and once it arrives the head and tail of the shell are stored in the cache. On subsequent visits the Service Worker intercepts the request and forwards a request for the content-only HTML file; when that content arrives, it is stitched between the cached head and tail (which can be done as a stream) and returned to the browser.

// Using Workbox here. If you don't want to use Workbox, you can read the details
// (returning HTML as a stream): https://livebook.manning.com/book/progressive-web-apps/chapter-10/55
// In addition, the page title still needs to be handled; that is not covered here.
import {cacheNames} from 'workbox-core';
import {getCacheKeyForURL} from 'workbox-precaching';
import {registerRoute} from 'workbox-routing';
import {CacheFirst, StaleWhileRevalidate} from 'workbox-strategies';
import {strategy as composeStrategies} from 'workbox-streams';

const shellStrategy = new CacheFirst({cacheName: cacheNames.precache});
const contentStrategy = new StaleWhileRevalidate({cacheName: 'content'});

const navigationHandler = composeStrategies([
  () => shellStrategy.handle({
    request: new Request(getCacheKeyForURL('/shell-start.html')),
  }),
  ({url}) => contentStrategy.handle({
    request: new Request(url.pathname + 'index.content.html'),
  }),
  () => shellStrategy.handle({
    request: new Request(getCacheKeyForURL('/shell-end.html')),
  }),
]);

registerRoute(({request}) => request.mode === 'navigate', navigationHandler);

2. Offline caching

This is something the Service Worker supports out of the box.

3. Intercept and replace resources

For example, intercept image requests and, if a request fails, return a default fallback image.

function isImage(fetchRequest) {
  return fetchRequest.method === "GET" && fetchRequest.destination === "image";
}

self.addEventListener('fetch', (e) => {
  e.respondWith(
    fetch(e.request)
      .then((response) => {
        if (response.ok) return response;
        // User is online, but response was not ok
        if (isImage(e.request)) {
          // Get broken image placeholder from cache
          return caches.match("/broken.png");
        }
      })
      .catch((err) => {
        // User is probably offline
        if (isImage(e.request)) {
          // Get broken image placeholder from cache
          return caches.match("/broken.png");
        }
      })
  );
});

4. Different types of resources use different caching strategies

For example, Network Only (live data), Cache Only (Web Font), Network Falling Back to Cache (HTML, CSS, JavaScript, image).

For example, WebP images are returned on mobile phones that support WebP images.

The request's destination can be used to distinguish between different types of requests. The destination associated with a Request has one of the following values: "audio", "audioworklet", "document", "embed", "font", "image", "manifest", "object", "paintworklet", "report", "script", "serviceworker", "sharedworker", "style", "track", "video", "worker" or "xslt", or an empty string if not specified.
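A minimal sketch of the WebP idea above (it assumes a .webp variant exists next to each .jpg/.png; the rewrite rule is only illustrative):

self.addEventListener('fetch', (event) => {
  const accept = event.request.headers.get('accept') || '';
  if (event.request.destination === 'image' && accept.includes('image/webp')) {
    // Rewrite e.g. /images/photo.jpg -> /images/photo.webp
    const webpUrl = event.request.url.replace(/\.(jpe?g|png)$/, '.webp');
    event.respondWith(
      fetch(webpUrl).catch(() => fetch(event.request))
    );
  }
});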

5. It can also be used on the CDN/edge

I won’t go into the details here.

(7) When 7 KB is equal to 7 MB

DOMException: Quota exceeded.

If you are building a progressive web application and run into bloated cache storage when the Service Worker caches static assets served from a CDN, make sure the correct CORS response headers are set for cross-origin resources, that you do not inadvertently cache opaque responses with the Service Worker, and that you opt cross-origin image resources into CORS mode by adding the crossorigin attribute to the <img> tag.
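For example (the CDN origin is a placeholder):

<!-- Opt the image into CORS mode so the cached response is not opaque -->
<img src="https://cdn.example.com/hero.jpg" crossorigin="anonymous" alt="Hero">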

Safari’s Range Request

When requesting a video, Safari sends an initial request with the Range header set to bytes=0-1, so it requires the HTTP server serving video and audio to support Range requests. This is a problem for Service Workers. To solve it, handle Range requests in the Service Worker like this:

// https://philna.sh/blog/2018/10/23/service-workers-beware-safaris-range-request/
self.addEventListener('fetch', function(event) {
  var url = new URL(event.request.url);
  if (url.pathname.match(/^\/((assets|images)\/|manifest.json$)/)) {
    if (event.request.headers.get('range')) {
      event.respondWith(returnRangeRequest(event.request, staticCacheName));
    } else {
      event.respondWith(returnFromCacheOrFetch(event.request, staticCacheName));
    }
  }
  // other strategies
});

// Range header: Range: bytes=200-1000
function returnRangeRequest(request, cacheName) {
  return caches
    .open(cacheName)
    .then(function(cache) {
      return cache.match(request.url);
    })
    .then(function(res) {
      if (!res) {
        return fetch(request)
          .then(res => {
            const clonedRes = res.clone();
            return caches
              .open(cacheName)
              .then(cache => cache.put(request, clonedRes))
              .then(() => res);
          })
          .then(res => {
            return res.arrayBuffer();
          });
      }
      return res.arrayBuffer();
    })
    .then(arrayBuffer => {
      const bytes = /^bytes\=(\d+)\-(\d+)?$/g.exec(
        request.headers.get('range')
      );
      if (bytes) {
        const start = Number(bytes[1]);
        const end = Number(bytes[2]) || arrayBuffer.byteLength - 1;
        return new Response(arrayBuffer.slice(start, end + 1), {
          status: 206,
          statusText: 'Partial Content',
          headers: [
            ['Content-Range', `bytes ${start}-${end}/${arrayBuffer.byteLength}`]
          ]
        });
      } else {
        return new Response(null, {
          status: 416,
          statusText: 'Range Not Satisfiable',
          headers: [['Content-Range', `*/${arrayBuffer.byteLength}`]]
        });
      }
    });
}

Optimize rendering performance

Make sure there is no jank when scrolling the page or animating elements, and that you consistently hit 60 frames per second. If that is not possible, at least keep the frame rate in a mixed range of 15 to 60 fps. You can use CSS will-change to tell the browser which elements and properties will change.

Even without changes to the DOM or its styles, repaints are triggered by GIFs, canvas drawing and animations. To avoid repaints, animate with opacity and transform as much as possible, with the exception of special cases such as SVG path animations, which do trigger repaints. To spot unnecessary repaints, use DevTools → More Tools → Rendering → Paint Flashing.

Besides Paint Flashing, there are several other interesting tool options:

● Layer Borders: shows the borders of browser-rendered layers so that any change in size is easy to spot.
● FPS Meter: displays the browser's current frame rate in real time.
● Paint Flashing: highlights the areas of the page the browser is forced to repaint.

Umar Hansa's video Understanding Paint Performance with Chrome DevTools [15] is worth a look.

Analyze Runtime Performance [16]

(1) How to measure the time spent on style and layout calculation?

requestAnimationFrame can be used as our tool, but there is a subtlety about when its callback executes, and browsers differ: Chrome, Firefox and Edge >= 18 fire it before style and layout calculation; Safari, IE and Edge < 18 fire it after style and layout calculation, just before painting.

If setTimeout is called in the callback of requestAnimationFrame, in a compliant browser such as Chrome, the setTimeout callback will be called after drawing. In non-compliant browsers (such as Edge 17), requestAnimationFrame and setTimeout start almost at the same time, both firing after the style and layout calculations have been completed.

If a microtask is used in the requestAnimationFrame callback, such as Promise.resolve, it is completely useless: it runs immediately after the JavaScript execution completes, so it does not wait for style and layout at all.

If requestIdleCallback is called in the requestAnimationFrame callback, it fires after painting is complete. However, it may fire too late: it usually starts fairly quickly, but if the main thread is busy with other work, requestIdleCallback can be delayed for a long time while the browser waits until it deems it safe to run some "idle" work. This makes it much less reliable than setTimeout.

If requestAnimationFrame is called in the callback of requestAnimationFrame, it fires after painting is complete and may capture more waiting time than setTimeout: about 16.7 milliseconds on a 60Hz screen, while setTimeout's standard clamp is 4 milliseconds, so it is slightly less accurate.

In general, requestAnimationFrame + setTimeout, despite its drawbacks, is probably better than requestAnimationFrame + requestAnimationFrame.
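A rough sketch of that combination (not from the original article; the DOM mutation is just an illustration). In browsers where requestAnimationFrame fires before style/layout and the setTimeout callback runs after painting, the measured delta approximates the style/layout/paint work:

function measureRenderWork(label) {
  requestAnimationFrame(() => {
    const start = performance.now();
    setTimeout(() => {
      console.log(label + ': ~' + (performance.now() - start).toFixed(1) + 'ms of style/layout/paint');
    }, 0);
  });
}

// Usage: call right after the DOM mutation you want to measure.
for (let i = 0; i < 2000; i++) {
  const div = document.createElement('div');
  div.textContent = 'row ' + i;
  document.body.appendChild(div);
}
measureRenderWork('2000 rows appended');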

(2) Do you know about masonry layout?

It can now be done with CSS Grid alone [17] (grid-template-rows: masonry, still experimental). If you are interested, you can read more about it; I will not describe it here.

.container {
  display: grid;
  /* Create a four-column layout */
  grid-template-columns: repeat(4, 1fr);
  grid-template-rows: masonry;
}

(3) CSS animation

When we write CSS animations we are usually told to animate with transform. For example, to move an element, why not just use the familiar left and top? Because the browser then has to recalculate the element's position constantly, which triggers reflow; likewise, changing how an image is displayed on the page triggers a repaint, and repaints are usually expensive.

If you want the animation to look smooth, there are three things to note:

● It does not affect the document flow.
● It does not depend on the document flow.
● It does not cause a repaint.

Properties and elements that meet these conditions include:

● transforms such as translate3d and translateZ;
● <video>, <canvas> and <iframe>;
● transform and opacity animations via Element.animate();
● transform and opacity animations via CSS transitions and animations;
● position: fixed;
● will-change;
● filter.

When the browser can apply these without reflow or repaint, it can use compositing optimizations: the element is painted onto its own layer and sent to the GPU for compositing. This is often described as promoting the element to its own compositing layer. Note that if an element is promoted to its own layer, elements rendered above it are implicitly promoted as well.

How much memory does a single composited layer require? Take a simple example: how much memory is needed to store a 320×240-pixel rectangle filled with a solid color, #FF0000? Formats like PNG, JPEG and GIF are used to store and transmit image data, but to draw the image on screen the computer has to decompress it into an array of pixels. So our example image consumes 320×240×3 = 230,400 bytes of memory: the width multiplied by the height gives the number of pixels, multiplied by 3 because each pixel is described by three bytes (RGB). If the image contains a transparent area, multiply by 4 instead, because an extra byte is needed to describe transparency (RGBA): 320×240×4 = 307,200 bytes.

The browser always rasterizes composited layers as RGBA images. Technically it would be possible to store the PNG on the GPU to reduce the memory footprint, but the GPU draws on a per-pixel basis, which would mean decoding the entire PNG over and over for each pixel.

If you want to see how many layers your site has and how much memory they consume: in Chrome, go to chrome://flags/#enable-devtools-experiments and enable the "Developer Tools experiments" flag, open DevTools with ⌘+⌥+I (Mac) or Ctrl+Shift+I (PC), click the vertical three-dot icon in the top right, choose "More tools" and then "Layers". The panel shows all active layers of the current page as a tree; selecting a layer shows its size, memory consumption, repaint count and the reason it was composited.

How do browsers handle animations?

Here we take the form of a click-triggered animation as an example.

● First, right after the page loads, the browser has no reason to put the element on its own layer, so it starts out in the default document flow (the background layer).
● When we click the button, the element is promoted to a new layer. Promoting an element triggers a repaint: the browser has to paint the element's texture onto the new layer and repaint the background layer without it.
● The new layer's texture must be transferred to the GPU to composite the final image the user sees on screen. Depending on the number of layers, the size of the textures and the complexity of the content, the repaint and transfer can take a considerable amount of time, which is why we sometimes see an element flicker at the start or end of an animation.
● At the end of the animation we remove the reason for the extra layer, and the browser, seeing no need to waste resources on compositing, falls back to the optimal strategy of keeping the whole page content on a single layer. That means the element must be painted back onto the background layer (another repaint) and the updated texture sent to the GPU; as above, this can cause flickering.

To eliminate the problem of implicitly creating new layers and reduce visual artifacts, the following actions are recommended:

● Try to keep animated objects as high as possible in z-index. Ideally they should be direct children of the body element. Of course this is not always possible when an animated element is nested deep in the DOM tree for layout reasons; in that case you can clone the element and place the clone in the body just for the animation.
● With the will-change CSS property you can tell the browser that the element will change, and the browser will (though not always!) promote it to its own layer ahead of time so the animation starts and stops smoothly. Don't abuse this property, though, or you will end up with a large increase in memory consumption.

For CSS animations, we can do the following optimizations:

● Only animate transform and opacity; they trigger neither reflow nor repaint.
● For solid-color images, use a physically smaller image and scale it up with a transform to achieve the effect; this reduces memory usage. Even for larger images you can usually shrink them by 5 to 10 percent and scale them back up without the user noticing any difference, saving several megabytes of precious memory.
● Prefer CSS transitions and transforms over JS-driven animation where possible: they are faster and are not blocked by heavy JavaScript work.
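A minimal sketch of those recommendations (the class names are illustrative); the will-change hint is added shortly before the animation and removed afterwards:

.panel {
  transition: transform 300ms ease, opacity 300ms ease;
}
.panel.will-animate {
  will-change: transform, opacity; /* toggle this class just before animating */
}
.panel.open {
  /* only compositor-friendly properties change */
  transform: translate3d(0, -20px, 0);
  opacity: 1;
}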

(4) Rendering performance optimization list

● font-display: speeds up the display of text in custom fonts; by default, text using a custom font is invisible until the font has loaded (or for up to 3 seconds).
● Self-host web fonts: self-hosting (with google-fonts-webpack-plugin if you use Google Fonts) combined with font subsetting helps fonts load faster.
● /*#__PURE__*/: if a function is called once and its result stored in a variable that is never used, tree-shaking removes the variable but not the call; adding this annotation before the call marks it as side-effect-free so it can be removed too.
● babel-plugin-styled-components with styled-components: the plugin prefixes CSS-in-JS declarations with /*#__PURE__*/; without it, unused styles are not removed from the bundle.
● Detect unnecessary repaints: DevTools → More Tools → Rendering → Paint Flashing.
● Check code splitting/coverage: DevTools → Ctrl/⌘+Shift+P → type "Coverage".
● Find resources that are not gzip/Brotli compressed: enter "-has-response-header:content-encoding" in the Network panel filter.
● Measure third-party impact: Network → sort by domain → right-click each third party → "Block request domain"; reload and compare.
● Preload web fonts with crossorigin="anonymous": without this attribute, preloaded fonts are ignored because of the CORS mode mismatch.
● image-webpack-loader: insert this loader in front of url-loader or file-loader and it compresses and optimizes images as needed.
● responsive-loader: used together with <img srcset>, it serves smaller images to smaller screens.
● svg-url-loader: if you load SVG with url-loader, the base64-encoded resource is on average 37% larger than the original asset; svg-url-loader URL-encodes the SVG instead.
● purgecss-webpack-plugin: removes unused classes, i.e. unused CSS.
● babel-plugin-lodash: rewrites lodash imports so that only the methods actually used are bundled (roughly 10-20 functions instead of 300); also try aliasing lodash to lodash-es (or vice versa) to avoid bundling different versions of lodash pulled in by different dependencies.
● If you use babel-preset-env and core-js 3+, enable useBuiltIns: "usage": only the polyfills you actually use and need will be bundled.
● If you use HtmlWebpackPlugin, enable optimization.splitChunks: 'all': webpack will automatically code-split entry files for better caching.
● Set optimization.runtimeChunk: true: the webpack runtime moves into a separate chunk, which also improves caching.
● webpack-bundle-analyzer: helps analyze the bundle and avoid oversized files. http://webpack.github.io/analyse/ helps figure out why the bundle contains a specific module.
● preload-webpack-plugin: used with HtmlWebpackPlugin, generates <link rel="preload/prefetch"> for all JS chunks.
● duplicate-package-checker-webpack-plugin: warns when you bundle multiple versions of the same library (extremely common with core-js).
● Bundle Buddy: shows which modules are duplicated across your chunks; use it to fine-tune code splitting.
● source-map-explorer: builds a map of modules and dependencies from the source map. Unlike webpack-bundle-analyzer it only needs the source map, which is useful if you cannot edit the webpack configuration (for example with create-react-app).
● Bundle Wizard: also creates a dependency graph, but for the whole page.
● Day.js: replaces Moment.js with the same functionality at a much smaller size.
● Linaria: a zero-runtime alternative to styled-components or Emotion, with a similar API.
● Quicklink: a drop-in solution that prefetches links based on what is in the user's viewport; it is tiny (<1 KB minified and gzipped).
● Service Worker.
● HTTP/2: to check whether all requests use a single HTTP/2 connection or are configured incorrectly, enable the "Connection ID" column in DevTools → Network.
● Cache-Control: for hashed static assets (like /static/bundle-ab3f67.js), cache for as long as possible with Cache-Control: immutable plus a long max-age; for API responses (like /api/user), prevent caching with Cache-Control: max-age=0, no-store.
● Split your CSS into two parts: the critical CSS and the below-the-fold CSS.
● Defer third-party resources or wrap them in setTimeout so that third-party scripts do not compete for bandwidth and CPU time.
● If you have any 'scroll' or 'touch*' listeners, pass { passive: true } to addEventListener: this tells the browser you will not call event.preventDefault() inside, so it can optimize how these events are handled (see the sketch after this list).
● Do not interleave reads and writes of layout properties such as "width" or "offset*": every time you change a style and then read width or a similar property, the browser has to recalculate layout (see the sketch after this list).
● Use polyfill.io to reduce the amount of polyfill shipped: it checks the User-Agent header and serves browser-specific polyfills, so modern Chrome users load none while IE 11 users get all of them.
● Tools: Lighthouse CLI, WebPageTest.
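A small sketch of the two items flagged above (the '.list' selector is only an illustration):

const list = document.querySelector('.list');

// Passive listener: we promise never to call event.preventDefault(),
// so the browser can keep scrolling on the compositor thread.
list.addEventListener('scroll', () => { /* ... */ }, { passive: true });

// Bad: read -> write -> read -> write forces layout on every iteration.
// Good: read everything first, then write.
const items = Array.from(list.children);
const widths = items.map(item => item.offsetWidth);        // reads (one layout pass)
items.forEach((item, i) => {
  item.style.width = Math.min(widths[i], 480) + 'px';      // writes, no reads in between
});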

XIII. Optimizing perceived performance [20]

The concept involves the psychology of waiting, basically keeping the user busy or engaged with something else.

In terms of time, it can be analyzed from two different points of view: objective time, also known as clock time (i.e., time really spent); Perceived time, also called brain time (i.e., time perceived by the user).

Time is money, everywhere. A 2015 survey found that visitors can abandon a website in just three seconds. For every one second of improvement, Wal-Mart’s conversion rate on its website went up by 2%. When AutoAnything halved load times, its sales rose 13 per cent.

We divide time into four spans:

● 0.1-0.2s: studies indicate this interval is the maximum acceptable response time for something to feel instantaneous; the user notices little or no delay.
● 0.5-1s: the maximum response time for immediate actions; this is roughly the response time of the other party in a face-to-face conversation. Delays in this range are noticeable but easily tolerated by most users. Within this time the user must be given some indication that their interaction, such as a click, has been received.
● 2-5s: the optimal experience time varies with subjective metrics, but for most tasks the average online user's focused attention lasts between two and five seconds. That is why for years we have used 2 seconds as the target page load time.
● 5-10s: according to the National Center for Biotechnology Information at the US National Library of Medicine, the average attention span fell from 12 seconds in 2000 to 8.25 seconds in 2015. Guess what: that is almost a second shorter than the attention span of a goldfish. For simplicity (and in deference to the goldfish), we will take 10 seconds as the absolute maximum span of the user's attention. The user is still focused on the task but easily distracted; this is how long the system has to involve the user in the process, and if it does not, the user is likely lost for good.

Therefore, page loading should occur immediately and the user should get immediate feedback on a given action.

One thing to remember: the 20% rule. In order for the user to see the perceived time difference before and after optimization, it must be changed by at least 20%.

Suppose your page loads in 5 seconds and your competitor's in 2 seconds. Even if you improve yours by 20%, users will still compare you with the competitor. If we cannot get down to 2s, we should at least reach 2s + 2 × 20% = 2.4s, so that users at least will not perceive the difference.

There is also a psychological threshold. Taking 2s and 5s as an example: a duration above this threshold is perceived by the user as being close to 5 seconds, and a duration below it as close to 2 seconds. Using this idea, we can take the geometric mean of the two, √(A × B), in this case √(2 × 5) ≈ 3.2 seconds. If the load time is below 3.2 seconds, users will still notice the difference, but it will no longer influence which service they choose.

From the point of view of the user's mental activity, waiting time can be divided into two phases: the active phase, or active waiting, which may involve physical activity or pure thought, such as solving a problem or finding a route on a map; and the passive phase, or passive waiting, during which the user has no choice or control over the waiting time, such as standing in line or waiting for someone who is late for an appointment. Even when the time intervals are objectively equal, people tend to estimate time spent waiting passively as longer than time spent waiting actively.

When we talk about waiting too long, we’re usually talking about passively waiting time. Therefore, in order to manage mental time and make the brain perceive an event as less lasting than it actually is, we should usually minimize the passive phase of the event by increasing the active phase of the event. There are several techniques to achieve this goal, but most boil down to two simple practices: Start Early and Finish Early.

Start early

Start the event with an active phase and keep the user in it for as long as possible before switching them into passive waiting. This should not, of course, change the length of the event itself. Most people do not perceive the active phase as waiting time, so for the user's brain, starting early effectively moves the start marker of the wait closer to the end of the event (to the end of the active phase), which makes the whole event feel shorter.

In 2009 an airport in Houston, Texas, faced an unusual complaint: passengers were unhappy about how long they waited to collect their luggage on arrival. The airport responded by adding baggage handlers, which brought the wait down to about eight minutes, but the complaints continued. Further investigation found that the first bags appeared on the carousel about eight minutes after arrival, while passengers needed only about a minute to walk to the carousel. On average, then, passengers spent seven minutes just standing and waiting: psychologically, the active phase was one minute and the passive wait was seven. The solution was to move the arrival gates away from the main terminal and route luggage to the farthest carousel. That increased passengers' walking time to six minutes and left only two minutes of passive waiting. Despite the longer journey, complaints dropped to almost zero.

We can do something similar in a front-end project. For example, Safari's search: Safari preloads the page of the Top Hit result in the search list, so when the user clicks that link the page appears faster. Using the same idea, we can implement an early start with resource hints: dns-prefetch, preconnect, prefetch, preload and prerender.

Finish early

Just as starting early moves the start marker, finishing early moves the end marker closer to the start, giving the user the feeling that the process ends quickly. In this case the event begins in the passive phase, but we switch the user into the active phase as soon as possible.

The most common use of this technology on the network is in video streaming services. When you click the play button on a video, you don’t have to wait for the entire video to download. Start playing when the first minimum-required video block is available. Thus, the end tag is moved closer to the starting point and provides the user with an effective wait (to watch the downloaded block) while the rest of the video is downloaded in the background. Simple and effective.

We can apply the same technique to page load time. Once we have the basic elements (such as the DOM) to display, we start rendering the page; we do not have to wait for all resources to download if they do not affect rendering. We do not even need all of the HTML: content that is not immediately visible, such as the footer, can be injected later with JavaScript. By turning the load into a short passive wait at the start followed by active waiting once the initial content is presented, we give the user something as quickly as possible and make the page feel faster than it actually is.

In addition to the above, we can do a few things to improve the user’s tolerance.

In the first half of the 20th century, building managers received complaints about long elevator waits. There was no simple technical fix; solving it properly would have meant expensive, time-consuming engineering. Instead a different, non-technical solution was proposed: mirrors in the elevators and floor-to-ceiling mirrors in the lobby. It worked remarkably well. Why? Because it replaced the purely passive waiting with an activity: people looked at themselves (and sneaked glances at each other). It did not try to convince people that the wait was shorter, nor did it change anything about the objective timing of the event; it simply moved people from waiting into activity, and as a result their tolerance for the wait grew considerably.

Here are some propositions about the psychology of waiting:

● P1: Occupied time feels shorter than unoccupied time.
● P2: People want to get started (pre-process waits feel longer than in-process waits).
● P3: Anxiety makes waits seem longer.
● P4: Uncertain waits feel longer than known, finite waits.
● P5: Unexplained waits feel longer than explained waits.
● P6: Unfair waits feel longer than fair waits.
● P7: The more valuable the service, the longer the customer will wait.
● P8: Waiting alone feels longer than waiting in a group.
● P9: Uncomfortable waits feel longer than comfortable waits.
● P10: New or infrequent users feel they wait longer than frequent users.

Addressing P4 and P5 is what we call tolerance management. First, we can resolve the uncertainty about wait time and process state with good progress indicators. Progress indicators fall into two groups, dynamic (updated over time) and static (constant), and each can be subdivided into deterministic (showing progress by unit of work, time or another measure) and indeterminate (showing no measure of progress).

How to choose? We can distinguish them according to the previous time span:

● Instant (0.1-0.2s): no indicator is needed.
● Immediate (0.5-1s): 1s is the longest span of uninterrupted thought, and showing complex indicators here would break the user's flow. In general there is no harm in showing a simple indicator with no text, such as a class D indicator (a spinner) or a very basic progress bar (a simplified class A), which avoids interrupting the user's train of thought while subtly signalling that the system is responding.
● Optimal experience (2-5s): in this range we must indicate that the input or request is being processed and that the system is responding. The best choices are again class D indicators or simplified class A indicators, without drawing the user's attention to additional information.
● Attention (5-10s): this is the edge of the user's tolerance threshold and needs a more complete solution. For events of this length or longer we need more than a generic "in progress" indicator: the user needs to know how long they will have to wait, so we should show a class A or B dynamic indicator that makes the progress of the process clear.

In summary:

● For events lasting 0.5-1s: show a class D indicator (spinner) or a simplified class A indicator (progress bar), depending on the situation.
● For events lasting 1-5s: use either a class D indicator (spinner) or a simplified class A indicator (progress bar).
● For events longer than 5s: a dynamic class A or B indicator is recommended.

In practical applications, we can use them in combination, for example:

● Use class D indicators in succession.
● Use a class D indicator while changing the copy shown as the process progresses.
● Provide an interactive animation so the user has something to do while waiting.

In addition, we can also use a skeleton screen while the page is loading, which I won’t go into here.

Prevent layout shifts and repaints

One of the more disruptive experiences in the realm of perceived performance is layout shifting, or reflow, caused by rescaled images and video, web fonts, injected ads, or late-discovered scripts that fill components with their actual content. As a result, a user who has started to read an article can be interrupted by a layout jump above the reading area. The experience is abrupt and disorienting, and it is often a sign that loading priorities need to be reconsidered.

The community has developed a number of techniques and solutions to avoid reflow. In general, avoid inserting new content above existing content unless it happens in response to user interaction. Always set the width and height attributes on images, so modern browsers allocate the box and reserve the space by default (Firefox, Chrome).

For images and videos, we can use a placeholder to reserve the display box the media will occupy. As long as the aspect ratio is set and maintained, the area is reserved correctly.

The placeholder could be:

● Base64-encoded SVG: smaller than a Base64-encoded PNG, especially as the aspect ratio becomes more unusual.
● URL-encoded SVG: easy to read, easy to template, and endlessly customizable; a perfect placeholder for images of unknown size without creating CSS blocks or generating Base64 strings. For example:
data:image/svg+xml,%3Csvg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 ${width} ${height}"%3E%3C/svg%3E
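For example, here is a minimal sketch (hero.jpg and the 1600x900 dimensions are made up for illustration) that combines the width and height attributes mentioned above with a URL-encoded SVG placeholder; the lazy-loading script shown earlier would later swap data-src into src:

<img
  src='data:image/svg+xml,%3Csvg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1600 900"%3E%3C/svg%3E'
  data-src="hero.jpg"
  loading="lazy"
  width="1600" height="900"
  alt="Hero image">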

Consider using native lazy loading rather than lazy loading driven by external scripts, and fall back to hybrid lazy loading only when native lazy loading is not supported.

● Native lazy loading: <img loading=lazy> (supported by most browsers, but compatibility gaps remain).
● Script-based lazy loading (i.e. lazy loading below-the-fold images) can be implemented with the Intersection Observer API, or by listening to scroll, resize, or orientationchange events.
● Hybrid lazy loading: fall back to script-based lazy loading when native lazy loading is not supported.

Hybrid lazy loading

First, check whether the browser supports native lazy loading:

if ('loading' in HTMLImageElement.prototype)

To keep things short: we have already shown how to use the Intersection Observer above, so we won't repeat it here.

Always group web-font repaints and switch from all fallback fonts to all web fonts at once; just make sure the switch isn't too abrupt by using a font style matcher to adjust the line height and spacing between the fonts. We've covered font optimization before, so I won't go into it here.

To make the fallback font mimic the metrics of the web font, we can override the font metrics with @font-face descriptors (enabled in Chrome 87). (Note, however, that adjustments can get complicated with complex font stacks.) The proposed descriptors are:

ascent-override, descent-override, line-gap-override

Syntax: <percentage> | normal. Initial value: normal. Status: implemented behind a flag in M86; the CSSWG has agreed to ship it.

These descriptors let us eliminate vertical layout shift entirely (apart from a different number of lines caused by different line breaking). When line height is calculated, the ascent, descent, or line gap is set to the given percentage of the used font size. This lets us control the line-box height and the baseline position:

line box height = ascent + descent + line gap
baseline position = line box top + line gap / 2 + ascent

For example, with ascent-override: 80%; descent-override: 20%; line-gap-override: 0%, the height of each line box is 1em (assuming a font size of 1em), and the baseline sits 0.8em below the top of the line box.
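A minimal sketch of such a fallback face (the font names and percentages here are illustrative assumptions, not measured values):

@font-face {
  font-family: "fallback-for-webfont"; /* hypothetical fallback that mimics the web font's metrics */
  src: local("Arial");
  ascent-override: 80%;
  descent-override: 20%;
  line-gap-override: 0%;
}

body {
  /* The overridden fallback renders until the web font arrives, keeping line boxes the same height. */
  font-family: "My Web Font", "fallback-for-webfont", sans-serif;
}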

advance-override

Suggested syntax:

Initial value: 0

This descriptor lets us reduce horizontal layout shift, as well as the vertical shift caused by different line breaking.

It sets an additional advance for every character rendered with the font; the extra advance equals the descriptor value multiplied by the used font size.

Note: this applies in addition to the CSS letter-spacing property. For example, with font-size: 20px; letter-spacing: -1px and advance-override: 0.1 on the @font-face, the final extra spacing between characters is 20px * 0.1 - 1px = 1px.
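As a sketch only (advance-override is the proposal described above, not a shipped descriptor, so current browsers will ignore it; the font names are hypothetical), the arithmetic from the note would look like this:

@font-face {
  font-family: "fallback-face";  /* hypothetical name */
  src: local("Arial");
  advance-override: 0.1;         /* proposed: 0.1 x the used font size of extra advance per character */
}

p {
  font-family: "fallback-face", sans-serif;
  font-size: 20px;
  letter-spacing: -1px;          /* net extra spacing: 20px * 0.1 - 1px = 1px */
}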

As for CSS, we can make sure that the critical layout CSS is inlined in the header of each template. Beyond that: on long pages, the vertical scrollbar, when it appears, shifts the main content 16px to the left. To show the scrollbar early, we can add overflow-y to the html element to force a scrollbar at first paint. This helps because scrollbars can cause noticeable layout shifts as above-the-fold content reflows when the width changes; it mostly matters on platforms with non-overlay scrollbars, such as Windows. Note, though, that this can break position: sticky, because those elements will never scroll out of the container (position: sticky only works as expected when the ancestor's overflow is visible).
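The scrollbar part of the paragraph above is a one-line rule:

html {
  overflow-y: scroll; /* force a vertical scrollbar at first paint so its later appearance cannot shift the layout */
}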

If the header becomes fixed or sticky as the page scrolls, reserve space for it beforehand, because taking the header out of the layout flow and pinning it to the top of the page moves all of the content below it up.
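A minimal sketch of reserving that space (the class names and the 64px height are assumptions): the wrapper keeps its height in the flow, so pinning the header does not move the content below it.

.site-header-wrapper {
  height: 64px;          /* reserves the header's space in the normal flow */
}

.site-header-wrapper .site-header.is-pinned {
  position: fixed;       /* the header leaves the flow when pinned... */
  top: 0;
  left: 0;
  right: 0;
  height: 64px;          /* ...but the wrapper above still occupies 64px */
}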

If there is more content after a list, infinite scrolling and "load more" buttons will also trigger CLS. To improve CLS, reserve enough space for the incoming content before the user scrolls to that part of the page, and remove any DOM elements at the footer or bottom of the page that might be pushed down as content loads. Prefetch the data and images for upcoming content so they already exist by the time the user scrolls there. For long lists, you can also use a virtual list.

How is CLS computed? We've covered that before, so I won't repeat it here. You can inspect layout shifts in:

Chrome DevTools > Performance Panel > Experience

How do you measure CLS in code? Consider the following:

// Source: https://wicg.github.io/layout-instability/
let perFrameLayoutShiftData = [];
let cumulativeLayoutShiftScore = 0;

function updateCLS(entries) {
  for (const entry of entries) {
    // Only count layout shifts without recent user input.
    if (entry.hadRecentInput) continue;
    perFrameLayoutShiftData.push({
      score: entry.value,
      timestamp: entry.startTime
    });
    cumulativeLayoutShiftScore += entry.value;
  }
}

// Observe all layout shift occurrences.
const observer = new PerformanceObserver((list) => {
  updateCLS(list.getEntries());
});
observer.observe({type: 'layout-shift', buffered: true});

// Send final data to an analytics back end once the page is hidden.
document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') {
    // Force any pending records to be dispatched.
    updateCLS(observer.takeRecords());
    // Send data to your analytics back end (assumes `sendToAnalytics` is defined elsewhere).
    sendToAnalytics({perFrameLayoutShiftData, cumulativeLayoutShiftScore});
  }
});


References

[1] Front-End Performance Checklist 2021: https://www.smashingmagazine…
[2] Front-end performance optimization (I): preparation work: https://mp.weixin.qq.com/s/QD…
[3] Front-end performance optimization (II): resource optimization: https://mp.weixin.qq.com/s/Yb…
[4] Front-end performance optimization (III): construction optimization: https://mp.weixin.qq.com/s/sp…
[5] Hybrid Lazy Loading: https://www.smashingmagazine…
[6] Lazy-Load With IntersectionObserver: https://www.smashingmagazine…
[7] Fast Load Times: https://web.dev/fast/#lazy-lo…
[8] The importance attribute: https://developers.google.com…
[9] BlurHash: https://blurha.sh/
[10] LQIP: https://www.guypo.com/introdu…
[11] SQIP: https://github.com/axe312ger/…
[12] polyfill: https://github.com/jeremenich…
[13] Library file: https://github.com/ApoorvSaxe…
[14] The unseen performance costs of modern CSS-in-JS libraries in React apps: https://calendar.perfplanet.c…
[15] Understanding Paint Performance with Chrome DevTools: https://www.youtube.com/watch…
[16] How to Analyze Runtime Performance: https://medium.com/@Marielgra…
[17] CSS Grid: https://www.smashingmagazine…
[18] Part I: Objective time management: https://www.smashingmagazine…
[19] Part II: Perception management: https://www.smashingmagazine…
[20] Part III: Tolerance management: https://www.smashingmagazine…