Preface

Optimizing the quality of user experience has always been key to the success of web sites. Through its ongoing engagement and collaboration with millions of web developers and site owners, Google has developed many useful metrics and tools in the browser to help business owners, marketers, developers, and others identify opportunities to improve the user experience.

However, this abundance of metrics and tools creates its own challenges: it is hard to know which ones to prioritize, and their definitions are not always clear or consistent.

Web Vitals

In 2020, Google introduced Web Vitals, an initiative designed to provide unified guidance on the quality signals that Google believes are essential to delivering a great user experience on the web. Google's own definition is: essential metrics for a healthy site.

Core Web Vitals

There are many ways to measure the quality of a user experience, and while some are site-specific or context-specific, there are some common metrics that are essential to all Web experiences — Core Web Vitals. Such core user experience requirements include the loading experience of page content, interactivity, and visual stability.

Core Web Vitals are the subset of Web Vitals that apply to all web pages. Google believes that not everyone needs to be an expert in website performance, and that most people only need to focus on the most valuable core metrics, so it distilled Web Vitals down to three Core Web Vitals: Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS).

1. LCP

Largest Contentful Paint is an important, user-centric measure of the perceived load speed of a page, because it marks the point in the page load timeline when the main content of the page has likely loaded: a fast LCP helps reassure users that the page is useful.

Historically, it has been a challenge for Web developers to measure how quickly the main content of a Web page loads and is visible to users.

Older metrics like load or DOMContentLoaded are poor proxies because they don't necessarily correspond to what the user sees on screen. Newer, user-centric performance metrics like First Contentful Paint (FCP) capture only the very beginning of the loading experience: if the page displays a splash screen or a loading indicator, that moment is not very relevant to the user.

In the past, Google recommended performance metrics such as First Meaningful Paint (FMP) and Speed Index (SI), both measurable in Lighthouse, to help capture more of the loading experience after the initial paint. But these metrics are complex, difficult to explain, and often wrong, meaning they still cannot identify when the main content of a page has loaded.

Sometimes simpler is better. Based on discussions in the W3C Web Performance Working Group and research from Google, we found that a more accurate way to measure when the main content of a page loads is to look at when the largest element is rendered.

The LCP metric reports the rendering time of the largest visible image or block of text in the viewport, relative to the time when the page first starts loading.

As currently specified in the Largest Contentful Paint API, the element types considered for the largest contentful paint are:

  1. `<img>` elements
  2. `<image>` elements inside an `<svg>` element
  3. Elements with a background image loaded via the CSS `url()` function
  4. Block-level elements containing text nodes or other inline-level text children
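As an illustration only (this is not part of the LCP API), the candidate rules above can be sketched as a predicate over a plain descriptor object; the `tag`, `insideSvg`, `backgroundImageUrl`, `isBlockLevel`, and `hasTextContent` field names are hypothetical, invented for this sketch rather than taken from the DOM:

```javascript
// Sketch: would an element of this shape be considered for LCP?
// `el` is a plain descriptor object, not a real DOM node.
function isLcpCandidate(el) {
  if (el.tag === 'img') return true;                    // <img> elements
  if (el.tag === 'image' && el.insideSvg) return true;  // <image> inside <svg>
  if (el.backgroundImageUrl) return true;               // background image via url()
  // Block-level elements containing text nodes or inline-level text children
  if (el.isBlockLevel && el.hasTextContent) return true;
  return false;
}

console.log(isLcpCandidate({ tag: 'img' }));                                    // true
console.log(isLcpCandidate({ tag: 'div', isBlockLevel: true,
                             hasTextContent: true }));                          // true
console.log(isLcpCandidate({ tag: 'span' }));                                   // false
```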

Here are some examples of when the largest contentful paint occurs on some popular sites:

In the timelines above, the largest element changes as content loads: new content is added to the DOM, which changes which element is the largest.

While newly loaded content is often larger than the content already on the page, this is not necessarily the case; the examples above also show the largest contentful paint occurring before the page has fully loaded.


To measure LCP in JavaScript, use the Largest Contentful Paint API. The following example shows how to create a PerformanceObserver that listens for largest-contentful-paint entries and logs them to the console.

const p_ob = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log('LCP candidate:', entry.startTime, entry);
  }
});

p_ob.observe({type: 'largest-contentful-paint', buffered: true});

Tips:

In the example above, each logged largest-contentful-paint entry represents the current LCP candidate. Typically, the startTime of the last emitted entry is the LCP value, but this is not always the case: not all largest-contentful-paint entries are valid for measuring LCP. Also note that Safari does not support the largest-contentful-paint entry type in PerformanceObserver.
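The "take the last emitted candidate" convention can be sketched as a small pure helper; the mock entries below are fabricated for illustration, and a real page would also stop observing once the user interacts with the page or the page is hidden:

```javascript
// Given the buffered largest-contentful-paint entries in emission order,
// the page's LCP is conventionally the startTime of the last candidate.
function lcpFromCandidates(entries) {
  if (entries.length === 0) return undefined;
  return entries[entries.length - 1].startTime;
}

// Mock candidate entries, shaped like a PerformanceObserver might report them.
const candidates = [
  { startTime: 350 },   // headline text renders first
  { startTime: 1200 },  // hero image loads and becomes the largest element
];
console.log(lcpFromCandidates(candidates)); // 1200
```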

2. FID

First Input Delay is an important, user-centric measure of load responsiveness, because it quantifies the user's experience when trying to interact with an unresponsive page: a low FID helps ensure the page is usable.

We all know how important it is to make a good first impression, and of course, it’s just as important when building an experience on the Web.

A good first impression can make the difference between someone becoming a loyal user or leaving and never coming back. The question is, what makes a good impression, and how do you measure what kind of impression you’re likely to make? First impressions can take many different forms — for example, we have first impressions of a website’s design and visual appeal, as well as its speed and responsiveness.

While it’s hard to measure a user’s love of a Web design through a Web API, it’s possible to measure its speed and responsiveness.

A user's first impression of how fast a site loads can be measured with LCP. However, how quickly a site can paint pixels to the screen is only part of the story; just as important is how responsive the site is when the user tries to interact with those pixels.

The First input Delay (FID) metric helps measure a user’s first impression of a site’s interactivity and responsiveness.

FID measures the time from when a user first interacts with a page (that is, clicks a link, taps a button, or uses a custom, JavaScript-powered control) to when the browser is actually able to begin processing event handlers in response to that interaction.

FID measures only the "delay" in event processing. It measures neither the event processing time itself, nor how long it takes the browser to update the UI after running event handlers. Although that time does affect the user experience, including it in FID would incentivize developers to respond to events asynchronously, which would improve the metric while making the experience worse.

Here’s a look at a typical page load timeline:

The above visualization shows a Web page loading process that makes several network requests for resources (most likely CSS and JS files) and then processes them on the main thread once the resources are downloaded, which causes the main thread to be temporarily busy.

First input delay typically occurs between First Contentful Paint (FCP) and Time to Interactive (TTI), because the page has rendered some of its content but is not yet reliably interactive.

The following timeline may make this point clearer:

You may have noticed that there is a considerable amount of time between FCP and TTI (including several long tasks). If a user tries to interact with the page during this window (for example, by clicking a link), there is a delay between the browser receiving the click and the main thread being able to process a response.

Because the input event occurs while the browser is running a task, it must wait until the task is complete before responding to the input event. The time it must wait is the FID value on the page. Of course, if the user interacts with the page while the main thread is idle, the browser may respond immediately.
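That waiting time is simple arithmetic, sketched below over mock timestamps (`taskStart` and `taskEnd` are hypothetical values standing in for one long task on the main thread):

```javascript
// FID for a single input: how long the input must wait for the main thread.
// If the thread is idle when the input arrives, the delay is 0.
function firstInputDelay(inputTime, taskStart, taskEnd) {
  const threadBusy = inputTime >= taskStart && inputTime < taskEnd;
  return threadBusy ? taskEnd - inputTime : 0;
}

// A long task occupies the main thread from 3000 ms to 3250 ms.
console.log(firstInputDelay(3100, 3000, 3250)); // 150 - the input waits 150 ms
console.log(firstInputDelay(3400, 3000, 3250)); // 0 - the thread is already idle
```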


FID is a metric that can only be measured in the field, because it requires a real user to interact with the page. To measure FID in JavaScript, you can use the Event Timing API. The following example shows how to create a PerformanceObserver that listens for first-input entries and logs them to the console:

const p_ob = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    const delay = entry.processingStart - entry.startTime;
    console.log('FID candidate:', delay, entry);
  }
});

p_ob.observe({type: 'first-input', buffered: true});




For more details, such as why only the first input is considered and what counts as a true first input, refer to the FID documentation.

3. CLS

Cumulative Layout Shift (CLS) measures visual stability by quantifying unexpected layout shifts of visible page content.

Have you ever had this experience: you are reading an article online when something on the page suddenly changes, the text moves without warning, and you lose your place. Or worse: you are about to click a link or a button, but just before your finger lands, the link moves and you end up clicking something else.

Most of the time, these experiences are just annoying, but in some cases, they can cause real harm.

Unexpected movement of page content is usually due to resources being loaded asynchronously or DOM elements being dynamically added to the page above existing content. The culprits could be images or videos of unknown size, third-party ads or widgets that dynamically resize themselves.

What makes this problem worse is that websites often behave very differently in development than users experience them in production: personalized or third-party content often behaves differently than it does in development, test images are usually already cached in the developer's browser, and API calls that run locally are often so fast that the latency is not noticeable.

The Cumulative Layout Shift (CLS) metric helps you address this problem by measuring how often it happens to real users.

CLS measures the largest burst of layout shift scores from the unexpected layout shifts that occur over the entire lifespan of a page.

A layout shift is defined by the Layout Instability API, which reports layout-shift entries whenever an element that is visible within the viewport changes its start position (for example, its top or left position in the default writing mode) between two rendered frames. Such elements are considered unstable.

Note that layout shifts only occur when an existing element changes its start position. If a new element is added to the DOM, or an existing element changes size, it does not count as a layout shift, as long as the change does not cause other visible elements to change their start position.
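The "start position changed" rule can be illustrated with plain rectangles (a simplification: the Layout Instability spec actually compares an element's visual representation across two frames, but the start-position test is the essence):

```javascript
// An element is "unstable" only if its start position (top/left in the
// default writing mode) differs between the previous and current frame.
// A pure resize that keeps the start position fixed does not count.
function isUnstable(prevRect, currRect) {
  return prevRect.top !== currRect.top || prevRect.left !== currRect.left;
}

// Element grows taller but keeps its start position: not a layout shift.
console.log(isUnstable({ top: 0, left: 0 }, { top: 0, left: 0 }));   // false
// Element pushed down by content inserted above it: a layout shift.
console.log(isUnstable({ top: 0, left: 0 }, { top: 120, left: 0 })); // true
```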

How a layout shift score is calculated: layout shift score = impact fraction * distance fraction

As shown in the figure above, the impact region (red) covers 75% of the viewport, and the move distance (purple arrow) is 25% of the viewport height, so the layout shift score is 0.75 * 0.25 = 0.1875.
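The arithmetic from that example can be written out directly:

```javascript
// Layout shift score = impact fraction * distance fraction.
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// From the figure: the union of the element's old and new positions covers
// 75% of the viewport, and it moved by 25% of the viewport height.
console.log(layoutShiftScore(0.75, 0.25)); // 0.1875
```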


To measure CLS in JavaScript, you can use the Layout Instability API. The following example shows how to create a PerformanceObserver that listens for unexpected layout-shift entries, groups them into sessions, and records the maximum session value when it changes.

let clsValue = 0;
let clsEntries = [];

let sessionValue = 0;
let sessionEntries = [];

const p_ob = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // Only count layout shifts without recent user input.
    if (!entry.hadRecentInput) {
      const firstSessionEntry = sessionEntries[0];
      const lastSessionEntry = sessionEntries[sessionEntries.length - 1];

      // If the entry occurred less than 1 second after the previous entry and
      // less than 5 seconds after the first entry in the session, include the
      // entry in the current session. Otherwise, start a new session.
      if (sessionValue &&
          entry.startTime - lastSessionEntry.startTime < 1000 &&
          entry.startTime - firstSessionEntry.startTime < 5000) {
        sessionValue += entry.value;
        sessionEntries.push(entry);
      } else {
        sessionValue = entry.value;
        sessionEntries = [entry];
      }

      // If the current session value is larger than the current CLS value,
      // update CLS and the entries contributing to it.
      if (sessionValue > clsValue) {
        clsValue = sessionValue;
        clsEntries = sessionEntries;

        // Log the updated value (and its entries) to the console.
        console.log('CLS:', clsValue, clsEntries);
      }
    }
  }
});

p_ob.observe({type: 'layout-shift', buffered: true});






These metrics are benchmarked against important, user-centered outcomes, can be measured in the field, and have equivalent lab-based counterparts. For example, while LCP is the headline page-load metric, it also depends heavily on First Contentful Paint (FCP) and Time to First Byte (TTFB), both of which remain critical to monitor and improve.
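As a sketch, TTFB can be derived from the Navigation Timing entry; here the computation is factored into a pure function over a mock entry so the arithmetic is clear (`responseStart` and `startTime` are real Navigation Timing fields, but the entry object below is fabricated for illustration):

```javascript
// TTFB: time from the start of navigation until the first byte of the
// response arrives, i.e. the navigation entry's responseStart relative
// to its startTime.
function timeToFirstByte(navEntry) {
  return navEntry.responseStart - navEntry.startTime;
}

// Mock navigation entry; in a browser you would instead read:
//   performance.getEntriesByType('navigation')[0]
const navEntry = { startTime: 0, responseStart: 180 };
console.log(timeToFirstByte(navEntry)); // 180
```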

Conclusion

This article introduced the three Core Web Vitals performance metrics proposed by Google. Although these three are defined as the most essential, they are often not enough on their own for real-world web performance monitoring and analysis; additional performance metrics should be added according to your actual needs.

There is no denying that Google has been working hard to standardize and unify the web.

