Some time ago we went through a round of WebView performance optimization, aiming to make our pages feel "fast" and "stable". But how do we measure "fast" and "stable" quantitatively? Performance metrics let us understand, from the data's perspective, the current state of page performance and where the bottlenecks are, and they let us measure the effect of each optimization afterwards.
Performance metrics come up more or less often in daily development. A familiar example is the performance report that Lighthouse produces when it audits a page.
This article will answer the following questions:
- What performance indicators should be observed? What do they mean?
- With so many indicators, which ones should we focus on under what circumstances?
- How are indicators collected?
Common performance indicators
- First Paint
- First Contentful Paint
- Largest Contentful Paint
- First Meaningful Paint
- First Input Delay
- Cumulative Layout Shift
- Time to Interactive
- DOMContentLoaded
- Load
First Paint (FP)
The time of the first paint, which can be treated as the white-screen time, for example the moment the background color finishes rendering. It is usually the earliest point-in-time performance metric.
First Contentful Paint (FCP)
The time of the first content paint: a metric that measures how long it takes, from the moment the page starts loading, until any portion of the page content is rendered on screen. For this metric, "content" refers to text, images, `<svg>` elements, or non-white `<canvas>` elements.
FP vs FCP
FP: from the start of loading to the first paint of any kind.
FCP: from the start of loading to the first paint of page content.
FCP is an enhanced version of FP and matters more to users, because it is tied to the text and image content that users actually care about.
FP and FCP may coincide.
Largest Contentful Paint (LCP)
The time at which the largest piece of content on the page (usually the core content) finishes rendering. This largest content can be an image or a block of text. It is an SEO-related metric.
LCP vs FCP
FCP: an early indicator in the page loading process. A page that is still in a loading state may score well on this metric, yet it says nothing about when the actual content is presented to the user.
LCP: focuses on when the core content of the page is presented, which is closer to what the user actually cares about.
First Meaningful Paint (FMP)
The time at which meaningful content is first painted. The common approach in the industry is to take the moment right after the largest layout change during loading and rendering as the page's FMP. It was dropped in Lighthouse 6.0 because its calculation is relatively complex and not very accurate.
LCP vs FMP
FMP: a performance metric recommended early on, but its calculation is complex and not very accurate.
LCP: a newer metric with direct API support, so it is simple and accurate to collect; it still has open questions, for example whether the largest content is really the most important content.
First Input Delay (FID)
This metric is triggered by the user's first interaction with the page. It records the delay between that first interaction (for example the first click or keydown on the page) and the moment the browser is actually able to start running the event handlers that respond to it.
Why is there a delay? Typically, input delay occurs because the browser's main thread is busy with other work (such as parsing and executing large JS files) and cannot yet respond to the user.
Cumulative Layout Shift (CLS)
Over the lifecycle of a page, layout shifts keep happening, and each shift is given a score that is accumulated. The largest accumulated layout-shift score is the CLS. It is an important measure of a page's visual stability.
A poor CLS directly degrades the user experience: content shifts around while the user is reading or about to interact.
Core Web Vitals
In May 2020, Google proposed a set of core metrics for measuring the user experience of websites, covering loading speed, interactivity and visual stability. They have recently become an important factor affecting SEO and consist of the following three metrics:
- LCP
- FID
- CLS
Time to Interactive (TTI)
Before getting to TTI, let's first introduce the Long Task.
Long Task: if a task on the browser's main thread takes more than 50ms to execute, it is called a Long Task. User interactions are also handled on the main thread, so while a Long Task is running, the user's input may not be processed in time. When the page takes more than 100ms to respond, the user perceives it as lag, which hurts the user experience.
TTI is the time from when the page starts loading until it is fully interactive: the resources it depends on have loaded and the browser can respond to user input quickly.
DOMContentLoaded(DCL)
Fired when the DOM has been parsed and loaded, without waiting for page resources such as images to finish loading.
Load(L)
The time when a page and its dependent resources are fully loaded, including all resource files such as style sheets and images.
Common performance terms
User-centric Metrics
Traditional performance metrics:
For a long time, page performance was described by the Load/DOMContentLoaded events, i.e. the number of seconds it takes for a website to finish loading. Load/DOMContentLoaded are well-defined moments in the page lifecycle, but do they really reflect how the page feels to an actual user?
For example, suppose the server responds with a very small shell page that then fetches its content asynchronously and renders it. Load/DOMContentLoaded fire very quickly for such a page, but does it really perform well from the user's point of view?
To answer that question, we need the concept of user-centric metrics.
User-centric metrics:
User-centric performance metrics focus on how the page performs from the user's point of view: whether the content the page presents meets the user's needs, whether interactions feel smooth, and so on. FCP, LCP, FMP, FID, CLS and TTI are all user-centric performance metrics, and together they answer the following questions:
| Is it happening? | Did the navigation start successfully? Has the server responded? |
| --- | --- |
| Is it useful? | Has enough content been rendered for users to engage with? |
| Is it usable? | Is the page busy, or can users already interact with it? |
| Is it delightful? | Are interactions smooth and natural, free of lag and jank? |
Performance data comes in different forms depending on how it is collected
Depending on how it is measured, performance data can be divided into Lab Data and Field Data.
Lab Data / SYN
SYN stands for synthetic monitoring, also described as "in the lab".
Lab Data is performance data collected under controlled conditions, on specific devices and in a specific network environment. One typical scenario: while a new page is still in development, there is no way to measure metrics from real users before it ships to production; if you want to know how it performs, you collect and inspect the data in the lab.
Field Data / RUM
RUM stands for Real User Monitoring; this form of collection is also described as "in the field".
Although lab measurements do reflect performance, page loads for real users vary a lot across devices and networks, so lab data does not necessarily match what real users experience. Field data solves this: at the cost of some code intrusion, it records the performance data of real users. RUM data can expose performance anomalies that are hard to surface in lab data.
RAIL Model
Let’s first see what the letters RAIL correspond to.
R: response
A: animation
I: idle
L: load
RAIL is a user-centric performance model proposed by Google. It describes a website's performance along several dimensions and provides a set of reference performance targets:
- Response: responds to the event within 50ms
- Animation: produce a frame within 10ms (at 16ms per frame the user perceives the animation as smooth; the target is 10ms rather than 16ms because the browser needs roughly 6ms of each frame for its own rendering work)
- Idle: Maximizes the use of Idle time
- Load: deliver the content within 5 seconds and make the page interactive
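As a quick illustration of that 16ms frame budget (just a sketch, not part of RAIL itself), you can watch for frames that overrun it with requestAnimationFrame:

```js
// A small illustration (not part of RAIL): warn about animation frames whose
// frame-to-frame gap exceeds the ~16.7ms budget of a 60fps display.
let last = performance.now();
function onFrame(now) {
  const delta = now - last;
  if (delta > 16.7) {
    console.warn(`Frame budget exceeded: ${delta.toFixed(1)}ms`);
  }
  last = now;
  requestAnimationFrame(onFrame);
}
requestAnimationFrame(onFrame);
```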
How are performance indicators collected
APIs related to performance metrics and how they are collected
Performance Observer
The PerformanceObserver API works together with a set of performance-monitoring APIs:
- Paint Timing API
- Largest Contentful Paint API
- Event Timing API
- Navigation Timing API
- Layout Instability API
- Long Tasks API
- Resource Timing API
Below are the APIs used to collect the different metrics.
Paint Timing API
Used to collect FP/FCP
// 'first-paint' and 'first-contentful-paint' are both entries of type 'paint',
// distinguished by their name
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (entry.name === 'first-paint') {
      console.log('FP:', entry.startTime);
    }
    if (entry.name === 'first-contentful-paint') {
      console.log('FCP:', entry.startTime);
    }
  }
}).observe({ type: 'paint', buffered: true });
Largest Contentful Paint API
Used to collect LCP
While the page is loading, the browser keeps dispatching largest-contentful-paint entries as the largest rendered element changes. Once the user interacts with the page (click, keydown), or the page is hidden or unloaded, the last value observed is the one to report.
new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    console.log('LCP candidate:', entry.startTime);
  }
}).observe({
  type: 'largest-contentful-paint',
  buffered: true
});
For the full version of the code, see github.com/GoogleChrom…
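The reporting behaviour described above can be sketched roughly as follows (a minimal illustration, not the full web-vitals implementation): keep the latest candidate and finalize it on the first click/keydown or when the page is hidden.

```js
// A minimal sketch of LCP reporting (not the full web-vitals implementation).
let lcp = 0;
let reported = false;

const po = new PerformanceObserver((entryList) => {
  const entries = entryList.getEntries();
  // The last entry is the largest candidate seen so far
  lcp = entries[entries.length - 1].startTime;
});
po.observe({ type: 'largest-contentful-paint', buffered: true });

function reportFinalLCP() {
  if (reported) return;
  reported = true;
  po.disconnect();
  console.log('Final LCP:', lcp);
}

// First interaction or hiding the page ends LCP measurement
['keydown', 'click'].forEach((type) =>
  addEventListener(type, reportFinalLCP, { once: true, capture: true })
);
document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') reportFinalLCP();
});
```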
This metric is one of the Core Web Vitals, but browser support is limited; it is not supported on iOS, for example.
Event Timing API
For collecting FID
Listen for the user's first input event; FID = the time the browser starts processing the input minus the time the input occurred.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Time processing starts minus the time the input occurred
    const FID = entry.processingStart - entry.startTime;
    console.log('FID:', FID);
  }
}).observe({ type: 'first-input', buffered: true });
Navigation Timing 1.0 and Navigation Timing 2.0
Used to collect Load/DOMContentLoaded
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const Load = entry.loadEventStart - entry.fetchStart;
    console.log('Load:', Load);

    const DOMContentLoaded = entry.domContentLoadedEventStart - entry.fetchStart;
    console.log('DOMContentLoaded:', DOMContentLoaded);
  }
}).observe({ type: 'navigation', buffered: true });
Layout Instability API
Collecting CLS: split the loading process into sessions, listen for layout-shift entries to get the value of each shift, and accumulate a layout-shift score per session. The session with the largest score gives the CLS.
let sessionEntries = [];
let sessionValue = 0;
let metric = {
  value: 0
};

new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // Ignore shifts that happen right after user input
    if (!entry.hadRecentInput) {
      const firstSessionEntry = sessionEntries[0];
      const lastSessionEntry = sessionEntries[sessionEntries.length - 1];
      // If this shift is close in time to the previous session, add it to that session
      if (sessionValue &&
          entry.startTime - lastSessionEntry.startTime < 1000 &&
          entry.startTime - firstSessionEntry.startTime < 5000) {
        sessionValue += entry.value;
        sessionEntries.push(entry);
      } else { // Otherwise start a new session
        sessionValue = entry.value;
        sessionEntries = [entry];
      }
      // If the current session value exceeds the previous maximum, it becomes the new CLS
      if (sessionValue > metric.value) {
        metric.value = sessionValue;
        metric.entries = sessionEntries;
        console.log('CLS: ', metric);
      }
    }
  }
}).observe({
  type: 'layout-shift',
  buffered: true
});
The full code can be found at github.com/GoogleChrom…
Long Tasks API & Resource Timing API
TTI collection relies on these two APIs. The calculation process is as follows (a sketch of collecting the required inputs appears after the steps):
- Collect FCP as the starting point
- Search forward along the timeline for a quiet window lasting at least 5 seconds (quiet window: no Long Task and no more than two in-flight network GET requests)
- From the quiet window, search backward along the timeline for the last Long Task before it; if no Long Task is found, fall back to FCP
- TTI is the end time of that last Long Task, or equal to FCP if no Long Task was found
— Time to Interactive (TTI)
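Here is a minimal sketch (an illustration, not the full TTI algorithm) of collecting the two inputs the quiet-window search needs: long tasks and network resource requests.

```js
// A minimal sketch, not the full TTI algorithm: record long tasks and
// resource requests, the two inputs used to find the quiet window.
const longTasks = [];
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Any main-thread task longer than 50ms appears here
    longTasks.push({ start: entry.startTime, end: entry.startTime + entry.duration });
  }
}).observe({ type: 'longtask', buffered: true });

const requests = [];
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Used to count how many GET requests are in flight at a given moment
    requests.push({ url: entry.name, start: entry.fetchStart, end: entry.responseEnd });
  }
}).observe({ type: 'resource', buffered: true });
```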
Mutation Observer
Collecting FMP: DOM changes are watched with a MutationObserver. On every callback, the current DOM tree is scored according to the number, type and depth of the new and old nodes; the moment the score changes most sharply is taken as the FMP. Monitoring stops 200ms after the Load event fires, and the record with the biggest change is reported.
new MutationObserver(() => {
  // Compute a score for the current DOM tree here and record how sharply it changed
}).observe(document, {
  childList: true,
  subtree: true
});
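The scoring function itself is not standardized. A deliberately simplified, hypothetical heuristic (not the production logic described above) might weight each newly added element by its depth in the tree:

```js
// A deliberately simplified, hypothetical scoring heuristic: each added element
// contributes 1 + 0.5 * depth, so large additions deep in the tree move the score
// the most. Real implementations also weight element type and visibility.
function scoreMutations(mutations) {
  let score = 0;
  for (const mutation of mutations) {
    for (const node of mutation.addedNodes) {
      if (node.nodeType !== Node.ELEMENT_NODE) continue;
      let depth = 0;
      for (let el = node.parentElement; el; el = el.parentElement) depth += 1;
      score += 1 + 0.5 * depth;
    }
  }
  return score;
}
```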
Industry performance indicator collection tool
In the Lab
- Chrome DevTools
- Lighthouse
- WebPageTest
In the Field
- web-vitals
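For example, reporting the Core Web Vitals of real users with the web-vitals library could look roughly like this (assuming the v3 `onCLS`/`onFID`/`onLCP` naming; earlier versions export `getCLS`/`getFID`/`getLCP`, and `/analytics` is just a placeholder endpoint):

```js
// A rough sketch using the web-vitals library (v3 naming assumed).
import { onCLS, onFID, onLCP } from 'web-vitals';

function sendToAnalytics(metric) {
  // sendBeacon keeps working even while the page is being unloaded
  navigator.sendBeacon('/analytics', JSON.stringify(metric));
}

onCLS(sendToAnalytics);
onFID(sendToAnalytics);
onLCP(sendToAnalytics);
```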
References
Web Vitals
Largest Contentful Paint (LCP)
Cumulative Layout Shift (CLS)
First Input Delay (FID)
Measure performance with the RAIL model
❤️ Thank you
That is all for this article. I hope it helps you.
Don't forget to share, like and bookmark if you found it useful.
You are also welcome to follow the ELab team's WeChat official account for more quality articles.
We are from ByteDance's front-end department, responsible for the front-end development of all ByteDance education products.
We focus on accumulating and sharing professional knowledge and case studies around product quality, development efficiency, creativity and cutting-edge technology, to contribute experience and value to the industry, including but not limited to performance monitoring, component libraries, multi-platform technology, Serverless, visual page building, audio and video, artificial intelligence, and product design and marketing.
If you are interested, feel free to leave a comment or use the internal referral code to reach out to the author.
ByteDance campus/social recruitment internal referral code: EYJFH4S
Job post link: jobs.toutiao.com/s/dQ2dogm