1. Introduction
As web application developers and maintainers, we need to pay constant attention to the health of our sites: whether the main processes are running normally, whether performance meets expectations in each area, where there is room for improvement, and how to locate problems quickly and accurately. To answer these questions, we need comprehensive and objective performance testing.
Understanding performance testing
As part of the performance optimization process, performance testing is usually used to provide guidance, a reference baseline, and a basis for comparison for subsequent optimization work.
Performance testing is not a one-time effort, but an iterative cycle of testing, recording, and improving that moves the site's performance closer to the desired results.
Before we introduce the methods and tools of performance testing, we first need to dispel some common misconceptions and misunderstandings about performance:
- Don't use a single metric to measure your site's performance experience. A single impression comes from user perception, yields only a subjective judgment of good or bad, and rarely translates into actionable optimization advice. Instead, evaluate a web application across multiple dimensions and more concrete indicators, such as first-screen rendering time, load times and speeds for different resource types, and cache hit ratio.
- Don't expect a single test run to produce objective results about your site's performance. A site's real-world performance is highly variable because it is influenced by many factors, such as the state of the user's device and the current network connection speed. To obtain objective optimization guidance from performance testing, you cannot rely on one data point; collect as much data as possible across different environments and base the analysis on that.
- Don't test performance only in a simulated development environment. Simulated testing in development has real advantages: the device state and network speed under test are easy to configure, and results can be reproduced and debugged. But because the scenarios it covers are limited, it is easy to fall into survivorship bias: the problems you find may not be the actual performance bottlenecks.
Therefore, effective optimization guided by testing requires examining the site's performance from as many angles as possible, while keeping the test environments objective and diverse, so that the analysis comes closer to the real bottlenecks. This inevitably costs significant time and effort, so we also need to weigh the cost of optimization before undertaking it.
Common detection tools
- Lighthouse
- WebPageTest
- Browser DevTools
  - Browser task manager
  - Network panel
  - Coverage panel
  - Memory panel
  - Performance panel
  - Performance monitor panel
- Performance Monitoring API
- Continuous performance monitoring solutions
2. Use Lighthouse to test performance
Lighthouse is an open-source web performance testing tool developed by Google; its name literally means "lighthouse".
The name suggests the tool's purpose: by monitoring and testing the various aspects of a web application's performance, it gives developers guidance and suggestions for optimizing user experience and site performance.
Usage
Lighthouse can be used in several ways:
- In Chrome DevTools (the Lighthouse panel)
- As a Chrome extension
- From the Node CLI
- As a Node module, programmatically
Performance report
The Lighthouse performance report provides the following information: an overall score, performance metrics, optimization suggestions, diagnostics, and passed audits. Each is described below.
Test scores
After a run, Lighthouse gives each of the five audit categories a score from 0 to 100. A missing score or a score of 0 most likely means something went wrong during the run, such as an abnormal network connection. A score of 90 or above indicates that the web application follows best practices in that area, as shown in the figure below:
As for how Lighthouse arrives at this score: it starts from the raw performance data of each measured metric, maps each value onto a log-normal distribution built from its large database of real-world assessments to produce a per-metric score, and then combines those scores in a weighted calculation.
- Lighthouse Scoring calculator (googlechrome.github.io)
- Lighthouse performance scoring (web.dev)
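That log-normal mapping can be sketched as follows. The control points (`p10`, `median`) and the error-function approximation are assumptions for illustration; the real curves and constants live in Lighthouse's source:

```javascript
// Sketch of Lighthouse-style log-normal metric scoring.
function erf(x) {
  // Abramowitz–Stegun approximation 7.1.26, |error| < 1.5e-7
  const sign = x < 0 ? -1 : 1;
  x = Math.abs(x);
  const t = 1 / (1 + 0.3275911 * x);
  const poly = ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t
    - 0.284496736) * t + 0.254829592) * t;
  return sign * (1 - poly * Math.exp(-x * x));
}

// Score in [0, 1]: the modeled share of page loads slower than `value`.
// By construction, value = median scores 0.5 and value = p10 scores 0.9.
function logNormalScore(value, p10, median) {
  const shape = Math.log(median / p10) / 1.28155; // 1.28155 = z-score of the 90th percentile
  const z = Math.log(value / median) / shape;
  return 0.5 * (1 - erf(z / Math.SQRT2));
}

// Example: a hypothetical FCP curve with p10 = 1000 ms, median = 4000 ms
console.log(logNormalScore(2500, 1000, 4000).toFixed(2)); // between 0.50 and 0.90
```

The curve rewards values below the median steeply and flattens out at the extremes, which is why shaving milliseconds off an already-fast metric barely moves its score.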
Performance indicators
Six key performance metrics are reported:
These six metrics are described in detail in the article on user-experience-based performance metrics.
The six metric scores are weighted and combined to produce the final performance score; the greater a metric's weight, the greater its impact on the overall score.
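The weighting described above amounts to a weighted arithmetic mean. A minimal sketch, using the Lighthouse v8-era default weights (weights change between releases, so check your Lighthouse version):

```javascript
// Assumed weights (Lighthouse v8 era); per-metric scores are in [0, 1].
const WEIGHTS = {
  'first-contentful-paint': 0.10,
  'speed-index': 0.10,
  'largest-contentful-paint': 0.25,
  'interactive': 0.10,
  'total-blocking-time': 0.30,
  'cumulative-layout-shift': 0.15,
};

function performanceScore(metricScores) {
  // Weighted arithmetic mean, reported on a 0–100 scale.
  let total = 0;
  for (const [metric, weight] of Object.entries(WEIGHTS)) {
    total += weight * metricScores[metric];
  }
  return Math.round(total * 100);
}

console.log(performanceScore({
  'first-contentful-paint': 0.95,
  'speed-index': 0.90,
  'largest-contentful-paint': 0.80,
  'interactive': 0.85,
  'total-blocking-time': 0.70,
  'cumulative-layout-shift': 1.00,
})); // → 83
```

Note how the heavily weighted TBT and LCP scores drag the total down even though four of the six metrics score 0.85 or better.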
The weights themselves are continually tuned. Although Lighthouse assigns larger weights to certain metrics, meaning that optimizing those metrics yields a more noticeable score improvement, it also advises against focusing on a single metric during optimization; choose a strategy that improves overall performance.
Optimization Suggestions
To help developers optimize faster, Lighthouse supplements the key metric scores with practical optimization suggestions, shown in the inspection report as in the following figure:
These suggestions are ordered by the improvement expected from applying them. Expanding each one reveals more detailed guidance; from top to bottom they include the following:
- Eliminate render-blocking resources. Some JavaScript and stylesheet files can block the first paint of the page; the suggestion is to inline the critical parts and consider lazy-loading the rest. The report lists the resource files to optimize, each with its size and the improvement to first-screen rendering time expected from fixing it, which can be used to prioritize the work.
- Preconnect to required origins. Establishing a network connection to a resource's origin ahead of time, or resolving its domain name early, can noticeably improve page access performance. There are two options: `<link rel="preconnect">` to preconnect, and `<link rel="dns-prefetch">` to prefetch DNS.
- Reduce server response time. Servers respond slowly for many reasons, so there are many possible improvements: upgrading hardware for more memory or CPU, optimizing application logic so pages and resources are built faster, optimizing database queries, and so on. Don't assume this falls outside a front-end engineer's scope; forwarding layers such as Node servers often need optimization by front-end engineers.
- Size images appropriately. Serving images at an appropriate size saves network bandwidth and shortens load time. This suggestion usually covers images that should simply be smaller; large, high-resolution images that are genuinely required can instead be compressed appropriately.
- Remove unused CSS. This section lists stylesheet files that are referenced but unused and can be deleted to reduce bandwidth consumption. To trim unused code inside a resource file, analyze its usage with the Coverage panel in Chrome DevTools.
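As a small illustration of the preconnect suggestion above, the two hint types can be generated per third-party origin (a hypothetical helper, not a Lighthouse API):

```javascript
// preconnect does DNS + TCP + TLS up front, so reserve it for origins
// needed soon after load; dns-prefetch is the cheaper fallback.
function resourceHints(origins) {
  return origins.map(({ origin, critical }) =>
    critical
      ? `<link rel="preconnect" href="${origin}" crossorigin>`
      : `<link rel="dns-prefetch" href="${origin}">`
  );
}

console.log(resourceHints([
  { origin: 'https://fonts.gstatic.com', critical: true },
  { origin: 'https://cdn.example.com', critical: false },
]).join('\n'));
```

Browsers limit how many connections they will warm up eagerly, so a handful of preconnect hints for truly critical origins beats one for every third party.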
Diagnostics
In this section, Lighthouse examines detailed data along the main dimensions that affect page performance. They are introduced below:
- Serve static assets with an efficient cache policy. The file size and cache expiration time of every static resource are listed here. Developers can adjust cache policies accordingly, for example extending the cache lifetime of stable static resources to speed up repeat visits.
- Minimize main-thread work. The renderer's main thread handles a great deal: parsing HTML to build the DOM, parsing CSS stylesheets and applying the specified styles, parsing and executing JavaScript, and responding to interaction events. An overloaded main thread easily leads to delayed responses and a poor user experience. Lighthouse reports how long each category of task on the page took during the session, letting developers target the outliers.
- Reduce JavaScript execution time. Front-end logic relies almost entirely on JavaScript, so its execution efficiency strongly affects page performance. This check surfaces JavaScript files that take too long to run, so the time spent parsing, compiling, and executing them can be optimized.
- Avoid enormous network payloads. If a resource file is large, the browser may have to wait for it to load fully before subsequent rendering, so the larger a single file, the longer it can block the rendering process. Packet loss is also a risk during network transfer, and retransmitting a large file after a failure is costly. Optimize large resources wherever possible: a large code bundle can usually be split into several smaller chunks with a build tool, and images should be compressed as far as visual requirements allow unless they are strictly necessary at full size. In practice, the large files flagged by this check are mostly images.
- Avoid deep request chains. If the request chain is too deep, the total size of resources that must load grows, which significantly hurts page rendering performance. Pay attention to this dimension during performance testing and optimize promptly.
Passed audits
The items listed in this section are the performance audits the site has passed:
- Defer offscreen images. After the critical first-screen resources load, lazy-load images that are offscreen or hidden; this shortens the wait before users can interact and improves the access experience
- Minify CSS files to reduce network payload
- Minify JavaScript files to reduce network payload
- Encode images efficiently; well-encoded image files not only load faster but also transfer less data
- Serve images in next-generation formats. WebP, JPEG XR, JPEG 2000, and other newer image formats usually compress better than traditional PNG or JPEG, yielding faster downloads and less data transfer, but watch for compatibility handling of the newer formats
- Enable text compression. Compress text resources before serving them to minimize the total bytes transferred over the network, using at least one of the common methods: Gzip, Deflate, or Brotli
- Avoid multiple page redirects; excessive redirects add delay before the page loads
- Preload key requests. Use `<link rel="preload">` to fetch early the resources that will be requested later in the page load, taking advantage of idle time
- Use video formats for animated content. Provide animations as WebM or MPEG4 rather than large GIFs in web pages
- Avoid an excessive DOM size, which increases memory consumption, lengthens style calculations, and raises the cost of layout reflows. Lighthouse's general recommendation is fewer than 1,500 DOM elements per page and a tree depth of no more than 32 levels
- Ensure text remains visible during webfont load. Use the CSS font-display property to keep the page's text visible while fonts load
3. Use WebPageTest to test performance
WebPageTest is a highly professional web page performance analysis tool. The test environment can be customized in depth, including the physical location of the test node, the device type, the browser version, the network conditions, and the number of runs. It also offers performance comparisons between the target application and competing sites, plus tools for inspecting network routing and other dimensions.
Open the WebPageTest homepage, configure the target site's URL and the test parameters, start the test, and view the detailed report when it finishes.
Basic use
To get started, refer to the official documentation.
Deploy the WebPageTest tool locally
1. Install Docker
2. Pull the WebPageTest server and agent images:
docker pull webpagetest/server
docker pull webpagetest/agent
3. Run the server and agent containers:
docker run -d -p 4000:80 --rm webpagetest/server
docker run -d -p 4001:80 --network="host" -e "SERVER_URL=http://localhost:4000/work/" -e "LOCATION=Test" webpagetest/agent
4. Use Chrome DevTools to test performance
Browser task manager
Chrome's task manager shows the GPU, network, and memory usage of every process in the browser, including the currently open tabs, installed extensions, and the browser's own default processes such as the GPU, network, and rendering processes. By monitoring this data, we can spot a process whose overhead is conspicuously higher than the others and locate problems such as memory leaks or abnormal network resource loading.
Network analysis
The Network panel is one of the most frequently used tools in Chrome DevTools. It shows every resource request on a site, including load time, size, priority settings, and whether the HTTP cache was hit. This helps uncover resources that are too large due to ineffective compression, or repeat visits that load slowly because of a misconfigured cache policy.
Reference: developer.chrome.com/docs/devtoo…
View network request information
Panel Settings
Cache test
Network throughput test
Network Request Blocking
- To open: press Ctrl + Shift + P and search for "network request blocking"
- Enable network request blocking
- Add a blocking rule
Coverage panel
- To open: press Ctrl + Shift + P and search for "coverage"
The Memory panel
Front-end business logic is implemented mainly in JavaScript, so keeping memory costs low while code executes is particularly important for the user's performance experience. If memory leaks occur, the web application may stall or even crash.
To better monitor memory usage, Chrome DevTools provides the Memory panel, which can quickly take a snapshot of the current heap or record how memory changes over time. This lets us view and identify possible memory leaks. Below, the Memory panel is used to view a heap snapshot:
- To open: press Ctrl + Shift + P and search for "memory"
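A minimal example of the kind of leak a pair of heap snapshots exposes (a contrived sketch, not from any real codebase): a module-level cache that only ever grows.

```javascript
const cache = new Map();

function handleRequest(id, payload) {
  cache.set(id, payload); // entries are added but never evicted
  return cache.size;
}

// Simulated traffic: heap usage grows linearly with the number of requests.
for (let i = 0; i < 1000; i++) {
  handleRequest(i, { data: 'x'.repeat(100) });
}
console.log(cache.size); // → 1000
```

Comparing snapshots taken before and after the loop would show the retained Map entries. Typical fixes are evicting old entries (for example an LRU policy) or, when keys are objects, a WeakMap so entries can be garbage-collected with their keys.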
The Performance panel
The Performance panel detects and analyzes a web application's runtime performance. It reports not only FPS, CPU consumption, and the time spent on individual requests, but also the execution of tasks on a timeline with millisecond granularity.
- To open: press Ctrl + Shift + P and search for "performance"
- It is recommended to use this tool in Chrome's incognito mode, which is unaffected by the existing cache and installed extensions and therefore provides a relatively clean environment for performance testing
The three buttons in the figure are the ones most commonly used on the Performance panel. To profile a period of activity, click the start/stop button once to begin and once to end; when recording stops, the captured data appears in the area below the control panel.
The "start profiling and reload page" button profiles the page load: clicking it clears the existing records, starts recording, reloads the page, and stops automatically once the page has finished loading.
Open the test example: googlechrome.github.io/devtools-sa…
Panel information
The control panel
- Screenshots: whether to capture a screenshot of each frame. Selected by default; the screenshots over time are shown in the overview panel, and deselecting hides them
- Memory: whether to record memory consumption. Not selected by default; if selected, memory-consumption curves for various resource types are displayed between the thread panel and the statistics panel
- Web Vitals: whether to display performance metric information. Not selected by default; if selected, the status of the core metrics is shown between the Network and Frames sections
- Disable JavaScript samples: if checked, JavaScript call stacks are not sampled, reducing recording overhead; check it when simulating a mobile runtime environment
- Enable advanced paint instrumentation (slow): if selected, detailed records of paint events are captured. Because this costs performance, regenerating the report after recording is slower
- Network: switches the simulated network environment used during detection
- CPU: throttles CPU processing speed, to simulate performance on a slow CPU
The overview panel
On the timeline in the overview panel, you can select a starting point and drag with the left mouse button to focus on a narrower local range for performance analysis.
The observable information includes FPS, CPU overhead, and network request timing. For frames per second, keeping animations at 60 FPS gives a smoother visual experience.
For CPU overhead, besides watching the curve on the timeline that shows how processing tasks change over time, you can check the statistics panel for the share of time each kind of task takes within the selected range; tasks with a large share may have performance problems and deserve further analysis.
For network request timing, the information in the overview panel may not be clear enough. It is better to inspect requests in the Network section of the threads panel, where each request's duration and its start and end points on the timeline are clearer, making overly long requests easier to find and optimize.
Thread panel
The most important information here is the flame chart of the main thread's execution. The call stack and duration of every event during HTML and CSS parsing, page painting, and JavaScript execution are reflected in this chart. Each bar represents an event; hovering over one shows its name and execution time.
The flame chart's horizontal axis is execution time and its vertical axis is the call stack: events on top call the events below them, with fewer events further down, so the chart is inverted relative to a typical flame graph.
Events in the flame chart are color-coded. Common types include HTML parsing, JavaScript events (such as mouse clicks and scrolling), page layout changes, element style recalculation, and layer painting. Understanding how these events execute helps identify potential performance problems.
Statistical panel
The statistics panel displays visual charts of the execution time of different task types for the range selected in the overview panel. It contains four tabs.
The Summary tab shows a doughnut chart of time spent by event category:
The Bottom-Up tab shows events sorted by time consumed, in two dimensions: the event's own cost excluding sub-events (self time) and the full cost from start to finish including sub-events (total time):
The Call Tree tab shows the call stack of all events, or a chosen event, in the flame chart, as shown below:
The Event Log tab shows detailed log information about each event, as shown in the following figure:
Save test records
FPS counter
- Another very handy tool is the FPS meter, which shows a real-time estimate of frames per second while the page runs
- To open: press Ctrl + Shift + P and search for "FPS"
Performance monitor
Although the Performance panel provides comprehensive performance data, it has two drawbacks in practice: the panel's information is not intuitive enough, and the data is not shown in real time.
To address both shortcomings, Chrome DevTools has included the Performance monitor panel since version 64. It monitors the web application's performance in real time: CPU usage, JavaScript heap size, the number of DOM nodes held in memory, the number of JavaScript event listeners, and the time spent on page repaints and reflows.
If any metric rises steeply while interacting with the page, there is a risk that the performance experience is being affected.
In the Performance monitor screenshot, the obvious fluctuation is caused by a page refresh: both the JavaScript heap size and the DOM node count drop off a cliff, which is exactly the moment after the refresh has cleared the original DOM nodes and before the new nodes have been re-rendered.
References
- docs.microsoft.com/zh-cn/micro…
- developer.chrome.com/docs/devtoo…