
This article was published in the August 2017 issue of Programmer magazine.

Why do we care about site performance, and why does it matter so much? For any Internet product today, the first concern is traffic, which ultimately translates into business value. Traffic, conversion rate, and retention rate are therefore the factors that product managers and the business side watch most closely, and performance directly affects conversion and retention (its influence grows once a product matures; in the early stages, product features carry more weight). Performance, in other words, is money, and we focus on and monitor it not for the data's sake. In my view, product experience consists of three elements: product features, interaction and visual design, and front-end performance. The ultimate goal of performance optimization is to improve front-end performance and thereby improve the product experience.

Traditional performance optimization

Fortunately, there are many proven theories and approaches to front-end performance optimization, such as Yahoo!'s Best Practices for Speeding Up Your Web Site and Google's PageSpeed Insights Rules (https://developers.google.com/speed/docs/insights/rules). All of these optimization criteria can be applied to the different stages of the browser Processing Model.

Figure 1 The loading and processing of the browser

According to the Processing Model in Figure 1, the following performance indicators can be obtained statistically:

  • redirect: timing.fetchStart – timing.navigationStart

  • dns: timing.domainLookupEnd – timing.domainLookupStart

  • connect: timing.connectEnd – timing.connectStart

  • network: timing.connectEnd – timing.navigationStart

  • load: timing.loadEventEnd – timing.navigationStart

  • domReady: timing.domContentLoadedEventStart – timing.navigationStart

  • interactive: timing.domInteractive – timing.navigationStart

  • ttf: timing.fetchStart – timing.navigationStart

  • ttr: timing.requestStart – timing.navigationStart

  • ttdns: timing.domainLookupStart – timing.navigationStart

  • ttconnect: timing.connectStart – timing.navigationStart

  • ttfb: timing.responseStart – timing.navigationStart

  • firstPaint: timing.msFirstPaint – timing.navigationStart
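The metrics listed above can be computed directly from the Navigation Timing (Level 1) `timing` object. A minimal sketch, written as a pure function so it can be fed either `performance.timing` in the browser or a recorded snapshot (note that `msFirstPaint` is IE/Edge-specific, so it is guarded):

```javascript
// Derive the performance metrics listed above from a Navigation Timing
// (Level 1) `timing` object. All values are millisecond deltas.
function computeTimingMetrics(timing) {
  const t0 = timing.navigationStart;
  const m = {
    redirect:    timing.fetchStart - t0,
    dns:         timing.domainLookupEnd - timing.domainLookupStart,
    connect:     timing.connectEnd - timing.connectStart,
    network:     timing.connectEnd - t0,
    load:        timing.loadEventEnd - t0,
    domReady:    timing.domContentLoadedEventStart - t0,
    interactive: timing.domInteractive - t0,
    ttf:         timing.fetchStart - t0,
    ttr:         timing.requestStart - t0,
    ttdns:       timing.domainLookupStart - t0,
    ttconnect:   timing.connectStart - t0,
    ttfb:        timing.responseStart - t0,
  };
  // msFirstPaint only exists in IE/Edge; report it only when present.
  if (typeof timing.msFirstPaint === 'number') {
    m.firstPaint = timing.msFirstPaint - t0;
  }
  return m;
}

// In the browser: computeTimingMetrics(performance.timing)
```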

These metrics are common on the front end, and the core focus is largely the same: first-byte time (which measures network link and server response performance), first-paint time, interactive time, and load time. We have spent a long time quantifying site performance with these metrics without really asking whether they reflect what users actually perceive.

Obviously, most of these indicators are non-visual metrics. They are routine and key factors in experience and performance optimization, but they are not perceptual indicators and cannot fully measure perceptual performance.

Sensory performance optimization

So-called sensory performance is the performance the user perceives directly, and that perception is a highly subjective judgment. How, then, can sensory performance be measured and quantified? Traditionally, we evaluate user perception through research and analysis: eye trackers, user interviews, user feedback, questionnaires, and expert review. But sensory performance is not the same as overall user experience; is there a way to quantify and measure it on its own? After some research, I found that sensory performance can indeed be measured, analyzed, and targeted, because perceived performance is largely reflected in visual change. We can therefore measure sensory performance through a set of visual metrics:

  • First Paint Time

  • First Contentful Paint Time

  • First Meaningful Paint Time

  • First Interactive Time

  • Consistently Interactive Time

  • First Visual Change

  • Last Visual Change

  • Speed Index

  • Perceptual Speed Index

First Paint, also known as first non-blank paint, is the time when any element of the document is first rendered. First Contentful Paint is the time when a content element (text, image, canvas, or SVG) is first rendered; this is often not yet meaningful content, such as a header or a navigation bar. First Meaningful Paint is the time of the first meaningful render (in practice it tends to capture the first render users care about, following a significant layout change). First Interactive and Consistently Interactive represent the first interactive time and the stably interactive time, respectively.
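In browsers that expose paint timing, First Paint and First Contentful Paint can be collected with a `PerformanceObserver` on `paint` entries. A minimal sketch, with the extraction step factored into a pure helper (the entry shape `{ name, startTime }` matches the Paint Timing entries):

```javascript
// Extract first-paint / first-contentful-paint times (ms) from a list of
// PerformanceEntry-like objects of the form { name, startTime }.
function extractPaintTimes(entries) {
  const out = {};
  for (const e of entries) {
    if (e.name === 'first-paint') out.firstPaint = e.startTime;
    if (e.name === 'first-contentful-paint') out.firstContentfulPaint = e.startTime;
  }
  return out;
}

// In the browser, feed it real paint entries:
// new PerformanceObserver((list) => {
//   console.log(extractPaintTimes(list.getEntries()));
// }).observe({ type: 'paint', buffered: true });
```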

The flow chart in Figure 2 shows the analysis principle and reporting path of time-to-first-X-paint in the Blink kernel (other first-X-Paint indicators are similar).

Figure 2 Analysis principle and reporting path of time-to-first-X-paint in the Blink kernel

First Visual Change and Last Visual Change represent the time of the first and last visual change, respectively. Speed Index and Perceptual Speed Index both measure visual speed; the difference lies in the algorithms behind them. The former uses the Mean Pixel-Histogram Difference algorithm, while the latter uses the Structural Similarity (SSIM) Image Metric, and the Perceptual Speed Index comes closer to what users actually feel. The Speed Index is computed as follows; it represents the speed of visual change while the page loads, and the smaller the value, the better the sensory performance:

Formula 1: Speed Index = ∫₀^end (1 − VC(t)/100) dt, where VC(t) is the visual completeness of the page (in percent) at time t, and end is the time of the last visual change.
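The Speed Index integral can be approximated from sampled visual-completeness frames, as filmstrip tools do. A minimal sketch, where completeness is expressed as a fraction in [0, 1] rather than a percentage, and `frames` are time-sorted samples:

```javascript
// Approximate Speed Index from sampled visual completeness: the area above
// the completeness curve, summed as (1 - completeness) * dt per interval.
// `frames` is an array of { time (ms), completeness (0..1) }, sorted by time.
function speedIndex(frames) {
  let si = 0;
  for (let i = 1; i < frames.length; i++) {
    const dt = frames[i].time - frames[i - 1].time;
    si += (1 - frames[i - 1].completeness) * dt;
  }
  return si;
}
```

For example, a page that is 0% complete at 0 ms, 50% at 500 ms, and 100% at 1000 ms scores lower (better) than one that stays blank until 1000 ms.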

Through FCP (First Contentful Paint), FMP (First Meaningful Paint), and PSI (Perceptual Speed Index), we can perform cross-platform sensory performance analysis and comparison (for example, HTML5 versus Hybrid, or HTML5 versus native applications). Figure 3 shows the SI and PSI bar chart for a list page in our project.

Figure 3. SI and PSI bar diagram for a list page in a business project

Performance optimization analysis tool

When it comes to performance analysis tools, there are many options during development (such as Chrome's Dev Tools, the old YSlow, and Google's PageSpeed Insights), but the one I highly recommend is Lighthouse. Lighthouse is an open-source automation tool that can be run in two ways: as a Chrome extension or as a command-line tool. The Chrome extension provides a friendlier interface that makes reports easy to read, while the command-line tool lets you integrate Lighthouse into a continuous-integration system.
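For the command-line route, a typical invocation looks like the following (the target URL and output path are illustrative; the CLI requires Node.js and a local Chrome):

```shell
# Install the Lighthouse CLI and audit a page, writing a JSON report
# that a CI system can parse and threshold on.
npm install -g lighthouse
lighthouse https://example.com --output=json --output-path=./report.json
```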

With Lighthouse, we can analyze pages along multiple dimensions (PWA, performance, accessibility, and best practices) and get results and recommendations for each. The metrics mentioned above, including FMP (First Meaningful Paint), FI (First Interactive), CI (Consistently Interactive), and PSI (Perceptual Speed Index), can all be read from its performance report.

Due to space constraints, the details, principles, and usage of Lighthouse are not covered here. In the development stage, Chrome Dev Tools and Lighthouse together provide comprehensive performance analysis, offering more than enough guidance and suggestions for our optimization work.

Cross-platform benchmarking of sensory performance

Having analysis tools at the development stage is not enough. Once a product is live, we often want to benchmark its sensory performance against competing products, and this usually means crossing platforms: a competitor may be implemented in HTML5, in a hybrid stack such as Weex or React Native, or, quite often, natively.

In my opinion, cross-platform comparison of sensory performance is very important. The industry currently relies mostly on side-by-side video comparison, illustrating improvements by lining up the timelines of two recordings. Personally, I think this approach cannot be quantified or automated, so results compared by different people may not align, and it is inefficient.

We only need to go one step further and make the video-comparison process quantitative and automated. After fully investigating some existing community implementations, my colleagues and I packaged an easy-to-use tool (Twilight) for cross-platform benchmarking of sensory performance. All we need to do is record the video and mark the keyframes; the tool then automatically produces and visualizes metrics such as SI (Speed Index), PSI (Perceptual Speed Index), FVC (First Visual Change), FCP (First Contentful Paint), FMP (First Meaningful Paint), and LVC (Last Visual Change).
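The per-frame analysis step in such a video pipeline rests on visual completeness. A minimal sketch of the Mean Pixel-Histogram Difference approach mentioned earlier, where histograms are plain arrays of pixel-bucket counts extracted from each video frame (frame decoding itself is out of scope here):

```javascript
// Visual completeness of a frame relative to the final frame, using
// mean pixel-histogram difference: progress is how far the frame's
// histogram has moved from the start frame's toward the end frame's.
// All three arguments are equal-length arrays of histogram bucket counts.
function visualCompleteness(startHist, frameHist, endHist) {
  let startDiff = 0;
  let frameDiff = 0;
  for (let i = 0; i < endHist.length; i++) {
    startDiff += Math.abs(endHist[i] - startHist[i]);
    frameDiff += Math.abs(endHist[i] - frameHist[i]);
  }
  // If start and end are identical, the page never visibly changed.
  return startDiff === 0 ? 1 : 1 - frameDiff / startDiff;
}
```

Feeding each frame's completeness into the Speed Index integral yields SI; swapping the histogram difference for SSIM yields PSI.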

Figure 4. Process for obtaining sensory performance indicators using the Twilight analysis tool

Business application optimization practice

On the business side, we carried out a series of explorations and practices on the international air-ticket product. International tickets can be accessed both inside the client app and in a plain browser; Vue 2.0 is the base framework, and the pages are developed and deployed to a CDN as pure static resources. Initially, the international-ticket pages showed an obvious blank screen while loading (the first render took a long time), and the user experience was poor. Using the tools described above, we found that network requests, application startup, and API requests were the three major factors affecting the load performance of the list page.

To solve the above problems, we adopted the following key optimization strategies:

  • Pure front-end offlining (caching front-end resource files offline in the browser);

  • Client offlining (making resource files and pages available offline in the client container via offline packages);

  • Componentized pages loaded on demand (splitting pages at a fine granularity through componentization and loading them as needed);

  • Pre-rendering to improve sensory performance (pre-rendering ensures the page frame is rendered as quickly as possible before the page is activated).

These optimizations produced significant results. After pure front-end offlining, time-to-interactive and full load time improved by 50%. After client offlining, first-byte time dropped to essentially zero (a reduction of about 500 milliseconds), and time-to-interactive and full load time improved by a further 30% to 50% over the pure front-end version. First Meaningful Paint (FMP) improved by 80% after on-demand loading and pre-rendering went live.

Figure 5 shows the list page loading in Chrome with a simulated 4G network and CPU throttling (4× slowdown).

Figure 5 Comparison of loading before and after performance optimization of the international ticket project

Summary of performance analysis and optimization

In summary, front-end performance analysis and optimization, for both traditional and sensory performance, follow a fully traceable path: use Dev Tools and Lighthouse to analyze both non-visual and visual metrics, apply the traditional optimization checklists and Google's PRPL pattern, and use Twilight for convenient cross-platform benchmarking of sensory performance. Although our practice in business projects has achieved certain results, there is still a long way to go and plenty of room for improvement. Everyone interested in performance optimization is welcome to join us in building a better web.