• Original title: Front-End Performance Checklist 2019 (Part 2)
  • Vitaly Friedman
  • The Nuggets translation Project
  • Permanent link to this article: github.com/xitu/gold-m…
  • Translator: Tartan Bear
  • Proofreaders: Ivocin, Fengziyin1234

Let's make 2019 fast! You are reading the 2019 edition of the annual front-end performance checklist, a tradition that started in 2016.

  • 2019 Front-end Performance Optimization Annual Summary — Part 1
  • 2019 Front-end Performance Optimization Annual Summary — Part 2
  • 2019 Front-end Performance Optimization Annual Summary — Part 3
  • 2019 Front-end Performance Optimization Annual Summary — Part 4
  • 2019 Front-end Performance Optimization Annual Summary — Part 5
  • 2019 Front-end Performance Optimization Annual Summary — Part 6

Set realistic goals

7. 100 ms response time, 60 FPS

To make interactions feel smooth, the interface should respond within 100ms; anything longer and the user perceives the application as laggy. RAIL, a user-centric performance model, gives you healthy targets: to allow a response within 100 milliseconds, the page must yield control back to the main thread at least every 50 milliseconds. Estimated Input Latency tells us whether we are hitting that threshold; ideally it should stay below 50 milliseconds. For high-pressure points like animation, it's best to do nothing else where you can, and the absolute minimum where you can't.

RAIL, a user-centric performance model.
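
One practical way to check whether the main thread really does yield every 50 milliseconds is to watch for long tasks. A minimal sketch, assuming a Chromium-based browser that supports the Long Tasks API:

```js
// Minimal sketch: surface tasks that block the main thread for more than 50ms.
// Every 'longtask' entry reported by the Long Tasks API already means the
// 50ms budget from the RAIL model above has been exceeded.
if ('PerformanceObserver' in window) {
  const observer = new PerformanceObserver((list) => {
    for (const task of list.getEntries()) {
      console.warn(`Long task: ${Math.round(task.duration)}ms`, task.attribution);
    }
  });
  observer.observe({ entryTypes: ['longtask'] });
}
```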

In addition, each frame of animation should be completed in under 16ms to achieve 60 frames per second (1s ÷ 60 = 16.6ms), preferably under 10ms. Because the browser needs time to paint the new frame to the screen, your code should finish executing before hitting the 16.6ms mark. There is already talk of 120 FPS (for example, the iPad's new screens run at 120Hz), and Surma has covered some 120 FPS rendering performance solutions, but that's probably not something most of us need to target just yet.
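
As a rough way to see whether you stay under that frame budget in practice, here is a minimal sketch that measures the gap between successive requestAnimationFrame callbacks; the 16.6ms threshold is simply the 60 FPS budget from above:

```js
// Minimal sketch: warn when the gap between animation frames exceeds the 60 FPS budget.
const FRAME_BUDGET_MS = 16.6; // use ~8.3ms if you are targeting 120 FPS
let lastFrameTime = performance.now();

function checkFrameBudget(now) {
  const delta = now - lastFrameTime; // time since the previous frame callback
  if (delta > FRAME_BUDGET_MS) {
    console.warn(`Frame budget exceeded: ${delta.toFixed(1)}ms between frames`);
  }
  lastFrameTime = now;
  requestAnimationFrame(checkFrameBudget);
}

requestAnimationFrame(checkFrameBudget);
```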

Be pessimistic about performance expectations, but be optimistic about interface design and use idle time wisely. Obviously, these goals apply to runtime performance, not load performance.

8. Speed Index < 1250, Time to Interactive < 5s on 3G, critical file size < 170KB (gzipped)

Although it may be very difficult to achieve, a good goal is a first paint under 1 second and a Speed Index below 1250. Since the baseline is a $200 Android phone (such as a Moto G4) on a slow 3G network with 400ms RTT and 400kbps transfer speed, aim for a Time to Interactive under 5 seconds, and under 2 seconds for repeat visits (achievable only with a service worker).

Note that when talking about interactivity metrics, it is best to distinguish between First CPU Idle and Time to Interactive to avoid misunderstanding. The former is the earliest point after the main content has rendered at which the page can respond to input; the latter is the point after which the page can consistently respond to input. (Thanks to Philip Walton!)

We have two major constraints that effectively shape a reasonable target for speedy delivery of content on the web. On the one hand, we have network delivery constraints due to TCP slow start: the first 14KB of HTML is the most critical payload chunk, and the only part of the budget that can be delivered in the first round trip (which is all you get in one second at 400ms RTT, given mobile radio wake-up times).

On the other hand, we have hardware constraints on memory and CPU due to JavaScript parsing times (we'll discuss them in more detail later). To achieve the goals stated in the first paragraph, we have to consider a critical file size budget for JavaScript. There is a lot of disagreement about what that budget should be (it depends heavily on the nature of your project), but a budget of 170KB of JavaScript (gzipped) already takes up to 1 second to parse and compile on an average phone. Assuming 170KB expands to three to four times that size when decompressed (roughly 0.7MB), that could already be the death knell of a "decent" user experience on a Moto G4 or a Nexus 2.

Sure, your data may show that your customers aren't using these devices, but perhaps they simply don't show up in your analytics because your service is too slow for them to use at all. In fact, Google's Alex Russell suggests 130 to 170KB gzipped as a reasonable upper limit, and exceeding this budget should be a carefully considered decision. In the real world, most products aren't even close: the average bundle size today is about 400KB, up 35% from the end of 2015. On a mid-range mobile device, that alone accounts for 30 to 35 seconds of Time to Interactive.

We can also go beyond the bundle size budget, though. For example, we can set performance budgets based on the activity of the browser's main thread, such as paint time before start of render, or track down front-end CPU hogs. Tools such as Calibre, SpeedCurve, and Bundlesize help you keep your budgets under control and can be integrated into your build process.
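
If you use Webpack, one low-effort way to enforce such a budget is its built-in performance option, which can fail the build once an emitted asset crosses a threshold. A minimal sketch follows; note that Webpack measures the uncompressed output, so the byte values below are illustrative and would need to be mapped to your own gzipped budget:

```js
// webpack.config.js — a minimal sketch of a build-time size budget.
// Webpack checks the uncompressed emitted assets, so these illustrative numbers
// need to be adjusted to whatever corresponds to your gzipped budget.
module.exports = {
  mode: 'production',
  performance: {
    hints: 'error',            // fail the build instead of merely warning
    maxEntrypointSize: 170000, // budget (in bytes) for all assets of one entry point
    maxAssetSize: 170000       // budget (in bytes) for any single emitted asset
  }
};
```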

In addition, a performance budget probably shouldn't be constant. Since it depends on the network connection, the budget should adapt to different network conditions, but a payload on a slow connection is much more "expensive", regardless of how it is used.

From Fast By Default: Modern Loading Best Practices By Addy Osmani

The performance budget should be adjusted for the network conditions of ordinary mobile devices. (Credit: Katie Hempenius)

Defining the environment

9. Select and set up your build tool

Don't pay too much attention to what's supposedly cool these days. Stick to your build environment, whether it's Grunt, Gulp, Webpack, Parcel, or a combination of tools. As long as you are getting the results you need and have no problems maintaining your build process, you're doing just fine.

Among the build tools, Webpack appears to be the most established, with hundreds of plug-ins available to optimize the size of your builds. Getting started with Webpack can be tough, though. So if you want to get started, here are some great resources:

  • The Webpack documentation is obviously a good place to start, and so are Webpack — The Confusing Bits by Raja Rao and An Annotated Webpack Config by Andrew Welch.
  • Sean Larkin has a free course, Webpack: The Core Concepts, and Jeffrey Way has a free course, Webpack for Everyone. Both are good resources for digging into Webpack.
  • Webpack Fundamentals is a very comprehensive 4-hour course by Sean Larkin, released for free by FrontendMasters.
  • If you're a little more advanced, Rowan Oulton has published a Field Guide for Better Build Performance with Webpack, and Benedikt Rotsch has written a remarkable study on putting your Webpack bundle on a diet.
  • Webpack Examples contains hundreds of ready-to-use Webpack configurations, categorized by topic and purpose. A Webpack configuration generator is also provided to generate basic configuration files.
  • Awesome Webpack is a curated list of useful Webpack resources, libraries, and tools, including articles, videos, courses, books, and examples for Angular, React, and framework-agnostic projects.
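
To give a flavour of what these resources build toward, here is a hypothetical, minimal production configuration; the entry path and output directory are placeholders, and a real project would add loaders for its own assets:

```js
// webpack.config.js — a hypothetical minimal production setup (paths are placeholders).
const path = require('path');

module.exports = {
  mode: 'production',                     // enables minification and other built-in optimizations
  entry: './src/index.js',
  output: {
    filename: '[name].[contenthash].js',  // content hashes enable long-term caching
    path: path.resolve(__dirname, 'dist')
  },
  optimization: {
    splitChunks: { chunks: 'all' },       // split shared and vendor code into separate chunks
    runtimeChunk: 'single'                // keep the webpack runtime out of the app chunks
  }
};
```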

10. Use progressive enhancement by default

Keeping progressive enhancement as the guiding principle of your front-end architecture and deployment is a safe choice. Design and build the core experience first, then enhance it with advanced features for capable browsers, creating resilient experiences. If your website runs fast on a slow machine with a poor screen and a poor browser on a sub-optimal network, it will only run faster on a fast machine with a good browser on a decent network.

11. Choose a strong performance baseline

There are many unknowns that affect loading: the network, thermal throttling, third-party scripts, cache eviction, parser-blocking patterns, disk I/O, IPC latency, installed extensions, antivirus software and firewalls, background CPU tasks, hardware and memory constraints, differences in L2/L3 caching, RTTs, and so on. JavaScript has the heaviest cost, followed by web fonts, which block rendering by default, and images, which often consume too much memory. With performance bottlenecks moving from the server to the client, we as developers have to consider all of these unknowns in much more detail.

With a 170KB budget that already contains the critical-path HTML/CSS/JavaScript, router, state management, utilities, framework, and application logic, we have to thoroughly examine the network transfer cost, parse/compile time, and runtime cost of the framework of our choice.

As Seb Markbåge points out, a good way to measure the start-up cost of a framework is to render a view first, then delete it, and then render it again, because this tells you how the framework scales. The first render tends to warm up a bunch of lazily compiled code, which a larger tree can benefit from when it scales. The second render basically emulates how code reuse on a page affects performance characteristics as the page grows in complexity.

From Fast By Default: Modern Loading Best Practices By Addy Osmani (Slide 18, 19).
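
A minimal sketch of that render / remove / re-render measurement described above, using the User Timing API; renderApp and unmountApp are hypothetical stand-ins for whatever mount and unmount calls your framework exposes:

```js
// Hypothetical sketch: measure a framework's start-up cost by rendering a view,
// removing it, and rendering it again. `renderApp` and `unmountApp` are placeholders
// for your framework's mount/unmount calls.
function measureStartupCost(renderApp, unmountApp, container) {
  performance.mark('first-render-start');
  renderApp(container);                  // warms up lazily compiled code
  performance.mark('first-render-end');

  unmountApp(container);

  performance.mark('second-render-start');
  renderApp(container);                  // emulates code reuse as the page grows
  performance.mark('second-render-end');

  performance.measure('first-render', 'first-render-start', 'first-render-end');
  performance.measure('second-render', 'second-render-start', 'second-render-end');

  for (const m of performance.getEntriesByType('measure')) {
    console.log(`${m.name}: ${m.duration.toFixed(1)}ms`);
  }
}
```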

12. Evaluate each framework and its dependencies

Now, not every project needs a framework, and not every page of a single-page application needs to load a framework. In Netflix's case, "removing React reduced the total amount of JavaScript by more than 200KB, causing a more than 50% reduction in time-to-interactivity for Netflix's logged-out homepage." The team then used the time users spend on that landing page to prefetch React for the subsequent pages that users are likely to visit (read on for details).
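
A rough, hypothetical sketch of that prefetching idea (not Netflix's actual code; the chunk URL is made up): wait for idle time on the landing page, then warm the HTTP cache with a low-priority prefetch hint:

```js
// Hypothetical sketch: prefetch a framework bundle during idle time on a landing page.
// The chunk URL is made up; in practice it would come from your build manifest.
function prefetchScript(url) {
  const link = document.createElement('link');
  link.rel = 'prefetch';  // low-priority fetch into the HTTP cache
  link.as = 'script';
  link.href = url;
  document.head.appendChild(link);
}

// Use idle time so the prefetch doesn't compete with the critical path.
const warmUp = () => prefetchScript('/static/js/react-vendor.chunk.js');
if ('requestIdleCallback' in window) {
  requestIdleCallback(warmUp);
} else {
  setTimeout(warmUp, 2000);
}
```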

This sounds obvious, but it’s worth mentioning: some projects could also benefit from removing an existing framework entirely. Once you choose a framework, you will be using it for at least several years, so if you need to use it, make sure your choice is fully considered.

Inian Parameshwaran measured the performance footprint of the top 50 frameworks (against First Contentful Paint — the time from navigation to the moment the browser renders the first bit of content from the DOM). Inian found that, out in the wild, Vue and Preact are the fastest across the board, on both desktop and mobile, followed by React (slides). You can examine your framework candidates and the proposed architecture, and study how most solutions out there perform on average, for example with server-side rendering or client-side rendering.

Baseline performance cost matters. According to a study by Ankur Sethi, "your React application will never load faster than about 1.1 seconds on an average phone in India, no matter how much you optimize it. Your Angular app will always take at least 2.7 seconds to boot up. Users of your Vue app will need to wait at least 1 second before they can start using it." You might not be targeting India as your primary market anyway, but users with sub-optimal network conditions anywhere will have a comparable experience when visiting your site. In exchange, your team gains maintainability and developer efficiency, of course. But this trade-off needs to be a deliberate decision.

You can evaluate frameworks (or any JavaScript library) on Sacha Greif's 12-point scale scoring system by exploring features, accessibility, stability, performance, package ecosystem, community, learning curve, documentation, tooling, track record, and team. But on a tight schedule, it's a good idea to at least consider the total cost of size plus initial parse time before choosing an option; lightweight options such as Preact, Inferno, Vue, Svelte, or Polymer can get the job done just fine. The size of your baseline will define the constraints for your application's code.

A good starting point is to choose a good default stack for your application. Gatsby.js (React), Preact CLI, and the PWA Starter Kit provide reasonable defaults for fast loading out of the box on average mobile hardware.

(Image credit: Addy Osmani)

13. Consider using the PRPL pattern and the application shell architecture

Different frameworks will have different effects on performance and will require different optimization strategies, so you have to clearly understand all the details of the framework you'll be relying on. When building a web app, look into the PRPL pattern and the application shell architecture. The idea is quite straightforward: push the minimal code needed to get interactive for the initial route to render quickly, then use the service worker for caching and pre-caching resources, and then lazy-load the routes you need asynchronously.

PRPL stands for Pushing critical resources, Rendering the initial route, Pre-caching remaining routes, and Lazy-loading remaining routes on demand.

The application shell is the minimal HTML, CSS, and JavaScript required to drive the user interface.
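
A minimal sketch of the "lazy-load remaining routes on demand" part, using dynamic import(); the route paths and view modules are hypothetical:

```js
// Hypothetical sketch: only the initial route's code ships up front; other routes are
// fetched the first time the user navigates to them via dynamic import().
const routes = {
  '/':         () => import('./views/home.js'),     // initial route
  '/profile':  () => import('./views/profile.js'),  // fetched on demand
  '/settings': () => import('./views/settings.js')
};

async function navigate(path) {
  const loadView = routes[path] || routes['/'];
  const view = await loadView();                    // network request only on first visit
  view.render(document.getElementById('app'));      // each view module exports a render()
}

window.addEventListener('popstate', () => navigate(location.pathname));
navigate(location.pathname);
```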

14. Did you optimize the performance of each API?

APIs are the communication channels through which an application exposes data to internal and third-party applications via so-called endpoints. When designing and building an API, we need a reasonable protocol to enable communication between the server and third-party requests. Representational State Transfer (REST) is a well-established, logical choice: it defines a set of constraints that developers follow to make content accessible in a performant, reliable, and scalable way. Web services that conform to the REST constraints are called RESTful web services.

When data is retrieved from an API over HTTP, any delay in the server response propagates to the end user, delaying rendering. When a resource needs some data from the API, it has to request that data from the corresponding endpoint. A component that renders data from several resources (for example, an article with comments and author photos in each comment) may need several round trips to the server to fetch all the data before it can render. In addition, the amount of data returned through REST is often more than what is needed to render that component.

If many resources require data from an API, the API can become a performance bottleneck. GraphQL provides a performant solution to these problems. In itself, GraphQL is a query language for APIs and a server-side runtime for executing queries using a type system you define for your data. Unlike REST, GraphQL can retrieve all the data in a single request, and the response will be exactly what is required, without over- or under-fetching data as REST typically does.
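
For example, the article-with-comments component from above could fetch everything it needs in a single round trip. This is only a sketch: the /graphql endpoint, field names, and schema below are purely illustrative:

```js
// Hypothetical sketch: one GraphQL request replaces several REST round trips.
// The /graphql endpoint and the schema fields are illustrative, not a real API.
const query = `
  query ArticleWithComments($id: ID!) {
    article(id: $id) {
      title
      body
      comments {
        text
        author { name avatarUrl }
      }
    }
  }
`;

fetch('/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query, variables: { id: '42' } })
})
  .then(response => response.json())
  .then(({ data }) => console.log(data.article)); // exactly the fields requested, nothing more
```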

In addition, because GraphQL uses a schema (metadata that describes how the data is structured), it can already organize data into the preferred structure; so, for example, with GraphQL we can remove the JavaScript code used for state management, producing cleaner application code that runs faster on the client.

If you want to get started with GraphQL, Eric Baer has published two fantastic articles on Smashing Magazine: A GraphQL Primer: Why We Need A New Kind Of API and A GraphQL Primer: The Evolution Of API Design.

The difference between REST and GraphQL, illustrated as Redux + REST on the left and Apollo + GraphQL on the right. (Image credit: Hacker Noon)

15. Can you use AMP or Instant Articles?

Depending on your organization’s priorities and strategies, you may want to consider using Google AMP or Facebook Instant Articles or Apple News. You get good performance without them, but AMP does provide a solid performance framework and a free content delivery network (CDN), while Instant Articles will improve visibility and performance on Facebook.

For users, the most obvious benefit of these technologies is guaranteed performance, so at times users may even prefer AMP/Apple News/Instant Pages links over "regular" and potentially bloated pages. For content-heavy websites that deal with a lot of third-party content, these options can help speed up render times dramatically.

Unless they don't. According to Tim Kadlec's research, for example, "AMP documents tend to be faster than their counterparts, but that doesn't necessarily mean a page is performant. AMP is not what makes the biggest difference from a performance perspective."

The benefits for website owners are obvious: these formats are more discoverable on their respective platforms and more visible to search engines. You can also build Progressive Web AMPs by reusing AMP as a data source for your PWA. As for the downsides? Obviously, because of the differing requirements and constraints of each platform, developers have to create and maintain a separate version of their content for each platform, and Instant Articles and Apple News don't even have actual URLs. (Thanks, Addy and Jeremy!)

16. Choose your CDN wisely

Depending on how much dynamic data you have, you may be able to "outsource" some portion of your content to a static site generator, push it to a CDN, and serve a static version from there, thus avoiding database requests. You could even choose a static-hosting platform based on a CDN, enriching your pages with interactive components as enhancements (JAMStack). In fact, some of these generators (like Gatsby on top of React) are actually website compilers with many automated optimizations provided out of the box. As compilers add optimizations over time, the compiled output gets smaller and faster over time.

Note that a CDN can serve (and offload) dynamic content as well, so there is no need to restrict your CDN to static assets only. Double-check whether your CDN performs compression and conversion (for example, image optimization at the edge in terms of formats, compression, and resizing), supports service workers at the edge, and supports edge-side includes, which assemble the static and dynamic parts of a page at the CDN's edge (i.e. the server closest to the user), among other tasks.

Note: Based on research by Patrick Meenan and Andy Davies, HTTP/2 prioritization is effectively broken on many CDNs, so we should not be too optimistic about the performance boost there.

  • 2019 Front-end Performance Optimization Annual Summary — Part 1
  • 2019 Front-end Performance Optimization Annual Summary — Part 2
  • 2019 Front-end Performance Optimization Annual Summary — Part 3
  • 2019 Front-end Performance Optimization Annual Summary — Part 4
  • 2019 Front-end Performance Optimization Annual Summary — Part 5
  • 2019 Front-end Performance Optimization Annual Summary — Part 6

If you find any mistakes in this translation or other areas that could be improved, you are welcome to revise it and open a PR with the Nuggets Translation Project, for which you can earn reward points. The permanent link at the beginning of this article is the Markdown link to this article on GitHub.


The Nuggets Translation Project is a community that translates high-quality technical articles from across the internet and shares them on the Nuggets (Juejin) platform. The content covers Android, iOS, front-end, back-end, blockchain, product, design, artificial intelligence, and other fields. If you want to see more high-quality translations, please keep following the Nuggets Translation Project, its official Weibo account, and its Zhihu column.