• The State of the Web
  • Karolina Szczur
  • The Nuggets Translation Project
  • Permanent link to this article: github.com/xitu/gold-m…
  • Translator: undead25
  • Proofread by: Sun, IridescentMia

The State of the Web: a guide to performance improvements

The Internet is growing exponentially, and so is the Web platform we build. But we often fail to consider the network conditions and device constraints of our users. Even a glance at the current state of the World Wide Web shows that we haven’t been building with empathy for and awareness of a changing landscape, let alone with performance in mind.

So what is the state of the web today?

Of the 7.4 billion people on Earth, only 46% have Internet access, at an average speed of 7Mb/s. What’s more, 93% of Internet users access the Web from mobile devices, so failing to cater to handhelds is inexcusable. Data is often more expensive than we think: buying 500MB of data costs an hour of work in Germany and 13 hours in Brazil (see Ben Schwarz’s Beyond the Bubble: Real World Performance for more interesting statistics).

Our websites aren’t doing well either: the average site is about the size of the original Doom game (around 3MB). (Note that for statistical accuracy the median should be used; I recommend reading Ilya Grigorik’s “Average Page Is a Myth.” The median site size is currently 1.4MB.) Images easily take up 1.7MB of that, and JavaScript averages 400KB. This problem isn’t limited to the Web; native applications suffer from it too. Have you ever downloaded a 200MB app just to get a few bug fixes?

Technologists often find themselves in a privileged position. With new, high-end laptops, phones, and fast Internet connections, it’s easy to forget that not everyone enjoys such conditions (in reality, very few of us do).

If we build the Web platform from our perspective and not the user’s, it will lead to a poor user experience.

How can we do better by considering performance in our design and development?

Resource optimization

The most obvious yet underutilized way to improve performance is to start by understanding how the browser analyzes and processes resources. As it turns out, browsers do a pretty good job of discovering, parsing, and prioritizing resources on their own. This is where critical requests come in.

A request is critical if it contains a resource that is required to render content within the user’s viewport.

For most sites, critical requests might include the HTML, the necessary CSS, a logo, web fonts, or images. As it turns out, in most cases requesting a resource also triggers many other, unrelated requests (JavaScript, tracking code, ads, and so on). We can avoid this by carefully selecting and prioritizing the important resources.

With <link rel="preload">, we can manually force a resource’s priority and ensure that the desired content renders on time. This technique can significantly improve the time-to-interactive metric, enabling the best possible user experience.
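As a minimal sketch (the file paths are placeholders, not from the article), preloading critical resources might look like this:

```html
<head>
  <!-- Fetch a critical web font early and at high priority;
       crossorigin is required for font preloads. -->
  <link rel="preload" href="/fonts/heading.woff2" as="font" type="font/woff2" crossorigin>
  <!-- Preload the hero image needed for the first viewport. -->
  <link rel="preload" href="/img/hero.webp" as="image">
</head>
```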

Critical requests still feel like a black box to many, given the lack of information on the topic. Fortunately, Ben Schwarz has published a very comprehensive and approachable article on the subject, The Critical Request. You can also check out Addy Osmani’s article on preload, prefetch, and priorities in Chrome.

Enabling request priorities in Chrome Developer Tools

🛠 To track the effect of request prioritization, you can use Lighthouse’s performance audit and its Critical Request Chains metric, or check request priorities under the Network tab in Chrome Developer Tools.

📝 General performance list

  1. Enable caching
  2. Enable compression
  3. Prioritize critical resources
  4. Use a CDN

Image optimization

Images usually make up most of the data transferred on a page, so optimizing them can yield significant performance gains. There are many strategies and tools for removing excess bytes, but the first question to ask is: “Is this image essential to convey the intended message and effect?” If it can be removed, you save both bandwidth and requests.

In some cases, we can achieve the same effect with different techniques. CSS has many artistic capabilities, such as shadows, gradients, animations, and shapes, that let us replace images with appropriately styled DOM elements.

Choose the correct format

If images must be used, it is important to determine which format is appropriate. The general choice is between vector and raster graphics:

  • Vector graphics: resolution independent, usually small files. Especially useful for logos, icons, and images made up of simple shapes (points, lines, circles, and polygons).
  • Raster graphics: richer detail. Best suited for photographs.

After making that decision, there are several formats to choose from: JPEG, GIF, PNG-8, PNG-24, or newer formats such as WebP and JPEG-XR. With so many choices, how do we make sure we pick the right one? Here are some basic ways to find the best format:

  • JPEG: images with rich colors (such as photographs)
  • PNG-8: images with few colors
  • PNG-24: images with partial transparency
  • GIF: animated images

When exporting images in those formats, Photoshop can optimize them through settings such as reduced quality, noise, or number of colors. Make sure your designers are aware of performance practices and prepare the right images with appropriate optimization presets. If you want to learn more about creating performant assets, Lara Hogan’s Fast and Furious: Improving the User Experience with Web Performance is a good read.

Experiment with new formats

There are several new image formats developed by browser vendors: Google’s WebP, Apple’s JPEG 2000 and Microsoft’s JPEG-XR.

WebP is the most competitive of these, with support for both lossless and lossy compression driving its adoption. Lossless WebP is 26% smaller than PNG, and lossy WebP is 25-34% smaller than JPEG. With 74% browser support and graceful degradation, it’s safe to use and can save up to a third of transferred bytes. JPEGs and PNGs can be converted to WebP in Photoshop and other image processing applications, or via the command line (brew install webp).
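A minimal sketch of such graceful degradation (file names are placeholders): browsers that understand WebP pick the first source, while the rest fall back to the JPEG.

```html
<picture>
  <!-- Served only to browsers that support WebP. -->
  <source type="image/webp" srcset="photo.webp">
  <!-- Fallback for everything else. -->
  <img src="photo.jpg" alt="A photo">
</picture>
```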

If you want to explore the visual differences between these formats, I recommend this great example on GitHub.

Optimize using tools and algorithms

Even images in efficient formats need further processing; optimization is an important step.

If you’ve chosen SVGs, which are relatively small, they still need to be compressed. SVGO is a command-line tool that quickly optimizes SVGs by stripping unnecessary metadata. Alternatively, Jake Archibald’s SVGOMG offers a web interface, if you prefer one or are limited by your operating system. Because SVG is an XML-based format, it can also be GZIP-compressed by the server.

ImageOptim is a great choice for most other image formats; it bundles excellent tools such as pngcrush, pngquant, MozJPEG, Google Zopfli, and more into one comprehensive open source package. Available as a macOS application, a command-line interface, and a Sketch plugin, ImageOptim fits easily into existing workflows. Most of the tools ImageOptim relies on have CLIs of their own, so they can be used on Linux or Windows as well.

If you’re inclined to try emerging encoders, earlier this year Google released Guetzli, an open source algorithm based on its research behind WebP and Zopfli. Guetzli can produce JPEGs up to 35% smaller than other available compression methods. The only downside: slow processing times (a minute of CPU per megapixel).

When choosing tools, make sure they meet your expectations and fit your team’s workflow. Ideally, automate the optimization so that no image slips through unoptimized.

Responsive images

Ten years ago, one resolution might have sufficed for all scenarios, but times have changed and responsive websites are now the norm. That’s why we must be especially careful when implementing our carefully optimized visual assets and ensure they adapt to a variety of viewports and devices. Fortunately, thanks to the Responsive Images Community Group, we can do exactly that with the picture element and the srcset attribute (both with 85%+ browser support).

The srcset attribute

srcset works best in the resolution-switching scenario, when we want to serve images according to the user’s screen density and size. Based on a few predefined rules in the srcset and sizes attributes, the browser picks the best image to display for the viewport. This technique can save bandwidth and requests, especially for mobile users.

Example of using the srcset attribute
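As a minimal sketch (file names, widths, and breakpoints are placeholders, not from the article):

```html
<img
  src="photo-800.jpg"
  srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
  sizes="(max-width: 600px) 100vw, 50vw"
  alt="A photo">
<!-- The browser picks the smallest candidate that still looks sharp
     for the slot width given in sizes and the device pixel density. -->
```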

The picture element

The picture element, combined with media attributes, was designed to make art direction easy. By providing different sources for different conditions (tested with media queries), we can always serve the most important part of an image, regardless of resolution.

Example use of picture element
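An art-direction sketch (file names and the breakpoint are placeholders): narrow viewports get a tight crop, wider ones the full scene.

```html
<picture>
  <!-- A tight crop of the subject for small screens. -->
  <source media="(max-width: 600px)" srcset="hero-crop.jpg">
  <!-- The full scene everywhere else; the img also acts as the fallback. -->
  <img src="hero-wide.jpg" alt="Hero image">
</picture>
```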

📚 Read Jason Grigsby’s Responsive Images 101 to get a full understanding of both approaches.

Use an image CDN

The final step in image performance is delivery. All resources can benefit from a CDN, but there are tools built specifically for images, such as Cloudinary and imgix. The benefits of these services go beyond offloading server traffic; they can also significantly reduce response latency.

Image CDNs can take the complexity out of serving responsive, performant images for image-heavy sites. Their offerings vary (and at different prices), but most can resize, crop, and pick the best format on the fly depending on the device and browser, and more besides: compression, pixel-density detection, watermarking, face recognition, and post-processing. With these powerful features and the ability to append parameters to URLs, serving user-centric images is a breeze.
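For illustration only: the parameters below follow imgix-style URL APIs (w for width, auto for format and compression negotiation); the domain is a placeholder and the exact syntax varies by provider.

```html
<!-- The CDN resizes to 800px wide and negotiates the best
     format (e.g. WebP) and compression per browser. -->
<img
  src="https://example.imgix.net/hero.jpg?w=800&auto=format,compress"
  alt="Hero image served through an image CDN">
```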

📝 Image performance list

  1. Choose the correct format
  2. Use vector graphics whenever possible
  3. Reduce quality if the difference isn’t noticeable
  4. Experiment with new formats
  5. Optimize using tools and algorithms
  6. Learn the srcset attribute and the picture element
  7. Use an image CDN

Optimizing web fonts

The ability to use custom fonts is a powerful design tool. But with great power comes great responsibility: with 68% of websites using web fonts, this type of resource is one of the biggest performance bottlenecks (averaging over 100KB, depending on the number of variants and fonts).

Even if size weren’t the biggest issue, the flash of invisible text (FOIT) is. FOIT occurs while a web font is loading or when it fails to load, leaving blank text that makes content inaccessible. It’s worth checking carefully whether we need a web font at all. If we do, there are strategies that help mitigate the negative impact on performance.

Choose the correct format

There are four web font formats: EOT, TTF, WOFF, and the more recent WOFF2. TTF and WOFF are the most widely used, with over 90% browser support. Depending on the browsers you target, serving WOFF2 and falling back to WOFF for older browsers is probably the safest bet. The advantages of WOFF2 are its custom preprocessing and compression algorithms (such as Brotli), which deliver around 30% smaller files and improved parsing performance.

When defining the sources of a web font in @font-face, use the format() hint to specify which format should be used.
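A minimal sketch (the font name and paths are placeholders), letting the browser pick the first format it supports:

```html
<style>
  @font-face {
    font-family: "Body";
    /* The browser downloads only the first source it supports. */
    src: url("/fonts/body.woff2") format("woff2"),
         url("/fonts/body.woff") format("woff");
    font-weight: 400;
    font-style: normal;
  }
</style>
```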

If you use Google Fonts or Typekit, both have implemented policies to mitigate the performance impact. All Typekit kits are now served asynchronously, preventing FOIT, and their JavaScript kit code can be cached for 10 days instead of the default 10 minutes. Google Fonts automatically serves the smallest file based on the user’s device.

Evaluate your font selection

Whether self-hosted or not, the number, size, and styles of fonts can significantly affect performance. Ideally, all we need is a regular and a bold variant. If you’re not sure how to choose fonts, check out Lara Hogan’s Aesthetics & Performance.

Use unicode-range subsets

Unicode-range subsetting allows a large font to be split into smaller sets. It’s a relatively advanced strategy, but it can significantly reduce font size, especially for Asian languages (did you know the average Chinese font contains 20,000 glyphs?). The first step is to limit the font to the necessary language sets, such as Latin, Greek, or Cyrillic. If the web font is only used in a logo, you can use the unicode-range descriptor to pick out the specific characters.
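A minimal sketch (the family name and path are placeholders) that restricts a face to basic Latin characters:

```html
<style>
  @font-face {
    font-family: "Body";
    src: url("/fonts/body-latin.woff2") format("woff2");
    /* Basic Latin plus the Latin-1 Supplement; the font is only
       downloaded if the page actually uses characters in this range. */
    unicode-range: U+0000-00FF;
  }
</style>
```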

Filament Group’s open source command-line tool Glyphhanger generates a list of the glyphs used on a given file or URL. Alternatively, the web-based Font Squirrel Webfont Generator offers advanced subsetting and optimization options. If you use Google Fonts or Typekit, both offer language subsets in their font selection screens, making it easier to settle on a basic subset.

Set up a font loading strategy

Fonts block rendering: because the browser has to build the DOM and CSSOM first, web fonts aren’t downloaded until they’re used in a CSS selector that matches an existing node. This behavior obviously delays the rendering of text, often resulting in the aforementioned flash of invisible text (FOIT). FOIT is even more pronounced on slower networks and mobile devices.

Implementing a font loading strategy prevents users from being locked out of content. Often, a flash of unstyled text (FOUT) is the simplest and most effective solution.

font-display is a new CSS property that offers a JavaScript-free solution. Unfortunately, it is only partially supported (Chrome and Opera), with Firefox and WebKit implementations currently in development. Nevertheless, it can and should be used in combination with other font loading mechanisms.

Example of the font-display property
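A minimal sketch (names and paths are placeholders): swap renders the fallback font immediately, then swaps in the web font once it loads.

```html
<style>
  @font-face {
    font-family: "Body";
    src: url("/fonts/body.woff2") format("woff2");
    /* Show fallback text right away instead of invisible text (FOIT). */
    font-display: swap;
  }
</style>
```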

Fortunately, Typekit’s Web Font Loader and Bram Stein’s Font Face Observer can help manage font loading behavior. In addition, Zach Leatherman is an expert on web font performance; his Comprehensive Guide to Font Loading Strategies will help you choose the right approach for your project.
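As a sketch of the FOUT approach with Font Face Observer (the family name, class name, and CDN URL are assumptions, not from the article):

```html
<script src="https://unpkg.com/fontfaceobserver/fontfaceobserver.js"></script>
<script>
  // Watch for the web font to finish loading.
  var font = new FontFaceObserver("Body");
  font.load().then(function () {
    // Apply the web font only once it's ready; until then users
    // read real text in the fallback font instead of blank space.
    document.documentElement.classList.add("fonts-loaded");
  });
</script>
<style>
  body { font-family: sans-serif; }
  .fonts-loaded body { font-family: "Body", sans-serif; }
</style>
```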

📝 Web font performance list

  1. Choose the correct format
  2. Evaluate your font selection
  3. Use unicode-range subsets
  4. Set up a font loading strategy

Optimizing JavaScript

Currently, the average size of a JavaScript bundle is 446KB, making it the second largest resource type by size (after images).

What we may not realize is that our beloved JavaScript hides an even more dangerous performance bottleneck.

Monitor JavaScript transfer

Optimizing transfer size is only one way to fight page bloat. Once JavaScript is downloaded, it has to be parsed, compiled, and executed by the browser. A look at some popular websites shows that gzipped JavaScript is at least three times larger once decompressed. In effect, we’re shipping out enormous amounts of code.

Parse times for 1MB of JavaScript on different devices. Image from Addy Osmani’s JavaScript Startup Performance.

Analyzing parse and compile times, which vary with the hardware capabilities of the user’s device, is critical to understanding when an application is ready to interact. Parsing and compilation can easily be 2-5 times slower on low-end phones. Addy’s research shows that an app can take 16 seconds to become interactive on an average phone when it takes 8 seconds on desktop. Analyzing these metrics is crucial, and fortunately we can do so with Chrome Developer Tools.

Inspecting parse and compile times in Chrome Developer Tools

Be sure to read Addy Osmani’s detailed summary in JavaScript Startup Performance.

Remove unnecessary dependencies

Today’s package management makes it easy to lose track of the number and size of dependencies. webpack-bundle-analyzer and Bundle Buddy are great visualization tools that help identify duplicated code, the biggest performance offenders, and outdated or unnecessary dependencies.

Example of Webpack Bundle Analyzer

With the Import Cost extension for VS Code and Atom, the size of each imported package becomes immediately visible.

Import Cost extension in VS Code

Implement code splitting

Whenever possible, we should ship only the resources necessary for the experience at hand. Sending users a complete bundle.js, including code for interactions they may never reach, isn’t ideal (imagine downloading the JavaScript for an entire application just to visit the landing page). Similarly, we shouldn’t ship code meant for a particular browser or user agent to everyone.

Webpack, one of the most popular bundlers, supports code splitting out of the box. The simplest split is per page (e.g. home.js for the landing page, contact.js for the contact page, and so on), but Webpack also offers more advanced strategies, such as dynamic imports and lazy loading, that are worth investigating.
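A minimal lazy-loading sketch using a dynamic import (Webpack treats import() as a split point, and modern browsers support it natively; the module path and its render export are placeholders):

```html
<button id="show-chart">Show chart</button>
<div id="chart"></div>
<script type="module">
  document.querySelector("#show-chart").addEventListener("click", async () => {
    // The chart code is fetched only when the user asks for it.
    const { render } = await import("./chart.js");
    render(document.querySelector("#chart"));
  });
</script>
```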

Consider your framework choices

JavaScript front-end frameworks change quickly. React is the most popular, according to the 2016 State of JavaScript survey. A careful review of your architectural choices may reveal that a more lightweight alternative, such as Preact, would do. (Note that Preact is not a full re-implementation of React but a performant, lighter virtual DOM library.) In the same spirit, we can replace larger libraries with smaller alternatives: moment.js with date-fns (or, in certain cases, strip unused locales from moment.js).

Before starting a new project, determine what functionality is required and choose the most performant framework for your needs and goals. Sometimes that may mean choosing to write more vanilla JavaScript.

📝 JavaScript performance list

  1. Monitor JavaScript transfer
  2. Remove unnecessary dependencies
  3. Implement code splitting
  4. Consider your framework choices

Tracking performance going forward

In most cases, the strategies discussed above will make a positive difference to the user experience of the products we build. Performance can be a tricky problem, though, and it is necessary to track the effects of our changes over time.

User-centric performance metrics

Great performance metrics aim to reflect the user experience as closely as possible. onLoad, onContentLoaded, or SpeedIndex say very little about how soon a user can interact with a page. Focusing only on resource transfer makes perceived performance hard to quantify. Fortunately, there are a few timings that describe the visibility and interactivity of content well.

These metrics are First Paint, First Meaningful Paint, Visually Complete, and Time to Interactive (a sketch of measuring paint timings follows the list).

  • First Paint: the browser goes from a blank screen to the first visual change.
  • First Meaningful Paint: text, images, and primary content are visible.
  • Visually Complete: all content in the viewport is visible.
  • Time to Interactive: everything in the viewport is visible and interactive (the JavaScript main thread has gone idle).
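The Paint Timing API can report some of these in the browser. A minimal sketch, assuming a Chromium-based browser (first-paint and first-contentful-paint are related to, but not identical with, the metrics above):

```html
<script>
  // Log paint timings as the browser records them.
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      // e.g. "first-paint 312 ms", "first-contentful-paint 398 ms"
      console.log(entry.name, Math.round(entry.startTime), "ms");
    }
  });
  observer.observe({ entryTypes: ["paint"] });
</script>
```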

These timings correspond directly to what users actually experience, and so are worth tracking as a primary focus. If possible, record all of them; otherwise pick one or two to better monitor performance. Other metrics need watching too, especially the number of bytes we ship (both optimized and decompressed).

Set a performance budget

All this data can quickly become confusing and hard to act on. Without actionable goals, it’s easy to lose sight of why we started. A few years ago, Tim Kadlec wrote about the concept of a performance budget.

Unfortunately, there’s no magic formula for setting them up. Performance budgets often boil down to competitive analysis and product goals that are unique to each business.

When setting a budget, aim for a noticeable difference, usually at least a 20% improvement. Experiment with and iterate on your budget, and take a look at Lara Hogan’s Approaching New Designs with a Performance Budget.

Use the Performance Budget Calculator or the Browser Calories Chrome extension to help you create a budget.

Continuous monitoring

Performance monitoring should be automated, and there are many powerful tools on the market that provide comprehensive reporting.

Google Lighthouse is an open source project that audits performance, accessibility, PWAs, and more. You can use it from the command line or directly in Chrome Developer Tools.

Example of a Lighthouse performance audit

For continuous tracking, Calibre offers performance budgets, device emulation, distributed monitoring, and many other features that would otherwise take considerable effort to build into a homegrown performance suite.

Use Calibre for comprehensive performance tracking

However you track it, make sure the data is transparent and accessible to your entire team or, in smaller organizations, the whole business.

Performance is a shared responsibility, not just the development team’s; we are all accountable for the user experience we create, regardless of role or seniority.

It is important to advocate for speed at the product decision and design stages, and to establish collaborative processes for identifying possible bottlenecks.

Build performance awareness and empathy

Caring about performance isn’t just a business goal (though if you need statistics to make the business case, PWA Stats can help). It’s about basic empathy and putting users’ best interests first.

It’s our responsibility as technologists not to waste users’ time and attention on waiting for pages. Our goal is to build tools that are conscious of people and their time.

Promoting performance awareness should be everyone’s goal. Let’s embrace performance and empathy to build a better, more meaningful future for all.

