Addy Osmani

Translator: Jothy, UC International R&D

Welcome to the "UC International Technology" public account, where we share high-quality technical articles – both original and translated – covering client, server, algorithms, testing, data, front-end, and more.

This is a rich article about performance optimization, and well worth reading. It takes about 5–10 minutes.

Over the past year, we've been busy trying to figure out how to make the web faster and more performant. This article covers the new tools, methods, and libraries we'd like to share with you. In Part 1, we'll show you some optimization techniques we used while developing the Oodles Theater app. In Part 2, we'll discuss our predictive loading experiments and the new Guess.js project.

Tip: you can watch the video on YouTube at https://www.youtube.com/watch?v=Mv-l3-tJgGk


Performance requirements

The web gets heavier every year. If we check the state of the web, we can see that a median mobile page weighs about 1.5MB, most of it JavaScript and images.

Increasing site size, coupled with other factors such as network latency, CPU limitations, rendering blocking patterns or redundant third-party code, can lead to complex performance challenges.

Most users rank speed at the very top of the user experience hierarchy of needs (see figure below). This isn't too surprising, since you can't really do much until a page finishes loading: you can't get value out of the page, and you can't admire its aesthetics.


Figure 1. How important is speed to users?

We know that performance matters to users, but figuring out where to start optimizing can feel like hunting for a secret. Fortunately, there are tools to help us.




Lighthouse – The foundation for performance workflows

Lighthouse is part of Chrome DevTools; it allows you to audit your website and gives you hints on how to improve it.

We recently launched a series of new performance audits (https://developers.google.com/web/updates/2018/05/lighthouse) that are very useful in a day-to-day development workflow.

Figure 2. New Lighthouse audits


Let's explore how to leverage them with a real-world example: the Oodles Theater app. This is a small demo web app where you can try out some of our favorite interactive Google Doodles and even play a game or two.

While building the app, we wanted it to be as performant as possible. The starting point for optimization was the Lighthouse report.

Figure 3. Lighthouse report for the Oodles app

Our app's initial performance in the Lighthouse report was terrible. On a 3G network, users had to wait 15 seconds for the first meaningful paint, or for the app to become interactive. Lighthouse highlighted a host of issues with our site, and the overall performance score of 23 reflected exactly that.

The page weighed about 3.4MB – we desperately needed to lose some weight.

This was where our first performance challenge began: finding content that could be easily removed without affecting the overall experience.



Performance optimization opportunities


1. Remove unnecessary resources

There are some obvious things that can be safely removed: whitespace and comments.

Figure 4. Minified and compressed JavaScript and CSS

Lighthouse surfaces this opportunity in its Unminified CSS & JavaScript audit. Our app is built with webpack, so to minify the code we chose the UglifyJS plugin.

Minification is a common task, so you should be able to find an off-the-shelf solution that fits into your build process.
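For reference, a minimal webpack setup along these lines might look like this. This is a sketch, assuming webpack 4+ and the `uglifyjs-webpack-plugin` package; adapt it to your own build.

```javascript
// webpack.config.js — minimal minification sketch (assumes webpack 4+
// and the uglifyjs-webpack-plugin package; adjust to your build).
const UglifyJsPlugin = require('uglifyjs-webpack-plugin');

module.exports = {
  mode: 'production', // production mode minifies by default in webpack 4+
  optimization: {
    minimizer: [
      // Override the default minimizer to tune UglifyJS options.
      new UglifyJsPlugin({ parallel: true, sourceMap: true }),
    ],
  },
};
```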

Another useful audit in this area is Enable text compression. There is no reason to send uncompressed files, and most CDNs support compression out of the box.

We used Firebase Hosting to host our code. Firebase enables gzip by default, so by hosting our code on a reasonable CDN we got compression for free.

While gzip is a very popular compression method, other mechanisms such as Zopfli and Brotli are gaining traction as well. Brotli is supported in most browsers, and you can use a binary to pre-compress your resources before sending them to the server.





2. Use an efficient cache policy

Our next step was to make sure we don't send resources twice when we don't have to.

The Inefficient cache policy audit in Lighthouse helped us notice that we could optimize our caching strategy to achieve exactly that. By setting a max-age expiration header on our server, we ensure that on repeat visits users can reuse resources they downloaded before.

Ideally, you should cache as many resources as safely possible, for as long as possible, and provide validation tokens for efficient revalidation of any resources that were updated.
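One common way to implement this is to cache fingerprinted assets aggressively and make HTML always revalidate. The sketch below uses illustrative header values and a hypothetical fingerprint pattern, not our exact hosting config.

```javascript
// Sketch of a cache policy: fingerprinted static assets get a long max-age
// (a content change produces a new URL, so stale copies are never served),
// while HTML is always revalidated so users pick up new deploys.
function cacheControlFor(path) {
  const fingerprinted = /\.[0-9a-f]{8}\.(js|css|png|jpg|woff2)$/;
  if (fingerprinted.test(path)) {
    return 'public, max-age=31536000, immutable';
  }
  return 'no-cache'; // serve from cache only after revalidation
}

console.log(cacheControlFor('app.3f2a1c9d.js'));
console.log(cacheControlFor('index.html'));
```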





3. Remove unused code

So far we've removed the obvious parts of the download that weren't necessary, but what about the less obvious parts? For example, unused code.

Figure 5. Checking code coverage

Sometimes we include code in our apps that isn't really necessary. This happens especially when an app is developed over a long period of time: the team or the dependencies change, and sometimes an orphaned library gets left behind. That's exactly what happened to us.

At the beginning we used the Material Components library to quickly prototype our app. Over time we moved to a more customized look and completely forgot about that library. Fortunately, a code coverage check helped us rediscover it in our bundle.

You can check your code coverage stats in DevTools, both for the runtime and for the load time of your application. You can see the two big red stripes in the screenshot below – we had over 95% of our CSS unused, and a big chunk of JavaScript as well.

Lighthouse also surfaces this issue in its Unused CSS rules audit, indicating a potential saving of over 400KB. So we went back to our code and removed both the JavaScript and CSS parts of that library.

Figure 6. After dropping the MVC adapter, our styles drop to 10KB!


This dropped our CSS bundle by a factor of 20, which is pretty good for a small, two-line commit.

Naturally, it improved our performance score, and our time to interactive improved as well.

With changes like this, however, it's not enough to just check your metrics and scores. Removing actual code is never risk-free, so you should always look out for potential regressions.

Our code was unused in 95% of cases – that remaining 5% was still in use somewhere. Apparently one of our components was still using styles from that library: the little arrows in the doodle slider. Because they were so small, we could just manually merge those styles back into the buttons.

Figure 7. A component that was still using the deleted library

So if you remove code, make sure you have a proper testing workflow in place to help guard against potential visual regressions.





4. Avoid enormous network payloads

We know that large resources can slow down web page loads. They can cost our users money and have a big impact on their data plans, so it's really important to be aware of this.

Lighthouse was able to detect an issue with some of our network payloads using its Enormous network payloads audit.

Figure 8. Detecting enormous network payloads

Here we can see that we had over 3MB worth of code being shipped down the wire – quite a lot, especially on mobile devices.

At the very top of the list, Lighthouse highlighted a 2MB uncompressed vendor JavaScript bundle. This is also a problem highlighted by webpack.

As the saying goes: The fastest request is the one that hasn’t been made yet.

Ideally, you should measure the value of every single asset you're serving to your users, measure the performance of those assets, and decide whether it's worth shipping them with the initial experience – because sometimes these assets can be deferred, lazy-loaded, or processed during idle time.

In our case, because we were dealing with a lot of JavaScript bundles, we were fortunate that the JavaScript community has a rich set of bundle auditing tools.

Figure 9. JavaScript bundle auditing

We started with webpack-bundle-analyzer, which told us that we were including a dependency called unicode that weighed 1.6MB of parsed JavaScript – quite a lot.
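If you use webpack, wiring the analyzer into your config is straightforward. A sketch assuming the `webpack-bundle-analyzer` package:

```javascript
// webpack.config.js — emit a treemap report of what's inside each bundle.
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  plugins: [
    // 'static' writes a report.html file instead of starting a local server.
    new BundleAnalyzerPlugin({ analyzerMode: 'static' }),
  ],
};
```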

We then went over to our editor and, using the Import Cost plugin for Visual Studio Code, visualized the cost of every module we imported. This helped us discover which component included the code that referenced this module.

We then switched to another tool, BundlePhobia. It lets you enter the name of any npm package and see its estimated minified and gzipped size. We found a nice alternative for the slug module we were using that weighed only 2.2KB, so we switched to it.

This made a big difference to our performance. Between this change and discovering other opportunities to trim our JavaScript bundle size, we saved 2.1MB of code.

Combined with minifying and compressing these bundles, we saw a 65% improvement overall. We found it was really worth doing.

So, in general, try to eliminate unnecessary downloads from your websites and apps. Taking inventory of resources and measuring their performance impact can lead to dramatic changes, so be sure to review your resources regularly.



Reduce JavaScript startup time through code splitting

While large network payloads can have a big impact on our app, there's another thing that can have a really big impact – JavaScript.

JavaScript is your most expensive asset. On mobile devices, large bundles of JavaScript can delay how soon users are able to interact with UI components. That means they can be tapping on the UI without anything meaningful happening. So it's important to understand why JavaScript costs so much.

This is how a browser processes JavaScript.

Figure 10. JavaScript processing

First the script has to be downloaded, then the JavaScript engine needs to parse the code, compile it, and execute it.

Now, those phases don't take very long on a high-end device like a desktop or laptop, or even a high-end phone. But on a mid-range phone this process can take five to ten times longer. This is what delays interactivity, so it's critical to try to keep it down.

To help you discover this issue with your app, we introduced a new JavaScript boot-up time audit to Lighthouse.

Figure 11. JavaScript boot-up time audit

In the Oodle app's case, it told us we were spending 1.8 seconds in JavaScript boot-up time. What was happening was that we were statically importing all of our routes and components into one monolithic JavaScript bundle.

One way to solve this problem is to use code splitting.

The idea behind code splitting is that instead of giving your users a whole pizza's worth of JavaScript at once, what if you only gave them one slice at a time, as needed?

Code splitting can be applied at the route level or at the component level. It works great with React and React Loadable, Vue.js, Angular, Polymer, Preact, and many other libraries.

We incorporated code splitting into our application by switching from static imports to dynamic imports, giving us the asynchronous lazy loading we needed.

Figure 13. Code splitting using dynamic imports
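The core idea can be sketched without any framework: map each route to a loader function that dynamically imports its chunk, and memoize the result so a chunk is fetched at most once. All names below are illustrative, and the stand-in loaders replace what would be real `import()` calls.

```javascript
// Minimal route-level code-splitting sketch. Each loader would wrap a
// dynamic import, e.g. () => import('./details-page.js'), so the chunk is
// only fetched on first navigation and reused afterwards.
function createLazyRoutes(loaders) {
  const cache = new Map();
  return function load(route) {
    if (!cache.has(route)) {
      cache.set(route, loaders[route]()); // kick off the (single) fetch
    }
    return cache.get(route); // a promise for the route's module
  };
}

// Stand-in loaders; in a real app these would be dynamic import() calls.
let fetches = 0;
const load = createLazyRoutes({
  '/doodles': async () => { fetches += 1; return { page: 'doodles' }; },
});

load('/doodles').then((mod) => console.log(mod.page, fetches));
```

Repeat navigations to the same route resolve from the cache, so the chunk is downloaded only once.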

The effect of this was both shrinking our bundles and reducing our JavaScript boot-up time. It dropped to 0.78 seconds, making the app 56% faster.

In general, if you’re building your JavaScript experience, be sure to only send users the code they need.

Use concepts such as code splitting and tree shaking, and take a look at the webpack-libs-optimizations repository to learn how to reduce your library sizes when using webpack.




Optimize images

Figure: a joke about image loading performance

In the Oodle app, we use a lot of images. Unfortunately, Lighthouse was far less enthusiastic about them than we were. In fact, we failed all three image-related audits.

We had forgotten to optimize our images, we weren't sizing them correctly, and we could also gain something from using other image formats.

Figure 14. Lighthouse image audits

We started optimizing our images.

For one-off optimizations, you can use visual tools such as ImageOptim or XNConvert.

A more automated approach is to use a library like Imagemin that optimizes images as part of the build process.

That way you can ensure that any images added in the future get optimized automatically. Some CDNs and third-party solutions, such as Akamai, Cloudinary, or Fastly, offer comprehensive image optimization solutions, so you could also simply host your images on those services.

If you don't want to do that because of cost or latency concerns, projects like Thumbor or Imageflow offer self-hosted alternatives.

Figure 15. Before and after optimization

Our background PNG was flagged as big in webpack, and it was. After resizing it to the viewport and running it through ImageOptim, we got it down to 100KB, which is acceptable.

Repeating this for the other images on the site brought the overall page weight down significantly.


Use the right format for animated content

GIFs are extremely expensive. Surprisingly, the GIF format was never intended as an animation platform, so switching to a more suitable video format can save you a lot of file size.

In the Oodle app, we used a GIF as the intro animation on the home page. According to Lighthouse, switching to a more efficient video format could save more than 7MB. Our animation weighed about 7.3MB, far too much for any website, so we turned it into a video element with two source files – an MP4 and a WebM for broader browser support.

Figure 16. Replacing an animated GIF with a video

We used the FFmpeg tool to convert our animated GIF into an MP4 file. The WebM format offers even bigger savings – the ImageOptim API can do that conversion for you.

This conversion saved us over 80% of the overall size, bringing it down to around 1MB.

Still, 1MB is a large resource to push down the wire, especially for users on constrained bandwidth. Fortunately, we could use the Effective Connection Type API to detect that they're on a slow network and serve them a much smaller JPEG instead.

This interface uses the effective round-trip time and downlink values to estimate the type of network the user is on. It simply returns a string – slow-2g, 2g, 3g, or 4g. Based on this value, for users on below-4G connections, we replace the video element with an image.
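The decision itself is simple enough to sketch. The function name and the swap helper mentioned in the comment are ours, not a library API, and `navigator.connection` is not available in every browser, so we default to the rich experience when the API is missing.

```javascript
// Pick hero media based on the Network Information API's effectiveType.
function pickHeroMedia(effectiveType) {
  // Only serve the full MP4/WebM video on '4g'; everyone else gets a JPEG.
  return effectiveType === '4g' ? 'video' : 'image';
}

// In the browser, roughly (hypothetical swap helper):
//   const type = navigator.connection ? navigator.connection.effectiveType : '4g';
//   if (pickHeroMedia(type) === 'image') swapVideoForJpeg();
console.log(pickHeroMedia('3g'));
console.log(pickHeroMedia('4g'));
```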

It sacrifices a little of the experience, but at least the site stays usable on a slow network.



Lazy loading of off-screen images

Scrolling carousels, sliders, or very long pages often load images even though the user doesn't see them on the page right away.

Lighthouse flags this behavior in its Offscreen images audit, and you can also see it for yourself in the Network panel of DevTools. If you see lots of images being fetched while only a few are visible on the page, it means you could consider lazy-loading them instead.

Lazy loading is not yet natively supported in browsers, so we had to use JavaScript to add this capability. We used the Lazysizes library to add lazy-loading behavior to our Oodle covers.

Lazysizes is smart because it doesn't just track the visibility changes of an element; it also proactively prefetches elements near the viewport for an optimal user experience. It also offers an optional integration with IntersectionObserver, which makes the visibility lookups very efficient.
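The prefetch heuristic boils down to something like the sketch below. This is a simplification of the idea, not Lazysizes' actual implementation, and the margin value is arbitrary.

```javascript
// Treat an element as worth loading once it comes within a margin of the
// viewport, rather than only when it is actually visible.
function shouldLoad(elementTop, viewportBottom, margin = 300) {
  return elementTop <= viewportBottom + margin;
}

console.log(shouldLoad(1000, 800));  // within the 300px margin: load it
console.log(shouldLoad(2000, 800));  // far below the fold: wait
```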

After this change, our images are fetched on demand. If you want to dig deeper into the subject, check out images.guide – a very handy and comprehensive resource.

images.guide: https://images.guide/



Help browsers provide critical resources early

Not every byte shipped down the wire to the browser has the same degree of importance, and the browser knows this. A lot of browsers have heuristics to decide what they should fetch first – so sometimes they fetch CSS before images or scripts.

Something that could be useful is for us, as the authors of the page, to inform the browser about what's really important to us. Thankfully, over the last couple of years browser vendors have been adding a number of features to help with this, such as resource hints like link rel=preconnect, preload, and prefetch.

These web platform features help the browser fetch the right thing at the right time, and they can be more efficient than custom, script-based loading logic.

Let’s take a look at how Lighthouse actually guides us to use these features effectively.

One of the first things Lighthouse suggested was to avoid multiple, costly round trips to any origin.

Figure 17. Avoid multiple costly round trips to any origin

In the case of the Oodle app, we make heavy use of Google Fonts. Whenever you drop a Google Fonts stylesheet into your page, it connects to up to two subdomains. What Lighthouse told us is that if we could warm up that connection, we could save up to 300 milliseconds on our initial connection time.

With link rel=preconnect, we can effectively mask that connection latency.

Especially with something like Google Fonts, where your font-face CSS is hosted on googleapis.com and your font resources are hosted on gstatic, this can have a really big impact. So we applied this optimization and shaved off a few hundred milliseconds.

The next thing Lighthouse suggested was preloading key requests.

Figure 18. Preloading key requests

<link rel=preload> is really powerful – it informs the browser that a resource is needed as part of the current navigation, and it tries to get the browser fetching it as soon as possible.

Here Lighthouse was telling us that we should go and preload our key web font resources, because we were loading two web fonts.

Preloading a web font looks like this: you specify rel=preload, you pass in as with a value of font, and then you specify the type of font you're loading, such as woff2.
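Put together as a tiny helper, the tag looks like this. The font URL is a placeholder, and the helper itself is just for illustration; in practice you'd put the tag directly in your HTML head.

```javascript
// Build the <link rel="preload"> tag for a web font. Font preloads must be
// anonymous CORS requests, hence the crossorigin attribute, even for
// same-origin fonts.
function fontPreloadTag(href) {
  return `<link rel="preload" as="font" type="font/woff2" href="${href}" crossorigin>`;
}

console.log(fontPreloadTag('/fonts/oodle.woff2'));
```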

The effect on your page will be obvious.

Figure 19. Impact of preloading resources

Generally, without link rel=preload, if web fonts happen to be critical to your page, the browser first has to fetch your HTML, parse your CSS, and only much later go and fetch your web fonts.

With link rel=preload, as soon as the browser has parsed your HTML it can start fetching those web fonts much earlier. In our app's case, this reduced the time it took us to render text with our web fonts.

Now, it's not quite that straightforward if you want to try preloading fonts using Google Fonts. There is one gotcha.

The Google Fonts URLs that we specify on our font-faces in our stylesheets happen to be something the Google Fonts team updates fairly regularly. These URLs can expire or get updated, so if you want complete control over your font loading experience, we recommend self-hosting your web fonts. This can also be great because it gives you access to things like link rel=preload.

In our case, we found the Google Web Fonts Helper tool really useful for helping us offline some of those web fonts and set them up locally, so check that tool out.

Whether you're including web fonts or JavaScript among your critical resources, try to make sure the browser gets those critical resources as soon as possible.




Experiment: Priority hints

There's one more special thing we'd like to share with you today. In addition to features like resource hints and preload, we've also been working on a brand-new experimental browser feature we call Priority Hints.

Figure 20. Priority Hints

This new feature lets you hint to the browser how important a resource is. It exposes a new attribute – importance – with the values low, high, or auto.

This allows us to deprioritize less important resources, such as non-critical styles, images, or fetch API calls, to reduce contention for bandwidth. We can also boost the priority of more important things, like our hero images.

For our Oodle App, this actually gave us an opportunity to optimize.

Figure 21. Setting the priority of the first visible content

Before we set up lazy loading for our images, the situation was this: we have an image carousel with all of our doodles, and the browser was fetching all of the images at the very start of the carousel with high priority. Unfortunately, it was the images in the middle of the carousel that were most important to the user. So we set the importance of the background images to very low and the foreground ones to very high. This gave us a two-second saving on slow 3G, and we were able to fetch and render those images much more quickly. A nice positive experience.

We hope to bring this feature to Canary in a few weeks, so stay tuned.




Develop a web font loading strategy

Typography is fundamental to good design. If you're using web fonts, ideally you don't want to block the rendering of your text, and you definitely don't want to show invisible text.

We highlight this in Lighthouse with the Avoid invisible text while web fonts are loading audit.

Figure 22. Avoid invisible text while web fonts are loading

If you load your web fonts using a font-face block and a font takes a long time to fetch, you're letting the browser decide what to do. Some browsers wait anywhere up to three seconds before falling back to a system font, and eventually swap in the web font once it has downloaded.

We're trying to avoid this invisible text. In this case, we wouldn't be able to see this week's classic doodles if the web font took too long to load. Thankfully, with a new feature called font-display, you get much more control over this process.

Font-display helps you decide how web fonts will render or fall back based on how long they take to swap in.

In this case we used font-display: swap. Swap gives the font face a zero-second block period and an infinite swap period. That means the browser draws your text immediately with a fallback font if the font takes a while to load, and swaps it in once the font face is available.

This was great for our app: it let us display meaningful text very early on and transition to the web font once it was ready.

Figure 23. Font display results

In general, if you happen to be using web fonts, as a large proportion of the web does, have a good web font loading strategy in place.

There are a lot of web platform features you can use to optimize your font loading experience, and also check out Zach Leatherman's Web Font Recipes repo, because it's great.

Web Font Recipes repo: https://www.zachleat.com/web/recipes/




Reduce rendering blocking scripts

There are other parts of our application that could be pushed earlier in the download chain to provide some basic user experience sooner.

On the Lighthouse timeline strip, you can see that during those first few seconds, while all the resources are loading, the user can't see any content.

Figure 24. Reducing render-blocking stylesheets


Downloading and processing external stylesheets was blocking our rendering process from making any progress.

We tried to optimize our critical rendering path by delivering some of the styles early.

If we extract the styles that are responsible for the initial render and inline them in our HTML, the browser can render them straight away, without waiting for the external stylesheet to arrive.

In our case, we used an npm module called Critical to inline the critical content into index.html during a build step.
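Our build step looked roughly like the sketch below. It is based on Critical's commonly documented options; treat the exact option names and viewport values as assumptions and check the package's README for your version.

```javascript
// build-critical.js — inline above-the-fold CSS into index.html at build time.
// Option names per the 'critical' package docs (assumed; verify per version).
const critical = require('critical');

critical.generate({
  base: 'dist/',        // directory the source and target files live in
  src: 'index.html',
  target: 'index.html', // overwrite with the critical CSS inlined
  inline: true,
  width: 375,           // viewport used to decide what is "above the fold"
  height: 667,
});
```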

While this module did most of the heavy lifting for us, it was still a little tricky to get it working smoothly across different routes.

If you're not careful, or your site's structure is really complex, it can be very difficult to introduce this pattern if you didn't plan for an app shell architecture from the beginning.

That's why it's so important to take performance considerations into account early on. If you don't design for performance from the start, you're likely to run into issues doing it later.

In the end our risk paid off: we managed to make it work, and the app started delivering content much earlier, improving our first meaningful paint time significantly.


The results

That was a long list of performance optimizations we applied to our site. Let's take a look at the outcome. The results show how our application loaded on a median mobile device on a 3G network, before and after the optimization.

Lighthouse's performance score went up from 23 to 91 – considerable progress in terms of speed. All of these changes were driven by us continuously checking and following the Lighthouse report. If you'd like to see how we technically implemented all of the improvements, feel free to take a look at our repository (http://github.com/google/oodle-demo), especially the PRs that landed there.



Predictive performance – Data-driven user experience

We believe machine learning represents an exciting opportunity for the future in many areas. One idea we hope will lead to more experimentation is that real data can really guide the user experiences we're creating.

Today, we make a lot of arbitrary decisions about what the user might want or need, and therefore what gets prefetched, preloaded, or pre-cached. If we guess right we're able to prioritize a small amount of resources, but it's really hard to scale this to the whole website.

We actually have data available today to ground our optimizations in. Using the Google Analytics Reporting API, we can take a look at the next top page and exit percentages for any URL on our site and draw conclusions about which resources we should prioritize.

If we combine this with a good probability model, we can avoid wasting our users' data by aggressively over-prefetching content. We can take that Google Analytics data and use machine learning and models like Markov chains or neural networks to implement such a model.
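At its simplest, such a model is just transition counts derived from analytics data. Below is a toy sketch of the idea – not Guess.js's actual implementation – that estimates which page users most often visit next from the current page, i.e. the candidate whose bundle we would prefetch.

```javascript
// Estimate next-page probabilities from past navigation pairs [from, to].
function nextPageProbabilities(navigations, currentPage) {
  const counts = new Map();
  let total = 0;
  for (const [from, to] of navigations) {
    if (from !== currentPage) continue;
    counts.set(to, (counts.get(to) || 0) + 1);
    total += 1;
  }
  const probs = {};
  for (const [page, n] of counts) probs[page] = n / total;
  return probs;
}

// Hypothetical analytics log: two users went home -> doodles, one -> about.
const logs = [['/', '/doodles'], ['/', '/doodles'], ['/', '/about']];
console.log(nextPageProbabilities(logs, '/'));
```

A prefetching layer would then fetch the bundles of the highest-probability pages, with a probability threshold tuned to the user's connection so data isn't wasted on unlikely navigations.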

Figure 25. Data-driven bundling for web applications

To facilitate these experiments, we're excited to announce a new initiative called Guess.js.

Figure 26. Guess.js

Guess.js is a project focused on data-driven user experiences for the web. We hope it will inspire exploration of using data to improve web performance and go beyond that. It's all open source and available on GitHub today. It was built in collaboration with the open source community by Minko Gechev, Gatsby's Kyle Mathews, Katie Hempenius, and a number of others.



Conclusion

Scores and metrics help speed up the Web, but they are only means, not ends in themselves.

We’ve all experienced slow web page loads, but we now have an opportunity to make fast loading more enjoyable for our users.

Performance improvement is a journey. Many small changes can bring huge benefits. By using the right tuning tools and keeping an eye on Lighthouse reports, you can deliver a better, more inclusive experience for your users.

Special thanks to: Ward Peeters, Minko Gechev, Kyle Mathews, Katie Hempenius, Dom Farolino, Yoav Weiss, Susie Lu, Yusuke Utsunomiya, Tom Ankers, Lighthouse and Google Doodles.

Original article (English):

https://developers.google.com/web/updates/2018/08/web-performance-made-easy

