• Performance-tuning a React application
  • Originally by Joshua Comeau
  • The Nuggets Translation Project
  • Permanent link to this article: github.com/xitu/gold-m…
  • Translator: ZhangFe
  • Proofread by: Atuooo, Jonjia

React Application performance tuning

A case study

In recent weeks, I’ve been working on Tello, a Web app that tracks and manages TV shows:

As web apps go, it's pretty small: about 10,000 lines of code. It's a React/Redux application built with webpack, backed by a lightweight Node service (Express and MongoDB). Ninety percent of the code lives in the front end. You can see the source code on GitHub.

Front-end performance can be looked at from many angles. Historically, though, I've focused on what happens after the page loads: making sure scrolling stays smooth, for example, and animations don't stutter.

By comparison, I pay less attention to page load time, at least on small projects. After all, there isn't much code to transfer; surely the page becomes accessible and usable very quickly, right?

However, when I did some benchmarking, I was surprised at how slow my 10,000-line app was on a 3G network: it took about 5s to display meaningful content and 15s to resolve all network requests.

I realized I had to put some time and effort into this problem. It doesn't matter how beautiful my animations are if people have to stare at a blank screen for 5 seconds before they can see them.

All in all, I tried 6 techniques this weekend, and now it only takes around 2300ms to show something meaningful on the page — about 50% less time!

This blog post is a case study of the specific technologies I’ve tried and how they’ve worked, but more broadly, it’s a document of what I’ve learned while trying to solve the problem, and some of the ideas I’ve come up with as a solution.

Methodology

All analyses used the same setup:

  • Fast 3G network speed.
  • Desktop resolution.
  • Disable HTTP caching.
  • Logged in, and this account follows 16 TV shows.

The baseline

We need a baseline against which to compare our results!

The page we tested was the summary view of the main logged-in page. It's the page with the largest amount of data, so it also has the most room for optimization.

The summary section contains a set of cards like the following:

Each show has its own card, and each episode has its own little square; a blue square means that the episode has been watched.

This is the summary view we benchmarked on a 3G network, and it doesn't look very good.

  • First meaningful render: ~5000ms
  • First image loaded: ~6500ms
  • All requests finished: >15,000ms

Gosh, it wasn't until around 5s that the page showed anything meaningful. The first image loaded at 6.5 seconds, and all the requests took over 15 seconds to complete.

This timeline view packs in a lot of information. Let's take a closer look at what happened in between:

  1. First, the original HTML is loaded. Since our application is not server-side rendered, this part is very fast.
  2. After that, start downloading the entire JS bundle. This part took a long time. 🚩
  3. After the JS download is complete, React iterates through the component tree, computes the initial mount, and pushes it to the DOM. At this point the page has a header and a footer, with a large black area in between. 🚩
  4. After the DOM is mounted, the application finds that it still needs some data, so it makes a GET request to /me to fetch the user's data, along with the list of shows they follow and the episodes they've watched.
  5. Once we have a list of key shows, we can start requesting the following:
    • Pictures of each program
    • A list of episodes for each show

This data comes from TV Maze’s API.

  • You might be wondering why I don't store this episode information in my own database, so I wouldn't need to call the TV Maze API at all. The main reason is that TV Maze's data is fresher; it knows about all the newest episodes. Of course, I could have pulled this data on the server during step 4, but that would have increased the response time of that step, leaving the user staring at a large blank black area for even longer. Also, I like keeping the server lightweight.

Another option would be a scheduled task that syncs with TV Maze every day, so I'd only hit their API when my data might be stale. I still prefer real-time data, though, so this solution was never implemented.
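Just to make the idea concrete, here's a minimal sketch of what that daily sync might have looked like, assuming node-cron for scheduling, a Mongoose Show model, and a hypothetical syncShowFromTVMaze helper (none of this exists in Tello):

const cron = require('node-cron');

// Hypothetical: once a day, at 4am, refresh every tracked show's episodes.
cron.schedule('0 4 * * *', async () => {
  const shows = await Show.find({});

  for (const show of shows) {
    // syncShowFromTVMaze would hit the TV Maze API and update our database.
    await syncShowFromTVMaze(show);
  }
});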

A significant upgrade

By far the biggest bottleneck is that the original JS bundle is too large and takes too long to download.

The bundle weighs in at 526KB and is currently served uncompressed, so gzip should save us a lot.

Gzip is easy to enable on the Node/Express side; we just need to install the compression module and use it as Express middleware.

const path = require('path');
const express = require('express');
const compression = require('compression');

const app = express();

// Just use compression as an Express middleware!
app.use(compression());

// rootDir (defined elsewhere) points at the project root.
app.use(express.static(path.join(rootDir, 'build')));

Let’s see how our timeline changes by using this very simple solution:

  • First meaningful render: 5000ms -> 3100ms
  • First image loaded: 6500ms -> 4600ms
  • All data loaded: 6500ms -> 4750ms
  • All images loaded: ~15,000ms -> ~13,000ms

The bundle went from 526KB to just 156KB, and it made a huge difference in page loading speed.

Use LocalStorage cache

With the obvious improvement from the previous step in place, I looked back at the timeline. The first render now fires at 2400ms, but it isn't meaningful; the actual content isn't shown until 3100ms, and the full episode data isn't available until around 5000ms.

I started thinking about server-side rendering, but that wouldn't solve this problem: the server would still need to query the database and then TV Maze's API. To make matters worse, during that time the user would still be staring at a blank screen.

What about using localStorage? We can persist every state change to the browser and hydrate the local state from it while the real user data loads. The data shown at first may be stale, but that's okay! The real data loads soon after, and this makes the first-load experience feel very fast.

Because the app uses Redux, persisting the data is very simple. First, we need a scheme to ensure that localStorage is updated when Redux state changes:

import { LOCAL_STORAGE_REDUX_DATA_KEY } from '../constants';
import { debounce } from '../utils'; // Generic debounce util

// When the page first loads, a bunch of Redux actions are dispatched in
// quick succession (each show fetches its episodes, so it's a minimum of
// 2n actions, where n is the number of shows). We don't want to write to
// localStorage that often, so the update is debounced.
const updateLocalStorage = debounce(
  value =>
    value != null
      ? localStorage.setItem(LOCAL_STORAGE_REDUX_DATA_KEY, value)
      : localStorage.removeItem(LOCAL_STORAGE_REDUX_DATA_KEY),
  2500
);

// On every store update, persist the relevant parts to localStorage.
export const handleStoreUpdates = function handleStoreUpdates(store) {
  // Ignore modals and flash messages; they don't need to be persisted.
  const { modals, flash, ...relevantState } = store.getState();

  updateLocalStorage(JSON.stringify(relevantState));
};

// Clear persisted data when the user logs out.
export const clearReduxData = () => {
  // Immediately clear the data stored in localStorage.
  window.localStorage.removeItem(LOCAL_STORAGE_REDUX_DATA_KEY);

  // Because this removal is synchronous while the persisting above is
  // debounced, a queued update could re-save the data right after we delete
  // it. Passing null turns any pending update in the queue into a removal.
  // The synchronous removal still matters: if the user logs out and closes
  // the page immediately, the data is already gone.
  updateLocalStorage(null);
};

Next, we need to get the Redux Store to subscribe to this function and initialize it with the data from the previous session.

import { LOCAL_STORAGE_REDUX_DATA_KEY } from './constants';
import { handleStoreUpdates } from './helpers/local-storage.helpers';
import configureStore from './store';


const localState = JSON.parse(
  localStorage.getItem(LOCAL_STORAGE_REDUX_DATA_KEY) || '{}'
);

const store = configureStore(history, localState);

store.subscribe(() => {
  handleStoreUpdates(store);
});

There were a few remaining issues, but thanks to the Redux architecture, we were able to get most of the functionality done with only a few minor changes.
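One example of such an issue (my own suggestion; the article doesn't show this code): if a previous session somehow persisted malformed JSON, the JSON.parse call above would throw on every page load and the app would never boot. A try/catch keeps that failure harmless, e.g. replacing the const declaration above with:

let localState = {};

try {
  localState = JSON.parse(
    localStorage.getItem(LOCAL_STORAGE_REDUX_DATA_KEY) || '{}'
  );
} catch (err) {
  // Corrupt persisted data is worthless anyway; drop it and start fresh.
  localStorage.removeItem(LOCAL_STORAGE_REDUX_DATA_KEY);
}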

Let’s take a look at the new timeline:

Great! It's hard to see much in these small screenshots, but our render at 2600ms now shows real content: the full list of shows, with episode information saved from the previous session.

  • First meaningful render: 3100ms -> 2600ms
  • Episode data available: 4750ms -> 2600ms (!)

While this doesn't affect the actual load time (we still have APIs to call, and that time still has to be spent), the user sees their data right away, so the perceived speed-up is significant.

Letting the page keep changing as new content streams in is a very popular technique for making pages appear faster. However, I prefer to present the final UI immediately, and the localStorage cache lets me do just that.

This scheme has some additional, non-perf advantages too. For example, a user could change the order of their shows, and previously that preference would be lost when the session ended. Now, when they return to the page, it's still saved!

There is a downside, though: the user has no way of knowing whether fresh data is still loading. I plan to add a small loading indicator in the corner to show when requests are still in flight.
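Here's a sketch of the kind of indicator I have in mind (not built yet; it assumes a hypothetical requestsInFlight counter kept in the Redux state by the data-fetching actions):

import React from 'react';
import { connect } from 'react-redux';

// Shows a small badge in the corner while any request is still in flight.
const SyncIndicator = ({ isSyncing }) => (
  isSyncing ? <div className="sync-indicator">Syncing…</div> : null
);

const mapStateToProps = state => ({
  // `requestsInFlight` is hypothetical: incremented when a request starts,
  // decremented when it resolves.
  isSyncing: state.requestsInFlight > 0,
});

export default connect(mapStateToProps)(SyncIndicator);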

On the other hand, you might be thinking, "That's great for returning users, but it won't do much for new ones!" You're right, but the problem doesn't really apply to new users either: they have no shows to follow, only a prompt to add some, so their page loads very quickly anyway. So for all users, new and returning, we've effectively eliminated the experience of staring at a blank screen.

Images and lazy loading

Even with this latest improvement, images still take a lot of time to load. It isn't shown in the timeline, but on 3G, all the images took more than 12 seconds to load.

The reason was simple: TV Maze returns a huge movie-poster-style photo, but all I needed was a short, wide banner to help the user identify the show at a glance.

Left: downloaded image… Right: actually used image

To solve this problem, my initial idea was to use a CLI tool like ImageMagick, which I'd used before to build ColourMatch.

When a user added a new show, the server would fetch a copy of the image, crop out the middle with ImageMagick, and push it to S3. The client would then use the S3 URL instead of the image link from TV Maze.

However, I decided to use Imgix for this instead. Imgix is an image service that sits on top of S3 (or any other cloud storage provider) and creates cropped or resized images on the fly. All you need to do is request a URL like the one below, and it will create and serve the appropriate image.

https://tello.imgix.net/some_file?w=395&h=96&crop=faces

It has the added advantage of finding the interesting areas in an image and cropping around them. You'll notice in the left/right comparison above that it kept the four kids on bikes in frame, rather than just cropping out the center of the image.

To work with Imgix, your images need to be available via S3 or a similar service. Here's a snippet of my back-end code that uploads an image when a new show is added:

// `request` is the request HTTP library, and `s3` is an AWS SDK S3 client;
// both, along with BUCKET_NAME, are set up elsewhere.
const ROOT_URL = 'https://tello.imgix.net';

const uploadImage = ({ key, url }) =>
  new Promise((resolve, reject) => {
    // Some shows don't have an image; skip those.
    if (!url) {
      resolve();
      return;
    }

    request({ url, encoding: null }, (err, res, body) => {
      if (err) {
        reject(err);
        return;
      }

      s3.putObject(
        {
          Key: key,
          Bucket: BUCKET_NAME,
          Body: body,
        },
        (...args) => {
          resolve(`${ROOT_URL}/${key}`);
        }
      );
    });
  });

By calling this Promise-returning function for each new show, we get an image that can be cropped dynamically.
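For instance, usage when a user follows several shows at once might look like this (the shows array and its id/image fields are my own assumptions):

// Inside some async route handler on the server: kick off one upload per
// show and wait for all of the Imgix-ready URLs.
const imageUrls = await Promise.all(
  shows.map(show => uploadImage({ key: `${show.id}.jpg`, url: show.image }))
);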

On the client side, we use the srcSet and sizes image attributes to make sure the right image is served for the window size and pixel ratio:

const dpr = window.devicePixelRatio;

const defaultImage = 'https://tello.imgix.net/placeholder.jpg';

const buildImageUrl = ({ image, width, height }) => (`
  ${image || defaultImage}?fit=crop&crop=entropy&h=${height}&w=${width}&dpr=${dpr} ${width * dpr}w
`);


// Later, in a render method:
<img
  srcSet={`
    ${buildImageUrl({
      image,
      width: 495,
      height: 128,
    })},
    ${buildImageUrl({
      image,
      width: 334,
      height: 96,
    })}
  `}
  sizes={`
    ${BREAKPOINTS.smMin} 334px,
    495px
  `}
/>

This ensures that mobile devices get a larger version of the image (because the cards take up the entire viewport width), while desktop clients get a smaller version.

Lazy loading

Now each image is smaller, but we still load every image on the page at once! On my large desktop window I can only see six shows at a time, yet all 16 images are fetched as soon as the page loads.

Thankfully, there's a great library called react-lazyload that makes lazy loading very convenient. A code example:

import LazyLoad from 'react-lazyload';

// In some render method somewhere:
<LazyLoad once height={UNITS_IN_PX[6]} offset={50}>
  <img
    srcSet={`...omitted`}
    sizes={`...omitted`}
  />
</LazyLoad>

Alright, let's look at the timeline again.

Our first meaningful render time hasn't changed, but the image load times have dropped significantly:

  • First image loaded: 4600ms -> 3900ms
  • All images in the viewport: ~9000ms -> 4100ms

Sharp-eyed readers may have noticed that only six shows' episodes are downloaded in this timeline, not all 16. That's because my first attempt (and the only one I remembered to capture) lazy-loaded the entire show cards, not just the images.

That approach raised more problems than I could solve in a weekend, though, so I scaled it back to just the images. This doesn't affect the image-load-time optimization.

Code splitting

I was sure code splitting would be a very smart move.

After all, there's an obvious problem right now: we have just one code bundle. Let's use code splitting to reduce the amount of code needed up front!

The routing library I use is React Router 4, whose documentation includes a very simple example of creating an async component. I set up several different configurations, but in the end the code never split up very effectively.
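For reference, the pattern looks roughly like this (a sketch in the spirit of the React Router 4 docs example, from before React.lazy existed; the names are my own):

import React, { Component } from 'react';

// Wraps a dynamic import() in a component that renders nothing until the
// chunk arrives. webpack splits each import()ed module into its own file.
const asyncComponent = loadComponent =>
  class AsyncComponent extends Component {
    constructor(props) {
      super(props);
      this.state = { Inner: null };
    }

    componentDidMount() {
      loadComponent().then(module => {
        this.setState({ Inner: module.default });
      });
    }

    render() {
      const { Inner } = this.state;
      return Inner ? <Inner {...this.props} /> : null;
    }
  };

// Hypothetical usage: the mobile view becomes its own chunk.
const AsyncMobileView = asyncComponent(() => import('./MobileView'));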

Finally, I separated the mobile and desktop views. The mobile version has its own view, which pulls in a swiping library, some custom static assets, and several additional components. The resulting bundle is surprisingly small, about 30KB before compression, but splitting it out still had a noticeable effect:

  • First meaningful render: 2600ms -> 2300ms
  • First image loaded: 3900ms -> 3700ms

One thing I learned from this experiment is that the effectiveness of code splitting depends largely on the kind of application you have. In my case, the biggest dependency is React and a handful of libraries in its ecosystem, and that code is needed across the whole site, so there's nothing to split off.

We could split components at the route level for some marginal first-load benefit, but that would add a delay every time the route changes; it's not fun to deal with those little hiccups everywhere.


Other things I tried and thought about

Server-side rendering

My idea was to render a "shell" on the server side: a placeholder with the correct layout, just no data.

But I foresaw a problem: the client already has the previous session's data in localStorage and initializes with it. The server knows nothing about that data, so I'd have to deal with the markup mismatch between the client and the server.

I figured SSR could shave half a second off the first meaningful render, but at that point the site still wouldn't be interactive, and it's very strange when a site looks ready but isn't.

Also, SSR adds complexity and slows down development. Performance is important, but good enough is good enough.

One idea I was interested in but didn't have time to investigate was compile-time SSR. It would only work for static pages, such as the logged-out views, but it could be very effective: as part of the build process, I'd create and persist index.html, and the Node server would serve it to users as a plain HTML file. The client would still download and run React, so the page would still become interactive, but the server wouldn't spend any time building markup, because it was built when the code was deployed.
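A minimal sketch of that idea, assuming a prerender script that runs after webpack (the component and file paths are hypothetical):

// prerender.js: run once, as part of the deploy process.
const fs = require('fs');
const React = require('react');
const { renderToString } = require('react-dom/server');

// Hypothetical: a server-compiled build of the logged-out landing view.
const LandingPage = require('./build-node/LandingPage').default;

const template = fs.readFileSync('./build/index.html', 'utf8');
const markup = renderToString(React.createElement(LandingPage));

// Bake the rendered markup into the HTML file that Express serves statically.
fs.writeFileSync(
  './build/index.html',
  template.replace('<div id="root"></div>', `<div id="root">${markup}</div>`)
);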

Loading dependencies from a CDN

Another idea I think has a lot of potential is serving React and ReactDOM from a CDN.

Webpack makes this easy to do; the externals config option keeps them out of your bundle and resolves them to globals instead (which assumes React and ReactDOM are loaded separately, via script tags).

// webpack.config.prod.js
{
  externals: {
    react: 'React',
    'react-dom': 'ReactDOM',
  },
}

This approach has two advantages:

  • A popular library served from a CDN is quite likely to already be in the user's cache.
  • Dependencies can download in parallel with your own code, instead of arriving as one large file.

I was surprised to find that, at least in the worst-case scenario where the CDN copy isn't cached, there's no benefit to moving React to a CDN:

First meaningful render: 2300ms -> 2650ms

You can see that React and ReactDOM download in parallel with my main bundle, and yet the overall time actually got slower.

I'm not saying that using a CDN is a bad idea. I'm no expert here, and it's probably my fault rather than the idea's! But at least in my case, it didn't help.

Note: without a local cache, the React-from-CDN option has no inherent advantage: your total code volume doesn't shrink and your bandwidth doesn't change, and while the JS downloads in parallel, it still executes serially, so neither total download time nor execution time improves. It can even be slowed down by the cost of extra HTTP connections, which is exactly why we try to minimize HTTP requests. And because this was a local test, the CDN's real advantages couldn't show up. I still think the scheme is worth considering, for two main reasons: 1. a CDN guarantees decent download speeds for most people, whereas many users may download slowly from your own server due to routing issues; 2. with the React-related libraries split out, that code stays cached across your releases, reducing load times for returning users.


Conclusion

Through this article, I hope to convey two points of view:

  1. Small apps are easy to ship without much performance work, but a weekend of effort can make a huge difference. Thanks to the Chrome developer tools, you can quickly identify bottlenecks, and you may be surprised how many performance potholes are hiding in a project. You can also hand complex tasks off to a low-cost or free service like Imgix.
  2. Every application is different. This article details some techniques that worked for Tello, but they were chosen for this app's particular needs. Even if these specific tips don't apply to your application, I hope the broader point is clear: performance depends on the creativity of web developers.

For example, conventional wisdom says server-side rendering is a must, but for my application, client-side rendering backed by localStorage (or service workers) turned out to be the better choice! Maybe you can do some work at compile time to cut SSR cost, or take a cue from Netflix and not ship React to the front end at all!

Performance tuning takes a lot of creativity and an open mind, and that's what makes it so much fun.

Thank you very much for reading! I hope this article has been helpful :). If you have any thoughts, please contact me on Twitter.

The source code for Tello is available on GitHub. 🌟

