Original text: medium.com/swlh/should…

Some of our developers recently asked: when should I use useMemo in React? That’s a very good question. In this article, we’ll take a scientific approach: define a hypothesis and test it with real-world benchmarks in React.

Read on to learn about useMemo’s impact on performance.

What is useMemo?

useMemo is a hook provided by React. It lets developers cache a computed value together with a list of dependencies. If any value in the dependency list changes, React reruns the computation and caches the new result; otherwise, React returns the previously cached value.

This mainly matters for re-renders: when the component re-renders, it reads the value from the cache rather than looping through an array or reprocessing the data over and over again.
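As a quick illustration, here is a minimal sketch of what that looks like (the PriceList component and its items prop are hypothetical, for illustration only):

import React, { useMemo } from 'react';

// Hypothetical component illustrating the basic useMemo shape
const PriceList = ({ items }) => {
    // The callback runs on the first render and again only when
    // `items` (the dependency) changes; otherwise React returns
    // the cached result
    const total = useMemo(() => {
        return items.reduce((sum, item) => sum + item.price, 0);
    }, [items]);
    return <div>Total: {total}</div>;
};

export default PriceList;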

React useMemo

A first look at the React documentation shows that it doesn’t say when useMemo should be used. It simply describes what it does and how to use it:

You may rely on useMemo as a performance optimization

This makes the discussion of when to use useMemo interesting!

How complex or large does the data have to be before we see the performance benefits of useMemo? When should developers use it?

The experiment

Before we start the experiment, let’s define a hypothesis.

Let’s first define the data to be processed and call the processing complexity n. If n = 100, we need to loop through an array of 100 items to compute the final value of the memoized variable.

We also need to separate two operations. The first is the initial render of the component: whether a variable uses useMemo or not, its initial value must be computed either way. The second operation to measure is a re-render after that first render: with useMemo, the value can now be retrieved from the cache, which is where the performance advantage over the unmemoized version should become visible.

In all cases, I expect about 5-10% overhead during the initial render to set up the cache and store the value. For n < 1000, I expect useMemo to perform worse overall. For n > 1000, I expect similar or better re-render performance with useMemo, while the initial render should still be slightly slower due to the extra caching work. What’s your guess?

Benchmark setup

We set up a small React component as follows, which generates an object of complexity n, passed in via the level prop.

  • BenchmarkNormal.jsx
import React from 'react';

const BenchmarkNormal = ({ level }) => {
    // Build an object whose construction cost scales with `level`
    const complexObject = {
        values: []
    };
    for (let i = 0; i <= level; i++) {
        complexObject.values.push({ value: 'mytest' });
    }
    return <div>Benchmark level: {level}</div>;
};

export default BenchmarkNormal;

This is our normal benchmark component. We will also make a useMemo version of it, BenchmarkMemo.

  • BenchmarkMemo.jsx
import React, { useMemo } from 'react';

const BenchmarkMemo = ({ level }) => {
    // Same computation as BenchmarkNormal, but cached with useMemo
    // and recomputed only when `level` changes
    const complexObject = useMemo(() => {
        const result = {
            values: []
        };
        for (let i = 0; i <= level; i++) {
            result.values.push({ value: 'mytest' });
        }
        return result;
    }, [level]);
    return <div>Benchmark with memo level: {level}</div>;
};

export default BenchmarkMemo;

We then set up these components in App.js so that they are displayed when a button is pressed. We also use React’s Profiler to measure render times.

import React, { useState, Profiler } from 'react';
import BenchmarkNormal from './BenchmarkNormal';

function App() {
    const [showBenchmarkNormal, setShowBenchmarkNormal] = useState(false);
    // Choose how many times this component needs to be rendered
    // We will then calculate the average render time for all of these renders
    const timesToRender = 10000;
    // Callback for our profiler
    const renderProfiler = (type) => {
        return (...args) => {
            // Keep our render time in an array
            // Later on, calculate the average time
            // store args[2] (actualDuration), which is the render time ...
        };
    };
    // Render our component
    return <p>
        <button onClick={() => setShowBenchmarkNormal(true)}>Start</button>
        {showBenchmarkNormal && [...Array(timesToRender)].map((_, index) => {
            return <Profiler key={index} id={`normal-${index}`} onRender={renderProfiler('normal')}>
                <BenchmarkNormal level={1} />
            </Profiler>;
        })}
    </p>;
}


As you can see, we render the components 10,000 times and take the average render time across those renders. Now we need a mechanism to trigger re-renders of the components on demand without recomputing the useMemo value, which means we must not change any value in useMemo’s dependency list.
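The body of the profiler callback is elided above, but based on its comments it could look roughly like this (renderTimes and reportAverage are hypothetical names, assumed for illustration):

// Collect each render's actualDuration per benchmark type,
// then average them: a sketch of the elided callback body
const renderTimes = { normal: [], memo: [] };

const renderProfiler = (type) => {
    return (id, phase, actualDuration) => {
        // actualDuration is the time React spent rendering this update
        renderTimes[type].push(actualDuration);
    };
};

const reportAverage = (type) => {
    const times = renderTimes[type];
    const average = times.reduce((sum, time) => sum + time, 0) / times.length;
    console.log(`${type}: ${average}ms on average over ${times.length} renders`);
};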

The re-render trigger mechanism

To keep the results clean, we always start from a fresh browser page before each test (except for the re-renders themselves), to clear any caches that might still be on the page and affect our results.
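The article doesn’t show the trigger itself, but a minimal sketch of one way to do it is an unrelated piece of state that forces a re-render while level, the only useMemo dependency, stays constant (the tick state and RerenderTrigger component below are assumptions for illustration):

import React, { useState } from 'react';
import BenchmarkMemo from './BenchmarkMemo';

function RerenderTrigger() {
    // `tick` is unrelated to the memoized computation: bumping it
    // re-renders the subtree without touching useMemo's dependencies
    const [tick, setTick] = useState(0);
    return (
        <div>
            <button onClick={() => setTick(tick + 1)}>Trigger re-render</button>
            {/* `level` never changes, so the cached value is reused */}
            <BenchmarkMemo level={1000} />
        </div>
    );
}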

The results

N = 1

Complexity is shown in the left column: the first test is the initial render, the second is the first re-render, and the last is the second re-render. The second column shows the results of the normal benchmark, without useMemo. The last column shows the benchmark results with useMemo. Each value is the average across the 10,000 renders of our benchmark component.

With useMemo, the initial render is 19% slower, much higher than the expected 5-10%. Subsequent renders are also slower, because the overhead of going through the useMemo cache is greater than the cost of simply recomputing the value.

In summary, for complexity n = 1 it is always faster not to use useMemo, because the overhead always costs more than the performance it saves.

N = 100

At a complexity of 100, the initial render with useMemo was 62% slower, which is significant. Subsequent re-renders were, on average, about the same or only slightly faster.

In summary, at complexity 100 the initial render is significantly slower, while subsequent re-renders are fairly similar and at best only slightly faster. At this point, useMemo doesn’t seem worthwhile.

N = 1000

At a complexity of 1000, we noticed that the initial render with useMemo was 183% slower, so it seems the useMemo cache has to do considerably more work to store these values. Subsequent renders, however, are about 37% faster!

At this point we can see some real performance gains on re-renders, but they don’t come for free: the initial render is much slower, costing 183% more time.

In summary, at complexity 1000 we see a larger performance loss (183%) on the initial render, while subsequent renders are 37% faster.

Whether this is already useful depends largely on your use case. A 183% performance loss on the initial render is a tough sell, but it may be justified for components that re-render many times.

N = 5000

At a complexity of 5000, we noticed that the initial render with useMemo was 545% slower. It seems that the more complex the data and processing, the slower the initial render becomes.

The interesting part is the re-renders: here we noticed that useMemo was 437% to 609% faster on each subsequent render.

In summary, the initial render with useMemo is more expensive, but subsequent re-renders bring much larger performance gains. If your application has data/processing complexity above 5000 and re-renders a few times, you can see real benefits from using useMemo.

Notes on the results

Friendly readers have pointed out a few possible reasons why the initial render was so much slower, such as not running in production mode. We retested all the experiments and found similar results: the ratios are comparable, although the absolute values may be lower. All conclusions remain the same.

Conclusion

These are the results for a component of complexity n, where the application loops n times and pushes values into an array. Note that the results will vary depending on how you process your data and how much data you have, but this should give you an idea of the performance difference across data sets of different sizes.

Whether you should use useMemo depends largely on your use case, but at a complexity below 100, useMemo doesn’t seem worthwhile.

It’s worth noting that useMemo’s initial render suffers a considerable performance penalty. We expected an initial loss of around 5-10%, but found that it depends heavily on the complexity of the data/processing and can reach up to 500%, roughly 100 times what we expected.

We re-ran the tests several times, and the subsequent results were consistently similar to the initial results we documented.

Key takeaways

We all agree that useMemo is an effective way to avoid unnecessary re-renders by keeping object references to variables stable between renders.
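For example, here is a minimal sketch of that pattern (the Parent and ChildComponent components and the options object are hypothetical): wrapping the object in useMemo keeps its reference stable, so a React.memo child doesn’t re-render needlessly.

import React, { useMemo, useState, memo } from 'react';

// Hypothetical memoized child: it only re-renders when its props change
const ChildComponent = memo(({ options }) => {
    return <div>Sort order: {options.sortOrder}</div>;
});

function Parent() {
    const [count, setCount] = useState(0);
    // Without useMemo, a new `options` object would be created on every
    // render, breaking React.memo's shallow prop comparison
    const options = useMemo(() => ({ sortOrder: 'asc' }), []);
    return (
        <div>
            <button onClick={() => setCount(count + 1)}>Clicked {count} times</button>
            <ChildComponent options={options} />
        </div>
    );
}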

But when useMemo is used to cache actual computations, where the primary goal is not to avoid re-rendering child components, our findings are:

  • useMemo is worth using when the amount of processing is heavy
  • The threshold at which useMemo becomes useful for avoiding extra processing depends heavily on your application
  • For very light data processing, useMemo may add more overhead than it saves

When do you use useMemo? Do these findings change your mind about when to use it? Let us know in the comments!