Some of our developers recently asked: when should I use useMemo in React? That's a very good question. In this article, we'll take a scientific approach: define a hypothesis, then test it with real-world benchmarks in React.

Read on to learn about useMemo’s impact on performance.

What is useMemo?

useMemo is a hook provided by React. It lets the developer cache the result of a computation against a dependency list. If any value in the dependency list changes, React re-runs the computation and caches the new result; if the dependencies are unchanged since the last render, React returns the previously cached value instead of recomputing it.

This can have a significant impact on re-rendering: when the component re-renders, it pulls the value from the cache rather than looping through an array or reprocessing the data over and over again.
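For example, here is a minimal sketch (the component and props are our own illustration, not from the React docs):

import React, { useMemo } from 'react';

const ProductList = ({ products, query }) => {
    // Recomputed only when products or query changes; on any
    // other re-render React returns the cached array instead.
    const visibleProducts = useMemo(() => {
        return products.filter((product) => product.name.includes(query));
    }, [products, query]);

    return (
        <ul>
            {visibleProducts.map((product) => (
                <li key={product.id}>{product.name}</li>
            ))}
        </ul>
    );
};

export default ProductList;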

How does React describe useMemo?

If we look at the React documentation for useMemo, it doesn't say when the hook should be used. It only describes what it does and how to use it:

"You may rely on useMemo as a performance optimization, not as a semantic guarantee."

The question, then, is at what point useMemo becomes worthwhile. How complex or large must the data be before we see the performance benefits of using useMemo? When should developers reach for it?

Hypothesis

Before we start the experiment, let’s make a hypothesis.

Let's first define the complexity of the processing as n: if n = 100, we need to loop over an array of 100 items to compute the final value of the memoized variable.

We also need to separate two operations. The first is the initial render of the component: whether or not a variable uses useMemo, its initial value must be computed. The second operation, which we measure separately, is subsequent re-renders, where the useMemo version retrieves the value from the cache and its performance advantage over the non-memoized version should become visible.

In all cases, I expect around 5-10% overhead during the initial render to set up the cache and store the value. For n < 1000, I expect useMemo to hurt performance. For n > 1000, I expect similar or better performance on re-renders with useMemo, although the initial render should still be slightly slower due to the extra caching work. What is your hypothesis?

Benchmark setup

We set up a small React component as follows, which generates an object of complexity n, passed in as the level prop.

import React from 'react';

const BenchmarkNormal = ({level}) => {
    // Build an object of complexity `level` on every render
    const complexObject = {
        values: []
    };
    for (let i = 0; i < level; i++) {
        complexObject.values.push('mytest');
    }
    return (<div>Benchmark level: {level}</div>);
};

export default BenchmarkNormal;

This is our normal benchmark component. Next, we create BenchmarkMemo, the equivalent benchmark component built with useMemo.

import React, {useMemo} from 'react';

const BenchmarkMemo = ({level}) => {
    // Same computation, but cached against the `level` dependency
    const complexObject = useMemo(() => {
        const result = {
            values: []
        };
        for (let i = 0; i < level; i++) {
            result.values.push('mytest');
        }
        return result;
    }, [level]);
    return (<div>Benchmark with memo level: {level}</div>);
};

export default BenchmarkMemo;

We then set up these components in app.js so that they are displayed when a button is pressed. We also use React's Profiler to measure render times.

import React, { useState, Profiler } from 'react';
import BenchmarkNormal from './BenchmarkNormal';

function App() {
    const [showBenchmarkNormal, setShowBenchmarkNormal] = useState(false);
    // Choose how many times this component needs to be rendered.
    // We will then calculate the average render time across all of these renders.
    const timesToRender = 10000;
    // Callback for our profiler
    const renderProfiler = (type) => {
        return (...args) => {
            // Keep our render times in an array so we can calculate
            // the average later; store args[3], which is the render time.
            // ...
        };
    };
    // Render our component
    return <p>
        {showBenchmarkNormal && [...Array(timesToRender)].map((_, index) => {
            return <Profiler key={index} id={`normal-${index}`} onRender={renderProfiler('normal')}>
                <BenchmarkNormal level={1} />
            </Profiler>;
        })}
    </p>;
}
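The body of the profiler callback is elided above. A minimal sketch of how the timings could be collected and averaged (the renderTimes store and averageRenderTime helper are our own assumptions, not the article's actual code):

// Keep one array of render times per benchmark type.
const renderTimes = { normal: [], memo: [] };

const renderProfiler = (type) => {
    return (...args) => {
        // args[3] is the value the article treats as the render time.
        renderTimes[type].push(args[3]);
    };
};

// Average all recorded render times for a benchmark type.
const averageRenderTime = (type) =>
    renderTimes[type].reduce((sum, time) => sum + time, 0) /
    renderTimes[type].length;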

As you can see, we render the component 10,000 times and take the average render time across those renders. Next, we need a mechanism to trigger re-renders on demand without recomputing the useMemo value, which means we must not change any value in useMemo's dependency list.

// Add a simple counter in state
// which can be used to trigger re-renders
const [count, setCount] = useState(0);
const triggerReRender = () => {
    setCount(count + 1);
};
// Update our Benchmark component to have this extra prop
// Which will force a re-render
<BenchmarkNormal level={1} count={count} />
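The article doesn't show the buttons that drive these two actions, but the wiring is straightforward; a sketch (button labels are our own):

// Inside App's JSX: one button starts the benchmark,
// the other forces re-renders via the counter above.
<>
    <button onClick={() => setShowBenchmarkNormal(true)}>
        Start benchmark
    </button>
    <button onClick={triggerReRender}>
        Trigger re-render
    </button>
</>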

To keep the results clean, we start each test (except the re-render tests) from a fresh browser page, to clear any caches that might still affect our results.

Results

N = 1

Complexity is shown in the left column, where the first test is the initial render, the second test is the first re-render, and the last test is the second re-render. The second column shows the results of the normal benchmark, without useMemo; the last column shows the results with useMemo. Each value is an average over 10,000 renders of our benchmark component.

When using useMemo, the initial render is 19% slower, much higher than the expected 5-10%. Subsequent re-renders are also slower, because the overhead of checking the useMemo cache is greater than the cost of simply recomputing the value.

In summary, for complexity n = 1, it is always faster not to use useMemo because the overhead is always more expensive than the performance gain.

N = 100

At a complexity of 100, the initial render with useMemo is 62% slower, which is a significant number. Subsequent re-renders are on average slightly faster or about the same.

In summary, the initial render at complexity 100 is significantly slower, while subsequent re-renders are fairly similar, at best only slightly faster. At this point, useMemo doesn’t seem to make much sense.

N = 1000

At a complexity of 1000, we notice that the initial render with useMemo is 183% slower, presumably because the useMemo cache has to work harder to store the value. Subsequent renders, however, are about 37% faster!

At this point, we can see a real performance improvement when re-rendering, but it does not come for free: the initial render is much slower, with a 183% time loss.

In summary, at complexity 1000 we see a larger performance loss (183%) on the initial render, but subsequent renders are 37% faster.

Whether this trade-off is worthwhile depends highly on your use case. A 183% performance loss on the initial render is a hard sell, but it can be justified for components that re-render a lot.

N = 5000

At a complexity of 5000, we notice that the initial render is 545% slower with useMemo. It seems that the more complex the data and processing, the slower the initial render with useMemo compared to without it.

The interesting part comes when looking at the subsequent renders: here we see a 437% to 609% performance improvement with useMemo on each subsequent render.

In summary, with useMemo the initial render costs much more, but subsequent re-renders bring large performance gains. If your application has data/processing complexity above 5000 and the component re-renders several times, useMemo starts to pay off.

A note on the results

Our friendly reader community pointed out a few factors that could explain why the initial render is so much slower, such as not running in production mode. We re-ran all the experiments and found similar results: the ratios are comparable, although the absolute values may be lower. The general conclusion is the same.
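As a side note, React's Profiler does not report timings in a standard production build; one documented way to profile production code is to alias React's profiling bundles, e.g. in webpack (a sketch, adjust for your toolchain):

// webpack.config.js (partial): swap in React's profiling builds
// so the <Profiler> onRender callback still fires in production.
module.exports = {
    resolve: {
        alias: {
            'react-dom$': 'react-dom/profiling',
            'scheduler/tracing': 'scheduler/tracing-profiling',
        },
    },
};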

Conclusion

These are our results for a component of complexity n, where the component loops n times and pushes values into an array. Please note that the results will vary depending on how you process the data and how much of it there is. Still, this should give you an idea of the performance differences for differently sized datasets.

Whether you should use useMemo depends largely on your use case, but for complexity below 100, useMemo doesn't seem worthwhile.

It's worth noting that the initial render takes a considerable performance hit with useMemo. We expected an initial loss of around 5-10%, but found that it depends heavily on the complexity of the data and processing, with losses of up to 500%, roughly 100 times more than expected.

We re-ran the tests several times, and the subsequent results were very consistent with the initial results we documented.

Key takeaways

We all agree that useMemo is useful for keeping a stable object reference to a variable, which avoids unnecessary re-renders of child components, as sketched below.
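A minimal sketch of that reference-stability use case (component names are our own):

import React, { memo, useMemo } from 'react';

// Child only re-renders when its props change by reference.
const Child = memo(({ config }) => <div>{config.mode}</div>);

const Parent = ({ mode }) => {
    // Without useMemo, config would be a brand-new object on every
    // render, defeating memo's shallow reference-equality check.
    const config = useMemo(() => ({ mode }), [mode]);
    return <Child config={config} />;
};

export default Parent;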

But when useMemo is used to cache an actual computation, where the primary goal is not avoiding re-renders in child components, our findings are:

  • useMemo is worth using when the amount of processing is high
  • The threshold at which useMemo becomes worthwhile, i.e. where it avoids more processing than it adds, depends largely on your application
  • At very low processing volumes, useMemo adds overhead instead of removing it

When do you use useMemo? Will these findings change your mind about when to use useMemo? Let us know in the comments!

Original link: medium.com/swlh/should…