Author: DoubleJan

Preface:

We often need to process data during page rendering, and every change of props or state triggers a re-render, so the same data processing runs again. To reduce this repeated work, we can introduce Memoize to solve the problem.

Brief introduction:

Memoize's basic approach is to decide whether to execute a function by checking whether the new arguments are the same as the ones from the previous call. If they are the same, the cached result is returned; otherwise the function runs again to produce a new result. This avoids reprocessing the same input, reduces the amount of computation per render, and improves performance.
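As a conceptual sketch of this idea (not the source of any particular library), a memoizer of this kind just keeps the arguments and result of the last call, and only re-runs the function when the new arguments differ:

// A minimal "remember the last call" memoizer, for illustration only
function memoizeLast(fn) {
  let lastArgs = null;
  let lastResult;
  return (...args) => {
    const sameArgs =
      lastArgs !== null &&
      lastArgs.length === args.length &&
      lastArgs.every((arg, i) => arg === args[i]);
    if (sameArgs) {
      return lastResult; // cache hit: skip the computation
    }
    lastArgs = args;
    lastResult = fn(...args);
    return lastResult;
  };
}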

Here are some of the more common Memoize tool libraries:

MemoizeOne: www.npmjs.com/package/mem…
Memoizee: www.npmjs.com/package/mem…
Lodash: www.lodashjs.com/docs/latest…

And performance comparison results:

Single argument: www.measurethat.net/Benchmarks/…
Multiple arguments: www.measurethat.net/Benchmarks/…

MemoizeOne:

Here's how MemoizeOne works and how to use it.

As the name implies, the library caches only one result per instance. Take a look at the official example:

import memoizeOne from 'memoize-one';
 
const add = (a, b) => a + b;
const memoizedAdd = memoizeOne(add);

memoizedAdd(1, 2); // 3

memoizedAdd(1, 2); // 3
// Add function is not executed: previous result is returned

memoizedAdd(2, 3); // 5
// Add function is called to get new value

memoizedAdd(2, 3); // 5
// Add function is not executed: previous result is returned

memoizedAdd(1, 2); // 3
// Add function is called to get new value.
// While this was previously cached,
// it is not the latest so the cached result is lost

As you can see from the example, add is not called when the arguments passed in are exactly the same; memoizedAdd returns the last cached result directly. When the arguments change, the add function runs again.

This raises a question: how is the comparison of the arguments handled? The arguments in this example are plain numbers. If strict equality (===) were the only comparison, objects and arrays with the same contents but different references could never match. For this, MemoizeOne allows a custom equality function to be passed in. We can try the following:

import memoizeOne from 'memoize-one';
import { isEqual } from 'lodash'; 
 
const filterList = (list) => list.filter(item => !item.hidden);
const memoizedFilterList = memoizeOne(filterList, isEqual);
// Use lodash's isEqual method to make a deep comparison of the incoming arguments


const format = (id, name) => `${id}-${name}`;
const compare = (prevArgs, nextArgs) => prevArgs[0] === nextArgs[0];
const memoizedFormat = memoizeOne(format, compare);
// Use a custom method to compare the incoming arguments
// prevArgs and nextArgs are the previous and the new argument arrays; tailor the comparison to your own needs
// In the example above, only the value of the passed id is compared, and the function is called again when the id changes; name is not used as a condition

Note also that by default the context this is compared as well. If this points to a different object, Memoize will not reuse the last cached result, since the output of a function can depend on this, as in the following example:

function getA() {
  return this.a;
}

const temp1 = {
  a: 20,
};
const temp2 = {
  a: 30,
};

getA.call(temp1); // 20
getA.call(temp2); // 30
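Continuing with getA, temp1 and temp2 from the snippet above, here is a small sketch (the memoizedGetA name is just illustrative) of how memoize-one treats a change of this as a cache miss:

import memoizeOne from 'memoize-one';

const memoizedGetA = memoizeOne(getA);

memoizedGetA.call(temp1); // 20 - getA runs
memoizedGetA.call(temp1); // 20 - same this and arguments, cached result returned
memoizedGetA.call(temp2); // 30 - this changed, so getA runs again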

MemoizeOne is a very lightweight library, with the main implementation code as follows:

// areInputsEqual
// The default comparison function

export default function areInputsEqual(
  newInputs: readonly unknown[],
  lastInputs: readonly unknown[],
): boolean {
  // First compare the lengths of the argument lists; if they differ, return false right away
  if (newInputs.length !== lastInputs.length) {
    return false;
  }

  // If the lengths are the same, use !== to check whether each argument is the same
  for (let i = 0; i < newInputs.length; i++) {
    // using shallow equality check
    if (newInputs[i] !== lastInputs[i]) {
      return false;
    }
  }
  return true;
}

import areInputsEqual from './are-inputs-equal';

// Using ReadonlyArray<T> rather than readonly T[] as it works with TS v3
export type EqualityFn = (newArgs: any[], lastArgs: any[]) => boolean;

export default function memoizeOne<
  // Need to use 'any' rather than 'unknown' here as it has
  // the correct generic narrowing behaviour.
  ResultFn extends (this: any, ...newArgs: any[]) => ReturnType<ResultFn>
>(resultFn: ResultFn, isEqual: EqualityFn = areInputsEqual): ResultFn {
  let lastThis: unknown;
  let lastArgs: unknown[] = [];
  let lastResult: ReturnType<ResultFn>;
  let calledOnce: boolean = false;

  // breaking cache when context (this) or arguments change
  function memoized(this: unknown, ...newArgs: unknown[]): ReturnType<ResultFn> {
    // If the function has run before, and this has not changed and the arguments are equal,
    // return the last result directly
    if (calledOnce && lastThis === this && isEqual(newArgs, lastArgs)) {
      return lastResult;
    }

    // Otherwise run the function and record this, the arguments, and the result
    // Throwing during an assignment aborts the assignment: https://codepen.io/alexreardon/pen/RYKoaz
    // Doing the lastResult assignment first so that if it throws
    // nothing will be overwritten
    lastResult = resultFn.apply(this, newArgs);
    calledOnce = true;
    lastThis = this;
    lastArgs = newArgs;
    return lastResult;
  }

  return memoized as ResultFn;
}
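Since the default areInputsEqual only performs this shallow === check, a freshly created object or array never matches the cached arguments, which is exactly why a deep comparison such as lodash's isEqual is useful when the arguments are objects. A small illustration (the pick function is just an example):

import memoizeOne from 'memoize-one';
import { isEqual } from 'lodash';

const pick = obj => obj.a;

const shallow = memoizeOne(pick);
shallow({ a: 1 }); // pick runs
shallow({ a: 1 }); // pick runs again: the two object literals are different references

const deep = memoizeOne(pick, isEqual);
deep({ a: 1 }); // pick runs
deep({ a: 1 }); // cached result returned: isEqual compares the contents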

Practical:

Memoize is a great way to improve the performance of our programs, but it trades space for speed, so it does not fit every situation. In general, pure functions, recursive functions, graph data processing, and other computation-heavy work can benefit from it. When the output of a function does not depend entirely on its arguments, or when the arguments change on almost every call, it is usually not worth applying.
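As a small sketch of the space-for-speed trade-off, here is a recursive function memoized with lodash's memoize, which (unlike memoize-one) caches every result keyed by the first argument:

import { memoize } from 'lodash';

// Memoizing a pure recursive function: each distinct n is computed once
// and then served from the cache, at the cost of storing every result
const fib = memoize(n => (n < 2 ? n : fib(n - 1) + fib(n - 2)));

fib(40); // computed, with sub-results cached along the way
fib(40); // returned straight from the cache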

In our current back-office project, the framework already uses MemoizeOne in several places, such as menu.js. Once inside the back office, the menu rarely changes, but it is re-rendered often, which makes this a good place to use memoization to improve rendering performance:

import memoizeOne from 'memoize-one';
import { isEqual } from 'lodash';

// formatMessage and the menu settings object are imported elsewhere in the project

/**
 * Format menu
 */
function formatter(data, parentAuthority, parentName) {
  return data
    .map(item => {
      if (!item.name || !item.path) {
        return null;
      }

      // The following sets up the menu's multi-language locale key
      let locale = 'menu';
      if (parentName) {
        locale = `${parentName}.${item.name}`;
      } else {
        locale = `menu.${item.name}`;
      }
      // if enableMenuLocale use item.name, close menu internationalization
      const name = menu.disableLocal
        ? item.name
        : formatMessage({ id: locale, defaultMessage: item.name });
      const result = {
        ...item,
        name,
        locale,
        authority: item.authority || parentAuthority,
      };
      if (item.routes) {
        const children = formatter(item.routes, item.authority, locale);

        // Reduce memory usage
        result.children = children;
      }
      delete result.routes;
      return result;
    })
    .filter(item => item);
}
}

const memoizeOneFormatter = memoizeOne(formatter, isEqual);


/**
 * Get breadcrumb mapping
 */
const getBreadcrumbNameMap = menuData => {
  const routerMap = {};

  const flattenMenuData = data => {
    data.forEach(menuItem => {
      if (menuItem.children) {
        flattenMenuData(menuItem.children);
      }
      // Reduce memory usage
      routerMap[menuItem.path] = menuItem;
    });
  };
  flattenMenuData(menuData);
  return routerMap;
};

const memoizeOneGetBreadcrumbNameMap = memoizeOne(getBreadcrumbNameMap, isEqual);
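A hypothetical call site (routes and authority are illustrative names, not taken from menu.js itself): because the raw route configuration is usually unchanged between renders, the expensive formatting and flattening only run when the menu data actually changes.

// Somewhere in the menu model (illustrative):
const menuData = memoizeOneFormatter(routes, authority);
const breadcrumbNameMap = memoizeOneGetBreadcrumbNameMap(menuData);

// Re-rendering with the same routes returns the cached menuData and
// breadcrumbNameMap without re-running formatter or getBreadcrumbNameMap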

Conclusion:

Memoize can help us solve performance problems in daily development, and it is worth keeping an eye out for other places where we can reduce computation and optimize performance as we develop.