“This is the 13th day of my participation in the First Challenge 2022. For details: First Challenge 2022.”

Preface

Before we get into the topic of this article, let’s take a look at a question:

If your system has a lot of pages that need to be cached, but you can only cache 10 pages at most because of memory issues, how would you design the cache algorithm?

When I was asked this question, I froze for a moment, then said: use a queue, first in, first out.

const queue = ['page 1', 'page 2', ..., 'page 10']  // Suppose a new page 11 comes in; then page 1 is eliminated

The questioner says: It can be done, but there is a better way.

I said, “Use an object to store the number of times each page has been visited, and each time a new one comes in, the less visited ones are eliminated.”

const obj = {          // Suppose a new page comes in; then the least-visited page is eliminated
  'page 1': 1,
  'page 2': 5,
  'page 3': 9,
  ...
  'page 10': 15,
}

The questioner said: This is a good idea, but the memory overhead is a bit high, there is a better solution.

Me: I can’t think of any other way.

Q: Have you ever used Keep-alive? Keep-alive handles caching, right?

Me: Right! Keep-alive is used to cache components to save their state, avoiding re-rendering and improving page performance.

Questioner: Yes, but do you know how keep-alive works? There must be some internal optimization in keep-alive, otherwise memory would run out if too many components were cached.

Me:…

Q: Actually, keep-alive uses an LRU Cache internally. Do you know what an LRU Cache is?

Me:…

Q: In the Vue3 source code, not only keep-alive but also the compiler-sfc package uses an LRU Cache.

Me: Sorry, I'm out of my depth; I don't know any of this.

Questioner: No, no, no, it doesn’t matter. I’ll give you a link to Wikipedia. There are more than ten kinds of caching algorithms.

Cache_replacement_policies

Me: I'm just a humble front-end dev, you flatter me. Fine, since it's used all over the Vue3 source code, let's learn LRU.

LRU Cache

What is an LRU Cache?

LRU, short for Least Recently Used, is a cache eviction strategy.

Computer memory is limited; when it fills up, some content must be deleted to make room for new content.

But the question is, what to delete? We want to delete data that we don’t use much and keep data that we use a lot for later use.

The LRU Cache considers the least recently used data in the Cache to be unimportant and should be deleted first.

We use LRU Cache every day in our daily life. Take a look at this example.

Mobile phone background application management

We know that apps on mobile phones can run in the background.

  • Every time you open a new app, it puts the new app at the top of the background list.
  • If an app already running in the background is opened again, it is moved to the top of the background list.
  • Apps that go unused for a long time sink to the bottom of the background list.

Look at this example:

Let’s say your phone has a maximum of three apps running in the background.

In the initial state, no apps are running in the background. We open three apps in turn: Keep, Bilibili, and JD, inserting three members into the cache list.

Every time you open a new app, you put that app to the far right of the list.

keep

keep -> bilibili

keep -> bilibili -> jd

These three programs were running in the background for some time, and I suddenly opened Keep.

keep -> bilibili -> jd   becomes   bilibili -> jd -> keep

Keep is moved from its original position to the far right of the list.

At this point, a new app is opened: NetEase Cloud Music.

bilibili -> jd -> keep

bilibili -> jd -> keep -> music


            |
bilibili -> | jd -> keep -> music
            |
            
jd -> keep -> music       

Since at most three apps can run in the background, Bilibili, the one unused for the longest time, is removed.

This is how the LRU Cache works.

Clearly, an LRU Cache performs frequent insertions and deletions, so a linked list is a natural fit: inserting into or deleting from a linked list takes O(1) time, given a reference to the node.

If you have forgotten the time complexity of linked-list insertion and deletion, see my earlier article, an introduction to linked lists (JS) for front-end developers.
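As a quick refresher, deleting from a doubly linked list is O(1) once you hold a reference to the node: only its two neighbors are touched, and nothing shifts. A minimal sketch (the node shape here is made up for illustration):

```javascript
// Unlink a node from a doubly linked list in O(1): only its two neighbors are rewired
function unlink (node) {
  if (node.prev) node.prev.next = node.next
  if (node.next) node.next.prev = node.prev
  node.prev = node.next = null
}

// Build a tiny list: a <-> b <-> c
const a = { value: 'a', prev: null, next: null }
const b = { value: 'b', prev: null, next: null }
const c = { value: 'c', prev: null, next: null }
a.next = b; b.prev = a
b.next = c; c.prev = b

unlink(b)                     // now a <-> c, no matter how long the list is
console.log(a.next.value)     // → 'c'
console.log(c.prev.value)     // → 'a'
```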

Implement LRU Cache with JS

JS has no built-in linked list data structure, but the Iterator in JS behaves much like one, and the Map data structure, which remembers insertion order and exposes an iterator, can be used to implement an LRU Cache.

  • The map.keys() method returns an iterator over the key names.
  • Calling the iterator's next() method yields the first member, i.e. the oldest key.

Let’s test the Map data structure using the phone background application example above:

// keep -> bilibili -> jd -> music

const cache = new Map()

cache.set('keep', 'keep')
cache.set('bilibili', 'bilibili')
cache.set('jd', 'jd')
cache.set('music', 'music')

console.log('cache :>> ', cache)
console.log('cache.keys() :>> ', cache.keys())
console.log('cache.keys().next() :>> ', cache.keys().next())              // { value: 'keep', done: false }
console.log('cache.keys().next().value :>> ', cache.keys().next().value)  // 'keep', the oldest key

With this foundation, we can try to implement an LRU Cache.

class LRUCache {
  constructor (capacity) {
    this.cache = new Map()
    this.max = capacity                      // max is the maximum cache capacity
  }

  get (key) {                                // Read method
    if (this.cache.has(key)) {               // If the member exists, move it to the newest position
      const val = this.cache.get(key)
      this.cache.delete(key)                 // The move is: delete first, then re-insert at the end
      this.cache.set(key, val)
      return val
    }
    return -1
  }

  put (key, value) {                                  // Write method
    if (this.cache.has(key)) {                        // If the member exists, remove it so it can move to the newest position
      this.cache.delete(key)
    } else if (this.cache.size >= this.max) {         // When capacity is exceeded, delete the first (oldest) member
      const firstEle = this.cache.keys().next().value
      this.cache.delete(firstEle)
    }
    this.cache.set(key, value)                        // Insert (or re-insert) at the end
  }
}

Now that the code is finished, let's test it, following the phone-app example above:

const list = new LRUCache(3)                          // Set the maximum capacity to 3
list.put('keep', 'keep')
list.put('bilibili', 'bilibili')
list.put('jd', 'jd')                                  // Open keep, bilibili, jd in sequence

console.log(list.cache.keys())                        // keep -> bilibili -> jd

list.get('keep')                                      // Open keep again
console.log(list.cache.keys())                        // bilibili -> jd -> keep

list.put('music', 'music')                            // Open the music app
console.log(list.cache.keys())                        // jd -> keep -> music (bilibili is evicted)

So far, we have implemented an LRU Cache in JS.

Application of LRU Cache in keep-alive

What does keep-alive have to do with LRU Cache?

As we know, keep-alive caches components to preserve their state and avoid re-rendering, improving page performance.

If there are 1000 components in the system that need to be cached, will keep-alive cache all of them?

Obviously not, because browser memory is limited, and if you cache as many components as you want, memory will run out.

It is the LRU Cache that is used in keep-alive.

Vue2 keep-alive source code analysis

Analyzing the full implementation of keep-alive would be tedious, so we'll focus on the LRU Cache part.

keep-alive is a component; in its created hook it defines this.cache and this.keys to cache created VNodes (virtual DOM nodes).

created () {
  this.cache = Object.create(null)     // Cache the virtual DOM
  this.keys = []                       // Cache the key set of the virtual DOM
},

max is defined in props to cap the number of cached components; beyond this limit the LRU eviction strategy kicks in.

props: {
  ...
  max: [String, Number]
},

During render, on a cache hit the component instance is taken directly from the cached vnode and the key is moved to the end of the key list; on a miss, the updated lifecycle hook is triggered and cacheVNode runs to cache the vnode.

render () {
  ...
  const { cache, keys } = this
  const key: ?string = vnode.key == null      // Define the component's cache key: use the vnode's key if present, otherwise create one
    ? componentOptions.Ctor.cid + (componentOptions.tag ? `::${componentOptions.tag}` : '')
    : vnode.key
  if (cache[key]) {                           // If the component is already cached
    vnode.componentInstance = cache[key].componentInstance
    remove(keys, key)
    keys.push(key)                            // Re-append the key so it moves to the end
  } else {                                    // If the component has not been cached
    this.vnodeToCache = vnode                 // The updated lifecycle will cache the component
    this.keyToCache = key
  }
  ...
}
updated () {
  this.cacheVNode()
},

In the cacheVNode method, if the number of cached vnodes exceeds this.max, the pruneCacheEntry method runs and removes the first (oldest) component from the cache.

cacheVNode () {
  const { cache, keys, vnodeToCache, keyToCache } = this
  if (vnodeToCache) {
    const { tag, componentInstance, componentOptions } = vnodeToCache
    cache[keyToCache] = {
      name: getComponentName(componentOptions),
      tag,
      componentInstance,
    }
    keys.push(keyToCache)
    if (this.max && keys.length > parseInt(this.max)) {  // If the cache exceeds max, delete the first (oldest) entry
      pruneCacheEntry(cache, keys[0], keys, this._vnode)
    }
    this.vnodeToCache = null
  }
}

Besides removing the entry from the cache, pruneCacheEntry checks whether the cached component being removed is the currently rendered component; if it is not, it also calls the $destroy method to tear down the cached component instance.

function pruneCacheEntry (
  cache: CacheEntryMap,
  key: string,
  keys: Array<string>,
  current?: VNode
) {
  const entry: ?CacheEntry = cache[key]
  if (entry && (!current || entry.tag !== current.tag)) {  // Destroy the instance only if it is not the currently rendered component
    entry.componentInstance.$destroy()
  }
  cache[key] = null
  remove(keys, key)
}

Assuming max is 4, let's review the overall flow:

  • While the cache is not full, components are added to the cache one by one
  • When the cache is full, adding a new component removes the first one (keys[0]) to make room, and the newest component is pushed on at the end
  • Accessing an already-cached component moves it to the end (it is deleted from its original position and pushed on again at the end)
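These cases can be simulated with the same object-plus-array structure Vue2 uses (a toy sketch; real vnodes are replaced with placeholder strings, and the `remove` helper mirrors Vue's array util):

```javascript
const MAX = 4
const cache = Object.create(null)   // key -> cached "component"
const keys = []                     // keys[0] is always the oldest entry

function remove (arr, item) {       // mirrors Vue's array-remove util
  const i = arr.indexOf(item)
  if (i > -1) arr.splice(i, 1)
}

function render (key) {
  if (cache[key]) {                 // cache hit: move the key to the end
    remove(keys, key)
    keys.push(key)
  } else {                          // cache miss: cache it, evicting keys[0] when full
    cache[key] = `vnode of ${key}`
    keys.push(key)
    if (keys.length > MAX) {
      const oldest = keys[0]
      cache[oldest] = null
      remove(keys, oldest)
    }
  }
}

for (const k of ['A', 'B', 'C', 'D']) render(k)
render('A')          // hit: A moves to the end
render('E')          // full: B (keys[0]) is evicted
console.log(keys)    // → ['C', 'D', 'A', 'E']
```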

Overall, the cache strategy of keep-alive in Vue2 is exactly the same as the phone background-app management we analyzed earlier: it is an LRU Cache.

Now that we understand the component caching mechanism, let's look at the complete keep-alive code in src/core/components/keep-alive.js.

Because there are so many of them, I’ll use comments to indicate what each step does.

export default {
  name: 'keep-alive',
  abstract: true,                             // Abstract component: keep-alive itself is never rendered as a real DOM node

  props: {
    include: patternTypes,                    // Cache whitelist
    exclude: patternTypes,                    // Cache blacklist
    max: [String, Number]                     // Upper limit on the number of cached components; beyond it, LRU eviction applies
  },

  methods: {
    cacheVNode () {                           // The component-caching logic lives in this function
      const { cache, keys, vnodeToCache, keyToCache } = this
      if (vnodeToCache) {
        const { tag, componentInstance, componentOptions } = vnodeToCache
        cache[keyToCache] = {
          name: getComponentName(componentOptions),
          tag,
          componentInstance,
        }
        keys.push(keyToCache)                                // Append the key while the cache is not full
        // prune oldest entry
        if (this.max && keys.length > parseInt(this.max)) {  // If the cache exceeds max, delete the first (oldest) entry
          pruneCacheEntry(cache, keys[0], keys, this._vnode)
        }
        this.vnodeToCache = null
      }
    }
  },

  created () {
    this.cache = Object.create(null)          // Cache for the virtual DOM
    this.keys = []                            // Keys of the cached virtual DOM nodes
  },

  destroyed () {
    for (const key in this.cache) {           // Remove every cache entry when keep-alive itself is destroyed
      pruneCacheEntry(this.cache, key, this.keys)
    }
  },

  mounted () {
    this.cacheVNode()                                 // Run cacheVNode once to write the initial component into the cache
    this.$watch('include', val => {                   // Watch for whitelist changes
      pruneCache(this, name => matches(val, name))    // Prune the cache according to the new whitelist
    })
    this.$watch('exclude', val => {                   // Watch for blacklist changes
      pruneCache(this, name => !matches(val, name))   // Prune the cache according to the new blacklist
    })
  },

  updated () {
    this.cacheVNode()                                 // Triggered when render did not hit the cache
  },

  render () {
    const slot = this.$slots.default
    const vnode: VNode = getFirstComponentChild(slot)  // Get the vnode of the first child component
    const componentOptions: ?VNodeComponentOptions = vnode && vnode.componentOptions
    if (componentOptions) {
      // Check whether the current component matches the blacklist or whitelist
      const name: ?string = getComponentName(componentOptions)
      const { include, exclude } = this
      if (
        (include && (!name || !matches(include, name))) ||  // A whitelist is configured and the current component doesn't match it
        (exclude && name && matches(exclude, name))         // Or a blacklist is configured and the current component matches it
      ) {
        return vnode                                        // Return the vnode without caching it
      }

      const { cache, keys } = this                          // Otherwise, enter the caching logic
      const key: ?string = vnode.key == null                // Use the vnode's key if present, otherwise create one
        ? componentOptions.Ctor.cid + (componentOptions.tag ? `::${componentOptions.tag}` : '')
        : vnode.key
      if (cache[key]) {                                     // If the component is already cached
        vnode.componentInstance = cache[key].componentInstance
        remove(keys, key)
        keys.push(key)                                      // Re-append the key so it moves to the end
      } else {                                              // If the component has not been cached
        this.vnodeToCache = vnode
        this.keyToCache = key                               // The data update triggers the updated lifecycle, which runs cacheVNode to cache it
      }

      vnode.data.keepAlive = true                           // This flag is why wrapped components skip the normal lifecycle; see src/core/vdom/create-component.js
    }
    return vnode || (slot && slot[0])                       // Return the vnode
  }
}

Now we can see that Vue2's keep-alive implements its LRU Cache with an object plus an array, which differs from the Map version we just wrote.

The reason I know LRU Cache can be implemented with Map is because I have read the keep-alive source code for Vue3.

Vue3 keep-alive source code analysis

In Vue3, keep-alive is implemented with the Composition API. Again, there is too much keep-alive code, so let's just look at the LRU Cache part.

In setup, cache and keys are defined to cache created vnodes (virtual DOM nodes). Unlike Vue2, the Map and Set data structures are used.

setup () {
  ...
  const cache: Cache = new Map()
  const keys: Keys = new Set()
  ...
}

Also define Max in props to represent the upper limit of the number of cached components, beyond which LRU elimination will be applied.

props: {
  ...
  max: [String, Number]
},

During render, on a cache hit the component instance is taken directly from the cached vnode and the key is moved to the end; on a miss, the vnode is added to the cache.

If the number of cached vnodes exceeds max, the pruneCacheEntry method runs to remove the first component from the cache.

setup () {
  ...
  return () => {
    ...
    const cachedVNode = cache.get(key)
    if (cachedVNode) {                                         // If the component is already cached
      vnode.el = cachedVNode.el
      vnode.component = cachedVNode.component                  // Take the component instance directly from the cache
      if (vnode.transition) {
        setTransitionHooks(vnode, vnode.transition!)
      }
      vnode.shapeFlag |= ShapeFlags.COMPONENT_KEPT_ALIVE
      keys.delete(key)                                         // Re-add the key so it moves to the end
      keys.add(key)
    } else {                                                   // If the component is not cached
      keys.add(key)                                            // Add this component's key to the cache
      if (max && keys.size > parseInt(max as string, 10)) {    // If max is exceeded, delete the first cached component
        pruneCacheEntry(keys.values().next().value)
      }
    }
    ...
  }
}
function pruneCacheEntry (key: CacheKey) {
  const cached = cache.get(key) as VNode
  if (!current || cached.type !== current.type) {   // Unmount only if it is not the currently rendered component
    unmount(cached)
  } else if (current) {
    resetShapeFlag(current)
  }
  cache.delete(key)                                 // Delete the cached component by its key
  keys.delete(key)
}

The design of keep-alive for Vue3 and Vue2 is based on LRU Cache, but the data structure used to implement LRU Cache is different.

Vue2 uses Flow and the Options API, while Vue3 uses TypeScript and the Composition API.

But the idea of an LRU Cache is absolutely the same.

There is one more place Vue3 uses an LRU Cache

core/packages/compiler-sfc/src/cache.ts

import LRU from 'lru-cache'

export function createCache<T> (size = 500) {
  return __GLOBAL__ || __ESM_BROWSER__
    ? new Map<string, T>()
    : (new LRU(size) as any as Map<string, T>)
}

This LRU cache is used when parsing single-file components, but here it comes from a third-party library, lru-cache.

keep-alive's LRU Cache needs more fine-grained control, so Vue writes its own.

Leetcode related algorithms

146. LRU Cache

Design and implement a data structure that satisfies the LRU (least recently used) cache constraints.

Implement LRU Cache class:

  • LRUCache(int capacity) initializes the LRU cache with a positive integer capacity
  • int get(int key) returns the value of the key if it exists in the cache; otherwise it returns -1
  • void put(int key, int value) updates the value if the key already exists; otherwise it inserts the key-value pair into the cache. If the insertion pushes the number of keys over capacity, evict the least recently used key

The get and put functions must each run in O(1) average time complexity.

Example:

Input:
["LRUCache", "put", "put", "get", "put", "get", "put", "get", "get", "get"]
[[2], [1, 1], [2, 2], [1], [3, 3], [2], [4, 4], [1], [3], [4]]

Output:
[null, null, null, 1, null, -1, null, -1, 3, 4]

Explanation:
LRUCache lRUCache = new LRUCache(2);
lRUCache.put(1, 1); // Cache is {1=1}
lRUCache.put(2, 2); // Cache is {1=1, 2=2}
lRUCache.get(1);    // returns 1
lRUCache.put(3, 3); // Evicts key 2; cache is {1=1, 3=3}
lRUCache.get(2);    // returns -1 (not found)
lRUCache.put(4, 4); // Evicts key 1; cache is {4=4, 3=3}
lRUCache.get(1);    // returns -1 (not found)
lRUCache.get(3);    // returns 3
lRUCache.get(4);    // returns 4

In fact, this problem is a handwritten LRU Cache, which we have implemented in the previous section.

Having pored over the LRU Cache design in the Vue2 and Vue3 source code, can this problem still stump us?

class LRUCache {
  constructor (capacity) {
    this.cache = new Map()
    this.max = capacity
  }

  get (key) {                              
    if (this.cache.has(key)) {            
      const val = this.cache.get(key)     
      this.cache.delete(key)              
      this.cache.set(key, val)
      return val
    }
    return -1
  }

  put (key, value) {                                    
    if (this.cache.has(key)) {                          
      this.cache.delete(key)
    } else if (this.cache.size >= this.max) {           
      const firstEle = this.cache.keys().next().value
      this.cache.delete(firstEle)
    }
    this.cache.set(key, value)                          
  }
}

Time complexity: O(1) for both put and get.

Space complexity: O(capacity), which is related to the maximum capacity.
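As a sanity check, we can run this class against the example from the problem statement (the class is repeated here so the snippet is self-contained):

```javascript
class LRUCache {
  constructor (capacity) {
    this.cache = new Map()
    this.max = capacity
  }

  get (key) {
    if (this.cache.has(key)) {
      const val = this.cache.get(key)
      this.cache.delete(key)          // refresh: delete, then re-insert at the end
      this.cache.set(key, val)
      return val
    }
    return -1
  }

  put (key, value) {
    if (this.cache.has(key)) {
      this.cache.delete(key)
    } else if (this.cache.size >= this.max) {
      this.cache.delete(this.cache.keys().next().value)  // evict the oldest key
    }
    this.cache.set(key, value)
  }
}

const lru = new LRUCache(2)
lru.put(1, 1)                 // cache: {1=1}
lru.put(2, 2)                 // cache: {1=1, 2=2}
console.log(lru.get(1))       // → 1
lru.put(3, 3)                 // evicts key 2; cache: {1=1, 3=3}
console.log(lru.get(2))       // → -1
lru.put(4, 4)                 // evicts key 1; cache: {3=3, 4=4}
console.log(lru.get(1))       // → -1
console.log(lru.get(3))       // → 3
console.log(lru.get(4))       // → 4
```

The printed values match the expected output of the LeetCode example.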

Summary

After reading so much, I think you already understand the LRU Cache.

At its heart, an LRU Cache is a linked list.

Design a system cache algorithm -> think of LRU Cache -> implement it with Map -> Map exposes an Iterator -> the Iterator has linked-list properties -> linked-list thinking solves the problem

So in solving this problem we came right back to the linked list data structure. That's why learning algorithms matters!

Previous algorithms related articles

The list is introduced

Write a brief introduction to the front-end development algorithm

Tree introduction

Divide-and-conquer and Quicksort (JS) for beginners

Introduction to hash tables

Breadth-first search

Depth-first search

Greedy algorithms for beginners algorithms