This article was first published on the WeChat public account “Mushroom Can’t Sleep”.

Previous articles

“Understanding Redis Persistence from the Source Code” “A Chat About Redis Expired-Key Deletion Strategies” “Redis Data Structures Explained” “The Underlying Implementation of Redis’s Five Data Structures, in Detail”

Let’s take a look at Redis’ memory eviction strategies.

Why is there a memory eviction mechanism

As we all know, keys in Redis can be given an expiration time, and when that time is reached the corresponding keys are removed by certain strategies. However, Redis memory also has an upper limit, and once that limit is reached, key-value pairs must be evicted according to some policy.

Redis memory upper limit

The maxmemory configuration option sets the maximum amount of memory Redis may use to store data. It can be set in the redis.conf configuration file, or at runtime with the CONFIG SET command. For example, to limit the Redis cache to 100MB of memory, we can put the following in redis.conf:

maxmemory 100mb

Setting maxmemory to 0 means there is no memory limit. This is the default on 64-bit systems, while 32-bit systems use an implicit limit of 3GB.
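As a quick illustration, here is a minimal sketch of doing the same thing at runtime with CONFIG SET, using the redis-py client (a local instance on the default port is an assumption):

    import redis

    # Assumes a local Redis instance; adjust host/port as needed.
    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # Equivalent to "maxmemory 100mb" in redis.conf, applied at runtime.
    r.config_set("maxmemory", "100mb")

    # CONFIG GET reports the limit back in bytes (104857600 here).
    print(r.config_get("maxmemory"))

    # Setting it back to 0 removes the limit (the 64-bit default).
    # r.config_set("maxmemory", "0")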

When the limit is reached, Redis behaves according to the configured policy: it either returns an error for commands that would use more memory, or evicts some old data to reclaim memory for the new data.

Redis memory elimination strategy

  • noeviction: Does not evict anything; instead, returns an error for most write commands (DEL and a few other exceptions) once the limit is reached
  • allkeys-lru: Evicts the least recently used keys across the whole key space to make room for new data
  • volatile-lru: Evicts the least recently used keys among those that have an expiration set
  • allkeys-random: Evicts random keys across the whole key space
  • volatile-random: Evicts random keys among those that have an expiration set
  • volatile-ttl: Evicts keys that have an expiration set, preferring those with the shortest remaining time to live
  • volatile-lfu: Evicts the least frequently used keys among those that have an expiration set
  • allkeys-lfu: Evicts the least frequently used keys across the whole key space
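The names above are the values accepted by the maxmemory-policy directive. A minimal, hedged redis-py sketch of switching policies at runtime (again assuming a local instance):

    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    # In redis.conf this would be:  maxmemory-policy allkeys-lru
    r.config_set("maxmemory-policy", "allkeys-lru")

    # volatile-* policies only consider keys that have an expiration set,
    # so evictable keys need a TTL:
    r.config_set("maxmemory-policy", "volatile-lru")
    r.set("session:1", "data", ex=3600)   # candidate for eviction
    r.set("config:site", "data")          # no TTL, never evicted under volatile-*

    print(r.config_get("maxmemory-policy"))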

The eviction process

It is important to understand how the eviction (memory reclaim) process works. It goes like this:

  • A client runs a new command that adds data.
  • Redis checks memory usage; if the maxmemory limit is exceeded, it evicts keys according to the configured policy.
  • The new command is executed, and so on.

In other words, we continuously cross the memory limit by adding data, and then drop back below it by evicting keys.

If a single command consumes a large amount of memory (for example, storing a big set under a new key), the memory limit may be noticeably exceeded for a while before eviction brings usage back down.
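Conceptually, the loop looks something like the sketch below. This is not the actual Redis source; db, command and evict_one_key are stand-ins for the real internals.

    def process_write_command(db, command, maxmemory, evict_one_key):
        """Simplified sketch of the reclaim loop; evict_one_key applies
        the configured policy (LRU / LFU / random / TTL) and returns the
        number of bytes it freed."""
        # Free memory before running the command, until we are back
        # under the limit (or nothing more can be evicted).
        while db.used_memory() > maxmemory:
            freed = evict_one_key(db)
            if freed == 0:  # nothing evictable: behave like noeviction
                raise MemoryError("OOM command not allowed when used memory > 'maxmemory'")
        return command.execute(db)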

Approximate LRU algorithm

Redis’s LRU algorithm is not a strict LRU implementation. This means Redis cannot always pick the best candidate for eviction, i.e. the key that has gone unaccessed the longest. Instead, Redis runs an approximate LRU algorithm: it samples a small number of keys and evicts the best candidate among them (the one with the oldest access time).

Starting with Redis 3.0, the algorithm was improved to also maintain a pool of eviction candidates. This improves the quality of the algorithm and makes it behave more like true LRU. An important property of Redis’s LRU algorithm is that you can tune its accuracy by changing the number of keys sampled per eviction.

This parameter is configured with the following directive:

maxmemory-samples 5

The reason Redis does not use a true LRU implementation is that it would cost more memory. In practice, the approximation is essentially equivalent for applications using Redis.
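To make the idea concrete, here is a hedged Python sketch of the sampling approach, not the actual Redis implementation: sample a handful of keys (playing the role of maxmemory-samples), keep a small pool of the best candidates, and evict the one that has been idle the longest.

    import random
    import time

    def evict_one_key_approx_lru(last_access, samples=5, pool=None):
        """Approximate LRU: 'last_access' maps key -> last access timestamp.
        'samples' plays the role of maxmemory-samples; 'pool' is the small
        candidate pool that Redis 3.0 added to get closer to true LRU."""
        pool = pool if pool is not None else []

        # Sample a few random keys instead of scanning everything.
        sampled = random.sample(list(last_access), min(samples, len(last_access)))

        # Merge samples into the pool and keep only the oldest candidates.
        pool.extend(sampled)
        pool.sort(key=lambda k: last_access[k])   # oldest access time first
        del pool[16:]                             # pool size is bounded (16 in Redis)

        victim = pool.pop(0)                      # evict the stalest candidate
        del last_access[victim]
        return victim

    # Usage: keys accessed at different times; the oldest sampled key is evicted.
    now = time.time()
    access = {f"key:{i}": now - i for i in range(100)}
    print(evict_one_key_approx_lru(access))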

LFU

LFU (Least Frequently Used) evicts the least frequently used keys, whereas LRU (Least Recently Used) evicts the least recently used keys.

Starting with Redis 4.0, an LFU eviction policy can be used. In some cases this works better (providing a better hit/miss ratio), because with LFU Redis tries to track how often items are accessed: rarely used items are evicted, while frequently used items have a better chance of staying in memory.

So why do we need LFU at all? Consider the following access pattern:

A - A - A - - - A - A -A - - -
B - - - - B - - B - - - - - - B

Under LRU, A would be evicted, because B was accessed most recently; but A is clearly accessed far more often and should be the one kept. This is exactly the case LFU was introduced for (evict the least frequently used key).

LFU splits the 24-bit internal clock field of the key object into two parts: the upper 16 bits still store the clock (last access time), and the lower 8 bits are a counter, called a Morris counter, which represents how frequently the key is accessed. Eight bits can only count to 255, so Redis does not increment it linearly; instead, it uses a probabilistic formula whose growth rate is tuned by two configuration parameters.

The following table shows, from left to right, the number of hits on a key, and from top to bottom, the log factor. With a factor of 100, the 8-bit counter only reaches 255 after roughly 10 million hits.

factor | 100 hits | 1000 hits | 100K hits | 1M hits | 10M hits
     0 |      104 |       255 |       255 |     255 |      255
     1 |       18 |        49 |       255 |     255 |      255
    10 |       10 |        18 |       142 |     255 |      255
   100 |        8 |        11 |        49 |     143 |      255

The factor is configured with the following directive:

lfu-log-factor 10
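Here is a hedged Python sketch of this probabilistic, logarithmic increment, modeled on the increment logic described above (the real code lives in the Redis source; this is only an approximation of it):

    import random

    LFU_INIT_VAL = 5   # counter value given to new keys (see below)

    def lfu_log_incr(counter, lfu_log_factor=10):
        """The 8-bit counter grows probabilistically: the larger it already
        is, and the larger lfu-log-factor, the less likely it is to grow."""
        if counter == 255:
            return counter                     # saturated
        baseval = max(counter - LFU_INIT_VAL, 0)
        p = 1.0 / (baseval * lfu_log_factor + 1)
        if random.random() < p:
            counter += 1
        return counter

    # Rough feel for the table above: hit a key 100K times with factor 10.
    c = LFU_INIT_VAL
    for _ in range(100_000):
        c = lfu_log_incr(c, lfu_log_factor=10)
    print(c)   # stays far below 255, e.g. around 140 as in the table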

The counter only grows, so when does it ever come down?

By default, the Morris counter is decremented by 1 for every minute the key goes unused. This is also configurable:

lfu-decay-time 1

One remaining problem: what about brand-new keys, whose counter starts low; wouldn’t they be evicted immediately?

To avoid this, Redis initializes the 8-bit counter of a newly created key to 5 rather than 0, so it is not evicted right away just because it has barely been accessed.
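Putting the decay and the initial value together, a small hedged sketch (again not the real Redis code) of how an idle key’s counter shrinks over time:

    def lfu_decay(counter, idle_minutes, lfu_decay_time=1):
        """Decrement the counter once per elapsed decay period.
        With lfu-decay-time 1, that is one step per idle minute."""
        if lfu_decay_time == 0:
            return counter
        periods = idle_minutes // lfu_decay_time
        return max(counter - periods, 0)

    # A brand-new key starts at 5 rather than 0, so a short idle spell
    # does not get it evicted immediately:
    counter = 5                                    # the initial value
    counter = lfu_decay(counter, idle_minutes=3)   # -> 2, still above zero
    print(counter)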

Conclusion

To keep memory from exceeding its capacity, Redis frees memory using a configurable eviction policy. The main ideas are LRU and the LFU algorithm introduced in Redis 4.0: LRU evicts the least recently used keys, and LFU evicts the least frequently used keys.

For more content, search WeChat for “Mushroom Can’t Sleep”.

If you found this helpful, please give it a thumbs-up. Your support is the biggest motivation for my writing.

The more you engage, the more motivated I will be. See you next time!