Memory used by Redis

We know that Redis is an in-memory key-value database. Because the memory available on a system is limited, we can configure the maximum amount of memory that Redis is allowed to use.

1. Use the configuration file

Set the memory limit by adding the following line to the redis.conf configuration file under the Redis installation directory:

# Set the maximum memory Redis can use to 100MB
maxmemory 100mb

The Redis configuration file does not have to be the redis.conf file under the installation directory; when starting the Redis service, you can pass a parameter to specify which configuration file to use.
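For example, the configuration file path can be passed as the first argument when starting the server (the path below is only an illustration):

redis-server /path/to/redis.conf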

2. Use commands to change the setting at runtime

Redis supports dynamically changing the maximum memory size at runtime through commands:

// Set the maximum memory used by Redis to 100MB
127.0.0.1:6379> config set maxmemory 100mb
// Get the maximum memory configured for Redis
127.0.0.1:6379> config get maxmemory

If the maximum memory is not set, or it is set to 0, memory usage is not limited on 64-bit operating systems, while on 32-bit operating systems the implicit maximum is 3GB.

Redis memory eviction

Since Redis can be limited to a maximum amount of memory, that memory can eventually run out. What should Redis do if data keeps being written after the limit is reached?

Redis defines several eviction policies for dealing with this situation:

  • noeviction: write requests are no longer served and an error is returned (DEL and a few other special requests are exceptions)
  • allkeys-lru: evict keys using the LRU algorithm, considering all keys
  • volatile-lru: evict keys using the LRU algorithm, considering only keys that have an expiration set
  • allkeys-random: evict keys at random, considering all keys
  • volatile-random: evict keys at random, considering only keys that have an expiration set
  • volatile-ttl: evict keys that have an expiration set according to their remaining TTL; keys that expire sooner are evicted first

When using the volatile-lru, volatile-random, or volatile-ttl policies, if there is no key that can be evicted, Redis returns an error just as noeviction does.

How to get and set the memory eviction policy

Get the current memory eviction policy:

127.0.0.1:6379> config get maxmemory-policy

Set the eviction policy through the configuration file (by modifying redis.conf):

maxmemory-policy allkeys-lru

Modify the eviction policy at runtime with the following command:

127.0.0.1:6379> config set maxmemory-policy allkeys-lru

LRU algorithm

What is LRU?

When Redis has used up the maximum memory available to it, the LRU algorithm can be used to decide which keys to evict and free memory.

LRU (Least Recently Used) is a cache replacement algorithm. When memory is used as a cache, the size of the cache is usually fixed. When the cache is full and new data needs to be added, some of the old data must be evicted to free memory for the new data. This is where the LRU algorithm comes in. Its idea is that if a piece of data has not been used in the recent past, it is unlikely to be used in the future, so it can be evicted.

A simple LRU algorithm implemented in Java:
import java.util.HashMap;
import java.util.Map;

public class LRUCache<K, V> {
    // Maximum capacity of the cache
    private int capacity;
    // Current number of nodes in the cache
    private int count;
    // Map from key to node for O(1) lookups
    private Map<K, Node<K, V>> nodeMap;
    private Node<K, V> head;
    private Node<K, V> tail;

    public LRUCache(int capacity) {
        if (capacity < 1) {
            throw new IllegalArgumentException(String.valueOf(capacity));
        }
        this.capacity = capacity;
        this.nodeMap = new HashMap<>();
        // Initialize sentinel head and tail nodes to avoid null checks at the list ends
        Node<K, V> headNode = new Node<>(null, null);
        Node<K, V> tailNode = new Node<>(null, null);
        headNode.next = tailNode;
        tailNode.pre = headNode;
        this.head = headNode;
        this.tail = tailNode;
    }

    public void put(K key, V value) {
        Node<K, V> node = nodeMap.get(key);
        if (node == null) {
            if (count >= capacity) {
                // Evict the least recently used node first
                removeNode();
            }
            node = new Node<>(key, value);
            // Add the new node
            addNode(node);
        } else {
            // The key already exists: move its node to the head
            moveNodeToHead(node);
        }
    }

    public Node<K, V> get(K key) {
        Node<K, V> node = nodeMap.get(key);
        if (node != null) {
            moveNodeToHead(node);
        }
        return node;
    }

    private void removeNode() {
        Node<K, V> node = tail.pre;
        // Unlink the least recently used node from the list
        removeFromList(node);
        nodeMap.remove(node.key);
        count--;
    }

    private void removeFromList(Node<K, V> node) {
        Node<K, V> pre = node.pre;
        Node<K, V> next = node.next;

        pre.next = next;
        next.pre = pre;

        node.next = null;
        node.pre = null;
    }

    private void addNode(Node<K, V> node) {
        // Add the node at the head of the list
        addToHead(node);
        nodeMap.put(node.key, node);
        count++;
    }

    private void addToHead(Node<K, V> node) {
        Node<K, V> next = head.next;
        next.pre = node;
        node.next = next;
        node.pre = head;
        head.next = node;
    }

    public void moveNodeToHead(Node<K, V> node) {
        // Unlink the node from its current position
        removeFromList(node);
        // Re-insert it at the head
        addToHead(node);
    }

    static class Node<K, V> {
        K key;
        V value;
        Node<K, V> pre;
        Node<K, V> next;

        public Node(K key, V value) {
            this.key = key;
            this.value = value;
        }
    }
}

The code above implements a simple LRU algorithm. It is short and commented, and easy to follow if you read it carefully.
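A short usage sketch of the class above (the driver class name and sample keys are just for illustration; it assumes it sits in the same package as LRUCache):

public class LRUCacheDemo {
    public static void main(String[] args) {
        LRUCache<String, Integer> cache = new LRUCache<>(3);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.put("c", 3);
        // Touch "a" so it becomes the most recently used entry
        cache.get("a");
        // Capacity exceeded: "b", the least recently used key, is evicted
        cache.put("d", 4);
        System.out.println(cache.get("b"));         // null, "b" was evicted
        System.out.println(cache.get("a").value);   // 1, "a" is still cached
    }
}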

Implementation of LRU in Redis

Approximate LRU algorithm

Redis uses an approximate LRU algorithm, which is not quite the same as a strict LRU algorithm. The approximate LRU algorithm evicts data by random sampling: it randomly samples 5 keys (by default) at a time and evicts the least recently used key among them.

The number of samples can be changed with the maxmemory-samples parameter, for example maxmemory-samples 10. The larger maxmemory-samples is, the closer the eviction results are to a strict LRU algorithm.
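For example, a redis.conf fragment combining these settings might look like this (the values are illustrative):

maxmemory 100mb
maxmemory-policy allkeys-lru
maxmemory-samples 10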

To implement the approximate LRU algorithm, Redis adds an extra 24-bit field to each key, which stores the key's last access time.
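To make the sampling idea concrete, here is a rough Java sketch of sampling-based eviction. It is only an illustration of the concept, not Redis's actual C implementation; the clock source, data structures, and key type are simplified assumptions.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

public class ApproximateLruSketch {
    // Last-access time per key, analogous to the 24-bit clock Redis keeps per key
    private final Map<String, Long> lastAccess = new HashMap<>();
    private final Random random = new Random();

    // Record an access to a key
    public void touch(String key) {
        lastAccess.put(key, System.currentTimeMillis());
    }

    // Sample `samples` random keys and return the one accessed the longest ago
    public String pickEvictionCandidate(int samples) {
        List<String> keys = new ArrayList<>(lastAccess.keySet());
        String oldestKey = null;
        long oldestTime = Long.MAX_VALUE;
        for (int i = 0; i < samples && !keys.isEmpty(); i++) {
            String key = keys.get(random.nextInt(keys.size()));
            long time = lastAccess.get(key);
            if (time < oldestTime) {
                oldestTime = time;
                oldestKey = key;
            }
        }
        return oldestKey;
    }
}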

Redis 3.0 optimization of the approximate LRU algorithm

Redis 3.0 optimizes the approximate LRU algorithm. The new algorithm maintains a candidate pool (of size 16) whose entries are sorted by access time. The first batch of randomly sampled keys is placed into the pool; after that, a randomly sampled key is added only if its access time is earlier than the smallest access time currently in the pool. When the pool is full and a new key needs to be added, the entry with the latest access time (the most recently accessed one) is removed from the pool.

When a key needs to be evicted, the key in the pool with the earliest access time (the one that has been idle the longest) is selected and evicted.

Comparison of LRU algorithms

We can compare the accuracy of the different LRU algorithms with an experiment: first add enough data (N keys) to Redis to use up its available memory, then add another N/2 new keys. At this point some data has to be evicted, and ideally the evicted keys are exactly the first N/2 keys that were added. The resulting comparison diagram of the LRU algorithms shows:

  • Light grey is the data that was evicted
  • Grey is old data that has not been evicted
  • Green is the newly added data

We can see that Redis 3.0 with a sample size of 10 produces results closest to a strict LRU. With the same sample size of 5, Redis 3.0 also performs better than Redis 2.8.

LFU algorithm

LFU is a new eviction strategy added in Redis 4.0. Its full name is Least Frequently Used, and its core idea is that keys are evicted according to how frequently they have been accessed recently: keys that are rarely accessed are evicted first, while frequently accessed keys are kept.

The LFU algorithm better reflects how hot a key really is. With the LRU algorithm, a key that has not been accessed for a long time but is then accessed once is treated as hot data and will not be evicted, while keys that are likely to be accessed again in the future may be evicted instead. This does not happen with LFU, because accessing a key once is not enough to make it hot data.
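As a rough Java illustration of the counting idea behind LFU (not Redis's actual implementation, which keeps a small probabilistic counter per key), an eviction candidate can be picked like this:

import java.util.HashMap;
import java.util.Map;

public class LfuSketch {
    // Access count per key (simplified stand-in for Redis's per-key frequency counter)
    private final Map<String, Long> frequency = new HashMap<>();

    // Record an access to a key
    public void touch(String key) {
        frequency.merge(key, 1L, Long::sum);
    }

    // The eviction candidate is the key with the lowest access frequency
    public String pickEvictionCandidate() {
        String candidate = null;
        long minFreq = Long.MAX_VALUE;
        for (Map.Entry<String, Long> e : frequency.entrySet()) {
            if (e.getValue() < minFreq) {
                minFreq = e.getValue();
                candidate = e.getKey();
            }
        }
        return candidate;
    }
}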

LFU has two strategies:

  • volatile-lfu: use the LFU algorithm to evict keys that have an expiration set
  • allkeys-lfu: use the LFU algorithm to evict keys, considering all keys

These two policies are set in the same way as described above, but note that they are only available in Redis 4.0 and later; setting them on a version below Redis 4.0 returns an error.
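For example (requires Redis 4.0 or later):

127.0.0.1:6379> config set maxmemory-policy allkeys-lfu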

A final question

Last but not least, some of you may have noticed that I did not explain why Redis uses an approximate LRU algorithm instead of an exact LRU algorithm. Please give your answer in the comments section and we can discuss and learn together.
