In Redis, the expiration policy and the memory eviction policy are two completely different concepts.
The expiration policy determines how Redis deletes expired key/value pairs.
The memory eviction policy determines which key/value pairs Redis deletes when its running memory exceeds the configured maximum, so that Redis can keep operating efficiently.
Redis maximum running memory
The memory eviction mechanism is triggered only when Redis's running memory reaches a certain threshold, namely the maximum running memory we set.
This value is controlled by the maxmemory configuration item in the Redis configuration file redis.conf.
(Figure: memory eviction execution flow)
Querying the maximum running memory
Use the command config get maxmemory to view the configured maximum running memory.
A result of 0 is the default on 64-bit operating systems and means there is no memory limit. On 32-bit operating systems, the default maximum memory is 3 GB.
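For example, in a redis-cli session (this assumes a local Redis instance; the 100mb value is purely illustrative):

```shell
# Query the configured maximum memory; "0" means no limit (64-bit default)
redis-cli config get maxmemory

# Set a limit at runtime (lost on restart)
redis-cli config set maxmemory 100mb

# Or persist it in redis.conf:
#   maxmemory 100mb
```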
Memory eviction policy
Run the config get maxmemory-policy command to view the current Redis memory eviction policy.
By default, Redis uses the noeviction policy: when running memory exceeds the configured maximum, no data is evicted, but new write operations return an error.
Memory eviction policy classification
Versions of Redis before 4.0 offered the following six eviction policies:
- noeviction: evicts no data; when memory is insufficient, new write operations return an error. This is the default eviction policy.
- allkeys-lru: evicts the least recently used key among all keys.
- allkeys-random: randomly evicts any key.
- volatile-lru: evicts the least recently used key among keys that have an expiration time set.
- volatile-random: randomly evicts a key among keys that have an expiration time set.
- volatile-ttl: preferentially evicts keys that expire sooner.
Redis 4.0 added two eviction policies:
- volatile-lfu: evicts the least frequently used key among keys that have an expiration time set.
- allkeys-lfu: evicts the least frequently used key among all keys.
Here, allkeys-xxx means data is evicted from all keys, while volatile-xxx means data is evicted only from keys that have an expiration time set.
Modifying the Redis memory eviction policy
- Method 1: use the command config set maxmemory-policy <policy>. Its advantage is that the setting takes effect immediately without restarting the Redis service; its disadvantage is that the setting is lost when Redis restarts.
- Method 2: modify the Redis configuration file, setting maxmemory-policy <policy>. Its advantage is that the configuration survives a restart of the Redis service; its disadvantage is that the Redis service must be restarted for the setting to take effect.
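Both methods can be sketched as follows (assuming a local instance and allkeys-lru as the target policy):

```shell
# Method 1: takes effect immediately, but is lost after a restart
redis-cli config set maxmemory-policy allkeys-lru
redis-cli config get maxmemory-policy

# Method 2: persists across restarts, but requires a restart to take effect
#   (add this line to redis.conf)
#   maxmemory-policy allkeys-lru
```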
Memory eviction algorithms
LRU algorithm
LRU (Least Recently Used) is a common page-replacement algorithm: it selects the page that has gone unused for the longest time and evicts it.
1. Implementation of LRU algorithm
A classic LRU implementation is based on a linked list whose elements are ordered by access: the key most recently operated on is moved to the head of the list, so the tail always holds the least recently used entry.
Implementing the LRU algorithm in Java:
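A minimal Java sketch of such an LRU cache can be built on LinkedHashMap's access-order mode, which maintains exactly the "move to head on access" ordering described above (the class name LRUCache is illustrative):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache sketch built on LinkedHashMap's access-order mode.
class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LRUCache(int capacity) {
        // accessOrder = true keeps entries ordered by most recent access,
        // mirroring the "move the accessed key to the list head" behavior.
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least recently used entry once capacity is exceeded.
        return size() > capacity;
    }
}
```

Using LinkedHashMap keeps the sketch short; a from-scratch version would pair a hash map with a doubly linked list to get O(1) get and put.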
2. Approximate LRU algorithm
To save memory, Redis uses an approximate LRU algorithm. It adds an extra field to the existing object structure to record the key's last access time. When Redis needs to evict, it uses random sampling: it picks five keys at random (configurable via maxmemory-samples) and evicts the one that has gone unused the longest.
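The sampling idea can be sketched in Java (names and structure are illustrative, not Redis source):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Sketch of approximate-LRU eviction by random sampling.
class ApproxLru {
    // Samples `samples` random keys (with replacement) and returns the one
    // with the oldest access time, analogous to Redis examining
    // maxmemory-samples candidates per eviction instead of all keys.
    static String pickVictim(Map<String, Long> lastAccess, int samples, Random rnd) {
        List<String> keys = new ArrayList<>(lastAccess.keySet());
        String victim = null;
        for (int i = 0; i < samples && !keys.isEmpty(); i++) {
            String candidate = keys.get(rnd.nextInt(keys.size()));
            if (victim == null || lastAccess.get(candidate) < lastAccess.get(victim)) {
                victim = candidate;
            }
        }
        return victim;
    }
}
```

The trade-off is the same as in Redis: sampling avoids maintaining a full ordering over every key, at the cost of occasionally evicting a key that is not the globally least recently used.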
3. Shortcomings of the LRU algorithm
One drawback of the LRU algorithm is that it tracks only recency, not frequency. For example, a key that is rarely used but happens to have been accessed recently will not be evicted, even if it is the least frequently used entry in the cache.
LFU algorithm
LFU (Least Frequently Used) is an algorithm that evicts data based on total access count. Its core idea is that data accessed many times in the past is likely to be accessed frequently in the future.
LFU thereby solves the problem of occasionally-accessed data escaping eviction, which makes it more reasonable than the LRU algorithm in this respect.
In Redis, each object header records LFU information; the relevant source is:

typedef struct redisObject {
    unsigned type:4;
    unsigned encoding:4;
    unsigned lru:LRU_BITS; /* LRU time (relative to global lru_clock) or
                            * LFU data (least significant 8 bits frequency
                            * and most significant 16 bits access time). */
    int refcount;
    void *ptr;
} robj;
For LFU, this lru field is split into two parts: a 16-bit ldt (Last Decrement Time) and an 8-bit logc (Logistic Counter). logc stores the access frequency; the maximum value 8 bits can hold is 255, and the smaller the value, the lower the usage frequency and the easier the key is to evict. ldt stores the time when logc was last updated.
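The 16/8-bit split can be illustrated with a few bit operations (a Java sketch of the packing scheme, not Redis source; the class and method names are made up for illustration):

```java
// Sketch of how a 24-bit lru field packs LFU data:
// high 16 bits = ldt (last decrement time), low 8 bits = logc (counter).
class LfuField {
    static int pack(int ldt, int logc) {
        return ((ldt & 0xFFFF) << 8) | (logc & 0xFF);
    }

    static int ldt(int lru) {
        return (lru >> 8) & 0xFFFF;
    }

    static int logc(int lru) {
        return lru & 0xFF;
    }
}
```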
Reference
Redis Core Principles and Combat