Preface
Redis is an in-memory key-value database. You can configure a maximum amount of memory for it to use, so that it does not consume too much of the server's memory. When the available memory drops to zero, Redis actively evicts some data to free up space. This article analyzes Redis's active and passive data-eviction strategies.
Body
You can set an expiration time on a Redis key. There are three mainstream policies for deleting expired data:

- Periodic scan: scan the database for expired data at regular intervals, then delete it
- Scheduled deletion: create a timer whenever a key with an expiration is set; when the timer fires, delete the data
- Passive deletion: when data is accessed, check whether it has expired and delete it if so

Scheduled deletion creates a large number of timers that waste CPU, which is unfriendly to Redis's single-threaded main service; passive deletion wastes memory, because expired data does not free its memory until it is touched.
Redis uses a combination of periodic scanning and passive deletion: a scheduled task runs after the server starts, expired keys are checked when accessed, and data is evicted when memory is full.
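As a toy illustration of passive deletion (hypothetical code, not from Redis; the struct and the injected clock are made up for the example), an entry's expiry is only checked at the moment it is read:

```c
#include <stddef.h>

/* Toy sketch of passive deletion: the expiry check only happens when
 * the entry is accessed. */
typedef struct {
    const char *val;
    long long expire_at_ms;   /* 0 = never expires */
} entry_t;

/* A fake clock injected for the example. */
static long long now_ms = 1000;

static int entry_expired(const entry_t *e) {
    return e->expire_at_ms != 0 && now_ms > e->expire_at_ms;
}

/* Passive deletion: the value is invalidated only when it is read. */
const char *entry_get(entry_t *e) {
    if (entry_expired(e)) {
        e->val = NULL;        /* a real store would free the object here */
        return NULL;
    }
    return e->val;
}
```

Until `entry_get` is called, an expired entry keeps occupying memory, which is exactly the drawback of a purely passive strategy.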
Eviction on access
As seen in previous articles, the get/set paths call expireIfNeeded to determine whether a key has expired:
int expireIfNeeded(redisDb *db, robj *key) {
    // The key has not expired
    if (!keyIsExpired(db,key)) return 0;
    // On a replica, just return; replicas do not delete the key themselves
    if (server.masterhost != NULL) return 1;
    server.stat_expiredkeys++;
    // Command propagation
    propagateExpire(db,key,server.lazyfree_lazy_expire);
    notifyKeyspaceEvent(NOTIFY_EXPIRED,
        "expired",key,db->id);
    // Delete lazily or directly
    int retval = server.lazyfree_lazy_expire ? dbAsyncDelete(db,key) :
                                               dbSyncDelete(db,key);
    if (retval) signalModifiedKey(NULL,db,key);
    return retval;
}
The lazyfree_lazy_expire option determines whether deletion is deferred to a background thread:
#define LAZYFREE_THRESHOLD 64
int dbAsyncDelete(redisDb *db, robj *key) {
    // Remove the key from the expires dictionary first
    if (dictSize(db->expires) > 0) dictDelete(db->expires,key->ptr);
    dictEntry *de = dictUnlink(db->dict,key->ptr);
    if (de) {
        robj *val = dictGetVal(de);
        // Estimate the effort needed to free the object
        size_t free_effort = lazyfreeGetFreeEffort(val);
        // If freeing the object is expensive, hand it to the BIO thread
        // and set the entry's value to NULL
        if (free_effort > LAZYFREE_THRESHOLD && val->refcount == 1) {
            atomicIncr(lazyfree_objects,1);
            bioCreateBackgroundJob(BIO_LAZY_FREE,val,NULL,NULL);
            dictSetVal(db->dict,de,NULL);
        }
    }
    if (de) {
        dictFreeUnlinkedEntry(db->dict,de);
        if (server.cluster_enabled) slotToKeyDel(key->ptr);
        return 1;
    } else {
        return 0;
    }
}
When the dict entry for a key is deleted this way, the value is not released immediately but handed to a background BIO thread.
The above is the expired-key deletion strategy triggered by data access.
Out of memory
When memory runs low, the usual eviction strategies are LRU and LFU. As mentioned earlier, redisObject carries an lru field for recording access information:
typedef struct redisObject {
unsigned lru:LRU_BITS;
} robj;
Redis's LRU and LFU are not quite the traditional algorithms, which maintain a doubly linked list and move data through it. Redis only records information in the lru field, because with a large amount of data, frequently moving list nodes would hurt performance.
Under LRU, the lru field records the last access time. Under LFU it is more than a simple access counter, because data that was heavily accessed at one point may rarely be accessed later.
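For instance, the idle time derived from such a timestamp has to cope with the clock wrapping around its fixed width. A minimal sketch (the 24-bit width matches LRU_BITS in Redis; the helper itself is invented here):

```c
/* Sketch: with an N-bit LRU clock, computing idle time must handle
 * the counter wrapping around past its maximum value. */
#define LRU_BITS 24
#define LRU_CLOCK_MAX ((1 << LRU_BITS) - 1)

unsigned long lru_idle(unsigned long now, unsigned long obj_lru) {
    if (now >= obj_lru) return now - obj_lru;
    /* The clock wrapped around since the object was last touched. */
    return LRU_CLOCK_MAX - obj_lru + now;
}
```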
When the LFU algorithm is used, the lru field is split into two parts:

| high 16 bits | low 8 bits |
| ------------ | ---------- |
| last decrement time | counter |
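A small sketch of that packing (the helper names are invented; Redis does this inline, as in `(LFUGetTimeInMinutes()<<8) | counter` in updateLFU below):

```c
/* Sketch of the 24-bit lru field under LFU: the high 16 bits hold the
 * last decrement time in minutes, the low 8 bits hold the counter. */
unsigned pack_lfu(unsigned minutes, unsigned counter) {
    return ((minutes & 0xFFFF) << 8) | (counter & 0xFF);
}
unsigned lfu_minutes(unsigned lru) { return lru >> 8; }
unsigned lfu_counter(unsigned lru) { return lru & 255; }
```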
The LFU counter is decremented as time passes; the code below shows how this is handled.
if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU) {
updateLFU(val);
} else {
val->lru = LRU_CLOCK();
}
As mentioned earlier, commands such as set/get trigger updates to the key's lru field:
void updateLFU(robj *val) {
unsigned long counter = LFUDecrAndReturn(val);
counter = LFULogIncr(counter);
val->lru = (LFUGetTimeInMinutes()<<8) | counter;
}
updateLFU first calls LFUDecrAndReturn to apply the decay, then increments the counter:
// Time elapsed since the last counter update, in minutes
unsigned long LFUTimeElapsed(unsigned long ldt) {
    unsigned long now = LFUGetTimeInMinutes();
    if (now >= ldt) return now-ldt;
    return 65535-ldt+now;
}

unsigned long LFUDecrAndReturn(robj *o) {
    unsigned long ldt = o->lru >> 8;
    unsigned long counter = o->lru & 255;
    // Work out how much to subtract based on lfu_decay_time
    unsigned long num_periods = server.lfu_decay_time ?
        LFUTimeElapsed(ldt) / server.lfu_decay_time : 0;
    if (num_periods)
        counter = (num_periods > counter) ? 0 : counter - num_periods;
    return counter;
}
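The increment side, LFULogIncr, is not shown above. A sketch based on the Redis source (the constants are assumptions: LFU_INIT_VAL is 5 and lfu_log_factor defaults to 10): the higher the counter already is, the lower the probability it gets bumped again, which gives the counter its logarithmic behavior.

```c
#include <stdlib.h>

/* Sketch of Redis's probabilistic logarithmic counter increment.
 * Assumed constants: LFU_INIT_VAL = 5, lfu_log_factor = 10. */
static int lfu_log_factor = 10;

unsigned char lfu_log_incr(unsigned char counter) {
    if (counter == 255) return 255;           /* saturated */
    double r = (double)rand() / RAND_MAX;
    double baseval = counter - 5;             /* subtract LFU_INIT_VAL */
    if (baseval < 0) baseval = 0;
    double p = 1.0 / (baseval * lfu_log_factor + 1);
    if (r < p) counter++;                     /* bump with probability p */
    return counter;
}
```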
Redis supports the following maxmemory-policy settings:

- noeviction: do not evict any data
- volatile-random: randomly delete keys that have an expiration time
- allkeys-random: randomly delete any key
- volatile-ttl: delete the key closest to expiring (smallest TTL)
- volatile-lru: evict keys with an expiration using an approximate LRU
- allkeys-lru: evict any key using an approximate LRU
- volatile-lfu: evict keys with an expiration using an approximate LFU
- allkeys-lfu: evict any key using an approximate LFU
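A hypothetical redis.conf excerpt combining these settings (the directive names are the standard ones; the values are only examples):

```conf
maxmemory 100mb
maxmemory-policy allkeys-lru
# Number of keys sampled per eviction step; higher is more
# accurate but costs more CPU.
maxmemory-samples 5
```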
// In processCommand
if (server.maxmemory && !server.lua_timedout) {
    int out_of_memory = freeMemoryIfNeededAndSafe() == C_ERR;
    ...
}

int freeMemoryIfNeededAndSafe(void) {
    if (server.lua_timedout || server.loading) return C_OK;
    return freeMemoryIfNeeded();
}
The entry point is in processCommand, which decides whether to call the freeMemoryIfNeeded method.
Let's walk through freeMemoryIfNeeded step by step:
int freeMemoryIfNeeded(void) {
    int keys_freed = 0;
    // A replica configured to ignore maxmemory returns immediately
    if (server.masterhost && server.repl_slave_ignore_maxmemory) return C_OK;
    size_t mem_reported, mem_tofree, mem_freed;
    mstime_t latency, eviction_latency, lazyfree_latency;
    long long delta;
    int slaves = listLength(server.slaves);
    int result = C_ERR;
    // If clients are paused, return OK
    if (clientsArePaused()) return C_OK;
    // Check whether there is still free space
    if (getMaxmemoryState(&mem_reported,NULL,&mem_tofree,NULL) == C_OK)
        return C_OK;
    mem_freed = 0;
    latencyStartMonitor(latency);
    // The no-eviction policy cannot free anything
    if (server.maxmemory_policy == MAXMEMORY_NO_EVICTION)
        goto cant_free;
The first part computes the used space and the space to free via getMaxmemoryState:
int getMaxmemoryState(size_t *total, size_t *logical, size_t *tofree, float *level) {
    size_t mem_reported, mem_used, mem_tofree;
    // Memory currently allocated through zmalloc
    mem_reported = zmalloc_used_memory();
    if (total) *total = mem_reported;
    // maxmemory is not configured, or the limit has not been reached
    int return_ok_asap = !server.maxmemory || mem_reported <= server.maxmemory;
    if (return_ok_asap && !level) return C_OK;
    mem_used = mem_reported;
    // Subtract the AOF buffer and the clients' output buffers
    size_t overhead = freeMemoryGetNotCountedMemory();
    mem_used = (mem_used > overhead) ? mem_used-overhead : 0;
    // Compute the usage ratio
    if (level) {
        if (!server.maxmemory) {
            *level = 0;
        } else {
            *level = (float)mem_used / (float)server.maxmemory;
        }
    }
    if (return_ok_asap) return C_OK;
    // The limit has not been reached
    if (mem_used <= server.maxmemory) return C_OK;
    // How much space needs to be freed
    mem_tofree = mem_used - server.maxmemory;
    if (logical) *logical = mem_used;
    if (tofree) *tofree = mem_tofree;
    return C_ERR;
}
The used memory is retrieved with the zmalloc_used_memory method and then evaluated against the limit:
size_t zmalloc_used_memory(void) {
size_t um;
atomicGet(used_memory,um);
return um;
}
Redis allocates memory through its own zmalloc functions, which increase used_memory after each successful allocation so the amount of memory in use can be tracked.
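A simplified sketch of that accounting idea (hypothetical; the real zmalloc uses allocator-specific size queries and atomic updates): each allocation is prefixed with its size so the running total can be adjusted again on free.

```c
#include <stdlib.h>

/* Hypothetical zmalloc-style accounting: store the size in a header
 * before the returned pointer and keep a running total. */
static size_t used_memory = 0;

void *zmalloc_sketch(size_t size) {
    size_t *p = malloc(sizeof(size_t) + size);
    if (!p) return NULL;
    *p = size;                              /* remember the size */
    used_memory += sizeof(size_t) + size;
    return p + 1;                           /* hand out the payload */
}

void zfree_sketch(void *ptr) {
    if (!ptr) return;
    size_t *p = (size_t *)ptr - 1;          /* recover the header */
    used_memory -= sizeof(size_t) + *p;
    free(p);
}

size_t zmalloc_used_memory_sketch(void) { return used_memory; }
```

A real allocator would also worry about alignment and thread safety; the sketch only shows the bookkeeping.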
// Until enough space is free
while (mem_freed < mem_tofree) {
int j, k, i;
static unsigned int next_db = 0;
sds bestkey = NULL;
int bestdbid;
redisDb *db;
dict *dict;
dictEntry *de;
The loop continues until the freed space reaches mem_tofree. The variables above are declared per iteration, and only one key is selected each time.
if (server.maxmemory_policy & (MAXMEMORY_FLAG_LRU|MAXMEMORY_FLAG_LFU) ||
server.maxmemory_policy == MAXMEMORY_VOLATILE_TTL)
{
struct evictionPoolEntry *pool = EvictionPoolLRU;
The LRU, LFU, and TTL policies are implemented together:
struct evictionPoolEntry *pool = EvictionPoolLRU;
while(bestkey == NULL) {
    unsigned long total_keys = 0, keys;
    for (i = 0; i < server.dbnum; i++) {
        db = server.db+i;
        // Depending on the policy, sample from the whole dict (ALLKEYS)
        // or only from expires
        dict = (server.maxmemory_policy & MAXMEMORY_FLAG_ALLKEYS) ?
            db->dict : db->expires;
        if ((keys = dictSize(dict)) != 0) {
            evictionPoolPopulate(i, dict, db->dict, pool);
            total_keys += keys;
        }
    }
    // No keys to evict
    if (!total_keys) break;
    /* Go backward from best to worst element to evict. */
    for (k = EVPOOL_SIZE-1; k >= 0; k--) {
        if (pool[k].key == NULL) continue;
        bestdbid = pool[k].dbid;
        // Again, look the key up in dict (ALLKEYS) or in expires
        if (server.maxmemory_policy & MAXMEMORY_FLAG_ALLKEYS) {
            de = dictFind(server.db[pool[k].dbid].dict,
                pool[k].key);
        } else {
            de = dictFind(server.db[pool[k].dbid].expires,
                pool[k].key);
        }
        // Remove the key from the pool
        if (pool[k].key != pool[k].cached) sdsfree(pool[k].key);
        pool[k].key = NULL;
        pool[k].idle = 0;
        /* If the key exists, it is our pick. Otherwise it is
         * a ghost and we need to try the next element. */
        if (de) {
            bestkey = dictGetKey(de);
            break;
        } else {
            // The key no longer exists, try the next one
        }
    }
}
An array of evictionPoolEntry is used to hold the candidates for eviction, 16 at a time:
#define EVPOOL_SIZE 16
#define EVPOOL_CACHED_SDS_SIZE 255
struct evictionPoolEntry {
    unsigned long long idle; // Idle time for LRU, inverted frequency for LFU
    sds key;                 /* Key name. */
    sds cached;              /* Cached SDS object for key name. */
    int dbid;                // Database id
};
From the large block of code above:

- The ALLKEYS variants differ only in sampling from the regular dict instead of expires
- evictionPoolPopulate filters sampled data into the pool; the bestkey is then taken by walking the pool backwards
void evictionPoolPopulate(int dbid, dict *sampledict, dict *keydict, struct evictionPoolEntry *pool) {
    int j, k, count;
    dictEntry *samples[server.maxmemory_samples];
    // Fetch up to maxmemory_samples random elements
    count = dictGetSomeKeys(sampledict,samples,server.maxmemory_samples);
    for (j = 0; j < count; j++) {
        unsigned long long idle;
        sds key;
        robj *o;
        dictEntry *de;
        de = samples[j];
        key = dictGetKey(de);
        // Unless the policy is volatile-ttl, fetch the object from the
        // main dict to read its lru field
        if (server.maxmemory_policy != MAXMEMORY_VOLATILE_TTL) {
            if (sampledict != keydict) de = dictFind(keydict, key);
            o = dictGetVal(de);
        }
        // LRU and LFU compute "idle" differently
        if (server.maxmemory_policy & MAXMEMORY_FLAG_LRU) {
            idle = estimateObjectIdleTime(o);
        } else if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU) {
            idle = 255-LFUDecrAndReturn(o);
        } else if (server.maxmemory_policy == MAXMEMORY_VOLATILE_TTL) {
            // dictGetVal holds the expire time here; the sooner the key
            // expires, the larger this value
            idle = ULLONG_MAX - (long)dictGetVal(de);
        } else {
            serverPanic("Unknown eviction policy in evictionPoolPopulate()");
        }
        // Find the insertion position by idle time
        k = 0;
        while (k < EVPOOL_SIZE &&
               pool[k].key &&
               pool[k].idle < idle) k++;
        if (k == 0 && pool[EVPOOL_SIZE-1].key != NULL) {
            // Pool is full and this candidate is worse than every entry
            continue;
        } else if (k < EVPOOL_SIZE && pool[k].key == NULL) {
            // Empty slot, insert directly
        } else {
            // Inserting in the middle
            if (pool[EVPOOL_SIZE-1].key == NULL) {
                // The rightmost slot is empty: shift right to make room
                sds cached = pool[EVPOOL_SIZE-1].cached;
                memmove(pool+k+1,pool+k,
                    sizeof(pool[0])*(EVPOOL_SIZE-k-1));
                pool[k].cached = cached;
            } else {
                // No free slot: discard the entry at position 0 and
                // insert at k-1
                k--;
                sds cached = pool[0].cached;
                if (pool[0].key != pool[0].cached) sdsfree(pool[0].key);
                memmove(pool,pool+1,sizeof(pool[0])*k);
                pool[k].cached = cached;
            }
        }
        /* Cache the key string when it is short enough */
        int klen = sdslen(key);
        if (klen > EVPOOL_CACHED_SDS_SIZE) {
            pool[k].key = sdsdup(key);
        } else {
            memcpy(pool[k].cached,key,klen+1);
            sdssetlen(pool[k].cached,klen);
            pool[k].key = pool[k].cached;
        }
        pool[k].idle = idle;
        pool[k].dbid = dbid;
    }
}
This can be summarized as follows:

- evictionPoolPopulate also works on random samples: dictGetSomeKeys picks up to N elements and returns the count actually collected
- The pool is kept sorted by the LRU idle time, LFU score, or TTL, leaving up to 16 entries most worth evicting

The details of dictGetSomeKeys are skipped here; in principle it jumps to a random dict slot and then collects N entries sequentially.
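The pool-insertion logic can be condensed into the following sketch (simplified and hypothetical: key ownership and the cached-sds optimization are dropped, and the pool is shrunk to 4 slots). Entries are kept sorted by ascending idle time; when the pool is full, a new candidate only enters by displacing the least idle entry.

```c
#include <string.h>

#define POOL_SIZE 4

struct pool_entry { unsigned long long idle; const char *key; };

void pool_insert(struct pool_entry *pool, const char *key,
                 unsigned long long idle) {
    int k = 0;
    // Find the insertion position: entries are sorted by ascending idle
    while (k < POOL_SIZE && pool[k].key && pool[k].idle < idle) k++;
    if (k == 0 && pool[POOL_SIZE-1].key != NULL)
        return;                       /* full and worse than every entry */
    if (k < POOL_SIZE && pool[k].key == NULL) {
        /* empty slot, insert directly */
    } else if (pool[POOL_SIZE-1].key == NULL) {
        /* rightmost slot free: shift right to make room */
        memmove(pool+k+1, pool+k, sizeof(pool[0])*(POOL_SIZE-k-1));
    } else {
        /* pool full: drop the least idle entry and shift left */
        k--;
        memmove(pool, pool+1, sizeof(pool[0])*k);
    }
    pool[k].key = key;
    pool[k].idle = idle;
}
```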
/* Randomly evict from all keys / from keys with an expiration */
else if (server.maxmemory_policy == MAXMEMORY_ALLKEYS_RANDOM ||
         server.maxmemory_policy == MAXMEMORY_VOLATILE_RANDOM)
{
    // Walk the databases starting from next_db
    for (i = 0; i < server.dbnum; i++) {
        j = (++next_db) % server.dbnum;
        db = server.db+j;
        dict = (server.maxmemory_policy == MAXMEMORY_ALLKEYS_RANDOM) ?
            db->dict : db->expires;
        if (dictSize(dict) != 0) {
            de = dictGetRandomKey(dict);
            bestkey = dictGetKey(de);
            bestdbid = j;
            break;
        }
    }
}
The random policies simply fetch a random key.
    if (bestkey) {
        db = server.db+bestdbid;
        robj *keyobj = createStringObject(bestkey,sdslen(bestkey));
        // Propagate the deletion
        propagateExpire(db,keyobj,server.lazyfree_lazy_eviction);
        // Below, the freed space is computed around the delete
        delta = (long long) zmalloc_used_memory();
        latencyStartMonitor(eviction_latency);
        if (server.lazyfree_lazy_eviction)
            dbAsyncDelete(db,keyobj);
        else
            dbSyncDelete(db,keyobj);
        signalModifiedKey(NULL,db,keyobj);
        latencyEndMonitor(eviction_latency);
        latencyAddSampleIfNeeded("eviction-del",eviction_latency);
        delta -= (long long) zmalloc_used_memory();
        mem_freed += delta;
        server.stat_evictedkeys++;
        notifyKeyspaceEvent(NOTIFY_EVICTED, "evicted",
            keyobj, db->id);
        decrRefCount(keyobj);
        keys_freed++;
        if (slaves) flushSlavesOutputBuffers();
        // With asynchronous deletion, periodically check whether enough
        // memory has already been released
        if (server.lazyfree_lazy_eviction && !(keys_freed % 16)) {
            if (getMaxmemoryState(NULL,NULL,NULL,NULL) == C_OK) {
                /* Let's satisfy our stop condition. */
                mem_freed = mem_tofree;
            }
        }
    } else {
        goto cant_free; /* nothing to free... */
    }
}
result = C_OK;
The real deletion logic is divided into asynchronous deletion and synchronous deletion
cant_free:
    if (result != C_OK) {
        latencyStartMonitor(lazyfree_latency);
        while(bioPendingJobsOfType(BIO_LAZY_FREE)) {
            if (getMaxmemoryState(NULL,NULL,NULL,NULL) == C_OK) {
                result = C_OK;
                break;
            }
            usleep(1000);
        }
        latencyEndMonitor(lazyfree_latency);
        latencyAddSampleIfNeeded("eviction-lazyfree",lazyfree_latency);
    }
    latencyEndMonitor(latency);
    latencyAddSampleIfNeeded("eviction-cycle",latency);
    return result;
}
If memory cannot be freed, the main thread sleeps in 1 ms steps while waiting for the background lazy-free jobs to finish.
Conclusion:

- LRU/LFU only differ in what they tag each object with; the real filtering is still random sampling, and the LRU/LFU/TTL scores merely make the random choice more accurate.
- The ALLKEYS variants differ only in using the main dict instead of the expires dictionary, and with ALLKEYS normal (non-expiring) data can be evicted.
Periodic deletion
Both mechanisms above only flush data when a command executes. Redis also flushes expired data periodically, much like the periodic rehash discussed earlier.
void databasesCron(void) {
    /* Expire keys by random sampling. Not required for slaves
     * as master will synthesize DELs for us. */
    if (server.active_expire_enabled) {
        if (iAmMaster()) {
            activeExpireCycle(ACTIVE_EXPIRE_CYCLE_SLOW);
        } else {
            expireSlaveKeys();
        }
    }
}
The activeExpireCycle method is executed periodically
void activeExpireCycle(int type) {
    unsigned long
    // active_expire_effort controls the parameters below
    effort = server.active_expire_effort-1, /* Rescale from 0 to 9. */
    // Number of keys sampled per loop
    config_keys_per_loop = ACTIVE_EXPIRE_CYCLE_KEYS_PER_LOOP +
                           ACTIVE_EXPIRE_CYCLE_KEYS_PER_LOOP/4*effort,
    // Duration of a fast cycle
    config_cycle_fast_duration = ACTIVE_EXPIRE_CYCLE_FAST_DURATION +
                                 ACTIVE_EXPIRE_CYCLE_FAST_DURATION/4*effort,
    // CPU usage limit
    config_cycle_slow_time_perc = ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC +
                                  2*effort,
    // Acceptable percentage of stale samples
    config_cycle_acceptable_stale = ACTIVE_EXPIRE_CYCLE_ACCEPTABLE_STALE-
                                    effort;
The active_expire_effort setting, ranging from 1 to 10, can be tuned to control the size of the parameters above.
Given that range, the parameters fall into the following intervals:

- config_keys_per_loop, the number of samples: [25, 65]
- config_cycle_fast_duration, the fast cycle duration: [1250, 3250] microseconds
- config_cycle_slow_time_perc, the slow-scan CPU usage: [25, 43]%
- config_cycle_acceptable_stale, the acceptable stale-sample percentage: from 10% down to 1%
// Static variables record the last db scanned, whether the last run
// was interrupted, and when the last fast cycle ran
static unsigned int current_db = 0;
static int timelimit_exit = 0;
static long long last_fast_cycle = 0;
int j, iteration = 0;
// Number of databases handled per call, 16 by default
int dbs_per_call = CRON_DBS_PER_CALL;
long long start = ustime(), timelimit, elapsed;
These are some of the parameters
if (type == ACTIVE_EXPIRE_CYCLE_FAST) {
    // Skip if the last run did not exit early and the estimated
    // percentage of expired keys is below the configured threshold
    if (!timelimit_exit &&
        server.stat_expired_stale_perc < config_cycle_acceptable_stale)
        return;
    // Skip if less than twice the fast cycle duration has passed
    // since the last fast cycle started
    if (start < last_fast_cycle + (long long)config_cycle_fast_duration*2)
        return;
    last_fast_cycle = start;
}
If a fast scan is requested, either of the conditions above causes this round to be skipped.
// Do not scan more databases than exist
if (dbs_per_call > server.dbnum || timelimit_exit)
    dbs_per_call = server.dbnum;
// Compute the time limit from the CPU usage target and the cron frequency
timelimit = config_cycle_slow_time_perc*1000000/server.hz/100;
timelimit_exit = 0;
if (timelimit <= 0) timelimit = 1;
if (type == ACTIVE_EXPIRE_CYCLE_FAST)
    timelimit = config_cycle_fast_duration;
// Accumulated statistics
long total_sampled = 0;
long total_expired = 0;
dbs_per_call and timelimit are initialized conditionally.
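To make the CPU budget concrete, here is a tiny helper mirroring that formula (the helper name is invented): with the default 25% budget and server.hz = 10 (ten cron runs per second), each slow cycle may run for at most 25 ms.

```c
/* Sketch of the time-limit formula: time_perc percent of each of the
 * hz cron periods per second, expressed in microseconds. */
long long expire_timelimit(int time_perc, int hz) {
    long long t = (long long)time_perc * 1000000 / hz / 100;
    return t <= 0 ? 1 : t;   /* always allow at least 1 microsecond */
}
```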
Next comes the expiration process itself:
for (j = 0; j < dbs_per_call && timelimit_exit == 0; j++) {
unsigned long expired, sampled;
redisDb *db = server.db+(current_db % server.dbnum);
current_db++;
do {
unsigned long num, slots;
// Calculate the average expiration time of keys
long long now, ttl_sum;
int ttl_samples;
iteration++;
// If expires is empty, skip this database
if ((num = dictSize(db->expires)) == 0) {
    db->avg_ttl = 0;
    break;
}
slots = dictSlots(db->expires);
now = mstime();
// If the used fraction of the table (num/slots) is below 1%,
// scanning it is not worth the effort
if (num && slots > DICT_HT_INITIAL_SIZE &&
    (num*100/slots < 1)) break;
expired = 0;
sampled = 0;
ttl_sum = 0;
ttl_samples = 0;
if (num > config_keys_per_loop)
num = config_keys_per_loop;
// The maximum number of accesses is the number of samples * 20
long max_buckets = num*20;
long checked_buckets = 0;
while (sampled < num && checked_buckets < max_buckets) {
for (int table = 0; table < 2; table++) {
// Table 1 is only scanned while a rehash is in progress
if (table == 1 && !dictIsRehashing(db->expires)) break;
unsigned long idx = db->expires_cursor;
idx &= db->expires->ht[table].sizemask;
dictEntry *de = db->expires->ht[table].table[idx];
long long ttl;
// Scan all nodes on the slot
checked_buckets++;
while(de) {
dictEntry *e = de;
de = de->next;
ttl = dictGetSignedIntegerVal(e)-now;
// Try to expire the entry; count it if the deletion happened
if (activeExpireCycleTryExpire(db,e,now)) expired++;
if (ttl > 0) {
ttl_sum += ttl;
ttl_samples++;
}
sampled++;
}
}
db->expires_cursor++;
}
total_expired += expired;
total_sampled += sampled;
/* Update the average TTL stats for this database. */
if (ttl_samples) {
    long long avg_ttl = ttl_sum/ttl_samples;
    // The newly computed average only gets a 1/50 (2%) weight
    if (db->avg_ttl == 0) db->avg_ttl = avg_ttl;
    db->avg_ttl = (db->avg_ttl/50)*49 + (avg_ttl/50);
}
        // Every 16 iterations check the elapsed time and stop on timeout
        if ((iteration & 0xf) == 0) { /* check once every 16 iterations. */
            elapsed = ustime()-start;
            if (elapsed > timelimit) {
                timelimit_exit = 1;
                server.stat_expired_time_cap_reached_count++;
                break;
            }
        }
    // Continue with the next round if no samples were taken or the ratio
    // of expired samples is still above the acceptable threshold
    } while (sampled == 0 ||
             (expired*100/sampled) > config_cycle_acceptable_stale);
}
elapsed = ustime()-start;
server.stat_expire_cycle_time_used += elapsed;
latencyAddSampleIfNeeded("expire-cycle",elapsed/1000);
/* Update our estimate of keys existing but yet to be expired.
 * Running average with this sample accounting for 5%. */
double current_perc;
if (total_sampled) {
    current_perc = (double)total_expired/total_sampled;
} else
    current_perc = 0;
// When updating server.stat_expired_stale_perc, current_perc only
// gets a 5% weight
server.stat_expired_stale_perc = (current_perc*0.05)+
                                 (server.stat_expired_stale_perc*0.95);
Following the comments, the process is:

- All databases are scanned; the elapsed time is checked regularly, and on timeout the loop is interrupted and a flag is set
- Tables whose used fraction is below 1% are skipped, because sampling them is not worth it
- The average TTL observed while scanning each db is recorded, with the new value weighted at only 2%
- Finally stat_expired_stale_perc (the estimated percentage of stale keys) is updated, with this round's sample weighted at only 5%
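The two running averages can be sketched as follows (the helper names are invented; the bodies mirror the expressions above):

```c
/* avg_ttl: the new sample only gets a 1/50 (2%) weight. */
long long avg_ttl_update(long long avg, long long sample) {
    if (avg == 0) avg = sample;        /* bootstrap from the first sample */
    return (avg/50)*49 + (sample/50);
}

/* stat_expired_stale_perc: the new sample only gets a 5% weight. */
double stale_perc_update(double perc, double sample) {
    return sample*0.05 + perc*0.95;
}
```

Both are exponential moving averages, so one noisy round of sampling barely moves the estimate.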
int activeExpireCycleTryExpire(redisDb *db, dictEntry *de, long long now) {
    long long t = dictGetSignedIntegerVal(de);
    if (now > t) {
        sds key = dictGetKey(de);
        robj *keyobj = createStringObject(key,sdslen(key));
        propagateExpire(db,keyobj,server.lazyfree_lazy_expire);
        if (server.lazyfree_lazy_expire)
            dbAsyncDelete(db,keyobj);
        else
            dbSyncDelete(db,keyobj);
        notifyKeyspaceEvent(NOTIFY_EXPIRED,
            "expired",keyobj,db->id);
        trackingInvalidateKey(NULL,keyobj);
        decrRefCount(keyobj);
        server.stat_expiredkeys++;
        return 1;
    } else {
        return 0;
    }
}
This is the method that deletes an expired key, again either synchronously or asynchronously.
The expires_cursor field on each db records the scan position among expired keys. Within the allotted time, as many expired keys as possible are deleted, and the expiration ratio is estimated.
Conclusion
- Redis's LRU/LFU differ mainly in what they record; the actual deletion logic is the same
- Redis's asynchronous deletion moves the freeing of memory off the main thread
- Redis can only free key-value memory, not the replication and AOF buffers
- When Redis cannot free enough memory it sleeps on the main thread, which can be hard to detect