I believe you are familiar with the Redis EXPIRE command and have used it in your daily work. But do you understand how expiration actually works? Today we will take a brief look at the implementation of the EXPIRE command. I'm covering it together with Redis cache eviction because the two are often confused; we'll get to that later.
Expiration mechanism
> expire mykey 1000

When we run this, we set an expiration time of 1000 s on mykey. Redis stores this key in the expires dictionary of the server's db.
/* EXPIRE key seconds */
void expireCommand(client *c) {
expireGenericCommand(c,mstime(),UNIT_SECONDS);
}
void expireGenericCommand(client *c, long long basetime, int unit) {
    ...
    /* EXPIRE with negative TTL, or EXPIREAT with a timestamp into the past
     * should never be executed as a DEL when load the AOF or in the context
     * of a slave instance.
     *
     * Instead we take the other branch of the IF statement setting an expire
     * (possibly in the past) and wait for an explicit DEL from the master. */
    // If the expiry is already in the past, delete the key right away -- unless
    // we are loading data (e.g. from an AOF or RDB file) or this is a replica
    if (when <= mstime() && !server.loading && !server.masterhost) {
        ...
    } else {
        setExpire(c,c->db,key,when);
        ...
    }
    ...
}
After 1000 seconds the key expires and we can no longer access its value. But even though we can't access it, is the data still sitting in memory? Has Redis actually freed it?
Redis deals with expired keys in two ways: active deletion and passive deletion. With passive deletion, when a key is read, Redis checks whether it has expired; if so, Redis deletes it and returns null to the client. But if an expired key is never accessed, does it stay in memory forever? No: Redis also runs a scheduled task that deletes expired keys.
Passively deleting a Key
The expireIfNeeded function is called whenever Redis reads or writes data (via the lookupKey* family of functions; commands that completely overwrite a key, such as SET, are the exception) to determine whether the key has expired. On the master node, an expired key is deleted, a DEL or UNLINK is propagated to the slaves and the AOF file, and null is returned. On a slave node, the key is not deleted; null is simply returned.
/* This function is called when we are going to perform some operation
* in a given key, but such key may be already logically expired even if
* it still exists in the database. The main way this function is called
* is via lookupKey*() family of functions.
*
* The behavior of the function depends on the replication role of the
* instance, because slave instances do not expire keys, they wait
* for DELs from the master for consistency matters. However even
* slaves will try to have a coherent return value for the function,
* so that read commands executed in the slave side will be able to
* behave like if the key is expired even if still present (because the
* master has yet to propagate the DEL).
*
* In masters as a side effect of finding a key which is expired, such
* key will be evicted from the database. Also this may trigger the
* propagation of a DEL/UNLINK command in AOF / replication stream.
*
* The return value of the function is 0 if the key is still valid,
* otherwise the function returns 1 if the key is expired. */
int expireIfNeeded(redisDb *db, robj *key) {
// Whether the key has expired. If not, 0 is returned
if (!keyIsExpired(db,key)) return 0;
/* If we are running in the context of a slave, instead of
 * evicting the expired key from the database, we return ASAP:
 * the slave key expiration is controlled by the master that will
 * send us synthesized DEL operations for expired keys.
 *
 * Still we try to return the right information to the caller,
 * that is, 0 if we think the key should be still valid, 1 if
 * we think the key is expired at this time. */
if (server.masterhost != NULL) return 1;
/* Delete the key */
server.stat_expiredkeys++;
// Handle expired keys for persistence and master-slave replication
propagateExpire(db,key,server.lazyfree_lazy_expire);
notifyKeyspaceEvent(NOTIFY_EXPIRED,
"expired",key,db->id);
// Delete the expired key
return server.lazyfree_lazy_expire ? dbAsyncDelete(db,key) :
dbSyncDelete(db,key);
}
The code comments are clear and the logic is simple, so I won’t go over them.
Actively deleting a Key
When Redis starts, it registers a timer event:
/* Create the timer callback, this is our way to process many background
 * operations incrementally, like clients timeout, eviction of unaccessed
 * expired keys and so forth. */
// Set the background timer event to serverCron().
if (aeCreateTimeEvent(server.el, 1, serverCron, NULL, NULL) == AE_ERR) {
serverPanic("Can't create event loop timers.");
exit(1);
}
The handler is the serverCron function. The timer first fires 1 ms after startup; after that, serverCron's return value (1000/server.hz milliseconds) sets the interval, so with the default hz of 10 it runs 10 times per second.
/* This is our timer interrupt, called server.hz times per second.
 * Here is where we do a number of things that need to be done asynchronously.
 * For instance:
 *
 * - Active expired keys collection (it is also performed in a lazy way on
 *   lookup).
 * - Software watchdog.
 * - Update some statistic.
 * - Incremental rehashing of the DBs hash tables.
 * - Triggering BGSAVE / AOF rewrite, and handling of terminated children.
 * - Clients timeout of different kinds.
 * - Replication reconnection.
 * - Many more...
 *
 * Everything directly called here will be called server.hz times per second,
 * so in order to throttle execution of things we want to do less frequently
 * a macro is used: run_with_period(milliseconds) { .... } */
int serverCron(struct aeEventLoop *eventLoop, long long id, void *clientData) {
    ...
    /* Handle background operations on Redis databases. */
    databasesCron();
    ...
    // server.hz defaults to 10 in redis.conf, so this returns 100 ms: 10 runs per second
    return 1000/server.hz;
}
# Redis calls an internal function to perform many background tasks, like
# closing connections of clients in timeout, purging expired keys that are
# never requested, and so forth.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value will use more CPU when
# Redis is idle, but at the same time will make Redis more responsive when
# there are many keys expiring at the same time, and timeouts may be
# handled with more precision.
#
# The range is between 1 and 500, however a value over 100 is usually not
# a good idea. Most users should use the default of 10 and raise this up to
# 100 only in environments where very low latency is required.
hz 10
/* This function handles 'background' operations we are required to do * incrementally in Redis databases, such as active key expiring, resizing, * rehashing. */
void databasesCron(void) {
    /* Expire keys by random sampling. Not required for slaves
     * as master will synthesize DELs for us. */
    if (server.active_expire_enabled) {
        if (server.masterhost == NULL) {
            // Master instance
            activeExpireCycle(ACTIVE_EXPIRE_CYCLE_SLOW);
        } else {
            // Slave instance
            expireSlaveKeys();
        }
    }
    ...
}
The activeExpireCycle function performs the actual expired-key cleanup:
/* Try to expire a few timed out keys. The algorithm used is adaptive and
 * will use few CPU cycles if there are few expiring keys, otherwise
 * it will get more aggressive to avoid that too much memory is used by
 * keys that can be removed from the keyspace.
 *
 * No more than CRON_DBS_PER_CALL databases are tested at every
 * iteration.
 *
 * This kind of call is used when Redis detects that timelimit_exit is
 * true, so there is more work to do, and we do it more incrementally from
 * the beforeSleep() function of the event loop.
 *
 * Expire cycle type:
 *
 * If type is ACTIVE_EXPIRE_CYCLE_FAST the function will try to run a
 * "fast" expire cycle that takes no longer than EXPIRE_FAST_CYCLE_DURATION
 * microseconds, and is not repeated again before the same amount of time.
 *
 * If type is ACTIVE_EXPIRE_CYCLE_SLOW, that normal expire cycle is
 * executed, where the time limit is a percentage of the REDIS_HZ period
 * as specified by the ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC define. */
void activeExpireCycle(int type) {
    /* We can use at max ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC percentage of CPU time
     * per iteration. Since this function gets called with a frequency of
     * server.hz times per second, the following is the max amount of
     * microseconds we can spend in this function. */
    // 1000000 * 25 / 10 / 100 = 25000 microseconds
    timelimit = 1000000*ACTIVE_EXPIRE_CYCLE_SLOW_TIME_PERC/server.hz/100;
    ...
    /* Continue to expire if at the end of the cycle more than 25%
     * of the keys were expired. */
    do {
        ...
        // num <= 20
        while (num--) {
            ...
            // Get a random key from db->expires
            if ((de = dictGetRandomKey(db->expires)) == NULL) break;
            ttl = dictGetSignedIntegerVal(de)-now;
            // If the key has expired, propagate UNLINK or DEL to the slaves and the AOF file, then delete it
            if (activeExpireCycleTryExpire(db,de,now)) expired++;
            ...
        }
        /* We don't repeat the cycle if there are less than 25% of keys
         * found expired in the current DB. */
    } while (expired > ACTIVE_EXPIRE_CYCLE_LOOKUPS_PER_LOOP/4);
}
The general logic, run 10 times per second, is:

- Sample 20 keys at random from those with an expiration set (the db->expires dictionary)
- Delete every sampled key that turns out to be expired
- If more than 25% of the sampled keys were expired, start again from step 1
This algorithm is probabilistic and does not remove every expired key, which keeps the thread from blocking and slowing client responses. If active deletion misses an expired key, passive deletion can still catch it. If passive deletion also misses it, the cache eviction mechanism below may reclaim it. Eviction is also probabilistic, so omissions are possible there too, as explained below. This is a trade-off Redis makes for efficiency and fast responses.
A few tips
1. Expiration depends on the server's system clock. For example, on the primary instance:
> expire mykey 1000
If the slave's clock is ahead, the key may already count as expired by the time it is replicated there, and it will not be readable on the slave. Redis expiration is always judged against the server's own time.
2. Clearing the expiration time
Only commands that delete or overwrite key content (including DEL, SET, GETSET, and all * STORE commands) can clear the expiration time. This means that all operations that conceptually change the value stored on the key without replacing it with a new key will keep the expiration time unchanged. For example, incrementing the value of a key using INCR, pushing a new value into a list using LPUSH, or changing the value of a field in a hash using HSET are all operations that leave the timeout unchanged.
>lpush mylist q
"1"
>expire mylist 1000
"1"
>ttl mylist
"997"
>lpush mylist w
"2"
>ttl mylist
"985"
>set mykey a
"OK"
>expire mykey 1000
"1"
>ttl mykey
"990"
>set mykey b
"OK"
>ttl mykey
"1"
Cache eviction
Redis data lives in memory, and memory is finite; sooner or later it fills up. When the memory threshold is reached, Redis runs a cache eviction policy to free part of it. LRU is the classic eviction algorithm; the LFU algorithm was added in version 4.0.
Eviction policy configuration
1. maxmemory: the maximum memory limit

In redis.conf:
maxmemory 100mb
Or when Redis is running, use the CONFIG SET command:
> config set maxmemory 100mb
2. maxmemory-policy: the eviction policy
Since Redis 4.0 there are eight eviction policies:

- volatile-lru: evict using the LRU algorithm, only among keys with an expiration set
- allkeys-lru: evict using the LRU algorithm, among all keys (recommended)
- volatile-lfu: evict using the LFU algorithm, only among keys with an expiration set
- allkeys-lfu: evict using the LFU algorithm, among all keys (recommended on 4.0 and above)
- volatile-random: evict randomly among keys with an expiration set
- allkeys-random: evict randomly among all keys
- volatile-ttl: evict the keys with the smallest TTL, i.e. those closest to expiring
- noeviction: evict nothing; return an error when memory is insufficient
Why is the allkeys-lru policy recommended?

- LFU is only available from version 4.0, so LRU avoids version incompatibilities
- The random policies can easily evict hot data, and then a cache breakdown follows
- If few keys have an expiration set, the volatile-* policies evict very little and memory quickly runs out

Of course, allkeys-lfu is recommended if your Redis is version 4.0 or higher.
3. maxmemory-samples: the number of random samples
The sampling precision is the number of keys taken out at random: the larger the value, the closer the behavior is to a true LRU, but also the higher the cost, which affects performance. The default value is 5.
When Redis processes a command, it calls the freeMemoryIfNeededAndSafe function to decide whether eviction is needed and, if so, executes the configured eviction policy.
int processCommand(client *c) {
    ...
    if (server.maxmemory && !server.lua_timedout) {
        int out_of_memory = freeMemoryIfNeededAndSafe() == C_ERR;
        ...
    }
    ...
}
Approximate LRU algorithm
Instead of using a real LRU algorithm, which would be too memory intensive, Redis uses an approximate LRU algorithm to sample a small number of keys and then drive the oldest unused key out of the sampled keys. The sample number is maxmemory-samples, with a default value of 5. Version 3.0 further optimizes the algorithm to make it perform more like a true LRU algorithm.
The keys and values of Redis are redisObject objects:
typedef struct redisObject {
unsigned type:4;
unsigned encoding:4;
unsigned lru:LRU_BITS; /* LRU time (relative to global lru_clock) or
                        * LFU data (least significant 8 bits frequency
                        * and most significant 16 bits access time). */
int refcount;
void *ptr;
} robj;
The redisObject has a 24-bit lru field. Under the LRU policy it stores a reduced-precision timestamp, LRU_CLOCK, which is refreshed whenever the key is updated or accessed:
/* Return the LRU clock, based on the clock resolution. This is a time
 * in a reduced-bits format that can be used to set and check the
 * object->lru field of redisObject structures. */
unsigned int getLRUClock(void) {
return (mstime()/LRU_CLOCK_RESOLUTION) & LRU_CLOCK_MAX;
}
Redis 3.0 added a pool of 16 eviction candidates. When eviction runs, candidate keys are sampled from every db:
/* We don't want to make local-db choices when expiring keys,
 * so to start populate the eviction pool sampling keys from
 * every DB. */
for (i = 0; i < server.dbnum; i++) {
db = server.db+i;
dict = (server.maxmemory_policy & MAXMEMORY_FLAG_ALLKEYS) ?
db->dict : db->expires;
    if ((keys = dictSize(dict)) != 0) {
        // Populate the pool of eviction candidates
        evictionPoolPopulate(i, dict, db->dict, pool);
        total_keys += keys;
    }
}
Eviction uses the notion of idle time: how long a key has gone unaccessed (for LRU, idle = lru_clock - lru). The pool is kept sorted by ascending idle. maxmemory-samples keys are randomly selected from the key space (all keys, or only keys with an expiration set, depending on the policy) and their idle times computed. A key enters the pool only if the pool has a free slot or its idle time exceeds the pool's current minimum. The key with the maximum idle time is then evicted from the pool.
The comparison between real LRU algorithm and approximate LRU algorithm is shown as follows:
The light gray band is the object that has been eliminated, the gray band is the object that has not been eliminated, and the green band is the object that has been added. As can be seen, Redis 3.0 works better than Redis 2.8 when maxmemory-samples is 5. With maxmemory-samples at 10, Redis 3.0 is pretty close to theoretical performance.
LFU
Configuration of LFU policies: in addition to the maxmemory, maxmemory-policy, and maxmemory-samples settings mentioned above (set maxmemory-policy to volatile-lfu or allkeys-lfu), there are two further configuration items:
# Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good
# idea to start with the default settings and only change them after investigating
# how to improve the performances and how the keys LFU change over time, which
# is possible to inspect via the OBJECT FREQ command.
#
# There are two tunable parameters in the Redis LFU implementation: the
# counter logarithm factor and the counter decay time. It is important to
# understand what the two parameters mean before changing them.
#
# The LFU counter is just 8 bits per key, it's maximum value is 255, so Redis
# uses a probabilistic increment with logarithmic behavior. Given the value
# of the old counter, when a key is accessed, the counter is incremented in
# this way:
#
# 1. A random number R between 0 and 1 is extracted.
# 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1).
# 3. The counter is incremented only if R < P.
#
# The default lfu-log-factor is 10. This is a table of how the frequency
# counter changes with a different number of accesses with different
# logarithmic factors:
#
# +--------+------------+------------+------------+------------+------------+
# | factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |
# +--------+------------+------------+------------+------------+------------+
# | 0      | 104        | 255        | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 1      | 18         | 49         | 255        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 10     | 10         | 18         | 142        | 255        | 255        |
# +--------+------------+------------+------------+------------+------------+
# | 100    | 8          | 11         | 49         | 143        | 255        |
# +--------+------------+------------+------------+------------+------------+
#
# NOTE: The above table was obtained by running the following commands:
#
#   redis-benchmark -n 1000000 incr foo
#   redis-cli object freq foo
#
# NOTE 2: The counter initial value is 5 in order to give new objects a chance
# to accumulate hits.
#
# The counter decay time is the time, in minutes, that must elapse in order
# for the key counter to be divided by two (or decremented if it has a value
# <= 10).
#
# The default value for the lfu-decay-time is 1. A special value of 0 means to
# decay the counter every time it happens to be scanned.
lfu-log-factor 10
lfu-decay-time 1
- lfu-log-factor: adjusts the counter growth rate; the larger the factor, the slower the counter grows
- lfu-decay-time: controls how fast the counter decays, in minutes
Recall the lru field of the redisObject struct shown earlier.
When LFU is in use, the low 8 bits of lru hold the counter and the high 16 bits hold the access time (at minute resolution).
Redis's LFU algorithm maintains a counter for each key. The counter grows as the key is accessed; the larger the counter, the more frequently the key has been accessed.
Update this value when data is accessed:
robj *lookupKey(redisDb *db, robj *key, int flags) {
    ...
    if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU) {
        updateLFU(val);
    }
    ...
}
updateLFU:
/* Update LFU when an object is accessed.
 * Firstly, decrement the counter if the decrement time is reached.
 * Then logarithmically increment the counter, and update the access time. */
void updateLFU(robj *val) {
unsigned long counter = LFUDecrAndReturn(val);
counter = LFULogIncr(counter);
val->lru = (LFUGetTimeInMinutes()<<8) | counter;
}
Decaying the counter: LFUDecrAndReturn. The counter is decayed before it is incremented.
/* If the object decrement time is reached decrement the LFU counter but
 * do not update LFU fields of the object, we update the access time
 * and counter in an explicit way when the object is really accessed.
 * And we will times halve the counter according to the times of
 * elapsed time than server.lfu_decay_time.
 * Return the object frequency counter.
 *
 * This function is used in order to scan the dataset for the best object
 * to fit: as we check for the candidate, we incrementally decrement the
 * counter of the scanned objects if needed. */
unsigned long LFUDecrAndReturn(robj *o) {
unsigned long ldt = o->lru >> 8;
unsigned long counter = o->lru & 255;
unsigned long num_periods = server.lfu_decay_time ? LFUTimeElapsed(ldt) / server.lfu_decay_time : 0;
if (num_periods)
counter = (num_periods > counter) ? 0 : counter - num_periods;
return counter;
}
The function first extracts the last decrement time ldt from the high 16 bits and the counter from the low 8 bits, then computes whether, and by how much, to decay based on the configured lfu_decay_time.
LFUTimeElapsed computes how many minutes have elapsed since ldt:
/* Return the current time in minutes, just taking the least significant
 * 16 bits. The returned time is suitable to be stored as LDT (last decrement
 * time) for the LFU implementation. */
// The current time (minute level) is 16 bits lower
unsigned long LFUGetTimeInMinutes(void) {
return (server.unixtime/60) & 65535;
}
/* Given an object last access time, compute the minimum number of minutes
 * that elapsed since the last access. Handle overflow (ldt greater than
 * the current 16 bits minutes time) considering the time as wrapping
 * exactly once. */
unsigned long LFUTimeElapsed(unsigned long ldt) {
unsigned long now = LFUGetTimeInMinutes();
if (now >= ldt) return now-ldt;
return 65535-ldt+now;
}
The current time is taken at minute resolution, keeping only the low 16 bits, and the difference now-ldt is computed. When ldt > now, the clock is assumed to have wrapped around exactly once (16 bits, maximum 65535), so the elapsed time is 65535-ldt+now. That elapsed time is then divided by lfu_decay_time: LFUTimeElapsed(ldt)/server.lfu_decay_time periods have passed, so the counter is reduced by that many, counter - num_periods. If less than one lfu_decay_time period has passed, the counter stays the same.
This prevents a key that was heavily accessed in one period, but rarely accessed afterwards, from keeping an inflated counter: a large counter alone does not mean the key is still being accessed frequently. As long as the key keeps being accessed, its counter will not decay away.
Incrementing the counter: LFULogIncr
/* Logarithmically increment a counter. The greater is the current counter value
 * the less likely is that it gets really implemented. Saturate it at 255. */
uint8_t LFULogIncr(uint8_t counter) {
if (counter == 255) return 255;
double r = (double)rand()/RAND_MAX;
double baseval = counter - LFU_INIT_VAL;
if (baseval < 0) baseval = 0;
double p = 1.0/(baseval*server.lfu_log_factor+1);
if (r < p) counter++;
return counter;
}
Instead of simply adding 1 on every access, the counter uses a control probability p between 0 and 1, computed as p = 1.0/(baseval*lfu_log_factor+1), and saturates at 255. A random number r between 0 and 1 is drawn and compared with p: the counter is incremented only when r < p. The larger the counter already is, the smaller p becomes, so growth is logarithmic, as the table below shows.
+--------+------------+------------+------------+------------+------------+
| factor | 100 hits   | 1000 hits  | 100K hits  | 1M hits    | 10M hits   |
+--------+------------+------------+------------+------------+------------+
| 0      | 104        | 255        | 255        | 255        | 255        |
+--------+------------+------------+------------+------------+------------+
| 1      | 18         | 49         | 255        | 255        | 255        |
+--------+------------+------------+------------+------------+------------+
| 10     | 10         | 18         | 142        | 255        | 255        |
+--------+------------+------------+------------+------------+------------+
| 100    | 8          | 11         | 49         | 143        | 255        |
+--------+------------+------------+------------+------------+------------+
When creating a new object, a counter of 0 would make it an instant eviction target, so Redis gives each new key an initial counter value:
robj *createObject(int type, void *ptr) {
robj *o = zmalloc(sizeof(*o));
o->type = type;
o->encoding = OBJ_ENCODING_RAW;
o->ptr = ptr;
o->refcount = 1;
/* Set the LRU to the current lruclock (minutes resolution), or
 * alternatively the LFU counter. */
if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU) {
// The initial counter is 5
o->lru = (LFUGetTimeInMinutes()<<8) | LFU_INIT_VAL;
} else {
o->lru = LRU_CLOCK();
}
return o;
}
LFU uses the same eviction-candidate pool as LRU; the logic is identical except for how idle is computed.
/* This is an helper function for freeMemoryIfNeeded(), it is used in order
 * to populate the evictionPool with a few entries every time we want to
 * expire a key. Keys with idle time smaller than one of the current
 * keys are added. Keys are always added if there are free entries.
 *
 * We insert keys on place in ascending order, so keys with the smaller
 * idle time are on the left, and keys with the higher idle time on the
 * right. */
void evictionPoolPopulate(int dbid, dict *sampledict, dict *keydict, struct evictionPoolEntry *pool) {
    ...
    // Random sampling
count = dictGetSomeKeys(sampledict,samples,server.maxmemory_samples);
for (j = 0; j < count; j++) {
        unsigned long long idle;
        ...
        /* Calculate the idle time according to the policy. This is called
         * idle just because the code initially handled LRU, but is in fact
         * just a score where an higher score means better candidate. */
if (server.maxmemory_policy & MAXMEMORY_FLAG_LRU) {
// LRU idle calculation
idle = estimateObjectIdleTime(o);
} else if (server.maxmemory_policy & MAXMEMORY_FLAG_LFU) {
/* When we use an LRU policy, we sort the keys by idle time * so that we expire keys starting from greater idle time. * However when the policy is an LFU one, we have a frequency * estimation, and we want to evict keys with lower frequency * first. So inside the pool we put objects using the inverted * frequency subtracting the actual frequency to the maximum * frequency of 255. */
// LFU idle calculation
idle = 255-LFUDecrAndReturn(o);
} else if (server.maxmemory_policy == MAXMEMORY_VOLATILE_TTL) {
/* In this case the sooner the expire the better. */
idle = ULLONG_MAX - (long)dictGetVal(de);
} else {
            serverPanic("Unknown eviction policy in evictionPoolPopulate()");
        }
        ...
    }
}
For LFU, the idle score is computed as idle = 255-LFUDecrAndReturn(o): the lower a key's frequency counter, the higher its idle score, so the least frequently used keys are evicted first.
Afterword
Many people seem to believe that an expired Redis key is checked and deleted when accessed, and otherwise reclaimed by the eviction policy. That's not quite right. Expired keys are removed either actively or passively, as described above. The eviction policy may happen to evict an expired key, but it does not care whether a key has expired; it simply deletes whatever its algorithm selects.
Both the LRU and LFU eviction strategies maintain a pool of eviction candidates: maxmemory-samples keys are sampled at random, their idle scores are computed (the computation differs between the two), candidates are inserted into the pool if eligible, and finally the key with the largest idle score in the pool is evicted.
P.S. Whew, that was a long one. If you found it useful, give it a like ^_^