Preface
Based on the knowledge points commonly asked in Internet-company interviews, this article pulls together the interview questions and answers most frequently put to Java programmers. I hope it helps you review before an interview and land a good job, and saves you the time of searching the Internet for material.
Content covers: Java, MyBatis, ZooKeeper, Dubbo, Elasticsearch, Memcached, Redis, MySQL, Spring, SpringBoot, SpringCloud, RabbitMQ, Kafka, Linux and other technology stacks.
Setting an expiration time
When using Redis, many key-value pairs only need to be valid for a certain period of time. To prevent this kind of data from tying up memory, we can set an expiration time on a key-value pair. Redis provides four separate commands for setting a key's expiration:
- `expire key ttl`: sets the expiration time of `key` to `ttl` seconds.
- `pexpire key ttl`: sets the expiration time of `key` to `ttl` milliseconds.
- `expireat key timestamp`: sets the expiration time of `key` to the Unix timestamp `timestamp`, in seconds.
- `pexpireat key timestamp`: sets the expiration time of `key` to the Unix timestamp `timestamp`, in milliseconds.
PS: Whichever command is used, Redis ultimately implements it at the bottom layer with the `pexpireat` command. In addition, the `set` command can set the expiration time together with the key, which guarantees that setting the value and setting the expiration time are atomic.
After an expiration time has been set, you can run the `ttl` and `pttl` commands to query the remaining time to live. They return -1 if the key exists but has no expiration set, and -2 if the key does not exist.
- `ttl key`: returns the remaining time to live of `key` in seconds.
- `pttl key`: returns the remaining time to live of `key` in milliseconds.
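As a concrete illustration, here is a minimal sketch of these commands from Java. It assumes the Jedis client and a local Redis instance on the default port; the key names are made up for the example.

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class ExpireDemo {
    public static void main(String[] args) {
        // Assumption: a Redis server is running locally on the default port 6379
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            jedis.set("session:1", "payload");
            jedis.expire("session:1", 60);                  // EXPIRE: time to live of 60 seconds
            System.out.println(jedis.ttl("session:1"));     // remaining seconds, roughly 60

            // SET with the EX option writes the value and the TTL in one atomic command
            jedis.set("session:2", "payload", SetParams.setParams().ex(60));

            jedis.set("session:3", "payload");
            System.out.println(jedis.ttl("session:3"));     // -1: key exists but has no expiration
            System.out.println(jedis.ttl("no-such-key"));   // -2: key does not exist
        }
    }
}
```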
Expiry policies
There are generally three strategies for deleting expired keys:
Timed deletion
Set a timer for each key and delete the key as soon as its expiration time arrives. This strategy is memory-friendly but CPU-unfriendly, because every timer consumes some CPU resources.
Lazy deletion
Never delete a key proactively, whether it has expired or not. Only when the key is accessed do we check whether it has expired: if so, delete it; otherwise return its value. This strategy is CPU-friendly but not memory-friendly, and can waste a lot of memory.
Periodic scan
Scan the keys at regular intervals and delete the expired ones. This strategy is a compromise between the two above. Note that the scan frequency must be tuned to the actual situation. One drawback of this strategy is that an expired key may still be returned before the next scan removes it.
Redis uses a combination of strategy 2 and strategy 3: lazy deletion plus a periodic scan. The periodic scan only covers keys that have an expiration time set, because Redis stores those keys in a separate dictionary instead of scanning all keys:
```c
typedef struct redisDb {
    dict *dict;           /* All key-value pairs of this database */
    dict *expires;        /* Keys that have an expiration time set */
    dict *blocking_keys;  /* Keys with clients blocked waiting for data */
    dict *watched_keys;   /* WATCHED keys */
    int id;               /* Database ID */
    /* ... other fields omitted ... */
} redisDb;
```
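To make the combination of lazy deletion and periodic scanning more concrete, here is a simplified, hypothetical Java sketch (not Redis source code). Like the `expires` dictionary above, it keeps the keys with a TTL in a separate map; all names here are illustrative.

```java
import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A toy cache illustrating lazy deletion (check on read) plus a periodic scan.
public class ExpiringCache {
    private final Map<String, String> data = new ConcurrentHashMap<>();
    private final Map<String, Long> expireAt = new ConcurrentHashMap<>();  // key -> expiry timestamp (ms)

    public void set(String key, String value, long ttlMillis) {
        data.put(key, value);
        expireAt.put(key, System.currentTimeMillis() + ttlMillis);
    }

    // Lazy deletion: only when the key is read do we check whether it has expired.
    public String get(String key) {
        Long deadline = expireAt.get(key);
        if (deadline != null && deadline <= System.currentTimeMillis()) {
            data.remove(key);
            expireAt.remove(key);
            return null;
        }
        return data.get(key);
    }

    // Periodic scan: called from a scheduler, only iterates keys that have a TTL set.
    public void scanExpired() {
        long now = System.currentTimeMillis();
        Iterator<Map.Entry<String, Long>> it = expireAt.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, Long> e = it.next();
            if (e.getValue() <= now) {
                data.remove(e.getKey());
                it.remove();
            }
        }
    }
}
```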
Eight eviction policies
What happens if none of the keys in Redis have expired, memory is full, and a client keeps executing commands such as `set`? Redis offers different eviction policies to handle this scenario.
First, Redis provides the `maxmemory` parameter to configure the maximum amount of memory Redis may use:
```
maxmemory <bytes>
```
Alternatively, you can run `config set maxmemory 1GB` to change the limit dynamically. If this parameter is not set, Redis uses at most 3GB of memory on 32-bit operating systems and has no limit on 64-bit operating systems.
Redis provides eight eviction policies, which can be configured with the `maxmemory-policy` parameter: `noeviction` (the default, which rejects further writes when memory is full), `volatile-lru`, `volatile-lfu`, `volatile-random` and `volatile-ttl` (which only consider keys with an expiration time set), and `allkeys-lru`, `allkeys-lfu` and `allkeys-random` (which consider all keys).
PS: You can also run the `config set maxmemory-policy <policy>` command to configure the eviction policy dynamically.
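Both parameters can also be changed at runtime from Java. The sketch below assumes the Jedis client; `configSet` simply issues the same `config set` command mentioned above.

```java
import redis.clients.jedis.Jedis;

public class EvictionConfigDemo {
    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // Equivalent to: config set maxmemory 1gb / config set maxmemory-policy allkeys-lru
            jedis.configSet("maxmemory", "1gb");
            jedis.configSet("maxmemory-policy", "allkeys-lru");
            System.out.println(jedis.configGet("maxmemory-policy"));
        }
    }
}
```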
LRU algorithm
LRU stands for Least Recently Used: the key that has gone unused for the longest time is evicted first. This policy is based on access time.
Redis's improved LRU algorithm
Redis does not adopt the traditional LRU algorithm, because the traditional algorithm has two problems:
- It requires extra space to store the access order.
- Some keys may be used frequently, just not recently, and would still be evicted by the LRU algorithm.
To avoid these two problems, Redis reworked the traditional LRU algorithm and evicts keys based on sampling.
The configuration file provides the `maxmemory-samples` attribute (default 5): Redis randomly samples that many keys and applies the LRU algorithm among them to pick the one to evict, so clearly, the larger the sample size, the more accurate the eviction.
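The sampling idea fits in a few lines. The following is a simplified illustration, not the actual Redis implementation: pick `maxmemory-samples` random keys and evict the one with the oldest access time. The class and method names are made up for the example.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Approximate (sampled) LRU: instead of maintaining a full access-ordered list,
// sample a few random keys and evict the least recently accessed one among them.
public class SampledLruEviction {
    private final Map<String, Long> lastAccess = new HashMap<>();  // key -> last access time (ms)
    private final Random random = new Random();
    private final int samples = 5;  // corresponds to maxmemory-samples (default 5)

    public void touch(String key) {
        lastAccess.put(key, System.currentTimeMillis());
    }

    public String pickVictim() {
        if (lastAccess.isEmpty()) {
            return null;
        }
        String[] keys = lastAccess.keySet().toArray(new String[0]);
        String victim = null;
        long oldest = Long.MAX_VALUE;
        for (int i = 0; i < samples; i++) {
            String candidate = keys[random.nextInt(keys.length)];
            long accessTime = lastAccess.get(candidate);
            if (accessTime < oldest) {
                oldest = accessTime;
                victim = candidate;  // least recently accessed key seen so far
            }
        }
        return victim;  // the key to evict
    }
}
```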
The official Redis website has a chart comparing the sampled LRU algorithm with the traditional LRU algorithm:
- Light gray bands are objects that have been evicted.
- Gray bands are objects that have not been evicted.
- Green bands are newly added objects.
The top-left image represents the traditional LRU algorithm. As you can see, with a sample size of 10 (top right), the result is already very close to the traditional LRU algorithm.
How does Redis manage hot data?
When we discussed string objects earlier, we mentioned that redisObject has an `lru` attribute:
```c
typedef struct redisObject {
    unsigned type:4;        /* Object type (4 bits = 0.5 bytes) */
    unsigned encoding:4;    /* Encoding (4 bits = 0.5 bytes) */
    unsigned lru:LRU_BITS;  /* Time the object was last accessed (24 bits = 3 bytes) */
    int refcount;           /* Reference count (4 bytes) */
    void *ptr;              /* Points to the underlying data structure, e.g. SDS (8 bytes) */
} robj;
```
The `lru` attribute is written when the object is created and updated whenever the object is accessed. The intuitive way to decide which key to evict would be to subtract `lru` from the current timestamp and evict the key with the largest difference, but Redis does not do that. Instead, Redis maintains a global attribute `lru_clock`, which is updated every 100 milliseconds by the global function serverCron and records the current Unix timestamp.
The eviction decision is ultimately made by subtracting the object's `lru` attribute from `lru_clock`. Why does Redis do it this way? Wouldn't taking the system time directly be more accurate?
Because this way, updating an object's `lru` attribute only reads a global attribute instead of calling a system function to get the system time each time, which improves efficiency (Redis is full of details like this that squeeze out every bit of performance).
However, the `lru` attribute in redisObject is only 24 bits, which can only hold about 194 days' worth of timestamp. After 194 days the value wraps around and starts again from 0, so an object's `lru` attribute may end up greater than the global `lru_clock` attribute.
Because of this, the calculation also needs to be divided into two cases:
- When the global `lru_clock` > `lru`, the idle time is `lru_clock - lru`.
- When the global `lru_clock` < `lru`, the idle time is `lru_clock_max` (i.e. 194 days) `- lru + lru_clock`.
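Here is a small sketch of this two-case calculation. The names (`LruIdleTime`, `LRU_CLOCK_MAX`, `idleTime`) are illustrative, not the actual Redis identifiers, and the unit is simplified to seconds.

```java
// Idle time derived from a wrapping 24-bit clock, following the two cases above.
public class LruIdleTime {
    static final long LRU_CLOCK_MAX = (1L << 24) - 1;  // 24-bit clock, roughly 194 days in seconds

    // lruClock: the global clock; objectLru: the lru field stored in the object
    static long idleTime(long lruClock, long objectLru) {
        if (lruClock >= objectLru) {
            return lruClock - objectLru;              // normal case: clock is ahead of the object
        }
        return LRU_CLOCK_MAX - objectLru + lruClock;  // clock wrapped around since the last access
    }

    public static void main(String[] args) {
        System.out.println(idleTime(1000, 400));                 // 600: normal case
        System.out.println(idleTime(100, LRU_CLOCK_MAX - 50));   // 150: wrap-around case
    }
}
```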
Note that this calculation does not guarantee that the sampled key with the longest idle time is the one deleted. That is because going unused for the entire first 194 days is rare, and the calculation only goes wrong if `lru_clock` has wrapped into a second cycle and again exceeds the object's `lru` attribute.
For example, object A's recorded `lru` is day 1, but `lru_clock` has already reached day 10 of its second cycle, so the computed idle time is only 10 - 1 = 9 days, when it is actually 194 + 10 - 1 = 203 days. This situation is rare, so although this approach can occasionally evict the wrong key, the algorithm is an approximation anyway and the impact is minimal.
LFU algorithm
LFU stands for Least Frequently Used: the key with the lowest recent access frequency is evicted first. This policy is based on access frequency, and the information is also recorded in the `lru` attribute of redisObject.
When the LFU eviction policy is used, the high 16 bits of the `lru` attribute record the last access/decrement time (Last Decrement Time: ldt, in minutes) and the low 8 bits record the access frequency (Logistic Counter: logc), referred to as the counter.
Increasing access frequency
The LFU counter has only 8 bits per key, so the maximum value it can represent is 255. Redis therefore uses a probability-based logarithmic scheme to increment the counter.
Given the current access frequency, the counter is incremented as follows when the key is accessed (a code sketch follows the list below):
- Extract a random number `R` between 0 and 1.
- Subtract the initial value (default 5) from `counter` to get a base difference; if the difference is less than 0, use 0. For convenience, denote this difference as `baseval`.
- The probability `P` is calculated as: `1 / (baseval * lfu_log_factor + 1)`.
- If `R < P`, the frequency is incremented (`counter++`).
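Putting these steps together, here is a Java sketch of the probabilistic increment. It is an illustration rather than the Redis C source; the constant names mirror the defaults mentioned in the text.

```java
import java.util.concurrent.ThreadLocalRandom;

// Probabilistic (logarithmic) counter increment, as described in the list above.
public class LfuIncrement {
    static final int LFU_INIT_VAL = 5;    // initial counter value (default 5)
    static final int COUNTER_MAX = 255;   // counter is stored in 8 bits

    static int logIncr(int counter, int lfuLogFactor) {
        if (counter >= COUNTER_MAX) {
            return COUNTER_MAX;
        }
        double r = ThreadLocalRandom.current().nextDouble();   // random number R in [0, 1)
        double baseval = Math.max(counter - LFU_INIT_VAL, 0);  // difference from the initial value
        double p = 1.0 / (baseval * lfuLogFactor + 1);         // increment probability P
        return r < p ? counter + 1 : counter;                  // increment only if R < P
    }

    public static void main(String[] args) {
        int counter = LFU_INIT_VAL;
        for (int i = 0; i < 1_000_000; i++) {                  // simulate 1M accesses
            counter = logIncr(counter, 10);                    // default lfu_log_factor = 10
        }
        System.out.println(counter);                           // approaches 255 with the default factor
    }
}
```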
The `lfu_log_factor` in the formula is called the logarithmic factor. It defaults to 10 and can be controlled with a configuration parameter:
```
lfu-log-factor 10
```
Here is a chart of how the counter grows for different values of the logarithmic factor `lfu_log_factor`:
As you can see, when the logarithmic factor `lfu_log_factor` is 100, it takes roughly 10M (10 million) accesses for the counter to reach 255, while the default factor of 10 supports about 1M (1 million) accesses before the counter hits the 255 limit, which is sufficient for most scenarios.
Decreasing the access frequency
If the counter only ever increased, every key would sooner or later reach 255; in other words, a counter that only grows cannot truly reflect how hot a key is. Therefore, when a key is not accessed for a period of time, its counter also needs to decrease accordingly.
The rate at which the counter decays is controlled by the `lfu-decay-time` parameter, which defaults to 1 and is expressed in minutes. The default value of 1 means that if there is no access for N minutes, the counter is reduced by N.
```
lfu-decay-time 1
```
The specific algorithm is as follows:
- Get the current timestamp, convert it to minutes, and take the low 16 bits (for the subsequent calculation, this value is denoted as `now`).
- Take the high 16 bits of the object's `lru` attribute (this value is denoted as `ldt`).
- If `ldt > now`, assume one full cycle has passed (16 bits, maximum 65535) and take the difference `65535 - ldt + now`; if `ldt <= now`, take the difference `now - ldt` (this difference is denoted as `idle_time`).
- Take the `lfu_decay_time` value from the configuration file and compute `idle_time / lfu_decay_time` (this value is denoted as `num_periods`).
- Finally, reduce the counter: `counter - num_periods`.
It looks complicated, but the formula is simple: compare the current timestamp with the last access time stored in the object's `lru` attribute to work out how long the key has gone without being accessed, say 100 minutes; then divide by the configured `lfu_decay_time`. With the default of 1, 100 / 1 = 100, so 100 minutes without access reduces the counter by 100. A small sketch of this calculation follows.
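The sketch below expresses the decay calculation in Java. The names (`LfuDecay`, `decayedCounter`, `LDT_MAX`) are illustrative rather than the actual Redis identifiers.

```java
// Counter decay: how many decay periods have passed since the last access time
// (ldt, in minutes), with the 16-bit clock wrapping at 65535.
public class LfuDecay {
    static final int LDT_MAX = 65535;  // ldt is stored in 16 bits (minutes)

    // now and ldt are both "minutes" values truncated to 16 bits
    static int decayedCounter(int counter, int now, int ldt, int lfuDecayTime) {
        int idleMinutes = (ldt <= now) ? (now - ldt) : (LDT_MAX - ldt + now);
        int numPeriods = lfuDecayTime > 0 ? idleMinutes / lfuDecayTime : 0;
        return Math.max(counter - numPeriods, 0);   // counter never drops below 0
    }

    public static void main(String[] args) {
        // 100 minutes without access, lfu-decay-time = 1  ->  counter drops by 100
        System.out.println(decayedCounter(200, 1100, 1000, 1));  // prints 100
    }
}
```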
Conclusion
This article mainly introduced how Redis handles expired keys, the eight eviction policies Redis applies when server memory runs out, and finally the two main eviction algorithms used in Redis: LRU and LFU.