This post was first published by Yanglbme in GitHub's Doocs project (over 30K stars). Project address: github.com/doocs/advan…

The interview question

What are Redis's expiration policies? What memory eviction mechanisms are there? Can you hand-write an LRU implementation?

What the interviewer is probing

If you don't know this topic, you will be caught off guard when asked. Worse, if you write production code assuming that everything written into Redis stays there, you will introduce all kinds of bugs into the system. Who is responsible then?

There are two common problems:

  • Why did data written to Redis disappear?

Some people find that Redis in production keeps losing data: you write something in, and a while later it is gone. If so, you are using Redis the wrong way. Redis is a cache; were you treating it as durable storage?

What does it mean to be a cache? It means the data lives in memory. Is memory infinite? No: memory is precious and limited, while disks are cheap and plentiful. A machine may have only a few tens of gigabytes of memory but several terabytes of disk. Redis is memory-based, and that is exactly what gives it high-performance, high-concurrency reads and writes.

Since memory is limited, suppose Redis is allowed to use only 10GB: what happens if you write 20GB of data into it? Obviously, 10GB of data has to be discarded so the other 10GB can be kept. Which data is discarded and which is retained? Naturally, the infrequently used data is evicted and the frequently used data is kept.

  • Why does data that has clearly expired still occupy memory?

This is determined by Redis's expiration policy.

Analysis of the interview question

Redis expiration policy

Redis's expiration policy is: periodic deletion + lazy deletion.

Periodic deletion means that, by default, every 100ms Redis randomly samples some of the keys that have an expiration time set, checks whether they have expired, and deletes the ones that have.

Suppose you put 100,000 keys in Redis, all with an expiration time set. If Redis scanned all 100,000 keys every few hundred milliseconds, it would grind to a halt: the CPU would be consumed entirely by checking for expired keys. Note that Redis does not sweep through every key with an expiration time every 100ms; that would be a performance disaster. Instead, it randomly samples a subset of keys every 100ms to check and delete.

The problem with periodic deletion is that many expired keys may still not have been deleted when their time comes. What then? That is what lazy deletion covers: whenever you fetch a key, Redis checks whether it has expired. If it has, Redis deletes it at that moment and returns nothing to you.
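
To see both policies from the client's point of view, here is a minimal sketch using the Jedis client; the host, port, key name, and TTL are illustrative assumptions, not part of the original article:

import redis.clients.jedis.Jedis;

public class ExpireDemo {
    public static void main(String[] args) throws InterruptedException {
        // Assumes a local Redis instance on the default port.
        try (Jedis jedis = new Jedis("localhost", 6379)) {
            // SETEX writes the value and attaches a 2-second TTL in one step.
            jedis.setex("session:42", 2, "some-user-data");
            System.out.println(jedis.get("session:42")); // "some-user-data"

            Thread.sleep(3000);

            // The key has logically expired. Whether a periodic sweep already
            // removed it, or Redis deletes it lazily on this read, the client sees null.
            System.out.println(jedis.get("session:42")); // null
        }
    }
}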

But even this combination leaves a problem. What if periodic deletion misses many expired keys, and you never fetch them, so lazy deletion never gets a chance to check them either? A large number of expired keys could pile up and exhaust Redis's memory. What then?

The answer: fall back on the memory eviction mechanism.

Memory eviction mechanism

Redis offers the following memory eviction policies (a configuration sketch follows the list):

  • noeviction: when memory is insufficient to hold newly written data, new write operations report an error.
  • allkeys-lru: when memory is insufficient to hold newly written data, remove the least recently used key from the whole keyspace (this is the most commonly used policy).
  • allkeys-random: when memory is insufficient to hold newly written data, remove a random key from the whole keyspace.
  • volatile-lru: when memory is insufficient to hold newly written data, remove the least recently used key from among the keys that have an expiration time set (this is generally not appropriate).
  • volatile-random: when memory is insufficient to hold newly written data, randomly remove a key from among the keys that have an expiration time set.
  • volatile-ttl: when memory is insufficient to hold newly written data, remove the key with the earliest expiration time from among the keys that have an expiration time set.
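
The policy is selected with the maxmemory-policy directive, alongside maxmemory, which sets the limit that triggers eviction. A minimal redis.conf sketch; the 100mb cap is an arbitrary example value:

# Cap Redis memory usage; once the cap is reached, the eviction policy kicks in.
maxmemory 100mb

# Evict the least recently used key across the whole keyspace.
maxmemory-policy allkeys-lru

The same settings can also be changed at runtime, e.g. CONFIG SET maxmemory-policy allkeys-lru.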

Write an LRU algorithm

You could hand-write a raw LRU algorithm from scratch on the spot, but the amount of code involved makes that impractical in an interview.

You don't need to build LRU from raw data structures, but you should at least know how to implement a Java version of LRU on top of the JDK's existing LinkedHashMap.

import java.util.LinkedHashMap;
import java.util.Map;

class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int CACHE_SIZE;

    /**
     * Pass in the maximum number of entries the cache may hold.
     *
     * @param cacheSize the cache capacity
     */
    public LRUCache(int cacheSize) {
        // true makes the LinkedHashMap order entries by access: the most
        // recently accessed entry moves to the tail, so the eldest sits at the head.
        // The capacity is scaled by the 0.75 load factor so the map never rehashes.
        super((int) Math.ceil(cacheSize / 0.75) + 1, 0.75f, true);
        CACHE_SIZE = cacheSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // When the number of entries exceeds the cache capacity,
        // automatically remove the eldest (least recently used) entry.
        return size() > CACHE_SIZE;
    }
}
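
A quick usage sketch (the keys and the capacity of 2 are made up for illustration):

public class LRUCacheDemo {
    public static void main(String[] args) {
        LRUCache<String, String> cache = new LRUCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // touch "a", so "b" is now the least recently used
        cache.put("c", "3"); // capacity exceeded: "b" is evicted automatically
        System.out.println(cache.keySet()); // prints [a, c]
    }
}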

Welcome to follow my WeChat public account "Doocs Open Source Community", where original technical articles are pushed as soon as they are published.