Cache breakdown

Cache breakdown occurs when a very hot key, one that is being hit by heavy, sustained concurrent traffic, expires. At the moment of expiration, the concurrent requests pierce the cache and go straight to the database, as if a hole had been cut in a barrier.

So how to solve this problem?

There are many ways to solve this problem, such as keeping the cache resident in memory or refreshing it asynchronously. Here we introduce a solution based on distributed locks.

General approach

  • 1. Fetch the data from the cache; if it is there, return it.
  • 2. If the data is not in the cache, it must be read from the database. To prevent a large number of concurrent requests from hitting the database directly, a distributed lock ensures that only one request reads the database.
  • 3. If the lock is acquired, perform three steps: 1) read the data from the database, 2) write it to the cache, 3) release the distributed lock.
  • 4. If the lock cannot be acquired, sleep for a short time and repeat from step 1.
  • 5. To guard against unexpected situations, set a retry threshold: when the number of retries exceeds it, exit directly rather than letting a large number of concurrent requests block indefinitely.
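The steps above can be sketched in Python. This is a minimal illustration, not the article's implementation: plain dicts stand in for the cache and the distributed lock store (a real system would use Redis for both), and `load_from_db`, `MAX_RETRIES`, and `RETRY_SLEEP` are illustrative names.

```python
import time

cache = {}   # stand-in for the cache (e.g. Redis)
locks = {}   # stand-in for the distributed lock store

MAX_RETRIES = 5      # retry threshold: give up instead of blocking forever
RETRY_SLEEP = 0.05   # seconds to sleep before retrying

def load_from_db(key):
    # placeholder for the real database query
    return f"value-of-{key}"

def try_lock(lock_key):
    # stand-in for SET lock_key rand_num EX ttl NX
    if lock_key in locks:
        return False
    locks[lock_key] = True
    return True

def unlock(lock_key):
    locks.pop(lock_key, None)

def get(key):
    for _ in range(MAX_RETRIES):
        # 1. try the cache first
        value = cache.get(key)
        if value is not None:
            return value
        # 2. cache miss: only one caller may query the database
        if try_lock("lock:" + key):
            try:
                # 3. read from the database, then write to the cache
                value = load_from_db(key)
                cache[key] = value
                return value
            finally:
                unlock("lock:" + key)
        # 4. someone else is rebuilding the cache: wait, then retry
        time.sleep(RETRY_SLEEP)
    # 5. retry threshold exceeded: fail fast instead of blocking
    raise RuntimeError(f"cache rebuild for {key} timed out")
```

Only the first caller to miss the cache pays the database cost; everyone else either gets the freshly written cache entry on a retry or fails fast after `MAX_RETRIES` attempts.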

Redis implements distributed locking

We use Redis to implement the distributed lock, with the following command:

    SET [KEY] [RAND_NUM] EX [LOCK_TIME] NX
  • KEY: specifies the KEY of the lock.
  • RAND_NUM: a random value identifying the lock holder. It prevents a client whose lock has already expired (because its execution ran long) from deleting a lock now held by someone else.
  • EX: sets an expiration time on the lock, so that if the holder crashes the lock expires automatically instead of causing a deadlock.
  • LOCK_TIME: the number of seconds after which the lock expires.
  • NX: sets the key only if it does not already exist, i.e. the lock is created only when no one else holds it.

If the lock does not exist, Redis returns "OK", indicating the lock was created successfully. If the lock already exists, Redis returns nil, so the return value tells us whether the lock was acquired.

Usage scenarios

As the flowchart shows, while one thread updates the cache, the other concurrent threads are blocked.

There are two key points that affect system performance:

  • 1. The higher the concurrency on the key, the greater the impact on system performance.
  • 2. The longer a cache update takes, the greater the impact on system performance.

Therefore, this method is best suited to non-hot, simple cached data: low concurrency and short update times.