
In the last article, we covered cache penetration and how to deal with it. The remaining two problems are relatively simple, and we’ll look at how to solve them in this article.

Cache breakdown

Cache breakdown refers to a piece of data that is not in the cache but does exist in the database. Under a large number of concurrent requests, all of these requests miss the cache and hit the database at the same time, causing a sudden spike in database load.

There are roughly two situations that cause this:

  • The data is queried for the first time and the cache has not been warmed up, so the data is not yet in the cache.
  • The cached entry reached its expiration time and was evicted.

Solution:

  • On a cache miss, acquire a Redis distributed lock before querying the database, using the queried key as the lock key.
  • The thread that acquires the lock queries the cache again before hitting the database. The reason is that many concurrent threads compete for the lock: the first thread to enter writes the result to the cache after querying the database, so threads that acquire the lock later can get the data directly from the cache.
  • Release the distributed lock after the data has been read.

Sample code:

```java
public String queryData(String key) throws Exception {
    String data = queryDataFromRedis(key);
    if (data == null) {
        if (redisLock.tryLock()) { // acquire the distributed lock
            try {
                // Double-check the cache: another thread may have filled it already
                data = queryDataFromRedis(key);
                if (data == null) {
                    data = queryDataFromDB(key);
                    writeDataToRedis(data);
                }
            } finally {
                redisLock.unlock(); // release the distributed lock
            }
        }
    }
    return data;
}
```

The distributed lock itself can be implemented with Redis's SET command using the NX option (i.e. SETNX semantics):

```java
/*
 * Lock:
 *   key   - lock key
 *   value - lock value
 *   nxxx  - "NX" (set only if the key does not exist) | "XX" (set only if the key exists)
 *   expx  - "EX" (seconds) | "PX" (milliseconds)
 *   time  - expiration time, in the unit given by expx
 */
jedis.set(key, value, nxxx, expx, time);

// Unlock
jedis.del(key);
```

By setting an expiration time at the moment the lock is acquired, you also prevent a thread that hangs or crashes from holding the lock forever.
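One caveat worth noting: with a plain DEL, a thread whose lock has already expired could delete a lock that is now held by another thread. A common remedy is to store a unique token per lock holder and delete only if the stored token still matches, performed atomically in Redis with a Lua script (via `jedis.eval`). The sketch below is a hypothetical illustration of that check-and-delete logic, using a `Map` in place of Redis; in production the comparison and deletion must happen atomically on the Redis side.

```java
import java.util.Map;

public class SafeUnlock {
    /*
     * The real implementation runs this Lua script atomically in Redis:
     *   if redis.call('get', KEYS[1]) == ARGV[1] then
     *       return redis.call('del', KEYS[1])
     *   else
     *       return 0
     *   end
     * Here a Map stands in for Redis to show the check-and-delete logic.
     */
    static boolean unlock(Map<String, String> store, String key, String token) {
        // Only the holder whose token is still stored may delete the lock
        if (token.equals(store.get(key))) {
            store.remove(key);
            return true;
        }
        return false; // lock expired and was re-acquired by another thread
    }
}
```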

Cache avalanche

Cache avalanche refers to a large amount of cached data reaching its expiration time at once, so that most of the cache suddenly becomes unavailable and the flood of database queries overloads or even brings down the database. Note the difference between cache breakdown and cache avalanche: cache breakdown is a large number of concurrent requests querying the same piece of data; cache avalanche is a large number of cache entries expiring at the same time, so that many query requests miss the cache and fall through to the database.

Solution:

  • Stagger cache expiration times. Set each entry's expiration time to a base value plus a random offset, so that a large amount of data does not expire at the same moment.
  • Build a highly available Redis cluster to avoid an avalanche caused by a cache server outage.
  • Refer to Hystrix for circuit breaking and service degradation.
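The first point above can be sketched as a small helper that adds random jitter to a base TTL. This is a hypothetical illustration, not code from the article; the helper name and parameters are my own:

```java
import java.util.concurrent.ThreadLocalRandom;

public class CacheTtl {
    // Base TTL plus a random offset, so entries written together
    // do not all expire at the same moment.
    static long ttlWithJitter(long baseSeconds, long maxJitterSeconds) {
        return baseSeconds + ThreadLocalRandom.current().nextLong(maxJitterSeconds + 1);
    }
}
```

For example, writing entries with `jedis.setex(key, (int) ttlWithJitter(3600, 300), value)` would spread their expirations over a five-minute window instead of a single instant.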

Conclusion:

As Redis grows in popularity, more and more systems are adopting cache technology, but along with the convenience, improper use causes many problems. Only when these problems are considered and addressed during system design can the system become more robust.