Cache avalanche
Definition
A cache avalanche occurs when a large number of cache entries expire within a short period of time, so a flood of requests query the database directly and put it under great pressure. In severe cases, the database may go down.
Normal execution process: a request reads Redis first and only queries the database on a cache miss.
Cache avalanche execution process: when many cached entries have expired at once, a large number of requests bypass Redis and hit the database directly, putting heavy pressure on it.
Solutions
Lock and queue
Locking and queuing act as a buffer that prevents a large number of requests from hitting the database at the same time. The disadvantage is that it increases the system's response time and reduces its throughput, sacrificing part of the user experience.
When a cache lookup misses, only one thread is allowed to query the database while the other threads wait. The locking logic is similar to the double-checked singleton pattern.
Code implementation:
// Cache key
String cacheKey = "userlist";
// Query the cache
String data = jedis.get(cacheKey);
if (StringUtils.isNotBlank(data)) {
    // Cache hit: return the result directly
    return data;
} else {
    // Cache miss: queue up, query the database, then populate the cache
    synchronized (cacheKey) {
        data = jedis.get(cacheKey);
        if (!StringUtils.isNotBlank(data)) { // Double-check after acquiring the lock
            // Query the database
            data = findUserInfo();
            // Put the result in the cache
            jedis.set(cacheKey, data);
        }
    }
    return data;
}
Randomize expiration time
To avoid many entries expiring at the same time, add a random offset to the expiration time when setting the cache; this largely prevents mass simultaneous expiration.
Code implementation:
// Base cache expiration time, in seconds
int exTime = 10 * 60;
// Random number generator
Random random = new Random();
// Set the cache with a randomized extra expiration time
jedis.setex(cacheKey, exTime + random.nextInt(1000), value);
Set up a level-2 cache
A level-2 cache is an additional cache layer besides Redis. When the Redis cache is invalidated, the level-2 cache is queried first.
For example, you can set up a local cache so that when the Redis entry is invalid, the request queries the local cache rather than the database.
The local cache can be built with Google's Guava Cache, a solid cache utility that supports size-based and time-based eviction policies.
Guava Cache Official Chinese document
In practice, we usually read the local cache first and then the distributed cache (Redis), because the local cache is the fastest and has no network overhead. It must, however, be refreshed periodically to stay consistent with the data in the distributed cache.
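To make the two-level idea concrete, here is a minimal sketch of a local cache with a per-entry TTL. The `LocalCache` class and its methods are illustrative stand-ins, not Guava's API; a real project would use Guava Cache or Caffeine instead.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A minimal level-2 (local) cache sketch with per-entry TTL.
public class LocalCache {
    private static class Entry {
        final String value;
        final long expiresAt; // epoch millis

        Entry(String value, long ttlMillis) {
            this.value = value;
            this.expiresAt = System.currentTimeMillis() + ttlMillis;
        }
    }

    private final Map<String, Entry> store = new ConcurrentHashMap<>();

    public void put(String key, String value, long ttlMillis) {
        store.put(key, new Entry(value, ttlMillis));
    }

    // Returns null when the key is absent or its TTL has elapsed.
    public String get(String key) {
        Entry e = store.get(key);
        if (e == null || System.currentTimeMillis() > e.expiresAt) {
            store.remove(key);
            return null;
        }
        return e.value;
    }
}
```

On a read, check this local cache first and fall back to Redis (and then the database) on a miss, refilling each layer on the way back.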
Cache penetration
Definition
Cache penetration occurs when the requested data exists in neither the cache nor the database. Because the database query returns nothing, no result is saved in the cache for fault tolerance, so every request for that key goes to the database.
Solutions
Use a Bloom filter
We can use a Bloom filter to reduce requests to the database. A Bloom filter hashes the database's keys into a bitmap; before each query, it filters out requests for keys that definitely do not exist, sparing the database the load of invalid queries.
Redis Bloom filter
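As an illustration of the idea (not the RedisBloom module or Guava's `BloomFilter`), here is a minimal self-contained sketch that derives several bit positions from two base hashes:

```java
import java.util.BitSet;

// A toy Bloom filter: no false negatives, tunable false-positive rate.
public class SimpleBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int hashCount;

    public SimpleBloomFilter(int size, int hashCount) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashCount = hashCount;
    }

    // Combine two base hashes to simulate k independent hash functions.
    private int position(String key, int i) {
        int h1 = key.hashCode();
        int h2 = h1 >>> 16;
        return Math.floorMod(h1 + i * h2, size);
    }

    public void add(String key) {
        for (int i = 0; i < hashCount; i++) {
            bits.set(position(key, i));
        }
    }

    // false => the key is definitely absent; true => it may be present.
    public boolean mightContain(String key) {
        for (int i = 0; i < hashCount; i++) {
            if (!bits.get(position(key, i))) {
                return false;
            }
        }
        return true;
    }
}
```

The request path would call `mightContain(key)` first and reject the request outright when it returns false, before touching Redis or the database.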
Cache empty results
Alternatively, we can cache the result of every database query, even an empty one. To keep the front-end experience acceptable (so newly created data is not invisible for long), give empty results a shorter cache time, such as 3 to 5 minutes.
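A sketch of the empty-result pattern, using an in-memory map as a stand-in for Redis; `getUser`, `queryDatabase`, and the `EMPTY` sentinel are hypothetical names, not library APIs.

```java
import java.util.HashMap;
import java.util.Map;

// Caching "no such row" so repeated misses do not hit the database.
public class NullCachingDemo {
    static final String EMPTY = "";                          // sentinel for "not found"
    static final Map<String, String> cache = new HashMap<>(); // stand-in for Redis
    static int dbQueries = 0;                                 // counts database hits

    // Simulated database lookup that finds nothing.
    static String queryDatabase(String key) {
        dbQueries++;
        return null;
    }

    static String getUser(String key) {
        String cached = cache.get(key);
        if (cached != null) {
            // EMPTY means "we already know this key does not exist".
            return EMPTY.equals(cached) ? null : cached;
        }
        String data = queryDatabase(key);
        if (data == null) {
            // Cache the miss; with Jedis this would be setex(key, 300, EMPTY)
            // so the sentinel expires after a few minutes.
            cache.put(key, EMPTY);
            return null;
        }
        cache.put(key, data);
        return data;
    }
}
```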
Cache breakdown
define
A cache breakdown occurs when a single hot cache entry expires at some moment while there are a large number of concurrent requests for it, all of which then hit the database and put it under heavy strain.
Solutions
Lock and queue
The locking approach is the same as for a cache avalanche: acquire a lock before querying the database, which buffers the flood of requests.
Set to never expire
For some hotspot entries, we can set the cache to never expire, which keeps the cache stable. The cache must then be updated promptly whenever the underlying data changes, otherwise queries will return stale results.
Cache warming
Cache warming is not a fix for a failure mode; it is an optimization that improves the front-end user experience when caching is used.
Cache warming means that query results are stored in the cache when the system starts, so that users can read them directly from the cache, saving them waiting time.
There are three ways to implement cache warming:
- Write the methods that need to be cached in the system initialization method, so that the system will automatically load data and cache data at startup.
- Manually trigger cache warming by exposing the methods that need to be cached through a page or back-end interface.
- Set a scheduled task to automatically preheat the cache.
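The first strategy can be sketched as follows; `CacheWarmer`, `warmUpKeys`, and `loadFromDatabase` are illustrative placeholders, and the map stands in for Redis.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Load hot data into the cache at startup, before user traffic arrives.
public class CacheWarmer {
    private final Map<String, String> cache = new HashMap<>(); // stand-in for Redis

    // Hot keys that should be cached before traffic arrives.
    List<String> warmUpKeys() {
        return List.of("userlist", "productlist");
    }

    // Placeholder for the real database query.
    String loadFromDatabase(String key) {
        return "data-for-" + key;
    }

    // Call this from the application's startup hook
    // (e.g. a Spring @PostConstruct method or CommandLineRunner).
    public void warmUp() {
        for (String key : warmUpKeys()) {
            cache.put(key, loadFromDatabase(key));
        }
    }

    public String get(String key) {
        return cache.get(key);
    }
}
```

The scheduled-task variant would simply call `warmUp()` periodically instead of only at startup.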
Reference
Redis Core Principles and Combat