“This is the 20th day of my participation in the November Gwen Challenge. See details of the event: The Last Gwen Challenge 2021”.
Cache avalanche
A cache avalanche occurs when a large number of cached keys expire at the same time, so subsequent requests all fall through to the database, which crashes because it cannot withstand that many requests in a short period.
For example, suppose every key on an e-commerce home page has a 12-hour expiration time and they are all refreshed at 12:00 noon, so they all expire at midnight. If a flash-sale (seckill) event starts at midnight and a large number of users flood in just as all of those keys expire, every request falls through to the database. The database cannot carry that load and goes down, which in turn causes even more requests to pile up on MySQL and hang the whole system.
Solutions
- Add a random value to the expiration time of each key so that a large number of keys do not expire at the same moment. In a Redis cluster deployment, hot data can also be spread across different nodes.
- Beforehand: ensure the Redis cluster is highly available, replace failed machines as soon as possible, and choose an appropriate memory eviction policy.
- During the incident: use a local cache (Ehcache) plus rate limiting and degradation (Hystrix) to keep MySQL from being crushed.
- Afterwards: use Redis's persistence mechanism to restore the data to the cache as soon as possible.
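The first point above, adding a random value to each key's TTL, can be sketched as follows. This is a minimal illustration, not a definitive implementation; `ttl_with_jitter` and the 600-second jitter window are assumed names and values, and the resulting TTL would be passed to whatever Redis client you use (e.g. as the `ex` argument to `SET`).

```python
import random

BASE_TTL = 12 * 60 * 60  # 12 hours, as in the home-page example above


def ttl_with_jitter(base_ttl: int, max_jitter: int = 600) -> int:
    """Return the base TTL plus a random offset (here up to 10 minutes),
    so keys written together do not all expire at the same instant."""
    return base_ttl + random.randint(0, max_jitter)


# usage with a Redis client would look like:
#   client.set(key, value, ex=ttl_with_jitter(BASE_TTL))
```

Spreading expirations over even a few minutes is enough to turn one spike of database reads into a gentle trickle.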
Cache penetration
Cache penetration occurs when a large number of requested keys exist neither in the cache nor in the database. For example, a hacker deliberately sends a flood of requests for non-existent keys, so every request falls through to the database.
Solutions
- First, do basic input validation and reject illegal parameters outright: for example, a queried database ID cannot be less than 0, an email address must match a valid format, and so on.
- If neither the cache nor the database has data for a key, write the key to Redis with a null value and a short expiration time, so that the next request for it does not fall on the database.
- Use a Bloom filter. A Bloom filter can very efficiently determine whether a given item might exist in a massive data set. Store all valid key values in the Bloom filter; when a request arrives, check whether the requested key exists in the filter, and if it does not, intercept the request directly.
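The Bloom filter in the third bullet can be demonstrated with a tiny self-contained version (in production you would use Redis modules or a library rather than this sketch; the bit size and hash count here are arbitrary illustrative choices).

```python
import hashlib


class BloomFilter:
    """Minimal Bloom filter: false positives possible, false negatives not."""

    def __init__(self, size_bits: int = 1024, num_hashes: int = 3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = 0  # a Python int doubles as an arbitrary-size bit array

    def _positions(self, item: str):
        # Derive num_hashes positions by salting one hash function
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item: str) -> bool:
        # False => definitely absent; True => possibly present
        return all(self.bits >> pos & 1 for pos in self._positions(item))
```

Because "definitely absent" answers are exact, every request for a non-existent key can be rejected before it ever touches the cache or the database.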
Cache breakdown
Cache breakdown refers to a single very hot key that is continuously carrying heavy concurrent traffic. The instant that key expires, the sustained concurrency pierces the cache and the requests hit the database directly.
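A common mitigation (not spelled out above) is a mutex: when the hot key expires, only one thread rebuilds it while the others wait, so the database sees a single query instead of the full burst. A sketch, with a dict standing in for Redis and `db_lookup` as a hypothetical database query:

```python
import threading

cache = {}                       # stands in for Redis
rebuild_lock = threading.Lock()  # only one thread rebuilds the hot key


def get_hot_key(key, db_lookup):
    value = cache.get(key)
    if value is not None:
        return value
    # Key just expired: let one thread rebuild while the rest wait
    with rebuild_lock:
        value = cache.get(key)  # re-check after acquiring the lock
        if value is None:
            value = db_lookup(key)  # only one DB hit per expiry
            cache[key] = value
    return value
```

In a multi-process deployment the `threading.Lock` would be replaced by a distributed lock (e.g. Redis `SET key value NX PX ttl`), but the double-checked pattern is the same.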
Cache consistency
If you need strong consistency, you cannot use a cache, because a cache can only guarantee eventual consistency, not strong consistency.
- Delete the cache first, then update the database
If the database update then fails, the database still holds the old data and Redis is empty, so the data is not inconsistent: a read that finds the cache empty goes to the database and writes the result back to the cache.
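The delete-first write path and the read path can be sketched as follows, with dicts standing in for Redis and MySQL and `db_write` as a hypothetical database call (a write failure is modeled as a raised exception):

```python
cache = {}     # stands in for Redis
database = {}  # stands in for MySQL


def update(key, value, db_write):
    cache.pop(key, None)  # 1. delete the cache first
    db_write(key, value)  # 2. then update the database
    # If db_write raises, the cache stays empty and the next read
    # repopulates it from the (old but self-consistent) database row.


def read(key):
    if key in cache:
        return cache[key]
    value = database.get(key)  # cache miss: fall back to the database
    if value is not None:
        cache[key] = value     # refill the cache
    return value
```

Note that the cache and database can only ever disagree transiently here, which is exactly the eventual-consistency guarantee described above.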
- Queue the updates and serialize them
Deleting the cache first and then updating the database can still go wrong under high concurrency. For example: thread A deletes the cache but has not yet updated the database; thread B comes in, finds Redis empty, reads the old value from the database, and writes it back to Redis; thread A then updates the database. Now the database and Redis disagree. You can avoid this by putting updates into a queue and serializing them, but this is not recommended because it is inefficient.
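Serializing updates through a queue can be sketched like this: a single worker thread applies every delete-then-write pair in order, so two updates can never interleave (again with dicts standing in for Redis and MySQL; names are illustrative).

```python
import queue
import threading

cache = {}                # stands in for Redis
database = {}             # stands in for MySQL
updates = queue.Queue()   # all writes funnel through this queue


def worker():
    while True:
        item = updates.get()
        if item is None:  # shutdown signal
            break
        key, value = item
        cache.pop(key, None)   # delete cache, then write DB, as one unit
        database[key] = value
        updates.task_done()


threading.Thread(target=worker, daemon=True).start()


def update(key, value):
    updates.put((key, value))  # serialized: applied one at a time, in order
```

The single worker is also why this approach is slow: all write throughput collapses onto one consumer, which is the inefficiency the text warns about.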