This article introduces three common Redis cache failure modes: cache penetration, cache breakdown, and cache avalanche. The goal is to understand each concept before looking at the corresponding solutions.
Cache penetration
The requested data exists in neither the cache nor the database, so every request misses the cache and goes straight to the database. If a large number of such requests arrive at the same time, they put enormous query pressure on the database and can even bring down the DB.
- Solutions
  - If the query result is empty, cache the empty result with a short expiration, or clear the cached entry as soon as real data for that key is inserted (a minimal sketch of this approach follows the list).
  - Filter out keys that cannot exist: put all keys that could legitimately exist into a large Bitmap and check each query against it, so requests for impossible keys are rejected before they reach the cache or database.
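Below is a minimal sketch of the first solution (caching empty results with a short TTL), assuming redis-py, a Redis instance on the default local port, and a hypothetical load_user_from_db() lookup; the key names, TTLs, and sentinel value are illustrative, not taken from the article.

```python
# Sketch: cache "not found" results briefly so repeated lookups of
# non-existent keys stop hitting the database. Names are hypothetical.
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

EMPTY_MARKER = "__NULL__"   # sentinel so "not found" is itself cacheable
EMPTY_TTL = 60              # short TTL for empty results (seconds)
NORMAL_TTL = 3600           # normal TTL for real data

def load_user_from_db(user_id):
    """Placeholder for the real database query; returns a dict or None."""
    return None  # pretend the row does not exist

def get_user(user_id):
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        # A cached sentinel means the DB was already checked and had nothing,
        # so we answer "not found" without touching the database again.
        return None if cached == EMPTY_MARKER else json.loads(cached)

    row = load_user_from_db(user_id)
    if row is None:
        # Cache the miss with a short expiration.
        r.set(key, EMPTY_MARKER, ex=EMPTY_TTL)
        return None

    r.set(key, json.dumps(row), ex=NORMAL_TTL)
    return row
```

A Bitmap or Bloom-filter check for impossible keys would sit in front of get_user(), rejecting requests before the cache is even consulted.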
Cache breakdown
A single key is extremely hot and carries constant heavy concurrency, with all of that traffic focused on one point. At the instant the key expires, the sustained concurrency punches through the cache and hits the database directly. When cache breakdown occurs, the query load on the database multiplies and a large number of requests end up blocked.
Cache avalanche
A large amount of cached data expires at the same time, and a huge volume of requests falls straight through to the DB layer, overloading the database and possibly bringing it down.
- Solutions
  - After a cache entry expires, control the number of threads that read the database and rewrite the cache by locking or queuing. For example, allow only one thread per key to query the database and write the result back to the cache, while the other threads wait (see the first sketch after this list).
  - Use a level-2 cache: A1 is the primary cache and A2 is a copy; when A1 expires, serve requests from A2 (second sketch below).
  - Set different expiration times for different keys, so that cache expirations are spread out as evenly as possible (third sketch below).
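First, a minimal sketch of the lock/queue idea, assuming redis-py and a hypothetical query_db() loader: SET NX EX is used as a per-key mutex so that only one caller rebuilds the expired entry while the others wait briefly and retry. Key names, TTLs, and the sleep interval are illustrative.

```python
# Sketch: only one caller per key rebuilds the cache after expiration.
import time
import uuid
import redis

r = redis.Redis(decode_responses=True)

def query_db(key):
    """Placeholder for the real database query."""
    return "value-from-db"

def get_with_rebuild_lock(key, ttl=300, lock_ttl=10):
    while True:
        value = r.get(key)
        if value is not None:
            return value

        lock_key = f"lock:{key}"
        token = str(uuid.uuid4())
        # SET NX EX acts as a per-key mutex: only the first caller gets it.
        if r.set(lock_key, token, nx=True, ex=lock_ttl):
            try:
                value = query_db(key)
                r.set(key, value, ex=ttl)
                return value
            finally:
                # Best-effort release of our own lock; a Lua script would make
                # this check-and-delete fully atomic.
                if r.get(lock_key) == token:
                    r.delete(lock_key)

        # Another caller holds the lock and is rebuilding: wait, then retry.
        time.sleep(0.05)
```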
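Second, a minimal sketch of the level-2 cache idea, assuming redis-py and the same hypothetical query_db() loader: A1 is the primary cache with a short TTL and A2 is a longer-lived copy that answers requests while A1 is expired. The key prefixes and TTLs are illustrative.

```python
# Sketch: two-level cache; fall back to the longer-lived copy A2 when A1 expires.
import redis

r = redis.Redis(decode_responses=True)

def query_db(key):
    """Placeholder for the real database query."""
    return "value-from-db"

def get_two_level(key, a1_ttl=300, a2_ttl=3600):
    a1_key, a2_key = f"a1:{key}", f"a2:{key}"

    value = r.get(a1_key)
    if value is not None:
        return value

    # A1 has expired: fall back to the longer-lived copy in A2.
    value = r.get(a2_key)
    if value is not None:
        return value

    # Both levels are empty: load from the database and fill both caches.
    value = query_db(key)
    r.set(a1_key, value, ex=a1_ttl)
    r.set(a2_key, value, ex=a2_ttl)
    return value
```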
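Finally, a minimal sketch of staggering expirations, assuming redis-py: a random jitter is added to each key's base TTL so keys written in the same batch do not all expire at the same instant. The base TTL and jitter window are illustrative.

```python
# Sketch: spread out expirations by adding random jitter to each TTL.
import random
import redis

r = redis.Redis(decode_responses=True)

def set_with_jitter(key, value, base_ttl=3600, jitter=300):
    # e.g. 3600s +/- up to 300s spreads expirations over a ten-minute window
    ttl = base_ttl + random.randint(-jitter, jitter)
    r.set(key, value, ex=ttl)

for i in range(100):
    set_with_jitter(f"product:{i}", f"data-{i}")
```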