1 Cache Avalanche
1.1 What is cache avalanche?
Cache avalanche can be caused by any of the following:
- The same expiration time is used for all keys when the cache is populated
- The update strategy refreshes many keys at the same time
- Hot data is concentrated on a small number of machines
- The cache service is down
As a result, a large portion of the cached data becomes unavailable or expires at the same time, and the requests all fall straight through to the database.
1.2 Solutions
- Spread update and expiration times out evenly
- Spread hot data across different machines as much as possible
- Use master-slave replication or multiple replicas across machines for high availability
- Add circuit breaking and rate limiting to keep load within what the system can handle
Adding a random value to the base expiration time, e.g. a random 1-5 minutes, reduces how often caches share the same expiration time, so the probability of collective expiry drops sharply.
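A minimal sketch of the jitter idea, assuming Redis via the Jedis client (the key and TTL values are illustrative):

```java
import java.util.concurrent.ThreadLocalRandom;
import redis.clients.jedis.Jedis;

public class JitteredCache {
    private static final int BASE_TTL_SECONDS = 30 * 60; // base expiration: 30 min

    public static void setWithJitter(Jedis jedis, String key, String value) {
        // add a random 1-5 minutes so keys written together do not expire together
        int jitter = ThreadLocalRandom.current().nextInt(60, 5 * 60 + 1);
        jedis.setex(key, BASE_TTL_SECONDS + jitter, value);
    }
}
```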
2 Cache Penetration
2.1 What is cache penetration?
A large number of concurrent queries for keys that do not exist pass straight through to the database.
Why does it pass through repeatedly? Because a non-existent key never gets populated, the cache stays empty for it forever. Care must be taken so the cache can distinguish between a KEY that does not exist and a KEY whose value is null.
For example, queries for the data with ID -1 bypass Redis every time and hit the database repeatedly. This is cache penetration: it occurs when the query result is null and nulls are not cached.
2.2 Solutions
- Cache the empty KEY: the first miss records a null marker in the cache, so subsequent requests for that KEY hit the cache
- Use a Bloom filter or RoaringBitmap to decide whether a KEY can exist at all (see the combined sketch after this list)
The most common approach is a Bloom filter: hash all data that can possibly exist into a sufficiently large bitmap; a key that does not exist is intercepted by the bitmap, which avoids query pressure on the underlying storage system.
- Fully rely on the cache and use the asynchronous lazy-loading strategy (strategy 2 below), so reads never trigger updates.
A cruder approach: when a query returns null (whether the data does not exist or the system failed), still cache the null result, but with a short expiration time of no more than 5 minutes.
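A minimal sketch combining both ideas, assuming Guava's BloomFilter and the Jedis client (the sentinel value, TTLs, and the loadFromDb helper are illustrative):

```java
import java.nio.charset.StandardCharsets;
import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;
import redis.clients.jedis.Jedis;

public class PenetrationGuard {
    private static final String NULL_SENTINEL = "__NULL__"; // marker for "known to not exist"
    private static final int NULL_TTL_SECONDS = 5 * 60;     // short TTL for cached nulls

    // must be pre-populated (existingKeys.put(key)) with every key that can legitimately exist
    private final BloomFilter<String> existingKeys =
            BloomFilter.create(Funnels.stringFunnel(StandardCharsets.UTF_8), 1_000_000, 0.01);

    public String get(Jedis jedis, String key) {
        // 1. keys the filter has never seen cannot exist: stop here
        if (!existingKeys.mightContain(key)) {
            return null;
        }
        String value = jedis.get(key);
        if (NULL_SENTINEL.equals(value)) {
            return null; // previously confirmed missing
        }
        if (value == null) {
            value = loadFromDb(key); // hypothetical DB lookup
            if (value == null) {
                // 2. cache the miss briefly so repeated queries stay off the DB
                jedis.setex(key, NULL_TTL_SECONDS, NULL_SENTINEL);
            } else {
                jedis.setex(key, 30 * 60, value);
            }
        }
        return value;
    }

    private String loadFromDb(String key) {
        return null; // placeholder: query the underlying store
    }
}
```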
3 Cache Breakdown
3.1 What is cache breakdown?
A breakdown concerns the cache for a single key, whereas an avalanche concerns many keys.
When a KEY expires, a large number of concurrent requests for that KEY arrive at the same time.
A cache-plus-expiration-time strategy is commonly used to speed up interface access, reduce back-end load, and keep data reasonably fresh, which is usually enough.
However, if the following two problems occur at the same time, the impact on the system can be fatal:
- The key is a hot key carrying heavy traffic
- Rebuilding the cache takes time (it may involve complex SQL, multiple I/Os, or multiple interface dependencies)
The moment such a cache expires, a large number of threads rebuild it concurrently, increasing back-end load and possibly even crashing the system.
3.2 Solutions
The fix is to limit the number of threads doing the rebuild, i.e. guard update operations on the key with a global mutex.
Mutex key
When the cache misses (the value is null), instead of loading from the db immediately:
- Set a mutex key using a cache operation that only succeeds for the first caller (e.g. Redis's SETNX)
- If the operation succeeds, load from the db and reset the cache; otherwise, retry the whole get-from-cache method.
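A minimal sketch of this pattern, assuming Jedis; the mutex TTL, retry sleep, and loadFromDb helper are illustrative, and the db value is assumed to exist:

```java
import redis.clients.jedis.Jedis;

public class MutexGet {
    private static final int MUTEX_TTL_SECONDS = 3 * 60; // safety net if the holder crashes

    public String get(Jedis jedis, String key) throws InterruptedException {
        String value = jedis.get(key);
        while (value == null) { // cache miss
            String mutexKey = "mutex:" + key;
            if (jedis.setnx(mutexKey, "1") == 1L) { // only the first caller wins
                jedis.expire(mutexKey, MUTEX_TTL_SECONDS);
                try {
                    value = loadFromDb(key);          // rebuild from the database
                    jedis.setex(key, 30 * 60, value); // reset the cache
                } finally {
                    jedis.del(mutexKey);              // release the mutex
                }
            } else {
                Thread.sleep(50);          // another thread is rebuilding: wait and retry
                value = jedis.get(key);
            }
        }
        return value;
    }

    private String loadFromDb(String key) {
        return "value"; // placeholder: query the underlying store
    }
}
```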
Use mutex keys in advance:
Store a logical timeout (timeout1) inside the cached value, smaller than the actual memcache timeout (timeout2). When a read finds that timeout1 has passed, extend timeout1 and write the value back to the cache (so other readers keep getting the old value), then load the data from the database and reset the cache. A sketch of the pattern (using Jedis; the value encoding, TTLs, and db helper are illustrative assumptions):
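```java
import redis.clients.jedis.Jedis;

public class LogicalExpireGet {
    private static final int PHYSICAL_TTL = 60 * 60; // timeout2: the real key TTL
    private static final int LOGICAL_TTL  = 30 * 60; // timeout1: stored inside the value
    private static final int MUTEX_TTL    = 3 * 60;

    // value encoding (illustrative): "<logicalDeadlineMillis>|<payload>"
    public String get(Jedis jedis, String key) {
        String raw = jedis.get(key);
        if (raw == null) {
            // plain miss: rebuild (in practice, guard this with the mutex pattern above)
            String fresh = loadFromDb(key);
            writeBack(jedis, key, fresh, LOGICAL_TTL);
            return fresh;
        }
        int sep = raw.indexOf('|');
        long deadline = Long.parseLong(raw.substring(0, sep));
        String payload = raw.substring(sep + 1);
        if (deadline <= System.currentTimeMillis()) { // timeout1 has passed
            String mutexKey = "mutex:" + key;
            if (jedis.setnx(mutexKey, "1") == 1L) {
                jedis.expire(mutexKey, MUTEX_TTL);
                // extend timeout1 first, so concurrent readers keep serving the stale value
                writeBack(jedis, key, payload, MUTEX_TTL);
                // then load fresh data from the database and reset the cache
                String fresh = loadFromDb(key);
                writeBack(jedis, key, fresh, LOGICAL_TTL);
                jedis.del(mutexKey);
                return fresh;
            }
            // another thread is rebuilding: serve the stale payload for now
        }
        return payload;
    }

    private void writeBack(Jedis jedis, String key, String payload, int logicalTtlSeconds) {
        long deadline = System.currentTimeMillis() + logicalTtlSeconds * 1000L;
        jedis.setex(key, PHYSICAL_TTL, deadline + "|" + payload);
    }

    private String loadFromDb(String key) {
        return "fresh-value"; // placeholder: query the underlying store
    }
}
```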
The cache as the source of truth
An asynchronous thread is responsible for maintaining the cached data, triggering updates periodically or on certain conditions, so that reads themselves never trigger an update.
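A minimal sketch of such a background refresher, using a ScheduledExecutorService (the refresh interval, key list, and helpers are illustrative):

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import redis.clients.jedis.Jedis;

public class CacheRefresher {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final Jedis jedis = new Jedis("localhost", 6379);

    public void start() {
        // refresh every 5 minutes; reads never rebuild the cache themselves
        scheduler.scheduleAtFixedRate(this::refreshAll, 0, 5, TimeUnit.MINUTES);
    }

    private void refreshAll() {
        for (String key : hotKeys()) {
            String fresh = loadFromDb(key);   // rebuild off the request path
            jedis.setex(key, 10 * 60, fresh); // TTL longer than the interval, so data never vanishes
        }
    }

    private List<String> hotKeys() { return List.of("user:1", "user:2"); } // placeholder
    private String loadFromDb(String key) { return "value"; }              // placeholder
}
```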
Rate limiting
Use a circuit-breaking / rate-limiting component such as Hystrix or Sentinel.
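Not Hystrix or Sentinel themselves, but a minimal sketch of the same idea using a plain Semaphore to cap concurrent rebuilds (the limit and fallback behavior are illustrative):

```java
import java.util.concurrent.Semaphore;

public class RebuildLimiter {
    private final Semaphore permits = new Semaphore(10); // at most 10 concurrent rebuilds

    public String loadWithLimit(String key) {
        if (!permits.tryAcquire()) {
            return null; // over capacity: degrade (serve stale data or a default) instead of hitting the DB
        }
        try {
            return loadFromDb(key);
        } finally {
            permits.release();
        }
    }

    private String loadFromDb(String key) { return "value"; } // placeholder
}
```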