Without further ado: these are frequent interview questions.
1. Cache penetration
Cache penetration happens when a user queries data that is not in the Redis in-memory cache (a cache miss) and is not in the persistence-layer database either, so the query fails.
Under high concurrency, if many users query this nonexistent data, every request misses the cache and the pressure lands directly on the persistence-layer database. This is cache penetration.
In general, the read path looks like this:
- When a client requests data, the cache is checked first.
- If the data is in the cache, it is returned directly.
- If it is not in the cache, the query falls through to MySQL.
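The read path above can be sketched in a few lines. This is a minimal illustration, assuming a plain dict stands in for Redis and another for MySQL; the key names are made up for the example.

```python
cache = {}                            # stands in for Redis
db = {"user:1": {"name": "alice"}}    # stands in for MySQL

def get(key):
    # 1. Check the cache first.
    if key in cache:
        return cache[key]
    # 2. Cache miss: fall back to the database.
    value = db.get(key)
    if value is not None:
        # 3. Populate the cache so later reads hit it directly.
        cache[key] = value
    return value
```

Note that when the database also has nothing, this path caches nothing; that unguarded miss is exactly what cache penetration exploits.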
Such scenarios are common, the best-known being flash sales (seckill), where a flood of requests arrives at the same instant; if they all reach the database, the consequences are unpredictable.
How to solve it?
1. Bloom filter
A Bloom filter is a data structure that stores hashes of all keys that could legitimately be queried. When a query arrives, the system checks the filter first; if the key is definitely absent, the request is discarded immediately, sparing the underlying storage system.
However, the reason a Bloom filter is efficient in both time and space is that it sacrifices accuracy of judgment, which can lead to misjudgment. In short:
- When a Bloom filter says a value exists, it may actually not exist (false positives are possible).
- When a Bloom filter says a value does not exist, it definitely does not exist (no false negatives).
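The two properties above follow directly from how the filter works: adding a key sets a few bits, and a lookup only answers "maybe" when all of that key's bits happen to be set. A minimal sketch (the size and hash-count parameters are illustrative, not tuned):

```python
import hashlib

class BloomFilter:
    def __init__(self, size=1024, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive several bit positions from independent hashes of the item.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # True may be a false positive (another key set the same bits);
        # False is always correct, since an added key sets all its bits.
        return all(self.bits[pos] for pos in self._positions(item))
```

Before hitting the database, the service checks `might_contain(key)` and rejects the request outright when it returns False.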
2. Cache empty objects
When the persistence-layer database also finds no data, cache the empty result anyway, with an expiration time. Later queries for that key are then answered directly from the cache (with an empty value), protecting the persistence-layer database.
But there are some problems with this approach:
- If there are many null-valued keys, caching them all consumes extra space.
- Even with an expiration time set, the cached null persists for a window during which the cache layer and the persistence layer are inconsistent, which affects business logic that requires consistency.
2. Cache breakdown
Cache breakdown happens when a single very hot key is carrying sustained high concurrency. At the moment that key expires, the concurrent requests pierce the cache and hit the persistence-layer database directly, like a hole punched in a defensive wall.
Note the difference with cache penetration:
- Cache penetration: the data cannot be found at all, neither in the cache nor in the database.
- Cache breakdown: the data exists, but a hot key expires and concurrent requests break through at that instant.
The most famous example is a Weibo trending search: a celebrity scandal breaks, the topic becomes a hot key, requests pour through it, and the pressure lands directly on the underlying database.
How to solve it?
1. Set the hotspot data to never expire
At the cache level, simply set no expiration time on the hot key, so the cache-expiration problem cannot occur.
2. Add a mutex lock
With a distributed lock, only one thread at a time may query the backend for a given key; the other threads must wait until the lock is released. This shifts the pressure of high concurrency onto the distributed lock, which therefore must be very robust.
3. Cache avalanche
At some point in time, a large set of cached keys expires simultaneously, or Redis itself goes down.
For example, suppose a flash sale starts at 12 o'clock and this wave of goods is loaded into the cache in advance with an expiration time of one hour. At 1 o'clock the cache expires, and before the new cache has been populated, all queries for those items fall directly on the persistence layer, producing a periodic pressure spike on the database.
If it is the cache server itself that goes down, things are worse: the persistence-layer database can be overwhelmed in an instant.
How to solve it?
1. Make Redis highly available
In essence, add machines: build a Redis cluster with replication, and deploy active-active across multiple sites.
2. Rate limiting and degradation
After the cache fails, control the number of threads allowed to read the database and rebuild the cache, by locking or queuing, for example with the distributed lock mentioned above.
3. Preheat data
Before the traffic peak arrives, access the data that is likely to be requested, so that the data that will be hit frequently is already loaded into the cache in advance.
4. Set different expiration times
Set cache expiration times with a random offset, so that large numbers of keys do not all expire at the same moment.
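Adding jitter to the TTL is a one-liner. A sketch, assuming a one-hour base TTL and a ten-minute jitter window (both values are illustrative):

```python
import random

def ttl_with_jitter(base=3600, jitter=600):
    # Spread expirations over [base, base + jitter) seconds, so keys
    # cached in the same batch do not all expire at the same instant.
    return base + random.randint(0, jitter - 1)
```

Instead of every key from the 12 o'clock batch expiring at exactly 1 o'clock, expirations are smeared across the following ten minutes, flattening the pressure spike on the database.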