What is cache breakdown?
In a high-concurrency system, when a large number of requests query the same key at the moment it expires or is deleted, they all fall through to the database. The database cannot withstand such a surge of traffic and slows down or even crashes. This phenomenon is called cache breakdown.
Flow: client requests (high QPS) → cache → key expired or deleted → DB → result returned to the client
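To make the failure mode concrete, below is a minimal sketch of a plain cache-aside read path; the key name, the productMapper DAO call, and the 30-minute TTL are illustrative assumptions rather than code from this article. When the hot key expires, every concurrent request takes the null branch at the same time and hammers the database.

```java
// Hypothetical read path that is vulnerable to cache breakdown (names are assumptions).
public Product getProduct(long id) {
    String key = "product:" + id;
    Product product = (Product) this.redisTemplate.opsForValue().get(key);
    if (product == null) {
        // The key has just expired or been deleted: under high QPS, many requests
        // reach this line simultaneously and every one of them queries the DB.
        product = this.productMapper.selectById(id); // hypothetical DAO call
        // Re-cache with a TTL (requires java.util.concurrent.TimeUnit)
        this.redisTemplate.opsForValue().set(key, product, 30, TimeUnit.MINUTES);
    }
    return product;
}
```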
How to solve the cache breakdown problem?
For the specific scenario of periodically refreshed caches, the primary/secondary polling approach is generally used to solve cache breakdown.
- Timer update principle

Create two caches, A and B. When the timer refreshes the data, update cache B (primary) first and cache A (secondary) second. The order matters: queries fall back from A to B, so B must already hold fresh data while A is briefly empty during its own rebuild.
Flow: timer → (1) update primary cache B → (2) then update secondary cache A
- Query principle

The user queries cache A (secondary) first, and falls back to cache B (primary) if nothing is found in A (for example, because A was deleted during a cache refresh).
Flow: client → secondary cache A → (hit) return the result; secondary cache A → (miss) primary cache B → (hit) return the result; primary cache B → (miss) DB → return the result
These two steps turn the original single cache into a pair of caches and eliminate the cache breakdown problem.
Hands-on: primary/secondary polling
@PostConstruct
public void initJHSAB() {
    log.info("Start AB timer..........");
    new Thread(() -> runJhsAB()).start();
}
public void runJhsAB() {
    while (true) {
        // Simulate reading 100 deals from the database to load into the Juhuasuan bargain page
        List<Product> list = this.products();
        // Update B (primary) first: delete, then push the fresh list
        this.redisTemplate.delete(Constants.JHS_KEY_B);
        this.redisTemplate.opsForList().leftPushAll(Constants.JHS_KEY_B, list);
        // Then update A (secondary): delete, then push the fresh list
        this.redisTemplate.delete(Constants.JHS_KEY_A);
        this.redisTemplate.opsForList().leftPushAll(Constants.JHS_KEY_A, list);
        try {
            Thread.sleep(1000 * 60);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        log.info("Refresh..............");
    }
}
@GetMapping(value = "/findAB")
public List<Product> findAB(int page, int size) {
    List<Product> list = null;
    long start = (page - 1) * size;
    long end = start + size - 1;
    try {
        // Paging query implemented with the Redis list LRANGE command
        list = this.redisTemplate.opsForList().range(Constants.JHS_KEY_A, start, end);
        // If cache A yields nothing (for example, A was deleted during a cache refresh), fall back to cache B
        if (CollectionUtils.isEmpty(list)) {
            list = this.redisTemplate.opsForList().range(Constants.JHS_KEY_B, start, end);
        }
        log.info("{}", list);
    } catch (Exception ex) {
        // Reached on errors such as a Redis outage or network timeout
        log.error("exception:", ex);
        // TODO fall back to a DB query
    }
    return list;
}
Note: primary/secondary polling and delayed double delete address different scenarios; delayed double delete is a solution for double-write consistency between the cache and the database.
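For contrast, here is a minimal sketch of delayed double delete on a write path, assuming the same redisTemplate plus a hypothetical productMapper DAO and a 500 ms delay; it keeps the cache and the database consistent after an update, which is a different goal from the read-side polling above.

```java
// Hypothetical write path using delayed double delete (double-write consistency).
public void updateProduct(Product product) {
    String key = "product:" + product.getId(); // getId() assumed to exist on Product
    this.redisTemplate.delete(key);            // 1. delete the cache
    this.productMapper.update(product);        // 2. update the database (hypothetical DAO)
    try {
        Thread.sleep(500);                     // 3. wait out reads that may re-cache stale data
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
    this.redisTemplate.delete(key);            // 4. delete the cache again
}
```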
Redis distributed cache series
Chapter 1: Redis distributed Cache (twenty-seven) — Taobao Cost-effective product list solution
- 👍🏻 : If you learned something, a like is great encouragement!
- ❤️ : Bookmark the article so you can come back to it later!
- 💬 : Comments and discussion help everyone improve!