Original: Taste of Little Sister (WeChat official account ID: XjjDog). You are welcome to share this article; please credit the source.
Today the director is furious. The cache synchronization solution has been drilled into everyone for years, yet someone challenged authority by ignoring the rules. Inevitably something went wrong, and the director lost face.
“A bunch of people took the big boss’s account and ran their tests against it, and the data came out wrong,” the director snorted, hands clasped behind his back. Ever since the boss’s account was first used in a test, it has become the unspoken ultimate test account. So many people use it that the balance operations on this one account have effectively become a high-concurrency workload.
“The boss’s account is a test account…” came a low murmur from the room.
“No real account sees operations that intensive…” Another murmur made the director’s face grow darker and darker.
“Do you think I’m joking? I once had a level-1 production incident caused by exactly this kind of data inconsistency. Today I’m going to walk you through why data becomes inconsistent.”
Pushing up his glasses, the director made his way to the stage. I chuckled to myself: he was going to evangelize the Cache Aside Pattern yet again.
1. Why does data become inconsistent?
The database bottleneck is well known: in a high-concurrency environment, I/O quickly becomes the choke point. The most pressing task is to move the most frequently used data into faster storage. That faster storage can be distributed, like Redis, or in-process, like Caffeine.
But once a cache is added, you face a painful problem: data consistency.
The world is full of data inconsistency. Anyone who has studied Java multithreading will remember the Java Memory Model (JMM): as soon as the same value lives in two places at once, problems can arise.
But a caching system paired with a database is far less reliable than the JMM, because distributed components are more fragile and can fail at any moment.
2. Cache Aside Pattern
How do you ensure consistency between the DB and the cache? The widely accepted best practice is the Cache Aside Pattern.
Reads go to the cache first, then to the DB. The detailed steps are as follows:
- Every read goes to the cache first
- If the data is there, return it directly; that is a cache hit
- If the data cannot be read from the cache, fetch it from the DB; that is a cache miss
- Stuff the value fetched from the DB into the cache, so the next read hits
Now look at the write path. The rule is: update the DB first, then delete the cache. The detailed steps are as follows:
- Write the change to the database
- Delete the corresponding entry from the cache
A minimal sketch of both paths follows.
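Here is a minimal sketch of both paths in Java. The CacheClient and AccountDao interfaces and the Account entity (with a getId() accessor) are illustrative stand-ins, not from any real library:

interface CacheClient {
    Account get(String id);
    void set(String id, Account value);
    void delete(String id);
}

interface AccountDao {
    Account findById(String id);
    void update(Account account);
}

class AccountRepository {
    private final CacheClient cache;
    private final AccountDao dao;

    AccountRepository(CacheClient cache, AccountDao dao) {
        this.cache = cache;
        this.dao = dao;
    }

    // Read path: cache first, DB on a miss, then backfill the cache.
    Account read(String id) {
        Account hit = cache.get(id);       // 1. read from the cache
        if (hit != null) {
            return hit;                    // 2. cache hit
        }
        Account fromDb = dao.findById(id); // 3. cache miss: load from the DB
        cache.set(id, fromDb);             // 4. backfill so the next read hits
        return fromDb;
    }

    // Write path: update the DB first, then delete the cache entry.
    void write(Account account) {
        dao.update(account);               // 1. persist the change
        cache.delete(account.getId());     // 2. invalidate rather than update
    }
}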
At this point I watched several people frown. I knew some of them would push back, convinced they were right. For example: why delete the cache instead of updating it? Wouldn’t updating be more efficient? And why not delete the cache first and then update the database?
Oh boy, they were about to question the director.
3. Why delete the cache instead of updating it?
This one is easy to understand. When multiple update operations arrive concurrently, deletes always converge on the same final state, whereas updates may produce different results depending on how they interleave.
As shown in the figure, two requests A and B arrive, B after A, so B carries the newest data. Because of the cache, the slightest timing skew can make A’s value overwrite B’s in the cache, and the record in the database then disagrees with the cache until the next change to that data.
With deletion, the cache simply misses and the latest data is fetched from the DB to refill it on the next read, so the outcome barely depends on the timing of the cache operations. The replay below makes the difference concrete.
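Here is a deterministic replay of that bad interleaving, using plain maps to stand in for the DB and the cache (purely illustrative):

import java.util.HashMap;
import java.util.Map;

public class UpdateRaceReplay {
    public static void main(String[] args) {
        Map<String, Integer> db = new HashMap<>();
        Map<String, Integer> cache = new HashMap<>();

        db.put("acct", 100);    // request A updates the DB
        db.put("acct", 200);    // request B updates the DB; B is the newer value
        cache.put("acct", 200); // B updates the cache
        cache.put("acct", 100); // A's delayed cache write lands last

        // Prints db=200, cache=100: stale until the next change to the record.
        // Had A and B deleted the key instead, the next read would miss and
        // reload 200 from the DB, regardless of ordering.
        System.out.println("db=" + db.get("acct") + ", cache=" + cache.get("acct"));
    }
}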
4. Why not delete the cache first and then update the database?
The problem here is similar, and we don’t even need a concurrent-write scenario to expose it.
The cache deletion and the database update are clearly not in the same transaction. If one request deletes the cache entry and another request arrives before the update lands, the old row is loaded from the database back into the cache. The database update then completes, and the database and the cache disagree.
As the figure shows, the write request deletes the cache first. At exactly that moment, some read request loads the old database value into the cache, so the cache holds 0. Next the DB update completes, changing the record to 100. After this shuffle, the database and the cache are inconsistent. The replay below walks through it.
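The same interleaving, replayed deterministically with maps (again purely illustrative):

import java.util.HashMap;
import java.util.Map;

public class DeleteFirstReplay {
    public static void main(String[] args) {
        Map<String, Integer> db = new HashMap<>(Map.of("acct", 0));
        Map<String, Integer> cache = new HashMap<>(Map.of("acct", 0));

        cache.remove("acct");              // the writer deletes the cache first
        cache.put("acct", db.get("acct")); // a concurrent reader backfills the OLD value
        db.put("acct", 100);               // the writer finally updates the DB

        // Prints db=100, cache=0: the reader resurrected the stale value.
        System.out.println("db=" + db.get("acct") + ", cache=" + cache.get("acct"));
    }
}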
We all nodded and many people smiled.
5. Caching annotations in Spring
The mainstream Java Redis clients are Jedis, Redisson, and Lettuce. Spring uses Lettuce by default.
Many people prefer Spring’s cache abstraction, spring-cache.
It abstracts the cache layer with annotations and AOP, and can switch between various in-heap caching frameworks and distributed frameworks. Here are its Maven coordinates:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-cache</artifactId>
</dependency>
There are three steps to using spring-cache:
- Add the @EnableCaching annotation to the startup class;
- Use a CacheManager to initialize the cache framework you want, and use the @CacheConfig annotation to inject the resources to be used;
- Cache resources with annotations such as @Cacheable.
A minimal configuration sketch covering the first two steps:
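This assumes the Caffeine in-heap cache mentioned earlier is on the classpath; the class name and cache name are illustrative:

import com.github.benmanes.caffeine.cache.Caffeine;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.cache.caffeine.CaffeineCacheManager;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
@EnableCaching // step 1: enable Spring's cache support
public class App {

    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }

    @Bean
    public CacheManager cacheManager() { // step 2: pick the backing cache framework
        CaffeineCacheManager manager = new CaffeineCacheManager("accounts");
        manager.setCaffeine(Caffeine.newBuilder().maximumSize(10_000));
        return manager;
    }
}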
There are three annotations for caching operations:
- @Cacheable caches a method’s return value if it is not already in the cache system.
- @CachePut caches the return value every time the method executes.
- @CacheEvict clears the given cached values when the method executes.
Step 3 then looks roughly like the service below.
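A sketch of all three annotations on a hypothetical service; Account and AccountDao are illustrative, as above:

import org.springframework.cache.annotation.CacheConfig;
import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.CachePut;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
@CacheConfig(cacheNames = "accounts") // shared cache name for this service
public class AccountService {

    private final AccountDao accountDao;

    public AccountService(AccountDao accountDao) {
        this.accountDao = accountDao;
    }

    @Cacheable(key = "#id") // cache the result; later calls with the same id hit
    public Account find(String id) {
        return accountDao.findById(id);
    }

    @CachePut(key = "#result.id") // refresh the cached value on every call
    public Account save(Account account) {
        return accountDao.save(account);
    }

    @CacheEvict(key = "#id") // evict the entry when this method runs
    public void delete(String id) {
        accountDao.deleteById(id);
    }
}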
Does the @CacheEvict annotation in spring-cache delete before the method runs, or after? Not figuring that out can genuinely keep you up at night. A key technology should be not just pleasant to use, but safe to rely on.
Cache removal is implemented in CacheAspectSupport, where we find the following code.
// Process any early evictions
processCacheEvicts(contexts.get(CacheEvictOperation.class), true,
        CacheOperationExpressionEvaluator.NO_RESULT);
...
// Process any late evictions
processCacheEvicts(contexts.get(CacheEvictOperation.class), false, cacheValue);
There is a clearing action before the invocation and one after it, selected by a boolean variable, beforeInvocation. Where does this value come from? Look at the @CacheEvict annotation again.
/**
* Whether the eviction should occur before the method is invoked.
* <p>Setting this attribute to {@code true}, causes the eviction to
* occur irrespective of the method outcome (i.e., whether it threw an
* exception or not).
* <p>Defaults to {@code false}, meaning that the cache eviction operation
* will occur <em>after</em> the advised method is invoked successfully (i.e.,
* only if the invocation did not throw an exception).
*/
boolean beforeInvocation() default false;
The default value is false, meaning the delete action happens after the method completes.
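In other words, the default already matches the “update DB first, then delete cache” ordering. Spelling it out explicitly (the method and names are illustrative):

// Evict after the method returns (the default), so the DB update inside the
// method body happens first and the cache entry is deleted afterwards.
@CacheEvict(cacheNames = "accounts", key = "#account.id", beforeInvocation = false)
public void updateAccount(Account account) {
    accountDao.update(account);
}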
6. Are there any other patterns?
“I’ve heard there are other common cache synchronization patterns, such as the Read Through Pattern, the Write Through Pattern, and the Write Behind Caching Pattern. Why not use those?” One student, who had been fidgeting in his chair, seized the opportunity and finally spoke.
These patterns are in fact widely used, but because the business is mostly unaware of them, many people overlook them. In other words, these patterns are usually implemented in middleware or lower-level storage that business code never touches.
With Read Through, for example, you don’t even know the cache layer is there. Normally you would implement cache loading by hand, but with Read Through a proxy layer does it for you, as in the sketch below.
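Caffeine, mentioned earlier, gives exactly this shape in-process: the loader is attached to the cache, and callers never see the miss. The DAO is illustrative, and this fragment belongs inside some method or field initializer:

import java.util.concurrent.TimeUnit;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.LoadingCache;

LoadingCache<String, Account> accounts = Caffeine.newBuilder()
        .maximumSize(10_000)
        .expireAfterWrite(5, TimeUnit.MINUTES)
        .build(id -> accountDao.findById(id)); // loader runs only on a miss

Account one = accounts.get("42"); // read-through: a hit, or a transparent load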
With Write Through, you don’t worry about whether the database is in sync with the cache; the proxy layer does all of that work, and you simply push data into it.
Read Through and Write Through do not conflict; they can coexist, removing the concept of synchronization from business-layer code entirely. Very slick.
Write Behind Caching means data lands in the cache first, and asynchronous threads then slowly flush the cached data down to the DB. To use it, you must assess whether your data can tolerate loss and whether your cache capacity can absorb business peaks. Today’s operating systems, databases, and even message queues such as Kafka all implement this pattern to some degree. A toy sketch follows.
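This is only to show the shape of write-behind; real implementations batch, coalesce, and handle failures and backpressure. All names are illustrative:

import java.util.AbstractMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

public class WriteBehindCache {

    interface Dao {
        void updateBalance(String key, int value);
    }

    private final ConcurrentHashMap<String, Integer> cache = new ConcurrentHashMap<>();
    private final BlockingQueue<Map.Entry<String, Integer>> dirty = new LinkedBlockingQueue<>();

    public WriteBehindCache(Dao dao) {
        Thread flusher = new Thread(() -> {
            try {
                while (true) {
                    Map.Entry<String, Integer> e = dirty.take();  // block until work arrives
                    dao.updateBalance(e.getKey(), e.getValue());  // slow path: persist to DB
                }
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt();               // exit quietly on shutdown
            }
        });
        flusher.setDaemon(true);
        flusher.start();
    }

    public void write(String key, int value) {
        cache.put(key, value);                                    // fast path: cache only
        dirty.offer(new AbstractMap.SimpleEntry<>(key, value));   // queue the async flush
    }
}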
But it has nothing to do with our business needs right now.
7. Cache Aside Pattern is also problematic
The director had held the stage, fending off two challengers at once, and after this long bout of popular science all the students were convinced. Just as everyone was about to applaud, a discordant voice rang out.
“I’ve found a huge problem,” one student said. “If the database update succeeds but the cache deletion fails, you still end up with an inconsistent cache.”
“That’s a good question, because most failures come from exactly these extreme conditions. And this is where it gets interesting. We have to play the probabilities, because there is no 100% safety net.” The director smiled.
Method 1: put the database update and the cache deletion in one transaction, so they advance or roll back together.
Method 2: if the cache deletion fails, retry it a certain number of times. If it still fails, the cache service itself has probably failed; in that case, log the affected keys and delete them once the cache service recovers.
Method 3: delete the cache, update the database, then delete the cache again after a short delay. It takes more operations, but it is safer; a sketch follows.
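A minimal sketch of Method 3, the so-called double delete, reusing the illustrative CacheClient and AccountDao interfaces from earlier; the 500 ms delay is a guess that must be tuned to how long your read path can stall:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DoubleDelete {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    void update(CacheClient cache, AccountDao dao, Account account) {
        cache.delete(account.getId());               // first delete
        dao.update(account);                         // update the database
        scheduler.schedule(                          // delayed second delete evicts
                () -> cache.delete(account.getId()), // any stale backfill in between
                500, TimeUnit.MILLISECONDS);
    }
}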
“No problems now, right?” The director looked around and saw everyone nodding. No, no, no. There is still an inconsistency.
Everyone was confused.
The picture above looks correct but is actually wrong. Why? Because reading data from the database into the cache is not an atomic operation.
For example, just as the cache entry expires (or is deleted), a read request arrives. It reads the old database value but, for whatever reason (say a network hiccup), is delayed instead of writing to the cache immediately. In the window before it writes, plenty can happen: another request updates the database value to 200 and deletes the cache.
Only after that second request completes does the first request write to the cache. From that point on, the values in the database and the cache are no longer in sync, as the replay below shows.
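Replaying that window deterministically with maps (illustrative only):

import java.util.HashMap;
import java.util.Map;

public class DelayedReadReplay {
    public static void main(String[] args) {
        Map<String, Integer> db = new HashMap<>(Map.of("acct", 100));
        Map<String, Integer> cache = new HashMap<>(); // the entry just expired

        int stale = db.get("acct"); // the reader misses, loads 100, then stalls
        db.put("acct", 200);        // a writer updates the DB...
        cache.remove("acct");       // ...and deletes the (already empty) entry
        cache.put("acct", stale);   // the stalled reader finally backfills 100

        // Prints db=200, cache=100: out of sync until the next write.
        System.out.println("db=" + db.get("acct") + ", cache=" + cache.get("acct"));
    }
}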
So why does almost everyone ignore this scenario in everyday design? Because the odds of it happening are tiny. It requires a read racing against two or more writes (or a cache expiry) in exactly the wrong order, which is very rare in real application scenarios. Note also that the dashed-line window covers a database update plus a cache delete, which generally takes longer than a cache set, shrinking the probability even further.
“So, do you know the correct way to do it now?” the director asked.
“We do! From now on we’ll let the spring-cache annotations do the work instead of hand-rolling consistency logic in the code.”
“Good, good, good.”
Xjjdog is a public account that keeps programmers from going astray. It focuses on infrastructure and Linux. Ten years of architecture, tens of billions of daily requests, exploring the high-concurrency world with you for a different taste. My personal WeChat is xjjdog0; you are welcome to add me as a friend for further discussion.