How to do caching in microservices
Caching has become an essential part of microservice projects; in 2020 you can hardly build microservices without it. Today I'm going to walk through what microservice caching is and how to use it properly.
Before we start, here is a quick overview of Redis. If you're already comfortable with this part, feel free to skip it.
1. Integrating Redis with Spring Boot
Spring Boot already packages Redis support for us as a starter. We can take a look at the official website:
We need to add the Redis starter to our POM file:
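For reference, the starter in question is spring-boot-starter-data-redis; a minimal dependency entry looks like this:

```xml
<!-- Spring Boot starter for Redis (Spring Data Redis plus a Redis client) -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
```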
No version number is specified because version management is handled by the parent project.
Once the dependency is in place, we can take a look at what it brings in.
Configure Redis
A few configuration items to keep an eye on:
- host: the IP address of the Redis server
- port: the Redis port, usually 6379
- password: the Redis password
If Redis has no password, the password property can simply be omitted.
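A minimal application.properties sketch (the host value and password here are placeholders):

```properties
# Address and port of the Redis server
spring.redis.host=192.168.1.100
spring.redis.port=6379
# Omit this line entirely if Redis has no password
spring.redis.password=123456
```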
Use StringRedisTemplate to operate on Redis
First, inject a StringRedisTemplate via @Autowired.
Next, write a test class:
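A minimal sketch of such a test, assuming a standard Spring Boot test setup (the key and value are arbitrary examples):

```java
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.redis.core.StringRedisTemplate;

@SpringBootTest
class RedisDemoTests {

    // Auto-configured by spring-boot-starter-data-redis
    @Autowired
    private StringRedisTemplate stringRedisTemplate;

    @Test
    void testStringRedisTemplate() {
        // Write a key, then read it back
        stringRedisTemplate.opsForValue().set("hello", "world");
        String value = stringRedisTemplate.opsForValue().get("hello");
        System.out.println(value); // prints: world
    }
}
```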
At this point, we have successfully integrated Redis
2. Problems with using a cache in a distributed system
- Cache breakdown
Cache breakdown concerns a hot key that has an expiration time and is expected to receive very high concurrent traffic at some point. If the key expires just before a burst of requests arrives, every request misses the cache and is forwarded to the database; this scenario is called cache breakdown.
The solution
Use a lock to throttle the concurrent requests: only one request is allowed to query the database, and the rest wait. When a waiting thread acquires the lock, it checks the cache first; since the value is now there, it returns it without hitting the database again. We will discuss the problems that locking can cause later.
- Cache penetration
Cache penetration means querying data that does not exist. The cache misses, so the database is queried, but the database has no such record either; if we do not write this null result into the cache, every query for the non-existent data goes straight to the database and the cache becomes meaningless.
Risk
If this is exploited maliciously, the instantaneous load on the database can be large enough to crash it.
The solution
Write the null value to the cache and set a short expiration time.
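A minimal sketch, assuming the StringRedisTemplate from earlier and a hypothetical productMapper for the database lookup; the "NULL" sentinel and the 2-minute TTL are arbitrary choices:

```java
public String getProduct(String id) {
    String cacheKey = "product:" + id;
    String cached = stringRedisTemplate.opsForValue().get(cacheKey);
    if (cached != null) {
        // A cached "NULL" sentinel means the record is known not to exist
        return "NULL".equals(cached) ? null : cached;
    }
    String fromDb = productMapper.findById(id); // hypothetical database query
    if (fromDb == null) {
        // Cache the empty result with a short TTL so repeated lookups stop hitting the DB
        stringRedisTemplate.opsForValue().set(cacheKey, "NULL", 2, TimeUnit.MINUTES);
        return null;
    }
    stringRedisTemplate.opsForValue().set(cacheKey, fromDb);
    return fromDb;
}
```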
- Cache avalanche
Cache avalanche occurs when a large batch of keys is cached with the same expiration time, so they all expire at the same moment. Every request is then forwarded to the database at once, and the instantaneous pressure can crash it.
The solution
Add a random offset to the base expiration time, for example 1 to 5 minutes, so that the expiration times rarely coincide and a collective expiration event is much harder to trigger.
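A two-line sketch, assuming a base TTL of 30 minutes plus a random 1-5 minute offset (both values are arbitrary):

```java
// Spread expirations out so a whole batch of keys cannot expire at the same moment
long ttlMinutes = 30 + ThreadLocalRandom.current().nextLong(1, 6);
stringRedisTemplate.opsForValue().set(key, value, ttlMinutes, TimeUnit.MINUTES);
```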
3. Solving cache breakdown with a lock
- Use synchronized
As long as every thread competes for the same lock, the threads can be serialized. Components managed by Spring Boot are singletons by default, so we can simply use synchronized (this) as the local lock.
Suppose three requests A, B, and C arrive at the same time and none of them hits the cache. A acquires the lock while B and C block and wait; A queries the database, finds the data, writes it into the cache, returns the response, and releases the lock. B and C then query the cache again, hit it, and return the result.
In addition, pay attention to the ordering while holding the lock: checking the cache, querying the database, and writing the result back into the cache must happen as one atomic unit inside the lock. Otherwise the ordering breaks and the lock effectively does nothing. A sketch of this pattern follows.
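A minimal sketch of the local lock with a double check, assuming this method lives in a singleton Spring bean and getDataFromDb is a hypothetical database call:

```java
public String getData(String key) {
    // First check outside the lock: most requests should be served from the cache
    String cached = stringRedisTemplate.opsForValue().get(key);
    if (cached != null) {
        return cached;
    }
    // Spring beans are singletons, so synchronized(this) serializes threads within this instance
    synchronized (this) {
        // Second check inside the lock: the thread that held the lock first may have filled the cache
        cached = stringRedisTemplate.opsForValue().get(key);
        if (cached != null) {
            return cached;
        }
        String fromDb = getDataFromDb(key); // hypothetical database query
        // Check cache, query DB, and write back all happen while the lock is held
        stringRedisTemplate.opsForValue().set(key, fromDb);
        return fromDb;
    }
}
```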
- How to lock in a distributed environment
A local lock only covers the current process, so with multiple service instances we need a distributed lock.
4. Distributed locks
Principle of distributed locking
All instances compete for the same lock at the same time. Whoever grabs it executes the relevant business logic; everyone else waits until the lock is released.
The lock can live anywhere that every instance can reach, such as Redis or MySQL.
Waiting can be implemented by spinning, i.e. retrying in a loop.
Implementation of distributed Lock (based on Redis)
Take a look at the introduction to the SET command on the official website:
- EX seconds: sets the expiration time of the key, in seconds
- PX milliseconds: sets the expiration time of the key, in milliseconds
- NX: The key value is set only when the key does not exist
- XX: The key value is set only when the key exists
We can implement distributed locking with the SET NX parameter.
Phase one
This kind of distributed lock can run into problems:
- setnx acquires the lock, but then the business code throws an exception or the process crashes unexpectedly. The delete-lock logic never runs, and the result is a deadlock.
So how do we solve this problem?
- Give the lock an expiration time. Even if it is never explicitly deleted, it will be removed automatically once it expires (see the sketch below).
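A minimal sketch using setIfAbsent, which issues a single atomic SET ... NX EX command, so the lock and its expiration cannot be separated by a crash (the lock name and the 30-second TTL are arbitrary):

```java
// The value and the expiration are set in one atomic command
Boolean acquired = stringRedisTemplate.opsForValue()
        .setIfAbsent("lock", "locked", 30, TimeUnit.SECONDS);
if (Boolean.TRUE.equals(acquired)) {
    // ... run the business logic, then delete the lock ...
}
```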
Phase two
Is there any problem with this design?
There is:
- If the business logic takes longer than the lock's expiration time, the lock expires while we are still working; when we then delete the lock directly, we may be deleting a lock that now belongs to someone else.
Solution: when acquiring the lock, store a UUID as its value, and delete the lock only after verifying that it still holds your own UUID, as sketched below.
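A sketch of this pattern: acquire the lock with a random UUID as its value, and release it with a small Lua script so that "compare the value" and "delete the key" run as one atomic operation on the Redis side (the lock name and TTL are arbitrary):

```java
// Acquire: store our UUID so we can later prove the lock is still ours
String uuid = UUID.randomUUID().toString();
Boolean acquired = stringRedisTemplate.opsForValue()
        .setIfAbsent("lock", uuid, 30, TimeUnit.SECONDS);

// Release: delete the lock only if it still holds our UUID, atomically via Lua
String script =
        "if redis.call('get', KEYS[1]) == ARGV[1] then "
      + "  return redis.call('del', KEYS[1]) "
      + "else return 0 end";
stringRedisTemplate.execute(
        new DefaultRedisScript<>(script, Long.class),
        Collections.singletonList("lock"),
        uuid);
```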
At this point we can already guarantee that locking (lock + expiration time) and deleting locks (judge + delete) are atomic operations, but we still can’t ignore the automatic renewal of locks.
For automatic renewal we can use a background thread that checks the lock at a fixed interval and, if the lock is still held, extends its expiration time.
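A simplified, hypothetical watchdog sketch: a scheduled task periodically checks whether the lock still holds our UUID (from the previous sketch) and, if so, pushes its expiration out again. Strictly speaking, the check and the renewal should also be wrapped in a Lua script to be atomic; this sketch skips that for brevity:

```java
// Every 10 seconds, extend the lock by another 30 seconds if we still own it
ScheduledExecutorService watchdog = Executors.newSingleThreadScheduledExecutor();
watchdog.scheduleAtFixedRate(() -> {
    if (uuid.equals(stringRedisTemplate.opsForValue().get("lock"))) {
        stringRedisTemplate.expire("lock", 30, TimeUnit.SECONDS);
    }
}, 10, 10, TimeUnit.SECONDS);
// Shut the watchdog down once the lock has been released
```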
There is, of course, a Redisson framework that encapsulates all of this for us, and we’ll talk about Redisson distributed locking next time.
Thank you for reading and I hope my sharing will help.
I’m Wu four three two one.