When server QPS is high and the data does not need to be strictly real-time, we tend to serve reads from a cache to achieve fast responses and reduce pressure on the database; Redis is the most commonly used cache middleware. Interviewers love this topic, and the cache breakdown, penetration, and avalanche scenarios come up in almost every interview. Do you really understand them?
The front end initiates a request, completes the three-way handshake, connects to the server, and asks for the corresponding data. Once the server is connected to the cache middleware, what is the process from receiving the request to sending the final response? The following discussion ignores logic such as parameter validation and goes straight to the core call chain.
Call chain
When a request arrives, the server first establishes a connection with the cache middleware and passes it the corresponding key to look up the data. When the server gets the result back, it checks whether the result contains data. If it does, the server returns the cached result. If the cache middleware has no data, the server establishes a database connection, queries the database, and checks whether that result contains data. If it does, the server returns it; if not, the server returns whatever the business scenario requires (usually null or a new empty object).
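A minimal sketch of this read path, using the same hypothetical redis client and loadDataFromMySQL helper that appear in the pseudocode later in this article:

// Pseudocode: the basic cache-aside read path described above
public static String getData(String key) {
    // Ask the cache middleware first
    String response = redis.get(key);
    if (Objects.nonNull(response)) {
        // Cache hit: return the cached result directly
        return response;
    }
    // Cache miss: fall back to the database
    response = loadDataFromMySQL(key);
    // If the database has no data either, return whatever the business
    // scenario requires (usually null or a new empty object)
    return response;
}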
Cache breakdown
Meaning:
What is cache breakdown? Colloquially, it is the scenario where the cache has no data but the database does. Why does the cache have no data? Because an expiration time was set on the cache entry and it has expired. When the cache misses, the query naturally falls through to the database, so why call it a breakdown? Because the key holds hotspot data under heavy concurrent access: a flood of requests query the database at the same moment, database pressure spikes, and the database may even crash.
Solution:
1. If the data never changes, such as constant values, set the corresponding hotspot key to never expire, as sketched below.
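As a minimal sketch, assuming the same hypothetical redis client as the pseudocode below, with a set overload that takes an expiration in seconds (the key names here are made-up examples):

// Constant hotspot data: write it without an expiration time, so it never expires
redis.set("config:constant:rate", constantValue);
// Ordinary data: write it with an expiration time, e.g. 60 seconds
redis.set("user:info:" + userId, userInfoJson, 60);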
2. Add a mutex so that, within a single server, only one thread at a time can fall through to the database.
// Pseudocode: double-checked locking around the cache lookup
public class Main {

    public static String getHotData(String key) {
        // Obtain the hotspot key's data from the cache middleware
        String response = redis.get(key);
        // The cache has no data
        if (Objects.isNull(response)) {
            // Within this server, only one thread may pass at a time
            synchronized (Main.class) {
                // Suppose thread A enters synchronized while threads B and C block outside.
                // After A exits, B and C should read from Redis again rather than hit
                // the database a second time, hence the second get below.
                response = redis.get(key);
                if (Objects.isNull(response)) {
                    // Query the database and write the result into Redis
                    response = loadDataFromMySQL(key);
                    redis.set(key, response);
                }
            }
        }
        return response;
    }
}
3. Use a distributed lock so that, across the whole service cluster, only one thread can access the database at a time.
// Pseudocode: a distributed lock guarding the database load
public class Main {

    public static String getHotData(String key, int tryTime) throws InterruptedException {
        // Give up after a few attempts
        if (tryTime >= 4) {
            return "";
        }
        // Obtain the hotspot key's data from the cache middleware
        String response = redis.get(key);
        // The cache has no data
        if (Objects.isNull(response)) {
            // Ensure that only one thread in the whole service cluster loads the data
            if (redis.tryLock()) {
                try {
                    // Re-check the cache, then query the database and write back into Redis
                    response = redis.get(key);
                    if (Objects.isNull(response)) {
                        response = loadDataFromMySQL(key);
                        redis.set(key, response);
                    }
                } finally {
                    redis.unlock();
                }
            } else {
                // Lock not acquired: back off briefly, then retry
                TimeUnit.MILLISECONDS.sleep(100);
                return getHotData(key, tryTime + 1);
            }
        }
        return response;
    }
}
Cache penetration
Meaning:
Cache penetration refers to the situation where neither the cache middleware nor the database holds the corresponding data, yet requests for that key keep arriving, overloading the database or even crashing it.
Solution:
1. If the database returns no data, still write a placeholder value for the key into the cache layer, with a short expiration time such as 10s or 1min depending on business requirements, as sketched below. Why not set it to never expire? First, the key may have business meaning and simply have no data at this point in time, so it cannot be cached forever. Second, if the access really is malicious, the requests may well stop after a while, so there is no need to keep the placeholder in the cache.
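A minimal sketch of this placeholder approach, reusing the hypothetical redis client and loadDataFromMySQL helper from the pseudocode above; the 60-second TTL is an arbitrary example value:

// Pseudocode: cache a short-lived placeholder when the database has no data
public static String getData(String key) {
    String response = redis.get(key);
    if (Objects.isNull(response)) {
        response = loadDataFromMySQL(key);
        if (Objects.isNull(response)) {
            // The database has no data either: cache an empty placeholder with a
            // short expiration so repeated misses stop hammering the database
            redis.set(key, "", 60);
            return "";
        }
        redis.set(key, response);
    }
    return response;
}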
2. Add validation and filter out requests that cannot possibly match real data. For example, if the cache stores user information and the cache key is derived from the user ID, and valid IDs are greater than or equal to 0, then any request with an ID less than 0 can be filtered out directly.
@Controller
public class Controller {

    @RequestMapping(value = "/test")
    public String printHello(Integer id) {
        // Filter out requests that cannot possibly match real data
        if (Objects.isNull(id) || id < 0) {
            return null;
        }
        // Handle the corresponding business logic
        // ...
    }
}
Cache avalanche
Meaning:
Cache avalanche refers to a large amount of cached data, all of it hot, expiring at the same time, so the concurrent load hits the database simultaneously, database pressure spikes, and the database may even go down. Some people may ask: isn't that the same as cache breakdown? Cache breakdown is concurrent queries against a single hot key; a cache avalanche involves a large volume of keys. One typical scenario is a core page whose contents are all cached and all given the same expiration time.
Solution:
1. The simplest method is to ensure that hotspot data does not expire.
2. When setting the expiration time for each hotspot key, add a random offset so that hot data does not expire in bulk at the same moment (see the sketch after this list).
3. Use a mutex or a distributed lock, as described in the cache breakdown scenario.
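A minimal sketch of item 2, again with the hypothetical redis client; the base TTL and jitter range are example values only:

// Pseudocode: spread expirations out so hot keys do not all expire together
public static void cacheWithJitter(String key, String value) {
    int baseTtlSeconds = 600;                                     // common base expiration
    int jitterSeconds = ThreadLocalRandom.current().nextInt(120); // random 0-119s offset
    redis.set(key, value, baseTtlSeconds + jitterSeconds);
}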
May everyone read this article with healthy skepticism and explore how these mechanisms really work.
The road is long and arduous; the past is but a preface, and the future the chapters to come.
Looking forward to our next meeting!