High-concurrency scenarios are common in everyday engineering work; the most typical are big promotional events such as “618” and “Double 11”, and the most effective means of coping with high concurrency is distributed caching. A cache is not just for querying data: it also plays a major role in rate limiting, queue-based peak shaving, high-speed reads and writes, distributed locks, and similar scenarios. Caching is a great tool for dealing with high concurrency. This article only analyzes the architecture; it does not go into the finer implementation details.

Overview of multi-level cache architecture

Let’s take a user’s request as an example and walk through the architecture in the figure above:

1. The browser can cache HTML pages and other static resources locally, preventing frequent refreshes from hammering the backend

2. LVS can forward requests of different protocols and cache the request connections of different users

3. Nginx can be configured to cache HTML pages, and Nginx itself can also cache data

4. Lua scripts can query the database directly, bypassing our Java application and greatly improving both query efficiency and concurrency (see the sketch after this list)

5. Redis can cache our database’s data; deploying Redis Sentinel or Redis Cluster further improves availability and concurrency

6. Canal can be integrated to synchronize incremental database changes to Redis in real time
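To make point 4 concrete, here is a minimal sketch of a Lua script that queries MySQL directly from OpenResty through the lua-resty-mysql library, bypassing the Java application entirely. The connection settings and the product table are assumptions for illustration.

```lua
-- Sketch for a content_by_lua_block: query MySQL directly from OpenResty,
-- bypassing the Java application (connection details are illustrative)
local mysql = require "resty.mysql"
local cjson = require "cjson"

local db, err = mysql:new()
if not db then
    ngx.say("failed to instantiate mysql: ", err)
    return
end
db:set_timeout(1000)  -- 1s timeout for connect/read

local ok, err = db:connect{
    host     = "127.0.0.1",
    port     = 3306,
    database = "mall",    -- hypothetical schema
    user     = "root",
    password = "secret",
}
if not ok then
    ngx.say("failed to connect: ", err)
    return
end

-- hypothetical product table; rows come back as a Lua array of tables
local res, err = db:query("select id, name, price from product where id = 1")
if not res then
    ngx.say("query failed: ", err)
    return
end

db:set_keepalive(10000, 100)  -- return the connection to the pool
ngx.say(cjson.encode(res))
```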

Hot data queries with a multi-level cache

As we all know, mask sales were strongly affected by the epidemic: especially in its early stage, masks received a huge volume of user queries. In cases like this, a multi-level cache greatly improves efficiency. Let’s use this case to analyze hot data queries, as shown in the figure below:

This is a commonly used architecture. A user queries mask products; the request passes through Nginx and Tomcat, and we then check whether the relevant data is in Redis. If it is, we return it; if not, we query the database and then cache the result in Redis. However, during an emergency such as the mask rush, this system may be overwhelmed or even crash. We can solve that problem with a multi-level cache.
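Before the Nginx-level cache is added, the read path just described might look like the sketch below. It is written in OpenResty Lua for consistency with the rest of this article, although in this first architecture the logic would live in the Java application on Tomcat; the cache key and the inline database placeholder are assumptions.

```lua
-- Cache-aside read path: try Redis first, fall back to the database,
-- then populate Redis with a TTL (key name and DB call are illustrative)
local redis = require "resty.redis"
local red = redis:new()
red:set_timeout(1000)

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.say("redis unavailable: ", err)
    return
end

local key = "product:mask"
local data = red:get(key)

if data == ngx.null then
    -- cache miss: query the database (placeholder for the real lookup,
    -- e.g. the lua-resty-mysql call sketched earlier)
    data = '{"id":1,"name":"mask","price":9.9}'
    red:setex(key, 60, data)  -- cache for 60s so stale data ages out
end

red:set_keepalive(10000, 100)
ngx.say(data)
```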

The figure above shows our multi-level cache architecture built on OpenResty + Lua. OpenResty can be thought of as an enhanced Nginx that lets us run Lua scripts inside Nginx, and Lua performs particularly well under high concurrency. Following this architecture, let’s briefly walk through a query for masks, a hot commodity (a code sketch follows the list):

1. A user queries masks, and the request executes the Lua script

2. The Lua script checks whether the data is in the Nginx cache

3. If the Nginx cache has no data, Lua checks whether the data exists in Redis; if it does, the data is returned and also cached in Nginx

4. If Redis has no data either, Lua queries the database, caches the result in both Redis and Nginx, and returns it
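Put together, the four steps might look roughly like the sketch below. It assumes a `lua_shared_dict item_cache 128m;` directive in nginx.conf and reuses the hypothetical key and database placeholder from the earlier sketches.

```lua
-- Multi-level lookup: Nginx shared memory -> Redis -> database
-- (assumes `lua_shared_dict item_cache 128m;` in nginx.conf)
local cache = ngx.shared.item_cache
local key = "product:mask"

-- level 1: Nginx local cache, served without leaving the worker
local data = cache:get(key)
if data then
    ngx.say(data)
    return
end

-- level 2: Redis
local redis = require "resty.redis"
local red = redis:new()
red:set_timeout(1000)
local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.say("redis unavailable: ", err)
    return
end

data = red:get(key)
if data == ngx.null then
    -- level 3: database (placeholder for the real query)
    data = '{"id":1,"name":"mask","price":9.9}'
    red:setex(key, 60, data)  -- backfill Redis
end
cache:set(key, data, 10)      -- backfill the Nginx cache, short 10s TTL

red:set_keepalive(10000, 100)
ngx.say(data)
```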

From this flow we can see that when the data is already in the Nginx cache, it is returned directly, which greatly improves response speed; a single Nginx instance can handle roughly 10K concurrent connections, which greatly improves the system’s processing capacity.

Multi-level caching for red envelope grabbing with MQ peak shaving

Red envelopes are now a very common form of promotion, with the following characteristics:

1. Very high concurrency

2. Red envelopes are not distributed all at once; sometimes they need to be distributed over a period of time

3. Each user has a win probability and an upper limit, and higher-level users get a higher probability and larger amounts (see the sketch after this list)

4. The total amount grabbed by users cannot exceed the amount issued

5. Additional red envelopes can be issued immediately or on a delay
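For characteristic 3, one possible approach is a Redis counter per user (INCR is atomic, so concurrent requests cannot slip past the cap) combined with a simple random draw per level. The sketch below makes this concrete; the key names, level table, and cap are all assumptions.

```lua
-- Per-user grab limit and level-based win probability (values illustrative)
local redis = require "resty.redis"
local red = redis:new()
red:set_timeout(1000)
local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.say("redis unavailable: ", err)
    return
end

-- hypothetical per-level win probability and per-user cap
local win_prob = { [1] = 0.10, [2] = 0.25, [3] = 0.50 }
local max_grabs_per_user = 3

local uid, level = "42", 2  -- normally taken from the user's session

-- INCR is atomic: no two concurrent requests can both see count <= cap
local count, err = red:incr("hb:grab:count:" .. uid)
if not count then
    ngx.say("redis error: ", err)
elseif count > max_grabs_per_user then
    ngx.say("limit reached")
elseif math.random() < win_prob[level] then
    ngx.say("win")   -- proceed to pop an envelope from the stock queue
else
    ngx.say("miss")
end

red:set_keepalive(10000, 100)
```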

We have now analyzed the characteristics of a red envelope rain; to build an efficient red envelope rain system, the cache architecture is key. Based on these characteristics, we designed the red envelope rain cache architecture shown in the figure above.

1. Import the red envelopes into the Redis cache in batches, instead of operating on the database each time

2. When many users grab red envelopes at once, we store the envelopes in a Redis queue (a list), so that no single envelope can be grabbed by more than one person

3. User requests go through Nginx, where a Lua script checks whether any red envelopes are left in the cache; if none are left, the grab stops immediately

4. If red envelopes remain, then to avoid the backend handling too many requests at the same time, we buffer the grabs in a queue and let the backend consume that queue to execute the actual grab (see the sketch after this list)
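At the Nginx layer, steps 2 to 4 could be expressed roughly as the Lua sketch below. The two list keys, hb:stock for the pre-imported envelopes and hb:grab:req for the backend work queue, are hypothetical names.

```lua
-- Grab path at the Nginx layer: LPOP one envelope from the stock list
-- (atomic, so no envelope can be grabbed twice), then enqueue the grab
-- for asynchronous backend processing
local redis = require "resty.redis"
local red = redis:new()
red:set_timeout(1000)
local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.say("redis unavailable: ", err)
    return
end

local uid = "42"  -- normally taken from the user's session

local envelope = red:lpop("hb:stock")
if envelope == ngx.null then
    -- no envelopes left: stop the grab right here, never touch the backend
    ngx.say("sold out")
else
    -- queue the grab; a backend consumer settles it against the database
    red:rpush("hb:grab:req", uid .. "|" .. envelope)
    ngx.say("grabbed: ", envelope)
end

red:set_keepalive(10000, 100)
```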

Grabbing red envelopes is a high-concurrency scenario. If the backend server handled every grab request directly, it would likely collapse, just as a highway toll station descends into chaos, and its service breaks down, when a crowd of drivers rushes in without queuing. So in our design we use a message queue for peak shaving.
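The consumer side of that queue runs as a separate worker and drains requests at a pace the database can tolerate. A production system would typically use RabbitMQ or Kafka here; to stay consistent with the sketches above, this version consumes the same hypothetical Redis list and can be run with OpenResty’s resty command-line utility.

```lua
-- Queue consumer sketch: drain grab requests at a controlled pace
-- (run with `resty consumer.lua`; in production this would be an MQ consumer)
local redis = require "resty.redis"
local red = redis:new()
red:set_timeout(10000)  -- must exceed the BRPOP block time below

local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.say("redis unavailable: ", err)
    return
end

while true do
    -- block for up to 5s waiting for the next grab request
    local res, err = red:brpop("hb:grab:req", 5)
    if res and res ~= ngx.null then
        local msg = res[2]  -- BRPOP returns { queue_name, value }
        -- placeholder: persist the grab to the database here
        ngx.say("settling grab: ", msg)
    end
end
```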