“This is the 22nd day of my participation in the November Gwen Challenge. Check out the details: The Last Gwen Challenge of 2021.”
Seckill (flash sale) is a very typical activity scenario: think of the seckill promotions during e-commerce events such as Double 11 and 618. The defining business characteristic of the seckill scenario is that it is time-limited, so the system must process a huge number of instantaneous, highly concurrent requests, and Redis is often used to support seckill activities.
However, a seckill event actually consists of multiple stages, which can be divided into three: before, during, and after the seckill. Each stage has different request-processing requirements, and Redis is not needed in every one of them.
So, where exactly does Redis support the seckill scenario, and how? Once we know this, we can understand how to use Redis to withstand the high concurrency pressure of a seckill and how to design for this scenario.
Next, let’s look at the load characteristics of the seckill scenario.
Requirements for the support system in the seckill scenario
Seckill events usually sell goods at very attractive prices, drawing a large number of users to snap them up. However, the inventory is far smaller than the number of users trying to buy, and purchases are limited to a short time window. This gives the seckill system two obvious load characteristics, which in turn impose requirements on the supporting system. Let’s analyze them.
The first feature is that instantaneous concurrent traffic is very high.
Whereas a typical database can only handle thousands of concurrent requests per second, Redis can handle tens of thousands or more. Therefore, when a flood of concurrent requests hits a seckill system, we use Redis to intercept most of them first, so that the bulk of the requests never reaches the database and overwhelms it.
The second characteristic is that the read operations are simple queries.
In the seckill scenario, users first need to check whether the product is still in stock (that is, query the inventory by product ID); only when there is remaining inventory can the seckill system deduct the stock and place the order.
The inventory check is a typical key-value query, and Redis’s efficient support for key-value queries matches this operation’s requirements well.
However, only a small fraction of users can successfully place an order during a seckill, so the inventory queries (read operations) far outnumber the inventory deductions and order placements (write operations).
Of course, a real seckill involves several links in a chain, and the inventory check just described is only one of them. So where exactly does Redis fit into the overall seckill flow? To answer that, we need to walk through the overall process of a seckill event.
Where does Redis come into play in a seckill scenario?
We can roughly divide a seckill event into three stages, and Redis plays a different role in each.
The first stage is before the seckill starts.
At this stage, users keep refreshing the product detail page, which produces an instantaneous spike of requests for that page. The solution here is to make the elements of the product detail page as static as possible, and then cache those static elements with a CDN or the browser. This way, the large volume of pre-seckill requests is served directly from the CDN or browser cache and never reaches the server side, which relieves the pressure there.
At this stage, the CDN and browser cache are enough to serve the requests; we don’t need Redis yet.
The second stage is when the seckill starts.
At this point, a large number of users click the seckill button on the product detail page, generating a flood of concurrent inventory-query requests. Once a request finds remaining inventory, it immediately deducts the stock; the system then generates the actual order and performs subsequent processing such as payment and logistics. If a request finds no remaining inventory, it returns, and the user will typically keep clicking the seckill button to query the inventory again.
In short, this stage involves three operations: inventory check, inventory deduction, and order processing. Since every seckill request queries the inventory, while the deduction and order processing only run once remaining stock is found, the greatest concurrency pressure in this phase falls on the inventory check.
To support the huge volume of highly concurrent inventory-check requests, we keep the inventory in Redis for this step, so that requests can read and check the inventory directly from Redis.
So, can inventory deduction and order processing be handed over to the back-end database?
Actually, order processing can be done in the database, but the inventory deduction cannot be handed over to the back-end database.
The reason for processing an order in a database is relatively simple, and I’ll start with that.
Order processing involves payment, shipping out of the warehouse, logistics, and other related operations. These operations touch multiple database tables and must be completed in the database to guarantee transactional processing. Moreover, by this point the request pressure on order processing is no longer heavy, so the database can support these requests.
So why can’t the inventory deduction be performed in the database? Because once a request finds remaining stock, the user who sent it is eligible to buy the item and will place an order, and at the same time the remaining inventory must be reduced by one. If we put the inventory deduction in the database, two problems arise.
- Extra overhead. Redis caches the inventory while the database maintains the authoritative value, so after every database update the new value must be synchronized back to Redis. This adds extra operational logic and extra overhead.
- Overselling, where the number of orders exceeds the actual stock. Because the database processes requests slowly, the remaining inventory cannot be updated in time, so many inventory-check requests read a stale inventory value and place orders. The order count then exceeds the actual stock, which violates the business requirements.
Therefore, we need to perform the inventory deduction directly in Redis. Concretely, as soon as the inventory check finds remaining stock, we immediately deduct the inventory in Redis. Furthermore, to avoid requests reading stale inventory values, the inventory check and inventory deduction must be executed atomically.
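To make the overselling problem concrete, here is a small, deterministic Python sketch (all names are hypothetical) that interleaves two requests performing a non-atomic check-then-deduct against the same last unit of stock, the way it can happen under real concurrency:

```python
# Simulate two concurrent seckill requests doing a NON-atomic
# check-then-deduct against a shared stock counter.
stock = {"total": 1, "ordered": 0}

def check(stock):
    # Step 1: read the remaining inventory (may be stale by deduct time)
    return stock["total"] - stock["ordered"] > 0

def deduct(stock):
    # Step 2: deduct one unit
    stock["ordered"] += 1

# An interleaving that happens under concurrency:
# both requests pass the check before either one deducts.
a_ok = check(stock)   # request A sees 1 unit left -> True
b_ok = check(stock)   # request B also sees 1 unit left -> True
if a_ok:
    deduct(stock)     # A orders the last unit
if b_ok:
    deduct(stock)     # B orders it again -> oversell

print(stock["ordered"])  # prints 2: two orders placed for 1 unit of stock
```

This is exactly the anomaly that making the check and the deduction one atomic step prevents.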
The third stage is after the seckill ends.
At this stage, some users may still refresh the product detail page, waiting for other users to cancel their orders. Users who placed a successful order will refresh the order details to track its progress. However, the request volume in this phase has dropped sharply, and the server side can generally handle it, so we won’t focus on it.
Okay, let’s summarize the requirements a seckill scenario places on Redis.
A seckill event is divided into three stages: before, during, and after. Before the seckill starts and after it ends, the concurrency pressure is not that heavy and we don’t need Redis. While the seckill is running, however, the product inventory needs to be checked and deducted. The inventory check faces massive highly concurrent requests, and the deduction must be executed together with the check to guarantee atomicity. This is where Redis is needed.
The following figure shows the two parts that Redis needs to be involved in a seckill scenario:
Now that we understand the requirements, it becomes clear how we will use Redis to support the seckill scenario. Next, I’ll show you two ways.
Which Redis features can support seckill scenarios?
There are two basic requirements for Redis operations in seckill scenarios.
- Supporting high concurrency. This one is straightforward, because Redis itself processes requests fast enough to sustain high concurrency. Moreover, if there are multiple seckill items, we can use a sharded cluster to keep the inventories of different items on different instances, avoiding the problem of a single instance receiving all seckill requests. Note that with a sharded cluster, we use the CRC16 algorithm to compute the slot for each item’s key, and then, when assigning the slot-to-instance mapping, place the slots of different seckill items on different instances.
- Guaranteeing the atomic execution of the inventory check and inventory deduction. We can use Redis atomic operations or a distributed lock to meet this requirement.
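As a sketch of the slot mapping mentioned in the first requirement: Redis Cluster assigns each key to one of 16384 slots using CRC16 (the XMODEM variant) modulo 16384. Below is a minimal Python model of that computation (it is an illustration, not the Redis client itself, and it omits hash-tag `{...}` handling):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (polynomial 0x1021, initial value 0), as used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Redis Cluster: slot = CRC16(key) mod 16384
    return crc16_xmodem(key.encode()) % 16384

# Different item keys generally land in different slots, and those slots
# can then be assigned to different instances.
print(key_slot("item:1001"), key_slot("item:1002"))
```

A real implementation also honors hash tags (a `{...}` substring in the key), which force related keys into the same slot; this sketch omits that detail.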
Let’s take a look at how Redis supports seckill scenarios based on atomic operations.
Supporting seckill scenarios with atomic operations
In the seckill scenario, an item’s inventory involves two pieces of information: the total inventory and the quantity already seckilled. This data model, one key with two attributes, fits a Hash nicely, so we can store both inventory values in a Hash key-value pair, as shown below:
```
key: itemID
value: {total: N, ordered: M}
```
Here itemID is the product’s ID, total is the total inventory, and ordered is the quantity already ordered.
Because the inventory check and inventory deduction must be executed together, a direct approach is to use Redis’s atomic operations.
As we learned in Lesson 29, an atomic operation can be a single atomic command provided by Redis itself or a Lua script. Checking and deducting the inventory are two operations that cannot be completed with one command, so we need a Lua script to execute them atomically.
How do you implement these two operations in a Lua script? I’ve provided you with pseudo-code from a Lua script that shows the implementation of these two operations.
```lua
-- get the item's inventory information
local counts = redis.call("HMGET", KEYS[1], "total", "ordered")
-- convert the total inventory to a number
local total = tonumber(counts[1])
-- convert the already-seckilled quantity to a number
local ordered = tonumber(counts[2])
-- k is the quantity requested by the current seckill request
local k = tonumber(ARGV[1])
-- if the requested quantity plus the already-seckilled quantity
-- does not exceed the total inventory, the stock can be deducted
if ordered + k <= total then
    redis.call("HINCRBY", KEYS[1], "ordered", k)
    return k
end
return 0
```
With this Lua script in hand, we can execute it from a Redis client with the EVAL command.
Finally, the client determines whether the seckill succeeded based on the script’s return value: if it returns k, the seckill succeeded; if it returns 0, it failed.
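To make the script’s semantics easier to verify, here is a pure-Python model of the same check-and-deduct logic (the function name is hypothetical; inside Redis, the Lua script performs this atomically because Redis executes scripts one at a time):

```python
def seckill_deduct(inventory: dict, k: int) -> int:
    """Model of the Lua script: return k on success, 0 on failure."""
    total = inventory["total"]
    ordered = inventory["ordered"]
    # Check: the requested quantity plus the already-ordered
    # quantity must not exceed the total inventory.
    if ordered + k <= total:
        inventory["ordered"] = ordered + k  # deduct
        return k
    return 0

inv = {"total": 3, "ordered": 0}
print(seckill_deduct(inv, 2))  # 2 -> success
print(seckill_deduct(inv, 2))  # 0 -> only 1 unit left, a request for 2 fails
print(seckill_deduct(inv, 1))  # 1 -> success, stock now exhausted
```

Note that a failed request leaves the inventory untouched, so smaller requests can still succeed afterwards.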
Here, we learned how to implement the inventory check and deduction with an atomic Lua script. Another way to guarantee their atomicity is to use a distributed lock so that multiple clients execute the two operations mutually exclusively. Next, let’s see how a distributed lock can support the seckill scenario.
Supporting seckill scenarios with a distributed lock
The idea is to have each client first acquire a distributed lock from Redis; only the client holding the lock may perform the inventory check and deduction. As a result, a large number of seckill requests are filtered out while competing for the lock. Moreover, because only one of the concurrent clients can hold the lock at a time, mutual exclusion is guaranteed, so the inventory check and deduction themselves no longer need to be atomic operations.
You can see the pseudocode below, which shows the process of using distributed locks to perform inventory checks and deductions.
```
// use the item ID as the key
key = itemID
// use a unique client identifier as the value
val = clientUniqueID
// apply for the distributed lock; Timeout is the lock timeout
lock = acquireLock(key, val, Timeout)
// only after getting the lock can the inventory be checked and deducted
if (lock == True) {
    // check and deduct the inventory in one step
    availStock = DECR(key, k)
    // stock is exhausted: release the lock and return a failure
    if (availStock < 0) {
        releaseLock(key, val)
        return error
    }
    // deduction succeeded: release the lock and process the order
    else {
        releaseLock(key, val)
        // order processing
    }
}
```
Note that with this approach, clients must first request the lock from Redis, and only after obtaining it can they perform the inventory check and the other operations. Consequently, while clients compete for the distributed lock, most seckill requests are filtered out simply because they fail to acquire the lock.
So here’s a small suggestion: in a sharded cluster, use different instances to hold the distributed lock and the item inventory information separately. With this layout, a seckill request first accesses the instance holding the lock; clients that fail to get the lock never go on to query the inventory, which relieves the pressure on the instances holding the inventory information.
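The acquireLock/releaseLock pair above can be built on Redis’s SET with the NX option. Below is a self-contained Python model (a plain dict stands in for Redis, and acquire_lock/release_lock are hypothetical helper names) showing the safe-release pattern, where the lock is only deleted if it still holds the caller’s value:

```python
store = {}  # stands in for Redis key-value storage

def acquire_lock(store, key, client_id):
    # Models: SET key client_id NX (expiry handling omitted in this sketch)
    if key not in store:
        store[key] = client_id
        return True
    return False

def release_lock(store, key, client_id):
    # Only the lock holder may release: compare the value, then delete.
    # In real Redis, this compare-and-delete must itself be atomic
    # (typically a small Lua script).
    if store.get(key) == client_id:
        del store[key]
        return True
    return False

lock_key = "lock:item:1001"
print(acquire_lock(store, lock_key, "client-A"))  # True: A holds the lock
print(acquire_lock(store, lock_key, "client-B"))  # False: B is filtered out
print(release_lock(store, lock_key, "client-B"))  # False: B is not the holder
print(release_lock(store, lock_key, "client-A"))  # True: A releases the lock
```

In a real deployment the lock key also needs an expiry (the Timeout in the pseudocode) so that a crashed client cannot hold the lock forever; this sketch leaves that out.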
Summary
In this lesson, we looked at how Redis is applied in the seckill scenario. The scenario has two load characteristics: instantaneous high concurrency, and reads far outnumbering writes. Redis’s high concurrent processing capability and efficient key-value reads and writes match these requirements well.
In a seckill scenario, we can intercept a large number of pre-seckill requests with a front-end CDN and browser caching. While the seckill is running, the inventory check and inventory deduction face enormous concurrent request pressure, and the two operations must be executed atomically. Redis’s atomic operations and distributed locks are two features that can effectively support these requirements.
Of course, Redis alone is not enough for a seckill scenario; a seckill system is a systematic engineering effort. Redis supports the inventory check and deduction, but there are four other links that we also need to handle well.
- Static front-end page design. Any element of the seckill page that can be made static should be, so that CDN or browser caches can serve the requests before the seckill starts.
- Request interception and rate limiting. At the access layer of the seckill system, intercept malicious requests to prevent attacks, for example by blacklisting malicious IP addresses. If the Redis instances risk being overloaded, we also need to rate-limit at the access layer to control how many requests enter the seckill system and avoid crashing the instances.
- Handling of inventory-information expiration. The inventory stored in Redis is effectively a cache of the database. To avoid cache breakdown during the event, we should not set an expiration time on the inventory information.
- Handling of database order failures. If the database fails to process an order, add an order-retry mechanism to ensure the order is eventually processed successfully.
Finally, one small suggestion: the request traffic of a seckill event is huge, so keep the seckill inventory information on a dedicated instance rather than on the same instance as the everyday business data, to avoid interfering with the normal operation of the business system.
One question per lesson
As usual, here’s a quick question. Suppose an item has an inventory of 800, and we use a sharded cluster of four instances to serve the seckill requests. Do you think it’s a good idea to have each instance maintain a stock of 200, and distribute clients’ seckill requests across the different instances?
Welcome to write down your thoughts and answers in the comments area, and we will exchange and discuss together. If you find today’s content helpful, you are welcome to share it with your friends and colleagues. I’ll see you next time.