Preface

This article is the second in the seckill (flash sale) system series. It walks through actual code to help you quickly grasp the key points of a seckill system and get started on a hands-on project.

This article mainly covers interface rate limiting. The definition of interface rate limiting is actually quite broad; rate limiting itself is also one measure of system protection.

  • Token bucket rate limiting
  • Per-user access-frequency rate limiting
  • Hiding the purchase (panic-buying) interface

In addition, after the last article, many readers lodged a "solemn protest" that optimistic locking under high concurrency cannot sell out all the goods, so this article also adds a supplementary explanation of optimistic and pessimistic locks.

Series recap and planned articles:

  • Build a seckill system from scratch: prevent overselling
  • Build a seckill system from scratch: interface rate limiting (token bucket) + more on overselling
  • Build a seckill system from scratch: interface rate limiting (per-user rate limiting + hiding the purchase interface)
  • Build a seckill system from scratch: use Redis to cache hot data
  • Build a seckill system from scratch: process orders asynchronously with a message queue
  • …

You are welcome to follow my personal public account for the most complete collection of original articles: Back-end technology ramble (QR code at the bottom of the article).

Main text

The project source code is here

So you no longer need to worry about only reading the article without being able to put it into practice:

Github.com/qqxx6661/mi…

The seckill system described in this article

Check out the first article in the series, which will not be reviewed here:

Build a seckill system from scratch (1): prevent overselling

Interface rate limiting

In the face of highly concurrent purchase requests, if we do not rate-limit the interface, the backend system may come under great pressure. In particular, for the order-placing interface, too many requests hitting the database will affect the stability of the system.

Therefore, the seckill system is deployed independently of the company's other back-end systems, so that a breakdown of the seckill service does not affect them.

Beyond standalone deployment, the best we can do is make the backend system handle the flood of requests as stably and gracefully as possible.

Rate limiting in practice: the token bucket algorithm

There are already plenty of introductions to the token bucket rate-limiting algorithm on the Internet. Here is an excerpt:

The token bucket algorithm originated from computer networks. To prevent network congestion during data transmission, you need to limit the outgoing network traffic and ensure that the outbound traffic is sent out at a uniform speed. The token bucket algorithm achieves this function by controlling the amount of data sent to the network and allowing burst data to be sent.

A token bucket of a fixed size generates tokens at a constant rate. If the tokens are not consumed, or are consumed less quickly than they are generated, the tokens will continue to grow until the bucket is full. Subsequent tokens overflow from the bucket. The maximum number of tokens that can be held in the final bucket never exceeds the size of the bucket.

Token bucket algorithm and leaky bucket algorithm

The idea of the leaky bucket algorithm is very simple: water (requests) first enters the leaky bucket, and the bucket releases water at a fixed rate. When the inflow is too fast, the excess water simply overflows.

The token bucket algorithm should not be confused with another common algorithm, the leaky bucket algorithm. The main differences between the two are:

The leaky bucket algorithm forces a fixed transmission rate, while the token bucket algorithm limits the average transmission rate but also allows some degree of burst. With a token bucket, as long as there are tokens in the bucket, data may be sent in a burst until the user-configured threshold is reached, so it is suitable for traffic with bursty characteristics.
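To make the contrast concrete, here is a minimal, illustrative leaky-bucket sketch (not part of the project code; the class and field names are made up). Incoming requests "pour into" the bucket, which drains at a constant rate; once the bucket is full, further requests overflow and are rejected.

// Minimal leaky-bucket sketch, for illustration only (not the project's code).
// Requests pour in; the bucket leaks at a constant rate; overflow is rejected.
public class LeakyBucket {
    private final long capacity;         // maximum amount of "water" the bucket can hold
    private final double leakRatePerMs;  // how much water leaks per millisecond
    private double water = 0;            // current water level
    private long lastLeakTime = System.currentTimeMillis();

    public LeakyBucket(long capacity, double leakRatePerMs) {
        this.capacity = capacity;
        this.leakRatePerMs = leakRatePerMs;
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        // Drain water for the elapsed time, never going below zero
        water = Math.max(0, water - (now - lastLeakTime) * leakRatePerMs);
        lastLeakTime = now;
        if (water + 1 > capacity) {
            return false; // bucket is full: this request overflows and is rejected
        }
        water += 1;       // accept the request
        return true;
    }
}

Because the bucket drains at a fixed rate, the outflow of accepted requests is smoothed no matter how bursty the inflow is, which is exactly the property the token bucket relaxes.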

Use Guava’s RateLimiter to implement a token-bucket rate-limited interface

Guava is Google’s open-source Java utility library, which contains a bit of everything. It also provides RateLimiter, an implementation of the token bucket algorithm.

We take the project source code and add token bucket rate limiting to the optimistic-lock purchase interface from the previous article:

OrderController:

@Controller
public class OrderController {

    private static final Logger LOGGER = LoggerFactory.getLogger(OrderController.class);

    @Autowired
    private StockService stockService;

    @Autowired
    private OrderService orderService;

    // Token bucket: issue 10 tokens per second, i.e. allow 10 requests per second
    RateLimiter rateLimiter = RateLimiter.create(10);

    @RequestMapping("/createWrongOrder/{sid}")
    @ResponseBody
    public String createWrongOrder(@PathVariable int sid) {
        int id = 0;
        try {
            id = orderService.createWrongOrder(sid);
            LOGGER.info("Create order ID: [{}]", id);
        } catch (Exception e) {
            LOGGER.error("Exception", e);
        }
        return String.valueOf(id);
    }

    /**
     * Optimistic-lock inventory update + token bucket rate limiting
     * @param sid
     * @return
     */
    @RequestMapping("/createOptimisticOrder/{sid}")
    @ResponseBody
    public String createOptimisticOrder(@PathVariable int sid) {
        // Blocking token acquisition
        // LOGGER.info("Waiting time: " + rateLimiter.acquire());
        // Non-blocking token acquisition
        if (!rateLimiter.tryAcquire(1000, TimeUnit.MILLISECONDS)) {
            LOGGER.warn("You have been rate-limited, unfortunately, returning failure");
            return "Failed purchase, low stock.";
        }
        int id;
        try {
            id = orderService.createOptimisticOrder(sid);
            LOGGER.info("Successful purchase, remaining inventory is: [{}]", id);
        } catch (Exception e) {
            LOGGER.error("Purchase failed: [{}]", e.getMessage());
            return "Failed purchase, low stock.";
        }
        return String.format("Successful purchase, remaining stock: % D", id); }}Copy the code

RateLimiter rateLimiter = RateLimiter.create(10); initializes the token bucket to issue 10 tokens per second, i.e. to allow 10 requests per second.

In the interface you can see the two ways of using it (a standalone sketch of both modes follows this list):

  • Blocking token acquisition: after a request comes in, if there are not enough tokens in the bucket, it blocks here and waits for a token to be issued.
  • Non-blocking token acquisition: after a request comes in, if there are not enough tokens in the bucket, it waits for at most the configured timeout (1000 ms here) to obtain one; if it still cannot get a token within that time, the purchase fails. With a timeout of 0, the call returns immediately instead of waiting.
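Here is a minimal standalone sketch of the two modes using Guava's RateLimiter (illustration only; the class name, loop and log text are made up, and the 10-permits-per-second rate mirrors the controller above):

import com.google.common.util.concurrent.RateLimiter;
import java.util.concurrent.TimeUnit;

public class RateLimiterDemo {

    // Same rate as in the controller above: 10 permits per second
    private static final RateLimiter RATE_LIMITER = RateLimiter.create(10);

    // Blocking mode: acquire() waits until a permit is available and
    // returns the number of seconds spent waiting.
    static void blockingCall(int i) {
        double waited = RATE_LIMITER.acquire();
        System.out.printf("request %d acquired a token after waiting %.3fs%n", i, waited);
    }

    // Non-blocking mode: tryAcquire(timeout) waits at most the given time
    // and returns false if no permit could be obtained within it.
    static void nonBlockingCall(int i) {
        if (RATE_LIMITER.tryAcquire(1000, TimeUnit.MILLISECONDS)) {
            System.out.printf("request %d acquired a token%n", i);
        } else {
            System.out.printf("request %d was rate-limited%n", i);
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 20; i++) {
            blockingCall(i);   // swap in nonBlockingCall(i) to try the other mode
        }
    }
}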

We use JMeter to start 200 threads that simultaneously try to snap up the 100 iPhones in the database. (For the database schema and how to use JMeter, please refer to Build a seckill system from scratch (1): prevent overselling.)

In JMeter we add a separate assertion to flag the requests that were rejected by the rate limiter:

Let's first use rateLimiter.tryAcquire(1000, TimeUnit.MILLISECONDS), i.e. the non-blocking mode, and look at the purchase results:

As you can see, the green requests are the ones blocked by the token bucket, and the red ones are the requests that successfully placed orders. According to JMeter's summary report, the proportion of requests that were not rate-limited in this run is around 15%.

As can also be seen, even among the requests that were not rate-limited, some concurrent database updates failed because of the optimistic lock, so not all of the goods were sold. This was the most frequently asked question after the last article, so I want to talk a bit more about optimistic and pessimistic locks.

Before going deeper into locking, let's try the blocking use of the token bucket by changing the code to rateLimiter.acquire(), then restoring the database to 100 units of stock and emptying the order table. Start the requests again:

The results are very interesting; here are some screenshots of the results (in order), so you can guess what I want to say next.

Conclusion:

  • First, all requests get processed, but the flow is limited to 10 requests per second.
  • In the very first second, 10 tokens are taken from the bucket at once, so the optimistic-lock concurrent-update failures shown in the second screenshot appear. In the later requests, each token is taken as soon as it is generated, so requests arrive evenly and there are no further concurrent inventory updates. This matches the definition of a "token bucket": it can absorb a burst of requests (here only causing purchase conflicts due to the optimistic lock), unlike a "leaky bucket" that enforces a constant request rate forever.
  • Of the 200 requests, with the optimistic lock all 100 items were sold. Without rate limiting, if the requests had been too concentrated, only a few would have been sold, just like in the first article.

Implementation principle of RateLimiter in Guava

As for the implementation principle of the token bucket, this article will not go into it in depth; the focus here is still on practice.

After all, Guava only provides one implementation of the token bucket; in a real project you will either use it directly or implement your own. You can check out this article:

Segmentfault.com/a/119000001…
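If you ever do need to roll your own, here is a minimal, illustrative token bucket sketch (my own simplified version for this article; it is not the project's code, nor how Guava actually implements RateLimiter):

// Minimal token bucket sketch, for illustration only.
// Tokens accumulate at a fixed rate up to the bucket capacity;
// a request passes only if it can take a token, so short bursts are allowed.
public class SimpleTokenBucket {
    private final long capacity;       // maximum number of tokens in the bucket
    private final double refillPerMs;  // tokens added per millisecond
    private double tokens;             // current number of tokens
    private long lastRefillTime = System.currentTimeMillis();

    public SimpleTokenBucket(long capacity, double refillPerSecond) {
        this.capacity = capacity;
        this.refillPerMs = refillPerSecond / 1000.0;
        this.tokens = capacity;        // start with a full bucket: an initial burst is allowed
    }

    public synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        // Refill tokens for the elapsed time, capped at the bucket capacity
        tokens = Math.min(capacity, tokens + (now - lastRefillTime) * refillPerMs);
        lastRefillTime = now;
        if (tokens < 1) {
            return false;              // no token available: reject the request
        }
        tokens -= 1;                   // consume one token
        return true;
    }
}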

Talking more about preventing overselling

Having covered the token bucket rate-limiting algorithm, let's come back to the overselling problem. Under a flood of requests, if we use an optimistic lock as in the first article, a large number of requests will simply get a failure response, which makes for a poor user experience.

With a pessimistic lock, on the other hand, such as a database transaction, the database can process inventory changes one by one and accept the next request only after the previous change succeeds. So pessimistic and optimistic locks should each be used where they fit the actual situation.

A pessimistic lock, as the name implies, is pessimistic: every time you fetch the data you assume someone else will modify it, so you lock the data on every read, and anyone else who wants that data blocks until they obtain the lock. Traditional relational databases use this kind of locking mechanism heavily, e.g. row locks, table locks, read locks and write locks, all of which are taken before the operation.

An optimistic lock, as the name implies, is optimistic: every time you fetch the data you assume others will not modify it, so you do not lock it; but when updating, you check whether others have updated the data in the meantime, for example using a version number. Optimistic locks suit read-heavy applications and can improve throughput. If a database provides a mechanism similar to write_condition, that is an optimistic lock.

Each kind of lock has its own advantages and disadvantages; you cannot simply say one is better than the other.

  • Optimistic locking is better suited to scenarios where data is modified rarely and read frequently; even with occasional conflicts, it saves a lot of lock overhead and so increases system throughput.
  • Pessimistic locking is more appropriate when conflicts happen frequently (a large amount of data is written), because having the upper-layer application retry constantly would instead hurt performance.

Implementing an optimistic lock without a version-number field

In the last article I used an optimistic lock based on a version-number column in the database. Here is an optimistic-lock SQL statement that needs no extra field:

<update id="updateByOptimistic" parameterType="cn.monitor4all.miaoshadao.dao.Stock">
    update stock
    <set>
      sale = sale + 1,
    </set>
    WHERE id = #{id,jdbcType=INTEGER}
    AND sale = #{sale,jdbcType=INTEGER}
</update>
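On the Java side, here is a minimal sketch of how a service method might use this statement (the method names getStockById and updateByOptimistic and the exception-based handling are assumptions for illustration, not necessarily the project's exact code). The trick is that the sale column itself plays the role of the version: the UPDATE only matches if sale still has the value we read, so an affected-row count of 0 means another request won the race.

// Illustrative sketch (not the project's exact code): the `sale` column acts as
// the "version". If another request incremented it after we read the row, the
// WHERE clause no longer matches and updateByOptimistic returns 0 affected rows.
public int createOptimisticOrderWithoutVersion(int sid) {
    Stock stock = stockService.getStockById(sid);               // read the current stock row
    if (stock.getSale().equals(stock.getCount())) {
        throw new RuntimeException("Out of stock");
    }
    int affectedRows = stockService.updateByOptimistic(stock);  // UPDATE ... WHERE id=? AND sale=?
    if (affectedRows == 0) {
        // Lost the race: someone else updated the row first.
        throw new RuntimeException("Concurrent update failed, purchase failed");
    }
    createOrder(stock);                                          // reuse the order-creation helper
    return stock.getCount() - (stock.getSale() + 1);             // remaining stock after this sale
}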

Implementing pessimistic locking

To sell goods faster and better under heavy traffic, we also implement a pessimistic lock and see how it behaves.

In the Controller, add an endpoint that sells goods under a pessimistic lock:

/**
 * Update inventory using SELECT ... FOR UPDATE (pessimistic lock)
 * @param sid
 * @return
 */
@RequestMapping("/createPessimisticOrder/{sid}")
@ResponseBody
public String createPessimisticOrder(@PathVariable int sid) {
    int id;
    try {
        id = orderService.createPessimisticOrder(sid);
        LOGGER.info("Successful purchase, remaining inventory is: [{}]", id);
    } catch (Exception e) {
        LOGGER.error("Purchase failed: [{}]", e.getMessage());
        return "Failed purchase, low stock.";
    }
    return String.format("Successful purchase, remaining stock: %d", id);
}

In the Service, add a transaction to the process of selling goods:

@Transactional(rollbackFor = Exception.class, propagation = Propagation.REQUIRED)
@Override
public int createPessimisticOrder(int sid) {
    // Check the inventory (SELECT ... FOR UPDATE)
    Stock stock = checkStockForUpdate(sid);
    // Update the inventory
    saleStock(stock);
    // Create the order
    int id = createOrder(stock);
    return stock.getCount() - (stock.getSale());
}

/**
 * Check the inventory with SELECT ... FOR UPDATE
 * @param sid
 * @return
 */
private Stock checkStockForUpdate(int sid) {
    Stock stock = stockService.getStockByIdForUpdate(sid);
    if (stock.getSale().equals(stock.getCount())) {
        throw new RuntimeException("Out of stock");
    }
    return stock;
}

/**
 * Update the inventory
 * @param stock
 */
private void saleStock(Stock stock) {
    stock.setSale(stock.getSale() + 1);
    stockService.updateStockById(stock);
}

/**
 * Create an order
 * @param stock
 * @return
 */
private int createOrder(Stock stock) {
    StockOrder order = new StockOrder();
    order.setSid(stock.getId());
    order.setName(stock.getName());
    int id = orderMapper.insertSelective(order);
    return id;
}
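The row lock in the code above is actually taken inside getStockByIdForUpdate, which must issue a SELECT ... FOR UPDATE. Here is a sketch of what the mapper could look like (shown in annotation style for brevity; the real project may define this query in an XML mapper, and the exact column names are assumptions):

// Illustrative mapper sketch, not necessarily the project's real mapper.
// SELECT ... FOR UPDATE takes an exclusive lock on the row inside the enclosing
// @Transactional method; other transactions that also lock or update this row
// must wait until the current transaction commits or rolls back.
import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;

@Mapper
public interface StockForUpdateMapper {

    @Select("SELECT id, name, `count`, sale FROM stock WHERE id = #{sid} FOR UPDATE")
    Stock selectByIdForUpdate(@Param("sid") int sid);
}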

Here, @Transactional(rollbackFor = Exception.class, propagation = Propagation.REQUIRED) means the transaction is rolled back when an Exception is thrown, and the propagation behavior is PROPAGATION_REQUIRED: support the current transaction, and create a new one if none exists. You can read up on Spring's transaction propagation mechanisms yourself; I may write a summary article on them later.
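To make PROPAGATION_REQUIRED a little more concrete, here is a minimal illustrative sketch (not the project's code; the two demo services and their method names are made up, and imports are omitted as in the snippets above). Because both methods declare REQUIRED, the inner call joins the transaction opened by the outer call instead of starting a new one, and an exception in either method rolls back both.

// DemoOrderService.java -- illustrative only, not part of the project
@Service
public class DemoOrderService {

    @Autowired
    private DemoStockService demoStockService;

    @Transactional(rollbackFor = Exception.class, propagation = Propagation.REQUIRED)
    public void placeOrder(int sid) {
        // ... insert the order row here ...
        // This call runs inside the transaction opened by placeOrder(), because
        // decreaseStock() declares REQUIRED and a transaction already exists.
        demoStockService.decreaseStock(sid);
    }
}

// DemoStockService.java -- illustrative only, not part of the project
@Service
public class DemoStockService {

    @Transactional(rollbackFor = Exception.class, propagation = Propagation.REQUIRED)
    public void decreaseStock(int sid) {
        // ... update the stock row here ...
        // REQUIRED: join the caller's transaction if one exists,
        // otherwise start a new transaction.
    }
}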

We again set the stock to 100 items and empty the order table, then use JMeter to send 200 requests, this time to the /createPessimisticOrder/1 endpoint:

Looking at the results, JMeter's summary report shows that of the 200 requests, 100 returned a successful purchase and 100 returned a failed purchase. And the goods were sold to exactly the first 100 requests that arrived, in a very orderly fashion.

So pessimistic locking gives a better sell-through rate under a large volume of requests. Note, however, that if the request volume is very high, pessimistic locking makes later requests block and wait for a long time, and the user has to sit on a page that looks frozen. This can be mitigated with token bucket rate limiting, or by giving the user a clear waiting prompt.

Does the pessimistic lock really lock the inventory?

One last question: I want to prove that after the transaction executes SELECT ... FOR UPDATE, it really locks the inventory row and prevents other threads from modifying it.

We set a breakpoint in IDEA and let the code run until just after the FOR UPDATE statement has executed. Then, on the MySQL command line, we try to sneak in an update stock set count = 50 where id = 1; After pressing Enter, the command line blocks and returns nothing, evidently waiting for the row lock to be released.

Next, we resume the program manually so the transaction completes. The moment the transaction finishes, the modification on the command line succeeds, which shows that the lock has been released and other threads can now modify the inventory. This verifies that the transaction's row lock works!

Conclusion

The code for this project is open source on GitHub; feel free to use it:

Github.com/qqxx6661/mi…

In the next article, we will continue with interface rate limiting (per-user rate limiting + hiding the purchase interface).

I’m a little tired now. Have a rest.

I hope you will support my public account: Back-end technology ramble.

References

  • Cloud.tencent.com/developer/a…
  • Juejin. Cn/post / 684490…
  • Crossoverjie. Top / % 2 f2018%2 f0…
  • www.jianshu.com/p/5d4fe4b2a…
  • Segmentfault.com/a/119000001…
  • Zhenganwen. Top/posts / 30 bb5…

Follow me

I’m a back-end development engineer.

I focus on back-end development, data security, the Internet of Things, and edge computing; feel free to reach out and exchange ideas.

You can find me on all of these platforms:

  • WeChat official account: Back-end technology ramble
  • GitHub: @qqxx6661
  • CSDN: @Rude3knife
  • Zhihu: @Back-end technology ramble
  • Jianshu: @pretty three knives a knife
  • Juejin: @pretty three knives a knife

Main topics of my original blog posts

  • Back-end development techniques
  • Java interview knowledge
  • Design patterns / data structures
  • LeetCode / Jianzhi Offer algorithm solutions
  • SpringBoot / SpringCloud hands-on introductory series
  • Data analysis / web crawlers
  • Anecdotes / good book recommendations / personal life

Personal public account: Back-end technology ramble

If this article helped you, feel free to bookmark it, share it, and tap "Looking" ~