Article Contents:

  1. Preface
  2. Algorithm introduction – counter method
  3. Algorithm introduction – sliding window
  4. Algorithm introduction – leaky bucket algorithm
  5. Algorithm introduction – token bucket algorithm

Preface

In a high-concurrency system, traffic control is very important. When a huge amount of traffic hits the server directly and is not handled, an interface may become unavailable for a long time, or the entire application may even go down.

So what is rate limiting? As the name implies, it means limiting traffic. Think of a broadband plan with a 1 GB data cap: once the data is used up, the service stops. Through rate limiting we can control the QPS of the system and thus protect it. This article introduces the common rate-limiting algorithms and their characteristics.

Algorithm introduction

Counter method

The counter method is the simplest rate-limiting algorithm to implement. For example, suppose we stipulate that interface A may be accessed at most 100 times per minute. We can set a counter that is incremented every time a request arrives. If the counter's value exceeds 100 while the interval between this request and the first request is still less than one minute, there are too many requests and the request is rejected. If the interval is greater than one minute, the counter is reset and counting starts over. The flow of the algorithm is illustrated below:

(Figure: flow of the counter algorithm)

The specific pseudocode is as follows:
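A minimal Java sketch of the counter approach; the class and method names here are illustrative, not from any framework:

```java
// Illustrative fixed-window counter: at most `limit` requests per
// window of `windowMillis`, counted from the first request in the window.
class CounterLimiter {
    private final int limit;
    private final long windowMillis;
    private long windowStart = System.currentTimeMillis();
    private int count = 0;

    CounterLimiter(int limit, long windowMillis) {
        this.limit = limit;
        this.windowMillis = windowMillis;
    }

    synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        if (now - windowStart > windowMillis) {
            // More than one window has passed since the first request: reset.
            windowStart = now;
            count = 0;
        }
        if (count < limit) {
            count++;
            return true;
        }
        return false; // over the limit inside the current window
    }
}
```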

Although this algorithm is simple, it has a fatal flaw: the critical (window boundary) problem, shown in the figure below:

As can be seen from the figure above, if a malicious user sends 100 requests at 0:59 and another 100 requests at 1:00, just after the counter resets, the user has sent 200 requests within one second. Our rule allows at most 100 requests per minute, roughly 1.7 requests per second, yet by bursting at the reset point of the time window the user can instantly exceed the rate limit. Through this flaw in the algorithm, users could overwhelm our application.

Sharp-eyed readers may have noticed that the problem is really one of low statistical precision. So how can we handle it well, or at least reduce the impact of the critical problem? Let's look at the sliding window algorithm below.

The sliding window

The sliding window, also known as a rolling window, is introduced to solve this problem. If you have studied the TCP protocol, the term sliding window will be familiar. The following figure illustrates the sliding window algorithm well:

(Figure: the sliding window algorithm)

In the figure above, the entire red rectangle represents one time window, in our case one minute. We then subdivide the window: in the figure it is split into 6 cells, so each cell represents 10 seconds, and every 10 seconds the window slides one cell to the right. Each cell has its own counter; for example, when a request arrives at 0:35, the counter of the 0:30–0:39 cell is incremented by one.

So how does the sliding window solve the critical problem? Looking at the figure again, the 100 requests arriving at 0:59 land in the gray cell, and the requests arriving at 1:00 land in the orange cell. When the clock reaches 1:00, the window slides one cell to the right; the total number of requests in the current window is then 200, which exceeds the limit of 100, so the rate limit is correctly triggered.

Looking back at the counter algorithm, we can see that it is really just a sliding window with a single cell: it does not subdivide the one-minute window any further.

It follows that the more cells the sliding window is divided into, the smoother its rolling becomes and the more accurate the rate-limiting statistics are.

(Figure: a finer-grained sliding window)
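One way to implement this is to push the cell idea to its limit and keep one timestamp per request, which makes the window exact. A minimal Java sketch, with illustrative names; note the memory trade-off discussed in the conclusion:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sliding-window limiter: stores one timestamp per request and
// counts only the requests inside the last `windowMillis` milliseconds.
class SlidingWindowLimiter {
    private final int limit;
    private final long windowMillis;
    private final Deque<Long> timestamps = new ArrayDeque<>();

    SlidingWindowLimiter(int limit, long windowMillis) {
        this.limit = limit;
        this.windowMillis = windowMillis;
    }

    synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        // Evict requests that have slid out of the window.
        while (!timestamps.isEmpty() && now - timestamps.peekFirst() >= windowMillis) {
            timestamps.pollFirst();
        }
        if (timestamps.size() < limit) {
            timestamps.addLast(now);
            return true;
        }
        return false;
    }
}
```

Keeping a timestamp per request is the most precise (and most memory-hungry) variant; a grid of per-cell counters, as in the figure, sits between this and the plain counter.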

Leaky bucket algorithm

Next is the leaky bucket (Leaky Bucket) algorithm. To make it easier to understand, let's look at a schematic diagram of the algorithm:

(Figure: schematic diagram of the leaky bucket algorithm)

As you can see from the figure, the whole algorithm is actually quite simple. First, we have a bucket of fixed capacity, with water flowing in and water flowing out. We cannot predict how much water will flow in, or how fast. But for the water flowing out, the bucket drains at a fixed rate, and when the bucket is full, the excess water overflows.

Replace the water in the algorithm with requests in the real world, and we can see that the leaky bucket algorithm inherently limits the rate of requests: it guarantees that the interface processes requests at a constant rate, so the leaky bucket has no critical problem by nature. A specific pseudo-code implementation is as follows:
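A minimal Java sketch of a leaky bucket that rejects overflowing requests (a common alternative design queues them instead); the names are illustrative:

```java
// Illustrative leaky bucket: incoming requests add "water" to the bucket,
// which leaks out at a fixed rate. A request is rejected when the bucket
// would overflow.
class LeakyBucketLimiter {
    private final long capacity;       // maximum amount of queued "water"
    private final double leakPerMilli; // fixed outflow rate
    private double water = 0;
    private long lastLeak = System.currentTimeMillis();

    LeakyBucketLimiter(long capacity, double leakPerMilli) {
        this.capacity = capacity;
        this.leakPerMilli = leakPerMilli;
    }

    synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        // Leak water in proportion to the time elapsed since the last check.
        water = Math.max(0, water - (now - lastLeak) * leakPerMilli);
        lastLeak = now;
        if (water + 1 <= capacity) {
            water += 1;
            return true;
        }
        return false; // bucket full: this request "overflows"
    }
}
```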

Token bucket algorithm

Finally, the token bucket (Token Bucket) algorithm. To understand it, let's look at a schematic diagram of the algorithm:

(Figure: schematic diagram of the token bucket algorithm)

As we can see from the figure, the token bucket algorithm is slightly more complex than the leaky bucket. First, we have a fixed-capacity bucket that holds tokens. The bucket starts empty, and tokens are added to it at a constant rate r until it reaches capacity; excess tokens are discarded. Every incoming request tries to remove one token from the bucket, and if no token is available, the request cannot pass.

Specific pseudo-code implementation is as follows:
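A minimal Java sketch with illustrative names. One assumption to note: this version starts with a full bucket so that an initial burst can pass, whereas the description above starts the bucket empty and lets it fill over time:

```java
// Illustrative token bucket: tokens accumulate at a constant rate up to
// `capacity`; each request consumes one token, so bursts of up to
// `capacity` requests are allowed.
class TokenBucketLimiter {
    private final long capacity;
    private final double tokensPerMilli; // refill rate r
    private double tokens;
    private long lastRefill = System.currentTimeMillis();

    TokenBucketLimiter(long capacity, double tokensPerMilli) {
        this.capacity = capacity;
        this.tokensPerMilli = tokensPerMilli;
        this.tokens = capacity; // assumption: start full so a first burst can pass
    }

    synchronized boolean tryAcquire() {
        long now = System.currentTimeMillis();
        // Refill tokens for the elapsed time, capped at the bucket capacity.
        tokens = Math.min(capacity, tokens + (now - lastRefill) * tokensPerMilli);
        lastRefill = now;
        if (tokens >= 1) {
            tokens -= 1;
            return true;
        }
        return false; // no token available: reject the request
    }
}
```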


RateLimiter implementation

For a production token bucket implementation, you can use the RateLimiter class from the Guava package directly: RateLimiter.create(double permitsPerSecond) builds a limiter that refills permits at the given rate, and acquire() (blocking) or tryAcquire() (non-blocking) takes a permit before serving a request.


Related variants

If we look closely at the algorithm, by default removing a token from the bucket takes no time. If you instead attach a delay to removing the token, you effectively get the leaky bucket algorithm again. The SmoothWarmingUp class in Google's Guava library takes this approach.

The critical problem

Let's revisit the critical-problem scenario. At 0:59, since the bucket held 100 tokens, the 100 requests can pass instantly. But tokens refill at a low, steady rate, so by 1:00 the bucket cannot have accumulated another 100 tokens, and a second batch of 100 requests cannot pass. The token bucket algorithm therefore handles the critical problem well. The figure below compares the rate changes of the counter (left) and the token bucket (right) at the window boundary; we can see that although the token bucket allows a burst, the next burst cannot occur until enough tokens have accumulated in the bucket:

Conclusion

Counter vs. sliding window

The counter algorithm is the simplest and can be regarded as a low-precision implementation of the sliding window. A sliding window requires more storage space because it keeps multiple counters (one per cell); in other words, the higher the precision of the sliding window, the more storage it requires.

Leaky bucket vs. token bucket

The most obvious difference between the leaky bucket and the token bucket is that the token bucket allows a certain degree of burst traffic. Because taking a token from the bucket costs no time by default, if there are 100 tokens in the bucket, 100 requests can be allowed through instantly.

The token bucket algorithm is widely used in industry because it is simple to implement, allows some burst traffic, and is friendly to users. Of course, we should choose on a case-by-case basis: there is no single best algorithm, only the most suitable one.

If this article helped you, please give it a "like" before you leave.