Rate limiting
The purpose of rate limiting is to cap the rate of concurrent access, or the number of requests within a time window, in order to protect the system. Once the rate reaches the limit, requests can be denied, queued, or degraded.
The term rate limiting is often used in computer networking, where it is defined as follows:
In computer networks, rate limiting is used to control the rate of traffic sent or received by a network interface controller, and can be used to prevent DoS attacks.
- By controlling the rate at which network data is sent or received, rate limiting prevents possible DoS attacks. In practice, rate limiting can also be used to protect API services.
- The computing resources behind a service (CPU, memory, disk, and network bandwidth) are limited, and so is the QPS an API service can sustain. Rate limiters use rate-limiting algorithms to restrict API access and ensure a service is not pushed beyond the load it can bear.
Main contents:
A brief introduction to, and comparison of, common rate-limiting algorithms
Common rate-limiting algorithms
The common rate-limiting algorithms are:
- Token bucket
- Leaky bucket
- Fixed window counter
- Sliding window log
- Sliding window counter
These methods can be broadly grouped into counting algorithms, the leaky bucket algorithm, and the token bucket algorithm.
Counting rate-limiting algorithms
Both the fixed window and the sliding window approaches count requests at their core; the only difference is how the counting interval is handled.
Fixed window counter
Implementation principle
- The idea of the fixed window counter is simple: only two parameters are needed, the counting period T and the maximum number of accesses (calls) N allowed within the period. When a request arrives, it is handled as follows:
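The process above can be sketched as a minimal fixed window counter. This is an illustrative, single-threaded sketch (class and parameter names are mine, not from the original); a production limiter would also need locking or atomic counters.

```python
import time

class FixedWindowLimiter:
    """Fixed-window counter: allow at most `limit` (N) requests
    per `window` (T) seconds. Illustrative sketch only."""

    def __init__(self, limit, window=1.0):
        self.limit = limit                    # N: max requests per window
        self.window = window                  # T: window length in seconds
        self.window_start = time.monotonic()  # start time of current window
        self.count = 0                        # requests seen in current window

    def allow(self):
        now = time.monotonic()
        # On window switch, reset the counter and start a new window.
        if now - self.window_start >= self.window:
            self.window_start = now
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False  # quota exhausted: trigger rate limiting
```

For example, with `limit=3`, the first three calls to `allow()` pass and the fourth is rejected until the window rolls over.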
Algorithm advantages
- The fixed window counter is simple to implement: it only needs to record the start time of the current period and the total number of accesses within it, so the extra storage cost is small.
Algorithm drawbacks
The drawback of the fixed window counter is equally obvious: when the period switches, the access count of the previous period is immediately reset to 0, which can allow a traffic burst across the boundary.
Consider the following simplified model:
- In period T0, n1 accesses arrive simultaneously at moment A;
- In period T1, n2 accesses arrive simultaneously at moment B;
- Both n1 and n2 are smaller than the configured maximum N (otherwise rate limiting would already be triggered).
When the period switches, the two bursts are adjacent in time, so in practice the fixed window counter can exceed the quota N across the boundary.
- For example, with a QPS limit of 10, a user can send 10 requests in the 0.1 seconds just before the period switch and another 10 in the 0.1 seconds just after it. By the algorithm's rules all 20 requests pass the limiter, so 20 requests go through within about 0.2 seconds, exceeding the limit of 10 requests per second.
Sliding window counter
To solve the burst problem at period boundaries, a sliding window counter can be used. The sliding window counter is essentially still a fixed window counter; the difference is that the counting period is subdivided.
Implementation principle
Compared with the fixed window counter, the sliding window counter adds a third parameter M, the number of sub-windows within the period T, in addition to the period T and the maximum number of accesses (calls) N.
The rate-limiting process is as follows:
In addition to the data recorded by the fixed window counter, the sliding window counter needs an array of length M to record the access count of each sub-window as the window slides. An example of the process:
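A minimal sketch of the sliding window counter described above, using a ring buffer of M per-slot counters (names and structure are illustrative assumptions, not from the original):

```python
import time

class SlidingWindowLimiter:
    """Sliding-window counter: period T is split into M sub-windows;
    a request passes only if the total count over the last T seconds
    is below `limit` (N). Illustrative sketch only."""

    def __init__(self, limit, period=1.0, slots=10):
        self.limit = limit              # N: max requests per period
        self.slots = slots              # M: number of sub-windows
        self.slot_len = period / slots  # length of one sub-window
        self.counts = [0] * slots       # ring buffer of per-slot counters
        self.slot_index = 0             # index of the current sub-window
        self.slot_start = time.monotonic()

    def _advance(self, now):
        # Slide forward, zeroing every slot the window has moved past.
        while now - self.slot_start >= self.slot_len:
            self.slot_start += self.slot_len
            self.slot_index = (self.slot_index + 1) % self.slots
            self.counts[self.slot_index] = 0

    def allow(self):
        self._advance(time.monotonic())
        # Sum over all M slots = requests in the last T seconds.
        if sum(self.counts) < self.limit:
            self.counts[self.slot_index] += 1
            return True
        return False
```

Because only the oldest sub-window is discarded as time passes, the count decays gradually instead of dropping to zero all at once.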
Period switching problem
Because the period is subdivided, the count is never reset to 0 all at once when a period ends, so the cross-boundary burst problem does not occur.
Non-counting rate-limiting methods
The two commonly used non-counting algorithms are the leaky bucket algorithm and the token bucket algorithm.
- The idea of the leaky bucket algorithm is simple: water (requests) enters the bucket, and the bucket leaks water at a fixed rate. When the inflow is too fast, the excess water overflows and is discarded.
Leaky bucket rate limiting
Implementation principle
Schematic diagram of the leaky bucket algorithm:
- Set the leak rate and the total capacity of the bucket. When a request arrives, check whether the bucket is full: if not, the request is stored in the bucket; otherwise the request is discarded.
- A worker thread fetches and processes requests from the bucket at the configured rate.
- Two parameters are needed: the leak rate (r) and the bucket capacity (N).
The process is as follows:
Algorithm characteristics
- The main feature of the leaky bucket is that no matter at what rate requests arrive, the rate at which they actually reach the server is at most r: it smooths the input.
- The drawback is equally obvious: since requests are processed only at a fixed rate, choosing that rate is the core problem. If it is set too low, performance resources are wasted; if too high, resources may be insufficient.
- Moreover, because the rate is fixed, fluctuations in the input are never reflected at the server: even when resources are idle, burst requests cannot be handled promptly. The leaky bucket is therefore ill-suited to bursty traffic.
Token bucket rate limiting
- In addition to limiting the average transfer rate, many scenarios require allowing some degree of burst. The leaky bucket algorithm may be unsuitable in such cases, while the token bucket algorithm fits well.
- The token bucket algorithm puts tokens into a bucket at a constant rate. To process a request, a token must first be taken from the bucket; when no token is available, service is denied.
Implementation principle
Set the rate at which tokens are added to the bucket and the maximum number of tokens the bucket can hold. When a request arrives, it asks for tokens from the bucket (one or more, depending on the application). If enough tokens are available, the corresponding number is removed and the request passes; if there are not enough tokens in the bucket, the rate-limiting rule is triggered.
As described, the parameters to set are the token-adding rate r and the bucket's maximum capacity N. The process is as follows:
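The process above can be sketched with a lazy refill: instead of a timer that adds tokens on a schedule, tokens owed since the last call are credited on each arrival, which is equivalent for admission decisions. Names and the `cost` parameter are illustrative assumptions.

```python
import time

class TokenBucket:
    """Token bucket: tokens are added at rate r up to capacity N; each
    request consumes tokens and is rejected when too few remain.
    Lazy refill instead of a timer thread. Illustrative sketch only."""

    def __init__(self, rate, capacity):
        self.rate = rate                 # r: tokens added per second
        self.capacity = capacity         # N: max tokens the bucket holds
        self.tokens = float(capacity)    # start with a full bucket
        self.last = time.monotonic()

    def allow(self, cost=1):
        now = time.monotonic()
        # Credit tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost          # consume and pass the request
            return True
        return False                     # not enough tokens: rate limit
```

Starting the bucket full is what allows a burst of up to N requests to pass immediately, which is the behavior the next section describes.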
Algorithm characteristics
By setting the token-adding rate, the token bucket controls the average rate at which requests pass; and because the bucket caches up to N tokens, it can tolerate a certain amount of burst traffic.
Comparison of rate-limiting algorithms
This section gives a brief comparison of the four algorithms described above.