
What is the counter algorithm?

The counter algorithm counts the number of requests within a fixed period of time and triggers the rate-limiting policy once the count reaches a specified threshold. When the next period begins, the count is reset to zero. This algorithm is very simple to implement in both single-machine and distributed environments: with Redis, the atomic INCR command combined with a key expiration time is enough.
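As a concrete sketch, here is a minimal single-process fixed-window counter (the class name `FixedWindowCounter` and the `allow` method are illustrative, not a standard API; in a distributed setup the same logic maps onto Redis `INCR` plus a key TTL):

```python
import time


class FixedWindowCounter:
    """Fixed-window rate limiter: at most `limit` requests per `window_seconds`."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.window_index = None  # which fixed window we are currently in
        self.count = 0

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        window_index = int(now // self.window)
        if window_index != self.window_index:
            # A new period has begun: clear the counter.
            self.window_index = window_index
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False  # threshold reached: trigger the rate-limiting policy
```

Passing an explicit `now` makes the behavior easy to check: three requests pass, the fourth in the same minute is rejected, and the counter clears in the next minute.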

In the figure above, we set the one-minute threshold to 100, and 60 requests arrive between 0:00 and 1:00. At 1:00 the counter is cleared and counting starts again from 0, so between 1:00 and 2:00 the system can handle at most 100 requests and rejects any beyond that.

This algorithm has a boundary problem. For example, in the figure above, suppose the 60 requests between 0:00 and 1:00 all arrive at 0:50, and the 60 requests between 1:00 and 2:00 all arrive by 1:10. Neither one-minute period exceeds 100 requests, yet the 20 seconds between 0:50 and 1:10 see 120 requests. No single period crosses the threshold, but in those 20 seconds the traffic is well above the intended limit of 100 requests per minute.

What is the sliding time window algorithm?

The sliding window algorithm was invented to solve the boundary problem of the counter algorithm. The TCP protocol likewise uses a sliding window, there for flow control between sender and receiver.

The sliding time window cuts the fixed period of the counter algorithm into multiple small time windows and records the number of requests in each one. As time passes, the window slides forward and expired small windows are dropped. To decide whether to limit, we simply sum the request counts of the small windows that currently fall inside the sliding window.

In the figure above, suppose the per-minute request threshold is again 100 and we divide one minute into four small windows, so each small window can handle at most 25 requests. The dashed box marks the sliding window; its size is two small windows, so at most 50 requests can be processed within it. As time goes by, the sliding window moves forward: at the start it covers 0:00 to 0:30, and 15 seconds later it covers 0:15 to 0:45, with the requests of the expired 0:00 to 0:15 window no longer counted. In this way, the boundary problem of the counter algorithm is solved.

In the sliding time window algorithm, the more small windows the period is divided into, the more smoothly the window slides and the more accurate the rate limiting becomes.
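The scheme above can be sketched as a small in-memory limiter that keeps one counter per sub-window and drops expired ones as time advances (names are illustrative; a distributed version would typically keep the per-bucket counts in Redis instead):

```python
import time
from collections import deque


class SlidingWindowCounter:
    """Sliding-window limiter: at most `limit` requests over the last
    `buckets` sub-windows of `bucket_seconds` each."""

    def __init__(self, limit, bucket_seconds, buckets):
        self.limit = limit
        self.bucket_seconds = bucket_seconds
        self.buckets = buckets
        self.counts = deque()  # (bucket_index, count) pairs, oldest first

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        current = int(now // self.bucket_seconds)
        # Slide forward: drop sub-windows that fell out of the window.
        while self.counts and self.counts[0][0] <= current - self.buckets:
            self.counts.popleft()
        # Total requests inside the sliding window.
        if sum(c for _, c in self.counts) >= self.limit:
            return False
        if self.counts and self.counts[-1][0] == current:
            self.counts[-1] = (current, self.counts[-1][1] + 1)
        else:
            self.counts.append((current, 1))
        return True
```

With a limit of 2 over four 15-second sub-windows, a request at 0:00 and one at 0:50 exhaust the window, a third at 0:55 is rejected, and at 1:05 the 0:00 bucket has slid out, so the request passes again.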

What is leaky bucket traffic limiting algorithm?

The leaky bucket algorithm works as its name suggests: we maintain a funnel whose outflow rate stays constant no matter how fast water pours in. This is similar to message-oriented middleware, where the rate at which messages are processed depends on the consumer, regardless of how fast the producer publishes them.

Leaky bucket capacity = outflow rate × acceptable waiting time. Requests within this capacity queue up to be processed by the system; requests beyond it are discarded.

The leaky bucket algorithm has the following cases:

  1. The request rate is greater than the bucket's outflow rate — that is, the request volume exceeds the upper limit the service can handle — and the rate-limiting policy is triggered.

  2. The request rate is less than or equal to the bucket's outflow rate — that is, the service's processing capacity covers the request volume — and requests are handled normally.

The leaky bucket algorithm has a drawback: when the system receives a sudden burst of heavy traffic in a short period, the leaky bucket cannot absorb it.
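A minimal leaky-bucket sketch, using the "water level" metaphor directly (the class and parameter names are assumptions, not a standard API; here the bucket leaks lazily, i.e. the level is recomputed on each request rather than by a background drain):

```python
import time


class LeakyBucket:
    """Leaky bucket: requests queue up to `capacity` and drain at `rate` per second."""

    def __init__(self, rate, capacity):
        self.rate = rate          # constant outflow, requests per second
        self.capacity = capacity  # = rate * acceptable waiting time
        self.water = 0.0          # requests currently queued in the bucket
        self.last = 0.0           # time of the previous request

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Water leaks out at the constant rate since the last request.
        self.water = max(0.0, self.water - (now - self.last) * self.rate)
        self.last = now
        if self.water < self.capacity:
            self.water += 1
            return True
        return False  # bucket full: the request overflows and is discarded
```

With an outflow of 1 request/second and capacity 2, a burst of three simultaneous requests overflows on the third, but one second later enough water has leaked out for a new request to enter the queue.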

What is the token bucket traffic limiting algorithm?

The token bucket algorithm adds a container of fixed size, the token bucket, into which the system puts tokens at a constant rate. A client request must first take a token from the bucket; only after obtaining one is it allowed to access the system, and the bucket then holds one token fewer. When the bucket is already full, newly generated tokens are discarded.

In the token bucket algorithm, there are several cases:

  1. The request rate is greater than the token generation rate: the tokens in the bucket are gradually exhausted, and subsequent requests are throttled because no token can be obtained.

  2. The request rate equals the token generation rate: the system runs in a steady state.

  3. The request rate is less than the token generation rate: traffic is well below the system's concurrent capacity, and requests are processed normally.

Because the bucket can accumulate tokens, the token bucket algorithm can absorb a short burst of heavy traffic. This is the difference between the token bucket and the leaky bucket.
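A matching token-bucket sketch (again with illustrative names; tokens are refilled lazily on each request, and starting with a full bucket is what lets an initial burst through):

```python
import time


class TokenBucket:
    """Token bucket: tokens accrue at `rate` per second, up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start full so a short burst can pass
        self.last = None        # time of the previous request

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        if self.last is not None:
            # Refill; tokens generated while the bucket is full are discarded.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # no token available: the request is throttled
```

With 1 token/second and capacity 3, a burst of three simultaneous requests all pass (unlike the leaky bucket, which would make them queue), the fourth is throttled, and two seconds later two tokens have accrued again.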

