Compared with traffic limiting based on a time window, the token bucket and leaky bucket algorithms shape traffic much better, but the better the shaping, the stricter the limiter behaves. A token bucket that has not been pre-warmed, combined with rejection-style limiting, will kill many requests by mistake. In the algorithm above, when n is small, say 50, a token is put into the bucket every 20ms, while interface accesses within 1s can be quite random. The result: although the curve shows that the maximum access frequency is enforced and the traffic is smooth at fine time granularity, many interface requests that should not have been rejected are killed by mistake.
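This cold-start problem can be sketched with a minimal rejection-style token bucket (the names `TokenBucket` and `try_acquire` are illustrative, not from the original): with no pre-warming, a burst right after startup is rejected even though the configured rate would allow it.

```python
import time

class TokenBucket:
    """Rejection-style token bucket: try_acquire() fails when no token is available."""
    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec          # tokens added per second (e.g. 50 -> one every 20ms)
        self.capacity = capacity          # maximum tokens the bucket can hold
        self.tokens = 0.0                 # no pre-warming: the bucket starts empty
        self.last = time.monotonic()

    def try_acquire(self):
        now = time.monotonic()
        # refill based on elapsed time, capped at the bucket capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False                      # no token yet: the request is rejected

bucket = TokenBucket(rate_per_sec=50, capacity=1)
# A burst arriving within the first millisecond is entirely rejected,
# even though the limiter allows 50 requests per second:
results = [bucket.try_acquire() for _ in range(5)]
```

This is why production limiters (Guava's RateLimiter, for instance) offer a warm-up period instead of starting from an empty bucket.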

Therefore, the token bucket and leaky bucket algorithms are better suited to blocking-style limiting. For example, when some background job exceeds its maximum access frequency, the request is not rejected but blocked until a token becomes available. For scenarios sensitive to response time, such as microservice interface limiting, a rejection-style algorithm based on a time window is the better choice. The sliding time window has higher space complexity and a larger memory footprint; by contrast, the fixed window algorithm handles bursts across window boundaries worse, but it is simple to implement, and that simplicity brings good performance and fewer bugs, so the fixed time window algorithm is also a good choice for microservice interface traffic limiting.
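A fixed-window limiter of the kind recommended above can be sketched in a few lines (class and method names are illustrative): a counter per window, reset when the window rolls over.

```python
import time

class FixedWindowLimiter:
    """Fixed time window limiter: allow at most `limit` requests per `window` seconds."""
    def __init__(self, limit, window=1.0):
        self.limit = limit
        self.window = window
        self.count = 0
        self.window_start = time.monotonic()

    def try_acquire(self):
        now = time.monotonic()
        if now - self.window_start >= self.window:
            # a new window has started: reset the counter
            self.window_start = now
            self.count = 0
        if self.count < self.limit:
            self.count += 1
            return True
        return False                      # over the quota for this window: reject

limiter = FixedWindowLimiter(limit=3)
results = [limiter.try_acquire() for _ in range(5)]  # first 3 pass, last 2 are rejected
```

The weakness mentioned above is visible in this sketch: a burst at the end of one window plus a burst at the start of the next can pass 2×`limit` requests in a short span, which the sliding window avoids at the cost of more memory.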


These are the pros and cons.

To reject is to discard the request. To block is to hold the request and process it later rather than discarding it.

Best practices

For rejection-style limiting, use a time-window algorithm.

Rejection

Blocking algorithms are only suitable for scenarios where every request must eventually be processed. Rejection algorithms suit general rate limiting and are simpler to implement.

Of course, a blocking algorithm can also discard a request, that is, reject it. But the suitable scenarios differ, because the leaky bucket algorithm processes requests at a fixed speed, while the time-window algorithm allows a fixed number of requests per time window.

That is, one processes one request every 1ms, while the other processes 1000 requests every 1s.

What is the difference between the two? Suppose our requirement is that a single 1ms should handle 10 requests, not 1. With the time-window algorithm, those 10 requests can be processed in any 1ms within the 1s window. With the leaky bucket algorithm, only one request can pass in that millisecond; the other nine cannot be processed and must be discarded.
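The contrast can be shown with simulated arrival times (a toy illustration; the function names are made up for this sketch): a leaky bucket paced at one request per 1ms rejects most of a same-millisecond burst, while a 1000-per-second window admits all of it.

```python
def leaky_bucket_admits(arrivals_ms, interval_ms=1.0):
    """Leaky bucket draining at one request per interval_ms; returns admit/reject per arrival."""
    next_free = 0.0
    out = []
    for t in arrivals_ms:
        if t >= next_free:
            out.append(True)
            next_free = t + interval_ms   # the bucket drains at a fixed speed
        else:
            out.append(False)             # arrived before the previous request's slot freed up
    return out

def fixed_window_admits(arrivals_ms, limit=1000, window_ms=1000.0):
    """Fixed window: up to `limit` requests per window, regardless of spacing inside it."""
    out, count, start = [], 0, 0.0
    for t in arrivals_ms:
        if t - start >= window_ms:
            start, count = t, 0           # roll over to a new window
        ok = count < limit
        count += ok
        out.append(ok)
    return out

burst = [0.0] * 10                        # 10 requests arriving in the same millisecond
leaky = leaky_bucket_admits(burst)        # only the first one passes
window = fixed_window_admits(burst)       # all 10 pass
```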

The essential difference is that the two control different dimensions. Of course, if you insist, a leaky bucket can also be configured to handle 10 requests per 1ms. In the end, the difference lies mainly in the applicable scenarios; it is not that one algorithm can only solve its own class of problems and is fundamentally unable to solve the other's.

Blocking

Blocking is also a form of flow limiting; in essence, it controls flow.

Blocking means the data must eventually be consumed. Take the real-name authentication interface in a payment system: every request must be processed. But if your thread pool has a blocking queue of size 1000, then once the queue is full, new requests have to block first, wait for queued data to be consumed, and only then enqueue.
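A minimal sketch of this back-pressure behavior, using a small bounded queue in place of the thread pool's queue of 1000 (all names and sizes here are illustrative): producers block in `put()` when the queue is full instead of discarding requests, so every request is eventually handled.

```python
import queue
import threading
import time

q = queue.Queue(maxsize=3)                # bounded queue; put() blocks when it is full
processed = []

def consumer():
    while True:
        item = q.get()
        if item is None:                  # sentinel: stop consuming
            break
        time.sleep(0.005)                 # simulate handling one authentication request
        processed.append(item)

t = threading.Thread(target=consumer)
t.start()

for req in range(10):
    q.put(req)                            # blocks once 3 items are queued, waits for a free slot

q.put(None)
t.join()                                  # all 10 requests were processed, none discarded
```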

Reference

mp.weixin.qq.com/s/k9tm-4lBw…