1. What is rate limiting, and on what basis?
2. Rate-limiting strategies
Denial of service
Service degradation
Privileged requests
Elastic scaling
3. Implementing rate limiting
3.1 Rate limiters
3.1.1 Static rate limiters
3.1.1.1 Leaky Bucket
3.1.1.2 Token Bucket
3.1.1.3 Comparison of the two classic algorithms
| Leaky Bucket | Token Bucket |
| --- | --- |
| When the host has a packet to send, the packet is placed into the bucket. | The bucket holds tokens that are generated at regular intervals. |
| The bucket leaks at a constant rate. | The bucket has a maximum capacity. |
| Bursty traffic is converted into uniform traffic by the leaky bucket. | If a packet is ready, a token is removed from the bucket and the packet is sent. |
| In practice, the bucket is a finite queue that outputs at a finite rate. | If there is no token in the bucket, the packet cannot be sent. |
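The token-bucket column above can be sketched as a small Go type. This is a minimal illustration, not any particular open-source limiter: the names (`TokenBucket`, `Allow`) and the lazy, timestamp-based refill are this sketch's own choices.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// TokenBucket is a minimal token-bucket limiter: tokens accrue at a
// fixed rate up to a maximum capacity, and each request consumes one.
type TokenBucket struct {
	mu       sync.Mutex
	capacity float64   // maximum number of tokens the bucket can hold
	tokens   float64   // current token count
	rate     float64   // tokens added per second
	last     time.Time // last time tokens were refilled
}

func NewTokenBucket(capacity, rate float64) *TokenBucket {
	return &TokenBucket{capacity: capacity, tokens: capacity, rate: rate, last: time.Now()}
}

// Allow reports whether one request may proceed, consuming a token if so.
func (b *TokenBucket) Allow() bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	now := time.Now()
	// Refill tokens for the elapsed interval instead of running a
	// background ticker; cap at the bucket's capacity.
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	bucket := NewTokenBucket(3, 1) // allow a burst of 3, refill 1 token/s
	for i := 0; i < 5; i++ {
		// The initial burst passes; later requests are rejected
		// until tokens refill.
		fmt.Printf("request %d allowed=%v\n", i, bucket.Allow())
	}
}
```

Because refill happens lazily on each call, the bucket needs no goroutine; this is also how production limiters such as `golang.org/x/time/rate` avoid timers.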
3.1.1.4 Open-source limiters
3.1.2 Dynamic limiters
How to solve these problems?
Basic introduction
Algorithm implementation
A. Initial state:
window_(t) = confidence, where t = 0
threshold_(t) = 2 * confidence, where t = 0
B. Window growth:
Fast growth (on success, while below the threshold): window_(t) = window_(t-1) + α * success_weight(t)
Slow growth (on success, at or above the threshold): window_(t) = window_(t-1) + (α * success_weight(t)) / window_(t-1)
C. Window reduction (on a drop):
w_drop = array(confidence * 2, .....)
w_drop = append(w_drop, rps_drop)
threshold_(t) = sum(w_drop) / len(w_drop)
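The formulas above can be sketched in Go as follows. The names mirror the article's symbols (`confidence`, `alpha`, `wDrop`); one point the article leaves unspecified is the window's new value after a drop, so halving it here is an assumption borrowed from TCP's multiplicative decrease.

```go
package main

import "fmt"

// DynamicWindow sketches the flow-control-window limiter described
// above: additive fast growth below the threshold, damped slow growth
// above it, and a threshold recomputed from observed drop rates.
type DynamicWindow struct {
	window    float64   // current allowed request window
	threshold float64   // boundary between fast and slow growth
	alpha     float64   // growth coefficient α
	wDrop     []float64 // request rates observed at past drops
}

func NewDynamicWindow(confidence, alpha float64) *DynamicWindow {
	return &DynamicWindow{
		window:    confidence,
		threshold: 2 * confidence,
		alpha:     alpha,
		// Seed the drop history, as in w_drop = array(confidence * 2, ...).
		wDrop: []float64{confidence * 2},
	}
}

// OnSuccess grows the window: fast (additive) below the threshold,
// slow (divided by the current window) at or above it.
func (d *DynamicWindow) OnSuccess(successWeight float64) {
	if d.window < d.threshold {
		d.window += d.alpha * successWeight
	} else {
		d.window += d.alpha * successWeight / d.window
	}
}

// OnDrop records the request rate at which a drop occurred, recomputes
// the threshold as the mean of observed drop rates, and halves the
// window (the halving is this sketch's assumption).
func (d *DynamicWindow) OnDrop(rpsDrop float64) {
	d.wDrop = append(d.wDrop, rpsDrop)
	sum := 0.0
	for _, v := range d.wDrop {
		sum += v
	}
	d.threshold = sum / float64(len(d.wDrop))
	d.window /= 2
}

func main() {
	d := NewDynamicWindow(10, 1) // confidence = 10, α = 1; threshold = 20
	d.OnSuccess(5)               // below threshold: 10 -> 15
	d.OnSuccess(5)               // still below:     15 -> 20
	d.OnSuccess(5)               // slow growth:     20 -> 20.25
	fmt.Println(d.window, d.threshold)
	d.OnDrop(18) // threshold becomes (20+18)/2 = 19; window halves to 10.125
	fmt.Println(d.window, d.threshold)
}
```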
Refer to Designing Simple & Dynamic Rate Limiter In Golang for more details.
Summary: This dynamic rate-limiting algorithm based on a flow-control window borrows from TCP's congestion-control algorithm. The details of window growth, window reduction, and the triggering strategy can all be defined as needed: simple versions increase or decrease by a percentage of a baseline, while more elaborate ones define stages such as the fast growth, slow growth, and multiplicative decrease described above.
3.2 Circuit breakers
3.2.1 Circuit breaker introduction
3.2.2 The circuit breaker as a state machine
Closed state:
We maintain a counter of failed calls, incrementing it by one on each failure. If the number of recent failures within a given period exceeds the allowed failure threshold, the breaker switches to the Open state.
Open state:
In this state, requests to the application immediately return an error response without invoking the back-end service. A timeout timer is started on entering this state; when it expires, the breaker switches to Half-Open.
Half-Open state:
The application is allowed to send a limited number of requests to the service. If these requests succeed, the error that caused the earlier failures is assumed to be fixed, the breaker switches to the Closed state, and the error counter is reset. If these requests fail, the problem that caused the earlier failures is considered to still exist, the breaker switches back to Open, and the timer is restarted to give the system time to recover. The half-open state is a good way to prevent a recovering service from being dragged down again by a sudden flood of requests.
3.2.3 Open-source implementations
3.3 Other overload judgment methods
4. Rate limiting in industry
4.1 WeChat overload control
How to determine overload?
Handling policy after overload
Priority policy:
The working process