Preface:

After a service goes live, we usually have an estimate of how many requests it can handle. When the number of requests per unit of time exceeds that estimated threshold, we should reject the extra requests. This logic is typically implemented with the token bucket or leaky bucket algorithm.

Token bucket algorithm:

Principle:

Tokens are added to a bucket at a fixed rate, and every incoming request tries to take a token from the bucket. If it cannot get one, the request is rejected. The bucket has a fixed capacity and can hold at most n tokens; any tokens added beyond that simply overflow and are discarded.

Code implementation:

import (
	"math"
	"time"
)

var (
	lastReqTime        = time.Now().Unix() // Timestamp of the last request
	lastTokenNum int64 = 1                 // Tokens left over after the last request
	tokenRate    int64 = 1                 // Add 1 token per second
	bucketCap    int64 = 100               // Bucket capacity
)

func tokenLimiter() bool {
	nowT := time.Now().Unix() // Current time
	// Tokens available now = tokens produced since the last request
	// ((now - lastReqTime) * tokenRate) plus the leftover tokens,
	// capped at the bucket capacity.
	nowTokenNum := int64(math.Min(float64((nowT-lastReqTime)*tokenRate+lastTokenNum), float64(bucketCap)))

	if nowTokenNum > 0 {
		lastReqTime = nowT
		lastTokenNum = nowTokenNum - 1 // consume one token
		return true
	}

	return false
}
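
As a usage sketch (the endpoint, port, and response body are illustrative, not part of the original post), the limiter could guard an HTTP handler like this, returning 429 when no token is available:

import (
	"net/http"
)

func main() {
	http.HandleFunc("/api", func(w http.ResponseWriter, r *http.Request) {
		// Reject the request when the bucket has no token left.
		if !tokenLimiter() {
			http.Error(w, "too many requests", http.StatusTooManyRequests)
			return
		}
		w.Write([]byte("ok"))
	})

	http.ListenAndServe(":8080", nil)
}

Note that net/http serves each request in its own goroutine, so the shared counters above would need locking in real use (see the sketch after the leaky bucket implementation).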

Leaky bucket algorithm:

Principle:

A bucket has a hole in the bottom that leaks water at a fixed rate. Each incoming request adds a drop of water to the bucket. If the bucket is already full, the drop overflows, and that overflow corresponds to a rejected request.

Code implementation:

import (
	"math"
	"time"
)

var (
	lastReqTime       = time.Now().Unix() // Timestamp of the last request
	lastReqNum  int64 = 10                // Water left in the bucket after the last request
	leakyRate   int64 = 1                 // Leak rate: one drop per second
	bucketCap   int64 = 100               // Bucket capacity
)

func leakyLimiter() bool {
	nowT := time.Now().Unix() // Current time
	// Water in the bucket now = water left last time minus what has leaked
	// since then ((now - lastReqTime) * leakyRate), floored at 0.
	nowReqNum := int64(math.Max(float64(lastReqNum-(nowT-lastReqTime)*leakyRate), 0))

	// If the bucket is already full, this drop overflows: reject the request.
	if nowReqNum >= bucketCap {
		return false
	}

	lastReqNum = nowReqNum + 1 // add this request's drop of water
	lastReqTime = nowT

	return true
}
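
Both implementations above mutate shared package-level variables, so as written they are not safe for concurrent callers. Below is a minimal sketch of a mutex-guarded token bucket (the tokenBucket type and its field names are my own, not from the original code):

import (
	"math"
	"sync"
	"time"
)

// tokenBucket holds the same state as the package-level variables above,
// but guarded by a mutex so it can be shared between goroutines.
type tokenBucket struct {
	mu       sync.Mutex
	lastTime int64 // timestamp of the last request
	tokens   int64 // tokens left after the last request
	rate     int64 // tokens added per second
	capacity int64 // bucket capacity
}

func (b *tokenBucket) Allow() bool {
	b.mu.Lock()
	defer b.mu.Unlock()

	now := time.Now().Unix()
	// Refill since the last request, capped at the bucket capacity.
	tokens := int64(math.Min(float64((now-b.lastTime)*b.rate+b.tokens), float64(b.capacity)))
	if tokens <= 0 {
		return false
	}

	b.lastTime = now
	b.tokens = tokens - 1 // consume one token
	return true
}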

Conclusion:

  1. Many sources argue that the token bucket is better than the leaky bucket because, besides limiting the average rate, it allows a reasonable burst of traffic over a short period.
  2. The code above is only for learning; a production implementation also has to handle concurrent access (locks or atomic operations), among other concerns.
  3. You can also implement this logic with Redis plus a Lua script, as in the sketch below.
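
A minimal sketch of point 3, assuming the github.com/redis/go-redis/v9 client and a Redis instance on localhost:6379; the key name, rate, and capacity are arbitrary, and the Lua script is just the token bucket logic from above ported into Redis:

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// Token bucket as a Lua script: Redis runs the whole read-modify-write atomically.
// KEYS[1] = bucket key, ARGV[1] = refill rate (tokens/sec), ARGV[2] = capacity, ARGV[3] = current unix time.
const tokenBucketScript = `
local tokens = tonumber(redis.call('HGET', KEYS[1], 'tokens') or ARGV[2])
local last   = tonumber(redis.call('HGET', KEYS[1], 'last') or ARGV[3])
local rate   = tonumber(ARGV[1])
local cap    = tonumber(ARGV[2])
local now    = tonumber(ARGV[3])

tokens = math.min(tokens + (now - last) * rate, cap)
local allowed = 0
if tokens >= 1 then
	tokens = tokens - 1
	allowed = 1
end
redis.call('HSET', KEYS[1], 'tokens', tokens, 'last', now)
return allowed
`

func redisTokenLimiter(ctx context.Context, rdb *redis.Client, key string) (bool, error) {
	// Rate 1 token/second, capacity 100, matching the in-memory example above.
	res, err := rdb.Eval(ctx, tokenBucketScript, []string{key}, 1, 100, time.Now().Unix()).Int()
	if err != nil {
		return false, err
	}
	return res == 1, nil
}

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	ok, err := redisTokenLimiter(ctx, rdb, "limiter:demo")
	fmt.Println(ok, err)
}

Because Redis executes the script as a single atomic operation, the check-and-update step cannot race, even when several service instances share the same limiter key.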