
A rate limiter is an important component for improving service stability. It caps the request rate and protects a service from being overloaded. There are many ways to implement one; the common rate-limiting algorithms are fixed window, sliding window, leaky bucket, and token bucket.

Put simply, a token bucket is an imaginary bucket of fixed size into which tokens are added at a constant rate until the bucket is full. When requests are few, the bucket can “save up” tokens to absorb sudden bursts of traffic. As long as there are tokens left in the bucket, requests can keep taking them out; once the bucket is empty, a request has to wait until new tokens are added.
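To make the idea concrete, here is a minimal toy sketch of the token-bucket bookkeeping. The names bucket and take are made up for illustration; this is not how golang.org/x/time/rate is written, but the core idea of refilling lazily based on elapsed time is the same.

package toybucket

import (
    "math"
    "time"
)

// bucket is a toy token bucket: it holds at most capacity tokens and is
// refilled at rate tokens per second.
type bucket struct {
    capacity float64
    rate     float64
    tokens   float64
    last     time.Time
}

// take tries to consume n tokens and reports whether it succeeded.
func (b *bucket) take(n float64) bool {
    now := time.Now()
    // Lazily refill based on the time elapsed since the last update,
    // never exceeding the bucket capacity.
    b.tokens = math.Min(b.capacity, b.tokens+now.Sub(b.last).Seconds()*b.rate)
    b.last = now
    if b.tokens < n {
        return false
    }
    b.tokens -= n
    return true
}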

After understanding the token bucket principle, some readers may want to implement a rate limiter in their own projects. Reinventing the wheel like this is genuinely good for improving your skills, but for a commercial project there is no need to build your own: the Go team has already built the wheel for us!

The official Go extension libraries include an implementation of this rate-limiting algorithm, golang.org/x/time/rate, and this limiter is also based on the token bucket.
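The package lives in the golang.org/x/time module. Assuming a standard Go modules setup, it can be fetched and imported like this:

go get golang.org/x/time/rate

import "golang.org/x/time/rate"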

Internal structure of the rate limiter

The time/rate package defines a Limiter type, and all of the limiter’s functionality is built on top of it. Its internal structure is as follows:

type Limiter struct {
    mu        sync.Mutex
    limit     Limit
    burst     int       // the token bucket size
    tokens    float64
    last      time.Time // the last time the tokens field was updated
    lastEvent time.Time // the time of the last limiter event (a request being allowed and a request being limited both count)
}

The main fields are:

  • Limit: the rate at which tokens are put into the bucket. Its type, Limit, is an alias for float64. When configuring the limiter you can either specify how many tokens are added to the bucket per second or specify the interval between tokens; given one, the other can be derived.

  • Burst: Size of the token bucket.

  • Tokens: Tokens in the bucket.

  • Last: the last time the limiter’s token count was updated.

  • LastEvent: the time of the last limiter event (a request being allowed and a request being limited both count as limiter events).

As you can see, the time/rate limiter does not maintain a separate timer and queue that actually drop tokens into the bucket at fixed intervals. It simply keeps a count of the tokens remaining in the bucket: before each token is consumed, the count is topped up according to how much time has passed since the last update, much like the toy sketch shown earlier.

Now that we understand how the time/rate limiter is implemented internally, the rest of this article focuses on how to use the component.

Construct a rate limiter

We can construct a rate limiter object as follows:

limiter := rate.NewLimiter(10, 100)

There are two parameters:

  1. The first parameter, r Limit, sets the limiter’s limit field and specifies how many tokens are generated and put into the token bucket per second. Limit is actually an alias for float64.

  2. The second parameter, b int, is the capacity of the token bucket, i.e. it sets the limiter’s burst field.

So the example above constructs a rate limiter whose token bucket holds 100 tokens and is refilled at a rate of 10 tokens per second.

Besides passing the number of tokens generated per second directly as the r Limit argument, you can also use the Every function to specify the interval at which tokens are put into the bucket, for example:

limit := rate.Every(100 * time.Millisecond)
limiter := rate.NewLimiter(limit, 100)

This puts one token into the bucket every 100 ms, which is effectively 10 tokens per second.

Use the rate limiter

Limiter provides three families of methods for consuming tokens, and each can consume either a single token or several at once. They differ in what happens when there are not enough tokens in the bucket: a call can block and wait for the bucket to be refilled, or it can return immediately and report that no token could be taken.

Wait/WaitN

func (lim *Limiter) Wait(ctx context.Context) (err error)
func (lim *Limiter) WaitN(ctx context.Context, n int) (err error)

Wait is simply shorthand for WaitN(ctx, 1).

When consuming tokens with WaitN, if there are not enough tokens in the bucket (fewer than n), the method blocks until enough tokens are available; if there are enough, it returns immediately.

As you can see, the Wait method takes a context parameter. We can set a deadline or timeout on the context to cap how long the call may wait.

// Wait until a token can be taken from the bucket
err := limiter.Wait(context.Background())
if err != nil {
    fmt.Println("Error: ", err)
}

// Set the wait timeout to one second
ctx, cancel := context.WithTimeout(context.Background(), time.Second*1)
defer cancel()
err = limiter.Wait(ctx)
if err != nil {
    fmt.Println("Error: ", err)
}
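Putting construction and Wait together, a small self-contained demo might look like the sketch below; the rate of 2 tokens per second, the burst of 4 and the loop of 6 requests are arbitrary values chosen only to make the pacing visible.

package main

import (
    "context"
    "fmt"
    "time"

    "golang.org/x/time/rate"
)

func main() {
    // 2 tokens per second, bucket size 4.
    limiter := rate.NewLimiter(2, 4)

    for i := 0; i < 6; i++ {
        // Block until a token is available (or the context is cancelled).
        if err := limiter.Wait(context.Background()); err != nil {
            fmt.Println("Error: ", err)
            return
        }
        fmt.Println("request", i, "served at", time.Now().Format("15:04:05.000"))
    }
}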

Allow/AllowN

func (lim *Limiter) Allow() bool
func (lim *Limiter) AllowN(now time.Time, n int) bool

Allow is shorthand for AllowN(time.Now(), 1).

The AllowN method reports whether there are at least n tokens in the bucket at the given time. If so, it returns true and consumes n tokens from the bucket; otherwise it consumes nothing and returns false.

The corresponding usage scenario: when the request rate exceeds the limit, requests over the limit are simply dropped.

if limiter.AllowN(time.Now(), 2) {
    fmt.Println("event allowed")
} else {
    fmt.Println("event not allowed")
}

Reserve/ReserveN

func (lim *Limiter) Reserve() *Reservation
func (lim *Limiter) ReserveN(now time.Time, n int) *Reservation

Reserve is equivalent to ReserveN(time.Now(), 1).

ReserveN is a bit more involved. The call always returns a *Reservation object, whether or not there are enough tokens. You can then call the reservation’s Delay() method, which returns a time.Duration telling you how long you have to wait before you may proceed. If you would rather not wait, you can call the Cancel() method, which returns the tokens to the bucket.

As a simple example, we can use the Reserve method like this.

r := limiter.Reserve()
if !r.OK() {
    // Not allowed to act! Did you remember to set lim.burst to be > 0 ?
    return
}
time.Sleep(r.Delay())
Act() // Execute the relevant logic
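If the delay a reservation asks for is longer than you are willing to tolerate, you can hand the tokens back with Cancel() and give up. Continuing the fragment above, with a hypothetical 500 ms threshold:

r := limiter.ReserveN(time.Now(), 2)
if !r.OK() {
    // n exceeds the bucket size, so the reservation can never be satisfied.
    return
}
delay := r.Delay()
if delay > 500*time.Millisecond {
    // Too long to wait: return the reserved tokens to the bucket and give up.
    r.Cancel()
    return
}
time.Sleep(delay)
Act() // Execute the relevant logic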

Dynamically adjust the rate and bucket size

Limiter supports dynamic adjustment of the rate and bucket size after creation:

  1. SetLimit(Limit) Changes the rate at which tokens are placed

  2. SetBurst(int) Changes the Token bucket size

With these two methods, the token bucket size and the refill rate can be adjusted dynamically at runtime to match the environment and the service’s needs, as the sketch below shows.
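For example, a service could loosen the limit during off-peak hours and tighten it again under load; the numbers below are arbitrary and only for illustration.

limiter := rate.NewLimiter(10, 100)

// Off-peak hours: let more traffic through.
limiter.SetLimit(rate.Limit(50)) // refill at 50 tokens per second
limiter.SetBurst(200)            // allow bursts of up to 200

// Under heavy load: tighten the limit again.
limiter.SetLimit(rate.Every(200 * time.Millisecond)) // one token every 200ms, i.e. 5 per second
limiter.SetBurst(10)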

Conclusion

Today we looked at how to use Go’s official rate limiter, a token bucket implementation. Wait/WaitN and Allow/AllowN are the most commonly used methods: the former lets the program wait for new tokens when the bucket runs short (preferably with a wait timeout set), while the latter simply drops the request when there are not enough tokens in the bucket.

Besides Go’s official implementation, Uber’s open-source rate limiter uber-go/ratelimit is also a good choice. Unlike the official limiter, Uber’s is based on the leaky bucket algorithm, with improvements over the traditional version. If you are interested, try it out for yourself.
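For reference, a minimal use of Uber’s limiter looks roughly like the sketch below; this is based on my recollection of the go.uber.org/ratelimit API (New and Take), so please check the uber-go/ratelimit README before relying on it. Take() blocks just long enough to keep callers evenly spaced at the configured per-second rate.

package main

import (
    "fmt"
    "time"

    "go.uber.org/ratelimit"
)

func main() {
    rl := ratelimit.New(100) // 100 operations per second

    prev := time.Now()
    for i := 0; i < 5; i++ {
        now := rl.Take() // blocks so calls are spaced roughly 10ms apart
        fmt.Println(i, now.Sub(prev))
        prev = now
    }
}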