This is the 10th day of my participation in the November Gwen Challenge. Check out the event details: The last Gwen Challenge 2021
Preface
Nginx can serve not only as a web server, reverse proxy, and load balancer, but also as a rate limiter. Here we take Nginx as an example and walk through how to configure rate limiting.
I have written about Nginx before:
What nginx knows (Worth checking out)
Let’s talk a little bit about the leaky bucket algorithm
The rate-limiting algorithm Nginx uses is the leaky bucket algorithm, so let's first get a basic understanding of it before moving on.
The leaky bucket algorithm is, as the name suggests, modeled on a bucket with a leak. To understand it, let's look at a diagram of the algorithm:
As the figure shows, the algorithm is actually quite simple. We have a bucket of fixed capacity, with water flowing in and water flowing out. We cannot predict how much water will flow in, or how fast, but the bucket drains at a fixed rate. And when the bucket is full, any excess water overflows and is lost.
Replace the water with real-world requests and you can see that the leaky bucket algorithm inherently limits request speed. With a leaky bucket, the interface is guaranteed to process requests at a constant rate, so, unlike a fixed time window, the algorithm has no boundary-spike problem.
The leaky bucket algorithm can be roughly viewed as a process of injecting and leaking water: water flows out of the bucket at a fixed rate but may flow in at any rate, and water that would exceed the bucket's capacity is discarded. Because the capacity is fixed, the overall rate is guaranteed.
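The injection-and-leak process above can be sketched in a few lines of Python. This is an illustrative simulation only (the class name, parameters, and time handling are mine, not Nginx's internal implementation):

```python
class LeakyBucket:
    """Minimal leaky-bucket sketch: water (requests) drains at a fixed
    rate; arrivals that would overflow the bucket are rejected."""

    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec  # constant outflow, requests per second
        self.capacity = capacity  # bucket size: max requests held at once
        self.water = 0.0          # current fill level
        self.last = 0.0           # timestamp of the last arrival

    def allow(self, now):
        """Return True if a request arriving at time `now` (seconds)
        fits in the bucket, False if it overflows and is dropped."""
        # Leak out whatever drained since the last arrival.
        self.water = max(0.0, self.water - (now - self.last) * self.rate)
        self.last = now
        if self.water + 1 <= self.capacity:
            self.water += 1       # the request fits: occupy one slot
            return True
        return False              # bucket full: overflow, reject
```

For example, with rate 10/s and capacity 5, ten simultaneous arrivals see the first five accepted and the rest dropped; half a second later the bucket has fully drained and accepts requests again.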
Now that we know what the algorithm is, let's see how to configure rate limiting in Nginx.
Configuration module
In the HTTP block, configure the basic traffic limiting configuration:
01 http {
02     limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;
03
04     server {
05         location /test/ {
06             limit_req zone=mylimit;
07
08             proxy_pass http://api;
09         }
10     }
11 }
The code above should look familiar: the server block defines a server. Lines 2 and 6 work together to complete the rate-limiting setup. Here's what they do:
The limit_req_zone directive is used in the Nginx configuration file to define a rate-limiting zone. It must be placed in the http block, otherwise it will not take effect, because the directive is only valid in the http context.
The directive takes three parameters. Let's look at each:
- The first parameter is the key, here $binary_remote_addr, which is what the rate limiter uses to group requests. $binary_remote_addr is a built-in Nginx variable holding the binary representation of the client's IP address. In other words, this configuration rate-limits with the client's IP address as the key.
- The second parameter is the shared memory zone for the rate-limiting state. For performance, Nginx keeps this state in shared memory so that all Nginx processes can use it. Since it consumes memory, you should specify a reasonable size: not wasteful, yet large enough to store the needed information. As a rule of thumb, 1 MB of space can store about 16,000 IP addresses.
Here we declare a zone named mylimit with a size of 10 MB, enough for roughly 160,000 IP addresses.
- The third parameter is the rate: a request count and a time unit separated by a forward slash. This is the maximum rate, so 10r/s means 10 requests per second: requests reaching the backend through this Nginx server cannot exceed 10 per second.
In line 6, we use the limit_req directive to declare that this location requires rate limiting, and that the zone for it is mylimit. This way, every request to /test/ reads the configuration on line 6, looks up the parameters declared on line 2 by the zone name mylimit, and Nginx decides whether the request should be rejected.
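A side note on what "rejected" means here: by default Nginx answers rejected requests with HTTP 503. If you prefer the more descriptive 429 Too Many Requests, the limit_req_status directive (available since Nginx 1.3.15) lets you change it, as in this sketch:

```nginx
location /test/ {
    limit_req zone=mylimit;
    # Return 429 Too Many Requests instead of the default 503
    limit_req_status 429;
    proxy_pass http://api;
}
```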
Mind you, this may not be enough. Remember, Nginx uses a leaky bucket algorithm, not a time-window algorithm, and as we mentioned earlier, a leaky bucket is configured with two parameters: the outflow rate and the bucket capacity.
Configuring the peak: the bucket capacity of the Nginx leaky bucket is set on the location, via the burst parameter. As follows:
01 http {
02     limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;
03
04     server {
05         location /test/ {
06             limit_req zone=mylimit burst=20;
07
08             proxy_pass http://api;
09         }
10     }
11 }
In line 6, we simply add burst to the limit_req declaration. Here burst=20 means that, in leaky bucket terms, our bucket can hold up to 20 excess requests; they are queued and processed at the configured rate rather than rejected outright.
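One caveat worth knowing: with burst alone, queued requests are drained at the configured rate (here, one every 100 ms), which adds latency. Nginx's nodelay parameter forwards burst requests immediately while still counting them against the burst slots, which refill at the configured rate. A sketch:

```nginx
location /test/ {
    # Up to 20 burst requests are forwarded immediately instead of
    # being queued; their slots free up again at 10r/s.
    limit_req zone=mylimit burst=20 nodelay;
    proxy_pass http://api;
}
```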
And with that, our Nginx rate limiter is configured. Pretty simple, right? Give it a try yourself. That's all for today; see you next time, when we'll move on to queues. Let's keep learning together!
Closing
Thank you for reading. If you feel you've learned something, please like and follow. If you have any questions, let's discuss them in the comments below.
Come on! See you next time!
Finally, let me share a few slick tricks I wrote about earlier:
Copying objects: this trick is a little slick!
Solid stuff! SpringBoot uses listener events to implement asynchronous operations