Background
In our community product, students often post spam or otherwise malicious messages. These give our operations team a real headache: they cannot be filtered at the gateway by IP, so they have to be handled per individual user. Our usual policy is: no more than 2 posts per minute; once a user exceeds that, they are locked out (sent to the "little black room") for 10 minutes.
Scenarios
- The post-flooding scenario described above.
- Anti-abuse protection for advertising traffic.
- Circuit breaking when interface requests keep failing.
The solution
For these "malicious" requests, what we do is lock the offending user out in the little black room. In a larger system architecture this would have to be done at the gateway layer; we do it in the business layer here because our business is not that big. Teams that can are welcome to move this to the gateway, so that bad requests never penetrate through to the business side, which at least reduces the internal network traffic in our machine room.
The flow chart
The process
- The interface receives a request, and the server derives a unique id for the user (user ID, phone number, etc.).
- Check whether the user is locked; if so, return an error code.
- If the user is not locked, record the request by incrementing its count (this "increment" step has a pitfall; see below).
- Within the chosen time window, check whether the user's count exceeds our threshold. If not, return normally.
- If it does, lock the user, return an error, and let subsequent requests be rejected by the lock check.
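As a minimal sketch of these steps (an in-memory Python illustration; in production the state would live in Redis as shown below, and the class name, thresholds, and lock duration here are illustrative assumptions, not the original system's values):

```python
import time

class UserThrottle:
    """In-memory sketch of the flow above; production state would live in Redis."""

    def __init__(self, limit=10, window=600, lock_seconds=600):
        self.limit = limit              # max requests per window
        self.window = window            # window length in seconds
        self.lock_seconds = lock_seconds
        self.locked_until = {}          # user_id -> unlock timestamp
        self.requests = {}              # user_id -> request timestamps

    def allow(self, user_id, now=None):
        now = time.time() if now is None else now
        # Step 2: reject if the user is currently locked
        if self.locked_until.get(user_id, 0) > now:
            return False
        # Step 3: record this request
        times = self.requests.setdefault(user_id, [])
        times.append(now)
        # Step 4: keep and count only requests inside the window
        times[:] = [t for t in times if t > now - self.window]
        # Step 5: lock when the threshold is exceeded
        if len(times) > self.limit:
            self.locked_until[user_id] = now + self.lock_seconds
            return False
        return True
```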
The specific plan
Taking our scenario as the example, we use Redis for the distributed lock and the atomic counter.
Scheme 1: accumulate a count within a time window and compare it against the threshold
Many people arrive at this design, and at first glance there seems to be no big problem. The main flow:
- Use Redis for atomic counting: every incoming request does an INCR, and on the first increment we set an expiry on the key, turning it into a fixed time window.
```php
// Increment this user's request count by 1
$request_nums = Redis::incr('user:1:request:nums');
// On the first increment, set the window's expiry on the key
if ($request_nums == 1) {
    Redis::expire('user:1:request:nums', 300);
}
if ($request_nums > 10) {
    // Lock the user; the next request is rejected by the lock check
}
```
- Each request increments the counter first; within the window we compare the count against the threshold. If it exceeds the threshold we lock the user in the black room; otherwise we continue.
Problem: at first glance there is none. Within each fixed window the count is correct, and Redis gives us an atomic counter. But there is one flaw: a user can stay under the threshold in each of two adjacent windows, yet exceed it across the boundary between them. For example, with a threshold of 10 per window, 8 requests at the end of one window plus 8 at the start of the next are 16 requests in a short span, and neither window ever triggers the lock.
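The boundary flaw can be demonstrated with a small, self-contained Python comparison (the threshold, window length, and timestamps below are made-up illustrative numbers):

```python
WINDOW = 600   # window length in seconds (illustrative)
LIMIT = 10     # request threshold per window (illustrative)

def fixed_window_exceeded(timestamps):
    """Scheme 1: count per fixed window; boundary-straddling bursts are split."""
    counts = {}
    for t in timestamps:
        counts[t // WINDOW] = counts.get(t // WINDOW, 0) + 1
    return any(c > LIMIT for c in counts.values())

def sliding_window_exceeded(timestamps):
    """Check whether ANY 600-second span holds more than LIMIT requests."""
    timestamps = sorted(timestamps)
    for start in timestamps:
        if len([t for t in timestamps if start <= t < start + WINDOW]) > LIMIT:
            return True
    return False

# 8 requests just before the boundary at t=600, 8 just after:
burst = [595] * 8 + [605] * 8   # 16 requests within 10 seconds
```

The fixed-window counter never fires on `burst`, while a sliding check does.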
So is there a way to fix the inaccuracy caused by the window boundary as time moves forward?
The answer is yes. What I did next was use a Redis sorted set.
Scheme 2: write each request directly into a sorted set, with no fixed window boundaries
General process:
- Each request is added to the sorted set with the current millisecond timestamp as its score (milliseconds reduce collisions within the same second); each member represents one request.
- Remove all members older than 10 minutes from the set, then count what remains.
- Compare that count with our threshold: if it exceeds the threshold, lock the user; otherwise continue.
```php
// Record this request: score and member are the current millisecond timestamp
Redis::zadd('user:1:request:nums', 1561456435122, '1561456435122');
// Expire the whole key after the 10-minute window
Redis::expire('user:1:request:nums', 600);
// Drop everything older than 10 minutes from the set
Redis::zremrangebyscore('user:1:request:nums', 0, 1561456435122 - 600000);
// Count the requests left in the window
$request_nums = (int) Redis::zcard('user:1:request:nums');
if ($request_nums >= 10) {
    // Lock the user; the next request is rejected by the lock check
}
```
Because we record each request's timestamp rather than just a bare number, the counted window slides forward with time, and the count always reflects the most recent 10 minutes.
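The same sliding-window logic can be sketched in plain Python, mirroring the three Redis commands above (an in-memory illustration with assumed limits, not the production code):

```python
import time

class SlidingWindow:
    """In-memory analogue of the sorted-set scheme: each member/score is a
    millisecond timestamp. ZADD -> append, ZREMRANGEBYSCORE -> prune,
    ZCARD -> len. Limits below are illustrative."""

    def __init__(self, limit=10, window_ms=600_000):
        self.limit = limit
        self.window_ms = window_ms
        self.scores = []  # millisecond timestamps of recent requests

    def hit(self, now_ms=None):
        """Record one request; return True if the user should be locked."""
        now_ms = int(time.time() * 1000) if now_ms is None else now_ms
        self.scores.append(now_ms)                            # ZADD
        cutoff = now_ms - self.window_ms
        self.scores = [s for s in self.scores if s > cutoff]  # ZREMRANGEBYSCORE
        return len(self.scores) >= self.limit                 # ZCARD vs. limit
```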
Conclusion
- At first I was stuck on the problem with the first scheme. While discussing it I kept noticing that as time moves the count should change with it, but in the first scheme the request count never does: the time window has been frozen into a single counter value.
- The overall design uses a Redis sorted set; better schemes are of course welcome. The read/write pressure this puts on Redis is considerable, but as temporary data storage, Redis fits this scenario well.
- We recommend making all the Redis operations atomic. Multiple commands can be combined into one using a Lua script (officially supported by Redis), and Lua execution is also very fast.
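As a sketch of that atomicity point (assuming the sorted-set scheme above; the key name, arguments, and threshold are placeholders, not the original system's), the three commands plus the count check could be combined into a single Lua script executed with EVAL:

```lua
-- KEYS[1] = sorted-set key, ARGV[1] = now in ms,
-- ARGV[2] = window in ms, ARGV[3] = threshold
redis.call('ZADD', KEYS[1], ARGV[1], ARGV[1])
redis.call('ZREMRANGEBYSCORE', KEYS[1], 0, ARGV[1] - ARGV[2])
redis.call('PEXPIRE', KEYS[1], ARGV[2])
local count = redis.call('ZCARD', KEYS[1])
if count >= tonumber(ARGV[3]) then
  return 1  -- over the limit: lock the user
end
return 0
```

Because Redis runs a script as a single atomic unit, no other request can interleave between the add, the prune, and the count.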
Thanks for reading!!