Preface

The basic usage of DRF's common features has already been covered, but throttling is a feature that is rarely used directly in real business scenarios. Today's topic is, on the one hand, to discuss the feature itself, and on the other hand, to look at our development process from a different perspective. I hope that while using a DRF feature, you also come to understand the implementation behind it.

Main text

Let's start with the concept of throttling. The first place most of us meet it is on the front end. In a real business scenario, you don't want to call the back-end interface every time a character is typed into a search box; you want to call it only after a pause. This reduces the pressure of both front-end requests and rendering and back-end interface access. A front-end throttling function looks like this:

// Front-end throttling example
function throttle(fn, delay) {
    var timer;
    return function () {
        var _this = this;
        var args = arguments;
        if (timer) {
            return;
        }
        timer = setTimeout(function () {
            fn.apply(_this, args);
            timer = null;
        }, delay)
    }
}

Back-end throttling is similar in purpose to the front end's, but the implementation is different. Let's look at throttling in DRF.

1. Throttling in DRF

  • Project configuration
# demo/settings.py

REST_FRAMEWORK = {
    # ...
    'DEFAULT_THROTTLE_CLASSES': (
        'rest_framework.throttling.AnonRateThrottle',
        'rest_framework.throttling.UserRateThrottle',
        'rest_framework.throttling.ScopedRateThrottle',
    ),
    'DEFAULT_THROTTLE_RATES': {
        'anon': '10/day',
        'user': '2/day',
    },
}

# article/views.py

# ViewSet-based throttling
class ArticleViewSet(viewsets.ModelViewSet, ExceptionMixin):
    """API endpoint that allows users to view or edit articles."""
    queryset = Article.objects.all()
    # Use the default user throttle
    throttle_classes = (UserRateThrottle,)
    serializer_class = ArticleSerializer

# View-based throttling (applied to a function-based view)
@throttle_classes([UserRateThrottle])
  • Using the demonstration
$ curl -H 'Accept: application/json; indent=4' -u admin:admin http://127.0.0.1:8000/api/article/1/
{
    "detail": "Request was throttled. Expected available in 86398 seconds."
}

Because the configured user can only make two requests a day, the third request is rejected with a 429 Too Many Requests exception, and the error message also tells us the next available time is 86,398 seconds away.

2. Advanced throttling configuration

The throttling configuration shown above is applied per user. For example, if I switch to a different user and continue accessing the interface, I get another two chances.

$ curl -H 'Accept: application/json; indent=4' -u root:root http://127.0.0.1:8000/api/article/1/
{
    "id": 1,
    "creator": "admin",
    "tag": "Modern Poetry",
    "title": "If",
    "content": "I'll never think of you again in my whole life except on some nights when tears are wet and if you want to."
}

The three throttle classes are described below:

  • AnonRateThrottle — limits how often anonymous (unauthenticated) users can access an interface
  • UserRateThrottle — limits how often authenticated users can access an interface
  • ScopedRateThrottle — limits access to a group of views that share the same scope

The three classes therefore suit different business scenarios. Pick the one that matches yours and configure the rate for the corresponding scope to achieve the expected effect.
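As a sketch of the ScopedRateThrottle case (the scope names `articles` and `uploads` and both view classes are made up for illustration), views that declare the same `throttle_scope` share one rate bucket:

```python
# demo/settings.py -- hypothetical scope names, for illustration only
REST_FRAMEWORK = {
    'DEFAULT_THROTTLE_CLASSES': (
        'rest_framework.throttling.ScopedRateThrottle',
    ),
    'DEFAULT_THROTTLE_RATES': {
        'articles': '100/day',
        'uploads': '20/day',
    },
}

# article/views.py
from rest_framework import viewsets


class ArticleViewSet(viewsets.ModelViewSet):
    # All requests to views with this scope count against one 100/day limit
    throttle_scope = 'articles'


class UploadViewSet(viewsets.ModelViewSet):
    throttle_scope = 'uploads'
```

This is a configuration sketch, not a runnable project: the rate values and scope names are placeholders to show the shape of the setting.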

3. Throttling analysis

Imagine you were coding this requirement yourself.

In fact, the feature is not difficult: the core parameters are the time window, the count, and the scope. The following demonstrates limiting the number of times a function can be called.

from functools import wraps

TOTAL_RATE = 2

FUNC_SCOPE = ['test', 'test1']


def rate_count(func):
    func_num = {
        # Function names must not repeat
        func.__name__: 0
    }

    @wraps(func)
    def wrapper():
        if func.__name__ in FUNC_SCOPE:
            if func_num[func.__name__] >= TOTAL_RATE:
                raise Exception(f"{func.__name__} calls exceed the set limit")
            result = func()
            func_num[func.__name__] += 1
            print(f"{func.__name__} call count: {func_num[func.__name__]}")
            return result
        else:
            # Functions outside FUNC_SCOPE are not limited
            return func()

    return wrapper


@rate_count
def test1():
    pass


@rate_count
def test2():
    print("test2")
    pass


if __name__ == "__main__":
    try:
        test2()
        test2()
        test1()
        test1()
        test1()
    except Exception as e:
        print(e)
    test2()
    test2()
    
""" test2 test2 test1 is called: 1 test1 is called: 2 test1 is called more than the set number of test2 test2 """ "

Here we count how many times a function is called and define which functions the limit applies to. When the call count exceeds the threshold, an exception is thrown. The only missing piece is the time window.
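Adding that missing time window turns the counter into a sliding-window limiter, which is essentially what DRF does with its cached history. Below is a minimal sketch (names like `rate_limit` and `fetch` are my own, not DRF's): keep a deque of call timestamps, drop the ones older than the window, and reject the call once the remaining count reaches the limit.

```python
import time
from collections import deque
from functools import wraps


def rate_limit(num_requests, duration):
    """Allow at most `num_requests` calls per `duration` seconds (sliding window)."""
    def decorator(func):
        history = deque()  # timestamps of recent calls, newest on the left

        @wraps(func)
        def wrapper(*args, **kwargs):
            now = time.monotonic()
            # Drop timestamps that have fallen outside the window
            while history and history[-1] <= now - duration:
                history.pop()
            if len(history) >= num_requests:
                raise Exception(f"{func.__name__} was throttled")
            history.appendleft(now)
            return func(*args, **kwargs)

        return wrapper
    return decorator


@rate_limit(num_requests=2, duration=0.5)
def fetch():
    return "ok"
```

After two calls within half a second the third raises, and once the window slides past, calls succeed again; the `while` loop mirrors the history-pruning step we are about to see in the DRF source.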

4. Source analysis

We just analyzed how to limit the number of times a function is called; for HTTP requests it is a little more involved. Here is how DRF implements it:

class SimpleRateThrottle(BaseThrottle):
   
    #...
    
    def allow_request(self, request, view):
        """
        Implement the check to see if the request should be throttled.

        On success calls `throttle_success`.
        On failure calls `throttle_failure`.
        """
        if self.rate is None:
            return True

        self.key = self.get_cache_key(request, view)
        if self.key is None:
            return True

        self.history = self.cache.get(self.key, [])
        self.now = self.timer()

        # Drop cached requests that fall outside the time window
        while self.history and self.history[-1] <= self.now - self.duration:
            self.history.pop()
        # The core logic: check the number of requests within the window
        if len(self.history) >= self.num_requests:
            return self.throttle_failure()
        return self.throttle_success()
    
    #...
    
class UserRateThrottle(SimpleRateThrottle):
    """
    Limits the rate of API calls that may be made by a given user.

    The user id will be used as a unique cache key if the user is
    authenticated. For anonymous requests, the IP address of the request
    will be used.
    """
    scope = 'user'

    def get_cache_key(self, request, view):
        if request.user.is_authenticated:
            ident = request.user.pk
        else:
            # If the user is not authenticated, the key is the same as AnonRateThrottle's
            ident = self.get_ident(request)
        # Build the cache key based on the set scope
        return self.cache_format % {
            'scope': self.scope,
            'ident': ident
        }


To sum up:

  • The core logic fetches each user's call history from the cache and checks, based on the scope and time window, whether the configured threshold is exceeded.
  • Different throttle classes design their cache keys differently; for anonymous requests the default key is based on the request's REMOTE_ADDR.

5. Other notes

  • Since this implementation relies on the cache, note that with multi-instance deployments you need a shared cache service (Django's default cache is an in-memory, per-process implementation).

  • A restart of the cache service may clear the existing counts. If the business logic has strict requirements, you need to implement your own throttling logic.

  • If you use a custom user table, you need to override the get_cache_key logic in the throttle class.

  • If you need to statistically analyze how users are being throttled, you also need to redesign the throttling logic.

  • Throttling should be used with caution in a production environment, because it restricts users' use of the product and is not user-friendly.
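For the custom-user-table case, a minimal sketch of overriding get_cache_key might look like this (the class name and the choice of `username` as the unique field are assumptions, not part of DRF):

```python
from rest_framework.throttling import UserRateThrottle


class CustomUserRateThrottle(UserRateThrottle):
    """Key the throttle off a custom unique field instead of the pk."""
    scope = 'user'

    def get_cache_key(self, request, view):
        if request.user and request.user.is_authenticated:
            # Assumes the custom user model has a unique `username` field
            ident = request.user.username
        else:
            # Fall back to the IP-based ident, same as AnonRateThrottle
            ident = self.get_ident(request)
        return self.cache_format % {'scope': self.scope, 'ident': ident}
```

You would then reference this class in a view's throttle_classes (or in DEFAULT_THROTTLE_CLASSES) in place of UserRateThrottle.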

References

  • DRF throttling documentation
  • Django cache documentation