The more you know, the more you don’t know
One problem often leads to a series of problems; that is how the blind spots in our knowledge quietly reveal themselves 🤣. Rut ran into more than one question while writing his own rate-limiting notes:
- Which rate-limiting algorithm is more appropriate
- How to implement rate limiting with annotations
- How to limit each method individually
- How to convert a long string into a short one (base 64 or base 62?)
- What is LRU, and how can it be implemented with simple data structures
Practice
What is rate limiting
Limit the number of requests the server accepts so that only some of them actually reach it; the rest can be deferred or rejected. This keeps every request from hitting the database and crushing the DB. Think of a scene from daily life, especially in Beijing, Shanghai, Guangzhou, Shenzhen or the new first-tier cities: at Fengqi Road station on Hangzhou Metro Line 1, when passenger flow hits a certain peak, the police officer 👮‍♀️ may not let you into the subway, forcing you to take some other means of transport… All tears.
Which rate-limiting algorithm is more appropriate
There are plenty of explanations of rate-limiting algorithms on the Internet: the leaky bucket algorithm, the token bucket algorithm, and so on; a quick search will turn them up. Rut implements the simplest one here: the counter algorithm.
Counter algorithm
- Divide one second into 10 phases of 100ms each.
- Every 100ms, record the cumulative number of interface calls.
- Of course, as time goes by there will be more and more phases; at that point delete the earliest ones, keeping only the latest 10, i.e. 1s.
- The last count minus the first count is the number of times the interface was called within that 1s (see the sketch after this list).
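A standalone sketch of this counter idea, separate from the Spring code later in the article; the class and method names below are made up for illustration, and thread safety is ignored, as it is in the rest of the article:

```java
import java.util.LinkedList;

/** Standalone sketch of the counter idea; names are illustrative only. */
public class SlidingWindowCounter {

    private final LinkedList<Long> samples = new LinkedList<>();
    private long totalCalls = 0;

    /** Called on every request. */
    public void record() {
        totalCalls++;
    }

    /** Called every 100ms by some scheduler: snapshot the cumulative count. */
    public void sample() {
        samples.addLast(totalCalls);
        if (samples.size() > 10) {   // keep only the latest 10 samples, i.e. 1s
            samples.removeFirst();
        }
    }

    /** Last snapshot minus first snapshot = calls within roughly the last second. */
    public long callsInLastSecond() {
        return samples.size() < 2 ? 0 : samples.peekLast() - samples.peekFirst();
    }
}
```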
How to implement rate limiting with annotations
When Nginx is used to limit traffic, Nginx acts as the proxy layer that intercepts and processes requests. In Spring, the proxy layer is AOP.
AOP
In web services, many scenarios can be implemented with AOP, for example:
- Printing logs: recording execution time, class, method and parameters (see the sketch after this list)
- Using reflection to set default values for paging parameters such as pageNum and pageRow
- Game scenarios: checking whether the game is over, without repeating the check in every method
- Encryption/decryption, signature verification and so on
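As a small illustration of the logging scenario only; the pointcut expression, package and class names below are assumptions, not from this article:

```java
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.springframework.stereotype.Component;

import java.util.Arrays;

/** Hypothetical logging aspect: records method, parameters and elapsed time. */
@Aspect
@Component
public class LogAspect {

    // The package in this pointcut is an assumption made for the example.
    @Around("execution(* com.example.web..*Controller.*(..))")
    public Object logAround(ProceedingJoinPoint point) throws Throwable {
        long start = System.currentTimeMillis();
        try {
            return point.proceed();
        } finally {
            System.out.println(point.getSignature() + " args="
                    + Arrays.toString(point.getArgs())
                    + " cost=" + (System.currentTimeMillis() - start) + "ms");
        }
    }
}
```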
Scheduled tasks
In the counter algorithm we mentioned that the number of interface calls needs to be recorded and saved every 100ms; this is where scheduled tasks come in handy. There are many ways to implement a scheduled task, such as the ScheduledExecutorService thread pool, and of course Spring's @Scheduled works too. Next, what data structure holds the call counts? A LinkedList. Finally, we need to rate-limit multiple methods, so how do we tell them apart? Each method has a unique value: package + class + methodName. So we use this unique value as the map key and a LinkedList as the map value. Code below:
```java
/** Count of calls for each key **/
private Map<String, Long> countMap = new ConcurrentHashMap<>();
/** Call-count samples (a LinkedList) for each key **/
private static Map<String, LinkedList<Long>> calListMap = new ConcurrentHashMap<>();

/** Sample once per second **/
@Scheduled(cron = "*/1 * * * * ?")
private void timeGet() {
    countMap.forEach((k, v) -> {
        LinkedList<Long> calList = calListMap.get(k);
        if (calList == null) {
            calList = new LinkedList<>();
        }
        // Append the current call count of each method to its linked list
        calList.addLast(v);
        calListMap.put(k, calList);
        // Keep only the latest 10 samples
        if (calList.size() > 10) { calList.removeFirst(); }
    });
}
```
AOP check
Custom annotation
```java
import java.lang.annotation.*;

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@Documented
public @interface CalLimitAnno {
    String value() default "";
    String methodName() default "";
    long count() default 100;
}
```
Check before calling the interface
```java
@Around(value = "@annotation(around)")
public Object initBean(ProceedingJoinPoint point, CalLimitAnno around) throws Throwable {
    /** Get class name and method name **/
    MethodSignature signature = (MethodSignature) point.getSignature();
    Method method = signature.getMethod();
    String[] classNameArray = method.getDeclaringClass().getName().split("\\.");
    String methodName = classNameArray[classNameArray.length - 1] + "." + method.getName();
    String classZ = signature.getDeclaringTypeName();
    String countMapKey = classZ + "|" + methodName;
    LinkedList<Long> calList = calListMap.get(countMapKey);
    if (calList != null) {
        /** Check whether the call count has exceeded the value set on the annotation **/
        if ((calList.peekLast() - calList.peekFirst()) > around.count()) {
            throw new RuntimeException("The flow is restricted.");
        }
    }
    countMap.putIfAbsent(countMapKey, 0L);
    countMap.put(countMapKey, countMap.get(countMapKey) + 1);
    Object object = point.proceed();
    return object;
}
```
Usage
Since the scheduled task should not run too frequently, it is executed once per second here; with 10 samples kept, the limit configured on the annotation effectively applies to a 10s window, which makes the granularity coarser.
```java
@CalLimitAnno(count = 1000)
public void testPageAnno() {
    System.out.println("Successful execution");
}
```
Map key optimization
Since we used package + className + methodName as the unique key above, the key becomes extremely long. Can we find a way to shorten it? Some readers will think of compression, but that simply isn't realistic; see the link for more details. This won't work, that won't work either, how is a person supposed to live 🥺. Think of the SMS messages you receive: sometimes they contain a short link. These short links are actually generated by the sender calling a service to obtain a unique auto-incrementing ID and then converting that ID. For example, when the ID increments to 100,000, converting 100,000 from decimal to base 62 gives q0U. That's similar to the link in the text message, isn't it?
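A minimal sketch of that decimal-to-base-62 conversion; the digit alphabet 0-9a-zA-Z is an assumption, and with it 100,000 does encode to q0U:

```java
/** Minimal base-62 encoder; the alphabet order is an assumption. */
public class Base62 {

    private static final char[] ALPHABET =
            "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ".toCharArray();

    public static String encode(long id) {
        if (id == 0) {
            return "0";
        }
        StringBuilder sb = new StringBuilder();
        while (id > 0) {
            sb.append(ALPHABET[(int) (id % 62)]);  // take the lowest base-62 digit
            id /= 62;
        }
        return sb.reverse().toString();            // digits were produced low to high
    }

    public static void main(String[] args) {
        System.out.println(Base62.encode(100000L)); // prints q0U
    }
}
```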
Map persistence
Since the ID auto-increments, the same long string would be turned into a different short string each time the service is called. In some business scenarios the calls may be frequent, so a KV store mapping the long string to its short string is required. Otherwise there is no need to store anything; doing more only means more chances to go wrong ~
KV storage optimization
If we need KV storage, the options that come to mind are JVM memory or Redis. This mapping usually doesn't need to be stored for long; it's typically only queried during a hot event. With Redis, you can set an expiration time for eviction. With JVM memory, what we need to take into account is LRU, Least Recently Used:
- A key that has just been used must be moved to the head of the queue.
- The least recently used entries must be removed once the queue exceeds its length limit. So what data structure do we need to implement such a queue?
GET
- If the key does not exist, return null.
- If the key exists, return the value, delete the corresponding node, and move the key to the head of the queue.
In such a scenario it is obvious that array-backed collections such as ArrayList don't apply (don't say you don't see why). That leaves linked lists like LinkedList, but a LinkedList lookup has to traverse the list. What if we keep a Map alongside the LinkedList? Almost, but there is one more requirement: each node must also hold a reference to its previous node. That way, while looking up the value, we obtain the previous node and can delete the corresponding node from the linked list directly.
PUT
- If the key does not exist, put it at the head of the queue.
- If the key exists, delete the old node and place the new one at the head of the queue.
After the groundwork laid in GET, this needs no further explanation; a rough hand-rolled sketch of both operations follows.
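A rough sketch of this idea, assuming a HashMap plus a doubly linked list whose nodes keep a reference to their previous node; the class and method names here are illustrative, not from the article, and thread safety is again ignored:

```java
import java.util.HashMap;
import java.util.Map;

/** Hand-rolled LRU sketch: HashMap for lookup, doubly linked list for ordering. */
public class SimpleLruCache<K, V> {

    private static class Node<K, V> {
        K key; V value; Node<K, V> prev, next;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<K, Node<K, V>> map = new HashMap<>();
    private Node<K, V> head, tail;          // head = most recently used

    public SimpleLruCache(int capacity) { this.capacity = capacity; }

    public V get(K key) {
        Node<K, V> node = map.get(key);
        if (node == null) { return null; }  // key does not exist
        unlink(node);                       // delete the node ...
        addFirst(node);                     // ... and move it to the head
        return node.value;
    }

    public void put(K key, V value) {
        Node<K, V> old = map.get(key);
        if (old != null) { unlink(old); }   // key exists: delete the old node
        Node<K, V> node = new Node<>(key, value);
        addFirst(node);                     // place it at the head of the queue
        map.put(key, node);
        if (map.size() > capacity) {        // evict the least recently used entry
            map.remove(tail.key);
            unlink(tail);
        }
    }

    private void addFirst(Node<K, V> node) {
        node.prev = null;
        node.next = head;
        if (head != null) { head.prev = node; }
        head = node;
        if (tail == null) { tail = node; }
    }

    private void unlink(Node<K, V> node) {
        if (node.prev != null) { node.prev.next = node.next; } else { head = node.next; }
        if (node.next != null) { node.next.prev = node.prev; } else { tail = node.prev; }
    }
}
```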
End result: LinkedHashMap
Rut won't go through the specifics of LinkedHashMap here; Baidu it, you know. A minimal sketch of using it as an LRU cache follows.
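A minimal sketch of an LRU cache built on LinkedHashMap, assuming access-order mode plus an overridden removeEldestEntry; the class name and the capacity are arbitrary examples:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** LRU cache on top of LinkedHashMap: accessOrder = true moves accessed entries
 *  to the tail, and removeEldestEntry evicts the least recently used one. */
public class LruMap<K, V> extends LinkedHashMap<K, V> {

    private final int capacity;

    public LruMap(int capacity) {
        super(16, 0.75f, true);   // true = access order instead of insertion order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```

For example, `new LruMap<String, String>(1000)` would keep at most the 1000 most recently used long-key-to-short-key mappings.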
In closing
Note that this article does not deal with the thread-safety issues caused by concurrency; treat it only as a reference ~~~ After all this talk, some of you may still be confused; please leave a message below. Can't help it, my wording is poor 😂.