1 Rate limiting algorithms and their applications
Rate limiting is a protective measure for a system: it restricts the frequency of incoming requests (how many requests are processed per second). In general, when traffic exceeds the system's bottleneck, the excess is discarded to keep the system available. In other words, either a request is not let in at all, or, once admitted, it is guaranteed service.
1.1 Counter
1. Overview
The counter performs a simple counting operation and resets automatically after a fixed interval.
2. Implementation
```java
import java.util.concurrent.*;

public class Counter {
    public static void main(String[] args) {
        // Counter: allow 3 requests per counting cycle
        final Semaphore semaphore = new Semaphore(3);
        // Timer: periodically resets the count
        ScheduledExecutorService service = Executors.newScheduledThreadPool(1);
        service.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                semaphore.release(3);
            }
        }, 3000, 3000, TimeUnit.MILLISECONDS);
        while (true) {
            try {
                // Count one request
                semaphore.acquire();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            // The request is allowed: print "ok"
            System.out.println("ok");
        }
    }
}
```
3. Result analysis
The output appears in bursts of three "ok" lines; further requests block until the next counting cycle.
4. Pros and cons
Simple to implement, but the control is too coarse. If the limit is 3 per second and all 3 are used up in the first 100 ms, the remaining 900 ms is wasted in blocking.
5. Application
Few systems use a plain counter for rate limiting because its logic is not flexible enough. The most common scenario is freezing web login attempts for a period after repeated password failures. If a site used a counter for general request limiting, a malicious attacker could consume the whole count in the first 100 ms, blocking all subsequent legitimate requests, and the service could easily be brought down.
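The login-freeze scenario can be sketched in a few lines. This is our own hypothetical `LoginFreeze` class, not from the original code; the periodic window reset is left to a scheduler, like the semaphore reset in the Counter demo:

```java
import java.util.HashMap;
import java.util.Map;

// Per-user failure counts; after `max` failures the user is frozen
// until a periodic job calls reset() at the end of the counting window.
class LoginFreeze {
    private final Map<String, Integer> failures = new HashMap<>();
    private final int max;

    LoginFreeze(int max) {
        this.max = max;
    }

    boolean allowed(String user) {
        return failures.getOrDefault(user, 0) < max;
    }

    void recordFailure(String user) {
        failures.merge(user, 1, Integer::sum);
    }

    // In practice this would be driven by a scheduler, as in the Counter demo
    void reset() {
        failures.clear();
    }
}

public class LoginFreezeDemo {
    public static void main(String[] args) {
        LoginFreeze freeze = new LoginFreeze(3);
        for (int i = 0; i < 3; i++) freeze.recordFailure("alice");
        System.out.println(freeze.allowed("alice")); // false: frozen
        freeze.reset();
        System.out.println(freeze.allowed("alice")); // true: window elapsed
    }
}
```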
1.2 Leaky bucket algorithm
1. Overview
The leaky bucket algorithm buffers requests in a bucket and processes them at a constant rate; requests that exceed the bucket's capacity are discarded. It is mainly used to protect internal services, ensuring requests are handled at a steady rhythm, but it cannot adapt its throughput to traffic fluctuations. A real-world analogy is a service hall of limited capacity with a fixed number of open service windows.
2. Implementation
```java
import java.util.concurrent.*;

public class Barrel {
    public static void main(String[] args) {
        // Bucket: a blocking queue with capacity 3
        final LinkedBlockingQueue<Integer> que = new LinkedBlockingQueue<>(3);
        // Timer: acts as the service window, handling one request every 2s
        ScheduledExecutorService service = Executors.newScheduledThreadPool(1);
        service.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                Integer v = que.poll();
                System.out.println("handle: " + v);
            }
        }, 2000, 2000, TimeUnit.MILLISECONDS);
        // Requests arrive continuously
        int i = 0;
        while (true) {
            i++;
            try {
                System.out.println("put: " + i);
                // que.put(i) would block indefinitely once the bucket is full;
                // offer with a timeout discards the request after waiting 1s
                que.offer(i, 1000, TimeUnit.MILLISECONDS);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
}
```
3. Result analysis
4. Pros and cons
It effectively blocks excess external requests and protects internal services from overload. Internal services execute at a uniform rate, so the system cannot absorb traffic peaks or handle unexpected urgent tasks flexibly. Requests that time out and overflow are simply discarded; in practice you might need an auxiliary cache queue to persist them for a while.
5. Application
Traffic limiting in Nginx is a typical application of the leaky bucket algorithm. The configuration example is as follows:
```nginx
http {
    # $binary_remote_addr: use the client address as the key, limiting per client IP
    # zone=one:10m: allocate a 10 MB shared memory zone named "one"
    #   to store access-frequency state
    # rate=1r/s: allow at most one request per second
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
    server {
        location /limited/ {
            # zone=one: use the zone defined above
            # burst=5: buffer up to 5 requests that exceed the rate,
            #   similar to the queue length in the code
            # nodelay: if set, return 503 once the burst buffer is full;
            #   if not set, excess requests queue up
            limit_req zone=one burst=5 nodelay;
        }
    }
}
```
1.3 Token bucket
1. Overview
Tokens are added to a bucket at a constant rate, up to the bucket's capacity; each request must take a token before it is processed. This caps the long-term rate while still allowing short bursts up to the bucket size.
2. Implementation
```java
import java.util.concurrent.*;

public class Token {
    public static void main(String[] args) throws InterruptedException {
        // Token bucket with capacity 3
        final Semaphore semaphore = new Semaphore(3);
        // Timer: issues one token per second, at a uniform rate, up to capacity
        ScheduledExecutorService service = Executors.newScheduledThreadPool(1);
        service.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                if (semaphore.availablePermits() < 3) {
                    semaphore.release();
                }
                // System.out.println("tokens: " + semaphore.availablePermits());
            }
        }, 1000, 1000, TimeUnit.MILLISECONDS);
        Thread.sleep(5);

        // A burst of 5 requests: the 3 stored tokens are consumed immediately,
        // the remaining requests wait for tokens to be replenished
        for (int i = 0; i < 5; i++) {
            semaphore.acquire();
            System.out.println("request " + i);
        }

        // Paced requests, slower than the token rate: never blocked
        // for (int i = 0; i < 3; i++) {
        //     Thread.sleep(1000);
        //     semaphore.acquire();
        //     System.out.println("request " + i);
        //     Thread.sleep(1000);
        // }

        // Another burst of 5 requests
        for (int i = 0; i < 5; i++) {
            semaphore.acquire();
            System.out.println("request " + i);
        }

        // Watch the token count accumulate while no requests arrive
        for (int i = 0; i < 5; i++) {
            Thread.sleep(2000);
            System.out.println("tokens remaining: " + semaphore.availablePermits());
        }
    }
}
```
3. Result analysis
4. Application
In Spring Cloud, the Gateway can be configured with a token bucket for rate limiting. Example:
```yaml
cloud:
  gateway:
    routes:
      - id: limit_route
        uri: http://localhost:8080/test
        filters:
          - name: RequestRateLimiter
            args:
              # KeyResolver bean that resolves the limiting key (per client IP here)
              key-resolver: '#{@ipResolver}'
              # Average tokens added per second,
              # equivalent to the replenish frequency in the code
              redis-rate-limiter.replenishRate: 1
              # Total bucket capacity,
              # equivalent to the semaphore capacity in the code
              redis-rate-limiter.burstCapacity: 3
```
1.4 Sliding Window
1. Overview
The sliding window can be understood as a refined counter. A plain counter coarsely limits the number of requests within, say, one minute, while sliding-window limiting divides that minute into multiple segments, requiring not only that the total over the whole minute stays under the limit, but also that each segment stays under its own limit. It splits the original counting period into several finer slices, giving more precise control.
2. Implementation
```java
package com.yungtay.mpu.util;

import java.util.LinkedList;
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class Window {
    // Upper limit across the whole window
    final int totalMax = 5;
    // Upper limit within a single slice
    final int sliceMax = 5;
    // Number of slices
    final int slice = 3;
    // Slice keys, in order
    final LinkedList<Long> linkedList = new LinkedList<>();
    // Counter per slice; TreeMap keeps the slices ordered by key
    final Map<Long, AtomicInteger> map = new TreeMap<>();
    // Heartbeat: beats once per second, moving the window forward one step.
    // In a real service you may need to control the sliding timing yourself.
    final ScheduledExecutorService service = Executors.newScheduledThreadPool(1);

    // Current second, used as the slice key
    private Long getKey() {
        return System.currentTimeMillis() / 1000;
    }

    public Window() {
        // Initialize the slices
        Long key = getKey();
        for (int i = 0; i < slice; i++) {
            linkedList.addFirst(key - i);
            map.put(key - i, new AtomicInteger(0));
        }
        // Start the heartbeat: the window slides forward automatically,
        // one second per step
        service.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                Long key = getKey();
                // Append the newest slice
                linkedList.addLast(key);
                map.put(key, new AtomicInteger());
                // Drop the oldest slice
                map.remove(linkedList.getFirst());
                linkedList.removeFirst();
                System.out.println("step:" + key + ":" + map);
            }
        }, 1000, 1000, TimeUnit.MILLISECONDS);
    }

    // Check the current slice's counter
    public boolean checkCurrentSlice() {
        Long key = getKey();
        AtomicInteger integer = map.get(key);
        if (integer != null) {
            return integer.get() < sliceMax;
        }
        // Allow by default if the slice is not present yet
        return true;
    }

    // Check the total across all slices
    public boolean checkAllCount() {
        return map.values().stream().mapToInt(AtomicInteger::get).sum() < totalMax;
    }

    // A request arrives...
    public void req() {
        Long key = getKey();
        // Wait until the window has slid forward to cover the current second
        while (linkedList.getLast() < key) {
            try {
                Thread.sleep(200);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
        // If neither limit is reached, accept and increment the slice counter;
        // if either limit is hit, reject the request (here we reject outright)
        if (checkCurrentSlice() && checkAllCount()) {
            map.get(key).incrementAndGet();
            System.out.println(key + "=ok:" + map);
        } else {
            System.out.println(key + "=reject:" + map);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Window window = new Window();
        // Simulate 10 scattered requests, 200 ms apart
        for (int i = 0; i < 10; i++) {
            Thread.sleep(200);
            window.req();
        }
        Thread.sleep(3000);
        // Simulate a burst: the per-slice counter hits its limit and throttles
        System.out.println("---------------------------");
        for (int i = 0; i < 10; i++) {
            window.req();
        }
    }
}
```
3. Result analysis
4. Application
The sliding window algorithm is used in TCP packet transmission. In real web scenarios it refines flow control and solves the problem of the counter model's overly coarse control.
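The segmented window above keeps one counter per slice. A common variant keeps a log of accepted timestamps instead; below is a minimal deterministic sketch of that idea (class and method names are ours, and timestamps are passed in explicitly rather than read from the clock, so the behavior is easy to verify):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A sliding-window limiter over explicit timestamps: keeps the times of
// accepted requests and admits a new one only if fewer than `limit`
// fall within the last `windowMillis`.
class SlidingWindowCheck {
    private final Deque<Long> accepted = new ArrayDeque<>();
    private final int limit;
    private final long windowMillis;

    SlidingWindowCheck(int limit, long windowMillis) {
        this.limit = limit;
        this.windowMillis = windowMillis;
    }

    boolean tryAcquire(long nowMillis) {
        // Evict timestamps that have slid out of the window
        while (!accepted.isEmpty() && accepted.peekFirst() <= nowMillis - windowMillis) {
            accepted.pollFirst();
        }
        if (accepted.size() < limit) {
            accepted.addLast(nowMillis);
            return true;
        }
        return false;
    }
}

public class SlidingDemo {
    public static void main(String[] args) {
        SlidingWindowCheck w = new SlidingWindowCheck(3, 1000);
        System.out.println(w.tryAcquire(0));    // true
        System.out.println(w.tryAcquire(100));  // true
        System.out.println(w.tryAcquire(200));  // true
        System.out.println(w.tryAcquire(300));  // false: 3 already in the last second
        System.out.println(w.tryAcquire(1100)); // true: the request at t=0 slid out
    }
}
```

The trade-off versus the segmented form: the log is exact but stores one entry per accepted request, while slices store only a few counters.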
2 Timing algorithms and their applications
Every system or project inevitably has tasks that must run automatically, and there are various means to implement them. For example, the operating system's crontab, the Spring framework's Quartz, and Java's Timer and ScheduledThreadPool are all typical mechanisms for scheduled tasks.
2.1 Minimum heap
Timer is the most typical Java timer, based on a priority queue with min-heap ordering. It maintains an internal priority queue of scheduled tasks, sorted as a min-heap, so the top always holds the task with the earliest (i.e., nearest) execution time. When the schedule method is called, the new task is added to the queue and the heap is re-ordered. Meanwhile, an internal thread continuously scans the queue, takes the top element, and executes it when due; this is how tasks get scheduled.
2. Case
An introduction to the priority queue + min-heap principle:
```java
import java.util.TimerTask;

class Task extends TimerTask {
    @Override
    public void run() {
        System.out.println("running...");
    }
}
```
```java
import java.util.Timer;

public class TimerDemo {
    public static void main(String[] args) {
        Timer t = new Timer();
        // Run after 1 second, then repeat every 2 seconds
        t.schedule(new Task(), 1000, 2000);
    }
}
```
3. Source analysis
When t.schedule is called, the new task is added to the queue:
```java
void add(TimerTask task) {
    // Grow backing store if necessary
    if (size + 1 == queue.length)
        queue = Arrays.copyOf(queue, 2 * queue.length);

    queue[++size] = task;
    fixUp(size);
}
```
add maintains capacity, expanding the backing array when it runs out. The new task is appended to the end of the queue, which triggers heap re-ordering so that the top element always keeps the minimum execution time.
```java
private void fixUp(int k) {
    while (k > 1) {
        // j is the parent node of k in the queue
        int j = k >> 1;
        if (queue[j].nextExecutionTime <= queue[k].nextExecutionTime)
            break;
        // Swap child and parent
        TimerTask tmp = queue[j];
        queue[j] = queue[k];
        queue[k] = tmp;
        // After the swap, keep following the newly added node upward,
        // looping until the heap property is restored
        k = j;
    }
}
```
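For intuition, the same sift-up logic on a plain array of execution times; this is a standalone demo of ours, not JDK source:

```java
import java.util.Arrays;

public class FixUpDemo {
    // 1-based min-heap in an array, like Timer's TaskQueue (index 0 unused)
    static void fixUp(long[] queue, int k) {
        while (k > 1) {
            int j = k >> 1;                  // parent index
            if (queue[j] <= queue[k]) break; // heap property already holds
            long tmp = queue[j];             // swap child and parent
            queue[j] = queue[k];
            queue[k] = tmp;
            k = j;
        }
    }

    public static void main(String[] args) {
        // Heap of next-execution times; index 0 unused
        long[] queue = {0, 10, 30, 20, 40, 5};
        fixUp(queue, 5);     // sift the appended task (due at t=5) up to the top
        System.out.println(Arrays.toString(queue)); // [0, 5, 10, 20, 40, 30]
    }
}
```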
Thread scheduling happens in run, which mainly calls the internal mainLoop() method, a while loop:
```java
private void mainLoop() {
    while (true) {
        try {
            TimerTask task;
            boolean taskFired;
            synchronized (queue) {
                // ...
                // Queue nonempty; look at first evt and do the right thing
                long currentTime, executionTime;
                task = queue.getMin();
                synchronized (task.lock) {
                    // ...
                    currentTime = System.currentTimeMillis();
                    executionTime = task.nextExecutionTime;
                    if (taskFired = (executionTime <= currentTime)) {
                        if (task.period == 0) {
                            // Non-repeating: remove from the queue
                            queue.removeMin();
                            task.state = TimerTask.EXECUTED;
                        } else {
                            // Repeating task: two strategies.
                            // 1. Negative period: next time = current time + (-period)
                            // 2. Positive period: next time = this execution time + period
                            // The difference matters because Timer is single-threaded:
                            // with 1, the reference is "now", which depends on how long
                            // the task takes to run; with 2, executionTime is an
                            // absolute timestamp
                            queue.rescheduleMin(
                                task.period < 0 ? currentTime - task.period
                                                : executionTime + task.period);
                        }
                    }
                }
                if (!taskFired)
                    // Task hasn't yet fired; wait until it does
                    queue.wait(executionTime - currentTime);
            }
            if (taskFired)
                // Task fired; run it, holding no locks
                task.run();
        } catch (InterruptedException e) {
        }
    }
}
```
4. Application
This section used Timer to introduce the algorithm, but Timer is out of fashion; in practice ScheduledThreadPoolExecutor is recommended (internally it uses a DelayedWorkQueue with the same min-heap ordering). Timer is single-threaded: one failure or uncaught exception interrupts the whole task queue, which the thread pool avoids. Also, Timer only requires JDK 1.3+, while the thread pool requires JDK 1.5+.
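A minimal sketch of this recommendation (our own demo): with ScheduledThreadPoolExecutor, a task that throws only fails its own future, and later tasks still run, whereas one uncaught exception in a TimerTask kills the whole Timer:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(2);

        // This task throws, but only its own future fails; the pool keeps running
        pool.schedule((Runnable) () -> { throw new RuntimeException("boom"); },
                10, TimeUnit.MILLISECONDS);

        // A later task still executes normally
        ScheduledFuture<String> ok = pool.schedule(() -> "still alive",
                50, TimeUnit.MILLISECONDS);
        System.out.println(ok.get()); // prints "still alive"
        pool.shutdown();
    }
}
```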
2.2 Time wheel
1. Overview
The time wheel is a common scheduling algorithm, used in the scheduled-task machinery of various operating systems, in Linux crontab, in the Java communication framework Netty, and elsewhere. It was inspired by the clocks in our lives. The wheel is actually a circular array joined end to end, and the array size is the number of slots in which tasks can be placed. Taking a clock face as an example, a task is placed in the slot given by its execution time mod 12; the hour hand sweeps around the wheel, and the tasks in whichever slot it reaches are taken out and executed:
2. Implementation
```java
public class RoundTask {
    // Execute after this many seconds
    int delay;
    // Task id
    int index;

    public RoundTask(int index, int delay) {
        this.index = index;
        this.delay = delay;
    }

    void run() {
        System.out.println("task " + index + " start , delay = " + delay);
    }

    @Override
    public String toString() {
        return index + "=" + delay;
    }
}
```
Time wheel algorithm:
```java
import java.util.LinkedList;
import java.util.Random;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundDemo {
    // Slots in the small wheel (seconds)
    int size1 = 10;
    // Slots in the big wheel (tens of seconds)
    int size2 = 5;
    // Small wheel
    LinkedList<RoundTask>[] t1 = new LinkedList[size1];
    // Big wheel
    LinkedList<RoundTask>[] t2 = new LinkedList[size2];
    // Pointer of the small wheel
    final AtomicInteger flag1 = new AtomicInteger(0);
    // Pointer of the big wheel
    final AtomicInteger flag2 = new AtomicInteger(0);
    // Scheduler that drives the pointers
    ScheduledExecutorService service = Executors.newScheduledThreadPool(2);

    public RoundDemo() {
        // Initialize the wheels
        for (int i = 0; i < size1; i++) {
            t1[i] = new LinkedList<>();
        }
        for (int i = 0; i < size2; i++) {
            t2[i] = new LinkedList<>();
        }
    }

    // Print the task layout on both wheels
    void print() {
        System.out.println("t1:");
        for (int i = 0; i < t1.length; i++) {
            System.out.println(t1[i]);
        }
        System.out.println("t2:");
        for (int i = 0; i < t2.length; i++) {
            System.out.println(t2[i]);
        }
    }

    // Add a task to the wheels
    void add(RoundTask task) {
        int delay = task.delay;
        if (delay < size1) {
            // Under 10s: goes on the small wheel directly
            t1[delay].addLast(task);
        } else {
            // Otherwise: goes on the big wheel, by tens digit
            t2[delay / size1].addLast(task);
        }
    }

    void startT1() {
        // Tick every second; execute the tasks in the current slot
        service.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                int point = flag1.getAndIncrement() % size1;
                System.out.println("t1 --> slot " + point);
                LinkedList<RoundTask> list = t1[point];
                if (!list.isEmpty()) {
                    // Tasks in the current slot: run and remove them one by one
                    while (list.size() != 0) {
                        list.getFirst().run();
                        list.removeFirst();
                    }
                }
            }
        }, 0, 1, TimeUnit.SECONDS);
    }

    void startT2() {
        // Every 10 seconds the big wheel advances one slot,
        // moving its tasks down to the small wheel
        service.scheduleAtFixedRate(new Runnable() {
            @Override
            public void run() {
                int point = flag2.getAndIncrement() % size2;
                System.out.println("t2 =====> slot " + point);
                LinkedList<RoundTask> list = t2[point];
                if (!list.isEmpty()) {
                    // Tasks in the current slot: move them to the small wheel
                    while (list.size() != 0) {
                        RoundTask task = list.getFirst();
                        // Which small-wheel slot? The seconds digit of the delay
                        t1[task.delay % size1].addLast(task);
                        // Remove from the big wheel
                        list.removeFirst();
                    }
                }
            }
        }, 0, 10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) {
        RoundDemo roundDemo = new RoundDemo();
        // Add 100 tasks with random delays
        for (int i = 0; i < 100; i++) {
            roundDemo.add(new RoundTask(i, new Random().nextInt(50)));
        }
        // Print the initial task layout on the wheels
        roundDemo.print();
        // Start the big wheel
        roundDemo.startT2();
        // Start the small wheel
        roundDemo.startT1();
    }
}
```
3. Result analysis
3 Load balancing algorithms
Load balancing (Load Balance) means distributing load (work tasks) across multiple operational units, such as FTP servers, web servers, enterprise core application servers, and other main task servers, so that they complete the work together. Since multiple machines are involved, the question becomes how tasks are distributed; that is the load balancing algorithm problem.
3.1 Round robin (RoundRobin)
1. Overview
Round robin means lining up and taking turns, one by one. The time-slice rotation used in classic scheduling algorithms is a typical round robin, usually implemented with an array and an index. Here we hand-write a round-robin server list in the form of a doubly linked list.
2. Implementation
```java
public class RR {
    class Server {
        Server prev;
        Server next;
        String name;

        public Server(String name) {
            this.name = name;
        }
    }

    // Pointer to the current node
    Server current;

    // Initialize the server ring
    public RR(String serverName) {
        System.out.println("init server list : " + serverName);
        String[] names = serverName.split(",");
        for (int i = 0; i < names.length; i++) {
            Server server = new Server(names[i]);
            if (current == null) {
                // First machine: point current to it, and link its prev
                // and next back to itself (a ring of one)
                this.current = server;
                current.prev = current;
                current.next = current;
            } else {
                // A machine already exists: treat it as a new addition
                addServer(names[i]);
            }
        }
    }

    // Add a machine
    void addServer(String serverName) {
        System.out.println("add server : " + serverName);
        Server server = new Server(serverName);
        Server next = this.current.next;
        // Insert the new node after the current node
        this.current.next = server;
        server.prev = this.current;
        // Fix the pointers of the following node
        server.next = next;
        next.prev = server;
    }

    // Remove the current server, relinking its neighbors
    void remove() {
        System.out.println("remove current = " + current.name);
        this.current.prev.next = this.current.next;
        this.current.next.prev = this.current.prev;
        this.current = current.next;
    }

    // A request arrives: respond with the current node, then move to the next
    void request() {
        System.out.println(this.current.name);
        this.current = current.next;
    }

    public static void main(String[] args) throws InterruptedException {
        // Initialize two machines
        RR rr = new RR("192.168.0.1,192.168.0.2");
        // Simulate a request every 500 ms
        new Thread(new Runnable() {
            @Override
            public void run() {
                while (true) {
                    try {
                        Thread.sleep(500);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    rr.request();
                }
            }
        }).start();
        // Add a machine after 3 seconds
        Thread.sleep(3000);
        rr.addServer("192.168.0.3");
        // Remove the current machine after another 3 seconds
        Thread.sleep(3000);
        rr.remove();
    }
}
```
3. Result analysis
4. Pros and cons
Pros: simple to implement; machines can be added or removed freely; time complexity is O(1).
Cons: nodes cannot be given individual treatment, so differences in node processing capacity cannot be accounted for.
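The array-and-index form mentioned in the overview can be sketched in a few lines (names are ours): an atomic counter, taken modulo the list size, picks the next server. Math.floorMod keeps the index non-negative even after the counter overflows:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

class ArrayRR {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger(0);

    ArrayRR(List<String> servers) {
        this.servers = servers;
    }

    String pick() {
        // Counter mod list size selects servers in turn
        int i = Math.floorMod(next.getAndIncrement(), servers.size());
        return servers.get(i);
    }
}

public class ArrayRRDemo {
    public static void main(String[] args) {
        ArrayRR rr = new ArrayRR(List.of("192.168.0.1", "192.168.0.2"));
        System.out.println(rr.pick()); // 192.168.0.1
        System.out.println(rr.pick()); // 192.168.0.2
        System.out.println(rr.pick()); // 192.168.0.1
    }
}
```

The linked-list form above makes adding and removing machines O(1) without shifting, which is its advantage over this array form.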
3.2 Random
1. Overview
Pick a node at random from the available list to respond. For random access scenarios, an array is the better fit, thanks to efficient random index access.
2. Implementation
Define an array and use a random number as its index. Very simple:
```java
import java.util.ArrayList;
import java.util.Random;

public class Rand {
    ArrayList<String> ips;

    public Rand(String nodeNames) {
        System.out.println("init list : " + nodeNames);
        String[] nodes = nodeNames.split(",");
        ips = new ArrayList<>(nodes.length);
        for (String node : nodes) {
            ips.add(node);
        }
    }

    // A request arrives: respond with a random node
    void request() {
        int i = new Random().nextInt(ips.size());
        System.out.println(ips.get(i));
    }

    // Add a node
    void addNode(String nodeName) {
        System.out.println("add node : " + nodeName);
        ips.add(nodeName);
    }

    // Remove a node
    void remove(String nodeName) {
        System.out.println("remove node : " + nodeName);
        ips.remove(nodeName);
    }

    public static void main(String[] args) throws InterruptedException {
        Rand rd = new Rand("192.168.0.1,192.168.0.2");
        // Simulate a request every 500 ms
        new Thread(new Runnable() {
            @Override
            public void run() {
                while (true) {
                    try {
                        Thread.sleep(500);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    rd.request();
                }
            }
        }).start();
        Thread.sleep(3000);
        rd.addNode("192.168.0.3");
        Thread.sleep(3000);
        rd.remove("192.168.0.2");
    }
}
```
3. Result analysis
3.3 Source address hashing
1. Overview
Hash the source IP address of the current request so that the same key always routes to the same machine. This is common in distributed clusters, for request routing and for sticky sessions after user login.
2. Implementation
Mapping a request key to its node can be done with a hash-style lookup whose time complexity is O(1). Fix an algorithm that maps requests to keys; for example, take the last octet of the request's source IP and mod it by the number of machines:
```java
import java.util.ArrayList;

public class Hash {
    ArrayList<String> ips;

    public Hash(String nodeNames) {
        System.out.println("init list : " + nodeNames);
        String[] nodes = nodeNames.split(",");
        ips = new ArrayList<>(nodes.length);
        for (String node : nodes) {
            ips.add(node);
        }
    }

    // Add a node. Note: changing the node count reshuffles the hash mapping.
    // This is a problem!
    void addNode(String nodeName) {
        System.out.println("add node : " + nodeName);
        ips.add(nodeName);
    }

    // Remove a node
    void remove(String nodeName) {
        System.out.println("remove node : " + nodeName);
        ips.remove(nodeName);
    }

    // The algorithm that maps a source IP to a key:
    // last octet mod the number of machines
    private int hash(String ip) {
        int last = Integer.valueOf(ip.substring(ip.lastIndexOf(".") + 1));
        return last % ips.size();
    }

    // A request arrives: route by hashing the source IP
    void request(String ip) {
        int i = hash(ip);
        System.out.println(ip + " -> " + ips.get(i));
    }

    public static void main(String[] args) {
        Hash hash = new Hash("192.168.0.1,192.168.0.2");
        for (int i = 1; i < 10; i++) {
            // Source IP
            String ip = "192.168.1." + i;
            hash.request(ip);
        }
        hash.addNode("192.168.0.3");
        for (int i = 1; i < 10; i++) {
            // Source IP
            String ip = "192.168.1." + i;
            hash.request(ip);
        }
    }
}
```
3. Result analysis
3.4 Weighted round robin (WRR)
1. Overview
Plain round robin is a mechanical rotation that treats all machines as equal; weighted round robin (WeightRoundRobin) makes up for that by giving each machine a weight at initialization.
2. Implementation
Maintain a list in which each machine appears a number of times proportional to its weight: the higher the weight, the more entries it has. For example, with machines a, b, c weighted 4, 2, 1, the list becomes a,a,a,a,b,b,c; each request takes the next node from the list, and when the end is reached, it starts over. But there is a problem: the machines are not evenly interleaved, so requests cluster on one machine at a time. To solve this smoothness problem, the nginx source uses a smooth weighted round-robin algorithm with the following rules. Each node has two values, weight and currentWeight: weight always holds the configured value, while currentWeight keeps changing. Before each selection, every node's currentWeight += weight; the node with the largest currentWeight is selected to respond; after responding, its currentWeight -= total (the sum of all weights).
```java
import java.util.ArrayList;

public class WRR {
    class Node {
        int weight, currentWeight;
        String name;

        public Node(String name, int weight) {
            this.name = name;
            this.weight = weight;
            this.currentWeight = 0;
        }

        @Override
        public String toString() {
            return String.valueOf(currentWeight);
        }
    }

    // All nodes
    ArrayList<Node> list;
    // Total weight
    int total;

    // Initialize the node list from "name#weight" pairs
    public WRR(String nodes) {
        String[] ns = nodes.split(",");
        list = new ArrayList<>(ns.length);
        for (String n : ns) {
            String[] n1 = n.split("#");
            int weight = Integer.valueOf(n1[1]);
            list.add(new Node(n1[0], weight));
            total += weight;
        }
    }

    // Pick the current node
    Node getCurrent() {
        // Rule 1: every node's currentWeight += weight
        for (Node node : list) {
            node.currentWeight += node.weight;
        }
        // Choose the node with the largest currentWeight
        Node current = list.get(0);
        int i = 0;
        for (Node node : list) {
            if (node.currentWeight > i) {
                i = node.currentWeight;
                current = node;
            }
        }
        return current;
    }

    // Respond to a request
    void request() {
        // Get the current node
        Node node = this.getCurrent();
        // Print the state before responding
        System.out.print(list.toString() + " -- ");
        System.out.print(node.name + " -- ");
        // Rule 2: after responding, the winner's currentWeight -= total
        node.currentWeight -= total;
        // Print the state after responding
        System.out.println(list);
    }

    public static void main(String[] args) {
        WRR wrr = new WRR("a#4,b#2,c#1");
        // 7 requests: one full cycle (total weight = 7)
        for (int i = 0; i < 7; i++) {
            wrr.request();
        }
    }
}
```
3. Result analysis
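The selection order that the smooth algorithm produces for weights 4, 2, 1 can be checked with a small standalone trace (our own demo, mirroring the two rules above); note how the picks interleave instead of clustering:

```java
import java.util.ArrayList;
import java.util.List;

public class SmoothWrrTrace {
    // One full cycle of smooth weighted round robin for a#4, b#2, c#1
    static List<String> trace() {
        String[] names = {"a", "b", "c"};
        int[] weight = {4, 2, 1};
        int[] current = {0, 0, 0};
        int total = 7; // 4 + 2 + 1
        List<String> order = new ArrayList<>();
        for (int step = 0; step < total; step++) {
            int best = 0;
            for (int i = 0; i < names.length; i++) {
                current[i] += weight[i];              // rule 1: current += weight
                if (current[i] > current[best]) best = i;
            }
            order.add(names[best]);
            current[best] -= total;                   // rule 2: winner's current -= total
        }
        return order;
    }

    public static void main(String[] args) {
        System.out.println(trace()); // [a, b, a, c, a, b, a]
    }
}
```

After the seventh pick all currentWeight values return to 0, so the cycle repeats.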
3.5 Weighted Randomness (WR)
1. Overview
WeightRandom: machines are selected at random, but each carries a weight that changes its chance of being selected. Under this view, plain random selection is just the special case where all weights are equal.
2. Implementation
The design idea is the same: generate a number of entries per node proportional to its weight, line them up, and pick one at random. Since the access pattern is random reads, an array is the preferred data structure. It works like plain random selection over an array, except that with weights each machine appears multiple times instead of once.
```java
import java.util.ArrayList;
import java.util.Random;

public class WR {
    // All node entries, one per unit of weight
    ArrayList<String> list;

    public WR(String nodes) {
        String[] ns = nodes.split(",");
        list = new ArrayList<>();
        for (String n : ns) {
            String[] n1 = n.split("#");
            int weight = Integer.valueOf(n1[1]);
            for (int i = 0; i < weight; i++) {
                list.add(n1[0]);
            }
        }
    }

    // A request arrives: pick a random entry
    void request() {
        int i = new Random().nextInt(list.size());
        System.out.println(list.get(i));
    }

    public static void main(String[] args) {
        WR wr = new WR("a#2,b#1");
        for (int i = 0; i < 9; i++) {
            wr.request();
        }
    }
}
```
3. Result analysis
3.6 Minimum Number of Connections (LC)
1. Overview
LeastConnections: track the number of active connections on each machine and route new requests to the machine with the fewest. The previous algorithms look at the problem from the request's perspective; least connections looks from the machine's perspective.
2. Implementation
Define a counter per node that records the machine's name and its connection count, and keep the nodes in a min-heap so the heap top is always the node with the fewest connections.
```java
import java.util.Arrays;
import java.util.concurrent.atomic.AtomicInteger;

public class LC {
    // Node array forming a 1-based min-heap (index 0 unused)
    Node[] nodes;

    LC(String ns) {
        String[] ns1 = ns.split(",");
        nodes = new Node[ns1.length + 1];
        for (int i = 0; i < ns1.length; i++) {
            nodes[i + 1] = new Node(ns1[i]);
        }
    }

    // Sift node i down to restore the heap property, O(log n)
    void down(int i) {
        while (i << 1 < nodes.length) {
            // Left and right children
            int left = i << 1;
            int right = left + 1;
            // Index of the smallest among parent and children
            int flag = i;
            // Is the left child smaller than this node?
            if (nodes[left].get() < nodes[i].get()) {
                flag = left;
            }
            // Is the right child smaller still?
            if (right < nodes.length && nodes[flag].get() > nodes[right].get()) {
                flag = right;
            }
            // If a child is smaller, swap and continue downward
            if (flag != i) {
                Node temp = nodes[i];
                nodes[i] = nodes[flag];
                nodes[flag] = temp;
                i = flag;
            } else {
                // Otherwise the heap property holds; stop
                break;
            }
        }
    }

    // A request arrives. Simply take the heap top:
    // the machine with the fewest connections
    void request() {
        System.out.println("---------------------");
        Node node = nodes[1];
        System.out.println(node.name + " accept");
        // Increment the connection count of the chosen node
        node.inc();
        System.out.println("before:" + Arrays.toString(nodes));
        // Sift the heap top down
        down(1);
        System.out.println("after:" + Arrays.toString(nodes));
    }

    public static void main(String[] args) {
        LC lc = new LC("a,b,c,d,e,f,g");
        // Simulate 10 requests
        for (int i = 0; i < 10; i++) {
            lc.request();
        }
    }

    class Node {
        // Node id
        String name;
        // Connection counter
        AtomicInteger count = new AtomicInteger(0);

        public Node(String name) {
            this.name = name;
        }

        public void inc() {
            count.getAndIncrement();
        }

        public int get() {
            return count.get();
        }

        @Override
        public String toString() {
            return name + "=" + count;
        }
    }
}
```
3. Result analysis
3.7 Application Cases
1. nginx upstream
```nginx
upstream frontend {
    # Source-address hashing
    ip_hash;
    server 192.168.0.1:8081;
    server 192.168.0.2:8082 weight=1 down;
    server 192.168.0.3:8083 weight=2;
    server 192.168.0.4:8084 weight=3 backup;
    server 192.168.0.5:8085 weight=4 max_fails=3 fail_timeout=30s;
}
```
2. springcloud ribbon IRule
```properties
# Use the random rule for the service named eureka-application-service
eureka-application-service.ribbon.NFLoadBalancerRuleClassName=com.netflix.loadbalancer.RandomRule
```
3. The Dubbo load balancer is configured via the @Service annotation:
```java
@Service(loadbalance = "roundrobin", weight = 100)
```
4 Encryption algorithms and their applications
4.1 Hash
1. Overview
Strictly speaking this is not a form of encryption but a message digest algorithm. It uses a hash function to compress a message or data into a digest, making the data smaller and fixing its format. The data is scrambled together to create a new value, called the hash.
2. Common algorithms
Common digest algorithms include MD5, SHA-1, and the SHA-2 family (SHA-256, SHA-512).
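All of these are available through the JDK's MessageDigest; a quick sketch comparing their output sizes (class and method names are ours):

```java
import java.security.MessageDigest;

public class DigestLengths {
    // Digest size in bits for a given algorithm name
    static int bits(String alg) throws Exception {
        return MessageDigest.getInstance(alg).getDigestLength() * 8;
    }

    public static void main(String[] args) throws Exception {
        for (String alg : new String[]{"MD5", "SHA-1", "SHA-256", "SHA-512"}) {
            System.out.println(alg + ": " + bits(alg) + " bits");
        }
        // MD5: 128 bits, SHA-1: 160 bits, SHA-256: 256 bits, SHA-512: 512 bits
    }
}
```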
3. Application (often used for password storage or file fingerprint verification)
After a user registers on a website, the password is hashed with MD5 and the digest is stored in the DB. On the next login, the submitted password is hashed the same way and compared with the ciphertext in the database. This way, even if the database is compromised or visible to developers, the original password remains unknown thanks to MD5's irreversibility.
The second scenario is file verification. For files downloaded from a site (especially large ones, such as a system ISO image), the official site publishes a signature (often MD5 or SHA). After getting the file, the user runs the same hash algorithm locally and compares the result with the published signature to determine whether the file has been tampered with; for example, an ubuntu 20.04 image:
4. Implementation
```xml
<dependency>
    <groupId>commons-codec</groupId>
    <artifactId>commons-codec</artifactId>
    <version>1.14</version>
</dependency>
```
```java
import java.math.BigInteger;
import java.security.MessageDigest;

import org.apache.commons.codec.digest.DigestUtils;

public class HashDemo {

    /**
     * MD5, hand-rolled with the JDK's MessageDigest
     */
    public static String md5(String src) {
        byte[] pwd = null;
        try {
            pwd = MessageDigest.getInstance("md5").digest(src.getBytes("utf-8"));
        } catch (Exception e) {
            e.printStackTrace();
        }
        String code = new BigInteger(1, pwd).toString(16);
        // Left-pad to 32 hex characters
        while (code.length() < 32) {
            code = "0" + code;
        }
        return code;
    }

    // MD5 via the commons-codec package
    public static String commonsMd5(String src) {
        return DigestUtils.md5Hex(src);
    }

    /**
     * SHA, hand-rolled with the JDK's MessageDigest
     */
    public static String sha(String src) throws Exception {
        MessageDigest sha = MessageDigest.getInstance("sha");
        byte[] shaByte = sha.digest(src.getBytes("utf-8"));
        StringBuffer code = new StringBuffer();
        for (int i = 0; i < shaByte.length; i++) {
            int val = ((int) shaByte[i]) & 0xff;
            if (val < 16) {
                code.append("0");
            }
            code.append(Integer.toHexString(val));
        }
        return code.toString();
    }

    // SHA via the commons-codec package
    public static String commonsSha(String src) throws Exception {
        return DigestUtils.sha1Hex(src);
    }

    public static void main(String[] args) throws Exception {
        String name = "architect";
        System.out.println(name);
        System.out.println(md5(name));
        System.out.println(commonsMd5(name));
        System.out.println(sha(name));
        System.out.println(commonsSha(name));
    }
}
```
5. Result analysis
4.2 Symmetric encryption
1. Overview
Encryption and decryption use the same secret key, and performance is much higher than asymmetric encryption.
2. Common algorithms
Common symmetric encryption algorithms are DES, 3DES, and AES. DES is widely used in POS terminals, ATMs, magnetic-stripe and smart (IC) cards, gas stations, highway toll stations, and similar fields to keep key data confidential: for example, encrypted transmission of the cardholder's PIN, mutual authentication between IC cards and POS terminals, and MAC checks on financial transaction messages. 3DES is a mode of the DES algorithm and a more secure variant of DES; it served as the transition algorithm from DES to AES.
3. Application
It is often used for real-time encrypted data communication where efficiency matters.
4. Implementation
```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.spec.SecretKeySpec;
import java.security.Key;
import org.apache.commons.codec.binary.Base64;

public class AES {
    public static void main(String[] args) throws Exception {
        // Generate a 128-bit AES key
        KeyGenerator keyGenerator = KeyGenerator.getInstance("AES");
        keyGenerator.init(128);
        Key key = new SecretKeySpec(keyGenerator.generateKey().getEncoded(), "AES");

        Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
        String src = "architect bootcamp";
        System.out.println("plaintext: " + src);

        // Encrypt
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] result = cipher.doFinal(src.getBytes());
        System.out.println("ciphertext: " + Base64.encodeBase64String(result));

        // Decrypt
        cipher.init(Cipher.DECRYPT_MODE, key);
        result = cipher.doFinal(result);
        System.out.println("decrypted: " + new String(result));
    }
}
```
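Note that ECB mode encrypts identical plaintext blocks to identical ciphertext blocks, which leaks patterns. A minimal sketch of CBC with a random IV, assuming only the JDK Cipher API (in practice the IV is stored or transmitted alongside the ciphertext):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class AesCbcDemo {
    static byte[] encrypt(SecretKey key, byte[] iv, String msg) throws Exception {
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
        return c.doFinal(msg.getBytes(StandardCharsets.UTF_8));
    }

    static String decrypt(SecretKey key, byte[] iv, byte[] ct) throws Exception {
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
        return new String(c.doFinal(ct), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        // A fresh random 16-byte IV: the same plaintext now encrypts differently each time
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);

        byte[] ct = encrypt(key, iv, "architect bootcamp");
        System.out.println(decrypt(key, iv, ct));  // architect bootcamp
    }
}
```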
5. Result analysis
4.3 Asymmetric encryption
1. Overview Asymmetric means that encryption and decryption do not share the same key; instead there is a public key and a private key. The private key stays in its owner's hands while the public key is published. One key is used for encryption and the other for decryption: ciphertext produced with one key can only be decrypted with the corresponding other key, not even with the key originally used for encryption. This property is what makes the scheme asymmetric.
2. Common algorithms
RSA, ElGamal, the knapsack algorithm, Rabin (a special case of RSA), the public-key algorithm in the Diffie-Hellman key exchange protocol, and Elliptic Curve Cryptography (ECC). The most widely used is RSA (an acronym of the surnames of its inventors Rivest, Shamir, and Adleman).
3. Application
The two most common applications are HTTPS and digital signatures. Strictly speaking, not all of an HTTPS exchange uses asymmetric encryption. To improve performance, HTTPS uses asymmetric encryption first to exchange a session key, then uses that key for symmetric encryption of the data transfer.
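The hybrid idea can be sketched in a few lines (illustrative only, not the actual TLS handshake): the client encrypts a random AES session key under the server's RSA public key once, and all further traffic uses the cheap symmetric key.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.PublicKey;

public class HybridDemo {
    /** Client side: encrypt the random AES session key under the server's RSA public key */
    static byte[] wrapKey(PublicKey serverPub, SecretKey session) throws Exception {
        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.ENCRYPT_MODE, serverPub);
        return rsa.doFinal(session.getEncoded());  // the expensive asymmetric step, done once
    }

    /** Server side: recover the session key with the RSA private key */
    static SecretKey unwrapKey(PrivateKey serverPriv, byte[] wrapped) throws Exception {
        Cipher rsa = Cipher.getInstance("RSA");
        rsa.init(Cipher.DECRYPT_MODE, serverPriv);
        return new SecretKeySpec(rsa.doFinal(wrapped), "AES");
    }

    public static void main(String[] args) throws Exception {
        // Server's long-lived RSA key pair
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        KeyPair serverKeys = kpg.generateKeyPair();

        // Client generates a fresh AES session key and sends it wrapped
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey session = kg.generateKey();
        byte[] wrapped = wrapKey(serverKeys.getPublic(), session);
        SecretKey unwrapped = unwrapKey(serverKeys.getPrivate(), wrapped);

        // All further traffic uses the fast symmetric key
        Cipher aes = Cipher.getInstance("AES/ECB/PKCS5Padding");
        aes.init(Cipher.ENCRYPT_MODE, session);
        byte[] ct = aes.doFinal("request body".getBytes(StandardCharsets.UTF_8));
        aes.init(Cipher.DECRYPT_MODE, unwrapped);
        System.out.println(new String(aes.doFinal(ct), StandardCharsets.UTF_8));
    }
}
```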
4. Implementation
```java
import java.security.KeyFactory;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;
import java.security.interfaces.RSAPrivateKey;
import java.security.interfaces.RSAPublicKey;
import java.security.spec.PKCS8EncodedKeySpec;
import java.security.spec.X509EncodedKeySpec;
import javax.crypto.Cipher;
import org.apache.commons.codec.binary.Base64;

public class RSAUtil {
    static String privKey;
    static String publicKey;

    public static void main(String[] args) throws Exception {
        // Generate the public and private keys
        genKeyPair();
        String message = "Architect boot camp";
        System.out.println("plaintext: " + message);
        System.out.println("random public key: " + publicKey);
        System.out.println("random private key: " + privKey);
        String messageEn = encrypt(message, publicKey);
        System.out.println("public-key encrypted: " + messageEn);
        String messageDe = decrypt(messageEn, privKey);
        System.out.println("private-key decrypted: " + messageDe);
    }

    /** Randomly generate a key pair */
    public static void genKeyPair() throws NoSuchAlgorithmException {
        // KeyPairGenerator produces public/private key pairs for the RSA algorithm
        KeyPairGenerator keyPairGen = KeyPairGenerator.getInstance("RSA");
        // Initialize the generator; key size 96-1024 bits
        keyPairGen.initialize(1024, new SecureRandom());
        // Generate the key pair
        KeyPair keyPair = keyPairGen.generateKeyPair();
        privKey = new String(Base64.encodeBase64(keyPair.getPrivate().getEncoded()));
        publicKey = new String(Base64.encodeBase64(keyPair.getPublic().getEncoded()));
    }

    /** RSA public-key encryption */
    public static String encrypt(String str, String publicKey) throws Exception {
        // Base64-decode the public key
        byte[] decoded = Base64.decodeBase64(publicKey);
        RSAPublicKey pubKey = (RSAPublicKey) KeyFactory.getInstance("RSA")
                .generatePublic(new X509EncodedKeySpec(decoded));
        // RSA encryption
        Cipher cipher = Cipher.getInstance("RSA");
        cipher.init(Cipher.ENCRYPT_MODE, pubKey);
        return Base64.encodeBase64String(cipher.doFinal(str.getBytes("UTF-8")));
    }

    /** RSA private-key decryption */
    public static String decrypt(String str, String privateKey) throws Exception {
        // Base64-decode the ciphertext and the private key
        byte[] inputByte = Base64.decodeBase64(str.getBytes("UTF-8"));
        byte[] decoded = Base64.decodeBase64(privateKey);
        RSAPrivateKey priKey = (RSAPrivateKey) KeyFactory.getInstance("RSA")
                .generatePrivate(new PKCS8EncodedKeySpec(decoded));
        Cipher cipher = Cipher.getInstance("RSA");
        cipher.init(Cipher.DECRYPT_MODE, priKey);
        return new String(cipher.doFinal(inputByte));
    }
}
```
5. Result analysis
5 Consistent hashing and its application
1. Background In the load-balancing strategies discussed earlier, we mentioned the source-address hash algorithm, which pins certain requests to a fixed server and thereby avoids having to share session state.
However, with a standard hash, if the number of machine nodes changes, requests are rehashed and the original mapping breaks. What then? This is where consistent hashing comes into play.
2. Principle
Taking four machines as an example, the consistent hash algorithm works as follows: first, compute a hash value for each server and map it onto a ring of 0 to 2^32. Then hash the key of the data to be stored in the same way, mapping it onto the same ring. Starting from the data's position, search clockwise and save the data to the first server found. If no server is found before reaching the maximum, wrap around and take the first one. That is why the structure is pictured as a ring.
3. Characteristics
4. Optimization
Adding virtual nodes optimizes the hash distribution by segmenting the ring more finely. That is, there are actually m machines, but each is expanded n-fold, so m*n points are placed on the ring; after balancing, the key segments are divided more finely and spread more evenly.
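A sketch of the virtual-node idea (names and the hash function are illustrative): each physical server is projected onto the ring n times as "ip#k", and a lookup maps the winning virtual node back to its physical server.

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class VirtualNodeRing {
    private final SortedMap<Integer, String> ring = new TreeMap<>();  // hash -> physical server

    VirtualNodeRing(String[] servers, int replicas) {
        for (String s : servers) {
            for (int k = 0; k < replicas; k++) {
                // m servers * n replicas points on the ring, all pointing back to the physical node
                ring.put(hash(s + "#" + k), s);
            }
        }
    }

    String getServer(String key) {
        int h = hash(key);
        SortedMap<Integer, String> tail = ring.tailMap(h);
        // First node clockwise, wrapping to the ring's start if necessary
        return tail.isEmpty() ? ring.get(ring.firstKey()) : tail.get(tail.firstKey());
    }

    // FNV-1a style string hash kept non-negative; any well-mixed hash would do
    private static int hash(String s) {
        int h = 0x811c9dc5;
        for (int i = 0; i < s.length(); i++) {
            h ^= s.charAt(i);
            h *= 0x01000193;
        }
        return h & 0x7fffffff;
    }

    public static void main(String[] args) {
        VirtualNodeRing r = new VirtualNodeRing(
                new String[]{"192.168.0.1", "192.168.0.2", "192.168.0.3"}, 100);
        System.out.println("key1 -> " + r.getServer("key1"));
        System.out.println("key2 -> " + r.getServer("key2"));
    }
}
```

With 100 virtual nodes per machine, the ring segments owned by each physical server are interleaved, so adding or removing one machine shifts only a small, evenly spread fraction of the keys.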
5. Implementation
```java
import java.util.SortedMap;
import java.util.TreeMap;

public class Hash {
    // Server list
    private static String[] servers = {"192.168.0.1", "192.168.0.2", "192.168.0.3", "192.168.0.4"};

    // key = the server's hash on the ring, value = the server address
    private static SortedMap<Integer, String> serverMap = new TreeMap<>();

    static {
        for (int i = 0; i < servers.length; i++) {
            int hash = getHash(servers[i]);
            // Demo hash: the last octet is 1-4; multiply by 60 to spread servers evenly over a 0-254 ring
            hash *= 60;
            System.out.println("add " + servers[i] + ", hash=" + hash);
            serverMap.put(hash, servers[i]);
        }
    }

    // Find the server for a given key: the first node clockwise from the key's hash
    private static String getServer(String key) {
        int hash = getHash(key);
        SortedMap<Integer, String> subMap = serverMap.tailMap(hash);
        if (subMap.isEmpty()) {
            // No server hash is greater than the key's: wrap around to the first node
            Integer i = serverMap.firstKey();
            return serverMap.get(i);
        } else {
            // The first key in the tail map is the nearest node clockwise
            Integer i = subMap.firstKey();
            return subMap.get(i);
        }
    }

    // The hash function can be defined arbitrarily; here it simply takes the last octet of the IP
    private static int getHash(String str) {
        String last = str.substring(str.lastIndexOf(".") + 1, str.length());
        return Integer.valueOf(last);
    }

    public static void main(String[] args) {
        for (int i = 1; i < 8; i++) {
            String ip = "192.168.1." + i * 30;
            System.out.println(ip + " ---> " + getServer(ip));
        }
        // Add server #5 between #2 and #3, at the middle position, 150
        System.out.println("add 192.168.0.5, hash=150");
        serverMap.put(150, "192.168.0.5");
        for (int i = 1; i < 8; i++) {
            String ip = "192.168.1." + i * 30;
            System.out.println(ip + " ---> " + getServer(ip));
        }
    }
}
```
6. Verification
6 Typical Service Scenarios
6.1 Filtering sensitive words on websites
1. Scenario Sensitive-word and text filtering is an essential feature of a website, and an efficient filtering algorithm is necessary. The naive approach in Java walks through the sensitive words one by one with String.contains:
```java
// Placeholder sensitive-word list; a real library would be loaded from config or DB
String[] s = "ad,ads,win".split(",");
String text = "the text to check";
boolean flag = false;
for (String s1 : s) {
    if (text.contains(s1)) {
        flag = true;
        break;
    }
}
System.out.println(flag);
```
2. Regular expression:
```java
System.out.println(text.matches(".*(ads|ad|win).*"));
```
In fact, whichever method is adopted, it is the same soup with different dressing: both scan the whole text character by character, and the efficiency is questionable. So what can be done? Enter the DFA algorithm. DFA stands for Deterministic Finite Automaton. It derives the next state from an event and the current state, that is, event + state = nextstate.
Compared with the cases above, searching and stopping the search are actions; found or not found is a state; and the action and result of each step determine whether to take the next. The key to applying the DFA algorithm to sensitive words is the construction of the word library. Translating the word list above into JSON, it can be expressed as follows:
```json
{"isEnd": 0, "a": {"isEnd": 0, "d": {"isEnd": 1, "s": {"isEnd": 1}}}, "w": {"isEnd": 0, "i": {"isEnd": 0, "n": {"isEnd": 1}}}}
```
The search process is as follows: split the text character by character and look each character up as a key in the trie. Characters that are not keys are skipped until the first character of a sensitive word is hit; then check isEnd: if it is 1, a complete sensitive word has been matched; if it is 0, continue matching the next character, and so on until isEnd = 1.
There are two matching strategies: minimum and maximum match. With the words "ad" and "ads" in the library, a minimum match stops at [ad], while a maximum match continues to the end and finds [ads].
4. Java implementation Add the fastjson coordinates so the structure of the sensitive-word trie can be inspected:
```xml
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.2.70</version>
</dependency>
```
```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import com.alibaba.fastjson.JSON;

/** DFA-based sensitive-word filter */
public class SensitiveWordUtil {
    // Short match rule: with words ["ad", "ads"], the text "an ads here" matches "ad"
    public static final int SHORT_MATCH = 1;
    // Long match rule: the same text matches "ads"
    public static final int LONG_MATCH = 2;

    /** The sensitive-word trie */
    public static Map sensitiveWordMap;

    /** Initialize the trie; words: comma-separated sensitive words */
    private static void initSensitiveWordMap(String words) {
        String[] w = words.split(",");
        sensitiveWordMap = new HashMap(w.length);
        Map nowMap;
        for (String key : w) {
            nowMap = sensitiveWordMap;
            for (int i = 0; i < key.length(); i++) {
                char keyChar = key.charAt(i);
                Map wordMap = (Map) nowMap.get(keyChar);
                if (wordMap == null) {
                    // Character not present yet: create a child node
                    wordMap = new HashMap();
                    wordMap.put("isEnd", "0");
                    nowMap.put(keyChar, wordMap);
                }
                nowMap = wordMap;
                if (i == key.length() - 1) {
                    // Last character of the word: mark the end
                    nowMap.put("isEnd", "1");
                }
            }
        }
    }

    /** Whether the text contains any sensitive word */
    public static boolean contains(String txt, int matchType) {
        for (int i = 0; i < txt.length(); i++) {
            int matchFlag = checkSensitiveWord(txt, i, matchType);
            if (matchFlag > 0) {
                return true;
            }
        }
        return false;
    }

    /** Collect all sensitive words found in the text */
    public static Set<String> getSensitiveWord(String txt, int matchType) {
        Set<String> sensitiveWordList = new HashSet<>();
        for (int i = 0; i < txt.length(); i++) {
            int length = checkSensitiveWord(txt, i, matchType);
            if (length > 0) {
                sensitiveWordList.add(txt.substring(i, i + length));
                // Once a word is found, skip past it; minus 1 because the for loop increments i
                i = i + length - 1;
            }
        }
        return sensitiveWordList;
    }

    /**
     * Return the length of the sensitive word starting at beginIndex, or 0 if none.
     * The length is used both to extract the word and to advance the pointer.
     */
    private static int checkSensitiveWord(String txt, int beginIndex, int matchType) {
        boolean flag = false;
        int length = 0;
        char word;
        // Start from the trie root
        Map nowMap = sensitiveWordMap;
        for (int i = beginIndex; i < txt.length(); i++) {
            word = txt.charAt(i);
            nowMap = (Map) nowMap.get(word);
            if (nowMap != null) {
                length++;
                if ("1".equals(nowMap.get("isEnd"))) {
                    // End flag reached: a word has been matched
                    flag = true;
                    if (SHORT_MATCH == matchType) {
                        break;
                    }
                }
            } else {
                // Character not in the trie: stop
                break;
            }
        }
        if (length < 2 || !flag) {
            // Require a complete matched word of length >= 2
            length = 0;
        }
        return length;
    }

    public static void main(String[] args) {
        // Initialize the word library (demo words; the original uses Chinese terms)
        initSensitiveWordMap("ad,ads,win");
        System.out.println("sensitiveWordMap: " + JSON.toJSONString(sensitiveWordMap));
        String string = "you win an ads prize";
        System.out.println("text: " + string);
        System.out.println("length: " + string.length());
        // Whether the text contains a sensitive word
        boolean result = SensitiveWordUtil.contains(string, SensitiveWordUtil.LONG_MATCH);
        System.out.println("long match: " + result);
        result = SensitiveWordUtil.contains(string, SensitiveWordUtil.SHORT_MATCH);
        System.out.println("short match: " + result);
        // Retrieve the words found in the text
        Set<String> set = SensitiveWordUtil.getSensitiveWord(string, SensitiveWordUtil.LONG_MATCH);
        System.out.println("long match found: " + set);
        set = SensitiveWordUtil.getSensitiveWord(string, SensitiveWordUtil.SHORT_MATCH);
        System.out.println("short match found: " + set);
    }
}
```
6.2 Top-K best products
1. Background Top-k is a typical business scenario. Besides best-selling products, it covers recommendation rankings, points leaderboards, and anywhere else a top-k is needed.
Top-k means filtering out the largest k values from a collection. The problem looks simple, but the data structures and algorithms behind it reflect deep thinking about the solution's performance. There are several approaches; what optimization ideas do they contain? That is what this section explores.
2. Approaches
3. Implementation
Let's implement top-k with a min-heap:
```java
import java.util.Arrays;

public class TopK {
    // Sift-down for a 1-indexed min-heap; O(log n)
    static void down(int[] nodes, int i) {
        while (i << 1 < nodes.length) {
            int left = i << 1;      // left child
            int right = left + 1;   // right child
            int flag = i;           // index of the smallest of the three
            // Is the left child smaller than this node?
            if (nodes[left] < nodes[i]) {
                flag = left;
            }
            // Is the right child smaller still?
            if (right < nodes.length && nodes[flag] > nodes[right]) {
                flag = right;
            }
            if (flag != i) {
                // Swap with the smaller child and continue sifting down
                int temp = nodes[i];
                nodes[i] = nodes[flag];
                nodes[flag] = temp;
                i = flag;
            } else {
                // Heap property restored: exit the loop
                break;
            }
        }
    }

    public static void main(String[] args) {
        // Original data
        int[] src = {3, 6, 2, 7, 4, 8, 1, 9, 2, 5};
        int k = 5;
        // Heap array; why k+1? Index 0 is unused so the children of i are 2i and 2i+1
        int[] nodes = new int[k + 1];
        for (int i = 0; i < k; i++) {
            nodes[i + 1] = src[i];
        }
        System.out.println("before:" + Arrays.toString(nodes));
        // Build the min-heap from the last non-leaf node upward
        for (int i = k >> 1; i >= 1; i--) {
            down(nodes, i);
        }
        System.out.println("create:" + Arrays.toString(nodes));
        // For each remaining element, replace the root if it is larger, then sift down
        for (int i = k; i < src.length; i++) {
            if (nodes[1] < src[i]) {
                nodes[1] = src[i];
                down(nodes, 1);
            }
        }
        System.out.println("topk:" + Arrays.toString(nodes));
    }
}
```
4. Result analysis
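For comparison, the same result can be had with the JDK's built-in PriorityQueue, which is a min-heap by default; a sketch:

```java
import java.util.Arrays;
import java.util.PriorityQueue;

public class TopKQueue {
    /** Keep the k largest values seen so far in a size-k min-heap */
    static int[] topK(int[] src, int k) {
        PriorityQueue<Integer> heap = new PriorityQueue<>();  // min-heap by default
        for (int v : src) {
            if (heap.size() < k) {
                heap.offer(v);
            } else if (heap.peek() < v) {
                heap.poll();      // evict the smallest of the current top-k
                heap.offer(v);
            }
        }
        int[] out = heap.stream().mapToInt(Integer::intValue).toArray();
        Arrays.sort(out);
        return out;
    }

    public static void main(String[] args) {
        int[] src = {3, 6, 2, 7, 4, 8, 1, 9, 2, 5};
        System.out.println(Arrays.toString(topK(src, 5)));  // [5, 6, 7, 8, 9]
    }
}
```

The logic is identical to the hand-rolled heap above; the library version trades a little control (and some boxing overhead) for far less code.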