That is, when there are N (more than one) timers, how do we efficiently fire each timing function at the right moment?

When developing Linux network programs, you usually need to maintain many timers at once, for example to track client heartbeat times or to check multiple packets for timeout and retransmission.

Timers can be organized with many different data structures, such as a linked list, a min-heap, or a time wheel. Which one gives the best trade-off between efficiency and complexity depends on the application scenario.


Time wheel timer

1. What are the benefits of a time wheel timer, or what problems does this structure solve? With an ascending (sorted) list timer, adding a timer costs O(n): to keep the list ordered, insertion has to traverse the list to find the right position. If the system holds a very large number of timers (say 100,000), an ascending-list timer runs into performance problems. This is where a time wheel timer is more appropriate.

2. Complexity of common timer implementations:

Implementation            StartTimer   StopTimer   PerTickBookkeeping
Linked list               O(1)         O(n)        O(n)
Sorted (ascending) list   O(n)         O(1)        O(1)
Min-heap                  O(lg n)      O(1)        O(1)
Time wheel                O(1)         O(1)        O(1)
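To make the O(1) StartTimer and PerTickBookkeeping concrete, here is a minimal single-level time wheel sketch in C. The names (tw_timer, time_wheel, tw_add, tw_tick), the slot count, and the tick length are all illustrative assumptions, not taken from any particular library:

```c
/* Minimal single-level time wheel sketch (illustrative, not production code).
 * Assumptions: 60 slots, one slot per tick; ticks passed to tw_add are >= 1. */
#include <stdlib.h>

#define WHEEL_SLOTS 60

struct tw_timer {
    int rotations;              /* full turns of the wheel still to wait */
    void (*cb)(void *arg);      /* callback fired on expiry */
    void *arg;
    struct tw_timer *next;
};

struct time_wheel {
    struct tw_timer *slot[WHEEL_SLOTS];  /* one linked list per slot */
    int cursor;                          /* slot of the most recent tick */
};

/* StartTimer: O(1) -- hash the expiry into a slot and prepend to its list. */
void tw_add(struct time_wheel *tw, int ticks, void (*cb)(void *), void *arg)
{
    struct tw_timer *t = malloc(sizeof(*t));
    t->rotations = (ticks - 1) / WHEEL_SLOTS;
    t->cb = cb;
    t->arg = arg;
    int s = (tw->cursor + ticks) % WHEEL_SLOTS;
    t->next = tw->slot[s];
    tw->slot[s] = t;
}

/* PerTickBookkeeping: advance one slot and fire the timers that are due. */
void tw_tick(struct time_wheel *tw)
{
    tw->cursor = (tw->cursor + 1) % WHEEL_SLOTS;
    struct tw_timer **pp = &tw->slot[tw->cursor];
    while (*pp) {
        struct tw_timer *t = *pp;
        if (t->rotations > 0) {     /* not due yet: one full turn closer */
            t->rotations--;
            pp = &t->next;
        } else {                    /* due: unlink, fire the callback, free */
            *pp = t->next;
            t->cb(t->arg);
            free(t);
        }
    }
}
```

Adding a timer only hashes its deadline into a slot and prepends to that slot's list, so it never traverses all pending timers; the trade-off is that accuracy is limited to one tick, and far-off timers sit in the wheel for several rotations before firing.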


3. Timer management. With nginx's red-black tree and libevent's heap, when a timeout fires, libevent runs a while loop that keeps deleting the timer at the top of the heap until, after re-adjustment, the node at the top of the min-heap is no longer an expired event. The events marked as timed out are then placed on the active event list to wait for processing, and the application-layer callback decides how to handle each timed-out event when the active queue is processed. When nginx handles a timeout, it removes the rbtree node member embedded in the event structure from the red-black tree and directly calls the timeout handler that the application layer registered when it added the timer. nginx does not use a heap here because nodes are often removed directly from inside the structure, not just from the top of the heap. The key points: with a heap, taking the top element is O(1), but re-adjusting the heap afterwards costs log n, and insertion is also roughly log n. With a red-black tree, deleting a node needs at most 3 rotations, while finding the smallest node costs log n, and insertion is likewise about log n. On the whole the two are roughly comparable. A heap-sort vs red-black-tree performance comparison: blog.sina.com.cn/s/blog_56e6…
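A minimal sketch of that expiration loop over a min-heap is shown below. The struct timer_ev and the helpers heap_top()/heap_pop() are assumptions for illustration, not libevent's or nginx's real API (one possible implementation of the heap itself is sketched in section 4):

```c
/* Sketch of a "drain expired timers" loop over a min-heap keyed by expiry.
 * heap_top()/heap_pop() are assumed helpers defined elsewhere. */
struct timer_ev {
    long long expire_ms;            /* absolute expiry time in milliseconds */
    void (*cb)(void *arg);          /* handler registered when the timer was added */
    void *arg;
};

struct min_heap;                                 /* opaque here */
struct timer_ev *heap_top(struct min_heap *h);   /* smallest expiry, O(1) */
void heap_pop(struct min_heap *h);               /* remove the top, O(log n) */

/* Pop timers whose deadline has passed; stop at the first still-live one. */
void expire_timers(struct min_heap *h, long long now_ms)
{
    struct timer_ev *ev;
    while ((ev = heap_top(h)) != NULL && ev->expire_ms <= now_ms) {
        heap_pop(h);          /* sift-down re-adjusts the heap */
        ev->cb(ev->arg);      /* or: mark it active and run callbacks later */
    }
}
```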

Leaving memory aside and looking only at the algorithms: a red-black tree insert needs at worst about 2 log N comparisons (the maximum tree height) plus no more than 2 rotations, while a min-heap insert needs at worst about log N comparisons. Red-black tree deletion needs no comparisons, only at most 3 rotations, and finding the minimum costs a walk of about log N; when it is the minimum that gets deleted, the rebalancing work is usually very small. For the min-heap, querying the top node is O(1), but deleting the top node requires up to about 2 log N comparisons in the worst case. The red-black tree's worst case is smoothed out by its constant rebalancing rotations: its insertion performance is worse than the min-heap's, but its deletion performance is several times better.

Judged as a whole over insert + find-minimum + delete-minimum, the red-black tree wins and its performance fluctuates less, but it depends heavily on key comparisons, while the min-heap is less affected by comparison cost; if the comparison is a user-supplied callback, a string comparison, and so on, the red-black tree sacrifices a lot of performance. The min-heap's biggest advantage is that reading the minimum is O(1). If the inserted data is mostly ordered, the min-heap performs even better, and it does well in applications that frequently take the minimum element satisfying some condition (such as a timeout queue). A red-black tree should therefore avoid unnecessary lookups to get good performance in timeout-queue applications.

From a practical point of view: because the min-heap is normally stored in a contiguous array, element access is much faster than in a red-black tree built from linked nodes, and its insertion is several times faster than the red-black tree's, but its deletion is not faster. The min-heap's biggest drawback is that it requires contiguous memory.


4. Recently I studied libevent and found that libevent 1.2 uses a red-black tree for timer events, while libevent 2.0 has abandoned the red-black tree in favor of a small-root (min) heap. So I collected some material.

The following content from: blog.chinaunix.net/xmlrpc.php?…

Libevent minimum heap source path is:

Github.com/libevent/li…

First, what is a small-root (min) heap?

(1) It is a complete binary tree. (2) Every node's key is less than or equal to the keys of its left and right children (a big-root heap is the opposite), so the root of the tree is always the smallest node of the whole structure. With O(n log n) complexity it can be used to build priority queues. In the libevent library, timers are maintained with a small-root heap (nginx uses a red-black tree). Let's look at min-heap insertion and deletion; they are actually quite simple compared with a red-black tree: nodes only float up or sift down, with none of the left/right rotations and recoloring a red-black tree needs. A sketch in C follows after this description.

Node insertion: (1) The new node is appended at the end of the current tree, keeping it a complete binary tree. (2) The newly inserted node is then floated upward: compare it with its parent; if it is smaller than the parent, swap them, and continue recursively until no swap happens or the root is reached.

Node deletion: (1) Deleting from a small-root heap means removing the current root, i.e. returning the root's value, and then moving the last node of the tree into the root position. (2) That node is then sifted downward starting from the root: if it has a smaller child, swap it with that child and continue sifting down from the swapped position (if both children are smaller, swap with the smaller of the two).
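The float-up and sift-down steps above can be sketched in C roughly as follows. This is an illustrative array-based heap, not libevent's min_heap source; the struct and function names are made up:

```c
/* Array-based small-root heap of timers keyed by expiry time (sketch only). */
#include <stdlib.h>

struct timer_ev {
    long long expire_ms;            /* absolute expiry time */
    void (*cb)(void *arg);
    void *arg;
};

struct min_heap {
    struct timer_ev **a;            /* a[0] is the soonest-expiring timer */
    int size, cap;                  /* zero-initialize the whole struct */
};

/* O(1): the root of the complete binary tree is the minimum. */
struct timer_ev *heap_top(struct min_heap *h)
{
    return h->size ? h->a[0] : NULL;
}

/* Insert: append at the end of the tree, then float the node upward. */
void heap_push(struct min_heap *h, struct timer_ev *ev)
{
    if (h->size == h->cap) {                   /* grow the backing array */
        h->cap = h->cap ? h->cap * 2 : 8;
        h->a = realloc(h->a, h->cap * sizeof(*h->a));
    }
    int i = h->size++;
    while (i > 0) {
        int parent = (i - 1) / 2;
        if (h->a[parent]->expire_ms <= ev->expire_ms)
            break;                             /* parent is already smaller */
        h->a[i] = h->a[parent];                /* float: move parent down */
        i = parent;
    }
    h->a[i] = ev;
}

/* Delete-min: move the last node to the root, then sift it downward. */
void heap_pop(struct min_heap *h)
{
    if (h->size == 0)
        return;
    struct timer_ev *last = h->a[--h->size];   /* node to re-seat from the root */
    int i = 0;
    for (;;) {
        int child = 2 * i + 1;                 /* left child */
        if (child >= h->size)
            break;
        if (child + 1 < h->size &&
            h->a[child + 1]->expire_ms < h->a[child]->expire_ms)
            child++;                           /* pick the smaller child */
        if (h->a[child]->expire_ms >= last->expire_ms)
            break;                             /* heap property restored */
        h->a[i] = h->a[child];                 /* pull the smaller child up */
        i = child;
    }
    h->a[i] = last;
}
```

A zero-initialized struct min_heap can be used directly: heap_push grows the array on demand, and a caller reads the due timer with heap_top() before calling heap_pop(), as in the expiration loop sketched in section 3.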


5. Blogs consulted

Analysis of timer implementations under Linux

How to efficiently trigger timeouts for 100,000 scheduled tasks

Linux Network Programming 21: Using the SIGALRM signal to implement a simple timer based on an ascending list ("Linux High-Performance Server Programming", You Shuang, 11.2.1)

Linux Network Programming 22: The time wheel, a high-performance timer ("Linux High-Performance Server Programming", You Shuang, 11.4.1)

Linux Network Programming 23: The time heap, a high-performance timer ("Linux High-Performance Server Programming", You Shuang, 11.4.2)

A high-performance timer with epoll + a time heap

Epoll timer implementation series: implementing a time wheel in C

Epoll timer implementation series: implementing a time heap in C

The Timing Wheel algorithm implemented in Java

Optimizing a timeout detection mechanism with a time wheel in Linux C++