While reading the Node HTTP module documentation, I noticed the server.timeout property. I wanted to introduce it briefly, but after sorting it out I found that a huge stack of machinery supports that timeout:

server.timeout -> Node core timers -> uv timers -> Linux msleep/hrtimer -> clocksource -> TSC -> CMOS RTC -> clock crystal

By the end of this timer series, a Node developer should roughly understand:

how the clock cycle drives Linux msleep/hrtimer;
the relationship between Linux timers and uv timers;
the relationship between Node timers and uv timers.

Timers are among the most widely used features in programming, so what is the basic principle behind their implementation? No matter how complex a computer is, when the layers are stripped away, its core is a quartz crystal that emits pulses at a fixed frequency, like a human heart pumping energy and signals to the whole body.

Period and Accuracy

A few terms: clock cycle (one tick of the clock), CPU cycle (the time of the shortest operation, a fetch), instruction cycle (fetch + execute), and micro-instruction cycle (one clock cycle).

In Intel's Haswell microarchitecture, an instruction is usually decomposed into micro-instructions, each taking one clock cycle, and in theory up to eight micro-instructions can be dispatched in parallel per cycle.

Relationship between clock cycle and time:

Frequency (Hz)   1      1 K     1 M     1 G     1000 G
Period (1/Hz)    1 s    1 ms    1 us    1 ns    1 ps

At present an ordinary computer's CPU frequency is at the GHz level. Take my laptop as an example: Intel(R) Core(TM) i5-8265U CPU @ 1.60GHz, which means 1.6e9 clock cycles per second, so one clock cycle is about 0.625 ns. Does that mean an ordinary machine's timer accuracy can only reach the nanosecond level, and if so, is it 1 ns, 10 ns, or 100 ns? To answer this, start with the (micro-)instruction cycle. In the Linux kernel, the lowest-level timer (essentially a counter) instruction is ADD 1. Even with pipeline support, it cannot avoid at least two sequential micro-steps: the operation itself and the write-back. The smallest update therefore needs 0.625 ns * 2 = 1.25 ns, so the best timer resolution today should be on the order of a few nanoseconds.
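As a sanity check, the frequency-to-period arithmetic above can be reproduced in a few lines (the 1.6 GHz figure is just my laptop's base clock, not a universal constant):

```javascript
// Period of one clock cycle at a given frequency, in nanoseconds.
function cyclePeriodNs(frequencyHz) {
  return 1e9 / frequencyHz;
}

const cycle = cyclePeriodNs(1.6e9); // 1.6 GHz base clock
console.log(cycle);                 // 0.625 ns per cycle
console.log(cycle * 2);             // 1.25 ns for the two-step ADD 1
```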

I originally hoped to calculate an upper bound on the accuracy of the Linux hrtimer, but the CPU and hrtimer are too complex (and my own knowledge too limited) to estimate it. If a while-schedule loop could complete in 10 clock cycles on a hypothetical 10 GHz Intel core, maybe…

The essence: while + schedule

A joke of an example: the big sale starts at midnight on November 11.

while (true) {
  const now = new Date();
  // spin until midnight on November 11 arrives (month 10 is November)
  if (now.getMonth() === 10 && now.getDate() === 11) break;
}
startOffer();

This program has an obvious blocking problem, but it undeniably implements timing; in fact, this is the essence of a timer.

Making it non-blocking

while (true) {
  const now = new Date();
  if (now.getMonth() === 10 && now.getDate() === 11) break;
  schedule(); // yield the CPU to other work instead of spinning
}
startOffer();

Support for multiple timers

const timers = /* some data structure holding timers */;

while (true) {
  const latestTimer = timers.peek(); // the timer that expires soonest
  if (latestTimer && latestTimer.expiry <= Date.now()) {
    break; // double 11 has arrived
  }
  schedule(); // yield the CPU to other work
}
startOffer();

What remains is deciding, for each scenario, which data structure should store the timers so that create/read/update/delete (CRUD) operations are optimal (usually time-optimal).
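As a concrete sketch of "which data structure": libuv, for instance, keeps its timers in a min-heap keyed by expiry time, so peek() is O(1) and insert/remove are O(log n). Below is a minimal array-backed version; the names (TimerHeap, expiry) are mine for illustration, not libuv's API.

```javascript
// Minimal binary min-heap of timers ordered by expiry timestamp (ms).
class TimerHeap {
  constructor() { this.heap = []; }

  peek() { return this.heap[0]; } // soonest-expiring timer, O(1)

  push(timer) { // insert, O(log n): append, then sift up
    this.heap.push(timer);
    let i = this.heap.length - 1;
    while (i > 0) {
      const parent = (i - 1) >> 1;
      if (this.heap[parent].expiry <= this.heap[i].expiry) break;
      [this.heap[parent], this.heap[i]] = [this.heap[i], this.heap[parent]];
      i = parent;
    }
  }

  pop() { // remove soonest timer, O(log n): move last to root, sift down
    const top = this.heap[0];
    const last = this.heap.pop();
    if (this.heap.length > 0) {
      this.heap[0] = last;
      let i = 0;
      for (;;) {
        const l = 2 * i + 1, r = 2 * i + 2;
        let smallest = i;
        if (l < this.heap.length && this.heap[l].expiry < this.heap[smallest].expiry) smallest = l;
        if (r < this.heap.length && this.heap[r].expiry < this.heap[smallest].expiry) smallest = r;
        if (smallest === i) break;
        [this.heap[smallest], this.heap[i]] = [this.heap[i], this.heap[smallest]];
        i = smallest;
      }
    }
    return top;
  }
}

const timers = new TimerHeap();
timers.push({ expiry: 300 });
timers.push({ expiry: 100 });
timers.push({ expiry: 200 });
console.log(timers.peek().expiry); // 100
```

With this store, the while + schedule loop only ever has to compare the current time against peek(), no matter how many timers are pending.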

References

A Journey through CPU pipelines

64-ia-32-architectures-optimization-manual

Follow my WeChat official account "SUNTOPO WLOG". Comments and discussion are welcome; I will reply as much as I can. Thank you for reading.