Excerpted from: Node.js Debugging Guide — Node Scene Revelations
Today, Node.js is increasingly widely used in BFF (backend-for-frontend) layers, full-stack development, client-side tooling, and other fields. But while the application layer is booming, the runtime remains a black box for most front-end developers, and this has hindered the adoption of Node.js in real business scenarios.
Memory leak problem
- For leaks that grow slowly toward OOM, there is plenty of time to capture a heapsnapshot and analyze it to locate the leak (see the earlier article "Node Scene Revelations — Quickly Locating an Online Memory Leak").
- For cases such as a while loop whose break condition never triggers, a process hung by a long-running regular expression, or an application that OOMs within seconds because of an abnormal request, there is often no chance to capture a heapsnapshot, and no good way to handle them.
There are two ways to generate a Coredump file:
- Automatically: when the application crashes and terminates unexpectedly, the operating system records it. This approach is commonly used for "postmortem" debugging, such as analyzing an OOM triggered by a request avalanche, or capturing automatic Core Dumps for uncaught exceptions.
Note that this is not an entirely safe operation: production deployments usually run under a daemon with automatic restart, such as pm2. If the program crashes and restarts frequently under some condition, it will produce a large number of Coredump files and may even fill up the server disk. So after enabling this option, remember to monitor and alert on server disk usage.
- Manually, by running
gcore <pid>
This is typically used as a "biopsy": locating problems while the Node.js process appears dead (hung).
This article introduces several approaches to debugging Node.js memory problems.
1 gcore + llnode
1.1 Core & Core Dump
Before we start, let's look at what Core and Core Dump mean.
What is the Core?
Before semiconductors were used as memory material, memory was built from small magnetic rings called cores, and memory made of them was called core memory. With the semiconductor industry booming, no one uses core memory anymore, but in many contexts people still call memory "core".
What is Core Dump?
When a program terminates or crashes during execution, the operating system records the program's memory state and saves it to a file. This behavior is called Core Dump. We can think of a Core Dump as a "memory snapshot", but in fact, besides memory, it also dumps other key runtime state: register contents (including the program counter and stack pointer), memory-management information, and other processor and operating-system state. Core Dumps are very helpful for diagnosing and debugging, because for errors that are hard to reproduce, such as pointer exceptions, the Core Dump file preserves the state of the program at the moment it failed.
1.2 Test Environment
$ uname -a
Darwin xiaopinguodeMBP 16.7.0 Darwin Kernel Version 16.7.0: Wed Oct 10 20:06:00 PDT 2018; root:xnu-3789.73.24~1/RELEASE_X86_64 x86_64
1.3 Enabling Core Dump
In the terminal type:
$ ulimit -c
This shows the maximum size of files that Core Dump may generate; 0 means Core Dump is disabled. Run the following command to enable Core Dump with no limit on file size:
$ ulimit -c unlimited
The command above only takes effect in the current terminal session. To make it permanent, modify /etc/security/limits.conf.
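For example, a hedged sketch of what the limits file entries might look like (the `*` domain applies to all users; adjust to your environment):

```
# /etc/security/limits.conf
#<domain>  <type>  <item>  <value>
*          soft    core    unlimited
*          hard    core    unlimited
```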
1.4 gcore
Gcore can be used to dump the core files of a specific process without restarting the program. Use gcore as follows:
$ gcore [-o filename] pid
Running it without a pid prints the usage:

$ gcore
gcore: no pid specified
usage:  gcore [-s] [-v] [[-o file] | [-c pathfmt ]] [-b size] pid
During Core Dump, a core.pid file is generated in the directory where the gcore command is executed by default.
1.5 llnode
What is llnode?
Node.js v4.x+ C++ plugin for LLDB – a next generation, high-performance debugger.
What is LLDB?
LLDB is a next generation, high-performance debugger. It is built as a set of reusable components which highly leverage existing libraries in the larger LLVM Project, such as the Clang expression parser and LLVM disassembler.
Install llnode and LLDB:
Github.com/nodejs/llno…
# Prerequisites: install LLDB and its library
brew update && brew install --with-lldb --with-toolchain llvm

# Install llnode
npm install -g llnode
1.6 Memory Leak Example
The following example demonstrates llnode with a typical memory leak: unbounded caching in a global variable. The code is as follows:
const leaks = []

function LeakingClass() {
  this.name = Math.random().toString(36)
  this.age = Math.floor(Math.random() * 100)
}

setInterval(() => {
  for (let i = 0; i < 100; i++) {
    leaks.push(new LeakingClass())
  }
  console.warn('Leaks: %d', leaks.length)
}, 1000)
Run the program:
$ node app.js
Wait a few seconds, open another terminal and run gcore:
$ ulimit -c unlimited
$ pgrep -n node
33833
$ sudo gcore -c core.33833 33833
This generates the core.33833 file.
1.7 Analyzing Core Files
Use LLDB to load the Core file just generated:
llnode -c ./core.33833
(lldb) target create --core "./core.33833"
Core file '/Users/xiaopingguo/repos/my_repos/node_repos/node-in-debugging/./core.33833' (x86_64) was loaded.
(lldb) plugin load '/usr/local/lib/node_modules/llnode/llnode.dylib'
Type v8 to view the help:
v8
The following subcommands are supported:

bt -- Show a backtrace with node.js JavaScript functions and their args. An
      optional argument is accepted; if that argument is a number, it specifies
      the number of frames to display. Otherwise all frames will be dumped.
      Syntax: v8 bt [number]
findjsinstances -- List every object with the specified type name.
      Flags:
       * -v, --verbose - display detailed `v8 inspect` output for each object.
       * -n <num>, --output-limit <num> - limit the number of entries displayed
         to `num` (use 0 to show all). To get the next page, repeat the command
         or press [ENTER].
      Accepts the same options as `v8 inspect`
findjsobjects -- List all object types and instance counts grouped by type
      name and sorted by instance count. Use -d or --detailed to get an output
      grouped by type name, properties, and array length, as well as more
      information regarding each type.
findrefs -- Finds all the object properties which meet the search criteria.
      The default is to list all the object properties that reference the
      specified value.
      Flags:
       * -v, --value expr - all properties that refer to the specified JavaScript object (default)
       * -n, --name name - all properties with the specified name
       * -s, --string string - all properties that refer to the specified JavaScript string value
getactivehandles -- Print all pending handles in the queue. Equivalent to
      running process._getActiveHandles() on the living process.
getactiverequests -- Print all pending requests in the queue. Equivalent to
      running process._getActiveRequests() on the living process.
inspect -- Print detailed description and contents of the JavaScript value.
      Possible flags (all optional):
       * -F, --full-string - print whole string without adding ellipsis
       * -m, --print-map - print object's map address
       * -s, --print-source - print source code for function objects
       * -l num, --length num - print maximum of `num` elements from string/array
      Syntax: v8 inspect [flags] expr
nodeinfo -- Print information about Node.js
print -- Print short description of the JavaScript value.
      Syntax: v8 print expr
settings -- Interpreter settings
source -- Source code information

For more help on any particular subcommand, type 'help <command> <subcommand>'.
- bt
- findjsinstances
- findjsobjects
- findrefs
- inspect
- nodeinfo
- source
Run v8 findjsobjects to view all object types with their instance counts and total sizes:
(llnode) v8 findjsobjects
Instances Total Size Name
---------- ---------- ----
...
356 11392 (Array)
632 35776 Object
8300 332000 LeakingClass
14953 53360 (String)
---------- ----------
24399 442680
As you can see, LeakingClass has 8300 instances, occupying 332000 bytes of memory in total. Use v8 findjsinstances to list all LeakingClass instances:
(lldb) v8 findjsinstances LeakingClass
...
0x221fb297fbb9:<Object: LeakingClass>
0x221fb297fc29:<Object: LeakingClass>
0x221fb297fc99:<Object: LeakingClass>
0x221fb297fd09:<Object: LeakingClass>
0x221fb297fd79:<Object: LeakingClass>
0x221fb297fde9:<Object: LeakingClass>
0x221fb297fe59:<Object: LeakingClass>
0x221fb297fec9:<Object: LeakingClass>
0x221fb297ff39:<Object: LeakingClass>
0x221fb297ffa9:<Object: LeakingClass>
(Showing 1 to 8300 of 8300 instances)
Use v8 i to inspect the contents of a specific instance:
(llnode) v8 i 0x221fb297ffa9
0x221fb297ffa9:<Object: LeakingClass properties {
.name=0x221f9bc82201:<String: "0.s3psjp4ctzj">,
.age=<Smi: 95>}>
(llnode) v8 i 0x221fb297ff39
0x221fb297ff39:<Object: LeakingClass properties {
.name=0x221fb297ff71:<String: "0.q1t4gikp9a">,
.age=<Smi: 6>}>
(llnode) v8 i 0x221fb297fec9
0x221fb297fec9:<Object: LeakingClass properties {
.name=0x221fb297ff01:<String: "0.zzomfpcmgn">,
.age=<Smi: 52>}>
You can see the values of the name and age fields for each LeakingClass instance.
Use v8 findrefs to see what references an instance:
(llnode) v8 findrefs 0x221fb297ffa9
0x221fd136cb51: (Array)[7041]=0x221fb297ffa9
(llnode) v8 i 0x221fd136cb51
0x221fd136cb51:<Array: length=10018 {
[0]=0x221f9b627171:<Object: LeakingClass>,
[1]=0x221f9b627199:<Object: LeakingClass>,
[2]=0x221f9b6271c1:<Object: LeakingClass>,
[3]=0x221f9b6271e9:<Object: LeakingClass>,
[4]=0x221f9b627211:<Object: LeakingClass>,
[5]=0x221f9b627239:<Object: LeakingClass>,
[6]=0x221f9b627261:<Object: LeakingClass>,
[7]=0x221f9b627289:<Object: LeakingClass>,
[8]=0x221f9b6272b1:<Object: LeakingClass>,
[9]=0x221f9b6272d9:<Object: LeakingClass>,
[10]=0x221f9b627301:<Object: LeakingClass>,
[11]=0x221f9b627329:<Object: LeakingClass>,
[12]=0x221f9b627351:<Object: LeakingClass>,
[13]=0x221f9b627379:<Object: LeakingClass>,
[14]=0x221f9b6273a1:<Object: LeakingClass>,
[15]=0x221f9b6273c9:<Object: LeakingClass>}>
As you can see: starting from the memory address of a LeakingClass instance, v8 findrefs locates the array that references it, and inspecting that array's address shows it has length 10018 with every entry a LeakingClass instance. That is exactly the leaks array in our code.
Tip: v8 i is shorthand for v8 inspect, and v8 p for v8 print.
1.8 --abort-on-uncaught-exception
Adding the --abort-on-uncaught-exception flag to the Node.js startup command makes the process produce a Core Dump automatically when it crashes on an uncaught exception, which is convenient for "postmortem" debugging.
Start the test program with the flag:
$ ulimit -c unlimited
$ node --abort-on-uncaught-exception app.js
In another terminal, run:
$ kill -BUS `pgrep -n node`
The first terminal will display:
Leaks: 100
Leaks: 200
Leaks: 300
Leaks: 400
Leaks: 500
Leaks: 600
Leaks: 700
Leaks: 800
Bus error (core dumped)
The debugging steps are the same as above:
(llnode) v8 findjsobjects
Instances Total Size Name
---------- ---------- ----
...
356 11392 (Array)
632 35776 Object
8300 332000 LeakingClass
14953 53360 (String)
---------- ----------
24399 442680
1.9 Summary
Our test code is simple and references no third-party modules. In a large project that pulls in many modules, the output of v8 findjsobjects is much harder to read. In that case, run gcore several times to take multiple Core Dumps, compare them to find the object types whose counts keep growing, and then diagnose those.
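The comparison workflow can be sketched in a few lines of Node.js. The snippet below is hypothetical helper code (not part of llnode): it parses two `v8 findjsobjects` tables captured from successive Core Dumps and lists the types whose instance counts grew, sorted by growth.

```javascript
// Assumption: input is the plain-text table printed by `v8 findjsobjects`,
// i.e. lines of the form "<instances> <total size> <type name>".
function parseFindjsobjects(text) {
  const counts = {}
  for (const line of text.split('\n')) {
    const m = line.match(/^\s*(\d+)\s+(\d+)\s+(\S.*)$/)
    if (m) counts[m[3].trim()] = Number(m[1])
  }
  return counts
}

// Return type names whose instance count grew, biggest growth first.
function growingTypes(before, after) {
  const a = parseFindjsobjects(before)
  const b = parseFindjsobjects(after)
  return Object.keys(b)
    .filter((name) => b[name] > (a[name] || 0))
    .sort((x, y) => (b[y] - (a[y] || 0)) - (b[x] - (a[x] || 0)))
}

// Sample tables (made-up numbers in the shape of the real output):
const before = `
  356      11392 (Array)
 8300     332000 LeakingClass
`
const after = `
  360      11520 (Array)
16800     672000 LeakingClass
`
console.log(growingTypes(before, after)) // [ 'LeakingClass', '(Array)' ]
```

A type that dominates this list across several dumps is a strong leak candidate.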
2 heapdump
heapdump is a tool that dumps V8 heap information; internally it calls v8::Isolate::GetCurrent()->GetHeapProfiler()->TakeHeapSnapshot(title, control). v8-profiler offers the same capability, but heapdump is simpler to use. The following shows how to use heapdump to analyze a Node.js memory leak.
Here’s a classic memory leak code for the test:
const heapdump = require('heapdump')

let leakObject = null
let count = 0

setInterval(function testMemoryLeak() {
  const originLeakObject = leakObject
  const unused = function () {
    if (originLeakObject) {
      console.log('originLeakObject')
    }
  }
  leakObject = {
    count: String(count++),
    leakStr: new Array(1e7).join(' '),
    leakMethod: function () {
      console.log('leakMessage')
    }
  }
}, 1000)
Why does this program leak memory? First we need to understand how closures work: all closures defined within the same enclosing function share a single closure scope. When the function executes and the first closure is created, that scope's memory is allocated and the local variables the closure uses are added to it; each subsequent closure adds the variables it uses (and that are not already there) to the same scope. When the function returns, variables not referenced from the closure scope are cleared.
Here, testMemoryLeak contains two closures: unused and leakMethod. unused references originLeakObject from the parent scope; if leakMethod did not exist, originLeakObject would be cleared when the function returned, along with the closure scope. But leakObject is a global variable and leakMethod is reachable from it, so the shared closure scope (which contains the originLeakObject captured by unused) is never released. As testMemoryLeak keeps firing, each originLeakObject points to the previous leakObject, and the next leakObject's leakMethod in turn retains that originLeakObject. This forms a closure reference chain, and each link's leakStr is a large string that can never be freed, hence the memory leak.
Solution: Add originLeakObject = null at the end of testMemoryLeak.
Run the test code:
$ node app
Then do it twice:
$ kill -USR2 `pgrep -n node`
Two heapsnapshot files are generated in the current directory:
heapdump-100427359.61348.heapsnapshot
heapdump-100438986.797085.heapsnapshot
2.1 Chrome DevTools
We use Chrome DevTools to analyze the heapsnapshot files generated above. Open Chrome DevTools -> Memory -> Load and load the snapshots in order. Select the second heap snapshot; a drop-down menu in the upper left offers four views:
- Summary: Displayed by the constructor name.
- Comparison: Compares the differences between multiple snapshots.
- Containment: Checks the entire GC path.
- Statistics: Displays memory usage as a pie chart.

Usually only the first two are used. The third is rarely needed, because expanding an item in the Summary or Comparison view already shows the path from the GC roots to that object; the fourth only shows the overall memory distribution.
Switching to the Summary page, you can see the following five properties:
- Constructor: constructor names, such as Object, Module, Socket, (array), (string), (regexp); names in parentheses denote the built-in array, string, and regexp types.
- Distance: the distance to the GC roots. The GC root is typically the window object in browsers and the global object in Node.js. The greater the distance, the deeper the reference chain; objects at unusually large distances deserve attention, as they are the most likely memory leaks.
- Objects Count: Indicates the number of Objects.
- Shallow Size: The Size of the object itself, excluding the objects it references.
- Retained Size: the size of the object itself plus the sizes of the objects only it keeps alive, i.e. the memory that becomes reclaimable once this object is GCed.
Tip:
- The Retained Size of an object = its own Shallow Size + the sum of the Shallow Sizes of all objects reachable only through it, directly or indirectly.
- For primitive types such as (boolean), (number), and (string), Shallow Size == Retained Size, since they reference no other objects.
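The first tip can be illustrated with a toy model. The code below is purely illustrative: it computes retained size over a hand-made object graph by checking what becomes unreachable when a node is removed, which is the idea behind the profiler's calculation (real sizes come from the V8 heap profiler, not from code like this).

```javascript
// graph: { node: [child names] }, shallow: { node: shallow size in bytes }
function reachable(graph, starts, skip) {
  const seen = new Set()
  const stack = [...starts]
  while (stack.length) {
    const n = stack.pop()
    if (n === skip || seen.has(n)) continue
    seen.add(n)
    for (const c of graph[n] || []) stack.push(c)
  }
  return seen
}

// Retained size of `node` = shallow sizes of everything that becomes
// unreachable from the roots once `node` is removed.
function retainedSize(graph, shallow, roots, node) {
  const withNode = reachable(graph, roots)
  const withoutNode = reachable(graph, roots, node)
  let size = 0
  for (const n of withNode) if (!withoutNode.has(n)) size += shallow[n]
  return size
}

// Toy graph: root -> a -> str (a ~10 MB string reachable only through `a`)
const graph = { root: ['a'], a: ['str'], str: [] }
const shallow = { root: 16, a: 32, str: 10000000 }
console.log(retainedSize(graph, shallow, ['root'], 'a')) // 10000032
```

Removing `a` frees both `a` itself (32 bytes) and the string only it references, which is exactly the Shallow vs Retained distinction above.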
Click Retained Size to sort in descending order; you can see that (closure) retains 99% of the memory. Expand it further:
As you can see, each leakStr accounts for about 5% of memory, while each leakMethod retains about 88%. The Retainers pane (the object's retaining tree) shows the GC path to the object. Click a leakStr (Distance 13); the Retainers pane expands automatically, with Distance decreasing from 13 down to 1.
We continue to open leakMethod as shown below:
As you can see: the context of the leakMethod of the originLeakObject with count="18" references the originLeakObject with count="17"; that object's leakMethod context references the one with count="16", and so on. Each originLeakObject carries a large leakStr string (each about 8% of memory) that cannot be freed, which matches our earlier reasoning.
Tip: If the background color is yellow, it means that the object is still referenced in JavaScript and may not have been cleared. If the background color is red, it means that the object has no reference in JavaScript, but it still lives in memory. This is usually seen in DOM objects. They are stored in a different location than objects in JavaScript, and are rarely encountered in Node.js.
2.2 Comparing Snapshots
Switch to the Comparison view to see attributes such as #New, #Deleted, and #Delta; the + and - values are relative to the snapshot being compared against. Here we compare the second snapshot with the first:
We can see that five new (string) entries were added, each 10000024 bytes in size.
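The 10000024-byte figure lines up with how leakStr is constructed: joining a 1e7-element array with ' ' yields 9999999 spaces, roughly one byte each in V8 plus a small object header.

```javascript
// leakStr from the test code: 1e7 array slots joined by single spaces
// produce 1e7 - 1 separator characters.
const leakStr = new Array(1e7).join(' ')
console.log(leakStr.length) // 9999999
```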
3 memwatch-next
memwatch-next (memwatch for short) is a module for monitoring Node.js memory leaks and diffing heap information. As an example, let's use a piece of code where event listeners cause a memory leak.
Test code is as follows:
let count = 1
const memwatch = require('memwatch-next')

memwatch.on('stats', (stats) => {
  console.log(count++, stats)
})

memwatch.on('leak', (info) => {
  console.log('--------------')
  console.log(info)
  console.log('--------------')
})

const http = require('http')
const server = http.createServer((req, res) => {
  for (let i = 0; i < 10000; i++) {
    server.on('request', function leakEventCallback() {})
  }
  res.end('Hello World')
  global.gc()
}).listen(3000)
After each request arrives, 10,000 request listeners are registered on the server (a large number of listener functions pile up in memory, causing the leak), and then a GC is triggered manually.
Run the program:
$ node --expose-gc app.js
Note: we start the program with the --expose-gc flag so that GC can be triggered manually from within the program.
Memwatch can listen for two events:
- stats: GC event, fired every time a GC runs, printing heap-related information as follows:
{
  num_full_gc: 1,          // number of full GCs
  num_inc_gc: 1,           // number of incremental GCs
  heap_compactions: 1,     // number of heap compactions
  usage_trend: 0,          // usage trend
  estimated_base: 5350136, // estimated base
  current_base: 5350136,   // current base
  min: 0,                  // minimum
  max: 0                   // maximum
}
- leak: memory-leak event, fired when memory has grown over 5 consecutive GCs. It prints:
{
growth: 3616040,
reason: 'heap growth over 5 consecutive GCs (0s) - -2147483648 bytes/hr'
}
Run:
$ ab -c 1 -n 5 http://localhost:3000/
Output:
(node:35513) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 request listeners added. Use emitter.setMaxListeners() to increase limit
1 { num_full_gc: 1,
num_inc_gc: 2,
heap_compactions: 1,
usage_trend: 0,
estimated_base: 5674608,
current_base: 5674608,
min: 0,
max: 0 }
2 { num_full_gc: 2,
num_inc_gc: 4,
heap_compactions: 2,
usage_trend: 0,
estimated_base: 6668760,
current_base: 6668760,
min: 0,
max: 0 }
3 { num_full_gc: 3,
num_inc_gc: 5,
heap_compactions: 3,
usage_trend: 0,
estimated_base: 7570424,
current_base: 7570424,
min: 7570424,
max: 7570424 }
4 { num_full_gc: 4,
num_inc_gc: 7,
heap_compactions: 4,
usage_trend: 0,
estimated_base: 8488368,
current_base: 8488368,
min: 7570424,
max: 8488368 }
--------------
{ growth: 3616040,
reason: 'heap growth over 5 consecutive GCs (0s) - -2147483648 bytes/hr' }
--------------
5 { num_full_gc: 5,
num_inc_gc: 9,
heap_compactions: 5,
usage_trend: 0,
estimated_base: 9290648,
current_base: 9290648,
min: 7570424,
max: 9290648 }
As you can see, Node.js warns when more than 10 listeners are added for a single event (11 here), a possible sign of a memory leak. After five consecutive GCs with memory growth, the leak event fires, reporting how much memory grew (in bytes) and the estimated growth per hour.
3.1 Heap Diffing
memwatch provides a HeapDiff function that computes the difference between two heap snapshots. Modify the test code as follows:
const memwatch = require('memwatch-next')
const http = require('http')
const server = http.createServer((req, res) => {
  for (let i = 0; i < 10000; i++) {
    server.on('request', function leakEventCallback() {})
  }
  res.end('Hello World')
  global.gc()
}).listen(3000)

const hd = new memwatch.HeapDiff()
memwatch.on('leak', (info) => {
  const diff = hd.end()
  console.dir(diff, { depth: 10 })
})

Run this code and execute the same ab command. The output is as follows:

(node:35690) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 request listeners added. Use emitter.setMaxListeners() to increase limit
{ before: { nodes: 35864, size_bytes: 4737664, size: '4.52 mb' },
after: { nodes: 87476, size_bytes: 8946784, size: '8.53 mb' },
change:
{ size_bytes: 4209120,
size: '4.01 mb',
freed_nodes: 894,
allocated_nodes: 52506,
details:
[ ...
  { what: 'Array',
    size_bytes: 533008,
    size: '520.52 kb',
    '+': 1038,
    '-': 517 },
  { what: 'Closure',
    size_bytes: 3599856,
    size: '3.43 mb',
    '+': 50001,
    '-': 3 },
  ... ] } }
As you can see, the heap grew from 4.52 MB to 8.53 MB, with Closure and Array accounting for most of the increase. This makes sense: registering an event listener essentially pushes the listener function into the corresponding array.
3.2 Combining with heapdump
memwatch works best in combination with heapdump: typically memwatch detects a memory leak, heapdump takes several heap snapshots, and Chrome DevTools compares them to locate the source of the leak.
Modify the code as follows:
const memwatch = require('memwatch-next')
const heapdump = require('heapdump')
const http = require('http')
const server = http.createServer((req, res) => {
  for (let i = 0; i < 10000; i++) {
    server.on('request', function leakEventCallback() {})
  }
  res.end('Hello World')
  global.gc()
}).listen(3000)

dump()
memwatch.on('leak', () => {
  dump()
})

function dump() {
  const filename = `${__dirname}/heapdump-${process.pid}-${Date.now()}.heapsnapshot`
  heapdump.writeSnapshot(filename, () => {
    console.log(`${filename} dump completed.`)
  })
}
The program takes a heap dump right after starting, and another whenever the leak event fires. Run it and execute the same ab command to generate two heapsnapshot files:
heapdump-21126-1519545957879.heapsnapshot
heapdump-21126-1519545975702.heapsnapshot
Load the two heapsnapshot files with Chrome DevTools and select the Comparison view: the thousands of newly added leakEventCallback closures stand out in the diff.
heapdump and memwatch-next are both useful, but in practice neither is entirely convenient: we cannot watch server state around the clock and manually trigger a heap dump once memory keeps growing past some threshold. In most cases, by the time a problem is noticed, the scene has already been lost. So we may need cpu-memory-monitor. As the name suggests, this module monitors CPU and memory usage and automatically dumps CPU profiles (cpuprofiles) and memory snapshots (heapsnapshots) according to configured policies.
4 cpu-memory-monitor
Let's look at how to use cpu-memory-monitor. It's simple: require it in the process entry file:
require('cpu-memory-monitor')({
  cpu: {
    interval: 1000,
    duration: 30000,
    threshold: 60,
    profileDir: '/tmp',
    counter: 3,
    limiter: [5, 'hour']
  }
})
The code above means: check CPU usage every 1000 ms (interval). If it exceeds 60% (threshold) 3 consecutive times (counter), profile the CPU for 30000 ms (duration) and write a ${process.pid}-${Date.now()}.cpuprofile file to /tmp (profileDir), at most 5 (limiter[0]) times per hour (limiter[1]).
That is the policy for automatic CPU dumps. The policy for memory dumps is similar:
require('cpu-memory-monitor')({
  memory: {
    interval: 1000,
    threshold: '1.2gb',
    profileDir: '/tmp',
    counter: 3,
    limiter: [3, 'hour']
  }
})
The code above means: check memory usage every 1000 ms (interval). If it exceeds 1.2gb (threshold) 3 consecutive times (counter), dump the memory once, writing a ${process.pid}-${Date.now()}.heapsnapshot file to /tmp (profileDir), at most 3 (limiter[0]) times per hour (limiter[1]).
Note: the memory configuration has no duration parameter, because a memory dump is a point-in-time snapshot rather than a profile over a period.
The smart reader will ask: can cpu and memory be used together? For example:
require('cpu-memory-monitor')({
  cpu: {
    interval: 1000,
    duration: 30000,
    threshold: 60,
    ...
  },
  memory: {
    interval: 10000,
    threshold: '1.2gb',
    ...
  }
})
The answer is: yes, but don’t. Because this might happen:
Memory climbs and reaches its threshold -> triggers a Memory Dump/GC -> drives CPU usage up past its threshold -> triggers a CPU Dump -> requests pile up further (e.g. many SQL query results held in memory) -> triggers another Memory Dump -> avalanche.
Usually, just use one or the other.
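The interval/threshold/counter policy described above boils down to a small piece of state logic. The sketch below is a simplified model of that policy, not cpu-memory-monitor's actual source: it fires once a sampled value exceeds the threshold the configured number of consecutive times.

```javascript
// Simplified model of the "counter consecutive samples above threshold" policy.
function makeThresholdTrigger(threshold, counter) {
  let hits = 0
  return function sample(value) {
    if (value > threshold) {
      hits += 1
      if (hits >= counter) {
        hits = 0
        return true // policy fires: this is where a dump would be taken
      }
    } else {
      hits = 0 // any sample below the threshold resets the streak
    }
    return false
  }
}

// threshold 60%, 3 consecutive hits required; 59 resets the streak.
const trigger = makeThresholdTrigger(60, 3)
console.log([61, 62, 59, 61, 62, 63].map((v) => trigger(v)))
// [ false, false, false, false, false, true ]
```

In the real module the sampled value comes from process CPU/memory statistics, and a rate limiter additionally caps how often a dump may fire.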
4.1 Source Code Interpretation
The source code for cpu-memory-monitor is no more than a hundred lines, and the general logic is as follows:
const processing = {
  cpu: false,
  memory: false
}
const counter = {
  cpu: 0,
  memory: 0
}

function dumpCpu(cpuProfileDir, cpuDuration) { ... }
function dumpMemory(memProfileDir) { ... }

module.exports = function cpuMemoryMonitor(options = {}) {
  ...
  if (options.cpu) {
    const cpuTimer = setInterval(() => {
      if (processing.cpu) {
        return
      }
      pusage.stat(process.pid, (err, stat) => {
        if (err) {
          clearInterval(cpuTimer)
          return
        }
        if (stat.cpu > cpuThreshold) {
          counter.cpu += 1
          if (counter.cpu >= cpuCounter) {
            cpuLimiter.removeTokens(1, (limiterErr, remaining) => {
              if (limiterErr) {
                return
              }
              if (remaining > -1) {
                dumpCpu(cpuProfileDir, cpuDuration)
                counter.cpu = 0
              }
            })
          }
        } else {
          counter.cpu = 0
        }
      })
    }, cpuInterval)
  }

  if (options.memory) {
    ...
    memwatch.on('leak', () => {
      dumpMemory(...)
    })
  }
}
As you can see, cpu-memory-monitor introduces nothing new; it is simply a combination of v8-profiler, heapdump, and memwatch-next.
The following points need to be noted:
- Only the resources (cpu or memory) whose configuration is passed in are monitored.
- Memory snapshots are written as ${process.pid}-${Date.now()}.heapsnapshot files.
- heapdump is required at the top of the module, so even without a memory configuration, a memory dump can be triggered manually with kill -USR2 <pid>.
Reference links
- Cnodejs.org/topic/5b640…
- Github.com/node-inspec…
- Github.com/bnoordhuis/…
- Github.com/marcominett…