In this article, I’ll explore the allocation of heap memory in Node, and then try to push memory as far as the hardware can handle. Then we’ll find some practical ways to monitor Node processes to debug memory-related issues.
OK, let’s get started!
Clone the code from my GitHub.
Introduction to V8 garbage collection
First, a brief introduction to the V8 garbage collector. Memory is allocated on the heap, which is divided into generational spaces. An object’s generation changes as it ages through its life cycle.

The heap is split into a young generation and an old generation, and the young generation is further divided into a nursery and an intermediate space. Objects that survive garbage collection are promoted into the old generation.

The generational hypothesis says that most objects die young. Based on this, the V8 garbage collector promotes only the objects that survive garbage collection. As objects are copied into adjacent spaces, they eventually end up in the old generation.
There are three main aspects of memory consumption in Node:
- Code – where the code being executed is stored
- Call stack – holds functions and local variables of primitive types (numbers, strings, or booleans)
- Heap memory
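As a quick sketch, `process.memoryUsage` reports on these consumers: `rss` covers the whole process, while `heapTotal` and `heapUsed` track V8’s heap specifically:

```javascript
// Snapshot memory consumption: rss is the whole process (code, stack, heap),
// heapTotal/heapUsed are V8's heap reservation and current usage.
const mu = process.memoryUsage();

console.log(`rss:       ${(mu.rss / 1024 / 1024).toFixed(1)} MB`);
console.log(`heapTotal: ${(mu.heapTotal / 1024 / 1024).toFixed(1)} MB`);
console.log(`heapUsed:  ${(mu.heapUsed / 1024 / 1024).toFixed(1)} MB`);
```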
Heap memory is our main focus today. Now that you know more about the garbage collector, it’s time to allocate some memory on the heap!
function allocateMemory(size) {
  // Simulate allocation of `size` bytes (each slot holds an 8-byte number)
  const numbers = size / 8;
  const arr = [];
  arr.length = numbers;
  for (let i = 0; i < numbers; i++) {
    arr[i] = i;
  }
  return arr;
}
Local variables on the call stack are destroyed when the function call ends. Primitives like number never live on the heap; they are allocated on the call stack. The arr object, however, goes on the heap and will likely survive garbage collection.
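A quick way to see this for yourself: a lone number barely registers, but a large array shows up immediately in heapUsed. This is a local sketch, not part of the original demo:

```javascript
// Allocating a large array is visible in heapUsed, because arrays are
// heap objects; the `before`/`after` numbers themselves live on the stack.
const before = process.memoryUsage().heapUsed;
const big = new Array(1_000_000).fill(0); // object: allocated on the heap
const after = process.memoryUsage().heapUsed;

console.log(`Array added ~${((after - before) / 1024 / 1024).toFixed(1)} MB to the heap`);
```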
Is there a limit to heap memory?
Now for a bolder test — push the Node process to its limit and see where it runs out of heap memory:
const memoryLeakAllocations = [];

const field = "heapUsed";
const allocationStep = 10000 * 1024; // 10MB

const TIME_INTERVAL_IN_MSEC = 40;

setInterval(() => {
  const allocation = allocateMemory(allocationStep);
  memoryLeakAllocations.push(allocation);

  const mu = process.memoryUsage();
  // # bytes / KB / MB / GB
  const gbNow = mu[field] / 1024 / 1024 / 1024;
  const gbRounded = Math.round(gbNow * 100) / 100;
  console.log(`Heap allocated ${gbRounded} GB`);
}, TIME_INTERVAL_IN_MSEC);
In the code above, we allocate roughly 10 MB every 40 milliseconds, which gives garbage collection enough time to promote surviving objects to the old generation. process.memoryUsage is a tool for retrieving rough metrics about heap utilization. As heap allocations grow, the heapUsed field tracks the size of the heap. It reports the number of bytes in RAM, which we convert to GB.
Your results may vary. On a Windows 10 laptop with 32 GB of RAM, I got the following:
Heap allocated 4 GB
Heap allocated 4.01 GB

<--- Last few GCs --->
[18820:000001A45B4680A0] 26146 ms: Mark-sweep (reduce) 4103.7 (4107.3) -> 4103.7 (4108.3) MB, 1196.5 / 0.0 ms (average mu = 0.112, current mu = 0.000) last resort GC in old space requested

<--- JS stacktrace --->
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
Here, the garbage collector tries compacting memory as a last resort, then gives up and throws an “out of heap memory” exception. The process hit a 4.1 GB limit and took 26.6 seconds to realize it was about to die.
Some of the reasons for these results are unknown. The V8 garbage collector originally ran in a 32-bit browser process with strict memory limits. These results suggest that memory limits may have been inherited from legacy code.
At the time of this writing, the code above runs under the latest LTS version of Node using a 64-bit executable. In theory, a 64-bit process should be able to allocate more than 4 GB and grow comfortably toward 16 TB of address space.
Expand memory allocation limits
node --max-old-space-size=8000 index.js
This sets the maximum limit to 8 GB. Be careful when you do this. My laptop has 32 GB of RAM, and I recommend setting the limit to what is actually available in physical memory. Once physical memory runs out, the process starts eating disk space through virtual memory. If you set the limit too high, you’ll have found a new way to break your computer, so let’s try to avoid frying it!
Let’s run the code again with the 8GB limit:
Heap allocated 7.8 GB
Heap allocated 7.8 GB

<--- Last few GCs --->
[16976:000001ACB8FEB330] 45701 ms: Mark-sweep (reduce) 8000.2 (8005.3) -> 8000.2 (8006.3) MB, 1468.4 / 0.0 ms (average mu = 0.211, current mu = 0.000) last resort GC in old space requested

<--- JS stacktrace --->
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
This time the heap almost reached 8 GB, but not quite; I suspect the Node process has some overhead when allocating that much memory. The process died after 45.7 seconds.
In a production environment, memory may run out in less than a minute. This is one reason why monitoring and insight into memory consumption can be helpful. Memory consumption increases slowly over time, and it can take days to know there is a problem. If the process keeps crashing and the “out of heap” exception appears in the log, there may be a memory leak in the code.
The process may also take up more memory simply because it is processing more data. If resource consumption keeps growing, it may be time to split the monolith into microservices. This reduces memory pressure on individual processes and lets Node scale horizontally.
How do I track Node.js memory leaks
The heapUsed field of process.memoryUsage is somewhat useful. One way to debug memory leaks is to feed memory metrics into another tool for further processing. Since the implementation is not complex, I’ll focus on doing it myself.
const path = require("path");
const fs = require("fs");
const os = require("os");

const start = Date.now();
const LOG_FILE = path.join(__dirname, "memory-usage.csv");

fs.writeFile(LOG_FILE, "Time Alive (secs),Memory GB" + os.EOL, () => {}); // fire-and-forget
To avoid keeping heap allocation metrics in memory, we write the results to a CSV file for easy consumption later. We use the asynchronous writeFile function with a callback; the empty callback lets it write the file and move on without any further processing. To capture incremental memory metrics, add this above the console.log:
const elapsedTimeInSecs = (Date.now() - start) / 1000;
const timeRounded = Math.round(elapsedTimeInSecs * 100) / 100;

fs.appendFile(LOG_FILE, timeRounded + "," + gbRounded + os.EOL, () => {}); // fire-and-forget
The code above can be used to debug memory leaks where the heap grows over time. You can feed the raw CSV data into an analysis tool for a nicer visualization.
If you’re just in a hurry to see what the data looks like, just use Excel, as shown below:
At the 4.1 GB limit, you can see memory usage climb linearly over a short period. Consumption keeps growing without flattening out, which is a sign of a memory leak somewhere. When debugging this kind of problem, look for the code making allocations that end up in the old space and stay there.
Objects that survive garbage collection may persist until the process terminates.
To make this leak-detection code more reusable, wrap it in its own interval (since it doesn’t have to live inside the main loop):
setInterval(() => {
  const mu = process.memoryUsage();
  // # bytes / KB / MB / GB
  const gbNow = mu[field] / 1024 / 1024 / 1024;
  const gbRounded = Math.round(gbNow * 100) / 100;

  const elapsedTimeInSecs = (Date.now() - start) / 1000;
  const timeRounded = Math.round(elapsedTimeInSecs * 100) / 100;

  fs.appendFile(LOG_FILE, timeRounded + "," + gbRounded + os.EOL, () => {}); // fire-and-forget
}, TIME_INTERVAL_IN_MSEC);
Note that these methods are not meant for production use; they simply show how to debug memory leaks locally. A real setup would add automated dashboards, alerting, and log rotation so the server doesn’t run out of disk space.
Trace node.js memory leaks in production
Although the code above isn’t suitable for production, we have now seen how to debug memory leaks locally. As an alternative, you can wrap the Node process in a daemon like PM2.
Set a restart policy when memory consumption reaches the limit:
pm2 start index.js --max-memory-restart 8G
The units can be K (kilobytes), M (megabytes), and G (gigabytes). It takes about 30 seconds for the process to restart, so configure multiple nodes with a load balancer to avoid interruptions.
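The same restart policy can live in a PM2 ecosystem file, which also covers the load-balancing advice above via cluster mode. A hedged sketch (the app name and script path are placeholders):

```javascript
// Hypothetical ecosystem.config.js for PM2: restart any instance that
// crosses the memory limit, and run one instance per core so the others
// keep serving traffic during the ~30s restart window.
module.exports = {
  apps: [
    {
      name: "index",              // placeholder app name
      script: "./index.js",       // placeholder entry point
      instances: "max",           // one process per CPU core
      exec_mode: "cluster",       // PM2 load-balances across instances
      max_memory_restart: "8G",   // same policy as the CLI flag above
    },
  ],
};
```

Start it with `pm2 start ecosystem.config.js`.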
Another nifty tool is node-memwatch, a cross-platform native module that fires an event when it detects a memory leak in running code.
const memwatch = require("node-memwatch");

memwatch.on("leak", function (info) {
  // event emitted
  console.log(info.reason);
});
The leak event fires when the heap keeps growing across consecutive garbage collections; its callback receives an info object whose reason field describes the growth.
Diagnose memory limits using AppSignal’s Magic Dashboard
AppSignal has a magic dashboard for monitoring heap growth and garbage collection statistics.
The figure above shows requests halting for about 7 minutes around 14:25 while garbage collection reduced memory pressure. The dashboard also surfaces cases where objects linger in the old space too long and leak memory.
Summary: Resolve Node.js memory limitations and leaks
In this article, we first looked at what the V8 garbage collector can do, then looked at whether there are heap memory limits and how to extend the memory allocation limits.
Finally, we looked at a few tools for keeping an eye on memory leaks in Node.js. We saw that memory allocation monitoring can be done with crude homemade utilities built around process.memoryUsage, though the analysis there remains manual.
Another option is to use specialist tools like AppSignal, which provides monitoring, alerts, and nifty visualizations to diagnose memory problems in real time.
Hope you enjoyed this quick introduction to memory limits and diagnosing memory leaks.