An overview of JavaScript memory management and garbage collection
Programming languages such as C have low-level memory management primitives such as malloc() and free(). Developers use these primitives to explicitly allocate memory from, and release it back to, the operating system.
JavaScript allocates memory when values (objects, strings, and so on) are created and “automatically” frees it when they are no longer used, a process known as garbage collection. This “automatic” allocation and release is a source of confusion, because it gives JavaScript developers (and developers in other high-level languages) the false impression that they do not need to care about memory management, which is a big mistake.
Even when using high-level languages, developers should understand memory management, or at least know the basics. Sometimes automatic memory management has problems (such as bugs or implementation limitations in the garbage collector) that developers must understand in order to handle them properly (or to find a workaround that keeps the code maintainable at minimal cost).
What is memory?
The computers we commonly use today follow the von Neumann architecture: the hardware consists of five major parts, namely the control unit, the arithmetic logic unit, memory, input devices, and output devices.
What we usually call memory is main memory (RAM, random access memory).
Common main memory is volatile (it keeps data only while it is constantly refreshed, and loses it once power is cut), so we also need large-capacity, low-cost non-volatile storage to hold data. This is external storage: tape, floppy disks, hard disks, CDs, flash cards, USB drives, and so on. You can think of external storage as an input/output device, because it is accessed through an I/O interface, whereas main memory is addressed directly by the CPU. Programs on external storage must be loaded into memory through an I/O interface before they can run.
Memory is where programs run, and programs are essentially collections of instructions and data. So memory is temporary storage for instructions and data, which are then processed by the CPU.
How does it work
At the hardware level, computer memory consists of a large number of flip-flops. Each flip-flop contains several transistors and can store one bit, and individual flip-flops can be addressed by a unique identifier so that we can read and overwrite them. Conceptually, then, the entire computer memory can be thought of as one huge array of bits that can be read and written.
Since humans are not very good at thinking and calculating in bits, we organize bits into larger groups that together can represent numbers. Eight bits make a byte; beyond bytes there are words (sometimes 16 bits, sometimes 32 bits). How many bytes a character occupies depends on the encoding.
A lot of things are stored in memory:
- All variables and other data used by the program.
- Program code, including operating system code.
The compiler and the operating system work together to handle most of the memory management for you, but it still helps to look under the hood to understand the underlying concepts.
When compiling code, the compiler can examine the primitive data types and calculate in advance how much memory they require. The required amount is then allocated to the program in the call stack space, where these variables live. It is called stack space because, as functions are called, their memory is pushed on top of the existing memory, and when they return it is removed in LIFO (last in, first out) order. For example:
```c
int n;       // 4 bytes
int x[4];    // an array of 4 elements, 4 bytes each
double m;    // 8 bytes
```
The compiler knows immediately how much memory it needs: 4 + 4×4 + 8 = 28 bytes.
This code shows how much memory integer and double-precision floating-point variables take up. But about 20 years ago, integer variables typically took up two bytes and double-precision floating-point variables four. Your code should not depend on the current sizes of the basic data types.
The compiler inserts code that interacts with the operating system and requests the number of stack bytes needed to store the variables.
In the example above, the compiler knows the exact memory address of each variable. In fact, every time we write the variable n, it is internally translated into something like “memory address 4127963”.
Note that if we tried to access x[4], we would end up accessing data associated with m. That is because accessing a non-existent array element (4 bytes past the last actually allocated element, x[3]) may end up reading (or overwriting) some of the bits of m. This is almost guaranteed to have unpredictable consequences for the rest of the program.
When a function calls another function, each function gets its own block on the call stack. It holds all local variables, but it also has a program counter to remember where it is during execution. When the function completes, its block of memory is used elsewhere again.
Memory life cycle
Regardless of which programming language is used, the memory life cycle is the same:
Here is a brief overview of each phase of the memory life cycle:
- Allocate memory – memory is allocated by the operating system, which allows your program to use it. In low-level languages such as C, this is an explicit operation that the developer must perform. In high-level languages, however, the allocation is done for you automatically.
- Use memory – this is when the program actually uses the allocated memory. Read and write operations happen as the allocated variables are used in the code.
- Free memory – Free all memory that is no longer used, making it free memory that can be reused. Like the memory allocation operation, this operation needs to be performed explicitly in low-level languages.
In JavaScript, steps 1 and 3 are done by the JS engine.
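The three phases look like this in JavaScript (a minimal illustration; the variable names are arbitrary):

```js
let user = { name: 'Ada' };   // 1. allocation: the engine reserves memory for the object
console.log(user.name);       // 2. use: reading (and writing) the allocated memory
user = null;                  // 3. release: with no remaining reference, the garbage
                              //    collector is free to reclaim the object's memory
```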
The memory model of the JavaScript engine
Take the V8 engine as an example.
A running program always corresponds to a portion of memory. This space is called the Resident Set, and its size is the Resident Set Size (RSS). V8 divides this space into several parts.
The functions of each part are as follows:
- Code Segment: Stores the Code being executed
- Stack: Stack memory that holds identifiers, primitive type values, and heap addresses referencing type variables
- Heap: Heap memory for reference type values
In the case of multithreading, each thread will have its own completely separate stack, but they will share the heap. Stack is thread-specific, while Heap is application-specific. Stack is an important consideration in exception handling and thread execution.
JavaScript is a single-threaded programming language, which means it has only one call stack.
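For example, when an error is thrown, its stack trace is a snapshot of that single call stack (an illustrative sketch; the function names are arbitrary):

```js
function third()  { throw new Error('where am I?'); }
function second() { third(); }
function first()  { second(); }

try {
  first();
} catch (err) {
  // The trace mirrors the single call stack at the moment of the throw:
  // third -> second -> first -> (top level)
  console.log(err.stack);
}
```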
In Node.js, we can query memory usage by calling the process.memoryUsage() method:

```js
process.memoryUsage();
// { rss: 4935680, heapTotal: 1826816, heapUsed: 650472, external: … }
```

The values above are in bytes:

- rss: the Resident Set Size
- heapTotal: the total size of the allocated heap
- heapUsed: the amount of heap actually used
- external: the memory used by C++ objects bound to JavaScript objects managed by V8
The stack and the heap
What are the heap and the stack?
Heap and stack are essentially two data structures.
Stack (data structure) : A first-in, last-out data structure.
Heap (data structure) : A heap can be thought of as a tree, as in heap sort.
The stack
Used for static memory allocation
A stack is a linear structure in memory used to store local variables and function parameters, following the principle of first in, last out. Data can only be pushed sequentially and removed sequentially. Of course, the stack is just a formal description of a contiguous area of memory, and the operation of moving data onto and off the stack is just the movement of the stack pointer up and down the memory address. As shown in the following figure (C language as an example) :
Variables in the stack disappear when the function call ends.
The stack is managed automatically by the operating system, not by V8 itself
The heap
The heap is used for dynamic memory allocation, and unlike the stack, the program needs to use Pointers to find data in the heap (think of it as a large, multi-tiered library).
The heap is an area of your computer’s memory that is not managed automatically for you and is not as strictly managed by the CPU. It is a more free-floating (and larger) region of memory. To allocate memory on the heap, you must use malloc() or calloc(), which are built-in C functions. Once you have allocated memory on the heap, you are responsible for freeing it with free() when it is no longer needed. If you fail to do this, your program has what is known as a memory leak: that memory on the heap stays reserved and is not available to other processes.
Unlike the stack, the heap places no limit on the size of the variables it stores (aside from the obvious physical limits of the computer). Heap memory is slightly slower to read and write, because pointers must be followed to access it.
Unlike the stack, variables created on the heap can be accessed from any function anywhere in the program. The scope of a heap variable is global in nature.
Stack vs Heap
Characteristics of the stack:

- Fast access (LIFO)
- Data stored on the stack must be finite and static (its size is known at compile time)
- Multithreaded applications can have one stack per thread
- Stack memory management is **straightforward** and is done by the operating system, without fragmentation
- The data stored consists of local variables: primitive values, the heap addresses of reference-type variables, pointers, and function frames (a frame contains the arguments passed to the function, the function’s local variables, and the address to return to when the function finishes). Each called function has its own stack frame, delimited by two pointers: the frame pointer (register EBP) and the stack pointer (ESP); see In Depth Understanding Computer Systems for more information
- Stack size is limited (it depends on the OS and architecture: i386, x86_64, PowerPC; the default stack size on most systems is around 2 to 4 MB)
- Variables cannot be resized
Characteristics of the heap:

- Slower access (relative to the stack)
- Stores data whose size is dynamic
- The heap is shared between the threads of the application
- You must manage the memory yourself (you are responsible for allocating and freeing variables)
- Typical data stored in the heap: global variables and reference-type values
- Essentially unlimited in size (generally speaking, on a 32-bit system the heap can use up to 4 GB of address space, so from this point of view it is almost unlimited)
- Efficient use of space is not guaranteed; over time, heap memory may become fragmented as blocks are allocated and then freed
In JavaScript, the stack holds references, while objects themselves are stored in the heap and looked up through those references when needed.
Dynamic allocation (heap-based memory allocation)
In computer science, dynamic memory allocation, also known as heap allocation, refers to allocating memory while the program is running. It can be thought of as a way of assigning ownership of limited memory resources.
Dynamically allocated memory remains in effect until it is explicitly freed by the programmer or reclaimed by the garbage collector. Unlike statically allocated memory, it has no fixed lifetime; objects allocated this way are said to have a “dynamic lifetime”.
Unfortunately, things get a little more complicated when you don’t know how much memory a variable requires at compile time. Suppose we want to do the following:
```c
int n = readInput();  // read user input
// ... create an array with n elements
```
At compile time, the compiler does not know how much memory the array needs to use, because this is determined by the user-supplied values.
Therefore, it cannot allocate space for variables on the stack. Instead, our program needs to explicitly request the appropriate space from the operating system at run time, which is allocated from heap space. The differences between static and dynamic memory allocation are summarized in the following table:
| Static memory allocation | Dynamic memory allocation |
|---|---|
| Size must be known at compile time | Size does not need to be known at compile time |
| Performed at compile time | Performed at runtime |
| Allocated on the stack | Allocated on the heap |
| FILO (first in, last out) | No particular order of allocation |
Reasons and advantages of dynamically allocating memory:
- When we don’t know how much memory the program needs.
- When we want a data structure that has no upper limit on storage space.
- When you want to use memory more efficiently. *Example:* if you statically allocate storage for a one-dimensional array as array[20] and end up using only 10 slots, the remaining 10 slots are wasted, and other program variables cannot even make use of that wasted memory.
- Insertions and deletions in a dynamically created linked list are very easy, requiring only pointer manipulation, whereas with statically allocated memory insertions and deletions require moving data around and waste memory.
- If you want to use the concepts of structures and linked lists in your programming, you must allocate dynamic memory.
To fully understand how dynamic memory allocation works, you would need to spend more time with C and pointers, which is too far removed from the subject of this article to cover in detail.
En.wikipedia.org/wiki/C_dyna…
Zh.wikipedia.org/wiki/%E5%8A…
Moduscreate.com/blog/dynami…
Allocate memory in JavaScript
Now I’ll explain the first step: how to allocate memory in JavaScript. Reference developer.mozilla.org/en-US/docs/…
Unlike C/C++, JavaScript has no strict distinction between stack and heap memory, so we can loosely say that all JavaScript data is stored in heap memory. Still, in some scenarios we reason with the stack data structure, for example the JavaScript execution context: execution contexts are entered and exited in stack order. It is therefore important to understand the principles and characteristics of the stack data structure.
JS data type and memory relationship
ECMAScript variables may contain values of two different data types: base type values and reference type values.
For primitive types, the data itself is stored in the stack. For reference types, only a reference to an in-heap address is stored in the stack.
To better understand stack memory and heap memory, we can use the following examples and diagrams to understand.
```js
var a1 = 0;                // lives in the variable object
var a2 = 'this is string'; // lives in the variable object
var a3 = null;             // lives in the variable object

var b = { m: 20 };   // the variable b lives in the variable object; {m: 20} lives in heap memory as an object
var c = [1, 2, 3];   // the variable c lives in the variable object; [1, 2, 3] lives in heap memory as an object
```
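To see the practical difference, here is a small example (illustrative variable names): copying a primitive copies the value itself, while copying a reference type copies only the heap address.

```js
let x = 10;
let y = x;           // the primitive value itself is copied
y = 20;
console.log(x);      // 10: x is unaffected

let obj1 = { m: 20 };
let obj2 = obj1;     // only the heap address is copied; both variables point to one object
obj2.m = 30;
console.log(obj1.m); // 30: the single object in the heap was modified
```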
Value initialization
To spare programmers the trouble of allocating memory, JavaScript does so when variables are defined.
```js
var n = 123;      // allocates memory for a number
var s = "azerty"; // allocates memory for a string

var o = {
  a: 1,
  b: null
}; // allocates memory for the object and its contained values

// allocates memory for the array and its values (just like an object)
var a = [1, null, "abra"];

function f(a) {
  return a + 2;
} // allocates memory for the function (a callable object)

// function expressions also allocate an object
someElement.addEventListener('click', function() {
  someElement.style.backgroundColor = 'blue';
}, false);
```
Allocate memory through function calls
Certain function calls also cause memory allocation of objects:
```js
var d = new Date();                    // allocates a Date object
var e = document.createElement('div'); // allocates a DOM element
```
Some methods allocate new variables or new objects:
var s = "azerty";
var s2 = s.substr(0.3); // s2 is a new string
// Since strings are invariants,
// JavaScript may decide not to allocate memory,
// only the range [0-3] is stored.
var a = ["ouais ouais"."nan nan"];
var a2 = ["generation"."nan nan"];
var a3 = a.concat(a2);
// The new array has four elements, which are the result of a joining a2
Copy the code
Use memory in JavaScript
Basically, using allocated memory in JavaScript means reading and writing to it. This can be done by reading or writing the value of a variable or object attribute, or by passing arguments to a function.
Release memory when it is no longer needed
Most memory management problems occur at this stage.
The hardest part here is determining when allocated memory is no longer needed, which typically requires the developer to determine where in the program memory is no longer needed and release it.
High-level languages embed a mechanism called a garbage collector, whose job is to track memory allocation and usage so that it can find allocated memory that is no longer needed and free it automatically.
Unfortunately, this procedure is only a rough estimate, because the general problem of knowing whether you need some chunk of memory is undecidable (it can’t be solved by an algorithm).
Most garbage collectors work by collecting memory that can no longer be reached, for example when all variables pointing to it have gone out of scope. However, that is an under-approximation of the set of memory that could be collected, because a memory location may still have an in-scope variable pointing to it even though it will never be accessed again.
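A small illustration of that last point (the names are hypothetical): a value kept alive by a closure is still reachable even if the program never reads it again, so the collector cannot reclaim it.

```js
function startSession() {
  const hugeLog = new Array(1e6).fill('entry'); // allocated once
  return function dumpLog() {
    // dumpLog can read hugeLog, so hugeLog stays reachable through this
    // closure for as long as dumpLog itself is reachable.
    return hugeLog.length;
  };
}

const dumpLog = startSession();
// If the program never calls dumpLog again, hugeLog is "never used again",
// but it is still reachable, so the garbage collector cannot free it.
```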
Garbage collection
Because it cannot be decided with certainty whether a given piece of memory is still useful, garbage collectors implement a restricted solution to the general problem.
Heap memory is where garbage collection (GC) occurs.
What is garbage collection
Let’s start with a brief introduction: what is garbage collection? As the name suggests, there are two parts to it: garbage, and collection.
Answering what/how/when for each of these two parts basically makes things clear:
- What is garbage? How do we find garbage? When do we look for garbage?
- What is collection? How do we collect? When do we collect?
In all garbage collection, garbage is an area of memory that is no longer used, and collecting it means making it available to be overwritten with new, useful data.
Memory references
Garbage collection algorithms rely primarily on references.
In the context of memory management, an object is said to reference another object if it has access to it (implicitly or explicitly). For example, a JavaScript object has a reference to its prototype (an implicit reference) and to the values of its properties (explicit references).
In this case, the concept of “object” specifically refers not only to JavaScript objects, but also to function scopes (or global lexical scopes).
Lexical scope defines how variable names are resolved in nested functions: an inner function contains the scope of its parent function even if the parent function has already returned.
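A small example of that behavior (illustrative names):

```js
function outer() {
  const parentValue = 'kept alive'; // belongs to outer's lexical scope
  return function inner() {
    // inner closes over parentValue, so outer's scope must remain
    // reachable even after outer has returned.
    return parentValue;
  };
}

const inner = outer();
console.log(inner()); // "kept alive"
```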
Lexical scope VS dynamic scope introduction
www.jianshu.com/p/70b38c7ab…
www.cnblogs.com/lienhua34/a…
Garbage collection algorithm
There are two main families of garbage collection algorithms: tracing garbage collection and reference counting.
Reference-counting garbage collection algorithm
This is the most rudimentary garbage collection algorithm. This algorithm simplifies the definition of “whether an object is no longer needed” to “whether the object has other objects referring to it”. If there are no references to the object (zero references), the object will be collected by the garbage collection mechanism. The following code:
```js
var o = {
  a: {
    b: 2
  }
};
// Two objects are created. One is referenced by the other as one of its
// properties; the other is referenced because it is assigned to the
// variable o. Clearly, neither can be garbage collected yet.

var o2 = o; // o2 is the second reference to "this object"
o = 1;      // now "this object" has only one reference left, via o2

var oa = o2.a; // reference the a property of "this object";
               // the inner object now has two references: one as a
               // property, the other via the oa variable

o2 = "yo"; // the outer object now has zero references and can be garbage
           // collected, but the object held in its a property is still
           // referenced by oa, so it cannot be reclaimed yet

oa = null; // the object that was in the a property now has zero references
           // and can be garbage collected
```
Loops can cause problems
There is a limitation when it comes to cycles. In the following example, two objects are created that reference each other, forming a cycle. They go out of scope after the function call, so they are effectively useless and their memory should be reclaimed. However, the reference-counting algorithm considers that, since each object is referenced at least once, neither can be collected. Circular references are a common cause of memory leaks.
```js
function f() {
  var o1 = {};
  var o2 = {};
  o1.a = o2; // o1 references o2
  o2.a = o1; // o2 references o1: a cycle
  return "azerty";
}

f();
```
Mark-and-sweep algorithm
The algorithm simplifies the definition of “an object is no longer needed” to “an object is not accessible”.
The algorithm consists of the following steps (a toy sketch in code follows the list):
- The garbage collector builds a list of “roots”, which are the referenced global variables. In JavaScript, the “window” object is a global variable that acts as a root; in Node.js it is the “global” object.
- The algorithm then inspects all roots and their children and marks them as active (meaning they are not garbage). Anything the roots cannot reach is marked as garbage.
- Finally, the garbage collector frees all blocks of memory that were not marked as active and returns that memory to the operating system.
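Here is a toy sketch of the idea in JavaScript. It only illustrates the mark and sweep phases over a made-up object graph (nodes with a references array); it is not how V8 actually implements it.

```js
// Mark: traverse everything reachable from the roots.
// Sweep: everything that was never marked is garbage.
function markAndSweep(roots, allNodes) {
  const marked = new Set();
  const stack = [...roots];
  while (stack.length > 0) {
    const node = stack.pop();
    if (marked.has(node)) continue;
    marked.add(node);               // reachable, so not garbage
    stack.push(...node.references); // visit its children next
  }
  return allNodes.filter(node => !marked.has(node)); // nodes to free
}

// Usage with a hypothetical graph:
const c = { references: [] };
const b = { references: [c] };
const a = { references: [] };      // unreachable from the root
const root = { references: [b] };
console.log(markAndSweep([root], [root, a, b, c]).length); // 1 (only a)
```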
Figure: the mark-and-sweep algorithm in operation (animated visualization).
This algorithm is an improvement over the previous algorithm, because “objects with zero references” are always inaccessible, but the opposite is not necessarily true, as we saw in the loop.
Disadvantages: this approach has several drawbacks, the most obvious being that the entire system must be paused during collection; no changes to the working set are allowed. This causes programs to “freeze” periodically (and often unpredictably), making certain real-time and time-critical applications impractical. In addition, the entire working memory must be examined, much of it twice, which can cause problems in a paged memory system.
In computer operating systems, paging is a memory management scheme by which the computer stores data in and retrieves data from secondary storage for use in main memory. Under this scheme, the operating system retrieves data from secondary storage in blocks of the same size, called *pages*. Paging is an important part of virtual memory in modern operating systems, using secondary storage to let programs exceed the size of available physical memory.
For simplicity, main memory is called “RAM” (short for “random access memory”) and secondary storage is called “disk” (a hard disk drive, magnetic drum memory, or solid-state drive), but the concepts do not depend on whether these terms apply literally to a particular computer system.
As of 2012, all modern browsers ship a mark-and-sweep garbage collector. All of the improvements made to JavaScript garbage collection in recent years (generational, incremental, concurrent, and parallel collection) are implementation improvements of mark-and-sweep, not changes to the garbage collection algorithm itself, and they do not change the definition of “an object is no longer needed”.
You think this is it? NO! All of the above are relatively simple and logical and easy to understand.
How do we solve the problem of these long pauses?
The mark phase causes the application to hang: “stop the world” (STW). A mutator (the technical term for anything that changes whether a memory region is referenced by the program, i.e. the program itself) runs and modifies the heap while the GC waits; at certain times (for example, when memory is full) the GC runs and the mutator waits. So the “how” and “when” of finding garbage are relatively simple: when memory is full, STW begins, and finding garbage is a graph traversal: starting from the roots, every reachable node is marked, and the unreachable nodes are garbage.
With STW, the main point is to keep the mutator out of the way while the GC works, for the same reason you shut the dog out of the room while the mopping robot runs. Incremental marking, on the other hand, is like mopping the floor with the dog still around: a battle of wits with the mutator.
We need to introduce the concept of incremental GC if we want to solve the problem of system downtime for long periods.
Incremental GC, as its name implies, allows collectors to execute in multiple small batches, each with minimal Mutator pauses and a near-real-time effect.
Figure 1: incremental collection vs. stop-the-world (STW).
Reference-counting GC is inherently incremental, but because of the flaws and inefficiency of the algorithm it is generally not used. The difficulty in making a tracing GC incremental is that the mutator may change the references between objects while the collector is traversing the reference graph.
To solve this problem and implement incremental GC, we need to introduce a new algorithm: tri-color marking.
Tri-color marking algorithm
The V8 blog published a post on V8’s garbage collection in 2018 that covers incremental collection. In fact, as early as 1975, Dijkstra had already proposed a solution to this problem in a paper: the tri-color marking algorithm. (Dijkstra also coined the word “mutator”.)
Because incremental collection is interleaved with the program (think of CPU time slicing, as in Figure 1 above), the GC can be paused and restarted at any time, so the scan results must be kept so that the next round can resume where it left off. Two-color marking only describes the scan result (black or white) and ignores the scan state: have this node’s children been scanned yet? If the previous round stopped partway through the graph, we would also need to know, for nodes such as A and B that are already marked, whether their children have been scanned.
To deal with this, Dijkstra introduced another color, gray, which means “this node is reachable from the roots, but its children have not been processed yet”; black means the node is reachable and its children have all been marked. When the scan resumes, only the gray nodes need to be processed.
Another advantage of introducing gray is that when there are no gray nodes left in the graph, marking is complete and sweeping can begin.
Objects only move from white to gray and from gray to black, so the algorithm preserves an important invariant: no black object references a white object. This guarantees that the white objects can be released once the gray set is empty. This is called the tri-color invariant.
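Below is a toy sketch of tri-color marking in JavaScript, continuing the made-up node graph (objects with a references array) from the mark-and-sweep sketch above; real collectors implement this at a much lower level.

```js
// white = not yet visited (absent from the map), gray = visited but children
// pending, black = fully processed. Only gray nodes sit in the worklist.
function tricolorMark(roots) {
  const color = new Map();
  const grayWorklist = [...roots];
  roots.forEach(root => color.set(root, 'gray'));

  // This loop can be paused after any iteration and resumed later,
  // which is what makes incremental marking possible.
  while (grayWorklist.length > 0) {
    const node = grayWorklist.pop();
    for (const child of node.references) {
      if (!color.has(child)) {      // a white child: shade it gray
        color.set(child, 'gray');
        grayWorklist.push(child);
      }
    }
    color.set(node, 'black');       // all children queued, so node is black
  }
  return color;                     // anything still white is garbage
}
```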
A violation of the tricolor invariant
Does having three colors make incremental collection work? It is not that simple. What counts as a failed garbage collection? Two things:
- throwing away something that is still useful;
- keeping something that is no longer needed.
In practice, letting useless garbage survive a few extra rounds is tolerable, but freeing something useful is not: I have just declared a variable and you tell me ReferenceError? WTF!
With traditional STW collection, distinguishing what is useful from what is not in the current state is easy: mark everything reachable from the roots. Incremental collection is different, and Dijkstra presents a mischievous mutator in his paper:
- There are three nodes A, B and C, and C keeps jumping between A and B: sometimes only A points to C, sometimes only B points to C.
- When the scan of A starts, C’s parent is B; when the scan of A ends, A is black and C is still white.
- When the scan of B starts, C’s parent has become A, so B appears to have no children; B is marked black and C is still white.
- Since A has already been marked black, its children are not rescanned, and the collector simply moves on.
- After this back-and-forth, C is collected as an orphan, and C’s fathers are left in helpless tears.
How to solve violations of the tri-color invariant
To solve the above problem, there are generally two ways to coordinate the behavior of mutator and collector:
- Read barriers, which prevent the mutator from seeing white objects: when the collector detects that the mutator is about to access a white object, it immediately processes that object and marks it gray. Since the mutator can never hold a pointer to a white object, it cannot make a black object point to one.
- Write barriers, which record every new black-to-white pointer created by the mutator and mark the target object gray, so that the collector will visit it again.
Read and write barriers are essentially synchronization: before the mutator can do certain things, it must let the collector do something first.
Write barriers
The problem case for the mutator is attaching a white node that has not yet been scanned to a node that has already been scanned (a black node). Because black nodes are not revisited, that white node would never be marked, even though it is now referenced by a scanned node, and it would be wrongly collected.
After considering this case, Dijkstra imposed a requirement: no black node may point to a white node! Every time a reference changes, the newly referenced node is recolored immediately: a white node is shaded gray right away, while gray and black nodes are left unchanged.
For example, in the C example above, when C’s parent changes (something like A.c = C), the C node is immediately colored gray and pushed onto the gray stack. This solves the problem of accidentally cleaning up useful nodes.
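A toy sketch of such a write barrier, reusing the hypothetical color map and gray worklist from the tri-color sketch above (real engines do this at the level of compiled code, not JavaScript):

```js
// Dijkstra-style insertion barrier: every pointer store goes through this
// helper. If a black object is about to point at a white object, the white
// object is shaded gray so the collector will visit it again.
function writeBarrier(parent, field, child, color, grayWorklist) {
  parent[field] = child; // the actual pointer store (the mutator's work)
  if (color.get(parent) === 'black' && !color.has(child)) {
    color.set(child, 'gray'); // re-establish the tri-color invariant
    grayWorklist.push(child);
  }
}
```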
Read barrier
Read barriers are not covered here; see the following articles:
Liujiacai.net/blog/2018/0…
Barrier Methods for Garbage Collection. Benjamin Zorn 1990
Conclusion
In short, tri-color marking mainly solves the problem that traditional two-color marking cannot be split into increments. With a third color, marking can be suspended and resumed, divided into small segments, and interleaved with the mutator, while write barriers solve the problem of concurrent mutator changes causing useful memory to be collected. Tri-color marking is only one of many techniques used in garbage collection; others, such as the generational hypothesis and the Scavenger algorithm, have their own subtleties and are worth further study.
Figure: the components of GC.
You can read more about tracking garbage collection in this article.
Also read Algorithms and Implementations of Garbage Collection.
References: www.cs.cmu.edu/~fp/courses…
Liujiacai.net/blog/2018/0…
Liujiacai.net/blog/2018/0…
Liujiacai.net/blog/2018/0…
Currently there is no automatic garbage collection algorithm that suits every scenario, and V8 internally uses a variety of garbage collection algorithms to reclaim objects with short and long lifetimes. The above covers the basic methods and principles of garbage collection.
Loops are no longer a problem
In the first example above where the loop causes problems, the two objects are no longer referenced by objects accessible from the global object after the function call returns. Therefore, the garbage collector will find them inaccessible.
Even though references still exist between the two objects, they are no longer reachable from the root.
Counterintuitive behavior of garbage collectors
As convenient as garbage collectors are, they come with their own set of trade-offs, one of which is non-determinism: in other words, GC is unpredictable, and you cannot really tell when a collection will happen. This means that in some cases a program uses more memory than it actually needs, and in speed-sensitive applications the short pauses may be noticeable. Most GCs stay idle if no allocation is happening. Consider this scenario:
- Perform bulk allocation.
- Most (or all) of these elements become unreachable (say, because we null out a reference to a cache we no longer need).
- No more allocations are performed.
In these scenarios, most GCs will not run any further collections. In other words, even though there are unreachable objects available to collect, the collector does not reclaim them. These are not strictly leaks, but they still result in higher-than-usual memory usage.
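The scenario in code (a hypothetical cache, purely illustrative):

```js
// 1. bulk allocation
let cache = [];
for (let i = 0; i < 100000; i++) {
  cache.push({ id: i, payload: new Array(100).fill(i) });
}

// 2. everything in the cache becomes unreachable
cache = null;

// 3. no further allocations happen from here on, so the collector may see
// no reason to run: the unreachable objects can sit in the heap for a
// while, and reported memory usage stays higher than expected.
```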
v8.dev/blog/trash-…
Node.js
Node.js provides additional options and tools for configuring and debugging memory issues that may not be appropriate for JavaScript executed in a browser environment.
V8 engine flags
One available flag increases the maximum size of the heap:

```bash
node --max-old-space-size=6000 index.js
```
We can also use flags and the Chrome Debugger to expose the garbage collector to debug memory problems:
```bash
node --expose-gc --inspect index.js
```
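With --expose-gc, a global.gc() function becomes available, which makes it possible to trigger a collection manually and watch heapUsed drop. A small sketch (the index.js contents are hypothetical):

```js
// index.js: run with `node --expose-gc index.js`
function mb(bytes) {
  return (bytes / 1024 / 1024).toFixed(2) + ' MB';
}

let data = new Array(1e6).fill('x');                        // allocate
console.log('before:', mb(process.memoryUsage().heapUsed));

data = null;                                                // drop the only reference
global.gc();                                                // force a collection (needs --expose-gc)
console.log('after: ', mb(process.memoryUsage().heapUsed));
```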