Preface
This article walks through Node's core knowledge system in six parts: modularity, asynchronous I/O, asynchronous programming solutions, memory control, network programming, and processes. If you find it helpful, please like 👍 and bookmark ❤️ it.
Modularity
quick start
Node's module system is built on the CommonJS specification.
- Defining a module
module.exports = {
motto: 'If there is an afterlife to be a tree, stand into eternity, no posture of sorrow and joy'
}
- Importing a module
const koa = require('koa');
How is a module loaded?
The first time a module is required, Node goes through the three steps below. Note that Node caches loaded modules automatically and checks the cache first on subsequent requires.
- Path analysis
- File location
- Compilation and execution
Path analysis
Depending on the module type, Node parses it differently.
- Core modules: modules built into Node. They have the second-highest loading priority after the cache. Some are compiled to binary when Node itself is built, so they load the fastest.
- Path-style file modules: identifiers starting with ./, ../, or / are treated as file modules. When analyzing a path module, require() converts the path to a real (absolute) path, uses it as the index, and stores the compiled result in the cache so the second load is faster.
- Custom modules: the most expensive to locate and the slowest to load; the lookup walks up the directory tree much like a prototype-chain search:
  node_modules in the current directory => node_modules in the parent directory => … => node_modules in the root directory
File location
- Extension analysis: require() can be called without a file extension; Node completes it by trying .js, .json, and .node in that order.
- Directory and package analysis: if the identifier is a directory, Node locates the file from the main property in its package.json; if there is no package.json, Node treats index as the file name and tries index.js, index.json, and index.node in turn.
Module compilation
- .js files: read synchronously through the fs module, then compiled and executed
- .node files: extensions written in C/C++; they are already compiled and are loaded with the dlopen() method
- .json files: read synchronously through the fs module, then parsed with JSON.parse() and the result returned
Implementing a Node package
juejin.im/post/5bdfa4…
What is npm
npm stands for Node Package Manager; as the name implies, it is Node's package management tool.
Here are some commonly used commands:
npm install packageName            # install the latest version
npm install packageName@latest     # likewise, install the latest version
npm install packageName@<version>  # install a specific version
npm uninstall packageName          # uninstall a package
npm set registry https://registry.npm.taobao.org   # set the download registry
npm get registry                   # show the current download registry
Asynchronous I/O
Why asynchronous I/O
Traditional backend languages favor multithreaded designs, which exploit multi-core processors and run tasks in parallel. But development costs are high: threads are expensive to create and context-switch, and multithreaded programming must also deal with locking and state synchronization.
Node is single-threaded, which avoids multithreaded deadlocks and state-synchronization problems, and it uses asynchronous I/O to keep that single thread from blocking, making better use of the CPU.
Blocking I/O and non-blocking I/O
- At the operating-system level there are only two kinds of I/O: blocking and non-blocking. With blocking I/O, the call does not return until everything has completed at the kernel level. For example, when reading a file, the call completes only after the kernel has done the disk seek, read the data, and copied it into memory.
- Non-blocking I/O returns immediately after the call (there is no waiting for the disk seek, the read, or the copy into memory).
- But to actually obtain the data, non-blocking I/O has to call the I/O operation repeatedly to check for a result — this is called polling.
- epoll is the most efficient polling scheme: if no I/O event is detected during polling, epoll sleeps until an event occurs.
- Node implements non-blocking asynchronous I/O through a thread pool: a few threads do the polling and blocking I/O, the main thread does the computation, and the I/O results are passed back through inter-thread communication.
- Node achieves this asynchronous I/O on both *nix and Windows platforms through the libuv library.
Event loop
The event loop in libuv is divided into six phases, repeated in order. On entering a phase, callbacks are taken from that phase's queue and executed; the loop moves to the next phase when the queue is empty or the number of executed callbacks reaches a system-set threshold.
The rough sequence of a tick in Node is:
incoming data –> poll phase –> check phase –> close callbacks phase –> timers phase –> I/O callbacks phase –> idle/prepare phase –> poll phase (repeating in this order)…
- Timers phase: runs setTimeout and setInterval callbacks
- I/O callbacks phase: handles the few I/O callbacks deferred from the previous cycle
- Idle, prepare phase: used only internally
- Poll phase: retrieves new I/O events; Node may block here under the right conditions
- Check phase: runs setImmediate() callbacks
- Close callbacks phase: runs close-event callbacks, such as a socket's close event
Note: none of the six phases above includes process.nextTick() — its callbacks run between phases, ahead of other microtasks.
We continue to detail the timers, Poll, and Check phases, as these are where most asynchronous tasks in daily development are handled.
timer
The timers phase runs setTimeout and setInterval callbacks, and its scheduling is controlled by the poll phase. As in the browser, a timer in Node does not fire at exactly the specified time — only as soon as possible after it.
poll
Poll is a crucial phase, in which the system does two things:
1. Check whether any timers are due and, if so, return to the timers phase to run their callbacks
2. Execute I/O callbacks
If no timer is due on entering this phase, then:
- If the poll queue is not empty, its callbacks are executed synchronously, in order, until the queue is empty or the system limit is reached
- If the poll queue is empty, one of two things happens:
  - If there are setImmediate callbacks to run, the poll phase ends and the loop enters the check phase to run them
  - If there are no setImmediate callbacks, the loop waits here for callbacks to be added to the queue and runs them immediately; a timeout setting prevents it from waiting forever
Of course, if timers have been set and the poll queue is empty, the loop checks whether any timer has expired and, if so, wraps back to the timers phase to run its callback.
The check phase
setImmediate() callbacks are added to the check queue, and as the phase sequence above shows, the check phase runs right after the poll phase.
Let’s start with an example:
console.log('start')
setTimeout(() => {
  console.log('timer1')
  Promise.resolve().then(function () {
    console.log('promise1')
  })
}, 0)
setTimeout(() => {
  console.log('timer2')
  Promise.resolve().then(function () {
    console.log('promise2')
  })
}, 0)
Promise.resolve().then(function () {
  console.log('promise3')
})
console.log('end')
// start => end => promise3 => timer1 => timer2 => promise1 => promise2
- The first macrotask (the main script) runs: start and end are printed and the two timers are queued in the timers phase. When the macrotask finishes, its microtasks run (same as in the browser), so promise3 is printed
- The loop then enters the timers phase, runs the timer1 callback (printing timer1) and pushes its promise.then callback into the microtask queue, then does the same for timer2 (printing timer2). This is quite different from the browser: several setTimeout/setInterval callbacks run back-to-back within the timers phase, instead of a microtask checkpoint after each macrotask as in the browser. (Note: this describes Node before v11; since Node 11 the behavior matches the browser, with microtasks drained after each timer callback. The difference between the Node and browser event loops is discussed further below.)
The observer
In Node, events originate mainly from network requests, file I/O, etc., and these events have corresponding observers.
The event loop is a typical producer/consumer model. Asynchronous I/O, network requests, and so on are producers of events that are passed to the corresponding observer, from which the event loop picks up and processes the event.
The request object
In the transition between the JavaScript call and the I/O operation there is an intermediate product called the request object. That is, the callback function is invoked not by the developer but by the request object.
- JS calls a Node core module
- The core module calls a C++ built-in module
- The built-in module makes the system call through libuv; a request object is created that wraps the parameters and methods passed from the JS layer, including the callback function (stored on the oncomplete property)
- Once the object is wrapped, on the Windows platform it is pushed into the thread pool for execution
Executing the callback
- When the I/O operation in the thread pool completes, the result is stored on the req->result property, and then IOCP (Windows's asynchronous I/O facility) is notified that the operation on the current object is finished
- The event loop's I/O observer then takes over: on each tick it calls the IOCP-related method to check whether the thread pool holds completed requests; if so, the request object is handed to the I/O observer and processed as an event, invoking the callback. With that, the asynchronous I/O cycle is complete
Asynchronous programming solutions
Publish and subscribe
class EventEmitter {
  private events: Object = {}; // stores listeners by event name
  private key: number = 0;     // unique key identifying each listener

  on(name: string, event: any) {
    event.key = ++this.key;
    this.events[name]
      ? this.events[name].push(event)
      : (this.events[name] = []) && this.events[name].push(event);
    return this;
  }
  once(name: string, cb) {
    const wrapped = (...args) => {
      cb.call(this, ...args);
      this.off(name, wrapped.key); // remove itself after the first call
    };
    this.on(name, wrapped);
    return this;
  }
  off(name: string, key?: number) {
    if (this.events[name] && key) {
      this.events[name] = this.events[name].filter(x => x.key !== key);
    } else {
      this.events[name] = [];
    }
    return this;
  }
  emit(name: string, key?: number) {
    if (!this.events[name] || this.events[name].length === 0) {
      throw Error(`Sorry, you have not defined a ${name} listener`);
    }
    if (key) {
      this.events[name].forEach(x => x.key === key && x());
    } else {
      this.events[name].forEach(x => x());
    }
    return this;
  }
}
The avalanche problem
Under high traffic and heavy concurrency, a cache expiry can send a flood of requests to the database at the same time; the database cannot withstand such a volume of queries, which in turn drags down the response time of the whole site.
Using a sentinel to coordinate the order of multiple asynchronous events
// Using a partial function
// A demo that renders once all three resources have loaded
const events = require('events');
const after = function (times, cb) {
  let count = 0;
  const results = {};
  return function (key, value) {
    results[key] = value;
    count++;
    if (count === times) cb(results);
  };
};

const emitter = new events.EventEmitter();
const done = after(times, render);
emitter.on("done", done);
emitter.on("done", other);
fs.readFile(template_path, "utf8", function (err, template) {
  emitter.emit("done", "template", template);
});
db.query(sql, function (err, data) {
  emitter.emit("done", "data", data);
});
l10n.get(function (err, resources) {
  emitter.emit("done", "resources", resources);
});
The Promise/Deferred pattern
- Promise.then mounts the callback functions
- The deferred's resolve/reject executes those callbacks
function MyPromise(executor) {
  const self = this;
  this.status = 'pending';
  this.value = undefined;
  this.reason = undefined;
  this.resolveQueue = [];
  this.rejectQueue = [];
  function resolve(value) {
    if (self.status === 'pending') {
      self.status = 'fulfilled';
      self.value = value;
      self.resolveQueue.forEach((fn) => fn());
    }
  }
  function reject(reason) {
    if (self.status === 'pending') {
      self.status = 'rejected';
      self.reason = reason;
      self.rejectQueue.forEach((fn) => fn());
    }
  }
  try {
    executor(resolve, reject);
  } catch (e) {
    reject(e);
  }
}
MyPromise.prototype.then = function (res, rej) {
  this.status === 'fulfilled' && res(this.value);
  this.status === 'rejected' && rej(this.reason);
  if (this.status === 'pending') {
    this.resolveQueue.push(() => res(this.value));
    this.rejectQueue.push(() => rej(this.reason));
  }
};
let p = new MyPromise((res, rej) => {
  setTimeout(() => res(1), 1000); // note: () => res(1), not res(1), so it fires after 1s
}).then((e) => console.log(e));
Async and await
async function fn() {
  const a = await new Promise((res) => {
    res(1);
  });
  console.log(a);
}
fn();
// 1
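One detail the snippet above skips: a rejected promise surfaces as an exception at the await site, so try/catch takes over from .catch(). A sketch (fetchValue is a made-up helper for illustration):

```javascript
// Rejections under await become exceptions, caught with ordinary try/catch.
async function fetchValue(shouldFail) {
  if (shouldFail) throw new Error('boom');
  return 42;
}

async function main() {
  const results = [];
  try {
    results.push(await fetchValue(false)); // 42
    results.push(await fetchValue(true));  // throws, jumps to catch
  } catch (e) {
    results.push('caught: ' + e.message);
  }
  return results;
}

main().then((r) => console.log(r)); // [ 42, 'caught: boom' ]
```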
Memory control
V8 garbage collection mechanism
- Objects are divided into new-generation and old-generation objects. The new generation occupies two semispaces, while the old generation occupies considerably more memory
- The new-generation heap is split in two; each half is called a semispace. Of the two semispaces, one is in use (From) and the other is idle (To)
- New objects are allocated in From. During garbage collection (Scavenge), live objects in From are copied to To; when copying finishes, From and To swap roles
- On each swap, two checks decide whether a live object is promoted from the new generation to the old generation: either it has already survived a previous Scavenge, or the To space is more than 25% used — meeting one condition is enough
- The old generation uses mark-sweep and mark-compact algorithms (mark-compact improves on mark-sweep by moving live objects to one end and then clearing everything beyond the boundary)
- To see the GC logs:
node projectName --trace_gc
Memory metrics
View the memory usage of a process
// Memory usage of the current Node process
process.memoryUsage()
// OS-level memory figures
os.totalmem() // total system memory, in bytes
os.freemem()  // free system memory, in bytes
Memory leaks
Main causes
- Caching: memory held by a cache is never freed, so an ever-growing cache object leaks memory. Solutions:
  - Apply a cache-limiting policy — e.g. FIFO or an LRU algorithm — to evict entries
  - Move the cache outside the process, reducing the number of long-lived in-memory objects and making garbage collection more efficient
  - An external cache can also be shared between processes
  - Look into Redis
- Queue consumption lag: consumers drain the task queue more slowly than producers fill it, so objects accumulate in memory and may leak. Solutions:
  - Set up monitoring that notifies a person when the queue builds up
  - Add a timeout mechanism: start a timer when a call is enqueued, and respond with a timeout error when it expires
- Scope leaks: some variables — closure variables, accidental globals — are not released in time, leaking memory.
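The timeout idea from the queue bullet above can be sketched with Promise.race — every enqueued task races a timer, so a stuck consumer yields a timeout error instead of letting objects pile up (names here are illustrative):

```javascript
// Wrap any queued task in a deadline. If the task does not settle within
// `ms`, the caller gets a timeout error and the memory can be released.
function withTimeout(task, ms) {
  let timer;
  const deadline = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('queue timeout')), ms);
  });
  return Promise.race([task, deadline]).finally(() => clearTimeout(timer));
}

// A fast task resolves normally; a stuck one is rejected after `ms`.
withTimeout(Promise.resolve('done'), 100).then((v) => console.log(v));       // done
withTimeout(new Promise(() => {}), 50).catch((e) => console.log(e.message)); // queue timeout
```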
Screening tools
- node-heapdump: github.com/bnoordhuis/…
- node-memwatch: github.com/lloyd/node-…
Large memory applications
Node uses the Stream module to read and write data too large to hold in memory all at once.
const fs = require('fs');
let reader = fs.createReadStream('in.txt');
let writer = fs.createWriteStream('out.txt');
reader.on('data', function (chunk) {
  writer.write(chunk);
});
reader.on('end', function () {
  writer.end();
});
Buffer object
A Buffer object is array-like; each element is a single byte with a value from 0 to 255, displayed as a two-digit hexadecimal number.
let str = 'i love javaScript';
let buffer = new Buffer(str, 'utf-8');
console.log(buffer); // <Buffer 69 20 6c 6f 76 65 20 6a 61 76 61 53 63 72 69 70 74>
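A note on current Node: the new Buffer(...) constructor used above is deprecated for safety reasons; Buffer.from and Buffer.alloc are the modern replacements:

```javascript
// Safe Buffer creation in current Node.
const fromStr = Buffer.from('i love javaScript', 'utf-8');
console.log(fromStr[0].toString(16));   // 69, the byte for 'i'

const zeroed = Buffer.alloc(4);         // 4 zero-filled bytes, never uninitialized memory
console.log(zeroed);                    // <Buffer 00 00 00 00>
console.log(fromStr.toString('utf-8')); // i love javaScript
```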
Memory allocation
Allocation mechanism
Node uses a slab allocation mechanism; a slab is a pre-allocated memory region of fixed size. It has the following three states:
- Full: Indicates the full allocation status
- Partial: indicates the partial allocation status
- Empty: no assigned state
Allocating Buffer objects
-
Allocating small Buffer objects (under 8KB)
- If the requested Buffer is smaller than 8KB, it is carved from a shared pool object: on each request Node checks whether the current pool's slab has enough remaining space. If so, the new Buffer is placed into that slab; if not, a fresh slab unit is allocated and the pool pointed at it.
- The whole process looks roughly like this:
function allocPool() {
  pool = new SlowBuffer(Buffer.poolSize);
  pool.used = 0;
}
if (!pool || pool.length - pool.used < this.length) allocPool();
-
Note that when the first slab is not used up and a second request is larger than its remaining space, a new slab is allocated and the first slab's leftover space cannot be reclaimed in time, wasting memory
-
Allocating large Buffer objects: a Buffer larger than 8KB is given a SlowBuffer directly as its own slab unit, exclusive to that Buffer
Encoding pitfalls
When a byte stream is read in fixed-size chunks and concatenated as strings, the data is split at the chunk boundary (11 bytes in the classic example). A Chinese character occupies 3 bytes in UTF-8, so a character split across two chunks decodes as garbage.
For a Buffer of any length, wide-byte strings may be truncated this way. The larger the Buffer, the lower the probability, but the problem can never be ignored.
Sample code for concatenation
Buffer.concat = function (list, length) {
  if (!Array.isArray(list)) {
    throw new Error('Usage: Buffer.concat(list, [length])');
  }
  if (list.length === 0) {
    return new Buffer(0);
  } else if (list.length === 1) {
    return list[0];
  }
  if (typeof length !== 'number') {
    length = 0;
    for (let i = 0; i < list.length; i++) {
      length += list[i].length;
    }
  }
  const buffer = new Buffer(length);
  let pos = 0;
  for (let i = 0; i < list.length; i++) {
    const buf = list[i];
    buf.copy(buffer, pos);
    pos += buf.length;
  }
  return buffer;
};
Performance considerations
Buffers hold binary data and can perform more than twice as well as strings. But pay close attention to the details of Buffer usage, or it is easy to end up with inexplicable garbled text and wasted memory.
Network programming
Building the TCP Service
quick start
// server.js
var net = require('net');
var server = net.createServer(function (socket) {
  // new connection
  socket.on('data', function (data) {
    socket.write("Hello");
  });
  socket.on('end', function () {
    console.log('Disconnected');
  });
  socket.write("Welcome to the simple Node.js example: \n");
});
server.listen(8124, function () {
  console.log('server bound');
});
// client.js
var net = require('net');
var client = net.connect({port: 8124}, function () { // 'connect' listener
  console.log('client connected');
  client.write('world! \r\n');
});
client.on('data', function (data) {
  console.log(data.toString());
  client.end();
});
client.on('end', function () {
  console.log('client disconnected');
});
api
-
Server events
- listening: fired when server.listen() binds a port or Domain Socket; it can be registered by passing a callback as the second argument, server.listen(port, listeningListener).
- connection: fired each time a client socket connects to the server; it can be registered concisely as the last argument to net.createServer().
- close: fired when the server shuts down. After server.close() is called, the server stops accepting new sockets but keeps existing connections, and fires this event once all connections have disconnected.
- error: fired when the server hits an exception — for example, listening on a port that is already in use. If no error listener is registered, the server throws the exception.
-
Client (socket) events
- data: when one end calls write() to send data, the other end fires a data event carrying the data that write() sent.
- end: fired when either end of the connection sends a FIN packet.
- connect: used on the client side; fired when the socket successfully connects to the server.
- drain: fired on the end that called write(), once its write buffer has been flushed.
- error: fired when an exception occurs.
- close: fired when the socket is fully closed.
- timeout: fired when the connection has been inactive for a given period, to tell the user the connection is currently idle.
Building the UDP Service
quick start
// server.js
var dgram = require("dgram");
var server = dgram.createSocket("udp4");
server.on("message", function (msg, rinfo) {
  console.log("server got: " + msg + " from " +
    rinfo.address + ":" + rinfo.port);
});
server.on("listening", function () {
  var address = server.address();
  console.log("server listening " +
    address.address + ":" + address.port);
});
server.bind(41234);
// client.js
var dgram = require('dgram');
var message = new Buffer("深入浅出Node.js");
var client = dgram.createSocket("udp4");
client.send(message, 0, message.length, 41234, "localhost", function (err, bytes) {
  client.close();
});
$ node server.js
server listening 0.0.0.0:41234
server got: 深入浅出Node.js from 127.0.0.1:58682
api
- message: fired when the UDP socket, listening on a NIC port, receives a message; the event carries the message Buffer and the remote address information.
- listening: fired when the UDP socket starts listening.
- close: fired when the close() method is called, after which message events stop firing. To receive message events again, rebind.
- error: fired when an exception occurs; if unhandled, the exception is thrown and the process exits.
Building the HTTP Service
quick start
var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');
api
-
Server events
- connection event: before HTTP requests and responses can flow, client and server establish an underlying TCP connection, which may be reused across multiple requests and responses when keep-alive is enabled. The server fires connection as soon as this connection is established.
- request event: once the TCP connection is up, the http module abstracts HTTP requests and responses out of the data stream. This event fires when request data arrives and the HTTP request headers have been parsed. After res.end(), the TCP connection may be reused for the next request/response.
- close event: mirroring the TCP server's behavior, calling server.close() stops the server from accepting new connections, and this event fires when all existing connections have disconnected. You can register it quickly by passing a callback to server.close().
- checkContinue event: some clients do not send large payloads directly; they first send a request with an Expect: 100-continue header, and the server fires checkContinue. If the server does not listen for this event, it automatically responds with the 100 Continue status code, meaning it accepts the upload; to refuse the data, respond with 400 Bad Request. Note that when this event fires, the request event does not — the two are mutually exclusive. When the client, having received 100 Continue, re-sends the request, the request event fires.
- connect event: fired when a client issues a CONNECT request, which usually appears with HTTP proxies; if the event is not listened for, the requesting connection is closed.
- upgrade event: when a client asks to upgrade the connection's protocol, it must negotiate with the server via the Upgrade header field; the server fires this event on receiving such a request. The WebSocket section below describes the process in detail. If the event is not listened for, the requesting connection is closed.
- clientError event: error events raised by the connected client are forwarded to the server, firing this event.
-
Client events
- response: the counterpart of the server's request event; fired when the client receives the server's response after sending a request.
- socket: fired when a connection from the underlying connection pool is assigned to the current request object.
- connect: when the client sends a CONNECT request to the server and the server responds with a 200 status code, the client fires this event.
- upgrade: when the client sends an upgrade request and the server responds with 101 Switching Protocols, the client fires this event.
- continue: the client sends an Expect: 100-continue header to try to send a large payload; if the server responds with 100 Continue, the client fires this event.
Build the Websocket service
quick start
var WebSocket = function (url) {
  // Pseudocode: parse ws://127.0.0.1:12010/updates into request options
  this.options = parseUrl(url);
  this.connect();
};
WebSocket.prototype.onopen = function () {
  // TODO
};
WebSocket.prototype.setSocket = function (socket) {
  this.socket = socket;
};
WebSocket.prototype.connect = function () {
  var that = this;
  var key = new Buffer(this.options.protocolVersion + '-' + Date.now()).toString('base64');
  var shasum = crypto.createHash('sha1');
  var expected = shasum.update(key + '258EAFA5-E914-47DA-95CA-C5AB0DC85B11').digest('base64');
  var options = {
    port: this.options.port,     // 12010
    host: this.options.hostname, // 127.0.0.1
    headers: {
      'Connection': 'Upgrade',
      'Upgrade': 'websocket',
      'Sec-WebSocket-Version': this.options.protocolVersion,
      'Sec-WebSocket-Key': key
    }
  };
  var req = http.request(options);
  req.end();
  req.on('upgrade', function (res, socket, upgradeHead) {
    // The connection succeeded
    that.setSocket(socket);
    // Trigger the open event
    that.onopen();
  });
};
Here is the response behavior on the server side:
var server = http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
});
server.listen(12010);
// After receiving the upgrade request, tell the client the protocol switch is allowed
server.on('upgrade', function (req, socket, upgradeHead) {
  var head = new Buffer(upgradeHead.length);
  upgradeHead.copy(head);
  var key = req.headers['sec-websocket-key'];
  var shasum = crypto.createHash('sha1');
  key = shasum.update(key + "258EAFA5-E914-47DA-95CA-C5AB0DC85B11").digest('base64');
  var headers = [
    'HTTP/1.1 101 Switching Protocols',
    'Upgrade: websocket',
    'Connection: Upgrade',
    'Sec-WebSocket-Accept: ' + key,
    'Sec-WebSocket-Protocol: ' + protocol
  ];
  // Flush the handshake response immediately
  socket.setNoDelay(true);
  socket.write(headers.concat('', '').join('\r\n'));
  // Establish the server side of the WebSocket connection
  var websocket = new WebSocket();
  websocket.setSocket(socket);
});
process
Creating a child process
| Method | Callback on error | Process type | Executes | Timeout support |
| --- | --- | --- | --- | --- |
| spawn() | ✗ | any | command | ✗ |
| exec() | ✓ | any | command | ✓ |
| execFile() | ✓ | any | executable file | ✓ |
| fork() | ✗ | Node | JavaScript file | ✗ |
Interprocess communication
Interprocess communication in Node is implemented with pipes provided by libuv; at the application layer it surfaces simply as the message event and the send() method.
Multiple child processes can listen on the same port by passing the server handle:
// parent.js
const child = require('child_process');
const child1 = child.fork('child.js');
const child2 = child.fork('child.js');
const server = require('net').createServer();
server.listen(1337, function () {
  // Hand the server handle to both children, then stop listening in the parent
  child1.send('server', server);
  child2.send('server', server);
  server.close();
});
// child.js
const http = require('http');
const server = http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end(String(process.pid));
});
process.on('message', function (m, tcp) {
  if (m === 'server') {
    tcp.on('connection', function (socket) {
      server.emit('connection', socket);
    });
  }
});
The cluster
Process events
- message: fired when a message is received
- send: the method for sending messages (the counterpart of the message event)
- error: fired when the child process could not be created, could not be killed, or a message could not be sent
- exit: fired when the child process exits; the first argument is the exit code if it exited normally, otherwise null. If the process was killed via kill(), a second argument carries the signal that killed it
- close: fired when the child process's standard input/output streams have terminated; the arguments are the same as for exit
- disconnect: fired in the parent or child when the disconnect() method is called; calling it closes the IPC channel being listened on
Load balancing
Node implements round-robin load balancing.
With round-robin scheduling, the master process accepts connections and distributes them to the worker processes in turn: with n workers, each new connection goes to worker i = (i + 1) mod n. It is plain round-robin, nothing fancier.
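The i = (i + 1) mod n rule in a few lines (worker names are placeholders):

```javascript
// Connections cycle through the workers in order, wrapping at the end.
function makeRoundRobin(workers) {
  let i = -1;
  return function next() {
    i = (i + 1) % workers.length;
    return workers[i];
  };
}

const next = makeRoundRobin(['w0', 'w1', 'w2']);
console.log([next(), next(), next(), next()]); // [ 'w0', 'w1', 'w2', 'w0' ]
```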
State sharing
Node processes do not share memory, so data is commonly shared in one of these ways:
- Third-party storage: databases, caching services, and the like
- Active notification: similar to the mediator pattern — when data changes, a notifier process tells the others
The Cluster module
Create a cluster of child processes
var cluster = require('cluster');
// Create a child process
cluster.setupMaster({
exec: "worker.js"
});
var cpus = require('os').cpus();
for (var i = 0; i < cpus.length; i++) {
cluster.fork();
}
- fork: fired after a worker process has been forked.
- online: after being forked, the worker sends an online message to the master; the master fires this event on receiving it.
- listening: after calling listen() (sharing the server socket), the worker sends a listening message to the master; the master fires this event on receiving it.
- disconnect: fired when the IPC channel between the master and a worker is disconnected.
- exit: fired when a worker process exits.
- setup: fired when cluster.setupMaster() is called.
reference
Node.js is easy to understand