1. Introduction to Node

Characteristics of Node

First of all, Node is not a language but a JavaScript runtime environment. It frees JavaScript from the confines of the browser and lets it run on the server. Node is built much like Chrome, except that it has no need for things like WebKit rendering or HTML. Node operates on an event-driven basis: it can connect to databases, build WebSocket servers, work with multiple processes, and so on.
It features:
- Asynchronous I/O: each call does not have to wait for the previous I/O call to finish, so the total time of several concurrent calls depends only on the slowest task.
- Events and callbacks: code is written in an order independent of the order in which it executes.
- Single thread: JavaScript executes on a single thread, so it cannot share state with other threads and never has to worry about state synchronization; it also cannot use a multi-core CPU by itself, although Node can share computing tasks with child processes.
- Cross-platform: platform differences are smoothed over by libuv.
Application scenarios of Node:

1. I/O-intensive applications
2. Distributed applications
3. Real-time applications
2. Module mechanism

Module loading involves:

- Cache-first loading
- Path analysis and file location
- Module compilation (different extensions use different loading methods)

There are two kinds of modules in Node. Core modules are provided by Node itself; some of them are loaded into memory when the process starts, which lets them skip file location and compilation, and they take priority during path analysis, so they load fastest. File modules are written by the user and must go through all three steps above, so they load more slowly.
File loading

- .js files are read synchronously by the fs module, then compiled and executed.
- .node files are loaded through the dlopen() method.
- .json files are read synchronously by the fs module, and the result is returned via JSON.parse().
- Other extensions are processed as .js files.
Core modules span the stack from JavaScript down to low-level C++.
3. Asynchronous I/O

Why asynchronous I/O?

- I/O is expensive in time.
- A single thread avoids multi-threaded deadlocks and context switches, and asynchronous calls keep it from being blocked by I/O.
The reality of asynchronous I/O

The operating system kernel only distinguishes blocking from non-blocking I/O. That is genuinely different from asynchronous versus synchronous, even though the terms sound alike.

Blocking I/O: the call does not return until the kernel has completed the entire operation. The CPU sits waiting for I/O, wasting time.

Non-blocking I/O: the call returns immediately, but only with the status of the call, not the data. To get the result, the application must call repeatedly to check whether the I/O is complete. This is called polling, and it burns CPU cycles on status checks, wasting CPU resources.
Existing polling techniques:

1. read: the most primitive method, with the lowest performance. The I/O status is checked by calling read repeatedly until the data is fully available.
2. select: an improvement on read that checks the event status of file descriptors. Because it stores state in an array of length 1024, it can watch at most 1024 file descriptors at a time.
3. poll: uses a linked list to avoid the array-length limit and skips some unnecessary checks, but performance still degrades when there are many file descriptors.
4. epoll: the most efficient event-notification mechanism on Linux. If no I/O event is detected during polling, the process sleeps until an event wakes it up.
5. kqueue: similar in implementation to epoll, but found on FreeBSD.
Polling satisfies non-blocking I/O's need to retrieve the data, but from the program's point of view it is still synchronous, because the program waits until the I/O fully returns; meanwhile the CPU is either scanning file descriptors or sleeping until an event occurs. Ideal asynchrony would let the program process the next task while waiting, with the data handed to the application via a signal or callback once the I/O completes.
Ideal asynchronous I/O:

Linux does offer AIO, which delivers data through signals or callbacks, but AIO only supports kernel I/O with O_DIRECT, so it cannot take advantage of the system cache.
Asynchronous I/O in practice:

A thread pool is used instead: I/O data is passed between threads, simulating asynchronous I/O.

Node's asynchronous I/O is built from four parts: the event loop, observers, request objects, and the I/O thread pool.
Event loop

Observers: each tick of the event loop asks the observers whether there are events that need processing.

Request object: the intermediate object between the JavaScript call and the kernel's execution of the I/O operation is called the request object.
Asynchronous APIs

1. Timers (not precise)

Timers created by setTimeout and setInterval are inserted into a red-black tree inside the timer observer. Each tick of the event loop takes timer objects out of the red-black tree and checks whether they have expired; if so, an event is formed and the callback runs immediately. The difference between the two is that setInterval's check repeats.
2. process.nextTick()

To run an asynchronous task as soon as possible, call process.nextTick(): the callback is put into a queue and taken out for execution on the next tick. This is O(1), whereas the timer's red-black tree is O(lg n), so process.nextTick() is lighter and more efficient than a timer.
3. setImmediate()

setImmediate() is similar to process.nextTick() in that it defers execution, but nextTick's callbacks are stored in an array while setImmediate's are kept in a linked list, and nextTick has higher priority than setImmediate. The reason is the order in which the event loop checks its observers: process.nextTick() belongs to the idle observer and setImmediate() to the check observer. Priority: idle observer > I/O observer > check observer.
4. Asynchronous programming

Asynchronous programming solutions:

- Event publish/subscribe pattern
- Promise/Deferred pattern: Deferred is used internally to maintain the state of the asynchronous model, while the Promise is exposed externally so callers can attach custom logic through its then() method.
```javascript
var util = require('util');
var EventEmitter = require('events').EventEmitter;

var Deferred = function () {
  this.state = 'unfulfilled';
  this.promise = new Promise();
};
Deferred.prototype.resolve = function (obj) {
  this.state = 'fulfilled';
  this.promise.emit('success', obj);
};
Deferred.prototype.reject = function (err) {
  this.state = 'failed';
  this.promise.emit('error', err);
};
Deferred.prototype.progress = function (data) {
  this.promise.emit('progress', data);
};

var Promise = function () {
  EventEmitter.call(this);
};
util.inherits(Promise, EventEmitter);

Promise.prototype.then = function (fulfilledHandler, errorHandler, progressHandler) {
  if (typeof fulfilledHandler === 'function') {
    this.once('success', fulfilledHandler);
  }
  if (typeof errorHandler === 'function') {
    this.once('error', errorHandler);
  }
  if (typeof progressHandler === 'function') {
    this.on('progress', progressHandler);
  }
  return this;
};
```

Promise.all:

```javascript
Deferred.prototype.all = function (promises) {
  var count = promises.length;
  var that = this;
  var results = [];
  promises.forEach(function (promise, i) {
    promise.then(function (data) {
      count--;
      results[i] = data;
      if (count === 0) {
        that.resolve(results);
      }
    }, function (err) {
      that.reject(err);
    });
  });
  return this.promise;
};
```
- Flow control libraries: Step, Wind
- Asynchronous concurrency control: the async library
  - async.parallel: for tasks with no dependencies between them
  - async.waterfall: for tasks where each depends on the previous result
  - async.auto: dependencies are analyzed automatically
5. Memory control

- V8 garbage collection and memory limits
- How to use memory well

Garbage collection algorithms:

- Mark-Sweep & Mark-Compact
- Scavenge

Buffer objects are not allocated by V8, so they are not subject to the V8 heap size limit.
6. Buffer

The structure of the Buffer

Buffer conversion: wide-byte encodings easily produce garbled characters.
```javascript
var fs = require('fs');
var rs = fs.createReadStream('test.md');
var data = '';
rs.on('data', function (chunk) {
  data += chunk;
});
rs.on('end', function () {
  console.log(data);
});
```
This is the standard pattern for stream reading, where chunk is a Buffer object. It may look fine for English text, but for wide-byte encodings it produces garbled characters. The problem is this statement:

```javascript
data += chunk;
```

which implies a hidden toString() operation. It is equivalent to:

```javascript
data = data.toString() + chunk.toString();
```
If we test by reading a Chinese text, garbled characters may appear.

The reason: a multi-byte character can be split across two chunks, and decoding each chunk separately corrupts it. Using setEncoding() or the string_decoder module only treats the symptom without changing the substance. The correct approach is to collect the Buffer chunks in an array and merge the small buffers into one large Buffer before decoding.
- When reading the same large file, the higher the highWaterMark (the upper limit per read), the faster the read.
7. Network programming

- TCP and UDP
- HTTP and HTTPS
- WebSocket
8. Building a Web application

- Data upload
- Route resolution
- Middleware
- Page rendering
9. Processes

Strictly speaking, Node is not a truly single-threaded architecture: it has its own I/O threads, which are managed by the underlying libuv. It is called single-threaded because the JavaScript code always runs on V8's single thread.
- Child processes

Addendum: for a JavaScript file to be directly executable via execFile, its first line must be:

```shell
#!/usr/bin/env node
```
- Interprocess communication

Addendum: only when the child process is a Node process will it connect to the IPC channel, based on an environment variable; other types of child processes cannot use this form of interprocess communication.
- Handle transfer
Finally

These are my notes so far; if anything is lacking, I hope you will point it out so we can discuss it. If you found this article helpful, please leave a like; that is my greatest support.