I started changing jobs at the end of last year, and the process has only just wrapped up. I interviewed with many companies on and off. Looking back at that time, I was grilled by interviewers and battered by written test questions.

In this article, I plan to summarize the various interview questions I encountered while job hunting (I took notes after every interview), along with interesting questions I ran into while reviewing. The New Year is the peak season for job-hopping, so this may help some of you.

Let’s talk about the difficulty of these questions first. Most of them are fundamentals, because my experience was that no matter whether you interview for a senior or a junior position, the basics will be asked, sometimes in depth, so the fundamentals are very important.

I will divide it into several articles according to the type:

Interview summary: JavaScript questions (completed)

Interview summary: Node.js questions (completed)

Interview summary: browser-related questions (completed)

Interview summary: CSS questions (completed)

Interview summary: framework (Vue) and engineering questions (completed)

Interview summary: non-technical questions (completed)

I will find time to finish the remaining summaries ~

This article is a summary of Node.js related questions.

Let’s look at the table of contents

Q: How does Node.js support high concurrency?

This question covers several aspects and makes for a good discussion; answering it well is a real plus. You can walk the interviewer through it in the following steps.

  1. Node.js's single-threaded architecture model

Node.js is not truly single-threaded, because it also has I/O threads (network I/O, disk I/O), which are handled by the underlying libuv and are transparent to developers. JavaScript code, however, always runs on V8 in a single thread.

So from a developer’s point of view, Node.js is single-threaded.

Here is a diagram of the overall architecture:

Notice the Event Loop on the right, which is what I’m going to talk about

Advantages and disadvantages of single-threaded architecture:

Advantage:

  • A single thread eliminates the overhead of switching between threads
  • There is no need to worry about thread synchronization or race conditions

Disadvantage:

  • The drawbacks are also obvious: machines now ship with four or more cores, and a single thread cannot make full use of the CPU
  • With a single thread, one crash brings the whole application down; anyone who has debugged scripts knows that if something goes wrong during execution, the whole run is over
  • Since only one CPU core is used, if a long computation occupies the CPU and never releases it, subsequent requests stay suspended and simply get no response

Of course, there are already mature solutions to these drawbacks, such as managing processes with PM2, or using Kubernetes (K8S).

  2. The core: the event loop mechanism

How can you support high concurrency with a single thread?

The core is the event loop mechanism of the JS engine (I think this is a good opening)

The event loops in Node.js and the browser differ slightly. The core of the event loop is the execution stack, the macrotask queue, and the microtask queue.

The Node.js event loop is one of the most frequently asked Node.js topics; it is covered in more detail in a dedicated question below.

  3. To conclude: Node.js is asynchronous and non-blocking, which is why it can withstand high concurrency

Let’s look at an example:

For example, request A comes in and needs to read a file; after the file is read, its content is processed and the data is returned to the client. But while the file is being read, another request comes in. How does that get handled?

I drew a rough picture (soul painter at work) just so you can follow along:

  • Request A enters the server and the thread begins processing it
  • Request A needs to read a file, so the read is handed off to file I/O. That is slow, say three seconds, so the request is suspended, waiting for a notification; this waiting is implemented by the event loop
  • While A is waiting, the CPU is already free, so when request B comes in, the CPU processes request B
  • There is no contention between the two requests. So when does a request block? When heavy computation is involved, because the computation runs on the JS engine and the execution stack is stuck, so nothing else can execute, for example repeatedly deep-cloning a very deep, large object with JSON.parse(JSON.stringify(bigObj)), as the sketch after this list shows
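To make this concrete, here is a minimal sketch (my own illustration, not part of the request scenario above) of how a CPU-bound loop blocks the event loop:

setTimeout(() => console.log('timer fired'), 0);

let sum = 0;
for (let i = 0; i < 1e9; i++) { // a long synchronous computation occupies the only JS thread
    sum += i;
}
console.log('computation done:', sum); // printed first
// 'timer fired' only appears after the loop finishes, even though its delay was 0ms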
  4. If you get the chance, extend the discussion to the concepts of synchronous, asynchronous, blocking, and non-blocking

Synchronous and asynchronous focus on message communication mechanisms.

  • Synchronous: after a call is made, it does not return until the result is available; the caller actively waits for the result.

  • Asynchronous: after a call is initiated, it returns immediately and execution continues without waiting for the result; the callee later informs the caller of the result through status, notifications, callbacks, etc. Node.js is a typical asynchronous programming model.

Blocking and non-blocking are concerned with the state of the thread while it waits for the result.

  • Blocking: the thread suspends execution while waiting for the result of a call
  • Non-blocking: the opposite; the current thread keeps executing while it waits. The fs module illustrates both styles, as sketched below
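As a small illustration (a minimal sketch, assuming a local file ./a.txt exists), the fs module exposes both styles:

const fs = require('fs');

// Synchronous / blocking: the thread waits until the file has been read completely
const data = fs.readFileSync('./a.txt');
console.log('sync read done, bytes:', data.length);

// Asynchronous / non-blocking: the call returns immediately,
// and the result is delivered later through the callback
fs.readFile('./a.txt', (err, content) => {
    if (err) throw err;
    console.log('async read done, bytes:', content.length);
});
console.log('this line runs before the async callback');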

Reference data: www.zhihu.com/question/19… zhuanlan.zhihu.com/p/41118827

Q: Describe the Node.js event loop

Assuming you’re familiar with the browser’s event loop, take a look at the following image:

As shown in the figure above, the event loop is divided into six phases, as follows:

  1. Timers: executes setTimeout and setInterval callbacks; the due timer callbacks are executed in sequence
  2. Pending: some system-level callbacks are executed in this phase
  3. Idle, prepare: this phase is for internal use only
  4. Poll: executes I/O callbacks; this phase is the most important and the most complicated
  5. Check: executes setImmediate() callbacks
  6. Close: executes close event callbacks, such as the socket’s close event

The phases relevant to everyday development

The three phases relevant to our development are Timers, Poll, and Check.

Timers: executes timer callbacks. Note that prior to Node 11, several consecutive due timer callbacks were executed back to back, instead of draining the microtask queue after each macrotask as in the browser.

Check: This phase performs the setImmediate() callback, which only exists in NodeJS.

Poll: the two phases above are actually driven from the Poll phase. The Poll phase executes in this order:

  1. First check whether any events have been scheduled for the check phase
  2. Then check whether the poll queue has events; if so, execute them
  3. After the poll queue has been drained, execute the events of the check phase

Node.js also has macrotasks and microtasks. Apart from process.nextTick, the classification of macrotasks and microtasks in Node.js is the same as in the browser.

So when are microtasks performed?

In the figure above, the phases marked in yellow are each followed by a small microtask step: after each of these phases, the events in the microtask queue are executed immediately.

Here’s the explanation.

An example with microtasks

The following code:

const fs = require('fs');
const ITERATIONS_MAX = 3;
let iteration = 0;
const timeout = setInterval(() => {
    console.log('START: setInterval', 'TIMERS PHASE');
    if (iteration < ITERATIONS_MAX) {
        setTimeout(() => {
            console.log('setInterval.setTimeout', 'TIMERS PHASE');
        });
        fs.readdir('./image', (err, files) => {
            if (err) throw err;
            console.log('fs.readdir() callback: Directory contains: ' + files.length + ' files', 'POLL PHASE');
        });
        setImmediate(() => {
            console.log('setInterval.setImmediate', 'CHECK PHASE');
        });
    } else {
        console.log('Max interval count exceeded. Goodbye.', 'TIMERS PHASE');
        clearInterval(timeout);
    }
    iteration++;
    console.log('END: setInterval', 'TIMERS PHASE');
}, 0);
// This is the first execution
// START: setInterval TIMERS PHASE
// END: setInterval TIMERS PHASE
// setInterval.setImmediate CHECK PHASE
// setInterval.setTimeout TIMERS PHASE

// Execute the second time
// START: setInterval TIMERS PHASE
// END: setInterval TIMERS PHASE
// fs.readdir() callback: Directory contains: 9 files POLL PHASE
// fs.readdir() callback: Directory contains: 9 files POLL PHASE
// setInterval.setImmediate CHECK PHASE
// setInterval.setTimeout TIMERS PHASE

// Execute the third time
// START: setInterval TIMERS PHASE
// END: setInterval TIMERS PHASE
// setInterval.setImmediate CHECK PHASE
// fs.readdir() callback: Directory contains: 9 files POLL PHASE
// setInterval.setTimeout TIMERS PHASE

process.nextTick

process.nextTick events have a higher priority than other microtask-queue events, so for callbacks that need to run as soon as possible, this method places them at the front of the microtask queue.

The following code:

Promise.resolve().then(function () {
    console.log('promise1')
})
process.nextTick(() => {
    console.log('nextTick')
    process.nextTick(() => {
        console.log('nextTick')
        process.nextTick(() => {
            console.log('nextTick')
            process.nextTick(() => {
                console.log('nextTick')
            })
        })
    })
})
// nextTick => nextTick => nextTick => nextTick => promise1

The difference from the browser’s event loop execution result

Let’s take a look at the following code execution in the browser and nodeJS respectively

setTimeout(() => {
  console.log('timer1')
  Promise.resolve().then(function () {
    console.log('promise1')
  })
}, 0)
setTimeout(() => {
  console.log('timer2')
  Promise.resolve().then(function () {
    console.log('promise2')
  })
}, 0)

If you are familiar with the browser’s event queues, you will quickly see that in the browser the output is timer1 -> promise1 -> timer2 -> promise2: the microtask queue is executed immediately after each macrotask completes.

What about in NodeJS?

The result is: timer1 -> timer2 -> promise1 -> promise2. Since the microtask queue is executed only after each phase, and the Timers phase has two callback events, both are executed in sequence first, and then the events in the microtask queue are executed before moving to the next phase.

Note: this result was tested on Node 10 and below. The behavior was changed in Node 11 and above, and the result is now the same as in the browser:

timer1->promise1->timer2->promise2

Reference article:

www.ibm.com/developerwo…

juejin.cn/post/684490…

Q: How does Node.js create processes and threads, and in what scenarios can they be used?

How to start multiple child processes

One of the drawbacks of a single thread is that it cannot take full advantage of multiple cores, so the cluster module was officially introduced. The cluster module can create child processes that share server ports.

const cluster = require('cluster');
const numCPUs = require('os').cpus().length; // one worker per CPU core
for (let i = 0; i < numCPUs; i++) {
    cluster.fork(); // Create a new worker process that can communicate with the parent process using IPC
}

Essentially, child_process.fork() is used to spawn new Node.js processes. The spawned Node.js child processes are independent of the parent, except for the IPC communication channel established between the two. Each process has its own memory and its own V8 instance.
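A minimal sketch of fork-based IPC (the file names main.js and child.js are just for illustration):

// main.js
const { fork } = require('child_process');
const child = fork('./child.js'); // spawn a new Node.js process
child.on('message', (msg) => {    // receive data from the child over the IPC channel
    console.log('from child:', msg);
});
child.send({ hello: 'child' });   // send data to the child

// child.js
process.on('message', (msg) => {  // receive data from the parent
    process.send({ got: msg });   // reply to the parent over the same channel
});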

How to start multiple threads in one process

In Node.js 10.5.0 and above, the worker_threads module was added to enable multithreading.

const {
    Worker, isMainThread, parentPort, workerData
} = require('worker_threads');
const worker = new Worker(__filename, {
    workerData: script // data passed to the worker thread (script is defined elsewhere)
});
  • Passing data between threads: parentPort.postMessage() sends a message, and the 'message' event is used to listen for it
  • Shared memory: a SharedArrayBuffer can be used to share memory between threads

Usage scenarios

  1. A common scenario is to start a process in a service to execute shell commands
var exec = require('child_process').exec;
exec('ls', function (error, stdout, stderr) {
    if (error) {
        console.error('error: ' + error);
        return;
    }
    console.log('stdout: ' + stdout);
});
  2. If a service involves heavy computation, a worker thread can be started to run it, and the result is sent back to the service thread when it finishes; see the sketch below.
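Here is a minimal sketch of that scenario (the "heavy computation" is just a plain loop for illustration), using a single file and isMainThread to distinguish the two sides:

const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
    // Service thread: hand the computation to a worker and keep serving requests
    const worker = new Worker(__filename, { workerData: 1e9 });
    worker.on('message', (result) => console.log('heavy computation result:', result));
    worker.on('error', (err) => console.error(err));
} else {
    // Worker thread: do the CPU-bound work, then notify the service thread
    let sum = 0;
    for (let i = 0; i < workerData; i++) sum += i;
    parentPort.postMessage(sum);
}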

Reference link: wolfx.cn/nodejs/node…

Q: Implementation and principle of the Koa2 onion model

Koa2 is a popular Node.js framework. Its codebase is small and very easy to understand.

If you are asked about Koa2, the onion model is at its heart.

Here is a very simple Koa server:

const Koa = require('koa');
const app = new Koa();

app.use(async (ctx, next) => {
    ctx.body = 'Hello World';
    console.log('first before next')
    next()
    console.log('first after next')
});

app.use(async (ctx, next) => {
    console.log('second before next')
    next()
    console.log('second after next')
    ctx.body = 'use next';
});

app.listen(3500, () => { console.log('run on port 3500') });

Requesting http://127.0.0.1:3500/ outputs:

first before next
second before next
second after next
first after next

Initializing middleware

The app.use method pushes each middleware function into an array, as follows:

  1. Koa1 middleware used generators; Koa2 uses async/await, and generator middleware is converted with a deprecation warning

  2. The middleware array is used to hold the middleware functions

use(fn) {
    if (typeof fn !== 'function') throw new TypeError('middleware must be a function!');
    if (isGeneratorFunction(fn)) {
      deprecate('Support for generators will be removed in v3. ' +
                'See the documentation for examples of how to convert old middleware ' +
                'https://github.com/koajs/koa/blob/master/docs/migration.md');
      fn = convert(fn);
    }
    debug('use %s', fn._name || fn.name || '-');
    this.middleware.push(fn);
    return this;
}

Executing middleware (the onion model)

A middleware function takes two parameters: the first is the context and the second is next. While a middleware function executes, when next() is reached, execution enters the next middleware; after that middleware finishes, execution returns to the previous middleware and runs the code after next(). That is the execution logic of middleware.

The core function is as follows; I have annotated it:

// koa-compose/index.js
function compose(middleware) {
    // middleware is the array of middleware functions
    if (!Array.isArray(middleware)) throw new TypeError('Middleware stack must be an array!')
    for (const fn of middleware) {
        if (typeof fn !== 'function') throw new TypeError('Middleware must be composed of functions!')
    }
    /* next: a middleware method appended after all the middleware, for internal extension */
    return function (context, next) {
        // last called middleware #
        let index = -1 // counter used to determine whether we have reached the last middleware
        return dispatch(0) // Start executing the first middleware method
        function dispatch(i) {
            if (i <= index) return Promise.reject(new Error('next() called multiple times'))
            index = i
            let fn = middleware[i] // Get the middleware function
            if (i === middleware.length) fn = next // If we are past the last middleware, run the internally extended one
            if (!fn) return Promise.resolve() // Otherwise return a resolved Promise
            try {
                // Execute fn, passing the next middleware (bound via dispatch) as the next parameter;
                // calling next() inside a middleware chains the middleware functions together
                return Promise.resolve(fn(context, dispatch.bind(null, i + 1)));
            } catch (err) {
                return Promise.reject(err)
            }
        }
    }
}

The logic itself is not hard to follow; the beauty lies in the design. Look at the official diagram: it is a very clever use of functional programming ideas (if you are familiar with functional programming, you can impress the interviewer here).

Q: Introduce streams

Streams are widely used inside Node.js, but most developers only use them indirectly, for example the HTTP request/response objects, standard I/O, file reading (createReadStream), the gulp build tool, and so on.

A stream can be thought of as a pipeline. For example, to read a file, the usual approach is to read it from disk into memory and then process it from memory. That is fine for small files, but for large files it is inefficient and may even run out of memory. With a stream, it is as if you stick a straw into the large file and keep sipping its content little by little; the other end of the pipe receives the data and can process it as it arrives. Anyone who knows Linux pipes will be familiar with this concept.

There are four basic stream types in Node.js:

  • Writable – a stream that data can be written to (for example, fs.createWriteStream()).
  • Readable – a stream that data can be read from (for example, fs.createReadStream()).
  • Duplex – a stream that is both readable and writable (such as net.Socket).
  • Transform – a Duplex stream that can modify or transform data as it is written and read (for example, zlib.createDeflate()). pipe() is commonly used to let a writable stream consume a readable stream, as in the example below.
const fs = require('fs');
// Read the file directly (the whole content is loaded into memory at once)
fs.readFile('./xxx.js', (err, data) => {
    if (err) {
        console.log(err)
    }
    console.log(data)
})
// Read and write in stream mode
let readStream = fs.createReadStream('./a.js');
let writeStream = fs.createWriteStream('./b.js');
readStream.pipe(writeStream); // The readable stream is consumed by the writable stream
readStream.on('data', (chunk) => {
    console.log(chunk)
});
writeStream.on('finish', () => console.log('finish'));

Node.js also provides the stream module natively, which you can read about in the official documentation. The API is very powerful, and if we need to create a custom stream we use this module.
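For example, here is a minimal sketch of a custom Transform stream built on the stream module (the upper-casing logic is only an illustration):

const { Transform } = require('stream');

const upperCase = new Transform({
    // transform() is called once for each chunk flowing through the pipeline
    transform(chunk, encoding, callback) {
        callback(null, chunk.toString().toUpperCase());
    }
});

// Anything typed on stdin is upper-cased and written to stdout
process.stdin.pipe(upperCase).pipe(process.stdout);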

Recommend a document: javascript.ruanyifeng.com/nodejs/stre…

nodejs.cn/api/stream…

Q: How is Node.js log rotation implemented?

Log management and rotation are implemented with winston and winston-daily-rotate-file, rotating daily and by size.

(I have not looked closely at the specific implementation; interested readers can check the source code.)
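That said, a minimal configuration sketch looks roughly like this (option names follow the winston-daily-rotate-file README; treat the exact values as assumptions):

const winston = require('winston');
require('winston-daily-rotate-file'); // registers the DailyRotateFile transport

const transport = new winston.transports.DailyRotateFile({
    filename: 'app-%DATE%.log', // %DATE% is replaced according to datePattern
    datePattern: 'YYYY-MM-DD',  // rotate once per day
    maxSize: '20m',             // also rotate when a file exceeds 20 MB
    maxFiles: '14d'             // keep two weeks of rotated logs
});

const logger = winston.createLogger({ transports: [transport] });
logger.info('hello rotation');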

Q: The relationship between bits and bytes

Bit: a bit is a single binary digit. Byte: 1 byte = 8 bits.

Q: About character encoding

ASCII: the basic standard for character encoding.

Unicode: a single character set containing all the characters in the world; computers that support this set can display all characters without garbling. Unicode is a superset of ASCII.

UTF-32, UTF-8, and UTF-16 are encoding forms of Unicode code points.

UTF-32: every code point is represented by a fixed length of four bytes.

UTF-8: represents each code point with a variable number of bytes; if one byte is enough, one is used, if not, two are used, and so on, so in UTF-8 a character may consist of 1 to 4 bytes.

UTF-16: a mix of fixed and variable length; it uses either two or four bytes to represent a code point. The differences are easy to observe in Node.js, as sketched below.
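The byte lengths are easy to check in Node.js (a quick sketch; '中' is just an example non-ASCII character):

// Byte length of the same characters under different encodings
console.log(Buffer.byteLength('a', 'utf8'));     // 1 - the ASCII range needs one byte in UTF-8
console.log(Buffer.byteLength('中', 'utf8'));    // 3 - this code point needs three bytes in UTF-8
console.log(Buffer.byteLength('中', 'utf16le')); // 2 - UTF-16 uses two bytes here
console.log(Buffer.byteLength('𠜎', 'utf8'));    // 4 - code points above U+FFFF need four bytes in UTF-8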

Q: The npm install execution process

The following summary is quoted from other developers online; it even came up at the end of one interview.

NPM module installation mechanism

  1. Issue the npm install command
  2. npm checks whether the specified module already exists in the node_modules directory
  3. If it exists, it is not reinstalled
  4. If it does not exist:
  5. npm queries the registry for the URL of the module's compressed package (tarball)
  6. The package is downloaded and stored in the .npm directory in the user's home directory
  7. The package is unpacked into the current project's node_modules directory

Implementation principle of NPM

After you type npm install and press Enter, the following phases occur (using npm 5.5.1 as an example):

  1. Execute the project's own preinstall hook, if the current npm project defines one.
  2. Determine the first-level dependencies, which are the modules specified directly in dependencies and devDependencies (assuming npm install is run without extra arguments). The project itself is the root node of the dependency tree, and each first-level dependency is a subtree under that root. npm starts multiple processes from each first-level dependency to progressively resolve deeper nodes.
  3. Fetch the modules. Fetching is a recursive process with the following steps:
  • Get the module information. Before downloading a module, you first need to determine its version, because package.json usually specifies a semantic version (semver) range. If the module information is already in a version description file (npm-shrinkwrap.json or package-lock.json), use it directly; otherwise fetch it from the registry. For example, if a package's version in package.json is ^1.1.0, npm fetches the latest version matching 1.x.x from the registry.
  • Get the module entity. The previous step yields the module's resolved field (its download address). npm checks the local cache using this address, and fetches the module from the cache if present, or downloads it from the registry if not.
  • Look for the module's dependencies; if there are any, go back to step 1, and stop if there are none.
  4. Install the module, which updates node_modules in the project and executes the module's lifecycle functions (in the order preinstall, install, postinstall).

  5. Execute the current npm project's own lifecycle hooks, if defined (install, postinstall, prepublish, prepare).

The last step is to generate or update the version description file, and the NPM install process is complete.

Module flattening (DEDUPE)

There is a joke online about an npm courier: "your node_modules has arrived", and when you open the door you are buried under a pile of packages.

The previous step produced a complete dependency tree, which may contain a large number of duplicate modules. For example, module A depends on lodash and module B also depends on lodash. Before npm 3, installation strictly followed the dependency tree structure, resulting in module redundancy.

Starting with npm 3, a dedupe step is added by default. It walks all nodes and places modules one by one under the root node, i.e. the first level of node_modules. When a duplicate module is found, it is discarded.

Here we need a definition of "duplicate module": modules that have the same name and whose semver ranges are compatible. Each semver range maps to a set of permitted versions; if the permitted ranges of two modules overlap, a compatible version can be chosen without the version numbers having to be identical, which allows more redundant modules to be removed during dedupe.

For example, foo in node_modules depends on lodash@^1.0.0 and bar depends on lodash@^1.1.0; ^1.1.0 is compatible with both.

When foo depends on lodash@^2.0.0 and bar on lodash@^1.1.0, semver's rules say no compatible version exists. One version is placed in node_modules and the other stays nested in the dependency tree.

For example, suppose a dependency tree would look like this:

node_modules
-- foo
---- lodash@version1
-- bar
---- lodash@version2

Assuming version1 and version2 are compatible versions, dedupe will give the following form:

node_modules
-- foo
-- bar
-- lodash (the retained version is compatible with both)

Assuming version1 and version2 are incompatible, the version found later remains nested in the dependency tree:

node_modules
-- foo
-- lodash@version1
-- bar
---- lodash@version2

Citation: muyiy.cn/question/to…

Summary

The above is a summary of Node.js related questions; I will continue to add representative questions as I encounter them.

If there is something wrong in the article, you are welcome to correct it.

Thank you.