What is a prototype

Prototypes are one of the most important features of JavaScript. They allow objects to inherit features from other objects, which is why JavaScript is also known as a "prototype-based language." Strictly speaking, the prototype is a property of objects, but functions are also a special kind of object. For example, when we perform an instanceof Object operation on a custom function, the result is true

function fn() {}

fn instanceof Object; // true

Prototypes and constructors

In JS we use constructors to create objects. Every constructor has a prototype property, which is an object containing properties and methods that can be shared by all instances created by that constructor. When we create an object with a constructor, the object contains an internal pointer to the constructor's prototype property. In the ES5 specification this pointer is called [[Prototype]]. Normally we cannot access it directly, but browsers implement the __proto__ attribute to expose it; it is best not to rely on __proto__, because it was not originally part of the specification. ES5 added the Object.getPrototypeOf() method, which can be used to retrieve the prototype of an object.

When we access a property of an object, if the property does not exist on the object itself, the lookup continues in its prototype object; that prototype object has its own prototype, and so on — this is the idea of the prototype chain. The end of the prototype chain is usually Object.prototype, which is why an object created with new Object() can use methods like toString(). If obj is an instance, its prototype can be obtained in one of three ways (where obj.constructor points to the constructor):

  • obj.__proto__
  • obj.constructor.prototype
  • Object.getPrototypeOf(obj)
function a() {
 this.a = 1;
 this.b = 2;
}
let obj = new a();
console.log(obj);
console.log(obj.constructor === a.prototype.constructor); //true
console.log(obj.constructor.prototype === a.prototype); //true
console.log(a.prototype.constructor.prototype === a.prototype); //true
console.log(a.prototype.isPrototypeOf(obj)); // true

Note that JavaScript objects are passed by reference: each new object we create does not get its own copy of the prototype, so when we modify a prototype, all objects associated with it see the change. The __proto__ attribute is implemented in all major browsers to give us access to the value of [[Prototype]], but it is best not to use it because it was not originally in the specification. Although we cannot access [[Prototype]] directly in scripts, the isPrototypeOf() method can be used to test whether one object exists in another object's prototype chain, and ECMAScript 5 added the Object.getPrototypeOf() method, which returns the value of [[Prototype]].
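A minimal sketch of this sharing behavior (the Person constructor and sayHello method are made-up names for illustration): a method added to the prototype after an instance was created is still visible to that instance, because every instance references the same prototype object.

function Person(name) {
  this.name = name;
}

var alice = new Person("Alice"); // created before the prototype is extended

// Extend the shared prototype afterwards
Person.prototype.sayHello = function() {
  return "Hello, I am " + this.name;
};

console.log(alice.sayHello()); // "Hello, I am Alice"
console.log(Object.getPrototypeOf(alice) === Person.prototype); // true
console.log(Person.prototype.isPrototypeOf(alice)); // true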

Implicit and explicit prototypes

The implicit prototype (__proto__) of an instance automatically points to the explicit prototype (the prototype property) of its constructor at creation time. For example, in the sample code below, when the object a is created, the implicit prototype of a points to the explicit prototype of the constructor Object().

var a = {};
a.__proto__ === Object.prototype; // true
var b = new Object();
b.__proto__ === a.__proto__; // true

Explicit prototypes exist by default on built-in functions (such as the Date() function) and are also generated by default when a custom function is defined (arrow functions excepted). The generated prototype object initially has only one property, constructor, which points back to the function itself. It is usually used together with the new keyword: when an instance is created with new, the implicit prototype of the instance points to the explicit prototype of the constructor.

function fn() {}
fn.prototype.constructor === fn; // true

Do implicit prototypes always have to work together with explicit prototypes? Not necessarily. The following code declares parent and child objects. The child object defines a name property and sets its implicit prototype __proto__ to the parent object, which defines code and name properties. Printing child.name prints child's own name property. Printing child.code prints parent.code, because child has no code property and the lookup falls through to the prototype object parent. As the output shows, the plain object parent has no explicit prototype property of its own. The hasOwnProperty() function is used to tell whether a property of child is its own or inherited from the prototype.

var parent = { code: "p", name: "parent" };
var child = { __proto__: parent, name: "child" };
console.log(parent.prototype); // undefined
console.log(child.name); // "child"
console.log(child.code); // "p"
child.hasOwnProperty("name"); // true
child.hasOwnProperty("code"); // false

In this example, if the parent object also had no code property, the lookup would continue to parent's own prototype object, and so on, until the property is found or there is no further prototype to point to. This recursive chained lookup mechanism is called the prototype chain.

Prototypes and properties

Property access

Whenever code reads a property of an object, it first searches the object itself and returns the value if it is found; if not, it continues searching the object's prototype object, and so on up the chain. Because of this search order, if we add a property to an instance, it shadows the property of the same name stored on the prototype object: the search finds it on the instance and never reaches the prototype.
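A short sketch of this shadowing behavior (Dog and its sound property are hypothetical names):

function Dog() {}
Dog.prototype.sound = "woof";

var d = new Dog();
console.log(d.sound); // "woof" — found on the prototype

d.sound = "bark"; // the instance property shadows the prototype property
console.log(d.sound); // "bark"

delete d.sound; // removing the instance property exposes the prototype again
console.log(d.sound); // "woof"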

Checking where a property lives

Since a property can live on the instance just as well as on its prototype object, how can we tell which one it is? Answer: use hasOwnProperty().

hasOwnProperty() is the only function in JavaScript that handles a property lookup without consulting the prototype chain, and all objects that inherit from Object inherit the hasOwnProperty method. It can be used to detect whether an object has a given property of its own; unlike the in operator, it ignores properties inherited from the prototype chain. A for-in loop, in contrast, returns all enumerable properties accessible through the object, including those on the instance as well as on the prototype. Note that instance properties that shadow non-enumerable prototype properties are also returned in a for-in loop. We can therefore encapsulate a function to determine whether a property exists only on the prototype:

function hasPrototypeProperty(object, name) {
  return !object.hasOwnProperty(name) && name in object;
}

If you want to get all the enumerable instance properties of an object, you can use the Object.keys() method, which takes an object as an argument and returns an array of strings containing all of its enumerable own properties. If you want all own properties, enumerable or not, use the Object.getOwnPropertyNames() method.
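A small sketch of the difference, using an array instance (the variable name is arbitrary): the non-enumerable length property shows up only with getOwnPropertyNames.

var arr = ["a", "b"];
console.log(Object.keys(arr)); // ["0", "1"] — enumerable own properties
console.log(Object.getOwnPropertyNames(arr)); // ["0", "1", "length"] — all own properties
console.log(arr.hasOwnProperty("length")); // true
console.log(arr.hasOwnProperty("push")); // false — push lives on Array.prototype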

What is a prototype chain

An object's __proto__ can be used to find its prototype object; that prototype object is itself an object whose __proto__ leads to its own prototype, until we finally reach Object.prototype. The chain that starts from the instance's prototype object and ends at Object.prototype is the prototype chain. We already introduced this idea when introducing the prototype: if an object does not have a property, the lookup continues in its prototype object, which in turn has its own prototype, and so on. That is why an object created with new Object() can use methods like toString(); this recursive chained lookup mechanism is called the "prototype chain."
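A sketch that walks a prototype chain explicitly with Object.getPrototypeOf() (the Animal constructor is a made-up name):

function Animal() {}
var cat = new Animal();

var p = cat;
while ((p = Object.getPrototypeOf(p)) !== null) {
  console.log(p === Animal.prototype, p === Object.prototype);
}
// true false   — cat -> Animal.prototype
// false true   — Animal.prototype -> Object.prototype
// the next prototype is null, so the loop ends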

Examples

Adding a prototype method to the native Array object

The goal is to deduplicate the array, sort it in ascending order, and return the new array.

Array.prototype.distinct = function() {
  return [...new Set(this)].sort((a, b) => a - b);
};

console.log(["a", "b", "c", "d", "b", "a", "e"].distinct()); // ["a", "b", "c", "d", "e"]

Multiple layers of inheritance are implemented through prototype chains

Suppose the constructor B() needs to inherit from the constructor A(). We can point the explicit prototype of B() at an instance of A(). An instance created by B() can then access both B()'s method b and A()'s method a, realizing multi-level inheritance.

function A() {}

A.prototype.a = function() {
  return "a";
};

function B() {}

B.prototype = new A();

B.prototype.b = function() {
  return "b";
};

var c = new B();

c.b(); // 'b'

c.a(); // 'a'

Functions are objects

function Foo(who) {
 this.me = who;
}

Foo.prototype.identify = function() {
 return "I am " + this.me;
};

function Bar(who) {
 Foo.call(this, who);
}

Bar.prototype = Object.create(Foo.prototype);

Bar.prototype.speak = function() {
 alert("Hello, " + this.identify() + ".");
};

var b1 = new Bar("b1");
var b2 = new Bar("b2");

b1.speak(); //Hello, I am b1
b2.speak(); //Hello, I am b2

Scope and closure

Definition

A common textbook definition: a closure is a function that has access to variables in the scope of another function, usually a function nested within another function. You Don't Know JS: a closure is created when a function can remember and access its lexical scope, even when the function is executing outside that lexical scope. MDN: a closure is a function that has access to free variables (variables that are used in a function but are neither its arguments nor its local variables — in practice, variables from the scope of another function).

So: a closure is a function that has access to variables in the scope of another function. The most common way to create a closure is to create another function within a function that has access to local variables of the current function.

The purpose of closures

1. The first use of closures is to let us access variables inside a function from outside it. By calling the closure function externally, we can read variables defined inside the enclosing function, and we can use this to create private variables. 2. The second use is to keep a variable object alive in the context of a function that has already finished running: because the closure retains a reference to that variable object, it is not reclaimed. In essence, a closure is a special application of the scope chain; once you understand how the scope chain is created, you understand how closures are implemented.
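A classic sketch of both uses — reading an inner variable from outside and keeping it alive between calls (the counter names are illustrative):

function createCounter() {
  var count = 0; // unreachable from outside except through the closure
  return function() {
    count++; // the variable object of createCounter stays alive
    return count;
  };
}

var counter = createCounter();
console.log(counter()); // 1
console.log(counter()); // 2
console.log(typeof count); // "undefined" — count itself stays private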

How closures arise

First you must understand the concept of the scope chain. It is actually quite simple: in ES5 there are only two kinds of scope — global scope and function scope. When a variable is accessed, the interpreter first looks for the identifier in the current scope; if it is not found, it looks in the parent scope, and so on until the identifier is found or there is no parent scope left. This is the scope chain. It is worth noting that each inner function copies its parent's scope, forming a chain of scopes.

var a = 1;
function f1() {
  var a = 2;
  function f2() {
    var a = 3;
    console.log(a); // 3
  }
  f2();
}
f1();

In this code, the scope chain of f1 consists of the global scope (window) and itself, while the scope chain of f2 consists of the global scope (window), f1, and itself. The search goes from the innermost scope outward until the global scope window is reached; if the identifier is not found even there, an error is reported.

The essence of a closure is that there is a reference to a parent scope in the current environment.

function f1() {
 var a = 2;
 function f2() {
   console.log(a); //2
 }
 return f2;
}
var x = f1();
x();

Here x holds the returned function f2, which still reaches the variable in its parent scope, so calling x() prints 2: in the current environment there is a reference to f2, and f2's scope chain includes window, f1, and f2 itself, so f2 can access variables in f1's scope. This is the return-a-function case. Going back to the essence of closures, all we need is for a reference to the parent scope to survive, so we can also do this:

var f3;
function f1() {
  var a = 2;
  f3 = function() {
    console.log(a);
  };
}
f1();
f3();

Here the outer variable f3 is assigned a function defined inside f1, so after f1() runs, f3 still holds a reference to f1's scope. A closure is produced again through a reference held by the external variable f3: the form has changed, but the essence has not.

How closures are used

1. Returning a function, as illustrated above. 2. Passing a function as an argument:

var a = 1;
function foo() {
  var a = 2;
  function baz() {
    console.log(a);
  }
  bar(baz);
}
function bar(fn) {
  // This is the closure
  fn();
}
foo(); // Prints 2 instead of 1

3. Whenever callbacks are used — in timers, event listeners, Ajax requests, cross-window communication, Web Workers, or any other asynchronous code — you are essentially using closures. The closures below capture only the window object and the current scope.

// Timer
setTimeout(function timeHandler() {
  console.log("111");
}, 100);

// Event listener
$("#app").click(function() {
  console.log("DOM Listener");
});

4. Immediately invoked function expressions (IIFE) create a closure that saves both the global scope window and the scope of the current function, so the function can access global variables:

var a = 2;
(function IIFE() {
  console.log(a); // Outputs 2
})();

The advantages and disadvantages

Three major features:

  • A wrapper function is called to create the inner scope (function nested function)
  • The return value of the wrapper function must include at least one reference to the inner function, thus creating a closure covering the entire inner scope of the wrapper function (inside the function can refer to external parameters and variables).
  • Parameters and variables are not collected by the garbage collection mechanism.

Advantages:

  • You want a variable to be stored in memory for a long time.
  • Avoid contamination of global variables.
  • Private members exist.

Disadvantages:

  • Variables stay resident in memory.
  • Increase memory usage.
  • Improper use can easily cause memory leaks.
function outer() {
  var name = "jack";
  function inner() {
    console.log(name);
  }
  return inner;
}
outer()(); // jack
function sayHi(name) {
  return () => {
    console.log(`Hi! ${name}`);
  };
}
const test = sayHi("xiaoming");
test(); // Hi! xiaoming

The sayHi function has finished executing, but its activation object is not destroyed, because the returned test function still references the variable name from sayHi — that is the closure. However, because the closure keeps referencing the other function's variables, that function's scope cannot be reclaimed even though it is no longer in use, so using too many closures consumes more memory; this is the side effect. Because functions are the basic unit that divides scope in JavaScript, the variables of the enclosing scope can be accessed from inside a function, but the function's own variables cannot be accessed from outside. A closure can therefore be understood as "a function defined inside another function whose variables become reachable from outside through the returned inner function." In essence, a closure is a bridge connecting the inside and outside of a function.

Examples

Loop output

for (var i = 1; i <= 5; i++) {
  setTimeout(function timer() {
    console.log(i);
  }, 0);
}
// 6 6 6 6 6

for (var i = 1; i <= 5; i++) {
  setTimeout(function timer() {
    console.log(i);
  }, i * 1000);
}
// 6 6 6 6 6

setTimeout schedules a macro task. Because of JS's single-threaded event loop mechanism, macro tasks are executed after the synchronous tasks on the main thread have finished, so the setTimeout callbacks run one after another once the loop has ended. When each callback looks up i, the loop is already over and i has become 6, so 6 is printed five times. Solution 1: use an IIFE (immediately invoked function expression).

for (var i = 1; i <= 5; i++) {
  (function(j) {
    setTimeout(function timer() {
      console.log(j);
    }, 0);
  })(i);
}

This method belongs to the closure solution

Solution 2: pass setTimeout a third argument, which is forwarded as the first argument to the timer callback.

for (var i = 1; i <= 5; i++) {
  setTimeout(
    function timer(j) {
      console.log(j);
    },
    0,
    i
  );
}

The parameters of setTimeout are:

  • function: the function you want to execute after the delay (in milliseconds) expires.
  • code: an optional alternative syntax that lets you pass a string, which is compiled and executed after the delay; this syntax is not recommended, for the same security reasons as eval().
  • delay (optional): the number of milliseconds (one second equals 1000 milliseconds) to wait before the call. If omitted, delay defaults to 0, meaning "immediately", or rather as soon as possible; in either case the actual delay may be longer than the requested value (see "Reasons delays are longer than specified: minimum delay time").
  • arg1, …, argN (optional): additional arguments that are passed to function once the timer expires.

Solution 3: use ES6 let

for (let i = 1; i <= 5; i++) {
 setTimeout(function timer() {
   console.log(i);
 }, 0);
}

About let: let brought a major change to JS, adding block-level scope to what used to be only function scope. With let, each iteration of the loop gets its own block-scoped binding of i, so the timer callbacks print 1 through 5.

Simulating private variables

const book = (function() {
  var page = 100;
  return function() {
    this.auther = "okaychen";
    this._page = function() {
      console.log(page);
    };
  };
})();
var a = new book();
a.auther; // "okaychen"
a._page(); // 100
a.page; // undefined

Print the index of the clicked list item using a closure

<ul id="test">
  <li>This is the first item</li>
  <li>This is the second item</li>
  <li>This is the third item</li>
</ul>
// Approach 1: store the index as a property on each element and read it from this
var lis = document.getElementById("test").getElementsByTagName("li");
for (var i = 0; i < 3; i++) {
  lis[i].index = i;
  lis[i].onclick = function() {
    alert(this.index);
  };
}

// Approach 2: capture the index with a closure (IIFE)
var lis = document.getElementById("test").getElementsByTagName("li");
for (var i = 0; i < 3; i++) {
  lis[i].index = i;
  lis[i].onclick = (function(a) {
    return function() {
      alert(a);
    };
  })(i);
}

Implement the singleton pattern

var SingleStudent = (function() {
  function Student() {}
  var _student;
  return function() {
    if (_student) return _student;
    _student = new Student();
    return _student;
  };
})();
var s = new SingleStudent();
var s2 = new SingleStudent();
s === s2; // true

Closures and modules

Consider the following code:

function CoolModule() {
  var something = "cool";
  var another = [1, 2, 3];
  function doSomething() {
    console.log(something);
  }
  function doAnother() {
    console.log(another.join(" ! "));
  }
  return {
    doSomething: doSomething,
    doAnother: doAnother,
  };
}
var foo = CoolModule();
foo.doSomething(); // cool
foo.doAnother(); // 1 ! 2 ! 3

This pattern is called a module in JavaScript. The most common way to implement the module pattern is often called the revealing module pattern, and the code above is one variation of it. First, CoolModule() is just a function; it must be called to create an instance of the module. Without executing the outer function, neither the inner scope nor the closure is created. Second, CoolModule() returns an object built with object-literal syntax { key: value, ... }. The returned object contains references to the internal functions rather than to the internal data variables, so the internal data stays hidden and private. You can think of this return value as essentially the module's public API. The returned object is assigned to the external variable foo, which can then be used to access the methods of the API, such as foo.doSomething(). The doSomething() and doAnother() functions have closures over the internal scope of the module instance (created by calling CoolModule()). Moving the function call into an IIFE gives us a singleton module:

 var foo = (function CoolModule() {
  var something = "cool";
  var another = [1, 2, 3];
  function doSomething() {
    console.log(something);
  }
  function doAnother() {
    console.log(another.join(" ! "));
  }
  return {
    doSomething: doSomething,
    doAnother: doAnother,
  };
})();
foo.doSomething(); // cool
foo.doAnother(); // 1 ! 2 ! 3

Here the module function is converted into an IIFE: we invoke it immediately and assign its return value directly to the single module instance identifier foo. Another simple but powerful variation of the module pattern is to name the object that is returned as the public API:

var foo = (function CoolModule(id) {
  function change() {
    // Modify the public API
    publicAPI.identify = identify2;
  }
  function identify1() {
    console.log(id);
  }
  function identify2() {
    console.log(id.toUpperCase());
  }
  var publicAPI = {
    change: change,
    identify: identify1,
  };
  return publicAPI;
})("foo module");
foo.identify(); // foo module
foo.change();
foo.identify(); // FOO MODULE

Asynchrony

Synchronous and asynchronous

Synchronous means that while a piece of code is executing, the code that follows it cannot run until this code has returned a result; once execution completes and the return value is available, the following code can run. In other words, execution is blocked until the result comes back — that is synchronous. Asynchronous means that when the code makes an asynchronous procedure call, it does not get the result immediately; after the asynchronous call is issued, the result is usually handled later by a callback function. Because an asynchronous call does not block the execution of the code after it, it is called asynchronous.
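A minimal sketch of the difference in execution order: the synchronous lines block until they finish, while the asynchronous callback is handled later and does not block the line after it.

console.log("1: synchronous, runs first");

setTimeout(function() {
  console.log("3: asynchronous callback, runs after the synchronous code");
}, 0);

console.log("2: synchronous, does not wait for the timer");
// Output order: 1, 2, 3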

Why asynchronous?

JavaScript is single threaded. What would it mean if JS executed all code synchronously? It could cause blocking: if a long-running piece of code executes synchronously, it blocks the code after it. With asynchrony it does not block — we don't have to wait for the asynchronous code to return before continuing with the logic that follows the asynchronous task. That is why asynchronous programming comes up so much in JS.

The evolution of asynchronous programming

Callback functions -> Promise -> Generator -> async/await

The callback function

In the early years, asynchronous JS programming was generally implemented with callback functions — typical event callbacks, or setTimeout/setInterval — but implementing asynchrony with callbacks has a very common problem: callback hell.

fs.readFile(A, "utf-8", function(err, data) {
 fs.readFile(B, "utf-8", function(err, data) {
   fs.readFile(C, "utf-8", function(err, data) {
     fs.readFile(D, "utf-8", function(err, data) {
       //....
     });
   });
 });
});

There are also many scenarios where callbacks enable asynchronous programming, such as:

  • Callback to Ajax requests;
  • Callback in timer;
  • Event callback;
  • Some method callbacks in Nodejs.

While the readability and maintainability of asynchronous callbacks are acceptable when the nesting is shallow, once the nesting becomes deep, every one of the asynchronous programming scenarios above can turn into callback hell.

Promise

To solve the problem of callback hell, the community proposed the Promise solution, and ES6 wrote it into the language standard. Promises solve the callback-hell problem to a certain extent.

function read(url) {
  return new Promise((resolve, reject) => {
    fs.readFile(url, "utf8", (err, data) => {
      if (err) reject(err);

      resolve(data);
    });
  });
}

read(A)
  .then((data) => {
    return read(B);
  })
  .then((data) => {
    return read(C);
  })
  .then((data) => {
    return read(D);
  })
  .catch((reason) => {
    console.log(reason);
  });

The advantage is that asynchronous operations can be expressed as a flow that reads like synchronous operations, avoiding layer upon layer of nested callbacks. However, Promises also have problems: even with chained calls, if there are too many operations, the callback-hell problem is not fundamentally solved — it is just a new way of writing that is more readable but can still be hard to maintain. Promise also provides an all method, which may fit this kind of business scenario better:

function read(url) {
  return new Promise((resolve, reject) => {
    fs.readFile(url, "utf8", (err, data) => {
      if (err) reject(err);
      resolve(data);
    });
  });
}
// Multiple asynchronous tasks can be executed in parallel with Promise.all
Promise.all([read(A), read(B), read(C)])
  .then((data) => {
    console.log(data);
  })
  .catch((err) => console.log(err));

Generator

A Generator is also a solution for asynchronous programming. Its most distinctive feature is that it can suspend the execution of a function. Generator functions can be seen as containers for asynchronous tasks, and the yield keyword is used wherever execution needs to pause. Generator functions are declared with * and used together with yield; calling a generator function returns an iterator.

function* gen() {
  let a = yield 111;
  console.log(a);
  let b = yield 222;
  console.log(b);
  let c = yield 333;
  console.log(c);
  let d = yield 444;
  console.log(d);
}
let t = gen();
t.next(1); // The argument passed to the first next() call is discarded, so nothing is printed
t.next(2); // a is 2
t.next(3); // b is 3
t.next(4); // c is 4
t.next(5); // d is 5

async/await

After ES6, a new asynchronous solution arrived with async/await (standardized in ES2017). async is syntactic sugar over Generator functions. The advantages of async/await are clean code (you don't need to write long then chains the way you do with Promises) and it handles callback hell. async/await makes JS asynchronous code read like synchronous code; in fact, the whole evolution of asynchronous programming aims to make asynchronous logic as easy to understand as synchronous code.

function testWait() {
  return new Promise((resolve, reject) => {
    setTimeout(function() {
      console.log("testWait");
      resolve();
    }, 1000);
  });
}
async function testAwaitUse() {
  await testWait();
  console.log("hello");
  return 123;
  // With await, the output order is: testWait, hello
  // Without await, the output order is: hello, testWait
}
console.log(testAwaitUse());

In the normal order of execution, the testWait callback runs one second later because of the setTimeout timer, but with the await keyword the testAwaitUse function waits for testWait to complete before printing hello. If you remove await, the order in which the results are printed changes. So async/await is not just another way of writing asynchronous JS; its readability is close to that of synchronous code, which makes it easier to understand.

Summary

  • Callback functions: the approach used for asynchronous JS programming in the early years.
  • Promise: added in ES6 to solve the callback hell problem.
  • Generator: used together with yield; returns an iterator.
  • async/await: used together; async returns a Promise object and await controls the execution order.

Promise

Introducing Promise

If I had to explain what a Promise is: simply put, it is a container that holds the result of an event (usually an asynchronous operation) that will complete in the future. Syntactically, a Promise is an object from which you can obtain the result of an asynchronous operation. Promise provides a uniform API, so all kinds of asynchronous operations can be handled in the same way.

function read(url) {
  return new Promise((resolve, reject) => {
    fs.readFile(url, "utf8", (err, data) => {
      if (err) reject(err);

      resolve(data);
    });
  });
}
read(A)
  .then((data) => {
    return read(B);
  })
  .then((data) => {
    return read(C);
  })
  .then((data) => {
    return read(D);
  })
  .catch((reason) => {
    console.log(reason);
  });

A Promise object is created in a pending (undetermined) state and lets you associate handlers with an asynchronous operation's eventual success value or failure reason. In general, a Promise must be in one of the following states:

  • Pending: the initial state, neither fulfilled nor rejected.
  • Fulfilled: the operation completed successfully.
  • Rejected: the operation failed.

A Promise in the pending state will eventually either be fulfilled with a value or rejected with a reason. When either happens, the associated handlers queued up by the Promise's then method are called. Because Promise.prototype.then and Promise.prototype.catch themselves return Promises, they can be chained. One thing to note about the state flow of a Promise is that the internal state change is irreversible.
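A small sketch of that irreversibility: once a Promise is fulfilled, later calls to resolve or reject are simply ignored.

const p = new Promise((resolve, reject) => {
  resolve("first"); // the state becomes fulfilled here
  reject(new Error("late")); // ignored: the state cannot change again
  resolve("second"); // also ignored
});

p.then((value) => console.log(value)) // "first"
  .catch((err) => console.log(err)); // never called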

How does Promise solve callback hell

There are two main problems with callback hell:

  • Multi-layer nesting problem;
  • Each task has two possible outcomes (success or failure).

These two outcomes need to be handled separately at the end of each task. Promise was created to address both problems left over from the era of callback functions. Promise uses three techniques to solve callback hell: deferred binding of callback functions, return value penetration, and error bubbling.

let readFilePromise = (filename) => {
  return new Promise((resolve, reject) => {
    fs.readFile(filename, (err, data) => {
      if (err) {
        reject(err);
      } else {
        resolve(data);
      }
    });
  });
};
readFilePromise("1.json").then((data) => {
  return readFilePromise("2.json");
});

As you can see from the code above, the callback is not declared up front; it is passed in later through the then method. This is deferred binding. Let's make a few tweaks to the code above, as shown below.

let x = readFilePromise("1.json").then((data) => {
  return readFilePromise("2.json"); // This is the returned Promise
});
x.then(/* subsequent logic omitted */);

Depending on the value returned by the callback passed to then, a different kind of Promise is created internally, and that returned Promise is passed through to the outer layer for subsequent calls. Here x refers to the internally returned Promise, on which further chained calls can follow. This is the effect of return value penetration. The two techniques together let us rewrite deeply nested callbacks in the following form:

readFilePromise("1.json")
 .then((data) => {
   return readFilePromise("2.json");
 })
 .then((data) => {
   return readFilePromise("3.json");
 })
 .then((data) => {
   return readFilePromise("4.json");
 });

This is a lot cleaner and, more importantly, closer to how people think, so the development experience is better. The two techniques combined produce chained calls, which solves the problem of multiple layers of nesting. But what about the other problem — handling success and failure separately at the end of every task? Promise takes the error-bubbling approach:

readFilePromise("1.json")
.then((data) => {
  return readFilePromise("2.json");
})
.then((data) => {
  return readFilePromise("3.json");
})
.then((data) => {
  return readFilePromise("4.json");
})
.catch((err) => {
  // xxx
});

This way errors are propagated backward and caught at the end, so you don't have to check for errors at every step. As the code above shows, the effect of the Promise solution is quite obvious: chained calls solve the problem of multi-layer nesting, and one-stop handling after errors bubble up solves the problem of per-task error checks cluttering the code.

Static methods of Promise

Promise.all(iterable). Parameter: an iterable object, such as an Array. Description: this method is useful for aggregating the results of multiple Promises. In ES6, multiple asynchronous requests can be processed in parallel with Promise.all; if any one of them fails, the failure handler is entered. Consider the business scenario below: for this page load, it may be better to merge the multiple requests together using all.

// Get the banner (carousel) list
function getBannerList() {
  return new Promise((resolve, reject) => {
    setTimeout(function() {
      resolve("banner data");
    }, 300);
  });
}
// Get the store list
function getStoreList() {
  return new Promise((resolve, reject) => {
    setTimeout(function() {
      resolve("store data");
    }, 500);
  });
}
// Get the category list
function getCategoryList() {
  return new Promise((resolve, reject) => {
    setTimeout(function() {
      resolve("category data");
    }, 700);
  });
}
function initLoad() {
  Promise.all([getBannerList(), getStoreList(), getCategoryList()])
    .then((res) => {
      console.log(res);
    })
    .catch((err) => {
      console.log(err);
    });
}
initLoad();

As the code shows, the page needs the banner list, the store list, and the category list, and the requests have to be sent together for page rendering; implementing this with Promise.all makes the intent clearer and more obvious.

Promise.allSettled. The syntax and parameters of Promise.allSettled are similar to Promise.all: it accepts an array of Promises and returns a new Promise. The only difference is that it never short-circuits on failure, which means that when Promise.allSettled finishes we can get the state of every Promise, whether it succeeded or not. Let's look at a piece of code implemented with allSettled.

const resolved = Promise.resolve(2);
const rejected = Promise.reject(-1);
const allSettledPromise = Promise.allSettled([resolved, rejected]);
allSettledPromise.then(function(results) {
  console.log(results);
});
// [
//   { status: 'fulfilled', value: 2 },
//   { status: 'rejected', reason: -1 }
// ]

As you can see from the code above, Promise.allSettled ultimately returns an array describing the outcome of every Promise passed in, which is different from the all method. You can also adapt the business-scenario code from the all method to see that after multiple requests have been sent, the returned Promise exposes the final state of each argument.

Promise.any(iterable). Parameter: an iterable object, such as an Array. The any method returns a Promise. As soon as one of the Promise instances in the parameter becomes fulfilled, the instance returned by any becomes fulfilled; only if all of the Promise instances become rejected does the returned instance become rejected.

const resolved = Promise.resolve(2);
const rejected = Promise.reject(-1);
const anyPromise = Promise.any([resolved, rejected]);
anyPromise.then(function(results) {
  console.log(results);
});
// Returns: 2

As you can see from the modified code, as long as one of the Promises becomes fulfilled, any ultimately settles with that Promise's value.

Promise.race(iterable). Parameter: an iterable object, such as an Array. The race method returns a Promise whose state is determined by whichever of the parameter Promises settles first; the value (or reason) of that first-settled Promise is passed to the race method's callbacks. Consider this business scenario: for image loading, race is particularly suitable — put the image request and a timeout check together and use race to implement a loading timeout. Look at the code snippet.

// Request the image
function requestImg() {
  var p = new Promise(function(resolve, reject) {
    var img = new Image();
    img.onload = function() {
      resolve(img);
    };
    img.src = "http://www.baidu.com/img/flexible/logo/pc/result.png";
  });
  return p;
}
// Delay function used to time the request out
function timeout() {
  var p = new Promise(function(resolve, reject) {
    setTimeout(function() {
      reject("Image request timed out");
    }, 5000);
  });
  return p;
}
Promise.race([requestImg(), timeout()])
  .then(function(results) {
    console.log(results);
  })
  .catch(function(reason) {
    console.log(reason);
  });

As the code above shows, using race to decide whether an image loads within a time limit is a good business scenario for the Promise.race method.

Summary

  • all: on success, returns the results of all the parameters.
  • allSettled: returns the settled outcome of every parameter, whether it succeeded or not.
  • any: returns the result of whichever parameter succeeds.
  • race: as the name implies, returns the result of whichever parameter settles first.

Generator

Generator is a new ES6 feature and can be a little tricky to learn. So what is a Generator function? In layman's terms, a Generator is a "function" declared with an asterisk (it does not behave like an ordinary function, as the code below shows), whose execution can be paused and resumed with the yield keyword. Let's look at a piece of code that uses a Generator.

function* gen() {
  console.log("enter");
  let a = yield 1;
  let b = yield (function() {
    return 2;
  })();
  return 3;
}
var g = gen(); // Nothing in the function body runs yet
console.log(typeof g); // "object", not "function"
console.log(g.next());
console.log(g.next());
console.log(g.next());
console.log(g.next());
// output:
// enter
// { value: 1, done: false }
// { value: 2, done: false }
// { value: 3, done: true }
// { value: undefined, done: true }

The yield keyword in a Generator controls the order in which the function body executes. Each time the next method is called, the Generator function runs up to the next yield. In summary, the execution of a Generator has these key points:

  • Calling gen() does not execute any statement in the function body yet.
  • After calling g.next(), the program continues to execute until it pauses at the next yield keyword.
  • The next method keeps executing, and finally returns an object with two properties: value and done.

This is the basic content of Generator, and the keyword yield is mentioned, so let’s look at its basics

yield is also a new ES6 keyword that works together with Generator execution and pausing. Each call to next returns an iterator result object with value and done properties, where value is the yielded value and done indicates whether the generator has finished. yield, combined with the Generator's next method, actively controls the generator's execution progress. The earlier example used a single Generator function; now let's look at multiple generators used together with yield*.

function* gen1() {
 yield 1;
 yield* gen2();
 yield 4;
}
function* gen2() {
 yield 2;
 yield 3;
}
var g = gen1();
console.log(g.next());
console.log(g.next());
console.log(g.next());
console.log(g.next());
// output:
// { value: 1, done: false }
// { value: 2, done: false }
// { value: 3, done: false }
// { value: 4, done: false }

The yield keyword, used with Generator functions, controls the progress of function execution. Now that we know how to use Generator and yield and how to control the execution order of the resulting functions so that they run in the desired sequence, two questions remain: what is the connection with asynchronous programming, and how can nested Generator functions be executed all the way through in one go, instead of calling next step by step?

Thunk function introduction

To understand what a thunk function is, start from an example of checking data types.

let isString = (obj) => {
 return Object.prototype.toString.call(obj) === '[object String]';
};
let isFunction = (obj) => {
 return Object.prototype.toString.call(obj) === '[object Function]';
};
let isArray = (obj) => {
 return Object.prototype.toString.call(obj) === '[object Array]';
};

As you can see, there is a lot of repeated type-checking logic, and similar repetition appears in ordinary business development. We can encapsulate it as shown below.

let isType = (type) => {
 return (obj) => {
   return Object.prototype.toString.call(obj) === `[object ${type}]`;
 };
};

So after encapsulation we can use this to reduce the duplication of logic code, as shown below.

let isString = isType("String");
let isArray = isType("Array");
isString("123"); // true
isArray([1, 2, 3]); // true

The isString and isArray functions are generated by the isType method, and the code is obviously much simpler after this change. A function like isType is called a thunk function. The basic idea is to take certain parameters and produce a customized function, then use that customized function to do the work. You will encounter such functions a lot in JS programming, especially when reading open source projects; highly abstract JS code often uses this technique. So what convenience does combining Generator and thunk functions bring?

Combining Generator and thunk functions

const readFileThunk = (filename) => {
 return (callback) => {
   fs.readFile(filename, callback);
 };
};
const gen = function*() {
 const data1 = yield readFileThunk("1.txt");
 console.log(data1.toString());
 const data2 = yield readFileThunk("2.txt");
 console.log(data2.toString());
};
let g = gen();
g.next().value((err, data1) => {
 g.next(data1).value((err, data2) => {
   g.next(data2);
 });
});

readFileThunk is a thunk function; this style of programming ties the Generator to the asynchronous operations. The last part of the code above is a fairly simple nesting, but if there were many tasks it would produce many levels of nesting and poor readability, so we need to wrap the driving code up:

function run(gen) {
  const next = (err, data) => {
    let res = gen.next(data);
    if (res.done) return;
    res.value(next);
  };
  next();
}
run(g);

After the change, the run function drives the generator exactly as the manual version above did. The code is only a few lines long, but it contains a recursive process, solves the multi-layer nesting problem, and achieves the effect of running the asynchronous operations to completion in a single call. This is what driving asynchronous operations through thunk functions looks like.

Combine Generator and Promise

// Wrap it as a Promise object to return

const readFilePromise = (filename) => {
  return new Promise((resolve, reject) => {
    fs.readFile(filename, (err, data) => {
      if (err) {
        reject(err);
      } else {
        resolve(data);
      }
    });
  }).then((res) => res);
};
// The Generator function stays the same
const gen = function*() {
  const data1 = yield readFilePromise("1.txt");
  console.log(data1.toString());
  const data2 = yield readFilePromise("2.txt");
  console.log(data2.toString());
};
// The executor, now driven by Promises
function run(gen) {
  const next = (data) => {
    let res = gen.next(data);
    if (res.done) return;
    res.value.then(next);
  };
  next();
}
run(gen());

Driving the generator with Promises is essentially the same as with thunk functions; the difference is that the values yielded are Promise objects. The well-known co library handles the automatic execution of Generator functions; its core principle is exactly what we described above with thunk functions and Promise objects, packaged into a library that is very simple to use. For the code above, the last part (the run function) can be omitted by calling co directly:

const co = require("co");
let g = gen();
co(g).then((res) => {
 console.log(res);
});

So why and how does the CO library automatically execute Generator functions?

  1. Because a Generator function is a container for asynchronous operations, it requires an automatic execution mechanism; the co function takes a Generator function as an argument and returns a Promise object.
  2. Inside the returned Promise, co first checks whether gen is a generator function; if so, it executes it, and if not, it returns and resolves the Promise immediately.
  3. co wraps the next method of the Generator's internal iterator object into an onFulfilled function, mainly so that thrown errors can be caught. The key is this next function, which calls itself repeatedly.
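A minimal co-like runner sketched from the three points above (this is a simplification, not the real co source): it assumes every yielded value is a Promise, wraps next as the fulfillment handler, and resolves the outer Promise when the generator is done.

function simpleCo(genFn) {
  return new Promise((resolve, reject) => {
    const gen = genFn();
    const onFulfilled = (data) => {
      let res;
      try {
        res = gen.next(data); // advance the generator with the previous result
      } catch (e) {
        return reject(e); // surface errors thrown inside the generator
      }
      if (res.done) return resolve(res.value);
      Promise.resolve(res.value).then(onFulfilled, reject); // assume a Promise was yielded
    };
    onFulfilled();
  });
}

// Usage sketch: simpleCo(function* () { const a = yield somePromise; return a; })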

The ultimate solution async/await

Asynchronous JS programming has evolved from the initial callback approach, to Promise objects, to the Generator + co approach. Each step brought changes, but each still required understanding the underlying mechanism; async/await is widely described as the ultimate asynchronous solution in JS. It lets you write asynchronous code in a synchronous style, just like co + Generator, but with language-level support and no third-party library.

// Wrap the result as a Promise object
const readFilePromise = (filename) => {
  return new Promise((resolve, reject) => {
    fs.readFile(filename, (err, data) => {
      if (err) {
        reject(err);
      } else {
        resolve(data);
      }
    });
  }).then((res) => res);
};
// Replace the Generator's * with async and yield with await
const gen = async function() {
  const data1 = await readFilePromise("1.txt");
  console.log(data1.toString());
  const data2 = await readFilePromise("2.txt");
  console.log(data2.toString());
};

Although we only changed the Generator's * to async and yield to await, quite a lot of work happens inside async. Let's take async apart and see what it does. In summary, async functions improve on Generator functions in the following three ways.

  1. Built-in executor: Generator functions rely on an executor because they cannot run to completion on their own — hence the open source co library — whereas async functions execute like ordinary functions, with no co library and no next method; they come with their own executor and run automatically.
  2. Better applicability: the co library requires that yield be followed only by a thunk function or a Promise object, whereas await can be followed by a Promise object or a value of a primitive type.
  3. Better readability: async and await, with clearer semantics than using * and yield.

Having listed so many advantages, let's look at the value an async function returns through a simple piece of code, and see how convenient async is to use.

async function func() {
 return 100;
}
console.log(func());
// Promise {<fulfilled>: 100}

An async function returns a Promise object, which the caller can continue to work with. We no longer drive execution with the next method the way we did with Generators, nor do we need the co library to get a Promise object back. The addition of async/await in ES2017 really does solve the earlier problems: it is easier for developers to understand, the syntax is clearer, and there is no need to pull in the co library. Code written with async/await is more elegant and easier to follow than Promise chains or co + Generator, and the learning cost is lower; it deserves to be called the ultimate solution for asynchronous JS.
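As a usage note, error handling with async/await falls out naturally with try/catch, which plays the same role as the catch at the end of a Promise chain (a sketch using the readFilePromise helper defined earlier; the file names are placeholders):

async function readAll() {
  try {
    const data1 = await readFilePromise("1.txt");
    const data2 = await readFilePromise("2.txt");
    return [data1.toString(), data2.toString()];
  } catch (err) {
    // any rejection from the awaited Promises lands here
    console.log(err);
  }
}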

Summary

  • Generator: used with the yield keyword; does not run automatically and needs the next method to step through execution.
  • Generator + co: the open source co library drives the generator automatically and returns a Promise object, which makes follow-up work easy, but yield must be followed by a thunk function or a Promise object.
  • async/await: the asynchronous solution introduced in ES2017; it needs no extra library, places no restriction on what follows await, and is more readable and easier to understand.

Asynchrony in Nodejs

When people talk about asynchrony in Node.js, asynchronous I/O and non-blocking I/O usually come up together, but they are two different concepts.

What is I/O

First, it is worth explaining the concept of I/O. I/O is Input/Output. On the browser side there is only one kind of I/O — network I/O: sending a network request with Ajax and reading the returned content. In Node.js, I/O scenarios are broader and mainly fall into two types:

  • File I/O. For example, the FS module is used to read and write files
  • Network I/O. For example, an HTTP module initiates a network request

Blocking and non-blocking

Blocking and non-blocking I/O are really concepts of the operating system kernel, not of Node.js itself. The characteristic of blocking I/O is that the call does not return until the operating system has completed the whole operation, whereas a non-blocking I/O call returns immediately, without waiting for the kernel to finish. With the former, while the operating system is doing I/O our application just waits and does nothing. With non-blocking I/O, the Node.js application can do other things after the call returns, while the operating system performs the I/O. This makes good use of the waiting time and improves efficiency, but it raises a question: how does the Node.js application know that the operating system has finished the I/O? To find out, it has to check the operating system repeatedly — that is, poll. There are several polling strategies:

  • Polling to check the I/O status until the I/O is complete. This is the most primitive way, and the least efficient, and will keep the CPU on hold. The effect is the same as blocking I/O.
  • The file descriptor (the file credential between the operating system and NodeJS during file I/O) is traversed to determine whether the I/O is complete, and the state of the file descriptor changes when the I/O is complete. But CPU polling is still very expensive.
  • epoll mode: if the I/O is not complete at the time of polling, the CPU sleeps and is woken up when the I/O finishes.

In short, the CPU either repeatedly checks the I/O, repeatedly checks file descriptors, or sleeps — none of which is a good use of it. What we want is: the Node.js application makes the I/O call and immediately goes on executing other logic; the operating system silently completes the I/O, sends Node.js a completion signal, and Node.js runs the callback. This is the ideal situation and the effect of asynchronous I/O. How do we achieve it?

The nature of asynchronous I/O

Linux does natively provide such a mechanism (AIO), but it has two fatal flaws:

  • Asynchronous I/O support is not available on other systems.
  • Unable to take advantage of system cache.

Asynchronous I/O schemes in NodeJS

Is there nothing we can do? With a single thread, indeed not — but it becomes much easier if we think about the problem in terms of multiple threads. We can let one thread do the computation and let some other threads make the I/O calls; when the I/O completes, they signal the computation thread, which runs the callback. Yes — asynchronous I/O is implemented with exactly such a thread pool. On Linux the thread pool is used directly; on Windows the IOCP system API is used (which internally still relies on a thread pool). With the operating system's support, how does Node.js connect to it to achieve asynchronous I/O? For file I/O, take a piece of code as an example:

let fs = require("fs");
fs.readFile("/test.txt", function(err, data) {
 console.log(data);
});

This is what happens when you execute code:

  1. First, fs.readFile calls Node's core JS module fs.js.
  2. Next, the core module calls the built-in module node_file.cc, which creates the corresponding file I/O observer object (this matters later!).
  3. Finally, depending on the platform (Linux or Windows), the built-in module makes the system call through the libuv layer.

Unpacking the libuv call procedure

Here we go! How does libuv make the system call? What happens inside uv_fs_open()?

  1. Create a request object. In Windows, for example, we create a request object for file I/O and inject a callback function into it.

req_wrap->object_->Set(oncomplete_sym, callback);

req_wrap is the request object; the value of the oncomplete_sym property on object_ in req_wrap is the callback function passed in from our Node.js application code.

  2. After the request object has been wrapped, the QueueUserWorkItem() method pushes it into the thread pool for execution.

At this point the actual I/O operation is executed in the thread pool, and the call has become asynchronous. But don't get too excited — the callback hasn't been executed yet. The next step is the callback notification.

  3. Callback notification. At this point it no longer matters whether the I/O in the thread pool blocks or not, because the goal of asynchronous I/O has already been achieved; what matters is what happens after the I/O completes.

Before continuing the story, two important methods need introducing: GetQueuedCompletionStatus and PostQueuedCompletionStatus. On every tick (round) of the event loop, GetQueuedCompletionStatus is called to check whether there are completed requests in the thread pool; if there are, their callbacks are ready to run. PostQueuedCompletionStatus submits state to the IOCP, telling it that the current I/O has completed. When the corresponding thread finishes its I/O, it stores the result in the request object, calls PostQueuedCompletionStatus() to notify the IOCP that the operation is complete, and returns the thread to the operating system. Once the event loop's polling, via GetQueuedCompletionStatus, detects a request in the completed state, it hands the request object to the I/O observer (mentioned earlier — now it finally makes its appearance). The I/O observer takes the stored result from the request object, takes its oncomplete_sym property — the callback function (look back at step 1 if this property is unfamiliar) — passes the former as an argument to the latter, and executes it. At this point the callback has been successfully executed!

Summary

  1. Blocking and non-blocking I/O are really specific to the operating system kernel. The characteristic of blocking I/O is that the call is not finished until the operating system has completed all operations, whereas non-blocking I/O is returned immediately after the call, without waiting for the operating system kernel to complete the operation.
  2. Asynchronous I/O in NodeJS adopts multi-threaded mode, which is coordinated by EventLoop, I/O observer, request object and thread pool.

this

this is a JavaScript keyword that refers to the object on which the current function is called. There are two key points. First, this refers to an object — more precisely, the "context object" in which the function executes. Second, that object is the one that "calls" the function; if the caller is not an object or does not exist, this falls back to the global object (undefined in strict mode).

Four invocation modes

this is a property of the execution context that points to the object on which the method was last called. In real development, the binding of this can be determined from four invocation patterns, illustrated in the sketch after the list below.

  • The first is the function call pattern, where this refers to the global object when a function is called directly as a function that is not a property of an object
  • The second is the method invocation pattern, where if a function is called as a method of an object, this refers to that object
  • The third is the constructor invocation pattern. If a function is called with new, a new object is created before the function is executed. This refers to the newly created object
  • The fourth is the Apply, Call, and bind invocation patterns
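A compact sketch of the four patterns (getValue, holder, and Box are made-up names):

function getValue() {
  console.log(this && this.value);
}

// 1. Function call pattern: this is the global object (undefined in strict mode)
getValue(); // undefined

// 2. Method call pattern: this is the object before the dot
var holder = { value: "method", getValue: getValue };
holder.getValue(); // "method"

// 3. Constructor call pattern: this is the newly created object
function Box(value) {
  this.value = value;
}
var box = new Box("constructed");
console.log(box.value); // "constructed"

// 4. apply / call / bind pattern: this is whatever object we pass in
getValue.call({ value: "call" });   // "call"
getValue.apply({ value: "apply" }); // "apply"
getValue.bind({ value: "bind" })(); // "bind"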

Explicitly bound

call, apply, and bind let you specify this explicitly by passing in an object, along with arguments, when calling a function. call and apply are basically the same and differ only in how arguments are passed: call takes them one by one, while apply takes them as an array. bind is special: it not only binds this but can also pre-bind arguments, and it returns a new function; when that new function is called, the bound this and arguments can no longer be changed.

function getName() {
  console.log(this.name);
}
var b = getName.bind({ name: "bind" });
b(); //"bind"
getName.call({ name: "call" }); //"call"
getName.apply({ name: "apply" }); //"apply"

Tip: because of the uncertainty about what this refers to, it is easy to end up calling something unexpected. Avoid relying on this when writing code — either write pure functions or pass the context object in as a parameter. If you really need this, consider using something like bind.

Implicitly bind this

Global context

function fn() {
  console.log(this);
}
fn(); // Browser: Window; Node.js: global

In the global context, this points to window by default, and to undefined in strict mode.

Call the function directly

let obj = {
  a: function() {
    console.log(this);
  },
};
let func = obj.a;
func();

This case is a direct call. This corresponds to the global context

Object method call

let obj = {
  a: function() {
    console.log(this);
  },
};
obj.a();

In this case this refers to the object on which the method is called, namely obj.

DOM event binding

this in onclick and addEventListener handlers refers by default to the element the event is bound to. IE's attachEvent is different: there this points to window by default.
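A brief sketch of the default binding in DOM event handlers; note that replacing the ordinary function with an arrow function changes this to the surrounding scope instead of the element (the #app id is hypothetical):

var app = document.getElementById("app");

app.addEventListener("click", function() {
  console.log(this === app); // true — this is the element the handler is bound to
});

app.addEventListener("click", () => {
  console.log(this === window); // true when this runs in a classic script — arrow functions keep the outer this
});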

new + constructor: inside a constructor (or class) invoked with new, this points to the newly created instance object.

class A {
  fn() {
    console.log(this);
  }
}
var a = new A();
a.fn(); // a

Arrow function

Arrow functions do not have their own this and therefore cannot be bound to one. An arrow function does not create its own this; it simply inherits this from the enclosing level of its scope chain. You can think of an arrow function's this as inherited from the this of the scope where it was defined; in the global context it still points to the global object.

let obj = {
  a: function() {
    const fun = () => {
      console.log(this);
    };
    fun();
  },
};
obj.a(); // The nearest non-arrow function is a; a's this is bound to obj, so this inside the arrow function is obj

The difference between arrow functions and ordinary functions

Arrow functions differ from normal functions in the following ways

  • The Arguments object is not bound, which means that accessing the Arguments object inside the arrow function returns an error;
  • Cannot be used as a constructor, that is, cannot create an instance with the new keyword;
  • Prototype properties are not created by default;
  • Cannot be used as a Generator function, and the yield keyword cannot be used.

Arrow function: this is determined by where the function is defined, not by how it is called, as the sketch below shows.
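A short sketch of the differences listed above (Normal and Arrow are just illustrative names):

function Normal() {}
const Arrow = () => {};

console.log(Normal.prototype); // { constructor: f } — created by default
console.log(Arrow.prototype);  // undefined — arrow functions get no prototype property

new Normal(); // works
try {
  new Arrow(); // TypeError: Arrow is not a constructor
} catch (e) {
  console.log(e instanceof TypeError); // true
}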

Priority: new > call/apply/bind > object method call > direct call.
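A small sketch of the priority, showing that new wins over an explicit bind (Foo and the variable names are made up):

function Foo(name) {
  this.name = name;
}

const target = {};
const BoundFoo = Foo.bind(target); // explicitly bind this to target

BoundFoo("bound call");
console.log(target.name); // "bound call" — bind wins over a plain call

const instance = new BoundFoo("constructed");
console.log(instance.name); // "constructed" — new wins over bind
console.log(target.name); // still "bound call": target is untouched by the new call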

A slightly more complicated example

If function fn() is called in function fn2(), where does this of function fn() point when fn2() is called?

function fn() {
  console.log(this);
}
function fn2() {
  fn();
}
fn2(); // ?

The answer is the window object (global under Node.js), because fn() is invoked as a plain function call. Next: the object obj has fn2 as a property, and fn2() is called through obj; where does this inside fn() point now?

function fn() {
  console.log(this);
}
function fn2() {
  fn();
}
var obj = { fn2 };
obj.fn2(); // ?

The function fn() is still called directly inside fn2(), not through obj. Although fn2() is invoked as a property of obj, the this binding of fn2() is not passed on to fn(), so the answer is again the window object (global under Node.js). Next: the object dx has an array property arr; what is printed for this inside the forEach callback on arr?

var dx = {
  arr: [1],
};
dx.arr.forEach(function() {
  console.log(this);
}); // ?

forEach takes two arguments: the first is the callback and the second is the object that this should point to. Here only the callback is passed; the second argument defaults to undefined, so the correct answer is that the global object is printed. The same thisArg parameter exists on every(), find(), findIndex(), map(), and some(). Next: a variable fun is created that refers to the fn() method of instance b; what does this point to when fun() is called?

class B {
 fn() {
   console.log(this);
 }
}
var b = new B();
var fun = b.fn;
fun(); // ?

fun is called in the global context, so you might expect this to point to the global object. That reasoning is fine for ordinary functions, but the body of an ES6 class runs in strict mode by default, so the class definition above can effectively be read as follows.

class B {
  // class bodies are implicitly in strict mode, as if "use strict" were in effect
  fn() {
    console.log(this);
  }
}

In strict mode the global object is not used as the default binding, so the answer is undefined. this points either to the object that calls the function or to undefined — so what happens if this is made to point to a primitive value?

[0].forEach(function() {
  console.log(this);
}, 0); //Number {0}

Primitive values can be converted to their corresponding wrapper objects, so this here refers to a Number object whose value is 0.