Preface
In early Web applications, interacting with the back end required submitting a form and then giving the user feedback after a full page refresh. With every refresh, the back end returned a block of HTML, most of which was identical to the previous page. This wasted bandwidth and lengthened the page's response time, which is why Web applications always felt inferior to native client applications.
AJAX ("Asynchronous JavaScript and XML"), a term coined in 2005, dramatically improved the experience of Web applications. Then in 2006 jQuery came out, lifting the development experience of Web applications to a new level.
Because JavaScript is single-threaded, both event handling and AJAX rely on callbacks for asynchronous tasks. If we want to process several asynchronous tasks in sequence, the code ends up looking like this:
getUser(token, function (user) {
  getClassID(user, function (id) {
    getClassName(id, function (name) {
      console.log(name)
    })
  })
})
We often refer to this code as “callback hell”.
Events and callbacks
As we all know, the JavaScript runtime runs on a single thread and uses an event model to trigger asynchronous tasks. There is no need to worry about locking shared memory, and bound events are triggered in order. To understand JavaScript's asynchronous tasks, you first need to understand its event model.
Since an asynchronous task is a piece of code arranged to run in the future (either after a specified time has elapsed or when an event is triggered), this code is usually placed in an anonymous function, commonly called a callback function.
setTimeout(function () {
  // triggered when the specified time has elapsed
}, 800)

window.addEventListener("resize", function () {
  // triggered when the browser window is resized
})
Running in the future
As mentioned earlier, the callback function runs in the future, which means the values of the variables used inside the callback are not fixed at the moment the callback is declared.
for (var i = 0; i < 3; i++) {
  setTimeout(function () {
    console.log("i =", i)
  }, 100)
}
Three asynchronous tasks are declared in succession, and 100 milliseconds later the value of the variable i is printed; intuitively this should output 0, 1, and 2.
However, that is not what happens. This trips up many developers new to JavaScript: because the callback actually runs in the future, the i that gets printed is its value after the loop has finished, so the three asynchronous tasks produce identical results, printing i = 3 three times.
Those who have run into this problem know it can be solved with a closure, or by declaring a block-scoped local variable.
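A minimal sketch of both fixes, using the same loop as above: wrap the body in an immediately invoked function to capture each value in a closure, or declare the loop variable with let so each iteration gets its own binding.

// Fix 1: capture the current value with an immediately invoked function (closure)
for (var i = 0; i < 3; i++) {
  (function (x) {
    setTimeout(function () {
      console.log("i =", x)
    }, 100)
  })(i)
}

// Fix 2: let creates a new binding for every iteration
for (let i = 0; i < 3; i++) {
  setTimeout(function () {
    console.log("i =", i)
  }, 100)
}

Either version prints i = 0, i = 1, and i = 2 as expected.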
The event queue
After an event is bound, its callback function is stored. While the program runs, another thread schedules the asynchronous calls: once a callback's trigger condition is met, the callback is placed into the corresponding event queue (for simplicity, think of it as a single queue; in fact there are two kinds of task queues, one for macrotasks and one for microtasks).
Generally, a trigger condition is met in the following situations:
- DOM-related events, such as a click, mouse movement, loss of focus, etc.;
- I/O-related operations, such as a file read completing or a network request finishing;
- Time-related operations, when a scheduled timer task reaches its agreed time.
When one of these conditions is met, the callback specified earlier in the code is placed into a task queue, and once the main thread is idle, the queued tasks are executed in first-in, first-out (FIFO) order. When new events fire, their callbacks are queued again, and so on; this mechanism in JavaScript is therefore commonly referred to as the "event loop".
for (var i = 1; i <= 3; i++) {
  const x = i
  setTimeout(function () {
    console.log(`${x} setTimeout is executed`)
  }, 100)
}
// 1 setTimeout is executed
// 2 setTimeout is executed
// 3 setTimeout is executed
As you can see, the execution order satisfies the queue's FIFO property: the callback declared first is executed first.
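To make the macrotask/microtask distinction mentioned above concrete, here is a minimal sketch using only standard JavaScript (Promise is introduced in more detail below): Promise.then callbacks go into the microtask queue and run before setTimeout callbacks (macrotasks), even when the timer delay is 0.

setTimeout(function () {
  console.log('macrotask: setTimeout') // runs last
}, 0)

Promise.resolve().then(function () {
  console.log('microtask: promise.then') // runs after the synchronous code, before the timer
})

console.log('synchronous code') // runs first

The output order is: synchronous code, then microtask: promise.then, then macrotask: setTimeout.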
Thread blocking
Because JavaScript is single-threaded, timers are not reliable. When the code blocks the main thread, a callback whose timer has already expired still has to wait until the main thread is idle before it can run.
const start = Date.now()

setTimeout(function () {
  console.log(`actual wait time: ${Date.now() - start}ms`)
}, 300)

// block the main thread for about 800ms
while (Date.now() - start < 800) {}
In the code above, the timer is set to trigger its callback after 300ms. If nothing blocked the thread, the output would normally show a wait time of roughly 300ms.
However, a while loop follows the timer, and it does not finish until about 800ms have passed. The main thread is blocked by this loop the whole time, so the callback cannot run when its scheduled time arrives; the actual wait time printed is close to 800ms.
Promise
Handling asynchrony through event callbacks makes it particularly easy to slide into callback hell. Promise, on the other hand, provides a more linear way to write asynchronous code, somewhat like a pipeline mechanism.
// callbacks
getUser(token, function (user) {
  getClassID(user, function (id) {
    getClassName(id, function (name) {
      console.log(name)
    })
  })
})

// Promise
getUser(token)
  .then(function (user) {
    return getClassID(user)
  })
  .then(function (id) {
    return getClassName(id)
  })
  .then(function (name) {
    console.log(name)
  })
  .catch(function (err) {
    console.error('Request exception', err)
  })
Promise has similar implementations in many languages, as well as in some well-known JavaScript frameworks such as jQuery and Dojo. In 2009, the CommonJS community, drawing on Dojo's dojo.Deferred implementation, proposed the Promise/A specification. That was also the year Node.js was born; many parts of Node.js follow CommonJS specifications, the most familiar being its module system.
Early Node.js also implemented a Promise object, but back in 2010 Ryan Dahl (the author of Node.js) considered Promise a higher-level abstraction, and Node.js is built on the V8 engine, which did not yet provide native Promise support. So Node.js APIs adopted the error-first callback style (cb(error, result)) instead.
const fs = require('fs')

// the file path here is only illustrative
fs.readFile('./file.txt', function (err, buffer) {
  // the first argument is an Error object; if it is not null, the read failed
  if (err !== null) {
    return
  }
  console.log(buffer.toString())
})
This decision led to the emergence of various Promise libraries for Node.js, notably Q.js and Bluebird. I have written about implementing Promise before, in "How to Implement Promises by Your Hands."
Before Node.js 8, V8's native Promise implementation had performance issues that made native Promise perform even worse than some third-party Promise libraries.
Therefore, in projects running older versions of Node.js, it was common to replace Promise globally:
const Bluebird = require('bluebird')
global.Promise = Bluebird
Generator & co
Generator is a new function type introduced in ES6, primarily used to define a function that can iterate over its own execution. Using the function * syntax, we can construct a generator function; calling it returns an iterator object with a next() method. Each time next() is called, execution runs up to the next yield keyword and pauses there until next() is called again.
function * forEach(array) {
  const len = array.length
  for (let i = 0; i < len; i++) {
    yield array[i]
  }
}
const it = forEach([2, 4, 6])
it.next() // { value: 2, done: false }
it.next() // { value: 4, done: false }
it.next() // { value: 6, done: false }
it.next() // { value: undefined, done: true }
The next() method returns an object with two properties, value and done:
- value: the value of the most recent yield expression;
- done: indicates whether the function has finished executing.
Because a generator function can suspend its own execution, we can use it as a container for asynchronous operations: combined with the Promise object's then method, control of the asynchronous flow can be handed back to the generator. Each yield produces a Promise object, and once that Promise resolves, the iterator continues executing.
function * gen(token) {
  const user = yield getUser(token)
  const cId = yield getClassID(user)
  const name = yield getClassName(cId)
  console.log(name)
}

const g = gen('xxxx-token')

// the value returned by the next() method is a Promise object
const { value: promise1 } = g.next()
promise1.then(user => {
  // the argument passed to next() becomes the result of the previous yield
  // (only the argument to the very first next() call is discarded)
  const { value: promise2 } = g.next(user)
  promise2.then(cId => {
    const { value: promise3, done } = g.next(cId)
    // and so on, passing each result along, until next() returns done as true
  })
})
We can abstract the above logic so that after each Promise resolves, next() is called automatically, letting the iterator drive itself until it completes (that is, until done is true).
function co(gen, ...args) {
  const g = gen(...args)
  function next(data) {
    const { value: promise, done } = g.next(data)
    if (done) return
    promise.then(res => {
      next(res) // pass the result of the promise to the next yield
    })
  }
  next()
}

co(gen, 'xxxx-token')
This is the core logic of co, the early core library of Koa, except that co also performs parameter validation and error handling. Pairing co with generators makes asynchronous flows much easier to read, which was a great pleasure for developers.
async/await
async/await can be regarded as the ultimate solution to asynchronous programming in JavaScript, and it is essentially syntactic sugar over Generator & co. You simply add the async keyword in front of the function and replace each yield inside with await.
async function fun(token) {
  const user = await getUser(token)
  const cId = await getClassID(user)
  const name = await getClassName(cId)
  console.log(name)
}

fun('xxxx-token')
Compared with the generator's yield, async/await is semantically clearer: you can tell at a glance that this is an asynchronous operation. In addition, the expression after await is not restricted to a Promise object.
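A minimal sketch of that last point: the value after await does not have to be a Promise; a plain value behaves as if it were wrapped in an already-resolved Promise.

async function demo() {
  const a = await 42 // a non-Promise value behaves like Promise.resolve(42)
  const b = await Promise.resolve('hello')
  console.log(a, b) // 42 hello
}

demo()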