Preface

Recently, my company asked me to give a talk on a topic of my choosing. After some thought, I decided to share what I know about JavaScript race conditions, which I have now summarized in this article.

The goal of this blog post is to give you an illustrated look at JavaScript concurrency and race conditions.

Let's get started.

Errors caused by races

Experienced developers know that asynchronous code is harder to understand, write, and maintain than synchronous code. Time is the most complex element in a program, and a modern web application cannot avoid asynchronous processing. Becoming aware of concurrency and races in asynchronous code is the first step. Consider the following:

Suppose we fire three asynchronous requests in order over time: A1 -> A2 -> A3. Each request travels to the server, and each response, when it returns, has some side effect on the application.

We also assume that network conditions make each response time variable. As shown in the figure above, the responses actually come back in the order A3 -> A1 -> A2. We would expect the request that finally affects the application to be A3, but what actually takes effect in the end is A2. In the real world, that can be a fatal mistake.
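To make the race concrete, here is a minimal sketch of the problem in a search-as-you-type scenario. The `/search` endpoint and the `render` function are placeholders, not part of the original example:

```js
// Each keystroke fires a request, and every response callback blindly
// writes its data to the UI. Whichever response arrives LAST wins,
// regardless of which request was sent last. That is the race.
function onQueryChange(query) {
  fetch(`/search?q=${encodeURIComponent(query)}`)
    .then((res) => res.json())
    .then((data) => {
      render(data); // may run with stale data from an older request
    });
}
```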

So how can this be avoided?

Strategy one: out with the old, in with the new

Those of you who have used redux-saga may know its takeLatest helper. RxJS achieves the same thing with the switchMap operator, which drops the previous inner request whenever a new value arrives. In both cases, the front end manages multiple requests through state control.

The main idea is that whenever the latest request is triggered, the preceding in-flight request is cancelled, so that only the latest request can ultimately take effect.

Note that for a native XHR object the method to abort a request is XMLHttpRequest.abort(); Axios provides the same capability through its CancelToken API (and newer versions also accept an AbortController signal).
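As an illustration, here is a minimal sketch of strategy one using the Fetch API and AbortController; the `/search` endpoint and `render` are again placeholders:

```js
let controller = null;

async function fetchLatest(query) {
  // Cancel the in-flight request, if any, before starting a new one.
  if (controller) controller.abort();
  controller = new AbortController();

  try {
    const res = await fetch(`/search?q=${encodeURIComponent(query)}`, {
      signal: controller.signal,
    });
    render(await res.json());
  } catch (err) {
    // An aborted fetch rejects with an AbortError; that is expected here.
    if (err.name !== "AbortError") throw err;
  }
}
```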

Strategy two: control the callbacks

We can also simply let every request go out, because what we really need to guarantee is that the web application ends up working with the server's final data (that is, the data returned for the latest request).

So we do not actually have to prevent the requests from being sent; we only have to control which of the response callbacks is allowed to run on the front end. Adding debouncing on top of this can further improve the user experience.
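A minimal sketch of this strategy, assuming the same placeholder `/search` endpoint and `render` function: each request captures an incrementing token, and a response is applied only if its token is still the latest when it arrives.

```js
let latestToken = 0;

async function fetchControlled(query) {
  const token = ++latestToken;
  const res = await fetch(`/search?q=${encodeURIComponent(query)}`);
  const data = await res.json();
  // The request itself was never cancelled; we only ignore stale callbacks.
  if (token === latestToken) {
    render(data);
  }
}
```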

Strategy three: queues

The third strategy is to put all outgoing requests in a queue on the front end, send them one by one, and send the next only after the previous response has come back. By serializing the requests into a single line, race problems are avoided completely.
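A minimal sketch of such a queue, built by chaining every request onto a single promise; the `/api/permission/...` URLs are placeholders for whatever endpoints your application actually calls:

```js
let queue = Promise.resolve();

function enqueueRequest(url, options) {
  const result = queue
    .catch(() => {}) // a failed request must not block the rest of the queue
    .then(() => fetch(url, options));
  queue = result; // the next request starts only after this one settles
  return result;
}

// Usage: these two POSTs are guaranteed to be sent one after the other.
enqueueRequest("/api/permission/a", { method: "POST" });
enqueueRequest("/api/permission/b", { method: "POST" });
```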

Compared with strategies one and two, this approach may look the clumsiest and slowest, but in some situations it is the better choice.

GET and POST request scenarios

All of the strategies above work for the GET request scenario.

However, let’s consider the following scenario:

Our requests are not simple GET requests but POST requests that operate on the server's database, and the server depends on the order in which those POST operations are executed to return the correct response.

The weakness of strategy one is that cancelling a request only guarantees that its response callback will not run on the front end; the front end has no real control over whether the request has already reached the server. In other words, the server-side data operations can still happen in a disruptive order.

Strategy two has the same weakness, only worse: since the requests are not cancelled at all, if they arrive at the server in the wrong order in this scenario, corrupted data in the server's database is all but inevitable.

As shown in the figure below, suppose server databases A and B each represent a kind of user permission. An A+ request increments the value in database A, and an A- request decrements it; B+ and B- do the same for database B. A permission value can never go negative.

So requests arriving in the wrong order produce wrong database data. For example, if A starts at 0 and the user triggers A+ followed by A-, the value should end at 0; but if A- arrives first, it is rejected because A cannot go negative, and the later A+ leaves A at 1, which is wrong.

Although strategy three does not take advantage of request concurrency, controlling the sending of requests through a front-end queue avoids the above problems completely.

About timing control

Building on strategy three, the timing control can be placed in one of two spots: a queue on the front end, or ordering logic on the back end.

Front-end control

As shown in the figure, requests are placed into a queue and sent one by one. Once implemented, you can see the resulting waterfall pattern in the browser's Network panel.

Back-end control

A colleague reminded me that the server can also meet this requirement: the front end sends all requests normally, while the server keeps track of the state of the multiple requests and dynamically controls the order in which they take effect.
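One possible shape of that server-side control, sketched here under an assumed scheme in which the client stamps each request with an incrementing `seq` number; out-of-order requests are buffered until their turn comes. None of these names come from a real framework:

```js
const expected = new Map(); // userId -> next seq the server will apply
const buffered = new Map(); // userId -> Map(seq -> deferred side effect)

function handleInOrder(userId, seq, apply) {
  if (!buffered.has(userId)) buffered.set(userId, new Map());
  buffered.get(userId).set(seq, apply);

  // Drain every buffered request whose turn has come, in order.
  let next = expected.get(userId) ?? 0;
  const pending = buffered.get(userId);
  while (pending.has(next)) {
    pending.get(next)(); // apply the database side effect
    pending.delete(next);
    next += 1;
  }
  expected.set(userId, next);
}
```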

About the implementation

On the implementation side, I have written about this in a previous post, though it is mostly sample code. See "Some thoughts and solutions for JavaScript concurrency and race scenarios".

Summary

Of course, if you know of other approaches, please feel free to share them.

I would be honored if this article has been of any help to you.