One, foreword
Controlling API requests has long been a hot topic in front-end development, and there are many excellent open-source projects in this space. In the spirit of teaching people to fish rather than giving them fish, this article sets the utility libraries aside and shows how to solve practical problems in various scenarios with the simplest possible code.
Two, concurrency control
In some scenarios the front end needs to send a large number of network requests in a short period without hogging system resources, which calls for concurrency control. The requests may hit the same endpoint or several different ones, and processing generally waits until all of them have returned. To be efficient, we want a slot to free up as soon as one request completes so that a new request can start. Here we can combine two of Promise's utility methods: Promise.race and Promise.all.
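Before the implementation, a quick refresher (a standalone sketch, not part of the article's code): Promise.race settles as soon as the first promise in the set settles, which is what frees a slot, while Promise.all waits for every result and preserves input order.

```javascript
const slow = new Promise(resolve => setTimeout(() => resolve('slow'), 100));
const fast = new Promise(resolve => setTimeout(() => resolve('fast'), 10));

// race settles with the first promise to settle
Promise.race([slow, fast]).then(first => console.log(first)); // 'fast'

// all waits for every promise and keeps input order
Promise.all([slow, fast]).then(results => console.log(results)); // ['slow', 'fast']
```

The implementation that follows leans on exactly this pairing: race to free a slot, all to gather the final results.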
```javascript
async function concurrentControl(poolLimit, requestPool) {
  // Stores the promises returned by all requests
  const ret = [];
  // Tracks in-flight requests to control concurrency
  const executing = [];
  while (requestPool.length > 0) {
    const request = requestPool.shift();
    const p = Promise.resolve().then(() => request());
    ret.push(p);
    // p.then() returns a new promise that reflects the state of the current request
    const e = p.then(() => executing.splice(executing.indexOf(e), 1));
    executing.push(e);
    if (executing.length >= poolLimit) {
      await Promise.race(executing);
    }
  }
  return Promise.all(ret);
}
```
Pay attention to this line:

```javascript
const e = p.then(() => executing.splice(executing.indexOf(e), 1));
```

- The return value of p.then() is a new promise, and the call to then itself is synchronous code.
- The effect of p.then() is to subscribe to the promise p, much like addEventListener on a DOM element.
- The fn in then(fn) has to wait for the promise to resolve; the JS engine then puts it into the microtask queue to run asynchronously.
So the actual order of execution of the above code is:
```javascript
const e = p.then(fn); // executed synchronously
executing.push(e);    // executed synchronously
// fn runs asynchronously, only after p resolves
```
Here is the test code; if you are interested, you can verify it yourself.
```javascript
let i = 0;
function generateRequest() {
  const j = ++i;
  return function request() {
    return new Promise(resolve => {
      console.log(`r${j}...`);
      setTimeout(() => {
        resolve(`r${j}`);
      }, 1000 * j);
    });
  };
}

const requestPool = [generateRequest(), generateRequest(), generateRequest(), generateRequest()];

async function main() {
  const results = await concurrentControl(2, requestPool);
  console.log(results);
}

main();
```
The async/await used in the previous implementation is an ES2017 feature; the same effect can be achieved with ES6.
```javascript
function concurrentControl(poolLimit, requestPool) {
  // Stores the promises returned by all requests
  const ret = [];
  // Tracks in-flight requests to control concurrency
  const executing = [];
  function enqueue() {
    const request = requestPool.shift();
    if (!request) {
      return Promise.resolve();
    }
    const p = Promise.resolve().then(() => request());
    ret.push(p);
    let r = Promise.resolve();
    const e = p.then(() => executing.splice(executing.indexOf(e), 1));
    executing.push(e);
    if (executing.length >= poolLimit) {
      r = Promise.race(executing);
    }
    return r.then(() => enqueue());
  }
  return enqueue().then(() => Promise.all(ret));
}
```
Here we use recursive function calls. The code is not as neat as async/await, but it has another advantage: it supports dynamically adding new requests:
```javascript
const requestPool = [generateRequest(), generateRequest(), generateRequest(), generateRequest()];

function main() {
  concurrentControl(2, requestPool).then(results => console.log(results));
  // dynamically add a new request
  requestPool.push(generateRequest());
}

main();
```
As the code shows, new requests can be pushed into requestPool dynamically before the pool has drained, which suits scenarios where requests are issued conditionally.
Three, throttle control
While traditional throttling controls the timing of requests, the throttling discussed here is a publish-subscribe pattern that multiplexes the result of a single request, suitable for sending multiple identical requests within a short period. The code is as follows:
```javascript
function generateRequest() {
  let ongoing = false;
  const listeners = [];
  return function request() {
    if (!ongoing) {
      ongoing = true;
      return new Promise(resolve => {
        console.log('requesting...');
        setTimeout(() => {
          const result = 'success';
          resolve(result);
          ongoing = false;
          // Notify every subscriber waiting on this request
          while (listeners.length > 0) {
            const listener = listeners.shift();
            listener && listener.resolve(result);
          }
        }, 1000);
      });
    }
    // A request is already in flight: subscribe to its result
    return new Promise((resolve, reject) => {
      listeners.push({ resolve, reject });
    });
  };
}
```
The key point: if a request is already in flight, create a new Promise, push its resolve and reject into the listeners array, and thereby subscribe to the result of the ongoing request.
The test code is as follows:
```javascript
const request = generateRequest();
request().then(data => console.log(`invoke1 ${data}`));
request().then(data => console.log(`invoke2 ${data}`));
request().then(data => console.log(`invoke3 ${data}`));
```
Four, cancellation of requests
There are two ways to implement request cancellation. Let's look at the first: set a flag that controls whether the response is still valid, explained below in combination with React Hooks.
```javascript
useEffect(() => {
  let didCancel = false;
  const fetchData = async () => {
    const result = await getData(query);
    // Check validity before updating state
    if (!didCancel) {
      setResult(result);
    }
  };
  fetchData();
  return () => {
    // Invalidate the in-flight request when query changes
    didCancel = true;
  };
}, [query]);
```
After the request is returned, the validity of the request is determined first. If the request is invalid, the subsequent operation is ignored.
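The same flag technique works outside React as well. A minimal framework-free sketch (getData and the 50 ms delay are stand-ins invented for illustration):

```javascript
// Simulated API call (stand-in for a real request)
function getData(query) {
  return new Promise(resolve => setTimeout(() => resolve(`result for ${query}`), 50));
}

function makeFetcher() {
  let didCancel = false;
  const promise = getData('react').then(result => {
    // Check validity before using the result
    if (!didCancel) return result;
    return null; // the stale result is discarded
  });
  return { promise, cancel: () => { didCancel = true; } };
}

const { promise, cancel } = makeFetcher();
cancel(); // simulate the query changing before the response arrives
promise.then(result => console.log(result)); // null: the response was discarded
```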
The above implementation is not true cancellation but rather discarding. To actually cancel an in-flight request, use the AbortController API, as shown in the following code:
```javascript
const controller = new AbortController();
const signal = controller.signal;
setTimeout(() => controller.abort(), 5000);
fetch(url, { signal }).then(response => {
  return response.text();
}).then(text => {
  console.log(text);
}).catch(err => {
  if (err.name === 'AbortError') {
    console.log('Fetch aborted');
  } else {
    console.error('Uh oh, an error!', err);
  }
});
```
When abort() is called, the promise is rejected with a DOMException named AbortError.
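The same AbortController pattern works for any asynchronous task, not just fetch. A minimal sketch of a cancellable delay (the delay helper is invented for illustration; it mimics the AbortError rejection with a plain Error for portability):

```javascript
// A cancellable delay built on AbortController
function delay(ms, { signal } = {}) {
  return new Promise((resolve, reject) => {
    const abortError = () => {
      const err = new Error('Aborted');
      err.name = 'AbortError'; // mirror fetch's rejection name
      return err;
    };
    if (signal && signal.aborted) {
      return reject(abortError());
    }
    const timer = setTimeout(resolve, ms);
    if (signal) {
      signal.addEventListener('abort', () => {
        clearTimeout(timer); // stop the pending work
        reject(abortError());
      }, { once: true });
    }
  });
}

const controller = new AbortController();
delay(5000, { signal: controller.signal })
  .then(() => console.log('done'))
  .catch(err => {
    if (err.name === 'AbortError') console.log('delay aborted');
  });
controller.abort(); // prints "delay aborted"
```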
Five, elimination of stale requests
Scenarios such as a search box that shows suggestions as the user types require sending multiple requests in a short period, and a later request's result must not be overwritten by an earlier one (network congestion may cause an earlier request to return later). Out-of-date responses can be eliminated in the following way.
```javascript
// Global sequence number for requests
let sequenceId = 0;
// Sequence number of the last valid response
let lastId = 0;

function App() {
  const [query, setQuery] = useState('react');
  const [result, setResult] = useState();
  useEffect(() => {
    const fetchData = async () => {
      // Take a sequence number when the request is made
      const curId = ++sequenceId;
      const result = await getData(query);
      if (curId > lastId) {
        setResult(result);
        lastId = curId;
      } else {
        console.log(`discard ${result}`);
      }
    };
    fetchData();
  }, [query]);
  return (
    ...
  );
}
```
The key point is to compare the sequence number of the returned request with that of the last valid response. If it is not greater, a later request has already responded, and the current response should be discarded.
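The sequence-number check can also be factored into a framework-free helper (a sketch; takeLatest and the simulated getData are invented names, not from the article):

```javascript
// Wrap an async function so only results that are not overtaken are accepted
function takeLatest(fn) {
  let sequenceId = 0;
  let lastId = 0;
  return async (...args) => {
    const curId = ++sequenceId; // number the call when it starts
    const result = await fn(...args);
    if (curId > lastId) {
      lastId = curId;
      return { stale: false, result };
    }
    return { stale: true, result }; // a newer call already returned
  };
}

// Simulated API where the first request returns later than the second
function getData(query) {
  const ms = query === 'r' ? 100 : 10;
  return new Promise(resolve => setTimeout(() => resolve(query), ms));
}

const fetchLatest = takeLatest(getData);
fetchLatest('r').then(({ stale }) => console.log('r stale?', stale));   // true: overtaken
fetchLatest('re').then(({ stale }) => console.log('re stale?', stale)); // false
```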
Six, summary
This article walked through several special scenarios in front-end API request handling, including concurrency control, throttling, cancellation, and elimination of stale responses, and summarized a solution tailored to each scenario that improves performance while keeping data valid.