Before reaching for polling, consider rejecting it outright in favor of server push; the front end then only needs to register an ordinary event listener, because:
- Polling is hard to design well. A poor design risks a self-inflicted DDoS, and the hit ratio of polling requests is low.
- Server push is far more real-time.
Conclusion: the real-time performance and hit ratio of polling schemes are both poor 🤕. If you really must poll, the schemes below can serve as a reference.
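For comparison, here is roughly what "register an event" looks like with server push in a browser, as a minimal sketch using Server-Sent Events; the endpoint URL, event name and showReceived handler are illustrative assumptions, not part of the original setup.

// Minimal server-push sketch (SSE). The URL, event name and showReceived are hypothetical.
const source = new EventSource('/api/operation-result/stream');
source.addEventListener('operation-result', (e) => {
  const { status } = JSON.parse(e.data);
  if (status === 'SUCCESS') {
    showReceived();  // update the UI: the payment has been received
    source.close();  // stop listening once the receipt has arrived
  }
});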
Requirement
We frequently face situations where the result of an operation arrives asynchronously after a noticeable delay. Suppose an operation is expected to produce a receipt in about 1s. For a better user experience, we want to show "payment received" as soon as the receipt is available; otherwise we keep requesting, and the user waits at most 1.5s. At first glance this translates into a technical requirement: keep polling until success, within 1.5 seconds.
This article summarizes the general ideas and methods that grow out of this specific requirement. There are five schemes, presented as an evolving narrative.
Scheme 1: Fixed timeout & fixed number of requests
The front end sets a 500ms timeout for each request. If no result is obtained, the next round is started.
Analysis
This scheme is serial.
Advantages
The code is simple and straightforward, and it works well as long as the average request time stays under 500ms.
Disadvantages
If the interface consistently takes, say, 600ms to return a result while every request is cut off at 500ms, then no matter how many requests are sent, none of them will ever receive the result within its allotted time.
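For concreteness, a minimal sketch of Scheme 1, reusing the fetchList helper defined later in this article (which rejects on timeout or on a non-SUCCESS response); the round count of 3 is an assumption for illustration.

// Scheme 1 sketch: at most 3 serial rounds, each request capped at 500ms
async function pollFixedTimeout(times = 3) {
  for (let i = 0; i < times; i++) {
    try {
      return await fetchList({ timeout: 500 });
    } catch (error) {
      // this round failed or timed out; fall through to the next round
    }
  }
  throw new Error('No receipt after all rounds');
}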
Scheme 2: Limit the total polling duration & do not limit the number of polling times
The front end limits the total polling duration to 1500ms. No matter how many times the interface is invoked, polling ends once the total duration exceeds 1500ms.
Analysis
This scheme is also serial.
Disadvantages
When requests fail quickly, too many unnecessary requests are sent. If a single request takes 100ms but the receipt only becomes available at 800ms, the first 7 requests are wasted.
The number of calls is indeterminate: if the interface happens to return within 50ms each time, the interface ends up being called far more often than it should be. How many calls are actually reasonable?
With a smooth network: 3 times?
With network failures: at most 2 times?
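A minimal sketch of Scheme 2 under the same assumed fetchList helper; note that nothing here bounds how many requests actually get fired, which is exactly the problem described above.

// Scheme 2 sketch: keep requesting serially until the 1500ms budget is spent
async function pollWithinTimeLimit(timeLimit = 1500) {
  const start = Date.now();
  let lastError = new Error('No receipt within the time limit');
  while (Date.now() - start < timeLimit) {
    try {
      return await fetchList({});
    } catch (error) {
      lastError = error; // a quick failure immediately triggers the next request
    }
  }
  throw lastError;
}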
Scheme 3: Fixed-interval discrete polling & descending timeout per request
This is an optimized version of Scheme 1: send three requests at 0s, 0.5s and 1s, with timeouts of 1.5s, 1s and 0.5s respectively. As long as one of them returns "payment received", the user is told the payment has arrived. This solves the fast-failure problem while spreading the requests out, maximizing the probability that at least one request catches the receipt.
Analysis
This scheme is not purely serial: the requests are dispatched serially but overlap and run in parallel.
Advantages
- Requests are spread out evenly across the time window, improving the hit ratio
- No timer, so there is no timer to clean up and no memory-leak risk
- No recursion, so there is no need to worry about infinite loops or runaway requests
- The number of requests is necessarily finite, which addresses the pain point of Scheme 2
Disadvantages
- Each request still has a timeout, so the scheme as a whole still has some probability of failure
Concrete implementations
First, a Promise-based implementation
Step 1: Simplify. If you don’t get a return receipt, consider the request a failure.
// 1. If the receipt has not been returned, the request is treated as an error
function fetchList(params) {
  return request(params).then((resp) => {
    if (resp.status === 'SUCCESS') {
      return resp;
    }
    throw resp;
  });
}
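The snippets that follow also rely on a delay helper that is not shown in the article; assuming the conventional definition:

// Resolve after the given number of milliseconds
function delay(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}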
Step 2: Set generality aside for now. Send three requests 0.5s apart, with timeouts of 1.5s, 1s and 0.5s respectively.
Tip: not worrying about the general case at first keeps your thinking clearer.
function fetchLoop() {
  // Nothing in this executor throws, so an async executor is safe here
  // https://stackoverflow.com/a/43050114
  return new Promise(async (resolve, reject) => {
    let failedTimes = 0;
    handle(fetchList({ timeout: 1500 }));
    await delay(500);
    handle(fetchList({ timeout: 1000 }));
    await delay(500);
    handle(fetchList({ timeout: 500 }));

    function handle(promise) {
      promise.then(resolve).catch((error) => {
        failedTimes++;
        // Reject only after all three requests have failed
        if (failedTimes >= 3) {
          reject(error);
        }
      });
    }
  });
}
promise.then(resolve) exploits the fact that a Promise's settled state is irreversible: only the first resolve/reject call takes effect.
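A quick illustration of that irreversibility (not from the article, just to make the point concrete):

const p = new Promise((resolve, reject) => {
  resolve('first');  // settles the promise
  reject('second');  // ignored: the state can no longer change
  resolve('third');  // also ignored
});
p.then(console.log); // logs "first"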
Step 3: Generalize step 2. Consider the general case and use a for loop to handle a variable number of requests.
// Consider the general case: use a for loop to handle a variable number of requests
function loopInLimitTime(asyncFn, { timeLimit, interval }) {
  return new Promise(async (resolve, reject) => {
    const loopCnt = Math.ceil(timeLimit / interval);
    // timeLimit=3, interval=2, loopCnt = ceil(3/2) = 2
    // #i  moment  timeout
    // #1  0       3 - 0*2 = 3
    // #2  2       3 - 1*2 = 1
    // timeLimit=1.5, interval=0.5, loopCnt = ceil(1.5/0.5) = 3
    // #i  moment  timeout
    // #1  0.0     1.5 - 0*0.5 = 1.5
    // #2  0.5     1.5 - 1*0.5 = 1.0
    // #3  1.0     1.5 - 2*0.5 = 0.5
    let succeed = false;
    let failedTimes = 0;
    for (let index = 0; index < loopCnt; index++) {
      if (succeed) {
        // DO NOTHING: no need to re-request
        return;
      }
      // not yet succeeded: keep requesting
      handle(asyncFn({ timeout: timeLimit - index * interval }));
      await delay(interval);
    }

    function handle(promise) {
      promise
        .then((resp) => {
          succeed = true;
          resolve(resp);
        })
        .catch((error) => {
          failedTimes++;
          // End only after every request has failed; otherwise let the remaining requests run
          if (failedTimes >= loopCnt) {
            reject(error);
          }
        });
    }
  });
}
Usage
Page({
  async onLoad() {
    const [error, resp] = await loopInLimitTime(fetchList, { timeLimit: 1500, interval: 500 })
      .then((resp) => [null, resp])
      .catch((error) => [error, null]);

    if (error) {
      // Report to monitoring
      rc.error('Receipt not returned within 1.5s', { code: xxx });
      return;
    }
    this.setData({ status: resp.status });
  },
});
Next, a publish-subscribe (event) implementation
If you want to know the progress and finer details of the process, such as on which request the receipt finally arrived, switch from "I wait for you and you call me back" to the publish-subscribe model, i.e. the event mechanism every front-end developer is familiar with.
The Hollywood Principle: "Don't call me, I'll call you!"
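The code below assumes a global event object with emit and on, where on returns an "off" function that is later used in onUnload. The article does not show this object; a minimal sketch of what it might look like:

// Minimal event-bus sketch: on() returns a function that removes the listener
const event = {
  listeners: new Map(),
  on(name, handler) {
    const handlers = this.listeners.get(name) || new Set();
    handlers.add(handler);
    this.listeners.set(name, handlers);
    return () => handlers.delete(handler);
  },
  emit(name, ...args) {
    (this.listeners.get(name) || []).forEach((handler) => handler(...args));
  },
};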
async function loopInLimitTime$(asyncFn, { timeLimit, interval, eventName }) {
  const loopCnt = Math.ceil(timeLimit / interval);
  let succeeded = false;
  let failedTimes = 0;

  for (let index = 0; index < loopCnt; index++) {
    if (succeeded) {
      // no need to keep requesting
      return;
    }
    handle(asyncFn({ timeout: timeLimit - index * interval }), index + 1);
    await delay(interval);
  }

  function resolve(resp, index) {
    succeeded = true;
    event.emit(`${eventName}:success`, resp, index);
  }
  function reject(error) {
    event.emit(`${eventName}:failed`, error);
  }
  function progress(index) {
    event.emit(`${eventName}:progress`, { index, loopCnt });
  }
  function handle(promise, index) {
    progress(index);
    promise
      .then((resp) => { resolve(resp, index); })
      .catch((error) => {
        failedTimes++;
        if (failedTimes >= loopCnt) {
          reject(error);
        }
      });
  }
}
Usage
let offSuccessEvent;
let offFailedEvent;
let offProgressEvent;

Page({
  onLoad() {
    // Listeners must be registered in advance
    const eventName = 'operation-result';
    offSuccessEvent = event.on(`${eventName}:success`, ({ status }, index) => {
      rc.info(`The receipt was returned by request #${index}`);
      this.setData({ status });
    });
    offProgressEvent = event.on(`${eventName}:progress`, ({ index, loopCnt }) => {
      rc.info(`${loopCnt} polls in total; poll #${index} has been initiated`);
    });
    offFailedEvent = event.on(`${eventName}:failed`, (error) => {
      // Report to monitoring
      rc.error('Receipt not returned within 1.5s', { error });
    });
    loopInLimitTime$(fetchList, { timeLimit: 1500, interval: 500, eventName });
  },
  // Remember to remove the listeners when the page/component unloads to prevent memory leaks
  onUnload() {
    offSuccessEvent?.();
    offFailedEvent?.();
    offProgressEvent?.();
  },
});
Code analysis
Did you spot the JavaScript tips used above?
Scheme analysis
The scheme is becoming more and more event-driven 😄.
Advantages
The process is transparent and can be controlled at a finer granularity; the page could even display a countdown 😎.
Disadvantages
The API is less self-explanatory; callers need to read the source code or rely on good documentation.
Finally, an RxJS implementation
A Promise can only settle once, so it cannot represent a stream of events. An RxJS Observable does everything a Promise does and more.
import { Observable } from 'rxjs';

function pollInLimitTime$(
  asyncFn: (...args: any[]) => Promise<any>,
  { timeLimit, interval }
) {
  return new Observable((subscriber) => {
    const loopCnt = Math.ceil(timeLimit / interval);
    let succeeded = false;
    let failedCnt = 0;
    const startTimestampOfAllRequests = Date.now();

    // The array must be filled, otherwise the reduce callback won't be called!
    new Array(loopCnt).fill(0).reduce((acc, cur, index) => {
      return acc.then(() => {
        if (succeeded) {
          // DO NOTHING: no need to re-request
          return;
        }
        // keep requesting until one succeeds
        handle(asyncFn({ timeout: timeLimit - index * interval }), index + 1);
        return delay(interval);
      });
    }, Promise.resolve());

    function resolve({ resp, index, totalRequestsCosts, singleRequestCosts }) {
      subscriber.next({
        type: 'success',
        payload: { resp, index, totalRequestsCosts, singleRequestCosts },
      });
      succeeded = true;
      subscriber.complete();
    }
    function reject({ index, error, totalRequestsCosts, singleRequestCosts }) {
      subscriber.error({ index, error, totalRequestsCosts, singleRequestCosts });
    }
    function progress(index) {
      subscriber.next({ type: 'progress', payload: { index, loopCnt } });
    }
    async function handle(promise: Promise<any>, requestIndex) {
      progress(requestIndex);
      const start = Date.now();
      try {
        const resp = await promise;
        const singleRequestCosts = Date.now() - start;
        resolve({
          resp,
          index: requestIndex,
          totalRequestsCosts: Date.now() - startTimestampOfAllRequests,
          singleRequestCosts,
        });
      } catch (error) {
        const singleRequestCosts = Date.now() - start;
        subscriber.next({
          type: 'error',
          payload: { index: requestIndex, loopCnt, error, singleRequestCosts },
        });
        failedCnt++;
        // Error out only after every request has failed
        if (failedCnt >= loopCnt) {
          reject({
            index: requestIndex,
            error,
            totalRequestsCosts: Date.now() - startTimestampOfAllRequests,
            singleRequestCosts,
          });
        }
      }
    }
  });
}
Detailed code: polling with RxJS.
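For completeness, a usage sketch for this version (assumed, not from the article); it reuses the rc logger from the earlier examples:

// Usage sketch for the RxJS version (hypothetical, for illustration)
const subscription = pollInLimitTime$(fetchList, { timeLimit: 1500, interval: 500 })
  .subscribe({
    next: ({ type, payload }) => {
      if (type === 'progress') {
        rc.info(`Poll #${payload.index} of ${payload.loopCnt} started`);
      } else if (type === 'success') {
        // update the UI with payload.resp.status here
        rc.info(`Receipt returned by request #${payload.index} in ${payload.singleRequestCosts}ms`);
      }
    },
    error: ({ error }) => rc.error('Receipt not returned within 1.5s', { error }),
  });

// Remember to call subscription.unsubscribe() when the page/component unloads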
Code analysis
Using reduce instead of for...await is a common pattern; it is used here mainly because the Observable constructor callback cannot be an async function, otherwise the code would be simpler and easier to understand. If you know a better approach, please leave a comment.
for (let index = 0; index < loopCnt; index++) {
  handle(asyncFn({ timeout: timeLimit - index * interval }));
  await delay(interval);
}
is equivalent to:
new Array(loopCnt).fill(0).reduce((acc, cur, index) => {
  return acc.then(() => {
    handle(asyncFn({ timeout: timeLimit - index * interval }), index + 1);
    return delay(interval);
  });
}, Promise.resolve());
Scheme 4: Fixed-interval polling & no per-request timeout
Individual requests are not given a timeout; the only end condition is the total duration of 1.5 seconds.
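A minimal sketch of Scheme 4, reusing the assumed fetchList and delay helpers; the numbers are illustrative, and a slow in-flight request may still overrun the budget slightly.

// Scheme 4 sketch: poll at a fixed interval with no per-request timeout,
// stop once the total time budget is exhausted
async function pollNoTimeout({ timeLimit = 1500, interval = 500 } = {}) {
  const start = Date.now();
  while (Date.now() - start < timeLimit) {
    try {
      return await fetchList({}); // no timeout passed: each request runs to completion
    } catch (error) {
      await delay(interval);      // fixed gap before the next attempt
    }
  }
  throw new Error('No receipt within the time limit');
}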
Advantages
Suitable for medium and long polling durations.
Disadvantages
Scheme 5: Polling with increasing intervals
When polling lasts a long time and network conditions are generally poor, increasing the interval between polls further reduces wasted requests.
The growth rule can be exponential, for example powers of 2: 1, 2, 4, 8, 16, 32, 64, ...
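A minimal sketch of Scheme 5 as exponential backoff, again reusing the assumed fetchList and delay helpers; the base interval and attempt cap are illustrative.

// Scheme 5 sketch: double the waiting interval after every failed attempt
async function pollWithBackoff({ baseInterval = 1000, maxAttempts = 6 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fetchList({});
    } catch (error) {
      // wait 1s, 2s, 4s, 8s, ... before the next attempt
      await delay(baseInterval * 2 ** attempt);
    }
  }
  throw new Error('No receipt after all backoff attempts');
}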
Appendix
"Don't call me; I'll call you." — the Hollywood Principle
Normally the client (you) calls the underlying server (me); but for certain operations the server says: please don't poll/harass me, I'll notify you.
It is like taking a taxi somewhere: you don't start asking from the first second you get in, or ask the driver every 5 seconds whether you have arrived. That would be annoying.
The heart of the Hollywood Principle: replace polling with notification.
- From www.daimajiaoliu.com/daima/4edca…
Server-side push is the embodiment of the “Hollywood Principle”.
References
- Don't call me I'll call you
- "Believing in anti-patterns is an anti-pattern" is an interesting answer: stackoverflow.com/questions/4…