Time slicing

The idea is to break a long task into smaller pieces and execute them in chunks. Any task that runs longer than 50ms counts as a long task, during which users can perceive lag in rendering and interaction, so we want to shorten how long the function runs continuously.

The cause

A colleague ran into an animation problem: he needed to run a computation-heavy function and wanted to show a loading indicator first, but he found that even after setting the loading element to display: block, the loading animation did not appear on the page right away. By the time the loading animation finally showed up, the heavy function had already finished.
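The problematic pattern looked roughly like the sketch below (a minimal reconstruction; the .loading selector and the loop are placeholders, not the original code): the style change is applied, but the browser never gets a chance to paint before the synchronous work finishes.

document.querySelector('.loading').style.display = 'block' // ask for the loading UI

// heavy synchronous work: it blocks the main thread, so the browser
// cannot render the loading element until this loop returns
let total = 0
for (let i = 0; i < 1e9; i++) total += i

document.querySelector('.loading').style.display = 'none' // by now the work is already done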

The solution

There are two ways to handle this kind of time-consuming task. The first is a Web Worker, but a worker cannot touch the DOM, so we decided to use a generator function to solve the problem instead.
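For comparison, offloading the pure computation to a Web Worker would look roughly like this (a hypothetical sketch; the worker.js file name and the message shape are assumptions, and any DOM updates still have to happen back on the main thread):

// main.js
const worker = new Worker('worker.js')
worker.postMessage({ iterations: 1e8 })
worker.onmessage = (e) => {
    // DOM access is only available here, on the main thread
    document.querySelector('.result').textContent = e.data
}

// worker.js
self.onmessage = (e) => {
    let i = 0
    for (let n = 0; n < e.data.iterations; n++) i++
    self.postMessage(i)
}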

Event loop

Micro tasks:

1. Promise.then
2. Object.observe
3. MutationObserver

Macro tasks:

1. setTimeout
2. setInterval
3. I/O
4. UI interaction events
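To see the ordering in practice, here is a minimal sketch you can paste into a browser console (the log labels are just for illustration):

console.log('script')                                  // the script itself runs as a macro task
setTimeout(() => console.log('setTimeout'))            // macro task
Promise.resolve().then(() => console.log('promise'))   // micro task
console.log('sync end')
// logs: script, sync end, promise, setTimeout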

Browser render timing

Except in special cases, the page is rendered after the microtask queue has been emptied and before the next macro task runs. So we can put the function occupying the main stack to sleep for a moment and wake it up in a macro task after rendering has happened; that way neither rendering nor user interaction gets stuck.
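The core trick is a one-line "sleep" built on a macro task. A minimal sketch (doWorkInChunks and chunks are hypothetical names, not part of the original code):

async function doWorkInChunks (chunks) {
    for (const chunk of chunks) {
        chunk() // run one small piece of the work
        // yield to the browser: microtasks flush, rendering happens,
        // then the setTimeout macro task wakes us up for the next piece
        await new Promise(resolve => setTimeout(resolve))
    }
}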

The original code

Let’s first simulate a long JS task

code

// style
@keyframes move {
    from {
        left: 0;
    }
    to {
        left: 100%;
    }
}
.move {
    position: absolute;
    animation: move 5s linear infinite;
}

// dom
<div class="move">123123123</div>

// script
function fnc () {
    let i = 0
    const start = performance.now()
    while (performance.now() - start <= 5000) {
        i++
    }

    return i
}

setTimeout(() => {
    fnc()
}, 1000)

The effect

As shown below, after the animation has been running for 1s, the JS function starts executing; the animation stops rendering and only resumes once the main JS stack is idle again.

Function transformation

Let’s transform the original function into a generator function

code

function* fnc_ () {
    let i = 0
    const start = performance.now()
    while (performance.now() - start <= 5000) {
        i++
        yield
    }
    return i
}

function timeSlice (fnc) {
    if (fnc.constructor.name !== 'GeneratorFunction') return fnc()
    return async function (...args) {
        const fnc_ = fnc(...args)
        let data
        do {
            data = fnc_.next()
            // register a macro task (setTimeout) to wake the function up after rendering
            await new Promise(resolve => setTimeout(resolve))
        } while (!data.done)
        return data.value
    }
}

setTimeout(async () => {
    const fnc = timeSlice(fnc_)
    const start = performance.now()
    console.log('start')
    const num = await fnc()
    console.log('end', `${(performance.now() - start) / 1000}s`)
    console.log(num)
}, 1000)

The effect

The animation is not affected at all and the FPS stays stable, because we have broken the time-consuming task into many small chunks.

Optimized time slicing

The time-slicing function above goes to sleep after every single step and is woken up by a macro task, which makes execution fairly inefficient. Let's optimize it by lengthening how long each slice runs continuously.

code

function timeSlice_ (fnc, time = 25) {
    if (fnc.constructor.name !== 'GeneratorFunction') return fnc()
    return function (...args) {
        const fnc_ = fnc(...args)
        function go () {
            const start = performance.now()
            let data
            do {
                data = fnc_.next()
            } while (!data.done && performance.now() - start < time)
            if (data.done) return data.value
            return new Promise((resolve, reject) => {
                setTimeout(() => {
                    try {
                        resolve(go())
                    } catch (e) {
                        reject(e)
                    }
                })
            })
        }
        return go()
    }
}

setTimeout(async () => {
    const fnc1 = timeSlice_(fnc_)
    let start = performance.now()
    console.log('start')
    const num = await fnc1()
    console.log('end', `${(performance.now() - start) / 1000}s`)
    console.log(num)
}, 1000)

The effect

We split the work into larger chunks, so the function executes much more efficiently; the FPS is slightly affected, but stays within acceptable limits.

Comparison before and after optimization

Let’s compare the effect of the time-slicing function before and after optimization.

code

setTimeout(async () => {
    const fnc = timeSlice(fnc_)
    const fnc1 = timeSlice_(fnc_)
    let start = performance.now()
    console.log('start')
    const a = await fnc()
    console.log('end', `${(performance.now() - start) / 1000}s`)
    console.log('start')
    start = performance.now()
    const b = await fnc1()
    console.log('end', `${(performance.now() - start) / 1000}s`)
    console.log(a, b)
}, 1000)

The effect

In this run, the optimized time-slicing function is 4452 times more efficient than the unoptimized one. All we did was increase how long the function executes continuously in each slice.

Finally

The position of yield in the generator function is critical: it needs to sit wherever the time-consuming work can be paused. The optimized time-slicing function also exposes a time parameter that you can adjust as needed.
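For example, to slice a loop over a large data set, the yield goes inside the loop so that every iteration is a potential pause point (a minimal sketch; processAll and the per-item work are hypothetical):

function* processAll (items) {
    const results = []
    for (const item of items) {
        results.push(item * 2) // stand-in for the heavy per-item work
        yield                  // slicing point: timeSlice_ may pause here once the budget is used up
    }
    return results
}

// usage, assuming timeSlice_ from above:
// const run = timeSlice_(processAll, 16) // roughly one frame's budget
// const results = await run([1, 2, 3])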