Performance Web Animations and Interactions: Achieving 60 FPS

Translator’s note: This article is mostly familiar ground, but there are a few new things to see toward the end.

Every product that aims for a natural feel wants a set of smooth interactions. But developers can overlook the details, resulting in poorly performing Web animations that not only look “janky” but, most immediately, make the page stutter. Developers tend to spend a lot of time optimizing first-screen load, counting every millisecond, while ignoring the performance of the page’s interactive animations.

Everyone at Algolia cares about user experience, and “performance” must be a key part of the conversation. Animation performance is as important to pages as search result speed is to search.

Criteria for success

The frame rate of an animation is a useful yardstick: motion generally looks smooth at 60fps, which works out to 16.7ms (1000 / 60 ≈ 16.7) per frame. Our first priority is therefore to cut unnecessary work: the more the browser has to do within a frame, the more likely frames are to be dropped, and dropped frames are the stumbling block on the way to 60fps. If an animation cannot be rendered within 16.7ms per frame, consider targeting a steady 30fps instead.
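The 16.7ms figure falls straight out of the frame-budget arithmetic; a one-line sketch:

```javascript
// At a given frame rate, each frame gets 1000 ms divided among the frames.
const frameBudget = fps => 1000 / fps;

console.log(frameBudget(60).toFixed(1)); // "16.7" ms per frame at 60 fps
console.log(frameBudget(30).toFixed(1)); // "33.3" ms per frame at 30 fps
```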

Browser 101: Where do pixels come from

Before delving further, there’s an important question to ask: how does the browser translate code into pixels that the user can see?

When a page first loads, the browser downloads and parses the HTML, turning the HTML elements into a “content tree” of DOM nodes. Styles are parsed as well and combined with the DOM to produce a “render tree”. To improve performance, the rendering engine does these jobs separately, and the render tree can even be generated faster than the DOM tree.

Layout

After the render tree is generated, the browser iteratively calculates the size and position of each element, starting at the top-left corner of the page, and eventually produces the layout. This can happen in a single pass, but it may also be repeated as elements are rearranged, because the placement of elements is closely interrelated. To limit the work to what is necessary, the browser tracks changes and marks affected elements and their children as “dirty”. Even so, elements are tightly coupled, so any change to layout is costly and should be avoided.

Paint

After the layout is generated, the browser paints the page to the screen. As in the “layout” step, the browser tracks dirty elements and merges them into one large rectangular area, so that only one repaint occurs per frame, covering the dirty region. Repainting also consumes a lot of performance, so avoid it if you can.

Composite

The last step composites all the painted elements. By default, all elements are painted into a single layer; if elements are split into separate composite layers, updating them is performance-friendly, because elements on other layers are not affected. The CPU paints layers; the GPU composites them, and the basic compositing operations are efficient under hardware acceleration. Separating layers allows non-destructive changes and, as you might guess, changes on a GPU composite layer cost the least in terms of performance.

Stimulate creativity

In general, changing what happens on a composite layer is a cheap operation, so wherever possible trigger changes at the compositing stage by animating only opacity and transform. That sounds… limiting, but is it? Develop your creativity.

Transform

Transforms provide endless possibilities for an element: its position can be changed (translateX, translateY, or translate3d), its size can be changed by scaling, and it can be rotated, skewed, or even transformed in 3D. In some cases, developers need to think differently and use transforms to reduce reflows and repaints. For example, suppose adding the active class to an element moves it 10px to the left by changing the left property:

.box {
  position: relative;
  left: 0;
}
.active {
  left: -10px;
}

Using a transform instead:

.active {
  transform: translateX(-10px);
}

Opacity

Changing the value of opacity lets you show and hide elements (similar to changing display or visibility, but with better performance). For example, to implement a menu toggle: when the menu is expanded, opacity is 1; when it is collapsed, opacity becomes 0. Note that pointer-events is changed as well, to prevent the user from interacting with a menu that is visibly collapsed. The closed class is added when the user clicks ‘close’ and removed when the user clicks ‘open’. The corresponding code looks like this:

.menu {
  opacity: 1;
  transition: .2s;
}

.menu.closed {
  opacity: 0;
  pointer-events: none;
}

In addition, variable opacity means developers can control exactly how visible an element is. Think about where the transparency is applied, though: animating an element’s box-shadow directly, for example, can cause serious performance problems:

.box {
  box-shadow: 1px 1px 1px rgba(0, 0, 0, .5);
  transition: .2s;
}
.active {
  box-shadow: 1px 1px 1px rgba(0, 0, 0, 1);
}

If you instead put the shadow on a pseudo-element and control the shadow by animating the pseudo-element’s opacity, the effect is the same but the performance is better. The code looks like this:

.box {
  position: relative;
}
.box:before {
  content: "";
  box-shadow: 1px 1px 1px rgb(0, 0, 0);
  opacity: .5;
  transition: .2s;
  position: absolute;
  width: 100%;
  height: 100%;
}
.box.active:before {
  opacity: 1;
}

Manual optimization

The good news is that developers can pick the elements they want to control, create a composite layer, and promote those elements onto it. Manual promotion ensures the element is always ready to paint and is the easiest way to tell the browser to prepare it. Scenarios that call for a separate layer include elements whose state changes (such as animations) and styles that are expensive to paint (such as position: fixed and overflow: scroll). You have probably seen the poor performance that makes pages flicker or jitter, or other undesired effects; a classic example is a header fixed to the top of the viewport on a mobile device that flashes as the page scrolls. A common solution is to isolate such elements onto their own composite layer.

Hack method

Previously, developers used backface-visibility: hidden or transform: translate3d(0, 0, 0) to trick the browser into generating a new composite layer. This was never the standard way to do it, and neither property has any effect on the element’s visual appearance.
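A minimal sketch of those legacy hacks, with an illustrative class name; both declarations are visually inert and exist only to nudge the browser into promoting the element to its own layer:

```css
/* Legacy layer-promotion hacks (pick one); neither changes how
   .sticky-header looks, they only force a separate composite layer. */
.sticky-header {
  backface-visibility: hidden;
}
/* or */
.sticky-header {
  transform: translate3d(0, 0, 0);
}
```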

The new method

Now there is will-change, which explicitly tells the browser to optimize the rendering of one or more properties of an element. will-change accepts a variety of values, such as one or more CSS properties (transform, opacity), contents, or scroll-position. The default value is auto, which asks for nothing beyond the browser’s standard optimizations:

.box {
  will-change: auto;
}

There is a lot of talk about “too many composite layers hindering rendering”, and for good reason. The browser already does everything it can to optimize, and will-change’s optimizations are themselves resource-hungry. If will-change stays on an element indefinitely, the browser keeps optimizing that element indefinitely, draining the page’s performance, and too many composite layers degrade page performance, especially on mobile.
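One way to keep that cost in check is to apply will-change only when a change is genuinely imminent and let it fall away afterwards; a common sketch (the selectors are illustrative) scopes it to hover on an ancestor:

```css
/* Promote the element only while the user is close enough to interact,
   so the browser is not asked to hold the optimization forever. */
.menu-container:hover .menu-item {
  will-change: transform;
}
.menu-item:active {
  transform: scale(1.05);
}
```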

Animation methods

You can use CSS (declarative) or JavaScript (imperative) to animate elements, depending on your needs.

Declarative animation

CSS animations are declarative (you tell the browser what to do): the browser needs to know the start and end states of the animation, so it knows how to optimize between them. CSS animations also do not run on the main thread, so they do not interfere with the main thread’s tasks. Overall, CSS animations are more performance-friendly. Keyframe combinations provide quite rich visual effects, such as the following infinite rotation of an element:

@keyframes spin {
  from { transform: rotate(0deg); }
  to { transform: rotate(360deg); }
}
.box {
  animation-name: spin;
  animation-duration: 3s;
  animation-iteration-count: infinite;
  animation-timing-function: linear;
}

But CSS animations lack the expressive power of JS, so it is often best to combine the two: for example, use JS to listen for user input and switch class names in response, with each class name corresponding to a different animation. The following code toggles a class name when an element is clicked:

const box = document.getElementById("box");
box.addEventListener("click", function () {
  box.classList.toggle("class-name");
});

It’s worth noting that if you like to live on the bleeding edge, the new Web Animations API exposes the capabilities of CSS to JavaScript. With this API, developers can handle animation synchronization and timing in a performance-friendly way.
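As a sketch, the infinite spin from the CSS example above could be written with this API like so (spin is a hypothetical helper name):

```javascript
// The same infinite rotation as the CSS keyframes, expressed through the
// Web Animations API. Returns an Animation object for further control.
function spin(element) {
  return element.animate(
    [
      { transform: "rotate(0deg)" },
      { transform: "rotate(360deg)" }
    ],
    { duration: 3000, iterations: Infinity, easing: "linear" }
  );
}
```

The returned Animation object exposes controls such as play(), pause(), and playbackRate, which is where the synchronization and timing control comes from.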

Imperative animation

Imperative animation tells the browser how to render the animation. In some scenarios CSS animation code becomes bloated, or the animation needs more interactive control, and that is where JS comes in. Be careful, though: unlike CSS animations, JS animations run on the main thread (and therefore have a greater chance of dropping frames), so their performance is relatively worse. Within the scope of JS animation, there are still a few performance options worth considering.

requestAnimationFrame

requestAnimationFrame is performance-friendly. You can think of it as an evolution of setTimeout, but it is really an API designed for running animations. In theory, calling this API can sustain a frame rate of 60fps; in practice, the function asks the browser to run your callback before the next available paint, so there is no fixed time interval. The browser reduces CPU usage by grouping the changes on a page into one paint per frame, rather than painting each change individually. rAF can be used recursively:

function doSomething() {
    requestAnimationFrame(doSomething);
    // Do stuff
}
doSomething();

In addition, directly binding handlers to events such as window resizing or page scrolling is relatively expensive; in situations like these, developers can use rAF to throttle the handler and improve performance.
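A sketch of that pattern, with the scheduler injected so the logic can run anywhere (rafThrottle is a hypothetical helper; in the browser you would pass requestAnimationFrame as the scheduler):

```javascript
// Collapse a burst of events into at most one callback per frame.
// `schedule` is requestAnimationFrame in the browser; it is injected
// here so the throttling logic is visible in isolation.
function rafThrottle(callback, schedule) {
  let ticking = false;
  let lastEvent = null;
  return function (event) {
    lastEvent = event;      // always keep the most recent event
    if (!ticking) {         // but schedule only one callback per frame
      ticking = true;
      schedule(() => {
        ticking = false;
        callback(lastEvent);
      });
    }
  };
}

// Illustrative browser usage:
// window.addEventListener("scroll",
//   rafThrottle(updateHeader, requestAnimationFrame));
```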

Scrolling

Achieving smooth scrolling with good performance can be a challenge. Fortunately, recent specifications provide some configurable options. With passive event listeners, developers can declare up front that a handler will not call preventDefault, so the browser no longer has to wait for the handler before scrolling. The usage is to pass { passive: true } as the third argument:

element.addEventListener('touchmove', doSomething, { passive: true });

Starting with Chrome 56, this option is enabled by default for touchmove and touchstart listeners.

The new Intersection Observer API can tell developers when an element enters or leaves the viewport, or intersects another element. Unlike tracking this with event handlers, which blocks the main thread, the Intersection Observer API watches elements and only runs work when they actually cross. The API is useful in both infinite scrolling and lazy loading scenarios.
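A sketch of lazy loading built on this API, assuming images carry their real URL in a data-src attribute (the selector and attribute names are illustrative):

```javascript
// Swap each placeholder image's src for its real URL the first time the
// image enters the viewport, then stop observing it.
function lazyLoadImages(selector) {
  const observer = new IntersectionObserver((entries, obs) => {
    entries.forEach(entry => {
      if (entry.isIntersecting) {
        const img = entry.target;
        img.src = img.dataset.src; // load the real image
        obs.unobserve(img);        // each image only needs this once
      }
    });
  });
  document.querySelectorAll(selector).forEach(img => observer.observe(img));
  return observer;
}

// Illustrative usage: lazyLoadImages("img[data-src]");
```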

Read, then write

Constantly reading from and writing to the DOM leads to “forced synchronous layouts”, which over time has acquired the more graphic name “layout thrashing”. As mentioned earlier, the browser tracks “dirty elements” and queues up changes until they need to be applied; reading certain properties forces the browser to perform layout ahead of time. Such repeated reading and writing leads to repeated reflows. Fortunately, there is a simple solution: read before you write.

To see the effect, consider the following example, which interleaves a read and a write on every iteration:

boxes.forEach(box => {
  box.style.transform = "translateY(" + wrapper.getBoundingClientRect().height + "px)";
});

Moving the read outside forEach, rather than executing it together with the write in each iteration, improves performance:

const wrapperHeight = wrapper.getBoundingClientRect().height;
boxes.forEach(box => {
  box.style.transform = "translateY(" + wrapperHeight + "px)";
});

The future of optimization

Browsers continue to invest more and more effort in performance optimization. The new contain property lets you declare that an element’s subtree is independent of the rest of the page (at the time of writing, only Chrome and Opera support it). It tells the browser: “this element is safe; it does not affect other elements.” contain accepts the values strict, content, size, layout, style, or paint. It ensures that updating the subtree does not trigger reflow in the parent, which is especially useful when embedding third-party widgets:

.box {
  /* Restrict the effects of styles to the element and its children */
  contain: style;
}

Performance testing

Once you know how to optimize page performance, you should also test it. In my opinion, the Chrome developer tools are the best testing tool. There is a ‘Rendering’ panel under ‘More tools’ that includes options such as highlighting repaints of ‘dirty elements’, displaying the frames-per-second meter, highlighting the borders of each layer, and monitoring scrolling performance.

The ‘Timeline’ view in the ‘Performance’ panel records the animation as it runs, letting developers locate the offending part directly. It is very simple: red means there is a problem, green means rendering is normal. Developers can click directly on a red area to see which function is causing the performance problem.

Another interesting tool is ‘CPU throttling’ under ‘Capture Settings’, which lets developers simulate the page running on a much slower device. Testing a page only in a desktop browser can be misleading, because PC or Mac performance is inherently better than mobile; this option provides a good simulation of the real world.

Testing and iteration

The simplest recipe for optimizing animation performance is to reduce the amount of work per frame. The most effective way to relieve the pressure is to update only elements on their own composite layer, so that re-rendering those elements does not easily affect the rest of the page. Performance optimization often means repeated testing and validation, as well as thinking outside the box to find clever ways to achieve high-performance animation; in any case, users and developers both end up benefiting.