Animations written in JavaScript suffer from performance issues due to the asynchronous nature of the Bridge in React Native. Modern animation libraries like Animated overcome these pitfalls by minimizing traffic over the Bridge. User interaction is a harder problem still, because the interface must be updated continuously in response to user input. Can we reach 60 FPS with React Native?
React Native is a modern mobile application development framework with strong appeal to developers. One of its main advantages is development productivity: it dramatically speeds up application development, in part by letting developers share code across platforms.
However, developers have always had concerns about the framework: can I use React Native to take the last, critical steps? Will my app's performance be comparable to that of the best apps developed purely natively?
Admittedly, these concerns are justified. At Wix.com, we started migrating pure native code to React Native over a year ago. The first 95 percent of development was pretty easy, and we moved about four times faster than we normally would, but the last 5 percent brought challenges. We found that plain React Native couldn't carry us through what I call the "last mile" stage.
How to cross that last mile smoothly is what we and the community are striving to improve.
What's the difference between the best apps and the mediocre ones? In the mobile space, the bar has been raised: we want objects to do more than just pop onto the screen; they should transform, move, and respond.
Achieving smooth animation at 60 frames per second was an important part of that last 5 percent. In the past, React Native suffered from poor animation performance, a problem eventually solved by the excellent Animated library.
Beyond animation, we went a step further and sought to implement dynamic user interactions that mimic reality. An interaction occurs when the user gestures on a view and the view responds the way a physical object would in the real world.
To make this concrete, let's look at some real-life examples. I pulled out my phone and looked for good interactions in some of my favorite apps, categorizing them as follows:
- ListView row actions: the screenshot on the far left shows the inboxes of Apple's official Mail app for iOS and Gmail, Google's email app. In these list interfaces, the user swipes a row sideways with a finger, and action buttons gradually appear from the edge.
- Swipeable cards: the second screenshot from the left shows Google Now, Google's assistant app, and Flic, Lifehacker Labs' app for sorting and deleting photos, both with Tinder-like interaction. These card interfaces follow the user's finger as it drags, and if the finger flicks hard enough, the card flies off the screen.
- Collapsible views: the second screenshot from the right shows Airbnb and Cal, the smart calendar app by Any.do. Both offer views that collapse between multiple states: Airbnb users switch between the filter and search pages, and Cal users switch between browsing by month and by week.
- Sliding panels and drawers: the screenshot on the far right shows the iOS notification panel and the official iOS Maps app. Users drag these panels to reveal UI elements that are normally hidden, much like the popular navigation drawer or side menu.
What do these examples have in common? They all imitate physical motion. The view's velocity varies as the user drags and flings it. Note the subtle details, such as the notification panel bouncing off the bottom when flung down hard.
Having chosen React Native as our development framework, we naturally tried to implement these interactions in JavaScript. First, let's review how that works, starting with the first example, ListView row actions. SwipeableRow is the implementation found in React Native core.
This is the latest, performance-oriented implementation, using much of the Animated library. For now, however, let’s focus on implementing the interaction itself.
_handlePanResponderMove(event: Object, gestureState: Object): void {
  if (this._isSwipingExcessivelyRightFromClosedPosition(gestureState)) {
    return;
  }
  this.props.onSwipeStart();
  if (this._isSwipingRightFromClosed(gestureState)) {
    this._swipeSlowSpeed(gestureState);
  } else {
    this._swipeFullSpeed(gestureState);
  }
},

_isSwipingRightFromClosed(gestureState: Object): boolean {
  const gestureStateDx = IS_RTL ? -gestureState.dx : gestureState.dx;
  return this._previousLeft === CLOSED_LEFT_POSITION && gestureStateDx > 0;
},

_swipeFullSpeed(gestureState: Object): void {
  this.state.currentLeft.setValue(this._previousLeft + gestureState.dx);
},

_swipeSlowSpeed(gestureState: Object): void {
  this.state.currentLeft.setValue(
    this._previousLeft + gestureState.dx / SLOW_SPEED_SWIPE_FACTOR,
  );
},

_isSwipingExcessivelyRightFromClosedPosition(gestureState: Object): boolean {
  const gestureStateDx = IS_RTL ? -gestureState.dx : gestureState.dx;
  return (
    this._isSwipingRightFromClosed(gestureState) &&
    gestureStateDx > RIGHT_SWIPE_THRESHOLD
  );
},
This implementation relies on PanResponder to calculate view changes caused by touch events. What kind of performance should we expect from this approach?
In order to analyze the behavior of the interaction, we must have a deeper understanding of React Native’s internal mechanisms. React Native runs two parts simultaneously: the JavaScript part where we run our business logic, and the Native part where our Native views are. The two parts communicate through the Bridge. Frequent communication is expensive because each transfer of data over the Bridge requires the data to be serialized.
Touch events are native constructs; they originate in the native realm. For each frame of the interaction, these events must be transmitted over the Bridge and handled by _handlePanResponderMove in the JavaScript realm. Once the business logic calculates the response, we set an Animated.Value. But because the view being updated lives in the native realm, we have to cross the Bridge once more.
As you can see, every single frame of the interaction requires data to be serialized and passed over the Bridge in both directions. As soon as your app gets busy, this overhead can drop the frame rate below 60 FPS.
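To see where those Bridge crossings happen, here is a minimal sketch (with simplified names, not the actual SwipeableRow source) of how such handlers are typically wired up with PanResponder:

import {Animated, PanResponder} from 'react-native';

const previousLeft = 0;
const currentLeft = new Animated.Value(0);

const panResponder = PanResponder.create({
  onMoveShouldSetPanResponder: () => true,
  onPanResponderMove: (event, gestureState) => {
    // 1. the native touch event was serialized and sent over the Bridge;
    // 2. the business logic runs here, in JavaScript;
    // 3. setValue sends the resulting position back over the Bridge.
    currentLeft.setValue(previousLeft + gestureState.dx);
  },
});
// panResponder.panHandlers is then spread onto the view being dragged, e.g.
// <Animated.View {...panResponder.panHandlers} style={{left: currentLeft}} />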
We initially developed the Wix React Native app with all interactions implemented in JavaScript. But they weren't as smooth as expected, so we started implementing these parts of the UI with native components.
This means implementing everything twice: once in Objective-C for iOS and again in Java for Android. Reaching 60 FPS is usually easier in the native realm because no data crosses the Bridge; running the business logic and the view updates natively closes the loop within every frame.
We open-sourced most of this native code as several libraries, such as react-native-swipe-view for swipeable cards and react-native-action-view for ListView row actions. Instead of looking for a universal solution, we ended up with a single-purpose library for each use case.
The main problem with this approach is that it requires native development skills, and usually two different developers, one for iOS and one for Android. At Wix, we keep about 10 percent of our front-end developers as native developers, because Objective-C/Swift or Java skills are still required.
Of course, this approach is not good enough. We should aim higher and look for more effective, universal solutions.
In fact, animation once presented a similar challenge: JavaScript code interpolated view properties frame by frame and pushed every update to the native UI. That meant heavy traffic over the Bridge, and dropped frames. We all know that the Animated library is now the dominant solution for 60 FPS animations in React Native. How does it manage that?
Animated's philosophy is to describe animations with a declarative API. If we can declare the entire animation up front, its JavaScript description can be serialized and sent over the Bridge just once. From then on, a general-purpose driver can execute the animation frame by frame as required.
Originally, Animated's driver was implemented in JavaScript. Recent versions, however, provide a native driver that runs the animation frame by frame at the native level and updates native views without touching the Bridge.
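As a point of reference, here is a minimal sketch of what a fully declarative, natively driven animation looks like with today's Animated API (the useNativeDriver flag opts into the native driver):

import {Animated} from 'react-native';

const opacity = new Animated.Value(0);

// The whole animation is declared up front, serialized once over the Bridge,
// and then executed frame by frame entirely on the native side.
Animated.timing(opacity, {
  toValue: 1,
  duration: 300,
  useNativeDriver: true,
}).start();

// <Animated.View style={{opacity}}> ... </Animated.View>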
This approach reduces Bridge traffic and confines the data exchange to the initialization phase. It leads us to an interesting conclusion:
Declarative APIs are the key to crossing the last mile.
This is a very powerful concept, and we should think more often about writing libraries this way. Whenever you hit a performance limitation in React Native, try this approach: find the typical use cases and design a declarative API that covers all of them. That's exactly what we're going to do.
In order to design an API that works, we should define the following two goals:
1. Our API should be general. A good way to check for generality is to make sure the API covers all eight of the example apps in the UX patterns above.
2. Our API should be simple. A good way to check for simplicity is to make sure that each interaction has no more than three to five lines of definition code.
Before I get into our API, I want to mention some interesting prior work on supporting user interactions. One of the more interesting APIs in the Animated library is Animated.ScrollView, which can drive view properties from the scroll position. Another interesting work in progress is a library by Krzysztof Magiera called react-native-gesture-handler, which can drive view properties from gesture parameters.
Our approach here, however, is a little different. We will start from the eight UX patterns shown above and design the simplest high-level API that can define all of them.
Analyzing the eight UX patterns above, we can see that some views move freely horizontally while others move freely vertically. So the direction of movement is a good place to start.
We also find that views move freely only while being dragged. As soon as the user lets go, they usually snap to one of several predefined snap points. In a drawer interaction, for example, the view snaps to either the open position or the closed position.
Finally, to give the snapping motion a real-world feel, it should follow a spring-like animation curve. And if we don't want the spring to oscillate forever, the API also needs to expose the spring's damping.
In general, the first version of our declarative API can be described with the following properties:
- movement direction (horizontal/vertical only)
- snap points
- spring damping (friction)
Let's use this simple API to define the first two UX patterns, ListView row actions and swipeable cards:
// ListView row actions
<Interactable.View
  horizontalOnly={true}
  snapPoints={[{x: 0}, {x: 100}]}
  friction={0.7}
/>

// swipeable cards
<Interactable.View
  horizontalOnly={true}
  snapPoints={[{x: -360}, {x: 0}, {x: 360}]}
  friction={0.6}
/>
To make a card fly off the screen when swiped, we simply define snap points (at ±360 logical pixels here) where the card is completely off-screen. Note that we use absolute pixel values for simplicity; support for multiple screen sizes, such as percentages, could be added later.
That's a good start, but designing the declarative API is only the first half of the job. Now comes the second half: implementing the native driver. Let's move on to the next stage.
The JavaScript props carry the full description of the interaction; React Native serializes them once during initialization and sends them to the native layer over the Bridge. Our general-purpose native driver receives the description and runs the interaction at the native level, so per-frame calculations no longer cross the Bridge. With the constant data exchange eliminated, interactions can run at 60 FPS.
Let's start with a naive implementation in Objective-C. We use UIPanGestureRecognizer to drag the view. When the gesture ends, we find the nearest snap point and animate the view toward it on a spring curve.
- (void)handlePan:(UIPanGestureRecognizer *)pan
{
  CGPoint translation = [pan translationInView:self];
  self.center = CGPointMake(self.initialPanCenter.x + translation.x,
                            self.initialPanCenter.y + translation.y);

  if (pan.state == UIGestureRecognizerStateEnded) {
    InteractablePoint *snapPoint = [self findClosestPoint:self.snapTo toPoint:self.center];
    if (snapPoint) {
      [UIView animateWithDuration:0.8
                            delay:0.0
           usingSpringWithDamping:0.7
            initialSpringVelocity:0.0
                          options:0
                       animations:^{
        self.center = [snapPoint positionWithOrigin:self.origin];
      } completion:^(BOOL finished) {}];
    }
  }
}
This works, but the problem is that our physics model is one-dimensional. Think about what happens when the user flings the view with some initial velocity. The animation function we used can only account for velocity along the direction the spring travels; if the user flings in another direction, how should the view move? Our model isn't expressive enough to handle that.
Let's look at a more powerful way to drive interactions. Looking into the native SDK, we find that Apple has already prepared UIKit Dynamics for developers.
This nifty API, introduced in iOS 7, is a full physics engine in disguise. It lets us give views physical properties such as mass and velocity, and apply forces to them; the physics of the scene is defined declaratively by attaching behaviors. With it, we can easily improve the implementation above:
if (pan.state == UIGestureRecognizerStateEnded) {
  CGPoint velocity = [pan velocityInView:self.superview];
  InteractablePoint *snapPoint = [self findClosestPoint:self.snapTo toPoint:self.center];

  if (snapPoint) {
    // initial velocity
    UIDynamicItemBehavior *itemBehaviour = [[UIDynamicItemBehavior alloc] initWithItems:@[self]];
    [itemBehaviour addLinearVelocity:velocity forItem:self];
    [self.animator addBehavior:itemBehaviour];

    // snap to point
    UISnapBehavior *snapBehaviour = [[UISnapBehavior alloc] initWithItem:self
                                                             snapToPoint:[snapPoint positionWithOrigin:self.origin]];
    snapBehaviour.damping = 0.8f;
    [self.animator addBehavior:snapBehaviour];
  }
}
This gets us closer to our goal, but we are still one step away. UIKit Dynamics has two major flaws. First, it doesn't support Android; the API exists only on iOS, with no Android SDK counterpart. Second, it doesn't give full control over some behaviors; with snapping, for example, we can't define the tension of the snapping force.
Let's try something a little cooler: why don't we implement UIKit Dynamics ourselves? Real-world forces are described by simple mathematical equations, and building a basic physics engine from scratch wouldn't be too hard.
UIKit Dynamics shows us how, and we can even borrow its behavior model. Take the snap behavior as an example: we can model it with a spring pulling the view toward the snap point. So how does a spring move? Time to review some basic mechanics.
Don't worry about the math; the library will do this work for us. The Wikipedia entries on Newton's laws of motion and Hooke's law provide all the physical background needed.
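For readers who want to follow along, here is a minimal JavaScript sketch of one integration step of such a spring, using the simplest (Euler) scheme; the Objective-C code below does essentially the same thing each frame. The obj record ({x, vx, mass}) is a hypothetical stand-in for a physics object:

// Hooke's law: F = -k * x; Newton's second law: a = F / m.
// k is the spring tension, dt the seconds elapsed since the previous frame.
function springStep(obj, anchorX, k, dt) {
  const dx = obj.x - anchorX;       // displacement from the anchor point
  const ax = (-k * dx) / obj.mass;  // acceleration pulling toward the anchor
  obj.vx += ax * dt;                // integrate acceleration into velocity
  obj.x += obj.vx * dt;             // integrate velocity into position
}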
We have to calculate forces and velocities on every frame. To do this, we need a high-precision timer running at 60 frames per second. Fortunately, there is a native API designed for exactly this: CADisplayLink. Using it looks like this:
self.displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(displayLinkUpdated)];
[self.displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];

- (void)displayLinkUpdated
{
  CFTimeInterval deltaTime = 0.0;
  CFTimeInterval currentTime = [self.displayLink timestamp];
  if (self.lastFrameTime > 0.0) deltaTime = currentTime - self.lastFrameTime;
  self.lastFrameTime = currentTime;
  [self animateFrameWithDeltaTime:deltaTime];
}

- (void)executeFrameWithDeltaTime:(CFTimeInterval)deltaTime onObject:(PhysicsObject*)object
{
  CGFloat dx = self.target.center.x - self.anchorPoint.x;
  CGFloat ax = (-self.tension * dx) / object.mass;
  CGFloat vx = object.velocity.x + deltaTime * ax;

  CGFloat dy = self.target.center.y - self.anchorPoint.y;
  CGFloat ay = (-self.tension * dy) / object.mass;
  CGFloat vy = object.velocity.y + deltaTime * ay;

  object.velocity = CGPointMake(vx, vy);
}
Now that we’re on the right track, we’ve learned something very interesting:
We’re writing a declarative mechanics engine for React Native.
This is so cool.
We finally have a native driver that we fully control. It's time to use this powerful engine to add more capabilities to our declarative API.
So far, our declarative API provides a solid foundation, but it isn't expressive enough for the more nuanced interactions among the eight UX patterns described above. Think back to the iOS notification panel: when the user flings the panel down hard enough, it drops to cover the entire screen and bounces off the bottom.
It's easy to support this behavior with a few more declarative properties: boundaries that limit the view's movement, plus a bounce when the view hits them.
// notification panel having boundaries with bounce
<Interactable.View
  verticalOnly={true}
  snapPoints={[{y: 50}, {y: 667}]}
  boundaries={{bottom: 667, bounce: 2}}
  friction={0.7}
/>
Let's consider another nuanced use case: ListView row actions again, this time where buttons exist on only one side of the row. A common UX design here is that swiping toward the buttons moves freely to reveal them, while swiping the other way meets heavy resistance, and the row springs right back.
We can express this resistance with a constant spring that ties the view to its resting point and resists horizontal movement. The spring acts even while the user is dragging the view.
There's one more problem to solve. There should be no resistance when moving left (the buttons lie in that direction), but there should be resistance when moving right (no buttons there). We can support this by giving every force, including our spring, an optional influence area.
Once the view leaves the force's influence area, the force stops acting on it.
// ListView row actions with a spring having limited influence area
<Interactable.View
  horizontalOnly={true}
  snapPoints={[{x: 0}, {x: -230}]}
  springPoints={[{x: 0, tension: 6000, damping: 0.5, influenceArea: {left: 0}}]}
  friction={0.7}
/>
As you can see, satisfying more and more use cases enriches our declarative API with general-purpose capabilities.
One big piece of the puzzle remains. Think back to the ListView row-actions use case: as the row is swiped aside, the action buttons gradually appear. A common way to present them is to animate their properties, such as scale and opacity, as they show up.
You can see the effect below (the action buttons are blue).
Also note that the views we want to animate are the blue action buttons, not the view the user directly interacts with, which is the gray row on top.
This effect isn't straightforward, because the animation here isn't driven by time but by the horizontal position of the row. Still, it is an animation: view properties (scale and opacity) change in a well-defined sequence. We already have a powerful tool for animating view properties however we like: the Animated library. So let's find a way to put it to work here.
Animated animates view properties by declaring interpolations on an Animated.Value:
this._animValue = new Animated.Value(0);

<Animated.View
  style={{
    transform: [{
      scale: this._animValue.interpolate({
        inputRange: [-150, 0],
        outputRange: [0.3, 1],
      }),
    }],
  }}
>
  ...
</Animated.View>
Since our animation depends on the horizontal position of the row, what if the Animated.Value were the row's horizontal position? Then interpolations would be defined in terms of the view's position, and one view's position could drive other views that aren't a direct part of the interaction, such as the buttons.
How does this fit into our declarative API? We simply pass the Animated.Value in as a prop (animatedValueX):
// regular Animated code
this._deltaX = new Animated.Value(0);

<Animated.View
  style={{
    transform: [{
      scale: this._deltaX.interpolate({
        inputRange: [-150, 0],
        outputRange: [0.3, 1],
      }),
    }],
  }}
>
  ...
</Animated.View>

// our improved declarative API
<Interactable.View
  horizontalOnly={true}
  snapPoints={[{x: 0}, {x: -230}]}
  animatedValueX={this._deltaX}
/>
Behind the scenes, our native driver feeds this value using Animated.event. Recent versions of the Animated library even support Animated.event with the native driver. That means the entire chain, from reporting the view's position to updating the dependent view properties, runs at the native level with no Bridge traffic. Great news for reaching 60 FPS.
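For context, here is a sketch of what Animated.event looks like in the familiar scroll case, with the native driver enabled; our driver feeds animatedValueX through the same mechanism:

import {Animated} from 'react-native';

const scrollY = new Animated.Value(0);

// Maps the native scroll event's contentOffset.y straight into scrollY,
// on the native side, without the event ever reaching JavaScript.
const onScroll = Animated.event(
  [{nativeEvent: {contentOffset: {y: scrollY}}}],
  {useNativeDriver: true},
);

// <Animated.ScrollView onScroll={onScroll} scrollEventThrottle={16}> ... </Animated.ScrollView>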
To round out our physics-based interaction engine, we should add the remaining forces as well: we already have springs, so now we add gravity and magnetism. These give developers the flexibility to design all kinds of fascinating real-world interactions.
We'll also add support for events, so our JavaScript code is notified when an interaction comes to a stop or the view snaps to a point. On top of that, we can add haptic feedback, such as vibrating the device when the view collides with a boundary. Details like these polish the user experience.
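A sketch of what such event listeners might look like in the declarative API (the prop and field names follow the eventual react-native-interactable API, but treat them as illustrative here):

<Interactable.View
  snapPoints={[{x: 0, id: 'closed'}, {x: -230, id: 'open'}]}
  onSnap={(event) => console.log('snapped to', event.nativeEvent.id)}
  onStop={(event) => console.log('came to rest at', event.nativeEvent.x, event.nativeEvent.y)}
/>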
It’s time to wrap things up.
I want to show you the full strength of the API we've created. Looking at the declaration below, can you guess what it does?
<Interactable.View
  snapPoints={[{x: 140}, {x: -140}]}
  gravityPoints={[{x: 0, y: 200, strength: 8000, falloff: 40, damping: 0.5, haptics: true}]}
  dragWithSpring={{tension: 2000, damping: 0.5}}
  onStop={this.onStopInteraction}
  animatedValueX={this._deltaX}
  animatedValueY={this._deltaY}
/>
The effect: the view snaps to either the left or right side of the screen, and a gravity well near the bottom of the screen sucks the view in if it gets too close. The view's movement isn't constrained to a single axis.
Yes, we did it all in just seven lines of code!
Watching a video is not the same as playing with a physical device. Note also that an emulator does not give the real experience, because it drops frames.
So can this really run at 60 frames per second on a physical device? Judge for yourself. Using the engine we've just created, I've implemented all eight UX patterns above through our declarative API. You can find the demo apps for iOS and Android on the App Store and Google Play respectively.
Everything, including the native driver (the physics engine) and the demo apps for iOS and Android, can be found on GitHub:
Github.com/wix/react-n…
Special thanks to mobile infrastructure engineer Rotem Mizrachi-Meidan and developer Tzachi Kopylovitz for helping complete the core code in time for ReactConf 2017.
I hope you not only pick up a technique from this interesting experiment, but also see that the React Native community is constantly probing the framework's limitations and breaking through them.
If you encounter interesting performance issues with React Native, I recommend collecting the typical use cases and trying to design a simple declarative API that defines them. If the performance problem stems from the React Native Bridge (a common cause), a native driver for that API may well solve it.
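In skeleton form, the pattern looks something like this (the component and prop names are hypothetical, standing in for whatever your native module registers):

import React from 'react';
import {requireNativeComponent} from 'react-native';

// Hypothetical native view registered by your own iOS/Android module.
const NativeDriverView = requireNativeComponent('MyNativeDriverView');

export function DeclarativeView({behavior, children, ...rest}) {
  // `behavior` fully describes the interaction and crosses the Bridge once,
  // at mount time; the native driver then runs every frame on its own.
  return (
    <NativeDriverView behavior={behavior} {...rest}>
      {children}
    </NativeDriverView>
  );
}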
Finally, let’s cross the last mile together.
Hackernoon.com/moving-beyo…
Note: this article is translated and published with the author's authorization.