Taro now fully supports converting projects to JD mini programs, which has drawn a lot of attention. Some cheered: "one-click conversion to JD mini programs, finally we can get off work on time." Others, less familiar with Taro, asked: "How well does the conversion work?" and "Does the converted code perform well enough?"

To answer these questions, we compared native JD mini program development with Taro development from two angles: performance and development experience.

Performance comparison

For performance, we measured the package size of an empty Taro project and Taro's rendering performance in a long list, because package size affects a mini program's first-load speed and long lists are a common performance bottleneck.

Taro empty project package size

Each mini program platform currently limits the size of the main package: JD mini programs are limited to 5 MB and WeChat mini programs to 2 MB. Initial load speed is critical to the user experience, and the larger the main package, the longer it takes to download, so the size of the framework itself is an important criterion when choosing one before development. If the framework is too large, it squeezes the space left for business logic.

The following images show the size of the Taro runtime framework before and after compression. The compressed size is only 84 KB, which has a negligible impact on the main package budget.

Before compression:

After compression:

Long list rendering performance

Benchmark introduction

We wrote a benchmark, modeled on js-framework-benchmark, to compare the rendering performance of Taro code and native code in long-list scenarios.

Speed metrics
  • Initialize: render 40 items, measured from entering the page until rendering completes.
  • Create: create 40 items after the page's onLoad.
  • Add: add 20 items at a time to a list of 40 already-created items.
  • Partial update: update the name of every 10th item among 400 items (see the sketch after this list).
  • Swap: swap the positions of two items among 400 items.
  • Select: tap a product image to change the font color of the product name.
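To make the operations concrete, here is a rough sketch of what the "partial update" case could look like in Taro. The item shape ({ id, name }) and the component below are illustrative, not the actual benchmark code.

```tsx
import Taro, { Component } from '@tarojs/taro'
import { View, Text } from '@tarojs/components'

interface Item { id: number; name: string }

export default class List extends Component<{}, { items: Item[] }> {
  state = {
    // 400 already-created items
    items: Array.from({ length: 400 }, (_, i) => ({ id: i, name: `item ${i}` }))
  }

  // "Partial update": change the name of every 10th item, then setState.
  handlePartialUpdate = () => {
    const items = this.state.items.map((item, index) =>
      index % 10 === 0 ? { ...item, name: `${item.name} !!!` } : item
    )
    this.setState({ items })
  }

  render () {
    return (
      <View onClick={this.handlePartialUpdate}>
        {this.state.items.map(item => (
          <Text key={item.id}>{item.name}</Text>
        ))}
      </View>
    )
  }
}
```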

Timing points

Taro:

Start: the top of the event handler.

End: the top of the setState callback.

Native mini program:

Start: the top of the event handler.

End: the top of the setData callback.
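As a concrete illustration, the Taro side can be instrumented roughly as below; Date.now() and the 40-item "create" case are only for illustration, and the native version is analogous, with the setData callback in place of the setState callback.

```tsx
import Taro, { Component } from '@tarojs/taro'
import { View } from '@tarojs/components'

export default class Bench extends Component<{}, { items: string[] }> {
  state = { items: [] as string[] }

  handleCreate = () => {
    const start = Date.now() // Start: top of the event handler
    const items = Array.from({ length: 40 }, (_, i) => `item ${i}`)
    this.setState({ items }, () => {
      // End: top of the setState callback (setData callback in native code)
      console.log('create took', Date.now() - start, 'ms')
    })
  }

  render () {
    return <View onClick={this.handleCreate}>create 40 items</View>
  }
}
```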

Other

Benchmark repository: Github

Taro version: 1.3.21

Test device: Meizu Meilan Note

Test method: run each test 10 times per group, then average after discarding the maximum and minimum values

Test results

Because the setData callback fires at slightly different times in JD and WeChat mini programs, the results are listed separately (all times in milliseconds).

| Operation | Taro (jd) | Native JD mini program |
| --- | --- | --- |
| Initialize | 150 | 123 |
| Create | 87 | 85 |
| Partial update | 125 | 235 |
| Swap | 140 | 213 |
| Select | 131 | 155 |

| Operation | Taro (weapp) | Native WeChat mini program |
| --- | --- | --- |
| Initialize | 1155 | 1223 |
| Create | 500 | 408 |
| Partial update | 167 | 307 |
| Swap | 252 | 309 |
| Select | 193 | 178 |

The tests also show that the length of the list affects how long an add operation takes: the longer the list, the longer each add takes. We therefore cannot simply average the time of N add operations; instead, we use line charts to show how rendering time changes as the number of add operations grows.

Analysis of results

Create

Taro does some extra processing of the data when items are created, so it is slightly slower than native here.

Initialize

Initialization differs from creation in that it also includes page construction time: initialization time = page construction time + creation time.

Taro processes data both during page construction and during the create operation, so its overall initialization takes slightly longer than native.

Why, then, is Taro's initialization shorter than native in the WeChat mini program? In the benchmark, Taro renders the list in componentWillMount while the native page renders it in onLoad. Because Taro builds pages with Component, componentWillMount is actually triggered in the attached lifecycle, and in WeChat attached fires noticeably earlier than onLoad.

Select

Taro only adds a thin wrapper around callback functions to handle event arguments, `this` binding, and so on, so its speed here is comparable to native.
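This is not Taro's actual source, but a sketch of the general idea: the handler the developer writes is invoked through a single thin wrapper that normalizes the raw mini program event and fixes `this`, so the per-event cost is essentially one extra function call.

```ts
// Illustrative only: a thin wrapper that normalizes the raw mini program
// event object and calls the user's handler with the right `this`.
interface NormalizedEvent {
  type: string
  detail: any
  target: any
}

function wrapHandler<C> (component: C, handler: (this: C, e: NormalizedEvent) => void) {
  return function (rawEvent: any) {
    const event: NormalizedEvent = {
      type: rawEvent.type,
      detail: rawEvent.detail,
      target: rawEvent.target
    }
    handler.call(component, event) // one extra call per event, so overhead is tiny
  }
}
```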

Partial update, swap, add

Here Taro is faster than native. Before calling setData, Taro diffs the new data against the current data, which greatly reduces the amount of data passed to setData and speeds up rendering. Comparing the two line charts also shows that the larger the data set, the bigger the payoff from this diff.
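As an illustration of the idea (a simplified sketch, not Taro's actual diff implementation), the new data can be compared with the current data and collapsed into a minimal set of setData paths:

```ts
// Simplified sketch: walk the new data, keep only the paths whose values
// changed, and emit them in the "a.b[2].c" path form that setData accepts.
function diffToPayload (
  prev: any,
  next: any,
  path = '',
  payload: Record<string, any> = {}
): Record<string, any> {
  const isArray = Array.isArray(next)
  Object.keys(next).forEach(key => {
    const keyPath = isArray ? `${path}[${key}]` : path ? `${path}.${key}` : key
    const prevVal = prev == null ? undefined : prev[key]
    const nextVal = next[key]
    if (prevVal === nextVal) return
    if (prevVal !== null && nextVal !== null &&
        typeof prevVal === 'object' && typeof nextVal === 'object') {
      diffToPayload(prevVal, nextVal, keyPath, payload) // recurse into objects/arrays
    } else {
      payload[keyPath] = nextVal // leaf change: record just this path
    }
  })
  return payload
}

// e.g. updating one product name in a 400-item list yields something like
// { "list[3].name": "new name" } instead of sending the whole list again.
```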

Taro's performance optimizations for mini programs

setData

In mini programs, performance problems mostly come from passing too much data to setData in a single call and from calling setData too frequently. Taro uses a diff to cut down the size of each setData call, and it also has an answer to frequent setData calls.

Taro's setState follows the React specification: unlike setData, which updates the view synchronously, it updates the view asynchronously. So if a developer calls setState several times within one event loop, setData is still only called once, in the next event loop.
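For example, a sketch of the batching behaviour described above, with hypothetical state fields:

```tsx
import Taro, { Component } from '@tarojs/taro'
import { View, Text } from '@tarojs/components'

export default class Counter extends Component<{}, { count: number; loading: boolean; msg: string }> {
  state = { count: 0, loading: true, msg: '' }

  handleClick = () => {
    // Three setState calls within one event loop...
    this.setState({ count: this.state.count + 1 })
    this.setState({ loading: false })
    this.setState({ msg: 'done' })
    // ...are merged, so the view layer receives a single setData in the next tick.
  }

  render () {
    return (
      <View onClick={this.handleClick}>
        <Text>{this.state.msg || this.state.count}</Text>
      </View>
    )
  }
}
```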

Jump preloading

When a mini program jumps from page A to page B, onLoad on page B fires only after a delay of 300 to 400 milliseconds. Taro provides the componentWillPreload hook, which executes as soon as the jump is initiated, so developers can start fetching data as early as possible and save those 300 to 400 milliseconds compared with fetching after onLoad.
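A minimal sketch of how this can be used, based on our reading of the Taro 1.x preload docs. fetchGoods is a hypothetical data-fetching function, and the exact shape of this.$preloadData should be checked against the documentation.

```tsx
import Taro, { Component } from '@tarojs/taro'
import { View, Text } from '@tarojs/components'

// Hypothetical data-fetching helper returning a Promise.
declare function fetchGoods (id: string): Promise<{ name: string }>

export default class Detail extends Component<{}, { name: string }> {
  state = { name: '' }

  // Runs as soon as the jump to this page is initiated,
  // i.e. 300-400 ms before onLoad would fire.
  componentWillPreload (params: { id: string }) {
    return fetchGoods(params.id)
  }

  componentWillMount () {
    // The value returned above is exposed as this.$preloadData.
    const preload = (this as any).$preloadData as Promise<{ name: string }>
    preload.then(goods => {
      this.setState({ name: goods.name })
    })
  }

  render () {
    return (
      <View>
        <Text>{this.state.name}</Text>
      </View>
    )
  }
}
```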

shouldComponentUpdate & Taro.PureComponent

A class component can extend Taro.PureComponent, so that before each update it performs a shallow comparison of the old and new props and state and skips unnecessary updates. Developers can also implement shouldComponentUpdate themselves and decide, based on the old and new props and state, whether the component should update.
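A sketch of both approaches, with a hypothetical GoodsItem component; the shouldComponentUpdate variant is shown as a comment because it replaces the shallow comparison that PureComponent provides.

```tsx
import Taro from '@tarojs/taro'
import { View, Text } from '@tarojs/components'

interface Props { name: string; price: number }

// Extending Taro.PureComponent adds a shallow comparison of old and new
// props/state before each update, so unchanged items skip re-rendering.
export default class GoodsItem extends Taro.PureComponent<Props> {
  // Alternative: extend Taro.Component and control updates by hand:
  // shouldComponentUpdate (nextProps: Props) {
  //   return nextProps.name !== this.props.name || nextProps.price !== this.props.price
  // }

  render () {
    const { name, price } = this.props
    return (
      <View>
        <Text>{name}</Text>
        <Text>{price}</Text>
      </View>
    )
  }
}
```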

Taro.memo

If you are writing functional components, you can get the same effect as shouldComponentUpdate with Taro.memo.
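A sketch of the functional-component counterpart, assuming Taro.memo mirrors React.memo (the component is only re-rendered when its props change):

```tsx
import Taro from '@tarojs/taro'
import { View, Text } from '@tarojs/components'

interface Props { name: string; price: number }

function GoodsItem ({ name, price }: Props) {
  return (
    <View>
      <Text>{name}</Text>
      <Text>{price}</Text>
    </View>
  )
}

// Wrapping with Taro.memo skips re-rendering when props are shallowly equal.
export default Taro.memo(GoodsItem)
```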

Development Experience Comparison

Syntax

The native syntax of JD mini programs is similar to that of WeChat mini programs: an MVVM-like syntax that carries a learning cost for developers who have never written a mini program. In addition, the style language is a subset of CSS whose adaptive size unit is rpx, which means you have to configure the build yourself if you want a CSS preprocessor and constantly watch the unit conversions while writing styles.

Taro, on the other hand, currently follows React syntax (and plans to support all major web front-end frameworks in the future), and JSX makes the code more flexible, so developers with React experience can start building with Taro right away. For styles, Taro lets you choose a CSS preprocessor when creating a project and configures the corresponding build automatically, and it automatically converts the px values you write into the corresponding rpx values, so you never have to deal with size units for each platform.

Project structure

In native development, every page and component consists of four files (JS, JXML, JXSS, JSON), which makes the code relatively cumbersome to manage.

In Taro, a page or component consists of just a JS file and a style file, which are easy to create and maintain.

Ecosystem

After continuous iteration, WeChat mini programs now offer a plugin system and support referencing npm packages. JD mini programs support neither, their community has yet to take shape, and ecosystem resources are scarce.

Taro not only lets you reference npm packages freely, it also supports many excellent tools and libraries from the React community, such as React-Redux and Mobx-React.

Development tooling

Native JD mini program development does not support TypeScript and offers only auto-completion in the IDE editor, which makes coding inefficient and error-prone.

Taro has full TypeScript support, along with intelligent code hints and real-time code checking, which greatly improves development efficiency.

Final words

Is Taro's performance really better than native? Not exactly: for any given scenario we could always write the best-performing code natively. But that is far more work, and real projects have to balance efficiency against optimization. Taro's strength is that it lets us write code more efficiently and enjoy a richer ecosystem while still keeping performance good.

Finally, you are welcome to build your applications with Taro. If you have any questions or would like to collaborate, contact [email protected] and we will get back to you as soon as possible.

Links

  • Taro official website
  • Taro documentation
  • Taro forum

Welcome to the Aotu Lab blog: AOtu.io

Or follow the Aotu Lab WeChat official account (AOTULabs), which publishes articles from time to time: