WeChat Mini Program Native Framework vs. Taro: A Comparison
A history of mini programs (why mini programs?)
2007: the iPhone and H5
As we all know, there are two dominant mobile platforms today, iOS and Android. In fact, there were three competitors early on; the third was Nokia's MeeGo. MeeGo adopted a dual-mode application ecosystem strategy of C plus HTML5, but C development was too difficult and the HTML5 experience was poor, so MeeGo fell behind. Android, by contrast, stood out from the competition on the strength of the Java technology ecosystem.
So in the early days of the mobile internet, the keynote of the application ecosystem was set: native development.
At the time the hardware was simply too weak, and there was no alternative; only native development could deliver a commercial-grade experience on low-spec devices.
But everyone still missed HTML.
- HTML's advantages: no installation or updates, use on demand, and direct access to secondary pages have always been appealing
- HTML's weaknesses: it is not only short on capability; the bigger problem is performance and user experience, and that problem cannot be solved simply by extending JS capabilities
2013: Baidu Light App
By extending WebView with native capabilities and supplementing it with JS APIs, HTML5 applications could do more.
2015: WeChat JS-SDK
In fact WeChat, host of the largest mobile browser in China, extended its browser kernel with a large number of JS APIs, letting developers use JS to call WeChat Pay, code scanning, and many other capabilities that plain HTML5 could not offer. But every tap still meant waiting through a long blank screen, which made the experience painful.
Hybrid applications
Through tooling, engine optimization, and adjustments to the development model, developers can write apps in JS that come much closer to the native app experience.
There is another big difference between hybrid applications and ordinary light applications: one is Client/Server and the other is **Browser/Server**. Simply put, a hybrid app is an app written in JS that still needs to be installed, while a light app is an online web page.
A C/S application only needs to fetch JSON data over the network when a page loads. A B/S application must also load the page's DOM, styles, and logic code from the server every time, which makes page loads slow and the experience poor.
However, while such C/S apps offer a good experience, they lose HTML5's dynamism: they still need to be installed and updated, and they cannot jump straight to a secondary page.
So can C/S applications be made dynamic?
That is the idea behind streaming applications: package the JS code that runs on the client side of a hybrid app and publish it to a server, define a streaming load protocol, and have the mobile engine dynamically download that JS to the device. To speed up first load, the application can start running while it is still downloading.
Just like streaming media, the app can play while it downloads.
In 2015, 360 launched the first mini program, called 360 Microapp.
In 2016 came the WeChat Mini Program; its original name was actually "WeChat application account", which was later changed to Mini Program.
Then Alibaba, the handset makers' alliance (Quick App), Baidu, and Toutiao launched their own mini program platforms, and the mini program era rolled in.
In September 2018, WeChat took the lead again by launching cloud development.
Mini program architecture
This is a fairly general mini program architecture; the designs of today's mini program platforms are roughly the same (Quick App differs in that its view layer uses only native rendering).
Mini programs are known for an architecture that separates the logic layer from the view layer.
In web development, the rendering thread and the script thread are mutually exclusive, which is why a long-running script makes the page unresponsive; in mini programs the two are kept separate and run on different threads.
Performance pitfalls caused by the architecture
The biggest advantage of this architecture is that new pages can load in parallel, so pages load faster and animations do not stutter. At the same time, it creates some performance pitfalls. This article focuses on three:
- Blocked communication between the logic layer and the view layer
- Differential updates of data and components
- Same-layer rendering and mixed rendering
Blocked communication between the logic layer and the view layer
Let's start with swipeAction. The requirement: the user swipes left on a list item, and the hidden menu on the right slides out smoothly, tracking the gesture.
Why is smooth finger-tracking hard to achieve under the mini program architecture?
Recall the mini program architecture above. The runtime is split into a logic layer and a view layer, each managed by its own thread, and the mini program provides data transfer and an event system between the two threads. This separation has obvious benefits:
Environment isolation helps both security and performance: with logic and view separated, even heavy business logic computation does not block rendering or user interaction in the view layer.
But it also brings obvious disadvantages:
- JS cannot run in the WebView layer, and JS in the logic layer cannot directly modify the page DOM. Data updates and the event system can only go through inter-thread communication, and that communication is expensive, especially in scenarios that require it frequently.
With this architecture in mind, let's return to swipeAction and trace a single touchmove operation through the mini program's internal response process:
- The user drags a list item; the view layer fires the touchmove event, which is relayed by the native layer and delivered to the logic layer, i.e. steps ⓵ and ⓶ in the figure below.
- The logic layer computes the new position and sends the position data back to the view layer via setData, again relayed by the WeChat client (native), i.e. steps ⓷ and ⓸ in the figure below.
In practice, the touchmove callback fires very frequently while the user is swiping, and every callback requires this four-step communication round trip. High-frequency callbacks drive the communication cost up sharply and easily cause the page to stutter or jitter. The lag happens because there is so much communication that the view layer cannot update the UI within the 16 ms frame budget.
To relieve this communication congestion, the mini program platforms gradually introduced their own solutions, such as WeChat's WXS, Alipay's SJS, and Baidu's Filter. Support differs between platforms, as detailed in the table below.
In addition, WeChat's keyframe animation and Baidu's animation-view (Lottie) animation are other ways of working around frequent communication.
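As an illustration of the WXS approach, here is a minimal sketch of a swipe handler that runs entirely in the view layer using WeChat's WXS response events; the file name, bindings, and gesture math are illustrative, and Alipay SJS and Baidu Filter use different syntax.

```js
// swipe.wxs -- bound in WXML roughly like:
//   <wxs module="swipe" src="./swipe.wxs"></wxs>
//   <view class="item"
//         bindtouchstart="{{swipe.touchstart}}"
//         bindtouchmove="{{swipe.touchmove}}">...</view>
// WXS only supports ES5-style syntax, hence var/function below.

var startX = 0;

function touchstart(event, ownerInstance) {
  // Remember where the finger went down.
  startX = event.touches[0].pageX;
}

function touchmove(event, ownerInstance) {
  var deltaX = event.touches[0].pageX - startX;
  // event.instance refers to the element that fired the event; setStyle
  // updates it directly in the view layer, so no touchmove/setData round
  // trip to the logic layer is needed while the finger is moving.
  event.instance.setStyle({
    transform: 'translateX(' + Math.min(0, deltaX) + 'px)'
  });
}

module.exports = {
  touchstart: touchstart,
  touchmove: touchmove
};
```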
Mini program code composition
- JSON configuration
- WXML template
- WXSS style
- JS logic and interaction (see the minimal page example below)
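For concreteness, a minimal page under this model might consist of the four files sketched below; the file contents and the onTap handler are purely illustrative.

```js
// pages/index/index.json  -- page configuration
//   { "navigationBarTitleText": "Home" }
// pages/index/index.wxml  -- template
//   <view bindtap="onTap">{{message}}</view>
// pages/index/index.wxss  -- style
//   view { padding: 20rpx; }

// pages/index/index.js    -- logic
Page({
  data: { message: 'Hello' },
  onTap() {
    // setData is the only way to push data changes to the view layer.
    this.setData({ message: 'Tapped' });
  }
});
```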
At present, every mini program platform limits the size of the main package; for WeChat mini programs the limit is 2 MB. The speed of first launch is critical to the user experience, and the larger the main package, the longer the download. The size of a mini program framework has therefore become an important criterion when choosing a framework before development: if the framework is too large, it squeezes the space left for business logic.
Data/component differential update
The mini program architecture suffers from communication blocking. To address it, the vendors created the WXS scripting language, keyframe animation, and so on, but those are vendor-level optimizations. What can we, as mini program developers, do to optimize performance?
The core of mini program performance optimization is the setData call, and there are only two things you can do:
- Call setData as rarely as possible
- On each setData call, pass as little data as possible, i.e. update data differentially
Reduce the number of setData calls
Suppose we need to change the values of several variables, as in the following example:
```js
change: function () {
  this.setData({ a: 1 });
  this.setData({ b: 2 });
  this.setData({ c: 3 });
  this.setData({ d: 4 });
}
```
The four setData calls above trigger four rounds of data communication between the logic layer and the view layer. In this scenario the developer has to keep the high cost of setData in mind and manually adjust the code, merging the data to reduce the number of communications.
Some third-party mini program frameworks have data merging built in, so developers do not need to worry about the cost of each setData call and can safely write code like this:
```js
change: function () {
  this.a = 1;
  this.b = 2;
  this.c = 3;
  this.d = 4;
}
```
At runtime, uni-app automatically merges these four assignments into a single record, {"a":1,"b":2,"c":3,"d":4}, and calls setData once to transfer all of the data, greatly reducing the frequency of setData calls. The results are shown below:
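To make the mechanism concrete, here is a minimal sketch of how such merging can work: assignments are collected in a pending object and flushed with a single setData in a microtask. This illustrates the idea only, it is not uni-app's actual implementation, and queueUpdate is a hypothetical helper name.

```js
// Collect property writes and flush them with one setData call.
const pending = {};
let flushScheduled = false;

function queueUpdate(page, key, value) {   // hypothetical helper
  pending[key] = value;
  if (!flushScheduled) {
    flushScheduled = true;
    // Flush asynchronously, after all synchronous assignments have run.
    Promise.resolve().then(() => {
      flushScheduled = false;
      const payload = Object.assign({}, pending);
      Object.keys(pending).forEach((k) => delete pending[k]);
      page.setData(payload); // one cross-thread communication instead of four
    });
  }
}
```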
There is one more way to reduce setData calls: background pages (pages the user cannot currently see) should avoid calling setData at all.
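A minimal sketch of that idea, using the standard Page lifecycle callbacks: updates are buffered while the page is hidden and flushed when it becomes visible again. The safeSetData helper and the buffering strategy are illustrative.

```js
Page({
  data: { feed: [] },
  onShow() {
    this._visible = true;
    // Flush any updates that were deferred while the page was hidden.
    if (this._pending) {
      this.setData(this._pending);
      this._pending = null;
    }
  },
  onHide() {
    this._visible = false;
  },
  safeSetData(patch) {           // illustrative helper, not a built-in API
    if (this._visible) {
      this.setData(patch);
    } else {
      // Don't pay the communication cost for a view the user cannot see.
      this._pending = Object.assign(this._pending || {}, patch);
    }
  }
});
```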
Component differential update
Below is a screenshot of a microblog (Weibo) feed:
Assume the feed currently contains 200 posts. When a user likes one post, its like status must change in real time. In the traditional model, changing the like status of one post transfers the data of the entire page through setData, which is very expensive. Even if the changed data is obtained through a diff as described above, the diff has to traverse a very large range and is computationally inefficient.
How can the like action be made more performant? This is a typical scenario for component-level updates.
The right approach is to wrap each post in a component; after the user taps like, the diff is computed only within the current component (you can think of the diff range as shrinking to 1/200 of the original), which gives the best efficiency.
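Here is a minimal sketch of such a component using WeChat's native custom component mechanism; the component name and the post/liked/likeCount fields are illustrative.

```js
// components/post-item/index.js
Component({
  properties: {
    post: Object          // the post passed in from the page
  },
  data: {
    liked: false,
    likeCount: 0
  },
  methods: {
    onLikeTap() {
      // setData here is scoped to this component instance, so only this
      // component's changed fields cross the thread boundary, not the
      // data of all 200 posts on the page.
      this.setData({
        liked: !this.data.liked,
        likeCount: this.data.likeCount + (this.data.liked ? -1 : 1)
      });
    }
  }
});
```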
Note that not every third-party mini program framework implements custom components. Only frameworks whose components are built on the native custom component mechanism see a significant performance gain; if a framework's components are built on the old template mechanism, performance does not improve much and the diff range is still the whole page.
Framework performance comparison (averages)
Package size:
Native < Wepy < Taro < Mpvue < uni-app < Chameleon
Runtime performance on WeChat mini programs:
Taro > Mpvue > uni-app > Wepy > Chameleon > unoptimized native code
That the frameworks beat unoptimized native code looks absurd at first; further investigation found the reason:
The WeChat native framework's time is spent mainly in setData calls. If the developer does not optimize each call individually, a large amount of data is transferred every time. uni-app, Taro, and similar frameworks automatically run a diff before calling setData and transfer only the data that changed.
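The idea behind that optimization can be sketched as a shallow diff run before setData; this is an illustration of the technique, not the actual uni-app or Taro source.

```js
// Return only the top-level keys whose values changed.
function shallowDiff(prev, next) {
  const changed = {};
  Object.keys(next).forEach((key) => {
    if (prev[key] !== next[key]) {
      changed[key] = next[key];
    }
  });
  return changed;
}

// Usage inside a hypothetical framework adapter:
//   const patch = shallowDiff(this.data, nextData);
//   if (Object.keys(patch).length > 0) this.setData(patch);
```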
Ecosystem
Mini programs
The logic layer of a mini program is separate from the render layer. It runs in JSCore, which has no complete browser object, so the DOM and BOM APIs are missing. As a result, libraries familiar to front-end developers, such as jQuery and Zepto, cannot run in mini programs. The JSCore environment also differs from the Node.js environment, so some npm packages will not run in mini programs either.
Taro
Taro not only lets you freely reference npm packages, it also supports many excellent tools and libraries from the React community, such as react-redux and mobx-react.
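For a sense of what that looks like, here is a minimal sketch of a Taro component written in the React style (assuming Taro 3; the component and its state are illustrative):

```jsx
// counter.jsx -- a plain React function component running on Taro
import { View, Button } from '@tarojs/components'
import { useState } from 'react'

export default function LikeCounter() {
  const [count, setCount] = useState(0)
  return (
    <View>
      <Button onClick={() => setCount(count + 1)}>
        Liked {count} times
      </Button>
    </View>
  )
}
```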
Is Taro's performance really better than native? No. For any given scenario we could always write the best-performing code natively. But that takes far more work, and real projects have to balance development efficiency against optimization. Taro's strength is that it lets us write code more efficiently and enjoy a richer ecosystem while still delivering good performance.