If you enjoy this article, a like 👍, a favorite, and a comment would mean a lot to me. Thank you!


In typical mobile development, new application features are written in the native languages and shipped through app store releases. But the market moves fast, and native development is comparatively slow; on top of that, there is always a lag between submitting a new version, passing app store review, and users actually downloading the update, so new features cannot reach all users in time.

To solve this problem, developers often introduce a scripting language into the project to speed up development and iteration. The two most widely used scripting languages on mobile are Lua and JavaScript: the former is more common in games, the latter in applications. This article explores how to choose a JavaScript engine for both mobile platforms (iOS & Android). Given the limits of my own experience, there are bound to be omissions and mistakes; corrections are welcome.

<!--truncate-->

What to consider when choosing a JS engine

As the most popular scripting language in the world, JavaScript has many engine implementations: Apple's JavaScriptCore, Google's V8 (the best-known performer), the recently popular QuickJS, and so on. How do you pick the one that fits best? Personally, I think there are several things to weigh:

  • Performance: no argument here, the faster the better
  • Package size: the JS engine adds to the size of the app package, the smaller the better
  • Memory footprint: the less memory used, the better
  • JavaScript syntax support: the more modern syntax supported, the better
  • Ease of debugging: does the engine support debugging out of the box, or do you have to build the debugging toolchain yourself?
  • App store rules: mainly on iOS, where Apple forbids third-party apps from shipping a virtual machine with JIT enabled

The trouble is that these criteria pull against each other. For example, V8 with JIT has the best performance, but the engine is large and its memory footprint is high; QuickJS wins on package size, but without JIT it is on average 5-10x slower than JIT-enabled engines.

With these criteria in mind, I picked four JSVMs (JavaScriptCore, V8, Hermes, and QuickJS) and will go through their features, strengths, and weaknesses in turn.

JS engines compared

1. JavaScriptCore


JavaScriptCore (JSC) is the JS engine built into WebKit. It does not even have its own Wikipedia entry; it is only covered in a third-level section of the WebKit article, which feels a bit unfair for such a veteran JS engine.

WebKit was open-sourced by Apple long ago and powers Apple's own Safari browser and WebView. On iOS in particular, Apple's rules require every app to render web pages with WebKit, so WebKit holds a de facto monopoly there; riding on that policy, JSC, as WebKit's JS module, has "basically" monopolized the JS engine share on iOS as well.

Monopoly aside, JSC's performance is actually quite decent. Many people don't know that JSC shipped JIT before V8 did; more than a decade ago it was the best JS engine around, until V8 caught up. Since iOS 7, JSC has been exposed as a system-level framework for developers, which means that if your app uses JSC, you just import the framework: the package size overhead is zero. Of the engines discussed here, JSC does best on this point.
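To give a feel for how little is involved, here is a minimal sketch using the C API that the JavaScriptCore framework exposes (the same engine is also reachable through the JSContext Objective-C/Swift wrapper); it is written as C++ only to keep this article's examples in one language:

```cpp
#include <JavaScriptCore/JavaScriptCore.h>
#include <cstdio>

int main() {
  // JSC ships with iOS/macOS as a system framework, so linking it adds
  // nothing to the app package.
  JSGlobalContextRef ctx = JSGlobalContextCreate(nullptr);

  JSStringRef source = JSStringCreateWithUTF8CString("JSON.stringify({n: 1 + 2})");
  JSValueRef exception = nullptr;
  JSValueRef result = JSEvaluateScript(ctx, source, nullptr, nullptr, 1, &exception);

  if (!exception) {
    JSStringRef str = JSValueToStringCopy(ctx, result, nullptr);
    char buf[256];
    JSStringGetUTF8CString(str, buf, sizeof(buf));
    printf("%s\n", buf);  // => {"n":3}
    JSStringRelease(str);
  }

  JSStringRelease(source);
  JSGlobalContextRelease(ctx);
  return 0;
}
```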


Although JSC performs well with JIT enabled, JIT is only turned on by default in Apple's Safari browser and in WKWebView. If you embed JSC directly in your own project, JIT is disabled. Why? Rednaxelafx gave a very professional explanation:

JIT compilation requires the underlying system to support dynamic code generation, which at the operating-system level means allowing memory pages to be allocated with both write and execute permissions. Once an application is allowed to request writable-and-executable memory pages, it becomes easier to attack: arbitrary code can be generated and executed at runtime, which gives malicious code more room to exploit.

For security reasons, Apple forbids third-party apps from enabling JIT when using JSC. React Native's JavaScript Runtime documentation page explains this as well. In practice, though, as long as you don't do heavy CPU computation in JS, JSC is more than adequate as a glue language.


Everything above is about iOS; on Android, JSC does not fare nearly as well. It is poorly optimized for Android devices: even with JIT turned on its performance is weak, which is one of the reasons Facebook decided to build Hermes. See the Hermes section below for a performance comparison.


Finally, debugging support. On iOS you can debug JSC directly with Safari's debugger; on Android I have not yet found a good way to debug on a real device.


Overall, JavaScriptCore has a clear home-field advantage on iOS, where all its metrics are excellent, but it falls short on Android for lack of optimization.

2. V8


V8 needs little introduction: it is largely responsible for getting JavaScript to where it is today. Its performance goes without saying; with JIT enabled it is the strongest in the industry (and not just among JS engines). Plenty of articles introduce V8 already, so I won't repeat them here; let's focus on how V8 behaves on mobile.


Android is Google's home turf too: every Android phone ships a Chromium-based WebView, and V8 comes bundled inside it. But V8 is tied tightly to Chromium; unlike JavaScriptCore on iOS, it is not exposed as a standalone system library that any app can call. So if you want to use V8 on Android, you have to package it yourself. The best-known community project for this is J2V8, which provides Java bindings for V8.

V8's performance is beyond question, and on Android JIT can be turned on, but these advantages come at a price: memory footprint is high with JIT enabled, and V8's package size (around 7 MB) is a bit of a luxury for a Hybrid system that just draws UI.

Let's also talk about integrating V8 on iOS. In 2019 the V8 team shipped JIT-less V8, which turns off JIT and executes JS with the Ignition interpreter alone, and that makes it possible to integrate V8 on iOS, since Apple does allow embedding a VM engine that runs in interpreter-only mode. Personally, though, I think JIT-less V8 on iOS is of limited value: with only the interpreter running, V8's performance is roughly on par with JSC, and bundling it still adds package size.
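For the curious, here is a minimal embedding sketch showing what interpreter-only mode looks like from the embedder's side; it is only an illustration (exact signatures shift a little between V8 versions), and the only iOS-relevant twist is setting the --jitless flag before initialization:

```cpp
#include <libplatform/libplatform.h>
#include <v8.h>
#include <cstdio>
#include <memory>

int main(int argc, char* argv[]) {
  // Interpreter-only (Ignition) execution; must be set before V8::Initialize().
  v8::V8::SetFlagsFromString("--jitless");

  std::unique_ptr<v8::Platform> platform = v8::platform::NewDefaultPlatform();
  v8::V8::InitializePlatform(platform.get());
  v8::V8::Initialize();

  v8::Isolate::CreateParams params;
  params.array_buffer_allocator =
      v8::ArrayBuffer::Allocator::NewDefaultAllocator();
  v8::Isolate* isolate = v8::Isolate::New(params);
  {
    v8::Isolate::Scope isolate_scope(isolate);
    v8::HandleScope handle_scope(isolate);
    v8::Local<v8::Context> context = v8::Context::New(isolate);
    v8::Context::Scope context_scope(context);

    // Compile and run a trivial script; no machine code is generated in
    // jitless mode, everything stays in the bytecode interpreter.
    v8::Local<v8::String> source =
        v8::String::NewFromUtf8Literal(isolate, "'run by ' + 'Ignition only'");
    v8::Local<v8::Value> result = v8::Script::Compile(context, source)
                                      .ToLocalChecked()
                                      ->Run(context)
                                      .ToLocalChecked();
    v8::String::Utf8Value utf8(isolate, result);
    printf("%s\n", *utf8);
  }
  isolate->Dispose();
  v8::V8::Dispose();
  delete params.array_buffer_allocator;
  return 0;
}
```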


Another interesting but rarely discussed V8 feature is heap snapshots, which V8 has supported since 2015.

How do heap snapshots work? Normally, the first thing a JSVM does after startup is parse the JS files, which takes time. V8 can instead load a pre-generated heap snapshot straight into memory, so the initialized JS context is available almost immediately. The cross-platform framework NativeScript uses exactly this technique and reports JS load times up to 3x faster; see their blog post for details.
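Roughly, an embedder builds such a snapshot offline with v8::SnapshotCreator and ships the resulting blob with the app. The sketch below is only illustrative (BuildWarmSnapshot is a made-up helper name, and the SnapshotCreator API has changed across V8 versions), but it shows the shape of the technique:

```cpp
#include <v8.h>

// Sketch: run the app's initialization JS once on the build machine and
// capture the resulting heap state as a startup snapshot blob.
v8::StartupData BuildWarmSnapshot(const char* warmup_js) {
  v8::SnapshotCreator creator;
  v8::Isolate* isolate = creator.GetIsolate();
  {
    v8::HandleScope handle_scope(isolate);
    v8::Local<v8::Context> context = v8::Context::New(isolate);
    v8::Context::Scope context_scope(context);

    // Evaluate the warm-up script; its side effects live in the context's heap.
    v8::Local<v8::String> src =
        v8::String::NewFromUtf8(isolate, warmup_js).ToLocalChecked();
    v8::Script::Compile(context, src).ToLocalChecked()->Run(context).ToLocalChecked();

    creator.SetDefaultContext(context);
  }
  // At runtime, point v8::Isolate::CreateParams::snapshot_blob at this data
  // before creating the isolate; the warmed-up context is then restored
  // without re-parsing the initialization JS.
  return creator.CreateBlob(v8::SnapshotCreator::FunctionCodeHandling::kClear);
}
```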

Debugging V8 on a real device also requires a third-party library. On Android, the community has extended J2V8 with the Chrome debugging protocol in the J2V8-Debugger project; on iOS I haven't found an equivalent, so you would probably have to implement that layer yourself.


Overall, V8 really is the performance king among JSVMs and can show its full power on Android, but on iOS it plays away from home and I don't recommend it there.

3. Hermes


Hermes is a JS engine that Facebook open-sourced in mid-2019. Its release history makes clear that it was built for React Native and designed from day one for Hybrid UI systems.

Hermes was launched as a replacement for JavaScriptCore on RN's Android side (precisely because JSC performs so poorly on Android). The timeline says it all: Facebook announced Hermes was open source on 2019-07-12, and the jsc-android package's maintenance log stops at 2019-06-25. The signal is unmistakable: we are no longer maintaining JavaScriptCore for Android, everyone please use our Hermes.

Hermes is currently slated to ship on iOS with React Native 0.64, although the RN release blog post is not out yet. As for how that squares with section 3.3.2 of the Apple developer agreement, I won't go into it here.


Hermes has two defining characteristics: it does not support JIT, and it supports generating and loading bytecode directly. Let's look at each in turn.

There are two main reasons Hermes skips JIT. First, a JIT engine needs longer to warm up, which to some extent lengthens first-screen TTI (time to interactive). Front-end pages today aim to open within a second, and TTI is a key metric for that. Second, JIT increases both package size and memory footprint; part of Chrome's notorious memory usage can be laid at V8's door.

Because there is no JIT, Hermes is weak at CPU-intensive computation. In a Hybrid system, the best approach is therefore to treat JavaScript strictly as a glue language: do the CPU-heavy work (matrix transforms, parameter encryption, and so on) in native code, then pass the results to JS for display in the UI. That balances performance with development efficiency.
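As a rough sketch of that division of labor, here is what exposing a native function to Hermes through the JSI C++ API might look like; nativeSum and its trivial body are placeholders for real native work such as matrix math or encryption:

```cpp
#include <hermes/hermes.h>
#include <jsi/jsi.h>
#include <memory>

using namespace facebook;

int main() {
  // Create a Hermes runtime: no JIT, just the interpreter (plus optional
  // precompiled bytecode).
  auto rt = hermes::makeHermesRuntime();

  // Expose a native function to JS. In a real app this would wrap heavy
  // native work instead of a trivial add.
  auto nativeSum = jsi::Function::createFromHostFunction(
      *rt,
      jsi::PropNameID::forAscii(*rt, "nativeSum"),
      2 /* parameter count */,
      [](jsi::Runtime&, const jsi::Value&, const jsi::Value* args,
         size_t) -> jsi::Value {
        return jsi::Value(args[0].asNumber() + args[1].asNumber());
      });
  rt->global().setProperty(*rt, "nativeSum", std::move(nativeSum));

  // The JS side stays a thin glue layer that only orchestrates the result.
  jsi::Value result = rt->evaluateJavaScript(
      std::make_unique<jsi::StringBuffer>("nativeSum(40, 2);"), "glue.js");
  // result.asNumber() == 42
  return 0;
}
```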


The most talked-about aspect of Hermes is its support for generating bytecode. As I mentioned in my earlier post "🎯 What is the core technology of a cross-end framework?", once Hermes added AOT compilation, the Babel, minify, parse, and compile steps all happen on the developer's machine; we just ship the bytecode and Hermes runs it. Let's try a demo.

First write a file called test.js with whatever content you like, then build Hermes from source (just follow the official build docs; I'll skip the details here).

Hermes can of course interpret JS source directly, i.e. the normal load-compile-run flow:

hermes test.js

To try out bytecode, add the -emit-binary argument:

hermes -emit-binary -out test.hbc test.js

This generates a test.hbc bytecode file.

Finally, we can have Hermes load and run test.hbc directly:

hermes test.hbc

To evaluate Hermes bytecode objectively: first, it skips parsing and compiling inside the JS engine, so JS loads much faster, which shows up in the UI as a noticeably shorter TTI. Second, the bytecode format was designed with mobile constraints in mind; it can be loaded incrementally instead of all at once, which is friendlier to mid- and low-end Android devices with limited memory. The bytecode is larger than the original JS file, but given how small the Hermes engine itself is, the size increase is acceptable.


Two articles cover Hermes performance testing in detail. The first is "React Native Memory Profiling: JSC vs V8 vs Hermes", which shows Hermes doing very well on Android devices while JSC does very poorly:

The second is from Ctrip: their research on Hermes, RN's new-generation JS engine, found Hermes to have the best overall performance (and JSC, once again, the worst):


Next, Hermes' JS syntax support. Hermes mainly targets ES6. It lacked Proxy support when it was first open-sourced, but added it in v0.7.0. The team also deliberately left out badly designed APIs such as with and eval(), a trade-off I personally agree with.


Finally, debugging. Hermes supports the Chrome debugging protocol, so we can debug it directly from Chrome's DevTools. See the documentation for details: "Debugging JS on Hermes using Google Chrome's DevTools".


In short, Hermes is a JS engine purpose-built for mobile Hybrid UI systems. If you want to build such a system yourself, it is a very good choice.

4. QuickJS


Before we get into QuickJS, let’s talk about its author: Fabrice Bellard.

There is a saying in the software industry that one great programmer creates more value than twenty mediocre ones. Fabrice Bellard is not merely a great programmer; he is a genius, and in my view his output exceeds that of twenty great programmers. Just look at the timeline of what he has built:

  • In 1997 he published what was then the fastest formula for computing digits of π, a variant of the Bailey-Borwein-Plouffe formula; his version improves the time complexity from O(n^3) to O(n^2), about 43% faster. That is his contribution to mathematics
  • In 2000 he released FFmpeg, his landmark contribution to audio and video
  • He won the International Obfuscated C Code Contest three times, in 2000, 2001, and 2018
  • In 2002 he released TinyGL, his work in graphics
  • In 2005 he launched QEMU, his achievement in virtualization
  • In 2011 he wrote JSLinux, a PC emulator in JavaScript that boots a Linux operating system inside the browser
  • In 2019 he released QuickJS, a JS virtual machine that supports the ES2020 specification

When the gap between two people spans several orders of magnitude, envy gives way to admiration; Bellard is that kind of person for me.


Back to the point: let's look at the QuickJS project itself. QuickJS carries the signature of Fabrice Bellard's work: small and powerful.

QuickJS is tiny: just a handful of C files, with no messy third-party dependencies. Its syntax support goes all the way up to ES2020, and in the Test262 suite its conformance score is even better than V8's.
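Embedding it is correspondingly simple. A minimal sketch of the QuickJS C API (compiled here as C++ only to keep this article's examples in one language; quickjs.h carries extern "C" guards) looks like this:

```cpp
#include "quickjs.h"
#include <cstdio>
#include <cstring>

int main() {
  JSRuntime* rt = JS_NewRuntime();
  JSContext* ctx = JS_NewContext(rt);

  const char* code = "const add = (a, b) => a + b; add(40, 2);";
  JSValue val = JS_Eval(ctx, code, strlen(code), "<input>", JS_EVAL_TYPE_GLOBAL);

  if (!JS_IsException(val)) {
    int32_t n = 0;
    JS_ToInt32(ctx, &n, val);  // completion value of the last statement
    printf("%d\n", n);         // => 42
  }

  JS_FreeValue(ctx, val);
  JS_FreeContext(ctx);
  JS_FreeRuntime(rt);
  return 0;
}
```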


So how does QuickJS perform? Its website hosts a benchmark that compares several JS engines on the same test cases. Here are the results:

Combining the table above with some tests of my own, a few rough conclusions:

  • V8 with JIT enabled scores roughly 35x higher overall than QuickJS, but among similarly lightweight JS engines QuickJS performs exceptionally well
  • QuickJS's memory footprint is far lower than V8's: JIT is a memory hog, and QuickJS was designed to be friendly to embedded systems (another trophy 🏆 for Bellard)
  • QuickJS and Hermes score about the same; my own informal tests also showed the two engines performing similarly


Given QuickJS's positioning, I couldn't help wondering how it stacks up against Lua. Lua is a very compact language that has long served as the glue language for games and C/C++ projects. I wrote some test cases of my own and also found a blog post comparing Lua and QuickJS; both point to the same conclusion: their performance is roughly on par, with Lua faster in some scenarios.


The official documentation mentions that QuickJS also supports bytecode generation, skipping the parse and compile steps for JS files. I initially assumed that, like Hermes, it could emit a bytecode file and hand it straight back to QuickJS to interpret. In fact, the qjsc compiler's -e and -c options (for example, qjsc -e -o hello.c examples/hello.js) compile the JS file to bytecode and then embed it into a .c file that looks roughly like this:

```c
#include <quickjs/quickjs-libc.h>

const uint32_t qjsc_hello_size = 87;

const uint8_t qjsc_hello[87] = {
    0x02, 0x04, 0x0e, 0x63, 0x6f, 0x6e, 0x73, 0x6f,
    0x6c, 0x65, 0x06, 0x6c, 0x6f, 0x67, 0x16, 0x48,
    0x65, 0x6c, 0x6c, 0x6f, 0x20, 0x57, 0x6f, 0x72,
    0x6c, 0x64, 0x22, 0x65, 0x78, 0x61, 0x6d, 0x70,
    0x6c, 0x65, 0x73, 0x2f, 0x68, 0x65, 0x6c, 0x6c,
    0x6f, 0x2e, 0x6a, 0x73, 0x0e, 0x00, 0x06, 0x00,
    0x9e, 0x01, 0x00, 0x01, 0x00, 0x03, 0x00, 0x00,
    0x14, 0x01, 0xa0, 0x01, 0x00, 0x00, 0x00, 0x39,
    0xf1, 0x00, 0x00, 0x00, 0x43, 0xf2, 0x00, 0x00,
    0x00, 0x04, 0xf3, 0x00, 0x00, 0x00, 0x24, 0x01,
    0x00, 0xd1, 0x28, 0xe8, 0x03, 0x01, 0x00,
};

int main(int argc, char **argv)
{
    JSRuntime *rt;
    JSContext *ctx;
    rt = JS_NewRuntime();
    ctx = JS_NewContextRaw(rt);
    JS_AddIntrinsicBaseObjects(ctx);
    js_std_add_helpers(ctx, argc, argv);
    js_std_eval_binary(ctx, qjsc_hello, qjsc_hello_size, 0);
    js_std_loop(ctx);
    JS_FreeContext(ctx);
    JS_FreeRuntime(rt);
    return 0;
}
```

Because this is a .c file, you still have to compile it into a binary before you can run it.

The bytecode designs of QuickJS and Hermes reveal their different positioning. Both skip the cost of parsing JS source text, but QuickJS leans toward embedding: the generated bytecode lives in a C file and has to be compiled before it can run. Hermes, serving React Native, designed its bytecode for distribution from the start (hot updates being one obvious scenario) and can load and run it directly, with no further compilation.


So much for performance; let's look at the development experience, starting with debugging. As of this writing (2021-02-22), QuickJS has no official debugger, which means debugger statements are simply ignored. The community has built a VS Code based debugger, vscode-quickjs-debug, but it requires some customization of QuickJS itself; I'm hoping for official debugger protocol support.

As for integration, the community already has sample projects for both iOS and Android that you can reference and adapt into your own project.


Taken together, QuickJS is a very promising JS engine: excellent syntax support, with performance and size squeezed about as far as they can go. It is well worth considering for mobile Hybrid UI architectures and game scripting systems.

Thoughts on selection

1. Single engine

A single engine means iOS and Android use the same engine, which erases differences at the JS layer and avoids the strange class of bug where the same JS code runs fine on iOS but breaks on Android. Looking at the cross-platform solutions on the market, there are roughly three options:

  • JSC on both platforms: the approach React Native took before 0.60
  • Hermes on both platforms: the design planned for React Native 0.64
  • QuickJS on both platforms: QuickJS is small and makes for a very lightweight Hybrid system

V8 has no home-field advantage on iOS: with JIT off it performs about like JSC while still adding package size, so it is not very cost-effective as a unified choice.

2. Dual engine

The dual-engine approach is equally easy to understand: iOS and Android each use a different engine. The advantage is that each engine plays on its home field; the disadvantage is that the two engines may behave inconsistently across platforms. Current combinations include:

  • JSC on iOS, V8 on Android: Weex and NativeScript do this; it strikes a good balance between package size and performance
  • JSC on iOS, Hermes on Android: React Native today
  • JSC on iOS, QuickJS on Android: Didi's cross-platform framework Hummer is built this way

In other words, the pattern is: pick JSC on iOS and take your pick on Android, letting each platform play to its strengths :)

3. Debugging

Single engine or dual, the day-to-day development experience after integration matters too. Engines with a built-in debugger are fine, but for engines that don't implement a debugging protocol, the lack of a debugger really hurts the experience.

When that happens, we can take a roundabout route. Similar to React Native's Remote JS Debugging, we can add a switch that sends the JS code over a WebSocket to a Chrome Web Worker and debugs it with Chrome's V8. The upside is that you can track down ordinary business bugs this way; the downside is that a second JS engine is now involved, so if you hit an engine-implementation bug it becomes hard to diagnose. The good news is that those are extremely rare, and we shouldn't give up eating for fear of choking, right?

Conclusion

This article has examined JavaScriptCore, V8, Hermes, and QuickJS in terms of performance, size, ease of debugging, and other capabilities, along with each engine's shortcomings. If you have been unsure how to pick a JS engine for mobile, I hope it has given you some useful pointers.

References

What is the core technology of the cross-end framework?

How do I hide my hot update bundle files?

Deep understanding of JSCore

The QuickJS engine has been around for a year


If you enjoyed this article, a like 👍, a favorite, and a comment would mean a lot to me. Thank you!

Feel free to follow my WeChat official account, Spiced Egg Lab. I currently focus on front-end technology and do a bit of research on graphics on the side.

Original link 👉 "Which mobile JS engine is the strongest?" (……), updated sooner and with a better reading experience.