Why WebAssembly Is Faster Than asm.js
Author: Alon Zakai
Translation of the original post at huziketang.com/blog/posts/… . When reposting, please credit the source and keep the original link and author information.
WebAssembly is a binary format designed for the Web: a compilation target that browsers can execute directly. On February 28, 2017, the four major browser vendors announced that they had reached consensus that the WebAssembly MVP is complete, and a stable version will soon ship in browsers. One of WebAssembly’s main goals is to be fast. This article gives some technical details on how that speed is achieved.
Fast, of course, is a relative term. Compared with JavaScript and other dynamic languages, WebAssembly is faster mainly because it is statically typed and easy to optimize. What WebAssembly really aims for is speed close to native execution. asm.js already gets close to that, but WebAssembly narrows the gap further, so this article focuses on why WebAssembly is faster than asm.js.
Before we get started, a few caveats: new technologies always have cases that are not yet well optimized, so WebAssembly is not faster in every situation today. This article focuses on why WebAssembly should be faster; the cases where it is not yet fast are things to be fixed over time.
With those caveats in mind, let’s look at why WebAssembly is faster.
1. Startup
WebAssembly was designed to be small to download and fast to parse, so that even large Web applications can start up quickly.
It is not easy to get downloads smaller than gzip-compressed JavaScript, which is already quite compact compared with native code. But WebAssembly’s binary format was carefully designed for size (for example, it uses LEB128 variable-length integers), and a WebAssembly binary typically ends up 10 to 20 percent smaller than the equivalent JavaScript (comparing gzipped sizes).
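To make the size point concrete, here is a minimal sketch (not from the original article) of unsigned LEB128 encoding, the variable-length integer scheme WebAssembly uses for indices, sizes, and other numbers in its binary format; small values, which dominate in practice, take only a single byte:

```js
// Minimal unsigned LEB128 encoder: 7 bits of payload per byte,
// with the high bit set on every byte except the last.
function encodeULEB128(value) {
  const bytes = [];
  do {
    let byte = value & 0x7f;        // take the low 7 bits
    value >>>= 7;
    if (value !== 0) byte |= 0x80;  // continuation bit: more bytes follow
    bytes.push(byte);
  } while (value !== 0);
  return bytes;
}

encodeULEB128(5);   // [0x05]       - one byte for a small value
encodeULEB128(300); // [0xac, 0x02] - two bytes instead of a fixed-width four
```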
WebAssembly goes a step further on parsing as well: it parses roughly an order of magnitude faster than JavaScript, because its binary format was designed to be easy to decode. Parsing and optimizing WebAssembly are also easier to parallelize, which helps even more on multi-core machines.
There are other factors besides downloading and parsing that affect startup time, such as fully optimizing the code in the VM, or extra files that must be downloaded before execution. But downloading and parsing are unavoidable, so it is worth making them as fast as possible. For the other factors, both browsers and applications have ways to avoid or mitigate them (for example, full optimization can be avoided by using a baseline compiler or an interpreter for WebAssembly).
2. CPU features
One trick that makes asm.js fast is the way it exploits CPU features. JavaScript numbers are doubles, but in asm.js an addition is immediately followed by a bitwise coercion (the `|0` annotation), which makes the operation logically equivalent to the CPU performing a simple integer addition, and integer addition is fast. In this subtle way, asm.js lets VMs use the CPU more efficiently.
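For example, here is a minimal asm.js-style sketch (illustrative only, not code from the article). The `|0` coercions tell the VM that the values fit in 32-bit integers, so it can emit a plain integer add rather than a floating-point one:

```js
function AsmModule() {
  "use asm";
  function addInt(x, y) {
    x = x | 0;            // declare the parameters as 32-bit integers
    y = y | 0;
    return (x + y) | 0;   // the |0 after the add lets the VM use a real int add
  }
  return { addInt: addInt };
}
```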
But that is about as far as it can go: there are CPU features that JavaScript simply has no way to express, so asm.js cannot use them. WebAssembly is not bound by JavaScript, so it can take advantage of more of the CPU, for example:
- 64-bit integers. Operations on them can be up to 4x faster, which can speed up hashing and encryption algorithms, for example (a sketch of the emulation asm.js needs instead appears after this list).
- Load and store offsets. These help practically everywhere, since almost all memory objects have fields at fixed offsets (the fields of a C struct, for instance).
- Unaligned loads and stores. These avoid the masking that asm.js has to do (asm.js does it for Typed Array compatibility) and help practically every load and store.
- Various CPU instructions such as popcount and copysign. Each one helps in specific circumstances (popcount, for example, is useful in cryptanalysis).
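To illustrate the 64-bit integer point, here is a rough sketch (illustrative only, not actual compiler output) of how a 64-bit add has to be emulated in asm.js-style code with two 32-bit halves and an explicit carry; WebAssembly does the same work with a single native i64.add instruction:

```js
// Emulated 64-bit add over two 32-bit halves, roughly what asm.js must do.
function addInt64(aLo, aHi, bLo, bHi) {
  aLo = aLo | 0; aHi = aHi | 0; bLo = bLo | 0; bHi = bHi | 0;
  var lo = (aLo + bLo) | 0;
  // Carry out of the low word: the wrapped sum is smaller (unsigned) than an operand.
  var carry = (lo >>> 0) < (aLo >>> 0) ? 1 : 0;
  var hi = (aHi + bHi + carry) | 0;
  return { lo: lo, hi: hi }; // real asm.js would return lo and pass hi through a global
}
```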
How much these features help depends on how heavily the code uses them; in our measurements, WebAssembly is commonly around 5% faster than asm.js. More CPU features, such as SIMD, will provide further speedups in the future.
3. Toolchain improvements
WebAssembly is primarily a compiler target, so its performance has two halves: the toolchain that generates it (the compiler side) and the VM that runs it (the browser side). Good performance depends on both.
This was already the case with asm.js: the Emscripten toolchain made many optimizations, running LLVM’s optimizer and then Emscripten’s own asm.js optimizer. WebAssembly builds on top of that and adds some WebAssembly-specific improvements. What we learned while working on asm.js has helped us make WebAssembly better in several ways:
- We use the Binaryen WebAssembly optimizer instead of the Emscripten asm.js optimizer. Binaryen was designed with speed in mind, which lets it afford more optimization passes; for example, removing duplicate functions during optimization shrinks typical compiled C++ code by about 5% (a rough sketch of this idea follows the list).
- Better optimization of redundant and complex control flow improves the effectiveness of the Relooper algorithm, which helps when compiling code with complicated loop structures.
- The Binaryen optimizer was designed and refined through experimentation, and experiments with a superoptimizer uncovered subtle improvements in a variety of situations — the kind of tuning asm.js went through as well.
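As a rough illustration of the duplicate-function idea above, here is a hypothetical sketch (not Binaryen’s actual code): the optimizer keys each function by a canonical serialization of its body, keeps only the first copy of each distinct body, and records how to redirect callers of the dropped duplicates:

```js
// Hypothetical duplicate function elimination. Functions are { name, body },
// where body is a canonical string serialization of the function's instructions.
function eliminateDuplicates(functions) {
  const canonical = new Map(); // body -> name of the first function with that body
  const renames = new Map();   // dropped duplicate name -> surviving name
  const kept = [];
  for (const fn of functions) {
    const survivor = canonical.get(fn.body);
    if (survivor === undefined) {
      canonical.set(fn.body, fn.name);
      kept.push(fn);
    } else {
      renames.set(fn.name, survivor); // callers of fn.name should call survivor instead
    }
  }
  return { kept, renames }; // a later pass rewrites call sites using `renames`
}
```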
Overall, these toolchain improvements make WebAssembly faster still relative to asm.js (by about 5% and 7%, respectively, on the Box2D benchmark).
4. Predictably good performance
asm.js can run at close to native speed, but it does not do so in every browser. The reason is that different vendors tried to optimize asm.js in different ways, which led to different results for different people. Over time a rough consensus emerged, but the fundamental problem is that asm.js has no single standard: it is a non-standard subset of JavaScript created by one vendor, and everyone else adopted it only to the extent, and in the way, they preferred.
WebAssembly, by contrast, was designed jointly by the major browser vendors. Unlike JavaScript, which can be made fast only through very creative techniques, or asm.js, which is easy to make fast but was not embraced by every browser, WebAssembly’s optimizations are agreed upon by most vendors. There is still plenty of room for VMs to differentiate themselves (AOT versus JIT, and different compilation approaches), but it is reasonable to expect good performance across the entire Web.
To learn more about WebAssembly, check out the WebAssembly series below.
Background:
- WebAssembly series (1): A vivid introduction to WebAssembly
- How JavaScript just-in-time (JIT) compilation works
- How compilers generate assembly

The current state of WebAssembly:
- WebAssembly series (4): How WebAssembly works
- Why is WebAssembly faster?

The future of WebAssembly:
- The present and future of WebAssembly
You are welcome to follow my Zhihu column, “front end daha”, where I regularly publish high-quality front-end articles. I am also currently writing a small book about React.js.