I've been busy with work and games lately and haven't posted anything on Zhihu for a while, which I'm a little ashamed of.
If you aren't familiar with WebAssembly yet, the links at the end of this article are a good place to start.
Original link:
WebAssembly’s post-MVP future: A cartoon skill tree
There is a misconception about WebAssembly: that the WebAssembly MVP released in 2017 is the final version of WebAssembly.
I can see where this misconception is coming from; the WebAssembly community group is actually dedicated to backward compatibility. This means that the WebAssembly you create now will continue to work on browsers in the future.
But that doesn’t mean WebAssembly is complete. In fact, far from it. There are many more features that will fundamentally change what you can do with WebAssembly.
I think of these future features a bit like skill trees in video games. We have fully mastered the first few of these skills, but we still need to complete the entire skill tree below to unlock the full application.
So, let’s look at what we already know, and then we can see what we need in the future.
Minimum Viable Product
The WebAssembly story begins with Emscripten, which made it possible to run C++ code on the Web by compiling it to JavaScript. This made it possible to bring large existing C++ code bases, such as games and desktop applications, to the Web.
But the JS it generated was still significantly slower than native code. Then Mozilla engineers found a type system hidden inside the generated JavaScript and figured out how to make it run very fast. This subset of JavaScript was named asm.js.
The other browser vendors saw how fast asm.js was, so they started adding the same optimizations to their engines.
But that wasn't the end of the story; it was just the beginning. There were still optimizations the engines could make to run the code even faster.
But they couldn't do it in JavaScript itself. Instead, they needed a new language, one designed specifically to be a compilation target. That language is WebAssembly.
So what was required for the first version of WebAssembly? What did we need to get C and C++ running efficiently on the Web?
Compilation target
The WebAssembly developers wanted to support more than just C and C++. They wanted many different languages to be able to compile to WebAssembly, so they needed a language-independent compilation target.
They needed something like the assembly languages that desktop applications compile to, such as x86. But this assembly language wouldn't be for an actual physical machine; it would be for a conceptual machine.
Fast
The compile target must be designed so that it can run very quickly. Otherwise, WebAssembly applications running on the Web will not meet users’ expectations for smooth interaction and games.
Compact
In addition to running time, load time also needs to be fast. Users have certain expectations about how fast certain content will load. For desktop applications, because the applications are installed on your computer, they load quickly. For Web applications, users expect fast load times as well, because Web applications typically don’t have to load as much code as desktop applications.
But when you combine those two things, it gets tricky. Desktop applications are typically fairly large code bases. Therefore, if they are on the network, a lot of content needs to be downloaded and compiled when the user accesses the URL for the first time.
To meet these expectations, we need our compilation targets to be compact (small in size). That way, it can move quickly across the network.
Linear memory
These languages also need to be able to use memory in a different way than JavaScript does. They need to be able to manage their memory directly — indicating which bytes go together.
This is because languages like C and C++ have a low-level feature called pointers. A pointer is a variable that holds not a value itself, but the memory address of that value. So to support pointers, programs need to be able to read from and write to specific addresses.
But you can't give a program downloaded from the Internet arbitrary access to the bytes in memory, no matter what native programs can do. So to create a safe way to access memory, there had to be something that gives the program access to one specific part of memory, and nothing else.
For this, WebAssembly uses a linear memory model, implemented with a TypedArray. It's basically like a JavaScript array, except the array contains only bytes of memory. When you access data in it, you just use array indexes, which you can treat as memory addresses. This means you can pretend the array is C++'s memory.
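To make the idea concrete, here is a small sketch of how JS sees linear memory through the standard WebAssembly.Memory API; the wasm module that would share this memory is omitted:

```javascript
// Create one 64 KiB page of linear memory, like a wasm module would own.
const memory = new WebAssembly.Memory({ initial: 1 });

// A TypedArray view: the "array that only contains bytes" described above.
const bytes = new Uint8Array(memory.buffer);

// Treat indexes as memory addresses: store a byte at address 0.
bytes[0] = 42;

// A different view over the same buffer reinterprets the same bytes.
const words = new Uint32Array(memory.buffer);
console.log(words[0]); // 42 (wasm memory is little-endian, as is the host here)
```

A wasm module given this memory would read and write exactly the same bytes from its side of the boundary.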
Achievement unlocked
So, with all of these skills, people can run desktop applications and games in a browser as if they were running on a computer.
This was pretty much the skill set when the WebAssembly MVP was released. It was indeed a minimum viable product.
This allows certain types of applications to work, but there are still many others that can be unlocked.
Heavyweight desktop applications
The next achievement to unlock is the heavier desktop application.
Can you imagine something like Photoshop running in your browser? You can load it instantly on any device like you would with Gmail?
This is already happening. Autodesk’s AutoCAD team, for example, has made its CAD software available in the browser, and Adobe has used WebAssembly to provide Lightroom through the browser.
But we need to add some functionality to ensure that all of these applications — even the most complex ones — run well in the browser.
Threads
First, we need support for multithreading. Modern computers have multiple cores, essentially multiple brains that can all work on a problem at the same time, which can make things faster. To take advantage of those cores, WebAssembly needs to support threading.
SIMD
In addition to threads, there’s another technique that leverages modern hardware that lets you process things in parallel.
That's SIMD: single instruction, multiple data. With SIMD, you can take a large chunk of memory and split it up across different execution units, which are a bit like cores. Then you run the same code, the same instruction, on all of those execution units, with each one applying the instruction to its own piece of the data.
64-bit addressing
Another hardware feature that WebAssembly needs to take full advantage of is 64-bit addressing.
Memory addresses are just numbers, so if your memory address is only 32 bits long, you can only have 4GB of linear memory.
With 64-bit addressing, you have 16 exabytes. Of course, your computer doesn't have 16 exabytes of actual memory, so the maximum is determined by how much memory the system can actually provide, not by WebAssembly.
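The arithmetic behind those limits is simple enough to check directly; BigInt keeps the 64-bit value exact beyond the precision of plain JS numbers:

```javascript
// A 32-bit address can name 2 ** 32 distinct bytes: exactly 4 GiB.
const max32 = 2 ** 32;
console.log(max32 / 1024 ** 3); // 4

// A 64-bit address space is 2 ** 64 bytes: 16 EiB (exabytes).
const max64 = 2n ** 64n;
console.log(max64 / 1024n ** 6n); // 16n
```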
Streaming compilation
For these applications, we don’t just need them to run fast. Also, we need to load faster than we do now. We need some optimizations specifically designed to improve load times.
One important step is streaming compilation: compiling a WebAssembly file while it is still downloading. WebAssembly was specifically designed to make streaming compilation easy. In Firefox, we compile it faster than it arrives over the network, so compilation is essentially finished the moment the file finishes downloading. Other browsers are adding this feature as well.
Another important thing is to have a layered compiler.
In Firefox, this means having two compilers. The first, called the baseline compiler, kicks in as soon as the file starts downloading. It compiles code very quickly for a quick startup.
The code it generates is fast, but not 100% optimized. For better performance, we run another compiler, the optimizing compiler, on several threads in the background. It takes longer, but produces extremely fast code. When it's done, we swap the baseline version out for the fully optimized version.
This way, we get a quick startup from the baseline compiler and fast execution from the optimizing compiler.
In addition, we are developing a new optimizing compiler called Cranelift. Cranelift is designed to compile code quickly, in parallel at the function level, while generating code that performs even better than our current optimizing compiler.
Cranelift has been added to development builds of Firefox, but is disabled by default. Once we enable it, we'll get fully optimized code even sooner, and that code will run even faster.
But there's an even better trick we can use: most of the time, we don't need to compile at all…
Implicit HTTP caching
With WebAssembly, if the same code is loaded when two pages load, it compiles to the same machine code. It does not need to change based on the data flowing through it, as the JS JIT compiler needs to do.
This means we can store the compiled code in the HTTP cache. Then when the page loads and goes to fetch the .wasm file, it simply pulls the precompiled machine code out of the cache, skipping compilation entirely.
Other improvements
There’s a lot of discussion going on around other improvements, so stay tuned for other load time improvements.
How are we doing?
Where are we now in supporting these heavyweight applications?
- Threads: We have a threads proposal that is nearly complete, but earlier this year a key piece of it, SharedArrayBuffers, had to be turned off in browsers. They will be turned back on; turning them off was only a temporary measure to mitigate the Spectre CPU vulnerabilities disclosed earlier this year, so stay tuned.
- SIMD: SIMD is under very active development.
- 64-bit addressing: With wasm64, we can see how to add it, similar to the way x86 and ARM added support for 64-bit addressing.
- Streaming compilation: We added streaming compilation in late 2017, and other browsers are following.
- Tiered compilation: We added the baseline compiler in late 2017, and other browsers have added the same kind of architecture over the past year.
- Implicit HTTP caching: In Firefox, we are close to landing support for implicit HTTP caching.
- Other improvements: Other improvements are currently under discussion.
This is still a work in progress, but you can already see some heavyweight applications emerging because WebAssembly has provided the required performance for these applications.
Once all these features are in place, more heavyweight applications will be able to make their way into the browser.
Small modules that interoperate with JavaScript
WebAssembly is not only suitable for games and heavyweight applications, but also for general Web development: small modular Web development.
Sometimes one small corner of your application does a lot of heavy processing, and in some cases that processing can be faster in WebAssembly. We want to make it easy to port those small pieces to WebAssembly.
This is already happening. Developers are dropping WebAssembly modules into places where small modules are doing a lot of heavy lifting.
One example is the source map parser used in Firefox DevTools and webpack. It was rewritten in Rust and compiled to WebAssembly, which made it 11 times faster. And WordPress's Gutenberg parser became, on average, 86 times faster after a similar rewrite.
But to really make this use widespread — to make people feel comfortable with it — we need something more.
Quick calls between JS and WebAssembly
First, we need fast calls between JS and WebAssembly, because if you integrate a WebAssembly module into an existing JS system, chances are you’ll need to make a lot of calls between the two, and those calls need to be fast.
However, when WebAssembly first appeared, these calls were not fast. In the MVP, engines had only minimal support for calls between the two: they just made the calls work, without making them fast. So engines needed optimization in this area.
We've recently done this work in Firefox. Now, some of these calls are actually faster than non-inlined JavaScript-to-JavaScript calls. Other engines are working on this too.
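To show what such a boundary crossing looks like, here is a minimal sketch: a tiny hand-assembled wasm binary that exports an `add` function, which JS then calls like any other function. The byte array is the standard binary encoding of `(func (export "add") (param i32 i32) (result i32))`:

```javascript
// A minimal wasm binary, hand-encoded: header, then a type, function,
// export, and code section describing a single exported "add" function.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm", version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // func 0 uses type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add" = func 0
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

// Synchronous instantiation is fine for a module this small.
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));

// This call crosses the JS/wasm boundary, the hot path engines now optimize.
console.log(instance.exports.add(2, 3)); // 5
```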
Fast and easy data exchange
However, this brings us to another problem. When you make calls between JavaScript and WebAssembly, you usually need to pass data between them.
You need to pass the value to or return the value from the WebAssembly function. This can be slow or difficult.
This is hard for several reasons. One is that, currently, WebAssembly only understands numbers. That means you can't pass more complex values, like objects, in as parameters. You need to convert the object into numbers and put it into linear memory, then pass WebAssembly the location in linear memory.
That's kind of complicated. And it takes time to convert data into linear memory. So we need this to be both easier and faster.
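As a sketch of what that conversion involves, here is the JS side of handing a string to wasm: encode it to bytes, copy the bytes into linear memory, and pass only a pointer and a length. The offset is a hypothetical stand-in for what a real module's allocator would choose, and the module itself is omitted:

```javascript
const memory = new WebAssembly.Memory({ initial: 1 });

// wasm only understands numbers, so a string can't cross the boundary
// directly. Step 1: encode it into UTF-8 bytes.
const utf8 = new TextEncoder().encode("hello, wasm");

// Step 2: copy the bytes into linear memory at some offset.
const ptr = 16; // hypothetical; a real module's allocator would pick this
new Uint8Array(memory.buffer).set(utf8, ptr);

// Step 3: the wasm function receives only two numbers, (ptr, len).
// Coming back the other way, JS decodes bytes out of linear memory.
const view = new Uint8Array(memory.buffer, ptr, utf8.length);
console.log(new TextDecoder().decode(view)); // "hello, wasm"
```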
ES module integration
We also need integration with the browser's built-in ES module support. Today, you instantiate a WebAssembly module using an imperative API: you call a function, and it returns a module.
But that means the WebAssembly module isn't really part of the JS module graph. To import and export just like JS modules do, we need ES module integration.
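Here is what the imperative API looks like today, using an empty 8-byte wasm binary (magic number plus version) just to make the calls concrete; the `import` statement at the end shows the proposed ES-module form, which is hypothetical and not yet shipped:

```javascript
// Today: obtain the bytes, then instantiate through the imperative API.
// These 8 bytes (magic number + version) are a valid, empty wasm module.
const bytes = new Uint8Array([0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00]);
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(Object.keys(instance.exports)); // [] (this module exports nothing)

// With ES module integration, a wasm module could instead sit directly
// in the module graph (hypothetical syntax, not yet shipped):
//   import { add } from "./math.wasm";
```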
Toolchain integration
However, just being able to import and export is not enough. We need a place to distribute and download these modules, and some tools to package them.
What is the npm for WebAssembly? How about just using npm itself?
What is the webpack or Parcel for WebAssembly? How about just using webpack or Parcel themselves?
These WebAssembly modules should not be any different from other JS modules to the consumer, so there is no reason to create a separate ecosystem, just tools to integrate them.
Compatibility
One more thing we need to do in our existing JS applications is support older browsers, even those that don’t know what WebAssembly is. We need to make sure that you don’t have to completely rewrite the entire module in JavaScript to support IE11.
How are we doing?
So where are we so far?
- Quick calls between JS and WebAssembly are now fast in Firefox, and other browsers are also working on this issue.
- Easy and fast data exchange: For easier, faster data exchange, there are a few proposals that will help. As I mentioned earlier, one reason you have to use linear memory for more complex data types is that WebAssembly only understands numbers; the only types it has are integers and floats. That changes with the reference types proposal. It adds a new type that WebAssembly functions can take as a parameter and return: a reference to an object outside WebAssembly, for example a JavaScript object. But WebAssembly can't operate on such an object directly; it still needs some JavaScript glue code to actually do things like call methods. So it works, but it's slower than we'd like. To speed things up, there is a proposal called host bindings. It lets a wasm module declare what glue must be applied to its imports and exports, so that the glue doesn't need to be written in JS. With the glue pulled out of JS and into wasm, it can be optimized away completely when calling built-in Web APIs. There's one more piece of this interaction that we can make easier: keeping track of how long data needs to stay in memory. If you have data in linear memory that JS needs access to, it has to stay there until JS has read it. But if you leave it there forever, you have what's called a memory leak. How do you know when you can delete the data? How do you know when JS is done with it? Currently, you have to manage this yourself: once JS is done with the data, the JS code has to call something like a free function to release the memory. But that's error-prone. To make the process easier, we're adding WeakRefs to JavaScript. With WeakRefs, you can observe objects on the JS side, and then do cleanup on the WebAssembly side when those objects are garbage collected. So these proposals are all in flight.
In the meantime, the Rust ecosystem has created tools that automate all of this for you, and that polyfill the parts still working their way through proposals. One tool in particular is worth mentioning, because other languages can use it too: wasm-bindgen. When it sees that your Rust code is supposed to do something like receive or return certain kinds of JS values or DOM objects, it automatically generates the JavaScript glue code that does it for you, so you don't need to think about it. And because it's written in a language-independent way, other language toolchains can adopt it.
- ES module integration: For ES module integration, the proposal is quite far along. We are working with browser vendors to implement it.
- Toolchain support: For toolchain support, there are parallels in the Rust ecosystem, such as wasm-pack, a tool that automatically runs everything needed to package your code for npm. And developers are actively working on support elsewhere.
- Compatibility: Finally, for backward compatibility, there is the wasm2js tool. It takes a wasm file and outputs the equivalent JS. That JS won't be fast, but at least it will run on older browsers that don't support WebAssembly.
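The WeakRefs idea described under data exchange above can be sketched with JavaScript's FinalizationRegistry. The `wasmFree` function here is a hypothetical stand-in for a wasm module's exported free function:

```javascript
// Hypothetical stand-in for a wasm module's exported free function: it
// would release a region of linear memory identified by a pointer.
const freed = [];
const wasmFree = (ptr) => freed.push(ptr);

// When a JS wrapper object is garbage collected, the registry frees the
// wasm-side memory it referred to, so forgotten wrappers don't leak.
const registry = new FinalizationRegistry((ptr) => wasmFree(ptr));

function makeWrapper(ptr) {
  const wrapper = { ptr };
  // The third argument is an unregister token for the explicit-free path.
  registry.register(wrapper, ptr, wrapper);
  return wrapper;
}

// Explicit cleanup still works, and cancels the GC-triggered backstop.
function dispose(wrapper) {
  registry.unregister(wrapper);
  wasmFree(wrapper.ptr);
}

const w = makeWrapper(64);
dispose(w);
console.log(freed); // [ 64 ]
```

Note that finalizers run at the engine's discretion, so explicit freeing stays the primary path and the registry is only a safety net.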
So we’re getting close to unlocking this achievement. Once we unlock it, we’ll open up two more paths.
JS frameworks and compile-to-JS languages
One is rewriting large parts of JavaScript frameworks in WebAssembly.
The other is letting statically typed compile-to-JS languages compile to WebAssembly instead: for example, Scala.js, Reason, or Elm.
For both of these use cases, WebAssembly needs to support high-level language capabilities.
GC
We need to integrate with the browser’s garbage collector for several reasons.
First, let’s look at rewriting parts of the JS framework. This could have several benefits. For example, in React, one thing you can do is rewrite the DOM Diffing algorithm in Rust, which has very ergonomic multithreading support, and parallelize the algorithm.
You can also speed things up by allocating memory in different ways. In the virtual DOM, you can use a special memory allocation scheme instead of creating a bunch of objects that need to be garbage collected. For example, you could use a Bump allocator scheme that has a very low-cost one-time allocation mechanism. This may help speed things up and reduce memory usage.
But you would still need to interact with the JS objects in the code, such as components. You can't just keep copying everything into and out of linear memory; that would be difficult and inefficient.
Therefore, you need to be able to integrate with the browser’s GC so that you can use components managed by the JavaScript VM. Some of these JS objects need to point to data in linear memory, and sometimes data in linear memory needs to point to JS objects.
If this ends up creating a cycle, the garbage collector can run into trouble: it won't be able to tell whether the objects are still in use, and will never collect them. WebAssembly needs GC integration to make sure these kinds of cross-language data dependencies work correctly.
This will also help the statically typed languages that compile to JS, such as Scala.js, Reason, Kotlin, or Elm. When compiled to JS, these languages use JavaScript's garbage collector. Because WebAssembly will be able to use that same GC (the one built into the engine), these languages will be able to compile to WebAssembly and keep using the same garbage collector. They won't need to change how their GC works.
Exception handling
We also need better support for exception handling.
Some languages, such as Rust, do not have exceptions. But in other languages, such as C++, JS, or C#, exception handling is used extensively.
Exceptions can currently be polyfilled, but the polyfill makes the code very slow, so the current default is to turn exception handling off when compiling to WebAssembly.
However, because JavaScript has exceptions, even if you compiled your code not to use them, JS may throw one into the works. If a WebAssembly function calls a JS function that throws, the WebAssembly module can't handle the exception properly. A language like Rust chooses to abort in this case. We need to make this work properly.
Debugging
Another thing that people who use JS and JS-like languages are used to is good debugging support. The DevTools in all the major browsers make it easy to step through JS. We need the same level of support for debugging WebAssembly in the browser.
Tail calls
Finally, for many functional languages, you need support for tail calls. I won't go into detail here, but basically a tail call lets you call a new function without adding a new stack frame to the stack. So for the functional languages that rely on this, we want WebAssembly to support it too.
How are we doing?
So where are we so far?
- GC: There are currently two proposals under way for garbage collection: the Typed Objects proposal for JS and the GC proposal for WebAssembly. Typed Objects make it possible to describe an object's fixed structure; that proposal will be discussed at an upcoming TC39 meeting. The WebAssembly GC proposal would give WebAssembly direct access to that structure, and it is being actively explored. With these two proposals in place, both JS and WebAssembly know the structure of an object and can share that object and the data stored on it. Our team already has a prototype of this working. However, standardization will take some time, so we're probably looking at sometime next year.
- Exception handling: Exception handling is still in the research and development phase; we're now working out whether it can take advantage of other proposals, such as the Typed Objects proposal mentioned above.
- Debugging: There is currently some support for debugging in browser DevTools. For example, you can step through the WebAssembly text format in the Firefox debugger. But it's still not ideal: we want to show you where you are in your source code, not in the compiled output. For that, we need to figure out how source maps, or something like them, should work for WebAssembly. A WebAssembly CG subgroup is working on this.
- Tail calls: The tail call proposal is also in the works.
Once this is done, we will unlock the JS framework and many of the compile-to-JS languages.
These are achievements that we can unlock in the browser. But what about outside the browser?
Outside the browser
Now, when I say "outside the browser," you might be confused. Isn't the browser exactly what WebAssembly is for? After all, "Web" is right there in the name.
But the truth is that what you see in a browser – HTML, CSS and JavaScript – is just part of the web. They are the visible parts — they are used to create the user interface — so they are the most obvious.
However, another very important part of the web is invisible.
That part is the link. And it's a very special kind of link.
The innovation of the link is that I can link to your page without needing to put it in a central registry, without asking you, and without even knowing who you are. I can just put the link there.
It's this simplicity of linking, free of any oversight or approval bottleneck, that made our web possible. And it's what lets us form these global communities with people we don't know.
But if all we have are links, then we have two problems left unsolved.
The first problem is, you go to this website, and it gives you some code. How does it know what code it should provide you? Because if you’re running on a Mac, you need different code than Windows. That’s why different operating systems have different versions of the program.
Should the website serve a different version of the code for every possible device? Certainly not.
Instead, a site has just one version of the code: the source code. That's what gets delivered to the user, and it gets translated to machine code on the user's device.
The name of this concept is portability.
It’s nice that you can load code from people who don’t know you and who don’t know what device you’re running on.
But this brings up a second problem. If you don’t know the provider of the page you’re loading, how do you know what code they’re giving you? It could be malicious code. It may try to take over your system.
Does running code that anyone on the web can hand you mean you have to blindly trust everyone on the web?
This is where another key concept on the web comes in.
This is the security model. I’m going to call it a sandbox.
Basically, the browser takes the page — someone else’s code — and instead of letting it run unchecked on your system, it puts it in a sandbox. It puts some non-dangerous toys in the sandbox so the code can do something, but it leaves dangerous things out of the sandbox.
So the utility of links is based on two things:
- Portability – The ability to provide code to users and make it run on any type of device that can run a browser.
- Sandbox — a security model that allows you to run the code without compromising the integrity of the machine.
So why does this distinction matter? What’s the difference if we think of the Web as what the browser shows us using HTML, CSS, and JS, versus if we think of the Web in terms of portability and sandbox?
Because it changes the way you think about WebAssembly.
You can think of WebAssembly as just another tool in the browser toolbox… and it is.
It is another tool in the browser toolbox. But it's more than that. It also gives us a way to take the other two properties of the web, portability and the security model, to other places that need them.
We can extend the web beyond the boundaries of the browser. Now let's look at where these web properties are useful.
Node.js
How does WebAssembly help Node? It brings full portability to Node.
Node gives you most of the portability of JavaScript in your browser. But in many cases, Node’s JS modules are not enough, and we need to improve performance or reuse existing code that is not written in JS.
In those cases, you need Node's native modules. Native modules are written in languages like C and need to be compiled for the specific kind of machine the user is running on.
Native modules are either compiled when the user installs them, or precompiled into binaries for a large matrix of different systems. One of those is a pain for the user; the other is a pain for the package maintainer.
Now, if these native modules were written in WebAssembly, they wouldn't need to be compiled for a specific target architecture. Instead, they would just run the way JavaScript runs in Node, but at close to native speed.
So WebAssembly brings full portability to the code running in Node. You could take the exact same Node application and run it on all kinds of different devices without compiling anything.
But WebAssembly does not have direct access to the system's resources. Native modules in Node aren't sandboxed; they have full access to all the dangerous toys that the browser keeps out of the sandbox. In Node, JS modules can reach those dangerous toys too, because Node makes them available. For example, Node provides methods for reading and writing files on the system.
In the case of Node, it makes sense for a module to have such access to a hazardous system API. So, if the WebAssembly module does not have this access by default, how do we provide the required access to the WebAssembly module? We need to pass in functions so that the WebAssembly module can work with the operating system, just like Node and JS.
For Node, this would probably include a lot of the functionality of things like the C standard library. It would also likely include parts of POSIX, the Portable Operating System Interface, an older standard that facilitates compatibility by providing an API for interacting with the system across a bunch of different Unix-like operating systems. Modules would definitely need a bunch of POSIX-like functions.
Portable interface
What the Node core folks need to do is figure out which set of functions to expose and what those APIs should look like.
But wouldn’t it be great if these were generic? Not just for Node, but also for other runtimes and use cases?
It could be something like POSIX, but for WebAssembly. A PWSIX, perhaps: a portable WebAssembly system interface.
If done the right way, you could even implement the same API for the Web, with these standard APIs polyfilled on top of existing Web APIs.
These functions would not be part of the WebAssembly spec itself, and there would be WebAssembly hosts where they aren't available. But for the platforms that can make use of them, there would be a unified API to call, no matter which platform the code is running on. This would make universal modules, ones that run both on the Web and in Node, much easier.
How are we doing?
So is this really going to happen?
Several things are working in favour of this idea. There is a proposal called package name maps that would provide a mechanism for mapping a module name to the path to load the module from. Browsers and Node would likely both support it, and each could use it to provide different paths, and so load entirely different modules, behind the same API. That way the .wasm module itself could specify (module name, function name) import pairs that work across different environments, even on the Web.
With this mechanism in place, the next step is to figure out which functions work and what their interfaces should be.
This isn't active work yet, but a lot of the discussion is heading in this direction. It seems likely to happen in one form or another.
This is good, because unlocking these will allow us to implement some use cases outside of the browser. With that, we can pick up the pace.
So, what are some examples of these other use cases?
CDN, serverless and edge computing
Examples are CDNs, serverless, and edge computing. In these cases, you put your code on someone else's server, and they make sure the server is maintained and the code is close to all of your users.
Why use WebAssembly in these cases? There was an excellent presentation on this at a recent conference.
Fastly is a company that offers CDN and edge computing. Tyler McMullen, their chief technology officer, explains it this way:
If you look at how a process works, the code in a process has no boundaries. Functions can access whatever memory they want within the process, and they can call whatever other functions they want.
That's a problem when you're running many different people's services in the same process. Sandboxing could solve this, but then you hit a scale problem.
For example, if you use a JavaScript VM like Firefox's SpiderMonkey or Chrome's V8 as the sandbox, you can put hundreds of instances into a single process. But at the request volumes Fastly handles, they need not just hundreds but thousands of instances per process.
Tyler explains all this better in his talk, so you should watch it. But the point is that WebAssembly gives Fastly the speed and scale it needs without sacrificing security.
So what do they need to get there?
The runtime
They needed to create their own runtime. That means taking a WebAssembly compiler, something that can compile WebAssembly down to machine code, and combining it with the system-interaction functions I mentioned earlier.
For the WebAssembly compiler, Fastly uses Cranelift, the compiler we are also building into Firefox. It is designed to be very fast and to use little memory.
Now, for functions that interact with the rest of the system, they have to create their own functions, because we don’t have portable interfaces yet.
So creating your own runtime is now possible, but it takes some effort. And different companies have to come up with different solutions.
What if we not only had portable interfaces, but also a common runtime that could be used in all of these companies and other use cases? This will certainly accelerate development.
That way, other companies could use that runtime (as they do with Node today) instead of building their own from scratch.
How are we doing?
So where are we?
Although there is currently no standard runtime, several runtime projects are in the works. These include WAVM, which is built on top of LLVM, and wasmjit.
In addition, we plan to build a runtime called wasmtime on top of Cranelift.
Once we have a common runtime, we can speed up the development of a range of different use cases. For example…
Portable CLI tools
WebAssembly can also be used on more traditional operating systems. To be clear, I'm not talking about running it in the kernel (although brave souls are trying that too); I'm talking about WebAssembly running in Ring 3, in user mode.
Then, you can do things like have portable CLI tools that can be used across all different types of operating systems.
This is very close to another use case……
The Internet of things
The Internet of Things includes devices like wearable technology and smart home appliances.
These devices are usually resource-constrained: they don't have much computing power or much memory. A compiler like Cranelift and a runtime like wasmtime work well here, because they are efficient and use little memory. In extremely constrained cases, the WebAssembly can even be fully compiled to machine code before the application is loaded onto the device.
There’s also the fact that there are many of these different devices, and they’re all slightly different. The portability of WebAssembly really helps with this.
This is another part of WebAssembly’s future.
Conclusion
Now let’s zoom out and look at the skill tree.
I said at the beginning of this article that there is a misconception about WebAssembly — that the WebAssembly MVP is the final version of WebAssembly.
I think you can see now why this is a misunderstanding.
Yes, the MVP opened up a lot of possibilities. It made it possible to bring many desktop applications to the Web. But there are still plenty of use cases to unlock, from heavyweight desktop applications, to small modules, to JS frameworks, to everything outside the browser: Node.js, serverless, blockchain, portable CLI tools, and the Internet of Things.
So the WebAssembly we have today is not the end of the story, it’s just the beginning.
Read more:
WebAssembly
WebAssembly Chinese | Wasm Chinese documents
WebAssembly – MDN – Mozilla
WebAssembly: How and Why — LogRocket
WebAssembly is here! – Unity Blog
Why WebAssembly Is a Game Changer for the Web — and a Source of Pride for Mozilla and Firefox
WebAssembly status and practice