Browser performance

JavaScript was designed as a dynamically typed, interpreted language, and that design is the source of both its flexibility and its slowness.

Like other dynamically typed languages, JavaScript is inherently slower than statically typed, compiled languages.

Yet as web applications grow more complex, JavaScript performance has to keep up.

So why are dynamically typed languages such as Python, PHP, and JavaScript slower than statically typed languages such as C/C++?

Where JavaScript is slow

Let's look at the simplest possible case: computing c = a + b. In C/C++, the steps are roughly as follows:

  1. Load the value of a from memory into a register
  2. Load the value of b from memory into a register
  3. Perform the addition
  4. Store the result back to memory at c

So what are the steps that you would go through in JavaScript?

  1. Compile the code
  2. Look up variable a in the current scope; if it is not there, walk up the scope chain until it is found or the outermost scope is reached
  3. Look up variable b in the same way
  4. Determine the type of a
  5. Determine the type of b
  6. Decide whether a and b need to be converted to a common type
  7. Perform the addition
  8. Assign the result to c

As you can see, the two processes are quite different.
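Steps 4 through 6 are not optional: in JavaScript the meaning of + itself depends on the runtime types of its operands, so the engine has to check them on every evaluation. A quick illustration:

function add(a, b) {
    return a + b;          // numeric addition or string concatenation?
}

console.log(add(1, 2));    // 3    (number + number)
console.log(add(1, "2"));  // "12" (the number is converted to a string)
console.log(add(true, 1)); // 2    (the boolean is converted to a number)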

How browsers close the gap

1. JIT

JIT (just-in-time compilation): if, in c = a + b, a and b are almost always ints, can we drop the type checks and conversions, perform the addition the way C/C++ does, and compile the code straight to machine code so it can be run directly without being compiled again?

When Google introduced JIT technology in V8 in 2009, JavaScript execution instantly became 20-40 times faster. The catch is that not all code benefits equally: the JIT optimizes based on what it observes at runtime, and because JavaScript has no static types, the gains are limited when the types flowing through the code change frequently. For example:

function add (a, b)
{
    return a + b
}
var c = add(1, 2);

The JIT sees this and, delighted, compiles add into something like:

function add (int a, int b)
{
    return a + b
}

But then, more likely than not, the code that follows looks like this:

var c = add("hello", "world");

The JIT compiler is probably in tears at this point: add has already been compiled into integer machine code, and now that optimized code has to be thrown away and execution pushed back to the slower, generic path.
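In practice, this means a call site that always sees the same argument types (a monomorphic call site) keeps its optimized machine code, while one that mixes types forces a deoptimization. A rough sketch of the two situations:

function add(a, b) {
    return a + b;
}

// Monomorphic: the arguments are always numbers, so the JIT's
// optimized integer version of add keeps being reused.
for (let i = 0; i < 100000; i++) {
    add(i, i + 1);
}

// Type change: a string flows in, the optimized machine code is
// discarded and add falls back to the slower generic path.
add("hello", "world");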

2. asm.js

Alon Zakai, an engineer at Mozilla, was working on the LLVM compiler in 2012 when he had an idea: many 3D games are written in C/C++, and if C/C++ could be compiled into JavaScript, those games could run in a browser.

So he set out to make this work and created a compiler project called Emscripten. Emscripten compiles C/C++ into JavaScript, though not ordinary JavaScript: it emits a restricted variant called asm.js.

asm.js uses statically typed variables and no garbage collection. When a browser's JavaScript engine detects asm.js code, it can skip the usual dynamic checks and compile it ahead of time to machine code. asm.js can run at roughly 50% of native speed.
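To get a feel for what asm.js looks like, here is a tiny hand-written sketch (real Emscripten output is far larger): the "use asm" pragma marks the module, and the | 0 coercions are what the engine reads as integer type annotations.

function AsmAdder(stdlib, foreign, heap) {
    "use asm";                 // tells the engine this module is asm.js
    function add(a, b) {
        a = a | 0;             // a is an int32
        b = b | 0;             // b is an int32
        return (a + b) | 0;    // the result is an int32
    }
    return { add: add };
}

var adder = AsmAdder(window, null, new ArrayBuffer(0x10000));
console.log(adder.add(1, 2)); // 3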

The general workflow of asm.js is as follows:

C/C++ → LLVM → Emscripten → JavaScript (asm.js)

There are a few problems with asm.js:

  1. Only Firefox supports it really well
  2. Code delivery is unchanged: the source is still shipped to the client and compiled locally

3. WebAssembly

Mozilla, Google, Microsoft, and Apple all felt this asm.js approach was promising and wanted to standardize it so everyone could use it. Thus WebAssembly was born.

With the big players behind it, WebAssembly is far more radical than asm.js. It doesn't even bother compiling to JavaScript: why not just hand the browser bytecode directly? (The design was later expressed as an AST.) For browsers that do not support WebAssembly, a JavaScript polyfill can translate the WebAssembly back into JavaScript and run that.

On December 5, 2019, the World Wide Web Consortium (W3C) announced WebAssembly as an official standard.
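Once a module has been compiled to a .wasm file, loading it from JavaScript goes through the browser's WebAssembly JavaScript API. A minimal sketch, assuming a hypothetical add.wasm that exports an add(a, b) function:

// Fetch, compile and instantiate the module in one step.
WebAssembly.instantiateStreaming(fetch('add.wasm'))
    .then(({ instance }) => {
        // Exported functions show up on instance.exports.
        console.log(instance.exports.add(1, 2)); // 3
    });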

What is WebAssembly

  • WebAssembly is a new type of code that runs in modern web browsers and delivers new levels of performance.
  • It is designed as an efficient compilation target for low-level source languages such as C, C++, and Rust.

The goals of WebAssembly

  • High performance – able to run at near-native speed.
  • Portability – able to run on different hardware platforms and operating systems.
  • Compactness – WebAssembly is a low-level language shipped in a compact binary format, which also makes the delivered code harder to read than JavaScript source.
  • Security – WebAssembly is restricted to running in a secure, sandboxed execution environment. Like other web code, it follows the browser's same-origin and permission policies.
  • Compatibility – WebAssembly is designed to coexist with other web technologies and to maintain backward compatibility.

Browser compatibility


How to use it

There are very detailed instructions on MDN.

1. Install dependencies (Ubuntu 20.04)

sudo apt install python3
sudo apt install cmake

2. Install Emscripten

git clone https://github.com/emscripten-core/emsdk.git
cd emsdk
git pull
./emsdk install latest
./emsdk activate latest
source ./emsdk_env.sh

3. Hello World

Create a file hello.c:

#include <stdio.h>

int main() {
    printf("Hello, WebAssembly!\n");
    return 0;
}

Compile the C code:

emcc hello.c -s WASM=1 -o hello-wasm.html 

This generates three files: hello-wasm.html, hello-wasm.js, and hello-wasm.wasm. Open hello-wasm.html in a browser to see the output.

4. Call C/C++ functions

  1. Create a file add.cpp:

extern "C" {
    int add(int a, int b) {
        return a + b;
    }
}

  2. Compile it:

emcc add.cpp -o add.html -s EXPORTED_FUNCTIONS='["_add"]' -s EXPORTED_RUNTIME_METHODS='["ccall","cwrap"]'

The EXPORTED_FUNCTIONS option lists the C/C++ functions to expose, each name prefixed with an underscore (_); EXPORTED_RUNTIME_METHODS lists the runtime helpers (here ccall and cwrap) that may be called from JavaScript.
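cwrap returns a reusable JavaScript wrapper around the exported function, while ccall performs a single call directly. A small sketch of the ccall form, reusing the _add export from above (it must likewise run after the runtime has initialized):

Module.onRuntimeInitialized = () => {
    // ccall(name, returnType, argTypes, args) invokes the export once.
    const sum = Module.ccall('add', 'number', ['number', 'number'], [9, 9]);
    console.log(sum); // 18
};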

Using cwrap, add the following code to the generated add.html:

Module.onRuntimeInitialized = () => {
    // Wrap the exported C function: name, return type, argument types
    add = Module.cwrap('add', 'number', ['number', 'number']);
    result = add(9, 9); // 18
}

The WebAssembly runtime has to finish initializing before any C/C++ function can be called, so calls from JavaScript into the C/C++ code should go inside the Module.onRuntimeInitialized callback.

5. Test performance

Let’s implement a JavaScript version of the addition function

function js_add(a, b) {
	return a + b;
}

Call each version 1,000,000 times and compare how long each takes. The code is as follows:

Module.onRuntimeInitialized = () => {
    add = Module.cwrap('add', 'number', ['number', 'number']);
    const count = 1000000;
    let result;

    console.time("js call");
    for (let i = 0; i < count; i++) {
        result = js_add(9, 9);
    }
    console.timeEnd("js call");

    console.time("c call");
    for (let i = 0; i < count; i++) {
        result = add(9, 9);
    }
    console.timeEnd("c call");
}

Which one do you think is faster? Why is that?

The result may not be what you expect: after enough calls, the JavaScript version actually ends up faster.

Why is that?

The V8 engine automatically optimizes js_add once it has been called many times, because the arguments are always of the same type.

Calling into the WebAssembly module, on the other hand, pays a cost for crossing the JS/WebAssembly boundary on every call, and over many calls that overhead adds up.

That is why, in this case, the plain JavaScript calls end up faster.

6. Another test

This time, let's change the approach and do the accumulation entirely in C++. Modify the add.cpp from the previous step and save it as add_all.cpp:

extern "C" { long add_all(int count) { long result = 0; for(int i = 0; i < count; i++){ result += i; } return result; }}Copy the code

Compile it with the same kind of command:

emcc add_all.cpp -o add_all.html -s EXPORTED_FUNCTIONS='["_add_all"]' -s EXPORTED_RUNTIME_METHODS='["ccall","cwrap"]'

Let’s implement a JavaScript version of js_add_all

function js_add_all(count) {
    let result = 0;
    for (let i = 0; i < count; i++) {
        result += i;
    }
    return result;
}

Then run the test:

Module.onRuntimeInitialized = () => {
    add_all = Module.cwrap('add_all', 'number', ['number']);
    const count = 50000;

    console.time("js call");
    console.log(js_add_all(count));
    console.timeEnd("js call");

    console.time("c call");
    console.log(add_all(count));
    console.timeEnd("c call");
}

Which is faster this time? What happens when count is 100000? Why?

This time there is a clear difference in speed: the JS/WebAssembly boundary is crossed only once, so the per-call overhead no longer dominates the measurement.