This is the 7th day of my participation in the Nuggets August Text Challenge.

The Dart VM is a collection of components for executing Dart code.

Running from a Snapshot

The VM is able to serialize the isolate's heap, or more precisely the graph of objects in the heap, into a binary snapshot, which can then be used to recreate the same state when the VM starts.

The snapshot format is low-level and optimized for fast startup: essentially it is a list of objects to create and instructions on how to wire them together.

The original idea behind snapshots: instead of parsing Dart source and gradually building internal VM data structures, the VM can quickly unpack all the necessary data structures from the snapshot and spin the isolate up.
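
To make the "list of objects plus wiring instructions" idea concrete, here is a toy sketch in Dart. This is not the VM's actual serializer or snapshot format; it only illustrates the two-pass, cluster-style approach of allocating every object first and then patching the references between them.

```dart
// Toy sketch only: NOT the VM's real snapshot code or format.
class Node {
  int value;
  Node? next;
  Node(this.value);
}

/// A flattened object graph: one value per node, plus the index of the
/// node its `next` field should point to (-1 means null).
class ToySnapshot {
  final List<int> values;
  final List<int> nextIndex;
  ToySnapshot(this.values, this.nextIndex);
}

List<Node> deserialize(ToySnapshot snapshot) {
  // Pass 1: allocate every object in the cluster.
  final nodes = [for (final v in snapshot.values) Node(v)];
  // Pass 2: wire the objects together using the recorded indices.
  for (var i = 0; i < nodes.length; i++) {
    final target = snapshot.nextIndex[i];
    if (target >= 0) nodes[i].next = nodes[target];
  }
  return nodes;
}

void main() {
  // A two-element cycle: node 0 -> node 1 -> node 0.
  final graph = deserialize(ToySnapshot([10, 20], [1, 0]));
  print(graph[0].next?.value); // 20
}
```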

The idea of snapshots came from Smalltalk images, which in turn were inspired by Alan Kay's master's thesis. The Dart VM uses a clustered serialization format, similar to the techniques described in the papers Parcels: a Fast and Feature-Rich Binary Deployment Technology and Clustered Serialization with Fuel.

Machine code was not included in snapshots initially; it was added later, during development of the AOT compiler. The motivation for the AOT compiler and for snapshots with code was to allow the VM to be used on platforms where JIT compilation is impossible due to platform-level restrictions.

Snapshots with code work almost the same as ordinary snapshots, with one small difference: they contain a code section which, unlike the rest of the snapshot, does not require deserialization; it is laid out so that it can become part of the heap directly after being mapped into memory.

runtime/vm/clustered_snapshot.cc handles serialization and deserialization of snapshots; the Dart_CreateXyzSnapshot[AsAssembly] API functions are responsible for writing a heap snapshot (for example Dart_CreateAppJITSnapshotAsBlobs and Dart_CreateAppAOTSnapshotAsAssembly); Dart_CreateIsolateGroup can optionally take snapshot data to start an isolate from.

Running from an AppJIT Snapshot

AppJIT snapshots were introduced to reduce JIT warm-up time for large Dart applications such as dartanalyzer or dart2js. When these tools are used on small projects, they spend as much time doing actual work as the VM spends JIT-compiling them.

AppJIT snapshots solve this problem: an application is run on the VM with some representative training data, then all generated code and internal VM data structures are serialized into an AppJIT snapshot, and this snapshot is distributed instead of distributing the application in source (or kernel binary) form.

The VM running from this snapshot is still able to JIT.

Running from an AppAOT Snapshot

AOT snapshots were originally introduced for platforms where JIT compilation is impossible, but they can also be used in situations where fast startup matters enough to accept a potential peak-performance penalty.

There is a lot of confusion about the performance characteristics of JIT versus AOT:

  • A JIT has access to the local type information and execution profile of the running application, but it has to pay the price of warm-up;
  • AOT can infer and prove various properties globally (paying for this with compilation time) but has no information about how the program actually executes; on the other hand, AOT-compiled code reaches its peak performance almost immediately, with essentially no warm-up.

Currently, the Dart VM's JIT has the best peak performance, while its AOT compilation gives the best startup time.

The inability to JIT means:

  • AOT snapshots must contain executable code for every function that might be called during the application's execution;
  • that executable code must not rely on any speculative assumptions that could be violated at run time (see the sketch after this list).
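
To illustrate what a speculative assumption is, here is a small hypothetical Dart example (not from the original article). A JIT can specialize add for the integer-only behaviour it has observed and deoptimize if that guess later turns out to be wrong; an AOT compiler has no deoptimization fallback, so the code it emits must stay valid for every input the program could pass.

```dart
// Hypothetical illustration of a speculative assumption.
num add(num a, num b) => a + b;

void main() {
  // During this loop a JIT only ever observes `int` arguments, so it can
  // emit specialized integer code guarded by a cheap class check and
  // deoptimize back to unoptimized code if the guard ever fails.
  var sum = 0;
  for (var i = 0; i < 1000000; i++) {
    sum += add(i, 1) as int;
  }
  print(sum);

  // AOT-compiled code cannot rely on that observation: it has no way to
  // deoptimize, so the generated code for `add` must also handle doubles.
  print(add(1.5, 2.5));
}
```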

To meet these requirements, the AOT compilation process performs a global static analysis (type flow analysis, or TFA) to determine which parts of the application are reachable from the known set of entry points, which classes can be instantiated, and how types flow through the program.

All of this analysis is conservative: it cannot be as aggressive as the JIT, which can afford to be speculative because it can always deoptimize back to unoptimized code to get the correct behavior.

All potentially reachable functions are then compiled to native code without any speculative optimizations, while the type flow information is still used to specialize the code (for example, to devirtualize calls).
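
A hypothetical example (the class names are invented, not from the original article) of how that specialization plays out: if type flow analysis proves that ConsoleLogger is the only Logger implementation ever allocated from the program's entry points, the dynamic call in run can be devirtualized into a direct call, and the unreachable FileLogger is tree-shaken rather than compiled.

```dart
abstract class Logger {
  void log(String message);
}

class ConsoleLogger implements Logger {
  @override
  void log(String message) => print(message);
}

// Present in the source, but never allocated from any entry point, so
// type flow analysis treats it as unreachable and no code is generated
// for its methods.
class FileLogger implements Logger {
  @override
  void log(String message) {
    // write to a file
  }
}

void run(Logger logger) {
  // Only ConsoleLogger can ever flow here, so AOT can devirtualize this
  // call into a direct call to ConsoleLogger.log, with no runtime check.
  logger.log('started');
}

void main() => run(ConsoleLogger());
```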

Once all the functions are compiled, a snapshot of the heap can be taken; the resulting snapshot can then be run using the precompiled runtime, a special variant of the Dart VM that excludes components such as the JIT and dynamic code loading facilities.

package:vm/transformations/type_flow/transformer.dart is the entry point for type flow analysis (TFA) and for the kernel transformation based on its results; dart::Precompiler::DoCompileAll is the entry point of the AOT compilation loop in the VM.

Switchable Calls

Even with global and local analysis, AOT-compiled code may still contain calls that cannot be devirtualized (meaning they cannot be resolved statically). To compensate for this, AOT-compiled code and the runtime use an extension of the inline caching technique from the JIT, called switchable calls.

As described in the JIT section, each inline cache associated with a call site consists of two parts:

  • a cache object (represented by a dart::UntaggedICData instance);
  • a native code chunk to call (for example, InlineCacheStub);

In JIT mode the runtime only updates the cache itself; in AOT mode, however, the runtime can replace both the cache and the native code to invoke, depending on the state of the inline cache.
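
The code example that the following paragraphs refer to as "the example above" is missing from this translation; the sketch below is a reconstruction of it, based on the classes C and D mentioned in the walkthrough.

```dart
class C {
  void method() => print('C.method');
}

class D extends C {
  // D does not override method, so C.method remains a valid target when
  // the receiver is an instance of D.
}

void callIt(dynamic obj) {
  // A dynamic call site that AOT cannot resolve statically: it starts in
  // the unlinked state and is patched at run time as receivers are seen.
  obj.method();
}

void main() {
  callIt(C()); // miss: the site is linked and becomes monomorphic on C
  callIt(C()); // direct call to C.method via its class-checking entry
  callIt(D()); // class check fails; the site may move to single target
}
```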

Initially all dynamic calls start in the unlinked state. The first time such a call site is reached, SwitchableCallMissStub is invoked; it simply calls the runtime helper DRT_SwitchableCallMiss to link that call site.

DRT_SwitchableCallMiss then attempts to transition the call site into a monomorphic state, where it becomes a direct call that enters the method through a special entry point verifying that the receiver has the expected class.

In the example above, we assume that when obj.method() was executed for the first time, obj was an instance of C, and obj.method resolved to C.method.

The next time we execute the same call site, it will call C.method directly, bypassing the method lookup process entirely.

However, it will enter C.method through a special entry point that verifies obj is still an instance of C; if it is not, DRT_SwitchableCallMiss is invoked and tries to select the next call site state.

C.method may still be a valid target of the call, for example if obj is an instance of a class D that extends C but does not override C.method. In this case we check whether the call site can transition to the single target state, implemented by SingleTargetCallStub (see dart::UntaggedSingleTargetCache).
