Project Loom: Fibers and Continuations for the Java Virtual Machine
Loom proposal.md (java.net)
Overview
The goal of Project Loom is to make it easier to write, debug, profile, and maintain concurrent applications that meet today's requirements. Threads, a natural and convenient concurrency construct provided by Java from the beginning (setting aside the separate question of communication between threads), are being abandoned in favor of less convenient abstractions, because their current implementation as OS kernel threads is insufficient for today's needs and wastes computing resources that are particularly valuable in the cloud. Project Loom introduces fibers as lightweight, efficient threads managed by the Java virtual machine, letting developers use the same simple abstraction but with better performance and a lower footprint. We want to make concurrency simple again! A fiber is made of two components — a continuation and a scheduler. As Java already has an excellent scheduler in the form of ForkJoinPool, fibers will be implemented by adding continuations to the JVM.
Motivation
Many applications written for the Java virtual machine are concurrent — programs such as servers and databases must serve many requests concurrently, competing for computing resources. Project Loom aims to drastically reduce the difficulty of writing efficient concurrent applications, or, more precisely, to eliminate the tradeoff between simplicity and efficiency in writing concurrent programs.
When Java was first released more than two decades ago, one of its most important contributions was easy access to threads and synchronization primitives. Java threads (used directly or indirectly, e.g., via Java servlets processing HTTP requests) provided a relatively simple abstraction for writing concurrent applications. A major difficulty in writing concurrent programs that meet today's requirements, however, is that the software unit of concurrency offered by the runtime — the thread — does not match the scale of the business domain's unit of concurrency, be it a user, a transaction, or a single operation. Even when the unit of application concurrency is coarse-grained — say, a session represented by a single socket connection — a server can handle upward of a million concurrently open sockets, yet the Java runtime, which implements Java threads as operating-system threads, cannot efficiently handle more than a few thousand threads. A mismatch of several orders of magnitude has a big impact.
Programmers are forced to choose between modeling a unit of domain concurrency directly as a thread, and losing considerable scale on a single server, or implementing concurrency at a finer-grained level than threads (as tasks), and supporting concurrency by writing asynchronous code that does not block the thread it runs on.
In recent years, the Java ecosystem has introduced many asynchronous APIs, from asynchronous NIO in the JDK and asynchronous servlets to the many asynchronous third-party libraries. Those APIs were created not because they are easier to write and understand — they are actually harder; not because they are easier to debug or profile — they are harder still (they don't even produce meaningful stack traces); not because they compose better than synchronous APIs — they compose less elegantly; not because they fit better with the rest of the language or integrate well with existing code — but because the implementation of the software unit of concurrency, the thread, is inadequate from a memory and performance standpoint. It is a sad case of a good and natural abstraction being abandoned in favor of a less natural one because of runtime performance problems with the abstraction's implementation.
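The contrast can be seen even in a tiny example. The following sketch (our own illustration, using only standard JDK classes, not any Loom API) shows how the synchronous style keeps results and failures in ordinary control flow, while even a trivial asynchronous callback fragments it:

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.Consumer;

public class SyncVsAsync {
    // Synchronous style: the result is just a return value, and a failure
    // would propagate as an exception carrying the caller's stack trace.
    static String fetchSync() {
        return "data";
    }

    // Asynchronous style: the result arrives in a callback on a pool thread;
    // composing several such calls quickly fragments the control flow.
    static void fetchAsync(Consumer<String> callback) {
        CompletableFuture.runAsync(() -> callback.accept("data"));
    }

    public static void main(String[] args) {
        // Blocking call: trivially composed with ordinary control flow.
        String s = fetchSync();
        System.out.println(s.length()); // prints 4

        // Async call: even this one-step continuation must live in a lambda.
        CompletableFuture<Integer> len = new CompletableFuture<>();
        fetchAsync(data -> len.complete(data.length()));
        System.out.println(len.join()); // prints 4
    }
}
```

Note that in the asynchronous variant the callback runs on a pool thread, so an exception thrown there would carry the pool thread's stack trace rather than the caller's — exactly the debugging problem described above.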
While kernel threads have some advantages as an implementation of Java threads — most notably that all native code is supported by kernel threads, so Java code running in a thread can call native APIs — the drawbacks mentioned above are too great to ignore. The result is either code that is hard to write and costly to maintain, or a significant waste of computing resources, which is especially expensive when the code runs in the cloud. Indeed, several languages and language runtimes have successfully provided lightweight thread implementations, most famously Erlang and Go, and the feature is both very useful and popular.
The main goal of this project is to add a lightweight thread construct, called fibers, managed by the Java runtime, which can be used alongside the existing heavyweight, OS-provided thread implementation. Fibers have a far lighter memory footprint than kernel threads, and the task-switching overhead among them is close to zero. Millions of fibers can be spawned in a single JVM instance, and programmers can issue synchronous, blocking calls without a second thought, because blocking will be practically free. In addition to making concurrent applications simpler and more scalable, this will make life easier for library authors, as there will no longer be a need to provide both synchronous and asynchronous APIs for different simplicity/performance trade-offs. Simplicity will come free.
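To make "simplicity will come free" concrete, here is a purely hypothetical sketch of fiber-per-connection server code; `Fiber.schedule` and every other name below is illustrative only, not an existing or proposed API:

```java
// Hypothetical sketch only -- none of these names exist in the JDK.
ServerSocket server = new ServerSocket(8080);
while (true) {
    Socket socket = server.accept();
    Fiber.schedule(() -> handle(socket)); // one cheap fiber per connection;
                                          // handle() may block freely, since
                                          // blocking parks only the fiber
}
```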
As we'll see, a thread is not a single construct but a combination of two concerns — a scheduler and a continuation. Our current intention is to separate those two concerns and implement Java fibers on top of these two building blocks. Although fibers are the main motivation for this project, adding continuations as a user-facing abstraction has other uses as well (such as Python-style generators).
Objectives and Scope
Fibers can provide a low-level primitive on top of which interesting programming paradigms such as channels, actors, and dataflow can be implemented, but while those uses will be taken into consideration, it is not the goal of this project to design any of those higher-level constructs. Nor is it a goal to suggest new programming styles or recommended patterns for exchanging information among fibers (e.g., shared memory vs. message passing). Since limiting memory access for threads is the subject of other OpenJDK projects, and this issue applies to any implementation of the thread abstraction, heavyweight or lightweight, this project will probably intersect with others.
The goal of this project is to add a lightweight thread construct, called fibers, to the Java platform. A possible user-facing form of this construct is discussed below. The goal is to allow most Java code (meaning code in Java class files, not necessarily written in the Java language) to run inside fibers unchanged, or with minimal modifications. It is not a requirement of this project to allow native code called from Java code to run in fibers, although this may be possible in some cases. Nor is it a goal of this project to ensure that every piece of code enjoys a performance benefit when run in a fiber; in fact, some code that is not well suited to lightweight threads may suffer a performance hit when run in a fiber.
It is a goal of this project to add a public delimited continuation (or coroutine) construct to the Java platform. However, this goal is secondary to fibers (which require continuations, as explained later, although those continuations are not necessarily exposed as a public API).
It is a goal of this project to experiment with a variety of fiber schedulers, but it is not the project's intention to conduct any serious research into scheduler design, largely because we believe ForkJoinPool can serve as a very good fiber scheduler.
The ability to manipulate call stacks will certainly need to be added to the JVM, and this project will also add a more lightweight construct that allows unwinding the stack to some point and then invoking a method with given arguments (essentially, a generalization of efficient tail calls). We call this feature unwind-and-invoke, or UAI. It is not a goal of this project to add automatic tail-call optimization to the JVM.
This project may involve different components of the Java platform, which are characterized as follows:
- Continuations and UAI are implemented inside the JVM and exposed to a concise Java API
- Fibers will most likely be implemented in Java libraries in the JDK, but may require the help of the JVM
- Native code in the JDK libraries that blocks threads will be adapted to run on fibers. This means changes to the `java.io` classes.
- JDK libraries that use low-level thread synchronization (in particular the `LockSupport` class), such as `java.util.concurrent`, will be adapted to support fibers, but the amount of work required depends on the fiber API and should in any case be small (since the API exposed by fibers is very similar to that of threads).
- Debuggers, profilers, and other serviceability tools will need to be aware of fibers to provide a good user experience. This means that JFR and JVMTI will need to accommodate fibers, and relevant platform MBeans may be added.
- At this point, we do not foresee a need for changes to the Java language.
The project is still in its early stages, so everything — including its scope — could change
Terminology
Because kernel threads and lightweight threads are merely different implementations of the same abstraction, some confusion in terminology is inevitable. The following conventions are adopted in this document and should be followed throughout the project:
- The word *thread* refers only to the abstraction (discussed below), never to a particular implementation, so *thread* may refer to any implementation of the abstraction, whether provided by the operating system or by the runtime.
- When referring to particular implementations, the terms *heavyweight thread*, *kernel thread*, and *OS thread* may be used interchangeably to mean an implementation of threads provided by the operating-system kernel. The terms *lightweight thread*, *user-mode thread*, and *fiber* may be used interchangeably to mean implementations of threads provided by the language runtime (the JVM and JDK libraries, in the Java platform). These words do *not* refer to particular Java classes (at least at this early stage, when the design of these APIs is still unclear).
- The capitalized “Thread” and “Fiber” refer to specific Java classes and are primarily used to discuss API design rather than implementation.
What is a thread
A thread is a sequence of computer instructions executed in order. When dealing with operations that involve not just computation but also IO, timed pauses, and thread synchronization — in general, instructions that cause the computation to wait for some external event — a thread has the ability to suspend itself and automatically resume when the event it waits for occurs. While one thread waits, it should vacate the CPU core and allow another thread to run.
These capabilities are provided by two different concerns. A continuation is a sequence of instructions that execute sequentially and may suspend itself (more on this in the Continuations section below). A scheduler assigns continuations to CPU cores, replacing a suspended one with another ready to run, and ensuring that a continuation that is ready to resume will eventually be assigned to a core. A thread therefore requires two constructs: a continuation and a scheduler, although the two are not necessarily exposed as separate APIs.
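This separation can be made concrete with a deliberately naive toy (our own construction, not Loom's design): a "continuation" is modeled as a step function that either finishes or yields, and a "scheduler" is a loop that keeps assigning ready continuations to the single core the loop represents:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class ToyThreading {
    // A "continuation": sequential work that may pause itself and be resumed.
    interface Continuation {
        boolean run(); // returns true when finished, false when it yields
    }

    // A "scheduler": assigns ready continuations to the core this loop models,
    // and re-queues any continuation that yielded so it eventually resumes.
    static void schedule(Queue<Continuation> ready) {
        while (!ready.isEmpty()) {
            Continuation c = ready.poll();
            if (!c.run())
                ready.add(c); // yielded: make it runnable again later
        }
    }

    public static void main(String[] args) {
        Queue<Continuation> ready = new ArrayDeque<>();
        for (int id = 0; id < 2; id++) {
            final int n = id;
            ready.add(new Continuation() {
                int step = 0;
                public boolean run() {
                    System.out.println("task " + n + " step " + step);
                    return ++step == 3; // yield twice, then finish
                }
            });
        }
        schedule(ready);
    }
}
```

Running this interleaves the two tasks' steps even though everything executes on one kernel thread — the scheduler alone decides which suspended continuation runs next.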
Again, threads (at least in this context) are a basic abstraction and do not imply any programming paradigm. In particular, they refer only to the abstraction that lets programmers write sequential code that can run and pause, not to any mechanism for sharing information between threads, such as shared memory or message passing.
Because the two concerns are separate, we can choose a different implementation for each. Currently, the thread construct offered by the Java platform is the Thread class, which is implemented by a kernel thread; it relies on the OS for both the continuation and the scheduler.
Continuations exposed by the Java platform can be combined with existing Java schedulers (such as ForkJoinPool, ThreadPoolExecutor, or third-party implementations) or with schedulers written especially for this purpose, to implement fibers.
The implementation of the two thread building blocks can even be split between the runtime and the operating system. For example, changes to the Linux kernel done at Google (video, slides: www.linuxplumbersconf.org/2013/ocw/sy…) allow user-mode code to take over the scheduling of kernel threads, thus essentially relying on the OS for the continuation while using a library to handle the scheduling. This has the benefits of user-mode scheduling while still allowing native code to run on this thread implementation, but it still suffers from a relatively high memory footprint and a non-resizable stack, and it is not available yet. Splitting the implementation the other way — scheduling by the OS and continuations by the runtime — seems to have no benefit at all, as it combines the worst of both worlds.
But why would user-mode threads be any better than kernel threads, and why do they deserve the name "lightweight"? Again, it is convenient to consider the continuation and the scheduler components separately.
To suspend a computation, a continuation needs to store the entire call-stack context, or, simply put, the stack. To support native code, the memory storing the stack must be contiguous and remain at the same memory address. While virtual memory does offer some flexibility, there are limits to how lightweight and flexible such kernel continuations (i.e., stacks) can be. Ideally, we would like stacks to grow and shrink with usage. Because a language-runtime implementation of threads does not need to support arbitrary native code, we can gain more flexibility in how continuations are stored, and thus reduce the footprint.
A bigger problem with OS-implemented threads is the scheduler. For one, the OS scheduler runs in kernel mode, so every time a thread blocks and control is returned to the scheduler, a non-cheap user/kernel switch must occur. For another, the OS scheduler is designed to be general-purpose and to schedule many different kinds of program threads. But a thread running a video encoder behaves very differently from one serving requests from the network, and the same scheduling algorithm will not be optimal for both. Threads processing transactions on servers tend to exhibit certain behavior patterns that challenge a general-purpose OS scheduler. For example, it is a common pattern for a transaction-serving thread, A, to perform some operation on a request and then pass the data on to another thread, B, for further processing. This requires some synchronization between the two threads, whether a lock or a message queue, but the pattern is the same: A operates on some data x, hands it over to B, wakes B up, and then blocks until it receives another request from the network or from another thread. This pattern is so common that we can assume A will block shortly after unblocking B, so scheduling B on the same core as A would be beneficial, as x is already in that core's cache; in addition, adding B to a core-local queue requires no costly contended synchronization. Indeed, work-stealing schedulers like ForkJoinPool make exactly this assumption, and schedule tasks by adding them to a local queue. The OS kernel, however, cannot make such an assumption. From the kernel's point of view, thread A may well want to keep running for some time after waking B up, so it will schedule the recently unblocked B onto a different core, requiring some synchronization and causing a cache miss as soon as B accesses x.
Fibers
Fibers are what we call the user-mode threads provided by the Java platform. This section lists the requirements of fibers and explores some design questions and options. It is not meant to be exhaustive, but rather to present an outline of the design space and a sense of the challenges involved.
In terms of basic capabilities, fibers must be able to run an arbitrary piece of Java code concurrently with other threads (lightweight or heavyweight), and allow the user to wait for them to terminate and join them. Obviously, there must be mechanisms for suspending and resuming fibers, similar to LockSupport's park/unpark. We would also like to obtain a fiber's stack trace for monitoring/debugging, as well as its state (suspended/running), etc. In short, because a fiber is a thread, it will have an API very similar to that of heavyweight threads (represented by the Thread class). With respect to the Java memory model, fibers will behave exactly like the current Thread implementation. While fibers will be implemented with JVM-managed continuations, we may also want to make them compatible with OS continuations, such as Google's user-scheduled kernel threads.
Fibers also have some unique capabilities: we want a fiber to have a pluggable scheduler (either fixed at the fiber's construction, or replaceable while the fiber is paused, e.g., by passing a scheduler as a parameter to the unpark method), and we want fibers to be serializable (discussed in a separate section).
In general, the fiber API will be almost identical to that of Thread, since the abstraction is the same, and we would like code that currently runs in kernel threads to be able to run in fibers with little or no modification. This immediately suggests two design options:
- Represent fibers as a `Fiber` class, and factor out the common API of `Fiber` and `Thread` into a common supertype, provisionally called `Strand`. Thread-implementation-agnostic code would be programmed against `Strand`, so that `Strand.currentStrand` returns a fiber if the code is running in one, and `Strand.sleep` suspends the fiber if the code is running in one.
- Use the same `Thread` class for both kinds of threads — user-mode and kernel-mode — and choose an implementation as a dynamic property set in a constructor or via a setter called before invoking `start`.
A separate Fiber class might give us more flexibility than Thread, but it also presents some challenges. Because a user-mode scheduler has no direct access to CPU cores, assigning a fiber to a core is done by running it on some kernel thread, so every fiber that is scheduled to a core has an underlying kernel thread, although the identity of that kernel thread is not fixed, and may change if the scheduler assigns the fiber to a different worker kernel thread. If the scheduler is written in Java — as we would like — each fiber even has an underlying Thread instance. If fibers are represented by the Fiber class, code running in a fiber could access the underlying Thread instance (e.g., via Thread.currentThread or Thread.sleep), which seems undesirable.
If fibers are represented by the same Thread class, user code would not be able to access the fiber's underlying kernel thread, which seems reasonable but has many implications. For one, it requires more work in the JVM, which makes heavy use of the Thread class and would need to be aware of a possible fiber implementation. For another, it limits our design flexibility. It also creates a circularity when writing schedulers, which need to implement threads (fibers) by assigning them to threads (kernel threads). This means we would need to expose the fiber's continuation (represented by Thread) for use by the scheduler.
Because fibers are scheduled by Java schedulers, they need not be GC roots: at any given time a fiber is either runnable, in which case its scheduler holds a reference to it, or blocked, in which case a reference to it is held by the object on which it is blocked (e.g., a lock or an IO queue) so that it can be unblocked.
Another relatively major design decision concerns thread-local variables. Today, thread-local data is represented by the (Inheritable)ThreadLocal classes. How should thread-locals be treated in fibers? Crucially, ThreadLocal is used in two very different ways. One is to associate data with a thread's context; fibers will probably need this capability, too. The other is to reduce contention in concurrent data structures via striping, abusing ThreadLocal as an approximation of a processor-local (more precisely, a CPU-core-local) construct. With fibers, these two very different uses need to be clearly separated, because a thread-local over possibly millions of threads (fibers) is not a good approximation of processor-local data at all. This requirement to treat thread-locals more explicitly as thread context rather than as an approximation of a processor is not limited to the ThreadLocal class itself, but extends to any class that maps Thread instances to data for the purpose of striping. If fibers are represented by Threads, some changes would be needed to such striped data structures. In any event, the introduction of fibers will likely require adding an explicit API for accessing the processor identity, whether exactly or approximately.
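The two uses can be seen side by side today. In this small runnable illustration (ours, using only standard JDK classes), one ThreadLocal carries genuine per-thread context, while ThreadLocalRandom is the JDK's canonical example of the striping use:

```java
import java.util.concurrent.ThreadLocalRandom;

public class ThreadLocalUses {
    // Use 1: data associated with the thread's *context*, e.g. a
    // per-request transaction id. A fiber would want to carry this too.
    static final ThreadLocal<String> TX_ID = new ThreadLocal<>();

    public static void main(String[] args) {
        TX_ID.set("tx-42");
        System.out.println(TX_ID.get()); // prints tx-42 on this thread only

        // Use 2: ThreadLocal abused as an approximation of a
        // processor-local value to reduce contention ("striping");
        // ThreadLocalRandom keeps one generator per thread for this reason.
        int n = ThreadLocalRandom.current().nextInt(10);
        System.out.println(n >= 0 && n < 10); // prints true
    }
}
```

With millions of fibers, the first use still makes sense per fiber, while the second would create millions of stripes and defeat its own purpose.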
An important feature of kernel threads is time-slice-based preemption (here called forced preemption for short). A kernel thread that computes for some time without blocking on IO or synchronization will be forcibly preempted after a while. At first glance this might seem like an important design and implementation issue for fibers as well, and we may indeed decide to support it — the JVM's safepoint mechanism should make it easy — but not only is it probably unimportant, having the feature would likely make no difference at all (so it may be best dropped). The reason: unlike kernel threads, the number of fibers can be very large (hundreds of thousands or even millions). If many fibers require so much CPU time that they frequently need to be forcibly preempted, then, with threads outnumbering cores by several orders of magnitude, the application is under-provisioned and no scheduling policy can help. If many fibers only rarely need to run long computations, a good scheduler will handle this by assigning fibers to the available cores (i.e., worker kernel threads). If some fibers frequently run long computations, it is better to run that code in heavyweight threads; while different thread implementations provide the same abstraction, sometimes one implementation is better than the other, and our fibers need not be better than kernel threads in every case.
A real implementation challenge, however, may be how to reconcile fibers with internal JVM code that blocks kernel threads. Examples range from hidden code that blocks implicitly, like loading classes from disk, to user-facing functionality such as synchronized and Object.wait. Because the fiber scheduler multiplexes many fibers onto a small set of worker kernel threads, blocking a kernel thread can take away a significant portion of the resources available to the scheduler, and should be avoided.
On one extreme, every such case would need to be made fiber-friendly, i.e., block only the fiber rather than the underlying kernel thread when the blocking API is invoked by a fiber; on the other extreme, all such cases could continue to block the underlying kernel thread. In between, we may make some APIs fiber-blocking while others remain kernel-thread-blocking. There is good reason to believe many of these cases can be left as they are, i.e., kernel-thread-blocking. For example, class loading occurs frequently only during startup and only rarely afterwards, and, as explained above, the fiber scheduler can easily schedule around such blocking. Many uses of synchronized only protect memory access and block for extremely short durations — so short that the issue can be ignored altogether. We may even decide to leave synchronized as is, and encourage those who surround IO access with synchronized and block frequently in this way to change their code to use j.u.c constructs (which will be fiber-friendly) if they want the code to run in fibers. Similarly, uses of Object.wait, which are not common in modern code anyway (or so we believe at this point), will be encouraged to use their j.u.c counterparts.
In any event, a fiber that blocks its underlying kernel thread will trigger a system event that can be monitored with JFR/MBeans.
While fibers encourage the use of plain, simple, natural synchronous blocking code, it is easy to adapt existing asynchronous APIs into fiber-blocking ones. Suppose a library exposes this asynchronous API for some long-running operation foo, which returns a String:
```java
interface AsyncFoo {
    public void asyncFoo(FooCompletion callback);
}
```
Where the callback, or completion handler, FooCompletion is defined as follows:
```java
interface FooCompletion {
    void success(String result);
    void failure(FooException exception);
}
```
We will provide an async-to-fiber-blocking construct, which may look something like this:
```java
abstract class _AsyncToBlocking<T, E extends Throwable> {
    private _Fiber f;
    private T result;
    private E exception;

    protected void _complete(T result) {
        this.result = result;
        unpark f
    }

    protected void _fail(E exception) {
        this.exception = exception;
        unpark f
    }

    public T run() throws E {
        this.f = current fiber
        register();
        park
        if (exception != null)
            throw exception;
        return result;
    }

    public T run(_timeout) throws E, TimeoutException { ... }

    abstract void register();
}
```
We can then create a blocking version of the API by first defining the following class:
```java
abstract class AsyncFooToBlocking extends _AsyncToBlocking<String, FooException>
        implements FooCompletion {
    @Override
    public void success(String result) {
        _complete(result);
    }
    @Override
    public void failure(FooException exception) {
        _fail(exception);
    }
}
```
We then use it to wrap the asynchronous API as a synchronous version:
```java
class SyncFoo {
    AsyncFoo foo = get instance;

    String syncFoo() throws FooException {
        return new AsyncFooToBlocking() {
            @Override protected void register() { foo.asyncFoo(this); }
        }.run();
    }
}
```
We can provide such adapters for common asynchronous classes, like CompletableFuture.
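The same wrapping idea can be demonstrated with today's kernel threads. In the runnable sketch below (our own illustration; `await` is not a Loom API), LockSupport.park/unpark stand in for the fiber park/unpark of the `_AsyncToBlocking` sketch above; with fibers, the identical code shape would block only the fiber:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.locks.LockSupport;

public class AsyncToBlockingDemo {
    // Turn an async result into a blocking call: register a completion
    // callback that unparks the caller, then park until the result is in.
    static <T> T await(CompletableFuture<T> cf) {
        final Thread caller = Thread.currentThread();
        cf.whenComplete((r, e) -> LockSupport.unpark(caller)); // "unpark f"
        while (!cf.isDone())
            LockSupport.park();                                // "park"
        return cf.join(); // rethrows a failure as CompletionException
    }

    public static void main(String[] args) {
        CompletableFuture<String> cf =
            CompletableFuture.supplyAsync(() -> "hello");
        System.out.println(await(cf)); // prints "hello"
    }
}
```

Note that park/unpark ordering is safe here: if the future completes before we park, the unpark grants a permit and the subsequent park returns immediately.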
Continuations
The motivation for adding continuations to the Java platform is to enable fibers, but continuations have some other interesting uses, and so providing them as a public API is a secondary goal of this project. However, those other uses are expected to have far less impact than fibers. In fact, continuations do not add expressivity on top of fibers (that is, continuations can be implemented on top of fibers).
Throughout this document and in Project Loom, the word continuation means a delimited continuation (sometimes also called a coroutine). Here we will think of a delimited continuation as sequential code that may suspend (itself) and be resumed (by a caller). Some may be more familiar with the view of continuations as objects (usually subroutines) representing the "rest" or "future" of a computation. The two describe the same thing: a suspended continuation is an object that, when resumed or "invoked", carries out the rest of the computation.
A delimited continuation is a sequential piece of subroutine code with an entry point (in Scheme, the reset point), like a thread, which may suspend or yield execution at some point, which we call the suspension point or yield point (in Scheme, the shift point). When a delimited continuation suspends, control is passed outside the continuation, and when it is resumed, control returns to the last yield point, with the execution context up to the entry point intact. There are many ways to represent delimited continuations, but for Java programmers, the following rough pseudocode illustrates the concept well:
```java
foo() {          // (2)
    ...
    bar()
    ...
}

bar() {
    ...
    suspend      // (3)
    ...          // (5)
}

main() {
    c = continuation(foo)   // (0)
    c.continue()            // (1)
    c.continue()            // (4)
}
```
A continuation is created at (0), with its entry point set to the method foo; it is then invoked at (1), passing control to the continuation's entry point at (2), which then executes until the next suspension point, inside the subroutine bar, at (3), at which point the invocation at (1) returns. When the continuation is invoked again, at (4), control returns to the line following the yield point, at (5).
The continuations discussed here are "stackful", meaning the continuation may block at any nested depth of the call stack (in our example, inside the function bar, which is called by foo, the entry point). By contrast, stackless continuations may only suspend in the same subroutine as the entry point. Also, the continuations discussed here are non-reentrant, meaning that any invocation of the continuation may change the "current" suspension point. In other words, the continuation object is stateful.
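What "stackful" buys can be emulated today, at great cost, by dedicating a whole kernel thread to the continuation. In this runnable illustration (ours; a rendezvous queue plays the role of yield), the generator yields from inside a nested call, two frames below its entry point — precisely what a stackless design would forbid:

```java
import java.util.concurrent.SynchronousQueue;

public class ThreadBackedGenerator {
    // Illustration only: emulating a *stackful* suspension with a full
    // kernel thread plus a rendezvous queue. put() blocks the producer
    // until the consumer takes the value -- our stand-in for "yield".
    final SynchronousQueue<Integer> chan = new SynchronousQueue<>();

    void produce(int x) throws InterruptedException { chan.put(x); }

    // The yield happens inside nested(), two frames below the
    // generator's entry point -- exactly what "stackful" permits.
    void nested() throws InterruptedException { produce(2); }

    void entryPoint() throws InterruptedException {
        produce(1);
        nested();
        produce(3);
    }

    public static void main(String[] args) throws Exception {
        ThreadBackedGenerator g = new ThreadBackedGenerator();
        Thread t = new Thread(() -> {
            try { g.entryPoint(); } catch (InterruptedException ignored) {}
        });
        t.setDaemon(true);
        t.start();
        for (int i = 0; i < 3; i++)
            System.out.println("Next: " + g.chan.take()); // 1, then 2, then 3
    }
}
```

A real stackful continuation would give the same shape with a cheap stack switch instead of an extra kernel thread and two context switches per value.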
The main technical mission in implementing continuations — indeed, of this entire project — is adding to HotSpot the ability to capture, store, and resume call stacks not as part of kernel threads. Suspending continuations that have JNI stack frames on the stack will probably not be supported.
As continuations are the foundation of fibers, if continuations are exposed as a public API we will need to support nested continuations, meaning code running inside a continuation must be able to suspend not only the continuation itself, but also enclosing continuations (e.g., the enclosing fiber). For example, a common use of continuations is the implementation of generators. A generator exposes an iterator, and every time the code running inside the generator yields, another value is produced for the iterator. It should therefore be possible to write code like this:
```java
new _Fiber(() -> {
    for (Object x : new _Generator(() -> {
        produce 1
        fiber sleep 100ms
        produce 2
        fiber sleep 100ms
        produce 3
    })) {
        System.out.println("Next: " + x);
    }
})
```
In the literature, nested continuations that allow such behavior are sometimes called "delimited continuations with multiple named prompts", but we will call them scoped continuations. See this blog post for a discussion of the theoretical expressivity of scoped continuations. (For those interested: continuations are a "general effect" that can be used to implement any effect — e.g., assignment — even in a pure language with no other side effects; this is why continuations are, in some sense, the fundamental abstraction of imperative programming.)
Code running inside a continuation is not expected to hold a reference to the continuation instance, and scopes generally have some fixed name (so that suspending scope A suspends the innermost enclosing continuation of scope A). The yield point, however, provides a mechanism for passing information from the code to the continuation instance and back. When a continuation suspends, no try/finally blocks enclosing the yield point are triggered (i.e., code running inside a continuation cannot detect that it is in the process of suspending).
One reason for making continuations a construct separate from fibers, whether or not they are exposed as a public API, is a clear separation of concerns. Continuations are therefore not thread-safe, and none of their operations creates cross-thread happens-before relations. The responsibility for ensuring memory visibility when migrating a continuation from one kernel thread to another falls on the fiber implementation.
A rough sketch of a possible API is given below. Continuations are a very low-level primitive that will only be used by library authors to build higher-level constructs (just as the java.util.stream implementation leverages Spliterator). Classes making use of continuations are expected to hold a private instance of the continuation class, or even, more likely, of a subclass of it, and the continuation instance would not be directly exposed to consumers of the construct.
class _Continuation {
    public _Continuation(_Scope scope, Runnable target)
    public boolean run()
    public static _Continuation suspend(_Scope scope, Consumer<_Continuation> ccc)

    public ? getStackTrace()
}
The run method returns true when the continuation terminates, and false if it suspends. The suspend method allows information to be passed from the yield point into the continuation (the ccc callback can inject information into the given continuation instance), and back from the continuation to the suspension point (via the return value, which is the continuation itself, from which the information can be queried).
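As a non-runnable sketch of how these two directions of information flow might be used (the `_Continuation`/`_Scope` API above is hypothetical and exists in no shipped JDK; `SCOPE`, `cont`, and `mailbox` are invented names purely for illustration):

```java
// Inside the continuation: suspend, handing data out through the ccc callback,
// which receives the continuation instance that is being suspended.
_Continuation self = _Continuation.suspend(SCOPE, c -> mailbox.put(c, "yielded value"));
// When resumed, the return value is the continuation itself; the code can now
// query whatever the resumer attached to it before calling run() again.

// Outside the continuation: resume it and check for termination.
boolean done = cont.run();   // true once terminated, false while suspended
```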
To demonstrate how easy it is to implement fibers in terms of continuations, here is a partial simple implementation of the _Fiber class that represents fibers. As you will notice, most of the code maintains the state of the fiber to ensure that it is not scheduled more than once:
class _Fiber {
    private final _Continuation cont;
    private final Executor scheduler;
    private volatile State state;
    private final Runnable task;

    private enum State { NEW, LEASED, RUNNABLE, PAUSED, DONE; }

    public _Fiber(Runnable target, Executor scheduler) {
        this.scheduler = scheduler;
        this.cont = new _Continuation(_FIBER_SCOPE, target);

        this.state = State.NEW;
        this.task = () -> {
            while (!cont.run()) {
                if (park0())
                    return; // parking; otherwise, had lease -- continue
            }
            state = State.DONE;
        };
    }

    public void start() {
        if (!casState(State.NEW, State.RUNNABLE))
            throw new IllegalStateException();
        scheduler.execute(task);
    }

    public static void park() {
        _Continuation.suspend(_FIBER_SCOPE, null);
    }

    private boolean park0() {
        State st, nst;
        do {
            st = state;
            switch (st) {
                case LEASED:   nst = State.RUNNABLE; break;
                case RUNNABLE: nst = State.PAUSED;   break;
                default:       throw new IllegalStateException();
            }
        } while (!casState(st, nst));
        return nst == State.PAUSED;
    }

    public void unpark() {
        State st, nst;
        do {
            st = state;
            switch (st) {
                case LEASED:
                case RUNNABLE: nst = State.LEASED;   break;
                case PAUSED:   nst = State.RUNNABLE; break;
                default:       throw new IllegalStateException();
            }
        } while (!casState(st, nst));

        if (nst == State.RUNNABLE)
            scheduler.execute(task);
    }

    private boolean casState(State oldState, State newState) { ... }
}
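The casState body is elided in the proposal. As an illustrative sketch of the compare-and-set semantics it needs (not the proposal's actual code; a real implementation would more likely use a VarHandle over the volatile `state` field, but a plain AtomicReference shows the same behavior):

```java
import java.util.concurrent.atomic.AtomicReference;

public class CasDemo {
    enum State { NEW, LEASED, RUNNABLE, PAUSED, DONE }

    // Stand-in for _Fiber's volatile `state` field.
    static final AtomicReference<State> state = new AtomicReference<>(State.NEW);

    // Atomically moves the state from `expected` to `next`; returns false if
    // another thread changed the state first, so the caller's do/while retries.
    static boolean casState(State expected, State next) {
        return state.compareAndSet(expected, next);
    }

    public static void main(String[] args) {
        System.out.println(casState(State.NEW, State.RUNNABLE)); // true
        System.out.println(casState(State.NEW, State.PAUSED));   // false: state is now RUNNABLE
        System.out.println(state.get());                         // RUNNABLE
    }
}
```

The retry loops in park0 and unpark rely on exactly this property: if the CAS fails, the state is re-read and the transition is recomputed, so concurrent park/unpark calls never lose an update.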
The scheduler
As mentioned above, work-stealing schedulers such as ForkJoinPool are particularly well suited to scheduling threads that block often for IO or that communicate frequently with other threads. Fibers, however, will have pluggable schedulers, and users will be able to write their own (the scheduler SPI can be as simple as Executor). Based on prior experience, it is expected that ForkJoinPool in async mode will make an excellent default fiber scheduler for most uses, but we may want to explore one or two simpler designs as well, such as a pinned scheduler that always schedules a given fiber to a specific kernel thread (assuming that thread is pinned to a processor core).
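Since the scheduler SPI may be as simple as Executor, such a default is already constructible with the standard java.util.concurrent API. A minimal sketch of creating a ForkJoinPool in async mode (the class name and task are illustrative):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;

public class AsyncScheduler {
    public static void main(String[] args) throws InterruptedException {
        // asyncMode = true makes the pool process locally queued tasks in FIFO
        // order, which suits event-style tasks (such as resumed continuations)
        // better than the LIFO order used for recursive fork/join computations.
        ForkJoinPool scheduler = new ForkJoinPool(
                Runtime.getRuntime().availableProcessors(),
                ForkJoinPool.defaultForkJoinWorkerThreadFactory,
                null,   // default uncaught-exception handling
                true);  // asyncMode
        scheduler.execute(() -> System.out.println("scheduled"));
        scheduler.shutdown();
        scheduler.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```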
Unwind-and-Invoke
This operation, used to implement the fiber's stack manipulation, unwinds the stack to some nested frame and invokes a method there. Unlike a continuation, the contents of the unwound stack frames are not preserved, and no object is needed to reify this construct.
TBD
Remaining Challenges
While the primary motivation for this goal is to make concurrency simpler and more scalable, there are other benefits to threads being implemented by the Java runtime and to the runtime having more control over them. For example, such a thread could be paused and serialized on one machine, then deserialized and resumed on another. This is useful in distributed systems where code benefits from being moved closer to its data, or in function-as-a-service cloud platforms where the machine instance running user code can be terminated while the code waits for some external event, and then resumed on another instance, possibly on a different physical machine, making better use of available resources and reducing costs for both host and client. A fiber would then have methods like parkAndSerialize and deserializeAndUnpark.
Because we want fibers to be serializable, continuations should be serializable, too. And if they are serializable, we might as well make them clonable, since the ability to clone continuations actually increases expressiveness (it allows going back to a previous suspension point). However, making continuations clonable in a way that is good enough for such use cases is a very difficult challenge, because Java code stores a great deal of information outside the stack, so to be useful, cloning would need to be "deep" in some customizable way.
Other approaches
The main alternative to fibers as a solution to concurrency's simplicity-versus-performance problem is called async/await; it has been adopted by C# and Node.js, and will likely be adopted by standard JavaScript. Continuations and fibers dominate async/await, in the sense that async/await is easily implemented with continuations (in fact, it can be implemented with a weak form of delimited continuation known as a stackless continuation, which does not capture the entire call stack but holds only the local context of a single subroutine), but not vice versa.
While async/await is easier to implement than full-blown continuations and fibers, it falls far short of solving the problem. Although async/await makes code simpler and gives it the appearance of normal, sequential code, like asynchronous code it still requires significant changes to existing code, explicit support in libraries, and does not interoperate well with synchronous code. In other words, it does not solve what is known as the "colored function" problem.
That is, the intrusive and contagious nature of async/await, and the incompatibility between the two kinds of code.
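The "colored function" problem can be illustrated with the standard CompletableFuture API (the class and method names here are invented for the example):

```java
import java.util.concurrent.CompletableFuture;

public class ColoredFunctions {
    // "Blue" (synchronous) function: an ordinary return type that composes
    // directly with plain sequential code.
    static String fetchSync() {
        return "data";
    }

    // "Red" (asynchronous) function: the asynchrony leaks into the return
    // type, so every caller must compose through the future (or block on it,
    // defeating the purpose) -- and the caller's own signature changes in turn.
    static CompletableFuture<String> fetchAsync() {
        return CompletableFuture.supplyAsync(() -> "data");
    }

    public static void main(String[] args) {
        String a = fetchSync();                                        // direct call
        String b = fetchAsync().thenApply(String::toUpperCase).join(); // async composition
        System.out.println(a + " " + b);                               // prints "data DATA"
    }
}
```

With fibers, the blocking version can simply be run on a fiber: the code keeps its ordinary signature and sequential shape, and no second "color" of function is needed.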
1. It is up in the air whether we should call this construct a continuation or a coroutine — there is a difference in meaning, but the naming does not seem to be completely standardized, and "continuation" appears to be used as the more general term. ↩