preface
In the previous article, Asynchronism in Flutter/Dart, we learned that a Flutter/Dart program is event-driven and that Dart code exists in the form of Isolates. Each Isolate runs an internal event loop, and the Dart code keeps handling one event after another. Isolates cannot access each other directly; they communicate through ports. Understanding this event mechanism is fundamental to understanding how Flutter/Dart operates. Like the human nervous system, it allows the various parts of a program to work together. It can also answer the following questions:
- How do Isolates communicate with each other through ports (Port)?
- How do timers (Timer) and microtasks work?
- How does program I/O work?
- Why do network requests not block the Isolate?
- What changes does Flutter make to Dart's event mechanism?
Answering these questions requires an in-depth look at the source code of the Dart virtual machine and the Flutter Engine, in addition to understanding the Dart language and the Flutter framework. The rest of this article introduces the low-level implementation of the event mechanism without pasting source code.
Event mechanism
Those of you who have read about the Dart virtual machine have no doubt seen the image below. It comes from Introduction to Dart VM, a blog post about the Dart virtual machine by Google engineer Vyacheslav Egorov. From this image and the blog's description we can see that Dart code runs inside an Isolate and, viewed from below, executes on some Mutator Thread, that is, on a specific thread. However, an Isolate is not bound one-to-one to a system thread for the whole program lifetime. An Isolate may be running on one thread of the thread pool now and on another thread later. Likewise, a thread in the thread pool may be running one Isolate at this moment and a different Isolate later on. What is certain is that, at any given moment, an Isolate runs on only one system thread. From this correspondence, Isolates look more like individual tasks running on a thread pool.
Message processing of the Isolate
So how does an Isolate run on threads? From what we know about event-driven architectures, we can expect a message loop to be running on the thread. With a message loop there must be a message queue, and there must be ports to receive messages. The Isolate can therefore be represented as follows:
Unlike the usual case, the Isolate's message loop is not an infinite loop but a single message-handling function. When an external message arrives, it is first inserted into the message queue (MessageQueue). If the Isolate is not currently running, the virtual machine hands its message handler to the thread pool in the form of a task. The thread pool allocates a thread as needed and starts executing the handler on that thread, which takes messages from the queue and processes them until the queue is empty. Once all messages are processed, the task is finished, the thread becomes free, and the thread pool may schedule a new task for it, possibly another Isolate's message handler.
The Dart virtual machine is thus flexible in how it runs Isolates: an Isolate does not occupy a thread for a long time; instead, all Isolates share a thread pool.
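To make this "drain the queue, then free the thread" behavior concrete, here is a minimal conceptual sketch in Dart. It is not the VM's actual (C++) implementation; the names IsolateMessageLoop, postMessage and the simulated "thread pool" callback are purely illustrative.

import 'dart:collection';

// Illustrative message handler: runs as a "task" that drains its queue and
// then finishes, freeing the (simulated) thread for other Isolates' tasks.
class IsolateMessageLoop {
  final Queue<Object> _queue = Queue<Object>();
  bool _taskScheduled = false;

  // Called when a message is delivered to this handler (cf. PortMap).
  void postMessage(Object message, void Function(void Function()) schedule) {
    _queue.add(message);
    if (!_taskScheduled) {
      _taskScheduled = true;
      // Hand the handler to the "thread pool" as a task.
      schedule(() => _runTask(print));
    }
  }

  // The task: take messages from the queue and process them until it is empty.
  void _runTask(void Function(Object) handle) {
    while (_queue.isNotEmpty) {
      handle(_queue.removeFirst());
    }
    _taskScheduled = false;
  }
}

void main() {
  final loop = IsolateMessageLoop();
  // The "thread pool" is simulated by simply running the task right away.
  loop.postMessage('first message', (task) => task());
  loop.postMessage('second message', (task) => task());
}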
The message queue
There are two message queues in an Isolate's message handler. One is the ordinary message queue; the other is the OOB message queue. OOB is short for "out of band" and is used for control messages. For example, after spawning a new Isolate from the current Isolate, we can control the new Isolate (pause, resume, kill, and so on) by sending it OOB messages.
OOB messages have higher priority than ordinary messages: the message handler takes messages from the ordinary queue only when the OOB queue is empty.
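As a Dart-level illustration, the pause/resume/kill controls on an Isolate are exactly the kind of requests carried by these control messages (the precise OOB delivery happens inside the VM). A minimal sketch:

import 'dart:isolate';

// Child isolate entry point: just keeps its event loop busy for a while.
Future<void> _child(Object? _) async {
  await Future.delayed(const Duration(seconds: 1));
}

Future<void> main() async {
  final exitPort = ReceivePort();
  final isolate =
      await Isolate.spawn(_child, null, onExit: exitPort.sendPort);

  // These control requests are delivered to the child as high-priority
  // control (OOB) messages rather than ordinary messages.
  final resumeToken = isolate.pause();        // pause the child isolate
  isolate.resume(resumeToken);                // resume it
  isolate.kill(priority: Isolate.immediate);  // kill it right away

  await exitPort.first;                       // wait for the exit notification
  exitPort.close();
}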
The message
For messages to be passed back and forth, the most important thing is the destination address, and in Dart that address is the Port. Each message is bound to a destination port so that it can be delivered to the corresponding Isolate's message handler. If the port is invalid, the message is discarded.
Port and PortMap
As stated above, the Dart messaging mechanism uses the Port for addressing. Each Isolate has one message handler, and the Isolate exposes multiple ports as needed; each of these ports is bound to the Isolate's message handler.
The Dart VM needs to send and receive many kinds of messages: Isolates send messages to each other, and an Isolate also receives I/O messages and Timer messages. These messages tend to cross threads. While Android uses the Looper/Handler mechanism for this, the Dart virtual machine takes a more direct approach: a globally unique PortMap exists inside the virtual machine to manage the life cycle of every port and the delivery of messages. This way, every thread can access the PortMap, and there is no obstacle to passing messages.
Internally, PortMap maintains a hash table that holds the information for all ports. Each entry contains a port number and the corresponding message handler. Looking up the table by port number therefore yields the message handler, and the message can then be queued in that handler for processing.
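A greatly simplified conceptual model of this table, written in Dart, may help fix the idea. The real PortMap lives in the VM's C++ code; the class and method names below (PortMapSketch, createPort, postMessage, closePort) are illustrative only.

import 'dart:math';

// Simplified model: port number -> handler table, plus random port allocation.
typedef MessageHandler = void Function(Object message);

class PortMapSketch {
  final Map<int, MessageHandler> _ports = <int, MessageHandler>{};
  final Random _random = Random();

  // Create a port: pick an unused random port number and bind it to a handler.
  int createPort(MessageHandler handler) {
    int port;
    do {
      port = _random.nextInt(1 << 32);
    } while (_ports.containsKey(port));
    _ports[port] = handler;
    return port;
  }

  // Deliver a message: look up the handler by port number and hand it over.
  // Messages to unknown (closed) ports are simply dropped.
  void postMessage(int port, Object message) {
    _ports[port]?.call(message);
  }

  // Close a port: remove its entry from the table.
  void closePort(int port) {
    _ports.remove(port);
  }
}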
PortMap also manages the life cycle of all ports: each port is created and closed through PortMap. Take ReceivePort as an example. When we create a ReceivePort inside an Isolate, the call ends up in PortMap, which generates a port number, binds it to the current Isolate's message handler, and stores the pair in its hash table.
In the Dart virtual machine's implementation, PortMap is initialized when the virtual machine is initialized. It contains a random number generator that produces a port number whenever a new port is created.
Closing a port removes the element corresponding to the port number from the hash table.
When a thread needs to send a message, it calls PortMap::PostMessage(), which looks up the hash table by port number. Once the message handler corresponding to the port is found, the message can be queued there for processing. The message passing process is shown below.
Since PortMap stores all port information in a global hash table, it is easy to see why ports should be closed when they are no longer needed and, further, why Isolates should be killed when they are no longer needed; otherwise resources may not be released in time, resulting in leaks.
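A minimal Dart-level illustration of this life cycle (the port-number bookkeeping happens invisibly inside PortMap):

import 'dart:isolate';

void main() {
  // Creating a ReceivePort ends up in PortMap: a port number is generated and
  // bound to this Isolate's message handler.
  final receivePort = ReceivePort();
  final sendPort = receivePort.sendPort; // carries that port number

  receivePort.listen((message) {
    print('got: $message');
    // Closing the port removes its entry from PortMap's hash table; without
    // this, the port keeps the Isolate alive and effectively leaks.
    receivePort.close();
  });

  sendPort.send('hello');
}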
Message delivery
Dart message dispatch is divided into two layers. The first is the message handler at the Native layer: messages sent from other threads or Isolates converge here first. If the thread of the whole Isolate is compared to a residential community and each ReceivePort listened to at the Dart layer is compared to a resident, then the Native-layer message handler is the community gate, and messages must be further dispatched at the Dart layer to reach individual households.
At the Dart layer, each Isolate has its own _portMap, which stores ReceivePort port numbers and their corresponding handlers. We know that ReceivePort implements the Stream interface. After a message is received, the corresponding handler writes the message data into the ReceivePort, so that the callback listening on the Stream can process it. The Dart-layer message-handling code is as follows:
@pragma("vm:entry-point", "call")
static void _handleMessage(Function handler, var message) {
  handler(message);
  _runPendingImmediateCallback();
}
Message handling does two things. First, it calls handler(message) to handle the message arriving on the port. Then it calls _runPendingImmediateCallback(), which runs all pending microtasks. This is where the event-loop behavior described in the preface comes from.
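A small Dart example makes the ordering visible: the microtask scheduled below runs right after the current message finishes, before the next event is taken from the queue.

import 'dart:async';

void main() {
  // An ordinary event: goes through the message/event queue.
  Future(() => print('event'));

  // A microtask: drained after the current message finishes, before the
  // next event is taken from the queue.
  scheduleMicrotask(() => print('microtask'));

  print('sync');
  // Output order: sync, microtask, event
}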
As the description above shows, the Port messaging mechanism is one-way, which is why we usually create a new ReceivePort when spawning a new Isolate and send its SendPort to the child Isolate. This establishes a channel from the child Isolate to the parent Isolate. If two-way communication is required, the child Isolate also creates its own ReceivePort and sends its SendPort to the parent through the previous channel. It sounds convoluted, but it is much easier to follow once you understand the messaging mechanism, as the sketch below shows.
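A minimal sketch of that two-way handshake using the standard dart:isolate API (the step comments map onto the description above):

import 'dart:async';
import 'dart:isolate';

// Child entry point: receives the parent's SendPort, then sends back its own
// SendPort to establish the reverse channel.
void _child(SendPort toParent) {
  final fromParent = ReceivePort();
  toParent.send(fromParent.sendPort); // step 2: give the parent our SendPort

  fromParent.listen((message) {
    toParent.send('echo: $message');  // step 4: reply over the first channel
    fromParent.close();
  });
}

Future<void> main() async {
  final fromChild = ReceivePort();
  await Isolate.spawn(_child, fromChild.sendPort); // step 1: pass our SendPort

  final events = StreamIterator(fromChild);

  await events.moveNext();
  final toChild = events.current as SendPort;      // step 2 arrives
  toChild.send('ping');                            // step 3: use reverse channel

  await events.moveNext();
  print(events.current);                           // "echo: ping"

  await events.cancel();
  fromChild.close();
}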
The Timer mechanism
Timers (Timer) are another important source of events. The Dart virtual machine uses EventHandler to manage timer resources. Providing the timer function requires underlying system resources, so EventHandler is initialized during Dart VM initialization and starts a thread, named "dart:io EventHandler", to provide the timer function. Because of this dependence on the underlying system, the implementation differs between platforms; on Android, for example, the timer function relies on the epoll mechanism at the bottom.
Obviously an Isolate needs to communicate with EventHandler to use timers: the Isolate notifies EventHandler to set or cancel a timer, and when the timer expires, EventHandler sends a message back to the Isolate.
From Isolate to EventHandler, the message mechanism above suggests that EventHandler would need to open a port in PortMap so that the Isolate could send messages to it through that port. This is unnecessary, however, because EventHandler is globally unique: to send a message to EventHandler there is no need to go through PortMap; just call the EventHandler_SendData method it provides.
In the other direction, though, EventHandler does have to send its messages to the Isolate through a port; otherwise EventHandler would not know whom to send them to. This port number is therefore carried as a parameter in the message the Isolate previously sent to EventHandler.
At the Dart layer, all timers of the Isolate are managed by _Timer. When a valid timer exists, the _Timer opens a ReceivePort to receive the timer’s arrival message.
Timers fall into two categories: those with a delay and those without (that is, with a delay of zero). _Timer adopts a different management strategy for each:
- If the new timer has no delay, _Timer inserts it into a linked list called ZeroTimer.
- If the new timer has a delay, _Timer inserts it into a binary heap called _TimerHeap; the top of the heap is the timer that fires soonest.
_Timer also handles these two types of timers differently:
- After being inserted into the ZeroTimer list, a no-delay timer sends itself a _ZERO_EVENT message via _sendPort.
- After being inserted into the _TimerHeap, a delayed timer checks whether it is the timer that fires soonest; if so, it sends a message to EventHandler carrying the sendPort and the nearest wake-up time. When that time arrives, EventHandler sends back a _TIMEOUT_EVENT message.
Instead of using the generic message handler described earlier, _Timer registers its own message callback. When a message arrives, this callback first builds the list of timers that need to be processed:
- On receiving _ZERO_EVENT, it adds to the list all timers in the binary heap that expire earlier than the current no-delay timer, and finally adds the current no-delay timer itself.
- On receiving _TIMEOUT_EVENT, if a no-delay timer exists, it adds all timers in the binary heap that expire earlier than that no-delay timer; if none exists, it adds all timers in the binary heap that expire earlier than the current system time.
With the list of pending timers in hand, the message handler calls each timer's callback in turn and updates its state, re-inserting periodic timers into the heap.
Finally, to satisfy the design of the Dart event loop, _runPendingImmediateCallback() is invoked after each timer callback completes, to drain the microtask queue.
As the workflow above shows, only delayed timers are set at the bottom layer through EventHandler, while no-delay timers are handled entirely by _Timer at the Dart layer. Even among delayed timers, only the one that fires soonest is handed to EventHandler for management; the timers with longer delays stay in _Timer's binary heap. This design also saves system resources.
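At the API level the two categories look like this; the comments simply restate the article's description of where each kind of timer is managed.

import 'dart:async';

void main() {
  // A zero-delay timer: handled entirely by _Timer at the Dart layer,
  // via the zero-timer list and a _ZERO_EVENT message to itself.
  Timer(Duration.zero, () => print('zero-delay timer fired'));

  // A delayed timer: goes into the _TimerHeap; only the soonest one is
  // registered with EventHandler at the native layer.
  Timer(const Duration(milliseconds: 100), () => print('delayed timer fired'));

  // A periodic timer is re-inserted into the heap after each callback.
  var ticks = 0;
  Timer.periodic(const Duration(milliseconds: 50), (t) {
    if (++ticks == 3) t.cancel();
  });
}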
The I/O system
System I/O is another important source of events. Dart's I/O mechanism is itself quite complex; this section only explains it from the perspective of message passing, and the details of file, directory, HTTP, socket and other I/O methods deserve separate study.
The virtual machine provides _IOService at the Dart layer to handle all I/O requests uniformly. All Dart-layer I/O operations, such as file reads and writes and network requests, are funneled into _IOService and handed over to the Native layer for processing. _IOService defines a number for each I/O operation; for example, the file-open operation is defined as static const int fileOpen = 5. A total of 43 I/O operations are defined in this class.
All I/O operations return asynchronously, which means the Isolate that initiates an I/O operation communicates with the underlying Native code through the messaging system. Here is how the communication channel between them is established.
When it receives an I/O request from the upper layer, _IOService first makes sure it has completed initialization: it checks that it has a ReceivePort and creates one if it does not. This ReceivePort is used to receive all I/O messages.
Now that the local receiving port exists, the next step is the receiver's Native port. The Native receiving port is created by _IOService calling IOService_NewServicePort in the Native layer, again via PortMap. We know that within PortMap a port must be bound to a MessageHandler. This underlying port is bound not to an Isolate's message handler but to a NativeMessageHandler, which handles Native message services. NativeMessageHandler and IsolateMessageHandler both inherit from MessageHandler, so message processing at the Native level also happens on the thread pool. That is, the concrete I/O operations mentioned above, such as opening files, are performed on the thread pool.
After the ReceivePort, or ServicePort, is created on the Native side, its SendPort is returned to the Dart layer.
So the current state is: both the Dart layer and the Native layer have a ReceivePort, and the Dart layer holds the SendPort from the Native layer. To make an I/O request, the Dart layer simply calls sendPort.send() with a message composed of the requested operation number (such as fileOpen = 5), the operation's parameters (such as the file path), and the Dart layer's own sendPort, which the Native side needs in order to notify the Dart layer of the result. This message goes to the NativeMessageHandler, which processes it on the thread pool: it extracts the parameters of the requested I/O operation and invokes the underlying I/O to complete it. When the operation is done, the result is sent back to the Dart layer via the SendPort, and the whole I/O process is complete. As shown in the figure:
Because the Dart layer receives messages along the same path, I/O operations also meet the Dart event loop criteria.
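As a Dart-level illustration, an ordinary file read travels exactly this path: the call returns a Future immediately, and the Future completes when the Native side sends the result back over the port. The file path below is a placeholder.

import 'dart:io';

Future<void> main() async {
  // The request (operation number + parameters + the Dart layer's SendPort)
  // is handed to the Native layer; this Isolate is not blocked while the
  // thread pool performs the actual read.
  final contents = await File('example.txt').readAsString(); // placeholder path
  print(contents.length);
}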
summary
That's it for the event mechanism of the Dart virtual machine. Now that you know how the entire messaging system works, the Dart virtual machine should feel less like a stranger, and the event-driven schematic in the introduction should make more sense.
Flutter is based on the Dart virtual machine, but the message mechanism described above does not fully meet Flutter's needs, so Flutter makes some modifications to it. The following sections briefly describe this customization of the Dart VM for Flutter.
Flutter's customization
We all know that when Flutter starts it creates three threads (UI, GPU and IO), plus the native Platform thread. These four threads coordinate to support Flutter's operation. The UI thread runs the RootIsolate, and the RootIsolate runs the Flutter framework, that is, the rendering pipeline described in my earlier Flutter framework analysis series. The RootIsolate is so important that it obviously cannot simply dump its message handler onto a thread pool like an ordinary Isolate; instead, the RootIsolate's MessageHandler is made to run on the UI thread.
Message processing customization
How does this designation work? RootIsolate differs from a regular Isolate in two ways when it starts up.
One is that when RootIsolate is initialized, the UITaskRunner is attached to it, and ultimately message_notify_callback_ is set on RootIsolate's MessageHandler. This setting routes RootIsolate's MessageHandler to run on the UI thread.
The other is that RootIsolate's MessageHandler is prevented from running on the thread pool. How does this work? A normal Isolate calls MessageHandler::Run() before running Dart code, which sets the thread pool on the MessageHandler. RootIsolate skips this call and starts running Dart code directly, so its MessageHandler has no thread pool and can only run on the UI thread.
// This function call sets the thread pool
void MessageHandler::Run(ThreadPool* pool, ...) {
  ...
  pool_ = pool;
  ...
  const bool launched_successfully = pool_->Run<MessageHandlerTask>(this);
}
The key to which thread the message handler runs on lies in the function MessageHandler::PostMessage():
void MessageHandler::PostMessage(std::unique_ptr<Message> message, bool before_events) {
  // Messages are enqueued: OOB messages and ordinary messages go to their own queues.
  if (message->IsOOB()) {
    oob_queue_->Enqueue(std::move(message), before_events);
  } else {
    queue_->Enqueue(std::move(message), before_events);
  }
  ...
  // For a normal Isolate the thread pool is not null, so the block below runs
  // the message handler on the thread pool.
  // RootIsolate skips it because it has no thread pool.
  if (pool_ != nullptr && !task_running_) {
    const bool launched_successfully = pool_->Run<MessageHandlerTask>(this);
  }
  // RootIsolate continues here: calling the function below ends up running its
  // message handler on the UI thread.
  MessageNotify(saved_priority);
}
With this customization, RootIsolate's MessageHandler runs on the UI thread. To be clear, the overall logic of message processing remains the same; only the thread it executes on changes.
Microtask customization
Another customization Flutter makes to the messaging mechanism concerns microtasks. For a native Isolate, microtask scheduling and execution happen at the Dart layer; the hook is _runPendingImmediateCallback(), executed by the Dart-layer message-handling function after each message is processed.
When Flutter initializes RootIsolate, it replaces the Dart-layer microtask scheduling function with the Native ScheduleMicrotask, so the triggering of microtask execution is also moved to the Native layer. Execution is later started when UIDartState::FlushMicrotasksNow() is invoked.
Flutter triggers microtask execution at two points. The first is after UITaskRunner completes each task. From the analysis of the Flutter message mechanism above, we know that RootIsolate's message handler runs as a task on the UITaskRunner, and the handler still processes only one ordinary message at a time, which continues to satisfy the Dart event-loop rules.
The second is between the engine's _beginFrame callback and its _drawFrame callback: microtask execution is triggered between these two. See Flutter Framework Analysis (I) — Overview and Window for details about these callbacks.
conclusion
This article has introduced, from the virtual machine's perspective, the Dart event mechanism, the implementation of Timer and I/O events, and Flutter's customization of the native Dart event mechanism. The event mechanism is to Dart/Flutter what the circulatory system is to an animal or the road network is to a city: once you understand it, the functional modules inside Dart/Flutter become much easier and more pleasant to explore. (End)