Synchronous and Asynchronous
A program exists to carry out some logical processing on a person's behalf, expressed as executable instructions on a computer. Normally we expect those instructions to execute sequentially in logical order, but in practice some instructions are time-consuming operations that cannot return a result immediately; the resulting blocking stops the program from making progress. This is especially common with I/O operations. At that point, from the user's perspective, we can either stop the world, waiting for the operation to complete and return its result before continuing, or carry on with other work and be notified when the operation eventually produces a result. That is synchronous versus asynchronous from the user's point of view.
From an operating system perspective, synchrony and asynchrony have a more complex relationship with task scheduling, process switching, interrupts, and system calls.
The difference between synchronous and asynchronous I/O
Why asynchronous
A user can afford to block and wait, because human actions are extremely slow compared with the computer's; but a computer that blocks wastes an enormous amount of capacity. Asynchronous operations let your program get on with its work while waiting for another operation to finish. There are three common scenarios for asynchronous operations:
- I/O operations: for example, initiating a network request, reading/writing a database, reading/writing a file, or printing a document. A synchronous program performing one of these stops until the operation completes; a more efficient program performs other work while the operation is pending. Suppose a program reads some user input, performs a computation, and then e-mails the result. Sending the e-mail means pushing some data onto the network and then waiting for the server's response. The time spent waiting for that response is wasted time that would be better spent letting the program continue computing.
- Performing multiple operations in parallel: asynchrony is useful when you need to run different operations side by side, such as database calls, web service calls, and computation.
- Long-running, event-driven requests: a request comes in, goes dormant for a period of time waiting for some other event to occur, and when that event occurs the request should resume and send a response to the client. In this model, a thread is assigned to the request when it arrives, returned to the thread pool while the request sleeps, and when the task completes an event is raised and a thread is taken from the pool to send the response.
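The first scenario can be sketched in Dart; `sendEmail` here is a hypothetical stand-in for a real network call, simulated with a short delay:

```dart
// Hypothetical stand-in for a network round-trip to a mail server.
Future<String> sendEmail(String body) =>
    Future.delayed(Duration(milliseconds: 100), () => 'server ack');

void main() {
  // Start the "network" operation without waiting for it.
  sendEmail('results').then((reply) => print('got: $reply'));
  // The program keeps computing while the round-trip is pending.
  var sum = 0;
  for (var i = 0; i < 1000; i++) {
    sum += i;
  }
  print('computed $sum before the reply arrived');
}
```

The computed line prints first; the reply arrives later via the event queue.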
Computers implement asynchrony through task scheduling, that is, process switching.
Task scheduling uses preemptive, time-slice round-robin scheduling, and the process is the smallest unit of task scheduling.
A computer system is divided into user space and kernel space: user processes live in user space, and the operating system lives in kernel space, where data access and modification carry higher privileges than ordinary processes have. Task scheduling is the process of deciding which process holds the CPU at any given moment so as to maximize CPU utilization; the kernel is responsible for scheduling and managing user processes. The process scheduling flow is as follows:
Only one process can be running on a CPU core at any given time
Each process can contain multiple threads; a thread is the smallest unit of execution, so a process switch is really a switch of the executing thread.
Future
A Future represents the result of an asynchronous operation: a delayed computation that eventually completes with a value or an error. A typical usage looks like this:
```dart
Future<int> future = getFuture();
future.then((value) => handleValue(value))
    .catchError((error) => handleError(error))
    .whenComplete(func);
```
A Future can be in one of three states: uncompleted, completed with a value, or completed with an error.
When a function that returns a Future is called, two things happen:

- The function queues up the work to be done and returns an uncompleted Future
- Later, when the work finishes, the Future completes with a value or with an error
First, the Flutter event-processing model: the main function runs first, then events in the microtask queue are processed, and finally events in the event queue. For example:
```dart
void main() {
  Future(() => print(10));
  Future.microtask(() => print(9));
  print("main");
}
/// Prints:
/// main
/// 9
/// 10
```
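A microtask scheduled while an event is being handled still jumps ahead of the remaining events; a small sketch consistent with this model:

```dart
void main() {
  Future(() {
    print('event 1');
    // Scheduled during event 1, so it runs before event 2 is dequeued.
    Future.microtask(() => print('microtask from event 1'));
  });
  Future(() => print('event 2'));
  print('main');
}
/// Prints: main, event 1, microtask from event 1, event 2
```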
With this event model in mind, look at the constructors that Future provides. The most basic one takes a Function directly:
```dart
factory Future(FutureOr<T> computation()) {
  _Future<T> result = new _Future<T>();
  Timer.run(() {
    try {
      result._complete(computation());
    } catch (e, s) {
      _completeWithErrorCallback(result, e, s);
    }
  });
  return result;
}
```
The function can take several forms:

```dart
// Simple operation: a single expression
Future(() => print(5));

// Slightly more complex: an anonymous function
Future(() {
  print(6);
});

// More operations: a named method
Future(printSeven);

printSeven() {
  print(7);
}
```
Future.microtask
Events created by this factory method go to the microtask queue, which is processed ahead of the event queue:
```dart
factory Future.microtask(FutureOr<T> computation()) {
  _Future<T> result = new _Future<T>();
  scheduleMicrotask(() {
    try {
      result._complete(computation());
    } catch (e, s) {
      _completeWithErrorCallback(result, e, s);
    }
  });
  return result;
}
```
Future.sync
Returns a Future that executes the passed computation immediately; it can be understood as a synchronous call:
```dart
factory Future.sync(FutureOr<T> computation()) {
  try {
    var result = computation();
    if (result is Future<T>) {
      return result;
    } else {
      // TODO(40014): Remove cast when type promotion works.
      return new _Future<T>.value(result as dynamic);
    }
  } catch (error, stackTrace) {
    /// ...
  }
}
```
```dart
Future.microtask(() => print(9));
Future(() => print(10));
Future.sync(() => print(11));
/// Prints: 11, 9, 10
```
Future.value
Create a future that will contain value
```dart
factory Future.value([FutureOr<T>? value]) {
  return new _Future<T>.immediate(value == null ? value as T : value);
}
```
The FutureOr<T> parameter type denotes the union of T and Future<T>: because the argument can be either a plain value or a Future, both of the following are valid:
```dart
Future.value(12).then((value) => print(value));
Future.value(Future<int>(() {
  return 13;
}));
```
Note that even when value is a plain 12, completion is still delivered asynchronously rather than immediately: the then callback is scheduled via the microtask queue, so it runs before events already waiting in the event queue.
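A small sketch of that ordering:

```dart
void main() {
  Future(() => print('event-queue future'));
  Future.value(12).then((value) => print('value $value'));
  print('main');
}
/// Prints: main, value 12, event-queue future
```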
Future.error
Create a Future with an error result
```dart
factory Future.error(Object error, [StackTrace? stackTrace]) {
  /// ...
  return new _Future<T>.immediateError(error, stackTrace);
}

_Future.immediateError(var error, StackTrace stackTrace)
    : _zone = Zone._current {
  _asyncCompleteError(error, stackTrace);
}
```
```dart
Future.error(new Exception("err msg"))
    .then((value) => print("err value: $value"))
    .catchError((e) => print(e));
/// Prints: Exception: err msg
```
Future.delayed
Creates a future whose callback runs after a delay; internally, a Timer completes the Future once the duration elapses.
```dart
factory Future.delayed(Duration duration, [FutureOr<T> computation()?]) {
  /// ...
  new Timer(duration, () {
    if (computation == null) {
      result._complete(null as T);
    } else {
      try {
        result._complete(computation());
      } catch (e, s) {
        _completeWithErrorCallback(result, e, s);
      }
    }
  });
  return result;
}
```
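A usage sketch, with and without the optional computation:

```dart
void main() {
  // With a computation: the future completes with its result after the delay.
  Future.delayed(Duration(milliseconds: 100), () => 'done')
      .then((value) => print(value)); // done
  // Without one: the future completes with null, useful as a timed pause.
  Future.delayed(Duration(milliseconds: 50))
      .then((_) => print('pause finished'));
}
```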
Future.wait
Wait for multiple Futures and collect the returned results
```dart
static Future<List<T>> wait<T>(Iterable<Future<T>> futures,
    {bool eagerError = false, void cleanUp(T successValue)?}) {
  /// ...
}
```
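A plain (non-widget) usage sketch; the results come back as a list in the order the futures were passed, not in completion order:

```dart
Future<void> main() async {
  final results = await Future.wait([
    Future.delayed(Duration(milliseconds: 20), () => 1),
    Future.delayed(Duration(milliseconds: 10), () => 2),
  ]);
  print(results); // [1, 2]
}
```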
Used in combination with FutureBuilder:
```dart
child: FutureBuilder(
  future: Future.wait([
    firstFuture(),
    secondFuture()
  ]),
  builder: (context, snapshot) {
    if (!snapshot.hasData) {
      return CircularProgressIndicator();
    }
    final first = snapshot.data[0];
    final second = snapshot.data[1];
    return Text("data $first $second");
  },
),
```
Future.any
Returns the result of the first future in the collection to complete
```dart
static Future<T> any<T>(Iterable<Future<T>> futures) {
  var completer = new Completer<T>.sync();
  void onValue(T value) {
    if (!completer.isCompleted) completer.complete(value);
  }
  void onError(Object error, StackTrace stack) {
    if (!completer.isCompleted) completer.completeError(error, stack);
  }
  for (var future in futures) {
    future.then(onValue, onError: onError);
  }
  return completer.future;
}
```
In the FutureBuilder example above, if Future.any were used instead of Future.wait, snapshot.data would hold whichever of firstFuture and secondFuture completed first.
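A plain usage sketch of Future.any: whichever future completes first supplies the result:

```dart
Future<void> main() async {
  final first = await Future.any([
    Future.delayed(Duration(milliseconds: 100), () => 'slow'),
    Future.delayed(Duration(milliseconds: 10), () => 'fast'),
  ]);
  print(first); // fast
}
```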
Future.forEach
For each element passed in, an action is executed in order
```dart
static Future forEach<T>(Iterable<T> elements, FutureOr action(T element)) {
  var iterator = elements.iterator;
  return doWhile(() {
    if (!iterator.moveNext()) return false;
    var result = action(iterator.current);
    if (result is Future) return result.then(_kTrue);
    return true;
  });
}
```
The first time I saw this style of syntax, in JavaScript, it confused me for a while. Example:
```dart
Future.forEach(["one", "two", "three"], (element) {
  print(element);
});
```
Future.doWhile
Repeatedly performs an operation until it returns false
```dart
Future.doWhile(() {
  for (var i = 0; i < 5; i++) {
    print("i => $i");
    if (i >= 3) {
      return false;
    }
  }
  return true;
});
/// Prints i => 0 through i => 3
```
The above are the common constructors and methods used in Future
Use a Future in a Widget
Flutter provides the FutureBuilder widget for displaying the result of a Future. It is easy to use; the pseudocode is as follows:
```dart
child: FutureBuilder(
  future: getFuture(),
  builder: (context, snapshot) {
    if (!snapshot.hasData) {
      return CircularProgressIndicator();
    } else if (snapshot.hasError) {
      return _ErrorWidget("Error: ${snapshot.error}");
    } else {
      return _ContentWidget("Result: ${snapshot.data}");
    }
  }
)
```
Async-await
Usage
These two keywords provide a synchronous style for writing asynchronous methods. Future's chained calls are convenient but less intuitive, and heavy callback nesting hurts readability, which is why many languages have introduced async-await syntax; it is important to learn to use it well.
Two basic principles:
- To define an asynchronous method, declare async before the method body
- The await keyword may only be used inside async methods
First, add async to the method before the time-consuming operation:
```dart
void main() async { ... }
```
Then wrap the method's return type in a Future:
```dart
Future<void> main() async { ... }
```
You can now wait for the Future to complete with the await keyword
```dart
print(await createOrderMessage());
```
For example, suppose the first-level category is used to fetch the second-level category, which in turn is used to fetch the details. With chained calls the code looks like this:
```dart
var list = getCategoryList();
list.then((value) => value[0].getCategorySubList(value[0].id))
    .then((subCategoryList) {
  var courseList =
      subCategoryList[0].getCourseListByCategoryId(subCategoryList[0].id);
  print(courseList);
}).catchError((e) {
  print(e);
});
```
Now let’s see how much easier things are with async/await
```dart
Future<void> main() async {
  await getCourses().catchError((e) {
    print(e);
  });
}

Future<void> getCourses() async {
  var list = await getCategoryList();
  var subCategoryList = await list[0].getCategorySubList(list[0].id);
  var courseList =
      subCategoryList[0].getCourseListByCategoryId(subCategoryList[0].id);
  print(courseList);
}
```
As you can see, this reads much more intuitively.
Drawbacks
Async/await is very convenient, but there are some drawbacks to be aware of.
Because the code looks synchronous, each await blocks the code after it until the result comes back, just as a synchronous operation would. Other tasks can still run in the meantime, but the code that follows the await cannot.
This means a long run of consecutive awaits can serialize operations that, written as Futures, would run in parallel. For example, suppose the home page needs to fetch the banner (carousel) interface, the tab-list interface, and the msg-list interface simultaneously:
```dart
Future<String> getBannerList() async {
  return await Future.delayed(Duration(seconds: 3), () {
    return "banner list";
  });
}

Future<String> getHomeTabList() async {
  return await Future.delayed(Duration(seconds: 3), () {
    return "tab list";
  });
}

Future<String> getHomeMsgList() async {
  return await Future.delayed(Duration(seconds: 3), () {
    return "msg list";
  });
}
```
Written naively with await, it would most likely look like this, printing the time the operations take:
```dart
Future<void> main2() async {
  var startTime = DateTime.now().second;
  await getBannerList();
  await getHomeTabList();
  await getHomeMsgList();
  var endTime = DateTime.now().second;
  print(endTime - startTime); // 9
}
```
Here each of the three mock interface calls takes 3s, each subsequent request is forced to wait for the previous one to complete, and the total runtime ends up being 9s. What we actually want is for all three requests to execute at the same time:
```dart
Future<void> main() async {
  var startTime = DateTime.now().second;
  var bannerList = getBannerList();
  var homeTabList = getHomeTabList();
  var homeMsgList = getHomeMsgList();
  await bannerList;
  await homeTabList;
  await homeMsgList;
  var endTime = DateTime.now().second;
  print(endTime - startTime); // 3
}
```
Storing the three Futures in variables first starts them all at once, and the final printed time is only 3s. Keep this in mind when writing code to avoid needless performance cost.
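The same parallel start can also be written with Future.wait, which collects the three results as well (this sketch assumes the getBannerList/getHomeTabList/getHomeMsgList functions defined above):

```dart
Future<void> main() async {
  var startTime = DateTime.now();
  // All three futures start immediately; wait for them together.
  var results = await Future.wait([
    getBannerList(),
    getHomeTabList(),
    getHomeMsgList(),
  ]);
  print(results); // [banner list, tab list, msg list]
  print(DateTime.now().difference(startTime).inSeconds); // ~3
}
```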
Principle
Threading model
When a Flutter application (the Flutter Engine) starts, it starts, or selects from a pool, three further threads. These sometimes overlap, but in general they are called the UI thread, the GPU thread, and the I/O thread. Note that this UI thread is not the main thread the program runs on; the thread that other platforms would regard as the main thread is what Flutter calls the Platform thread.
The UI thread is where all of your Dart code runs, including the Framework and your application; Dart will never run on another thread unless you start your own isolates. The Platform thread is where all plugin code runs, and it is also where the native frameworks service other tasks. Typically, a Flutter application creates an Engine instance at startup, and creating the Engine creates a Platform thread to serve it. All interaction with the Flutter Engine (API calls) must happen on the Platform thread; calling the Flutter Engine from another thread can cause unexpected exceptions. This is similar to Android/iOS, where UI-related operations must happen on the main thread.
Dart's Isolate is similar in use to a thread, but implemented differently: an isolate is an independent worker that shares no memory and communicates with other isolates only by passing messages over ports. Dart executes code on a single thread, and isolates are Dart's solution for letting applications exploit multi-core hardware.
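A minimal sketch of spawning an isolate and exchanging a single message over ports (the summing work is an arbitrary placeholder):

```dart
import 'dart:isolate';

// Runs in the new isolate; no memory is shared with the spawner.
void worker(SendPort sendPort) {
  var sum = 0;
  for (var i = 1; i <= 100; i++) {
    sum += i;
  }
  sendPort.send(sum); // the only way to communicate is message passing
}

Future<void> main() async {
  final receivePort = ReceivePort();
  await Isolate.spawn(worker, receivePort.sendPort);
  final result = await receivePort.first; // 5050
  print('sum from isolate: $result');
}
```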
Event loop
The single-threaded model essentially maintains an event loop and two queues (the event queue and the microtask queue). When a Flutter app triggers click events, I/O events, or network events, they are added to the event queue. The event loop runs continuously: whenever the main thread finds the event queue non-empty, it takes out an event and executes it.
Events in the microtask queue take precedence over events in the event queue. When a task is posted to the microtask queue, it is executed as soon as the current event finishes, before the next event is dequeued, which gives Dart a way to jump the queue.
While the event queue is held up in this way, the app cannot draw UI or respond to mouse, I/O, and other events, so use microtasks with caution. The flow chart is as follows:
Task switching between these two queues is, in some respects, equivalent to coroutine scheduling.
coroutines
A coroutine is a cooperative task-scheduling mechanism, in contrast to the operating system's preemptive scheduling. It runs entirely in user mode, avoiding the kernel-mode/user-mode transitions that thread switches cost, and it lets the caller decide when to give up the CPU. Its price is much smaller than that of preemptive scheduling, where resuming a task requires restoring a great deal of saved state: not only user-space resources such as the process's virtual memory context, stack, and global variables, but also kernel-space state such as the kernel stack and registers. On most current Linux machines, each context switch takes about 1.2-1.5 μs counting only the direct cost and pinning to a single core to avoid migration costs, and up to 2.2 μs if not pinned.
Is that a long time for a CPU? A good comparison is memcpy: completing a 64 KiB copy on the same machine takes 3 μs, so a context switch is only slightly faster than that operation.
Coroutines resemble threads in the sense of executing tasks asynchronously, not as the next entity in a designed hierarchy like processes → threads → coroutines (by analogy, cells → nuclei → protons and neutrons). A coroutine can be thought of as a stretch of function code executing on a thread, using yield to carry out the whole flow: issue the asynchronous request, register the callback/notification, save state, suspend the control flow, then receive the callback/notification, restore state, and resume the control flow.
Multithreaded task execution model is shown as follows:
Blocked threads rely on the system switching between them to keep logic flows executing; frequent switching consumes significant resources, and the number of logic flows that can run depends heavily on how many threads the program requests.
Coroutines are cooperative multitasking: they provide concurrency but not parallelism. The execution-flow model is shown below:
Coroutines let you write the control flow in the same order as the logic flow. When a coroutine waits, it voluntarily releases the CPU, avoiding the waiting cost of thread switches and performing better, and the logic-flow code becomes much easier to write and understand.
Threads are not all bad, though: the preemptive thread scheduler actually provides a quasi-real-time experience. A timer, for example, cannot be guaranteed to run within the time slice in which it fires, but at least it is not like a coroutine, which will never run if nobody gives up the time slice...
conclusion
- Synchronous and asynchronous
- Future provides chained asynchronous calls in Flutter
- Async-await provides a synchronous style for writing asynchronous code
- Common Future methods, and writing UI with FutureBuilder
- The thread model in Flutter: four threads
- The event-driven model of a single-threaded language
- Process switching compared with coroutines