0x0. Introduction
Coroutines arrived officially with Kotlin 1.3, and the terse official documentation plus a pile of shallow articles on the web left me wondering whether I should stop at just knowing:
① In Android, Kotlin coroutines are used to handle time-consuming tasks while keeping the main thread safe; ② with Kotlin coroutines, asynchronous code can be written in a way that looks synchronous; ③ the basic API calls.
I also want to learn more about the concept of coroutines, how Kotlin coroutines are used in real development, and the principles behind them, hence this article. I haven't chewed through the coroutine source code yet, so for now this series is best read as study notes: learning as I go and excerpting from references, just to plant these concepts in the subconscious. When I later read the source code and sort out my thinking, I will organize another pass. If there are gaps or mistakes, comments pointing them out for discussion are very welcome, thank you~ This article mainly covers some related concepts and prerequisite knowledge; readers who already understand them can skip ahead.
0x1. Trace the source
1. Synchronous & asynchronous
Setting programming aside for a moment, a bus-riding example helps visualize synchrony and asynchrony:
Passengers line up for the bus; when it arrives, they board through the front door and scan their fare one after another. One scan, then the next: a serial relationship, which is synchronous. Meanwhile, passengers board through the front door while others get off through the back door; the two don't affect each other and happen at the same time: a parallel relationship, which is asynchronous.
Treat boarding and alighting as two tasks; the driver driving the bus is a third task, asynchronous with the other two. Asynchronous means they can all proceed at the same time, so it is even possible for the driver to drive off before passengers have finished boarding:
Normally, the driver should wait for passengers to get on and off the bus before starting the bus.
There are two common ways for the driver to know when to depart:
Polling (active): check the front and back doors every so often to see whether any passengers are still boarding or alighting; Callback (passive): in the early days of buses there was a conductor, and when nobody was getting on or off she would call out to the driver to start.
2. Blocking & non-blocking
Synchronous vs. asynchronous is about whether things proceed at the same time, while blocking vs. non-blocking is about whether the current task can proceed at all. Back to the bus example:
While passengers are getting on and off, the driver has to wait; at this point the driver's task is blocked. Once everyone has finished boarding and alighting, the driver can start again; at this point the driver's task is non-blocking.
What blocking really means: something you care about cannot proceed for some reason, so it makes you wait. Waiting is just a side effect of blocking; it indicates that nothing meaningful is happening while time passes.
Being blocked does not mean you must wait idly. You can do something unrelated in the meantime, since it will not affect the thing you are waiting for. For example, while waiting to depart, the driver can have a cup of tea or check his phone, but he cannot leave.
Computers are not as flexible as humans; the simplest thing to do when blocked is to wait: suspend the thread, release the CPU, and reschedule the thread when the condition is met.
3. Program
Back to programming. Tasks correspond to programs in a computer, defined as follows:
A set of instructions (a set of static code) written in a programming language to accomplish a specific task
The CPU executes instructions one by one; even when an external interrupt occurs, it simply switches from the current program to another and keeps executing instructions one at a time.
As expected, code executes line by line, but a purely sequential structure cannot express some business scenarios, for example:
Girlfriend: after work, go to the supermarket and buy ten eggs; if you see a watermelon, buy one.
At this point we need another of the four basic control flows → selection (conditional execution):
The remaining two basic control flows are iteration and recursion. We use control flow to assemble the program's logic, and wherever the program runs, it executes that logic.
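As a tiny illustration (a hypothetical sketch of mine, not code from the original article), the errand above combines sequence, selection, and iteration:

```kotlin
// Hypothetical sketch: the errand expressed with sequence, selection and iteration.
fun goShopping(sawWatermelon: Boolean): List<String> {
    val cart = mutableListOf<String>()
    repeat(10) { cart.add("egg") }   // iteration: buy ten eggs
    if (sawWatermelon) {             // selection: only when the condition holds
        cart.add("watermelon")
    }
    return cart                      // sequence: statements run top to bottom
}

fun main() {
    println(goShopping(sawWatermelon = true).size)  // 11
}
```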
4. Processes
At any moment, the CPU runs only one program in memory.
Suppose there are two programs, A and B. A is running and needs to read a large amount of input data (an IO operation); the CPU can only wait until A's data has been read before continuing to execute it.
It looks silly. How about this:
When program A starts reading data, switch to program B; when A finishes reading, pause B and switch back to A?
Sure, but in a computer this "switch" is divided into two operations:
Suspend: save the program's current state and pause it; Activate: restore the program's state and continue executing it.
This kind of switching involves saving and restoring program state, and the system resources (memory, disk, etc.) needed by A and B are not the same. So something is needed to record which resources A and B each hold, to let the system control the switching between them, and to serve as an identifying mark, among other things. Hence the abstraction of a process.
Process definition
A dynamic execution of a program on a data set, generally consisting of the following three parts:
- Procedure: Describes what the process is supposed to do and how to do it.
- Data set: Resources required by a program during execution;
- Process control block: records the process's external characteristics and describes its execution history; the system uses it to control and manage the process, and it is the unique sign by which the system knows the process exists.
A process is an independent unit of the system for resource allocation and scheduling.
The emergence of processes allows multiple programs to execute concurrently, improving system efficiency and resource utilization, but the following problems remain:
① A single process can only do one thing at a time, and the code inside it still executes sequentially; ② if a process blocks during execution, the whole process is suspended, and work inside the process that does not depend on the awaited resource will not run either; ③ memory cannot be shared between processes, and inter-process communication is cumbersome.
This led to the emergence of threads with smaller granularity.
5. Threads
Threads were introduced to reduce the cost of context switching, improve the system's concurrency, and break through the limitation that one process can only do one thing, making concurrency within a process possible.
Definition of thread
A lightweight process and the basic unit of CPU execution; it is the smallest unit of program execution, consisting of a thread ID, program counter, register set, and stack. Introducing threads reduces the overhead of concurrent execution and improves the operating system's concurrency.
Distinction: a "process" is the smallest unit of resource allocation, while a "thread" is the smallest unit of CPU scheduling.
Relationship between threads and processes
① A program has at least one process, and a process has at least one thread; a process can be thought of as a container for threads; ② a process has an independent memory space during execution, which is shared by the threads inside it; ③ processes can be spread across multiple machines, while threads are best suited to multiple cores; ④ each thread has its own entry point, sequential execution flow, and exit, but it cannot run on its own: it must live inside an application, which provides control over the execution of its threads.
Both processes and threads describe a period of CPU work; they differ only in granularity.
6. Concurrency & parallelism
So-called concurrency means:
Only one instruction executes at any instant, but instructions from multiple processes are rotated rapidly, so that macroscopically they appear to execute simultaneously while microscopically they do not; CPU time is simply divided into slices so that multiple processes alternate quickly. Concurrency exists on both single-core and multi-core CPU systems.
Another easily confused term is parallelism:
Multiple instructions execute on multiple processors at the same moment, simultaneously both microscopically and macroscopically; this requires a multi-core CPU system.
7. Cooperative & preemptive
On a single-core CPU, there is only one process executing at a time. With so many processes, how should the CPU time slice be allocated?
Cooperative multitasking
Early operating systems adopted cooperative multitasking, namely:
The process proactively cedes execution rights. If the current process needs to wait for I/O operations, the process proactively cedes the CPU, and the system schedules the next process.
Every process follows the rules and gives up the CPU when it needs to, which is fine, but there is a catch:
A single process can completely hog the CPU
Processes vary in quality. Even leaving aside malicious ones, a poorly written process that runs into an infinite loop or a deadlock can paralyze the entire system! In such a mixed environment, it is clearly unwise to entrust scheduling to the processes themselves, so operating systems moved to preemptive multitasking.
Preemptive multitasking
Execution is determined by the operating system, which has the ability to take control from either process and give control to another.
The system allocates time slices to each process fairly and reasonably; when a process's slice is used up it is put to sleep, and even if the slice is not used up, the system will force the process to sleep if something more urgent needs to run first. Drawing on this experience with processes, threads are also scheduled preemptively, which brings a new problem: thread safety.
8. Thread safety
Since a process's memory space is shared by the threads inside it, situations like this can arise:
Suppose a variable a = 10 is shared by threads T1 and T2, and both threads want to increment it. Running this program on a single-core CPU, the system must allocate CPU time slices to the two threads:
- 1. T1 reads the value of a from memory as 10, adds 1, and prepares to write the new value 11 back to memory;
- 2. T1 is suspended and T2 is scheduled; it also reads a, which is still 10, adds 1, and writes 11 to memory;
- 3. T1 is scheduled again and also writes 11 to memory.
The result is not what we expect: the value of a should be 12, not 11. This is a thread-safety problem caused by the unpredictability of thread scheduling.
Solutions:
Serialize access to the critical resource: at any moment only one thread may access it, also called synchronized, mutually exclusive access. The usual approach is locking (synchronization): a thread must acquire the lock before accessing the critical resource, other threads cannot access it and can only wait (block), and when the thread is done it releases the lock so that other threads can proceed.
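To make the race and the lock-based fix concrete, here is a minimal Kotlin sketch of mine (not from the original article); the counters and lock object are invented for illustration:

```kotlin
import kotlin.concurrent.thread

var unsafeCounter = 10                 // the shared variable "a"
var safeCounter = 10
val lock = Any()

fun main() {
    val workers = List(2) {
        thread {
            unsafeCounter++                          // read-modify-write is not atomic: an update can be lost
            synchronized(lock) { safeCounter++ }     // only one thread at a time may enter this block
        }
    }
    workers.forEach { it.join() }
    // unsafeCounter may occasionally end up as 11; safeCounter always ends up as 12
    println("unsafe=$unsafeCounter safe=$safeCounter")
}
```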
That's it for the prerequisite concepts; I believe they will be of great help when learning Kotlin coroutines.
0x2. Single-threaded Android GUI System
Yes, the Android GUI is designed to be single-threaded. You might ask: why not use higher-performance multithreading?
A: If the design were multithreaded, several threads could update the same UI control at once, which easily leads to thread-safety problems. The simplest fix, locking, costs time and reduces UI update efficiency, and raises many other issues such as deadlock. The complexity cost of a multithreaded model far outweighs the performance advantage it provides, which is why most GUI systems are single-threaded.
Android asks you to update the UI on the main thread (UI thread). Note that this is a recommendation rather than a hard rule; the actual rule is:
Only the thread that created the view can manipulate the view
So you can update, from a child thread, a view that was created on that child thread, but this is not recommended.
The usual pattern is to do the time-consuming work on a child thread and then send a message through a Handler to tell the UI thread to update the UI.
For more information about Handler, go to: Switch positions and Watch Handler with Questions
Several ways Android updates the UI asynchronously
1. Handler
post(Runnable) or related functions send messages to the main thread's message queue, where they wait to be dispatched to the corresponding Handler to perform the UI update. For example:
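The original code sample is not reproduced here; a minimal sketch of the pattern might look like the following (textView and doTimeConsumingWork() are assumed to exist in the Activity; the classes used are android.os.Handler and android.os.Looper):

```kotlin
// Inside an Activity; textView and doTimeConsumingWork() are assumed to exist.
private val mainHandler = Handler(Looper.getMainLooper())

fun loadData() {
    Thread {
        val result = doTimeConsumingWork()   // time-consuming work on a background thread
        mainHandler.post {                   // the Runnable is queued to the main thread's MessageQueue
            textView.text = result           // the UI update runs on the main thread
        }
    }.start()
}
```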
The above code can be simplified using a lambda expression plus Kotlin's thread {} syntax sugar:
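A possible simplified version with kotlin.concurrent.thread (same assumptions as above):

```kotlin
import kotlin.concurrent.thread

fun loadData() {
    thread {                                          // starts a background thread
        val result = doTimeConsumingWork()
        mainHandler.post { textView.text = result }   // back to the main thread for the UI update
    }
}
```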
Another common approach is to define a static inner Handler class that gathers all UI updates in one place, distinguished by msg.what:
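A sketch of what that might look like (hypothetical and mine, not the article's code; textView, progressBar and doTimeConsumingWork() are assumed, and a WeakReference is used to avoid leaking the Activity):

```kotlin
import android.os.Handler
import android.os.Looper
import android.os.Message
import androidx.appcompat.app.AppCompatActivity
import java.lang.ref.WeakReference
import kotlin.concurrent.thread

class MainActivity : AppCompatActivity() {

    companion object {
        const val MSG_UPDATE_TEXT = 1
        const val MSG_UPDATE_PROGRESS = 2
    }

    // Nested ("static") Handler: all UI updates gathered here, distinguished by msg.what
    private class UiHandler(activity: MainActivity) : Handler(Looper.getMainLooper()) {
        private val ref = WeakReference(activity)
        override fun handleMessage(msg: Message) {
            val act = ref.get() ?: return
            when (msg.what) {
                MainActivity.MSG_UPDATE_TEXT -> act.textView.text = msg.obj as String
                MainActivity.MSG_UPDATE_PROGRESS -> act.progressBar.progress = msg.arg1
            }
        }
    }

    private val uiHandler = UiHandler(this)

    fun loadData() = thread {
        // build a Message carrying the result and send it to uiHandler on the main thread
        uiHandler.obtainMessage(MSG_UPDATE_TEXT, doTimeConsumingWork()).sendToTarget()
    }
}
```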
2. AsyncTask
AsyncTask is a lightweight Android class for handling asynchronous tasks (it encapsulates Handler + Thread). Take the following code as an example:
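The original snippet is not reproduced; a hypothetical example of mine, written as an inner class of an Activity (textView and progressBar are assumed views, imports omitted for brevity), might look like this:

```kotlin
// Hypothetical example, defined as an inner class of an Activity.
inner class DownloadTask : AsyncTask<String, Int, String>() {

    override fun onPreExecute() {
        progressBar.visibility = View.VISIBLE          // the asynchronous operation is about to start
    }

    override fun doInBackground(vararg params: String?): String {
        for (i in 1..100) {
            publishProgress(i)                         // triggers onProgressUpdate()
            Thread.sleep(10)                           // simulate time-consuming work
        }
        return "downloaded: ${params[0]}"
    }

    override fun onProgressUpdate(vararg values: Int?) {
        progressBar.progress = values[0] ?: 0          // progress update on the main thread
    }

    override fun onPostExecute(result: String?) {
        progressBar.visibility = View.GONE
        textView.text = result                         // final UI update on the main thread
    }
}

// Created and executed on the main thread; execute() may be called only once per instance:
DownloadTask().execute("https://www.example.com")
```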
This is much simpler than writing a Handler by hand: just subclass AsyncTask and fill in the blanks (override the functions as needed):
- onPreExecute(): the asynchronous operation is about to start; do UI initialization here;
- doInBackground(): performs the asynchronous work; calling publishProgress() triggers onProgressUpdate();
- onProgressUpdate(): updates the UI according to the progress;
- onPostExecute(): the asynchronous operation has finished; update the UI.
However, there are also the following limitations:
① The AsyncTask class needs to be loaded on the main thread; ② AsyncTask objects should be created on the main thread; ③ execute() must be called on the main thread, and each AsyncTask object may call it only once; ④ a dedicated subclass must be created for each task type, and for easy access to the UI it is often defined as an inner class of the Activity, which couples them tightly.
It can be decoupled by turning the overridden functions into callbacks. An extracted, reusable AsyncTask class might look like this:
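One possible shape (a hypothetical sketch rather than the article's original code):

```kotlin
import android.os.AsyncTask

// Hypothetical reusable AsyncTask: the UI-facing steps are exposed as callbacks.
class CallbackTask(
    private val onStart: () -> Unit = {},
    private val onProgress: (Int) -> Unit = {},
    private val onDone: (String) -> Unit = {},
    private val work: (publish: (Int) -> Unit) -> String
) : AsyncTask<Void, Int, String>() {

    override fun onPreExecute() = onStart()

    override fun doInBackground(vararg params: Void?): String =
        work { progress -> publishProgress(progress) }   // background work reports its progress

    override fun onProgressUpdate(vararg values: Int?) {
        onProgress(values[0] ?: 0)                       // forwarded on the main thread
    }

    override fun onPostExecute(result: String) = onDone(result)
}
```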
Calling it is also simple; provide only the callbacks you need:
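A possible call site (progressBar and textView are again assumed views):

```kotlin
// The Activity keeps only UI logic; the asynchronous machinery stays inside CallbackTask.
CallbackTask(
    onStart = { progressBar.visibility = View.VISIBLE },
    onProgress = { progressBar.progress = it },
    onDone = { textView.text = it },
    work = { publish ->
        for (i in 1..100) { publish(i); Thread.sleep(10) }   // simulated work off the main thread
        "done"
    }
).execute()
```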
After decoupling it is more flexible, and the external calling logic is separated from the internal asynchronous logic, but problems remain, such as exception handling and task cancellation.
3. runOnUiThread
There is an even simpler way to get back to the UI thread: runOnUiThread {}.
Let's take a look at the source code:
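The framework implementation is in Java; paraphrased here in Kotlin, the logic is roughly:

```kotlin
// Rough Kotlin paraphrase of Activity.runOnUiThread(); the real implementation is Java.
fun runOnUiThread(action: Runnable) {
    if (Thread.currentThread() != mUiThread) {
        mHandler.post(action)   // not on the main thread: post to the Activity's main-thread Handler
    } else {
        action.run()            // already on the main thread: run the UI update directly
    }
}
```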
Oh, that’s easy:
This function, defined in Activity, checks whether the current thread is the main thread: if not, it posts the Runnable via a Handler; if it is, it performs the UI update directly.
Callbacks are a good thing, but multiple layers of nested callbacks can lead to Callback Hell, such as this logic:
Visit Baidu → Display content (UI) → Download icon → Display icon (UI) → Generate thumbnail → Display thumbnail (UI) → Upload thumbnail → Interface update (UI)
Implementing this logic with one runOnUiThread after another, the pseudocode looks like this:
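Something like the following (a hypothetical sketch; helpers such as visitBaidu() and showContent() are invented names):

```kotlin
// Hypothetical pseudocode: every step nests one level deeper.
thread {
    val page = visitBaidu()
    runOnUiThread {
        showContent(page)                                     // UI
        thread {
            val icon = downloadIcon(page)
            runOnUiThread {
                showIcon(icon)                                // UI
                thread {
                    val thumb = makeThumbnail(icon)
                    runOnUiThread {
                        showThumbnail(thumb)                  // UI
                        thread {
                            val url = uploadThumbnail(thumb)
                            runOnUiThread { refreshUi(url) }  // UI
                        }
                    }
                }
            }
        }
    }
}
```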
A proper thousand-layer cake, one layer inside another. Looking at this code, I don't know about you, but it gives me the shivers (I hear callbacks in front-end JS are even scarier, 23333).
Common workarounds: move nested levels out to the outer layer, avoid anonymous callback functions, and give every callback a name.
4. RxJava
RxJava is built around chained calls; by setting different Schedulers it can flexibly switch between threads to run the corresponding tasks. RxJava is powerful, but because of its steep learning curve, most Android developers are still stuck at the "thread-switching tool + a few operators" stage. As it happens, so am I:
See "RxJava Meditations (1): Do you really think RxJava is easy to use?". Here I only want to show what the code looks like when written with RxJava; I'll come back to it when I understand it better:
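The original snippet is not reproduced; a rough RxJava 2 equivalent of the chain above, in Kotlin and with the same hypothetical helpers (visitBaidu(), showContent(), downloadIcon(), showIcon()), could look like this:

```kotlin
import io.reactivex.Observable
import io.reactivex.android.schedulers.AndroidSchedulers
import io.reactivex.schedulers.Schedulers

// Hypothetical sketch: each subscribeOn()/observeOn() declares where the following steps run.
Observable.fromCallable { visitBaidu() }              // network request
    .subscribeOn(Schedulers.io())                     // upstream work runs on an IO thread
    .observeOn(AndroidSchedulers.mainThread())
    .doOnNext { page -> showContent(page) }           // UI update on the main thread
    .observeOn(Schedulers.io())
    .map { page -> downloadIcon(page) }               // back on an IO thread
    .observeOn(AndroidSchedulers.mainThread())
    .subscribe { icon -> showIcon(icon) }             // final UI update on the main thread
```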
Running it, the log output shows each step executing on the thread it was assigned to: thread switching controlled exactly as you wish~
5. LiveData
LiveData is a reactive programming component provided by Jetpack. It can hold data of any type and notifies its observers when the data changes. Because it is aware of the lifecycle of components such as Activities, Fragments, and Services, it updates the UI only while the component is in an active lifecycle state. It is typically used together with a ViewModel.
MutableLiveData is a mutable LiveData that provides two methods for updating its data: setValue(), which must be called on the main thread, and postValue(), which can be called from a non-main thread.
The following dependency needs to be added:
implementation "androidx.lifecycle:lifecycle-runtime: 2.2.0."
The following is an example of the code used:
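A hypothetical sketch of mine (assuming the ViewModel/LiveData artifacts are on the classpath; viewModel and textView at the call site are assumed to exist):

```kotlin
import androidx.lifecycle.LiveData
import androidx.lifecycle.MutableLiveData
import androidx.lifecycle.ViewModel
import kotlin.concurrent.thread

class CounterViewModel : ViewModel() {
    private val _count = MutableLiveData(0)
    val count: LiveData<Int> = _count                 // read-only view exposed to the UI

    fun refreshInBackground() = thread {
        val latest = (_count.value ?: 0) + 1          // pretend this came from disk/network
        _count.postValue(latest)                      // postValue(): callable from a non-main thread
    }

    fun refreshOnMainThread() {
        _count.value = (_count.value ?: 0) + 1        // setValue(): main thread only
    }
}

// In the Activity/Fragment: observe() follows the component's lifecycle.
viewModel.count.observe(this) { value ->
    textView.text = value.toString()                  // delivered only while the observer is active
}
```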
6. Kotlin coroutines
Using Kotlin coroutines requires adding the coroutine core library and the Android platform library as dependencies (in build.gradle):
implementation 'org.jetbrains.kotlinx:kotlinx-coroutines-core:1.3.7'
implementation 'org.jetbrains.kotlinx:kotlinx-coroutines-android:1.3.7'
The withContext function switches execution to the specified thread and, after the logic inside the closure completes, automatically switches back to the original context to continue. Rewriting the RxJava example as a Kotlin coroutine looks like this:
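The original snippet is not reproduced; a sketch of the same chain with coroutines (GlobalScope is used only to keep the example short, and the helpers remain hypothetical):

```kotlin
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.GlobalScope
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext

// Hypothetical sketch: the nested callbacks become straight-line code.
fun load() = GlobalScope.launch(Dispatchers.Main) {   // coroutine starts on the main thread
    val page = withContext(Dispatchers.IO) {          // switch to an IO thread...
        visitBaidu()
    }                                                 // ...then automatically switch back here
    showContent(page)                                 // UI update, main thread

    val icon = withContext(Dispatchers.IO) { downloadIcon(page) }
    showIcon(icon)                                    // UI update, main thread

    val thumb = withContext(Dispatchers.IO) { makeThumbnail(icon) }
    showThumbnail(thumb)                              // UI update, main thread
}
```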
Using Kotlin coroutines doesn't necessarily reduce the amount of code, but it makes asynchronous code much easier to write; it starts to feel like writing asynchronous code in a synchronous style.
What exactly is a coroutine in Kotlin?
Coroutines
A non-preemptive (cooperative) task scheduling mode in which programs can actively suspend or resume execution.
Relationship to threads
Coroutines are built on top of threads but are much lighter; you can think of them as threads simulated at the user level. Each coroutine is dynamically bound to a kernel thread: scheduling and switching happen in user mode, while the kernel thread actually executes the work. Thread context switches require the kernel to participate, whereas coroutine switches are controlled entirely by user code, avoiding a great deal of interrupt handling and reducing the resources spent on thread context switching and scheduling.
Depending on whether they have their own function call stack, coroutines fall into two classes:
- Stackful coroutines: have their own call stack, can be suspended at any nested function call, and can yield scheduling rights from there;
- Stackless coroutines: have no call stack of their own; the state at the suspension point is kept via language constructs such as state machines or closures;
Coroutines in Kotlin
"Fake" coroutines. Kotlin does not even implement a synchronization mechanism (locking) at the language level; on Kotlin-JVM it relies on the keywords (such as synchronized) that the Java platform provides. In essence, a Kotlin coroutine is just a wrapper built on top of the native Java Thread API.
The API simply hides the asynchronous implementation details, letting us write asynchronous operations as if they were synchronous.
References:
- Good article so far that makes synchronous/asynchronous/blocking/non-blocking /BIO/NIO/AIO so clear
- It is enough to understand the principles of Kotlin coroutine implementation
- Kotlin Coroutines complete analytic series
- Concurrent programming (thread process coroutine)
- Let’s talk about coroutines as well
- What is the difference between thread and process?
- Discussion of thread safety issues
- Why are most UI frameworks single threaded?
- RxJava thread switching principle
- Are Kotlin coroutines really more efficient than Java threads?
- Understanding Kotlin Coroutines in Depth. By Binggan Huo