This is the 18th day of my participation in the August Gengwen Challenge.
Today we are going to introduce one of the most important parts of Golang, the core of the language: goroutine concurrency control. The topic is fairly involved, so we will cover it in several parts, starting with the theory.
Process, thread, coroutine
A coroutine is a lightweight user-mode thread, also called a microthread (English: coroutine), whose scheduling is controlled entirely by the user. Coroutines are often compared with subroutines (functions): a subroutine call always has one entry and one return, and exiting completes the subroutine's execution.
The biggest advantage of coroutines over traditional system-level threads and processes is that they are "lightweight": millions of them can easily be created without depleting system resources, whereas a system can usually sustain at most on the order of ten thousand threads or processes. This is why coroutines are also called lightweight threads.
Compared with multithreading, coroutines also execute more efficiently. Because a coroutine switch is not a thread switch but is controlled by the program itself, there is no thread-switch overhead, and the more concurrent tasks there are, the greater the performance advantage of coroutines over multithreading.
Threading model
Kernel-level threading model
User threads have a one-to-one (1:1) relationship with KSEs (kernel scheduling entities, i.e., the operating system's own kernel threads). The thread libraries of most programming languages (pthreads on Linux, java.lang.Thread in Java, std::thread in C++11, etc.) are wrappers around operating system threads (kernel-level threads): each user thread created is statically associated with its own KSE, so its scheduling is done entirely by the OS scheduler. This approach is simple to implement and takes direct advantage of the threading capabilities provided by the OS, and user threads generally do not interfere with one another. However, thread creation, destruction, and context switching between threads are all performed at the OS level, which has a significant impact on performance in scenarios requiring a large number of threads.
Each thread is scheduled independently by the kernel scheduler, so if one thread blocks, no other threads are affected.
Advantages: With the support of multi-core processor hardware, the kernel space threading model supports true parallelism, allowing another thread to continue executing when one thread is blocked, so concurrency is strong.
Disadvantages: Each user-level thread requires a corresponding kernel-level thread, which is expensive to create and weighs on application performance.
User-level threading model
User threads have a many-to-one (M:1) relationship with the KSE (the operating system's kernel thread). Thread creation, destruction, and coordination among threads are all handled by the user's own thread library and are transparent to the OS kernel; all of a process's threads are dynamically bound to the same KSE at run time. Many languages implement coroutines this way. Compared with kernel-level threads, this approach is lightweight and consumes far fewer system resources, so both the number of threads that can be created and the cost of context switching are much improved. However, the model has a fatal flaw: if a user thread makes a blocking system call (such as a network read in blocking mode), then once the KSE is scheduled off the CPU by the kernel because of that block, all the remaining user threads mapped to it become blocked as well (the whole process hangs).
Advantages: Thread context switches happen in user space, avoiding kernel/user mode switches, which benefits performance.
Disadvantages: All threads run on a single kernel scheduling entity, the kernel thread, which means only one processor can be used at a time; this is unacceptable in a multi-processor environment. In essence, user-level threads solve the concurrency problem but not the parallelism problem. And if one thread enters the kernel and blocks waiting for I/O data, every thread in the process blocks with it.
Two-level threading model
User threads have a many-to-many (M:N) relationship with KSEs (the operating system's own kernel threads). This implementation combines the advantages of the first two models: threads can be dynamically associated with different KSEs at run time, and when a KSE blocks (for example, because one of its threads made a blocking system call), the remaining user threads associated with it can be re-associated with other KSEs. Of course, implementing this dynamic association mechanism is very complicated and must be done by the user, which is one of the model's disadvantages. Concurrency in the Go language is implemented this way: Go's runtime scheduler implements the model itself and is responsible for the dynamic association between Go's "threads" (goroutines) and KSEs. This model is sometimes referred to as the hybrid threading model: the user scheduler "schedules" user threads onto KSEs, and the kernel scheduler schedules KSEs onto CPUs.