I made a dynamic flowchart web page that works like a PPT animation: clicking each element of the flowchart in turn makes it appear step by step. Because the UI is the drawio UI, the dynamic flowchart looks good, the chart is easy to build, and clicking through the elements reproduces a PPT-style reveal.

Address of the dynamic flowchart shown below (after an element appears, wait two seconds; after an element disappears, you can click it again to make it reappear):

http://119.29.32.105:2335/drawio/Golang%E7%9A%84GMP.html

User-level context:

Register context: the data held in the registers at that moment, such as the PC register, which holds the address of the next instruction to execute.

System-level context: the process control block, memory information, and so on.

IO operations: input and output to memory, disk, network, and peripherals; time-consuming but requiring no CPU.

Thread switching:

Thread switching involves only the register context.

When thread 3, running on a single-core CPU, switches to thread 1: thread 3's context at that moment (for example, which statement the program had reached) must be saved, and then the context thread 1 had when it last ran is loaded so that thread 1's task can continue.

End goal: make full use of the CPU's computing time; none of it should be wasted on IO operations, thread switching, or idling.

A single CPU core does one kind of work: computation.

Parallel programming: using multiple CPUs to perform multiple tasks at the same time, one task per CPU.

Concurrent programming: running multiple tasks on a single (or pseudo-single) core. With user-mode threads it runs on a single core; with kernel-mode threads it can span multiple cores.

The role of concurrent programming (the only aspect we discuss here): not wasting CPU time on IO operations.
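The distinction above can be made concrete in Go. This is a minimal sketch (the function `runTasks` and the task names are illustrative, not from the original): `runtime.GOMAXPROCS(1)` restricts the runtime to one core, so the goroutines interleave on that core (concurrency) rather than running simultaneously on several cores (parallelism).

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// runTasks starts one goroutine per task name and collects the results.
// With GOMAXPROCS(1) the goroutines interleave on one core (concurrency)
// rather than running simultaneously on several cores (parallelism).
func runTasks(names []string) []string {
	runtime.GOMAXPROCS(1) // force single-core, concurrent-only execution
	results := make(chan string, len(names))
	var wg sync.WaitGroup
	for _, n := range names {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			results <- name + " done"
		}(n)
	}
	wg.Wait()
	close(results)
	var out []string
	for r := range results {
		out = append(out, r)
	}
	return out
}

func main() {
	fmt.Println(runTasks([]string{"task-A", "task-B"}))
}
```

Raising `GOMAXPROCS` back to `runtime.NumCPU()` (the default) turns the same code into a candidate for parallel execution without any other change.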

Multithreading:

**Question:** What if thread 3's task is urgent? Can the operating system put thread 3 at the front of the queue so that its task executes sooner?

**Preemptive scheduling:** the operating system schedules threads according to their priority, determining the order in which they execute.

Advantage of multithreading: the priority of threads performing IO operations (such as reading data from disk) can be lowered so that IO does not waste large amounts of CPU time.

Disadvantages of multithreading: Thread switching requires saving context and takes time.

Coroutines: lightweight, non-preemptive threads in user mode. The "co" stands for cooperation: coroutines work together to accomplish tasks.

```go
// Pseudo-code: listen on port 80 and serve each request in its own coroutine
func listener() {
	// listen on port 80; for every incoming request:
	go returnHtml()
}

func returnHtml() {
	// read the HTML file from disk (blocking IO)
	// after reading, return the HTML file to the user
}
```

User mode: Switching between coroutines is determined by user-written code.

Lightweight: because no kernel thread switch is involved and each coroutine's context is stored on the coroutine itself, the cost of switching coroutines is minimal.

Non-preemptive: switching between coroutines is not driven by priority preemption; it is decided by the user's code.
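In Go, this user-directed yield can be seen with `runtime.Gosched()`, which voluntarily gives up the processor. This is a sketch (the `demo` helper is illustrative); note that since Go 1.14 the runtime can also preempt goroutines asynchronously, so modern goroutines are not purely cooperative.

```go
package main

import (
	"fmt"
	"runtime"
)

// demo records the order in which the two goroutines make progress.
func demo() []string {
	runtime.GOMAXPROCS(1) // single core: goroutines must take turns
	var events []string
	done := make(chan struct{})
	go func() {
		events = append(events, "background goroutine ran")
		close(done) // publishes the append above to main
	}()
	runtime.Gosched() // voluntarily yield so the other goroutine can run
	<-done            // blocks until the background goroutine finishes
	events = append(events, "main resumed")
	return events
}

func main() {
	fmt.Println(demo())
}
```

Even if `Gosched` did not switch, the receive on `done` guarantees the background goroutine completes before main resumes, so the recorded order is deterministic.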

Disadvantage of coroutines: they run on a single thread, so they can use only one CPU core.

What problems coroutines solve:

  1. Coroutine switching takes less time than thread switching.

  2. Multiple coroutines on one thread avoid typical multithreading problems, such as races on shared resources.
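Point 1 is visible in how cheaply goroutines can be created: each starts with a stack of only a few kilobytes, so spawning a hundred thousand of them is routine, which would be impractical with OS threads. A minimal sketch (the `spawnMany` helper is illustrative):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// spawnMany launches n goroutines and counts how many actually ran.
// n can be very large because each goroutine starts with a tiny stack
// (a few KB), unlike an OS thread.
func spawnMany(n int) int64 {
	var count int64
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt64(&count, 1) // safe concurrent increment
		}()
	}
	wg.Wait()
	return count
}

func main() {
	fmt.Println(spawnMany(100000)) // prints 100000
}
```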

Question: how can multi-core CPUs be used on top of coroutines?

Scheme: multiple kernel threads + multiple coroutines.

Problem: coroutine scheduling is decided by user-written code, while kernel-mode threads are scheduled by the operating system, so this scheme still involves shared resources. That undoubtedly makes the scheduling harder to design when users write the code. For applications dominated by network IO, however, the scheduling complexity is not high.
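The "still involves shared resources" point can be sketched in Go: once goroutines run across several kernel threads (the default, `GOMAXPROCS = NumCPU`), shared state must be synchronized explicitly. The `addConcurrently` helper below is illustrative, not from the original:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// addConcurrently increments a shared counter from several goroutines
// spread across all cores; the mutex is the synchronization that the
// multi-thread + multi-coroutine scheme forces on user code.
func addConcurrently(workers, perWorker int) int {
	runtime.GOMAXPROCS(runtime.NumCPU()) // use every core (the default anyway)
	var mu sync.Mutex
	var wg sync.WaitGroup
	total := 0
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < perWorker; j++ {
				mu.Lock() // shared state must still be synchronized
				total++
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	return total
}

func main() {
	fmt.Println(addConcurrently(4, 1000)) // prints 4000
}
```

Removing the mutex would make this a data race: the goroutines now genuinely run in parallel on different kernel threads.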

Golang core: the GMP model -> www.zhihu.com/question/20…

Here are two ways Golang makes more efficient use of CPU time:

If the kernel thread M0 is interrupted by the operating system, its P is moved to an idle kernel thread, such as M1.

If M1 is idle at this point, the P attached to it will steal Gs from other Ps' run queues and mount them on the idle M1, so CPU utilization is higher.
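The stealing idea can be sketched as a toy model in Go. This is only an illustration of the concept: the real runtime steals roughly half of another P's local run queue (and also checks the global queue and network poller), which this sketch does not attempt to reproduce.

```go
package main

import "fmt"

// steal is a toy model of the scheduler's work stealing: an idle "P"
// takes half of a busy P's queued goroutine IDs. The real runtime's
// stealing logic is more involved; this sketch only shows the idea.
func steal(busy, idle []int) (nowBusy, nowIdle []int) {
	n := len(busy) / 2
	nowIdle = append(idle, busy[len(busy)-n:]...) // idle P takes half
	nowBusy = busy[:len(busy)-n]                  // busy P keeps the rest
	return nowBusy, nowIdle
}

func main() {
	p0 := []int{1, 2, 3, 4} // goroutines queued on P0 (running on M0)
	p1 := []int{}           // P1 on the idle thread M1 has nothing to run
	p0, p1 = steal(p0, p1)
	fmt.Println(p0, p1) // prints [1 2] [3 4]
}
```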

End goal: make full use of the CPU's computing time; none of it should be wasted on IO operations, thread switching, or idling.