This is the 26th day of my participation in the Gwen Challenge.

Recap

If you don’t already know Go, I suggest you first read my Go from Zero series.

In this series of articles, I’ll take a closer look at the Go language and introduce advanced features such as package management, reflection, and concurrency.

The previous article focused on the threading model common to the Go language. This article introduces the MPG threading model in Go, which is an improvement on the two-level threading model.

Overview of the MPG model

The MPG threading model in Go improves the two-level threading model to a certain extent, making it more flexible for scheduling between threads. It consists of three main modules, as shown in the figure:

  • Machine (M): a machine corresponds to one kernel thread; it is the mapping of a kernel thread into the Go process.
  • Processor (P): a processor represents the context necessary to execute a piece of Go code; it can be understood as the processor of user-code logic.
  • Goroutine (G): a goroutine encapsulates a piece of Go code; it is a lightweight user thread.

To ease the description, we’ll use M, P, and G to refer to machine, Processor, and Goroutine, respectively. As can be seen from the figure:

Each M is bound to one kernel thread; M and P are one-to-one, while P and G are one-to-many. The binding between an M and its kernel thread does not change during execution: an M is bound to exactly one kernel thread for its entire lifetime. The relationships between M and P, and between P and G, however, are dynamic.

In actual operation, only the combination of an M and a P provides an effective running environment for a G. Runnable Gs are arranged in a queue attached to a particular P, waiting to be scheduled and executed, as shown in the following figure:

In the figure above, M and P together form a basic runtime environment: G0 is in the running state, while the Gs queued on the right are waiting.

An M is generally created when there are not enough Ms to pair with Ps and provide running environments for Gs, so in many cases there may be more Ms than Ps. Within a single Go process, the maximum number of Ps determines the program’s degree of parallelism. It can be set via the GOMAXPROCS environment variable or by calling the runtime.GOMAXPROCS function.

M and P combine and disconnect as needed, so that the Gs queued on a P can be executed promptly. For example, if G0 in the figure above blocks its M on network I/O, then P will carry the remaining Gs over to another M. This new M1 may be newly created, or retrieved from the scheduler’s free M list if one is available there, which avoids creating Ms excessively, as shown in the figure:

When M’s kernel thread is awakened again, M will attempt to acquire a P context for G0, possibly from the scheduler’s free P list. If that fails, M puts G0 into the scheduler’s global runnable G queue, where it waits for some P to pick it up. To keep the execution of Gs balanced, a P that has exhausted its own runnable G queue periodically fetches Gs from the scheduler’s global runnable queue, and may even steal Gs from other Ps’ runnable queues!

Summary

This article mainly introduced the MPG threading model of the Go language. The truth of Go’s concurrency is this: “Do not communicate by sharing memory; instead, share memory by communicating.” When a P is associated with multiple Gs, their execution is interleaved — that is concurrency. While a P is executing one coroutine, the others wait. When the executing coroutine encounters a blocking operation, such as I/O, Go’s processor switches to executing other coroutines: it does not know how long the blocked one will take, so it does not wait for it to finish.

Go’s concurrency looks preemptive, but the coroutines are in fact non-preemptive: a coroutine actively cedes control. That is to say, when the I/O operation above occurs, the scheduler does not forcibly switch to other coroutines; only after the current coroutine cedes control does the scheduler execute the others.

To read the latest articles, follow the WeChat official account: AOHO Qiusuo.