Concurrency is Go's calling card, and the language provides two main tools for concurrent programming:
- Goroutine: lets multiple tasks run concurrently without interfering with each other
- Channel: passes data between goroutines
This article delves into how channels work, assuming you already know the basic concepts. A channel has the following properties:
- It is safe for multiple Goroutines to access channels simultaneously
- Tasks in a channel are first in, first out (FIFO)
- A channel can pass values between goroutines
- Channels affect Goroutine scheduling
Let’s explore how these features are implemented.
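To make these properties concrete before looking inside, here is a tiny example using nothing beyond the standard library: values come out in exactly the order they were sent, and the receiving side simply blocks until a value is available.

package main

import "fmt"

func main() {
    ch := make(chan int, 3)

    // Send values in order; the channel preserves FIFO order.
    go func() {
        for i := 1; i <= 3; i++ {
            ch <- i
        }
        close(ch)
    }()

    // Receive them in the same order: prints 1, 2, 3.
    for v := range ch {
        fmt.Println(v)
    }
}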
1. Create the channel
There are two types of channels that can be created:
- Buffered Channel, which specifies the size of the buffer at creation time
- An unbuffered channel, also known as a synchronous channel, can be thought of as a special case of a buffered channel whose buffer size is 0
ch := make(chan int, 3) // buffered channel with a buffer size of 3
ch := make(chan int) // unbuffered channel
The underlying data structure of a channel is called hchan, and it is defined as follows:
type hchan struct {
    buf      unsafe.Pointer // circular buffer that holds the queued elements
    lock     mutex          // protects the fields below as well as buf itself
    sendx    uint           // index in buf where the next send will write
    recvx    uint           // index in buf where the next receive will read
    sendq    waitq          // goroutines blocked on send, waiting for space
    recvq    waitq          // goroutines blocked on receive, waiting for data
    qcount   uint           // number of elements currently in buf
    dataqsiz uint           // capacity of buf (the size passed to make)
    elemsize uint16         // size of a single element
    closed   uint32         // non-zero once the channel has been closed
    elemtype *_type         // element type
}
buf is a circular queue that stores the data sent to a channel, and lock protects it: a goroutine must acquire the lock before it can touch a channel's buf.
sendx is the position in buf where the next value will be written, and recvx is the position where the next value will be read. sendq and recvq are two wait queues; these two structures are important, and we will come back to them shortly.
qcount is the number of values currently stored in buf, dataqsiz is the maximum number of values buf can hold, and elemtype describes the element type.
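Two of these fields are visible from ordinary Go code: len(ch) reports how many values are currently queued (qcount) and cap(ch) reports the buffer size (dataqsiz).

package main

import "fmt"

func main() {
    ch := make(chan int, 3)
    ch <- 1
    ch <- 2

    fmt.Println(len(ch)) // 2: qcount, the number of values currently in buf
    fmt.Println(cap(ch)) // 3: dataqsiz, the size of the circular buffer
}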
When a channel is created with make, the runtime actually allocates space on the heap, initializes an hchan structure, and returns a pointer to it. This is why you can pass a channel around directly instead of taking its address: a channel value is itself a pointer to hchan.
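A small demonstration of that last point: the channel handed to the goroutine below is passed by value, yet both copies refer to the same underlying hchan, so there is no need for *chan int.

package main

import "fmt"

// send gets a copy of the channel value, but the copy still points to the
// same hchan that main created.
func send(ch chan int) {
    ch <- 42
}

func main() {
    ch := make(chan int)
    go send(ch)
    fmt.Println(<-ch) // prints 42: both copies refer to one underlying channel
}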
2. Send and receive
To take a closer look at how a channel sends and receives data, assume that there are two goroutines: G1 and G2, with one sending data and one receiving data:
// G1: the sender (tasks and taskCh are assumed to be declared elsewhere)
func main() {
    for _, task := range tasks {
        taskCh <- task
    }
}

// G2: the receiver (process is assumed to be declared elsewhere)
func worker() {
    for {
        task := <-taskCh
        process(task)
    }
}
Suppose G1 runs first and sends data to the channel. Sending takes three steps:
G1 acquires the lock, copies the value into buf (note that this is a copy of the original value, not a reference to it), and then releases the lock.
When G2 runs and finds data in buf, consuming it is the mirror image, again in three steps:
G2 acquires the lock, copies a value out of buf into its own variable, and releases the lock.
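As a rough mental model of these two paths, here is a toy version of the lock-protected circular buffer. It is a simplified sketch rather than the runtime's actual chansend/chanrecv code, the names (toyChan, send, recv) are made up for illustration, and the cases where a goroutine has to block are left out; those come next.

package main

import "sync"

// toyChan models only the happy path of hchan: a mutex-protected circular buffer.
type toyChan struct {
    lock   sync.Mutex
    buf    []int
    sendx  int // next slot to write
    recvx  int // next slot to read
    qcount int // values currently stored
}

// send: acquire the lock, copy the value into buf, release the lock.
func (c *toyChan) send(v int) bool {
    c.lock.Lock()
    defer c.lock.Unlock()
    if c.qcount == len(c.buf) {
        return false // full: the real runtime would block the sender here
    }
    c.buf[c.sendx] = v // a copy of the value, not a reference to it
    c.sendx = (c.sendx + 1) % len(c.buf)
    c.qcount++
    return true
}

// recv: acquire the lock, copy the value out of buf, release the lock.
func (c *toyChan) recv() (int, bool) {
    c.lock.Lock()
    defer c.lock.Unlock()
    if c.qcount == 0 {
        return 0, false // empty: the real runtime would block the receiver here
    }
    v := c.buf[c.recvx]
    c.recvx = (c.recvx + 1) % len(c.buf)
    c.qcount--
    return v, true
}

func main() {
    c := &toyChan{buf: make([]int, 3)}
    c.send(1)
    c.send(2)
    v, _ := c.recv()
    _ = v // v == 1: FIFO order is preserved
}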
But in most cases things do not go this smoothly. Suppose G2 now takes a long time to process each task while G1 keeps sending:
buf quickly fills up and G1 blocks; it can only continue sending after G2 has consumed some values from buf and freed up space.
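A small demonstration of this, again using only the standard library (the channel name and sizes are arbitrary): with a buffer of size 2, the third send cannot complete until the slow receiver has freed a slot.

package main

import (
    "fmt"
    "time"
)

func main() {
    taskCh := make(chan int, 2)

    // G1: the sender fills the buffer immediately, then blocks on the third send.
    go func() {
        for i := 1; i <= 3; i++ {
            taskCh <- i
            fmt.Println("sent", i)
        }
    }()

    // G2: a slow receiver; each receive frees one slot in buf,
    // which lets the blocked sender continue.
    time.Sleep(time.Second)
    for i := 0; i < 3; i++ {
        fmt.Println("received", <-taskCh)
        time.Sleep(100 * time.Millisecond)
    }
}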
Blocking and waking goroutines is the job of the Go runtime scheduler, which brings up a concept that gets mentioned a lot: the runtime. For now, simply think of the runtime as the environment in which Go code runs, responsible for memory allocation, garbage collection, scheduling, and so on.
A goroutine is a user-space thread managed by the Go runtime rather than by the operating system. It is lighter than an operating system thread and switching between goroutines is cheaper, but goroutines ultimately run on operating system threads.
The runtime scheduler follows the GMP model: G is a goroutine, M is an operating system thread, and P is a scheduling context (processor). P maintains a queue of runnable goroutines and is responsible for scheduling them onto M.
When G1 blocks on the send, P marks G1 as waiting, detaches it from M, and pulls another runnable goroutine from the queue to run. G1 is blocked, but the thread that was running G1 is not, so it keeps doing useful work and the impact on performance stays relatively small.
G1 also does something before it blocks. Remember sendq in hchan above? This is where it comes into play. Before parking, G1 creates a sudog and puts it on sendq:
When G2 consumes a value, it checks sendq, copies Task4 from the waiting sudog directly into buf, and marks G1 as runnable.
Note that G2 puts Task4 into buf itself rather than waking G1 first; this is a performance optimization. If G1 were woken up first, it would have to acquire the lock again before it could place Task4 into buf.
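Below is a trimmed-down illustration of this hand-off. The sudog and waitq names mirror the runtime's types, but the code is only a sketch for explanation (goroutineStub and recvFromFullChan are made-up names), not the real chanrecv implementation, which works with unsafe pointers and the scheduler.

package main

import "fmt"

// A trimmed-down sudog: a parked goroutine plus a pointer to the value it
// wanted to send (Task4 in the example above).
type sudog struct {
    g    *goroutineStub
    elem *int
    next *sudog
}

type goroutineStub struct {
    runnable bool
}

// waitq is the wait list used for sendq and recvq (the real one is doubly linked).
type waitq struct {
    first *sudog
}

func (q *waitq) dequeue() *sudog {
    s := q.first
    if s != nil {
        q.first = s.next
    }
    return s
}

// recvFromFullChan sketches what the receiver does when buf is full and sendq
// is not empty: take one value out of buf for itself, copy the parked sender's
// value into the freed slot, and mark the sender runnable, all without waking
// G1 just to have it take the lock again.
func recvFromFullChan(buf []int, recvx, sendx *int, sendq *waitq) int {
    v := buf[*recvx]
    *recvx = (*recvx + 1) % len(buf)

    if s := sendq.dequeue(); s != nil {
        buf[*sendx] = *s.elem // Task4 goes straight into buf
        *sendx = (*sendx + 1) % len(buf)
        s.g.runnable = true // G1 can be scheduled again
    }
    return v
}

func main() {
    buf := []int{1, 2, 3} // a full buffer
    recvx, sendx := 0, 0
    task4 := 4
    sendq := &waitq{first: &sudog{g: &goroutineStub{}, elem: &task4}}

    got := recvFromFullChan(buf, &recvx, &sendx, sendq)
    fmt.Println(got, buf) // 1 [4 2 3]: Task4 took the freed slot, FIFO order intact
}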
That is what happens when G1, the sender, blocks. What about when G2, the receiver, blocks?
G2 will also create a sudog, but instead of putting it in sendq, it will put it in recvq:
This sudog holds a pointer to the memory that will receive the value sent by G1. And here the most surprising thing happens: the receiving variable (task in the code above) is a local variable, so it lives on G2's stack.
When G1 sends, it writes the value directly to the memory address that the sudog points to instead of putting it into buf. In other words, G1 writes directly onto G2's stack. This, too, is done for performance, so that G2 does not have to acquire the lock when it wakes up.
This is the only place where one goroutine's stack is written to by another; it does not happen anywhere else.
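Here is an equally simplified sketch of that direct hand-off. Again the names are illustrative (directSend is not a real runtime function) and the real code works with unsafe pointers and the scheduler, but the idea is the same: the blocked receiver's sudog carries a pointer to its stack variable, and the sender writes through that pointer.

package main

import "fmt"

// A blocked receiver is represented by a sudog whose elem field points at the
// stack variable waiting to be filled (task in the example code above).
type sudog struct {
    elem     *int
    runnable bool
}

// directSend sketches the fast path when recvq is not empty: the sender copies
// the value straight into the receiver's stack slot and marks the receiver
// runnable, never touching buf and never taking turns on the lock.
func directSend(recvq []*sudog, v int) []*sudog {
    s := recvq[0]
    *s.elem = v       // write directly into G2's stack variable
    s.runnable = true // G2 resumes with the value already in place
    return recvq[1:]
}

func main() {
    var task int // imagine this lives on G2's stack
    recvq := []*sudog{{elem: &task}}

    recvq = directSend(recvq, 42)
    fmt.Println(task) // 42: the sender wrote the value into task directly
    _ = recvq
}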
3. Summary
A channel uses a mutex (lock) to make concurrent access from multiple goroutines safe, and it maintains a circular queue so that data is delivered first in, first out (FIFO). This is how data is passed between goroutines; when a goroutine blocks on a channel, the runtime scheduler parks it and wakes it up again later.
However, the channel implementation also does something unorthodox: it writes data across goroutine stacks, which complicates garbage collection. Avoiding that would cost a lot of performance, so it is a trade-off between performance and simplicity.
Text / Rayjun