Without further ado, let's cut to the chase.

Overall structure diagram of channel

Brief description:

  • buf: specific to buffered channels; it stores the cached data. It is a circular queue (ring buffer).
  • sendx and recvx: record the send and receive indexes into the circular queue buf.
  • lock: a mutex.
  • recvq and sendq: wait queues of sudog structs, which are abstractions of goroutines blocked on receive (<-channel) or send (channel <- x), respectively. Each is a doubly linked list.

The source code is in runtime/chan.go (current version: Go 1.11). The structure is hchan.

type hchan struct {
    qcount   uint           // total data in the queue
    dataqsiz uint           // size of the circular queue
    buf      unsafe.Pointer // points to an array of dataqsiz elements
    elemsize uint16
    closed   uint32
    elemtype *_type // element type
    sendx    uint   // send index
    recvx    uint   // receive index
    recvq    waitq  // list of recv waiters
    sendq    waitq  // list of send waiters

    // lock protects all fields in hchan, as well as several
    // fields in sudogs blocked on this channel.
    //
    // Do not change another G's status while holding this lock
    // (in particular, do not ready a G), as this can deadlock
    // with stack shrinking.
    lock mutex
}

Let’s take a closer look at how the various parts of hchan are used.

Let’s start with creation

Let’s first create a channel.

ch := make(chan int, 3)

Creating a channel essentially instantiates an hchan structure in memory and returns a pointer to it as ch; it is this pointer that we pass between functions. That is why we never need a pointer *to* a channel: a channel value is itself a pointer under the hood.
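The pointer semantics can be seen directly: pass a channel to a function by value and the callee still operates on the same underlying hchan. A minimal sketch (the function name `fill` is my own, for illustration):

```go
package main

import "fmt"

// fill takes the channel by value. Because a channel value is already a
// pointer to the underlying hchan, both copies refer to the same queue.
func fill(ch chan int) {
	ch <- 42
}

func main() {
	ch := make(chan int, 1)
	fill(ch)
	fmt.Println(<-ch) // prints 42: fill's copy and ours share one hchan
}
```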

Send (ch <- x) and recv (<-ch) on a channel

Let’s start with a question: if you wanted goroutines to access a structure on a first-in, first-out (FIFO) basis, what would you do? Lock it! Right: at its core a channel relies on a lock, and hchan itself contains a mutex.

How is the queue implemented in a channel?

The channel contains a cache, buf, which holds the queued data (when a buffered channel is instantiated). Let’s first look at how the queue works, using the same channel we just created:

ch := make(chan int, 3)

When using send (ch <- x) or recv (<-ch), you must first lock the hchan structure.

Then start sending (ch <- x) data. One:

ch <- 1

two

ch <- 1

three

ch <- 1

At this point, the queue is full.
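The full state is observable from user code: len(ch) reports hchan's qcount and cap(ch) reports its dataqsiz. A quick check:

```go
package main

import "fmt"

func main() {
	ch := make(chan int, 3)
	ch <- 1
	ch <- 1
	ch <- 1
	// qcount now equals dataqsiz: a fourth send would block.
	fmt.Println(len(ch), cap(ch)) // prints "3 3"
}
```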

Then recv (<-ch), the reverse operation, which also requires taking the lock.

Then start receiving (<-ch) data. One:

<-ch

two

<-ch

three

<-ch


Note how buf, recvx, and sendx change across the two sequences above: recvx and sendx track positions in the circular queue buf. As for why a channel uses a circular queue for its cache, I think it makes locating the current send or receive position cheap during dynamic send/recv: you simply advance the index around the ring.

Because data is written into and read out of the circular queue in order, the cache preserves the FIFO principle.
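The ring-and-index bookkeeping can be sketched in a few lines. This is a simplified model of buf/sendx/recvx only, not the runtime's actual code: no locking, no blocking, and the type name `ring` is my own.

```go
package main

import "fmt"

// ring models hchan's circular buffer: sendx and recvx wrap around
// when they reach the end, which is what makes the queue circular.
type ring struct {
	buf          []int
	sendx, recvx uint
	qcount       uint
}

func (r *ring) send(v int) {
	r.buf[r.sendx] = v
	r.sendx++
	if r.sendx == uint(len(r.buf)) {
		r.sendx = 0 // wrap around
	}
	r.qcount++
}

func (r *ring) recv() int {
	v := r.buf[r.recvx]
	r.recvx++
	if r.recvx == uint(len(r.buf)) {
		r.recvx = 0 // wrap around
	}
	r.qcount--
	return v
}

func main() {
	r := &ring{buf: make([]int, 3)}
	r.send(1)
	r.send(2)
	r.send(3)
	fmt.Println(r.recv(), r.recv(), r.recv()) // prints "1 2 3": FIFO order
}
```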

Detailed operation of send/recv

Note: each of the steps above on the cached queue requires taking the lock!

The operation details of each step can be refined as follows:

  • First, acquire the lock
  • Second, copy the data from the sending goroutine into the queue (or from the queue into the receiving goroutine)
  • Third, release the lock

(Animated diagrams in the original illustrate each step of the sending process and the receiving process.)

Do not communicate by sharing memory; instead, share memory by communicating. The concrete implementation: the channel copies data from one end to the other. That is what a channel is.
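The copy semantics are worth seeing once: a value sent into a buffered channel is copied into buf, so mutating the original afterwards does not affect what the receiver gets. A small demonstration (the struct name `msg` is my own):

```go
package main

import "fmt"

type msg struct{ n int }

func main() {
	ch := make(chan msg, 1)

	m := msg{n: 1}
	ch <- m  // the struct is copied into the channel's buffer
	m.n = 99 // mutating the original after the send changes nothing in buf

	got := <-ch
	fmt.Println(got.n) // prints 1: the receiver got a copy, not shared memory
}
```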

What happens when the channel cache is full? How does this work?

A send (ch <- x) blocks the current goroutine when the buffer is full (or when the channel is unbuffered), and a recv (<-ch) blocks when there is nothing to receive.

As we know, goroutines are user-space threads, which Go must schedule itself; the runtime scheduler does this for us. I will not elaborate on Go's scheduling model and the GMP model here. If you are unfamiliar with them, please refer to my other article (Go Scheduling Principle).

Take send (ch <- x) as the example:

// In goroutine1, call it G1

ch := make(chan int, 3)

ch <- 1
ch <- 1
ch <- 1

At this point, G1 is running normally. When it sends again (ch <- 1), it proactively calls into the Go scheduler, which parks G1 and releases M for other goroutines to use.

At the same time, G1 is wrapped in a sudog structure, holding the G1 pointer and the element being sent, and stored in hchan's sendq waiting to be woken.

So, when does G1 wake up? This is where G2 makes its grand entrance.

G2 performs a recv: p := <-ch.

G2 fetches data from the cached queue; the channel pops G1 from the wait queue, pushes G1's pending value into the cache, then calls the Go scheduler to wake G1 and place it back on the runnable goroutine queue.

What if G2 performs recv first?

You can run the same reasoning in reverse. First of all:

This time, G2 proactively calls into the Go scheduler, which parks G2 and releases M for other goroutines to use. G2 is likewise wrapped in a sudog structure, holding the G2 pointer and an empty slot to receive into, and stored in hchan's recvq waiting to be woken.

At this point, goroutine G1 happens to start sending data to the channel: ch <- 1. Now something very interesting happens:

Instead of locking the channel and putting the data into the cache, G1 copies the data directly onto G2's stack. This is elegant: when G2 wakes, it no longer needs to acquire the channel's lock and fetch from the cache. A memory copy is saved and efficiency improves.

What follows is then obvious.

