When it comes to Go channels, many people reach for words like “elegant” and “philosophy” to describe them. Yet channels are probably one of Go’s most error-prone features. In many cases we assume a channel will be closed or drained, but it never is, and that is exactly how channel-related leaks happen.
This article assumes you are already familiar with the basics of Go channels. If you are not, you can start by reading “What To Do for Newbies using Go Channel”. In fact, not closing a channel can also cause a memory leak, which I explained in detail in that previous article; here I assume you already understand leaks of that kind.
Without further ado, let’s look at the code.
func TestLackOfMemory(t *testing.T) {
    fmt.Println("NumGoroutine:", runtime.NumGoroutine())
    chanLackOfMemory()
    time.Sleep(time.Second * 3) // Wait for the goroutine to finish so we don't print too early
    fmt.Println("NumGoroutine:", runtime.NumGoroutine())
}

func chanLackOfMemory() {
    errCh := make(chan error) // (1)
    go func() {               // (5)
        time.Sleep(2 * time.Second)
        errCh <- errors.New("chan error") // (2)
    }()
    var err error
    select {
    case <-time.After(time.Second): // (3) could also be <-ctx.Done()
        fmt.Println("Timeout")
    case err = <-errCh: // (4)
        if err != nil {
            fmt.Println(err)
        } else {
            fmt.Println(nil)
        }
    }
}
What do you think the output is? The correct output is as follows:
NumGoroutine: 2
Timeout
NumGoroutine: 3
This is a classic scenario in which a Go channel causes a memory leak. From the output (two goroutines at the start, three at the end) we know that one goroutine still had not exited when the test function finished. The reason is that the errCh created at (1) is an unbuffered channel: a send on it blocks until someone receives, and a receive blocks until someone sends.
The receive at (4) blocks because no sender has sent anything to errCh yet. When the timeout at (3) fires one second later, “Timeout” is printed and the function returns, so the receive at (4) never completes. The goroutine only starts the send at (2) after “Timeout” has been printed; by then the outer goroutine has already exited, errCh has no receiver left, and the send at (2) blocks forever. The goroutine containing (2) therefore never exits, and we have a memory leak. If you have a lot of code like this, or run code of this shape inside a for loop, the unexited goroutines pile up over time and eventually cause an OOM.
This case is easy to fix: just give the channel a buffer, i.e. errCh := make(chan error, 1). The output then becomes the following, which shows that the goroutine we created has exited.
NumGoroutine: 2
Timeout
NumGoroutine: 2
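For reference, here is a minimal sketch of the fixed version (the function name chanLackOfMemoryFixed is made up for illustration; it assumes the same imports as the test above). Only the buffer at (1) changes: the late send at (2) simply lands in the buffer, the goroutine exits, and the channel is eventually garbage-collected.

func chanLackOfMemoryFixed() {
    errCh := make(chan error, 1) // (1) buffered: the send at (2) never blocks
    go func() {                  // (5)
        time.Sleep(2 * time.Second)
        errCh <- errors.New("chan error") // (2) goes into the buffer even if nobody receives
    }()
    var err error
    select {
    case <-time.After(time.Second): // (3)
        fmt.Println("Timeout")
    case err = <-errCh: // (4)
        if err != nil {
            fmt.Println(err)
        } else {
            fmt.Println(nil)
        }
    }
}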
Some readers might instead want to close the channel with defer close(errCh), for example by changing the code at (1) to the following (which is wrong):
errCh := make(chan error)
defer close(errCh)
The send at (2) blocks because there is no receiver, and it is still blocked when the deferred close(errCh) runs. In other words, the goroutine is still sending on errCh at the moment the channel is closed. In Go, closing a channel that a goroutine is blocked sending on makes that send panic with “send on closed channel”, so this approach is wrong.
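To see the rule in isolation, here is a minimal sketch (the function and variable names are made up for illustration; it assumes the errors and time imports used above) where a channel is closed while a send on it is still blocked:

// Minimal illustration: a send blocked on an unbuffered channel panics
// with "send on closed channel" once the channel is closed.
func closeWhileSending() {
    ch := make(chan error) // unbuffered and nobody receives, so the send below blocks
    go func() {
        time.Sleep(time.Second)
        close(ch) // runs while the send below is still blocked
    }()
    ch <- errors.New("boom") // blocks, then panics when ch is closed
}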
Alternatively, you might put defer close(errCh) as the first statement of the goroutine at (5). But since the send at (2) blocks forever, that deferred close(errCh) never gets to run, so this is wrong too. And even if the roles of sender and receiver at (2) and (4) were swapped so that the channel could be closed, the receiver would only get a meaningless zero value.
In this example there is only one sender and it sends only once, so a buffer of one is enough. In other situations there may be many sends (or many senders), and then the buffer would have to be at least as large as the total number of sends. Once the buffer is full, the next send blocks its goroutine; if the receiver has already quit by that point, at least one goroutine can never exit, and we have a memory leak again. Take the following code as an example; after the explanation above, see whether you can spot the problem.
func TestLackOfMemory2(t *testing.T) {
    fmt.Println("NumGoroutine:", runtime.NumGoroutine())
    chanLackOfMemory2()
    time.Sleep(time.Second * 3) // Wait for the goroutines to finish so we don't print too early
    fmt.Println("NumGoroutine:", runtime.NumGoroutine())
}

func chanLackOfMemory2() {
    ich := make(chan int, 100) // (3)
    // sender
    go func() {
        defer close(ich)
        for i := 0; i < 10000; i++ {
            ich <- i
            time.Sleep(time.Millisecond) // Don't send too fast
        }
    }()
    // receiver
    go func() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()
        for i := range ich { // (2)
            if ctx.Err() != nil { // (1)
                fmt.Println(ctx.Err())
                return
            }
            fmt.Println(i)
        }
    }()
}
// Output:
// NumGoroutine: 2
// 0
// 1
// ... (omitted) ...
// 789
// context deadline exceeded
// NumGoroutine: 3
Here we thought we were using the channel’s buffer wisely: the sender loops and then closes the channel, and the receiver uses for range to keep reading values until the channel is closed. But at (1), inside the receiver’s goroutine, we added a check that makes the receiver return before the channel at (2) has been fully drained. Even though the channel at (3) has a buffer, the sender keeps sending in a loop, so the buffer fills up quickly and the next send blocks forever. For this situation we can use an extra stop channel to tell the sender’s goroutine to stop. The method is as follows:
func TestLackOfMemory2(t *testing.T) {
    fmt.Println("NumGoroutine:", runtime.NumGoroutine())
    chanLackOfMemory2()
    time.Sleep(time.Second * 3) // Wait for the goroutines to finish so we don't print too early
    fmt.Println("NumGoroutine:", runtime.NumGoroutine())
}

func chanLackOfMemory2() {
    ich := make(chan int, 100)
    stopCh := make(chan struct{})
    // sender
    go func() {
        defer close(ich)
        for i := 0; i < 10000; i++ {
            select {
            case <-stopCh:
                return
            case ich <- i:
            }
            time.Sleep(time.Millisecond) // Don't send too fast
        }
    }()
    // receiver
    go func() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()
        for i := range ich {
            if ctx.Err() != nil {
                fmt.Println(ctx.Err())
                close(stopCh)
                return
            }
            fmt.Println(i)
        }
    }()
}
// Output:
// NumGoroutine: 2
// 0
// 1
// ... (omitted) ...
// 789
// context deadline exceeded
// NumGoroutine: 2
One might ask: what if the sender is still sending when the receiver goroutine closes the stop channel? Won’t that leak?
The answer is no, because there are only two possibilities. Either the value goes into the buffer, and on the next iteration the select sees that the stop channel has been closed and the sender goroutine returns; or the buffer is full (or the channel is unbuffered) and the sender is blocked inside the select, in which case the closed stop channel makes the <-stopCh case ready, the select completes, and the sender goroutine returns as well.
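If you want to convince yourself of the second case, here is a minimal sketch (the function and variable names are made up for illustration; it assumes the fmt and time imports used above) where the sender is already blocked inside the select when the stop channel is closed, and still exits cleanly:

// Minimal sketch: a sender blocked inside select is released when stopCh is closed.
func stopChannelUnblocksSender() {
    ich := make(chan int) // unbuffered, and nobody receives, so the send blocks
    stopCh := make(chan struct{})
    done := make(chan struct{})
    go func() { // sender
        defer close(done)
        select {
        case ich <- 1: // blocked: there is no receiver
        case <-stopCh: // becomes ready as soon as stopCh is closed
            return
        }
    }()
    time.Sleep(100 * time.Millisecond) // let the sender block in the select
    close(stopCh)                      // release the blocked sender
    <-done                             // the sender goroutine has exited, no leak
    fmt.Println("sender exited")
}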
To sum up, these are the two Go channel leak patterns we usually run into: a leak with a single send, and a leak with many sends (or many senders). If you know of other memory leaks caused by Go channels, feel free to leave a comment in the comments section.
Let’s take a closer look at these two memory leak cases:
func chanLackOfMemory() {
    errCh := make(chan error) // (1)
    go func() {               // (5)
        time.Sleep(2 * time.Second)
        errCh <- errors.New("chan error") // (2)
    }()
    var err error
    select {
    case <-time.After(time.Second): // (3)
        fmt.Println("Timeout")
    case err = <-errCh: // (4)
        if err != nil {
            fmt.Println(err)
        } else {
            fmt.Println(nil)
        }
    }
}

func chanLackOfMemory2() {
    ich := make(chan int, 100) // (3)
    // sender
    go func() {
        defer close(ich)
        for i := 0; i < 10000; i++ {
            ich <- i
            time.Sleep(time.Millisecond) // Don't send too fast
        }
    }()
    // receiver
    go func() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()
        for i := range ich { // (2)
            if ctx.Err() != nil { // (1)
                fmt.Println(ctx.Err())
                return
            }
            fmt.Println(i)
        }
    }()
}
You can see that, whether there is a single send or many sends, no leak occurs as long as the receiver’s goroutine does not exit before everything in the channel has been received. If the receiver does need to exit before the channel is closed, either size the channel’s buffer appropriately or use a dedicated stop channel to tell the sender to stop.
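As a closing aside, the stop channel in the second example can also be replaced by the context that the receiver already uses. This is only a sketch of that variant (the function name chanWithContext is made up; the caller is assumed to pass in a context with a timeout, e.g. from context.WithTimeout), with the sender selecting on ctx.Done() instead of a dedicated stop channel:

// Sketch: sharing the receiver's context with the sender instead of a stop channel.
func chanWithContext(ctx context.Context) {
    ich := make(chan int, 100)
    // sender
    go func() {
        defer close(ich)
        for i := 0; i < 10000; i++ {
            select {
            case <-ctx.Done(): // the receiver (or caller) gave up: stop sending
                return
            case ich <- i:
            }
            time.Sleep(time.Millisecond)
        }
    }()
    // receiver
    go func() {
        for i := range ich {
            if ctx.Err() != nil {
                fmt.Println(ctx.Err())
                return
            }
            fmt.Println(i)
        }
    }()
}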
This article draws on “How to Exit a Goroutine Gracefully (Timeout Scenario)”. Thanks to the original author.