1. Introduction
There is a saying in the Go community:

Do not communicate by sharing memory; instead, share memory by communicating.

In other words, Go officially recommends using channels for concurrent communication.
A channel is a communication carrier used between coroutines (goroutines). Strictly speaking, a channel is a conduit through which data is "passed" or "read": one coroutine can send data into a channel, and another coroutine can read data from that channel.
A new term appears here: coroutines. A thread can be subdivided into many coroutines, much like several people collaborating on an assembly line, which reduces the waiting time within each thread.
2. Introduction to the channel
Go provides the chan keyword to create a channel. A channel can carry values of only one type; no other type may be transmitted through it.
2.1 Declaration
package main

import "fmt"

func main() {
    var channel chan int // Declare a channel through which int values can be passed.
    fmt.Println(channel)
    // The program prints <nil>, because the zero value of a channel is nil.
}
A nil channel is useless: sending to it or receiving from it blocks forever. Therefore, we must use the make function to create a usable channel.
package main

import "fmt"

func main() {
    channel := make(chan int) // Create a channel through which int values can be passed.
    fmt.Println(channel)
    // The program prints the channel's address, e.g. 0xc0000180c0
}
What gets printed is a memory address: a channel value is effectively a pointer to the channel's internal data structure. In most cases, when you want to communicate with a coroutine, you pass the channel as an argument to a function or method. The coroutine that receives the channel parameter can then send to it or receive from it directly, without any dereferencing.
2.2 Reading and Writing
The Go language provides a very concise left-arrow syntax <- to read and write data from channels.
Sending a value into a channel:
channel <- data
The code above pushes data into the channel. Notice where the arrow points: from data into channel, so we can read it as pushing the value data into the channel channel.
Receiving a value from a channel without storing it in a variable:
<-channel
This statement does not assign the received value to any variable, but it is still a valid statement.
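As a minimal runnable sketch (the goroutine and the values here are only for illustration), the send form and the two receive forms can be combined like this:

package main

import "fmt"

func main() {
    channel := make(chan string)

    go func() {
        channel <- "hello" // send a value into the channel
        channel <- "world" // send a second value
    }()

    msg := <-channel // receive into a variable
    fmt.Println(msg) // prints: hello
    <-channel        // receive and discard the value
}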
The channel operations above are blocking by default.
- In previous lessons we used time.Sleep to make a coroutine wait. Channel operations are inherently blocking: when a coroutine writes data to a channel, it blocks until another coroutine receives that data from the channel.
- A blocked channel operation tells the scheduler to run other coroutines, which is why the program does not stay stuck on a single coroutine forever. These properties make channels very useful for communication between coroutines, and they save us from using locks or other hacks to coordinate them.
2.3 Channel Details
2.3.1 Example
package main

import "fmt"

// Rush takes a string channel c as its parameter.
// It receives a value from channel c and prints it to the console.
func Rush(c chan string) {
    fmt.Println("Hello " + <-c + "!")
}

func main() {
    fmt.Println("Main Start")
    // The first statement of main prints "Main Start" to the console.

    channel := make(chan string)
    // Use make to create a string channel and assign it to the channel variable.

    go Rush(channel)
    // Pass the channel to the Rush function and run it as a coroutine with the go keyword.
    // At this point the program has two coroutines, and the main goroutine continues to run.

    channel <- "DEMO"
    // Send the value "DEMO" into the channel.
    // The main coroutine blocks until some coroutine receives the value;
    // the Go scheduler then runs the Rush coroutine, which receives from the channel.

    fmt.Println("Main Stop")
    // The main coroutine then resumes and executes this statement, printing "Main Stop".
}

/*
Main Start
Hello DEMO!
Main Stop
*/
2.3.2 deadlock
When a coroutine reads from or writes to a channel and the operation cannot proceed, that coroutine blocks and scheduling control passes to another coroutine that is not blocked.
- If the current coroutine reads from a channel that holds no value, it blocks and waits for another coroutine to write a value into that channel; the read operation blocks.
- Similarly, if the current coroutine sends data to a channel, it blocks until another coroutine reads that data from the channel; the write operation blocks.
The following example shows a deadlock caused by a channel operation in the main coroutine.
package main

import "fmt"

func main() {
    fmt.Println("main start")
    // The first statement of main prints "main start" to the console.

    channel := make(chan string)
    // Use make to create a string channel and assign it to the channel variable.

    channel <- "GoLang"
    // Send the value "GoLang" into the channel.
    // The main coroutine blocks until another coroutine receives the value.
    // But no coroutine ever receives it, so there is nothing left to schedule.
    // All goroutines are asleep, which means the program deadlocks.

    fmt.Println("main stop")
}

/*
Error:
main start
fatal error: all goroutines are asleep - deadlock!
goroutine 1 [chan send]:
main.main()
*/
2.3.3 Closing a Channel
package main

import "fmt"

func RushChan(c chan string) {
    <-c
    fmt.Println("1")
    <-c
    fmt.Println("2")
}

func main() {
    fmt.Println("Main Start")
    c := make(chan string, 1)
    go RushChan(c)
    c <- "Demo1"
    close(c)
    // You cannot send a value to a closed channel:
    // Main Start
    // panic: send on closed channel
    c <- "Demo2"
    // close(c)
    // If we comment out the close above and use this one instead,
    // the program runs to completion and prints "Main Stop".
    fmt.Println("Main Stop")
}
If we comment out the first close(c) and use the close(c) after c <- "Demo2" instead, the program behaves like this:
- The send c <- "Demo1" goes into the buffer (capacity 1) and does not block.
- The send c <- "Demo2" blocks the main coroutine because the buffer is full, so the RushChan coroutine is scheduled.
- The first <-c in RushChan does not block, because channel c already has data to read, and "1" is printed.
- Once that value has been taken, the pending "Demo2" can be delivered, the main coroutine is reactivated, and the program executes close(c) and prints "Main Stop".
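A related detail when closing channels: the two-value receive form v, ok := <-c reports whether the channel has been closed. Receiving from a closed channel does not panic; once its buffer is drained it returns the element type's zero value with ok == false. A minimal sketch:

package main

import "fmt"

func main() {
    c := make(chan string, 1)
    c <- "Demo1"
    close(c)

    v, ok := <-c
    fmt.Println(v, ok) // Demo1 true  (the buffered value is still readable)

    v, ok = <-c
    fmt.Println(v, ok) // "" false   (closed and empty: zero value, ok is false)
}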
2.3.4 Buffered Channels
c := make(chan Type, n)
- When the buffer size n is not 0, a send does not block the coroutine unless the buffer is full.
- When the buffer is full, sending to the channel blocks until another coroutine receives data from it.
- Note that reading from a buffered channel is greedy: once a reader starts receiving, it can keep reading until the buffer is empty.
- In principle, a reading coroutine does not block until the buffer is empty.
package main

import "fmt"

func RushChan(c chan string) {
    for {
        val, _ := <-c
        fmt.Println(val)
    }
}

func main() {
    fmt.Println("Main Start")
    c := make(chan string, 1)
    go RushChan(c)
    c <- "Demo1"    // with only this send, we get Result 1
    // c <- "Demo2" // adding this second send gives Result 2
    fmt.Println("Main Stop")
}

/*
Result 1 (only "Demo1" is sent):
Main Start
Main Stop
*/

/*
Result 2 (both "Demo1" and "Demo2" are sent):
Main Start
Demo1
Main Stop
*/
- Because this is a buffered channel with capacity 1, the single send c <- "Demo1" only fills the buffer and does not block. The main coroutine therefore never yields; it finishes before the child coroutine gets a chance to read "Demo1", which gives Result 1.
- When we add the second send c <- "Demo2", the buffer is already full, so the main coroutine has to block and wait for the child coroutine to read "Demo1". That is why "Demo1" is printed, which gives Result 2.
package main

import "fmt"

func main() {
    c := make(chan int, 3)
    c <- 1
    c <- 2
    c <- 3
    close(c)
    for elem := range c {
        fmt.Println(elem)
    }
}
- Even though the channel is closed, the data already sitting in its buffer is still there, and we can still read it.
2.3.5 Channel Length and Capacity
Similar to slices, a buffered channel has a length and a capacity. The length of a channel is the number of values queued in its internal buffer that have not yet been read, and the capacity is the maximum number of values the buffer can hold. We can use len to get the length of a channel and cap to get its capacity.
package main

import "fmt"

func main() {
    c := make(chan int, 3)
    c <- 1
    c <- 2
    fmt.Println("Length:", len(c))
    fmt.Println(<-c)
    fmt.Println("Length:", len(c))
    fmt.Println(<-c)
    fmt.Println("Length:", len(c))
    fmt.Println("Capacity:", cap(c))
}

/*
Result:
Length: 2
1
Length: 1
2
Length: 0
Capacity: 3
*/
- The channel c has a capacity of 3 but holds only 2 values, so the sends do not block and Go does not have to suspend the main coroutine to schedule other coroutines.
- You can also read the data in the main coroutine: the channel not being full does not prevent you from reading from it.
2.3.6 Unidirectional Channel
So far we have used channels that pass data in both directions: we can both read from and write to them. But we can also create one-way channels, for example receive-only channels that allow only read operations and send-only channels that allow only write operations.
One-way channels can also be created using the make function, but an additional arrow syntax is required.
roc := make(<-chan int)
soc := make(chan<- int)
In the code above, roc is a receive-only channel: the <- comes before the chan keyword. soc is a send-only channel: the <- comes after the chan keyword. They are also distinct data types.
But what does a one-way channel do?
Using a one-way channel increases the type safety of your program and makes it less error-prone.
But what if you only need to read from a channel inside a coroutine, while the main coroutine needs to both read and write it? Fortunately, Go allows a bidirectional channel to be converted to a one-way channel, for example when it is passed as a function argument.
package main

import "fmt"

func greet(roc <-chan string) {
    fmt.Println("Hello "+<-roc, "!")
}

func main() {
    fmt.Println("Main Start")
    c := make(chan string)
    go greet(c)
    c <- "Demo"
    fmt.Println("Main Stop")
}

/*
Result:
Main Start
Hello Demo !
Main Stop
*/
We modified the greet coroutine function so that its parameter is a receive-only channel instead of a bidirectional one; the bidirectional channel c created in main is converted when it is passed to greet. If we tried to send on roc inside greet, compilation would fail with:
"invalid operation: roc <- "Temp" (send to receive-only type <-chan string)"
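For reference, a minimal sketch of the conversion rule (the variable names here are only illustrative): a bidirectional channel converts implicitly to a one-way channel, but a one-way channel cannot be converted back.

package main

func main() {
    c := make(chan string)    // a normal bidirectional channel
    var roc <-chan string = c // allowed: bidirectional converts to receive-only
    var soc chan<- string = c // allowed: bidirectional converts to send-only
    // var c2 chan string = roc // compile error: cannot use roc (type <-chan string) as type chan string
    _, _ = roc, soc
}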
2.3.7 Select
select looks similar to switch: it takes no expression, and it is used only for channel operations.
The select statement executes one of several channel operations together with the case block that accompanies it.
How it works
Let's look at the following example to see how select behaves.
package main

import (
    "fmt"
    "time"
)

var start time.Time

func init() {
    start = time.Now()
}

func service1(c chan string) {
    time.Sleep(3 * time.Second)
    c <- "Hello from service 1"
}

func service2(c chan string) {
    time.Sleep(5 * time.Second)
    c <- "Hello from service 2"
}

func main() {
    fmt.Println("main start", time.Since(start))
    chan1 := make(chan string)
    chan2 := make(chan string)
    go service1(chan1)
    go service2(chan2)
    select {
    case res := <-chan1:
        fmt.Println("Response from service 1", res, time.Since(start))
    case res := <-chan2:
        fmt.Println("Response from service 2", res, time.Since(start))
    }
    fmt.Println("main stop ", time.Since(start))
}

/*
Result:
main start 0s
Response from service 1 Hello from service 1 3.0018445s
main stop  3.0019815s
*/
From the program above we can see that the select statement is similar to switch, except that instead of boolean expressions its cases are channel reads and writes. select blocks unless it has a default block (described later); once some case can proceed, that case block executes and select is done.
So when is a case condition executed?
If all case statements (channel operations) are blocked, the select statement blocks until one of them becomes unblocked, and then that case block executes. If several case operations are unblocked at the same time, the runtime randomly picks one of them and executes it immediately.
To demonstrate this, we start two coroutines and pass each its own channel. Then we write a select statement with two cases: one reads from chan1 and the other reads from chan2. Both channels are unbuffered, so both read operations block, which means the select statement blocks until one of the cases stops blocking.
- When the program reaches the select statement, the main coroutine blocks and the scheduler starts running the service1 and service2 coroutines. service1 sleeps for 3 seconds and then writes to chan1 without blocking; similarly, service2 sleeps for 5 seconds and then writes to chan2.
- Because service1 finishes a little earlier than service2, case 1 is scheduled and executed first, and the remaining case (case 2 here) is ignored. Once the case block has executed, the main coroutine continues running.
- That is why there is no output for case 2.
The program above mimics a real-world scenario, such as a load balancer handling millions of requests and returning the response from one of several equivalent services. Using coroutines, channels, and select, we can request data from several servers and take whichever responds fastest.
To simulate the situation where either case block may return data first, we can simply remove the Sleep calls.
package main

import (
    "fmt"
    "time"
)

var start time.Time

func init() {
    start = time.Now()
}

func service1(c chan string) {
    c <- "Hello from service 1"
}

func service2(c chan string) {
    c <- "Hello from service 2"
}

func main() {
    fmt.Println("main start", time.Since(start))
    chan1 := make(chan string)
    chan2 := make(chan string)
    go service1(chan1)
    go service2(chan2)
    select {
    case res := <-chan1:
        fmt.Println("Response from service 1", res, time.Since(start))
    case res := <-chan2:
        fmt.Println("Response from service 2", res, time.Since(start))
    }
    fmt.Println("main stop ", time.Since(start))
}
Result 1:
main start 0s
Response from service 1 Hello from service 1 539.3µs
main stop  539.3µs

Result 2:
main start 0s
Response from service 2 Hello from service 2 0s
main stop  0s

So there are 2! = 2 possible outputs.
To show that Go randomly picks one case block to execute when all case blocks are non-blocking, let's modify the program to use buffered channels.
package main

import (
    "fmt"
    "time"
)

var start time.Time

func init() {
    start = time.Now()
}

func main() {
    fmt.Println("main start", time.Since(start))
    chan1 := make(chan string, 2)
    chan2 := make(chan string, 2)
    chan1 <- "Value 1"
    chan1 <- "Value 2"
    chan2 <- "Value 1"
    chan2 <- "Value 2"
    select {
    case res := <-chan1:
        fmt.Println("Response from service 1", res, time.Since(start))
    case res := <-chan2:
        fmt.Println("Response from service 2", res, time.Since(start))
    }
    fmt.Println("main stop ", time.Since(start))
}
The results of the above program vary between runs:

Result 1:
main start 0s
Response from service 1 Value 1 496.2µs
main stop  496.2µs

Result 2:
main start 0s
Response from service 2 Value 1 0s
main stop  0s
In the program above, each channel has two values in its buffer. Because we send two values into each buffered channel of capacity 2, these send operations do not block and the select block that follows is executed. None of the case operations in the select block, because each channel holds two values and each case only needs to take one. Therefore the Go runtime randomly selects one case operation and executes its code.
2.3.8 default case
Like switch, the select statement can have a default case. The default case is itself non-blocking, and beyond that it makes the whole select statement non-blocking: with a default present, the send or receive operations on any channel (buffered or not) no longer block the current coroutine.
If some case's channel operation is non-blocking, select executes that case block. If none of them can proceed, select executes the default block instead.
package main

import (
    "fmt"
    "time"
)

var start time.Time

func init() {
    start = time.Now()
}

func service1(c chan string) {
    c <- "Hello from service 1"
}

func service2(c chan string) {
    c <- "Hello from service 2"
}

func main() {
    fmt.Println("main start", time.Since(start))
    chan1 := make(chan string)
    chan2 := make(chan string)
    go service1(chan1)
    go service2(chan2)
    select {
    case res := <-chan1:
        fmt.Println("Response from service 1", res, time.Since(start))
    case res := <-chan2:
        fmt.Println("Response from service 2", res, time.Since(start))
    default:
        fmt.Println("No Response received", time.Since(start))
    }
    fmt.Println("main stop ", time.Since(start))
}

/*
Result:
main start 0s
No Response received 0s
main stop  0s
*/
- In the program above, because the channels are unbuffered, the channel operations in the case blocks are all blocked, so the default block is executed.
- If the select statement had no default block, select would block, and no response would be printed until one of the channel operations became non-blocking.
- With default, select is non-blocking, so the scheduler never switches away from the main coroutine to run the other coroutines.
- We can change this with time.Sleep: the main coroutine hands scheduling over to the other coroutines and returns to the main coroutine after they have run.
- By the time the main coroutine runs again, the other coroutines are already waiting to send on the channels, so the case operations no longer block.
package main

import (
    "fmt"
    "time"
)

var start time.Time

func init() {
    start = time.Now()
}

func service1(c chan string) {
    fmt.Println("service1 start")
    c <- "Hello from service 1"
}

func service2(c chan string) {
    fmt.Println("service2 start")
    c <- "Hello from service 2"
}

func main() {
    fmt.Println("main start", time.Since(start))
    chan1 := make(chan string)
    chan2 := make(chan string)
    go service1(chan1)
    go service2(chan2)
    time.Sleep(3 * time.Second)
    select {
    case res := <-chan1:
        fmt.Println("Response from service 1", res, time.Since(start))
    case res := <-chan2:
        fmt.Println("Response from service 2", res, time.Since(start))
    default:
        fmt.Println("No Response received", time.Since(start))
    }
    fmt.Println("main stop ", time.Since(start))
}

/*
The result is not unique. One possible run:
main start 0s
service2 start
service1 start
Response from service 1 Hello from service 1 3.0006729s
main stop  3.0006729s
*/
2.3.9 empty select
Much like an empty loop (for {}), the empty select{} statement is valid syntax. But there is one thing to note.
We know that select blocks unless one of its cases is non-blocking. Since select{} has no cases at all, the main coroutine blocks forever, which may cause a deadlock.
package main

import "fmt"

func service() {
    fmt.Println("Hello from service")
}

func main() {
    fmt.Println("main started")
    go service()
    select {}
    fmt.Println("main stop")
}

/*
Result:
main started
Hello from service
fatal error: all goroutines are asleep - deadlock!
goroutine 1 [select (no cases)]:
*/
In the program above, select blocks the main coroutine and the scheduler runs the service coroutine. After service finishes, the scheduler looks for other runnable coroutines, but there are none, and the main coroutine is still blocked, so the result is a deadlock.
2.3.10 Deadlock
The default block is useful for avoiding deadlocks when channel operations would otherwise block. Because default is non-blocking, a select with a default never leaves the program stuck on a channel receive when no other coroutine can make progress; the same applies to channel send operations, where default can run when nothing else can be scheduled, so the deadlock is avoided.
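As a minimal sketch of this idea (an illustrative example): with a default case, a receive or send that cannot proceed simply falls through instead of deadlocking.

package main

import "fmt"

func main() {
    c := make(chan string) // unbuffered, and nobody ever sends on it

    // Without the default case this select would block forever and the
    // runtime would report "all goroutines are asleep - deadlock!".
    select {
    case msg := <-c:
        fmt.Println("received:", msg)
    default:
        fmt.Println("no value ready, moving on") // executed immediately
    }

    // The same pattern works for sends: with default, a send that cannot
    // proceed is skipped instead of blocking the coroutine.
    select {
    case c <- "hello":
        fmt.Println("sent")
    default:
        fmt.Println("no receiver ready, not sending")
    }
}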
2.3.11 nil channel
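A few facts about nil channels: the zero value of a channel is nil, sending to or receiving from a nil channel blocks forever, and closing a nil channel panics. A minimal sketch (an illustrative example) that probes a nil channel safely with select and default:

package main

import "fmt"

func main() {
    var c chan int // the zero value of a channel is nil

    // c <- 1   // sending to a nil channel blocks forever
    // <-c      // receiving from a nil channel blocks forever
    // close(c) // closing a nil channel panics

    // With select and default we can probe the nil channel without blocking:
    select {
    case v := <-c:
        fmt.Println("received", v) // never happens: a nil channel case is never ready
    default:
        fmt.Println("nil channel: nothing to receive")
    }
}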
2.4 Multiple coroutines work together
Let's write two coroutines: one squares a number and the other cubes it.
package main

import "fmt"

func square(c chan int) {
    fmt.Println("[square] reading")
    num := <-c
    c <- num * num
}

func cube(c chan int) {
    fmt.Println("[cube] reading")
    num := <-c
    c <- num * num * num
}

func main() {
    fmt.Println("[main] main started")
    squareChan := make(chan int)
    cubeChan := make(chan int)
    go square(squareChan)
    go cube(cubeChan)
    testNum := 3
    fmt.Println("[main] send testNum to squareChan")
    squareChan <- testNum
    fmt.Println("[main] resuming")
    fmt.Println("[main] send testNum to cubeChan")
    cubeChan <- testNum
    fmt.Println("[main] resuming")
    fmt.Println("[main] reading from channels")
    squareVal, cubeVal := <-squareChan, <-cubeChan
    sum := squareVal + cubeVal
    fmt.Println("[main] sum of square and cube of", testNum, "is", sum)
    fmt.Println("[main] main stop")
}

/*
Result:
[main] main started
[main] send testNum to squareChan
[cube] reading
[square] reading
[main] resuming
[main] send testNum to cubeChan
[main] resuming
[main] reading from channels
[main] sum of square and cube of 3 is 36
[main] main stop
*/
Process:
- Create two functions, square and cube, to run as coroutines.
- Both functions take an int channel parameter c, read a number from c into the variable num, and finally write the computed result back into channel c.
- In the main coroutine, use the make function to create two int channels, squareChan and cubeChan.
- Then run square and cube as coroutines with those channels.
- Because scheduling is still with the main coroutine, testNum is assigned the value 3.
- Next we send the value into squareChan. The main coroutine blocks until that value is read from the channel; once it has been read, main continues. The same happens with cubeChan.
- In the main coroutine we then try to read from both channels using the v1, v2 := <-c1, <-c2 syntax, and main may block until the coroutines write something into the channels.
- Once the coroutines have written their results into the channels, the main coroutine resumes, sums the two numbers, and prints the result to the console.
2.5 WaitGroup
There is a common business scenario in which you need to know whether all coroutines have finished their tasks. This is different from select, which only requires one of its conditions to be ready; here every condition must be satisfied before the main coroutine continues. A condition here means a non-blocking channel operation.
2.5.1 Overview
A WaitGroup is a struct with a counter that keeps track of how many coroutines have been started and how many have completed their work. When the counter reaches zero, all coroutines have finished.
package main

import (
    "fmt"
    "sync"
    "time"
)

func service(wg *sync.WaitGroup, instance int) {
    time.Sleep(2 * time.Second)
    fmt.Println("Service called on instance", instance)
    wg.Done() // decrement the counter by 1
}

func main() {
    fmt.Println("main started")
    var wg sync.WaitGroup
    for i := 1; i <= 3; i++ {
        wg.Add(1)
        go service(&wg, i)
    }
    wg.Wait() // block until the counter reaches 0
    fmt.Println("main stop")
}

/*
Result (not unique; there are 3! possible orderings):
main started
Service called on instance 2
Service called on instance 1
Service called on instance 3
main stop
*/
- In the program above, we created a zero-valued sync.WaitGroup variable wg. The WaitGroup struct has internal fields such as noCopy, state1, and sema, and three public methods: Add, Wait, and Done.
- The Add method takes an int parameter named delta, which adjusts the internal counter. The counter defaults to 0 and records how many coroutines are still running.
- When the WaitGroup is created the counter is zero; we increase it by passing an int value to Add. Remember that the counter does not increment automatically when a coroutine is created, so we have to increment it ourselves.
- The Wait method blocks the current coroutine until the counter reaches zero, at which point the coroutine resumes. We therefore need a way to decrease the counter.
- The Done method decreases the counter. It takes no arguments and subtracts one from the counter every time it is called.
- In the example above, after creating the wg variable we run the for loop three times, and each iteration creates a coroutine and increments the counter by 1.
- This means we now have three coroutines waiting to run and the WaitGroup counter is 3. Note that we pass a pointer to wg into the coroutine function, because once the work inside a coroutine is done it must decrement the counter by calling Done.
- If wg were passed by value, the wg in the main coroutine would never be modified, because each coroutine would only change its own copy.
- After the for loop completes, we block the main coroutine by calling wg.Wait() and hand scheduling over to the other coroutines until the counter reaches zero.
- The three coroutines each call Done, bringing the counter down to 0, at which point the main coroutine is scheduled again and continues executing the following code.
2.5.2 Worker Pool
As the name implies, a worker pool is a collection of coroutines that perform work concurrently. Above, we used multiple coroutines to perform a task, but they did not do any specific work beyond sleeping for a while. If you pass channels into the coroutines, they can take on real work and become a worker pool.
So a worker pool essentially maintains several worker coroutines that receive tasks, execute them, and return results; we collect the results once they finish. All of these coroutines share the same channels for this purpose.
package main

import (
    "fmt"
    "time"
)

func sqrWorker(tasks <-chan int, results chan<- int, instance int) {
    for num := range tasks {
        time.Sleep(time.Millisecond) // simulate a blocking operation
        fmt.Printf("[worker %v] Sending result by worker %v\n", instance, instance)
        results <- num * num
    }
}

func main() {
    fmt.Println("main started")
    tasks := make(chan int, 10)
    results := make(chan int, 10)
    for i := 0; i < 3; i++ {
        go sqrWorker(tasks, results, i)
    }
    for i := 0; i < 5; i++ {
        tasks <- i * 2
    }
    fmt.Println("[main] write 5 tasks")
    close(tasks)
    for i := 0; i < 5; i++ {
        result := <-results
        fmt.Println("[main] Result", i, ":", result)
    }
    fmt.Println("main stop")
}

/*
Result (not unique):
main started
[main] write 5 tasks
[worker 0] Sending result by worker 0
[worker 1] Sending result by worker 1
[worker 2] Sending result by worker 2
[main] Result 0 : 4
[main] Result 1 : 16
[main] Result 2 : 0
[worker 1] Sending result by worker 1
[main] Result 3 : 64
[worker 0] Sending result by worker 0
[main] Result 4 : 36
main stop
*/
- sqrWorker is a coroutine function with three parameters: a tasks channel, a results channel, and an instance id. Its job is to receive numbers from the tasks channel and send their squares into the results channel.
- In main we create the two channels, tasks and results, each with a buffer of 10. Operations on them are non-blocking until the buffer is full, so giving them a reasonably large buffer is often a good idea.
- Then we loop to create several sqrWorker coroutines, passing the tasks channel, the results channel, and an id to each.
- Next we put 5 task values into the non-blocking tasks channel.
- Since all the task data has been sent, we close the tasks channel. This is not strictly required, but it prevents the range loop over tasks from deadlocking if something goes wrong later in the run.
- Then we loop 5 times, receiving from the results channel. Because there is no data in the buffer at first, each read blocks the main coroutine, and the scheduler runs the worker-pool coroutines until data is added to results.
- Three worker coroutines are at work; we use Sleep to simulate a blocking operation, so while one worker is blocked the scheduler runs another. When a worker finishes sleeping, it puts the squared number into the buffered, non-blocking results channel.
- Once the three coroutines have taken all the tasks from the tasks channel, their for range loops end; because we closed the tasks channel, the coroutines exit instead of deadlocking, and the scheduler returns to the main coroutine.
- Sometimes all the worker coroutines may be blocked; the scheduler then runs the main coroutine until the results channel is empty again.
When all the worker coroutines have completed their tasks and exited, the main coroutine gets scheduled again, prints the remaining data from the results channel, and continues with the rest of the code.
2.5.3 Mutex
Mutual exclusion is a simple concept in Go. Before explaining it, we need to understand what a race condition is. Goroutines have their own separate call stacks, so they do not share stack data with each other. But data on the heap can be used by multiple goroutines, and when several goroutines try to manipulate the same area of memory, the result can be unexpected.
package main

import (
    "fmt"
    "sync"
)

var i int

func worker(wg *sync.WaitGroup) {
    i = i + 1
    wg.Done()
}

func main() {
    fmt.Println("main started")
    var wg sync.WaitGroup
    for j := 0; j < 1000; j++ {
        wg.Add(1)
        go worker(&wg)
    }
    wg.Wait()
    fmt.Println("main stop", i)
}

/*
The result differs between runs, e.g.:
main started
main stop 985
*/
The statement i = i + 1 involves three steps:
(1) read the value of i; (2) add 1 to it; (3) write the new value back to i.
A lot can happen between these steps. Because goroutines run concurrently, the steps of different goroutines are not guaranteed to execute in order. Goroutine A might run to completion and set i to 2, while goroutine B reads i before A has written its update, still sees 1, and overwrites A's work; this is why the final value can be less than 1000.
You might ask: unless a coroutine blocks, other coroutines get no chance to be scheduled, and i = i + 1 does not block, so why would the Go scheduler run other coroutines at all?
In any case, you should not rely on Go's scheduling algorithm; instead, implement your own logic to synchronize different goroutines.
One way to do this is to use the mutex mentioned above. A mutex is a programming concept that guarantees that only one thread or coroutine can operate on a piece of data at a time. When a coroutine wants to manipulate the data, it must acquire the lock on that data; when the operation is complete, it must release the lock. Without acquiring the lock, it cannot operate on the data.
In Go, the mutex is provided as a data structure (sync.Mutex) by the sync package. Even multiple coroutines simply updating one value can cause a race condition, so we call the Lock() method to lock before manipulating the data, and once the operation, such as the i = i + 1 above, is done, we call the Unlock() method to unlock it.
If another coroutine wants to read or write i while it is locked, that coroutine blocks until the previous coroutine completes and unlocks the data. Therefore only one coroutine can manipulate the data at any given time, which avoids the race condition. Remember that any variables guarded between Lock and Unlock are unavailable to other coroutines until the lock is released.
Let’s modify the example above with a mutex
package main

import (
    "fmt"
    "sync"
)

var i int

func worker(wg *sync.WaitGroup, m *sync.Mutex) {
    m.Lock()
    i = i + 1
    m.Unlock()
    wg.Done()
}

func main() {
    fmt.Println("main started")
    var wg sync.WaitGroup
    var m sync.Mutex
    for j := 0; j < 1000; j++ {
        wg.Add(1)
        go worker(&wg, &m)
    }
    wg.Wait()
    fmt.Println("main stop", i)
}

/*
Result:
main started
main stop 1000
*/
In the program above, we create a mutex variable m and pass a pointer to it into every coroutine we create. Inside a coroutine, before we start manipulating the i variable we first acquire the lock with m.Lock(), and afterwards we release it with m.Unlock(). The mutex solves the race condition for us, but the first rule is still to avoid sharing resources between goroutines. That is why the official recommendation is not to communicate by sharing memory, but to share memory by communicating through channels.
3. Conclusion
The latter part of this article on Go concurrency is based on the author Summar's writing on Go concurrency together with the knowledge in the book. Many thanks to the author for the translation work, which helped me better understand Go's channel-based concurrency mechanism. I'll put the link here: channel
As services grow, concurrency helps make the most of server performance.
Finally
Xiao Sheng Fan Yi, looking forward to your attention.