Concurrent programming in the Go language. Let's start with two concepts:
Concurrency: multiple tasks are executed within the same time period.
Parallelism: multiple tasks are executed at exactly the same moment.
Concurrency in Go is implemented through goroutines. Goroutines are similar to threads: they are user-level threads, and we can create thousands of them as needed to work concurrently. Goroutines are scheduled by the Go runtime, while threads are scheduled by the operating system.
Go also provides channels for communication between goroutines. Goroutines and channels are the foundation of Go's CSP (Communicating Sequential Processes) concurrency model.
So let's see how we use goroutines.
func Hello() {
    fmt.Println("start")
}

func main() {
    go Hello()
    fmt.Println("end")
}
OK, this is a simple goroutine. What does this code actually do? Let's look at it step by step. You start a goroutine by putting the go keyword in front of a function call.
The main function itself runs in the main goroutine, and calling Hello with the go keyword starts a new goroutine that executes Hello.
So what does this code print? Most likely just "end". The code is no longer serial: a goroutine has been started, but when main returns, all the goroutines it started are destroyed and cannot continue. That is why we need main to wait for all the goroutines to finish before it exits.
For that we use sync.WaitGroup from the sync package.
// WaitGroup is a struct
var wg sync.WaitGroup

func Hello(i int) {
    fmt.Println(i)
    wg.Done() // Done tells wg this goroutine is finished: counter -1
}

// main itself runs in the main goroutine
func main() {
    wg.Add(10000) // add 10000 to the counter, one per goroutine we are about to start
    // start one goroutine per call to Hello
    for i := 0; i < 10000; i++ {
        go Hello(i)
    }
    fmt.Println("end")
    wg.Wait() // block until all goroutines have run and the counter is back to 0
}
If you run the code above several times, you'll see the numbers printed in a different order each time. This is because the 10,000 goroutines execute concurrently, and the order in which goroutines are scheduled is not deterministic.
Running an anonymous function in a goroutine
var wg sync.WaitGroup

func main() {
    wg.Add(10000)
    for i := 0; i < 10000; i++ {
        go func(i int) {
            fmt.Println(i)
            wg.Done()
        }(i)
    }
    wg.Wait()
}
Notice that the goroutine here runs a closure. We should not have it read the loop variable from the enclosing function directly, because the variable may already have changed by the time the goroutine runs; instead we pass the outer i in as an argument.
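As a rough sketch of why this matters (the loop bound and the extra WaitGroup here are just for illustration): in the first loop the closure reads the shared loop variable, which before Go 1.22 often means several goroutines print the same late value, while in the second loop each goroutine gets its own copy of i.

```go
package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup

    wg.Add(3)
    for i := 0; i < 3; i++ {
        go func() {
            // Reads the loop variable shared with the enclosing function;
            // before Go 1.22 this may print the same value several times.
            fmt.Println("captured:", i)
            wg.Done()
        }()
    }
    wg.Wait()

    wg.Add(3)
    for i := 0; i < 3; i++ {
        go func(i int) {
            // i was copied when the goroutine was started,
            // so each goroutine prints its own value.
            fmt.Println("passed in:", i)
            wg.Done()
        }(i)
    }
    wg.Wait()
}
```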
The relationship between goroutines and threads
An operating system thread has a fixed-size stack, typically around 2 MB. A goroutine starts its life with a stack of only about 2 KB, which can grow as needed, in theory up to about 1 GB, although in practice it never gets anywhere near that.
Goroutine scheduling
GPM is the scheduling system implemented by the Go runtime itself, at the user level; it is separate from the operating system's scheduling of OS threads. G stands for goroutine, P for processor (a scheduling context), and M for machine (an OS thread); the details are worth reading up on separately.
GOMAXPROCS
The Go runtime scheduler uses the GOMAXPROCS parameter to determine how many OS threads are needed to execute the Go code simultaneously. The default value is the number of CPU cores on the machine. For example, on an 8-core machine, the scheduler will schedule the Go code to eight OS threads simultaneously (GOMAXPROCS is n in m:n scheduling).
In Go, the runtime.GOMAXPROCS() function can be used to set the number of CPU cores the current program uses concurrently.
Prior to Go 1.5, the default was to use a single core; since Go 1.5, the default is the total number of logical CPU cores.
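As a small sketch of how this value is queried and set (the numbers printed depend on the machine): runtime.GOMAXPROCS returns the previous setting, and calling it with an argument less than 1 only reports the current value without changing it.

```go
package main

import (
    "fmt"
    "runtime"
)

func main() {
    fmt.Println("logical CPUs:", runtime.NumCPU())

    // An argument below 1 only reports the current setting.
    fmt.Println("current GOMAXPROCS:", runtime.GOMAXPROCS(0))

    // Limit Go code to a single OS thread at a time;
    // the previous setting is returned.
    prev := runtime.GOMAXPROCS(1)
    fmt.Println("previous GOMAXPROCS:", prev)
}
```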
One operating system thread can correspond to multiple goroutines in user space, and a Go program can use multiple operating system threads at the same time, so goroutines and OS threads have a many-to-many relationship.
chan
Channels (chan) are used for communication between goroutines. Go advocates sharing memory by communicating rather than communicating by sharing memory. A channel is a conduit that operates on a first-in, first-out basis.
var ch1 = make(chan int)       // unbuffered
var ch2 = make(chan string, 1) // buffered, capacity 1
A channel is a reference type and must be initialized with make before use so that memory is allocated. Channels come in two kinds: unbuffered and buffered, where the buffer size specifies how many values the channel can hold. An unbuffered channel is also called a synchronous channel: a send blocks until some goroutine receives the value, and if no receiver ever arrives the program deadlocks. A buffered channel is also called an asynchronous channel: as long as the buffer has room, the sender can drop a value into the channel and move on to other work, whether or not anyone is receiving yet.
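A minimal sketch of the difference (the values sent here are arbitrary): the unbuffered send only completes because another goroutine is there to receive, while the buffered send succeeds on its own as long as the buffer has room.

```go
package main

import "fmt"

func main() {
    // Unbuffered (synchronous): a send blocks until another goroutine
    // receives the value, so sender and receiver must meet.
    ch1 := make(chan int)
    go func() {
        ch1 <- 10 // blocks here until main receives below
    }()
    fmt.Println("from unbuffered channel:", <-ch1)

    // Buffered (asynchronous): a send succeeds immediately as long as
    // the buffer has room, even if no receiver is ready yet.
    ch2 := make(chan string, 1)
    ch2 <- "hello"
    fmt.Println("from buffered channel:", <-ch2)
}
```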
Sending on a channel
var ch2 = make(chan int, 1)
ch2 <- 10
Receiving from a channel
x := <-ch2
Close the channel
close(ch2)
A channel is a reference type and does not have to be closed explicitly in every case; it will be garbage collected.
A channel does not need close to release its resources: once no goroutine holds a reference to it, the resources are reclaimed automatically. close is used to notify receivers that no more values will be sent. Closing a channel that still contains data is fine; receivers will not miss the remaining values. Some situations do require you to close the channel explicitly, such as ranging over a channel: if the sender never closes it, the range loop blocks forever and the program deadlocks.
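For example, a receiver can use the two-value form of a receive to detect that a channel has been closed and drained; a minimal sketch (the values are arbitrary):

```go
package main

import "fmt"

func main() {
    ch := make(chan int, 2)
    ch <- 1
    ch <- 2
    close(ch)

    // Values that were already in the buffer can still be received
    // after the channel is closed.
    for {
        v, ok := <-ch
        if !ok {
            // ok becomes false once the channel is closed and empty.
            fmt.Println("channel closed and drained")
            break
        }
        fmt.Println("got:", v)
    }
}
```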
func main() {
    var ch1 = make(chan int, 1) // buffered channel (asynchronous)
    // var ch2 = make(chan int) // unbuffered channel: another goroutine must receive the value
    ch1 <- 10
    x := <-ch1
    fmt.Println(x)
    close(ch1)
}
So how do goroutines and channels work together? See the following example.
/*
Two goroutines and two channels:
a: generates the numbers 0-99 and sends them to ch1
b: receives from ch1, squares each value, and sends the result to ch2
*/

// Generate numbers from 0 to 99 and send them to ch1
func a(c1 chan int) {
    for i := 0; i < 100; i++ {
        c1 <- i
    }
    close(c1)
}

// Take values from ch1, square them, and send the results to ch2
func b(c1 chan int, c2 chan int) {
    // range keeps receiving until c1 is closed and drained
    for tem := range c1 {
        c2 <- tem * tem
    }
    close(c2)
}

func main() {
    ch1 := make(chan int, 100)
    ch2 := make(chan int, 100)
    go a(ch1)
    go b(ch1, ch2)
    // main does not exit until the range over ch2 completes
    for num := range ch2 {
        fmt.Println(num)
    }
}
Note that when you range over a channel to receive values, you must close the channel on the sending side; otherwise the range never terminates and the goroutines deadlock.
Unidirectional channels
A unidirectional channel can only send values or only receive values; a channel declared without a direction can do both. Here's how to define unidirectional channels; look at the code:
// Generate numbers from 0 to 99 and send them to ch1
func a(c1 chan<- int) {
    for i := 0; i < 100; i++ {
        c1 <- i
    }
    close(c1)
}

// Take values from ch1, square them, and send the results to ch2
// c1 is receive-only, c2 is send-only
func b(c1 <-chan int, c2 chan<- int) {
    // range keeps receiving until c1 is closed and drained
    for tem := range c1 {
        c2 <- tem * tem
    }
    close(c2)
}

func main() {
    ch1 := make(chan int, 100)
    ch2 := make(chan int, 100)
    go a(ch1)
    go b(ch1, ch2)
    // main does not exit until the range over ch2 completes
    for num := range ch2 {
        fmt.Println(num)
    }
}
You can also fix the direction when declaring the channel:
// receive-only: values can only be received from ch3
var ch3 = make(<-chan int, 1)
// send-only: values can only be sent to ch4
var ch4 = make(chan<- int, 1)
Summary of common channel exceptions
The behavior of send, receive, and close for a channel in each state is summarized below (adapted from a picture in Qimi teacher's blog):

| Operation | nil channel | closed channel | open, non-nil channel |
|---|---|---|---|
| Send | blocks forever | panics | blocks until received (or buffer space is free) |
| Receive | blocks forever | returns remaining buffered values, then zero values | blocks until a value is available |
| Close | panics | panics | succeeds |
Concurrency safety and locks
In a real project, multiple goroutines may modify the same value at the same time. That creates a data race, and the result will not be what you expect.
var (
    a  int
    wg sync.WaitGroup
)

func add() {
    for i := 0; i < 5000; i++ {
        a++
    }
    wg.Done()
}

func main() {
    wg.Add(2)
    go add()
    go add()
    wg.Wait()
    fmt.Println(a)
}
In the code above, two goroutines concurrently modify the value of a. If everything went sequentially the result would be 10000, but it usually isn't: the concurrent updates of a cause a data race. To solve this we use Go's locks.
The mutex
var (
    a    int
    wg   sync.WaitGroup
    lock sync.Mutex // only one goroutine at a time can hold the mutex
)

func add() {
    for i := 0; i < 5000; i++ {
        lock.Lock() // lock
        a++
        lock.Unlock() // unlock
    }
    wg.Done()
}

func main() {
    wg.Add(2)
    go add()
    go add()
    wg.Wait()
    fmt.Println(a)
}
Using a mutex guarantees that only one goroutine is in the critical section at a time; the other goroutines wait for the lock. When the mutex is released, one of the waiting goroutines acquires it and enters the critical section. When multiple goroutines are waiting for the lock, which one is woken up is random.
The read-write mutex
var (
    a    int
    wg   sync.WaitGroup
    lock sync.RWMutex
)

func read() {
    lock.RLock()                 // acquire the read lock
    time.Sleep(time.Millisecond) // pretend a read takes a millisecond
    lock.RUnlock()               // release the read lock
    wg.Done()
}

func write() {
    lock.Lock() // acquire the write lock
    a = a + 1
    lock.Unlock() // release the write lock
    wg.Done()
}

func main() {
    wg.Add(1000)
    for i := 0; i < 1000; i++ {
        go read()
    }
    wg.Add(100)
    for i := 0; i < 100; i++ {
        go write()
    }
    wg.Wait()
    fmt.Println(a)
}
A read-write mutex separates read locks from write locks: any number of goroutines can hold the read lock at once, while the write lock is exclusive. When reads vastly outnumber writes, an RWMutex is far more efficient than a plain mutex.
sync.WaitGroup
We have used this type throughout the examples above. Crude waiting such as time.Sleep is not appropriate in real code; in Go, sync.WaitGroup is the tool for synchronizing concurrent tasks. sync.WaitGroup has the following methods:
| Method name | Function |
|---|---|
| (wg *WaitGroup) Add(delta int) | add delta to the counter |
| (wg *WaitGroup) Done() | decrement the counter by 1 |
| (wg *WaitGroup) Wait() | block until the counter is 0 |
sync.WaitGroup maintains an internal counter whose value can be increased or decreased. For example, when we start N concurrent tasks we increase the counter by N; each task calls Done() when it completes, decreasing the counter by one; Wait() is called to wait for the concurrent tasks to finish, and it returns when the counter reaches 0, meaning all tasks are done.
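One detail worth noting: a WaitGroup must not be copied after first use, so when it is not a package-level variable it should be passed by pointer. A minimal sketch (the worker function and its arguments are hypothetical):

```go
package main

import (
    "fmt"
    "sync"
)

// worker takes *sync.WaitGroup: if the WaitGroup were copied by value,
// Done() would decrement the copy and Wait() would block forever.
func worker(id int, wg *sync.WaitGroup) {
    defer wg.Done() // counter -1 when this task finishes
    fmt.Println("worker", id, "finished")
}

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 5; i++ {
        wg.Add(1) // counter +1 for each task we start
        go worker(i, &wg)
    }
    wg.Wait() // block until the counter is back to 0
}
```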
sync.Once
In many scenarios we need to make sure a certain operation is performed exactly once even under high concurrency, for example loading a configuration file only once or closing a channel only once.
var loadIconsOnce sync.Once

// sync.Once provides a single method: Do
func main() {
    for i := 0; i < 30; i++ {
        go func() {
            // the function passed to Do is executed only once,
            // no matter how many goroutines call it
            loadIconsOnce.Do(func() {
                fmt.Println(111)
            })
        }()
    }
}
sync.Map
The built-in map is not safe for concurrent writes. With only a few goroutines you may get away with it, but once enough goroutines write to the map at the same time, the program crashes with fatal error: concurrent map writes.
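A minimal sketch of the kind of code that triggers this error (the map, key range, and goroutine count are just for illustration):

```go
package main

import "sync"

var (
    m  = make(map[int]int) // the built-in map is not safe for concurrent writes
    wg sync.WaitGroup
)

func set(i int) {
    m[i] = i // many goroutines write to the same map concurrently
    wg.Done()
}

func main() {
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go set(i)
    }
    wg.Wait()
    // With enough concurrent writers this typically crashes with
    // "fatal error: concurrent map writes".
}
```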
Go's sync package provides an out-of-the-box, concurrency-safe version of map: sync.Map. Out of the box means it can be used directly, without the make initialization the built-in map requires. sync.Map has built-in methods such as Store, Load, LoadOrStore, Delete, and Range.
var (
    m  sync.Map
    wg sync.WaitGroup
)

func set(i int) {
    m.Store(i, i) // set a key/value pair
    wg.Done()
}

func main() {
    wg.Add(30)
    for i := 0; i < 30; i++ {
        go set(i)
    }
    wg.Wait()
    // iterate over the map
    m.Range(func(key, value interface{}) bool {
        fmt.Println(key, value)
        return true
    })
    // delete key 1
    m.Delete(1)
    // load key 1 (it was just deleted, so ok is false)
    v, ok := m.Load(1)
    fmt.Println(v, ok)
}
sync.Map does not provide a method for getting the number of elements; to count them you have to iterate over the map and count yourself. sync.Map pays a performance cost to guarantee concurrency safety, so in scenarios without concurrency the built-in map performs better than sync.Map.
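For example, counting the entries has to be done by ranging over the map yourself; a minimal sketch (keys and values are arbitrary):

```go
package main

import (
    "fmt"
    "sync"
)

func main() {
    var m sync.Map
    m.Store("a", 1)
    m.Store("b", 2)
    m.Store("c", 3)

    // sync.Map has no Len method, so count the entries with Range.
    count := 0
    m.Range(func(key, value interface{}) bool {
        count++
        return true // returning true keeps the iteration going
    })
    fmt.Println("number of entries:", count)
}
```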