This article is based on the Go concurrent programming talk given to the Wuhan Gopher partners at the Wuhan · Optical Valley cat friends meetup on September 16, 2018. The talk covered the Go concurrency philosophy, the evolution of concurrency, hello concurrency, the concurrency memory model, common concurrency patterns, and more. For additional material on concurrent programming, see the related content in the first chapter of [Advanced Go Programming](https://github.com/chai2010/advanced-go-programming-book).

This post collates the content of the concurrency memory model section.

## Atomic operations

In the early days, CPUs were single-core and executed machine instructions sequentially. In the single-core era only one core reads or writes data at a time, so reads and writes need no extra protection. In the multi-core era, however, the same data can be read and written simultaneously by multiple threads running on different CPUs, so extra means are needed to keep the data intact. Atomic operations guarantee that a read or write of the data cannot be interrupted by other threads, and therefore that the data's state stays consistent.

The Go language's `sync/atomic` package provides support for atomic operations. It mainly performs atomic reads and writes on four- or eight-byte, address-aligned memory, and can be used with basic data types such as integers or pointers. There is also the more complex `atomic.Value` type, which can be used to store struct values.
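As a minimal sketch of the API (the counter, the Config type, and the address value are illustrative assumptions, not from the talk), the following combines an atomic integer counter with an `atomic.Value` that stores a struct:

```go
package main

import (
	"sync"
	"sync/atomic"
)

// Config is an illustrative struct stored atomically via atomic.Value.
type Config struct {
	Addr string
}

func main() {
	var total uint64 // updated only through sync/atomic
	var cfg atomic.Value
	cfg.Store(Config{Addr: "127.0.0.1:8080"}) // atomically publish a struct value

	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddUint64(&total, 1) // safe concurrent increment
		}()
	}
	wg.Wait()

	println(atomic.LoadUint64(&total)) // always 10
	println(cfg.Load().(Config).Addr)  // atomically read the struct back
}
```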

In Go you can actually write programs without ever using the `sync/atomic` package. But `sync/atomic` shows up in some low-level code, where it leaves more room for performance tuning. For example, the Do method of the `sync.Once` type in the standard library:

```go
type Once struct {
	m    Mutex
	done uint32
}

func (o *Once) Do(f func()) {
	if atomic.LoadUint32(&o.done) == 1 {
		return
	}
	// Slow-path.
	o.m.Lock()
	defer o.m.Unlock()
	if o.done == 0 {
		defer atomic.StoreUint32(&o.done, 1)
		f()
	}
}
```

Here `atomic.LoadUint32(&o.done)` first checks, at minimal cost, whether the Once object has already run (an atomic load is much cheaper than sync.Mutex and other higher-level primitives). If it has not run yet, sync.Mutex.Lock is used to lock and then run the Once object. If this really is the first run, `defer atomic.StoreUint32(&o.done, 1)` sets the already-run flag before returning.

Functions in the atomic package make it possible to build higher-level concurrency tools such as sync.Mutex. But the concurrency philosophy of Go is: don't communicate by sharing memory; instead, share memory by communicating! Therefore, in everyday concurrent programming, we should avoid using the atomic operations in the `sync/atomic` package directly. A sketch of building a lock on top of atomics follows, for illustration only.
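The following is a hypothetical spin-lock sketch built purely on `sync/atomic`; it illustrates the first sentence above, is not how sync.Mutex is actually implemented, and is not a pattern to copy into real code:

```go
package main

import (
	"runtime"
	"sync/atomic"
)

// SpinLock is a hypothetical lock built only on atomic compare-and-swap.
type SpinLock struct {
	state int32 // 0 = unlocked, 1 = locked
}

func (l *SpinLock) Lock() {
	// Spin until we win the race to flip state from 0 to 1.
	for !atomic.CompareAndSwapInt32(&l.state, 0, 1) {
		runtime.Gosched() // yield so other Goroutines can make progress
	}
}

func (l *SpinLock) Unlock() {
	atomic.StoreInt32(&l.state, 0)
}

func main() {
	var l SpinLock
	counter := 0
	done := make(chan struct{})
	for i := 0; i < 4; i++ {
		go func() {
			for j := 0; j < 1000; j++ {
				l.Lock()
				counter++ // protected by the spin lock
				l.Unlock()
			}
			done <- struct{}{}
		}()
	}
	for i := 0; i < 4; i++ {
		<-done
	}
	println(counter) // 4000
}
```

The real sync.Mutex cooperates with the runtime to park waiting Goroutines instead of spinning, which is one more reason to prefer the standard primitives and channels over hand-rolled atomics.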

## Within the same Goroutine: sequential consistency is satisfied

A sequentially consistent memory model means that code is executed in the same order in which it is written. For single-threaded programs the code is normally executed in written order. More precisely, sequential consistency here refers to the statements within a block of code.

For example, the following code satisfies the sequential consistent memory model:

```go
var msg string
var done bool = false

func main() {
	msg = "hello, world"
	done = true
	for {
		if done {
			println(msg)
			break
		}
		println("retry...")
	}
}
```

The msg string variable is initialized first, and done is then set to true to indicate that the string initialization is complete. Therefore we can indirectly infer from the done state whether msg has been initialized. In Go, the sequentially consistent memory model is satisfied within the same Goroutine, so the code above works correctly.

## Between different Goroutines: sequential consistency is not satisfied!

If we move the code that initializes msg and done into another Goroutine, the situation is different! The following concurrent code is wrong:

```go
var msg string
var done bool = false

func main() {
	go func() {
		msg = "hello, world"
		done = true
	}()
	for {
		if done {
			println(msg)
			break
		}
		println("retry...")
	}
}
```

At runtime, several kinds of errors are possible: first, main may never see the modified done, so main's for loop never terminates properly. Second, even if the main function sees done change to true, msg may still not be initialized, which results in incorrect output.

The reason for this error is that the Go memory model explicitly states that there is no sequential consistency between Goroutines! Moreover, the compiler may reorder the assignments to msg and done inside the initializing Goroutine when optimizing the code. The main function therefore cannot infer the initialization state of msg from the change in done.

## Align the time reference frame with Channel

Each Goroutine is like a separate universe with its own time system. As long as certain operations in a Goroutine cannot be observed, their execution state and order are unknown. Only when certain events of one Goroutine are observed by another Goroutine does the state of those events become determined. There are many ways to observe, and aligning the time reference frames of different Goroutines via a Channel is a common one.

The following code fixes the previous error by changing done into a pipe:

```go
var msg string
var done = make(chan struct{})

func main() {
	go func() {
		msg = "hello, world"
		done <- struct{}{}
	}()
	<-done
	println(msg)
}
```

The send and receive on the done pipe force a synchronization between the main Goroutine of the main function and the background Goroutine that performs the initialization. By the time `<-done` in the main function completes, the background Goroutine's send (`done <- struct{}{}`) has already happened, and so has the initialization. Because the background Goroutine's initialization of msg is observed by the main function through the done pipe, the compiler must ensure that msg is initialized at this point. So the main function finally prints the msg string correctly.
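A common variant of this pattern, not shown in the talk, is to close the pipe instead of sending on it; the Go memory model likewise guarantees that the close happens before a receive that observes it, so this sketch synchronizes in the same way:

```go
var msg string
var done = make(chan struct{})

func main() {
	go func() {
		msg = "hello, world"
		close(done) // the close happens before the receive below returns
	}()
	<-done // returns as soon as done is closed
	println(msg)
}
```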

## Align the time reference frame with sync.Mutex

There are many ways to align the time reference frame. In addition to synchronizing through pipes, you can also synchronize through the Mutex in the sync package:

```go
var msg string
var done sync.Mutex

func main() {
	done.Lock()
	go func() {
		msg = "hello, world"
		done.Unlock()
	}()
	done.Lock()
	println(msg)
}
```

In this code, sync.Mutex must be locked before it can be unlocked, because unlocking an unlocked Mutex causes a panic. `done.Unlock()` and the second `done.Lock()` are in different Goroutines, and together they force a time synchronization, so the main function can also print the msg string in the end.

## Buffered pipes

Pipes are a concurrency primitive built into the Go language. When first learning Go, one usually uses unbuffered pipes, that is, pipes whose buffer length is zero. For a buffered Channel, the k-th receive from the Channel completes before the (k+C)-th send on that Channel completes, where C is the Channel's buffer size. If C is set to 0, this naturally reduces to the unbuffered case: the k-th receive completes before the k-th send completes. Since an unbuffered Channel can only send synchronously, this simplifies to the earlier rule for unbuffered channels: a receive from an unbuffered Channel happens before the send on that Channel completes.

Buffered pipes can be used to control the number of concurrent requests:

```go
func main() {
	var wg sync.WaitGroup
	var limit = make(chan struct{}, 3)
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			limit <- struct{}{}        // blocks once len(limit) == cap(limit)
			defer func() { <-limit }() // on exit, len(limit) decreases by 1
			println(id)
		}(i)
	}
	wg.Wait()
}
```

Because the limit pipe has a buffer length of three, at most three of the Goroutines created inside the for loop can execute println concurrently at any one time.

## Initialization order

In Go, every imported package is initialized, which includes initializing the package-level variables and running the init functions. If new Goroutines are started during package initialization, they do not run immediately; they are started only after all package initialization has completed, and they then run concurrently with the main function.

Package initialization is driven by runtime.main, roughly as in the following pseudocode:

```go
func runtime.main() {
	for pkg := range imported_pkg_list {
		pkg.init()
	}
	go goroutines_from_init()
	main()
}
```

The imported packages are initialized one after another; only then are the new Goroutines started during the package initialization phase actually launched, and they run concurrently with the main function. A concrete sketch follows.
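Here is a minimal, assumed example (not from the talk) of a Goroutine created in an init function; per the rule above, it only starts running after all package initialization has completed, and the pipe lets main observe it:

```go
package main

// ready is an illustrative pipe used to observe the Goroutine started in init.
var ready = make(chan struct{})

func init() {
	// This Goroutine is created during package initialization, but it only
	// starts running after all package initialization has completed.
	go func() {
		close(ready)
	}()
}

func main() {
	<-ready // observe the Goroutine that was started during init
	println("init finished; the background goroutine ran concurrently with main")
}
```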

## Goroutine characteristics

A Goroutine is a container for each piece of code that executes concurrently, similar to threads and processes in traditional operating systems. But Go's Goroutines have their own characteristics, and understanding them is a prerequisite for writing good concurrent programs.

Goroutine features:

- Started with the go keyword; a Goroutine is a lightweight thread
- Starts with a very small stack (on the order of 2KB/4KB), so very many can be started
- The Goroutine stack grows and shrinks dynamically as needed, with no worry about stack overflow
- M Goroutines run on N operating system threads; by default N corresponds to the number of CPU cores
- runtime.GOMAXPROCS controls the number of system threads used to run normal, non-blocking Goroutines
- Switching happens in user mode and costs less than switching system threads (only the necessary registers are saved on a switch)
- Goroutines use semi-preemptive cooperative scheduling (cooperative scheduling code is inserted at function entry)
- IO, sleep, and runtime.Gosched all trigger scheduling
- Goroutines are deliberately designed to have no ID

Note: Goroutines are a resource and can leak! A sketch of a typical leak follows.
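As a hedged sketch (the helper names and timings are made up for illustration), here is one typical way a Goroutine leaks, along with a small fix based on a buffered pipe:

```go
package main

import "time"

// slowWork stands in for any operation slower than the caller's timeout.
func slowWork() int {
	time.Sleep(50 * time.Millisecond)
	return 42
}

// leaky blocks its worker forever: the unbuffered send can never complete
// once the select below has already taken the timeout branch.
func leaky() {
	ch := make(chan int) // unbuffered
	go func() {
		ch <- slowWork() // nobody will ever receive; this Goroutine leaks
	}()
	select {
	case v := <-ch:
		println(v)
	case <-time.After(10 * time.Millisecond):
		println("timeout (worker leaked)")
	}
}

// fixed gives the pipe a buffer of 1, so the worker can always finish its
// send and exit even when the caller has stopped listening.
func fixed() {
	ch := make(chan int, 1)
	go func() {
		ch <- slowWork() // never blocks: the buffer always has room
	}()
	select {
	case v := <-ch:
		println(v)
	case <-time.After(10 * time.Millisecond):
		println("timeout (worker will still exit)")
	}
}

func main() {
	leaky()
	fixed()
	time.Sleep(100 * time.Millisecond) // let the fixed worker finish, for the demo
}
```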

## More content to be continued

Cat friends meetup: Go concurrent programming 01 – The evolutionary history of concurrency

https://mp.weixin.qq.com/s/UaY9gJU85dq-dXlOhLYY1Q

Cat friends meetup: Go concurrent programming 02 – Hello, concurrency!

https://mp.weixin.qq.com/s/_aKNO-H11GEDA-l0rycfQQ

Online slideshow viewing:

https://talks.godoc.org/github.com/chai2010/awesome-go-zh/chai2010/chai2010-golang-concurrency.slide

Source files:

https://github.com/chai2010/awesome-go-zh/tree/master/chai2010