In this article we will take a closer look at atomic operations in Go. What is an atomic operation? As the name implies, it is an operation with atomic properties. Atomicity can be explained as follows:
The ability of one or more operations to run without interruption while the CPU is executing them is called atomicity. To the outside world these operations appear as an indivisible whole: either all of them happen or none of them do, and no half-finished intermediate state is ever observed.
Strictly speaking, a CPU cannot execute an arbitrary series of operations without any interruption, but if the intermediate states of a group of operations are invisible to everyone else, we can still claim the group has "indivisible" atomicity.
You may have heard a similar explanation in the ACID properties of database transactions; here, the executor that guarantees atomicity is the CPU.
What atomic operations are provided by the Go language
The Go language provides support for atomic operations through the sync/atomic built-in package, which provides the following broad categories of atomic operations:
- Add or subtract: methods named AddXXXType, which atomically add a delta to the operand (pass a negative delta to subtract). The supported types are int32, int64, uint32, uint64 and uintptr; when using them, replace the XXXType placeholder with the concrete type, for example AddInt32.
- Load: methods named LoadXXXType, which ensure that no other task changes the operand while it is being read. Besides the basic types above, LoadPointer supports loading a pointer of any type.
- Store: where there is a load there must be a store; these methods are prefixed with Store, and the supported types are the same as for load.
- Compare and swap: the classic CAS (Compare And Swap) operation, on which many of Go's concurrency primitives are built; it also supports the types listed above.
- Swap: a cruder operation that exchanges the value directly without comparing it first; it is rarely used.
The difference between mutex and atomic operations
In concurrent programming, Mutex is the synchronization primitive in Go's sync package used to ensure concurrency safety. How does it differ from the functions in the atomic package? As I see it, they differ in purpose and in underlying implementation:
- Purpose: a Mutex is used to protect a piece of logic, while an atomic operation protects the update of a single variable.
- Underlying implementation: Mutex is implemented with help from the operating system's scheduler, while the atomic operations in the atomic package are supported directly by underlying hardware instructions. These instructions cannot be interrupted during execution, so atomic operations achieve lock-free concurrency safety, and their performance can scale linearly as the number of CPU cores grows.
Atomic operations are usually more efficient for protecting the update of a single variable, and they take full advantage of multiple CPU cores.
For example, a concurrent counter program that uses a mutex:
func mutexAdd() {
	var a int32 = 0
	var wg sync.WaitGroup
	var mu sync.Mutex
	start := time.Now()
	for i := 0; i < 1000000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			a += 1
			mu.Unlock()
		}()
	}
	wg.Wait()
	timeSpends := time.Now().Sub(start).Nanoseconds()
	fmt.Printf("use mutex a is %d, spend time: %v\n", a, timeSpends)
}
Replacing the Mutex with a call to atomic.AddInt32(&a, 1) increments the variable without locking it while still guaranteeing concurrency safety:
func atomicAdd() {
	var a int32 = 0
	var wg sync.WaitGroup
	start := time.Now()
	for i := 0; i < 1000000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt32(&a, 1)
		}()
	}
	wg.Wait()
	timeSpends := time.Now().Sub(start).Nanoseconds()
	fmt.Printf("use atomic a is %d, spend time: %v\n", atomic.LoadInt32(&a), timeSpends)
}
You can run the two pieces of code above locally and observe that both counters end up at 1000000; both versions are concurrency-safe.
It is important to note that the operand parameters of all atomic operation methods must be of pointer type. Through the pointer, the memory address of the operand is obtained, and the special CPU instructions are applied to it so that only one goroutine can operate on it at a time.
The examples above demonstrate the add and load operations. Next, let's look at the CAS operation.
Compare and exchange
This operation is called CAS (Compare And Swap). This type of operation is prefixed with CompareAndSwap:
func CompareAndSwapInt32(addr *int32, old, new int32) (swapped bool)
func CompareAndSwapPointer(addr *unsafe.Pointer, old, new unsafe.Pointer) (swapped bool)
This operation first checks that the operand's value has not been changed before the exchange, that is, that it still equals the value recorded in the old parameter; the swap is performed only when this condition holds. The CAS approach is similar to the optimistic locking mechanism commonly used with databases.
Note that the CAS operation may not succeed when there are a large number of Goroutines reading and writing variables, so you can use the for loop to try multiple times.
I only listed the typical CAS methods for the int32 and unsafe.Pointer types above, mainly to show that besides numeric types, pointers also support compare-and-swap.
unsafe.Pointer provides a way to bypass Go's pointer type constraints. "unsafe" does not necessarily mean dangerous; rather, it means the Go team does not officially guarantee backward compatibility for this package.
// Define a struct type P
type P struct{ x, y, z int }

// Declare a pointer to type P
var pP *P

func main() {
	// Take the address of pP as an *unsafe.Pointer
	var unsafe1 = (*unsafe.Pointer)(unsafe.Pointer(&pP))
	var sy P
	// Swap &sy in so the operand holds a known old pointer for the demo
	px := atomic.SwapPointer(
		unsafe1, unsafe.Pointer(&sy))
	// The CAS succeeds because the operand still equals &sy, so true is returned
	y := atomic.CompareAndSwapPointer(
		unsafe1, unsafe.Pointer(&sy), px)
	fmt.Println(y)
}
The example above is not CAS in a concurrent environment; it first sets the operand to a known old pointer purely to demonstrate the effect.
In fact, the underlying implementation of Mutex also relies on the CAS atomic operation; atomic operations can be regarded as a foundation that the synchronization primitives in the sync package depend on.
For example, the Mutex structure has a state field, which is the status bit of the lock.
type Mutex struct {
state int32
sema uint32
}
For ease of understanding, we describe its states here as 0 and 1, where 0 indicates that the lock is currently free and 1 indicates that it is locked. Here is part of the implementation of the Lock method in sync.Mutex.
func (m *Mutex) Lock() {
	// Fast path: grab unlocked mutex.
	if atomic.CompareAndSwapInt32(&m.state, 0, mutexLocked) {
		if race.Enabled {
			race.Acquire(unsafe.Pointer(m))
		}
		return
	}
	// Slow path (outlined so that the fast path can be inlined)
	m.lockSlow()
}
In atomic.CompareAndSwapInt32(&m.state, 0, mutexLocked), m.state represents the state of the lock. The CAS checks whether the lock is currently free (m.state == 0); if so, it is locked (the mutexLocked constant is 1).
Atomic.Value guarantees that any Value can be read or written safely
The atomic package provides a set of Store-prefixed methods to ensure safe concurrent writes to variables of various types and to prevent other operations from reading dirty data while a variable is being modified.
func StoreInt32(addr *int32, val int32)
func StoreInt64(addr *int64, val int64)
func StorePointer(addr *unsafe.Pointer, val unsafe.Pointer)
These operations are used in the same way as the ones described above, so I won't demonstrate them here.
It is worth mentioning that if you want to safely set multiple fields of a structure concurrently, in addition to converting the structure to a pointer and using StorePointer, you can also use the atomic.Value type, which was added to the atomic package later. It performs the conversion from the concrete pointer type to unsafe.Pointer for us under the hood.
With atomic.Value, we no longer need to rely on non-type-safe helpers such as unsafe.Pointer, and reads and writes of an arbitrary data type are encapsulated as atomic operations (intermediate states are not visible).
The atomic.Value type exposes two methods:
- v.Store(c): the write operation, which stores the value of the variable c into an atomic.Value variable v.
- c := v.Load(): the read operation, which reads back, in a concurrency-safe way, the value stored in the previous step.
Go 1.17 also added Swap and CompareAndSwap methods.
The simplicity of the interface makes it easy to use, simply by substituting Load() and Store() for variable reads and assignments that require concurrent protection.
Since Load() returns an interface{} type, remember to assert it to the concrete type before using it. Here is a simple example demonstrating the use of atomic.Value.
type Rectangle struct {
	length int
	width  int
}

var rect atomic.Value

func update(width, length int) {
	rectLocal := new(Rectangle)
	rectLocal.width = width
	rectLocal.length = length
	rect.Store(rectLocal)
}

func main() {
	wg := sync.WaitGroup{}
	wg.Add(10)
	// 10 goroutines update rect concurrently
	for i := 0; i < 10; i++ {
		go func(i int) {
			defer wg.Done()
			update(i, i+5)
		}(i) // pass i as a parameter to avoid capturing the loop variable
	}
	wg.Wait()
	_r := rect.Load().(*Rectangle)
	fmt.Printf("rect.width=%d\nrect.length=%d\n", _r.width, _r.length)
}
You could also try, without atomic.Value, assigning a *Rectangle pointer directly, and observe whether the final field values remain consistent under concurrent updates.
Conclusion
This article introduced the common usage scenarios of the frequently used atomic operations in Go's atomic package. I did not list every operation the package provides, mainly because some of them are rarely used or have been superseded by better alternatives. By the end of this article, you should be able to explore the atomic package on your own.
Again: atomic operations are supported by underlying hardware instructions, while locking is implemented by the operating system's scheduler. Locks should be used to protect a piece of logic; atomic operations are usually more efficient for protecting the update of a single variable and take advantage of multiple CPU cores. If you need to update a composite object atomically, wrap it in atomic.Value.