Sample code can be found at: github.com/wenjianzhan…

Built-in RWMutex (RLock and Lock)

RLock is not mutually exclusive for concurrent readers, so one might assume its overhead is small or negligible. The following code measures how much of an impact it actually has:

package lock

import (
	"fmt"
	"sync"
	"testing"
)

var cache map[string]string

const NUM_OF_READER int = 40
const READ_TIMES = 100000

func init() {
	cache = make(map[string]string)

	cache["a"] = "aa"
	cache["b"] = "bb"
}

func lockFreeAccess() {

	var wg sync.WaitGroup
	wg.Add(NUM_OF_READER)
	for i := 0; i < NUM_OF_READER; i++ {
		go func() {
			for j := 0; j < READ_TIMES; j++ {
				_, ok := cache["a"]
				if !ok {
					fmt.Println("Nothing")
				}
			}
			wg.Done()
		}()
	}
	wg.Wait()
}

func lockAccess() {

	var wg sync.WaitGroup
	wg.Add(NUM_OF_READER)
	m := new(sync.RWMutex)
	for i := 0; i < NUM_OF_READER; i++ {
		go func() {
			for j := 0; j < READ_TIMES; j++ {

				m.RLock()
				_, ok := cache["a"]
				if !ok {
					fmt.Println("Nothing")
				}
				m.RUnlock()
			}
			wg.Done()
		}()
	}
	wg.Wait()
}

func BenchmarkLockFree(b *testing.B) {
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		lockFreeAccess()
	}
}

func BenchmarkLock(b *testing.B) {
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		lockAccess()
	}
}

In the code above we initialize a map to act as a toy cache. Two functions read it repeatedly: one takes a read lock around each access, the other uses no lock at all. The benchmark below compares their elapsed times:

$ go test -bench=.
goos: darwin
goarch: amd64
pkg: github.com/wenjianzhang/golearning/src/ch46/lock
BenchmarkLockFree-4   100   10176144 ns/op
BenchmarkLock-4        10  117721790 ns/op
PASS
ok  	github.com/wenjianzhang/golearning/src/ch46/lock	8.439s

As you can see, the two approaches differ by an order of magnitude in performance.

$ go test -bench=. -cpuprofile=cpu.prof
goos: darwin
goarch: amd64
pkg: github.com/wenjianzhang/golearning/src/ch46/lock
BenchmarkLockFree-4   200   10080805 ns/op
BenchmarkLock-4        10  117811130 ns/op
PASS
ok  	github.com/wenjianzhang/golearning/src/ch46/lock	4.523s

Running go test -bench=. -cpuprofile=cpu.prof writes out a cpu.prof file.

Use **go tool pprof cpu.prof** to inspect it:

$ go tool pprof cpu.prof  

pprof drops you into an interactive console like this:

Type: cpu
Time: Nov 23, 2019 at 4:05pm (CST)
Duration: 4.49s, Total samples = 11.45s (254.80%)
Entering interactive mode (type "help" for commands, "o" for options)
(pprof) 

The **top -cum** command lists the functions that consume the most time, sorted by cum, where cum is the cumulative time spent in a function and everything it calls:

(pprof) top -cum
Showing nodes accounting for 8.46s, 73.89% of 11.45s total
Dropped 10 nodes (cum <= 0.06s)
Showing top 10 nodes out of 29
      flat  flat%   sum%        cum   cum%
     1.09s  9.52%  9.52%      5.47s 47.77%  github.com/wenjianzhang/golearning/src/ch46/lock.lockFreeAccess.func1
     3.98s 34.76% 44.28%      4.60s 40.17%  runtime.mapaccess2_faststr
     0.09s  0.79% 45.07%      3.12s 27.25%  github.com/wenjianzhang/golearning/src/ch46/lock.lockAccess.func1
         0     0% 45.07%      2.62s 22.88%  runtime.goexit0
         0     0% 45.07%      1.72s 15.02%  runtime.funcname
         0     0% 60.00%      1.72s 15.02%  runtime.gostringnocopy
         0     0% 60.00%      1.72s 15.02%  runtime.isSystemGoroutine
     1.59s 13.89% 73.89%      1.59s 13.89%  sync.(*RWMutex).RLock

Run the **list lockAccess** command to see a line-by-line breakdown of where lockAccess spends its time:

(pprof) list lockAccess
Total: 10.30s
ROUTINE ======================== github.com/wenjianzhang/golearning/src/ch46/lock.lockAccess.func1 in /Users/zhangwenjian/Code/golearning/src/ch46/lock/lock_test.go
      30ms      2.78s (flat, cum) 26.99% of Total
         .          .     41:	var wg sync.WaitGroup
         .          .     42:	wg.Add(NUM_OF_READER)
         .          .     43:	m := new(sync.RWMutex)
         .          .     44:	for i := 0; i < NUM_OF_READER; i++ {
         .          .     45:		go func() {
      10ms       10ms     46:			for j := 0; j < READ_TIMES; j++ {
         .          .     47:
         .      1.29s     48:				m.RLock()
      20ms      220ms     49:				_, ok := cache["a"]
         .          .     50:				if !ok {
         .          .     51:					fmt.Println("Nothing")
         .          .     52:				}
         .      1.25s     53:				m.RUnlock()
         .          .     54:			}
         .       10ms     55:			wg.Done()
         .          .     56:		}()
         .          .     57:	}
         .          .     58:	wg.Wait()
         .          .     59:}
(pprof)

You can see that acquiring and releasing the read lock accounts for a large share of the time.

sync.Map

  • Suited to workloads with many reads, few writes, and relatively stable keys
  • Trades space for time: values are reached indirectly through pointers, so it uses more storage than the built-in map
  • Safe for concurrent use by multiple goroutines
  • Internally split into a read-only section and a read-write (dirty) section
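
Before wrapping it behind an interface, here is a minimal usage sketch of the standard library's sync.Map; this is the stock Store/Load/Delete/Range API, not code from the repo above:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var m sync.Map // the zero value is ready to use

	// Store writes a key/value pair; no external locking is needed.
	m.Store("a", "aa")
	m.Store("b", "bb")

	// Load returns the value and whether the key was present.
	if v, ok := m.Load("a"); ok {
		fmt.Println("a =", v)
	}

	// Delete removes a key; deleting a missing key is a no-op.
	m.Delete("b")

	// Range iterates over all entries until the callback returns false.
	m.Range(func(k, v interface{}) bool {
		fmt.Println(k, v)
		return true
	})
}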

Concurrent Map

  • Use it when both reads and writes are frequent
  • It splits one large map into many small maps (shards); each read or write locks only the shard that holds the key, which lowers the probability of contention on any one lock and improves read/write throughput. A sketch follows.
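
Roughly how such a sharded map works. This is a simplified sketch in the spirit of libraries like github.com/orcaman/concurrent-map, not the exact code used in the repo; the shard count, string keys, and FNV-1a hash are illustrative choices:

package maps

import "sync"

const shardCount = 32

// shard is one slice of the key space, guarded by its own RWMutex.
type shard struct {
	sync.RWMutex
	items map[string]interface{}
}

// ShardedMap spreads keys across shards so that goroutines touching
// different keys usually lock different mutexes.
type ShardedMap []*shard

func NewShardedMap() ShardedMap {
	m := make(ShardedMap, shardCount)
	for i := range m {
		m[i] = &shard{items: make(map[string]interface{})}
	}
	return m
}

// getShard picks a shard using the FNV-1a hash of the key.
func (m ShardedMap) getShard(key string) *shard {
	h := uint32(2166136261)
	for i := 0; i < len(key); i++ {
		h ^= uint32(key[i])
		h *= 16777619
	}
	return m[h%shardCount]
}

func (m ShardedMap) Set(key string, val interface{}) {
	s := m.getShard(key)
	s.Lock()
	s.items[key] = val
	s.Unlock()
}

func (m ShardedMap) Get(key string) (interface{}, bool) {
	s := m.getShard(key)
	s.RLock()
	v, ok := s.items[key]
	s.RUnlock()
	return v, ok
}

func (m ShardedMap) Del(key string) {
	s := m.getShard(key)
	s.Lock()
	delete(s.items, key)
	s.Unlock()
}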

Define a Map interface with the common methods Set, Get, and Del, then implement these three methods for each map variant:

type Map interface {
	Set(key interface{}, val interface{})
	Get(key interface{}) (interface{}, bool)
	Del(key interface{})
}

For the full sample code, see https://github.com/wenjianzhang/golearning under the ch46 path; a brief description of each implementation follows.

RWLockMap uses **sync.RWMutex** (a sketch follows the list):

  • Get takes the read lock (RLock)
  • Set takes the write lock (Lock)
  • Del takes the write lock (Lock)
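
A minimal sketch of this implementation against the Map interface above; the type and field names are illustrative, not necessarily the repo's:

package maps

import "sync"

// RWLockMap guards one plain map with a single RWMutex.
type RWLockMap struct {
	m    map[interface{}]interface{}
	lock sync.RWMutex
}

func NewRWLockMap() *RWLockMap {
	return &RWLockMap{m: make(map[interface{}]interface{})}
}

// Get only needs the read lock, so concurrent readers do not block each other.
func (c *RWLockMap) Get(key interface{}) (interface{}, bool) {
	c.lock.RLock()
	defer c.lock.RUnlock()
	v, ok := c.m[key]
	return v, ok
}

// Set takes the write lock, excluding all readers and writers.
func (c *RWLockMap) Set(key interface{}, val interface{}) {
	c.lock.Lock()
	defer c.lock.Unlock()
	c.m[key] = val
}

// Del also mutates the map, so it takes the write lock too.
func (c *RWLockMap) Del(key interface{}) {
	c.lock.Lock()
	defer c.lock.Unlock()
	delete(c.m, key)
}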

The sync.Map version delegates directly to the standard library (sketch after the list):

  • Get calls Load
  • Set calls Store
  • Del calls Delete
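
The adapter is a thin wrapper; a sketch, with an assumed type name:

package maps

import "sync"

// SyncMap adapts the standard library's sync.Map to the Map interface.
type SyncMap struct {
	m sync.Map
}

func (c *SyncMap) Get(key interface{}) (interface{}, bool) {
	return c.m.Load(key)
}

func (c *SyncMap) Set(key interface{}, val interface{}) {
	c.m.Store(key, val)
}

func (c *SyncMap) Del(key interface{}) {
	c.m.Delete(key)
}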

ConcurrentMap

  • Get calls the underlying map's Get
  • Set calls the underlying map's Set
  • Del calls the underlying map's Del

Equal numbers of reader and writer goroutines:

const (
	NumOfReader = 100
	NumOfWriter = 100
)
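The benchmark driver might look roughly like this; a sketch only, since the helper name benchmarkMap and the per-goroutine loop count are assumptions rather than the repo's exact code:

package maps

import (
	"sync"
	"testing"
)

// benchmarkMap exercises one Map implementation with NumOfWriter
// writing goroutines and NumOfReader reading goroutines per iteration.
func benchmarkMap(b *testing.B, hm Map) {
	for i := 0; i < b.N; i++ {
		var wg sync.WaitGroup
		for w := 0; w < NumOfWriter; w++ {
			wg.Add(1)
			go func() {
				defer wg.Done()
				for j := 0; j < 100; j++ {
					hm.Set("key", j)
					hm.Del("key")
				}
			}()
		}
		for r := 0; r < NumOfReader; r++ {
			wg.Add(1)
			go func() {
				defer wg.Done()
				for j := 0; j < 100; j++ {
					hm.Get("key")
				}
			}()
		}
		wg.Wait()
	}
}

func BenchmarkSyncmap(b *testing.B) {
	b.Run("map with RWLock", func(b *testing.B) { benchmarkMap(b, NewRWLockMap()) })
	b.Run("sync.map", func(b *testing.B) { benchmarkMap(b, &SyncMap{}) })
	// A concurrent-map adapter implementing Map would plug in the same way.
}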
$ go test -bench=.
goos: darwin
goarch: amd64
pkg: github.com/wenjianzhang/golearning/src/ch46/maps
BenchmarkSyncmap/map_with_RWLock-4   200   7733864 ns/op
BenchmarkSyncmap/sync.map-4          200   8198908 ns/op
BenchmarkSyncmap/concurrent_map-4    300   5475214 ns/op
PASS
ok  	github.com/wenjianzhang/golearning/src/ch46/maps	7.649s

map_with_RWLock is comparable to sync.Map, while concurrent_map comes out ahead.

More writers than readers:

const (
	NumOfReader = 100
	NumOfWriter = 200
)
$ go test -bench=.
goos: darwin
goarch: amd64
pkg: github.com/wenjianzhang/golearning/src/ch46/maps
BenchmarkSyncmap/map_with_RWLock-4   100   15719999 ns/op
BenchmarkSyncmap/sync.map-4          100   13498850 ns/op
BenchmarkSyncmap/concurrent_map-4    200    7318006 ns/op
PASS
ok  	github.com/wenjianzhang/golearning/src/ch46/maps	5.679s

concurrent_map is still the fastest.

More readers than writers:

const (
	NumOfReader = 200
	NumOfWriter = 100
)
$ go test -bench=.
goos: darwin
goarch: amd64
pkg: github.com/wenjianzhang/golearning/src/ch46/maps
BenchmarkSyncmap/map_with_RWLock-4   100   10347642 ns/op
BenchmarkSyncmap/sync.map-4          200    8800578 ns/op
BenchmarkSyncmap/concurrent_map-4    300    4990815 ns/op
PASS
ok  	github.com/wenjianzhang/golearning/src/ch46/maps	6.325s

concurrent_map is still the best.

Overwhelmingly read-heavy, with very few writes:

const (
	NumOfReader = 10
	NumOfWriter = 1
)
$ go test -bench=.
goos: darwin
goarch: amd64
pkg: github.com/wenjianzhang/golearning/src/ch46/maps
BenchmarkSyncmap/map_with_RWLock-4    5000   243022 ns/op
BenchmarkSyncmap/sync.map-4          10000   111173 ns/op
BenchmarkSyncmap/concurrent_map-4    10000   176643 ns/op
PASS
ok  	github.com/wenjianzhang/golearning/src/ch46/maps	5.043s

In this scenario sync.Map outperforms the others.

Conclusion

  • Reduce the impact of locks
  • Reduce the probability of lock contention
    • sync.Map
    • ConcurrentMap
  • Avoid locks altogether
    • LMAX Disruptor: martinfowler.com/articles/lm…

Sample code can be found at: github.com/wenjianzhan…