Basic knowledge

The concept of the map

A map is an abstract structure consisting of <key, value> pairs. Keys are unique, and all operations go through the key. There are generally two underlying implementations of a map:

  • Search tree: naturally ordered, with O(logN) average and worst-case complexity for writes and lookups
  • Hash table: unordered storage, with O(1) average write/lookup complexity and O(N) in the worst case

The hash function

A hash function is any function that can be used to map data of arbitrary size to fixed-size values. Common examples include MD5, SHA, etc.

A well-designed hash function should have the following properties:

  • Uniformity: A good hash function should map its inputs as evenly as possible over its output range, that is, it should generate every value in its output range with roughly equal probability.
  • Efficiency: Computing the hash should be fast, even when the input is very long.
  • Determinism: The hash process must be deterministic; a given input value must always produce the same hash value.
  • Avalanche effect: A small change in the input should cause a large change in the output.
  • Irreversibility: The original data cannot be derived from the output of the hash function.
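
As a quick illustration of determinism and the avalanche effect, the sketch below uses Go's standard (non-cryptographic) FNV-1a hash from hash/fnv:

package main

import (
    "fmt"
    "hash/fnv"
)

// fnvHash maps arbitrary-length input to a fixed-size 64-bit value.
func fnvHash(s string) uint64 {
    h := fnv.New64a()
    h.Write([]byte(s)) // Write on a hash.Hash never returns an error
    return h.Sum64()
}

func main() {
    fmt.Printf("%#x\n", fnvHash("hello")) // deterministic: same value on every run
    fmt.Printf("%#x\n", fnvHash("hello"))
    fmt.Printf("%#x\n", fnvHash("hellp")) // one character changed: a very different value
}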

Hash conflict

A hash function maps data of arbitrary size to a fixed-size value, so its output range is finite. Even with a well-designed hash function, once the number of distinct inputs exceeds the number of values that a fixed-size output can express, conflicts are inevitable (see the drawer principle).

Drawer principle: if ten apples are placed into nine drawers, then no matter how they are arranged, at least one drawer will contain no fewer than two apples. The drawer principle is also known as the pigeonhole principle.

Hash bucket

A hash bucket (also known as a slot, analogous to a drawer in the drawer principle) can be thought of simply as one hash value; all the hash values together make up the hash space.

Load factor

The load factor represents how full the hash table is. It is calculated as: load factor = number of elements in the hash table / length of the hash table. The larger the load factor, the fuller the table and the higher the space utilization, but the greater the chance of hash conflicts. Conversely, the smaller the load factor, the emptier the table: conflicts become less likely, but more space is wasted and more expansion operations are needed. The load factor is also the key metric for deciding whether to grow a hash table. Java's HashMap uses a default load factor of 0.75; Python's dict defaults to 2/3.

Solutions to hash conflicts

Chain address method

Keys with the same hash address are joined into a linked list, so a lookup must traverse the chain to locate the actual key. This is also the conflict resolution strategy used by Go's map, as the sketch below illustrates.
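
To make the idea concrete, here is a minimal separate-chaining table (a toy sketch for illustration only; Go's runtime buckets, covered below, hold 8 cells each plus an overflow pointer):

package main

import "fmt"

// entry is one key-value pair in a chain.
type entry struct {
    key   string
    value int
}

// chainedMap is a toy hash table: each bucket is a slice acting as the chain.
type chainedMap struct {
    buckets [][]entry
}

func newChainedMap(n int) *chainedMap {
    return &chainedMap{buckets: make([][]entry, n)}
}

// hashString is a simple multiplicative string hash (illustrative only).
func hashString(s string) int {
    h := 0
    for _, c := range s {
        h = h*31 + int(c)
    }
    return h & 0x7fffffff
}

func (m *chainedMap) Set(key string, value int) {
    i := hashString(key) % len(m.buckets)
    for j, e := range m.buckets[i] {
        if e.key == key { // key already present: update in place
            m.buckets[i][j].value = value
            return
        }
    }
    m.buckets[i] = append(m.buckets[i], entry{key, value}) // extend the chain
}

func (m *chainedMap) Get(key string) (int, bool) {
    for _, e := range m.buckets[hashString(key)%len(m.buckets)] { // walk the chain
        if e.key == key {
            return e.value, true
        }
    }
    return 0, false
}

func main() {
    m := newChainedMap(8)
    m.Set("a", 1)
    fmt.Println(m.Get("a")) // 1 true
}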

Open addressing

It is also called rehashing: if a hash conflict occurs, the hash is computed again (another slot is probed) until no conflict occurs, and the entry is stored in that slot.

The biggest performance factor in open addressing is the load factor: the ratio of the number of elements to the size of the array. As the load factor increases, the average time spent in linear probing increases, which hurts both read and write performance. When the load factor exceeds 70%, the performance of the hash table drops sharply, and once it reaches 100% the whole table fails: finding or inserting any element becomes O(n), requiring a scan of the entire array. So an implementation must pay close attention to changes in the load factor.
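
For comparison with the chaining sketch above, here is a minimal linear-probing sketch (again a toy; it assumes the table never fills completely, which a real implementation guarantees by growing well before that point):

package main

import "fmt"

// probeMap is a toy open-addressing table using linear probing.
type probeMap struct {
    keys []string
    vals []int
    used []bool
}

func newProbeMap(n int) *probeMap {
    return &probeMap{make([]string, n), make([]int, n), make([]bool, n)}
}

func hashString(s string) int {
    h := 0
    for _, c := range s {
        h = h*31 + int(c)
    }
    return h & 0x7fffffff
}

func (m *probeMap) Set(key string, val int) {
    i := hashString(key) % len(m.keys)
    for m.used[i] && m.keys[i] != key {
        i = (i + 1) % len(m.keys) // conflict: probe the next slot
    }
    m.keys[i], m.vals[i], m.used[i] = key, val, true
}

func (m *probeMap) Get(key string) (int, bool) {
    i := hashString(key) % len(m.keys)
    for m.used[i] { // stop at the first never-used slot
        if m.keys[i] == key {
            return m.vals[i], true
        }
        i = (i + 1) % len(m.keys)
    }
    return 0, false
}

func main() {
    m := newProbeMap(8)
    m.Set("a", 1)
    fmt.Println(m.Get("a")) // 1 true
}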

Compare the two solutions

Open addressing uses only an array for storage and therefore inherits the advantages of arrays: it is friendly to the CPU cache and easy to serialize. However, its memory utilization is worse than the chain address method, and the cost of a conflict is higher. Open addressing suits cases where the amount of data is known in advance and the load factor is kept small.

Linked-list nodes can be created on demand, without allocating enough memory in advance as open addressing does, so chaining uses memory more efficiently than open addressing. The chain address method also tolerates higher load factors, is suitable for storing large objects and large amounts of data, and is more flexible than open addressing, supporting more optimization strategies such as replacing the linked list with a red-black tree. But chaining requires extra space to store the pointers.

It’s worth noting that Python’s dict resolves hash collisions with open addressing, while Java’s HashMap and Go’s map both use the chain address method.

Basic map operations

1. Declaration

var m map[T]T

Note: at this point the map is nil. Reads and deletes on a nil map are safe, but a write to a nil map causes a panic:

package main

func main() {
    var a map[int64]bool
    delete(a, 10) // deleting from a nil map is a no-op
    _ = a[10]     // reading a nil map returns the zero value
    a[10] = false // writing to a nil map panics
}


go run main.go
panic: assignment to entry in nil map

2. Initialize

var m = make(map[T]T, hint)
var m = map[T]T{...}

3. Add/update

m[key] = value

4. Query

v := m[key]
v, ok := m[key]

In the first form, an unset key yields the zero value; in the second form, an unset key additionally yields ok == false.

5. Delete

delete(m, key)

You can delete the same key multiple times without error

Comparison with other maps/hash tables

1. Java's HashMap is a hash table with chaining; its distinguishing feature is that when a chain grows past a threshold length (8), it is converted into a red-black tree.
2. C++'s std::map is a red-black tree implementation.
3. C++'s std::unordered_map is a hash table implementation.
4. Redis's hash is an ordinary hash table; its distinguishing feature is the use of high-order (reverse binary) iteration during traversal, so that scan avoids returning duplicate elements, although duplicates are still possible if the table shrinks during the scan.

A deep dive into the source

Map implementation

Like Python's and Java's, Go's map is implemented on top of a hash table. Hash conflicts are resolved with the chain address method; that is, the map is represented by an array-plus-linked-list data structure.

Map data structure

A map's data is stored in an array of buckets, and each bucket holds up to eight key-value pairs. The low-order bits of a key's hash are used to select a bucket, and the high-order bits are used to distinguish keys within a bucket.

  • The hmap structure
// https://github.com/golang/go/blob/release-branch.go1.15/src/runtime/map.go
type hmap struct {
    count     int    // number of elements in the hash table; incremented on insert, decremented on delete. len(map) returns this value.
    flags     uint8  // state flags; the meaning of the four state bits is explained in the constants below.
    B         uint8  // log2 of the number of buckets (the map can hold up to loadFactor * 2^B elements)
    noverflow uint16 // approximate number of overflow buckets
    hash0     uint32 // hash seed

    buckets    unsafe.Pointer // pointer to the buckets array, of size 2^B; nil if the element count is 0.
    oldbuckets unsafe.Pointer // during growth, pointer to the old buckets array, which is half the size of the new one; nil when the map is not growing.
    nevacuate  uintptr        // growth progress counter; buckets with a lower index have already been evacuated.

    extra *mapextra // optional fields, designed to optimize GC scanning; used when neither key nor value contains a pointer and both can be inlined.
}

  • The mapextra structure
// mapextra holds fields that are not present on all maps.
type mapextra struct {
    // If neither key nor value contains a pointer and both can be inlined (<= 128 bytes),
    // overflow buckets are stored in hmap.extra so that GC can avoid scanning the whole map.
    // However, bmap.overflow is itself a pointer, so to keep the overflow buckets alive
    // we keep references to them in hmap.extra.overflow and hmap.extra.oldoverflow.
    // overflow contains the overflow buckets for hmap.buckets;
    // oldoverflow contains the overflow buckets for hmap.oldbuckets.
    overflow    *[]*bmap
    oldoverflow *[]*bmap

    // nextOverflow holds a pointer to a free overflow bucket.
    nextOverflow *bmap
}
  • The bmap structure
// A bucket for a Go map.
type bmap struct {
    // tophash stores the high byte (8 bits) of the hash of each key in the bucket
    // (the high-order bits described earlier).
    // If tophash[0] < minTopHash, tophash[0] records the bucket's evacuation state instead.
    tophash [bucketCnt]uint8
}

The remaining fields of the bmap structure are laid out by the compiler; their code and positions can be inferred from the following:

func bmap(t *types.Type) *types.Type {
    // ... omitted ...

    field := make([]*types.Field, 0, 5)

    field = append(field, makefield("topbits", arr))

    // ... omitted ...

    keys := makefield("keys", arr)
    field = append(field, keys)

    // ... omitted ...

    elems := makefield("elems", arr)
    field = append(field, elems)

    // ... omitted ...

    if int(elemtype.Align) > Widthptr || int(keytype.Align) > Widthptr {
        field = append(field, makefield("pad", types.Types[TUINTPTR]))
    }

    // ... omitted ...

    overflow := makefield("overflow", otyp)
    field = append(field, overflow)

    // ... omitted ...
}
  • Important constant flag

const (
    // A bucket can hold at most 8 key-value pairs
    bucketCntBits = 3
    bucketCnt     = 1 << bucketCntBits

    // The load factor that triggers growth is 13/2 = 6.5
    loadFactorNum = 13
    loadFactorDen = 2

    // Keys and values over 128 bytes are stored as pointers
    maxKeySize  = 128
    maxElemSize = 128

    // The data offset should be the size of the bmap struct, properly aligned.
    // For amd64p32 this means 64-bit alignment even though pointers are 32-bit.
    dataOffset = unsafe.Offsetof(struct {
        b bmap
        v int64
    }{}.v)

    // Each bucket (including its overflow buckets, if any) will be in one of the
    // evacuated* states either for all of its key-value pairs or for none of them
    // (except during the evacuate() method, which is only called while the map is
    // being written to, so no other goroutine can observe the map at that point).
    // Simply put, the data in a bucket is either moved out as a whole or not at all.
    //
    // Besides the normal high 8 bits of the hash, tophash also stores some special
    // state values (marking a cell's evacuation state). A normal tophash value is
    // at least minTopHash (5); the special state values are listed below.
    emptyRest      = 0 // this cell is empty, and so are all cells at higher indexes and in the overflow chain (this is the state of a freshly initialized bucket)
    emptyOne       = 1 // this cell is empty
    evacuatedX     = 2 // the key-value pair has been evacuated to the first half of the new buckets array
    evacuatedY     = 3 // the key-value pair has been evacuated to the second half of the new buckets array
    evacuatedEmpty = 4 // the cell is empty and the bucket has been evacuated
    minTopHash     = 5 // minimum tophash value for a normally filled cell

    // flags
    iterator     = 1 // there may be an iterator using buckets
    oldIterator  = 2 // there may be an iterator using oldbuckets
    hashWriting  = 4 // a goroutine is writing to the map
    sameSizeGrow = 8 // the current grow is to a map of the same size

    // sentinel bucket ID for iterator checks
    noCheck = 1<<(8*sys.PtrSize) - 1
)

The memory model

Bmap memory layout

What is special about bmap is how it stores its key-value pairs: the 8 keys are stored together, followed by the 8 values, i.e. key/key/.../value/value/... Compared with an interleaved key/value/key/value design, storing keys and values separately makes the bmap memory layout tighter, requiring only a single padding region at the end.

You can calculate the space each bmap of a map[int64]int8 would occupy under the two layouts. With interleaved storage, every key-value pair needs 7 bytes of padding after it, 7*8 = 56 bytes in total. With split storage, the 8 keys are aligned together and so are the 8 values, so no extra padding is needed.
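
You can check this with unsafe.Sizeof; the sketch below contrasts a hypothetical interleaved pair with the split layout that bmap actually uses:

package main

import (
    "fmt"
    "unsafe"
)

// pair models the interleaved layout: the int64 key forces 8-byte alignment,
// so 7 bytes of padding follow each int8 value.
type pair struct {
    k int64
    v int8
}

// split models bmap's layout: 8 keys together, then 8 values together.
type split struct {
    keys [8]int64
    vals [8]int8
}

func main() {
    fmt.Println(unsafe.Sizeof(pair{}) * 8) // 128 bytes: 8 * (8 + 1 + 7 padding)
    fmt.Println(unsafe.Sizeof(split{}))    // 72 bytes: 64 + 8, saving 56 bytes
}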

Create a map

There are two ways to initialize a map

make(map[k]v)
// Specify an initial map size hint
make(map[k]v, hint)

For hint <= 8 (bucketCnt), Go calls makemap_small (source: src/runtime/map.go), which allocates the hmap directly from the heap:

func makemap_small() *hmap {
    h := new(hmap)
    h.hash0 = fastrand()
    return h
}

When hint > 8, the makemap function is called:

// If the compiler has determined that the map or the first bucket can be created on the stack, h and/or bucket may be non-nil.
// If h != nil, the map can be created directly in h.
// If h.buckets != nil, the bucket it points to can be used as the first bucket of the map.
func makemap(t *maptype, hint int, h *hmap) *hmap {
    // math.MulUintptr returns the product of hint and t.bucket.size, and whether the product overflowed.
    mem, overflow := math.MulUintptr(uintptr(hint), t.bucket.size)
    // The value of maxAlloc varies by platform; see src/runtime/malloc.go for details.
    if overflow || mem > maxAlloc {
        hint = 0
    }

    // initialize hmap
    if h == nil {
        h = new(hmap)
    }
    // obtain a hash seed via fastrand
    h.hash0 = fastrand()

    // Find a size parameter B that can hold the requested number of elements.
    B := uint8(0)
    for overLoadFactor(hint, B) {
        B++
    }
    h.B = B

    // Allocate the initial hash table.
    // If B == 0, buckets are allocated lazily, in mapassign.
    if h.B != 0 {
        var nextOverflow *bmap
        // makeBucketArray creates the map's underlying bucket array; it allocates at least 2^h.B buckets.
        h.buckets, nextOverflow = makeBucketArray(t, h.B, nil)
        if nextOverflow != nil {
            h.extra = new(mapextra)
            h.extra.nextOverflow = nextOverflow
        }
    }

    return h
}

The buckets array is allocated by the makeBucketArray function:

// makeBucketArray creates the array in which the map stores its buckets.
func makeBucketArray(t *maptype, b uint8, dirtyalloc unsafe.Pointer) (buckets unsafe.Pointer, nextOverflow *bmap) {
    base := bucketShift(b)
    nbuckets := base
    // For small b (less than 4), i.e. fewer than 16 buckets, overflow buckets are
    // unlikely, so the computational overhead of estimating them is avoided.
    if b >= 4 {
        // With 16 or more buckets, an extra 2^(b-4) overflow buckets are normally created.
        nbuckets += bucketShift(b - 4)
        sz := t.bucket.size * nbuckets
        up := roundupsize(sz)
        if up != sz {
            nbuckets = up / t.bucket.size
        }
    }

    // There are two cases for dirtyalloc. If it is nil, a new underlying array is
    // allocated. Otherwise it points to an array previously allocated by
    // makeBucketArray with the same t and b arguments; if that array is not empty,
    // it is cleared and reused.
    if dirtyalloc == nil {
        buckets = newarray(t.bucket, int(nbuckets))
    } else {
        buckets = dirtyalloc
        size := t.bucket.size * nbuckets
        if t.bucket.ptrdata != 0 {
            memclrHasPointers(buckets, size)
        } else {
            memclrNoHeapPointers(buckets, size)
        }
    }

    // When b >= 4, some overflow buckets were pre-allocated above.
    // To minimize the overhead of tracking them, the following convention is used:
    // if a pre-allocated overflow bucket's overflow pointer is nil, more buckets are
    // available by bumping the pointer.
    // (Pointer bumping: imagine memory as perfectly tidy, used memory on one side and
    // free memory on the other, with a pointer marking the boundary. Allocating an
    // object just moves the pointer forward by the size of that object.)
    // The last overflow bucket needs a safe non-nil overflow pointer; buckets itself is used.
    if base != nbuckets {
        nextOverflow = (*bmap)(add(buckets, base*uintptr(t.bucketsize)))
        last := (*bmap)(add(buckets, (nbuckets-1)*uintptr(t.bucketsize)))
        last.setoverflow(t, (*bmap)(buckets))
    }
    return buckets, nextOverflow
}

From this code we can see that normal buckets and overflow buckets are stored contiguously in memory, but they are referenced by different fields of hmap.

The map operation

Assume hashing a key yields a 64-bit hash value. If B = 5, the length of the buckets array, i.e. the number of buckets, is 32 (2^5).

For example, suppose a key is being put into the map and, after hashing, its hash value is: 10010111 000011110110110010001111001010100010010110010001010 01010

The low-order bits of the hash are used to select a bucket, and the high-order bits are used to distinguish keys within a bucket (when searching, tophash values are compared first, and the full keys are compared only on a match). With B = 5, the lowest 5 bits of the hash are taken: 01010, decimal value 10, selecting bucket number 10. The high 8 bits of the hash then locate the key's position within that bucket. If the key is not already in the bucket, the new key and value are placed in the first free key and value slots.

Note: selecting buckets by the low bits is essentially a modulo operation, but modulo is expensive, so the actual implementation uses bit operations instead. Here is the code for tophash:

func tophash(hash uintptr) uint8 {
    top := uint8(hash >> (sys.PtrSize*8 - 8))
    if top < minTopHash {
        top += minTopHash
    }
    return top
}
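Putting the two halves together, here is a small sketch of deriving the bucket index and tophash byte from a 64-bit hash (the hash constant is made up so that its low 5 bits are 01010 and its high byte is 10010111 = 151, matching the example above; it assumes a 64-bit platform):

package main

import "fmt"

func main() {
    const B = 5
    hash := uintptr(0x97a3c8f2a25651ca) // made-up hash: high byte 0x97 = 151, low 5 bits 01010
    mask := uintptr(1)<<B - 1           // bucketMask: 2^B - 1 = 31

    fmt.Println(hash & mask)             // 10: bucket number, same result as hash % 32 but cheaper
    fmt.Println(uint8(hash >> (64 - 8))) // 151: the tophash byte compared within the bucket
}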

A hash conflict occurs when two different keys fall into the same bucket. Go's solution is the chain address method (to keep things simple, assume no growth is in progress and the key is being added for the first time): scan the bucket and remember the first empty slot; if the key is found in neither the bucket nor its overflow buckets, place the key in that first empty slot. If the bucket and its overflow buckets are all full, attach a new overflow bucket and put the key in its first slot. (Because this is a first-time add, the key-already-exists case is not described.)

In the figure above, B is 5, so there are 32 buckets. The hash function is applied to the key being inserted; its low 5 bits are 00110, selecting bucket 6. The first six cells of bucket 6 are already occupied by normal tophash values, so the high 8 bits of the new hash (151) are written into the seventh tophash cell, and the key and value are placed in the seventh key and value slots.

If the key is found in none of the bucket's cells and overflow is not nil, the search continues into the overflow buckets until the key is found. If all cells have been searched and overflow is nil, the lookup returns the zero value of the map's value type (for example, 0 if the value type is int).

read

Reads come in two forms, i.e. the return value takes two forms, even though Go has no function overloading. This "dynamic" return value is syntactic sugar provided by Go: at compile time, each access is translated into the corresponding runtime function based on how many return values the caller uses.

That is:

    // form 1
    v := m[k]
    // form 2
    v, ok := m[k]

The source is still in runtime/map.go; the corresponding functions are:

  • runtime.mapaccess1, corresponding to the one-return-value form
  • runtime.mapaccess2, corresponding to the two-return-value form
  • runtime.mapaccessK, which returns the key and value together, used when iterating over a map

Form one is implemented by the mapaccess1 function analyzed below. The source also contains a mapaccess2 function, whose signature is as follows.

func mapaccess2(t *maptype, h *hmap, key unsafe.Pointer) (unsafe.Pointer, bool) {}

Compared with mapaccess1, mapaccess2 has an extra bool return value indicating whether the key was found in the map. Since it is essentially the same as mapaccess1, the detailed code is not repeated here.

The source also contains the mapaccessK function, whose signature is as follows.

func mapaccessK(t *maptype, h *hmap, key unsafe.Pointer) (unsafe.Pointer, unsafe.Pointer) {}

In contrast to mapaccess1, mapaccessK returns both the key and the value; its logic is otherwise the same.

Since the three functions share the same core logic, the following uses runtime.mapaccess1 as the example to analyze a map lookup.

func mapaccess1(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer {
    // if race detection is enabled (-race)
    if raceenabled && h != nil {
        callerpc := getcallerpc()
        pc := funcPC(mapaccess1)
        racereadpc(unsafe.Pointer(h), callerpc, pc)
        raceReadObjectPC(t.key, key, callerpc, pc)
    }
    // if the memory sanitizer (-msan) is enabled
    if msanenabled && h != nil {
        msanread(key, t.key.size)
    }
    // if the map is nil or has no elements, return the zero value
    if h == nil || h.count == 0 {
        if t.hashMightPanic() {
            t.hasher(key, 0) // see issue 23734
        }
        return unsafe.Pointer(&zeroVal[0])
    }
    // Note: this is a bitwise AND.
    // If h.flags has the hashWriting bit set (some other goroutine is writing a key
    // into the map), the result is non-zero and the runtime throws.
    // This also shows that Go's map is not safe for concurrent use.
    if h.flags&hashWriting != 0 {
        throw("concurrent map read and map write")
    }
    // Different key types use different hash algorithms; see the typehash logic in src/runtime/alg.go
    hash := t.hasher(key, uintptr(h.hash0))
    m := bucketMask(h.B)
    // locate the bucket with a bitwise AND
    b := (*bmap)(add(h.buckets, (hash&m)*uintptr(t.bucketsize)))
    // If oldbuckets is not nil, the map is growing.
    // Data in the old buckets may not have been moved to the new buckets yet,
    // so the old bucket has to be checked as well.
    if c := h.oldbuckets; c != nil {
        if !h.sameSizeGrow() {
            m >>= 1
        }
        oldb := (*bmap)(add(c, (hash&m)*uintptr(t.bucketsize)))
        // evacuated() returns true when tophash[0] of the old bucket is one of
        // evacuatedX, evacuatedY, evacuatedEmpty, i.e. the move has completed.
        // So the old bucket is only searched while its move is incomplete.
        if !evacuated(oldb) {
            b = oldb
        }
    }
    // compute the tophash value of the current key
    top := tophash(hash)
    // The core lookup logic below is a double loop: the outer loop walks from the
    // bucket to its overflow buckets; the inner loop walks the cells within a bucket.
    // The loop exits in three cases: the key is found; the current bucket has no
    // overflow bucket; or an emptyRest cell is hit, meaning everything after it
    // (higher indexes and the overflow chain) is empty.
bucketloop:
    for ; b != nil; b = b.overflow(t) {
        for i := uintptr(0); i < bucketCnt; i++ {
            // compare the tophash values first
            if b.tophash[i] != top {
                if b.tophash[i] == emptyRest {
                    break bucketloop
                }
                continue
            }
            // Keys in a bucket are stored contiguously, so the address of key i is
            // bucket address + data offset (the size of the bmap header) + i*keysize.
            // The value address is computed similarly, offset by a further 8 keysizes.
            k := add(unsafe.Pointer(b), dataOffset+i*uintptr(t.keysize))
            if t.indirectkey() {
                k = *((*unsafe.Pointer)(k))
            }
            // compare the actual keys
            if t.key.equal(key, k) {
                e := add(unsafe.Pointer(b), dataOffset+bucketCnt*uintptr(t.keysize)+i*uintptr(t.elemsize))
                if t.indirectelem() {
                    e = *((*unsafe.Pointer)(e))
                }
                return e
            }
        }
    }
    // not found in any bucket: return the zero value
    return unsafe.Pointer(&zeroVal[0])
}

The following diagram illustrates the lookup process of mapaccess1.

write

The source code is as follows.

Reading it, you can also see why a nil map can be read from and deleted from, but panics when written to.

func mapassign(t *maptype, h *hmap, key unsafe.Pointer) unsafe.Pointer {
    // Assigning to a nil map panics, for example:
    //   var m map[string]int
    //   m["k"] = 1
    if h == nil {
        panic(plainError("assignment to entry in nil map"))
    }
    // if race detection is enabled (-race)
    if raceenabled {
        callerpc := getcallerpc()
        pc := funcPC(mapassign)
        racewritepc(unsafe.Pointer(h), callerpc, pc)
        raceReadObjectPC(t.key, key, callerpc, pc)
    }
    // if the memory sanitizer (-msan) is enabled
    if msanenabled {
        msanread(key, t.key.size)
    }
    // some other goroutine is writing a key to the map: throw
    if h.flags&hashWriting != 0 {
        throw("concurrent map writes")
    }
    // compute the hash from the key and the hash seed
    hash := t.hasher(key, uintptr(h.hash0))

    // Set the hashWriting flag (via XOR) after calling t.hasher, since t.hasher
    // may panic, in which case no write has actually been done yet.
    h.flags ^= hashWriting

    if h.buckets == nil {
        h.buckets = newobject(t.bucket) // newarray(t.bucket, 1)
    }

again:
    // bucketMask returns 2^B - 1;
    // ANDing the hash with it selects the bucket
    bucket := hash & bucketMask(h.B)
    // if the map is growing (i.e. h.oldbuckets != nil), do some evacuation work first
    if h.growing() {
        growWork(t, h, bucket)
    }
    // compute the memory address of that bucket:
    // pos = start + bucketNumber * bucketsize
    b := (*bmap)(unsafe.Pointer(uintptr(h.buckets) + bucket*uintptr(t.bucketsize)))
    top := tophash(hash)

    var inserti *uint8
    var insertk unsafe.Pointer
    var elem unsafe.Pointer
bucketloop:
    for {
        // iterate over the 8 cells in the bucket
        for i := uintptr(0); i < bucketCnt; i++ {
            // Case 1: the cell's tophash differs from the current tophash,
            // i.e. b.tophash[i] != top.
            // The cell might be an empty slot. Normally the slots of a map look
            // like this (e stands for empty):
            //   [h0][h1][h2][h3][h4][e][e][e]
            // After deletes, it may look like this:
            //   [h0][h1][e][e][h5][e][e][e]
            // A new insert should go as far forward as possible:
            //   [h0][h1][e][e][h5][e][e][e]
            //            ^ this position
            // So the first empty slot passed in the scan is remembered, because an
            // equal key may or may not still appear later in the scan.
            if b.tophash[i] != top {
                // if the cell is empty and no candidate has been found yet, remember it
                if isEmpty(b.tophash[i]) && inserti == nil {
                    inserti = &b.tophash[i]
                    insertk = add(unsafe.Pointer(b), dataOffset+i*uintptr(t.keysize))
                    elem = add(unsafe.Pointer(b), dataOffset+bucketCnt*uintptr(t.keysize)+i*uintptr(t.elemsize))
                }
                if b.tophash[i] == emptyRest {
                    break bucketloop
                }
                continue
            }
            // Case 2: the cell's tophash equals the current tophash
            k := add(unsafe.Pointer(b), dataOffset+i*uintptr(t.keysize))
            if t.indirectkey() {
                k = *((*unsafe.Pointer)(k))
            }
            // equal tophash values do not necessarily mean equal keys
            if !t.key.equal(key, k) {
                continue
            }
            // the key already exists; update it if needed
            if t.needkeyupdate() {
                typedmemmove(t.key, k, key)
            }
            // memory address of the value for this key:
            // pos = start + dataOffset + 8*keysize + i*elemsize
            elem = add(unsafe.Pointer(b), dataOffset+bucketCnt*uintptr(t.keysize)+i*uintptr(t.elemsize))
            // jump to the done epilogue
            goto done
        }
        // No empty cell or matching cell found among the 8 cells: move on to the overflow bucket
        ovf := b.overflow(t)
        // no suitable cell found and no overflow bucket left: exit the loop
        if ovf == nil {
            break
        }
        b = ovf
    }

    // If no cell was found for the key in the existing buckets or overflow buckets,
    // two situations are possible:
    //
    // Situation 1: the map's load factor has reached the preset threshold of 6.5, or
    // the map has too many overflow buckets. In either case, grow the map.
    // hashGrow() does not actually move data; the copying is done by growWork().
    // Then jump back to the again label and rescan with the new buckets.
    if !h.growing() && (overLoadFactor(h.count+1, h.B) || tooManyOverflowBuckets(h.noverflow, h.B)) {
        hashGrow(t, h)
        goto again // Growing the table invalidates everything, so try again
    }

    // Situation 2: growth is not needed. Allocate a new overflow bucket for the
    // current bucket and insert the tophash and key at position 0 of that bucket.
    if inserti == nil {
        // all current buckets are full, allocate a new one.
        newb := h.newoverflow(t, b)
        inserti = &newb.tophash[0]
        insertk = add(unsafe.Pointer(newb), dataOffset)
        elem = add(insertk, bucketCnt*uintptr(t.keysize))
    }

    // store the new key and value at the insert position
    if t.indirectkey() {
        kmem := newobject(t.key)
        *(*unsafe.Pointer)(insertk) = kmem
        insertk = kmem
    }
    if t.indirectelem() {
        vmem := newobject(t.elem)
        *(*unsafe.Pointer)(elem) = vmem
    }
    typedmemmove(t.key, insertk, key)
    *inserti = top
    // the number of keys in the map increases by 1
    h.count++

done:
    if h.flags&hashWriting == 0 {
        throw("concurrent map writes")
    }
    h.flags &^= hashWriting
    if t.indirectelem() {
        elem = *((*unsafe.Pointer)(elem))
    }
    return elem
}

delete

What is special in the deletion logic is that a vacated position is set to the emptyOne marker, and if all positions after it are also empty, they are set to the emptyRest marker. The emptyRest flag is what lets searches decide whether to continue or break early.

func mapdelete(t *maptype, h *hmap, key unsafe.Pointer) {
    // (nil checks and race/msan instrumentation elided in this excerpt)
    // deleting a key is also a write operation and must not run concurrently with other writes
    if h.flags&hashWriting != 0 {
        throw("concurrent map writes")
    }

    hash := t.hasher(key, uintptr(h.hash0))

    // Set hashWriting after calling t.hasher, since t.hasher may panic.
    h.flags ^= hashWriting

    bucket := hash & bucketMask(h.B)
    if h.growing() { // help an in-progress growth along
        growWork(t, h, bucket)
    }
    b := (*bmap)(add(h.buckets, bucket*uintptr(t.bucketsize)))
    bOrig := b
    top := tophash(hash)
search:
    for ; b != nil; b = b.overflow(t) {
        for i := uintptr(0); i < bucketCnt; i++ {
            if b.tophash[i] != top {
                if b.tophash[i] == emptyRest {
                    break search
                }
                continue
            }
            k := add(unsafe.Pointer(b), dataOffset+i*uintptr(t.keysize))
            k2 := k
            if t.indirectkey() {
                k2 = *((*unsafe.Pointer)(k2))
            }
            if !t.key.equal(key, k2) {
                continue
            }
            // Only clear key if there are pointers in it.
            if t.indirectkey() {
                *(*unsafe.Pointer)(k) = nil
            } else if t.key.ptrdata != 0 {
                memclrHasPointers(k, t.key.size)
            }
            e := add(unsafe.Pointer(b), dataOffset+bucketCnt*uintptr(t.keysize)+i*uintptr(t.elemsize))
            if t.indirectelem() {
                *(*unsafe.Pointer)(e) = nil
            } else if t.elem.ptrdata != 0 {
                memclrHasPointers(e, t.elem.size)
            } else {
                memclrNoHeapPointers(e, t.elem.size)
            }
            b.tophash[i] = emptyOne
            // If the bucket now ends in a bunch of emptyOne states,
            // change those to emptyRest states.
            // It would be nice to make this a separate function, but
            // for loops are not currently inlineable.
            // If the tophash of the next cell is not emptyRest, skip to notLast.
            if i == bucketCnt-1 {
                if b.overflow(t) != nil && b.overflow(t).tophash[0] != emptyRest {
                    goto notLast
                }
            } else {
                if b.tophash[i+1] != emptyRest {
                    goto notLast
                }
            }
            // Otherwise, set the current cell's tophash to emptyRest and walk
            // backwards, turning trailing emptyOne cells into emptyRest.
            for {
                b.tophash[i] = emptyRest
                if i == 0 {
                    if b == bOrig {
                        break // beginning of initial bucket, we're done.
                    }
                    // Find previous bucket, continue at its last entry.
                    c := b
                    for b = bOrig; b.overflow(t) != c; b = b.overflow(t) {
                    }
                    i = bucketCnt - 1
                } else {
                    i--
                }
                if b.tophash[i] != emptyOne {
                    break
                }
            }
        notLast:
            h.count--
            break search
        }
    }
    if h.flags&hashWriting == 0 {
        throw("concurrent map writes")
    }
    h.flags &^= hashWriting
}

expansion

When the load factor was introduced earlier, we mentioned that it is the key metric for deciding whether a hash table needs to grow. For Go's map, besides the load factor, the number of overflow buckets is a second key trigger for expansion.

To maintain access efficiency, the map checks whether it needs to grow before new keys are added; expansion is essentially trading space for time. The mapassign source above already annotates the expansion conditions. There are two:

  • The load factor has reached its critical point, i.e. the number of elements >= total number of buckets * 6.5. In this case most buckets may be close to full (on average 6.5 key-value pairs stored per bucket); inserting new elements would very likely have to hang them off overflow buckets.
func overLoadFactor(count int, B uint8) bool {
    return count > bucketCnt && uintptr(count) > loadFactorNum*(bucketShift(B)/loadFactorDen)
}
  • There are too many overflow buckets. When the total number of buckets is less than 2^15, the overflow buckets are considered too many once their count reaches the total number of buckets. When the total number of buckets is at least 2^15, the overflow buckets are considered too many once their count reaches 2^15.
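This check is implemented by tooManyOverflowBuckets in src/runtime/map.go; in the Go 1.15 sources this article follows, it looks roughly like this:

// tooManyOverflowBuckets reports whether noverflow buckets is too many for a map with 2^B buckets.
func tooManyOverflowBuckets(noverflow uint16, B uint8) bool {
    // If the threshold is too low, we do extraneous work.
    // If the threshold is too high, maps that grow and shrink can hold on to lots of unused memory.
    if B > 15 {
        B = 15
    }
    // The compiler doesn't see here that B < 16; mask B to help it out.
    return noverflow >= uint16(1)<<(B&15)
}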
Doubling expansion

The load factor reflects how full the bmaps are. When a map holds more than 8 elements and the load factor exceeds 6.5 (buckets 81.25% full on average), doubling expansion is triggered and the number of buckets doubles. Because each bmap has a capacity of 8, lookup performance degrades rapidly when the load factor grows too large, so Go sets the loadFactor threshold to 13/2 = 6.5.

Equal-size expansion

When there are too many overflow buckets, equal-size expansion is triggered: the number of buckets stays the same. Too many overflow buckets reduce lookup efficiency, so an equal-size expansion compacts the data, reducing the use of overflow buckets and making the bmap layout denser.

There are two trigger conditions for equal-size expansion:

  • If map.B <= 15, equal-size expansion is triggered once the number of overflow buckets exceeds the number of regular buckets
  • If map.B > 15, equal-size expansion is triggered once the number of overflow buckets exceeds pow(2, 15)

The expansion conditions are checked before data is written. If the current hmap state meets the doubling or equal-size condition above, an expansion is triggered. Note that growing a Go map is not an atomic process but a gradual one: the work is amortized over subsequent write operations. runtime.hashGrow only performs the preparation, marking the map as growing.

Gradual expansion is an application of divide and conquer: Go does not migrate the whole map at once but spreads the work over individual write operations. The benefit is clear: it caps the transient computation and storage cost of an expansion. The downside is that an intermediate state has to be maintained, which adds some cost to ordinary operations, as discussed below.

Move the bucket

During expansion, data is not transferred from the old buckets to the new buckets eagerly. The transfer follows a copy-on-write-like rule: only when an actual assignment happens is data moved, and only the old buckets involved in the operation are evacuated.

  • hash

Since the number of buckets is a positive integer power of 2, the effect of doubling on bucket selection is simply that the bucket index takes one more bit of the hash value (see the sketch after this list).

  • biggerSizeGrow (doubling expansion)

  • sameSizeGrow (equal-size expansion)
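
A small sketch of that extra bit in action, reusing the made-up hash from the lookup example (assumes a 64-bit platform):

package main

import "fmt"

func main() {
    hash := uintptr(0x97a3c8f2a25651ca) // made-up hash from the lookup example
    oldB := uintptr(5)                  // 32 buckets before doubling

    fmt.Println(hash & (1<<oldB - 1))     // 10: bucket number before doubling
    fmt.Println(hash & (1<<(oldB+1) - 1)) // 10: bucket number after doubling (either 10 or 10+32)
    fmt.Println(hash&(1<<oldB) != 0)      // false: the "newbit" is 0, so this key stays in the X part
}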

The source code

// growWork migrates one or two buckets per call
func growWork(t *maptype, h *hmap, bucket uintptr) {
    // make sure we evacuate the oldbucket corresponding
    // to the bucket we're about to use,
    // i.e. migrate the old bucket currently being accessed
    evacuate(t, h, bucket&h.oldbucketmask())

    // evacuate one more oldbucket to make progress on growing
    if h.growing() {
        evacuate(t, h, h.nevacuate)
    }
}

// evacDst is an evacuation destination.
type evacDst struct {
    b *bmap          // current destination bucket
    i int            // key/elem index into b
    k unsafe.Pointer // pointer to current key storage
    e unsafe.Pointer // pointer to current elem storage
}

func evacuate(t *maptype, h *hmap, oldbucket uintptr) {
    b := (*bmap)(add(h.oldbuckets, oldbucket*uintptr(t.bucketsize)))
    newbit := h.noldbuckets() // number of old buckets
    if !evacuated(b) {
        // TODO: reuse overflow buckets instead of using new ones, if there
        // is no iterator using the old buckets.  (If !oldIterator.)

        // When doubling, one old bucket is evacuated into two new buckets, one in
        // the front half of the new array and one in the back half: xy[0] points
        // to the front half and xy[1] to the back half.
        // xy contains the x and y (low and high) evacuation destinations.
        var xy [2]evacDst
        x := &xy[0]
        x.b = (*bmap)(add(h.buckets, oldbucket*uintptr(t.bucketsize)))
        x.k = add(unsafe.Pointer(x.b), dataOffset)
        x.e = add(x.k, bucketCnt*uintptr(t.keysize))

        if !h.sameSizeGrow() {
            // For a doubling grow, the y part is needed as well; compute its addresses.
            // The bucket number represented by y is the old number plus 2^oldB.
            y := &xy[1]
            y.b = (*bmap)(add(h.buckets, (oldbucket+newbit)*uintptr(t.bucketsize)))
            y.k = add(unsafe.Pointer(y.b), dataOffset)
            y.e = add(y.k, bucketCnt*uintptr(t.keysize))
        }

        // Migration proceeds per bucket index: each step evacuates the regular
        // bucket at that index together with all of its overflow buckets.
        for ; b != nil; b = b.overflow(t) {
            k := add(unsafe.Pointer(b), dataOffset)
            e := add(k, bucketCnt*uintptr(t.keysize))
            // iterate over all the cells in the bucket
            for i := 0; i < bucketCnt; i, k, e = i+1, add(k, uintptr(t.keysize)), add(e, uintptr(t.elemsize)) {
                top := b.tophash[i]
                // ... (handling of empty cells elided in this excerpt) ...

                k2 := k
                if t.indirectkey() {
                    k2 = *((*unsafe.Pointer)(k2))
                }

                var useY uint8 // equal-size grows always use the x part; doubling grows pick x or y based on one extra hash bit
                if !h.sameSizeGrow() {
                    hash := t.hasher(k2, uintptr(h.hash0))
                    if h.flags&iterator != 0 && !t.reflexivekey() && !t.key.equal(k2, k2) {
                        // This branch is entered only when the key is math.NaN(),
                        // because math.NaN() != math.NaN(). We skip this special
                        // logic for now; NaN keys are covered in more detail later.
                        useY = top & 1
                        top = tophash(hash)
                    } else {
                        // decide by whether bit oldB+1 of the new hash is 0 or 1
                        if hash&newbit != 0 {
                            useY = 1
                        }
                    }
                }

                if evacuatedX+1 != evacuatedY || evacuatedX^1 != evacuatedY {
                    throw("bad evacuatedN")
                }

                b.tophash[i] = evacuatedX + useY // evacuatedX + 1 == evacuatedY
                dst := &xy[useY]                 // evacuation destination

                // if the destination bucket is full, attach an overflow bucket
                if dst.i == bucketCnt {
                    dst.b = h.newoverflow(t, dst.b) // create a new overflow bucket
                    dst.i = 0                       // the index starts over at 0
                    dst.k = add(unsafe.Pointer(dst.b), dataOffset)
                    dst.e = add(dst.k, bucketCnt*uintptr(t.keysize))
                }
                dst.b.tophash[dst.i&(bucketCnt-1)] = top // mask dst.i as an optimization, to avoid a bounds check
                if t.indirectkey() { // the key is stored as a pointer
                    *(*unsafe.Pointer)(dst.k) = k2 // copy the pointer
                } else { // copy the old key (the value itself) to the new location
                    typedmemmove(t.key, dst.k, k)
                }
                if t.indirectelem() { // the value is stored as a pointer
                    *(*unsafe.Pointer)(dst.e) = *(*unsafe.Pointer)(e)
                } else {
                    typedmemmove(t.elem, dst.e, e)
                }

                // advance to the next destination cell
                dst.i++
                dst.k = add(dst.k, uintptr(t.keysize))
                dst.e = add(dst.e, uintptr(t.elemsize))
            }
        }

        // Unlink the overflow buckets & clear key/elem to help GC.
        if h.flags&oldIterator == 0 && t.bucket.ptrdata != 0 {
            b := add(h.oldbuckets, oldbucket*uintptr(t.bucketsize))
            // Preserve b.tophash because the evacuation
            // state is maintained there.
            ptr := add(b, dataOffset)
            n := uintptr(t.bucketsize) - dataOffset
            memclrHasPointers(ptr, n)
        }
    }

    // Bucket migration is not sequential: the map evacuates the bucket being
    // written to first. If the bucket just evacuated is the next one recorded
    // in hmap, advance the migration progress counter.
    if oldbucket == h.nevacuate {
        advanceEvacuationMark(h, t, newbit)
    }
}

// advanceEvacuationMark records the migration progress
func advanceEvacuationMark(h *hmap, t *maptype, newbit uintptr) {
    h.nevacuate++
    // Experiments suggest that 1024 is overkill by at least an order of magnitude.
    // Put it in there as a safeguard anyway, to ensure O(1) behavior.
    stop := h.nevacuate + 1024
    if stop > newbit {
        stop = newbit
    }
    for h.nevacuate != stop && bucketEvacuated(t, h, h.nevacuate) {
        h.nevacuate++
    }
    // once everything is evacuated, h.oldbuckets and h.extra.oldoverflow are cleared
    if h.nevacuate == newbit { // newbit == # of oldbuckets
        // Growing is all done. Free old main bucket array.
        h.oldbuckets = nil
        // Can discard old overflow buckets as well.
        // If they are still referenced by an iterator,
        // then the iterator holds a pointers to the slice.
        if h.extra != nil {
            h.extra.oldoverflow = nil
        }
        h.flags &^= sameSizeGrow
    }
}

traverse

Conclusion first: the result of traversing a map is unordered.

During traversal, buckets are visited in order, and within each bucket the cells (and the overflow buckets) are traversed as needed. However, after a map grows, keys are relocated, and a key in one bucket may move to another bucket. From this alone, traversal results could not reliably keep their original order (see the expansion section above).

On top of that, to guarantee that traversal is unordered, Go starts each traversal at a randomly chosen bucket and at a randomly chosen cell offset within it, rather than always at bucket 0. It then walks the buckets in order until it wraps around to the starting bucket.
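
A quick way to observe this (run it a few times; the order changes between runs):

package main

import "fmt"

func main() {
    m := map[string]int{"a": 1, "b": 2, "c": 3, "d": 4}
    // The starting bucket and cell are chosen at random for each traversal,
    // so the printed order differs from run to run.
    for k, v := range m {
        fmt.Println(k, v)
    }
}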

The above describes traversing a map that is not growing. If the map is growing, we first check whether the bucket being traversed has been evacuated; if its data is still in the old bucket, it is read from the old bucket.

Note: incremental and equal-size expansion are described in the expansion section above. During an incremental expansion, one old bucket splits into two different new buckets. When the iterator needs data from an old bucket, say old bucket 1, it cannot take everything from it: it may only take the keys that, after the split, belong to the new bucket currently being traversed (e.g. new bucket 1).

hiter is the iterator structure; runtime.mapiterinit initializes an iterator and runtime.mapiternext performs the iteration.

If expansion were an atomic operation, traversal could simply walk the buckets and cells in order. But since a Go map grows gradually, a map may well be traversed while it is growing, with part of the oldbuckets data not yet migrated. To make sure every key is visited, the iterator has to move back and forth between oldbuckets and buckets. Consider this illustration of a traversal:

Assume a map with the following memory layout, currently growing, with the migration of bucket 0 already completed: the old buckets[0] (shown in red) has been evacuated into the new buckets[0] and buckets[2].

conclusion

  • Go's map is implemented on top of a hash table and resolves hash conflicts with the chain address method; the core data structure it relies on is an array plus linked lists.

  • A map defines 2^B buckets, and each bucket can hold eight keys. Keys are scattered into buckets according to their hash value: the low-order bits of the hash (its last B bits) determine the bucket number, and the high-order bits (its first 8 bits) identify different keys within the same bucket.

  • Expansion is triggered when too many keys are added to the buckets and the element count exceeds the load-factor limit, or when repeated additions and deletions have produced too many overflow buckets.

  • Expansion comes in two forms: incremental and equal-size. Incremental expansion doubles the number of buckets and redistributes the keys of each old bucket across two new buckets. Equal-size expansion does not change the number of buckets but compacts the data within them. Both forms create a new bucket array; neither works in place.

  • Expansion proceeds gradually, to prevent the performance hit of relocating a large number of keys at once. It is triggered when a new element is added, and bucket evacuation happens during assignments and deletions, moving at most two buckets at a time. The core of lookup, assignment, and deletion is locating the key; once that is understood, the map source is straightforward to read.

Common pitfalls

  • Types that are not comparable cannot be used as keys
  • Map values are not addressable, that is, you cannot take the address of a map element (see the sketch after this list)
  • len(map) returns the number of elements in the map, not the map's capacity
  • A map obtained from new is not usable, because new only allocates the 8 bytes of the map header and returns a pointer to a nil map

By contrast, make is syntactic sugar that the compiler translates into a runtime.makemap call, which allocates the memory and initializes the object.

  • When a map is printed with fmt, an empty map produces the same output as a nil map: map[]. So don't judge whether a map is empty or nil from its output; test map == nil instead
  • When the capacity is exceeded the map grows automatically, but try to provide a reasonable initial size anyway
  • A map does not need to be passed as a pointer when used as a function parameter, because a map is itself a pointer: as the initialization section above shows, runtime.makemap returns an *hmap
  • delete does not actually free the map's memory; to let the map be reclaimed you still need to set it to nil
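
As promised above, a short sketch of the unaddressable-value pitfall and the usual workaround:

package main

import "fmt"

type point struct{ x, y int }

func main() {
    m := map[string]point{"a": {1, 2}}

    // _ = &m["a"]  // compile error: cannot take the address of m["a"]
    // m["a"].x = 3 // compile error: cannot assign to struct field m["a"].x in map

    p := m["a"] // copy the value out,
    p.x = 3     // modify the copy,
    m["a"] = p  // and store it back
    fmt.Println(m["a"])
}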

Use advice

  • As the design of map shows, it is not a concurrency-safe data structure: a program that reads and writes a map simultaneously from different goroutines will fail. To use a map concurrently, guard it with a lock (sync.Mutex or sync.RWMutex); the Go standard library also provides a ready-made concurrency-safe map, sync.Map
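
A minimal sketch of the lock-based approach (the type and method names here are illustrative, not from any library):

package main

import "sync"

// safeMap wraps a plain map with a read-write lock.
type safeMap struct {
    mu sync.RWMutex
    m  map[string]int
}

func (s *safeMap) Get(k string) (int, bool) {
    s.mu.RLock() // multiple readers may hold the lock at once
    defer s.mu.RUnlock()
    v, ok := s.m[k]
    return v, ok
}

func (s *safeMap) Set(k string, v int) {
    s.mu.Lock() // writers need exclusive access
    defer s.mu.Unlock()
    s.m[k] = v
}

func main() {
    sm := &safeMap{m: make(map[string]int)}
    var wg sync.WaitGroup
    for i := 0; i < 4; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            sm.Set("k", i)
            sm.Get("k")
        }(i)
    }
    wg.Wait()
}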

  • The result of traversing the map is unordered and should be noted in use.

  • The hmap structure shows that a map holds a pointer to the underlying buckets array. So, like a slice, even though Go passes arguments by value, when a map is passed to a function, operations on the map inside the function affect the map outside it as well

  • Finally, there is one special key value, math.NaN(). It produces a different hash on every use, so m[math.NaN()] cannot find its key, and assigning to it repeatedly leaves multiple NaN keys in the map. You will rarely need this; just know that the special case exists
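
A short demonstration of that NaN behavior:

package main

import (
    "fmt"
    "math"
)

func main() {
    m := map[float64]int{}
    m[math.NaN()] = 1
    m[math.NaN()] = 2 // a second, distinct NaN key

    fmt.Println(len(m)) // 2: each NaN hashed differently
    _, ok := m[math.NaN()]
    fmt.Println(ok) // false: a NaN key can never be looked up again
}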
