Memory allocation for Go

In Go, memory management is handled by the runtime, from allocating memory to reclaiming it when it is no longer used. While developers don’t have to worry much about allocating and freeing memory when writing code, Go’s memory allocation strategy contains some interesting designs, and understanding them helps us write more efficient Go programs.

Go memory management is designed to run quickly in a concurrent environment and is integrated with the garbage collector. Let’s look at a simple example:

package main

type smallStruct struct {
   a, b int64
   c, d float64
}

func main() {
   smallAllocation()
}

//go:noinline
func smallAllocation() *smallStruct {
   return &smallStruct{}
}

The //go:noinline directive prevents the compiler from inlining the function, so main uses the pointer variable returned by smallAllocation. Because the pointer returned by smallAllocation outlives the function call, the returned struct must be allocated on the heap.

A previous article on the concept of inlining said:

Inlining is a manual or compiler optimization to replace the call to a short function with the function body itself. The reason for this is that it eliminates the overhead of the function call itself and enables the compiler to perform other optimization strategies more efficiently.

So if the example above did not interfere with the compiler, the compiler would inline the smallAllocation function body directly into main. There would then be no call to smallAllocation, all variables would be used within main’s scope, and nothing would need to be allocated on the heap.

Continuing with the above example, we can confirm our analysis, namely that &smallStruct{} will be allocated on the heap, by running the escape analysis command go tool compile -m main.go:

➜ go tool compile -m main.go
main.go:12:6: can inline main
main.go:10:9: &smallStruct literal escapes to heap


With the command go tool compile -S main.go, we can print the program’s assembly code, which also shows the heap allocation explicitly:

0x001d 00029 (main.go:10)       LEAQ    type."".smallStruct(SB), AX
0x0024 00036 (main.go:10)       PCDATA  $2, $0
0x0024 00036 (main.go:10)       MOVQ    AX, (SP)
0x0028 00040 (main.go:10)       CALL    runtime.newobject(SB)

The built-in newobject function allocates new memory on the heap by calling another built-in function, mallocgc. Go has two memory allocation strategies: one for small blocks of memory and one for large blocks, those larger than 32KB.
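The small/large split described above can be sketched as a simple dispatcher. This is an illustrative sketch, not the runtime’s real code; the names allocPath and maxSmallSize are my own:

```go
package main

import "fmt"

// maxSmallSize mirrors the 32KB boundary described above; the name is
// illustrative, not the runtime's real identifier.
const maxSmallSize = 32 << 10 // 32KB

// allocPath reports which allocation strategy a request of the given
// size would take in the scheme described in this article.
func allocPath(size int) string {
	if size <= maxSmallSize {
		return "mcache/mcentral (small object)"
	}
	return "mheap (large object)"
}

func main() {
	fmt.Println(allocPath(32))       // the 32-byte struct from the example
	fmt.Println(allocPath(64 << 10)) // a 64KB buffer
}
```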

Let’s take a closer look at these two strategies.

Allocation policy for memory blocks smaller than 32KB

When an application makes a small memory request of less than 32KB, Go serves it from a thread-local cache called the mcache. The mcache holds a series of memory blocks called mspans, which are the units programs allocate memory from.

In the Go scheduler model, each thread M is bound to a single processor P, which can run only one goroutine at a time, and each P is bound to a local cache mcache as described above. When memory allocation is required, the currently running goroutine looks for an available mspan in its mcache. Allocating from the local mcache requires no locking, which makes it more efficient.

So one might ask: some variables are small numbers while others are complex structures, and each is assigned an mspan when requesting memory, so won’t such units produce waste? In fact, the mspans held by an mcache are not all of a uniform size; they are grouped into roughly 70 size classes, ranging from 8 bytes to 32KB.

In the case of the example at the beginning of this article, the struct is exactly 32 bytes in size, so the allocator gives it an mspan from the 32-byte size class.
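The rounding to a size class can be illustrated with a toy lookup. The table below is a small illustrative subset of the real one (the full, finer-grained table lives in runtime/sizeclasses.go), and roundToClass is my own name:

```go
package main

import "fmt"

// A small illustrative subset of the ~70 size classes; the real table
// in runtime/sizeclasses.go is more fine-grained.
var sizeClasses = []int{8, 16, 24, 32, 48, 64, 80, 96, 112, 128}

// roundToClass returns the smallest size class that fits the request,
// mimicking how the allocator picks an mspan for a small object.
func roundToClass(size int) int {
	for _, c := range sizeClasses {
		if size <= c {
			return c
		}
	}
	return -1 // beyond this toy table
}

func main() {
	fmt.Println(roundToClass(32)) // the 32-byte smallStruct fits exactly
	fmt.Println(roundToClass(33)) // one byte more rounds up to 48
}
```

Note the consequence: a 33-byte object occupies a 48-byte slot, so some internal fragmentation is the price paid for fast, lock-free class-based allocation.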

Now, we might be wondering: what if the mcache doesn’t have a free 32-byte mspan when it allocates memory? Go also maintains an mcentral for each size class of mspan.

The purpose of mcentral is to provide mspan resources, sharded by size class, to all mcaches. Each mcentral holds a global list of mspans of one specific size, both allocated and unallocated. When a worker thread has no appropriate (that is, correctly sized) mspan in its mcache, it fetches one from the corresponding mcentral. An mcentral is shared by all worker threads, and multiple goroutines compete for it, so taking resources from mcentral requires locking.

mcentral is defined as follows:

// runtime/mcentral.go
type mcentral struct {
   // mutex
   lock mutex
   // list of mspans that still have free objects
   nonempty mSpanList
   // list of mspans with no free objects, or taken by an mcache
   empty mSpanList
   // cumulative count of objects allocated
   nmalloc uint64
}

mcentral maintains two doubly linked lists: nonempty holds mspans that still have free objects available for allocation, while empty holds mspans whose objects have all been allocated.

If there is no suitable free mspan in the mcache, the worker thread requests one from mcentral, as shown in the following figure.

Here’s how an mcache gets an mspan from mcentral and returns it:

  • To get an mspan: acquire the lock; find an available mspan in the nonempty list and delete it from that list; add the mspan to the empty list; hand the mspan to the worker thread; release the lock.
  • To return an mspan: acquire the lock; delete the mspan from the empty list; add it back to the nonempty list; release the lock.
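The two list operations above can be sketched as a toy mcentral. This is a simplified model with my own type and method names (span, central, cacheSpan, uncacheSpan), using slices instead of the runtime’s intrusive linked lists:

```go
package main

import (
	"fmt"
	"sync"
)

// span stands in for an mspan in this sketch.
type span struct{ free int }

// central is a simplified mcentral: a lock plus the two lists
// described in the text.
type central struct {
	mu       sync.Mutex
	nonempty []*span // spans that still have free objects
	empty    []*span // spans fully allocated or handed to an mcache
}

// cacheSpan hands a span to a worker: under the lock, it moves a span
// from the nonempty list to the empty list and returns it.
func (c *central) cacheSpan() *span {
	c.mu.Lock()
	defer c.mu.Unlock()
	if len(c.nonempty) == 0 {
		return nil // caller would fall back to mheap here
	}
	s := c.nonempty[0]
	c.nonempty = c.nonempty[1:]
	c.empty = append(c.empty, s)
	return s
}

// uncacheSpan takes a span back: under the lock, it moves the span
// from the empty list to the nonempty list.
func (c *central) uncacheSpan(s *span) {
	c.mu.Lock()
	defer c.mu.Unlock()
	for i, e := range c.empty {
		if e == s {
			c.empty = append(c.empty[:i], c.empty[i+1:]...)
			break
		}
	}
	c.nonempty = append(c.nonempty, s)
}

func main() {
	c := &central{nonempty: []*span{{free: 4}}}
	s := c.cacheSpan()
	fmt.Println(s != nil, len(c.nonempty), len(c.empty)) // span moved to empty
	c.uncacheSpan(s)
	fmt.Println(len(c.nonempty), len(c.empty)) // span moved back
}
```

The single mutex per central is the key design point: contention is scoped to one size class rather than the whole allocator.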

When mcentral has no free mspan, it applies to mheap. And if mheap has no resources available either, it requests new memory from the operating system. mheap is mainly used for allocating memory for large objects and for managing uncut mspans, handing them to mcentral to be cut into small objects.

Also, we can see that the mheap contains mcentrals for all size classes, so when an mcache requests an mspan from an mcentral, it only needs to take that individual mcentral’s lock and does not block requests for mspans of other sizes.

As mentioned above, every mspan size class has a global list stored in an mcentral used by all threads, and all the mcentrals are stored in the mheap. The arena area of the mheap is the actual heap area used at runtime, divided into 8KB pages; these memory pages store all objects initialized on the heap. The runtime manages all this memory with a two-dimensional array of runtime.heapArena, each of which manages 64MB of memory.
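The numbers above imply a fixed page count per arena, worth making explicit. A minimal arithmetic check, using the sizes stated in the text:

```go
package main

import "fmt"

// Page and arena sizes from the text: 8KB pages, 64MB per heapArena.
const (
	pageSize  = 8 << 10  // 8KB
	arenaSize = 64 << 20 // 64MB
)

func main() {
	// Each runtime.heapArena therefore tracks this many pages:
	fmt.Println(arenaSize / pageSize) // 8192 pages per arena
}
```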

If the arena area does not have enough space, runtime.mheap.sysalloc is called to request more memory from the operating system.

Allocation policy for memory blocks larger than 32KB

Go does not use the thread-local cache mcache or the global central cache mcentral to manage memory allocations exceeding 32KB. Requests that exceed 32KB go directly to the heap (mheap), which allocates the corresponding number of memory pages (each page is 8KB) to the program.
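Rounding a large request up to whole 8KB pages is simple ceiling division. A minimal sketch of that calculation (pagesNeeded is my own name, not a runtime function):

```go
package main

import "fmt"

const pageSize = 8 << 10 // 8KB pages, as described above

// pagesNeeded rounds a large allocation request up to a whole number
// of 8KB pages, the granularity at which mheap hands out memory.
func pagesNeeded(size int) int {
	return (size + pageSize - 1) / pageSize
}

func main() {
	fmt.Println(pagesNeeded(33 << 10)) // a 33KB request needs 5 pages
	fmt.Println(pagesNeeded(64 << 10)) // 64KB fits exactly in 8 pages
}
```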

Conclusion

If we string together all the concepts involved in memory allocation management, we can outline a global view of Go memory management:

Memory allocation in Go is very complicated. This article looks at it from a high level without going into every detail; the goal is a general understanding of how it works.

To sum up, the strategies for Go memory allocation management are as follows:

  • When the program starts, Go requests a chunk of memory from the operating system, which is managed globally by the mheap structure.
  • The basic unit of Go memory management is the mspan; each mspan can allocate objects of a specific size class.
  • mcache, mcentral, and mheap are the three major components of Go memory management: mcache manages the thread-local cache of mspans; mcentral manages the global mspans used by all threads; mheap manages all of Go’s dynamically allocated memory.
  • Small objects are generally allocated memory through an mspan; large objects are allocated directly by the mheap.

Further reading

Escape analysis in Go memory management

Solution ideas for last week’s concurrency problem, and an introduction to the Go scheduler

References

Memory Management and Allocation

Illustration of Go language memory allocation

Memory allocator

If you like my articles, please give them a thumbs-up. I share what I have learned through technical articles every week; thank you for your support. Search WeChat for the official account “NMS bi Bi” to get my articles as soon as they are published.