Memory allocation for Go
In Go, memory management is handled by the runtime, from allocating memory to reclaiming it once it is no longer used. Although developers rarely need to worry about allocation and recycling while writing code, Go's allocation strategy contains some interesting designs, and understanding them helps us write more efficient Go programs.
Go memory management is designed to run quickly in a concurrent environment and is integrated with the garbage collector. Let’s look at a simple example:
package main

type smallStruct struct {
	a, b int64
	c, d float64
}

func main() {
	smallAllocation()
}

//go:noinline
func smallAllocation() *smallStruct {
	return &smallStruct{}
}
The //go:noinline directive prevents the compiler from inlining the function, so main uses the pointer returned by smallAllocation. Because the value created inside smallAllocation is still referenced after the function returns, it escapes and is allocated on the heap.
A previous article on the concept of inlining said:
Inlining is a manual or compiler optimization to replace the call to a short function with the function body itself. The reason for this is that it eliminates the overhead of the function call itself and enables the compiler to perform other optimization strategies more efficiently.
So if we had not interfered with the compiler in the example above, it would have inlined the body of smallAllocation directly into main. There would then be no call to smallAllocation, all variables would be used only within the scope of main, and nothing would need to be allocated on the heap.
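To make that concrete, this is roughly what main would amount to after such inlining; it is a conceptual sketch of the effect, not actual compiler output:

package main

type smallStruct struct {
	a, b int64
	c, d float64
}

// With smallAllocation inlined, the struct is used only inside main,
// so it can stay on main's stack instead of escaping to the heap.
func main() {
	s := &smallStruct{}
	_ = s
}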
Continuing with the example, we can confirm the analysis above, namely that &smallStruct{} is allocated on the heap, with the escape analysis command go tool compile -m main.go:
➜ go tool compile -m main.go
main.go:12:6: can inline main
main.go:10:9: &smallStruct literal escapes to heap
With the command go tool compile -S main.go, we can display the program's assembly code, which also shows the memory allocation explicitly:
0x001d 00029 (main.go:10) LEAQ type."".smallStruct(SB), AX
0x0024 00036 (main.go:10) PCDATA $2, $0
0x0024 00036 (main.go:10) MOVQ AX, (SP)
0x0028 00040 (main.go:10) CALL runtime.newobject(SB)
The runtime function newobject allocates the new memory on the heap by calling another runtime function, mallocgc. Go has two memory allocation strategies: one for small memory requests and another for large requests of more than 32KB.
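A minimal sketch of that size-based branching is below; the threshold comes from the text above, while the function and its strings are purely illustrative and not part of the runtime:

package main

import "fmt"

const smallSizeThreshold = 32 << 10 // 32KB, the threshold mentioned above

// allocPath is an illustrative helper, not runtime code: it only shows
// which of the two strategies a request of a given size would take.
func allocPath(size int) string {
	if size <= smallSizeThreshold {
		return "small-object path: mcache -> mcentral -> mheap"
	}
	return "large-object path: pages taken directly from the mheap"
}

func main() {
	fmt.Println(allocPath(32))      // the 32-byte struct from the example
	fmt.Println(allocPath(64 << 10)) // a 64KB request
}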
Let’s take a closer look at these two strategies.
Allocation policy for memory blocks smaller than 32KB
When the application makes a small memory request of less than 32KB, Go serves it from a thread-local cache called the mcache. The mcache holds a series of 32KB memory blocks called mspans, which are the units from which memory is allocated to the program.
Allocating memory for the program from the mcache
In Go's scheduler model, each thread M is bound to a processor P, which can run only one goroutine at any given moment, and each P is bound to a local cache, the mcache described above. When memory needs to be allocated, the currently running goroutine looks for an available mspan in its P's mcache. Allocating from the local mcache requires no locking, which makes it more efficient.
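A conceptual sketch of this relationship is below; the field names are simplified stand-ins, not the actual runtime definitions:

package sketch

// Conceptual sketch only; the real runtime types carry many more fields.

type mspan struct {
	// a run of heap pages cut into fixed-size objects of one size class
}

type mcache struct {
	// one cached mspan per size class, owned exclusively by a single P,
	// so allocating from it needs no lock
	alloc []*mspan
}

type p struct {
	mcache *mcache // each P carries its own local cache
}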
One might ask: some variables are small numbers and some are complex structures, yet each is handed an mspan when it requests memory, so won't such units produce waste? In fact, the mspans held by the mcache are not all the same size; they are divided into roughly 70 size classes, ranging from 8 bytes up to 32KB.
In the example at the beginning of this article, the struct is 32 bytes in size, so a 32-byte mspan is sufficient and the allocator serves the request from the 32-byte size class.
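We can verify the struct's size ourselves; the snippet below only confirms the 32-byte figure, it does not show which mspan the runtime actually picks:

package main

import (
	"fmt"
	"unsafe"
)

type smallStruct struct {
	a, b int64
	c, d float64
}

func main() {
	// 2 x int64 + 2 x float64 = 4 x 8 bytes = 32 bytes
	fmt.Println(unsafe.Sizeof(smallStruct{})) // prints 32
}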
Alloc allocates memory
Now we might wonder: what if the mcache has no free 32-byte mspan when it needs to allocate? Go also maintains an mcentral for each mspan size class.
The purpose of mcentral is to provide mspan resources for all mcaches. Each mcentral holds global lists of mspans of one specific size class, both in use and free. When a worker thread has no suitable (that is, correctly sized) mspan in its mcache, it fetches one from the corresponding mcentral. The mcentral is shared by all worker threads and multiple goroutines compete for it, so acquiring resources from mcentral requires locking.
mcentral is defined as follows:
// runtime/mcentral.go
type mcentral struct {
	// mutex
	lock mutex
	// list of mspans that still have free objects
	nonempty mSpanList
	// list of mspans with no free objects, or mspans taken away by an mcache
	empty mSpanList
	// cumulative count of objects allocated from this mcentral
	nmalloc uint64
}
mcentral maintains two doubly linked lists: nonempty holds mspans that still have free objects to allocate, while empty holds mspans whose objects have all been allocated or that have been handed to an mcache.
If the mcache has no suitable free mspan for a request, the worker thread asks mcentral for one, as shown in the following figure.
Here is how the mcache gets an mspan from mcentral and returns it:
- Get: lock the mcentral; take an available mspan from the nonempty list and remove it from that list; add it to the empty list; hand the mspan to the worker thread; unlock.
- Return: lock the mcentral; remove the mspan from the empty list; add it back to the nonempty list; unlock.
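A simplified, self-contained sketch of these two operations is below, using slices in place of the runtime's mSpanList and plain sync.Mutex in place of its lock; the method bodies are illustrative, not the real runtime code:

package sketch

import "sync"

type mspan struct{}

type mcentral struct {
	lock     sync.Mutex
	nonempty []*mspan // spans that still have free objects
	empty    []*mspan // spans with no free objects, or handed out to an mcache
}

// cacheSpan hands a span to an mcache: it is taken off nonempty and
// recorded in empty while it is in use.
func (c *mcentral) cacheSpan() *mspan {
	c.lock.Lock()
	defer c.lock.Unlock()
	if len(c.nonempty) == 0 {
		// in the real runtime, mcentral would grow by asking the mheap here
		return nil
	}
	s := c.nonempty[0]
	c.nonempty = c.nonempty[1:]
	c.empty = append(c.empty, s)
	return s
}

// uncacheSpan takes a span back from an mcache: it moves to nonempty so
// its remaining free objects can be allocated again.
func (c *mcentral) uncacheSpan(s *mspan) {
	c.lock.Lock()
	defer c.lock.Unlock()
	for i, v := range c.empty {
		if v == s {
			c.empty = append(c.empty[:i], c.empty[i+1:]...)
			break
		}
	}
	c.nonempty = append(c.nonempty, s)
}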
When mcentral has no free mspan, it asks the mheap for one. When the mheap has no resources either, it requests new memory from the operating system. The mheap is mainly used to allocate memory for large objects and to manage the uncut mspans that are handed to mcentral to be cut into small objects.
We can also see that the mheap contains mcentrals for every size class, so when an mcache requests an mspan from an mcentral, only that individual mcentral's lock is needed, and requests for mspans of other sizes are not affected.
The mspans of each size class have a global list kept in the corresponding mcentral for all threads, and the collection of all mcentrals is stored in the mheap. The arena area of the mheap is the actual heap: at runtime it is treated as a set of 8KB pages, and these pages hold all objects initialized on the heap. The runtime manages all of this memory with a two-dimensional array of runtime.heapArena, with each heapArena managing 64MB of memory.
If the arena area does not have enough space, runtime.mheap.sysAlloc is called to request more memory from the operating system.
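To make the arithmetic above explicit, these constants restate the figures from the text; the constant names are illustrative, not the runtime's:

package sketch

// Figures from the text above; the names are illustrative.
const (
	pageSize      = 8 << 10                  // each arena page is 8KB
	heapArenaSize = 64 << 20                 // each runtime.heapArena manages 64MB
	pagesPerArena = heapArenaSize / pageSize // = 8192 pages per heapArena
)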
Allocation policy for memory blocks larger than 32KB
Memory allocations exceeding 32KB cannot be managed with the thread-local cache mcache and the global central cache mcentral, so for such requests the runtime allocates the required number of pages (8KB each) directly from the mheap to the program.
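Mirroring the earlier example, the sketch below makes a request larger than 32KB; assuming it escapes to the heap, its backing pages would come from this large-object path rather than from an mcache size class:

package main

//go:noinline
func largeAllocation() []byte {
	// 64KB exceeds the 32KB small-object threshold, so this backing array
	// is a large object and its pages come directly from the mheap.
	return make([]byte, 64<<10)
}

func main() {
	_ = largeAllocation()
}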
Conclusion
If we string together all the concepts involved in memory allocation management, we can outline a global view of Go memory management:
Go's memory allocation is very complicated. This article has looked at it from a high-level perspective without digging into every detail; the goal is simply a general sense of how it works.
To sum up, the strategies for Go memory allocation management are as follows:
- When the program starts, Go requests a large chunk of memory from the operating system, which is managed globally by the mheap structure.
- This large chunk of memory is cut into mspans of different size classes, and each mspan is further cut into objects of that size, which are the units actually handed to the program.
- mcache, mcentral, and mheap form the three levels of Go's memory management: the mcache is a per-P local cache of mspans, the mcentral manages the global mspans of one size class, and the mheap manages the memory Go obtains from the operating system.
- Small objects are generally allocated through mspans; large objects are allocated directly by the mheap.
Further reading
Go memory management: code escape analysis
Solution ideas for last week's concurrency problem, and an introduction to the Go language scheduler
References
[1] Memory Management and Allocation: medium.com/a-journey-w…
[2] Diagram of Go language memory allocation: juejin.cn/post/684490…
[3] Memory allocator: drap.me/golang/docs…
- END –