Memory allocator

A program's data and variables are allocated in its virtual memory, which contains two important areas: the heap and the stack. The parameters, return values, and local variables of function calls are mostly allocated on the stack, which is managed by the compiler. Different languages manage heap memory differently: in C, the programmer must explicitly allocate and free it, while in Java and Go it is largely managed automatically. Objects on the heap are allocated by the memory allocator and reclaimed by the garbage collector.

Design principles

Memory management generally involves three components: the mutator (the user program), the allocator, and the collector (the garbage collector). When the user program requests memory, it does so through the memory allocator, which initializes the corresponding memory region from the heap.

Allocation methods

Memory allocators generally use one of two allocation methods:

  • Linear allocator (Bump Allocator)
  • Free-list allocator (Free-List Allocator)

Linear allocator

This is an efficient memory allocation method, but it is relatively limited: the allocator only needs to maintain a pointer to a specific location in memory. When memory is requested, it checks the remaining free space, returns the allocated region, and advances the pointer.

Although this approach is fast to execute and simple to implement, memory cannot be reused once it is freed.

For that reason it is usually paired with garbage collection algorithms that can relocate objects, such as mark-compact, copying GC, and generational GC. These defragment live objects by copying them, periodically merging free memory and improving allocator performance.

Free-list allocator

This approach reuses memory that has been freed by maintaining a linked-list-like data structure internally. When memory is requested, the allocator iterates over the free list, finds a sufficiently large block, returns it to the caller, and updates the list.

Common strategies for free-list allocators are:

  • First-fit: traverse from the head of the list and select the first block larger than the requested size
  • Next-fit (circular first-fit): traverse from where the last traversal ended and select the first block larger than the requested size
  • Best-fit: traverse from the head of the list and select the most suitable block
  • Segregated-fit: split memory into multiple linked lists, each holding blocks of the same size; when memory is requested, find the list matching the request, then pick a suitable block from it

Object size

In Go, the memory allocator chooses different processing logic based on the size of the allocation; the runtime divides objects into tiny objects, small objects, and large objects according to their size:

Category       Size
Tiny objects   (0, 16B)
Small objects  [16B, 32KB]
Large objects  (32KB, +∞)

Since most objects are smaller than 32KB, and the requested size affects the process and overhead of allocation in the Go runtime, handling large and small objects separately helps improve the memory allocator's performance.

Multi-level caching

The memory allocator not only treats objects of different sizes differently but also manages memory at different levels. Both TCMalloc and the Go runtime allocator introduce three components to manage memory hierarchically: the thread cache, the central cache, and the page heap.

The thread cache belongs to each individual thread and can satisfy the vast majority of that thread's memory allocation requests. Because no other threads are involved, it needs no mutex protection, which avoids the performance cost of lock contention. When the thread cache cannot satisfy a request, the runtime falls back to the central cache to allocate small objects. For objects larger than 32KB, the memory allocator goes to the page heap and allocates the large memory directly.

This multi-level memory allocation design is similar to the multi-level caches in computer systems: because most objects are small, the thread cache and central cache can serve most requests, and more memory is obtained from the next level up when resources run out.

Virtual Memory layout

Since Go 1.11, the runtime has used sparse heap memory instead of a contiguous heap, addressing the limitations of contiguous memory and the problems it could cause in special scenarios.

Address space

Because all memory is ultimately requested from the operating system, the Go runtime builds an abstraction layer over the operating system's memory management, dividing the runtime-managed address space into the following four states:

State     Description
None      The memory has not been reserved or mapped; this is the default state of the address space
Reserved  The runtime holds the address space, but accessing the memory causes a fault
Prepared  The memory is reserved, generally without corresponding physical memory backing it; accessing it is undefined behavior, but it can be transitioned to Ready quickly
Ready     The memory can be safely accessed
