Netty Source Code: Memory Management (2) (4.1.44)

The previous article (Netty source code memory management (1)) laid a lot of groundwork: the definitions of the memory-allocation classes and their allocation logic. But it never showed how the jemalloc ideas are actually realized in the source. This chapter digs into PoolChunk in detail. Before analyzing the source we need to be clear about allocation levels: for Huge allocations, a PoolChunk is used directly as a plain wrapper, with no elaborate allocation logic; it is at the tiny & small & normal levels that fine-grained memory management is required.

PoolChunk essentially maintains a full binary tree that manages 16MB of memory by default (the value is configurable). Netty logically builds this full binary tree on top of the memoryMap[] array (a linked list or another structure would also work, but random access into a linked list is slow, whereas an array index lets us jump straight to the first node of any layer). depthMap[] records the depth of each node and never changes. A long handle value is split into high and low halves: the high 32 bits record small-memory (subpage) allocation information, the low 32 bits record the node index. When normal-level memory is allocated, only the low 32 bits matter, and the value is the node number (numbering starts at 1). When tiny & small level memory is allocated, the high bits identify which subpage under a given page. Another interesting point: every node manages a power-of-two amount of memory, which is why Netty normalizes the user's requested size to a power-of-two specification value (never exceeding the PoolChunk size, of course); if no node can satisfy the current request, a new PoolChunk is created.

The remaining question is how memoryMap[] is updated. In what follows, index means the node index and value means memoryMap[index]. Suppose the user requests 4MB for the first time. The memory size determines the layer: maxOrder - (log2(normCapacity) - pageShifts) gives layer (also called depth) 2 for 4MB. The allocation then follows the logic of allocateNode(int): first check the state of node 1: memoryMap[1] != unusable(12), so node 1 is usable, but its depth does not match, so descend; eventually child node 4 is reached, which is != 12 and whose layer matches, so node 4 serves this allocation. memoryMap[4] is set to 12 to mark node 4 as used, the handle records the node, and the memoryMap values of the parents are updated in a loop: a parent's value is the minimum of its children's values, so memoryMap[2] = 2 and memoryMap[1] = 1. That completes the first 4MB request. On the next depth-2 request, memoryMap[4] == 12 is found at layer 2, so the sibling node 5 is chosen instead. More on this below.
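To make the tree walk concrete, here is a minimal, self-contained model of the memoryMap tree and the allocateNode logic described above. This is a sketch, not Netty's code: the names mirror PoolChunk's fields for readability, and maxOrder = 11 / pageShifts = 13 are Netty's defaults.

public class MemoryMapDemo {
    static final int maxOrder = 11, pageShifts = 13;               // Netty defaults: 2048 pages of 8KB
    static final byte unusable = (byte) (maxOrder + 1);            // 12
    static final byte[] memoryMap = new byte[1 << (maxOrder + 1)]; // nodes 1..4095

    public static void main(String[] args) {
        for (int d = 0; d <= maxOrder; d++)                        // initial value of a node == its depth
            for (int id = 1 << d; id < 1 << (d + 1); id++)
                memoryMap[id] = (byte) d;

        int d = maxOrder - (log2(4 * 1024 * 1024) - pageShifts);   // 11 - (22 - 13) = 2
        System.out.println(allocateNode(d));                       // 4: first 4MB request takes node 4
        System.out.println(allocateNode(d));                       // 5: the sibling serves the next one
        System.out.println(memoryMap[2] + " " + memoryMap[1]);     // "12 1": subtree of node 2 is full
    }

    static int allocateNode(int d) {
        int id = 1;
        int initial = -(1 << d);
        byte val = memoryMap[id];
        if (val > d) return -1;                                    // the whole chunk cannot serve depth d
        while (val < d || (id & initial) == 0) {                   // descend until a free node at depth d
            id <<= 1;
            val = memoryMap[id];
            if (val > d) {                                         // left child exhausted: take the sibling
                id ^= 1;
                val = memoryMap[id];
            }
        }
        memoryMap[id] = unusable;                                  // mark the chosen node as used
        for (int p = id; p > 1; p >>>= 1)                          // parent = min(children), up to the root
            memoryMap[p >>> 1] = (byte) Math.min(memoryMap[p], memoryMap[p ^ 1]);
        return id;
    }

    static int log2(int val) {
        return 31 - Integer.numberOfLeadingZeros(val);
    }
}

Running it prints 4, 5 and "12 1", matching the walkthrough above: the first 4MB request takes node 4, the second takes its sibling 5, after which the subtree of node 2 is exhausted.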

PoolChunk memory allocation

PoolChunk embodies the jemalloc 3.x algorithm; the methods whose names begin with allocate implement the memory allocation algorithm. The entry point is allocate(PooledByteBuf, int, int).
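For orientation, this sketch shows roughly how user code reaches that entry point. PooledByteBufAllocator.DEFAULT, directBuffer(int) and release() are real Netty API; the comments describe the path this article traces, assuming default pool settings.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class AllocateDemo {
    public static void main(String[] args) {
        // A pooled 4MB direct buffer: the requested size is normalized to a power
        // of two and, being >= pageSize, is served via PoolChunk#allocate -> allocateRun
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(4 * 1024 * 1024);
        try {
            buf.writeLong(42L); // use the buffer
        } finally {
            buf.release();      // hands the memory back to the pool (PoolChunk#free)
        }
    }
}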

allocate(PooledByteBuf, int, int)

This method does a few things:

  • Select an allocation strategy based on the requested size: if normCapacity >= pageSize, allocate with allocateRun(); otherwise allocate with allocateSubpage(). Both return a handle value.
  • Initialize the ByteBuf.

The source code is as follows:

// io.netty.buffer.PoolChunk#allocate
/**
 * @param buf          ByteBuf object, the host for the physical memory
 * @param reqCapacity  memory size requested by the user
 * @param normCapacity normalized (power-of-two) specification value
 * @return true if the allocation succeeded
 */
boolean allocate(PooledByteBuf<T> buf, int reqCapacity, int normCapacity) {
    
    // Lower 32 bits: node index
    // High 32 bits: bitmap index
    final long handle;
    // bit operation to determine the size
    if ((normCapacity & subpageOverflowMask) != 0) { 
        // #1 request >=8KB
        handle =  allocateRun(normCapacity);
    } else {
        // #2 request <8KB
        handle = allocateSubpage(normCapacity);
    }

    if (handle < 0) {
        return false;
    }
    
    // #3 Reuse a cached ByteBuffer if this PoolChunk has one
    ByteBuffer nioBuffer = cachedNioBuffers != null ? cachedNioBuffers.pollLast() : null;
    
    // #4 Initialize ByteBuf memory information
    initBuf(buf, nioBuffer, handle, reqCapacity);
    return true;
}

allocateRun(int)

The allocateRun(int) method also does a few things:

  • Calculate the corresponding depth d from the normalized size
  • Call allocateNode(d) to perform the allocation
  • Update the remaining free byte count

// io.netty.buffer.PoolChunk#allocateRun
/** Allocates a memory block of normCapacity bytes */
private long allocateRun(int normCapacity) {
    // #1 Calculate the tree depth d for the current normalized size
    // log2(normCapacity): position of the highest set bit of the value
    // pageShifts: 13 by default (pageSize = 8192 = 2^13); since the value being
    // allocated is >= 8192, its base offset is subtracted first
    // maxOrder: 11 by default; maxOrder - offset = the exact (fitting) tree depth
    // As normCapacity grows from 8192 upward, d moves up the tree toward the root
    int d = maxOrder - (log2(normCapacity) - pageShifts);
    
    // #2 △ Find free nodes in depth D and allocate memory
    int id = allocateNode(d);
    if (id < 0) {
        return id;
    }
    
    // #3 Update the remaining free values
    freeBytes -= runLength(id);
    
    // #4 Return the node index as the handle
    return id;
}

private static final int INTEGER_SIZE_MINUS_ONE = Integer.SIZE - 1; // 31

/** Returns log base 2 of val */
private static int log2(int val) {
    // Computes the (0-based, lsb = 0) position of the highest set bit, i.e. log2.
    // Integer.numberOfLeadingZeros(int): number of zero bits (including the sign bit)
    // preceding the highest-order set bit of the value.
    // 31 - numberOfLeadingZeros = position of the highest set bit:
    // e.g. for 0000...1000, numberOfLeadingZeros = 28, so log2 = 31 - 28 = 3
    return INTEGER_SIZE_MINUS_ONE - Integer.numberOfLeadingZeros(val);
}

allocateNode(int)

Now we reach the heart of memory allocation: node-granularity allocation. The overall idea is not hard and was illustrated above, but the amount of bit manipulation can be dizzying, so first get familiar with the bit formulas involved:

  • id ^= 1: subtracts 1 when id is odd, adds 1 when id is even; it yields the sibling node. For example, if id = 2, its sibling is id ^ 1 = 3.
  • id <<= 1: equivalent to id = id * 2, jumping to the left child of node id. For example, if id = 2, its left child is 4.
  • 1 << d: equals 2^d. For any node at depth d, the node index lies in the range [2^d, 2^(d+1) - 1]. For example, at depth 1 the index lies in [2, 3]; at depth 2, in [4, 7].
  • initial = -(1 << d): the two's-complement negation of 2^d, used as a mask for testing the target depth d. While the current depth is smaller than d, id & initial == 0; once depth d is reached, id & initial != 0 (in fact it equals 1 << d). For example, with target depth 2, initial = -4: 1 & -4 = 0 and 2 & -4 = 0, so the target depth has not yet been reached; 4 & -4 = 4, so the target depth is reached. Free nodes can then be sought from left to right and allocated. The sketch after this list checks these identities.
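A few lines of plain Java to verify the identities (illustrative only, not Netty code):

int id = 2;
System.out.println(id ^ 1);      // 3 -> sibling of the even node 2
System.out.println(3 ^ 1);       // 2 -> sibling of the odd node 3
System.out.println(id << 1);     // 4 -> left child of node 2

int d = 2;
int initial = -(1 << d);         // -4, i.e. ...11111100 in two's complement
System.out.println(1 & initial); // 0 -> node 1 is above the target depth
System.out.println(2 & initial); // 0 -> node 2 is above the target depth
System.out.println(4 & initial); // 4 -> equals 1 << d: target depth reached
System.out.println(7 & initial); // 4 -> every id in [4, 7] yields 1 << d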

allocateNode(int d) aims to find a free node at depth d; if one exists, the corresponding memoryMap values are updated by a fixed rule and the node index is returned. If even the root cannot serve the request, -1 is returned. Otherwise the walk proceeds: if the current node is allocatable but its depth does not match, descend to its left child and repeat. If the depth matches but the node itself is not allocatable (val > d), move to the sibling and continue. If the depth matches and the node is allocatable, that node is the target of the request: it is marked unusable, and the memoryMap is updated upward in a loop with memoryMap[parent] = min(child1, child2). Another interesting point is the values stored in memoryMap, which are the heart of the tree; their initialization and subsequent updates are remarkably clever. The relevant source code is analyzed below.

// io.netty.buffer.PoolChunk#allocateNode
/**
 * Finds a free node at depth "d"
 * @param d target depth
 * @return index of the free node; -1 means this PoolChunk cannot satisfy the allocation
 */
private int allocateNode(int d) {
    int id = 1;
    // Check whether the node matches the depth of "d"
    int initial = - (1 << d);
    
    // #1 Check whether the root node is available
    byte val = value(id);
    if (val > d) {
        // Root node unavailable, return "-1"
        return -1;
    }
    
    // #2 Traverse the tree from top to bottom, left to right, until a suitable
    //    free node is found, then return its index.
    // id & initial == 0: node id is still above depth d; != 0: depth d reached
    // val <  d: the node is free and sits above the target depth, keep descending
    // val == d: the node is free at exactly depth d
    // val >  d: the node has no free memory left for this request
    // The loop exits once val == d and (id & initial) != 0, i.e. a free node at depth d is matched
    while (val < d || (id & initial) == 0) { 
        
        // id = id * 2;
        id <<= 1;
        // Get the memoryMap[id] value
        val = value(id);

        // If val > d, node id cannot satisfy the request. The earlier root check
        // guaranteed this subtree can serve depth d, and a parent's val is the
        // minimum of its two children, so when the left child has val > d,
        // its sibling must be able to serve the request
        if (val > d) {
            // Get the sibling node and continue the search
            id ^= 1;
            val = value(id);
        }
    }
    byte value = value(id);
    assert value == d && (id & initial) == 1 << d : String.format("val = %d, id & initial = %d, d = %d",
                                                                  value, id & initial, d);
	// #3 Mark the selected node as "unusable"
    setValue(id, unusable); 
    
    // #4 Loop to update the value of the parent node "memoryMap", which takes the value of the smallest of the two children
    updateParentsAlloc(id);
    
    // #5 Return
    return id;
}

private byte value(int id) {
    return memoryMap[id];
}

/** Updates the corresponding memoryMap values from node id up to the root */
private void updateParentsAlloc(int id) {
    while (id > 1) {
        int parentId = id >>> 1;
        byte val1 = value(id);
        byte val2 = value(id ^ 1);
        byte val = val1 < val2 ? val1 : val2;
        setValue(parentId, val);
        id = parentId;
    }
}

allocateNode(int) is the core method of Normal-level allocation. Its essence is maintaining the memoryMap[] array while traversing the tree to find free memory for the current request.

allocateSubpage(int)

This method serves tiny & small level memory requests. The idea behind it was described in the previous section; in one sentence: split an idle page into subpages and manage them with a PoolSubpage object.

// io.netty.buffer.PoolChunk#allocateSubpage
/** Allocates tiny & small level memory */
private long allocateSubpage(int normCapacity) {
    
    // #1 Find the head node of the matching PoolSubpage list in the PoolArena.
    // If a PoolSubpage object for this size already exists it is reused; otherwise a new
    // one will be created and placed into the PoolArena's subpage pool arrays.
    // Every slot of those arrays is initialized with a head node, so the return is never null
    PoolSubpage<T> head = arena.findSubpagePoolHead(normCapacity);
    
    // Subpages are always carved out of a single page, so go straight to the leaf layer
    int d = maxOrder;
    // Use arrays to reduce the granularity of locks and improve concurrency
    synchronized (head) {
        // Get a free leaf node at depth d
        int id = allocateNode(d);
        if (id < 0) {
            return id;
        }
		
        // Page obtained; update the bookkeeping
        final PoolSubpage<T>[] subpages = this.subpages;
        final int pageSize = this.pageSize;

        freeBytes -= pageSize;
		
        // #3 Compute the subpage index (offset relative to maxSubpageAllocs)
        // subpageIdx = id - maxSubpageAllocs (implemented as id ^ maxSubpageAllocs)
        int subpageIdx = subpageIdx(id);
        // Locate the PoolSubpage within PoolChunk's subpages[] array
        PoolSubpage<T> subpage = subpages[subpageIdx];
        
        if (subpage == null) {
            // If it does not exist, create it
            subpage = new PoolSubpage<T>(head,          // head node from "PoolArena"
                                         this,          // current PoolChunk
                                         id,            // node id
                                         runOffset(id), // byte offset of node id
                                         pageSize,      // page size
                                         normCapacity); // size of each subpage element
            // Record the newly created PoolSubpage
            subpages[subpageIdx] = subpage;
        } else {
            subpage.init(head, normCapacity);
        }
        
        // #4 Use PoolSubpage to allocate memory
        return subpage.allocate();
    }
}

/** Removes the highest set bit to get the offset */
private int subpageIdx(int memoryMapIdx) {
    return memoryMapIdx ^ maxSubpageAllocs; // remove highest set bit, to get offset
}

/** Returns the byte offset of node "id" within the chunk */
private int runOffset(int id) {
    // << has higher precedence than ^
    // depth(id): depth of node id
    // 1 << depth(id) => 2^depth(id), the index of the leftmost node in id's layer
    // id ^ index    => id - index, i.e. the offset of node id from the leftmost node of its layer
    int shift = id ^ 1 << depth(id);

    // runLength(id): memory size in bytes managed by node id;
    // e.g. node 4 manages 4194304 bytes, i.e. 4MB
    // shift * runLength(id): the byte offset within the chunk
    return shift * runLength(id);
}

/**
 * Returns the size in bytes of the memory managed by node "id",
 * i.e. chunkSize / 2^depth(id)
 */
private int runLength(int id) {
    // log2ChunkSize = log2(chunkSize)
    return 1 << (log2ChunkSize - depth(id));
}

allocateSubpage(int): the main purpose of this method is to create (or reuse) a PoolSubpage object and delegate tiny & small level allocation to it. PoolChunk holds references to its PoolSubpage objects in the subpages[] array, whose length equals the number of leaves in the binary tree, one slot per leaf. The PoolSubpage object is the embodiment of a page and manages pageSize bytes of memory, so PoolChunk itself only needs to manage whole pages: a clean division of labor. A quick numeric check of the leaf-to-slot mapping follows.
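A sketch of that mapping under the default maxOrder = 11, where the 2048 leaves have ids 2048..4095 (illustrative arithmetic, not Netty code):

int maxSubpageAllocs = 1 << 11;              // 2048 leaves by default
System.out.println(2048 ^ maxSubpageAllocs); // 0    -> first leaf maps to subpages[0]
System.out.println(2049 ^ maxSubpageAllocs); // 1    -> second leaf maps to subpages[1]
System.out.println(4095 ^ maxSubpageAllocs); // 2047 -> last leaf maps to subpages[2047]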

PoolSubpage Memory allocation

PoolSubpage's internal fields were explained in the previous article. Its memory-management scheme divides a pageSize memory block into several equal sub-blocks; the number of sub-blocks is determined by the requested size. For example, for a 1KB request, a free page is found and divided into 8 equal parts (8KB/1KB = 8). A bitmap records the usage state of each sub-block: 1 means used, 0 means unused. A page can be divided into at most 512 parts (8KB / 16B), and a long[] array stores the bitmap information. In the 64-bit handle value, the high 32 bits store the bitmap information and the low 32 bits store the node index. The core question, then, is how the long[] array records usage.
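Here is that arithmetic worked through with hypothetical numbers (a sketch, not Netty code): an 8KB page split into 64B sub-blocks needs 128 bits of bitmap, i.e. two longs.

int pageSize = 8192, elemSize = 64;
int maxNumElems = pageSize / elemSize;          // 128 sub-blocks
int bitmapLength = maxNumElems >>> 6;           // 128 / 64 = 2 longs needed
long[] bitmap = new long[bitmapLength];

int bitmapIdx = 70;                             // mark sub-block 70 as used
int q = bitmapIdx >>> 6;                        // 70 / 64 = 1 -> second long
int r = bitmapIdx & 63;                         // 70 % 64 = 6 -> bit 6 of that long
bitmap[q] |= 1L << r;
System.out.println((bitmap[1] >>> 6 & 1) == 1); // true: sub-block 70 is marked used

With that in mind, there are no secrets under the source code: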

// io.netty.buffer.PoolSubpage#allocate
/** * "PoolSubpage" memory allocation entry *@returnReturns the bitmap index of this memory allocation */
long allocate(a) {
    if (elemSize == 0) {
        return toHandle(0);
    }
	
    // No sub-block available, return -1
    if (numAvail == 0 || !doNotDestroy) {
        return -1;
    }
	
    // #1 Get the next available sub-block index (absolute index over the whole bitmap)
    // The first index is 0
    final int bitmapIdx = getNextAvail();

    // #2 Divide by 64 to determine which bitmap[] element to use
    // The first bitmap element index is 0
    int q = bitmapIdx >>> 6;

    // #3 & 63: a long holds 64 bits
    // Mod by 64 to get the bit offset within that long
    // 63: 0011 1111
    int r = bitmapIdx & 63;
    assert (bitmap[q] >>> r & 1) == 0;
    
    // Set the r-th bit to 1
    // << has higher precedence than |=
    bitmap[q] |= 1L << r;
	
    // Update the available quantity
    if (-- numAvail == 0) {
        // If the number of available pages is 0, there is no more space to allocate in the child page
        // Needs to be removed from the bidirectional list
        removeFromPool();
    }
	
    // Convert bitmapIdx to long. The high 32 bits of long store the small memory location index
    return toHandle(bitmapIdx);
}

/** Returns the next available sub-block index */
private int getNextAvail() {
    int nextAvail = this.nextAvail;
    // nextAvail >= 0: a freed sub-block was cached and can be used directly
    if (nextAvail >= 0) {
        this.nextAvail = -1;
        return nextAvail;
    }
    // nextAvail < 0: search the bitmap for the next available sub-block
    return findNextAvail();
}

/**
 * Retrieves the next available sub-block,
 * essentially searching for a 0 bit in the bitmap[] array
 */
private int findNextAvail() {
    final long[] bitmap = this.bitmap;
    final int bitmapLength = this.bitmapLength;
    // Scan each long in turn
    for (int i = 0; i < bitmapLength; i ++) {
        long bits = bitmap[i];

        // #1 Check whether this long has any "0" bit
        // When fully used, bits == 0xFFFFFFFFFFFFFFFF, so ~bits == 0
        // When a free bit exists, ~bits != 0
        if (~bits != 0) {
            // #2 Locate the free bit within this long
            return findNextAvail0(i, bits);
        }
    }
    return -1;
}

/** Searches for the next available bit within "bits" */
private int findNextAvail0(int i, long bits) {
    final int maxNumElems = this.maxNumElems;

    // i << 6 => i * 2^6 = i * 64
    // Think of long[] flattened into one bit array; baseVal is the base index
    final int baseVal = i << 6;

    for (int j = 0; j < 64; j ++) {
        // bits & 1: checks whether the lowest bit is 0
        if ((bits & 1) == 0) {
            // Free sub-block found; assemble the index
            // baseVal | j => baseVal + j, base plus offset
            int val = baseVal | j;
            // Do not run past the number of elements
            if (val < maxNumElems) {
                return val;
            } else {
                break;
            }
        }
        // Unsigned shift right by 1 bit
        bits >>>= 1;
    }
    return -1;
}

// io.netty.buffer.PoolSubpage#toHandle
/**
 * Writes the bitmap index into the high 32 bits and memoryMapIdx into the low 32 bits.
 *
 * 0x4000000000000000L: only bit 62 is set, all other bits are 0.
 * Why OR in 0x4000000000000000L? For the very first small-memory allocation the bitmap
 * index is 0; without the marker, the high 32 bits of the handle would be 0 and the low
 * 32 bits would be 2048 (the index of the first node at level 11), so the handle could
 * not be recognized as a subpage allocation, which would break the subsequent logic.
 * https://blog.csdn.net/wangwei19871103/article/details/104356566
 * @param bitmapIdx bitmap index
 * @return handle value
 */
private long toHandle(int bitmapIdx) {
    return 0x4000000000000000L | (long) bitmapIdx << 32 | memoryMapIdx;
}
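To make the handle layout concrete, a small round-trip sketch with hypothetical values (plain arithmetic, not Netty code):

int memoryMapIdx = 2048;                // first leaf node
int bitmapIdx = 0;                      // first sub-block of that page
long handle = 0x4000000000000000L | (long) bitmapIdx << 32 | memoryMapIdx;

int node = (int) handle;                // low 32 bits
int bits = (int) (handle >>> 32);       // high 32 bits, marker bit still set
System.out.println(node);               // 2048
System.out.println(bits != 0);          // true -> recognized as a subpage handle
System.out.println(bits & 0x3FFFFFFF);  // 0 -> the real bitmap index after masking

The masking with 0x3FFFFFFF is exactly what PoolChunk#free does when it decodes a subpage handle, as shown later.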

Bitmap filling schematic: PoolSubpage stores sub-block usage in up to eight long values; in the handle, the high 32 bits carry the bitmap index and the low 32 bits the node index. The heavy use of bit operations improves performance, and reading this source is also good practice in bit-level programming. Next, how does PoolSubpage cooperate with PoolArena? PoolArena also holds PoolSubpage[] array objects; how do subpage objects get added to them? The answer: a PoolSubpage adds itself to the matching PoolArena#PoolSubpage[] list when it is initialized. The source code is as follows:

// io.netty.buffer.PoolSubpage
/** * "PoolSubpage" constructor *@paramHead gets PoolSubpage * from PoolArena@paramChunk The PoolChunk object * of the PoolSubpage property@paramMemoryMapIdx Node value * of Page to which memoryMapIdx belongs@paramRunOffset is useful for byte[], which represents the offset *@paramPageSize specifies the pageSize. The default value is 8KB *@paramElemSize Specifies the number of elements */
PoolSubpage(PoolSubpage<T> head, 
            PoolChunk<T> chunk, 
            int memoryMapIdx, int runOffset, int pageSize, int elemSize) {
    this.chunk = chunk;
    this.memoryMapIdx = memoryMapIdx;
    this.runOffset = runOffset;
    this.pageSize = pageSize;
    bitmap = new long[pageSize >>> 10]; // pageSize / 16 / 64
    
    // Add to the "head" list
    init(head, elemSize);
}

// initialize PoolSubpage
void init(PoolSubpage<T> head, int elemSize) {
    doNotDestroy = true;
    this.elemSize = elemSize;
    if (elemSize != 0) {
        // Initialize various parameters
        maxNumElems = numAvail = pageSize / elemSize;
        nextAvail = 0;
        // Determine the number of bitmaps required according to the number of elements, i.e. confirm the value of "bitmapLength"
        bitmapLength = maxNumElems >>> 6;
        if ((maxNumElems & 63) != 0) {
            bitmapLength ++;
        }

        for (int i = 0; i < bitmapLength; i ++) {
            bitmap[i] = 0;
        }
    }
    // Add to the doubly linked list for subsequent allocation
    addToPool(head);
}

private void addToPool(PoolSubpage<T> head) {
    assert prev == null && next == null;
    prev = head;
    next = head.next;
    next.prev = this;
    head.next = this;
}

The PoolSubpage allocation source is clear once disassembled function by function; none of it is difficult.

How does PoolChunk reclaim memory

We’ve covered physical memory allocation, but we haven’t covered how PoolChunk reclaims memory.

// io.netty.buffer.PoolChunk#free
/**
 * PoolChunk releases the memory block indicated by "handle".
 * Essentially this restores the "memoryMap" value of the corresponding node.
 * If the "nioBuffer" parameter is not null, it is cached in a Deque to reduce GC pressure.
 */
void free(long handle, ByteBuffer nioBuffer) {
    
    // #1 Get the lower 32-bit value of handle "handle", which represents the node ID
    int memoryMapIdx = memoryMapIdx(handle);
    
    // #2 Get the 32-bit high value of handle "handle", which represents the bitmap index value
    int bitmapIdx = bitmapIdx(handle);
	
    // #3 If bitmapIdx is not 0, a subpage is being freed
    if (bitmapIdx != 0) { // free a subpage
        PoolSubpage<T> subpage = subpages[subpageIdx(memoryMapIdx)];
        assert subpage != null && subpage.doNotDestroy;

        // #4 Don't forget that there is a reference to PoolSubpage in the "PoolArena" object
        PoolSubpage<T> head = arena.findSubpagePoolHead(subpage.elemSize);
        synchronized (head) {
            // Leave it to PoolSubpage professionals
            // 0x3FFFFFFF: 0011 1111 1111 1111 1111 1111 1111 1111
            // bitmapIdx & 0x3FFFFFFF: keep the lower 30 bits
            // Why erase the top 2 bits? Because the handle was generated with | 0x4000000000000000L
            if (subpage.free(head, bitmapIdx & 0x3FFFFFFF)) {
                return;
            }
        }
    }

    // #5 Update the free memory bookkeeping
    freeBytes += runLength(memoryMapIdx);
    
    // update memoryMap information
    setValue(memoryMapIdx, depth(memoryMapIdx));
    
    // #6 loop to update the parent node
    // Note: When updating the parent node value, it is possible to encounter two sibling nodes with initial values,
    // In this case, the value of the parent node is also initialized instead of the minimum of the two
    updateParentsFree(memoryMapIdx);

    // #7 Cache "ByteBuffer" objects
    if (nioBuffer != null && cachedNioBuffers != null
            && cachedNioBuffers.size() < PooledByteBufAllocator.DEFAULT_MAX_CACHED_BYTEBUFFERS_PER_CHUNK) {
        cachedNioBuffers.offer(nioBuffer);
    }
}

// Unsigned right shift by 32 bits to get the bitmap index
private static int bitmapIdx(long handle) {
    return (int) (handle >>> Integer.SIZE);
}
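The nuance in #6 deserves a closer look: plain min(child1, child2) would be wrong on the free path, because once both children are back at their initial value (their depth), the parent must return to its own initial value (depth - 1) rather than the children's minimum. A simplified model of that update, assuming value/setValue read and write memoryMap[] as above (a sketch, not Netty's exact code):

// Simplified model of the free-path parent update
private void updateParentsFree(int id) {
    int childDepth = depth(id);          // depth of the node just freed
    while (id > 1) {
        int parentId = id >>> 1;
        byte val1 = value(id);
        byte val2 = value(id ^ 1);
        if (val1 == childDepth && val2 == childDepth) {
            // Both children fully free: reset the parent to its initial value
            setValue(parentId, (byte) (childDepth - 1));
        } else {
            // Otherwise the allocation-path rule applies: take the minimum
            setValue(parentId, val1 < val2 ? val1 : val2);
        }
        id = parentId;
        childDepth--;                    // children of the next parent sit one level higher
    }
}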

As the source shows, reclaiming a plain run of memory is very simple for PoolChunk. Reclaiming a subpage is a bit more involved, because it must also stay synchronized with the PoolSubpage lists held by PoolArena.

How does PoolSubpage reclaim memory

// io.netty.buffer.PoolSubpage#free
/** * "PoolSubpage" releases the memory block ** the target is to modify the corresponding "bitmap" value **@paramHead comes from the head node * of the PooArena#PoolSubpage[] array@paramBitmapIdx Bitmap index */
boolean free(PoolSubpage<T> head, int bitmapIdx) {
    if (elemSize == 0) {
        return true;
    }
    
    // Locate which bitmap[] element holds the bit
    int q = bitmapIdx >>> 6;
    // Which of the 64 bits within that long
    int r = bitmapIdx & 63;
    assert (bitmap[q] >>> r & 1) != 0;
    
    // Change the corresponding bit to 0
    bitmap[q] ^= 1L << r;

    // Set the available bit information for the next allocation
    setNextAvail(bitmapIdx);
	
    // When numAvail was 0 this subpage had no free blocks and had been removed
    // from the "PoolArena" subpage list; now that a block is free again,
    // add it back so it can serve future allocations
    if (numAvail ++ == 0) {
        // Re-add to the list
        addToPool(head);
        return true;
    }
	
    if (numAvail != maxNumElems) {
        // Still partially in use after this free; keep it in the list
        return true;
    } else {
        // Every sub-block is free now; nothing in this subpage is in use
        if (prev == next) {
            // This is the only PoolSubpage in the list, so do not remove it
            return true;
        }

        // Remove it from the linked list and mark it for destruction
        doNotDestroy = false;
        removeFromPool();
        return false;
    }
}

PoolSubpage also has to look after the PoolSubpage[] lists in PoolArena, hence a little more code, but the logic is clear enough from the source that no forced summary is needed.

How is the entire PoolChunk reclaimed

The core code lives in PoolArena: the memory object is released differently depending on whether a Cleaner is present.

// io.netty.buffer.PoolArena.DirectArena#destroyChunk
@Override
protected void destroyChunk(PoolChunk<ByteBuffer> chunk) {
    if (PlatformDependent.useDirectBufferNoCleaner()) {
        PlatformDependent.freeDirectNoCleaner(chunk.memory);
    } else {
        PlatformDependent.freeDirectBuffer(chunk.memory);
    }
}

Summary

Netty’s memory reclamation is huge, and two articles from the ByteBuf architecture to the source-level memory allocation implementation don’t seem to have covered it all. Here is just the most core of the code twisted out for everyone to taste, and finally hope to see these articles debugging you go through. I know very well that my knowledge and ability are limited, and I lack expression ability in some parts of the article. Some simple descriptions are too complicated, but I just mention the parts that need to be explained clearly, so please readers correct me.
