
In this chapter we will learn about ByteBuf, the class Netty uses to encapsulate byte data.

Next we will focus on how to use ByteBuf and its details.

1. Use ByteBuf

1.1 Create ByteBuf

It can usually be created as follows:

public static void main(String[] args) {
    // Create a ByteBuf with a capacity of 10
    ByteBuf buf = ByteBufAllocator.DEFAULT.buffer(10);
    System.out.println(buf);
}

View the results:

PooledUnsafeDirectByteBuf(ridx: 0, widx: 0, cap: 10)

From the result we can see that a pooled direct-memory buffer was initialized, which means that when we create a buffer with the default allocator, direct memory is returned.

ridx stands for read index, i.e. the read position, which is 0; widx stands for write index, i.e. the write position, which is 0; cap stands for capacity, which is 10.
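
These three values can also be read directly from the buffer. A minimal sketch (readerIndex(), writerIndex() and capacity() are the corresponding accessor methods):

    ByteBuf buf = ByteBufAllocator.DEFAULT.buffer(10);
    System.out.println(buf.readerIndex()); // 0  (ridx)
    System.out.println(buf.writerIndex()); // 0  (widx)
    System.out.println(buf.capacity());    // 10 (cap)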

1.1.1 Direct memory and heap memory

  • Create a heap-memory buffer as follows:

    // Create a heap-memory ByteBuf
    ByteBuf heapBuffer = ByteBufAllocator.DEFAULT.heapBuffer(10);

The heap memory is reclaimed by the JVM, so we don’t need much intervention.

  • Create a direct-memory buffer as follows:

    // Create a direct-memory ByteBuf
    ByteBuf directBuffer = ByteBufAllocator.DEFAULT.directBuffer(10);
  • Features of direct memory:
    • Direct memory is expensive to create and destroy, but offers high read/write performance (one less memory copy), which makes it a good fit for pooling
    • Direct memory puts less pressure on the GC because it is not managed by JVM garbage collection, but that also means we must release it actively and in a timely manner
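
Since direct memory is not reclaimed by the garbage collector, a common pattern is to release the buffer in a finally block once we are done with it. A minimal sketch (ReferenceCountUtil.release is a convenience helper from io.netty.util that calls release() on reference-counted objects):

    ByteBuf directBuffer = ByteBufAllocator.DEFAULT.directBuffer(10);
    try {
        directBuffer.writeInt(1);
        // ... use the buffer
    } finally {
        // Hand the memory back (to the pool, or free it) when we are done
        ReferenceCountUtil.release(directBuffer);
    }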

Direct memory in Netty will be covered in more detail in a later article; it is only introduced briefly here.

1.1.2 Pooled and unpooled

Pools are used everywhere, for example database connection pools, thread pools, and so on.

The greatest benefit of pooling is that resources can be reused.

In Netty, pooling was introduced so that we could reuse ByteBuf.

So what advantages does pooling give us in using ByteBuf?

  • We no longer have to create a new ByteBuf instance for every use; creating direct memory is expensive, and even heap memory adds GC pressure.
  • ByteBuf instances in the pool can be reused, and a memory allocation algorithm similar to Jemalloc (jemalloc.net/) is used to improve allocation efficiency.
  • With high concurrency, pooling saves memory and reduces the possibility of memory overflow
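
Which allocator is used decides whether pooling happens. As a sketch, the two built-in allocators can be compared explicitly (PooledByteBufAllocator and UnpooledByteBufAllocator; the default can also be switched globally with the io.netty.allocator.type system property set to pooled or unpooled):

    ByteBuf pooled = PooledByteBufAllocator.DEFAULT.buffer(10);
    ByteBuf unpooled = UnpooledByteBufAllocator.DEFAULT.buffer(10);
    System.out.println(pooled);   // e.g. PooledUnsafeDirectByteBuf(ridx: 0, widx: 0, cap: 10)
    System.out.println(unpooled); // an unpooled implementation, not returned to a pool on release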

1.1.3 Composition of ByteBuf

Its declaration in the source code is as follows:

public abstract class ByteBuf implements ReferenceCounted, Comparable<ByteBuf>

As shown above, ByteBuf is an abstract class that implements the ReferenceCounted and Comparable interfaces.

ReferenceCounted provides reference counting, which is how ByteBuf is recycled: an internal counter, refCnt, tracks how many references exist. retain() increases the count by 1; release() decreases it by 1, and when the count finally reaches 0 the resource is freed. Only a brief introduction is given here.
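
A minimal sketch of this reference counting (refCnt(), retain() and release() all come from the ReferenceCounted interface):

    ByteBuf buf = ByteBufAllocator.DEFAULT.buffer(10);
    System.out.println(buf.refCnt()); // 1 right after creation
    buf.retain();
    System.out.println(buf.refCnt()); // 2
    buf.release();
    System.out.println(buf.refCnt()); // 1
    buf.release();                    // count drops to 0, the memory is released / returned to the pool
    System.out.println(buf.refCnt()); // 0; further reads or writes would throw IllegalReferenceCountException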

Comparable is mainly used for comparison.

Just like an ordinary byte array, ByteBuf uses zero-based indexing: the index of the first byte is always 0 and the index of the last byte is always capacity - 1.

ByteBuf provides two pointer variables to support sequential reads and writes: readerIndex for reads and writerIndex for writes.

The buffer is therefore divided into three regions: the bytes before readerIndex have already been read and can be discarded, the bytes between readerIndex and writerIndex are readable, and the space from writerIndex up to capacity is writable.

This partitioning gives ByteBuf certain advantages over ByteBuffer in NIO:

  • In ByteBuffer, reads and writes share a single position pointer, and flip() must be called to switch between writing and reading. In ByteBuf, the read and write pointers are separate, which is easier to use.
  • ByteBuf introduces automatic capacity expansion.
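
To make the first point concrete, here is a small sketch contrasting NIO's ByteBuffer with Netty's ByteBuf, each writing and then reading one int:

    // NIO: a single position pointer, so flip() is needed before reading back
    java.nio.ByteBuffer nioBuf = java.nio.ByteBuffer.allocate(16);
    nioBuf.putInt(42);
    nioBuf.flip();                          // switch from write mode to read mode
    System.out.println(nioBuf.getInt());    // 42

    // Netty: readerIndex and writerIndex are independent, no mode switch needed
    ByteBuf nettyBuf = ByteBufAllocator.DEFAULT.buffer(16);
    nettyBuf.writeInt(42);
    System.out.println(nettyBuf.readInt()); // 42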

1.2 Write ByteBuf

There are many methods for writing to a ByteBuf; they will not all be covered here.
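
As a reference, a few of the commonly used write methods (method names from the ByteBuf API; this is not an exhaustive list):

    writeBoolean(boolean value)   // writes 1 byte
    writeByte(int value)          // writes 1 byte
    writeShort(int value)         // writes 2 bytes
    writeInt(int value)           // writes 4 bytes
    writeLong(long value)         // writes 8 bytes
    writeFloat(float value)       // writes 4 bytes
    writeDouble(double value)     // writes 8 bytes
    writeBytes(byte[] src)        // writes src.length bytes
    writeBytes(ByteBuf src)       // writes the readable bytes of src
    writeCharSequence(CharSequence sequence, Charset charset) // writes an encoded string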

1.2.1 Write Example

public static void main(String[] args) {
    // Allocate a buffer with capacity 10
    ByteBuf byteBuf = ByteBufAllocator.DEFAULT.buffer(10);
    byte[] bytes = new byte[]{1, 2, 3, 4, 5};
    byteBuf.writeBytes(bytes);
    System.out.println(byteBuf);
    // Write 5 more bytes
    byteBuf.writeBytes(bytes);
    System.out.println(byteBuf);
}

Results:

PooledUnsafeDirectByteBuf(ridx: 0, widx: 5, cap: 10)
PooledUnsafeDirectByteBuf(ridx: 0, widx: 10, cap: 10)

Writing is fairly simple, so the other write methods will not be demonstrated one by one.

1.2.2 Big-Endian and Little-Endian Writes

Among the write methods above, those whose names end with LE perform little-endian writes, for example:

writeIntLE(int value)

LE is short for Little Endian, i.e. the low-order byte comes first.

Its counterpart is Big Endian, where the high-order byte comes first. Methods without the LE suffix use big-endian order, which is the convention in network programming. For example, the following method writes in big-endian order:

writeInt(int value)

Example: Write two numbers 888 and 888L

The binary representation is:

888  (int, 4 bytes):  | 0000 0000 | 0000 0000 | 0000 0011 | 0111 1000 |
888L (long, 8 bytes): | 0000 0000 | 0000 0000 | 0000 0000 | 0000 0000 | 0000 0000 | 0000 0000 | 0000 0011 | 0111 1000 |

Converted to byte arrays, these are:

[0, 0, 3, 120]
[0, 0, 0, 0, 0, 0, 3, 120]

There is the following test code:

public static void main(String[] args) {
    int a = 888;
    // Big-endian write
    ByteBuf byteBuf1 = ByteBufAllocator.DEFAULT.buffer();
    byteBuf1.writeInt(a);
    byte[] bytes1 = new byte[4];
    byteBuf1.readBytes(bytes1);
    System.out.println(Arrays.toString(bytes1));

    // Little-endian write
    ByteBuf byteBuf2 = ByteBufAllocator.DEFAULT.buffer();
    byteBuf2.writeIntLE(a);
    byte[] bytes2 = new byte[4];
    byteBuf2.readBytes(bytes2);
    System.out.println(Arrays.toString(bytes2));

    long b = 888L;
    // Big-endian write
    ByteBuf byteBuf3 = ByteBufAllocator.DEFAULT.buffer();
    byteBuf3.writeLong(b);
    byte[] bytes3 = new byte[8];
    byteBuf3.readBytes(bytes3);
    System.out.println(Arrays.toString(bytes3));

    // Little-endian write
    ByteBuf byteBuf4 = ByteBufAllocator.DEFAULT.buffer();
    byteBuf4.writeLongLE(b);
    byte[] bytes4 = new byte[8];
    byteBuf4.readBytes(bytes4);
    System.out.println(Arrays.toString(bytes4));
}

Results:

[0, 0, 3, 120]
[120, 3, 0, 0]
[0, 0, 0, 0, 0, 0, 3, 120]
[120, 3, 0, 0, 0, 0, 0, 0]

Conclusion: an int is 4 bytes and a long is 8 bytes; a big-endian write stores the high-order byte first, while a little-endian write stores the low-order byte first.

Network transport usually uses Big Endian (network byte order).

1.2.3 Writing with set Methods

There is also a series of write methods named set, for example:

public abstract ByteBuf setBytes(int index, byte[] src)

This class of methods requires us to specify the index at which to start writing. These methods do not modify the readerIndex or writerIndex of the buffer.

Example code is as follows:

public static void main(String[] args) {
    // Allocate a heap buffer with capacity 10
    ByteBuf byteBuf = ByteBufAllocator.DEFAULT.heapBuffer(10);
    byte[] bytes = new byte[]{1, 2, 3, 4, 5};
    // Write the bytes starting at index 5
    byteBuf.setBytes(5, bytes);
    System.out.println(byteBuf);
    // Manually move the write index to 10
    byteBuf.writerIndex(10);
    System.out.println(byteBuf);
    byte[] readBytes = new byte[10];
    byteBuf.readBytes(readBytes);
    System.out.println(Arrays.toString(readBytes));
}

Results:

PooledUnsafeHeapByteBuf(ridx: 0, widx: 0, cap: 10)
PooledUnsafeHeapByteBuf(ridx: 0, widx: 10, cap: 10)
[0, 0, 0, 0, 0, 1, 2, 3, 4, 5]

1.2.4 Capacity Expansion

Here we continue with the code from section 1.2.1 and write another 5 bytes:

public static void main(String[] args) {
    // Allocate a buffer with capacity 10
    ByteBuf byteBuf = ByteBufAllocator.DEFAULT.buffer(10);
    byte[] bytes = new byte[]{1, 2, 3, 4, 5};
    byteBuf.writeBytes(bytes);
    System.out.println(byteBuf);
    // Write 5 more bytes
    byteBuf.writeBytes(bytes);
    System.out.println(byteBuf);
    // Write 5 more bytes
    byteBuf.writeBytes(bytes);
    System.out.println(byteBuf);
}

Results:

PooledUnsafeDirectByteBuf(ridx: 0, widx: 5, cap: 10)
PooledUnsafeDirectByteBuf(ridx: 0, widx: 10, cap: 10)
PooledUnsafeDirectByteBuf(ridx: 0, widx: 15, cap: 16)

As shown above, the capacity on the last line has become 16, even though we only requested 10 at the beginning; this is the automatic expansion at work.

The expansion is performed in the ensureWritable0 method of AbstractByteBuf, whose code is as follows:

final void ensureWritable0(int minWritableBytes) {
    final int writerIndex = writerIndex();
    // The target capacity is the current write position plus the bytes about to be written
    final int targetCapacity = writerIndex + minWritableBytes;
    // If the target capacity is within the current capacity, no expansion is needed
    if (targetCapacity <= capacity()) {
        ensureAccessible();
        return;
    }
    // If the target capacity exceeds maxCapacity, throw an exception
    if (checkBounds && targetCapacity > maxCapacity) {
        ensureAccessible();
        throw new IndexOutOfBoundsException(String.format(
                "writerIndex(%d) + minWritableBytes(%d) exceeds maxCapacity(%d): %s",
                writerIndex, minWritableBytes, maxCapacity, this));
    }

    // Normalize the target capacity to a power of 2.
    final int fastWritable = maxFastWritableBytes();
    // If the fast-writable space is enough for the bytes to be written, grow to writerIndex + fastWritable;
    // otherwise use calculateNewCapacity to work out the new capacity
    int newCapacity = fastWritable >= minWritableBytes ? writerIndex + fastWritable
            : alloc().calculateNewCapacity(targetCapacity, maxCapacity);

    // Adjust to the new capacity.
    capacity(newCapacity);
}

maxFastWritableBytes internally uses maxLength, which is the normalized capacity calculated when the buffer was allocated from the pool (a multiple of 16 for sizes below 512, a power of 2 otherwise), and subtracts the current write position:

    public int maxFastWritableBytes() {
        return Math.min(maxLength, maxCapacity()) - writerIndex;
    }

Here is how that normalized capacity is calculated at allocation time:

int normalizeCapacity(int reqCapacity) {
    checkPositiveOrZero(reqCapacity, "reqCapacity");

    // If the requested capacity is at least chunkSize, use the requested value itself
    if (reqCapacity >= chunkSize) {
        return directMemoryCacheAlignment == 0 ? reqCapacity : alignCapacity(reqCapacity);
    }

    if (!isTiny(reqCapacity)) { // >= 512
        // Round up to the next power of 2
        int normalizedCapacity = reqCapacity;
        normalizedCapacity --;
        normalizedCapacity |= normalizedCapacity >>>  1;
        normalizedCapacity |= normalizedCapacity >>>  2;
        normalizedCapacity |= normalizedCapacity >>>  4;
        normalizedCapacity |= normalizedCapacity >>>  8;
        normalizedCapacity |= normalizedCapacity >>> 16;
        normalizedCapacity ++;

        if (normalizedCapacity < 0) {
            normalizedCapacity >>>= 1;
        }
        assert directMemoryCacheAlignment == 0 || (normalizedCapacity & directMemoryCacheAlignmentMask) == 0;

        return normalizedCapacity;
    }

    if (directMemoryCacheAlignment > 0) {
        return alignCapacity(reqCapacity);
    }

    // Quantum-spaced: below 512, round up to a multiple of 16
    if ((reqCapacity & 15) == 0) {
        return reqCapacity;
    }

    return (reqCapacity & ~15) + 16;
}

In the code above there is a chunkSize (16 MiB by default). If the requested capacity is greater than or equal to chunkSize, that value itself is used as the allocated size. Returning to the expansion decision in ensureWritable0:

int newCapacity = fastWritable >= minWritableBytes ? writerIndex + fastWritable : this.alloc().calculateNewCapacity(targetCapacity, this.maxCapacity);

When the fast-writable space is not enough, the branch that recalculates the capacity is taken:

this.alloc().calculateNewCapacity(targetCapacity, this.maxCapacity);

The specific code is as follows:

public int calculateNewCapacity(int minNewCapacity, int maxCapacity) {
    checkPositiveOrZero(minNewCapacity, "minNewCapacity");
    // If the required capacity is greater than the maximum capacity, throw an exception
    if (minNewCapacity > maxCapacity) {
        throw new IllegalArgumentException(String.format(
                "minNewCapacity: %d (expected: not greater than maxCapacity(%d)",
                minNewCapacity, maxCapacity));
    }

    // The threshold is one 4 MiB page
    final int threshold = CALCULATE_THRESHOLD; // 4 MiB page

    // If the required capacity equals the threshold, return the threshold directly
    if (minNewCapacity == threshold) {
        return threshold;
    }

    // Above the threshold: do not double, grow by whole thresholds instead
    if (minNewCapacity > threshold) {
        // Round down to the nearest multiple of the threshold
        int newCapacity = minNewCapacity / threshold * threshold;
        if (newCapacity > maxCapacity - threshold) {
            // Adding another threshold would exceed maxCapacity, so cap at maxCapacity
            newCapacity = maxCapacity;
        } else {
            // Add one more threshold (4 MiB) of headroom
            newCapacity += threshold;
        }
        return newCapacity;
    }

    // Below the threshold: start from 64 and double until large enough (up to 4 MiB)
    int newCapacity = 64;
    while (newCapacity < minNewCapacity) {
        // Shift left by one bit, i.e. multiply by 2
        newCapacity <<= 1;
    }

    return Math.min(newCapacity, maxCapacity);
}

That is how the new capacity is calculated during expansion.
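
To observe these growth rules directly, here is a small sketch (it assumes a Netty 4.1.x version in which calculateNewCapacity is exposed on the ByteBufAllocator interface; the expected values follow from the code above):

    ByteBufAllocator alloc = ByteBufAllocator.DEFAULT;
    int max = Integer.MAX_VALUE;
    // Below the 4 MiB threshold: start from 64 and keep doubling
    System.out.println(alloc.calculateNewCapacity(50, max));              // 64
    System.out.println(alloc.calculateNewCapacity(1000, max));            // 1024
    // Above the threshold: grow in 4 MiB steps instead of doubling
    System.out.println(alloc.calculateNewCapacity(5 * 1024 * 1024, max)); // 8388608 (8 MiB)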

1.3 Read ByteBuf

1.3.1 Code demonstration

The read methods correspond to the write methods, so they are not listed one by one here; let's go straight to example code:

public static void main(String[] args) {
    // Allocate a buffer with capacity 10
    ByteBuf byteBuf = ByteBufAllocator.DEFAULT.buffer(10);
    byte[] bytes = new byte[]{1, 2, 3, 4, 5};
    byteBuf.writeBytes(bytes);
    System.out.println(byteBuf);
    // Read one byte
    System.out.println(byteBuf.readByte());
    System.out.println(byteBuf);
    // Read another byte
    System.out.println(byteBuf.readByte());
    System.out.println(byteBuf);
    // Read into an array of our own
    byteBuf.readBytes(new byte[3]);
    System.out.println(byteBuf);
}

Results:

PooledUnsafeDirectByteBuf(ridx: 0, widx: 5, cap: 10)
1
PooledUnsafeDirectByteBuf(ridx: 1, widx: 5, cap: 10)
2
PooledUnsafeDirectByteBuf(ridx: 2, widx: 5, cap: 10)
PooledUnsafeDirectByteBuf(ridx: 5, widx: 5, cap: 10)

1.3.2 Repeated Reads

1) Use the methods whose names start with get; they read at an absolute index and do not modify readerIndex (a small example is shown after the mark demo below).

2) Use mark:

public static void main(String[] args) {
    // Allocate a buffer with capacity 10
    ByteBuf byteBuf = ByteBufAllocator.DEFAULT.buffer(10);
    byte[] bytes = new byte[]{1, 2, 3, 4, 5};
    byteBuf.writeBytes(bytes);
    // Set a mark at the current read index
    byteBuf.markReaderIndex();
    // Read one byte
    System.out.println(byteBuf.readByte());
    System.out.println(byteBuf);
    // Reset the read index back to the mark
    byteBuf.resetReaderIndex();
    // Read the same byte again
    System.out.println(byteBuf.readByte());
    System.out.println(byteBuf);
}

Results:

1
PooledUnsafeDirectByteBuf(ridx: 1, widx: 5, cap: 10)
1
PooledUnsafeDirectByteBuf(ridx: 1, widx: 5, cap: 10)
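
And here is the get-style access mentioned in point 1): a minimal sketch showing that getByte reads at an absolute index without moving readerIndex:

    ByteBuf byteBuf = ByteBufAllocator.DEFAULT.buffer(10);
    byteBuf.writeBytes(new byte[]{1, 2, 3, 4, 5});
    System.out.println(byteBuf.getByte(0)); // 1
    System.out.println(byteBuf.getByte(0)); // 1 again, readerIndex has not moved
    System.out.println(byteBuf);            // ridx is still 0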

Due to space constraints I will stop here and continue the ByteBuf series in a later article. If this helped, please give it a thumbs up.