As a transport-layer (Layer 4) protocol, TCP knows nothing about the meaning of the application-layer (Layer 7) data it carries. As a result, when data is transmitted over TCP, sticky packets or split packets may occur, and the application layer must use a specific protocol to delimit its messages.
1. What are TCP sticky/split packets?
"Sticky packets" and "split packets" often come up in Socket programming. When data is transmitted over TCP, if the peer sends several small packets in a row, TCP may combine them into a single TCP segment before sending. This is called a "sticky packet". If the peer sends one large packet, TCP may split it into several smaller TCP segments according to buffer conditions. This is called a "split packet" (packet unpacking).
Take the HTTP service as an example. As a transport-layer protocol, TCP does not understand the meaning of the HTTP messages at the application layer above it, so it does not know where the boundary of a single message lies. Once a message is longer than the maximum TCP segment size, TCP has to split it, and the server receives an incomplete HTTP request. How does the server handle this? Following the HTTP specification for request and response messages, the server checks whether the request it has received so far is complete. If it is incomplete, the server waits for the peer to send more data and processes the request only after reading a complete one.
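The "keep buffering until the message is complete" idea above can be sketched in plain Java. This is a minimal illustration, not real HTTP parsing: the `readRequest` helper and the simplistic `\r\n\r\n` terminator check are assumptions made for the demo.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

public class AccumulateDemo {
    // Keep buffering incoming chunks until the message terminator arrives.
    static String readRequest(String[] chunks) {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        for (String chunk : chunks) {
            buf.writeBytes(chunk.getBytes(StandardCharsets.UTF_8));
            String soFar = buf.toString(StandardCharsets.UTF_8);
            if (soFar.endsWith("\r\n\r\n")) {   // complete request received
                return soFar;
            }
            // otherwise: incomplete, wait for the next chunk from the peer
        }
        return null; // connection ended before a full request arrived
    }

    public static void main(String[] args) {
        // TCP delivered one logical request split across three reads
        String[] chunks = {"GET / HT", "TP/1.1\r\nHost: a\r\n", "\r\n"};
        System.out.println(readRequest(chunks) != null); // true
    }
}
```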
The following figure illustrates the sticky-packet and split-packet scenarios:
2. What causes sticky/split packets?
Having covered what sticky and split packets are, let's look at the possible causes of TCP sticky/split packets.
2.1 Nagle algorithm
TCP is a connection-oriented, reliable, byte-stream-based transport-layer protocol. Data handed to TCP by the application layer is not transmitted in units of application-layer messages; several messages may be combined into one data segment and sent to the target host.
Nagle is an algorithm that improves TCP transmission efficiency by reducing the number of small packets. Because network bandwidth is limited, frequently sending tiny packets puts considerable pressure on it. The Nagle algorithm first buffers outgoing data locally until the total amount reaches the maximum segment size (MSS), then sends it in one batch. Messages may go out later than they otherwise would, but the pressure on bandwidth is lower, reducing network congestion and per-packet overhead.
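Applications that need low latency typically disable Nagle with the TCP_NODELAY socket option. A minimal sketch using the JDK's NIO API (in Netty the equivalent channel option is `ChannelOption.SO_NODELAY`):

```java
import java.io.IOException;
import java.net.StandardSocketOptions;
import java.nio.channels.SocketChannel;

public class NoDelayDemo {
    public static void main(String[] args) throws IOException {
        SocketChannel ch = SocketChannel.open();
        // Disable the Nagle algorithm: small writes go out immediately
        ch.setOption(StandardSocketOptions.TCP_NODELAY, true);
        System.out.println(ch.getOption(StandardSocketOptions.TCP_NODELAY));
        ch.close();
    }
}
```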
Network resources are not as tight as they were a few decades ago, so many latency-sensitive applications disable Nagle via the TCP_NODELAY socket option (exposed in Netty as ChannelOption.SO_NODELAY). Note that on Linux, Nagle is enabled by default; it is up to the application to turn it off.
2.2 TCP_CORK
TCP has an option, TCP_CORK, that can also cause sticky/split packets. With TCP_CORK enabled, TCP holds back data smaller than the maximum segment size (MSS), waiting either until a full MSS worth of data accumulates in the send buffer or until a timeout expires (up to 200 ms on Linux) before sending.
2.3 MTU
The Maximum Transmission Unit (MTU) is the largest data unit a link can carry, i.e., the maximum payload size the link layer can accept. The MTU of a typical network adapter is 1500, meaning each frame can carry at most 1500 bytes of data. You can view the MTU of each NIC with the `ifconfig` command:
2.4 MSS
The Maximum Segment Size (MSS) is a TCP option negotiated by the sender and receiver during TCP connection establishment. It specifies the maximum data length (excluding headers) that each segment can carry.
After the TCP connection is established, both sides have agreed on the maximum segment length that can be transmitted, and TCP uses it to cap how many application-layer bytes can be sent at a time. If the application layer hands over more data than the MSS, the data must likewise be split into multiple segments.
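For example, with a typical Ethernet MTU of 1500 bytes, the MSS is 1500 minus the 20-byte IP header and the 20-byte TCP header, i.e. 1460 bytes (assuming no IP/TCP options). A small sketch of how a payload splits into segments:

```java
public class MssDemo {
    // Number of TCP segments needed for a payload, given the MSS
    static int segmentsNeeded(int payloadBytes, int mss) {
        return (payloadBytes + mss - 1) / mss; // ceiling division
    }

    public static void main(String[] args) {
        int mss = 1500 - 20 - 20; // MTU minus IP and TCP headers = 1460
        System.out.println(segmentsNeeded(4000, mss)); // a 4000-byte write -> 3 segments
    }
}
```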
3. Solutions to sticky/split packets
Common solutions can be roughly divided into three categories:
- Fix the length of every packet, padding when the data is shorter. This is simple to implement but wastes some bandwidth.
- Use a specific delimiter, such as a newline character, to separate different packets.
- Write the length of the message into the request header.
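The third approach, length-prefix framing, can be sketched with the JDK's `ByteBuffer`. This is a minimal illustration of the idea only; real codecs such as Netty's also handle partial reads and buffer management.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class LengthPrefixDemo {
    // Encode: write a 4-byte length header, then the payload
    static ByteBuffer encode(String msg) {
        byte[] body = msg.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(4 + body.length);
        buf.putInt(body.length).put(body).flip();
        return buf;
    }

    // Decode: read the length header, then exactly that many bytes
    static String decode(ByteBuffer buf) {
        if (buf.remaining() < 4) return null;        // header not complete yet
        int len = buf.getInt(buf.position());        // peek at the length field
        if (buf.remaining() < 4 + len) return null;  // body not complete yet
        buf.getInt();                                // consume the header
        byte[] body = new byte[len];
        buf.get(body);
        return new String(body, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Two messages "stuck" together in one buffer still decode cleanly
        ByteBuffer a = encode("hello");
        ByteBuffer b = encode("world");
        ByteBuffer joined = ByteBuffer.allocate(a.remaining() + b.remaining());
        joined.put(a).put(b).flip();
        System.out.println(decode(joined)); // hello
        System.out.println(decode(joined)); // world
    }
}
```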
For these three solutions, Netty provides out-of-the-box decoders that are very convenient to use. DelimiterBasedFrameDecoder is a custom-delimiter decoder: Netty considers a message complete only after it has read the specified delimiter.
LineBasedFrameDecoder is a decoder that uses the newline character as its delimiter.
FixedLengthFrameDecoder is a fixed-length frame decoder. You specify the frame size, and Netty only invokes the subsequent channelRead() once it has read a full frame.
LengthFieldBasedFrameDecoder is a decoder for messages that carry a length field. Given the offset of the length field and the number of bytes it occupies, Netty reads the message length from that field and invokes the subsequent channelRead() only after a complete frame has been read. It can be used as follows:
```java
/**
 * @param maxFrameLength    the maximum frame size
 * @param lengthFieldOffset the offset of the length field
 * @param lengthFieldLength the number of bytes occupied by the length field
 */
public LengthFieldBasedFrameDecoder(
        int maxFrameLength,
        int lengthFieldOffset, int lengthFieldLength) {
    this(maxFrameLength, lengthFieldOffset, lengthFieldLength, 0, 0);
}
```
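What the offset and length-field parameters mean can be illustrated without Netty. This toy sketch assumes a frame layout with a hypothetical 2-byte magic header before a 4-byte length field; it only computes where one frame ends.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class LengthFieldDemo {
    // Total size of one frame, given where the length field sits and how
    // many bytes it occupies (this sketch supports 4-byte int fields only).
    static int frameLength(ByteBuffer buf, int lengthFieldOffset, int lengthFieldLength) {
        if (lengthFieldLength != 4) {
            throw new IllegalArgumentException("sketch supports 4-byte length fields only");
        }
        int bodyLen = buf.getInt(buf.position() + lengthFieldOffset);
        return lengthFieldOffset + lengthFieldLength + bodyLen;
    }

    public static void main(String[] args) {
        // Frame layout: 2-byte magic header | 4-byte length | body
        ByteBuffer frame = ByteBuffer.allocate(16);
        frame.putShort((short) 0xCAFE)
             .putInt(5)
             .put("hello".getBytes(StandardCharsets.UTF_8))
             .flip();
        System.out.println(frameLength(frame, 2, 4)); // 2 + 4 + 5 = 11
    }
}
```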
4. Case study
Because LengthFieldBasedFrameDecoder is the most commonly used, only it is demonstrated here; the other decoders are left for you to explore.
Because this is only a simple test, an EmbeddedChannel is used instead of starting a full Netty service. The case is as follows:
```java
import java.nio.charset.Charset;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.LengthFieldBasedFrameDecoder;

// Half-packet read/write demo
public class HalfDemo {
    public static void main(String[] args) {
        EmbeddedChannel channel = new EmbeddedChannel();
        // Maximum frame 1 MB; bytes 0-4 record the packet length
        channel.pipeline().addLast(new LengthFieldBasedFrameDecoder(1024 * 1024, 0, 4));
        channel.pipeline().addLast(new ChannelInboundHandlerAdapter() {
            @Override
            public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
                ByteBuf buf = (ByteBuf) msg;
                int length = buf.readInt();
                System.out.println("Message length: " + length);
                System.out.println("Received data: " + buf.toString(Charset.defaultCharset()));
            }
        });

        ByteBuf buf = Unpooled.buffer();
        // Write the length of the message first
        buf.writeInt(5);
        // The console will have no output until the full 5 body bytes are written
        buf.writeBytes("hello".getBytes());
        channel.writeInbound(buf);
    }
}
```