This series of Netty source code analysis articles is based on version 4.1.56.Final.
For a high-performance network communication framework, the most important core job is how to receive client connections efficiently. It is like running a restaurant: greeting guests is the most important work. We have to welcome the guests in and must not turn anyone away; as long as the guests come in, it does not matter much if the food is served a little slowly.
In this article, I will walk you through one of Netty's most core pieces of work and see how Netty receives client connections efficiently.
On a deep and quiet night, with nothing better to do, I picked up the part of the Netty source code that receives connections and read it carefully, and accidentally found a Bug that affects the throughput with which Netty receives connections.
Therefore, I raised Issue#11708 on GitHub (shown in the figure above), explained the cause and effect of this Bug, and discussed the fix with the Netty maintainers.
Issue#11708:github.com/netty/netty…
I will not explain the Issue in detail here, nor do I suggest that you open it right now. As this article interprets the source code in depth, I will clear away the fog for you layer by layer.
The reason I bring this up at the beginning of the article is that I hope everyone will read code written by the world's top programmers with suspicion, scrutiny, appreciation, reverence and awe. I sincerely thank them for their contributions in this area.
Well, with the question raised, let's begin this article with this question in mind.
Review of previous articles
As usual, before starting this article, let’s review the summaries of previous articles to help you put together a big picture.
I would like to emphasize again that this article can be viewed independently of the previous series of articles, but if you are interested in the details of the relevant part, you can go back to the relevant article after reading this article.
In the previous series of articles, I introduced the entire process of creating, starting and running the Reactor, the core engine that drives the whole Netty framework. From this point on, the entire core framework of Netty is up and running. The main content of this article is the first thing Netty needs to do after starting up: listening on the port address and efficiently receiving client connections.
In the article on Netty's IO model from a kernel perspective, we described Netty's IO thread model, which is the cornerstone of the entire network framework.
In Netty, the Reactor is the model definition of an IO thread. Reactors appear in the form of groups, which are divided into:
- The main Reactor thread group, which is the EventLoopGroup bossGroup configured in the startup code. The Reactors in the main Reactor group are mainly responsible for listening for client connection events and efficiently processing client connections. This is also the focus of this article.
- The sub Reactor thread group, which is the EventLoopGroup workerGroup configured in the startup code. The Reactors in the sub Reactor group are responsible for handling IO events on client connections and executing asynchronous tasks.
Finally, we get the whole IO model of Netty as follows:
In this article, we focus on the core work of MainReactorGroup as shown in step 1, Step 2, and Step 3.
After introducing Netty's IO model as a whole, in the article on the implementation of the Reactor in Netty (the foundation), we introduced the skeleton of the Netty framework and the construction process of the Reactor groups, explained how a Reactor is created, and introduced its core components as shown in the figure below:
- thread: the IO thread in the Reactor, mainly responsible for listening for IO events, handling IO tasks, and executing asynchronous tasks.
- selector: the JDK NIO wrapper around the operating system's underlying IO multiplexing implementation, used to listen for IO-ready events.
- taskQueue: stores the asynchronous tasks that the Reactor needs to execute. These asynchronous tasks can be submitted to the Reactor by user business threads, or they can be core tasks submitted by the Netty framework itself.
- scheduledTaskQueue: stores the scheduled tasks executed by the Reactor, replacing the original time wheel for executing delayed tasks.
- tailQueue: stores the tail tasks that the Reactor needs to execute. The Reactor thread executes tail tasks after completing normal tasks, for example gathering statistics about Netty's runtime state, such as the time spent in a task cycle, the amount of physical memory used, and so on.
After the skeleton was built, in the article "A Detailed Diagram of the Netty Reactor Startup Process" we introduced the complete process of creating and initializing the server-side NioServerSocketChannel, binding it to the port address, registering it with the Main Reactor, and listening for OP_ACCEPT events.
How the Main Reactor handles OP_ACCEPT events will be the main content of this article.
From then on, the Main Reactor Group of the Netty framework is started and ready to listen for OP_ACCEPT events. When a client connects, the OP_ACCEPT event becomes active and the Main Reactor starts to process it.
In Netty, IO events are classified into the OP_ACCEPT, OP_READ, OP_WRITE and OP_CONNECT events. Netty's listening for and handling of IO events are uniformly encapsulated in the Reactor model. The handling of these four IO events will be introduced separately in subsequent articles; in this article we focus on handling the OP_ACCEPT event.
In order to give everyone a complete understanding of how IO events are processed, I wrote the article "The Operating Framework of Netty's Core Engine, the Reactor", which introduced the whole operating framework of the Reactor thread in detail.
The Reactor thread runs continuously in an endless loop, polling for IO events on the Selector. When IO events become active, the Reactor wakes up from the Selector and handles the IO-ready events. In that article we introduced the entry function for handling the four IO events above.
    private void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
        // Get the Channel's underlying operation class, Unsafe
        final AbstractNioChannel.NioUnsafe unsafe = ch.unsafe();
        if (!k.isValid()) {
            ...... // If the SelectionKey has expired, close the corresponding Channel ......
        }

        try {
            // Get the IO-ready events
            int readyOps = k.readyOps();

            // Handle the Connect event
            if ((readyOps & SelectionKey.OP_CONNECT) != 0) {
                int ops = k.interestOps();
                // Remove the interest in the Connect event, otherwise the Selector will keep notifying
                ops &= ~SelectionKey.OP_CONNECT;
                k.interestOps(ops);
                // Trigger the channelActive event to process the Connect event
                unsafe.finishConnect();
            }

            // Handle the Write event
            if ((readyOps & SelectionKey.OP_WRITE) != 0) {
                ch.unsafe().forceFlush();
            }

            // Handle the Read or Accept event
            if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
                unsafe.read();
            }
        } catch (CancelledKeyException ignored) {
            unsafe.close(unsafe.voidPromise());
        }
    }
In this article, I will focus on the implementation of the OP_ACCEPT event handler function unsafe.read().
When a client connection completes the three-way handshake, the OP_ACCEPT event on the Main Reactor's Selector becomes active. The Main Reactor wakes up, enters the OP_ACCEPT event handler, and starts receiving the client connection.
1. The Main Reactor processes OP_ACCEPT events
When the Main Reactor polls an active OP_ACCEPT event on the NioServerSocketChannel, the Main Reactor thread returns from the blocking selector.select(timeoutMillis) call on the JDK Selector and turns to handling the OP_ACCEPT event on the NioServerSocketChannel.
    public final class NioEventLoop extends SingleThreadEventLoop {

        private void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
            final AbstractNioChannel.NioUnsafe unsafe = ch.unsafe();
            ..........omitted..........
            try {
                int readyOps = k.readyOps();

                if ((readyOps & SelectionKey.OP_CONNECT) != 0) { ..........handle the OP_CONNECT event.......... }

                if ((readyOps & SelectionKey.OP_WRITE) != 0) { ..........handle the OP_WRITE event.......... }

                if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
                    // This article focuses on the OP_ACCEPT event
                    unsafe.read();
                }
            } catch (CancelledKeyException ignored) {
                unsafe.close(unsafe.voidPromise());
            }
        }
    }
- The AbstractNioChannel ch passed into processSelectedKey, the entry function for handling IO-ready events, is the server-side NioServerSocketChannel here. The executing thread is the Main Reactor thread, and what is registered with the Main Reactor is the server-side NioServerSocketChannel, which is responsible for listening on the port address and receiving client connections.
- The NioUnsafe operation class obtained via ch.unsafe() is the operation class for the underlying JDK NIO ServerSocketChannel in the server-side NioServerSocketChannel.
The Unsafe interface encapsulates the low-level operations of a Channel; for example, the NioServerSocketChannel's underlying Unsafe operation class binds the port address and handles OP_ACCEPT events.
As you can see, Netty encapsulates the entry function for the OP_ACCEPT event handling in the read method of the underlying operation class Unsafe in NioServerSocketChannel.
The concrete type of the Unsafe operation class in NioServerSocketChannel is NioMessageUnsafe, which is defined in its parent class AbstractNioMessageChannel in the inheritance hierarchy shown above.
Next, let's look at NioMessageUnsafe#read to see how Netty actually handles the OP_ACCEPT event.
2. An overview of the core process for receiving client connections
As usual, let's first extract the overall logical framework for handling the OP_ACCEPT event, so that everyone gets the big picture of the process, and then break down each core point one by one.
The Main Reactor thread keeps calling the JDK NIO ServerSocketChannel.accept() method in a do{...}while(...) read loop to receive client connections that have completed the three-way handshake, and the received client connections (NioSocketChannel) are temporarily stored in a List<Object> readBuf collection.
The read loop is limited to 16 reads. Once the Main Reactor has read from the NioServerSocketChannel 16 times, it stops reading, regardless of whether there are still client connections to accept.
As we mentioned in the article on the Reactor's operating framework, Netty squeezes the Reactor thread very hard: besides polling for and processing IO-ready events, the Reactor thread also has to execute asynchronous tasks and scheduled tasks submitted by users and by the Netty framework itself.
Therefore, the Main Reactor thread cannot spin indefinitely in the read loop: time needs to be left for executing asynchronous tasks, which must not be starved by the endless reception of client connections. That is why the number of iterations of the read loop is limited to 16.
If, after 16 reads in the read loop, there are still client connections waiting to be accepted, the Main Reactor thread temporarily stops receiving them and switches to executing asynchronous tasks. When the asynchronous tasks are done, it comes back to finish receiving the remaining connections, roughly as sketched below.
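To give a rough idea of how this time budget works, here is a simplified sketch of the relevant part of NioEventLoop#run (simplified from the 4.1.x source; error handling and the ioRatio == 100 branch are omitted, and ioRatio defaults to 50):

    // After polling, split the time between IO handling and the task queue according to ioRatio
    final long ioStartTime = System.nanoTime();
    try {
        // Handle IO-ready events, including the connection-receiving read loop (at most 16 reads)
        processSelectedKeys();
    } finally {
        // Grant the task queue a time budget proportional to the time just spent on IO
        final long ioTime = System.nanoTime() - ioStartTime;
        ranTasks = runAllTasks(ioTime * (100 - ioRatio) / ioRatio);
    }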
The Main Reactor thread exits the read loop under two conditions:
- Within the limit of 16 reads, there are no more new client connections to receive: exit the loop.
- The NioServerSocketChannel has already been read 16 times in this round: the loop must exit, regardless of whether there are still client connections waiting.
The above is Netty's overall core logic for receiving client connections. Below I extract the source-code framework that implements this core logic, so that you can match the core logic above with the actual processing modules in the source code. As always, you only need to grasp the overall core process; there is no need to read every line of code. I will break it down module by module later in the article.
    public abstract class AbstractNioMessageChannel extends AbstractNioChannel {

        private final class NioMessageUnsafe extends AbstractNioUnsafe {

            // Store the client SocketChannels created after the connections are established
            private final List<Object> readBuf = new ArrayList<Object>();

            @Override
            public void read() {
                // Must be executed in the Main Reactor thread
                assert eventLoop().inEventLoop();
                // Note that the config and pipeline below belong to the server-side NioServerSocketChannel
                final ChannelConfig config = config();
                final ChannelPipeline pipeline = pipeline();
                // Create the buffer allocator handle for receiving data.
                // In the connection-receiving scenario, allocHandle is only used to control how many iterations the read loop runs.
                final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
                allocHandle.reset(config);

                boolean closed = false;
                Throwable exception = null;
                try {
                    try {
                        do {
                            // Call NioServerSocketChannel->doReadMessages to create the client SocketChannel
                            int localRead = doReadMessages(readBuf);
                            // Exit the read loop if there are no new connections to receive
                            if (localRead == 0) {
                                break;
                            }
                            if (localRead < 0) {
                                closed = true;
                                break;
                            }
                            // Count the number of messages read in the current event loop
                            allocHandle.incMessagesRead(localRead);
                        } while (allocHandle.continueReading()); // Check whether the 16-read limit has been reached
                    } catch (Throwable t) {
                        exception = t;
                    }

                    int size = readBuf.size();
                    for (int i = 0; i < size; i ++) {
                        readPending = false;
                        // Propagate the ChannelRead event in the pipeline of the NioServerSocketChannel
                        // to initialize the client SocketChannel and bind it to a Reactor in the Sub Reactor thread group
                        pipeline.fireChannelRead(readBuf.get(i));
                    }

                    // Clear the client SocketChannel collection created by this round of accepts
                    readBuf.clear();
                    allocHandle.readComplete();
                    // Trigger the readComplete event propagation
                    pipeline.fireChannelReadComplete();

                    ..........omitted..........
                } finally {
                    ..........omitted..........
                }
            }
        }
    }
The first step is to use assert eventLoop().inEventLoop() to ensure that the thread receiving client connections must be the Main Reactor thread.
What is registered with the Main Reactor is the server-side NioServerSocketChannel, which is mainly responsible for processing the OP_ACCEPT event, so here the Main Reactor thread receives connections in the NioServerSocketChannel.
What we obtain here via config() is therefore the NioServerSocketChannel's property configuration class NioServerSocketChannelConfig, which was created during the Reactor startup phase.
    public NioServerSocketChannel(ServerSocketChannel channel) {
        // The AbstractNioChannel parent class holds the JDK NIO native ServerSocketChannel and the OP_ACCEPT event to listen for
        super(null, channel, SelectionKey.OP_ACCEPT);
        // DefaultChannelConfig sets the buffer allocator for the Channel to receive data -> AdaptiveRecvByteBufAllocator
        config = new NioServerSocketChannelConfig(this, javaChannel().socket());
    }
In the same way, pipeline() obtains the pipeline in the NioServerSocketChannel, which is initialized after the server-side NioServerSocketChannel has been successfully registered with the Main Reactor.
Since the Main Reactor thread is limited to reading client connections from the NioServerSocketChannel 16 times in a read loop, before starting the read loop we need to create an object that keeps track of the number of reads; after each iteration of the read loop, this object is used to decide whether to terminate the loop.
This object is the RecvByteBufAllocator.Handle allocHandle here, which is dedicated to counting the number of client connections received in the read loop and to deciding whether to end the read loop and switch to executing asynchronous tasks.
Once this is in place, the Main Reactor thread starts receiving client connections in the do{....}while(...) loop.
In the read loop, client connections that have completed the three-way handshake are received by calling the doReadMessages function, which at the bottom calls the accept method of the JDK NIO ServerSocketChannel to take the client connection out of the kernel's full-connection (accept) queue.
The return value localRead indicates how many client connections were received. Since connections are accepted one at a time through the accept method, localRead normally returns 1; when localRead <= 0, there are no more new client connections to receive. The Main Reactor's task of receiving client connections in this round is then finished, the read loop is exited, and a new round of IO event polling and processing begins.
    public static SocketChannel accept(final ServerSocketChannel serverSocketChannel) throws IOException {
        try {
            return AccessController.doPrivileged(new PrivilegedExceptionAction<SocketChannel>() {
                @Override
                public SocketChannel run() throws IOException {
                    return serverSocketChannel.accept();
                }
            });
        } catch (PrivilegedActionException e) {
            throw (IOException) e.getCause();
        }
    }
The received client connections are then temporarily stored in the List<Object> readBuf collection.
    private final class NioMessageUnsafe extends AbstractNioUnsafe {
        // Store the client SocketChannels created after the connections are established
        private final List<Object> readBuf = new ArrayList<Object>();
    }
allocHandle.incMessagesRead is called to count the number of client connections received in this event loop, and finally allocHandle.continueReading is called at the end of each iteration of the read loop to determine whether the 16-read limit has been reached. This decides whether the Main Reactor thread continues to receive client connections or turns to executing asynchronous tasks.
Two conditions for the Main Reactor thread to exit the read loop:
- Within the limit of 16 reads, there are no more new client connections to receive: exit the loop.
- The NioServerSocketChannel has already been read 16 times in this round: the loop must exit, regardless of whether there are still client connections waiting.
When either of the above two exit conditions is met, the Main Reactor thread exits the read loop. Since all the client connections received in the read loop are temporarily stored in the List<Object> readBuf collection, it then iterates over the collection and propagates the ChannelRead event in the NioServerSocketChannel's pipeline for each connection.
        int size = readBuf.size();
        for (int i = 0; i < size; i ++) {
            readPending = false;
            // Propagate the ChannelRead event in the pipeline of the NioServerSocketChannel
            // io.netty.bootstrap.ServerBootstrap.ServerBootstrapAcceptor.channelRead
            // Initialize the client SocketChannel and bind it to a Reactor in the Sub Reactor thread group
            pipeline.fireChannelRead(readBuf.get(i));
        }
Finally, the ChannelHandler in the pipeline (ServerBootstrapAcceptor) responds to the ChannelRead event, initializes the client NioSocketChannel in the corresponding callback, and registers it with the Sub Reactor Group. From then on, the Sub Reactor to which the client NioSocketChannel is bound starts listening for read and write events on that connection.
The logical process for Netty to receive a client is shown in Steps 1, 2, and 3.
The above content is the overall process framework extracted by the author. Let’s break down the important core modules involved and interpret them in detail one by one.
3. An introduction to RecvByteBufAllocator
When the Reactor processes IO data on a Channel, it needs a ByteBuffer to receive that data, and the RecvByteBufAllocator described in this section is the allocator used to allocate that ByteBuffer.
Remember where this RecvByteBufAllocator was created?
In the article on the implementation of the Reactor in Netty, I introduced the process of creating the NioServerSocketChannel. The Channel's corresponding configuration class, NioServerSocketChannelConfig, is also created along with the NioServerSocketChannel.
public NioServerSocketChannel(ServerSocketChannel channel) {
super(null, channel, SelectionKey.OP_ACCEPT);
config = new NioServerSocketChannelConfig(this, javaChannel().socket());
}
The RecvByteBufAllocator is created during the creation of the NioServerSocketChannelConfig.
public DefaultChannelConfig(Channel channel) {
this(channel, new AdaptiveRecvByteBufAllocator());
}
Here we can see that the actual type of the RecvByteBufAllocator in the NioServerSocketChannel is AdaptiveRecvByteBufAllocator. As the name implies, this type of RecvByteBufAllocator can dynamically adjust the size of the ByteBuffer based on how much IO data arrives on the Channel each time.
For the server-side NioServerSocketChannel, the IO data on it is client connections, whose length and type are fixed, so it does not need such a ByteBuffer to receive them; instead, the received client connections are stored in the List<Object> readBuf collection.
For the client-side NioSocketChannel, the IO data on it is the network data sent by the client, whose length is uncertain, so it needs a ByteBuffer whose capacity can be adjusted dynamically based on the size of each batch of IO data.
So why does the server-side NioServerSocketChannel hold a RecvByteBufAllocator at all? As we will see, the Bug mentioned at the beginning of this article is caused precisely by this class.
3.1 Obtaining the RecvByteBufAllocator.Handle
In the scenario of this article, we obtain the RecvByteBufAllocator.Handle through the underlying unsafe operation class of the NioServerSocketChannel.
    final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
    protected abstract class AbstractUnsafe implements Unsafe {

        @Override
        public RecvByteBufAllocator.Handle recvBufAllocHandle() {
            if (recvHandle == null) {
                recvHandle = config().getRecvByteBufAllocator().newHandle();
            }
            return recvHandle;
        }
    }
We can see that the AdaptiveRecvByteBufAllocator is ultimately obtained from the NioServerSocketChannel's configuration class NioServerSocketChannelConfig.
    public class DefaultChannelConfig implements ChannelConfig {
        // The buffer allocator for the Channel to receive data; the type is AdaptiveRecvByteBufAllocator
        private volatile RecvByteBufAllocator rcvBufAllocator;
    }
AdaptiveRecvByteBufAllocator is the allocator that creates ByteBuffers whose capacity is adjusted adaptively and dynamically.
    public class AdaptiveRecvByteBufAllocator extends DefaultMaxMessagesRecvByteBufAllocator {

        @Override
        public Handle newHandle() {
            return new HandleImpl(minIndex, maxIndex, initial);
        }

        private final class HandleImpl extends MaxMessageHandle {
            ..........omitted..........
        }
    }
The concrete type returned by the newHandle method here is MaxMessageHandle, which records capacity metrics about the IO data read from the Channel each time, so that a buffer of an appropriate size can be allocated for the next read.
Before using allocHandle each time, you need to call allocHandle.reset(config) to reset its internal statistics.
    public abstract class MaxMessageHandle implements ExtendedHandle {
        private ChannelConfig config;

        // Read at most 16 times per event-loop round
        private int maxMessagePerRead;

        // The total number of messages read in this event-loop round; here it refers to the number of connections received
        private int totalMessages;

        // The total number of bytes read in this event-loop round
        private int totalBytesRead;

        @Override
        public void reset(ChannelConfig config) {
            this.config = config;
            // By default, read at most 16 times
            maxMessagePerRead = maxMessagesPerRead();
            totalMessages = totalBytesRead = 0;
        }
    }
- maxMessagePerRead: controls the maximum number of reads in a read loop. The default value is 16, and it can be set in the ServerBootstrap via the ChannelOption.MAX_MESSAGES_PER_READ option.
    ServerBootstrap b = new ServerBootstrap();
    b.group(bossGroup, workerGroup)
     .channel(NioServerSocketChannel.class)
     .option(ChannelOption.MAX_MESSAGES_PER_READ, customCount)
- totalMessages: counts the total number of connections received in the read loop. allocHandle.incMessagesRead is called after each iteration of the read loop to add the number of connections received in that iteration.
    @Override
    public final void incMessagesRead(int amt) {
        totalMessages += amt;
    }
- totalBytesRead: counts the total number of bytes read. This field is used when the Sub Reactor receives network data on a NioSocketChannel: each time the Sub Reactor reads network data from a NioSocketChannel, the amount is recorded here. In the connection-receiving scenario of this article, the Main Reactor does not use this field.
    @Override
    public void lastBytesRead(int bytes) {
        lastBytesRead = bytes;
        if (bytes > 0) {
            totalBytesRead += bytes;
        }
    }
Another very important method of MaxMessageHandle is continueReading(): at the end of each iteration of the read loop, allocHandle.continueReading() is called to check whether the number of connections read has reached the 16-read limit, and thus to decide whether the Main Reactor thread should exit the loop.
        do {
            // Call NioServerSocketChannel->doReadMessages to create the client SocketChannel
            int localRead = doReadMessages(readBuf);
            if (localRead == 0) {
                break;
            }
            if (localRead < 0) {
                closed = true;
                break;
            }
            // Count the number of messages read in the current event loop
            allocHandle.incMessagesRead(localRead);
        } while (allocHandle.continueReading());
Two of the judgment conditions in this method are irrelevant to the topic of this article, so we do not need to pay attention to them here; I will introduce them in detail in a later article. The two conditions that matter for the scenarios discussed here are:
- totalMessages < maxMessagePerRead: in the connection-receiving scenario of this article, this condition checks whether the Main Reactor thread has already read 16 times in the read loop. If it has, continueReading returns false and the Main Reactor thread exits the loop.
- totalBytesRead > 0: indicates whether any network data was read in the read loop while an OP_READ event was active on a NioSocketChannel in the Sub Reactor thread.
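For reference, the following is a simplified sketch of what MaxMessageHandle#continueReading looks like in 4.1.56.Final (field and method names follow the Netty source; the autoRead and respectMaybeMoreData checks are the two conditions we are ignoring for now):

    // Simplified sketch of MaxMessageHandle#continueReading in 4.1.56.Final
    @Override
    public boolean continueReading(UncheckedBooleanSupplier maybeMoreDataSupplier) {
        return config.isAutoRead() &&
               (!respectMaybeMoreData || maybeMoreDataSupplier.get()) &&
               totalMessages < maxMessagePerRead &&   // has the 16-read limit been reached?
               totalBytesRead > 0;                    // always 0 when only receiving connections
    }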
The above is how RecvByteBufAllocator.Handle is used in the connection-receiving scenario. Now take a closer look at the exit conditions in the allocHandle.continueReading() method, combine them with the whole do{...}while(...) connection-receiving loop, and see whether you can feel that something is wrong. The Bug is coming ~~~
4. Aha!! Bug ! !
Netty handles both the connection-receiving scenario in this article and the reading of network data on client connections continuously in a do{....}while(...) read loop.
At the same time, it uses the RecvByteBufAllocator.Handle introduced in the previous section to record the number of connections received and the amount of network data read from connections in each read loop.
At the end of each iteration of the read loop, allocHandle.continueReading() is used to decide whether to exit the loop and end the connection-receiving process or the data-reading process.
Both the Main Reactor receiving client connections and the Sub Reactor receiving network data are based on the same framework, but the specific division of labor is different.
So Netty wants to use a unified RecvByteBufAllocator.Handle to handle both of the scenarios above.
However, the totalBytesRead field in RecvByteBufAllocator.Handle is only updated by the Sub Reactor thread, which records the total amount of network data read in the read loop while the OP_READ event is active on a client NioSocketChannel. The Main Reactor thread only receives client connections and never sets this field, so in the scenario of this article the value of totalBytesRead is always 0.
Therefore, no matter how many clients connect to the server concurrently, the connection-receiving read loop will always accept only one connection per round and then exit, because the condition totalBytesRead > 0 in the allocHandle.continueReading() method always evaluates to false.
        do {
            // Call NioServerSocketChannel->doReadMessages to create the client SocketChannel
            int localRead = doReadMessages(readBuf);
            if (localRead == 0) {
                break;
            }
            if (localRead < 0) {
                closed = true;
                break;
            }
            // Count the number of messages read in the current event loop
            allocHandle.incMessagesRead(localRead);
        } while (allocHandle.continueReading());
Netty's intent was to receive as many concurrent connections as possible within the read loop without affecting the Main Reactor thread's ability to execute asynchronous tasks, but due to this Bug the Main Reactor exits the loop after only one iteration. This affects Netty's connection-accepting throughput to a certain extent.
Let's imagine a scenario where 16 clients connect to the server concurrently: the OP_ACCEPT event on the NioServerSocketChannel becomes active, the Main Reactor wakes up from the Selector, and starts handling the OP_ACCEPT event.
    public final class NioEventLoop extends SingleThreadEventLoop {
        @Override
        protected void run() {
            int selectCnt = 0;
            for (;;) {
                try {
                    int strategy;
                    try {
                        strategy = selectStrategy.calculateStrategy(selectNowSupplier, hasTasks());
                        switch (strategy) {
                        case SelectStrategy.CONTINUE:  ..........omitted..........
                        case SelectStrategy.BUSY_WAIT: ..........omitted..........
                        case SelectStrategy.SELECT:    ..........poll for IO events..........
                        default:
                        }
                    } catch (IOException e) { ..........omitted.......... }

                    ..........process IO-ready events..........
                    ..........execute asynchronous tasks..........
                }
            }
        }
    }
However, due to this Bug, the Main Reactor receives only one client connection in the connection-receiving read loop and then hastily returns.
    private final class NioMessageUnsafe extends AbstractNioUnsafe {

        do {
            int localRead = doReadMessages(readBuf);
            ..........omitted..........
        } while (allocHandle.continueReading());
    }
According to the Reactor's operating structure shown below, it then executes asynchronous tasks and afterwards loops back into the NioEventLoop#run method to poll for the OP_ACCEPT event again.
Since there are still 15 concurrent client connections that have not yet been received, the Main Reactor thread does not block at selector.select() but quickly comes back to the do{...}while(...) loop in NioMessageUnsafe#read, again exiting the loop after receiving just one connection.
All 16 concurrent client connections could have been received in a single read loop, but instead the Main Reactor has to poll the OP_ACCEPT event over and over again, going around the big outer loop repeatedly, which also adds a lot of unnecessary selector.select() system call overhead.
The discussion in Issue#11708 describes this clearly.
Issue#11708:github.com/netty/netty…
4.1 Bug fixes
At the time of writing, the latest version of Netty was 4.1.68.Final, and the Bug was fixed in 4.1.69.Final.
The Bug arises precisely because the server-side NioServerSocketChannel (used for listening on the port address and receiving client connections) and the client-side NioSocketChannel (used for communication) mixed the same ByteBuffer allocator, AdaptiveRecvByteBufAllocator, in their Config configuration classes.
So the fix in the new version introduces a new ByteBuffer allocator, ServerChannelRecvByteBufAllocator, into the Config configuration class of the server-side ServerSocketChannel, dedicated to the scenario where the server-side ServerSocketChannel receives client connections.
A new field, ignoreBytesRead, is introduced in ServerChannelRecvByteBufAllocator's superclass DefaultMaxMessagesRecvByteBufAllocator to indicate whether the number of network bytes read should be ignored. When the server-side Channel configuration class NioServerSocketChannelConfig is created, this field is assigned to true.
The Main Reactor thread then receives client connections in the read loop as before:
    private final class NioMessageUnsafe extends AbstractNioUnsafe {

        do {
            int localRead = doReadMessages(readBuf);
            ..........omitted..........
        } while (allocHandle.continueReading());
    }
At the end of each iteration of the read loop, the MaxMessageHandle#continueReading method of the Handle created by ServerChannelRecvByteBufAllocator is used to check whether the 16-read limit has been reached. Since ignoreBytesRead == true this time, totalBytesRead == 0 is ignored, so the connection-receiving read loop keeps going and all 16 connections can be received in a single read loop.
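Based on my reading of the fix, the judgment after the patch roughly becomes the following (a simplified sketch, not a verbatim copy of the patched source):

    // Simplified sketch of MaxMessageHandle#continueReading after the 4.1.69.Final fix
    @Override
    public boolean continueReading(UncheckedBooleanSupplier maybeMoreDataSupplier) {
        return config.isAutoRead() &&
               (!respectMaybeMoreData || maybeMoreDataSupplier.get()) &&
               totalMessages < maxMessagePerRead &&
               // ignoreBytesRead is true for the server channel, so totalBytesRead == 0 no longer ends the loop
               (ignoreBytesRead || totalBytesRead > 0);
    }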
The above is a complete introduction to the cause of this Bug, the process of discovering it, and the final fix. For this, I also appear in the thanks list of the release announcement of Netty 4.1.69.Final. Haha, that really is a happy thing ~~~
Through the above analysis of the whole process by which Netty receives client connections, and the introduction to the ins and outs of this Bug and its fix, you should now understand the overall framework of the connection-receiving process.
Next, I will pull out the core modules involved in this process and go through their details one by one ~~~
5. doReadMessages: receiving client connections
    public class NioServerSocketChannel extends AbstractNioMessageChannel
                                 implements io.netty.channel.socket.ServerSocketChannel {

        @Override
        protected int doReadMessages(List<Object> buf) throws Exception {
            SocketChannel ch = SocketUtils.accept(javaChannel());
            try {
                if (ch != null) {
                    buf.add(new NioSocketChannel(this, ch));
                    return 1;
                }
            } catch (Throwable t) {
                logger.warn("Failed to create a new channel from an accepted socket.", t);
                try {
                    ch.close();
                } catch (Throwable t2) {
                    logger.warn("Failed to close a socket.", t2);
                }
            }
            return 0;
        }
    }
- javaChannel() obtains the JDK NIO native ServerSocketChannel wrapped inside Netty's server-side NioServerSocketChannel.
    @Override
    protected ServerSocketChannel javaChannel() {
        return (ServerSocketChannel) super.javaChannel();
    }
- The accept method of the JDK NIO native ServerSocketChannel is then called to obtain the JDK NIO native client-connection SocketChannel.
    public static SocketChannel accept(final ServerSocketChannel serverSocketChannel) throws IOException {
        try {
            return AccessController.doPrivileged(new PrivilegedExceptionAction<SocketChannel>() {
                @Override
                public SocketChannel run() throws IOException {
                    return serverSocketChannel.accept();
                }
            });
        } catch (PrivilegedActionException e) {
            throw (IOException) e.getCause();
        }
    }
This step calls the accept method on the listening Socket. As described in the article on the IO model from a kernel perspective, the kernel creates a new Socket based on the listening Socket, dedicated to network communication with that client; this is called the client connection Socket. Here the ServerSocketChannel corresponds to the listening Socket, and the SocketChannel corresponds to the client connection Socket.
Since we set the JDK NIO native ServerSocketChannel to non-blocking mode when creating the NioServerSocketChannel, the SocketChannel is returned immediately when there is a client connection ready on the ServerSocketChannel; if there is no client connection, the accept call returns null immediately and does not block.
    protected AbstractNioChannel(Channel parent, SelectableChannel ch, int readInterestOp) {
        super(parent);
        this.ch = ch;
        this.readInterestOp = readInterestOp;
        try {
            // Set the Channel to non-blocking, matching the IO multiplexing model
            ch.configureBlocking(false);
        } catch (IOException e) {
            ..........omitted..........
        }
    }
5.1 Creating the client NioSocketChannel
    public class NioServerSocketChannel extends AbstractNioMessageChannel
                                 implements io.netty.channel.socket.ServerSocketChannel {

        @Override
        protected int doReadMessages(List<Object> buf) throws Exception {
            SocketChannel ch = SocketUtils.accept(javaChannel());
            try {
                if (ch != null) {
                    buf.add(new NioSocketChannel(this, ch));
                    return 1;
                }
            } catch (Throwable t) {
                ..........omitted..........
            }
            return 0;
        }
    }
Here, the JDK NIO native SocketChannel obtained from the ServerSocketChannel's accept method (the Channel that actually communicates with the client at the bottom) is used to create Netty's NioSocketChannel.
    public class NioSocketChannel extends AbstractNioByteChannel implements io.netty.channel.socket.SocketChannel {

        public NioSocketChannel(Channel parent, SocketChannel socket) {
            super(parent, socket);
            config = new NioSocketChannelConfig(this, socket.socket());
        }
    }
The overall procedure for creating the client-side NioSocketChannel is the same as that for creating the server-side NioServerSocketChannel, so here we only compare the differences between the two.
If you are interested in the details of the creation process, you can refer back to the article on the Netty Reactor startup process, which covers the creation of the NioServerSocketChannel.
5.3 Comparing NioSocketChannel with NioServerSocketChannel
1: The Channels are at different levels
In the article introducing the Reactor's creation, we mentioned that Channels in Netty are hierarchical. Since the client-side NioSocketChannel is created inside the server-side NioServerSocketChannel while the Main Reactor is receiving the connection, the parent property is specified as the NioServerSocketChannel in the constructor when the client NioSocketChannel is created, and the JDK NIO native SocketChannel is wrapped inside Netty's client-side NioSocketChannel.
When the NioServerSocketChannel was created during Reactor startup, its parent attribute was specified as null, because it is the top-level Channel, responsible for creating the client-side NioSocketChannel.
public NioServerSocketChannel(ServerSocketChannel channel) {
super(null, channel, SelectionKey.OP_ACCEPT);
config = new NioServerSocketChannelConfig(this, javaChannel().socket());
}
2: The IO events registered with the Reactor are different
The client-side NioSocketChannel registers the SelectionKey.OP_READ event with the Sub Reactor, while the server-side NioServerSocketChannel registers the SelectionKey.OP_ACCEPT event with the Main Reactor.
    public abstract class AbstractNioByteChannel extends AbstractNioChannel {

        protected AbstractNioByteChannel(Channel parent, SelectableChannel ch) {
            super(parent, ch, SelectionKey.OP_READ);
        }
    }

    public class NioServerSocketChannel extends AbstractNioMessageChannel
                                 implements io.netty.channel.socket.ServerSocketChannel {

        public NioServerSocketChannel(ServerSocketChannel channel) {
            // The AbstractNioChannel parent class holds the JDK NIO native ServerSocketChannel and the OP_ACCEPT event to listen for
            super(null, channel, SelectionKey.OP_ACCEPT);
            // DefaultChannelConfig sets the buffer allocator for the Channel to receive data -> AdaptiveRecvByteBufAllocator
            config = new NioServerSocketChannelConfig(this, javaChannel().socket());
        }
    }
3: Different functional attributes result in different inheritance structures
The client-side NioSocketChannel inherits from AbstractNioByteChannel, while the server-side NioServerSocketChannel inherits from AbstractNioMessageChannel. What is the difference between the Byte and Message prefixes of the two abstract classes they inherit from?
The client-side NioSocketChannel mainly handles the communication between the server and the client, which involves receiving the data sent by the client, so the Sub Reactor thread reads network communication data from the NioSocketChannel in units of Bytes.
The server-side NioServerSocketChannel is responsible for handling the OP_ACCEPT event and creating the client-side NioSocketChannel used for communication. At this point the client and server have not yet started communicating, so the Main Reactor thread reads Messages from the NioServerSocketChannel, where a Message refers to the underlying client-connection SocketChannel.
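This difference also shows up in the abstract read methods declared by the two base classes. A minimal sketch of the two signatures (simplified, other members omitted):

    // AbstractNioMessageChannel: the Main Reactor reads "messages" (here, accepted connections) into a list
    public abstract class AbstractNioMessageChannel extends AbstractNioChannel {
        protected abstract int doReadMessages(List<Object> buf) throws Exception;
    }

    // AbstractNioByteChannel: the Sub Reactor reads raw bytes into a ByteBuf
    public abstract class AbstractNioByteChannel extends AbstractNioChannel {
        protected abstract int doReadBytes(ByteBuf buf) throws Exception;
    }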
These are the differences between NioSocketChannel and NioServerSocketChannel creation, and the rest of the process is the same.
- In the AbstractNioChannel class, wrap the JDK NIO native SocketChannel, set its underlying IO model to non-blocking, and save the IO event to be listened for, OP_READ.
    protected AbstractNioChannel(Channel parent, SelectableChannel ch, int readInterestOp) {
        super(parent);
        this.ch = ch;
        this.readInterestOp = readInterestOp;
        try {
            // Set the Channel to non-blocking, matching the IO multiplexing model
            ch.configureBlocking(false);
        } catch (IOException e) {
            ..........omitted..........
        }
    }
- Create a globally unique channelId for the client NioSocketChannel, create its underlying operation class NioByteUnsafe, and create its pipeline.
    protected AbstractChannel(Channel parent) {
        this.parent = parent;
        // channel id: machineId + processId + sequence + timestamp + random
        id = newId();
        // unsafe is used for reading and writing on the underlying socket
        unsafe = newUnsafe();
        // Assign each channel its own pipeline for orchestrating IO event handling
        pipeline = newChannelPipeline();
    }
- During the creation of NioSocketChannelConfig, the RecvByteBufAllocator type of the NioSocketChannel is set to AdaptiveRecvByteBufAllocator.
public DefaultChannelConfig(Channel channel) {
this(channel, new AdaptiveRecvByteBufAllocator());
}
In versions with the Bug fixed, the RecvByteBufAllocator type of the server-side NioServerSocketChannel is set to ServerChannelRecvByteBufAllocator.
Finally, we get the client NioSocketChannel structure as follows:
6. Responding to the ChannelRead event
When we introduced the overall core framework for receiving connections earlier, we mentioned that the Main Reactor thread repeatedly calls the ServerSocketChannel#accept method in the do{....}while(...) read loop to receive client connections.
There are two conditions for exiting the read loop:
- Within the limit of 16 reads, there are no more new client connections to receive: exit the loop.
- The NioServerSocketChannel has already been read 16 times in this round: the loop must exit, regardless of whether there are still client connections waiting.
After the Main Reactor exits the read loop, the received client connections (NioSocketChannel) are stored in the List<Object> readBuf collection.
    private final class NioMessageUnsafe extends AbstractNioUnsafe {
        private final List<Object> readBuf = new ArrayList<Object>();
        @Override
        public void read() {
            try {
                try {
                    do {
                        ..........omitted..........
                        // Call NioServerSocketChannel->doReadMessages to create the client SocketChannel
                        int localRead = doReadMessages(readBuf);
                        ..........omitted..........
                        allocHandle.incMessagesRead(localRead);
                    } while (allocHandle.continueReading());
                } catch (Throwable t) {
                    exception = t;
                }
                int size = readBuf.size();
                for (int i = 0; i < size; i ++) {
                    readPending = false;
                    pipeline.fireChannelRead(readBuf.get(i));
                }
                ..........omitted..........
            } finally {
                ..........omitted..........
            }
        }
    }
The Main Reactor thread then iterates over the NioSocketChannels in the List<Object> readBuf collection and propagates the ChannelRead event in the NioServerSocketChannel's pipeline for each of them.
Eventually ChannelRead events are propagated to ServerBootstrapAcceptor, where Netty’s core logic for handling client connections resides.
ServerBootstrapAcceptor initializes NioSocketChannel, registers NioSocketChannel with the Sub Reactor Group, and listens for OP_READ events.
The following properties of the client NioSocketChannel are initialized in the ServerBootstrapAcceptor, for example: the EventLoopGroup childGroup (the Sub Reactor group) to which the NioSocketChannel will be bound, the ChannelHandler used to initialize the NioSocketChannel's pipeline, and the childOptions and childAttrs to apply to the NioSocketChannel.
    private static class ServerBootstrapAcceptor extends ChannelInboundHandlerAdapter {

        private final EventLoopGroup childGroup;
        private final ChannelHandler childHandler;
        private final Entry<ChannelOption<?>, Object>[] childOptions;
        private final Entry<AttributeKey<?>, Object>[] childAttrs;

        @Override
        @SuppressWarnings("unchecked")
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            final Channel child = (Channel) msg;

            // Add the ChannelHandler configured in the bootstrap class ServerBootstrap
            // to the pipeline of the client NioSocketChannel
            child.pipeline().addLast(childHandler);

            // Initialize the client NioSocketChannel with the configured properties
            setChannelOptions(child, childOptions, logger);
            setAttributes(child, childAttrs);

            try {
                /**
                 * 1: Select a Reactor from the Sub Reactor thread group and bind to it
                 * 2: Register the client SocketChannel with the bound Reactor
                 * 3: The SocketChannel is registered with the Selector in the Sub Reactor and listens for the OP_READ event
                 */
                childGroup.register(child).addListener(new ChannelFutureListener() {
                    @Override
                    public void operationComplete(ChannelFuture future) throws Exception {
                        if (!future.isSuccess()) {
                            forceClose(child, future.cause());
                        }
                    }
                });
            } catch (Throwable t) {
                forceClose(child, t);
            }
        }
    }
It is here that Netty initializes the client NioSocketChannel with all of the attributes (the configuration items prefixed with child) that we configured for it in the ServerBootstrap, as in the bootstrap example from the Netty Reactor startup article.
    public final class EchoServer {
        static final int PORT = Integer.parseInt(System.getProperty("port", "8007"));

        public static void main(String[] args) throws Exception {
            // Configure the server.
            // Create the main and sub Reactor thread groups
            EventLoopGroup bossGroup = new NioEventLoopGroup(1);
            EventLoopGroup workerGroup = new NioEventLoopGroup();
            final EchoServerHandler serverHandler = new EchoServerHandler();
            try {
                ServerBootstrap b = new ServerBootstrap();
                b.group(bossGroup, workerGroup)                 // Configure the main and sub Reactor groups
                 .channel(NioServerSocketChannel.class)         // Configure the Channel type used in the main Reactor
                 .option(ChannelOption.SO_BACKLOG, 100)         // Set options for the Channel in the main Reactor
                 .handler(new LoggingHandler(LogLevel.INFO))    // Set the ChannelHandler in the main Reactor Channel's pipeline
                 .childHandler(new ChannelInitializer<SocketChannel>() { // Set up the pipeline for Channels registered with the sub Reactors
                     @Override
                     public void initChannel(SocketChannel ch) throws Exception {
                         ChannelPipeline p = ch.pipeline();
                         //p.addLast(new LoggingHandler(LogLevel.INFO));
                         p.addLast(serverHandler);
                     }
                 });

                // Start the server: bind the port, start the service, and listen for accept events
                ChannelFuture f = b.bind(PORT).sync();

                // Wait until the server socket is closed.
                f.channel().closeFuture().sync();
            } finally {
                // Shut down all event loops to terminate all threads.
                bossGroup.shutdownGracefully();
                workerGroup.shutdownGracefully();
            }
        }
    }
The NioSocketChannel-related properties configured through the ServerBootstrap in the example code above are wrapped into the ServerBootstrapAcceptor when Netty starts and initializes the NioServerSocketChannel. Adding the ServerBootstrapAcceptor is encapsulated as an asynchronous task that is executed after the NioServerSocketChannel has been successfully registered with the Main Reactor.
    public class ServerBootstrap extends AbstractBootstrap<ServerBootstrap, ServerChannel> {
        @Override
        void init(Channel channel) {
            ..........omitted..........
            p.addLast(new ChannelInitializer<Channel>() {
                @Override
                public void initChannel(final Channel ch) {
                    final ChannelPipeline pipeline = ch.pipeline();
                    ..........omitted..........
                    ch.eventLoop().execute(new Runnable() {
                        @Override
                        public void run() {
                            pipeline.addLast(new ServerBootstrapAcceptor(
                                    ch, currentChildGroup, currentChildHandler, currentChildOptions, currentChildAttrs));
                        }
                    });
                }
            });
        }
    }
After being processed by the ServerBootstrapAcceptor#channelRead callback, the structure of the pipeline in the client NioSocketChannel is as follows:
The initialized client NioSocketChannel is then registered with the Sub Reactor Group and listens for OP_READ events.
As shown in Step 3 in the figure below:
7. Register NioSocketChannel with SubReactorGroup
    childGroup.register(child).addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            if (!future.isSuccess()) {
                forceClose(child, future.cause());
            }
        }
    });
The procedure for registering client NioSocketChannel with the Sub Reactor Group is exactly the same as that for server NioServerSocketChannel with the Main Reactor Group.
I gave a detailed introduction to the registration process of the server-side NioServerSocketChannel in the article on the Netty Reactor startup process; if you are interested in the details, you can go back and read it.
Here I will take you through a brief review of the whole registration process and focus on the differences between registering the client-side NioSocketChannel and the server-side NioServerSocketChannel.
7.1 Select a Sub Reactor from the Sub Reactor Group for binding
    public abstract class MultithreadEventLoopGroup extends MultithreadEventExecutorGroup implements EventLoopGroup {

        @Override
        public ChannelFuture register(Channel channel) {
            return next().register(channel);
        }

        @Override
        public EventExecutor next() {
            return chooser.next();
        }
    }
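For context, the chooser here is a simple round-robin chooser created by DefaultEventExecutorChooserFactory. The sketch below is a simplified version of its power-of-two variant and is only meant to illustrate how the next Sub Reactor is picked:

    // Simplified sketch of the round-robin chooser (see DefaultEventExecutorChooserFactory)
    private static final class PowerOfTwoEventExecutorChooser implements EventExecutorChooser {
        private final AtomicInteger idx = new AtomicInteger();
        private final EventExecutor[] executors;

        PowerOfTwoEventExecutorChooser(EventExecutor[] executors) {
            this.executors = executors;
        }

        @Override
        public EventExecutor next() {
            // Round-robin; works because the group size is a power of two
            return executors[idx.getAndIncrement() & executors.length - 1];
        }
    }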
7.2 Registering the NioSocketChannel with the bound Sub Reactor
    public abstract class SingleThreadEventLoop extends SingleThreadEventExecutor implements EventLoop {

        @Override
        public ChannelFuture register(Channel channel) {
            // Register the channel with the bound Reactor
            return register(new DefaultChannelPromise(channel, this));
        }

        @Override
        public ChannelFuture register(final ChannelPromise promise) {
            ObjectUtil.checkNotNull(promise, "promise");
            // unsafe is responsible for the underlying operations of the channel
            promise.channel().unsafe().register(this, promise);
            return promise;
        }
    }
- When we introduced the registration process of the NioServerSocketChannel, promise.channel() was the NioServerSocketChannel, whose underlying unsafe operation class is NioMessageUnsafe.
- Here, promise.channel() is the NioSocketChannel, whose underlying unsafe operation class is NioByteUnsafe.
    @Override
    public final void register(EventLoop eventLoop, final ChannelPromise promise) {
        ..........omitted..........
        // The eventLoop here is a Sub Reactor
        AbstractChannel.this.eventLoop = eventLoop;
        /**
         * Channel registration must be performed by the Reactor thread:
         * if the current thread is the Reactor thread, call register0 directly;
         * otherwise wrap register0 in an asynchronous task executed by the Reactor thread.
         */
        if (eventLoop.inEventLoop()) {
            register0(promise);
        } else {
            try {
                eventLoop.execute(new Runnable() {
                    @Override
                    public void run() { register0(promise); }
                });
            } catch (Throwable t) {
                ..........omitted..........
            }
        }
    }
Note that the EventLoop passed in is a Sub Reactor.
However, the executing thread is the Main Reactor thread, not the Sub Reactor thread (which has not been started yet).
So eventLoop.inEventLoop() returns false.
In the else branch, the task of registering the NioSocketChannel is submitted to the bound Sub Reactor.
When the registration task is submitted, the bound Sub Reactor thread is started, roughly as sketched below.
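A simplified sketch of SingleThreadEventExecutor#execute shows where the Sub Reactor thread gets started when a task is submitted from an outside thread (rejection and shutdown handling are omitted):

    // Simplified sketch of SingleThreadEventExecutor#execute
    private void execute(Runnable task, boolean immediate) {
        boolean inEventLoop = inEventLoop();
        // Enqueue the task (here, the register0 task) into the Reactor's taskQueue
        addTask(task);
        if (!inEventLoop) {
            // Called from an outside thread (here, the Main Reactor thread):
            // start the Sub Reactor thread if it has not been started yet
            startThread();
            ..........rejection handling omitted..........
        }
        if (!addTaskWakesUp && immediate) {
            // Wake the Reactor up from selector.select() so it can run the task promptly
            wakeup(inEventLoop);
        }
    }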
7.3 register0
We come back to our old friend, the register0 method for Channel registration. We discussed this method at length in the Netty Reactor startup article; here we only compare the NioSocketChannel with the NioServerSocketChannel.
    private void register0(ChannelPromise promise) {
        try {
            ..........omitted..........
            boolean firstRegistration = neverRegistered;
            // Perform the actual registration
            doRegister();
            // Update the registration state
            neverRegistered = false;
            registered = true;

            pipeline.invokeHandlerAddedIfNeeded();

            if (isActive()) {
                if (firstRegistration) {
                    // Trigger the channelActive event
                    pipeline.fireChannelActive();
                } else if (config().isAutoRead()) {
                    beginRead();
                }
            }
        } catch (Throwable t) {
            ..........omitted..........
        }
    }
Here the doRegister() method registers the NioSocketChannel with the Selector in the Sub Reactor.
    public abstract class AbstractNioChannel extends AbstractChannel {

        @Override
        protected void doRegister() throws Exception {
            boolean selected = false;
            for (;;) {
                try {
                    selectionKey = javaChannel().register(eventLoop().unwrappedSelector(), 0, this);
                    return;
                } catch (CancelledKeyException e) {
                    ..........omitted..........
                }
            }
        }
    }
This is where Netty's client-side NioSocketChannel is associated with the JDK NIO native SocketChannel. At this point, the registered IO events are still 0; the purpose is just to obtain the NioSocketChannel's SelectionKey in the Selector.
At the same time, Netty's custom NioSocketChannel is attached to the SelectionKey's attachment property via the SelectableChannel#register method, completing the binding between Netty's custom Channel and the JDK NIO Channel. This way, every time the JDK NIO Selector polls an IO-ready event, Netty can get its custom Channel object (here the NioSocketChannel) back from the returned SelectionKey.
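To illustrate, here is a simplified sketch of how the Reactor later recovers Netty's custom Channel from a ready SelectionKey (simplified from NioEventLoop's selected-key processing; the NioTask branch is omitted):

    // Simplified sketch: recovering Netty's Channel from the SelectionKey attachment
    private void processSelectedKeys(Set<SelectionKey> selectedKeys) {
        for (SelectionKey k : selectedKeys) {
            // The attachment is the AbstractNioChannel passed as the third argument of register(...)
            final Object a = k.attachment();
            if (a instanceof AbstractNioChannel) {
                processSelectedKey(k, (AbstractNioChannel) a);
            }
        }
    }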
Then pipeline.invokeHandlerAddedIfNeeded() is called to invoke the handlerAdded method of every ChannelHandler in the client NioSocketChannel's pipeline. At this point there is only one ChannelInitializer in the pipeline, and the client NioSocketChannel's pipeline is ultimately initialized in the ChannelInitializer#handlerAdded callback.
    public abstract class ChannelInitializer<C extends Channel> extends ChannelInboundHandlerAdapter {

        @Override
        public void handlerAdded(ChannelHandlerContext ctx) throws Exception {
            if (ctx.channel().isRegistered()) {
                if (initChannel(ctx)) {
                    // After initialization, the ChannelInitializer needs to be removed from the pipeline
                    removeState(ctx);
                }
            }
        }

        protected abstract void initChannel(C ch) throws Exception;
    }
If you are interested in the details of how a Channel's pipeline is initialized, please refer to the article on the Netty Reactor startup process.
At this point, the pipeline in the client NioSocketChannel contains the ChannelHandlers we customized; in the example code, our custom ChannelHandler is the EchoServerHandler.
    @Sharable
    public class EchoServerHandler extends ChannelInboundHandlerAdapter {

        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            ctx.write(msg);
        }

        @Override
        public void channelReadComplete(ChannelHandlerContext ctx) {
            ctx.flush();
        }

        @Override
        public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
            // Close the connection when an exception is raised.
            cause.printStackTrace();
            ctx.close();
        }
    }
After the pipeline in the client NioSocketChannel has been initialized, Netty then calls safeSetSuccess(promise) to call back the ChannelFutureListeners registered on the regFuture, notifying them that the client NioSocketChannel has been successfully registered with the Sub Reactor.
    childGroup.register(child).addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            if (!future.isSuccess()) {
                forceClose(child, future.cause());
            }
        }
    });
In the listener used during the server-side NioServerSocketChannel's registration, we submitted the bind operation to the Main Reactor; here, in the listener used when registering the NioSocketChannel, only the failure case is handled.
After the Sub Reactor thread notifies the ChannelFutureListener of the successful registration, it then calls pipeline.fireChannelRegistered() to propagate the ChannelRegistered event in the client NioSocketChannel's pipeline.
Recall that when the NioServerSocketChannel was registered, it had not yet been bound to the port address, so it was not active: isActive() returned false and the register0 method returned directly.
The server-side NioServerSocketChannel determines whether it is active by whether the port has been successfully bound.
    public class NioServerSocketChannel extends AbstractNioMessageChannel
                                 implements io.netty.channel.socket.ServerSocketChannel {

        @Override
        public boolean isActive() {
            return isOpen() && javaChannel().socket().isBound();
        }
    }
The client-side NioSocketChannel determines whether it is active by whether it is in the connected state, and at this point it is obviously already connected.
    @Override
    public boolean isActive() {
        SocketChannel ch = javaChannel();
        return ch.isOpen() && ch.isConnected();
    }
The NioSocketChannel is already in the connected state and does not need to bind a port, so isActive() returns true.
    if (isActive()) {
        /**
         * This is the client SocketChannel registration path:
         * the OP_READ event is registered in the channelActive event callback.
         */
        if (firstRegistration) {
            // Trigger the channelActive event
            pipeline.fireChannelActive();
        } else if (config().isAutoRead()) {
            ..........omitted..........
        }
    }
Finally, pipeline.fireChannelActive() is called to propagate the ChannelActive event in the NioSocketChannel's pipeline. The event is ultimately responded to in the pipeline head, HeadContext, where the OP_READ event is registered with the Selector in the Sub Reactor; the sketch below shows how HeadContext reacts before the call reaches doBeginRead.
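A simplified sketch of how the ChannelActive event leads to the read registration, taken with omissions from DefaultChannelPipeline.HeadContext (the call then flows through channel.read() into the doBeginRead method shown below):

    // Simplified sketch of DefaultChannelPipeline.HeadContext
    final class HeadContext extends AbstractChannelHandlerContext implements ChannelOutboundHandler, ChannelInboundHandler {

        @Override
        public void channelActive(ChannelHandlerContext ctx) {
            // Keep propagating the channelActive event through the pipeline
            ctx.fireChannelActive();
            // If autoRead is enabled (the default), issue a read, which eventually calls doBeginRead()
            readIfIsAutoRead();
        }

        private void readIfIsAutoRead() {
            if (channel.config().isAutoRead()) {
                channel.read();
            }
        }
    }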
    public abstract class AbstractNioChannel extends AbstractChannel {

        @Override
        protected void doBeginRead() throws Exception {
            ..........omitted..........
            final int interestOps = selectionKey.interestOps();
            // For the ServerSocketChannel, readInterestOp was initialized to the OP_ACCEPT event;
            // for the SocketChannel, readInterestOp was initialized to the OP_READ event.
            if ((interestOps & readInterestOp) == 0) {
                // Register interest in the OP_ACCEPT or OP_READ event
                selectionKey.interestOps(interestOps | readInterestOp);
            }
        }
    }
Notice that readInterestOp here is the OP_READ event set by the client NioSocketChannel during initialization.
The structure of the Main Reactor group in Netty is as follows:
Conclusion
This article walked you through the entire process by which the NioServerSocketChannel handles client connection events:
- The overall framework for receiving connections.
- The cause of the Bug that affects Netty's connection throughput and the fix for it.
- The creation and initialization of the client NioSocketChannel.
- The initialization of the pipeline in the NioSocketChannel.
- The process of registering the client NioSocketChannel with the Sub Reactor.
We also compared the differences between the NioServerSocketChannel and the NioSocketChannel in their creation, initialization, and Reactor registration processes.
After the client NioSocketChannel has been received and successfully registered with the Sub Reactor, the Sub Reactor then listens for OP_READ events on all the client NioSocketChannels registered with it, waiting for clients to send network data to the server.
From this point on, the protagonists of the Reactor story become the Sub Reactor and the client NioSocketChannels registered with it.
In the next article, we will discuss how Netty receives network data ~~~~ see you in the next article ~~