Q: What is multiplexing?
Slowly: Once you understand the NIO model, you'll notice that in Linux everything is a file, and every operation on a file descriptor has to go through the kernel, which means switching between user mode and kernel mode. In NIO mode we keep asking each connection whether data has arrived, so during that polling loop we are constantly switching between user mode and kernel mode, which is very wasteful.
So we wondered whether there was a way to avoid all this switching between kernel mode and user mode.
The answer is to delegate the polling to the kernel and have it notify us when a connection has received data, so that a user thread only has to process the result. Handing the polling over to the kernel, with user mode simply receiving the readiness results, is what we call multiplexing.
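This "let the kernel do the polling" idea is what a Java NIO Selector provides (on Linux the JDK selector is itself built on the kernel's multiplexing facilities). The following is only a minimal sketch; the port number and the accept handling are placeholders.

import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class SelectorSketch {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();                  // ask the kernel for a multiplexer
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));             // 8080 is just an example port
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);    // hand the descriptor to the kernel once

        while (true) {
            selector.select();                                // block until the kernel reports readiness
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();   // the kernel told us this is ready
                    if (client != null) {
                        client.close();                       // placeholder: real code would register it for OP_READ
                    }
                }
            }
            selector.selectedKeys().clear();                  // the ready set must be cleared by the caller
        }
    }
}

The user thread spends its time blocked in select() instead of busy-polling every connection; the kernel wakes it up only when something is actually ready.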
Q: What is select, poll, epoll?
Slowly: select and poll work much the same way we did before, except that the kernel performs the polling and notifies the user thread once it knows a connection has data.
select keeps the file descriptors in a fixed-size set (an fd_set, limited to FD_SETSIZE, typically 1024), so the number of descriptors it can watch is limited. poll passes the descriptors in a caller-supplied list of pollfd structures instead, which removes select's size limit. In both cases the whole collection of descriptors is handed to the kernel, which repeatedly scans it to see whether any state has changed and wakes up the user thread when it has.
Q: What about epoll?
Slowly: Using select and poll, we still run into three problems:
- How to break the limit on the number of file descriptors (poll already solves this)
- How to avoid copying the file descriptor set between user mode and kernel mode on every call
- How to avoid linearly traversing the whole file descriptor collection
For the first point, epoll stores the file descriptors in a red-black tree, which scales much better than the flat collections used by select and poll. For the second point, epoll keeps a single copy of the descriptor set inside the kernel: descriptors are registered once (via epoll_ctl), and user mode simply operates on that kernel-held set instead of copying the whole collection on every call.
For the third point, epoll adds a ready list and a callback for each socket. When data arrives on a socket, the callback fires and puts that socket on the ready list, and the user thread is told to process only those entries. The kernel no longer has to scan the entire descriptor table on every call, so retrieving ready events is effectively O(1).
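In Java terms (a rough analogy, assuming the Selector setup from the earlier sketch), the ready list corresponds to Selector.selectedKeys(): selector.keys() holds everything that has been registered, while selectedKeys() holds only the channels the kernel has reported ready, so the application loop never scans the full set.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class ReadyListSketch {
    // Assumes 'selector' already has channels registered for OP_READ (see the earlier sketch).
    static void eventLoop(Selector selector) throws IOException {
        ByteBuffer buffer = ByteBuffer.allocate(1024);
        while (true) {
            selector.select();                                     // the kernel fills the ready set
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {                                 // iterate only the ready channels,
                SelectionKey key = it.next();                      // not everything in selector.keys()
                it.remove();
                if (key.isReadable()) {
                    SocketChannel ch = (SocketChannel) key.channel();
                    buffer.clear();
                    if (ch.read(buffer) == -1) {                   // the peer closed the connection
                        key.cancel();
                        ch.close();
                    }
                }
            }
        }
    }
}

The cost of each pass depends on how many channels are ready, not on how many are registered, which mirrors the property epoll's ready list gives the kernel side.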
Q: Can you demonstrate it in code?
Slowly: Here’s the code to demonstrate how to implement the epoll model with Netty.
The server side
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.string.StringDecoder;

public class EpollServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup bossGroup = new NioEventLoopGroup();   // Thread group that listens for new connections
        EventLoopGroup workerGroup = new NioEventLoopGroup(); // Thread group that handles established connections
        try {
            // 1. The bootstrap is responsible for assembling the Netty components
            new ServerBootstrap()
                    // 2. Add the thread groups
                    .group(bossGroup, workerGroup)
                    // 3. Select the NIO-based server channel implementation
                    .channel(NioServerSocketChannel.class)
                    // 4. Set up the processing pipeline
                    .childHandler(
                            // 5. ChannelInitializer initializes the channel used to read and write client data; other handlers are added here
                            new ChannelInitializer<NioSocketChannel>() {
                                @Override
                                protected void initChannel(NioSocketChannel ch) throws Exception {
                                    // 6. Add handlers
                                    ch.pipeline().addLast(new StringDecoder()); // Convert the ByteBuf to a String
                                    ch.pipeline().addLast(new ChannelInboundHandlerAdapter() { // Custom handler
                                        @Override
                                        public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
                                            // Print the string produced by the decoder
                                            System.out.println(msg);
                                            super.channelRead(ctx, msg);
                                        }
                                    });
                                }
                            })
                    // 7. Bind the listening port
                    .bind(8080)
                    .sync()
                    .channel().closeFuture().sync(); // Wait for the server channel to close
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
The client
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.handler.codec.string.StringEncoder;
import java.net.InetSocketAddress;

public class EpollClient {
    public static void main(String[] args) throws InterruptedException {
        // 1. Bootstrap class
        new Bootstrap()
                // 2. Add an EventLoop
                .group(new NioEventLoopGroup())
                // 3. Select the client channel implementation
                .channel(NioSocketChannel.class)
                // 4. Add handlers
                .handler(new ChannelInitializer<NioSocketChannel>() {
                    @Override // Called after the connection has been established
                    protected void initChannel(NioSocketChannel nioSocketChannel) throws Exception {
                        nioSocketChannel.pipeline().addLast(new StringEncoder()); // Convert strings to ByteBuf
                    }
                })
                // 5. Connect to the server
                .connect(new InetSocketAddress("localhost", 8080))
                .sync()    // Block until the connection is established
                .channel() // The connection object
                // 6. Send data to the server
                .writeAndFlush("hello, world");
    }
}
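Both snippets above actually use the JDK NIO transport (NioEventLoopGroup, NioServerSocketChannel / NioSocketChannel); on Linux the JDK selector is implemented on top of epoll, so epoll is still what runs underneath. Netty also ships a native epoll transport. As a rough sketch, and assuming the netty-transport-native-epoll dependency is available on a Linux host, switching the server over only means swapping the transport classes:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.epoll.EpollEventLoopGroup;
import io.netty.channel.epoll.EpollServerSocketChannel;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.string.StringDecoder;

// Sketch only: requires the netty-transport-native-epoll dependency and a Linux host.
public class NativeEpollServer {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup bossGroup = new EpollEventLoopGroup();
        EventLoopGroup workerGroup = new EpollEventLoopGroup();
        try {
            new ServerBootstrap()
                    .group(bossGroup, workerGroup)
                    .channel(EpollServerSocketChannel.class) // native epoll instead of NioServerSocketChannel
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            ch.pipeline().addLast(new StringDecoder()); // same pipeline idea as the NIO version
                        }
                    })
                    .bind(8080)
                    .sync()
                    .channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}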