Summary of Netty

The NIO class library and API are complex and cumbersome to use: you need to master Selector, ServerSocketChannel, SocketChannel, ByteBuffer, and so on.

The workload and difficulty of development are very large: for example, the client has to deal with disconnection and reconnection, intermittent network outages, heartbeat handling, half-packet reads and writes, network congestion, abnormal traffic, and so on.

Netty provides a good encapsulation of the JDK's NIO API and solves the problems above. Netty offers high performance, high throughput, low latency, reduced resource consumption, and minimized unnecessary memory copies.

JDK 6 or later is required for Netty 4.x.

Netty usage scenarios

  1. Internet industry: in distributed systems, nodes need to invoke remote services, so a high-performance RPC framework is essential. As an asynchronous, high-performance communication framework, Netty is often used by these RPC frameworks as their basic communication component. Typical examples: Alibaba's distributed service framework Dubbo uses the Dubbo protocol for inter-node communication, and the Dubbo protocol uses Netty as its default communication component for communication between process nodes. RocketMQ also uses Netty as its underlying communication component.

  2. Game industry: whether for mobile game servers or large online games, Java is being used more and more widely. As a high-performance basic communication component, Netty itself provides TCP/UDP and HTTP protocol stacks.

  3. Big data: the RPC framework of Avro, Hadoop's classic high-performance communication and serialization component, uses Netty for cross-node communication by default; its Netty service is implemented as a secondary encapsulation of the Netty framework.

netty.io/wiki/relate…

Netty communication example

1. Introduce Maven dependencies

<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-all</artifactId>
    <version>4.1.35.Final</version>
</dependency>

2. Server-side code

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NettyService {

    public static void main(String[] args) throws InterruptedException {
        // Create two thread groups, bossGroup and workerGroup; by default a NioEventLoopGroup
        // holds twice as many NioEventLoops as there are CPU cores
        // bossGroup only handles connection requests; the real business processing with the client is handed over to workerGroup
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            // Create the server-side bootstrap object
            ServerBootstrap bootstrap = new ServerBootstrap();
            // Use chained calls to configure the parameters
            bootstrap.group(bossGroup, workerGroup)     // set the two thread groups
                    .channel(NioServerSocketChannel.class)      // use NioServerSocketChannel as the channel implementation for the server
                    // Initialize the server connection queue size. The server processes client connection requests sequentially,
                    // so only one client connection can be accepted at a time; when multiple clients arrive at the same time,
                    // the connection requests that cannot be processed yet are queued
                    .option(ChannelOption.SO_BACKLOG, 1024)
                    .childHandler(new ChannelInitializer<SocketChannel>() {     // create the channel initializer and set initialization parameters
                        @Override
                        protected void initChannel(SocketChannel socketChannel) throws Exception {
                            // Set the handler for the SocketChannels served by workerGroup
                            socketChannel.pipeline().addLast(new NettyServerHandler());
                        }
                    });
            System.out.println("Netty server start.");
            // Bind a port and wait synchronously. bind returns a ChannelFuture; its isDone() method
            // tells whether the asynchronous operation has completed. sync waits for the bind to finish.
            ChannelFuture cf = bootstrap.bind(8888).sync();
            // A listener can be registered on cf to be notified of the events we care about:
            /* cf.addListener(new ChannelFutureListener() {
                @Override
                public void operationComplete(ChannelFuture channelFuture) throws Exception {
                    if (channelFuture.isSuccess()) {
                        System.out.println("Listening on port 8888 succeeded");
                    } else {
                        System.out.println("Listening on port 8888 failed");
                    }
                }
            }); */
            // Listen for channel closure. closeFuture is an asynchronous operation;
            // sync blocks until the channel has been closed
            cf.channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}

The custom server-side handler, which extends ChannelInboundHandlerAdapter:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.CharsetUtil;

public class NettyServerHandler extends ChannelInboundHandlerAdapter {

    /**
     * Reads the data sent by the client
     *
     * @param ctx the context object, containing the channel and pipeline
     * @param msg the data sent by the client
     * @throws Exception
     */
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        System.out.println("Server reading thread " + Thread.currentThread().getName());
        // Channel channel = ctx.channel();
        // ChannelPipeline pipeline = ctx.pipeline(); // a two-way linked list: outbound, inbound
        // Cast msg to a ByteBuf, which is similar to NIO's ByteBuffer
        ByteBuf buf = (ByteBuf) msg;
        System.out.println("The client sends the message: " + buf.toString(CharsetUtil.UTF_8));
        super.channelRead(ctx, msg);
    }

    /**
     * Called after the data has been read and processed
     *
     * @param ctx
     * @throws Exception
     */
    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        ByteBuf buf = Unpooled.copiedBuffer("HelloClient", CharsetUtil.UTF_8);
        ctx.writeAndFlush(buf);
    }

    /**
     * Handles exceptions; usually the channel should be closed here
     *
     * @param ctx
     * @param cause
     * @throws Exception
     */
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        ctx.close();
    }
}

3. Client code

import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;

public class NettyClient {

    public static void main(String[] args) throws InterruptedException {
        // The client needs a single event loop group
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            // Create the client bootstrap object
            // Note that the client uses Bootstrap instead of ServerBootstrap
            Bootstrap bootstrap = new Bootstrap();
            // Set the related parameters
            bootstrap.group(group)      // set the thread group
                    .channel(NioSocketChannel.class)      // use NioSocketChannel as the client-side channel implementation
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel socketChannel) throws Exception {
                            // Add the handler
                            socketChannel.pipeline().addLast(new NettyClientHandler());
                        }
                    });
            System.out.println("netty client start");
            // Start the client and connect to the server
            ChannelFuture channelFuture = bootstrap.connect("127.0.0.1", 8888).sync();
            // Wait until the channel is closed
            channelFuture.channel().closeFuture().sync();
        } finally {
            group.shutdownGracefully();
        }
    }
}

The custom client-side handler, which also extends ChannelInboundHandlerAdapter:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.CharsetUtil;

public class NettyClientHandler extends ChannelInboundHandlerAdapter {

    // Called when the connection to the server is established; send a greeting to the server
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        ByteBuf buf = Unpooled.copiedBuffer("HelloServer", CharsetUtil.UTF_8);
        ctx.writeAndFlush(buf);
    }

    // Reads the data sent back by the server
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        ByteBuf buf = (ByteBuf) msg;
        System.out.println("Received message from server: " + buf.toString(CharsetUtil.UTF_8));
        System.out.println("Server address: " + ctx.channel().remoteAddress());
    }

    // Handles exceptions: print the stack trace and close the channel
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        cause.printStackTrace();
        ctx.close();
    }
}

Netty threading model

Model explanation:

  1. Netty abstracts two groups of thread pools: BossGroup and WorkerGroup. The BossGroup is responsible for accepting connections from clients, and the WorkerGroup specializes in network reads and writes.
  2. Both BossGroup and WorkerGroup are of type NioEventLoopGroup.
  3. A NioEventLoopGroup is equivalent to a group of event loop threads; the group contains multiple event loop threads, and each event loop thread is a NioEventLoop.
  4. Each NioEventLoop has a selector that listens for network communication on the socketChannels registered to it.
  5. Each boss NioEventLoop thread executes three steps in its internal loop:
    • Handle accept events and establish a NioSocketChannel connection with the client
    • Register the NioSocketChannel on the selector of a worker NioEventLoop
    • Process the tasks in the task queue, i.e. runAllTasks
  6. Each worker NioEventLoop thread executes the following steps in its loop:
    • Poll the read and write events of all NioSocketChannels registered to its selector
    • Handle the I/O events, i.e. read and write events, and process the business on the corresponding NioSocketChannel
    • runAllTasks processes the tasks in the task queue (TaskQueue); time-consuming work can be put into the TaskQueue to be processed asynchronously, so that it does not block the flow of data through the pipeline (see the sketch after this list)
  7. When a worker NioEventLoop processes a NioSocketChannel's business, it uses the pipeline, which maintains a number of handlers to process the data in the channel.
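To illustrate the TaskQueue mechanism from point 6, here is a minimal sketch (SlowTaskHandler is a hypothetical handler, not part of the demo above): a time-consuming job is submitted from channelRead to the current channel's NioEventLoop via execute, so it is queued in the TaskQueue and picked up later by runAllTasks instead of blocking this channelRead call.

import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.CharsetUtil;

// Hypothetical handler, only to illustrate offloading work to the TaskQueue
public class SlowTaskHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        // Submit the slow work to the current channel's NioEventLoop task queue;
        // runAllTasks will execute it later on the same event loop thread
        ctx.channel().eventLoop().execute(() -> {
            try {
                Thread.sleep(5000);  // simulate a time-consuming operation
                ctx.writeAndFlush(Unpooled.copiedBuffer("slow result", CharsetUtil.UTF_8));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        // channelRead returns immediately; the queued task runs afterwards on the same NioEventLoop
    }
}

Note that the queued task still runs on the same NioEventLoop thread, so genuinely long-running business logic is usually handed off to a separate business thread pool instead.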

Netty module components

【Bootstrap, ServerBootstrap】

Bootstrap means bootstrapping: a Netty application usually starts from a Bootstrap, which configures the whole Netty program and wires the various components together. In Netty, the Bootstrap class is the startup bootstrap class for client programs, and ServerBootstrap is the startup bootstrap class for the server.

【Future, ChannelFuture】

As mentioned earlier, all I/O operations in Netty are asynchronous, so it is not immediately clear whether a message has been processed correctly.

However, you can either wait for the operation to complete or register a listener on it. This is what Future and ChannelFuture are for: a listener can be registered on them, and it is called back automatically when the I/O operation succeeds, fails, or is cancelled.
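For example, a minimal sketch of registering a listener on the client's connect future (a fragment that would go inside NettyClient.main from the demo above; it additionally needs an import of io.netty.channel.ChannelFutureListener):

// Connect asynchronously and register a listener instead of only calling sync()
ChannelFuture channelFuture = bootstrap.connect("127.0.0.1", 8888);
channelFuture.addListener(new ChannelFutureListener() {
    @Override
    public void operationComplete(ChannelFuture future) throws Exception {
        if (future.isSuccess()) {
            System.out.println("Connected to the server");
        } else {
            System.out.println("Connect failed");
            future.cause().printStackTrace();
        }
    }
});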

【Channel】

Channel is Netty's component for network communication; it is used to perform network I/O operations. A Channel provides the user with:

  1. The current state of the channel (for example, is it open? is it connected?)
  2. Configuration parameters for the network connection (such as receive buffer size)
  3. Provides asynchronous network I/O operations (such as establishing connections, reading and writing, binding ports). Asynchronous calls mean that any I/O calls are returned immediately and there is no guarantee that the requested I/O operation has been completed at the end of the call.
  4. A call immediately returns a ChannelFuture instance; by registering listeners on the ChannelFuture, the caller can be notified via a callback when the I/O operation succeeds, fails, or is cancelled.
  5. Supports associating I/O operations with their corresponding handlers.

Connections with different protocols and blocking types have different Channel types.

Here are some common Channel types:

  • NioSocketChannel, asynchronous client TCP Socket connection.
  • NioServerSocketChannel, asynchronous server TCP Socket connection.
  • NioDatagramChannel, asynchronous UDP connection.
  • NioSctpChannel, asynchronous client Sctp connection.
  • NioSctpServerChannel, asynchronous Sctp server side connection.

These channels cover UDP and TCP network IO and file IO.

【Selector】

Netty implements I/O multiplexing based on a Selector object, through which a thread can listen for Channel events of multiple connections.

When a Channel is registered with a Selector, the mechanism inside the Selector automatically and continuously checks whether those registered channels have I/O events ready (such as readable, writable, network connection completed, etc.). This makes it easy to use a single thread to efficiently manage multiple channels.

【 NioEventLoop 】

NioEventLoop maintains a thread and a task queue, and supports asynchronous submission of tasks. When the thread starts, NioEventLoop's run method is called, which executes both I/O tasks and non-I/O tasks:

I/O tasks: the ready events of a SelectionKey, such as accept, connect, read and write, triggered by the processSelectedKeys method.

Non-I/O tasks: tasks added to the taskQueue, such as register0 and bind0, triggered by the runAllTasks method.
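As a rough sketch of submitting non-I/O tasks to a NioEventLoop yourself (EventLoopTaskDemo and submitTasks are hypothetical names; the channel is assumed to be an active Netty Channel, e.g. ctx.channel() inside a handler):

import io.netty.channel.Channel;
import io.netty.channel.EventLoop;

import java.util.concurrent.TimeUnit;

public class EventLoopTaskDemo {

    // 'channel' is assumed to be an active Channel, e.g. obtained via ctx.channel() in a handler
    static void submitTasks(Channel channel) {
        EventLoop eventLoop = channel.eventLoop();

        // A plain non-I/O task: it goes into the taskQueue and is picked up by runAllTasks
        eventLoop.execute(() ->
                System.out.println("task run in " + Thread.currentThread().getName()));

        // A scheduled task: executed by the same NioEventLoop after the given delay
        eventLoop.schedule(() -> System.out.println("delayed task"), 10, TimeUnit.SECONDS);
    }
}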

【 NioEventLoopGroup 】

NioEventLoopGroup manages the life cycle of its NioEventLoops and can be understood as a thread pool. It maintains a group of threads internally; each thread (a NioEventLoop) is responsible for handling events on multiple Channels, while a Channel corresponds to only one thread.

【 ChannelHandler 】

ChannelHandler is an interface that processes I/O events or intercepts I/O operations and forwards them to the next handler in its ChannelPipeline (the business processing chain).

ChannelHandler itself does not provide many methods, but the interface still has a number of methods that need to be implemented; for convenience you usually work with its sub-interfaces:

ChannelInboundHandler is used to handle inbound I/O events. ChannelOutboundHandler is used to handle outbound I/O operations.

Or use the following adapter classes:

ChannelInboundHandlerAdapter is used to handle inbound I/O events. ChannelOutboundHandlerAdapter is used to handle outbound I/O operations.
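The demo above only uses inbound adapters; for the outbound side, a minimal sketch of a ChannelOutboundHandlerAdapter might look like this (OutboundLogHandler is a hypothetical handler that just logs outgoing ByteBufs):

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOutboundHandlerAdapter;
import io.netty.channel.ChannelPromise;

// Hypothetical outbound handler: logs outgoing ByteBufs and passes them on towards the head of the pipeline
public class OutboundLogHandler extends ChannelOutboundHandlerAdapter {

    @Override
    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) throws Exception {
        if (msg instanceof ByteBuf) {
            System.out.println("writing " + ((ByteBuf) msg).readableBytes() + " bytes");
        }
        // Forward the write to the next outbound handler (towards the head of the pipeline)
        ctx.write(msg, promise);
    }
}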

【 ChannelHandlerContext 】

Saves all context information associated with a Channel, along with a ChannelHandler object.

【 ChannelPipeline 】

Saves a List of ChannelHandlers that handle or intercept inbound events and outbound operations for a Channel.

ChannelPipeline implements an advanced form of intercepting filter pattern that gives the user complete control over how events are handled and how the various Channelhandlers in a Channel interact with each other.

In Netty, each Channel has exactly one ChannelPipeline corresponding to it; their composition relationship is as follows:

A Channel contains a ChannelPipeline, which maintains a doubly linked list of ChannelHandlerContexts, and each ChannelHandlerContext is associated with a ChannelHandler.

In this doubly linked list, inbound events are passed from the head towards the tail, invoking the inbound handlers in order, while outbound events are passed from the tail towards the head, invoking the outbound handlers in order. The two types of handlers do not interfere with each other.
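A small sketch of how the addLast order maps onto this flow, reusing NettyServerHandler from the demo above and the hypothetical OutboundLogHandler sketched in the ChannelHandler section (OrderedInitializer is itself a hypothetical name):

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;

// Hypothetical initializer illustrating handler ordering in the pipeline
public class OrderedInitializer extends ChannelInitializer<SocketChannel> {

    @Override
    protected void initChannel(SocketChannel socketChannel) throws Exception {
        socketChannel.pipeline()
                // outbound handler closer to the head: writes travelling tail -> head pass through it
                .addLast(new OutboundLogHandler())
                // inbound handler: incoming data travelling head -> tail reaches it next
                .addLast(new NettyServerHandler());
        // Inbound events are handled in addLast order (head -> tail);
        // outbound events are handled in the reverse order (tail -> head).
    }
}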