
This article details the core concepts of the Netty network programming framework, along with introductory examples.

1 Netty introduction

  1. An event-driven network communication framework built on Java NIO, for quickly and easily developing network applications.
  2. It greatly simplifies and streamlines network programming such as TCP and UDP socket servers, and improves performance and security in many respects.
  3. It supports multiple communication protocols, such as FTP, SMTP, HTTP, traditional binary and text-based protocols, as well as custom protocols.

In short, Netty has three advantages:

  1. High concurrency: built on NIO (the Reactor model), it delivers significantly better concurrency than BIO.
  2. Fast transmission: it relies on zero-copy to minimize unnecessary memory copies, and can use the high-performance serialization protocol Protobuf for efficient transmission.
  3. Good encapsulation: it wraps many of the details of raw NIO programming behind an easy-to-use API.

To borrow the official description: Netty has successfully found a way to achieve ease of development, performance, stability, and flexibility without compromising on maintainability.

Netty’s community is very active at the moment. Many open source projects and frameworks that involve network calls use Netty at the bottom, such as Dubbo, RocketMQ, Elasticsearch, gRPC, Spark, GateWay, etc.

In short, Netty is the right choice for network programming and development scenarios such as instant messaging systems, custom RPC frameworks, custom HTTP servers, and real-time message push systems.

2 Core components of Netty

2.1 Channel

Channel is Netty's abstraction for network operations; it covers basic I/O operations such as bind, connect, read, and write. Netty's Channel interface provides APIs that greatly reduce the complexity of using the Socket classes directly.

Different types of channels correspond to connections of different protocols and blocking types. Here are some common Channel types:

  1. NioSocketChannel: asynchronous client-side TCP Socket connection.
  2. NioServerSocketChannel: asynchronous server-side TCP Socket connection.
  3. NioDatagramChannel: asynchronous UDP connection.
  4. NioSctpChannel: asynchronous client-side SCTP connection.
  5. NioSctpServerChannel: asynchronous server-side SCTP connection.

These channels cover TCP and UDP network I/O as well as file I/O.

2.2 EventLoop

The EventLoop interface is a core interface of Netty. It processes the events that occur during the lifetime of a connection; in practice, it listens for network events and invokes the event handlers for the related I/O operations.

An EventLoop can listen to multiple Channels. The EventLoop is the core of I/O multiplexing and can be regarded as a Reactor in the Reactor model.

Channel is an abstract class for Netty network operations. EventLoop listens for I/O events registered to the Channel.

2.3 ChannelFuture

In Netty, all I/O operations are asynchronous, so it is not immediately known whether an operation succeeded.

When a Channel is registered with an EventLoop, operations such as bind and connect immediately return a ChannelFuture object. A GenericFutureListener can be registered via ChannelFuture#addListener, and the listener is automatically notified when the operation succeeds or fails.
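As a minimal sketch (assuming Netty 4.x on the classpath; the class and method names of the demo harness are illustrative, not from the article), a listener can be attached to the ChannelFuture returned by bind() instead of blocking on sync():

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicBoolean;

public class ChannelFutureDemo {

    // Binds a server to an OS-chosen free port and reports success via a listener.
    public static boolean bindSucceeded() throws InterruptedException {
        EventLoopGroup group = new NioEventLoopGroup(1);
        AtomicBoolean ok = new AtomicBoolean();
        CountDownLatch notified = new CountDownLatch(1);
        try {
            ServerBootstrap b = new ServerBootstrap()
                    .group(group)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInboundHandlerAdapter());
            // bind() is asynchronous: it returns a ChannelFuture immediately
            ChannelFuture f = b.bind(0); // port 0: let the OS pick a free port
            // Register a listener instead of blocking; it fires on completion
            f.addListener((ChannelFutureListener) done -> {
                ok.set(done.isSuccess());
                notified.countDown();
            });
            notified.await();
            f.channel().close().sync();
        } finally {
            group.shutdownGracefully();
        }
        return ok.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("bind succeeded: " + bindSucceeded());
    }
}
```

The latch is only there to make the demo deterministic; in real code the listener body would simply log or react to the result.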

2.4 ChannelHandler

ChannelHandler is the concrete handler for messages. It handles a wide range of tasks, including read and write events, connection handling, decoding, data conversion, and business logic, and then forwards the data to the next ChannelHandler in the ChannelPipeline.

You can extend Netty by writing custom ChannelHandlers. The ChannelHandler interface itself declares only a few methods; for ease of use, it is split into subinterfaces by direction:

  1. ChannelInboundHandler: handles inbound I/O events.
  2. ChannelOutboundHandler: handles outbound I/O operations.

Or, more conveniently, use the following adapter classes:

  1. ChannelInboundHandlerAdapter: handles inbound I/O events.
  2. ChannelOutboundHandlerAdapter: handles outbound I/O operations.
  3. ChannelDuplexHandler: handles both inbound and outbound events.
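For example, a minimal custom inbound handler (an illustrative sketch, not from the article) only needs to extend ChannelInboundHandlerAdapter and override the callbacks it cares about; every other event is forwarded along the pipeline automatically:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// A tiny echo handler: writes every received message straight back.
public class EchoServerHandler extends ChannelInboundHandlerAdapter {

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // Echo the message back to the peer; writeAndFlush releases msg
        ctx.writeAndFlush(msg);
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
```

It can be exercised without a real network via Netty's EmbeddedChannel test utility.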

2.5 ChannelPipeline

A ChannelPipeline is a linked list of ChannelHandlers. It provides an API for propagating inbound and outbound events along the chain.

One or more ChannelHandlers can be added to a ChannelPipeline with the addLast() method, because a single piece of data or a single event may be handled by several handlers in turn. When one ChannelHandler finishes processing, the data is passed to the next ChannelHandler.

On execution, inbound events are propagated from the head toward the tail through the inbound handlers (type ChannelInboundHandler), and outbound events are propagated from the tail toward the head through the outbound handlers (type ChannelOutboundHandler). The two kinds of handlers do not interfere with each other. If a handler is both an inbound and an outbound handler, it is executed once in each direction.

Each Channel in Netty has exactly one ChannelPipeline, which is created and assigned to the Channel automatically when the Channel is created.

2.5.1 ChannelHandlerContext

A ChannelHandlerContext transmits business data and holds all the context information related to its Channel.

ChannelHandlerContexts are stored directly in the ChannelPipeline, and each ChannelHandlerContext is associated with exactly one ChannelHandler.

2.5.2 Inbound and outbound

Data inbound generally means a read event has been triggered, i.e. data is coming in: data is read from the underlying Java NIO channel into the Netty Channel, and it is decoded along the way.

Data outbound generally means a write event has been triggered, i.e. data is going out: data is written from the Netty Channel to the underlying Java NIO channel, and it is encoded along the way.

On the inbound side, the data is read first and the inbound handlers run afterwards; on the outbound side, the outbound handlers run before the data is written.

That is, every time a read event occurs, an inbound pass is performed: after the data has actually been read, the ChannelInboundHandlers in the ChannelPipeline are invoked from head to tail, and the ChannelOutboundHandlers are not invoked. Every time a write event is triggered, an outbound pass is performed: before the data is actually written, the ChannelOutboundHandlers in the ChannelPipeline are invoked from tail to head, and the ChannelInboundHandlers are not invoked.
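The traversal order above can be observed directly with Netty's EmbeddedChannel (a test channel that needs no real network); the handler labels "in1", "out1", "in2" below are made up for illustration:

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelOutboundHandlerAdapter;
import io.netty.channel.ChannelPromise;
import io.netty.channel.embedded.EmbeddedChannel;

import java.util.ArrayList;
import java.util.List;

public class PipelineOrderDemo {

    public static List<String> run() {
        List<String> order = new ArrayList<>();
        EmbeddedChannel ch = new EmbeddedChannel(
                new ChannelInboundHandlerAdapter() {   // in1: closest to head
                    @Override
                    public void channelRead(ChannelHandlerContext ctx, Object msg) {
                        order.add("in1");
                        ctx.fireChannelRead(msg);      // pass on toward the tail
                    }
                },
                new ChannelOutboundHandlerAdapter() {  // out1: between in1 and in2
                    @Override
                    public void write(ChannelHandlerContext ctx, Object msg, ChannelPromise promise) {
                        order.add("out1");
                        ctx.write(msg, promise);       // pass on toward the head
                    }
                },
                new ChannelInboundHandlerAdapter() {   // in2: closest to tail
                    @Override
                    public void channelRead(ChannelHandlerContext ctx, Object msg) {
                        order.add("in2");
                        ctx.writeAndFlush(msg);        // start the outbound pass
                    }
                });
        ch.writeInbound("ping");
        // Inbound ran head-to-tail (in1, in2); the write then ran back toward
        // the head through the outbound handler (out1)
        return order;
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```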

The following figure from the Netty Javadoc (netty.io/4.1/api/io/…) illustrates how the ChannelHandlers in a ChannelPipeline typically handle I/O events.

Inbound events are processed by the inbound handlers bottom-up, as shown on the left. An inbound handler typically processes the raw inbound data produced by the I/O thread at the bottom of the diagram, for example via SocketChannel.read(ByteBuffer).

Outbound events are processed by the outbound handlers top-down, as shown on the right. Outbound handlers typically generate or transform the outbound traffic, such as write requests. If an outbound event propagates past the bottom outbound handler, it is handled by the I/O thread associated with the Channel, which performs the actual output operation, such as SocketChannel.write(ByteBuffer).

2.6 EventLoopGroup

An EventLoopGroup is essentially a group of EventLoops; it contains multiple EventLoops. The main job of an EventLoop is to listen for network events and call the event handlers for I/O processing.

Each EventLoop in an EventLoopGroup usually contains one Selector and one thread. An EventLoop can be bound to multiple Channels, but each Channel is bound to only one EventLoop, so the I/O events of a given connection are always processed on the same dedicated thread, which guarantees thread safety.

A Netty server contains a boss NioEventLoopGroup and a worker NioEventLoopGroup:

  1. The main loop of each boss NioEventLoop performs:
    1. select: listen for accept events.
    2. Process incoming accept events: establish the connection with the client, create a SocketChannel, and register it with the Selector of a worker NioEventLoop.
    3. runAllTasks: process the tasks in the task queue, including tasks submitted by users via EventLoop.execute or schedule, and tasks submitted to the EventLoop by other threads.
  2. The main loop of each worker NioEventLoop performs:
    1. select: listen for read and write events.
    2. Process incoming read and write events when readable or writable events occur on a NioSocketChannel.
    3. runAllTasks: process the tasks in the task queue.

3 Netty thread model

Netty receives and processes user requests based on the Reactor model and a multiplexer, and internally uses two thread pools: a boss pool and a worker pool. The boss pool handles the accept events of requests: when an accept event arrives, it establishes the connection, wraps the resulting socket into a NioSocketChannel, and hands it over to the worker pool. The worker pool is responsible for the connection's read and write events as well as its business logic, which is processed by the corresponding handlers.

Netty relies on the configuration of the NioEventLoopGroup thread pool to implement the specific thread model.

3.1 Single-threaded model

bossGroup and workerGroup use the same NioEventLoopGroup, with the thread count set to 1.

Suitable for applications with few connections and low concurrency.

3.2 Multi-thread model

bossGroup and workerGroup use different NioEventLoopGroups, and the bossGroup is configured with 1 thread.

Suitable for applications with a larger number of concurrent requests.

3.3 Master-slave multithreading model

bossGroup and workerGroup use different NioEventLoopGroups, and both are configured with multiple threads.

Suitable for applications with a large number of connections and high concurrency.

An Acceptor thread in the main NIO thread pool is bound to the listening port to receive client connections, while the other threads in that pool handle follow-up work such as access authentication. Once a connection is established, it is dispatched to a workerGroup thread.

4 Number of Netty threads started by default

The no-argument constructor of EventLoopGroup actually starts CPU cores × 2 threads, although the bossGroup is usually set to 1 thread. The number of EventLoops inside an EventLoopGroup equals the number of threads, a one-to-one relationship.
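This default is easy to verify (a small sketch; it assumes Netty 4.x and that the io.netty.eventLoopThreads system property has not been overridden):

```java
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.util.concurrent.EventExecutor;

public class DefaultThreadsDemo {

    // Counts the EventLoops in a group built with the no-arg constructor.
    // EventLoopGroup extends Iterable<EventExecutor>, so we can just iterate.
    public static int eventLoopCount() {
        NioEventLoopGroup group = new NioEventLoopGroup();
        int count = 0;
        for (EventExecutor ignored : group) {
            count++;
        }
        group.shutdownGracefully();
        return count;
    }

    public static void main(String[] args) {
        System.out.println("event loops: " + eventLoopCount()
                + ", cores: " + Runtime.getRuntime().availableProcessors());
    }
}
```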

5 Netty startup process

5.1 Server side

First, two NioEventLoopGroups are initialized: the bossGroup handles clients' TCP connection requests (accept events), and the workerGroup handles each connection's I/O read/write events and its business logic.

The no-argument constructor of NioEventLoopGroup defaults to CPU cores × 2 threads. Typically we set the bossGroup to 1 thread (enough for a small volume of incoming connections) and leave the workerGroup at CPU cores × 2.

Then create a ServerBootstrap, the server-side bootstrap/helper class that guides the server startup. Use it to configure the EventLoopGroup, the Channel type, connection parameters, and the inbound and outbound event handlers.

Finally, bind a port with the bind() method, and the server starts working.

public class NettyServer {
    static int port = 8888;

    public static void main(String[] args) {
        // 1. bossGroup receives connections (mainReactor),
        //    workerGroup does the actual processing (subReactor)
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            // 2. Create the server bootstrap/helper class ServerBootstrap
            ServerBootstrap serverBootstrap = new ServerBootstrap();
            // 3. Configure the two thread groups, determining the thread model
            serverBootstrap
                    .group(bossGroup, workerGroup)
                    // 4. Specify the IO model
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        public void initChannel(SocketChannel ch) {
                            ChannelPipeline p = ch.pipeline();
                            // 5. Add custom business handlers for client messages here
                            p.addLast(new HelloServerHandler());
                        }
                    });
            // 6. Bind the port; sync() blocks until binding completes
            ChannelFuture f = serverBootstrap.bind(port).sync();
            // 7. Block until the server Channel closes (closeFuture() returns the
            //    Channel's close future; sync() waits on it)
            f.channel().closeFuture().sync();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            // 8. Gracefully shut down the thread group resources
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}

5.2 Client side

First initialize a NioEventLoopGroup.

Then create a Bootstrap, the client-side bootstrap/helper class that guides the client startup. Use it to configure the EventLoopGroup, the Channel type, connection parameters, and the inbound and outbound event handlers.

Finally, connect to the server's IP address and port with the connect() method, and the client starts working.

public class NettyClient {
    static int port = 8888;
    static String host = "127.0.0.1";

    public static void main(String[] args) {
        // 1. Create a NioEventLoopGroup instance
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            // 2. Create the client bootstrap/helper class: Bootstrap
            Bootstrap bootstrap = new Bootstrap();
            // 3. Specify the thread group
            bootstrap.group(group)
                    // 4. Specify the IO model
                    .channel(NioSocketChannel.class)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        public void initChannel(SocketChannel ch) throws Exception {
                            ChannelPipeline p = ch.pipeline();
                            // 5. Add custom business handlers for messages here
                            p.addLast(new HelloClientHandler());
                        }
                    });
            // 6. Try to establish the connection
            ChannelFuture f = bootstrap.connect(host, port).sync();
            // 7. Wait for the connection to close (block until the Channel closes)
            f.channel().closeFuture().sync();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            group.shutdownGracefully();
        }
    }
}

6 Causes and solutions of TCP Packet Sticking/Unpacking

6.1 Causes

TCP handles data as a byte stream with an underlying buffer: a single large packet may be split into several segments for transmission (unpacking), and several small packets may be merged into one segment for transmission (sticking).

TCP packet sticking/unpacking: if the application writes more data than the size of the socket send buffer, the data is split (unpacking), and the receiver cannot get a complete message in a single read. If the application writes less data than the socket buffer size, the network layer may merge several application writes and send them together, and the receiver then gets multiple messages stuck together in one read.

The MSS (Maximum Segment Size) limits how many payload bytes one TCP segment can carry; on Ethernet it is typically 1500 − 20 − 20 = 1460 bytes (the MTU minus the IP and TCP headers). Data larger than this is split across segments.

6.2 Solution

  1. Use one of Netty's built-in decoders:
    1. LineBasedFrameDecoder: the sender ends each packet with a newline, i.e. \n or \r\n. The decoder iterates over the readable bytes in the ByteBuf, looks for the newline, and slices off a frame accordingly.
    2. DelimiterBasedFrameDecoder: a decoder with a customizable delimiter; LineBasedFrameDecoder is in fact a special case of it.
    3. FixedLengthFrameDecoder: a fixed-length decoder that unpacks messages at the specified length. The two sides must agree on a fixed size for every packet.
    4. LengthFieldBasedFrameDecoder: splits messages into a header and a body. The header carries the length of the whole message, and a complete message is not delivered until enough bytes have been read.
  2. Handle sticking and unpacking in a custom application-layer protocol.
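As a sketch of the first approach (the decoder class is Netty's; the demo harness is illustrative), FixedLengthFrameDecoder can be watched re-slicing two glued-together messages using EmbeddedChannel:

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.FixedLengthFrameDecoder;
import io.netty.util.CharsetUtil;

import java.util.ArrayList;
import java.util.List;

public class UnpackDemo {

    // Feeds one "stuck" buffer through a fixed-length decoder and
    // collects the frames it produces.
    public static List<String> decode(String glued, int frameLen) {
        EmbeddedChannel ch = new EmbeddedChannel(new FixedLengthFrameDecoder(frameLen));
        // Simulate two messages arriving glued together in one TCP read
        ch.writeInbound(Unpooled.copiedBuffer(glued, CharsetUtil.UTF_8));
        List<String> frames = new ArrayList<>();
        ByteBuf buf;
        while ((buf = ch.readInbound()) != null) {
            frames.add(buf.toString(CharsetUtil.UTF_8));
            buf.release();
        }
        return frames;
    }

    public static void main(String[] args) {
        System.out.println(decode("ABCDEFGH", 4)); // two 4-byte frames
    }
}
```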

7 Netty long connection and heartbeat mechanism

With a Netty long-lived connection, the connection between client and server is not closed after a read/write operation; subsequent reads and writes reuse it. This saves repeated TCP setup and teardown, reducing network resource usage and saving time.

Network exceptions such as disconnection may occur during a long-lived TCP connection. When that happens, without any interaction neither side can discover that the other has gone. To solve this, a heartbeat mechanism is introduced.

The heartbeat mechanism works like this: when there has been no data interaction between client and server for a certain period, i.e. one side is idle, it sends a special packet to the peer, and the peer immediately replies with a special packet of its own, a ping-pong exchange. When one end receives a heartbeat message, it therefore knows the other end is still online, which keeps the TCP connection valid.

In fact, TCP itself has a keep-alive option with its own heartbeat packets: the SO_KEEPALIVE socket option. However, keep-alive at the TCP layer is not flexible enough, so a custom heartbeat is generally implemented in the application-layer protocol, i.e. in code at the Netty level. In Netty, the core class for this is IdleStateHandler.

Netty supports three heartbeat (idle-detection) types:

  1. readerIdleTime: read timeout (nothing has been received from the peer within the given period).
  2. writerIdleTime: write timeout (nothing has been sent to the peer within the given period).
  3. allIdleTime: timeout when neither a read nor a write has occurred within the given period.
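A minimal server-side sketch (the handler body and 60-second value are illustrative): IdleStateHandler fires an IdleStateEvent when the configured idle period elapses, and a following handler reacts to it in userEventTriggered. The initializer is parameterized over Channel here so it can be unit-tested; in a real server it would typically be ChannelInitializer&lt;SocketChannel&gt;:

```java
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.handler.timeout.IdleState;
import io.netty.handler.timeout.IdleStateEvent;
import io.netty.handler.timeout.IdleStateHandler;

public class HeartbeatInitializer extends ChannelInitializer<Channel> {

    @Override
    protected void initChannel(Channel ch) {
        // readerIdleTime = 60s; writerIdleTime and allIdleTime disabled (0)
        ch.pipeline().addLast(new IdleStateHandler(60, 0, 0));
        ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
            @Override
            public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
                if (evt instanceof IdleStateEvent
                        && ((IdleStateEvent) evt).state() == IdleState.READER_IDLE) {
                    // Nothing read from the peer for 60s: treat it as gone
                    ctx.close();
                } else {
                    super.userEventTriggered(evt == null ? ctx : ctx, evt);
                }
            }
        });
    }
}
```

A client would pair this with a writer-idle timeout and send a ping message instead of closing.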

8 Zero copy of Netty

Zero-copy is a technique in which the CPU does not copy data from one memory area to another while the computer performs an operation. It is commonly used to save CPU cycles and memory bandwidth when transferring files over a network.

Zero copy in Netty is reflected in the following aspects:

  1. Netty provides the CompositeByteBuf class, which combines multiple ByteBufs into one logical ByteBuf, avoiding copies between them.
  2. ByteBuf supports slicing, so a ByteBuf can be split into multiple ByteBufs that share the same storage area, avoiding memory copies.
  3. FileRegion, which wraps FileChannel.transferTo, can transfer a file directly to the target Channel, avoiding the memory copies of a traditional write loop.
  4. Netty receives and sends ByteBufs using direct buffers: Socket reads and writes use off-heap direct memory, so no second copy of the byte buffer is needed. With traditional heap buffers, the JVM copies the heap buffer into direct memory before writing it to the Socket, which adds one extra buffer copy compared with using off-heap direct memory.
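The first two points can be demonstrated directly with the ByteBuf API (a sketch; the strings and class name are arbitrary):

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.CompositeByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

public class ZeroCopyDemo {

    // 1. CompositeByteBuf: header + body presented as one logical buffer,
    //    without copying either component.
    public static String compose() {
        ByteBuf header = Unpooled.copiedBuffer("HEAD", CharsetUtil.UTF_8);
        ByteBuf body = Unpooled.copiedBuffer("BODY", CharsetUtil.UTF_8);
        CompositeByteBuf message = Unpooled.compositeBuffer();
        message.addComponents(true, header, body); // true: advance writerIndex
        String s = message.toString(CharsetUtil.UTF_8);
        message.release();
        return s;
    }

    // 2. slice(): a view over the same memory; writing through the slice
    //    is visible in the original buffer, proving no copy was made.
    public static String sliceView() {
        ByteBuf whole = Unpooled.copiedBuffer("HEADBODY", CharsetUtil.UTF_8);
        ByteBuf head = whole.slice(0, 4);
        head.setByte(0, 'X');
        String s = whole.toString(CharsetUtil.UTF_8);
        whole.release();
        return s;
    }

    public static void main(String[] args) {
        System.out.println(compose());   // the composed header + body
        System.out.println(sliceView()); // the original, mutated via the slice
    }
}
```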

9 Differences between Netty and Tomcat

Different roles: Tomcat is a Servlet container that can be regarded as a web server, a finished piece of software, whereas Netty is a powerful asynchronous event-driven network application framework that simplifies network programming and can be used to build all kinds of servers.

Different protocols: Tomcat is an HTTP-based web server, while Netty supports many off-the-shelf protocols and lets you define your own, because Netty itself encodes/decodes raw byte streams. Netty can therefore implement HTTP servers, FTP servers, UDP servers, RPC servers, WebSocket servers, Redis proxy servers, MySQL proxy servers, and so on.

10 Netty Simple Case

Client:

public class NettyClient {

    public static void main(String[] args) throws IOException, InterruptedException {
        // 1. Create a NioEventLoopGroup instance
        EventLoopGroup group = new NioEventLoopGroup();
        try {
            // 2. Create the client bootstrap/helper class: Bootstrap
            Bootstrap bootstrap = new Bootstrap();
            // 3. Specify the thread group
            bootstrap.group(group)
                    // 4. Specify the IO model
                    .channel(NioSocketChannel.class)
                    .handler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        public void initChannel(SocketChannel ch) throws Exception {
                            ChannelPipeline pipeline = ch.pipeline();
                            // 5. Add custom business handlers for messages here
                            pipeline.addLast(new DelimiterBasedFrameDecoder(4096, Delimiters.lineDelimiter()));
                            pipeline.addLast(new StringDecoder(CharsetUtil.UTF_8));
                            pipeline.addLast(new StringEncoder(CharsetUtil.UTF_8));
                            pipeline.addLast(new ClientHandler());
                        }
                    });
            // 6. Try to establish the connection
            ChannelFuture f = bootstrap.connect("localhost", 8888).sync();
            Channel channel = f.channel();
            // 7. Waiting for the Channel to close is commented out here
            //    so the send loop below can run
            //channel.closeFuture().sync();

            // Read lines from stdin and send them; type "bye" to quit
            BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
            for (;;) {
                String s = br.readLine();
                channel.writeAndFlush(s + "\r\n");
                if ("bye".equals(s)) {
                    break;
                }
            }
        } finally {
            group.shutdownGracefully();
        }
    }
}

ClientHandler:

/**
 * @author lx
 */
public class ClientHandler extends SimpleChannelInboundHandler<String> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, String msg) throws Exception {
        System.out.println(msg);
    }
}

NettyServer:

public class NettyServer {

    public static void main(String[] args) {
        // 1. bossGroup receives connections (mainReactor),
        //    workerGroup does the actual processing (subReactor)
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);
        EventLoopGroup workerGroup = new NioEventLoopGroup();
        try {
            // 2. Create the server bootstrap/helper class ServerBootstrap
            ServerBootstrap serverBootstrap = new ServerBootstrap();
            // 3. Configure the two thread groups, determining the thread model
            serverBootstrap
                    .group(bossGroup, workerGroup)
                    // 4. Specify the IO model
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        public void initChannel(SocketChannel ch) {
                            ChannelPipeline pipeline = ch.pipeline();
                            // 5. Add custom business handlers for client messages here
                            pipeline.addLast(new DelimiterBasedFrameDecoder(4096, Delimiters.lineDelimiter()));
                            pipeline.addLast(new StringDecoder(CharsetUtil.UTF_8));
                            pipeline.addLast(new StringEncoder(CharsetUtil.UTF_8));
                            pipeline.addLast(new ServerHandler());
                        }
                    });
            // 6. Bind the port; sync() blocks until binding completes
            ChannelFuture f = serverBootstrap.bind(8888).sync();
            // 7. Block until the server Channel closes (closeFuture() returns the
            //    Channel's close future; sync() waits on it)
            f.channel().closeFuture().sync();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            // 8. Gracefully shut down the thread group resources
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}

ServerHandler:

/**
 * @author lx
 */
public class ServerHandler extends SimpleChannelInboundHandler<String> {

    /**
     * Read the request
     */
    @Override
    protected void channelRead0(ChannelHandlerContext ctx, String msg) {
        Channel channel = ctx.channel();
        System.out.println("client: " + channel.remoteAddress());
        System.out.println("from client: " + msg);
        double v = ThreadLocalRandom.current().nextDouble();
        channel.writeAndFlush("from server: " + v + " \r\n");
    }

    @Override
    public void handlerAdded(ChannelHandlerContext ctx) {
        Channel channel = ctx.channel();
        System.out.println("client: " + channel.remoteAddress() + " joined");
    }

    @Override
    public void handlerRemoved(ChannelHandlerContext ctx) {
        Channel channel = ctx.channel();
        System.out.println("client: " + channel.remoteAddress() + " left");
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) {
        Channel channel = ctx.channel();
        System.out.println("client: " + channel.remoteAddress() + " is online");
    }
}
