1.1 Understanding Netty

What is Netty? Netty is a high-performance, asynchronous, event-driven network application framework.

Why use Netty?
A Netty is built on Java NIO and wraps its various APIs behind a uniform interface.
B Its event-driven model lets us write business code against the corresponding events, so developers can focus on the business.
C Highly customizable thread model: a single thread, or one or more thread pools.
D Netty depends only on the underlying JDK API.
E For communication, it reduces unnecessary memory copies and improves performance.
F For security, it provides complete SSL/TLS and StartTLS support.

What are Netty's advantages over raw NIO?
A It encapsulates the NIO API, making it easy to use.
B Writing high-quality NIO programs requires expertise in multithreading and network programming.
C Raw NIO has reliability gaps you must handle yourself, such as client reconnection, transient network disconnection, half-packet reads and writes, and cache failures.
D NIO's Selector can spin on empty polls, driving CPU usage to 100%. JDK 1.7 still has this bug, though it occurs with lower probability.
1.2 What is the Architecture of Netty?

[Figure: Netty architecture]

Core
A Extensible event model.
B Unified communication API (HTTP and raw sockets use the same API).
C Zero-copy mechanism and rich byte buffers.

Transport Services
A Supports sockets and datagrams.
B Supports HTTP.
C In-VM pipe transport.

Protocol Support
A HTTP and WebSocket.
B SSL secure socket protocol support.
C Google Protobuf serialization framework.
D zlib and gzip compression.
E Large file transfer.
F RTSP (Real Time Streaming Protocol, an application-layer protocol in the TCP/IP suite).
G Binary protocols, with full unit-test coverage.

IO Models and the Reactor Model in Netty
2.1 Common IO Models

2.1.1 BIO Model

[Figure: BIO model]
A The number of threads on the server equals the number of concurrent clients; as concurrency grows, server performance deteriorates.
B After a connection is created, its thread blocks whenever the client is idle, which is a huge waste of server resources. A minimal sketch of this thread-per-connection model follows.
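To make the blocking behavior concrete, here is a minimal thread-per-connection server. This sketch is not from the original article; the port number and buffer size are arbitrary.

import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class BioServer {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(8080);
        while (true) {
            Socket socket = server.accept();      // blocks until a client connects
            new Thread(() -> {                    // one thread per connection
                try (InputStream in = socket.getInputStream()) {
                    byte[] buf = new byte[1024];
                    int len;
                    while ((len = in.read(buf)) != -1) {  // blocks while the client is idle
                        System.out.println(new String(buf, 0, len));
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }).start();
        }
    }
}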
2.1.2 NIO Model

A Buffer: a buffer, backed by an array. All read and write operations in NIO go through buffers. Every Java primitive type except boolean has a corresponding buffer class.
B Channel: usually called a channel, it connects the client and server. It is bidirectional: you can both read from it and write to it.
C Selector: usually called a multiplexer, it finds which registered channels have reads or writes ready. How it works: the Selector continuously polls the channels registered to it; when a channel becomes readable or writable, the Selector picks it up, and the set of ready channels is then obtained through SelectionKey for I/O read and write operations.

Analyzing the figure above: one thread handles multiple channels, avoiding the overhead of context switching between many threads. A channel is read or written only when an event occurs. In NIO, if no event occurs, the Selector multiplexer blocks while polling. How can this blocking be optimized as well? AIO was born in this context.

2.1.3 AIO Model

AIO means asynchronous I/O; it depends on the operating system's underlying asynchronous I/O support.

Basic AIO flow: the user thread tells the kernel to start an I/O operation through a system call and returns immediately. After the I/O operation (including data preparation and copying) completes, the kernel notifies the user program, which then performs its subsequent processing.

[Figure: AIO model]

Problems with the AIO model:
A Completing event registration and delivery requires deep support from the bottom layer of the operating system.
B On Windows, true asynchronous I/O is implemented through IOCP, but today's high-concurrency systems are deployed on Linux.
C The AIO support on Linux is immature and unstable, which is why Netty implements its asynchrony on top of NIO.

To make the NIO mechanics above concrete, the sketch below shows a minimal Selector polling loop.
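The article describes the Selector loop only in prose; the following minimal sketch (not from the original article, port and buffer size arbitrary) shows the polling loop a single thread runs over many channels.

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel serverChannel = ServerSocketChannel.open();
        serverChannel.configureBlocking(false);
        serverChannel.bind(new InetSocketAddress(8080));
        serverChannel.register(selector, SelectionKey.OP_ACCEPT);
        while (true) {
            selector.select();                        // blocks until at least one channel is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {             // new connection: register it for reads
                    SocketChannel client = serverChannel.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {        // data ready: read without blocking
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    ((SocketChannel) key.channel()).read(buf);
                }
            }
        }
    }
}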
2.2 Common Reactor Models

2.2.1 Overview of the Reactor Model

What is the Reactor model? It is a concurrent programming model, an idea with guiding significance rather than concrete code. The common variants are the single-thread model, the multi-thread model, and the master-slave multi-thread model. Netty supports all three models well and generally adopts the master-slave architecture.

The Reactor model has three roles:
A Reactor: listens for and dispatches events.
B Acceptor: processes new client connections and hands requests to the processing chain.
C Handler: binds itself to events and performs reads and writes (reads from the channel, executes business logic, and writes the result back to the channel).

2.2.2 Single-Thread Model

[Figure: single-thread Reactor model]

The figure above shows:
A One thread completes all business processing.
B The Reactor is a multiplexer that listens for events and passes them to a Handler or Acceptor.
C If the event is a new connection, the Reactor passes it to the Acceptor; if it is a read/write event, the Reactor passes it to a Handler.

Advantages
A Simple structure: everything completes in a single thread, so there are no inter-process communication problems. Suitable for simple business scenarios with low performance requirements.

Disadvantages
A Cannot exploit the advantages of a multi-core server.
B Too many client connections overload the single Reactor thread; client connections time out, messages pile up, and performance drops.
C A single point of failure brings down system communication.

2.2.3 Multi-Thread Model

[Figure: multi-thread Reactor model]
The figure above shows:
The difference from the single-thread model is that the Handler is now only responsible for responding to events and dispatching them; the real business logic runs in a worker thread pool.

Problems
A Sharing data across threads is complicated. If a worker thread passes its result back to the main Reactor thread after finishing the business processing, mutual exclusion and protection mechanisms for that data are involved.
B The Reactor still handles all listening and responding by itself. With millions of client connections, if the server must also authenticate client handshakes for security, that authentication alone drains performance.

2.2.4 Master-Slave Multi-Thread Model

[Figure: master-slave Reactor model]
The figure above shows:
A The Reactor is split into two parts. The MainReactor listens on the server socket, handles establishing network I/O connections, and registers each established SocketChannel with a SubReactor.
B The SubReactor handles data interaction with the established sockets and event business processing.

Advantages
A Fast response: the Reactor is not blocked by a single synchronous event, even though the Reactor itself is synchronous.
B Strong extensibility: CPU resources can be fully used by adding SubReactors.
C High reusability: the model is independent of the concrete event-processing logic.

2.2.5 Netty Model (mainly the master-slave multi-thread model)

[Figure: Netty threading model]
A In the Netty model, the BossGroup is responsible for handling new connections, and the WorkerGroup is responsible for all other events. (A Group represents the thread-pool concept.)
B A NioEventLoop represents a thread that loops over its task queue and listens for the read/write events bound to it.
C A Pipeline contains ChannelHandlers; the ChannelHandlers carry out the business processing.

The First Netty Service

3.1 Server

Maven dependencies (pom.xml):
<modelVersion>4.0.0</modelVersion>
<groupId>com.haopt.iot</groupId>
<artifactId>first-netty</artifactId>
<packaging>jar</packaging>
<version>1.0-SNAPSHOT</version>

<dependencies>
    <dependency>
        <groupId>io.netty</groupId>
        <artifactId>netty-all</artifactId>
        <version>4.1.50.Final</version>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.12</version>
    </dependency>
</dependencies>

<build>
    <plugins>
        <!-- compiler plugin -->
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.2</version>
            <configuration>
                <source>1.8</source>
                <target>1.8</target>
                <encoding>UTF-8</encoding>
            </configuration>
        </plugin>
    </plugins>
</build>
Server startup class:

import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.UnpooledByteBufAllocator;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioServerSocketChannel;
public class MyRPCServer {

    public void start(int port) throws Exception {
        // The main thread group handles no business logic; it only accepts connections
        EventLoopGroup boss = new NioEventLoopGroup(1);
        // The worker thread group handles the I/O events
        EventLoopGroup worker = new NioEventLoopGroup();
        try {
            ServerBootstrap serverBootstrap = new ServerBootstrap();
            serverBootstrap.group(boss, worker)                 // set the thread groups
                    .channel(NioServerSocketChannel.class)      // configure the server channel
                    .childHandler(new MyChannelInitializer());  // handler for the worker channels
            // Set ByteBuf allocation to unpooled; otherwise it cannot switch to heap buffer mode
            serverBootstrap.childOption(ChannelOption.ALLOCATOR, UnpooledByteBufAllocator.DEFAULT);
            ChannelFuture future = serverBootstrap.bind(port).sync();
            System.out.println("Server started, port: " + port);
            // Wait until the server's listening channel is closed
            future.channel().closeFuture().sync();
        } finally {
            // Graceful shutdown
            boss.shutdownGracefully();
            worker.shutdownGracefully();
        }
    }
}
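The server above references MyChannelInitializer, which the article never shows. A minimal sketch consistent with the handler below (the class and package names are assumptions):

package com.haopt.netty.server;

import com.haopt.netty.server.handler.MyChannelHandler;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;

public class MyChannelInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        // Register the business handler on each new connection's pipeline
        ch.pipeline().addLast(new MyChannelHandler());
    }
}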
Server ChannelHandler:

package com.haopt.netty.server.handler;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.util.CharsetUtil;
public class MyChannelHandler extends ChannelInboundHandlerAdapter {

    /**
     * Receive the data sent by the client
     * @param ctx
     * @param msg
     * @throws Exception
     */
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        ByteBuf byteBuf = (ByteBuf) msg;
        String msgStr = byteBuf.toString(CharsetUtil.UTF_8);
        System.out.println("Data sent by the client: " + msgStr);
        // Respond to the client
        ctx.writeAndFlush(Unpooled.copiedBuffer("ok", CharsetUtil.UTF_8));
    }
    /**
     * Exception handling
     * @param ctx
     * @param cause
     * @throws Exception
     */
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        cause.printStackTrace();
        ctx.close();
    }
}

Test case:

package com.haopt.netty.myrpc;

import com.haopt.netty.server.MyRPCServer;
import org.junit.Test;

public class TestServer {
    @Test
    public void testServer() throws Exception {
        MyRPCServer myRPCServer = new MyRPCServer();
        myRPCServer.start(5566);
    }
}

3.2 Client

Client startup class:

package com.haopt.netty.client;

import com.haopt.netty.client.handler.MyClientHandler;
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.nio.NioSocketChannel;

public class MyRPCClient {

    public void start(String host, int port) throws Exception {
        // Define a thread group
        EventLoopGroup worker = new NioEventLoopGroup();
        try {
            Bootstrap bootstrap = new Bootstrap();
            bootstrap.group(worker)
                    .channel(NioSocketChannel.class)   // the client uses NioSocketChannel
                    .handler(new MyClientHandler());
            ChannelFuture future = bootstrap.connect(host, port).sync();
            future.channel().closeFuture().sync();
        } finally {
            worker.shutdownGracefully();
        }
    }
}

Client ChannelHandler:

package com.haopt.netty.client.handler;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.util.CharsetUtil;

public class MyClientHandler extends SimpleChannelInboundHandler<ByteBuf> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) throws Exception {
        System.out.println("Message received from the server: " + msg.toString(CharsetUtil.UTF_8));
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        // Send data to the server
        String msg = "hello";
        ctx.writeAndFlush(Unpooled.copiedBuffer(msg, CharsetUtil.UTF_8));
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        cause.printStackTrace();
        ctx.close();
    }
}
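The article starts the server from a unit test but shows no entry point for the client. A minimal counterpart to TestServer (the class name and location are assumptions) could look like this:

package com.haopt.netty.myrpc;

import com.haopt.netty.client.MyRPCClient;
import org.junit.Test;

public class TestClient {
    @Test
    public void testClient() throws Exception {
        // Connect to the server started by TestServer on port 5566
        new MyRPCClient().start("127.0.0.1", 5566);
    }
}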
Netty Core Components

4.1 Channel

A A Channel can be understood as a socket connection; a Channel is created when a client connects to the server. It is responsible for basic I/O operations such as bind(), connect(), read(), and write().
B The API provided by Netty's Channel interface greatly reduces the complexity of working directly with the Socket classes.

Common Channels (different protocols and blocking types correspond to different Channel implementations):
A NioSocketChannel: NIO client TCP socket connection.
B NioServerSocketChannel: NIO server TCP socket connection.
C NioDatagramChannel: NIO UDP connection.
D NioSctpChannel: SCTP client connection.
E NioSctpServerChannel: SCTP server connection.
These channels cover UDP and TCP network I/O as well as file I/O.

4.2 EventLoopGroup and EventLoop

Overview
With a Channel connecting the two sides, messages flow across the connection. A message the server sends out is called outbound, and a message the server receives is called inbound. These outbound and inbound messages generate events, for example: connection became active; data read; user events; exception events; connection opened; connection closed; and so on. With events, a mechanism is needed to monitor and coordinate them, and that mechanism is the EventLoop.

Understanding EventLoopGroup and EventLoop
[Figure: relationship between EventLoopGroup, EventLoop, Thread, and Channel]
The figure above explains:
A An EventLoopGroup contains one or more EventLoops.
B An EventLoop is bound to exactly one Thread for its entire lifetime.
C All I/O events handled by an EventLoop are processed on its dedicated Thread.
D A Channel is registered with only one EventLoop during its lifetime.
E One EventLoop may be assigned to one or more Channels.

Code example:
EventLoopGroup boss = new NioEventLoopGroup(1);
EventLoopGroup worker = new NioEventLoopGroup();

4.3 ChannelHandler

Introduction
The ChannelHandler carries the outbound and inbound processing logic for data.

ChannelHandlers for outbound and inbound:
[Figure: ChannelHandler class hierarchy]
ChannelInboundHandler: handles inbound events.
ChannelOutboundHandler: handles outbound events.

Commonly used ChannelHandlers:
A On the server, a ChannelHandler is typically written by extending ChannelInboundHandlerAdapter.
B On the client, a ChannelHandler is typically written by extending SimpleChannelInboundHandler.
Note: the difference is that the former does not release its reference to the message data, while the latter does.

4.4 ChannelPipeline

Introduction
The ChannelPipeline links ChannelHandlers together. A Channel contains exactly one ChannelPipeline, which maintains a list of ChannelHandlers. The mapping between ChannelHandlers and the Channel/ChannelPipeline is maintained by ChannelHandlerContext. A short sketch of assembling a pipeline follows.
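A minimal sketch of assembling a pipeline inside a ChannelInitializer (the handler names are illustrative; StringDecoder and StringEncoder are Netty's built-in string codecs, and MyChannelHandler is the business handler from section 3.1):

import com.haopt.netty.server.handler.MyChannelHandler;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.string.StringDecoder;
import io.netty.handler.codec.string.StringEncoder;

public class PipelineInitializer extends ChannelInitializer<SocketChannel> {
    @Override
    protected void initChannel(SocketChannel ch) throws Exception {
        ch.pipeline()
          .addLast("decoder", new StringDecoder())      // inbound: bytes -> String
          .addLast("encoder", new StringEncoder())      // outbound: String -> bytes
          .addLast("business", new MyChannelHandler()); // business logic last
    }
}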
[Figure: ChannelPipeline structure]

As illustrated above, the ChannelHandlers form a doubly linked list in the order they were added. Inbound events propagate from the head of the list toward the tail, visiting each inbound handler in turn; outbound events propagate from the tail toward the head. The two kinds of ChannelHandler do not interfere with each other.

4.5 Bootstrap

Overview
Bootstrap is used to bootstrap a Netty program: it strings the components together, binds the port, and starts the service.

Two Bootstrap types (Bootstrap and ServerBootstrap): the client needs only one EventLoopGroup, while the server needs two EventLoopGroups.

[Figure: ServerBootstrap with two EventLoopGroups]
The figure above explains: the EventLoopGroup associated with the ServerChannel assigns an EventLoop that is responsible for accepting incoming connection requests and creating a Channel for each of them. Once a connection is accepted, the second EventLoopGroup assigns an EventLoop to its Channel.

4.6 Future

First, understand how the application is notified when an operation completes. A Future can be thought of as a placeholder for the result of an asynchronous operation; it will complete at some point in the future and provide access to its result.

Where ChannelFuture comes from: the JDK ships the interface java.util.concurrent.Future, but its implementations only allow you to check manually whether the operation has completed, or to block until it completes. This is tedious, so Netty provides its own implementation, ChannelFuture, for use when performing asynchronous operations.

Why is Netty completely asynchronous?
A ChannelFuture provides additional methods that let us register one or more ChannelFutureListener instances.
B The listener's callback method, operationComplete(), is called when the corresponding operation completes. The listener can then determine whether the operation completed successfully or failed.
C Every outbound I/O operation in Netty returns a ChannelFuture, i.e., none of them block. So Netty is completely asynchronous and event-driven. A short listener sketch follows.
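A minimal sketch of registering a ChannelFutureListener on a write; this fragment is not from the original article, and the channel and msg variables are assumed to exist in context:

import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelFutureListener;

ChannelFuture future = channel.writeAndFlush(msg);
future.addListener((ChannelFutureListener) f -> {
    if (f.isSuccess()) {
        System.out.println("write completed successfully");
    } else {
        f.cause().printStackTrace();   // inspect the failure
    }
});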
4.7 Component Summary

[Figure: relationships among Netty core components]

ByteBuf

5.1 ByteBuf Overview
Java NIO provides a buffer container (ByteBuffer), but it is complicated to use, so Netty introduces its own buffer, ByteBuf, which wraps an array of bytes.

ByteBuf maintains two indexes (readerIndex and writerIndex):
A readerIndex increases by the number of bytes read.
B writerIndex increases by the number of bytes written.
Note: if readerIndex exceeds writerIndex, Netty throws an IndexOutOfBoundsException.

[Figure: ByteBuf readerIndex/writerIndex layout]

5.2 Basic Use of ByteBuf

Reading:

package com.haopt.netty.myrpc.test;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

public class TestByteBuf01 {
    public static void main(String[] args) {
        ByteBuf byteBuf = Unpooled.copiedBuffer("hello world", CharsetUtil.UTF_8);
        System.out.println("byteBuf capacity: " + byteBuf.capacity());
        System.out.println("byteBuf readableBytes: " + byteBuf.readableBytes());
        System.out.println("byteBuf writableBytes: " + byteBuf.writableBytes());
        // read* methods advance readerIndex
        while (byteBuf.isReadable()) {
            System.out.println((char) byteBuf.readByte());
        }
        // get* methods do not move readerIndex
        for (int i = 0; i < byteBuf.readableBytes(); i++) {
            System.out.println((char) byteBuf.getByte(i));
        }
        // Access the backing array directly (heap buffers only)
        byte[] bytes = byteBuf.array();
        for (byte b : bytes) {
            System.out.println((char) b);
        }
    }
}

Writing:

package com.haopt.netty.myrpc.test;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

public class TestByteBuf02 {
    public static void main(String[] args) {
        // Initial capacity 10, maximum capacity 20
        ByteBuf byteBuf = Unpooled.buffer(10, 20);
        System.out.println("byteBuf capacity: " + byteBuf.capacity());
        System.out.println("byteBuf readableBytes: " + byteBuf.readableBytes());
        System.out.println("byteBuf writableBytes: " + byteBuf.writableBytes());
        // Each writeInt() writes 4 bytes; the buffer expands up to its maximum capacity
        for (int i = 0; i < 5; i++) {
            byteBuf.writeInt(i);
        }
        System.out.println("ok");
        System.out.println("byteBuf capacity: " + byteBuf.capacity());
        System.out.println("byteBuf readableBytes: " + byteBuf.readableBytes());
        System.out.println("byteBuf writableBytes: " + byteBuf.writableBytes());
        while (byteBuf.isReadable()) {
            System.out.println(byteBuf.readInt());
        }
    }
}

Discarding read bytes:
[Figure: effect of discardReadBytes()]

package com.haopt.netty.myrpc.test;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

public class TestByteBuf03 {
    public static void main(String[] args) {
        ByteBuf byteBuf = Unpooled.copiedBuffer("hello world", CharsetUtil.UTF_8);
        System.out.println("byteBuf capacity: " + byteBuf.capacity());
        System.out.println("byteBuf readableBytes: " + byteBuf.readableBytes());
        System.out.println("byteBuf writableBytes: " + byteBuf.writableBytes());
        while (byteBuf.isReadable()) {
            System.out.println((char) byteBuf.readByte());
        }
        // Drop the already-read bytes and reclaim their space
        byteBuf.discardReadBytes();
        System.out.println("byteBuf capacity: " + byteBuf.capacity());
        System.out.println("byteBuf readableBytes: " + byteBuf.readableBytes());
        System.out.println("byteBuf writableBytes: " + byteBuf.writableBytes());
    }
}

clear():
[Figure: effect of clear()]

package com.haopt.netty.myrpc.test;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.util.CharsetUtil;

public class TestByteBuf04 {
    public static void main(String[] args) {
        ByteBuf byteBuf = Unpooled.copiedBuffer("hello world", CharsetUtil.UTF_8);
        System.out.println("byteBuf capacity: " + byteBuf.capacity());
        System.out.println("byteBuf readableBytes: " + byteBuf.readableBytes());
        System.out.println("byteBuf writableBytes: " + byteBuf.writableBytes());
        // Reset readerIndex and writerIndex to 0 (the data itself is not erased)
        byteBuf.clear();
        System.out.println("byteBuf capacity: " + byteBuf.capacity());
        System.out.println("byteBuf readableBytes: " + byteBuf.readableBytes());
        System.out.println("byteBuf writableBytes: " + byteBuf.writableBytes());
    }
}

5.3 ByteBuf Usage Modes

5.3.1 Three modes, by where the buffer is stored
Heap buffers: allocation and reclamation are relatively fast, and they can be reclaimed automatically by the JVM. The drawback is that socket I/O requires an extra memory copy (the heap buffer must be copied into the kernel channel), which costs some performance. Because heap buffers are managed by the JVM, they can be released quickly when no longer in use, and the underlying byte[] can be obtained via ByteBuf.array().
Direct buffers: non-heap memory allocated outside the JVM heap. Allocation and reclamation are slower than for heap buffers, but reads and writes to a socket channel are faster because one memory copy is avoided.
Composite buffers: as the name suggests, they aggregate the two buffer types. Netty provides CompositeByteBuf, which puts heap-buffer and direct-buffer data together to make them easier to use.

5.3.2 Choosing the buffer type
// By default, DirectByteBuf is used; to use HeapByteBuf, set these parameters:
// Disable Unsafe so that heap buffers are used
System.setProperty("io.netty.noUnsafe", "true");
// ByteBuf allocation must be set to unpooled, otherwise it cannot switch to heap buffer mode
serverBootstrap.childOption(ChannelOption.ALLOCATOR, UnpooledByteBufAllocator.DEFAULT);

5.3.3 Allocation of ByteBuf
Netty provides two implementations of ByteBufAllocator:
A PooledByteBufAllocator: pools ByteBuf instances to improve performance and minimize memory fragmentation.
B UnpooledByteBufAllocator: no pooling; a new instance is created on every allocation.

// Obtain the ByteBufAllocator through the ChannelHandlerContext
ctx.alloc();
// or through the Channel
channel.alloc();
// Netty uses PooledByteBufAllocator by default.
// The unpooled mode can be set on the bootstrap class:
serverBootstrap.childOption(ChannelOption.ALLOCATOR, UnpooledByteBufAllocator.DEFAULT);
// Or via system property:
System.setProperty("io.netty.allocator.type", "pooled");
System.setProperty("io.netty.allocator.type", "unpooled");

5.4 Reclaiming ByteBuf
A ByteBuf in heap-buffer mode can be collected by the GC, but in direct-buffer mode it is not managed by the GC and must be released manually, otherwise memory leaks.
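A small sketch of this release discipline (a standalone illustration, not from the original article): pooled and direct buffers are not GC-managed, so each allocation is paired with a release() in a finally block.

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;
import java.nio.charset.StandardCharsets;

public class AllocatorDemo {
    public static void main(String[] args) {
        // Allocated from the default pooled allocator; must be released manually
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(256);
        try {
            buf.writeBytes("hello".getBytes(StandardCharsets.UTF_8));
            System.out.println(buf.toString(StandardCharsets.UTF_8));
        } finally {
            buf.release();  // returns the buffer to the pool; forgetting this leaks memory
        }
    }
}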
5.5.1 Manual Release of ByteBuf
Manual release means that after you are done using the ByteBuf, you call ReferenceCountUtil.release(byteBuf) yourself. The release method decrements the ByteBuf's reference count, and Netty reclaims the ByteBuf when the count reaches zero.

Code:
/**
 * Receive the data sent by the client
 * @param ctx
 * @param msg
 * @throws Exception
 */
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    ByteBuf byteBuf = (ByteBuf) msg;
    String msgStr = byteBuf.toString(CharsetUtil.UTF_8);
    System.out.println("Data sent by the client: " + msgStr);
    // Release the resource
    ReferenceCountUtil.release(byteBuf);
}

Note: manual release does the trick, but it is tedious; if you forget to release, you may cause a memory leak.
5.5.2 Automatic Release of ByteBuf
There are three ways to release automatically: the inbound TailHandler, extending SimpleChannelInboundHandler, and the outbound HeadHandler.
TailHandler
The end of Netty's ChannelPipeline is a TailHandler. By default, if every inbound handler passes the message on down the pipeline, the TailHandler releases messages of type ReferenceCounted.

/**
 * Receive the data sent by the client
 * @param ctx
 * @param msg
 * @throws Exception
 */
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    ByteBuf byteBuf = (ByteBuf) msg;
    String msgStr = byteBuf.toString(CharsetUtil.UTF_8);
    System.out.println("Data sent by the client: " + msgStr);
    ctx.writeAndFlush(Unpooled.copiedBuffer("ok", CharsetUtil.UTF_8));
    ctx.fireChannelRead(msg); // pass the ByteBuf down the pipeline
}

The TailContext inner class in DefaultChannelPipeline executes last:
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    onUnhandledInboundMessage(ctx, msg);
}

// Executed last
protected void onUnhandledInboundMessage(Object msg) {
    try {
        logger.debug("Discarded inbound message {} that reached at the tail of the pipeline. " +
                "Please check your pipeline configuration.", msg);
    } finally {
        ReferenceCountUtil.release(msg); // release the resource
    }
}

SimpleChannelInboundHandler
When a ChannelHandler extends SimpleChannelInboundHandler, SimpleChannelInboundHandler's channelRead() method takes care of releasing the resource; we only write our business code in channelRead0().

// channelRead() in SimpleChannelInboundHandler
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    boolean release = true;
    try {
        if (acceptInboundMessage(msg)) {
            @SuppressWarnings("unchecked")
            I imsg = (I) msg;
            channelRead0(ctx, imsg);
        } else {
            release = false;
            ctx.fireChannelRead(msg);
        }
    } finally {
        if (autoRelease && release) {
            ReferenceCountUtil.release(msg); // release
        }
    }
}

Usage:
package com.haopt.myrpc.client.handler;

import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.SimpleChannelInboundHandler;
import io.netty.util.CharsetUtil;

public class MyClientHandler extends SimpleChannelInboundHandler<ByteBuf> {

    @Override
    protected void channelRead0(ChannelHandlerContext ctx, ByteBuf msg) throws Exception {
        System.out.println("Message received from the server: " + msg.toString(CharsetUtil.UTF_8));
    }

    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        // Send data to the server
        String msg = "hello";
        ctx.writeAndFlush(Unpooled.copiedBuffer(msg, CharsetUtil.UTF_8));
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        cause.printStackTrace();
        ctx.close();
    }
}

HeadHandler
During outbound processing, the allocated ByteBuf is released automatically by the HeadHandler. The ByteBuf used for outbound processing, typically holding a message to be sent, is usually requested by the application. Outbound processing starts when ctx.writeAndFlush(msg) sends the buffer into the pipeline. After passing through each outbound handler, the message finally reaches the last outbound handler, the HeadHandler, goes through another round of calls, and is released once the flush completes. In the client handler above, the ByteBuf created by Unpooled.copiedBuffer() in channelActive() takes exactly this path, so no manual release is needed.

5.6 ByteBuf Release Summary
A In inbound processing, if the original message is not processed further, call ctx.fireChannelRead(msg) to pass it down the pipeline; the TailHandler, the final stop of the pipeline, releases it automatically.
B If the inbound pipeline is cut short (the message is not passed on), extend SimpleChannelInboundHandler so that the inbound ByteBuf is released automatically.
C During outbound processing, the allocated ByteBuf is automatically released by HeadHandler.
D In inbound processing, if the original message is converted into a new message that is passed down via ctx.fireChannelRead(newMsg), the original message must be released manually (see the sketch after this list).
E In inbound processing, if ctx.fireChannelRead(msg) is not called to pass the message on and SimpleChannelInboundHandler is not used for automatic release, the original message must be released manually.
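A hedged sketch of rule D (a handler fragment, not from the original article): the handler converts the inbound ByteBuf into a String, passes the new message down, and releases the original in a finally block.

@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    ByteBuf byteBuf = (ByteBuf) msg;
    try {
        // Convert to a new message and pass the new one down the pipeline
        String newMsg = byteBuf.toString(CharsetUtil.UTF_8);
        ctx.fireChannelRead(newMsg);
    } finally {
        // The original ByteBuf is no longer passed on, so release it manually
        ReferenceCountUtil.release(byteBuf);
    }
}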
Article reprinted: Le Byte