RPC (Remote Procedure Call) is a simple concept: call a method in a process on a remote machine over the network and get back the result, just as if the method were local.

Blocking I/O: with blocking I/O, a call to InputStream.read() blocks and does not return until data arrives, and ServerSocket.accept() likewise blocks until a client connects. The blocking I/O communication model looks like this:
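To make the blocking behavior concrete, here is a minimal JDK-only sketch (the class and method names are illustrative, not from the article's project): accept() and read() each block their calling thread until the event they are waiting for actually happens.

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class BlockingEchoDemo {
    // Echo one message through a blocking server and return what came back.
    static String roundTrip(String msg) throws Exception {
        ServerSocket server = new ServerSocket(0);   // port 0: let the OS pick a free port
        Thread serverThread = new Thread(() -> {
            try (Socket client = server.accept()) {  // accept() blocks until a client connects
                InputStream in = client.getInputStream();
                byte[] buf = new byte[1024];
                int n = in.read(buf);                // read() blocks until data arrives
                client.getOutputStream().write(buf, 0, n);  // echo it back
            } catch (IOException ignored) {
            }
        });
        serverThread.start();

        try (Socket s = new Socket("127.0.0.1", server.getLocalPort())) {
            s.getOutputStream().write(msg.getBytes("UTF-8"));
            byte[] buf = new byte[1024];
            int n = s.getInputStream().read(buf);    // also blocks until the echo arrives
            return new String(buf, 0, n, "UTF-8");
        } finally {
            serverThread.join();
            server.close();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("ping"));       // prints: ping
    }
}
```

Note that the server needs a dedicated thread per connection here; that per-thread cost is exactly the disadvantage discussed next.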

(figure: the blocking I/O communication model)

Disadvantages: when there are many clients, a large number of handler threads are created and each thread is allocated its own resources, and the blocking causes frequent context switches. NIO was introduced to address this.

NIO: introduced in JDK 1.4 (New I/O), NIO is a synchronous, non-blocking I/O model that polls I/O events to see whether they are ready. Non-blocking means the thread can do other work while waiting for I/O. The core of the synchronous part is the Selector, which polls I/O events on behalf of the threads, avoiding blocking and reducing unnecessary thread overhead. The core of the non-blocking part is the Channel and the Buffer: when an I/O event is ready, data can be written to or read from the channel.
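A minimal sketch of that Selector loop, using only the JDK (class and method names are illustrative): the server channel is put into non-blocking mode, the Selector reports which events are ready, and one loop reacts to accept and read events instead of dedicating a blocked thread to each connection.

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioSelectorDemo {
    // One Selector watches all channels; the loop polls ready events.
    static String serveOne(String msg) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", 0));
        server.configureBlocking(false);                    // non-blocking mode is required for a Selector
        server.register(selector, SelectionKey.OP_ACCEPT);  // ask to be notified of new connections

        // A plain blocking client, only here to generate events for the loop below.
        SocketChannel client = SocketChannel.open(server.getLocalAddress());
        client.write(ByteBuffer.wrap(msg.getBytes("UTF-8")));

        String received = null;
        while (received == null) {
            selector.select();                              // wait until at least one event is ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {                   // new connection: register it for reads
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {              // data is ready: read without blocking
                    ByteBuffer buf = ByteBuffer.allocate(64);
                    ((SocketChannel) key.channel()).read(buf);
                    buf.flip();
                    received = new String(buf.array(), 0, buf.limit(), "UTF-8");
                }
            }
        }
        client.close();
        server.close();
        selector.close();
        return received;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(serveOne("hello"));
    }
}
```

This is the raw JDK API that Netty's EventLoop wraps and optimizes.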

(figure: the NIO non-blocking I/O model)

How it works:

Thread communication: threads communicate through wait and notify, which ensures that every context switch is meaningful and reduces unnecessary thread switches.

Channel: a wrapper over the original I/O primitives; all data must pass through a Channel object. Common channels are FileChannel, SocketChannel, ServerSocketChannel, and DatagramChannel (which reads and writes data at both ends of a UDP network connection).
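As a small illustration of the "all data passes through a Channel" rule, here is a JDK-only FileChannel round trip (names are illustrative, not from the article's project): every write and read goes through a Buffer handed to the channel.

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FileChannelDemo {
    // All data crosses the Channel through a Buffer: write it out, then read it back.
    static String roundTrip(Path file, String text) throws Exception {
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.TRUNCATE_EXISTING)) {
            ch.write(ByteBuffer.wrap(text.getBytes(StandardCharsets.UTF_8)));
        }
        try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate((int) ch.size());
            ch.read(buf);   // the channel fills the buffer
            buf.flip();     // switch the buffer from write mode to read mode
            return StandardCharsets.UTF_8.decode(buf).toString();
        }
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("nio", ".txt");
        System.out.println(roundTrip(tmp, "channel demo"));
        Files.delete(tmp);
    }
}
```

SocketChannel and DatagramChannel are driven the same way: a Buffer in, a Buffer out.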

(figure: common Channel types)

Buffer: essentially a container backed by a contiguous array; all data that is read or written passes through a Buffer.
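A quick sketch of the Buffer mechanics (illustrative names): the position marker advances as you write, and flip() switches the buffer from write mode to read mode over the same contiguous array.

```java
import java.nio.ByteBuffer;

public class BufferDemo {
    // Write three bytes, flip, and read the first one back.
    static int writeThenRead() {
        ByteBuffer buf = ByteBuffer.allocate(8);       // a container over a contiguous byte[8]
        buf.put((byte) 1).put((byte) 2).put((byte) 3); // write mode: position advances to 3
        buf.flip();                                    // limit = 3, position = 0: read mode
        return buf.get();                              // reads the byte at position 0
    }

    public static void main(String[] args) {
        System.out.println(writeThenRead());           // prints: 1
    }
}
```

Forgetting to flip() before reading is a classic NIO bug: you would read from the empty region past the data instead of the data itself.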

(figure: Buffer structure)

Netty: an open source Java framework from JBoss. It is a high-performance, asynchronous, event-driven NIO framework built on the Java NIO API. It supports TCP, UDP, and file transfer, and all operations are asynchronous and non-blocking. Through its Future/Listener mechanism, Netty is essentially an implementation of the Reactor pattern: the Selector acts as the multiplexer and the EventLoop as the dispatcher. Netty also optimizes NIO's buffers, which greatly improves performance.

A brief introduction to the life cycle of Bootstrap and Channel on the Netty client and server:

Bootstrap: the bootstrap class that wires the ChannelPipeline, ChannelHandlers, and EventLoops together.

(figure: how Bootstrap associates ChannelPipeline, ChannelHandler, and EventLoop)

Bootstrap has two implementations:

ServerBootstrap: for the server side; a ServerChannel accepts client connections and creates the corresponding child channels.

Bootstrap: for the client side; only a single Channel is needed to configure the whole Netty program and make the connection.

The main differences between the two:

1. ServerBootstrap is used on the server side to bind a port and listen for connections; Bootstrap is used on the client side to connect to the server by calling connect() (you can also call bind() and obtain the Channel from the returned ChannelFuture).

2. A client Bootstrap usually uses one EventLoopGroup, while a ServerBootstrap uses two: the first binds the port and listens for connection events, and the second handles each accepted connection, which greatly increases concurrency.

```java
public class Server {
    public static void main(String[] args) throws Exception {
        // 1. Create two event loop groups: one accepts client connections,
        // the other handles network reads and writes
        EventLoopGroup pGroup = new NioEventLoopGroup();
        EventLoopGroup cGroup = new NioEventLoopGroup();
        // 2. Create the server-side bootstrap
        ServerBootstrap b = new ServerBootstrap();
        b.group(pGroup, cGroup)                        // bind the two thread groups
         .channel(NioServerSocketChannel.class)        // NIO mode: NioServerSocketChannel for TCP, NioDatagramChannel for UDP
         .option(ChannelOption.SO_BACKLOG, 1024)       // connection backlog size (the value was garbled in the original text)
         .option(ChannelOption.SO_SNDBUF, 32 * 1024)   // send buffer size
         .option(ChannelOption.SO_RCVBUF, 32 * 1024)   // receive buffer size
         .option(ChannelOption.SO_KEEPALIVE, true)     // keep the connection alive
         .childHandler(new ChannelInitializer<SocketChannel>() {
             @Override
             protected void initChannel(SocketChannel sc) throws Exception {
                 // sc is the child SocketChannel created for each accepted connection
                 sc.pipeline().addLast(new ServerHandler());
             }
         });
        // 3. Bind the port and wait synchronously for the bind to complete
        ChannelFuture cf1 = b.bind(8765).sync();
        //ChannelFuture cf2 = b.bind(8764).sync();     // more ports can be bound
        // 4. Wait for the server channel to close, then release the thread groups
        cf1.channel().closeFuture().sync();
        //cf2.channel().closeFuture().sync();
        pGroup.shutdownGracefully();
        cGroup.shutdownGracefully();
    }
}
```

```java
public class ServerHandler extends ChannelHandlerAdapter {
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
        System.out.println("server channel active...");
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        ByteBuf buf = (ByteBuf) msg;
        byte[] req = new byte[buf.readableBytes()];
        buf.readBytes(req);
        String body = new String(req, "utf-8");
        System.out.println("Server: " + body);
        String response = "response returned to client: " + body;
        ctx.writeAndFlush(Unpooled.copiedBuffer(response.getBytes()));
        // A ChannelFuture triggers its listener when the operation completes,
        // so if the server should close the connection it can do so here:
        // either directly with ctx.channel().close(),
        // or via .addListener(ChannelFutureListener.CLOSE);
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
        System.out.println("read complete");
        ctx.flush();
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable t) throws Exception {
        ctx.close();
    }
}
```

```java
public class Client {
    public static void main(String[] args) throws Exception {
        EventLoopGroup group = new NioEventLoopGroup();
        Bootstrap b = new Bootstrap();
        b.group(group)
         .channel(NioSocketChannel.class)
         .handler(new ChannelInitializer<SocketChannel>() {
             @Override
             protected void initChannel(SocketChannel sc) throws Exception {
                 sc.pipeline().addLast(new ClientHandler());
             }
         });
        ChannelFuture cf1 = b.connect("127.0.0.1", 8765).sync();
        //ChannelFuture cf2 = b.connect("127.0.0.1", 8764).sync();  // multiple ports can be used

        // Send messages with writeAndFlush
        cf1.channel().writeAndFlush(Unpooled.copiedBuffer("777".getBytes()));
        cf1.channel().writeAndFlush(Unpooled.copiedBuffer("666".getBytes()));
        Thread.sleep(2000);
        cf1.channel().writeAndFlush(Unpooled.copiedBuffer("888".getBytes()));
        //cf2.channel().writeAndFlush(Unpooled.copiedBuffer("999".getBytes()));
        cf1.channel().closeFuture().sync();
        //cf2.channel().closeFuture().sync();
        group.shutdownGracefully();
    }
}
```

```java
public class ClientHandler extends ChannelHandlerAdapter {
    @Override
    public void channelActive(ChannelHandlerContext ctx) throws Exception {
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        try {
            ByteBuf buf = (ByteBuf) msg;
            byte[] req = new byte[buf.readableBytes()];
            buf.readBytes(req);
            String body = new String(req, "utf-8");
            System.out.println("Client: " + body);
        } finally {
            // Remember to release msg in a handler: when writing data, the msg
            // reference is released automatically; but when only reading data,
            // the reference count MUST be released manually.
            ReferenceCountUtil.release(msg);
        }
    }

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) throws Exception {
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        ctx.close();
    }
}
```

Other components:

ChannelHandler: the most commonly used handler, used to handle events and received data; it carries our core business logic.

ChannelInitializer: when a connection is created we need to know how to receive or send data, and we have various Handler implementations for that; ChannelInitializer is used to configure those handlers. It provides a ChannelPipeline and adds the handlers to it.

ChannelPipeline: a Netty application is built on the ChannelPipeline mechanism, which relies on EventLoop and EventLoopGroup; all three are related to events and event handling.

EventLoop: handles the I/O of a channel; one EventLoop can serve multiple channels.

EventLoopGroup: contains multiple EventLoops.

Channel: represents one socket connection.

Future: in Netty all I/O operations are asynchronous, so we cannot know immediately whether a request has been processed. Instead we register a listener, which is triggered automatically when the operation succeeds or fails, and every operation returns a ChannelFuture.

Netty is a non-blocking, event-driven network programming framework, so let's look at the relationship between Channel, EventLoop, and EventLoopGroup:

(figure: the relationship between Channel, EventLoop, and EventLoopGroup)

When a connection comes in, Netty registers a Channel, and the EventLoopGroup assigns an EventLoop to that Channel; that EventLoop, which is a single thread, serves the Channel for its entire life cycle.

(figure: an EventLoop serving a Channel for its whole life cycle)

How does Netty handle data? The ChannelPipeline is responsible for ordering and executing the handlers. Data flows through the ChannelPipeline, and the ChannelHandlers are valves that the data passes through one by one. ChannelHandler has two subtypes, ChannelInboundHandler and ChannelOutboundHandler, and different handlers are selected depending on the direction of the flow.
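The inbound/outbound split maps directly onto decoding and encoding. As a JDK-only illustration of the framing work that Netty's encoder/decoder handlers automate, here is a hypothetical length-prefixed codec (the class and method names are invented for this sketch):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class CodecSketch {
    // Encode (outbound direction): message -> bytes,
    // as a 4-byte length header followed by the UTF-8 payload.
    static ByteBuffer encode(String msg) {
        byte[] body = msg.getBytes(StandardCharsets.UTF_8);
        ByteBuffer frame = ByteBuffer.allocate(4 + body.length);
        frame.putInt(body.length).put(body);
        frame.flip();                         // make the frame readable
        return frame;
    }

    // Decode (inbound direction): bytes -> message,
    // reading exactly as many payload bytes as the header promises.
    static String decode(ByteBuffer frame) {
        int len = frame.getInt();
        byte[] body = new byte[len];
        frame.get(body);
        return new String(body, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(decode(encode("rpc-call")));   // prints: rpc-call
    }
}
```

In Netty, the same idea would live in an outbound encoder handler and an inbound decoder handler sitting at the ends of the pipeline.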

(figure: inbound and outbound handlers in a ChannelPipeline)

As the figure shows, when a data stream enters the ChannelPipeline it is passed to each handler in turn, and the data is handed from one handler to the next. This is done by calling methods on the ChannelHandlerContext, which is what Netty uses to read and write the data stream.

Business processing in Netty: there are many handlers, and whether a handler is inbound or outbound depends on which adapter it extends. Netty provides a series of adapters to simplify development: each handler in the ChannelPipeline is responsible for passing events on to the next handler, and with these adapters that forwarding is done automatically, so we only need to override what we actually implement. The three ChannelHandlers we commonly use:

Encoders and decoders: on the network we can only transmit byte streams. When sending data we convert our message into bytes, which is called encoding; conversely, when receiving data we convert the bytes back into a message, which is called decoding.

Domain logic: what we really care about is how to handle the decoded data, and our real business logic runs when data is received. Netty provides a convenient base class, SimpleChannelInboundHandler<T>, where T is the type of data the handler processes. When a message reaches the handler, its channelRead0(ChannelHandlerContext, T) method is called automatically, with T as the incoming data object.

The project is on GitHub at https://github.com/developerxiaofeng/rpcByNetty, with the following directory structure:

(figure: project directory structure)

For project details, see the class comments; they are very thorough.
