Hello everyone. Today I'd like to share Netty with you, so grab a notebook and take notes!
Introduction to Netty
Netty is an asynchronous event-driven network application framework for rapid development of maintainable high-performance protocol servers and clients.
Problems with native JDK NIO
The JDK also ships with a native NIO API for network applications, but it has a number of problems, mainly the following:
- The NIO class libraries and APIs are cumbersome to use; you need to master Selector, ServerSocketChannel, SocketChannel, ByteBuffer, and so on
- Additional skills are required, such as solid Java multithreaded programming. Because NIO programming involves the Reactor pattern, you must be familiar with multithreading and network programming to write high-quality NIO programs
- Building in reliability takes a great deal of work. For example, the developer must handle client disconnection and reconnection, intermittent network outages, half-packet reads and writes, failure caching, network congestion, and abnormal byte streams. The functional part of NIO programming is relatively easy; the reliability work is where the effort and difficulty lie
- JDK NIO has bugs, such as the infamous epoll bug that causes the Selector to spin on empty polls and eventually drives the CPU to 100%. The issue was claimed to be fixed in JDK 1.6 update 18, but it still occurs in JDK 1.7; it happens less often, but it was never completely resolved
Characteristics of Netty
Netty encapsulates the JDK's built-in NIO API and solves the problems above. Its main features are:
- An elegant, unified API for all transport types: blocking and non-blocking sockets; a flexible and extensible event model with a clear separation of concerns; a highly customizable threading model (a single thread, or one or more thread pools); true connectionless datagram socket support (since 3.1)
- Low requirements: JDK 5 (Netty 3.x) or JDK 6 (Netty 4.x) will suffice
- High performance: higher throughput and lower latency, reduced resource consumption, and minimized unnecessary memory copying
- Security: complete SSL/TLS and StartTLS support
- An active community with a short release cycle: reported bugs are fixed promptly, and new features keep being added
Common usage scenarios of Netty
Common Netty scenarios are as follows:
- In the Internet industry, distributed systems require remote service invocation between nodes, so a high-performance RPC framework is essential. As an asynchronous, high-performance communication framework, Netty is often used by these RPC frameworks as their underlying communication component. A typical example is Alibaba's distributed service framework Dubbo: its Dubbo protocol, used for inter-node communication, relies on Netty by default as the communication component between process nodes.
- Java is used more and more widely in the game industry, for both mobile game servers and large online games. As a high-performance communication component, Netty provides TCP/UDP and HTTP protocol stacks out of the box and makes it easy to develop custom private protocol stacks; login servers and map servers can communicate with each other efficiently through Netty.
- In the big data field, the RPC framework of Avro, the classic Hadoop high-performance communication and serialization component, uses Netty for cross-node communication by default; its Netty service is implemented as a thin wrapper over the Netty framework.
Interested readers can find out which open source projects currently use Netty: Related Projects
Netty high performance design
As an asynchronous event-driven network framework, Netty's high performance comes mainly from its I/O model, which determines how data is sent and received, and its threading model, which determines how data is processed.
I/O model
Which channel is used to exchange data (BIO, NIO, or AIO) largely determines the performance of a framework.
Blocking I/O
Traditional blocking I/O (BIO) can be represented as follows:
Characteristics
Each request requires a separate thread to complete data read, business processing, and data write operations
Problems
- When the number of concurrent connections is large, many threads must be created to handle them, consuming significant system resources
- After a connection is established, if the current thread temporarily has no data to read, it blocks on the read operation, wasting thread resources
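The one-thread-per-connection pattern described above can be sketched in plain JDK code (a minimal illustration, not production code; the class and method names are mine):

```java
import java.io.*;
import java.net.*;

public class BioEchoServer {
    // One connection, one thread: read, process, and write are all blocking calls.
    static void handle(Socket socket) {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {
                out.println(line); // blocking write: echo the line back
            }
        } catch (IOException ignored) {
        }
    }

    // Demo helper: start a server, spawn a thread for the accepted connection,
    // send one message from a client, and return the echoed reply.
    static String roundTrip(String msg) throws IOException {
        try (ServerSocket server = new ServerSocket(0)) { // ephemeral port
            new Thread(() -> {
                try {
                    handle(server.accept()); // accept blocks until a client connects
                } catch (IOException ignored) {
                }
            }).start();
            try (Socket client = new Socket("localhost", server.getLocalPort());
                 PrintWriter out = new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(new InputStreamReader(client.getInputStream()))) {
                out.println(msg);
                return in.readLine();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip("hello"));
    }
}
```

Every connected client costs a dedicated thread, which is exactly the resource problem listed above.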
The I/O multiplexing model uses select (or a similar call). This call does block the process, but unlike blocking I/O it can wait on multiple I/O operations at once: it tests multiple read and write descriptors simultaneously, and the real I/O function is invoked only when data is actually ready to be read or written.
The key to Netty’s non-blocking I/O implementation is based on the I/O multiplexing model, which is represented by a Selector object:
Netty’s IO thread NioEventLoop can process hundreds or thousands of client connections simultaneously due to its aggregation of multiplexer selectors. When a thread reads or writes data from a client Socket channel, it can perform other tasks if no data is available. Threads typically spend idle time of non-blocking IO performing IO operations on other channels, so a single thread can manage multiple input and output channels.
Because the read and write operations are non-blocking, the I/O thread runs at full efficiency, avoiding the thread suspension caused by frequent I/O blocking. One I/O thread can concurrently serve N client connections and their read/write operations, fundamentally solving the one-connection-one-thread problem of the traditional synchronous blocking I/O model. Performance, flexibility, and reliability of the architecture are all greatly improved.
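For contrast with the blocking model, here is a minimal single-threaded multiplexing sketch using the plain JDK Selector API that Netty builds on (the names and structure are illustrative, not Netty internals):

```java
import java.io.DataInputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.ByteBuffer;
import java.nio.channels.*;
import java.util.Iterator;

public class NioEchoServer {
    // One thread + one Selector multiplexes many channels: the thread only
    // touches a channel when the Selector reports it ready.
    static void eventLoop(Selector selector) throws Exception {
        while (selector.isOpen()) {
            selector.select(200); // wait for ready channels (with timeout so we can exit)
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (!key.isValid()) continue;
                if (key.isAcceptable()) { // a new connection is ready to accept
                    SocketChannel ch = ((ServerSocketChannel) key.channel()).accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) { // data is ready: echo it back
                    SocketChannel ch = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = ch.read(buf); // non-blocking read
                    if (n > 0) { buf.flip(); ch.write(buf); }
                    else if (n < 0) ch.close();
                }
            }
        }
    }

    // Demo helper: run the event loop in one thread, talk to it with a plain client.
    static String roundTrip(String msg) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.bind(new InetSocketAddress(0));
        server.register(selector, SelectionKey.OP_ACCEPT);
        new Thread(() -> { try { eventLoop(selector); } catch (Exception ignored) {} }).start();
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();
        try (Socket client = new Socket("localhost", port)) {
            client.getOutputStream().write(msg.getBytes());
            byte[] reply = new byte[msg.getBytes().length];
            new DataInputStream(client.getInputStream()).readFully(reply);
            return new String(reply);
        } finally {
            selector.close();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("ping"));
    }
}
```

A single event-loop thread serves every connection; Netty's NioEventLoop wraps this same mechanism with task queues and a handler pipeline.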
Based on the buffer
Traditional I/O is oriented to byte streams or character streams, reading one or more bytes sequentially from a stream, so the position of the read pointer cannot be changed arbitrarily.
In NIO, the traditional I/O streams were abandoned and the concepts of channels and buffers were introduced. In NIO, data can only be read from a Channel to a Buffer or written from a Buffer to a Channel.
Unlike the sequential operations of traditional I/O, buffer-based operations in NIO can read data at any position.
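The movable read pointer can be seen directly in the JDK ByteBuffer API (a small illustrative snippet):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public class BufferDemo {
    // A Buffer keeps an explicit read pointer (position) that can be moved at will,
    // which a stream cannot do.
    static int[] demo() {
        ByteBuffer buf = ByteBuffer.allocate(16);
        buf.put((byte) 10).put((byte) 20).put((byte) 30);
        buf.flip();              // switch from write mode to read mode
        int first = buf.get();   // sequential read: 10
        buf.position(2);         // jump the read pointer: random access
        int third = buf.get();   // 30
        buf.rewind();            // re-read from the beginning
        int again = buf.get();   // 10
        return new int[]{first, third, again};
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(demo())); // [10, 30, 10]
    }
}
```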
Threading model
How are datagrams read? On which thread is decoding performed after the read? How are decoded messages dispatched? The threading model answers these questions, and it has a great influence on performance.
Event-driven model
In general, there are two approaches to designing an event-handling model:
- Polling: a thread continuously polls the relevant event sources to check whether an event has occurred and, if so, invokes the event-handling logic
- Event-driven: the main thread puts events into an event queue, and another thread continuously loops over the queue, consuming events and invoking the corresponding handling logic. The event-driven approach, also called the message-notification approach, is the idea behind the Observer design pattern
Take GUI logic processing as an example to illustrate the difference between the two logics:
- Polling: a thread keeps checking whether a button-click event has occurred and invokes the handling logic if it has
- Event-driven: when a click occurs, the event is put into the event queue; another thread consumes the queue and invokes the relevant handling logic based on the event type
Here is O'Reilly's illustration of the event-driven model:
It consists of four basic components:
- Event Queue: the entry point that receives events and stores those waiting to be processed
- Event Mediator: distributes different events to different business logic units
- Event Channel: the communication channel between the mediator and the processors
- Event Processor: implements the business logic; after processing completes, it emits an event to trigger the next operation
It can be seen that compared with the traditional polling mode, event-driven has the following advantages:
- Good scalability: a distributed asynchronous architecture with loose coupling between event processors makes it easy to extend the event-processing logic
- High performance: events are buffered in queues, which makes parallel asynchronous processing convenient
Reactor thread model
Reactor refers to the reactor pattern: one or more inputs are delivered concurrently to a service handler, which demultiplexes the incoming requests and dispatches them synchronously to the corresponding processing threads. The Reactor pattern is also called the Dispatcher pattern: I/O multiplexing is used to listen for events in one place, and each event is dispatched to a handler once received. It is one of the essential techniques for writing high-performance network servers.
There are two key components in the Reactor model:
- The Reactor runs in a separate thread, listening for I/O events and dispatching them to the appropriate handlers. It acts like a company switchboard operator who answers calls from customers and redirects each line to the right contact
- Handlers perform the actual work that an I/O event requires, like the actual employee the customer wants to talk to. The Reactor responds to I/O events by dispatching the appropriate handler, and handlers perform non-blocking operations
Depending on the number of Reactors and the number of handler threads, the Reactor model has three variants:
- Single Reactor, single thread
- Single Reactor, multiple threads
- Master-slave Reactor, multiple threads
A Reactor is a thread that runs a while (true) { selector.select(); ... } loop. It produces an endless stream of new events, which is why the name reactor is so apt.
There is no longer a specific comparison of Reactor characteristics, advantages and disadvantages. Readers who are interested may refer to my previous article: Understanding High-performance Network Models.
Netty threading model
Netty's threading model is mainly an adaptation of the master-slave Reactor multithreading model (shown in the figure below), in which there are multiple main Reactors (MainReactor) and sub Reactors (SubReactor):
- The MainReactor handles client connection requests and hands the established connections off to a SubReactor
- The SubReactor is responsible for the I/O requests of its channels
- Non-I/O tasks (application-specific processing logic) are written directly to a queue, where worker threads pick them up
The master-slave Reactor multithreading model, as described in Scalable IO in Java:
EventLoopGroup bossGroup = new NioEventLoopGroup();
EventLoopGroup workerGroup = new NioEventLoopGroup();
ServerBootstrap server = new ServerBootstrap();
server.group(bossGroup, workerGroup)
.channel(NioServerSocketChannel.class)
The bossGroup and workerGroup in the code above are the two objects passed to ServerBootstrap's group method; both are thread pools
- The bossGroup thread pool binds a port on one of its threads, which acts as the MainReactor and processes accept events for that port; each port has one boss thread
- The workerGroup thread pool is fully utilized by the SubReactors and worker threads
Asynchronous processing
Asynchrony is the opposite of synchronization. When an asynchronous procedure call is made, the caller does not get the result immediately. The part that actually handles the call notifies the caller through status, notifications, and callbacks after completion.
In Netty, I/O operations are asynchronous: bind, write, connect, and other operations simply return a ChannelFuture, so the caller does not get the result immediately. Users can obtain the I/O operation result actively or through a notification mechanism.
When the future object is created, it is in an incomplete state. The caller can use the returned ChannelFuture to obtain the state of the operation and register the listener function to perform the completed operation. Common operations include:
- The isDone method checks whether the current operation is complete
- The isSuccess method checks whether the completed operation was successful
- The getCause method retrieves the cause of failure of a completed operation
- The isCancelled method checks whether the completed operation was cancelled
The addListener method registers listeners: when the operation completes (isDone returns true), the specified listeners are notified. If the Future is already complete when the listener is added, the listener is notified immediately.
For example, in the following code, the binding port is an asynchronous operation. When the binding operation is finished, the corresponding listener processing logic will be invoked
serverBootstrap.bind(port).addListener(future -> {
    if (future.isSuccess()) {
        System.out.println(new Date() + ": port [" + port + "] bound successfully!");
    } else {
        System.err.println("Port [" + port + "] binding failed!");
    }
});
Compare this with traditional blocking I/O, where the thread blocks during an I/O operation until it completes. The benefit of asynchronous processing is that threads never block: they can do other work during I/O operations, which yields better stability and throughput under high concurrency.
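The same callback-instead-of-blocking idea can be illustrated with the JDK's CompletableFuture (an analogy only: ChannelFuture is Netty's own type, and bindAsync here is a made-up stand-in for an asynchronous operation):

```java
import java.util.concurrent.CompletableFuture;

public class AsyncDemo {
    // The caller gets a future back immediately; the result arrives later,
    // either by polling, blocking, or a registered callback.
    static CompletableFuture<String> bindAsync(int port) {
        return CompletableFuture.supplyAsync(() -> "port [" + port + "] bound");
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        bindAsync(8080)
            .thenAccept(log::append) // callback, like ChannelFuture.addListener
            .join();                 // join only so the demo can print; real code would not block
        System.out.println(log);     // port [8080] bound
    }
}
```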
Netty architecture design
After introducing some theories about Netty, the following describes the Netty architecture design in terms of functions, features, modules, components, and operating processes
Features
- Transport services: supports BIO and NIO
- Container integration: supports OSGi, JBossMC, Spring, and Guice containers
- Protocol support: common protocols such as HTTP, Protobuf, binary, text, and WebSocket; custom protocols can also be supported by implementing the encoding and decoding logic
- Core: an extensible event model, a universal communication API, and the zero-copy ByteBuf buffer object
Module components
Bootstrap and ServerBootstrap
Bootstrap means "bootstrapping": a Netty application usually starts from a Bootstrap, which configures the whole program and wires the components together. In Netty, Bootstrap is the bootstrap class for clients and ServerBootstrap is the bootstrap class for servers.
Future and ChannelFuture
As mentioned earlier, all I/O operations in Netty are asynchronous: you cannot immediately know whether a message was processed correctly. Instead, you can wait for the operation to complete, or register a listener. This is implemented through Future and ChannelFuture: registered listeners are triggered automatically when the operation succeeds or fails.
Channel
Channel is the Netty component for network communication; it is used to perform network I/O operations. A Channel provides the user with:
- The current state of the network connection (for example: is it open? is it connected?)
- The configuration parameters of the network connection (for example, the receive buffer size)
- Asynchronous network I/O operations (such as establishing connections, reading and writing, and binding ports). Asynchronous means that any I/O call returns immediately, with no guarantee that the requested operation has completed by the time the call returns. The call immediately returns a ChannelFuture instance, and by registering listeners on the ChannelFuture the caller can be notified via callback when the I/O operation succeeds, fails, or is cancelled
- Association of I/O operations with their corresponding handlers
Different Channel types correspond to connections with different protocols and blocking modes. Some common Channel types are:
- NioSocketChannel: an asynchronous client-side TCP socket connection
- NioServerSocketChannel: an asynchronous server-side TCP socket connection
- NioDatagramChannel: an asynchronous UDP connection
- NioSctpChannel: an asynchronous client-side SCTP connection
- NioSctpServerChannel: an asynchronous server-side SCTP connection

These Channel types cover TCP and UDP network I/O as well as file I/O.
Selector
Netty implements I/O multiplexing based on the Selector object. With a Selector, one thread can listen for events on multiple connected Channels. After a Channel is registered with a Selector, the Selector continuously and automatically queries the registered Channels for ready I/O events (readable, writable, connection completed, and so on), so a single thread can efficiently manage many Channels.
NioEventLoop
NioEventLoop maintains a thread and a task queue, and supports asynchronous submission of tasks. When its thread starts, NioEventLoop's run method is called to execute both I/O tasks and non-I/O tasks:
- I/O tasks: the ready events on a SelectionKey, such as accept, connect, read, and write, triggered by the processSelectedKeys method
- Non-I/O tasks: tasks added to the taskQueue, such as register0 and bind0, triggered by the runAllTasks method
The ratio of time spent on the two task types is controlled by ioRatio; the default is 50, meaning non-I/O tasks are allowed as much time as I/O tasks.
NioEventLoopGroup
NioEventLoopGroup manages the life cycle of its EventLoops and can be thought of as a thread pool. It maintains a group of threads; each thread (a NioEventLoop) handles the events of multiple Channels, while each Channel is served by exactly one thread.
ChannelHandler
ChannelHandler is an interface that processes I/O events or intercepts I/O operations and forwards them to the next handler in its ChannelPipeline(business processing chain).
In practice you rarely implement ChannelHandler directly, because the interface leaves many methods to implement; for convenience, you can use its subinterfaces instead:
- ChannelInboundHandler: handles inbound I/O events
- ChannelOutboundHandler: handles outbound I/O operations
Or use the following adapter classes:
- ChannelInboundHandlerAdapter: handles inbound I/O events
- ChannelOutboundHandlerAdapter: handles outbound I/O operations
- ChannelDuplexHandler: handles both inbound and outbound events
ChannelHandlerContext
Saves all context information associated with a Channel, along with a ChannelHandler object
ChannelPipeline
Saves a List of ChannelHandlers that handle or intercept inbound events and outbound operations for a Channel. ChannelPipeline implements an advanced form of intercepting filter pattern that gives the user complete control over how events are handled and how the various Channelhandlers in a Channel interact with each other.
In Netty, each Channel has only one Channel pipeline corresponding to it, and their composition relationship is as follows:
A Channel contains one ChannelPipeline, which maintains a doubly linked list of ChannelHandlerContexts; each ChannelHandlerContext is associated with one ChannelHandler. In this list, inbound events travel from the head toward the last inbound handler, while outbound events travel from the tail toward the last outbound handler. The two types of handler do not interfere with each other.
How it works
The process for initializing and starting the Netty server is as follows:
public static void main(String[] args) {
    NioEventLoopGroup boosGroup = new NioEventLoopGroup();
    NioEventLoopGroup workerGroup = new NioEventLoopGroup();
    final ServerBootstrap serverBootstrap = new ServerBootstrap();
    serverBootstrap.group(boosGroup, workerGroup)
            // Set the channel type to NIO
            .channel(NioServerSocketChannel.class)
            // Set connection configuration parameters
            .option(ChannelOption.SO_BACKLOG, 1024)
            .childOption(ChannelOption.SO_KEEPALIVE, true)
            .childOption(ChannelOption.TCP_NODELAY, true)
            // Configure inbound and outbound event handlers
            .childHandler(new ChannelInitializer<NioSocketChannel>() {
                @Override
                protected void initChannel(NioSocketChannel ch) {
                    ch.pipeline().addLast(...);
                    ch.pipeline().addLast(...);
                }
            });
    // Bind the port
    int port = 8080;
    serverBootstrap.bind(port).addListener(future -> {
        if (future.isSuccess()) {
            System.out.println(new Date() + ": port [" + port + "] bound successfully!");
        } else {
            System.err.println("Port [" + port + "] binding failed!");
        }
    });
}
The basic process is as follows:
1 First, two NioEventLoopGroups are created: boosGroup handles accept events (connection establishment) and dispatches requests, while workerGroup handles I/O read/write events and business logic
2 Based on ServerBootstrap(server boot class), configure the EventLoopGroup, Channel type, connection parameters, and inbound and outbound event handlers
3 Bind ports and start working
Based on the Netty Reactor model introduced above, the server-side Netty architecture diagram is as follows:
The server contains 1 Boss NioEventLoopGroup and 1 Worker NioEventLoopGroup. NioEventLoopGroup is equivalent to an event loop group, which contains multiple event loops NioEventLoop. Each NioEventLoop contains one selector and one event loop thread.
Each Boss NioEventLoop executes a task consisting of three steps:
1 Poll the Accept event
2 Process the accept I/O event: establish a connection with the client, create a NioSocketChannel, and register it with the Selector of a Worker NioEventLoop
3 runAllTasks: process the tasks in the task queue. These include tasks submitted by users via eventloop.execute or schedule, and tasks submitted to this EventLoop by other threads.
Each Worker NioEventLoop consists of three steps:
1 Poll read and write events.
2 Process I/O events: read and write events, triggered when a NioSocketChannel becomes readable or writable
3 runAllTasks to process tasks in the task queue.
There are three typical application scenarios for tasks in the task queue
1 User programs customize common tasks
ctx.channel().eventLoop().execute(new Runnable() {
@Override
public void run() {
//...
}
});
2 Methods on a Channel invoked by a thread other than its Reactor thread. For example, in the business thread of a push system, the Channel reference is looked up by user ID and a write method is called to push a message to that user; this falls into this scenario. The write is ultimately submitted to the task queue and consumed asynchronously.
3 User-defined scheduled task
ctx.channel().eventLoop().schedule(new Runnable() {
@Override
public void run() {
}
}, 60, TimeUnit.SECONDS);
Conclusion
The currently recommended mainstream stable version is Netty 4.x. Netty 5, which used ForkJoinPool, increased code complexity without a significant performance improvement; it is not recommended and is no longer available for download.
Netty's barrier to entry is relatively high mainly because there is less material about it, not because it is especially difficult; you can master Netty just as you mastered Spring. Before diving in, it helps to understand the overall framework structure and operating process first, which will save you many detours.
Well, that's all for today's article. I hope it helps those of you who were confused in front of the screen.