1. Traditional BIO programming
The basic model of network programming is the client/server model: two processes communicate with each other. The server provides its location information (a bound IP address and a listening port), the client sends a connection request to the address the server is listening on, and the connection is established through the TCP three-way handshake. Once the connection is established, the two parties can communicate through a network Socket. In development based on the traditional synchronous blocking model, ServerSocket is responsible for binding the IP address and starting to listen on the port, while Socket initiates the connection. After the connection succeeds, the two sides communicate through synchronously blocking input and output streams.
First, let's familiarize ourselves with BIO's server communication model. In the BIO communication model, a separate Acceptor thread is responsible for listening for client connections. After receiving a client connection request, it creates a new thread for each client to handle the connection. After processing, the thread returns the reply to the client through the output stream and is then destroyed. This is a typical request-reply communication model.
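A minimal sketch of this thread-per-connection model is shown below; the port number 8080 and the line-based echo handler are assumptions for illustration, not part of the original text:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class BioEchoServer {
    public static void main(String[] args) throws Exception {
        // The server binds the IP address and listens on a port (8080 is just an example).
        try (ServerSocket serverSocket = new ServerSocket(8080)) {
            while (true) {
                // The Acceptor logic blocks here until a client connects.
                Socket socket = serverSocket.accept();
                // One new thread per client connection: the 1:1 model described above.
                new Thread(() -> handle(socket)).start();
            }
        }
    }

    private static void handle(Socket socket) {
        try (BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            String line;
            // Reads block until the client sends data; the reply is written back synchronously.
            while ((line = in.readLine()) != null) {
                out.println("echo: " + line);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```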
The biggest problem with this model is its lack of elastic scalability. As concurrent client access increases, the number of server threads grows in a 1:1 ratio with the number of concurrent clients. Because threads are a very valuable system resource for the Java virtual machine, system performance drops sharply once the thread count expands. As concurrent traffic continues to increase, the system suffers thread stack overflows and failures to create new threads, and the process eventually goes down or becomes a zombie, unable to provide services.
2. Pseudo-asynchronous I/O programming
To solve the problem that synchronous blocking I/O requires one thread per connection, the threading model was optimized: the back end handles multiple client access requests through a thread pool, forming an M:N proportional relationship between the number of clients (M) and the maximum number of threads in the thread pool (N), where M can be much larger than N. The thread pool allows thread resources to be allocated flexibly, and setting a maximum number of threads prevents thread exhaustion caused by massive concurrent access.
When a new client connects to the server, the client's Socket is wrapped into a Task (a class implementing the java.lang.Runnable interface) and submitted to the back-end thread pool for processing. The JDK thread pool maintains a message queue and N active threads that process the tasks in the queue. Because the thread pool can limit both the size of the message queue and the maximum number of threads, its resource usage is controllable: no matter how many clients access it concurrently, it will not exhaust resources or bring the server down.
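A rough sketch of this optimization might look as follows; the pool sizes, port, and echo-style task body are illustrative assumptions:

```java
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PseudoAsyncServer {
    public static void main(String[] args) throws Exception {
        // Bounded pool: N worker threads plus a fixed-size task queue, so resource usage is controllable.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                8, 8, 60L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(1000));
        try (ServerSocket serverSocket = new ServerSocket(8080)) {
            while (true) {
                Socket socket = serverSocket.accept();
                // The client Socket is wrapped into a Runnable task and submitted to the pool,
                // so M clients share N worker threads (M can be much larger than N).
                pool.execute(() -> handle(socket));
            }
        }
    }

    private static void handle(Socket socket) {
        // The worker thread still performs synchronously blocking reads and writes here,
        // which is why a slow peer can tie up a pooled thread for the full wait time.
        try (Socket s = socket) {
            byte[] buf = new byte[1024];
            int n = s.getInputStream().read(buf);   // blocks until data arrives
            if (n > 0) {
                s.getOutputStream().write(buf, 0, n);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```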
Pseudo-asynchronous I/O is actually just a simple optimization of the previous I/O threading model. It does not fundamentally solve the blocking of communication threads caused by synchronous I/O. Below, we briefly analyze the cascading failure caused by a single slow response:
- The server processes slowly and takes 60 seconds to return the reply message, whereas it normally takes only 10 ms.
- A pseudo-asynchronous I/O thread reading the response from the faulty service node is synchronously blocked for 60 seconds, because reads on the input stream block.
- Once all available threads are blocked by the faulty server, all subsequent I/O messages are queued.
- Because the thread pool is implemented with a blocking queue, subsequent enqueue operations block once the queue is full.
- Because the front end has only one Acceptor thread receiving client connections, and it is blocked on the thread pool's full blocking queue, new client request messages are rejected and a large number of connection timeouts occur on the client side.
- Since almost all connections time out, callers conclude that the system has crashed, and it can no longer receive new request messages.
3. NIO programming
NIO stands for New I/O, the non-blocking I/O support added to the Java platform; it is also commonly read as Non-blocking I/O.
NIO provides two Socket channel implementations, SocketChannel and ServerSocketChannel, corresponding to the Socket and ServerSocket classes. Both new channels support blocking and non-blocking modes. Blocking mode is very simple to use but has poor performance and reliability, while non-blocking mode is the opposite. In general, low-load, low-concurrency applications can choose synchronous blocking I/O to reduce programming complexity, whereas high-load, high-concurrency network applications need to be developed with NIO's non-blocking mode.
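A minimal non-blocking sketch built on ServerSocketChannel, SocketChannel, and Selector is shown below; the port and the echo behaviour are assumptions for illustration:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioEchoServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(8080));
        server.configureBlocking(false);                      // non-blocking mode
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                                // waits until channels are ready
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = server.accept();   // returns immediately
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = client.read(buf);                 // does not block in non-blocking mode
                    if (n > 0) {
                        buf.flip();
                        client.write(buf);                    // echo back
                    } else if (n < 0) {
                        client.close();                       // peer closed the connection
                    }
                }
            }
        }
    }
}
```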
4. AIO programming
NIO 2.0 introduces the concept of asynchronous channels and provides implementations of an asynchronous file channel and an asynchronous socket channel. An asynchronous channel provides two ways to obtain the result of an operation:
- Through the java.util.concurrent.Future class, which represents the result of the asynchronous operation;
- By passing in an implementation of the java.nio.channels.CompletionHandler interface when performing the asynchronous operation, which acts as the callback invoked on completion.
NIO 2.0's asynchronous socket channel is truly asynchronous, non-blocking I/O, corresponding to asynchronous, event-driven I/O (AIO) in UNIX network programming. It simplifies the NIO programming model: asynchronous reads and writes no longer require polling registered channels through a Selector.
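The following sketch shows both result-retrieval styles on an asynchronous server socket channel; the port, buffer size, and echo behaviour are illustrative assumptions, and Future.get() is used here only to keep the demo short:

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;
import java.util.concurrent.Future;

public class AioEchoServer {
    public static void main(String[] args) throws Exception {
        AsynchronousServerSocketChannel server =
                AsynchronousServerSocketChannel.open().bind(new InetSocketAddress(8080));

        // Style 2: pass a CompletionHandler as the callback for operation completion.
        server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
            @Override
            public void completed(AsynchronousSocketChannel client, Void attachment) {
                server.accept(null, this);               // keep accepting further connections

                // Style 1: a Future represents the result of the asynchronous read.
                ByteBuffer buf = ByteBuffer.allocate(1024);
                Future<Integer> readResult = client.read(buf);
                try {
                    if (readResult.get() > 0) {          // get() blocks; used only to simplify the demo
                        buf.flip();
                        client.write(buf);               // asynchronous write of the echo
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }

            @Override
            public void failed(Throwable exc, Void attachment) {
                exc.printStackTrace();
            }
        });

        Thread.currentThread().join();                   // keep the JVM alive for the demo
    }
}
```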
5. I/O model comparison
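Summarising the sections above, the models compare roughly as follows (a simplified overview, not an exhaustive comparison):

| Model | Blocking behaviour | Thread : client relationship | Typical API |
| --- | --- | --- | --- |
| BIO (synchronous blocking) | Blocking reads and writes | 1 : 1, one thread per connection | ServerSocket / Socket |
| Pseudo-asynchronous I/O | Worker threads still block on I/O | N : M, thread pool shared by many clients | ServerSocket / Socket + thread pool |
| NIO (non-blocking) | Non-blocking, multiplexed | One Selector thread serves many channels | ServerSocketChannel / SocketChannel / Selector |
| AIO (asynchronous) | Asynchronous, no Selector polling | Completion callbacks or Futures | AsynchronousServerSocketChannel / CompletionHandler |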
6. Mainstream NIO framework
Currently, the two mainstream NIO frameworks in the industry are Mina and Netty, both open-sourced under the Apache License 2.0. The difference is that Mina is the Apache Foundation's official NIO framework, while Netty was originally JBoss's NIO framework; it later separated from JBoss and registered the netty.io domain independently, and as a result its API packages could not remain backward compatible.
Mina and Netty also share some history. The architect of the original version of Mina was Trustin Lee, who for various reasons left the Mina community and joined the Netty team, where Netty was redesigned and developed. Many readers will see Mina's shadow in Netty: the architectural concepts of the two frameworks are similar, and even some of the code looks very alike, and this history is why.
At present, both Mina and Netty are widely used, and many open source frameworks adopt one of them as the underlying NIO framework. For example, Avro, the communication component used by Hadoop, uses Netty as its underlying communication framework, while Openfire uses Mina. The Netty community is currently more active and iterates through versions faster than Mina.
7. Why study Netty
7.1 Reasons for not choosing native Java NIO
- NIO's class libraries and APIs are cumbersome to use: you need to be familiar with Selector, ServerSocketChannel, SocketChannel, ByteBuffer, and so on.
- Additional skills are required, such as familiarity with Java multithreaded programming. Because NIO programming involves the Reactor pattern, you must be very familiar with multithreading and network programming to write high-quality NIO programs.
- Completing the reliability capabilities involves a very large workload and great difficulty. For example, the client has to deal with disconnection and reconnection, transient network outages, half-packet reads and writes, caching of failed messages, network congestion, and abnormal byte streams. Functional development with NIO is relatively easy, but the workload and difficulty of building in reliability are very large.
7.2 Reasons for choosing Netty
- The API is simple to use and the development threshold is low (see the minimal sketch after this list);
- It is powerful, with a variety of built-in codecs and support for many mainstream protocols;
- The communication framework can be flexibly extended through ChannelHandler;
- It offers high performance: compared with other mainstream NIO frameworks in the industry, Netty has the best overall performance;
- It is mature and stable: Netty has already worked around the discovered JDK NIO bugs, so business developers no longer need to worry about them;
- The community is active and the release cycle is short: discovered bugs are fixed promptly and new features are continually added;
- It has been tested in large-scale commercial applications and its quality has been verified. It has been successfully used commercially in the Internet, big data, online gaming, enterprise application, telecom software, and many other industries, proving that it can fully meet the needs of business applications across different fields.
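To give a feel for the low development threshold mentioned above, here is a minimal echo-server sketch against the netty-all 4.1.x API referenced later in this article; the port and the inline echo handler are illustrative assumptions:

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

public class NettyEchoServer {
    public static void main(String[] args) throws Exception {
        EventLoopGroup bossGroup = new NioEventLoopGroup(1);   // accepts connections
        EventLoopGroup workerGroup = new NioEventLoopGroup();  // handles I/O
        try {
            ServerBootstrap bootstrap = new ServerBootstrap();
            bootstrap.group(bossGroup, workerGroup)
                     .channel(NioServerSocketChannel.class)
                     .childHandler(new ChannelInitializer<SocketChannel>() {
                         @Override
                         protected void initChannel(SocketChannel ch) {
                             // ChannelHandlers are the extension point of the framework.
                             ch.pipeline().addLast(new ChannelInboundHandlerAdapter() {
                                 @Override
                                 public void channelRead(ChannelHandlerContext ctx, Object msg) {
                                     ctx.writeAndFlush(msg);   // echo the received ByteBuf back
                                 }
                             });
                         }
                     });
            // Bind the port and block until the server channel is closed.
            bootstrap.bind(8080).sync().channel().closeFuture().sync();
        } finally {
            bossGroup.shutdownGracefully();
            workerGroup.shutdownGracefully();
        }
    }
}
```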
8. Set up Netty development environment
8.1 Netty official website
https://netty.io/
8.2 Maven Repository address
https://mvnrepository.com/artifact/io.netty/netty-all
8.3 Maven dependency
```xml
<!-- https://mvnrepository.com/artifact/io.netty/netty-all -->
<dependency>
    <groupId>io.netty</groupId>
    <artifactId>netty-all</artifactId>
    <version>4.1.66.Final</version>
</dependency>
```
8.4 Gradle dependency
```groovy
// https://mvnrepository.com/artifact/io.netty/netty-all
implementation group: 'io.netty', name: 'netty-all', version: '4.1.66.Final'
```