Java NIO series of articles
- The underlying principles of high concurrency IO and four main IO models
- Four attributes and important methods of Buffer
- The Channel class
- The Selector class
Working principles of I/O read and write
As we all know, a user program's IO reads and writes ultimately depend on the operating system's low-level IO, which basically comes down to two major system calls: read and write.
There is a basic knowledge involved:
A read system call does not read data directly from the physical device into memory, and a write system call does not write data directly to the physical device.
Whether an upper-layer application calls read or write, buffers are involved. Specifically, a read system call copies data from the kernel buffer into the process buffer, while a write system call copies data from the process buffer into the kernel buffer.
Logically, moving a block of data from an external source (such as a hard disk) into a memory area inside a running process works as follows:
- First, the process populates its buffer by making a read() system call.
- The read call causes the kernel to issue commands to the disk controller hardware to fetch data from the disk.
- The disk controller writes data directly to the kernel memory buffer via DMA.
- After the disk controller completes filling the buffer, the kernel copies the data from the temporary buffer in kernel space to the process-specified buffer.
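The steps above can be seen even from plain Java: a blocking read() returns only after the kernel has copied data from its own buffer into the byte array (the process buffer) that we supply. A minimal sketch, in which the file path is just a placeholder:

```java
import java.io.FileInputStream;
import java.io.IOException;

public class ReadDemo {
    // Reads a whole file into memory. Each call to in.read() ends with the
    // kernel copying bytes from the kernel buffer into processBuffer.
    static String readAll(String path) throws IOException {
        try (FileInputStream in = new FileInputStream(path)) {
            StringBuilder sb = new StringBuilder();
            byte[] processBuffer = new byte[4096];   // user-space (process) buffer
            int n;
            // read() may block until the kernel buffer has data,
            // then copies kernel-buffer bytes into processBuffer.
            while ((n = in.read(processBuffer)) != -1) {
                sb.append(new String(processBuffer, 0, n));
            }
            return sb.toString();
        }
    }

    public static void main(String[] args) throws IOException {
        // "data.txt" is a hypothetical file name for illustration.
        System.out.println(readAll("data.txt"));
    }
}
```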
Why so many buffers?
The purpose of buffers is to reduce frequent physical exchanges with devices. Direct reads and writes to external devices involve operating-system interrupts. When an interrupt occurs, the previous process's data and state must be saved, and then restored after the interrupt ends. To reduce this low-level cost in time and performance, memory buffers were introduced.
With memory buffers, an upper-layer application's read system call simply copies data from the kernel buffer into the application's own buffer (the process buffer), and its write system call simply copies data from the process buffer into the kernel buffer. The lower layers monitor the kernel buffer, wait until it reaches a certain amount of data, then interrupt the IO devices and carry out the actual IO operations on the physical devices in one batch. This mechanism improves system performance. The operating-system kernel decides when to interrupt (read interrupt, write interrupt); the user program does not need to care.
In terms of numbers, on Linux the operating-system kernel has only one kernel buffer, while each user program (process) has its own independent buffer, called the process buffer. As a result, the IO operations of user programs, for the most part, do not perform actual IO against devices but instead exchange data directly between the process buffer and the kernel buffer.
File descriptor
File handle, also called file descriptor. In Linux, files are classified into common files, directory files, link files, and device files. A File Descriptor is an index created by the kernel to efficiently manage files that have been opened. It is a non-negative integer (usually a small integer) and is used to refer to the File that has been opened. All IO system calls, including read and write calls to sockets, are done through file descriptors.
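Java mostly hides file descriptors behind stream and channel objects, but java.io does expose them through FileInputStream.getFD(). A small sketch, assuming only that the file being opened exists (the path in main is a placeholder):

```java
import java.io.FileDescriptor;
import java.io.FileInputStream;
import java.io.IOException;

public class FdDemo {
    // Checks that an open stream is backed by a valid kernel-level descriptor.
    static boolean hasValidFd(String path) throws IOException {
        try (FileInputStream in = new FileInputStream(path)) {
            FileDescriptor fd = in.getFD();   // the handle the kernel uses for this file
            return fd.valid();
        }
    }

    public static void main(String[] args) throws IOException {
        // The standard streams also have well-known descriptors (0, 1, 2 on Linux).
        System.out.println("stdin valid: " + FileDescriptor.in.valid());
        System.out.println("file fd valid: " + hasValidFd("data.txt"));
    }
}
```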
Four main IO models
Before introducing the four IO models, we will introduce two sets of concepts
Blocking and non-blocking
Blocking IO means that the kernel's IO operation does not return to user space until the operation is complete; "blocking" describes the execution state of the user-space program. The traditional IO model is synchronous blocking IO. In Java, sockets are blocking by default.
Synchronous and asynchronous
Synchronous versus asynchronous describes how IO is initiated between user space and kernel space. Synchronous IO means the user-space thread actively initiates the IO request while the kernel passively receives it. Asynchronous IO, on the other hand, means the kernel actively initiates the IO request while the user-space thread passively receives the result.
Blocking IO
In Java application processes, all socket IO operations are blocking (and synchronous) by default.
In the blocking IO model, a Java application blocks from the moment of the IO system call until the system call returns. After a successful return, the application process begins processing the data in the user-space buffer.
- From the moment the Java program issues the read system call, the user thread is blocked.
- When the system kernel receives a read system call, it prepares the data. At first, the data may not have reached the kernel buffer (for example, a complete socket packet has not been received), at which point the kernel waits.
- The kernel waits until the full data arrives, copies the data from the kernel buffer to the user buffer (memory in user space), and then returns the result (for example, the number of bytes copied into the user buffer).
- It is not until the kernel returns that the user thread unblocks and runs again.
The advantages of blocking IO are:
Application development is very simple. The user thread simply hangs while waiting for data, and consumes very few CPU resources while blocked.
The disadvantages of blocking IO are:
Typically, each connection gets its own thread; in other words, one thread maintains the IO operations of one connection. With low concurrency this is fine. But in high-concurrency scenarios, where a large number of threads must maintain a large number of network connections, the overhead of memory and thread switching becomes huge. The blocking IO model is therefore essentially unusable for high concurrency.
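The thread-per-connection pattern just described looks roughly like this in classic blocking java.io. This is a sketch, not a production server: the OS picks a free port, and every accepted socket gets a dedicated thread whose read() blocks until the kernel has data for that connection.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class BlockingEchoServer {
    final ServerSocket server;

    BlockingEchoServer() throws IOException {
        server = new ServerSocket(0);            // port 0: let the OS pick a free port
    }

    int port() { return server.getLocalPort(); }

    // One thread per connection: accept() blocks waiting for a client,
    // then each connection is handled by its own (blocking) thread.
    void start() {
        Thread acceptor = new Thread(() -> {
            try {
                while (true) {
                    Socket client = server.accept();       // blocks for a connection
                    Thread worker = new Thread(() -> echo(client));
                    worker.setDaemon(true);
                    worker.start();
                }
            } catch (IOException ignored) { }
        });
        acceptor.setDaemon(true);
        acceptor.start();
    }

    static void echo(Socket client) {
        try (Socket c = client) {
            InputStream in = c.getInputStream();
            OutputStream out = c.getOutputStream();
            byte[] buf = new byte[1024];
            int n;
            while ((n = in.read(buf)) != -1) {   // read() blocks this thread
                out.write(buf, 0, n);            // echo the data back
            }
        } catch (IOException ignored) { }
    }

    public static void main(String[] args) throws IOException {
        BlockingEchoServer s = new BlockingEchoServer();
        s.start();
        System.out.println("echo server on port " + s.port());
    }
}
```

With one thread per connection, ten thousand connections mean ten thousand threads, which is exactly the overhead the later models avoid.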
Synchronous non-blocking IO (NIO, Non-Blocking IO)
- While the kernel data is not yet ready, the user thread's IO request returns immediately. So, to read the final data, the user thread needs to make IO system calls over and over again.
- Once the kernel data has arrived, the user thread's system call blocks. The kernel starts copying the data from the kernel buffer to the user buffer (memory in user space), and then returns the result (for example, the number of bytes copied into the user buffer).
- After the user thread reads the data, it unblocks and runs again. In other words, the user process needs several attempts before it can actually read the data and proceed.
Synchronous non-blocking IO features:
The application's threads need to keep making IO system calls, polling to check whether the data is ready; if it is not, they keep polling until the IO system call completes.
Advantages of synchronous non-blocking IO:
Each IO system call can return immediately while the kernel is still waiting for data. The user thread does not block, so responsiveness is good.
Disadvantages of synchronous non-blocking IO:
Constantly polling the kernel takes up a lot of CPU time and is inefficient.
In general, synchronous non-blocking IO is also unusable in high-concurrency scenarios, and web servers typically do not use this IO model. It is rarely used directly; instead, its non-blocking feature is exploited inside other IO models. This model also rarely appears in everyday Java development.
IO Multiplexing model
How can the polling wait of the synchronous non-blocking IO model be avoided? That is what the IO multiplexing model is for.
In the IO multiplexing model, a new kind of system call is introduced to query the ready state of IO. On Linux, the corresponding system calls are select and epoll. With them, a process can monitor multiple file descriptors; once a descriptor is ready (typically, its kernel buffer is readable or writable), the kernel returns the ready state to the application, which then issues the corresponding IO system call.
System calls supporting IO multiplexing include select, epoll, and others. select is supported on almost all operating systems and is well suited to cross-platform use; epoll was introduced in the Linux 2.6 kernel as a Linux-specific enhancement of select.
In the IO multiplexing model, through the select/epoll system call, a single application thread can continuously poll hundreds or thousands of socket connections, and when one or more of them reach an IO-ready state, return and perform the corresponding read or write.
An example illustrates the flow of the IO multiplexing model. An IO read under multiplexing proceeds as follows:
- Selector registration. First, the target socket connection that needs a read operation is registered in advance with a select/epoll selector; in Java, the corresponding class is java.nio.channels.Selector. Then the polling process of the IO multiplexing model can begin.
- Polling for ready state. The ready state of all registered socket connections is queried through the selector's query method; via this system call, the kernel returns a list of ready sockets. When the data of any registered socket is ready, that is, its kernel buffer has data, the kernel adds that socket to the ready list. Note that when the user process calls the select query method, the whole thread blocks.
- After the user thread obtains the ready list, it issues a read system call for a socket connection in the list, and the user thread blocks. The kernel starts copying the data from the kernel buffer to the user buffer.
- When the copy is complete, the kernel returns the result; the user thread unblocks, reads the data, and continues executing.
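In Java, this flow maps onto java.nio.channels.Selector. The sketch below registers one readable channel (an in-process Pipe standing in for a socket), blocks in select() until the kernel reports readiness, and only then performs the actual read:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class SelectorDemo {
    // Waits until the source end of the pipe is readable, then reads it.
    static String selectAndRead() throws IOException {
        Pipe pipe = Pipe.open();
        Selector selector = Selector.open();

        // 1. Registration: channels must be non-blocking to use a Selector.
        pipe.source().configureBlocking(false);
        pipe.source().register(selector, SelectionKey.OP_READ);

        // Make the channel readable (in real code, a remote peer writing to a socket).
        pipe.sink().write(ByteBuffer.wrap("ready".getBytes()));

        // 2. Readiness query: blocks until at least one registered channel is ready.
        selector.select();

        // 3. The actual IO system call, issued only for ready channels.
        ByteBuffer buf = ByteBuffer.allocate(16);
        String result = "";
        for (SelectionKey key : selector.selectedKeys()) {
            if (key.isReadable()) {
                pipe.source().read(buf);
                buf.flip();
                byte[] out = new byte[buf.remaining()];
                buf.get(out);
                result = new String(out);
            }
        }
        selector.close();
        return result;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(selectAndRead());
    }
}
```

In a real server, the same selector would hold thousands of registered socket channels, and the loop over selectedKeys() would dispatch each ready connection in turn.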
IO multiplexing is essentially the same as non-blocking IO, but with the new select system call, the kernel takes care of the polling that the requesting process would otherwise do itself. This may look like one extra system call of overhead compared with non-blocking IO, but the efficiency gain comes from the ability to monitor many IO channels at once.
Characteristics of IO multiplexing models
- Two kinds of system calls are involved:
  - one is select/epoll (the readiness query);
  - the other is the IO operation itself.
- Like the NIO model, IO multiplexing also requires polling: the thread responsible for the select/epoll readiness query must repeatedly call select/epoll to find the socket connections that are ready for IO.
Advantages of IO multiplexing model
The biggest advantage of select/epoll over the blocking model's one-thread-per-connection approach is that a single selector query thread can handle thousands of connections simultaneously. The system does not need to create or maintain large numbers of threads, which greatly reduces overhead.
Disadvantages of IO multiplexing models
In essence, the select/epoll system call is blocking and belongs to synchronous IO: after a read/write event is ready, the application itself is still responsible for performing the read or write, and that read/write process blocks.
To unblock threads completely, the asynchronous IO model must be used.
Asynchronous IO model
In the asynchronous IO model (AIO), the basic flow is: the user thread registers an IO operation with the kernel through a system call; after the entire IO operation (including data preparation and data copying) is complete, the kernel notifies the user program, and the user program then performs its subsequent business operations.
Let me give an example. Initiating a system call for an asynchronous IO read proceeds as follows:
- When a user thread makes a read system call, it can immediately start doing something else. The user thread does not block.
- The kernel begins the first stage of IO: preparing data. When the data is ready, the kernel copies the data from the kernel buffer to the user buffer (memory in user space).
- The kernel sends a signal to the user thread, or invokes the callback interface registered by the user thread, to tell the user thread that the read operation is complete.
- The user thread reads data from the user buffer to complete subsequent service operations.
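Java 7 exposes this model as the NIO.2 asynchronous channels. The sketch below uses AsynchronousFileChannel with a CompletionHandler: the read call returns immediately, and the completion callback fires only after the data has already been copied into the user buffer. (The CountDownLatch exists only so the demo can wait for the callback; a real application would keep working instead.)

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.channels.CompletionHandler;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.CountDownLatch;

public class AioDemo {
    // Asynchronously reads a small file; blocks only to wait for the callback.
    static String readAsync(Path path) throws IOException, InterruptedException {
        AsynchronousFileChannel ch =
                AsynchronousFileChannel.open(path, StandardOpenOption.READ);
        ByteBuffer buf = ByteBuffer.allocate(1024);
        CountDownLatch done = new CountDownLatch(1);
        StringBuilder result = new StringBuilder();

        // The read call registers the operation and returns immediately;
        // the handler runs after the data is already in buf.
        ch.read(buf, 0, null, new CompletionHandler<Integer, Void>() {
            @Override public void completed(Integer bytesRead, Void attachment) {
                buf.flip();
                byte[] out = new byte[buf.remaining()];
                buf.get(out);
                result.append(new String(out));
                done.countDown();
            }
            @Override public void failed(Throwable exc, Void attachment) {
                done.countDown();
            }
        });

        // The calling thread is free to do other work here.
        done.await();
        ch.close();
        return result.toString();
    }

    public static void main(String[] args) throws Exception {
        // "data.txt" is a hypothetical file name for illustration.
        System.out.println(readAsync(Paths.get("data.txt")));
    }
}
```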
Characteristics of the asynchronous IO model
The user thread is blocked neither while the kernel waits for data nor while it copies data. However, the user thread must receive the kernel's IO-completion event, or register an IO-completion callback. For this reason, asynchronous IO is sometimes called signal-driven IO.
Advantages of the asynchronous IO model
The application only needs to register and receive events, leaving everything else to the operating system, that is, to the underlying kernel. In theory, asynchronous IO is truly asynchronous, and its throughput is higher than that of the IO multiplexing model.
Disadvantages of the asynchronous IO model
The kernel must provide the underlying support. At present, Windows implements true asynchronous IO through IOCP. On Linux, the asynchronous IO model was introduced in kernel 2.6 and is still not mature; its underlying implementation still uses epoll, the same as IO multiplexing, so it has no clear performance advantage. Since most high-concurrency server-side applications run on Linux, the IO multiplexing model is what is mostly used when developing such high-concurrency network applications.