preface

Hello, everyone. I’m Tian Luo, a programmer. Today we are going to learn about IO models. Before we begin, let me ask you a few questions:

  • What is IO?
  • What are blocking and non-blocking IO?
  • What are synchronous and asynchronous IO?
  • What is IO multiplexing?
  • How do select and epoll relate to the IO models?
  • How many classic IO models are there?
  • What is the difference between BIO, NIO, and AIO?

If you can answer these questions well, congratulations, you have already mastered IO; read this article with me anyway to review and deepen the impression. If you are fuzzy on any of them, that’s fine too: after reading this article, you will understand!

  • WeChat official account: a boy picking up snails

What is IO?

IO is short for Input/Output. We hear the term a lot: disk IO, network IO, and so on. But what exactly is IO? You may have a vague feeling that you know what it is, yet can’t quite put your finger on it.

Input and output: what is being input, and what is being output? Talking about IO without naming its subject can be confusing.

IO of the computer

When we talk about input and output, the most intuitive meaning is the input and output of the computer, with the computer as the subject. You may remember from the Principles of Computer Organization course in college the von Neumann architecture, which divides a computer into five parts: the arithmetic unit, the controller, memory, input devices, and output devices.

An input device feeds data and instructions into the computer; the keyboard and mouse are input devices. An output device is a terminal device of the computer hardware system, used to receive and display the computer’s output data; displays and printers are typical output devices.

For example, when you type on the keyboard or click the mouse, the device passes your command data to the host; the host performs the computation and outputs the resulting data to the display.

The mouse and the monitor are just the intuitive, surface-level input and output. Going back to the computer architecture, IO is the process of moving data between the computer core and other devices. For example, disk IO reads data from disk into memory, which counts as input; correspondingly, writing data from memory to disk counts as output. That is the essence of IO.

IO of the operating system

If we write data from memory to disk, what is the subject? The subject could be an application, for example a Java process (say, a Java process that receives a binary stream over the network and writes it to disk).

The operating system is responsible for managing computer resources and scheduling processes. The applications running on our computers actually need to go through the operating system to perform certain special operations, such as reading and writing disk files or memory. Because these operations are relatively dangerous, they cannot be done arbitrarily by applications and are left to the underlying operating system. In other words, your application can write data to disk only by calling the API exposed by the operating system.

  • What is user space? What is kernel space?
  • Take a 32-bit operating system as an example: it gives each process a 4 GB (2^32 bytes) address space. This 4 GB of addressable space is divided into two parts, user space and kernel space. Kernel space is the region accessed by the operating system kernel and is protected memory, while user space is the region of memory accessed by user applications.

Our applications run in user space and do not perform the real IO themselves; the real IO is carried out by the operating system. In other words, an application’s IO operation splits into two kinds of actions: the IO call and the IO execution. The IO call is initiated by the process (the running instance of the application), while the IO execution is the job of the operating system kernel. Here, IO refers to the IO calls that an application makes to the operating system’s IO functions.

An I/O process of an operating system

An IO operation initiated by an application consists of two phases:

  • IO call: An application process makes a call to the operating system kernel.
  • IO execution: The OS kernel completes I/O operations.

The OS kernel completes the IO operation in two phases:

  • Data preparation phase: The kernel waits for the I/O device to prepare data
  • Copy data phase: Copies data from the kernel buffer to the user-space buffer

In fact, IO either transfers data from inside a process to an external device, or moves data from an external device into the process. External devices here generally mean hard disks and the network card used for socket communication. A complete IO process includes the following steps (a small code sketch follows the list):

  • The application process initiates an IO call request to the operating system.
  • The operating system prepares the data, loading the data from the IO peripheral into the kernel buffer.
  • The operating system copies the data, i.e., copies the data from the kernel buffer into the process buffer.
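To make these three steps concrete, here is a minimal C sketch (the file name data.txt is only a placeholder for illustration). The read() call is the IO call made by the application; before it returns, the kernel performs the data preparation and then copies the data from the kernel buffer into the process buffer buf.

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char buf[4096];                       /* user-space (process) buffer */

    /* "data.txt" is a placeholder file used only for illustration. */
    int fd = open("data.txt", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* IO call: the process asks the kernel to read.
     * IO execution: the kernel waits for the disk data (preparation phase),
     * then copies it from the kernel buffer into buf (copy phase). */
    ssize_t n = read(fd, buf, sizeof(buf));
    if (n < 0) { perror("read"); close(fd); return 1; }

    printf("read %zd bytes into the process buffer\n", n);
    close(fd);
    return 0;
}
```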

Blocking IO model

We already know what IO is, but what is blocking IO?

If the kernel data is not ready, the application process blocks and waits until the kernel data is ready and has been copied from the kernel into user space; only then does the call return a success message. This IO model is called blocking IO.

  • The classic applications of blocking IO are the blocking socket and Java BIO.
  • The disadvantage of blocking IO is that if the kernel data is never ready, the user process blocks forever, wasting performance; non-blocking IO can be used to optimize this. A minimal sketch of the blocking pattern is shown below.
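As a rough sketch of the blocking model (the port 9000 is an arbitrary choice for illustration), the recvfrom call below does not return until the kernel has received data and copied it into buf; the process simply sleeps for the whole wait.

```c
#include <arpa/inet.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);      /* UDP socket, blocking by default */
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);                   /* made-up port */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    char buf[1024];
    /* Blocking IO: this call does not return until the kernel has data
     * ready AND has copied it into buf; the process sleeps meanwhile. */
    ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
    if (n >= 0)
        printf("received %zd bytes\n", n);

    close(fd);
    return 0;
}
```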

Non-blocking IO model

If the kernel data is not yet ready, the kernel can return an error to the user process so that it does not have to block and wait; instead it asks again later, i.e., it polls. This is non-blocking IO, and the flow chart is as follows:

The flow of non-blocking IO is as follows (a code sketch follows the steps):

  • The application process initiates a recvfrom system call to the operating system kernel to read data.
  • The kernel data is not ready, so the kernel immediately returns an EWOULDBLOCK error code.
  • The application process polls: it keeps issuing recvfrom calls to the kernel to read the data.
  • Once the kernel data is ready, it is copied from the kernel buffer into user space.
  • The call completes and returns a success message.
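Here is a sketch of that polling loop in C (again, port 9000 is just an example). The fd is switched to non-blocking mode with fcntl, so recvfrom returns EWOULDBLOCK/EAGAIN immediately whenever the kernel data is not ready.

```c
#include <arpa/inet.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);                    /* made-up port */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    /* Switch the fd to non-blocking mode. */
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);

    char buf[1024];
    for (;;) {
        ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
        if (n >= 0) {                               /* data was ready and copied */
            printf("received %zd bytes\n", n);
            break;
        }
        if (errno == EWOULDBLOCK || errno == EAGAIN) {
            /* Kernel data not ready: the call returns immediately,
             * so the process keeps polling (burning CPU). */
            continue;
        }
        perror("recvfrom");
        break;
    }
    close(fd);
    return 0;
}
```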

The non-blocking IO model is NIO for short. Compared with blocking IO, it greatly improves performance, but it still has a performance problem: frequent polling means frequent system calls, which also consumes a lot of CPU. The IO multiplexing model can be considered to solve this problem.

IO multiplexing model

Since NIO’s wasted polling burns CPU, it would be better if we could wait until the kernel data is ready and then actively notify the application process, which then makes the system call.

Before we get there, let’s review what a file descriptor (fd) is. It is a computer science term, formally a non-negative integer. When a program opens an existing file or creates a new file, the kernel returns a file descriptor to the process.

The core idea of the IO multiplexing model: the system provides a family of functions (such as select, poll, and epoll) that can monitor multiple fds at the same time. As soon as any of them has kernel data ready, the application process then issues the recvfrom system call.

IO multiplexing with select

The application process can monitor multiple fds at once by calling the select function. Among the fds monitored by select, as soon as the data of any one of them becomes ready, select returns a readable state, and the application process then issues a recvfrom request to read the data.

In the non-blocking IO model (NIO), N (N >= 1) polling system calls are needed, whereas with select’s IO multiplexing model a single select call can wait for data on many fds at once, which greatly improves performance.
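A minimal select sketch (one UDP socket on a made-up port 9000; in practice the fd_set would hold many fds). Note how one select call waits for all registered fds, and how the application still has to scan the set with FD_ISSET afterwards.

```c
#include <arpa/inet.h>
#include <stdio.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);        /* one fd for brevity */
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);                    /* made-up port */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(fd, &readfds);                       /* register every fd to watch */

        /* One select call waits for ANY registered fd to become readable. */
        int ready = select(fd + 1, &readfds, NULL, NULL, NULL);
        if (ready < 0) { perror("select"); break; }

        /* select only says "something is ready"; we must scan the set. */
        if (FD_ISSET(fd, &readfds)) {
            char buf[1024];
            ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
            printf("received %zd bytes\n", n);
        }
    }
    close(fd);
    return 0;
}
```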

However, select has several disadvantages:

  • The number of IO connections it can monitor is limited, typically 1024 on Linux.
  • After select returns, the application still has to traverse the fdset to find the ready fds (it knows that IO events occurred, but not on which fds, so it scans them all).

Because of the connection-number limit, poll was proposed later. Compared with select, poll removes the limit on the number of connections. However, like select, poll still needs to traverse the file descriptors to find the ready sockets. If a large number of clients are connected at the same time, very few of them may be ready at any given moment, so efficiency falls linearly as the number of monitored descriptors grows.
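For comparison, here is the same loop written with poll (same made-up port). poll takes an array of pollfd structures instead of a fixed-size fd_set, which is why the 1024 limit disappears, but the ready descriptors are still found by scanning the array.

```c
#include <arpa/inet.h>
#include <poll.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);                    /* made-up port */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    /* An array of pollfd, so there is no fixed 1024 limit,
     * but the array is still scanned on every wake-up. */
    struct pollfd pfds[1];
    pfds[0].fd = fd;
    pfds[0].events = POLLIN;

    for (;;) {
        int ready = poll(pfds, 1, -1);              /* -1 = block until an event */
        if (ready < 0) { perror("poll"); break; }

        if (pfds[0].revents & POLLIN) {
            char buf[1024];
            ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
            printf("received %zd bytes\n", n);
        }
    }
    close(fd);
    return 0;
}
```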

Therefore, the classical multiplexing model epoll was born.

IO multiplexing with epoll

To solve the problems of select and poll, the multiplexing model epoll was created. It is implemented with an event-driven mechanism, and the flow chart is as follows:

With epoll, an fd (file descriptor) is registered once via epoll_ctl(). As soon as an fd becomes ready, the kernel uses a callback mechanism to activate it quickly, and the process is notified when it calls epoll_wait(). The magic here is that instead of traversing all the file descriptors, we listen for event callbacks. That is where epoll shines.
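A minimal epoll sketch under the same assumptions (one UDP socket, made-up port 9000). The fd is registered once with epoll_ctl, and epoll_wait returns only the fds that are actually ready, so there is no full traversal.

```c
#include <arpa/inet.h>
#include <stdio.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);                    /* made-up port */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    int epfd = epoll_create1(0);                    /* create the epoll instance */

    struct epoll_event ev = {0};
    ev.events = EPOLLIN;
    ev.data.fd = fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);        /* register the fd once */

    struct epoll_event events[16];
    for (;;) {
        /* epoll_wait hands back ONLY the ready fds, so no full scan is needed. */
        int n = epoll_wait(epfd, events, 16, -1);
        if (n < 0) { perror("epoll_wait"); break; }

        for (int i = 0; i < n; i++) {
            char buf[1024];
            ssize_t len = recvfrom(events[i].data.fd, buf, sizeof(buf), 0, NULL, NULL);
            printf("received %zd bytes\n", len);
        }
    }
    close(epfd);
    close(fd);
    return 0;
}
```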

Let’s summarize the differences between select, poll, and epoll:

|  | select | poll | epoll |
| --- | --- | --- | --- |
| Underlying data structure | Array | Linked list | Red-black tree and doubly linked list |
| How ready fds are found | Traversal | Traversal | Event callback |
| Time complexity | O(n) | O(n) | O(1) |
| Maximum number of connections | 1024 | Unlimited | Unlimited |
| fd data copy | Every call to select copies the fd data from user space to kernel space | Every call to poll copies the fd data from user space to kernel space | Uses memory mapping (mmap), so the fd data does not need to be copied repeatedly from user space to kernel space |

Epoll clearly improves the efficiency of IO execution, but a process can still block when it calls epoll_wait() and no data is ready yet. Could we flip it around: instead of the application asking whether the data is ready, let the kernel tell the application when the data is ready? That is signal-driven IO.

Signal-driven IO model

With signal-driven IO, the application no longer keeps asking whether the data is ready. Instead, it registers a SIGIO signal handler with the kernel (by calling sigaction) and then goes off to do other things without blocking. When the kernel data is ready, the kernel notifies the application process of the readable state with a SIGIO signal. On receiving the signal, the application process immediately calls recvfrom to read the data.
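A rough signal-driven IO sketch (Linux-flavoured, made-up port 9000): sigaction installs the SIGIO handler, and fcntl with F_SETOWN and O_ASYNC asks the kernel to deliver SIGIO for this socket. Note that the final recvfrom still blocks during the kernel-to-user copy, which is exactly the point made in the next paragraph.

```c
#define _GNU_SOURCE                      /* for O_ASYNC on some libc versions */
#include <arpa/inet.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static volatile sig_atomic_t data_ready = 0;

static void on_sigio(int sig) {          /* runs when the kernel signals "readable" */
    (void)sig;
    data_ready = 1;
}

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(9000);         /* made-up port */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    /* 1. Register the SIGIO handler with sigaction. */
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_sigio;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGIO, &sa, NULL);

    /* 2. Ask the kernel to deliver SIGIO for this fd to this process. */
    fcntl(fd, F_SETOWN, getpid());
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_ASYNC);

    /* 3. The process is free to do other work here; no polling needed. */
    while (!data_ready)
        pause();                         /* sleep until a signal arrives */

    /* 4. Readable now, but recvfrom still blocks during the kernel-to-user copy. */
    char buf[1024];
    ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
    printf("received %zd bytes\n", n);

    close(fd);
    return 0;
}
```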

In the signal-driven IO model, the application process returns immediately after registering the signal and is not blocked, which already feels like an asynchronous operation. But look closely at the flow chart above: the application is still blocked while the data is being copied into the application buffer. Looking back, BIO, NIO, and signal-driven IO all block while copying data from the kernel to the application buffer. Is there anything left to optimize? Yes: AIO, truly asynchronous IO.

Asynchronous IO (AIO)

BIO, NIO, and signal-driven IO all block while copying data from the kernel to the application buffer, so none of them is truly asynchronous. AIO makes the whole IO process non-blocking: after the application process makes the system call, the call returns immediately, but what it returns is not the result, only something like "submitted successfully". When the kernel data is ready, the kernel copies the data into the user process buffer and then sends a signal to notify the user process that the IO operation is complete.

The process is as follows:

The optimization idea of asynchronous IO is simple: the process sends a single request to the kernel, which completes both the data-readiness wait and the data copy, and the process does not block waiting for the result. There are similar scenarios in daily development:

For example, a batch transfer is initiated, but processing the transfers is time-consuming. The back end can first tell the front end that the transfer was submitted successfully, and then notify the front end of the actual result once processing is finished.
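Back in code, one way to sketch this model on Linux is POSIX AIO (a plausible illustration, not the only implementation; glibc actually services these calls with background threads, the file name data.txt is a placeholder, SIGUSR1 is an arbitrary choice, and you link with -lrt). The whole request, including how to be notified, is described up front, and by the time the completion signal arrives the data is already sitting in the user buffer.

```c
#include <aio.h>
#include <fcntl.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t io_done = 0;

static void on_done(int sig) {           /* by now the data is already in our buffer */
    (void)sig;
    io_done = 1;
}

int main(void) {
    static char buf[4096];

    int fd = open("data.txt", O_RDONLY); /* placeholder file for illustration */
    if (fd < 0) { perror("open"); return 1; }

    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_done;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGUSR1, &sa, NULL);       /* arbitrary completion signal */

    /* Describe the whole request up front: which fd, where to put the data,
     * how much to read, and how to be notified when everything is done. */
    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_offset = 0;
    cb.aio_sigevent.sigev_notify = SIGEV_SIGNAL;
    cb.aio_sigevent.sigev_signo = SIGUSR1;

    aio_read(&cb);                       /* returns immediately: "submitted" */

    while (!io_done)
        pause();                         /* the process could do other work here */

    /* The kernel side has already copied the data into buf; fetch the count. */
    ssize_t n = aio_return(&cb);
    printf("asynchronously read %zd bytes\n", n);

    close(fd);
    return 0;
}
```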

How blocking, non-blocking, synchronous, and asynchronous IO are classified

| IO model | Classification |
| --- | --- |
| Blocking IO model | Synchronous, blocking |
| Non-blocking IO model | Synchronous, non-blocking |
| IO multiplexing model | Synchronous, blocking |
| Signal-driven IO model | Synchronous, non-blocking |
| Asynchronous IO (AIO) model | Asynchronous, non-blocking |

A plain-language look at BIO, NIO, and AIO

  • Synchronous blocking IO (BIO)
  • Synchronous non-blocking IO (NIO)
  • Asynchronous non-blocking IO (AIO)

A classic example from everyday life:

  • Xiao Ming went to Tongren Siji (Four Seasons) Coconut Chicken and stood in line for an hour before he could start eating hot pot. (BIO)
  • Xiao Hong also went to the same coconut chicken restaurant. Seeing that she would have to wait a long time, she went shopping instead, but every little while she ran back to check whether it was her turn yet. In the end she got both her shopping and her coconut chicken. (NIO)
  • Xiao Hua also wanted to eat coconut chicken. Because she is a senior member, the manager told her: go stroll around the mall, and as soon as a table is free I will call you right away. So Xiao Hua neither had to sit and wait nor run back every few minutes to check, and in the end she enjoyed her coconut chicken. (AIO)

Finally

I hope this article is helpful to you. If you think anything in it is wrong, please point it out so we can learn and improve together. WeChat official account: a boy picking up snails.
