"Redis is single-threaded." That used to be common knowledge, something everyone simply accepted. Things are different now: Redis has changed. Repeat the old line today and someone will push back on you, and if you are not sure of the details you may just give in and go along with them.
So what exactly changed? Read on and let's walk through it:
(Figure: mind map of the topics covered in this article.)
Reactor model
You may not have heard of the Reactor pattern, but if you read the previous article it should feel familiar. It is the key to making sense of Redis's threading model.
1. Traditional blocking IO model
Before getting to the Reactor model, it helps to first look at how the traditional blocking IO model handles requests.
In the traditional blocking IO model, an independent Acceptor thread listens for client connections and allocates a new thread to handle each incoming request. When multiple requests arrive at the same time, the server allocates a matching number of threads, which leads to frequent CPU context switching and wasted resources.
Some connections come in and then do nothing, yet the server still allocates a thread to each of them, which is pure overhead. It's like going to a restaurant, staring at the menu for ages, realizing it's outrageously expensive, and walking out. The waiter who stood by while you browsed is the allocated thread, and your visit is the connection request.
On top of that, once a connection is established, a thread calling a read or write method blocks until data is available, and during that time it can do nothing else. Back to the restaurant: you walk out, decide it's still the best value around, and return to browse the menu for a long time. The waiter stands there until you finish ordering and can do nothing else in the meantime. That waiting is the blocking.
So every incoming request gets its own thread, which then blocks until processing finishes. Some requests merely connect, do nothing, and still consume a thread, so server resources drain away fast. Under high concurrency this model falls apart; it is only viable for architectures with a small, fixed number of connections.
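The thread-per-connection idea can be sketched in a few lines of Python. This is an illustrative echo server, not anything Redis-specific; the function names are my own:

```python
import socket
import threading

def handle(conn):
    # One dedicated thread per client: recv() blocks until data arrives,
    # and the thread can do nothing else while it waits.
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:        # client closed the connection
                break
            conn.sendall(data)  # echo the request back

def serve(sock):
    # The Acceptor loop: every accepted connection gets a brand-new thread.
    while True:
        conn, _ = sock.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

def start_server():
    sock = socket.socket()
    sock.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
    sock.listen()
    threading.Thread(target=serve, args=(sock,), daemon=True).start()
    return sock.getsockname()[1]
```

A client that connects and then sends nothing still pins one of these threads, which is exactly the wasted-waiter problem described above.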
2. Pseudo asynchronous IO model
One optimization you may have seen uses a thread pool and a task queue. This is the pseudo-asynchronous IO model.
When a client connects, its request is wrapped as a task and handed to a back-end thread pool. The pool maintains a message queue and a set of active threads that process tasks from the queue.
This avoids exhausting thread resources by creating one thread per request. But the underlying model is still synchronous and blocking: if every thread in the pool is blocked, further requests cannot be served. This mode therefore caps the maximum number of connections without fixing the root problem.
Back to the restaurant. After running it for a while, the owner has more customers, and the original five waiters can no longer offer one-to-one service. So the boss switches to a "pool" of five waiters, each moving on to the next guest as soon as one has been served.
Then the problem arises. Some customers order very slowly, and a waiter has to stand there until they finish. If five customers all order slowly, all five waiters are stuck waiting and the remaining customers go unattended. That is exactly the situation where every thread in the pool is blocked.
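The pooled variant differs from the sketch above in just one place: instead of spawning a thread per connection, the Acceptor submits each connection as a task to a fixed pool (the five "waiters"). Again an illustrative echo server with hypothetical names:

```python
import socket
import threading
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=5)  # the five "waiters"

def handle(conn):
    # Still synchronous and blocking: the pooled thread is parked in
    # recv() until this particular client sends data or disconnects.
    with conn:
        while True:
            data = conn.recv(1024)
            if not data:
                break
            conn.sendall(data)

def serve(sock):
    while True:
        conn, _ = sock.accept()
        pool.submit(handle, conn)  # enqueue a task; no new thread spawned

def start_server():
    sock = socket.socket()
    sock.bind(("127.0.0.1", 0))
    sock.listen()
    threading.Thread(target=serve, args=(sock,), daemon=True).start()
    return sock.getsockname()[1]
```

With five slow clients connected, every pooled thread is stuck in `recv()`, and a sixth client's task just sits in the queue: the "all waiters busy" scenario.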
So how do we solve this? Enter the Reactor model.
3. Reactor design model
The basic design idea of the Reactor model is built on the I/O multiplexing model.
So what is I/O multiplexing? Unlike traditional blocking multithreaded I/O, in the multiplexing model multiple connections share a single blocking object, and the application only waits on that one object. When any connection has new data to process, the operating system notifies the application, and the thread wakes from the blocked state to handle the business logic.
What does that mean in restaurant terms? The owner also noticed that customers order slowly, so he made a bold move and kept only one waiter. While a guest studies the menu, the waiter goes to serve other guests; when the guest is ready, they call the waiter over directly. The customers are the multiple connections and the lone waiter is the single thread: he stands by whoever is ready to order and moves on as soon as they are done.
With the design idea of the Reactor clear, let's look at today's main character: the single-Reactor, single-thread implementation:
- The Reactor listens for client request events through an I/O multiplexer and dispatches them via a task dispatcher.
- Connection events are handed to an Acceptor, which sets up a corresponding handler for the subsequent business logic.
- Non-connection events go straight to the matching handler, which completes the read -> business -> write cycle and returns the result to the client.

The whole process runs in a single thread.
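The steps above can be sketched with Python's standard `selectors` module as the I/O multiplexer. This is a minimal single-Reactor, single-thread echo server for illustration only; `accept` plays the Acceptor, `read` plays a handler, and the loop in `reactor` is the dispatcher:

```python
import selectors
import socket
import threading

sel = selectors.DefaultSelector()  # the I/O multiplexer (one blocking object)

def accept(sock):
    # Acceptor: for a connection event, register a handler for future reads
    conn, _ = sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, read)

def read(conn):
    # Handler: read -> business (echo) -> write, all on the one thread
    data = conn.recv(1024)
    if data:
        conn.sendall(data)
    else:
        sel.unregister(conn)
        conn.close()

def reactor():
    # Dispatch loop: wait on one object for events from all connections
    while True:
        for key, _ in sel.select():
            key.data(key.fileobj)  # invoke the registered handler

def start_server():
    sock = socket.socket()
    sock.bind(("127.0.0.1", 0))
    sock.listen()
    sock.setblocking(False)
    sel.register(sock, selectors.EVENT_READ, accept)
    threading.Thread(target=reactor, daemon=True).start()
    return sock.getsockname()[1]
```

One thread serves every connection: a client that is "still reading the menu" costs nothing, because the thread only wakes when some connection actually has data.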
Single-threaded era
Now that you know the Reactor model, you might be wondering what it has to do with today's topic. Here's the connection: Redis is built on the single-threaded Reactor model.
The I/O multiplexer receives the user's requests and pushes them onto a queue for the file event dispatcher. Everything after that runs in a single thread, just like the single-threaded Reactor implementation above, which is why Redis is described as single-threaded.
Even single-threaded, Redis is very fast at reads and writes, because operations are memory-based and most commands have low time complexity.
Multithreaded age
Multithreading was introduced in Redis 6. As noted above, single-threaded Redis is already very fast, so why introduce multithreading? And where is the single-threaded bottleneck?
Let's take the second question first. In Redis, the single-thread performance bottleneck lies mainly in network IO: the read/write system calls on sockets take up most of the CPU time during execution. And if you delete a very large key or value, the delete cannot finish quickly, so on a single thread it blocks all subsequent operations.
Recall the single-threaded Reactor approach from above: for non-connection events, the Reactor calls the handler to complete the read -> business -> write cycle, and that cycle is exactly where the bottleneck sits.
Redis 6 is therefore designed to handle network reads/writes and protocol parsing with multiple threads, while command execution itself remains a single-threaded operation.
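That division of labor can be illustrated with a toy sketch. Everything here (`store`, `io_thread`, `executor`, the tiny SET/GET "protocol") is hypothetical and not Redis internals; the point is the shape: many threads may parse raw requests concurrently, but only one thread ever touches the data store, so the store needs no locks.

```python
import queue
import threading

store = {}                # the key-value store, touched by one thread only
commands = queue.Queue()  # parsed commands waiting for the executor
results = {}              # request id -> result
done = {}                 # request id -> Event, set when its result is ready

def io_thread(raw_requests):
    # Network reading and protocol parsing can run on many threads at once.
    for req_id, raw in raw_requests:
        commands.put((req_id, raw.split()))  # splitting stands in for parsing

def executor():
    # Command execution stays single-threaded, mirroring Redis 6's design.
    while True:
        req_id, parts = commands.get()
        if parts[0] == "SET":
            store[parts[1]] = parts[2]
            results[req_id] = "OK"
        elif parts[0] == "GET":
            results[req_id] = store.get(parts[1])
        done[req_id].set()
```

Because only `executor` reads or writes `store`, command execution keeps the simplicity of the single-threaded model, while the expensive parsing work is spread across IO threads.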
Conclusion
Reactor model
- In the traditional blocking IO model, server threads map 1:1 to client connections, which does not scale.
- The pseudo-asynchronous I/O model uses a thread pool, but the underlying layer is still synchronous and blocking, which limits the maximum number of connections.
- The Reactor monitors client request events through an I/O multiplexer and dispatches them via a task dispatcher.
Single-threaded era
- Redis is based on the single-threaded Reactor mode: all requests received by the I/O multiplexer are pushed onto a queue and handed to the file event dispatcher for processing.
Multithreaded age
- The single-thread performance bottleneck is mainly in network I/O.
- Network reads/writes and protocol parsing are handled by multiple threads, while command execution remains single-threaded.
Original text: xie.infoq.cn/article/d87…