Redis series directory
Redis series – distributed lock
Redis series – Cache penetration, cache breakdown, cache avalanche
Why Is Redis so fast?
Redis series — Data Persistence (RDB and AOF)
Redis series – consistent hash algorithm
Redis series – High Availability (master-slave, Sentinel, Cluster)
Redis series – Transactions and optimistic locking
Redis series — Geospatial: Do you have Lao Wang next door?
Bitmaps: Did you check in today?
What is a Bloom filter?!
Someone once told me, “Ideally, a page jump should finish within a moment, an in-page operation should finish within an instant, and anything that takes longer than a snap of the fingers should show a progress notification and be cancellable at any time, for the best user experience.”
So how long exactly is an instant, a moment, or a snap of the fingers?
According to the Mahasanghika Vinaya:
One instant is one thought; twenty thoughts make one moment; twenty moments make one snap of the fingers; twenty snaps make one luoyu; twenty luoyu make one xuyu; and a day and a night contain thirty xuyu.
So, doing the arithmetic (a day and a night is 86,400 seconds, giving 2,880 seconds per xuyu, 144 seconds per luoyu, and so on down by factors of twenty), a snap of the fingers lasts 7.2 seconds, a moment is 0.36 seconds, and an instant is just 0.018 seconds.
In order to improve user experience and improve website response speed, caching is generally used, and the usual technology selection is Redis.
Today we are going to talk about why Redis is so fast!
Interview questions
“Everyone uses Redis as a cache, so Redis must be fast, right?” I asked a candidate who had come in for an interview.
“Yes. Redis operates entirely in memory, so it’s fast,” the candidate replied.
“Any other reason?” I asked with a mischievous grin. The candidate on the other side had no idea what to say.
In the world of computing, caches are everywhere
Browser cache, CPU cache, disk cache, CDN cache, APP cache, database cache…
Why are caches everywhere? Answer in one word: quick!!
Browsers and apps cache so that users can revisit pages they have already seen more quickly, improving the user experience. CPUs and disks cache to process data faster and improve the performance of the machine. Databases are cached so that business servers can handle requests more quickly.
As a Java programmer, when it comes to caching, Redis comes to mind.
The official documentation explains that Redis operations are memory-based and that the CPU is not the bottleneck of Redis performance; the real bottlenecks are machine memory and network bandwidth.
Redis is written in C and is extremely efficient. For a single Redis instance, the official figures are about 110,000 reads per second and 81,000 writes per second.
And yet Redis’s threading model is single-process and single-threaded. Isn’t that amazing?!
Why is Redis so fast?
First, let’s clear up two misconceptions: first, a high-performance server is not necessarily multi-process and multi-threaded; second, multi-threading is not necessarily faster than a single thread, for example on a single-core machine.
In our common perception, high performance comes from multiple processes and multiple threads, yet well-known counterexamples say otherwise: Nginx runs single-threaded event loops inside its worker processes, and Memcached is a single process with multiple threads.
In the computer world, the CPU is far faster than memory, and memory is far faster than disk. Redis operations are memory-based, so the vast majority of requests are pure in-memory operations and therefore very fast. Using a single thread saves the CPU context switches that come with multi-threading and removes the need to think about locks at all: no acquiring or releasing locks, and no performance loss from deadlocks. For a memory-based system, keeping all reads and writes on one CPU with no context switching is the most efficient approach. Since a single thread is easy to implement and the CPU is not the bottleneck, a single-threaded design makes perfect sense (multi-threading brings plenty of extra complications).
So what are the details of Redis’s single-process single-thread model?
1. I/O multiplexing
First, what is IO multiplexing?
For example, suppose we are writing a TCP server that has to handle 30 client sockets: how do we serve all 30 requests quickly?
Before digging into the mechanics, consider an analogy: in a class, 30 students are doing homework at the same time, and the teacher must check all 30 assignments before class can be dismissed. With limited resources, how do we finish class as fast as possible?
- Option one: assign a single teacher who checks each student in turn: first A, then B, then C, D… If one student gets stuck, the whole class is held up. This is like looping over the sockets one by one with no concurrency at all. It needs only one teacher, but it takes a long time.
- Option two: assign 30 teachers, one checking each student’s work. This is like creating a process or thread for every socket. It needs 30 teachers (the most resources), but it is the fastest.
- Option three: assign one teacher who stands at the front; whoever finishes raises a hand. C and D raise their hands, so the teacher walks over, checks C and D in turn, and returns to the front. Now E and A raise their hands, and the teacher handles E and A. This finishes the work quickly with minimal resources.
The third option is the I/O multiplexing model (under Linux, this is what select, poll, and epoll do). Register the FDs of the user sockets with epoll, and epoll watches which sockets have incoming data, avoiding a great deal of useless work. The sockets must be in non-blocking mode; the process blocks only in the select, poll, or epoll call itself, never while sending or receiving client data, so the single process or thread is fully utilized. This is the event-driven approach, also known as the Reactor pattern.
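To make the "one teacher watching for raised hands" model concrete, here is a minimal sketch in Java using java.nio's Selector (the JDK's portable wrapper over epoll/kqueue/select). It is only an illustration of I/O multiplexing in general, not Redis's actual implementation; the class name and port are made up for the demo.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// One thread watches all client sockets at once: the "teacher on the platform".
public class MultiplexedEchoServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();                       // the single "teacher"
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(6400));                  // arbitrary demo port
        server.configureBlocking(false);                           // sockets must be non-blocking
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                                     // blocks only here, until someone "raises a hand"
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {                          // a new "student" joins the class
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {                     // a "raised hand": data is ready on this socket
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    if (client.read(buf) == -1) { client.close(); continue; }
                    buf.flip();
                    client.write(buf);                             // echo the data back
                }
            }
        }
    }
}
```

The key point is that the single thread blocks only inside select(); it never blocks waiting on any individual socket.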
2. Redis threading model
Based on the Reactor pattern, Redis developed its own network event handler, called the file event handler. It consists of sockets, an I/O multiplexer, a file event dispatcher, and event handlers.
1. I/O multiplexer and file event dispatcher
The I/O multiplexer listens to multiple sockets at the same time. When a monitored socket becomes ready for accept, read, write, close, and so on, the corresponding file event is generated. The I/O multiplexer pushes every socket that produced an event into a queue and then hands them, in order and one socket at a time, to the file event dispatcher, which calls the matching event handler based on the type of event the socket generated.
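As a rough mental model of that dispatcher, the sketch below is a simplified Java analogy: a queue of ready events drained one socket at a time, with a lookup from event type to handler. The names (FileEvent, EventHandler, and so on) are invented for illustration; Redis's real dispatcher is written in C inside its ae event loop.

```java
import java.util.ArrayDeque;
import java.util.Map;
import java.util.Queue;

// Simplified analogy of Redis's file event dispatcher: ready events are queued by the
// I/O multiplexer and handed to the matching handler one socket at a time.
public class FileEventDispatcher {
    // event types, loosely mirroring AE_READABLE / AE_WRITABLE
    enum EventType { READABLE, WRITABLE }

    interface EventHandler { void handle(int socketFd); }

    // hypothetical event record: which socket fired, and what kind of event
    record FileEvent(int socketFd, EventType type) {}

    private final Queue<FileEvent> readyQueue = new ArrayDeque<>();
    private final Map<EventType, EventHandler> handlers;

    FileEventDispatcher(Map<EventType, EventHandler> handlers) { this.handlers = handlers; }

    // the multiplexer pushes every socket that produced an event into the queue
    void push(FileEvent event) { readyQueue.add(event); }

    // the dispatcher drains the queue in order, one socket at a time,
    // and calls the handler registered for that event type
    void dispatch() {
        FileEvent event;
        while ((event = readyQueue.poll()) != null) {
            handlers.get(event.type()).handle(event.socketFd());
        }
    }
}
```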
2. Event handlers
(1) Connection reply handler:
When the Redis server initializes, it associates the connection reply handler with the AE_READABLE event of the server's listening socket. When a client connects to the listening socket with sys/socket.h/connect, the socket generates an AE_READABLE event, which triggers the connection reply handler to execute and perform the corresponding accept operation on the socket.
(2) Command request handler:
After a client has successfully connected through the connection reply handler, the server associates the AE_READABLE event of the client socket with the command request handler. When the client sends a command to the server, the socket generates an AE_READABLE event, which triggers the command request handler to execute and perform the corresponding read operation on the socket.
For the entire time the client stays connected, the server keeps the command request handler associated with the AE_READABLE event of the client socket.
(3) Command reply handler:
When the server has a command reply to send to the client, it associates the AE_WRITABLE event of the client socket with the command reply handler. When the client is ready to receive the reply, the socket generates an AE_WRITABLE event, which triggers the command reply handler to execute and perform the corresponding write operation on the socket.
Once the reply has been fully sent, the server disassociates the command reply handler from the AE_WRITABLE event of the client socket.
- Note 1: the I/O multiplexer only passes the next socket to the file event dispatcher after the event handler for the current socket has finished. Every command therefore needs to execute quickly; a command that takes too long blocks all the others. So use O(n) commands with caution: Redis is a database built for fast-executing scenarios.
- Note 2: command concurrency. Redis processes commands on a single thread, so commands execute one by one. If three clients issue commands at the same time, the execution order is not guaranteed, but it is guaranteed that no two commands ever run at the same time; so if two clients each run INCR on the same key, the final result is 2 no matter how they are interleaved (see the sketch below).
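To see Note 2 in action, here is a small example using the Jedis client (assuming a Redis server on localhost:6379 and Jedis on the classpath): two threads each send one INCR on the same key, and because Redis serializes commands, the printed result is always 2.

```java
import redis.clients.jedis.Jedis;

public class IncrRace {
    public static void main(String[] args) throws InterruptedException {
        try (Jedis setup = new Jedis("localhost", 6379)) {
            setup.del("counter");                         // start from a clean key
        }

        Runnable incrOnce = () -> {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                jedis.incr("counter");                    // one INCR command per thread
            }
        };

        Thread t1 = new Thread(incrOnce);
        Thread t2 = new Thread(incrOnce);
        t1.start(); t2.start();
        t1.join();  t2.join();

        try (Jedis check = new Jedis("localhost", 6379)) {
            // Redis serializes the two INCRs, so this always prints 2
            System.out.println(check.get("counter"));
        }
    }
}
```

Without that serialization, two unsynchronized read-modify-write operations could both read 0 and both write 1; Redis's single-threaded command execution rules that out.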
3. Communication process between the client and Redis
1. Suppose a Redis server is already running. The AE_READABLE event of the server's listening socket is being monitored, and the handler for that event is the connection reply handler.
2. When a Redis client connects to the server, the listening socket generates an AE_READABLE event, which triggers the connection reply handler. The handler accepts the client's connection request, creates the client socket and the client state, and associates the AE_READABLE event of the client socket with the command request handler, so that the client can send command requests to the server.
3. Later, when the client sends a command request, the client socket generates an AE_READABLE event, which triggers the command request handler. The handler reads the client's command and passes it to the relevant program for execution.
4. To send the command reply back to the client, the server associates the AE_WRITABLE event of the client socket with the command reply handler. When the client tries to read the reply, the client socket generates an AE_WRITABLE event, which triggers the command reply handler to write the full reply to the socket. Once everything is written, the server disassociates the AE_WRITABLE event of the client socket from the command reply handler. (A Java NIO analogy of this whole flow is sketched below.)
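This lifecycle can be mimicked with Java NIO by changing a selection key's interest set, which plays the role of associating and disassociating the AE_READABLE/AE_WRITABLE handlers. The sketch below is only an analogy of the four steps just described, not Redis's code; the port, the class name, and the canned "+OK" reply are invented for the illustration.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// Analogy of the client/Redis flow: accept plays the connection reply handler,
// OP_READ plays the command request handler, OP_WRITE plays the command reply handler.
public class MiniReactor {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel listening = ServerSocketChannel.open();
        listening.bind(new InetSocketAddress(6400));           // hypothetical demo port
        listening.configureBlocking(false);
        listening.register(selector, SelectionKey.OP_ACCEPT);  // step 1: watch the listening socket

        while (true) {
            selector.select();
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();

                if (key.isAcceptable()) {                       // step 2: "connection reply handler"
                    SocketChannel client = listening.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);   // associate the "command request handler"
                } else if (key.isReadable()) {                  // step 3: "command request handler"
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer request = ByteBuffer.allocate(1024);
                    if (client.read(request) == -1) { client.close(); continue; }
                    ByteBuffer reply = ByteBuffer.wrap("+OK\r\n".getBytes());
                    key.attach(reply);                          // remember the pending reply
                    key.interestOps(SelectionKey.OP_WRITE);     // step 4: associate the "command reply handler"
                } else if (key.isWritable()) {                  // step 4: "command reply handler"
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer reply = (ByteBuffer) key.attachment();
                    client.write(reply);
                    if (!reply.hasRemaining()) {
                        key.interestOps(SelectionKey.OP_READ);  // reply sent: drop OP_WRITE, wait for the next command
                    }
                }
            }
        }
    }
}
```

Clearing OP_WRITE once the reply is fully written mirrors how Redis disassociates the command reply handler from the client socket's AE_WRITABLE event.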
Pressure test
After Redis is installed, it ships with a command called redis-benchmark.
This command is the official stress-testing tool. Let's run a simple benchmark with its default parameters.
From the output below, we can see that:
- A total of 100,000 SET requests took 0.79 seconds
- A total of 100,000 GET requests took 0.81 seconds
$ ./redis-benchmark
====== PING_INLINE ======
100000 requests completed in 0.85 seconds
50 parallel clients
3 bytes payload
keep alive: 1
99.98% <= 1 milliseconds
100.00% <= 1 milliseconds
117924.53 requests per second
====== PING_BULK ======
100000 requests completed in 0.81 seconds
50 parallel clients
3 bytes payload
keep alive: 1
100.00% <= 0 milliseconds
123915.74 requests per second
====== SET ======
100000 requests completed in 0.79 seconds   # 100,000 SET requests finished in 0.79 seconds
50 parallel clients                         # 50 concurrent clients
3 bytes payload                             # 3 bytes of data per SET request
keep alive: 1                               # a single Redis server, standalone test
99.90% <= 1 milliseconds                    # 99.9% of requests took no more than 1 ms
99.95% <= 2 milliseconds
99.96% <= 3 milliseconds
100.00% <= 3 milliseconds
127388.53 requests per second               # 127,388.53 requests per second
====== GET ======
100000 requests completed in 0.81 seconds   # 100,000 GET requests finished in 0.81 seconds
50 parallel clients                         # 50 concurrent clients
3 bytes payload                             # 3 bytes of data per GET request
keep alive: 1                               # a single Redis server, standalone test
99.99% <= 1 milliseconds                    # 99.99% of requests took no more than 1 ms
100.00% <= 1 milliseconds
123456.79 requests per second               # 123,456.79 requests per second
You can also set parameters such as the number of concurrent connections (-c), the number of requests (-n), and the payload size in bytes (-d).
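For example, a run that pins down exactly those parameters and limits the test to SET and GET might look like the following (-t selects specific tests and -q prints only the summary; the values are just an illustration):
$ ./redis-benchmark -c 50 -n 100000 -d 3 -t set,get -q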
The answer
- Redis is a pure in-memory operation: data lives in memory, and memory access takes roughly 100 nanoseconds, which is the foundation of Redis handling access on the order of 100,000 requests per second.
- Non-blocking I/O: Redis uses epoll for I/O multiplexing, and its own event-processing model turns connections, reads, writes, and closes into events, so it does not waste time blocking on I/O.
- A single thread avoids the cost of thread switching and race conditions.
Because Redis uses a single-threaded model, any command that takes a long time to execute blocks every other request, which is fatal for a high-performance service like Redis. That is why Redis is a database for fast-execution scenarios.
Done, done!
[Spread knowledge, share value]. Thank you, friends, for your attention and support. I am [Zhuge small ape], an Internet migrant worker struggling forward through the confusion!!