Abstract: The weak consistency of open-source Redis falls short of what many application scenarios actually demand. Don't believe me? Read on.
This article is shared from the Huawei Cloud community post "Huawei Cloud Enterprise Redis Revealed 15: Why Does Redis Need Strong Consistency?", by the GaussDB database team.
Some say that the eventual consistency of open-source Redis is sufficient for most application scenarios; others say that strong consistency across multiple replicas is too costly to implement. In the author's view, weak consistency already fails to meet the demands of many application scenarios. Don't believe me? Let me explain.
1. The frustration of inconsistency
1.1 From flash sale to flash crash
Let me share an example of a rate limiter in an e-commerce flash-sale event. To withstand the huge burst of traffic flowing from the front end toward the database, flash-sale systems generally rely on two protection mechanisms: a cache and a rate limiter. Caching is easy to implement, requiring only a caching layer in front of the database; for rate limiting, the simplest approach is to build the limiter on a Redis counter.
To be specific, suppose we need to limit an interface to 5000 QPS, meaning the number of accesses per second cannot exceed 5000. We can do this: set a counter to 5000 with an expiration time of 1 second, so the counter expires after one second. Every time a request comes in, decrement the counter by 1 and check whether it has reached 0; if it has, too many requests have arrived in this window and the request is rejected outright. If the counter does not exist, reset it to 5000 to start a new one-second window. Note that under concurrency the decrement-and-check must be atomic (for example, protected by a lock).
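As a minimal sketch of this counter scheme (assuming the redis-py client; the key name, limit, and window length are illustrative values, not from the original article), Redis's atomic DECR can stand in for the explicit lock mentioned above:

```python
# Minimal counter-based rate limiter sketch using redis-py (assumed client library).
# Key name, limit, and window length are illustrative assumptions.
import redis

r = redis.Redis(host="localhost", port=6379)

LIMIT = 5000          # allowed requests per window
WINDOW_SECONDS = 1    # the counter expires after 1 second

def allow_request(resource: str) -> bool:
    key = f"ratelimit:{resource}"
    # Create the counter only if it does not exist yet, with a 1-second TTL.
    r.set(key, LIMIT, nx=True, ex=WINDOW_SECONDS)
    # DECR is atomic in Redis, so concurrent callers need no extra lock here.
    remaining = r.decr(key)
    return remaining >= 0  # negative means this window's quota is exhausted
```

The correctness of such a limiter depends entirely on the counter value being accurate on whichever node serves it, which is exactly where the failover problem described next comes in.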
Under normal circumstances this scheme works fine, but a flash-sale event has to plan for the unlikely-but-fatal case: what if Redis suddenly goes down? The rate limiter effectively disappears, all traffic pours into the back-end database, and the system crashes instantly. "If the primary server goes down, the standby server takes over." Right, but only half right. Why? See the figure below.
When Redis is configured with a slave server, the system can indeed switch over to the slave immediately if the master goes down. However, because replication between the open-source Redis master and slave is asynchronous, master-slave data inconsistency is common whenever the network hiccups. If the master goes down and the system switches to the slave, the rate limiter can make the wrong decision, and the traffic it lets through can easily exceed the threshold and rush into the database server, again causing a system crash.
The root cause of this problem is that open-source Redis provides only weak consistency: at certain moments the data in the master and slave copies diverge. The only complete fix is genuine strong consistency.
1.2 Difficult-to-maintain MySQL components
In fact, it is not only Redis; even the famous MySQL cannot escape the pitfalls of weak consistency. To ensure high availability, primary/standby hot backup is a common MySQL deployment mode. However, in the event of a failure, MySQL's own replication mechanism cannot guarantee data consistency between the primary and standby databases, so an important auxiliary component, MHA (Master High Availability), is typically deployed as follows:
MHA consists of a Manager service and a Node service, with the Node service deployed on every MySQL node. MHA is responsible for keeping the MySQL standby as close to the primary as possible, approximating a consistent state between them. During a primary/standby switchover, the Manager first replays the lagging data on the standby and then switches user access over to it, a process that can take tens of seconds.
The deployment and maintenance of MHA are complex, and if a failover fails or data is lost, operations becomes painful. What operations engineer doesn't want the systems in their care to run stably? If the database itself provided strong consistency, why would we need to rely on complex auxiliary components at all?
2. What is strong consistency
In the previous section I described the problems caused by weak consistency; in this section I explain what strong consistency is. Consistency is an important concept in both distributed systems and databases, but it does not mean the same thing in the two fields. In distributed systems, consistency concerns what results reads and writes against one piece of logical data may return when multiple physical copies of that data exist, which matches the meaning of consistency in the CAP theorem. In the database domain, consistency is closely tied to transactions and is further refined into ACID. So when we talk about the consistency of a distributed database, we are really talking about two things: transaction consistency and data consistency.
2.1 Transaction Consistency
Transaction consistency mainly refers to the ACID properties of a transaction: atomicity, consistency, isolation, and durability, as shown in the figure below:
- Atomicity: either all of the changes in a transaction take effect or none do; implemented with logging techniques (a small runnable illustration follows this list).
- Consistency: the transaction preserves data integrity; this is an application-level property that relies on atomicity and isolation.
- Isolation: executing multiple transactions concurrently produces exactly the same results as executing them serially (one after another); implemented with concurrency control techniques.
- Durability: once a transaction commits, its changes to the data are preserved permanently and survive any system failure; implemented with logging techniques.
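As a minimal, self-contained illustration of atomicity, here is a sketch using Python's built-in sqlite3 module (chosen purely for demonstration, not one of the databases discussed in this article): either both balance updates commit, or a rollback leaves the data untouched.

```python
# Atomicity demo: both updates commit together, or neither is visible.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on exception
        conn.execute("UPDATE account SET balance = balance - 30 WHERE name = 'alice'")
        conn.execute("UPDATE account SET balance = balance + 30 WHERE name = 'bob'")
except sqlite3.Error:
    pass  # on failure neither update is visible, preserving integrity

print(dict(conn.execute("SELECT name, balance FROM account")))
```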
2.2 Data Consistency
In a distributed system, multiple copies of the data are stored to cope with unreliable nodes and networks. Keeping one piece of logical data on multiple physical copies naturally raises the question of data consistency.
(1) State perspective
From the state perspective, after any change operation the data can be in only one of two states: all copies agree, or they do not. In many systems the inconsistency is temporary and the copies eventually converge to a consistent state (permanent inconsistency is rarely worth discussing), so this behavior is conventionally called weak consistency; the case where the copies always agree is called strong consistency. Take a MySQL cluster with one primary and two standbys as an example. The "strongly consistent" interaction proceeds as follows:
In this mode, when the primary synchronizes its binlog to the standbys, it can report a successful commit to the client only after both standbys have acknowledged success. Clearly, by the time the client receives the response, the primary and standby copies already agree, so subsequent reads are guaranteed to be correct. This is the "strongly consistent" model from the state perspective.
However, this state-perspective strong consistency has serious side effects. First, performance is poor: the primary must wait for both standby 1 and standby 2 to respond. Second, availability suffers: with many primary and standby nodes, the probability that at least one of them fails is high, and any failure blocks commits. State-perspective strong consistency is therefore very costly and rarely used.
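A toy sketch (plain Python, not MySQL internals) makes the trade-off concrete: the primary acknowledges the client only after every replica has acknowledged the change, so one slow or failed replica stalls the commit. The Replica interface here is a hypothetical stand-in.

```python
# Toy model of state-perspective strong consistency: commit succeeds only when
# every replica has acknowledged the change. Replicas are hypothetical objects
# exposing an apply(change) -> bool method.
from concurrent.futures import ThreadPoolExecutor

class Primary:
    def __init__(self, replicas):
        self.replicas = replicas

    def commit(self, change) -> bool:
        # Ship the change to all replicas in parallel and wait for every ack.
        with ThreadPoolExecutor(max_workers=len(self.replicas)) as pool:
            acks = list(pool.map(lambda rep: rep.apply(change), self.replicas))
        # Report success to the client only if *all* replicas applied the change;
        # a single slow or unreachable replica delays or blocks the commit.
        return all(acks)
```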
(2) Operational perspective
Because state-perspective strong consistency hurts availability, many systems instead accept a weak consistency model from the state perspective and rely on consensus algorithms (Raft, Paxos) to guarantee consistency from the operational perspective, improving availability without requiring all nodes to be in the same state at all times. By adding different restrictions on what operations may observe, several consistency models are derived:
- Linearizability (linear consistency): true strong consistency from the operational perspective;
- Sequential consistency: weaker than linearizability;
- Causal consistency: weaker than sequential consistency;
- Read-your-writes (write-then-read) consistency: weaker than causal consistency.
For an introduction to these consistency models, see the article Gauss Redis and Strong Consistency.
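As a taste of the weakest model in the list above, here is a hedged sketch of one common way to provide read-your-writes behavior on top of an asynchronously replicated store: a client that has written recently routes its own reads to the primary for a short grace period. The store objects are hypothetical stand-ins, not a real client library.

```python
# Sketch of read-your-writes routing over an asynchronously replicated store.
# `primary` and `replica` are hypothetical objects exposing get/set.
import time

class ReadYourWritesClient:
    def __init__(self, primary, replica, grace_seconds: float = 1.0):
        self.primary = primary        # always up to date
        self.replica = replica        # may lag behind the primary
        self.grace = grace_seconds
        self.last_write = float("-inf")

    def set(self, key, value):
        self.primary.set(key, value)
        self.last_write = time.monotonic()

    def get(self, key):
        # A client that has written recently reads from the primary, so it never
        # sees its own update "disappear"; otherwise the cheaper replica is used.
        if time.monotonic() - self.last_write < self.grace:
            return self.primary.get(key)
        return self.replica.get(key)
```

Note that this only guarantees a client sees its own writes; it says nothing about what other clients see, which is why the model is weaker than causal consistency.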
3. Scenarios that demand strong consistency
In the previous section we introduced what strong consistency is. In this section we look at typical application scenarios that require it.
In common Internet applications, if the database is deployed on a single node, all reads and writes go to that one node and each piece of logical data has exactly one physical copy, so there is no strong-consistency problem in this scenario.
However, as service traffic grows, a single-node database receives I/O requests too frequently and becomes the system bottleneck. To reduce the I/O load on a single disk and improve performance, additional storage nodes are introduced, forming a primary/standby or primary/multi-replica architecture.
At this point the load can be distributed across the replica nodes. On the one hand, reads and writes can be separated, with write requests going to the primary and read requests going to the standbys; on the other hand, if the primary breaks down, a primary/standby switchover keeps the system running. In both scenarios, one piece of logical data has multiple physical copies, so keeping those copies in agreement is exactly the strong-consistency problem.
3.1 Read/write Separation Scenario
Take the relational database MySQL as an example. A typical deployment is one primary and two standbys across three nodes, with the primary handling writes and the two standbys handling reads to share the load of the primary database, as shown in the following figure:
In this setup, if the system does not provide strong consistency, it can run into an embarrassing situation: data is read immediately after a write completes, and the read either finds nothing or returns the old state. For example, consider the following sequence of operations:
- The client first writes to the Master node through the proxy. Because strong consistency is not implemented, the write returns as soon as the Master has applied it.
- The client then reads from Slave A. Synchronization between the Master and Slave A has not yet completed, so the system is in a non-strongly-consistent state and the read returns the old value (a toy simulation of this sequence follows).
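To make the two steps above concrete, here is a toy in-memory simulation (plain Python, not MySQL): the "master" acknowledges the write immediately, replication to "slave A" happens a moment later, and a read issued in between returns the old value.

```python
# Toy simulation of a stale read under asynchronous replication.
import threading
import time

master = {"stock": 100}
slave_a = {"stock": 100}

def replicate(key):
    time.sleep(0.05)              # simulated replication lag
    slave_a[key] = master[key]

# Step 1: the write goes to the master and returns at once, without waiting.
master["stock"] = 99
threading.Thread(target=replicate, args=("stock",)).start()

# Step 2: an immediate read goes to slave A, which has not applied the change yet.
print(slave_a["stock"])           # prints 100 -- the stale value
```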
Clearly, in a read/write-separated primary/standby deployment, the system must provide strong consistency if writes are to be reliably visible to subsequent reads.
3.2 Active/Standby Switchover Scenario
The active/standby switchover scenario also requires strong consistency. Take Redis, the most widely used in-memory database in the industry, as an example, shown in the figure below:
As the figure shows, when a Redis client sends a command to the master server, the master replies to the client with the result immediately, without waiting for the command to be synchronized to the slave server. In other words, Redis master/slave replication is asynchronous.
Now suppose the master breaks down after receiving a command but before the data has been replicated to the slave. A master/slave switchover occurs, the unreplicated data never reaches the slave, and the write is lost. This is an inherent deficiency of open-source Redis's weak consistency, and the only way to solve it completely is to implement strong consistency.
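Open-source Redis does offer the WAIT command as a partial mitigation: a client can block until a write has been acknowledged by a given number of replicas. This narrows the window for loss on failover, but it does not by itself make Redis strongly consistent. A short sketch using redis-py (host/port and the replica count are assumptions for the example):

```python
# Sketch: use WAIT to block until at least one replica acknowledges a write.
import redis

r = redis.Redis(host="localhost", port=6379)

r.set("order:1001", "paid")
# Block for up to 100 ms until at least 1 replica has acknowledged the write.
acked = r.wait(1, 100)
if acked < 1:
    # The write reached the master but no replica yet; a failover right now
    # could lose this update.
    print("warning: write not yet replicated")
```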
4. Gauss Redis is strongly consistent
Because open-source Redis lacks strong consistency, its applicability is limited. To solve this problem, GaussDB(for Redis) was created. GaussDB(for Redis) is a cloud-native database independently developed by the Huawei Cloud database team that is compatible with the Redis protocol, and it completely removes the pain caused by open-source Redis's consistency problems.
4.1 Gauss Redis architecture
The overall architecture of Gauss Redis is as follows:
Compared with open-source Redis, Gauss Redis adopts a storage-compute separation design: the compute layer handles computation and protocol processing and focuses on the service, while the storage layer handles replica management, scaling, and related work and focuses on the data itself. The advantages of Gauss Redis are as follows:
- Strong data consistency: the storage layer uses the distributed storage system DFV to provide strong consistency across three replicas;
- High availability: a cluster of N nodes can tolerate up to N-1 node failures;
- Low cost: data is stored on disk and compressed, so the cost per GB is less than one tenth that of open-source Redis;
- Second-level scaling: the compute layer only needs to update route mappings, with no data relocation, so capacity can be expanded in seconds;
- Automatic backup: Gauss Redis supports MVCC snapshot backup and periodic automatic backup.
4.2 How Gauss Redis implements strong consistency
The architecture of open source Redis and Gauss Redis is shown below:
Open-source Redis, with its traditional master-slave structure, is shown on the left. In read/write-separation scenarios, or when the master breaks down and a master/slave switchover occurs, the data can become inconsistent.
Gauss Redis adopts the storage-compute separation architecture shown on the right. Replica management in the storage layer, DFV, uses a distributed consensus algorithm to achieve strong consistency across three replicas: if the storage-layer interface called by the compute layer returns OK, the replicas are already strongly consistent.
5. Conclusion
When we design architectures, many scenarios in fact hide strong-consistency requirements. For example, without strong consistency, comments in a social feed such as Moments can easily appear out of order; in the rate-limiter scenario, the lack of a strong-consistency guarantee can easily lead to a database crash. The importance of strong consistency must therefore be recognized at the start of system design in order to build a more stable and reliable system. Gauss Redis, with its storage-compute separation architecture, achieves strong data consistency and provides a solid guarantee for the stability and reliability of the business.
6. Appendix
- Author: Huawei Cloud database GaussDB(for Redis) team
- Resumes welcome (Hangzhou/Xi'an/Shenzhen): [email protected]
- More information about GaussDB(for Redis)
- More technical articles: GaussDB(for Redis) blog