Why distributed transactions
As business systems grow more complex and data volumes increase, distributed transactions emerge. With the rapid development of the Internet, finance, and other industries, business logic keeps getting more complicated: a single complete business operation often needs to invoke multiple sub-services. As the number of services and the amount of data involved grow, a traditional monolithic system can no longer keep up, so applications and databases are split into distributed systems. Distributed systems in turn raise the problem of data consistency, which gives rise to distributed transactions.
What is a distributed transaction
A distributed transaction is about the overall consistency of operations across multiple nodes. In a microservice scenario in particular, business A is associated with business B; if transaction A succeeds while transaction B fails, transaction A cannot detect the failure because it spans systems. Viewed as a whole, the data is now inconsistent: the business system ends up with the data of service A while missing the data of service B.
How to achieve distributed consistency
Typically there are two ideas:
- Ideal: just as with a single-database transaction, multiple databases automatically reach consistency across nodes through some coordination mechanism.
Usage scenario: strict consistency is required, such as financial transactions.
- Practical: data inconsistency can be tolerated for a period of time, and consistency is eventually reached through mechanisms such as timeout termination and scheduled compensation.
Usage scenario: semi-real-time or non-real-time processing, such as T+1 operations or e-commerce operations.
- Strong consistency: XA distributed transactions.
- Weak (eventual) consistency: so-called flexible transactions, which rely on a transaction framework to guarantee eventual consistency.
XA distributed transaction protocol
Following the first, strongly consistent approach, there is a protocol supported by the databases themselves: XA distributed transactions. The overall design idea of XA can be summarized as extending the existing local transaction model with minimal changes in order to implement distributed transactions.
- Application Program (AP): defines transaction boundaries (that is, the start and end of a transaction) and operates on resources within those boundaries.
- Resource Manager (RM): provides access to resources, such as databases and file systems.
- Transaction Manager (TM): assigns unique transaction identifiers, monitors the execution progress of transactions, and is responsible for committing and rolling back transactions.
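As a concrete illustration, MySQL exposes XA through SQL statements (XA START / XA END / XA PREPARE / XA COMMIT / XA ROLLBACK). The sketch below drives two branches on two MySQL instances by hand over JDBC; the connection URLs, credentials, table names, and xid values are placeholders, not taken from any real system.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class XaSketch {
    public static void main(String[] args) throws Exception {
        // Two resource managers (two MySQL instances); URLs and credentials are placeholders.
        try (Connection db1 = DriverManager.getConnection("jdbc:mysql://host1/orders", "user", "pass");
             Connection db2 = DriverManager.getConnection("jdbc:mysql://host2/stock", "user", "pass");
             Statement s1 = db1.createStatement();
             Statement s2 = db2.createStatement()) {

            // Start one branch of the global transaction 'gtx1' on each RM.
            s1.execute("XA START 'gtx1','branch1'");
            s2.execute("XA START 'gtx1','branch2'");

            s1.executeUpdate("UPDATE orders SET status = 'PAID' WHERE id = 1");
            s2.executeUpdate("UPDATE stock SET count = count - 1 WHERE item_id = 1");

            s1.execute("XA END 'gtx1','branch1'");
            s2.execute("XA END 'gtx1','branch2'");

            // Phase 1: prepare. Each RM persists the branch and votes.
            s1.execute("XA PREPARE 'gtx1','branch1'");
            s2.execute("XA PREPARE 'gtx1','branch2'");

            // Phase 2: commit only because both branches prepared successfully;
            // on any prepare failure the coordinator would issue XA ROLLBACK on every branch instead.
            s1.execute("XA COMMIT 'gtx1','branch1'");
            s2.execute("XA COMMIT 'gtx1','branch2'");
        }
    }
}
```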
Problems with the XA protocol
- Synchronous blocking
A global transaction contains a set of independent transaction branches that either all succeed or all fail; the ACID properties of each branch together make up the ACID properties of the global transaction, lifting what a single branch supports up into the realm of distributed transactions. While the coordinator is deciding the outcome, every participant must keep holding its locks, so the whole system blocks on the slowest participant. Even for a local (non-distributed) transaction, the isolation level must be set to SERIALIZABLE if reads are sensitive to concurrent modification. This is all the more true for distributed transactions, where the REPEATABLE READ isolation level is not enough to guarantee consistency. In other words, if MySQL is used to support XA distributed transactions, the transaction isolation level is best set to SERIALIZABLE, which is also the least efficient level (a small JDBC sketch of raising the level follows this list of problems).
- Single point of failure
Because the coordinator is so important, once the coordinator (TM) fails, the participants (RMs) are blocked indefinitely. This is especially severe in phase 2: if the coordinator fails then, all participants remain locked on their transaction resources and cannot complete the transaction.
- Data inconsistency
In phase 2 of two-phase commit, after the coordinator starts sending COMMIT requests, a local network failure may occur, or the coordinator may crash partway through, so that only a subset of the participants receive the COMMIT request. Those participants commit once they receive it, but the participants that never receive the COMMIT request cannot commit, so the distributed system as a whole ends up with inconsistent data.
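As mentioned under the synchronous blocking problem above, here is a minimal JDBC sketch of raising a connection's isolation level to SERIALIZABLE; the URL and credentials are placeholders.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class IsolationConfig {
    // SERIALIZABLE avoids cross-branch read anomalies in XA, at the cost of throughput.
    static void useSerializable(Connection conn) throws SQLException {
        conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
    }

    public static void main(String[] args) throws SQLException {
        // Placeholder URL and credentials.
        try (Connection conn = DriverManager.getConnection("jdbc:mysql://host1/orders", "user", "pass")) {
            useSerializable(conn);
        }
    }
}
```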
BASE flexible transaction
The evolution is: local transaction -> XA (2PC) -> BASE. If a transaction that implements the ACID properties is called a rigid transaction, then a transaction built on the BASE properties is called a flexible transaction. BASE stands for Basically Available, Soft state, and Eventual consistency.
- Basically Available: the participants of a distributed transaction do not all have to be online at the same time.
- Soft state: updates to the system state are allowed to lag, and the delay may not even be perceptible to the customer.
- Eventually consistent: messaging is usually the means by which the system is guaranteed to reach consistency in the end.
Isolation is a demanding requirement in ACID transactions: all resources must stay locked for the duration of the transaction. The idea behind flexible transactions is to move mutual exclusion from the resource level up to the business level through business logic, relaxing the requirement for strong consistency in exchange for higher system throughput.
2PC two-phase commit
MySQL supports the two-phase commit protocol:
- Prepare phase: the transaction manager sends a Prepare message to each participant. Each participant executes the transaction locally and writes local undo/redo logs, but does not commit. (The undo log records the data before modification and is used for rollback; the redo log records the data after modification and is used to write the data files after the transaction commits.)
- Commit phase: if the transaction manager receives a failure or timeout from any participant, it sends a Rollback message to every participant; otherwise it sends a Commit message. The participants then commit or roll back according to the transaction manager's instruction and release the lock resources held during transaction processing. Note that the lock resources are only released in this last phase.
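A minimal coordinator-side sketch of these two phases, using self-defined Participant and TwoPhaseCoordinator types rather than any real framework:

```java
import java.util.List;

// Minimal 2PC sketch with self-defined types; not tied to any particular framework.
interface Participant {
    boolean prepare();   // phase 1: execute locally, write undo/redo logs, return the vote
    void commit();       // phase 2: make the changes durable, release locks
    void rollback();     // phase 2: undo the changes, release locks
}

class TwoPhaseCoordinator {
    void run(List<Participant> participants) {
        boolean allPrepared = true;
        // Phase 1: collect votes. A failure or timeout counts as a "no" vote.
        for (Participant p : participants) {
            try {
                if (!p.prepare()) { allPrepared = false; break; }
            } catch (RuntimeException timeoutOrFailure) {
                allPrepared = false;
                break;
            }
        }
        // Phase 2: commit everywhere only if every participant voted yes; otherwise roll back everywhere.
        for (Participant p : participants) {
            if (allPrepared) p.commit(); else p.rollback();
        }
    }
}
```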
3PC three-phase commit
Compared with two-phase commit, three-phase commit makes two changes: 1. it introduces a timeout mechanism for both the coordinator and the participants; 2. it inserts an extra preparation phase between phases 1 and 2, so that the states of all participating nodes are consistent before the final commit phase. In other words, besides introducing timeouts, 3PC splits the preparation phase of 2PC in two, giving three phases: CanCommit, PreCommit, and DoCommit.
- CanCommit phase
The CanCommit phase of 3PC is very similar to the preparation phase of 2PC: the coordinator sends a commit request to the participants, and each participant returns Yes if it can commit, or No otherwise.
1. Transaction inquiry: the coordinator sends a CanCommit request to the participants, asking whether the transaction commit can be performed, and then waits for their responses.
2. Response feedback: when a participant receives the CanCommit request, it normally returns Yes and enters the prepared state if it believes it can execute the transaction successfully; otherwise it returns No.
- PreCommit phase
The coordinator decides whether the transaction's PreCommit operation can be performed based on the participants' responses. There are two possibilities.
If the coordinator receives Yes from all participants, the transaction is pre-executed:
1. Send PreCommit request: the coordinator sends a PreCommit request to the participants and enters the prepared phase.
2. Transaction pre-execution: after receiving the PreCommit request, a participant executes the transaction and records undo and redo information in the transaction log.
3. Response feedback: if a participant executes the transaction successfully, it returns an ACK response and waits for the final instruction.
If any participant returns No, or the coordinator times out waiting for a response, the transaction is interrupted:
1. Send interrupt request: the coordinator sends an abort request to all participants.
2. Interrupt the transaction: a participant aborts the transaction after receiving the abort request from the coordinator (or after timing out without receiving any request from the coordinator).
- DoCommit phase
The actual transaction commit happens in this phase, again with two scenarios.
Perform the commit:
1. Send commit request: once the coordinator has received the ACK responses from the participants, it moves from the pre-commit state to the commit state and sends a DoCommit request to all participants.
2. Transaction commit: after receiving the DoCommit request, a participant performs the formal transaction commit and releases all transaction resources once the commit finishes.
3. Response feedback: after committing, the participant sends an ACK response to the coordinator.
4. Transaction completion: the coordinator completes the transaction after receiving ACK responses from all participants.
Interrupt the transaction: if the coordinator does not receive the expected ACK responses (a participant sent a different response, or the responses timed out), the transaction is interrupted:
1. Send interrupt request: the coordinator sends an abort request to all participants.
2. Transaction rollback: after receiving the abort request, a participant uses the undo information recorded in the previous phase to roll back the transaction and releases all transaction resources once the rollback finishes.
3. Feedback result: after rolling back, the participant sends an ACK message to the coordinator.
4. Transaction interruption: the coordinator interrupts the transaction after receiving the ACK messages from the participants.
During the DoCommit phase, if a participant does not receive a DoCommit or abort request from the coordinator in time, it commits the transaction anyway after its timeout expires. (This is essentially a bet on probability: by the time phase 3 starts, the participant has already received a PreCommit request in phase 2, and the coordinator only sends PreCommit after receiving Yes CanCommit responses from all participants before phase 2 begins. So once a participant receives PreCommit, it knows that everyone has agreed to the change. In short, when it enters phase 3 and receives neither a commit nor an abort because of a network timeout, it has good reason to believe that the commit has a high probability of succeeding.)
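A compact, self-defined sketch of the participant-side states in 3PC, highlighting the commit-on-timeout default of the DoCommit phase; all names are illustrative and not taken from a real framework.

```java
// Illustrative 3PC participant states and timeout behaviour; self-defined, not a real framework.
enum Phase { CAN_COMMIT, PRE_COMMIT, DO_COMMIT }
enum Decision { COMMIT, ABORT }

class ThreePhaseParticipant {
    private Phase phase = Phase.CAN_COMMIT;

    boolean onCanCommit() {            // phase 1: vote only, no work is done yet
        phase = Phase.PRE_COMMIT;
        return true;                   // a Yes vote
    }

    void onPreCommit() {               // phase 2: execute, write undo/redo logs, send ACK
        phase = Phase.DO_COMMIT;
    }

    Decision onCoordinatorSilence() {
        // Timeout handling differs by phase:
        // - before PreCommit is received, abort is the safe default;
        // - after PreCommit, everyone is known to have voted Yes,
        //   so the participant commits on timeout (3PC's probabilistic bet).
        return phase == Phase.DO_COMMIT ? Decision.COMMIT : Decision.ABORT;
    }
}
```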
TCC
TCC is a BASE-style flexible transaction. In TCC mode, each service operation is split into two phases: the first phase checks and reserves resources, and the second phase acts according to the Try status of all services. If every Try succeeds, a Confirm operation is performed for all of them; otherwise a Cancel operation is performed for all of them. A TCC service interface must implement the following logic:
1. Prepare operation, Try: complete all business checks and reserve the necessary business resources.
2. Confirm operation, Confirm: execute the business logic using only the resources reserved in the Try phase, without any further business checks. Therefore, as long as Try succeeds, Confirm must succeed. In addition, Confirm must be idempotent so that a distributed transaction takes effect exactly once.
3. Cancel operation, Cancel: release the business resources reserved in the Try phase. Cancel must likewise be idempotent.
TCC does not rely on the resource manager's support for distributed transactions; it implements them by decomposing the business logic. Unlike the AT mode, it requires the logic of each phase to be written by hand, which is intrusive to the business services. TCC also has to address several issues: 1. allowing empty rollback; 2. anti-suspension control (a Try that arrives after Cancel must not take effect); 3. idempotent design.
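A minimal sketch of a TCC participant with self-defined types; real frameworks such as Seata's TCC mode or Hmily wire this up with annotations instead, and the inventory example here is purely illustrative.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Self-defined TCC contract; not the API of any specific framework.
interface TccAction {
    boolean tryReserve(String txId); // Try: check and reserve resources
    void confirm(String txId);       // Confirm: use only reserved resources; must be idempotent
    void cancel(String txId);        // Cancel: release the reservation; must be idempotent and
                                     // tolerate an "empty rollback" (Cancel without a prior Try)
}

class InventoryTccAction implements TccAction {
    private final Map<String, Integer> frozen = new ConcurrentHashMap<>();

    @Override public boolean tryReserve(String txId) {
        frozen.put(txId, 1);         // e.g. move 1 unit from "available" to "frozen" stock
        return true;
    }
    @Override public void confirm(String txId) {
        frozen.remove(txId);         // actually deduct the frozen stock
    }
    @Override public void cancel(String txId) {
        frozen.remove(txId);         // give the frozen stock back; a no-op if nothing was reserved
    }
}
```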
SAGA
The Saga mode has no Try phase; each step commits its local transaction directly.
In complex scenarios, designing the compensating (rollback) operations places high demands on the design.
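A minimal orchestration-style Saga sketch with self-defined types: each step commits its local transaction immediately, and on failure the already-completed steps are compensated in reverse order. This is illustrative only, not a real Saga engine.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Self-defined Saga step: execute() commits immediately; compensate() undoes it.
interface SagaStep {
    void execute();
    void compensate();
}

class SagaOrchestrator {
    void run(List<SagaStep> steps) {
        Deque<SagaStep> done = new ArrayDeque<>();
        try {
            for (SagaStep step : steps) {
                step.execute();          // the local transaction commits here
                done.push(step);
            }
        } catch (RuntimeException failure) {
            // Compensate the completed steps in reverse order.
            while (!done.isEmpty()) {
                done.pop().compensate();
            }
            throw failure;
        }
    }
}
```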
Seata
Seata is a distributed transaction framework jointly developed by Alibaba Group and Ant Financial. The goal of its AT transaction mode is to provide incremental ACID transaction semantics under a microservice architecture, letting developers use distributed transactions the way they use local transactions. The core idea is consistent with Apache ShardingSphere. The Seata AT transaction model consists of TM (transaction manager), RM (resource manager), and TC (transaction coordinator). The TC is an independently deployed service, while TM and RM are deployed alongside the business application as JAR packages; they establish long connections to the TC and communicate with it throughout the transaction lifecycle. The TM is the initiator of the global transaction, responsible for starting, committing, and rolling back the global transaction. The RM is a participant in the global transaction, responsible for reporting the execution results of branch transactions and for committing or rolling back branch transactions under coordination by the TC. The typical lifecycle of a distributed transaction managed by Seata:
1. The TM asks the TC to start a new global transaction, and the TC generates an XID that represents it.
2. The XID is propagated along the entire invocation chain of the microservices.
3. The TM asks the TC to commit or roll back the global transaction corresponding to the XID.
4. The TC drives all branch transactions under that XID to commit or roll back.
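A sketch of how Seata AT is typically used, assuming Seata's Spring integration and its @GlobalTransactional annotation; the OrderService/StockService interfaces, the transaction name, and the business logic are hypothetical.

```java
import io.seata.spring.annotation.GlobalTransactional;

// Hypothetical downstream services; in practice these would be remote RPC clients.
interface OrderService { void create(long itemId, int count); }
interface StockService { void deduct(long itemId, int count); }

public class PlaceOrderService {
    private final OrderService orderService;
    private final StockService stockService;

    public PlaceOrderService(OrderService orderService, StockService stockService) {
        this.orderService = orderService;
        this.stockService = stockService;
    }

    // The TM opens a global transaction here; the generated XID is propagated along
    // the call chain, and each participant registers its branch transaction with the TC.
    @GlobalTransactional(name = "place-order")
    public void placeOrder(long itemId, int count) {
        orderService.create(itemId, count);   // branch transaction 1
        stockService.deduct(itemId, count);   // branch transaction 2
        // If either call throws, the TM asks the TC to roll back all branches.
    }
}
```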
Hmily
Hmily is a high-performance distributed transaction framework. Started in 2017, it currently has about 2,800 GitHub stars. It is implemented on the TCC principle and developed in Java (JDK 1.8+), with native support for distributed transactions in microservice frameworks such as Dubbo, Spring Cloud, and Motan. Its main features:
- Supports complex scenarios such as nested transactions, RPC transaction recovery, and recovery from timeouts and other exceptions.
- High stability and performance thanks to asynchronous Confirm and Cancel execution.
- Designed around SPI and API mechanisms, so it is highly customizable and extensible.
- Supports multiple stores for the local transaction log: Redis, MongoDB, ZooKeeper, file, and MySQL.
- Supports multiple serialization formats for the transaction log: Java, Hessian, Kryo, and Protostuff.
- Uses the high-performance Disruptor component for asynchronous logging.
- Provides a Spring Boot starter for out-of-the-box use and integrates seamlessly with Spring through AOP aspects.
- Natively supports clusters and provides a Vue-based UI for monitoring and management.
Local message table
The local message table sits in the same database as the business data tables, so a local transaction can cover the writes to both tables, while a message queue is used to reach eventual consistency. After one party of the distributed transaction writes its business data, it also writes a message into the local message table, and the local transaction guarantees that the message is written together with the business data. The messages in the local message table are then forwarded to a message queue such as Kafka; if forwarding succeeds, the message is deleted from the local message table, otherwise forwarding is retried. The other party of the distributed transaction reads the message from the message queue and performs the operation described in it.
Advantages: a classic implementation that avoids a distributed transaction and achieves eventual consistency.
Disadvantages: the message table is coupled to the business system, and without a packaged solution there is a lot of plumbing to deal with.
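A sketch of the producer side of this pattern; the orders and local_message tables, the column names, and the MessageSender interface are all illustrative assumptions.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Sketch of the producer side of the local message table pattern.
// Table and column names (orders, local_message) are illustrative.
public class LocalMessageProducer {

    // Step 1: business write + message insert in ONE local transaction.
    void placeOrder(Connection conn, long orderId, String payload) throws Exception {
        conn.setAutoCommit(false);
        try (PreparedStatement order = conn.prepareStatement(
                 "INSERT INTO orders(id, status) VALUES (?, 'CREATED')");
             PreparedStatement msg = conn.prepareStatement(
                 "INSERT INTO local_message(order_id, payload, status) VALUES (?, ?, 'NEW')")) {
            order.setLong(1, orderId);
            order.executeUpdate();
            msg.setLong(1, orderId);
            msg.setString(2, payload);
            msg.executeUpdate();
            conn.commit();                       // both rows are written, or neither is
        } catch (Exception e) {
            conn.rollback();
            throw e;
        }
    }

    // Step 2: a background job forwards NEW messages to the MQ, then removes them.
    void forwardPending(Connection conn, MessageSender sender) throws Exception {
        try (PreparedStatement query = conn.prepareStatement(
                 "SELECT id, payload FROM local_message WHERE status = 'NEW'");
             ResultSet rs = query.executeQuery()) {
            while (rs.next()) {
                long id = rs.getLong("id");
                sender.send(rs.getString("payload"));   // e.g. publish to Kafka
                try (PreparedStatement done = conn.prepareStatement(
                         "DELETE FROM local_message WHERE id = ?")) {
                    done.setLong(1, id);
                    done.executeUpdate();
                }
                // If send() fails, the row stays NEW and is retried on the next run.
            }
        }
    }

    interface MessageSender { void send(String payload); } // stands in for a Kafka producer
}
```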
MQ
Some third-party MQs, such as RocketMQ, support transactional messages in a way that resembles two-phase commit, but several mainstream MQs do not, for example RabbitMQ and Kafka. Taking Alibaba's RocketMQ middleware as an example, the idea is roughly as follows: in the first stage, sending a prepared message returns the address of the message; in the second stage, the local transaction is executed; in the third stage, the message is accessed through the address obtained in the first stage and its state is changed. In other words, two requests are submitted to the message queue inside the business method: one to send the message and one to confirm it. RocketMQ periodically scans the message cluster for transactional messages; if it finds a message still in the prepared state, it checks back with the sender, so the producer must implement a check interface. Based on the policy set by the sender, RocketMQ then decides whether to roll the message back or to continue and confirm it. This ensures that message delivery and the local transaction either both succeed or both fail.
Advantages: eventual consistency is achieved without relying on local database transactions.
Disadvantages: harder to implement, not supported by all mainstream MQs, and part of the code for RocketMQ transaction messages is not open source.
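A sketch of the producer side, assuming the TransactionListener API of the Apache RocketMQ 4.x Java client (rocketmq-client); the producer group, name-server address, topic, and the local-transaction logic are illustrative placeholders.

```java
import org.apache.rocketmq.client.producer.LocalTransactionState;
import org.apache.rocketmq.client.producer.TransactionListener;
import org.apache.rocketmq.client.producer.TransactionMQProducer;
import org.apache.rocketmq.common.message.Message;
import org.apache.rocketmq.common.message.MessageExt;

public class TxMessageProducer {
    public static void main(String[] args) throws Exception {
        TransactionMQProducer producer = new TransactionMQProducer("order-tx-group"); // illustrative group
        producer.setNamesrvAddr("127.0.0.1:9876");                                    // placeholder address
        producer.setTransactionListener(new TransactionListener() {
            // Called after the half (prepared) message is stored: run the local transaction.
            @Override
            public LocalTransactionState executeLocalTransaction(Message msg, Object arg) {
                boolean ok = saveOrderLocally(msg);            // illustrative local DB write
                return ok ? LocalTransactionState.COMMIT_MESSAGE
                          : LocalTransactionState.ROLLBACK_MESSAGE;
            }
            // Called when RocketMQ finds the message still prepared and checks back with the sender.
            @Override
            public LocalTransactionState checkLocalTransaction(MessageExt msg) {
                return orderExistsLocally(msg) ? LocalTransactionState.COMMIT_MESSAGE
                                               : LocalTransactionState.ROLLBACK_MESSAGE;
            }
        });
        producer.start();

        Message half = new Message("order-topic", "created", "order:1".getBytes());
        producer.sendMessageInTransaction(half, null);   // phase 1: send the half message
        producer.shutdown();
    }

    static boolean saveOrderLocally(Message msg) { return true; }       // placeholder
    static boolean orderExistsLocally(MessageExt msg) { return true; }  // placeholder
}
```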
This article is compiled from the Web