What is a transaction
A transaction is a unit of program execution that accesses and possibly updates various data items in a database. The operations that make up a transaction are committed only if all of them execute successfully; if any operation fails, the whole transaction is rolled back.
Local transactions
In a typical application, transactions are controlled by a relational database. When a single application talks to a single database (usually deployed alongside the application), the transaction is a local transaction. (A minimal JDBC sketch follows the list of transaction properties below.)
Four features of database transactions:
- Atomicity: All operations contained in a transaction either all succeed or all fail and roll back.
- Consistency: A transaction must change the database from one consistent state to another.
- Isolation: The execution of a transaction cannot be interfered with by other transactions; concurrent transactions are isolated from each other.
- Durability: Changes to data in the database should be permanent once transactions are committed.
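As a concrete illustration of a local transaction and its atomicity, here is a minimal JDBC sketch (the connection details and the account table/columns are hypothetical): the two updates either both commit or both roll back.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class LocalTransactionExample {
    // A minimal local transaction: move money between two accounts in the same database.
    public static void transfer(String url, String user, String pass,
                                long fromId, long toId, long amount) throws SQLException {
        try (Connection conn = DriverManager.getConnection(url, user, pass)) {
            conn.setAutoCommit(false); // start the local transaction
            try (PreparedStatement debit = conn.prepareStatement(
                         "UPDATE account SET balance = balance - ? WHERE id = ?");
                 PreparedStatement credit = conn.prepareStatement(
                         "UPDATE account SET balance = balance + ? WHERE id = ?")) {
                debit.setLong(1, amount);
                debit.setLong(2, fromId);
                debit.executeUpdate();

                credit.setLong(1, amount);
                credit.setLong(2, toId);
                credit.executeUpdate();

                conn.commit();   // atomicity: both updates take effect together
            } catch (SQLException e) {
                conn.rollback(); // or neither takes effect
                throw e;
            }
        }
    }
}
```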
Distributed transaction
A distributed transaction is a transaction composed of multiple local transactions, which typically arises in distributed scenarios. Distributed transactions solve the consistency problem of operations that span multiple systems in a distributed environment.
Basic theory of distributed transactions
CAP
- Data consistency: data on all nodes is the same at the same moment. If the system returns success for a write, every subsequent read must see the new data; if the write returns failure, no subsequent read may see it. From the caller's point of view this is strong consistency.
- Service availability: the service must respond to every user request it receives.
- Partition tolerance: when a network partition occurs, the separated nodes can still provide service. In practice partitions cannot be avoided, so P can be assumed to always hold, and the CAP theorem tells us that the remaining two, C and A, cannot both be guaranteed.
Why C, A and P cannot all be satisfied at the same time: because partitions are tolerated (P), messages between nodes may be lost. If the system keeps accepting writes on one side of a partition to preserve availability, the other side cannot see those writes and consistency is violated; if it refuses requests until the partition heals to preserve consistency, availability is violated. Since P cannot be avoided in a distributed system, a distributed system can only choose CP or AP.
BASE
CAP theory tells us that in distributed systems C(consistency), A(availability) and P(partition fault tolerance) satisfy at most two items.
BASE stands for Basically Available, Soft State, and Eventual Consistency. The core idea is that even when strong consistency cannot be achieved, each application can adopt an appropriate approach to reach eventual consistency.
- Basically Available (BA): when a distributed system fails, it is allowed to lose part of its availability, that is, the core functions remain available.
- Soft State (S): the system is allowed to have intermediate states that do not affect overall availability. For example, the status may be Creating or Synchronizing and only changes to Succeeded once the data is consistent. This intermediate state corresponds to the data inconsistency in CAP theory.
- Eventual Consistency (E): after a period of time, the data on all nodes in the system reaches a consistent state. For example, an order in the "payment in progress" state will eventually become "payment successful" or "payment failed".
BASE theory is essentially an extension and complement to CAP, and more specifically an extension of the AP scheme in CAP: it sacrifices strong consistency for availability, allows some parts of the system to be unavailable during a failure while keeping the core functions available, and allows data to be inconsistent for a period of time as long as it eventually becomes consistent.
Distributed transaction solutions
XA mode
XA is a distributed transaction specification defined by X/Open and implemented by TP monitors such as Tuxedo. It consists of roughly two parts: a transaction manager and local resource managers. The local resource manager is usually implemented by the database; commercial databases such as Oracle and DB2 implement the XA interface. The transaction manager acts as the global scheduler and is responsible for committing or rolling back each local resource.
2PC (Two-phase Commit)
There are two roles in a two-phase commit:
- Transaction coordinator: The brain of distributed transactions, responsible for directing and coordinating various business systems to commit/roll back transactions
- Transaction participant: the local resource manager and executor of the transaction, i.e. each individual business system.
Specific steps for the two-phase commit protocol:
- Prepare phase: the transaction manager (TM) sends a prepare message to each participant. Each participant executes the transaction locally and writes undo and redo logs, but does not commit. (The undo log records the data before modification and is used to roll back; the redo log records the data after modification and is used to write the data files after the transaction commits.)
- Commit phase: if the transaction manager (TM) receives a failure message or a timeout from any participant, it sends a rollback message to every participant; otherwise it sends a commit message. Each participant commits or rolls back according to the transaction manager's instruction and releases the lock resources used during the transaction. (A coordinator sketch follows this list.)
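A minimal sketch of the coordinator's decision logic in the two-phase commit described above (the `Participant` interface and its methods are hypothetical, introduced only for illustration):

```java
import java.util.List;

// Hypothetical participant interface for illustration only.
interface Participant {
    boolean prepare();  // phase 1: execute locally, write undo/redo logs, do not commit
    void commit();      // phase 2: make the local changes permanent
    void rollback();    // phase 2: undo the local changes
}

class TwoPhaseCommitCoordinator {
    // Returns true if the global transaction committed, false if it rolled back.
    boolean execute(List<Participant> participants) {
        boolean allPrepared = true;
        // Phase 1: ask every participant to prepare.
        for (Participant p : participants) {
            try {
                if (!p.prepare()) { allPrepared = false; break; }
            } catch (Exception e) {   // a failure or timeout counts as a "no" vote
                allPrepared = false; break;
            }
        }
        // Phase 2: commit only if every participant voted yes, otherwise roll back.
        for (Participant p : participants) {
            if (allPrepared) p.commit(); else p.rollback();
        }
        return allPrepared;
    }
}
```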
What are the drawbacks of XA two-phase commit?
- Coordinator single point of failure issues
The transaction coordinator is the core of the whole XA model. Once the coordinator node goes down, participants cannot receive the commit or rollback notification; they remain blocked in the intermediate state and cannot complete the transaction.
- Inconsistencies caused by missing messages.
Requests made by the coordinator may go unanswered because of network problems. In the second phase of the XA protocol, if a partial network failure occurs, some participants receive the commit message and others do not, which leads to data inconsistency between nodes.
- Synchronous blocking
Resource locks can be released only after both phases end, resulting in poor performance.
3PC (Three-phase Commit)
Three-phase commit splits the prepare phase of 2PC into two phases, adding a pre-commit buffer phase, and introduces timeouts in both the coordinator and the participants.
Three-phase commit process
- Phase 1 (CanCommit): the coordinator sends a CanCommit request to each participant. A participant returns yes and enters the prepared state if it can commit, otherwise it returns no. Participants do not execute the transaction in this phase.
- Phase 2 (PreCommit): based on the participants' responses in the CanCommit phase, the coordinator decides whether the PreCommit operation can be performed. If every participant returned yes, the participants pre-execute the transaction. If any participant returned no, or the coordinator times out waiting for a response, the transaction is interrupted.
- Phase 3 (DoCommit): similar to the commit phase of 2PC.
Advantages: compared with 2PC, 3PC sets timeouts for both the coordinator and the participants, which avoids the problem of a participant being unable to release resources while it waits indefinitely for a crashed coordinator: once a participant times out, it automatically commits locally and releases its resources. This mechanism also reduces the blocking time and scope of the overall transaction.
Disadvantages: it still does not solve the data inconsistency problem.
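A minimal sketch of the participant-side timeout behaviour described above, assuming hypothetical local operations (`preExecute`, `commitLocally`, `rollbackLocally`): after the DoCommit message fails to arrive in time, the participant commits on its own to release resources.

```java
// A hypothetical participant sketch illustrating the 3PC timeout rule:
// once a participant has pre-executed the transaction (PreCommit phase),
// it commits locally on its own if the DoCommit message never arrives,
// so that its resources are not held indefinitely.
class ThreePhaseCommitParticipant {
    private volatile boolean preCommitted = false;

    boolean onCanCommit() { return true; }                        // phase 1: vote only, no work done
    void onPreCommit()    { preExecute(); preCommitted = true; }  // phase 2: pre-execute the transaction
    void onDoCommit()     { commitLocally(); }                    // phase 3: final commit

    // Invoked by a local watchdog when no DoCommit (or abort) arrives in time.
    void onDoCommitTimeout() {
        if (preCommitted) {
            commitLocally();    // auto-commit to release locked resources
        } else {
            rollbackLocally();  // never pre-committed, safe to abort
        }
    }

    private void preExecute()      { /* execute SQL, hold locks, do not commit */ }
    private void commitLocally()   { /* commit the local transaction */ }
    private void rollbackLocally() { /* roll back the local transaction */ }
}
```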
TCC mode
Theory
TCC is also called a compensating transaction. It is essentially a two-phase protocol at the service level: developers implement three service interfaces, Try, Confirm and Cancel. The idea is to register a corresponding confirmation and compensation (undo) operation for each operation.
TCC corresponds to three operations: Try, Confirm, and Cancel
- Try (preprocessing): checks the business and reserves resources.
- Confirm: confirms the commit. Confirm is executed only after all branch transactions succeed in the Try phase.
- Cancel: if any application involved in the Try operation fails, the applications that succeeded are rolled back (see the interface sketch after this list).
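A minimal sketch of what a TCC service interface might look like, using a hypothetical account-freezing example (the names are illustrative and not tied to any specific framework):

```java
// Hypothetical TCC interface for an account service: Try reserves the resource,
// Confirm makes it permanent, Cancel releases the reservation.
interface AccountTccService {
    // Try: check the balance and freeze the amount (reserve the resource).
    boolean tryDeduct(String globalTxId, long accountId, long amount);

    // Confirm: deduct the frozen amount for real; only called after every
    // branch succeeded in the Try phase. Must be idempotent.
    boolean confirmDeduct(String globalTxId, long accountId, long amount);

    // Cancel: unfreeze the amount reserved in Try. Must be idempotent and
    // must tolerate an "empty rollback" (Cancel arriving without a prior Try).
    boolean cancelDeduct(String globalTxId, long accountId, long amount);
}
```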
Three types of exception handling that TCC needs to be aware of
- Empty rollback
Q: The server hosting the branch transaction crashes or the network fails, so Try is never executed. After recovery, the Cancel interface is invoked to roll back the branch transaction. How do we tell an empty rollback from a normal one?
A: Add a branch transaction table keyed by the global transaction ID and branch transaction ID. The Try method inserts a record in the first phase to indicate that phase one was executed. The Cancel interface reads this record: if the record exists, it rolls back normally; if not, it is an empty rollback and nothing is done.
- Suspension
Q: The Try method is invoked through RPC. If the network is congested, the RPC call times out and the TM notifies the RM to roll back the resources. After the rollback completes, the delayed Try request finally arrives and executes, locking resources that can never be released. In other words, Cancel happens before Try. What should be done if Try then reserves business resources?
A: If the branch transaction record shows that the Cancel phase has already been executed, the Try method is not executed.
- Idempotency
Q: Confirm and Cancel must be retried after a failure. How do we ensure they are idempotent?
A: Maintain the execution status in the branch transaction record table and check the status before each execution.
The transaction manager (TM) can be implemented as a standalone service, or the global transaction initiator can act as the transaction manager. The TM generates a global transaction record when it initiates a global transaction. The global transaction ID runs through the entire distributed transaction invocation link and is used to record the transaction context and track its status. Because Confirm and Cancel failures must be retried, both interfaces must be idempotent. A sketch of the branch-record checks follows.
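A minimal sketch of how a branch transaction record table can guard against empty rollback, suspension, and duplicate execution (the `BranchTxRecordDao` and its methods are hypothetical):

```java
// Hypothetical DAO over a branch transaction record table keyed by
// (globalTxId, branchId) with a status column: TRIED, CONFIRMED, CANCELLED.
interface BranchTxRecordDao {
    boolean insertTried(String globalTxId, String branchId);     // false if a row already exists
    String  findStatus(String globalTxId, String branchId);      // null if no row
    boolean insertCancelled(String globalTxId, String branchId); // false if a row already exists
    void    updateStatus(String globalTxId, String branchId, String status); // called after business logic
}

class TccBranchGuard {
    private final BranchTxRecordDao dao;
    TccBranchGuard(BranchTxRecordDao dao) { this.dao = dao; }

    // Try: refuse to run if a record already exists (e.g. Cancel got there first -> anti-suspension).
    boolean beforeTry(String globalTxId, String branchId) {
        return dao.insertTried(globalTxId, branchId);
    }

    // Cancel: if there is no Try record this is an empty rollback; record it and skip.
    boolean beforeCancel(String globalTxId, String branchId) {
        String status = dao.findStatus(globalTxId, branchId);
        if (status == null) {                       // empty rollback: Try never ran
            dao.insertCancelled(globalTxId, branchId); // block a late Try from running
            return false;
        }
        return !"CANCELLED".equals(status);         // idempotency: skip if already cancelled
    }

    // Confirm: idempotency check before executing the real business logic.
    boolean beforeConfirm(String globalTxId, String branchId) {
        return !"CONFIRMED".equals(dao.findStatus(globalTxId, branchId));
    }
}
```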
TCC compared with XA
XA is a distributed transaction at the resource level and TCC is a distributed transaction at the business level.
Saga
The Saga model, also known as Long Running Transaction, was proposed by H. Garcia-Molina et al. at Princeton University. The Saga model divides a distributed transaction into multiple local transactions; each local transaction has a corresponding execution module and compensation module. When a local transaction fails, eventual consistency can be achieved by calling the corresponding compensation methods.
Saga is suitable for long transaction scenarios.
Disadvantages: Transaction isolation is not implemented because the Saga model has no preparation phase. If two transactions operate on the same resource at the same time, concurrent read and write problems can occur. In this case, resource locking logic needs to be implemented at the application level.
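A minimal sketch of orchestration-style Saga execution, assuming a hypothetical `SagaStep` abstraction: steps run in order, and on failure the steps that already succeeded are compensated in reverse order.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Hypothetical saga step: a local transaction plus its compensation.
interface SagaStep {
    void execute();     // run the local transaction
    void compensate();  // undo the effect of execute()
}

class SagaOrchestrator {
    // Returns true if every step succeeded; otherwise compensates the completed
    // steps in reverse order and returns false (eventual consistency).
    boolean run(List<SagaStep> steps) {
        Deque<SagaStep> completed = new ArrayDeque<>();
        for (SagaStep step : steps) {
            try {
                step.execute();
                completed.push(step);
            } catch (Exception e) {
                while (!completed.isEmpty()) {
                    completed.pop().compensate(); // roll back in reverse order
                }
                return false;
            }
        }
        return true;
    }
}
```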
Reliable message eventual consistency
What is reliable message eventual consistency
After the transaction initiator executes its local transaction, it sends a message that must be received and successfully processed by the transaction participant (the message consumer).
This scheme relies on message middleware. The transaction initiator (message producer) sends messages to the message middleware, and the transaction participants receive messages from it. Because both sides communicate with the message middleware over the network, network problems give rise to distributed transaction issues.
Problems that need to be solved
- Atomicity of the local transaction and message sending
Either the local transaction and the message send both succeed, or they both fail.
- If the message is sent first and the database is updated afterwards, atomicity cannot be guaranteed: the message may have been sent successfully while the database operation fails.
- If the database is updated first and the message is sent afterwards, a send failure raises an exception and the database is rolled back. This looks fine, but if the failure is a timeout and the message was actually delivered, the database is rolled back while the message has already gone out, and the same inconsistency occurs.
- Reliability of message reception by transaction participants
A transaction participant must be able to receive the message from the message queue, and must be able to receive it again if processing fails.
- Repeated message consumption
To solve duplicate consumption, the transaction participant's processing method must be idempotent (a sketch follows this list).
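A minimal sketch of consumer-side idempotency using a deduplication table keyed by the message ID (the `ProcessedMessageDao` is hypothetical):

```java
// Hypothetical DAO over a deduplication table with a unique key on messageId.
interface ProcessedMessageDao {
    boolean insertIfAbsent(String messageId); // false if the id was already recorded
}

class IdempotentConsumer {
    private final ProcessedMessageDao dedup;
    IdempotentConsumer(ProcessedMessageDao dedup) { this.dedup = dedup; }

    // Record the message id before running the business logic; a redelivered
    // message whose id already exists is simply skipped. (In practice the
    // insert and the business update should share one local transaction.)
    void onMessage(String messageId, Runnable businessLogic) {
        if (!dedup.insertIfAbsent(messageId)) {
            return; // duplicate delivery, already processed
        }
        businessLogic.run();
    }
}
```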
Implementation schemes for reliable message eventual consistency
Local message table
When the business data is written to the database, a message record is inserted into a local message table within the same local transaction, which guarantees the consistency of the business operation and the message. A scheduled task then reads the message table and sends the messages to the message middleware, and the record in the message table is deleted once delivery to the consumer is confirmed.
Disadvantages: Scheduled tasks poll and scan the database message table, which affects the database performance.
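A minimal sketch of the local message table pattern (all DAO/MQ interfaces are hypothetical): the business change and the message record are written in one local transaction, and a scheduled task later delivers pending messages to the MQ.

```java
import java.util.List;

// Hypothetical abstractions for illustration only.
interface OrderDao        { void saveOrder(String orderId); }
interface LocalMessageDao {
    void insertPending(String messageId, String payload); // same database as OrderDao
    List<String[]> findPending();                          // [messageId, payload]
    void markSent(String messageId);
}
interface MessageQueue    { void send(String payload); }

class LocalMessageTableProducer {
    private final OrderDao orderDao;
    private final LocalMessageDao messageDao;
    private final MessageQueue mq;

    LocalMessageTableProducer(OrderDao o, LocalMessageDao m, MessageQueue q) {
        this.orderDao = o; this.messageDao = m; this.mq = q;
    }

    // Step 1: the business write and the message record share one local transaction
    // (transaction management omitted; both tables live in the same database).
    void createOrder(String orderId) {
        orderDao.saveOrder(orderId);
        messageDao.insertPending("msg-" + orderId, orderId);
    }

    // Step 2: a scheduled task polls the message table and forwards pending
    // messages to the MQ, marking them sent once delivery is confirmed.
    void deliverPendingMessages() {
        for (String[] pending : messageDao.findPending()) {
            mq.send(pending[1]);
            messageDao.markSent(pending[0]);
        }
    }
}
```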
RocketMQ transaction messages
RocketMQ transaction message sending steps:
- The sender sends a half (semi-transactional) message to the RocketMQ server; a half message is not yet visible to subscribers.
- After the server persists the half message successfully, it returns an Ack to the sender confirming that the half message was received.
- The sender executes its local transaction logic.
- The sender submits a second acknowledgement (Commit or Rollback) to the server based on the result of the local transaction. On Commit, the server marks the half message as deliverable and subscribers receive it; on Rollback, the server deletes the half message and subscribers never see it.
Q: In the case of network disconnection or application restart, what should I do if the second confirmation submitted in Step 4 fails to reach the server?
The RocketMQ server initiates a back-check (transaction status check) to the sender. Upon receiving the check request, the sender examines the final result of the local transaction for that message and submits the final status (Commit or Rollback) again. The server then handles the half message as described in step 4.
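A sketch of the flow above using the open-source Apache RocketMQ 4.x Java client (the topic, group, and name server address are hypothetical; the exact API may vary across client versions):

```java
import java.nio.charset.StandardCharsets;

import org.apache.rocketmq.client.producer.LocalTransactionState;
import org.apache.rocketmq.client.producer.TransactionListener;
import org.apache.rocketmq.client.producer.TransactionMQProducer;
import org.apache.rocketmq.common.message.Message;
import org.apache.rocketmq.common.message.MessageExt;

public class TransactionMessageExample {
    public static void main(String[] args) throws Exception {
        TransactionMQProducer producer = new TransactionMQProducer("order_tx_group");
        producer.setNamesrvAddr("127.0.0.1:9876"); // hypothetical name server address
        producer.setTransactionListener(new TransactionListener() {
            @Override
            public LocalTransactionState executeLocalTransaction(Message msg, Object arg) {
                try {
                    // Step 3: execute the local transaction after the half message is persisted.
                    // ... local database update goes here ...
                    return LocalTransactionState.COMMIT_MESSAGE;   // step 4: commit
                } catch (Exception e) {
                    return LocalTransactionState.ROLLBACK_MESSAGE; // step 4: rollback
                }
            }

            @Override
            public LocalTransactionState checkLocalTransaction(MessageExt msg) {
                // Back-check: the server asks for the final local transaction status
                // when the second acknowledgement was lost.
                boolean committed = true; // look up the real outcome, e.g. in a local table
                return committed ? LocalTransactionState.COMMIT_MESSAGE
                                 : LocalTransactionState.ROLLBACK_MESSAGE;
            }
        });
        producer.start();

        // Steps 1-2: send the half message; the broker persists it and returns an Ack.
        Message msg = new Message("ORDER_TOPIC", "order paid".getBytes(StandardCharsets.UTF_8));
        producer.sendMessageInTransaction(msg, null);
        producer.shutdown();
    }
}
```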
Best-effort notification
After the local transaction completes, a message is sent to MQ to notify the other system to execute its part of the transaction. If the notification fails it is retried, and after a limited number of attempts it is abandoned. A small number of failed transactions is tolerated; this approach is typically used when strict distributed transactions are not required, for example for logging or status notifications.
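A minimal retry sketch of a best-effort notification (the `Notifier` interface is hypothetical): the notification is retried a bounded number of times with a delay and then given up.

```java
import java.util.concurrent.TimeUnit;

// Hypothetical downstream notification call.
interface Notifier {
    boolean notifyPeer(String payload); // true if the peer acknowledged
}

class BestEffortNotifier {
    // Retry the notification up to maxAttempts times, then give up.
    static boolean notifyWithRetry(Notifier notifier, String payload,
                                   int maxAttempts, long delaySeconds) throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                if (notifier.notifyPeer(payload)) {
                    return true; // acknowledged
                }
            } catch (Exception e) {
                // swallow and retry: best-effort semantics
            }
            TimeUnit.SECONDS.sleep(delaySeconds); // wait before the next attempt
        }
        return false; // abandoned after maxAttempts; occasional failures are tolerated
    }
}
```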
Seata
Seata is an open source distributed transaction framework. It started as the Fescar project initiated by Alibaba's middleware team and was later renamed Seata.
There are three modules in Seata, which are TM, RM and TC.
- Transaction Coordinator (TC): standalone middleware that is deployed and run independently. It maintains the running state of global transactions, receives instructions from the TM to commit or roll back a global transaction, and communicates with RMs to coordinate the commit or rollback of branch transactions.
- Transaction Manager (TM): embedded in the application; it is responsible for starting a global transaction and ultimately issuing a global commit or global rollback to the TC.
- Resource Manager (RM): controls branch transactions; it is responsible for branch registration and status reporting, receives instructions from the TC, and drives the commit or rollback of branch transactions.
Seata execution process
- The TM asks the TC to start a new global transaction; the TC generates an XID that identifies the global transaction.
- The XID is propagated along the invocation chain of the microservices.
- Each RM registers its local transaction with the TC as a branch of the global transaction identified by the XID.
- The TM asks the TC to commit or roll back the global transaction identified by the XID.
- The TC drives all branch transactions under that global transaction to complete the branch commit or rollback.
Seata provides four distributed transaction solutions, AT mode, TCC mode, Saga mode and XA mode.
AT mode is Seata's implementation of 2PC. It removes 2PC's reliance on database support for the XA protocol and is non-intrusive to business code.
In AT mode, developers only need to care about their own business SQL. The execution of the business SQL is phase one, and Seata automatically generates the phase-two commit or rollback of the transaction.
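A minimal sketch of AT mode usage with Seata's Spring annotation (assuming Spring and the Seata Spring integration are on the classpath; the service and client names are hypothetical):

```java
import io.seata.spring.annotation.GlobalTransactional;
import org.springframework.stereotype.Service;

// Hypothetical business service: the two calls below hit different databases or
// microservices, each executing plain business SQL (phase one). Seata's TM starts
// the global transaction, the XID is propagated, and the TC drives the phase-two
// commit or rollback automatically.
@Service
public class OrderService {

    private final StorageClient storageClient; // hypothetical RPC client / DAO
    private final AccountClient accountClient; // hypothetical RPC client / DAO

    public OrderService(StorageClient storageClient, AccountClient accountClient) {
        this.storageClient = storageClient;
        this.accountClient = accountClient;
    }

    @GlobalTransactional(name = "create-order")
    public void createOrder(String productId, long userId, int count, long amount) {
        storageClient.deduct(productId, count); // branch transaction 1
        accountClient.debit(userId, amount);    // branch transaction 2
        // Any exception thrown here causes Seata to roll back both branches.
    }
}

// Hypothetical client interfaces, shown only to make the sketch self-contained.
interface StorageClient { void deduct(String productId, int count); }
interface AccountClient { void debit(long userId, long amount); }
```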
Closing remarks
Criticism and corrections are welcome.