- Distributed Transactions in Spring, With and Without XA - Part II
- By David Syer
- The Nuggets translation Project
- Permanent link to this article: github.com/xitu/gold-m…
- Translator: xiantang
- Proofread by Fengziyin1234
- Spring’s distributed transaction implementation — with and without XA — Part 1
- Spring’s distributed transaction implementation — with and without XA — Part 2
- Spring’s distributed transaction implementation — with and without XA — Part 3
A shared database resource can sometimes be carved out of existing separate resources, especially if they all live on the same RDBMS platform. Enterprise-grade database vendors all support the concept of synonyms (or an equivalent), whereby a table in one schema (to use Oracle terminology) is declared as a synonym in another. That way, data that is physically partitioned across the platform can be addressed transactionally through the same Connection in a JDBC client. For example, implementing the shared-resource pattern with ActiveMQ in a real system (as opposed to the sample) would normally involve creating synonyms for the messaging and business data.
Performance and the JDBCPersistenceAdapter
Some people in the ActiveMQ community claim that the JDBCPersistenceAdapter creates performance problems. Yet many projects and live systems use ActiveMQ with a relational database, and in those circles the conventional wisdom is that the journaled version of the adapter is preferred for better performance. The journaled version is not applicable to the shared-resource pattern, however (because the journal itself is a new transactional resource). Nevertheless, the jury may still be out on the JDBCPersistenceAdapter, and there are in fact reasons to think that sharing transactional resources might perform better than the journaled case. This is an area of active research among the Spring and ActiveMQ engineering teams.
Another technique for sharing a transactional resource in a non-messaging scenario (multiple databases) is to link two database schemas together inside the RDBMS platform, using a feature such as Oracle's database links (see Resources). This may require changes to the application code, or the creation of synonyms, because table names that refer to the linked database carry the name of the link as an alias.
Best-efforts one-phase commit (1PC) pattern
The best-efforts one-phase commit (1PC) pattern is fairly common, but it can fail in certain cases that developers must be aware of. It is a non-XA pattern that involves a synchronized single-phase commit of a number of resources. Because two-phase commit (2PC) is not used, it can never be as safe as an XA transaction, but it is often good enough if the participants are aware of the compromises. Many high-volume, high-throughput transaction-processing systems are set up this way to improve performance.
The basic idea is to delay the commit of all resources as late as possible in the transaction, so that the only thing that can go wrong is an infrastructure failure (not a business-processing error). Systems that rely on best-efforts 1PC reason that infrastructure failures are rare enough that they can afford to take the risk in return for higher throughput. If the business-processing services are also designed to be idempotent, then little can go wrong in practice.
To help you understand the pattern better and analyze the consequences of failure, I'll use the message-driven database update as the primary example.
The two resources in this transaction are a message and a database. The messaging transaction is started before the database one, and they end (with a commit or rollback) in reverse order. So the sequence in the successful case might be the same as the one at the beginning of this article:
- Start the messaging transaction
- Receive the message
- Start the database transaction
- Update the database
- Commit the database transaction
- Commit the messaging transaction
In fact, the order of the first four steps is not critical, except that the message must be received before the database is updated, and each transaction must start before its corresponding resource is used. So this sequence is equally valid:
- Start the messaging transaction
- Start the database transaction
- Receive the message
- Update the database
- Commit the database transaction
- Commit the messaging transaction
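The sequence above can be sketched in plain Java, without Spring, to make the ordering and the failure windows concrete. This is a simplified illustration under stated assumptions: `TxResource` and `BestEfforts1PC` are hypothetical names, and a real resource would of course be a JMS session or a JDBC connection rather than a state string.

```java
import java.util.List;

// Illustrative stand-in for a transactional resource (not a real API).
class TxResource {
    final String name;
    String state = "NEW";
    boolean failOnCommit = false; // simulate an infrastructure failure

    TxResource(String name) { this.name = name; }

    void begin()    { state = "STARTED"; }

    void commit() {
        if (failOnCommit) {
            // In real middleware the work would eventually roll back
            // (for messaging: the message is redelivered).
            state = "ROLLED_BACK";
            throw new IllegalStateException("infrastructure failure on " + name);
        }
        state = "COMMITTED";
    }

    void rollback() { state = "ROLLED_BACK"; }
}

class BestEfforts1PC {
    // Resources are begun in business order (outermost first) and
    // committed or rolled back in reverse order (innermost first).
    static void execute(List<TxResource> resources, Runnable businessWork) {
        for (TxResource r : resources) {
            r.begin();
        }
        try {
            businessWork.run();
        } catch (RuntimeException e) {
            // Business failure: everything rolls back, innermost first.
            for (int i = resources.size() - 1; i >= 0; i--) {
                resources.get(i).rollback();
            }
            return;
        }
        // Commit in reverse order. If a later (outer) commit fails, the
        // earlier (inner) commits stand: this is the partial-commit window.
        for (int i = resources.size() - 1; i >= 0; i--) {
            resources.get(i).commit();
        }
    }
}
```

Running the sunny-day case commits both resources; throwing from the business work rolls both back; and making the outer (message) resource fail on commit leaves the database committed while the message rolls back, which is exactly the duplicate-delivery window discussed below.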
The point is that the last two steps are important: they must come last, in that order. The ordering matters for technical reasons, but the order itself is determined by business requirements. The order tells you that one of the transactional resources in this case is special: it contains instructions about how to carry out the work on the other one. This is a business ordering: the system cannot tell automatically which way round it should be (although if messaging and database are the two resources, it is often in this order). The reason the ordering matters has to do with the failure cases. The most common failure case (by far) is a failure of the business processing (bad data, programming errors, and so on). In that case, both transactions can easily be rigged to respond to the exception and roll back. The integrity of the business data is then preserved, with a timeline similar to the ideal failure case outlined at the beginning of this article.
The precise mechanism for triggering the rollback is unimportant; several are available. The important point is that the commit or rollback happens in the reverse of the order in which the business processing used the resources. In the sample application, the messaging transaction must commit last, because the instructions for the business processing are contained in that resource. This matters because of the (rare) failure case in which the first commit succeeds and the second one fails. Since by design all business processing has already completed at that point, the only cause of such a partial failure would be an infrastructure problem with the messaging middleware.
Note that if the commit of the database resource fails, the net effect is still a rollback. So the only non-atomic failure mode is one where the first transaction commits and the second one rolls back. More generally, if there are n resources in the transaction, there are n-1 such failure modes that leave the resources in an inconsistent (partially committed) state after a rollback. In the message-database use case, the consequence of this failure mode is that the message is rolled back and comes back in another transaction, even though it was already processed successfully. So you can safely assume that the worst that can happen is that duplicate messages may be delivered. In the more general case, because the earlier resources in the transaction are assumed to carry information about how to process the later ones, the net result of the failure mode can generically be referred to as duplicate messages.
Some people take the risk, assuming that duplicate messages will occur rarely enough that they don't bother to anticipate them. To have more confidence in the correctness and consistency of your business data, though, you need to be aware of duplicates in the business logic. If the business processing knows that duplicate messages may arrive, all it has to do (usually at some extra cost, but much less than 2PC) is check whether it has processed that data before and, if so, do nothing. This specialization is sometimes referred to as the idempotent business service pattern.
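The idempotent-business-service idea can be sketched in a few lines of plain Java. The names here (`IdempotentHandler`, `handle`) are illustrative, not part of the sample code; in a real system the set of processed identifiers would be a database table updated in the same transaction as the business data, not an in-memory set.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of an idempotent handler: a duplicate delivery of the same
// message id is detected and skipped, so redelivery after a partial
// failure does no harm.
class IdempotentHandler {
    private final Set<String> processed = new HashSet<>();
    private int updates = 0;

    // Returns true if the message was processed, false if it was a duplicate.
    boolean handle(String messageId, Runnable businessWork) {
        if (!processed.add(messageId)) {
            return false; // already seen: do nothing
        }
        businessWork.run();
        updates++;
        return true;
    }

    int updateCount() { return updates; }
}
```

With this check in place, a message that is redelivered because its transaction rolled back after the database commit simply falls through the duplicate branch, and the business data is touched exactly once.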
The sample code includes two examples of synchronizing transactional resources with this pattern. I'll walk through each in turn and then examine some other options.
Spring and message-driven POJOs
In the best-jms-db project in the sample code, the participants are set up using mainstream configuration options so that the best-efforts 1PC pattern is followed. The idea is that messages sent to a queue are picked up by an asynchronous listener and used to insert data into a table in the database.
The key ingredient is Spring's TransactionAwareConnectionFactoryProxy, a component intended for exactly this pattern. Instead of using the raw ConnectionFactory provided by the vendor, the configuration wraps the ConnectionFactory in a decorator that handles transaction synchronization. This happens in jms-context.xml, as shown in Example 6:
Example 6. Configuring a TransactionAwareConnectionFactoryProxy to wrap the vendor-provided ConnectionFactory
<bean id="connectionFactory"
    class="org.springframework.jms.connection.TransactionAwareConnectionFactoryProxy">
  <property name="targetConnectionFactory">
    <bean class="org.apache.activemq.ActiveMQConnectionFactory" depends-on="brokerService">
      <property name="brokerURL" value="vm://localhost"/>
    </bean>
  </property>
  <property name="synchedLocalTransactionAllowed" value="true"/>
</bean>
The ConnectionFactory does not need to know which transaction manager to synchronize with, because there is only one transaction active at the time it is needed, and Spring can handle that internally. The transaction is driven by a regular DataSourceTransactionManager configured in data-source-context.xml. The component that needs to be aware of the transaction manager is the JMS listener container that polls for and receives messages:
<jms:listener-container transaction-manager="transactionManager">
<jms:listener destination="async" ref="fooHandler" method="handle"/>
</jms:listener-container>
The fooHandler reference and the method attribute tell the listener container which method on which component to call when a message arrives on the "async" queue. The handler is implemented to accept the incoming message as a String and use it to insert a record:
public void handle(String msg) {
    jdbcTemplate.update(
        "INSERT into T_FOOS (ID, name, foo_date) values (?, ?, ?)",
        count.getAndIncrement(), msg, new Date());
}
To simulate failures, the code uses a FailureSimulator aspect. It checks the message content to see whether, and in what way, the processing is supposed to fail. The maybeFail() method, shown in Example 7, is called after the FooHandler has processed the message, but before the transaction has ended, so that it can affect the outcome of the transaction:
Example 7. The maybeFail() method
@AfterReturning("execution(* *..*Handler+.handle(String)) && args(msg)")
public void maybeFail(String msg) {
    if (msg.contains("fail")) {
        if (msg.contains("partial")) {
            simulateMessageSystemFailure();
        } else {
            simulateBusinessProcessingFailure();
        }
    }
}
The simulateBusinessProcessingFailure() method just throws a DataAccessException, as if the database access had failed. When this method is triggered, you expect a full rollback of both the database and the messaging transactions. This scenario is tested in the sample project's AsynchronousMessageTriggerAndRollbackTests unit test.
The simulateMessageSystemFailure() method simulates a failure in the messaging system by invalidating the underlying JMS Session. The expected outcome here is a partial commit: the database work stays committed but the message rolls back. This is tested in the AsynchronousMessageTriggerAndPartialRollbackTests unit test.
The sample package also includes the AsynchronousMessageTriggerSunnyDayTests class, a unit test for the sunny-day case in which all the work is committed successfully.
The same JMS configuration and the same business logic can also be used in a synchronous setting, in which the message is received in a blocking call inside the business logic instead of being delegated to a listener container. This approach is also demonstrated in the best-jms-db sample project. The sunny-day case and the full-rollback case are tested in SynchronousMessageTriggerSunnyDayTests and SynchronousMessageTriggerAndRollbackTests, respectively.
Chaining transaction managers
In the other example of the best-efforts 1PC pattern (the best-db-db project), a naive implementation of a transaction manager simply chains together a list of other transaction managers to achieve transaction synchronization. If the business processing is successful, they all commit; if not, they all roll back.
The implementation is in ChainedTransactionManager, which accepts a list of other transaction managers as an injected property, as shown in Example 8:
Example 8. ChainedTransactionManager configuration
<bean id="transactionManager" class="com.springsource.open.db.ChainedTransactionManager">
  <property name="transactionManagers">
    <list>
      <bean
          class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
        <property name="dataSource" ref="dataSource"/>
      </bean>
      <bean
          class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
        <property name="dataSource" ref="otherDataSource"/>
      </bean>
    </list>
  </property>
</bean>
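The chaining idea can be sketched without Spring. This is a conceptual illustration only (the interface and class names here are invented, and it is not the sample project's actual ChainedTransactionManager): transactions are begun in configuration order and completed in reverse order, so the first-listed resource is the last to commit.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Minimal transaction-manager abstraction for the sketch.
interface SimpleTxManager {
    void begin();
    void commit();
    void rollback();
}

// Conceptual chained transaction manager: delegates begin() in list
// order, and commit()/rollback() in reverse order, mirroring the
// nesting described in the text.
class ChainedTxManager implements SimpleTxManager {
    private final List<SimpleTxManager> delegates;

    ChainedTxManager(List<SimpleTxManager> delegates) {
        this.delegates = delegates;
    }

    public void begin() {
        for (SimpleTxManager tm : delegates) {
            tm.begin();
        }
    }

    public void commit() {
        // Innermost (last-listed) resource commits first.
        for (SimpleTxManager tm : reversed()) {
            tm.commit();
        }
    }

    public void rollback() {
        for (SimpleTxManager tm : reversed()) {
            tm.rollback();
        }
    }

    private List<SimpleTxManager> reversed() {
        List<SimpleTxManager> copy = new ArrayList<>(delegates);
        Collections.reverse(copy);
        return copy;
    }
}
```

The reverse-order completion is the whole point of the design: it is what makes the first-listed resource the "outer" one, with the consequences for partial failure described in the next paragraphs.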
The simplest test of this configuration is to insert something into both databases, roll back, and check that neither operation left a trace. This is implemented as a unit test in MulipleDataSourceTests, the same class as in the XA example (the atomikos-db project). The test fails if the two resources are not synchronized and one of them ends up committed after the rollback.
Remember that the order of the resources matters: they are nested, and the commit or rollback happens in the reverse of the order in which they are enlisted (which is the order in the configuration). This makes one of the resources special: the outermost resource is always rolled back if there is a problem, even if the only problem is a failure of that resource itself. Also, the testInsertWithCheckForDuplicates() test method shows an idempotent business process that protects the system from partial failures. It is implemented as a defensive check in the business operation on the inner resource (otherDataSource in this case):
int count = otherJdbcTemplate.update("UPDATE T_AUDITS ... WHERE id=?, ...");
if (count == 0) {
    count = otherJdbcTemplate.update("INSERT into T_AUDITS ...", ...);
}
The update is tried first, with a WHERE clause. If nothing happens, the data that the update expected to find is inserted. The cost of the extra protection for the idempotent process in this case is one extra query (the update) in the sunny-day case. This cost would be quite low in a more complicated business process, in which many queries are executed per transaction.
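The update-then-insert check can be mimicked in plain Java, with an in-memory map standing in for the T_AUDITS table. This is purely illustrative (the `AuditUpsert` class is invented for this sketch); the real code issues the two SQL statements through a JdbcTemplate.

```java
import java.util.HashMap;
import java.util.Map;

// In-memory stand-in for the update-then-insert idempotency check.
// A Map plays the role of the T_AUDITS table, keyed by record id.
class AuditUpsert {
    private final Map<Integer, String> audits = new HashMap<>();

    // Returns the number of rows "updated" (0 or 1), mirroring the
    // return value of JdbcTemplate.update(). When the update matches
    // no rows, the record is inserted instead, so a redelivered
    // message never produces a duplicate row.
    int upsert(int id, String value) {
        if (audits.containsKey(id)) {
            audits.put(id, value); // UPDATE path: record already exists
            return 1;
        }
        audits.put(id, value);     // INSERT path: update matched 0 rows
        return 0;
    }

    int size() { return audits.size(); }
}
```

Processing the same id twice takes the update path the second time and leaves exactly one record, which is the property the defensive check is there to guarantee.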