
Seata

Seata is an open-source distributed transaction solution dedicated to providing high-performance and easy-to-use distributed transaction services. Seata offers the AT, TCC, SAGA, and XA transaction modes, giving users a one-stop distributed transaction solution.

Seata provides these four transaction modes for different business scenarios, as described below:

  • AT mode: the one-phase commit and the two-phase commit/rollback (implemented via the undo_log table) are generated automatically by the Seata framework. Users only need to write the “business SQL” to get distributed transactions, so AT mode is a distributed transaction solution that does not intrude on business code.
  • TCC mode: compared with AT mode, TCC mode is more intrusive to business code, but it does not take AT mode’s global row locks, so its performance is much higher than AT mode’s. (Suitable for scenarios with high performance requirements, such as core systems.)
  • SAGA mode: SAGA is a solution for long-running transactions, driven by the business process; it suits scenarios that involve process auditing, such as the financial industry, where multiple levels of review are required.
  • XA mode: a distributed, strongly consistent solution, but its performance is low and it is rarely used.

                           

TCC mode

TCC mode can be divided into three stages:

  • Try: performs the business checks and reserves the required resources
  • Confirm: confirms and commits, using only the resources reserved in the Try phase
  • Cancel: cancels the branch transaction and releases the reserved resources

Three common exceptions in TCC mode

1. Empty rollback

An empty rollback occurs when the two-phase Cancel method of a branch is called even though the one-phase Try method was never called (for example, because the machine went down or a network exception occurred). The Cancel method must recognize that this is an empty rollback and return success directly.

The solution

An additional transaction control table, keyed by the distributed transaction ID and the branch transaction ID, is required. The one-phase Try method inserts a record into it to indicate that phase one has executed. The Cancel interface then reads this record: if the record exists, it rolls back normally; if it does not exist, this is an empty rollback.

2. Idempotency

Idempotency is needed because, for the same branch of the same distributed transaction, the two-phase interfaces of that branch may be called repeatedly. The two-phase Confirm and Cancel interfaces of TCC must therefore be idempotent, so that business resources are never used or released more than once. If idempotency is not handled properly, it can lead to serious problems such as loss of funds.

The solution

Record the execution status of each branch transaction and check it before executing: if the phase has already been executed, do not execute it again; otherwise, proceed normally. The transaction control table introduced for empty rollback already has one record per branch transaction, so a status field can simply be added to that table to record the execution status of each branch transaction.

3. Suspension

Suspension occurs when, for a distributed transaction, the two-phase Cancel interface executes before the one-phase Try interface. Because empty rollback is allowed, the Cancel interface assumes the Try interface was never called, treats the call as an empty rollback, and returns success; the Seata framework then considers the two-phase processing of the distributed transaction complete and ends the whole transaction. If the delayed Try request arrives afterwards and reserves resources, those resources can no longer be confirmed or released, leaving the branch transaction hanging.

The solution

When the two-phase Cancel interface performs an empty rollback, it also inserts a transaction control record with the status set to rolled back. When the one-phase Try interface executes, it first reads this record: if the record exists, phase two has already executed and the Try phase must not run; otherwise, phase two has not executed and the Try phase proceeds normally.
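Putting the three solutions together, the sketch below shows one way to guard a two-phase participant with the transaction control table described above. It is a minimal illustration, not Seata's own implementation: the TccFenceLog class and TccFenceLogMapper DAO are hypothetical placeholders (for example, a MyBatis mapper over the control table), and the status values are illustrative.

import io.seata.rm.tcc.api.BusinessActionContext;

// Hypothetical row of the transaction control table (xid, branch_id, status).
class TccFenceLog {
    String xid;
    Long branchId;
    int status;

    TccFenceLog(String xid, Long branchId, int status) {
        this.xid = xid;
        this.branchId = branchId;
        this.status = status;
    }
}

// Hypothetical DAO over the transaction control table, e.g. a MyBatis mapper.
interface TccFenceLogMapper {
    TccFenceLog select(String xid, Long branchId);
    void insert(TccFenceLog record);
    void updateStatus(String xid, Long branchId, int status);
}

public class FencedTccAction {

    // Illustrative status values stored in the control record.
    static final int TRIED = 1;
    static final int COMMITTED = 2;
    static final int ROLLED_BACK = 3;

    private final TccFenceLogMapper fenceLogMapper;

    public FencedTccAction(TccFenceLogMapper fenceLogMapper) {
        this.fenceLogMapper = fenceLogMapper;
    }

    public boolean prepare(BusinessActionContext ctx) {
        TccFenceLog record = fenceLogMapper.select(ctx.getXid(), ctx.getBranchId());
        // Anti-suspension: if Cancel already ran (empty rollback), refuse the late Try.
        if (record != null && record.status == ROLLED_BACK) {
            return false;
        }
        // Record that phase one executed, then reserve the business resource.
        fenceLogMapper.insert(new TccFenceLog(ctx.getXid(), ctx.getBranchId(), TRIED));
        // ... business checks and resource reservation go here ...
        return true;
    }

    public boolean commit(BusinessActionContext ctx) {
        TccFenceLog record = fenceLogMapper.select(ctx.getXid(), ctx.getBranchId());
        // Idempotency: a repeated Confirm call returns success without acting twice.
        if (record == null || record.status == COMMITTED) {
            return true;
        }
        // ... use the reserved resource here ...
        fenceLogMapper.updateStatus(ctx.getXid(), ctx.getBranchId(), COMMITTED);
        return true;
    }

    public boolean rollback(BusinessActionContext ctx) {
        TccFenceLog record = fenceLogMapper.select(ctx.getXid(), ctx.getBranchId());
        if (record == null) {
            // Empty rollback: Try never ran. Insert a ROLLED_BACK record so a late Try
            // can detect it (anti-suspension), then report success.
            fenceLogMapper.insert(new TccFenceLog(ctx.getXid(), ctx.getBranchId(), ROLLED_BACK));
            return true;
        }
        // Idempotency: a repeated Cancel call returns success without releasing twice.
        if (record.status == ROLLED_BACK) {
            return true;
        }
        // ... release the reserved resource here ...
        fenceLogMapper.updateStatus(ctx.getXid(), ctx.getBranchId(), ROLLED_BACK);
        return true;
    }
}

In production the control-table lookup/insert and the business update should run in the same local database transaction; newer Seata releases also ship a built-in TCC fence that automates this bookkeeping, but the hand-rolled version above makes the mechanics explicit.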

Seata Server installation

Step 1: Download the server package and unzip it to the desired location

unzip seata-server-1.1.0.zip
Seata directory structure:

  • bin: startup scripts for the server
  • conf: configuration files required when Seata Server starts, plus the table-creation statements needed in db (database) storage mode
  • lib: dependencies required to run Seata Server
Step 2: Configure Seata

Seata configuration file (conf directory)

  • file.conf: configures how transaction session information is stored (file or db mode) and the NIO transport settings; by default it corresponds to the file configuration mode selected in registry.conf
  • registry.conf: the core Seata Server configuration file, used to configure the service registration mode and the configuration read mode.
The registration mode supports file, nacos, eureka, redis, zk, consul, etcd3, and sofa; the default is file, which corresponds to the registration information in file.conf. The configuration can be read from file, nacos, apollo, zk, consul, or etcd3; the default is file, which corresponds to the configuration in file.conf.

Modify registry.conf

  • The registry uses Nacos
  • The configuration center uses the file mode
The registry configuration is used for mutual service discovery among the TC, TM, and RM

registry {
  # file, nacos, eureka, redis, zk, consul, etcd3, sofa
  type = "nacos"

  nacos {
    cluster = "seata"
    serverAddr = "127.0.0.1:8848"
  }
}
The configuration center is used to read TC configurations

config {
  # file, nacos, apollo, zk, consul, etcd3
  type = "file"

  file {
    name = "file.conf"
  }
}
Modify file.conf, the local configuration file from which the TC configuration is read:

store {
  ## store mode: file, db
  mode = "file"

  ## file store property
  file {
    ## store location dir
    dir = "sessionStore"
    # branch session size, if exceeded first try compress lockkey, still exceeded throws exceptions
    maxBranchSessionSize = 16384
    # globe session size, if exceeded throws exceptions
    maxGlobalSessionSize = 512
    # file buffer size, if exceeded allocate new buffer
    fileWriteBufferCacheSize = 16384
    # when recover batch read size
    sessionReloadReadSize = 100
    # async, sync
    flushDiskMode = async
  }

  ## database store property
  db {
    ## the implement of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp) etc.
    datasource = "dbcp"
    ## mysql/oracle/h2/oceanbase etc.
    dbType = "mysql"
    driverClassName = "com.mysql.jdbc.Driver"
    url = "jdbc:mysql://127.0.0.1:3306/seata"
    user = "mysql"
    password = "mysql"
    minConn = 1
    maxConn = 10
    globalTable = "global_table"
    branchTable = "branch_table"
    lockTable = "lock_table"
    queryLimit = 100
  }
}
Step 3: Create the seata database and the three required tables

  • global_table: stores global transaction session data
  • branch_table: stores branch transaction session data
  • lock_table: stores distributed lock data

-- -------------------------------- The script used when storeMode is 'db' --------------------------------
-- the table to store GlobalSession data
CREATE TABLE IF NOT EXISTS global_table
(
    xid                       VARCHAR(128) NOT NULL,
    transaction_id            BIGINT,
    status                    TINYINT NOT NULL,
    application_id            VARCHAR(32),
    transaction_service_group VARCHAR(32),
    transaction_name          VARCHAR(128),
    timeout                   INT,
    begin_time                BIGINT,
    application_data          VARCHAR(2000),
    gmt_create                DATETIME,
    gmt_modified              DATETIME,
    PRIMARY KEY (xid),
    KEY idx_gmt_modified_status (gmt_modified, status),
    KEY idx_transaction_id (transaction_id)
) ENGINE = InnoDB DEFAULT CHARSET = utf8;

-- the table to store BranchSession data
CREATE TABLE IF NOT EXISTS branch_table
(
    branch_id         BIGINT NOT NULL,
    xid               VARCHAR(128) NOT NULL,
    transaction_id    BIGINT,
    resource_group_id VARCHAR(32),
    resource_id       VARCHAR(256),
    branch_type       VARCHAR(8),
    status            TINYINT,
    client_id         VARCHAR(64),
    application_data  VARCHAR(2000),
    gmt_create        DATETIME,
    gmt_modified      DATETIME,
    PRIMARY KEY (branch_id),
    KEY idx_xid (xid)
) ENGINE = InnoDB DEFAULT CHARSET = utf8;

-- the table to store lock data
CREATE TABLE IF NOT EXISTS lock_table
(
    row_key        VARCHAR(128) NOT NULL,
    xid            VARCHAR(96),
    transaction_id BIGINT,
    branch_id      BIGINT NOT NULL,
    resource_id    VARCHAR(256),
    table_name     VARCHAR(32),
    pk             VARCHAR(36),
    gmt_create     DATETIME,
    gmt_modified   DATETIME,
    PRIMARY KEY (row_key),
    KEY idx_branch_id (branch_id)
) ENGINE = InnoDB DEFAULT CHARSET = utf8;

Step 4: Start Seata Server

sh seata-server.sh -p 8091 -h 0.0.0.0 -m file

Options:
--host, -h       The host to bind. Default: 0.0.0.0
--port, -p       The port to listen on. Default: 8091
--storeMode, -m  The log store mode: file, db. Default: file
--help
Supplement:

  • External access: if the server must be reachable from outside, replace 0.0.0.0 with the external IP address
  • Background startup: nohup sh seata-server.sh -p 8091 -h 127.0.0.1 -m file > catalina.out 2>&1 &
Seata’s registration information can then be seen in Nacos.

Integrating Seata with Dubbo to implement the AT mode

Add the dependencies

<!-- seata -->
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-alibaba-seata</artifactId>
    <version>2.1.0.RELEASE</version>
    <exclusions>
        <exclusion>
            <artifactId>seata-all</artifactId>
            <groupId>io.seata</groupId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>io.seata</groupId>
    <artifactId>seata-all</artifactId>
    <version>1.1.0</version>
</dependency>
application.properties configuration file

#mysql
spring.datasource.type=com.alibaba.druid.pool.DruidDataSource
spring.datasource.driverClassName=com.mysql.cj.jdbc.Driver
spring.datasource.url=jdbc:mysql://127.0.0.1:3306/dubbodemo
spring.datasource.username=mysql
spring.datasource.password=mysql
spring.cloud.alibaba.seata.tx-service-group=springcloud-alibaba-producer-test
Create file.conf in the resources directory and configure it as follows:

transport {
  # tcp udt unix-domain-socket
  type = "TCP"
  #NIO NATIVE
  server = "NIO"
  #enable heartbeat
  heartbeat = true
  # the client batch send request enable
  enableClientBatchSendRequest = false
  #thread factory for netty
  threadFactory {
    bossThreadPrefix = "NettyBoss"
    workerThreadPrefix = "NettyServerNIOWorker"
    serverExecutorThreadPrefix = "NettyServerBizHandler"
    shareBossWorker = false
    clientSelectorThreadPrefix = "NettyClientSelector"
    clientSelectorThreadSize = 1
    clientWorkerThreadPrefix = "NettyClientWorkerThread"
    # netty boss thread size, will not be used for UDT
    bossThreadSize = 1
    #auto default pin or 8
    workerThreadSize = "default"
  }
  shutdown {
    # when destroy server, wait seconds
    wait = 3
  }
  serialization = "seata"
  compressor = "none"
}

# service configuration, only used in client side
service {
  #transaction service group mapping
  vgroupMapping.springcloud-alibaba-producer-test = "seata"
  seata.grouplist = "127.0.0.1:8091"
  #degrade, current not support
  enableDegrade = false
  #disable seata
  disableGlobalTransaction = false
}

#client transaction configuration, only used in client side
client {
  rm {
    asyncCommitBufferLimit = 10000
    lock {
      retryInterval = 10
      retryTimes = 30
      retryPolicyBranchRollbackOnConflict = true
    }
    reportRetryCount = 5
    tableMetaCheckEnable = false
    reportSuccessEnable = false
    sqlParserType = druid
  }
  tm {
    commitRetryCount = 5
    rollbackRetryCount = 5
  }
  undo {
    dataValidation = true
    logSerialization = "jackson"
    logTable = "undo_log"
  }
  log {
    exceptionRate = 100
  }
}
Create registry.conf in the resources directory as follows:

registry {
  # file, nacos, eureka, redis, zk, consul, etcd3, sofa
  type = "nacos"

  nacos {
    serverAddr = "127.0.0.1:8848"
    namespace = ""
    cluster = "seata"
  }
  eureka {
    serviceUrl = "http://localhost:8761/eureka"
    application = "default"
    weight = "1"
  }
  zk {
    cluster = "default"
    serverAddr = "127.0.0.1:2181"
    session.timeout = 6000
    connect.timeout = 2000
  }
  consul {
    cluster = "default"
    serverAddr = "127.0.0.1:8500"
  }
  etcd3 {
    cluster = "default"
    serverAddr = "http://localhost:2379"
  }
  sofa {
    serverAddr = "127.0.0.1:9603"
    application = "default"
    region = "DEFAULT_ZONE"
    datacenter = "DefaultDataCenter"
    cluster = "default"
    group = "SEATA_GROUP"
    addressWaitTime = "3000"
  }
  file {
    name = "file.conf"
  }
}

config {
  # file, nacos, apollo, zk, consul, etcd3
  type = "file"

  nacos {
    serverAddr = "localhost"
    namespace = ""
    group = "SEATA_GROUP"
  }
  consul {
    serverAddr = "127.0.0.1:8500"
  }
  apollo {
    app.id = "seata-server"
    apollo.meta = "http://192.168.1.204:8801"
    namespace = "application"
  }
  zk {
    serverAddr = "127.0.0.1:2181"
    session.timeout = 6000
    connect.timeout = 2000
  }
  etcd3 {
    serverAddr = "http://localhost:2379"
  }
  file {
    name = "file.conf"
  }
}
Load the data source through DataSourceProxyConfig

  • Seata implements distributed transactions by intercepting at the data source level, so Spring Boot’s default DataSourceAutoConfiguration bean must be excluded and the data source configured manually.
Add @SpringBootApplication(exclude = DataSourceAutoConfiguration.class) to the startup class, and add the following configuration class:

@Configuration
public class DataSourceProxyConfig {

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource druidDataSource() {
        return new DruidDataSource();
    }

    @Bean
    public DataSourceProxy dataSourceProxy(DataSource dataSource) {
        return new DataSourceProxy(dataSource);
    }

    @Bean
    public SqlSessionFactory sqlSessionFactoryBean(DataSourceProxy dataSourceProxy) throws Exception {
        SqlSessionFactoryBean sqlSessionFactoryBean = new SqlSessionFactoryBean();
        sqlSessionFactoryBean.setDataSource(dataSourceProxy);
        sqlSessionFactoryBean.setMapperLocations(new PathMatchingResourcePatternResolver()
                .getResources("classpath*:/mapper/*Mapper.xml"));
        sqlSessionFactoryBean.setTransactionFactory(new SpringManagedTransactionFactory());
        return sqlSessionFactoryBean.getObject();
    }

    @Bean
    public GlobalTransactionScanner globalTransactionScanner() {
        return new GlobalTransactionScanner("account-gts-seata-example", "account-service-seata-service-group");
    }
}
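For reference, a minimal sketch of the startup class with the auto-configuration excluded (the class name here is a placeholder, not taken from the original project):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;

// Exclude the default DataSource auto-configuration so that the DataSourceProxy
// defined in DataSourceProxyConfig is used instead.
@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
public class ProducerApplication {

    public static void main(String[] args) {
        SpringApplication.run(ProducerApplication.class, args);
    }
}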
Create a business table

CREATE TABLE `sys_user` (
  `id` varchar(36) NOT NULL,
  `name` varchar(100) NOT NULL,
  `msg` varchar(500) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `sys_user` (`id`, `name`, `msg`) VALUES ('1', 'wang', 'initialization data');
TCC transaction interface

public interface IUserTccService {

    @TwoPhaseBusinessAction(name = "IUserTccService",commitMethod = "commit",rollbackMethod = "rollback")
    boolean prepare(BusinessActionContext actionContext, @BusinessActionContextParameter(paramName = "userPojo") UserPojo userPojo);

    boolean commit(BusinessActionContext actionContext);

    boolean rollback(BusinessActionContext actionContext);
}
Business interface

public interface IUserService {
    public String ceshi(String input);
}
TCC transaction implementation class

  • Try (the initial operation): completes all business checks and reserves the necessary business resources. (Payment scenario: freeze/withhold 30 yuan.)
  • Confirm: executes the business logic without any further business checks, using only the resources reserved in the Try phase, so Confirm should always succeed once Try has succeeded. Confirm must also be idempotent, so that the distributed transaction takes effect exactly once. (Payment scenario: deduct the withheld payment.)
  • Cancel: releases the business resources reserved in the Try phase and rolls back the data changes made there. Like Confirm, Cancel must be idempotent. (Payment scenario: release the withheld payment.)

@Service
public class UserTccServiceImpl implements IUserTccService {

    @Autowired
    UserPojoMapper userPojoMapper;

    @Override
    public boolean prepare(BusinessActionContext actionContext, UserPojo userPojo) {
        System.out.println("actionContext getXid prepare>>> " + RootContext.getXID());
        int storage = userPojoMapper.updateByPrimaryKey(userPojo);
        if (storage > 0) {
            return true;
        }
        return false;
    }

    @Override
    public boolean commit(BusinessActionContext actionContext) {
        System.out.println("actionContext getXid commit>>> " + actionContext.getXid());
        return true;
    }

    @Override
    public boolean rollback(BusinessActionContext actionContext) {
        System.out.println("actionContext getXid rollback>>> " + actionContext.getXid());
        UserPojo userPojo = JSONObject.toJavaObject((JSONObject) actionContext.getActionContext("userPojo"), UserPojo.class);
        userPojo.setName("name rolled back");
        int storage = userPojoMapper.updateByPrimaryKey(userPojo);
        if (storage > 0) {
            return true;
        }
        return false;
    }
}

Business implementation class

  • A branch transaction that does not involve an RPC call does not trigger a rollback by itself; all branch transactions share the same global transaction ID (XID). Each branch transaction is also a local transaction, which guarantees its own atomicity, and the branches are executed in sequence under the coordination of the RM and the TC.
  • TC (Transaction Coordinator): Confirm is executed only after every Try has succeeded. If any Try fails (or an RPC call fails because of a network problem), a rollback of the distributed transaction is initiated, which preserves the atomicity of the distributed transaction.
  • GlobalTransactionScanner starts both the RM and the TM client.

@Service
public class UserServiceImpl implements IUserService {

    @Autowired
    UserPojoMapper userPojoMapper;
    @Autowired
    UserTccServiceImpl userTccService;

    private static Logger logger = LogManager.getLogger(LogManager.ROOT_LOGGER_NAME);

    @Override
    @GlobalTransactional
    public String ceshi(String input) {
        logger.info("global transaction xid: {}", RootContext.getXID());
        UserPojo userPojo = userPojoMapper.selectByPrimaryKey("1");
        // userPojo.setId("11111111");
        userPojo.setName("normal commit");
        if (userTccService.prepare(null, userPojo)) {
            return "Hello World," + input + "! ,I am " + userPojo.getName();
        }
        return "failed";
    }
}

Invoke the service for validation

The log output shows the distributed transaction information.

Visit the 127.0.0.1:8081/hello page to trigger the call.
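The controller that exposes this page is not shown in the article; a minimal sketch, assuming a hypothetical HelloController that simply delegates to IUserService, could look like this:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller, not part of the original article: it exposes
// IUserService.ceshi() over HTTP so the global transaction can be triggered from a browser.
@RestController
public class HelloController {

    @Autowired
    private IUserService userService;

    @GetMapping("/hello")
    public String hello(@RequestParam(defaultValue = "seata") String input) {
        // Calling ceshi() enters the @GlobalTransactional method shown above.
        return userService.ceshi(input);
    }
}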