Database and Table Sharding in Practice with Middleware (II)

Preface

In Database and Table Sharding in Practice with Middleware (I), we hard-coded the routing rules in application code, manually deciding which table to read or write for the table-sharding design.

This article uses middleware to perform the sharding instead; see the official documentation for details.

ShardingSphere

Introduction

Apache ShardingSphere is an open-source ecosystem of distributed database middleware. It is composed of three independent products: Sharding-JDBC, Sharding-Proxy, and Sharding-Sidecar (in planning). They all provide standardized data sharding, distributed transactions, and database governance, and can be applied to diverse application scenarios such as Java homogeneous and heterogeneous languages, containers, and cloud native.

The positioning of the ShardingSphere project is as follows:

ShardingSphere is positioned as relational database middleware, aiming at fully and reasonably utilizing the computing and storage capabilities of relational databases in distributed scenarios, rather than realizing a brand new relational database.

Specifically, it is divided into three parts:

  • Sharding-JDBC: positioned as a lightweight Java framework that provides additional services at Java’s JDBC layer, delivered as a JAR package.
  • Sharding-Proxy: positioned as a transparent database proxy. It provides a server that encapsulates the database binary protocol and supports heterogeneous languages.
  • Sharding-Sidecar: positioned as a cloud-native database sidecar for Kubernetes or Mesos, deployed as a DaemonSet to proxy all access to the database.

The overall relationship is shown in the figure below: the application layer uses Sharding-JDBC to operate on data, which can in turn go through Sharding-Proxy. Sharding-JDBC, Sharding-Proxy, and Sharding-Sidecar differ as follows:

Sharding-JDBC

Sharding-JDBC is positioned as a lightweight Java framework that provides additional services at Java’s JDBC layer. The client connects directly to the database, and the framework is provided as a JAR package with no extra deployment or dependencies. It can be understood as an enhanced JDBC driver, fully compatible with JDBC and all kinds of ORM frameworks.

  • Works with any Java-based ORM framework, such as JPA, Hibernate, MyBatis, Spring JDBC Template, or raw JDBC.
  • Works with any third-party database connection pool, such as DBCP, C3P0, BoneCP, Druid, HikariCP, etc.
  • Supports any database that implements the JDBC specification; currently MySQL, Oracle, SQL Server, and PostgreSQL.

Main features

Data sharding

  • Database sharding and table sharding
  • Read/write splitting
  • Sharding strategies
  • Distributed primary keys

Distributed transaction

  • Standardized transaction interface
  • XA strongly consistent transactions
  • Flexible (BASE) transactions

Database governance

  • Dynamic configuration
  • Orchestration and governance
  • Data masking (desensitization)
  • Visualized tracing

The yellow part of the figure represents the Sharding-JDBC entry API, provided in the form of factory methods. There are currently two factory classes: ShardingDataSourceFactory and MasterSlaveDataSourceFactory.

  • ShardingDataSourceFactory supports database sharding, table sharding, and read/write splitting.
  • MasterSlaveDataSourceFactory supports read/write splitting.

The blue parts of the figure represent sharding-JDBC configuration objects, which provide flexible configuration methods.

  • ShardingRuleConfiguration is the entry point and core of the sharding configuration; it can contain multiple TableRuleConfiguration and MasterSlaveRuleConfiguration instances.
  • TableRuleConfiguration encapsulates the sharding configuration of one table. There are five configuration forms corresponding to different configuration types.
  • MasterSlaveRuleConfiguration encapsulates the read/write splitting configuration.

The red parts of the figure represent internal objects, which are used internally by Sharding-JDBC and do not concern application developers. Sharding-JDBC uses ShardingRuleConfiguration and MasterSlaveRuleConfiguration to generate the rule objects used by ShardingDataSource and MasterSlaveDataSource. ShardingDataSource and MasterSlaveDataSource implement the DataSource interface and are complete JDBC implementations.

Rule configuration

Sharding-JDBC can be configured via the Java API, YAML, Spring namespace, or Spring Boot Starter. Developers can select the configuration mode appropriate for their scenario.

Create a DataSource

A ShardingDataSource is obtained from ShardingDataSourceFactory together with the rule configuration objects. The resulting DataSource can then be used with native JDBC, or with ORM tools such as JPA or MyBatis:

DataSource dataSource = ShardingDataSourceFactory.createDataSource(dataSourceMap, shardingRuleConfig, props);
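
For reference, here is a minimal sketch of building the inputs to that call, assuming the Sharding-JDBC 4.x Java API; the data source details, logical table, and inline expressions are illustrative only (the Spring Boot properties used later in this article express the same rules declaratively).

import com.zaxxer.hikari.HikariDataSource;
import org.apache.shardingsphere.api.config.sharding.ShardingRuleConfiguration;
import org.apache.shardingsphere.api.config.sharding.TableRuleConfiguration;
import org.apache.shardingsphere.api.config.sharding.strategy.InlineShardingStrategyConfiguration;
import org.apache.shardingsphere.shardingjdbc.api.ShardingDataSourceFactory;

import javax.sql.DataSource;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class DataSourceBuilder {

    public static DataSource build() throws SQLException {
        // Real data sources, keyed by the names referenced in the rule configuration.
        Map<String, DataSource> dataSourceMap = new HashMap<>();
        dataSourceMap.put("test0", hikari("jdbc:mysql://localhost:3306/test0"));

        // Logical table "position" mapped to the physical tables position_0 and position_1.
        TableRuleConfiguration positionRule =
                new TableRuleConfiguration("position", "test0.position_${0..1}");
        positionRule.setTableShardingStrategyConfig(
                new InlineShardingStrategyConfiguration("id", "position_${id % 2}"));

        ShardingRuleConfiguration shardingRuleConfig = new ShardingRuleConfiguration();
        shardingRuleConfig.getTableRuleConfigs().add(positionRule);

        Properties props = new Properties();
        props.setProperty("sql.show", "true"); // print the routed SQL, same as the property used later

        return ShardingDataSourceFactory.createDataSource(dataSourceMap, shardingRuleConfig, props);
    }

    private static DataSource hikari(String jdbcUrl) {
        HikariDataSource ds = new HikariDataSource();
        ds.setDriverClassName("com.mysql.jdbc.Driver");
        ds.setJdbcUrl(jdbcUrl);
        ds.setUsername("root");
        ds.setPassword("root");
        return ds;
    }
}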

Data sharding theory

Core concepts

Table concept

  • Actual table

A physical table that actually exists in the database. For example, b_order0, b_order1

  • Logical table

The logical name for a group of tables that share the same structure after sharding, e.g. b_order.

  • Data nodes

The smallest unit after sharding, consisting of a data source and a table, e.g. ds0.b_order1.

  • Binding tables

Tables whose sharding rules are consistent. For example, b_order and b_order_item are both sharded by order_id, so the two tables can be bound to each other. Multi-table joins between binding tables do not produce Cartesian-product associations, which improves join efficiency.

b_order: b_order0, b_order1
b_order_item: b_order_item0, b_order_item1

SELECT * FROM b_order o JOIN b_order_item i ON (o.order_id = i.order_id) WHERE o.order_id IN (10, 11);

If the binding relationship is not configured, a Cartesian-product association is used and four SQL statements are generated:

SELECT * FROM b_order0 o JOIN b_order_item0 i ON (o.order_id = i.order_id) WHERE o.order_id IN (10, 11);
SELECT * FROM b_order0 o JOIN b_order_item1 i ON (o.order_id = i.order_id) WHERE o.order_id IN (10, 11);
SELECT * FROM b_order1 o JOIN b_order_item0 i ON (o.order_id = i.order_id) WHERE o.order_id IN (10, 11);
SELECT * FROM b_order1 o JOIN b_order_item1 i ON (o.order_id = i.order_id) WHERE o.order_id IN (10, 11);

If the binding relationship is configured, only two SQL statements are generated:

SELECT * FROM b_order0 o JOIN b_order_item0 i ON (o.order_id = i.order_id) WHERE o.order_id IN (10, 11);
SELECT * FROM b_order1 o JOIN b_order_item1 i ON (o.order_id = i.order_id) WHERE o.order_id IN (10, 11);

  • Broadcast tables

Some tables, such as dictionary tables and province information, do not need to be sharded because their data volume is small, yet they often need to be joined with sharded tables that hold massive amounts of data. Broadcast tables are stored on every data node with the same structure and the same data.

ShardingAlgorithm

Since sharding algorithms are closely tied to business logic, ShardingSphere does not ship built-in sharding algorithms. Instead, it abstracts common scenarios into sharding strategies, provides a higher-level abstraction, and exposes interfaces so that application developers can implement the sharding algorithms themselves. Currently, four sharding algorithm interfaces are available.

  • PreciseShardingAlgorithm

Handles sharding for = and IN when a single column is used as the sharding key.

  • RangeShardingAlgorithm

Handles sharding for BETWEEN AND, >, <, >=, and <= when a single column is used as the sharding key.

  • ComplexKeysShardingAlgorithm

Handles sharding with multiple columns used as sharding keys. The logic across multiple sharding keys is complex, and application developers must handle that complexity themselves.

  • HintShardingAlgorithm

Handles sharding via Hint. When the sharding value is not determined by the SQL itself but by other external conditions, SQL Hint can be used to inject the sharding value flexibly. Example: in an internal system, the database is sharded by employee ID, but that column does not appear in the SQL. SQL Hint can be used through both the Java API and SQL annotations (a sketch of the Java API follows below).
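
For reference, a minimal sketch of the Java Hint API, assuming the 4.x HintManager class; the logical table name and sharding values are illustrative only, and a HintShardingAlgorithm still has to be configured for them to take effect.

import org.apache.shardingsphere.api.hint.HintManager;

// HintManager is thread-bound and AutoCloseable, so clear it after use.
try (HintManager hintManager = HintManager.getInstance()) {
    hintManager.addDatabaseShardingValue("position", 0); // route the database shard by value 0
    hintManager.addTableShardingValue("position", 1);    // route the table shard by value 1
    // Execute SQL through the sharded DataSource here; the values above,
    // not the SQL itself, decide the route.
}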

ShardingStrategy

A sharding strategy consists of a sharding key and a sharding algorithm; together they determine how the sharding operation is performed. Currently, five sharding strategies are available.

  • StandardShardingStrategy

Supports only a single sharding key, and supports =, >, <, >=, <=, IN, and BETWEEN AND in SQL statements. It works with a PreciseShardingAlgorithm and a RangeShardingAlgorithm. PreciseShardingAlgorithm is mandatory and RangeShardingAlgorithm is optional; however, if range operations appear in the SQL and no RangeShardingAlgorithm is configured, the query falls back to routing across all shards, which is inefficient (a RangeShardingAlgorithm sketch follows this list).

  • ComplexShardingStrategy

Supports multiple sharding keys, and supports =, >, <, >=, <=, IN, and BETWEEN AND in SQL statements. Because the relationships between multiple sharding keys are complex, it does not add much encapsulation; instead, the combination of sharding key values and the sharding operator are passed straight to the sharding algorithm, which the application developer implements entirely, providing maximum flexibility.

  • InlineShardingStrategy

Supports only a single sharding key, and supports = and IN in SQL statements. Groovy expressions are used so that simple sharding algorithms can be expressed with simple configuration, avoiding tedious Java code. For example, t_user_$->{u_id % 8} means the t_user table is split into 8 tables by u_id modulo 8, with table names t_user_0 to t_user_7.

  • HintShardingStrategy

A strategy for sharding by values specified through Hint rather than extracted from the SQL.

  • NoneShardingStrategy

A strategy that performs no sharding.
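
As referenced in the StandardShardingStrategy item above, here is a minimal sketch of a RangeShardingAlgorithm, assuming the 4.x interface; the position_0/position_1 table names and the modulo-2 rule mirror the inline configuration used later in this article and are illustrative only.

import com.google.common.collect.Range;
import org.apache.shardingsphere.api.sharding.standard.RangeShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.standard.RangeShardingValue;

import java.util.Collection;
import java.util.LinkedHashSet;

public class MyRangeShardingAlgorithm implements RangeShardingAlgorithm<Long> {

    @Override
    public Collection<String> doSharding(Collection<String> availableTargetNames,
                                         RangeShardingValue<Long> shardingValue) {
        Range<Long> range = shardingValue.getValueRange();
        Collection<String> result = new LinkedHashSet<>();
        // For an unbounded range we cannot narrow the route: scan every candidate table.
        if (!range.hasLowerBound() || !range.hasUpperBound()) {
            result.addAll(availableTargetNames);
            return result;
        }
        // Otherwise collect every table a value in [lower, upper] could map to under id % 2.
        for (long id = range.lowerEndpoint(); id <= range.upperEndpoint(); id++) {
            String target = "position_" + (id % 2);
            if (availableTargetNames.contains(target)) {
                result.add(target);
            }
            if (result.size() == availableTargetNames.size()) {
                break;
            }
        }
        return result;
    }
}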

Sharding strategy configuration

Sharding strategies are configured in two dimensions: the data source sharding strategy and the table sharding strategy. The APIs of the two are identical.

  • Data source sharding strategy

Used to configure the target data source to which data is allocated.

  • Table Sharding strategy

Used to configure the target table to which data is allocated. Since tables live inside data sources, the table sharding strategy depends on the result of the data source sharding strategy (see the sketch below).
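
A minimal sketch of configuring both dimensions on one table rule through the assumed 4.x Java API; the column names and expressions mirror the Spring Boot properties used later in this article and are illustrative only.

import org.apache.shardingsphere.api.config.sharding.TableRuleConfiguration;
import org.apache.shardingsphere.api.config.sharding.strategy.InlineShardingStrategyConfiguration;

// ...

TableRuleConfiguration positionRule =
        new TableRuleConfiguration("position", "test${0..1}.position_${0..3}");
// Database dimension: pick the data source first...
positionRule.setDatabaseShardingStrategyConfig(
        new InlineShardingStrategyConfiguration("id", "test${id % 2}"));
// ...table dimension: then pick the table inside that data source.
positionRule.setTableShardingStrategyConfig(
        new InlineShardingStrategyConfiguration("id", "position_${id % 3}"));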

Process analysis

SQL parsing

SQL parsing is divided into lexical parsing and syntax parsing. The SQL is first broken into indivisible tokens by the lexical parser; the syntax parser then understands the SQL and extracts the parsing context. Sharding-JDBC uses different parsers for different dialects. The parser types are:

  • MySQL parser
  • Oracle parser
  • SQLServer parser
  • PostgreSQL parser
  • Default SQL parser

Query optimization

Responsible for merging and optimizing sharding conditions, such as OR conditions.

SQL routing

The sharding strategy configured by the user is matched against the parsing context to generate the routing path. Currently, sharding routes and broadcast routes are supported.

SQL rewriting

Rewrite SQL into statements that would execute correctly in a real database. SQL rewriting is divided into correctness rewriting and optimization rewriting.

SQL execution

Execute SQL asynchronously through a multi-threaded executor.

Result merging

Merges multiple execution result sets so they can be returned through a unified JDBC interface. Result merging includes streaming merge, in-memory merge, and appending merge using the decorator pattern.

SQL Usage Specification

For details, please refer to the official documentation

Using Sharding-JDBC

First, we add the dependencies; here we use JPA directly to operate on the data.

			<dependency>
                <groupId>org.apache.shardingsphere</groupId>
                <artifactId>sharding-jdbc-spring-boot-starter</artifactId>
                <version>4.1.0</version>
            </dependency>
	
		 <dependency>
                <groupId>mysql</groupId>
                <artifactId>mysql-connector-java</artifactId>
                <version>5.1.48</version>
            </dependency>

			<dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-jdbc</artifactId>
                <version>${springboot.version}</version>
            </dependency>

            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-data-jpa</artifactId>
                <version>${springboot.version}</version>
            </dependency>

First, we will demonstrate the table sharding strategy, that is, multiple tables within a single database.

Let’s prepare two tables

Entity class

@Entity
@Table(name="position")
public class Position implements Serializable {

    @Id
    @Column(name = "id")
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private long id;

    @Column(name = "name")
    private String name;

    @Column(name = "salary")
    private String salary;

    @Column(name = "city")
    private String city;
    // getters and setters omitted
}

JPA interface

public interface PositionRepository extends JpaRepository<Position, Long> {}

Policy and data source configuration

# Print the SQL executed by ShardingSphere
spring.shardingsphere.props.sql.show=true

## Table sharding configuration
# Data source names
spring.shardingsphere.datasource.names=test0

# Connection pool configuration
spring.shardingsphere.datasource.test0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.test0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.test0.jdbc-url=jdbc:mysql://localhost:3306/test0
spring.shardingsphere.datasource.test0.username=root
spring.shardingsphere.datasource.test0.password=root

# Actual data nodes of the logical table
spring.shardingsphere.sharding.tables.position.actual-data-nodes=test0.position_${0..1}

# Table sharding column
spring.shardingsphere.sharding.tables.position.table-strategy.inline.sharding-column=id
# Sharding rule: single-database table sharding by id modulo 2
spring.shardingsphere.sharding.tables.position.table-strategy.inline.algorithm-expression=position_$->{id % 2}

# Generate the id with the snowflake algorithm
# Primary key column name
spring.shardingsphere.sharding.tables.position.key-generator.column=id
# Value returned by getType() of the key generator class
spring.shardingsphere.sharding.tables.position.key-generator.type=SNOWFLAKE

Sharding configuration

Here we use row expressions for sharding configuration

  • Inline row expressions

InlineShardingStrategy: uses inline row expressions to configure sharding. Inline expressions simplify the configuration of data nodes and sharding algorithms; the goal is to simplify and unify configuration.

Specific configuration

In the configuration, a row expression is identified by ${expression} or $->{expression}.

If a row expression contains several ${...} or $->{...} sub-expressions, the result of the whole expression is the Cartesian product of the results of the sub-expressions. For example, the row expression test${0..1}.position_${0..1} eventually resolves to the data nodes test0.position_0, test0.position_1, test1.position_0, and test1.position_1.

Row expressions appear in two places: the data node configuration and the sharding algorithm configuration.

The expression inside a row expression is essentially a piece of Groovy code that returns the corresponding real data source or real table name according to the computed sharding key. For example:

ds${id % 10} or ds$->{id % 10}

resolves to ds0, ds1, ds2, …, ds9.

Distributed primary key

ShardingSphere not only provides built-in distributed primary key generators, such as UUID and SNOWFLAKE, but also exposes a distributed key generator interface so that users can implement their own auto-increment key generators.

Built-in primary key generator:

  • UUID

Generates distributed primary keys using UUID.randomUUID().

  • SNOWFLAKE

In the sharding rule configuration, you can configure the primary key generation strategy for each table. By default, the snowflake algorithm is used to generate 64-bit long integers.

Custom primary key generator:

  • Custom primary key class, implement ShardingKeyGenerator interface

  • Configure custom primary key classes according to the SPI specification

In Apache ShardingSphere, many feature implementation classes are loaded via SPI. Note: under the resources directory, create a META-INF folder, then a services folder inside it, and then a file named org.apache.shardingsphere.spi.keygen.ShardingKeyGenerator; open the file, paste the fully qualified name of the custom primary key class into it, and save (see the example after this list).

  • User-defined primary key application configuration
# Primary key column name
spring.shardingsphere.sharding.tables.t_book.key-generator.column=id
# Value returned by getType() of the key generator class
spring.shardingsphere.sharding.tables.t_book.key-generator.type=TestKEY
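
As described above, the SPI registration file would look like the following sketch; the package name of the generator class is hypothetical and must be replaced with the real fully qualified name of your own class.

# file: src/main/resources/META-INF/services/org.apache.shardingsphere.spi.keygen.ShardingKeyGenerator
com.example.sharding.TestId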

Then test the table sharding.

    @org.junit.Test
    public void insert() {
        for (int i = 0; i < 100; i++) {
            Position position = new Position();
            position.setCity("test -> " + i);
            position.setName("name" + i);
            position.setSalary("");
            positionRepository.save(position);
        }
    }

You can see in the log

And then look at the database

We can see that the table ending in 0 holds only even ids and the table ending in 1 holds only odd ids, so the single-database table sharding has succeeded.

But what if the data is heavily skewed? For example, as in the previous post, the data volume differs greatly between customers: we might put several customers with small data volumes into one table (or keep them in the original, unsplit table) and give customers with large data volumes a table of their own. In that case the modulo table sharding strategy no longer fits the business scenario, so we can customize a sharding algorithm instead.

Custom table sharding algorithm

# Custom table sharding algorithm
spring.shardingsphere.sharding.tables.position.table-strategy.standard.sharding-column=name
spring.shardingsphere.sharding.tables.position.table-strategy.standard.precise-algorithm-class-name=com.udeam.config.MyPreciseShardingAlgorithm

With this custom table sharding strategy we shard by name: names ending in 0 go to position_0, names ending in 1 go to position_1, and everything else goes to position_3.

public class MyPreciseShardingAlgorithm implements PreciseShardingAlgorithm<String> {

    @Override
    public String doSharding(Collection<String> availableTargetNames, PreciseShardingValue<String> shardingValue) {
        // availableTargetNames contains all the tables configured in the data nodes:
        // spring.shardingsphere.sharding.tables.position.actual-data-nodes=test0.position_${0..4}
        System.out.println("availableTargetNames->" + availableTargetNames.toString());

        if (shardingValue.getValue().endsWith("0")) {
            return "position_0";
        } else if (shardingValue.getValue().endsWith("1")) {
            return "position_1";
        } else {
            return "position_3";
        }
    }
}

In fact, availableTargetNames contains all the tables we defined in the data nodes, and we can choose among them according to our own rules.

shardingValue here carries the value of our sharding column, and we can select different tables based on it.

You can see the result when you run the test.

Custom primary key

We still use the snowflake algorithm to generate the primary key, but we plug in our own class via the configuration.

Sharding-JDBC provides an interface for custom primary key generators:

public interface ShardingKeyGenerator extends TypeBasedSPI {

    Comparable<?> generateKey();
}

You can see the built-in implementations of the snowflake algorithm and UUID; TestId is our own primary key implementation class.

We customize our own primary key strategy by implementing the interface:

public class TestId implements ShardingKeyGenerator {

    @Override
    public Comparable<?> generateKey() {
        System.out.println("---- custom primary key ----");
        return new SnowflakeShardingKeyGenerator().generateKey();
    }

    @Override
    public String getType() {
        return "TESTKEY"; // must match the key-generator.type value in the configuration below
    }

    @Override
    public Properties getProperties() {
        return null;
    }

    @Override
    public void setProperties(Properties properties) {
    }
}

Configuration

# Primary key column name
spring.shardingsphere.sharding.tables.position.key-generator.column=id
# Value returned by getType() of the key generator class
spring.shardingsphere.sharding.tables.position.key-generator.type=TESTKEY

We also need to register the fully qualified class name on the classpath as described above. The folder and file names are fixed and cannot be changed arbitrarily; the file content is the name of our own generator class.

After running the test, you can see our log message printed, along with the generated primary key in the routed SQL.

Database sharding

Table sharding solves the problems of excessive data volume and query efficiency within a single database, but it does not address concurrency or data I/O performance. For database concurrency and I/O issues, we can shard by database and store the data in two or more databases.

In the figure below, the test0 and test1 databases each contain a position table; querying test0 and test1 shows the same structure.

Since we now have two databases, we need a database sharding strategy; because the tables are identical, we do not need table sharding here.

The configuration is as follows

# Print the SQL executed by ShardingSphere
spring.shardingsphere.props.sql.show=true

## Database sharding configuration
# Data source names
spring.shardingsphere.datasource.names=test0,test1

# Connection pool configuration for test0
spring.shardingsphere.datasource.test0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.test0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.test0.jdbc-url=jdbc:mysql://localhost:3306/test0
spring.shardingsphere.datasource.test0.username=root
spring.shardingsphere.datasource.test0.password=root

# Connection pool configuration for test1
spring.shardingsphere.datasource.test1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.test1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.test1.jdbc-url=jdbc:mysql://localhost:3306/test1
spring.shardingsphere.datasource.test1.username=root
spring.shardingsphere.datasource.test1.password=root

# Database sharding column
spring.shardingsphere.sharding.tables.position.database-strategy.inline.sharding-column=id
# Sharding rule: shard databases by id modulo 2
spring.shardingsphere.sharding.tables.position.database-strategy.inline.algorithm-expression=test$->{id % 2}

# Generate the id with the snowflake algorithm
# Primary key column name
spring.shardingsphere.sharding.tables.position.key-generator.column=id
# Value returned by getType() of the key generator class
spring.shardingsphere.sharding.tables.position.key-generator.type=SNOWFLAKE

Test

Again, use the test case above

    @org.junit.Test
    public void insert() {
        for (int i = 0; i < 100; i++) {
            Position position = new Position();
            position.setCity("test -> " + i);
            position.setName("name" + i % 4);
            position.setSalary("");
            positionRepository.save(position);
        }
    }

From the log we can see that rows with even ids are stored in the test0 database and rows with odd ids in the test1 database.

Database sharding + vertical table splitting

In business, a large table may have many fields; some rarely used fields are seldom queried, while a few common fields are updated frequently. We use vertical splitting to break it into a main table and a detail table, associated by the primary key. Here we demonstrate database sharding plus vertical table splitting.

Note that when sharding databases, we should try to keep tables that reference each other in the same database, so that cross-database transaction and connection problems are avoided by design.

The configuration is as follows:

Both test0 and test1 contain the tables position and position_detail, associated by pid.

# Print the SQL executed by ShardingSphere
spring.shardingsphere.props.sql.show=true

## Database sharding + vertical table splitting
# Data source names
spring.shardingsphere.datasource.names=test0,test1

# Connection pool configuration for test0
spring.shardingsphere.datasource.test0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.test0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.test0.jdbc-url=jdbc:mysql://localhost:3306/test0
spring.shardingsphere.datasource.test0.username=root
spring.shardingsphere.datasource.test0.password=root

# Connection pool configuration for test1
spring.shardingsphere.datasource.test1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.test1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.test1.jdbc-url=jdbc:mysql://localhost:3306/test1
spring.shardingsphere.datasource.test1.username=root
spring.shardingsphere.datasource.test1.password=root

# Database sharding column for position
spring.shardingsphere.sharding.tables.position.database-strategy.inline.sharding-column=id
# Sharding rule: shard databases by id modulo 2
spring.shardingsphere.sharding.tables.position.database-strategy.inline.algorithm-expression=test$->{id % 2}

# Database sharding column for position_detail
spring.shardingsphere.sharding.tables.position_detail.database-strategy.inline.sharding-column=pid
# Sharding rule: shard databases by pid modulo 2, where pid is the id of the vertically split position row
spring.shardingsphere.sharding.tables.position_detail.database-strategy.inline.algorithm-expression=test$->{pid % 2}

# Generate the ids with the snowflake algorithm
# Primary key column name for position
spring.shardingsphere.sharding.tables.position.key-generator.column=id
# Value returned by getType() of the key generator class
spring.shardingsphere.sharding.tables.position.key-generator.type=SNOWFLAKE
# Primary key column name for position_detail
spring.shardingsphere.sharding.tables.position_detail.key-generator.column=id
# Value returned by getType() of the key generator class
spring.shardingsphere.sharding.tables.position_detail.key-generator.type=SNOWFLAKE
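
Since position and position_detail are routed to the same database by the same value (pid is the position id), they could additionally be declared as binding tables, as introduced earlier, so that joins between them avoid Cartesian-product routing. A sketch, assuming the 4.x Spring Boot property name:

spring.shardingsphere.sharding.binding-tables[0]=position,position_detail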

The dao code

public interface PositionDetailRepository extends JpaRepository<PositionDetail, Long> {}

The entity code

@Entity
@Table(name = "position_detail")
public class PositionDetail implements Serializable {

    @Id
    @Column(name = "id")
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private long id;

    @Column(name = "pid")
    private long pid;

    @Column(name = "description")
    private String description;

    // getters and setters omitted
}


This can be verified with the following test:

    @org.junit.Test
    public void insert2() {
        for (int i = 0; i < 100; i++) {
            Position position = new Position();
            position.setCity("test -> " + i);
            position.setName("name" + i % 4);
            position.setSalary("");
            positionRepository.save(position);

            PositionDetail positionDetail = new PositionDetail();
            positionDetail.setDescription("Description");
            positionDetail.setPid(position.getId()); // associate the vertically split tables by primary key
            positionDetailRepository.save(positionDetail);
        }
    }

You can see that the id of position equals the pid of the corresponding position_detail row, and that the position_detail id is also generated by the snowflake algorithm.

Looking at the tables, you can see that the vertical split within the same database has succeeded.

Database sharding + table sharding

Database sharding solves I/O and single-database performance problems, while table sharding solves the problem of large data volumes. Combining database sharding and table sharding addresses both the data volume problem and the concurrency/I/O performance problems.

So let’s look at the database

We have two databases, test0 and test1, and in each database the position table is split into four tables, position_0 to position_3; each position row is associated vertically, by its primary key, with a row in the position_detail table.

Rows with even ids go to test0 and rows with odd ids go to test1; within each database, position rows are distributed across position_0, position_1, and position_2 by id % 3, while position_detail is only database-sharded by pid.

I cannot take the id modulo 4 for the tables here, because modulo 4 is not independent of the modulo 2 used to choose the database: every even id has id % 4 equal to 0 or 2, so in test0 only position_0 and position_2 would ever receive data (and likewise only position_1 and position_3 in test1).

The specific configuration is as follows

# Print the SQL executed by ShardingSphere
spring.shardingsphere.props.sql.show=true

## Database sharding + table sharding + vertical splitting
# Data source names
spring.shardingsphere.datasource.names=test0,test1

# Connection pool configuration for test0
spring.shardingsphere.datasource.test0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.test0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.test0.jdbc-url=jdbc:mysql://localhost:3306/test0
spring.shardingsphere.datasource.test0.username=root
spring.shardingsphere.datasource.test0.password=root

# Connection pool configuration for test1
spring.shardingsphere.datasource.test1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.test1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.test1.jdbc-url=jdbc:mysql://localhost:3306/test1
spring.shardingsphere.datasource.test1.username=root
spring.shardingsphere.datasource.test1.password=root

# Cartesian product of data nodes: test0.position_0 ... test0.position_3, test1.position_0 ... test1.position_3
spring.shardingsphere.sharding.tables.position.actual-data-nodes=test${0..1}.position_${0..3}

# Database sharding column for position
spring.shardingsphere.sharding.tables.position.database-strategy.inline.sharding-column=id
# Split into two databases by id % 2
spring.shardingsphere.sharding.tables.position.database-strategy.inline.algorithm-expression=test$->{id % 2}

# Table sharding column for position
spring.shardingsphere.sharding.tables.position.table-strategy.inline.sharding-column=id
# Split into tables by id % 3
spring.shardingsphere.sharding.tables.position.table-strategy.inline.algorithm-expression=position_$->{id % 3}

# Generate the position id with the snowflake algorithm
spring.shardingsphere.sharding.tables.position.key-generator.column=id
spring.shardingsphere.sharding.tables.position.key-generator.type=SNOWFLAKE

# position_detail: database sharding only, no table sharding
spring.shardingsphere.sharding.tables.position_detail.actual-data-nodes=test${0..1}.position_detail
# position_detail is split across databases by the position id (pid) of the vertical split
spring.shardingsphere.sharding.tables.position_detail.database-strategy.inline.sharding-column=pid
spring.shardingsphere.sharding.tables.position_detail.database-strategy.inline.algorithm-expression=test$->{pid % 2}

# Snowflake id for position_detail
spring.shardingsphere.sharding.tables.position_detail.key-generator.column=id
spring.shardingsphere.sharding.tables.position_detail.key-generator.type=SNOWFLAKE

You can see the data if you test it


Let’s look at our position_detail table

We won’t show test1 here; it holds the odd ids.

Dictionary (broadcast) tables

Dictionary tables are also known as broadcast tables: the table does not need to be sharded, but all of our shards need its data, so a copy is kept in every database. Common examples are city tables, configuration tables, and so on.

Our city table

Here’s a quick demonstration

Java code

@Entity
@Table(name = "city")
public class City implements Serializable {

    @Id
    @Column(name = "id")
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private long id;

    @Column(name = "name")
    private String name;

    @Column(name = "province")
    private String province;
    // getters and setters omitted
}

The dao layer code

public interface CityRepository extends JpaRepository<City,Long> {
}


Configuration

# Print the SQL executed by ShardingSphere
spring.shardingsphere.props.sql.show=true

## Broadcast table: a copy in every database
# Data source names
spring.shardingsphere.datasource.names=test0,test1

# Connection pool configuration for test0
spring.shardingsphere.datasource.test0.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.test0.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.test0.jdbc-url=jdbc:mysql://localhost:3306/test0
spring.shardingsphere.datasource.test0.username=root
spring.shardingsphere.datasource.test0.password=root

# Connection pool configuration for test1
spring.shardingsphere.datasource.test1.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.test1.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.test1.jdbc-url=jdbc:mysql://localhost:3306/test1
spring.shardingsphere.datasource.test1.username=root
spring.shardingsphere.datasource.test1.password=root

# Declare city as a broadcast table
spring.shardingsphere.sharding.broadcast-tables=city
# Snowflake id for city
spring.shardingsphere.sharding.tables.city.key-generator.column=id
# Value returned by getType() of the key generator class
spring.shardingsphere.sharding.tables.city.key-generator.type=SNOWFLAKE

For the test we don’t even need to look at the database; the log shows the same row, with the same id, being inserted into the city table of both test0 and test1.
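
A minimal test sketch following the same pattern as the other tests in this article; the injected CityRepository field and the sample values are assumptions.

    @Autowired
    private CityRepository cityRepository;

    @org.junit.Test
    public void insertCity() {
        City city = new City();
        city.setName("Beijing");      // sample data only
        city.setProvince("Beijing");
        cityRepository.save(city);
        // The routed SQL in the log should show the same row, with the same id,
        // being inserted into the city table of both test0 and test1.
    }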