ShardingSphere Sharding-JDBC

ShardingSphere 5.0.0-beta

1. Obtain the source code

Clone from GitHub: github.com/apache/shardingsphere

Settings before cloning:

git config --global core.longpaths true

When cloning, you may need a proxy or VPN; otherwise the connection can be so slow that GitHub drops it:

git clone https://github.com/apache/shardingsphere.git

1.1. Open the project

Open the project in IDEA and wait for Maven to download the dependencies. Some JARs may get stuck during the download; restarting IDEA lets the download continue.

In my case, the spring-core 4.3.31.BUILD-SNAPSHOT package failed to download. Just find it in the local Maven repository, delete it, then refresh Maven in IDEA to download it again.

If an org.apache.maven.plugins artifact fails to load, you can work around it by declaring it as a regular dependency in the POM so that Maven downloads it.

After Maven has downloaded all dependencies, run the following command in IDEA:

mvn install -Dmaven.test.skip=true -Dmaven.javadoc.skip=true

1.2. Add examples

The examples module is not loaded by Maven by default. Use the + button in IDEA's Maven tool window on the right to import the examples module.

2. Initialize the database

2.1. Initialize the sample database

In the examples/src/resources directory you will find a manual_schema.sql file. Executing this script completes the initialization of the sample databases.

My environment is MySQL 5.7 running in Docker on a VM; I connected to it with Navicat Premium and ran the script. This part is flexible: any working MySQL instance will do.

2.2. Modify connection information

To run the demo, you need to modify the JDBC configuration, which includes the DataSourceUtil class and the YAML configuration.

The demo is in examples -> shardingsphere-jdbc-example -> sharding-example -> sharding-raw-jdbc-example.

Modify the DataSourceUtil class in examples -> example-core -> example-api.

The YAML configuration files are under resources (for example, sharding-databases.yaml).
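
For reference, the connection logic in DataSourceUtil is roughly the following (a paraphrased sketch, not the verbatim source; HOST, PORT, USER_NAME, and PASSWORD are the placeholders you replace with your own environment):

import javax.sql.DataSource;
import com.zaxxer.hikari.HikariDataSource;

public final class DataSourceUtil {

    private static final String HOST = "192.168.10.135"; // your MySQL host
    private static final int PORT = 3306;                // your MySQL port
    private static final String USER_NAME = "root";
    private static final String PASSWORD = "123456";

    public static DataSource createDataSource(final String dataSourceName) {
        // Build a HikariCP pool pointing at the given logical database.
        HikariDataSource result = new HikariDataSource();
        result.setDriverClassName("com.mysql.jdbc.Driver");
        result.setJdbcUrl(String.format("jdbc:mysql://%s:%d/%s?serverTimezone=UTC&useSSL=false",
                HOST, PORT, dataSourceName));
        result.setUsername(USER_NAME);
        result.setPassword(PASSWORD);
        return result;
    }
}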

3. ShardingSphere JDBC function test

Starting from the test class, step into the OrderServiceImpl class, run to a breakpoint to see what is going on, and trace through the processSuccess method.

PS: the breakpoint matters because the test case deletes its data at the end; without pausing there you cannot clearly see the effect.
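
For orientation, processSuccess roughly does the following (a paraphrased sketch of the demo flow, not the verbatim source):

// Paraphrased sketch of the demo's processSuccess flow: insert -> print ->
// delete -> print, hence the breakpoint before the delete.
public void processSuccess() throws SQLException {
    System.out.println("-------------- Process Success Begin ---------------");
    List<Long> orderIds = insertData(); // rows are routed according to the sharding rules
    printData();                        // breakpoint here: the inserted rows are still visible
    deleteData(orderIds);               // the test cleans up after itself
    printData();
    System.out.println("-------------- Process Success Finish --------------");
}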

3.1. Database sharding example

The code defaults to database sharding:

private static ShardingType shardingType = ShardingType.SHARDING_DATABASES;

Database sharding rules (sharding-databases.yaml):

rules:
- !SHARDING
  tables:
    t_order:
      actualDataNodes: ds_${0..1}.t_order
      keyGenerateStrategy:
        column: order_id
        keyGeneratorName: snowflake
    t_order_item:
      actualDataNodes: ds_${0..1}.t_order_item
      keyGenerateStrategy:
        column: order_item_id
        keyGeneratorName: snowflake
  bindingTables:
    - t_order,t_order_item
  broadcastTables:
    - t_address
  defaultDatabaseStrategy:
    standard:
      shardingColumn: user_id
      shardingAlgorithmName: database_inline
  defaultTableStrategy:
    none:
  shardingAlgorithms:
    database_inline:
      type: INLINE
      props:
        algorithm-expression: ds_${user_id % 2}
  keyGenerators:
    snowflake:
      type: SNOWFLAKE
      props:
        worker-id: 123

props:
  sql-show: false

From ds_${user_id % 2} you can see that each row is routed to a database by taking user_id modulo 2.
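
As a trivial illustration (my own sketch, not ShardingSphere code) of how that inline expression routes rows:

public final class DatabaseRouteDemo {

    // Mirrors the INLINE expression ds_${user_id % 2}:
    // even user_ids go to ds_0, odd user_ids go to ds_1.
    static String routeDatabase(final long userId) {
        return "ds_" + (userId % 2);
    }

    public static void main(final String[] args) {
        System.out.println(routeDatabase(10)); // ds_0
        System.out.println(routeDatabase(11)); // ds_1
    }
}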

3.2. Table sharding example

Switch to table sharding:

private static ShardingType shardingType = ShardingType.SHARDING_TABLES;

Table sharding rules (sharding-tables.yaml):

rules:
- !SHARDING
  tables:
    t_order:
      actualDataNodes: ds.t_order_${0..1}
      tableStrategy:
        standard:
          shardingColumn: order_id
          shardingAlgorithmName: t_order_inline
      keyGenerateStrategy:
        column: order_id
        keyGeneratorName: snowflake
    t_order_item:
      actualDataNodes: ds.t_order_item_${0..1}
      tableStrategy:
        standard:
          shardingColumn: order_id
          shardingAlgorithmName: t_order_item_inline
      keyGenerateStrategy:
        column: order_item_id
        keyGeneratorName: snowflake
  bindingTables:
    - t_order,t_order_item
  broadcastTables:
    - t_address
  shardingAlgorithms:
    t_order_inline:
      type: INLINE
      props:
        algorithm-expression: t_order_${order_id % 2}
    t_order_item_inline:
      type: INLINE
      props:
        algorithm-expression: t_order_item_${order_id % 2}
  keyGenerators:
    snowflake:
      type: SNOWFLAKE
      props:
        worker-id: 123

props:
  sql-show: false

You can see that the t_order and t_order_item tables are each split into tables by modulo 2.

On closer inspection, the modulo column is order_id; using user_id instead works just as well.

Modify sharding-tables.yaml:

shardingColumn: user_id
algorithm-expression: t_order_${user_id % 2}

3.3. Database and table sharding example

Switch to database-and-table sharding:

private static ShardingType shardingType = ShardingType.SHARDING_DATABASES_AND_TABLES;

Database and table sharding rules (sharding-databases-tables.yaml):

rules:
- !SHARDING
  tables:
    t_order:
      actualDataNodes: ds_${0..1}.t_order_${0..1}
      tableStrategy:
        standard:
          shardingColumn: order_id
          shardingAlgorithmName: t_order_inline
      keyGenerateStrategy:
        column: order_id
        keyGeneratorName: snowflake
    t_order_item:
      actualDataNodes: ds_${0..1}.t_order_item_${0..1}
      tableStrategy:
        standard:
          shardingColumn: order_id
          shardingAlgorithmName: t_order_item_inline
      keyGenerateStrategy:
        column: order_item_id
        keyGeneratorName: snowflake
  bindingTables:
    - t_order,t_order_item
  broadcastTables:
    - t_address
  defaultDatabaseStrategy:
    standard:
      shardingColumn: user_id
      shardingAlgorithmName: database_inline
  defaultTableStrategy:
    none:
  shardingAlgorithms:
    database_inline:
      type: INLINE
      props:
        algorithm-expression: ds_${user_id % 2}
    t_order_inline:
      type: INLINE
      props:
        algorithm-expression: t_order_${order_id % 2}
    t_order_item_inline:
      type: INLINE
      props:
        algorithm-expression: t_order_item_${order_id % 2}
  keyGenerators:
    snowflake:
      type: SNOWFLAKE
      props:
        worker-id: 123

props:
  sql-show: false

As you can see, the default database sharding algorithm takes user_id modulo 2, and the table sharding algorithms each take their own key (order_id) modulo 2. It is simple routing.
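
Put together (again my own sketch, not ShardingSphere code), the exact data node for a row is just the pair of modulo results:

public final class DataNodeRouteDemo {

    // Mirrors the combined rules above: database by user_id % 2, table by order_id % 2.
    static String routeDataNode(final long userId, final long orderId) {
        return "ds_" + (userId % 2) + ".t_order_" + (orderId % 2);
    }

    public static void main(final String[] args) {
        System.out.println(routeDataNode(10, 7)); // ds_0.t_order_1
        System.out.println(routeDataNode(11, 8)); // ds_1.t_order_0
    }
}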

As with the table sharding example, I ran into configuration issues and tried sharding everything by user_id; the changes are as follows:

rules:
- !SHARDING
  tables:
    t_order:
      actualDataNodes: ds_${0..1}.t_order_${0..3}
      tableStrategy:
        standard:
          shardingColumn: user_id
          shardingAlgorithmName: t_order_inline
      keyGenerateStrategy:
        column: order_id
        keyGeneratorName: snowflake
    t_order_item:
      actualDataNodes: ds_${0..1}.t_order_item_${0..3}
      tableStrategy:
        standard:
          shardingColumn: user_id
          shardingAlgorithmName: t_order_item_inline
      keyGenerateStrategy:
        column: order_item_id
        keyGeneratorName: snowflake
  bindingTables:
    - t_order,t_order_item
  defaultDatabaseStrategy:
    standard:
      shardingColumn: user_id
      shardingAlgorithmName: database_inline
  defaultTableStrategy:
    standard:
      shardingColumn: user_id
      shardingAlgorithmName: t_order_inline
  shardingAlgorithms:
    database_inline:
      type: INLINE
      props:
        algorithm-expression: ds_${user_id % 2}
    t_order_inline:
      type: INLINE
      props:
        algorithm-expression: t_order_${user_id % 2}
    t_order_item_inline:
      type: INLINE
      props:
        algorithm-expression: t_order_item_${user_id % 2}

The problem with this setup is that each database carries two redundant tables (user_id % 2 can only ever produce suffix 0 or 1, while actualDataNodes declares tables 0..3), and it still fails to meet my expectation. In a later attempt I will try sharding tables by address_id and databases by user_id; presumably my understanding of database-and-table sharding is still incomplete. The run ends with:

Exception in thread "main" org.apache.shardingsphere.infra.exception.ShardingSphereException: Insert statement does not support sharding table routing to multiple data nodes.

3.4. Example of read/write separation

Switch to read/write splitting:

private static ShardingType shardingType = ShardingType.READWRITE_SPLITTING;

My test environment: two MySQL servers simulated with Docker, with master/slave replication configured.

1. Master 192.168.10.135:3310 (docker1)
2. Slave 192.168.10.135:3311 (docker2)

Read/write splitting rules:

dataSources:
  write_ds:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: com.mysql.jdbc.Driver
    jdbcUrl: jdbc:mysql://192.168.10.135:3310/demo_write_ds?serverTimezone=UTC&useSSL=false&useUnicode=true&characterEncoding=UTF-8
    username: root
    password: 123456
  read_ds_0:
    dataSourceClassName: com.zaxxer.hikari.HikariDataSource
    driverClassName: com.mysql.jdbc.Driver
    jdbcUrl: jdbc:mysql://192.168.10.135:3311/demo_write_ds?serverTimezone=UTC&useSSL=false&useUnicode=true&characterEncoding=UTF-8
    username: root
    password: 123456

rules:
- !READWRITE_SPLITTING
  dataSources:
    pr_ds:
      writeDataSourceName: write_ds
      readDataSourceNames: [read_ds_0]

props:
  sql-show: false

To verify read/write splitting, I deleted the data that had been synchronized to the slave and then ran printData(); the printed result reflected the slave's state, confirming that reads go to the slave. That completes the read/write splitting test.
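
The check boils down to the pattern below (a minimal sketch assuming the 5.0.0-beta driver API YamlShardingSphereDataSourceFactory; the YAML file name and the t_order columns are my own assumptions based on the demo schema):

import java.io.File;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;
import org.apache.shardingsphere.driver.api.yaml.YamlShardingSphereDataSourceFactory;

public final class ReadwriteSplittingCheck {

    public static void main(final String[] args) throws Exception {
        // Build the data source from the read/write splitting YAML shown above
        // (the file name here is my own choice).
        DataSource dataSource = YamlShardingSphereDataSourceFactory
                .createDataSource(new File("readwrite-splitting.yaml"));
        try (Connection conn = dataSource.getConnection();
             Statement stmt = conn.createStatement()) {
            // Writes are routed to write_ds (the master).
            stmt.executeUpdate("INSERT INTO t_order (user_id, status) VALUES (1, 'INIT')");
            // Reads are routed to read_ds_0 (the slave); rows removed directly
            // on the slave will not show up here, which is what proves the split.
            try (ResultSet rs = stmt.executeQuery("SELECT COUNT(1) FROM t_order")) {
                while (rs.next()) {
                    System.out.println("rows visible on the read side: " + rs.getLong(1));
                }
            }
        }
    }
}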

3.5. Encryption example

The demo is in examples -> shardingsphere-jdbc-example -> other-feature-example -> encrypt-example -> encrypt-raw-jdbc-example.

The entry point is the processSuccess method in the UserServiceImpl class.

Remember to modify the database connection in encrypt-conditions.yaml.

Encryption rules:

rules:
- !ENCRYPT
  tables:
    t_user:
      columns:
        user_name:
          plainColumn: user_name_plain
          cipherColumn: user_name
          encryptorName: name_encryptor
        pwd:
          cipherColumn: pwd
          assistedQueryColumn: assisted_query_pwd
          encryptorName: pwd_encryptor
  encryptors:
    name_encryptor:
      type: AES
      props:
        aes-key-value: 123456abc
    pwd_encryptor:
      type: assistedTest
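
A hedged illustration of what this rule means at the SQL level (my own sketch, not demo code; the rewritten statement in the comments is conceptual, not ShardingSphere's literal output):

import java.sql.Connection;
import java.sql.Statement;
import javax.sql.DataSource;

public final class EncryptIllustration {

    // dataSource would be built from the encrypt YAML configuration above.
    static void insertUser(final DataSource dataSource) throws Exception {
        try (Connection conn = dataSource.getConnection();
             Statement stmt = conn.createStatement()) {
            // The application writes logical SQL against logical column names:
            stmt.executeUpdate(
                "INSERT INTO t_user (user_id, user_name, pwd) VALUES (1, 'alice', 'pass')");
            // ShardingSphere rewrites it against the physical columns, conceptually:
            //   INSERT INTO t_user (user_id, user_name_plain, user_name, pwd, assisted_query_pwd)
            //   VALUES (1, 'alice', AES('alice'), cipher('pass'), assist('pass'))
            // user_name keeps a plaintext copy (plainColumn) next to the AES
            // ciphertext (cipherColumn); pwd stores only ciphertext plus an
            // assisted-query column used for equality lookups.
        }
    }
}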

3.6. Rule Description

tables:                                       # sharding table rules
  t_order:                                    # logical table name
    actualDataNodes: ds_${0..1}.t_order       # actual data nodes
    keyGenerateStrategy:                      # primary key generation strategy
      column: order_id                        # primary key column name
      keyGeneratorName: snowflake             # key generator name
bindingTables:                                # binding tables: tables whose sharding rules are consistent; joins between binding tables avoid Cartesian-product routing, improving join efficiency
  - t_order,t_order_item
broadcastTables:                              # broadcast tables: replicated in full to every database
  - t_address
defaultDatabaseStrategy:                      # default database sharding strategy
  standard:
    shardingColumn: user_id                   # sharding column
    shardingAlgorithmName: database_inline    # sharding algorithm name
defaultTableStrategy:                         # default table sharding strategy
  none:
shardingAlgorithms:
  database_inline:
    type: INLINE                              # inline expression algorithm
    props:
      algorithm-expression: ds_${user_id % 2}

Conclusion

This study of the ShardingSphere source code mainly covered the Sharding-JDBC examples module. The goal was to get a first impression of Sharding-JDBC through the officially provided test cases. I mainly tested the JDBC version's database sharding, table sharding, read/write splitting, and encryption features; the demos also have JPA and MyBatis implementations, whose principles are similar, so I did not explore them further. This time I took a closer look at the rules configuration, worked through the default configuration, and found the inline configuration quite nice. I will continue to study the ShardingSphere source code.