Sharding-Sphere
Sharding-JDBC is a sub-database and sub-table framework first used at Dangdang. It was open-sourced in 2017, and with continuous iteration by a large number of community contributors its functionality has gradually matured; it has since been renamed ShardingSphere. It became an Apache Software Foundation Top-Level Project on April 16, 2020.
The core functions of ShardingSphere have diversified as versions have evolved. As shown in Figure 7-1, the ShardingSphere ecosystem includes three open source distributed database middleware solutions: Sharding-JDBC, Sharding-Proxy and Sharding-Sidecar.
Version 5.x of Apache ShardingSphere is committed to a pluggable architecture in which the project's functional components can be flexibly extended as plug-ins. At present, functions such as data sharding, read/write splitting, data encryption and shadow database stress testing, as well as support for SQL dialects and protocols such as MySQL, PostgreSQL, SQL Server and Oracle, are woven into the project through plug-ins. Developers can assemble their own systems like building blocks. Apache ShardingSphere currently offers dozens of SPIs as extension points, and more are being added.
Figure 7-2 shows the overall architecture of Sharding-Sphere.
Sharding-JDBC
Sharding-JDBC is the most commonly used component. It is positioned as an enhanced JDBC driver; simply put, it completes the routing and sharding operations of sub-database sub-table on the application side. It is also the component we focus on analyzing in this stage.
After introducing the Sharding-JDBC dependency into a project, the business code operates on the database through Sharding-JDBC. That is, the core actions of sub-database sub-table, such as SQL parsing, routing, execution and result processing, are done by it, and it works on the client side.
Sharding-Proxy
Sharding-Proxy is similar to Mycat in that it provides a database-level proxy. Simply put, where the application used to connect directly to the database, after introducing Sharding-Proxy the application connects to Sharding-Proxy, which processes the request and forwards it to MySQL.
The advantage of this approach is that the user does not need to be aware of the existence of sub-tables; access looks like normal access to MySQL. Currently, Sharding-Proxy supports MySQL and PostgreSQL, as shown in Figure 7-4.
Sharding-Sidecar (TODO)
Sharding-Sidecar targets a service mesh architecture: it is a cloud-native database proxy for Kubernetes that proxies all access to the database as a sidecar. Sharding-Sidecar is currently under development and has not yet been released.
Sharding-JDBC
Sharding-JDBC is an enhanced version of the original JDBC driver. In sub-database sub-table scenarios, Sharding-JDBC provides the functions shown in Figure 7-5.
Sharding-JDBC overall architecture
As shown in Figure 7-6, the Java application accesses the database through the Sharding-JDBC driver, which completes functions such as database and table routing and distributed transactions according to the relevant configuration. Therefore it can be considered an enhancement of the JDBC driver.
The Registry Center represents the Registry for centralized sharding configuration rule management, dynamic configuration, and data source information.
Basic use of Sharding-JDBC
For a better understanding, let's take a brief look at Sharding-JDBC with a case study.
Shardingsphere.apache.org/document/cu…
For a more intuitive understanding of Sharding-JDBC, a native example is presented below.
Figure 7-8 shows the overall project structure.
Introducing Maven dependencies
<dependency>
<groupId>org.apache.shardingsphere</groupId>
<artifactId>shardingsphere-jdbc-core</artifactId>
<version>5.0.0-alpha</version>
</dependency>
<dependency>
<groupId>com.zaxxer</groupId>
<artifactId>HikariCP</artifactId>
<version>3.4.1</version>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
</dependency>
Order
The entity object that defines the Order table.
@Data
public class Order implements Serializable {
private static final long serialVersionUID = 661434701950670670L;
private long orderId;
private int userId;
private long addressId;
private String status;
}
OrderRepositoryImpl
Define the database operations layer
public interface OrderRepository {
    void createTableIfNotExists() throws SQLException;

    Long insert(final Order order) throws SQLException;
}
public class OrderRepositoryImpl implements OrderRepository {
    private final DataSource dataSource;

    public OrderRepositoryImpl(final DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public void createTableIfNotExists() throws SQLException {
        String sql = "CREATE TABLE IF NOT EXISTS t_order (order_id BIGINT NOT NULL AUTO_INCREMENT, user_id INT NOT NULL, address_id BIGINT NOT NULL, status VARCHAR(50), PRIMARY KEY (order_id))";
        try (Connection connection = dataSource.getConnection();
             Statement statement = connection.createStatement()) {
            statement.executeUpdate(sql);
        }
    }

    @Override
    public Long insert(final Order order) throws SQLException {
        String sql = "INSERT INTO t_order (user_id, address_id, status) VALUES (?, ?, ?)";
        try (Connection connection = dataSource.getConnection();
             PreparedStatement preparedStatement = connection.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
            preparedStatement.setInt(1, order.getUserId());
            preparedStatement.setLong(2, order.getAddressId());
            preparedStatement.setString(3, order.getStatus());
            preparedStatement.executeUpdate();
            try (ResultSet resultSet = preparedStatement.getGeneratedKeys()) {
                if (resultSet.next()) {
                    order.setOrderId(resultSet.getLong(1));
                }
            }
        }
        return order.getOrderId();
    }
}
OrderServiceImpl
Define the service layer
public interface ExampleService {
    /**
     * Initialize the table structure
     * @throws SQLException SQL exception
     */
    void initEnvironment() throws SQLException;

    /**
     * Execute the demo insert flow
     * @throws SQLException SQL exception
     */
    void processSuccess() throws SQLException;
}
public class OrderServiceImpl implements ExampleService {
    private final OrderRepository orderRepository;
    private final Random random = new Random();

    public OrderServiceImpl(final DataSource dataSource) {
        orderRepository = new OrderRepositoryImpl(dataSource);
    }

    @Override
    public void initEnvironment() throws SQLException {
        orderRepository.createTableIfNotExists();
    }

    @Override
    public void processSuccess() throws SQLException {
        System.out.println("-------------- Process Success Begin ---------------");
        List<Long> orderIds = insertData();
        System.out.println("-------------- Process Success Finish --------------");
    }

    private List<Long> insertData() throws SQLException {
        System.out.println("---------------------------- Insert Data ----------------------------");
        List<Long> result = new ArrayList<>(10);
        for (int i = 1; i <= 10; i++) {
            Order order = insertOrder(i);
            result.add(order.getOrderId());
        }
        return result;
    }

    private Order insertOrder(final int i) throws SQLException {
        Order order = new Order();
        order.setUserId(random.nextInt(10000));
        order.setAddressId(i);
        order.setStatus("INSERT_TEST");
        orderRepository.insert(order);
        return order;
    }
}
DataSourceUtil
public class DataSourceUtil {
    private static final String HOST = "192.168.221.128";
    private static final int PORT = 3306;
    private static final String USER_NAME = "root";
    private static final String PASSWORD = "123456";

    public static DataSource createDataSource(final String dataSourceName) {
        HikariDataSource result = new HikariDataSource();
        result.setDriverClassName("com.mysql.jdbc.Driver");
        result.setJdbcUrl(String.format("jdbc:mysql://%s:%s/%s?serverTimezone=UTC&useSSL=false&useUnicode=true&characterEncoding=UTF-8", HOST, PORT, dataSourceName));
        result.setUsername(USER_NAME);
        result.setPassword(PASSWORD);
        return result;
    }
}
Sharding-jdbc Sharding rule configuration
public class ShardingDatabasesAndTableConfiguration {
    // Create two data sources
    private static Map<String, DataSource> createDataSourceMap() {
        Map<String, DataSource> dataSourceMap = new HashMap<>();
        dataSourceMap.put("ds0", DataSourceUtil.createDataSource("shard01"));
        dataSourceMap.put("ds1", DataSourceUtil.createDataSource("shard02"));
        return dataSourceMap;
    }

    private static ShardingRuleConfiguration createShardingRuleConfiguration() {
        ShardingRuleConfiguration configuration = new ShardingRuleConfiguration();
        configuration.getTables().add(getOrderTableRuleConfiguration());
        // configuration.getBindingTableGroups().add("t_order,t_order_item");
        // Set the database sharding strategy; the "inline" algorithm below uses an
        // expression such as ds${user_id % 2} to route a row to a data source
        configuration.setDefaultDatabaseShardingStrategy(
                new StandardShardingStrategyConfiguration("user_id", "inline"));
        // Set the table sharding strategy
        configuration.setDefaultTableShardingStrategy(
                new StandardShardingStrategyConfiguration("order_id", "order_inline"));
        Properties props = new Properties();
        // The target data source is chosen by user_id modulo 2
        props.setProperty("algorithm-expression", "ds${user_id % 2}");
        // Define the concrete sharding algorithms referenced by the strategies above
        configuration.getShardingAlgorithms().put("inline",
                new ShardingSphereAlgorithmConfiguration("INLINE", props));
        Properties properties = new Properties();
        properties.setProperty("algorithm-expression", "t_order_${order_id % 2}");
        configuration.getShardingAlgorithms().put("order_inline",
                new ShardingSphereAlgorithmConfiguration("INLINE", properties));
        configuration.getKeyGenerators().put("snowflake",
                new ShardingSphereAlgorithmConfiguration("SNOWFLAKE", getProperties()));
        return configuration;
    }

    private static Properties getProperties() {
        Properties properties = new Properties();
        properties.setProperty("worker-id", "123");
        return properties;
    }

    // Create the sharding rule for the order table
    private static ShardingTableRuleConfiguration getOrderTableRuleConfiguration() {
        ShardingTableRuleConfiguration tableRule =
                new ShardingTableRuleConfiguration("t_order", "ds${0..1}.t_order_${0..1}");
        tableRule.setKeyGenerateStrategy(new KeyGenerateStrategyConfiguration("order_id", "snowflake"));
        return tableRule;
    }

    public static DataSource getDataSource() throws SQLException {
        return ShardingSphereDataSourceFactory.createDataSource(
                createDataSourceMap(), Collections.singleton(createShardingRuleConfiguration()), new Properties());
    }
}
Main method test
public class ExampleMain {
    public static void main(String[] args) throws SQLException {
        DataSource dataSource = ShardingDatabasesAndTableConfiguration.getDataSource();
        ExampleService exampleService = new OrderServiceImpl(dataSource);
        exampleService.initEnvironment();
        exampleService.processSuccess();
    }
}
Sharding-JDBC usage summary
As the case above shows, Sharding-JDBC lets us define sharding rules purely through configuration. However, the native usage mode makes that configuration fairly verbose. Integrating it into Spring Boot is simpler to use.
Spring Boot integrates Sharding-JDBC Sharding
The following demonstrates how to configure sub-database sub-table in a Spring Boot application that integrates MyBatis.
The project code is sharding-jdbc-spring-boot-example. Figure 7-8 shows the project structure.
The dal package contains MybatisPlusGeneratorConfig, which generates the entity, service and controller code for the t_order table. Since this code is generated by MyBatis-Plus, it is not described further here.
Introduce POM dependencies
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <scope>runtime</scope>
    </dependency>
    <dependency>
        <groupId>com.baomidou</groupId>
        <artifactId>mybatis-plus-boot-starter</artifactId>
        <version>3.4.3</version>
    </dependency>
    <dependency>
        <groupId>com.baomidou</groupId>
        <artifactId>mybatis-plus-generator</artifactId>
        <version>3.4.1</version>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <version>1.18.12</version>
    </dependency>
    <dependency>
        <groupId>org.apache.shardingsphere</groupId>
        <artifactId>shardingsphere-jdbc-core-spring-boot-starter</artifactId>
        <version>5.0.0-alpha</version>
    </dependency>
    <dependency>
        <groupId>com.zaxxer</groupId>
        <artifactId>HikariCP</artifactId>
        <version>3.4.2</version>
    </dependency>
</dependencies>
application.properties configuration
# Data source names
spring.shardingsphere.datasource.names=ds-0,ds-1
spring.shardingsphere.datasource.common.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.common.driver-class-name=com.mysql.jdbc.Driver
# Details of each data source
spring.shardingsphere.datasource.ds-0.username=root
spring.shardingsphere.datasource.ds-0.password=123456
spring.shardingsphere.datasource.ds-0.jdbc-url=jdbc:mysql://192.168.221.128:3306/shard01?serverTimezone=UTC&useSSL=false&useUnicode=true&characterEncoding=UTF-8
spring.shardingsphere.datasource.ds-1.username=root
spring.shardingsphere.datasource.ds-1.password=123456
spring.shardingsphere.datasource.ds-1.jdbc-url=jdbc:mysql://192.168.221.128:3306/shard02?serverTimezone=UTC&useSSL=false&useUnicode=true&characterEncoding=UTF-8
# Default database sharding strategy; database-inline refers to the algorithm declared below
spring.shardingsphere.rules.sharding.default-database-strategy.standard.sharding-column=user_id
spring.shardingsphere.rules.sharding.default-database-strategy.standard.sharding-algorithm-name=database-inline
# Table sharding strategy for t_order; t-order-inline refers to the algorithm declared below.
# Inline expressions can use ${...} or $->{...}, but the former conflicts with Spring's own
# property placeholders, so $->{...} is recommended in Spring environments.
spring.shardingsphere.rules.sharding.tables.t_order.actual-data-nodes=ds-$->{0..1}.t_order_$->{0..1}
spring.shardingsphere.rules.sharding.tables.t_order.table-strategy.standard.sharding-column=order_id
spring.shardingsphere.rules.sharding.tables.t_order.table-strategy.standard.sharding-algorithm-name=t-order-inline
# Generate the global order_id with the snowflake algorithm
spring.shardingsphere.rules.sharding.tables.t_order.key-generate-strategy.column=order_id
spring.shardingsphere.rules.sharding.tables.t_order.key-generate-strategy.key-generator-name=snowflake
# Concrete sharding algorithm rules
spring.shardingsphere.rules.sharding.sharding-algorithms.database-inline.type=INLINE
spring.shardingsphere.rules.sharding.sharding-algorithms.database-inline.props.algorithm-expression=ds-$->{user_id % 2}
spring.shardingsphere.rules.sharding.sharding-algorithms.t-order-inline.type=INLINE
spring.shardingsphere.rules.sharding.sharding-algorithms.t-order-inline.props.algorithm-expression=t_order_$->{order_id % 2}
spring.shardingsphere.rules.sharding.sharding-algorithms.t-order-item-inline.type=INLINE
spring.shardingsphere.rules.sharding.sharding-algorithms.t-order-item-inline.props.algorithm-expression=t_order_item_$->{order_id % 2}
# Snowflake key generator
spring.shardingsphere.rules.sharding.key-generators.snowflake.type=SNOWFLAKE
spring.shardingsphere.rules.sharding.key-generators.snowflake.props.worker-id=123
Add logical code
TOrderMapper
@Update("CREATE TABLE IF NOT EXISTS t_order (order_id BIGINT AUTO_INCREMENT, user_id INT NOT NULL, address_id BIGINT NOT NULL, status VARCHAR(50), PRIMARY KEY (order_id))")
void createTableIfNotExists();
TOrderServiceImpl
@Service
public class TOrderServiceImpl extends ServiceImpl<TOrderMapper, TOrder> implements ITOrderService {
    @Autowired
    TOrderMapper orderMapper;

    Random random = new Random();

    @Override
    public void initEnvironment() throws SQLException {
        orderMapper.createTableIfNotExists();
    }

    @Override
    public void processSuccess() throws SQLException {
        System.out.println("-------------- Process Success Begin ---------------");
        List<Long> orderIds = insertData();
        System.out.println("-------------- Process Success Finish --------------");
    }

    private List<Long> insertData() throws SQLException {
        System.out.println("---------------------------- Insert Data ----------------------------");
        List<Long> result = new ArrayList<>(10);
        for (int i = 1; i <= 10; i++) {
            TOrder order = new TOrder();
            order.setUserId(random.nextInt(10000));
            order.setAddressId(i);
            order.setStatus("INSERT_TEST");
            orderMapper.insert(order);
            result.add(order.getOrderId());
        }
        return result;
    }
}
TOrderController
Provides test interfaces.
@RestController
@RequestMapping("/t-order")
public class TOrderController {
    @Autowired
    ITOrderService orderService;

    @GetMapping
    public void init() throws SQLException {
        orderService.initEnvironment();
        orderService.processSuccess();
    }
}
Sharding-JDBC concepts
As can be seen from the above, Sharding-JDBC essentially enhances the JDBC driver so that developers can implement sub-database sub-table purely through configuration.
Sharding-JDBC has several table-related concepts worth introducing: logical table, real table, sharding key, data node, dynamic table, broadcast table and binding table.
Logical table
A logical table can be understood like a view in a database: a virtual table. It can be mapped to a single physical table, or composed of multiple physical tables from different data sources. For MySQL, HBase and Elasticsearch, you only need keys with the same meaning to form a logical table: the primary key in MySQL, the rowkey in HBase, and the key in Elasticsearch.
The previous sub-database sub-table rule configuration defined the logical table t_order. When we operate on t_order, the operations are mapped to the actual physical tables according to the sharding rules, as shown in Figure 7-9. The logical table name is replaced with the real table name during SQL parsing and routing.
spring.shardingsphere.rules.sharding.tables.t_order.actual-data-nodes=ds-$->{0..1}.t_order_$->{0..1}
The broadcast table
Broadcast tables are also called global tables, meaning that they exist redundantly in multiple libraries to avoid cross-library query problems.
For example, basic data such as provinces and dictionaries can be synchronized to each database node, as shown in Figure 7-10, to avoid cross-database problems in querying basic data associated with tables after databases and tables are divided.
In Sharding-JDBC, the configuration is as follows
# Broadcast table; the master node is ds0
spring.shardingsphere.sharding.broadcast-tables=t_config
spring.shardingsphere.sharding.tables.t_config.actual-data-nodes=ds$->{0}.t_config
The binding table
Some tables have logical primary/foreign key relationships. For example, the order table order_info stores the total quantity and total amount of goods, while order_detail stores the price and quantity of each item; or there is a dependency between a parent table and a child table. Associated queries between them are common. If parent rows and their child rows are stored in different databases, cross-database joins become troublesome. So can we place child rows on the same node as the parent rows they belong to?
For example, if the order with order_id=1001 is stored on node1, all of its details are also stored on node1; the order with order_id=1002 and all of its details are stored on node2. An associated query then stays within a single database, as shown in Figure 7-11.
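The co-location idea can be sketched in a few lines of plain Java (illustrative only, not the ShardingSphere API): because t_order and t_order_item share the sharding key order_id, the same routing function sends a parent row and its detail rows to the same data source. The `ds-` names and the modulo-2 rule are assumptions mirroring the earlier examples.

```java
// Illustrative sketch: binding tables share the sharding key, so the parent
// row and its child rows always resolve to the same data source.
public class BindingRouteDemo {
    // Hypothetical routing rule: data source = ds-(order_id % 2)
    public static String route(long orderId) {
        return "ds-" + (orderId % 2);
    }

    public static void main(String[] args) {
        // t_order and t_order_item rows for the same order_id land together,
        // so their join never crosses databases.
        System.out.println(route(1001L)); // ds-1
        System.out.println(route(1002L)); // ds-0
    }
}
```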
# Binding table rules; multiple groups of rules can be configured as an array
spring.shardingsphere.rules.sharding.binding-tables=t_order,t_order_item
If more than one binding table rule exists, you can declare it as an array
spring.shardingsphere.rules.sharding.binding-tables[0]=# binding table rule list
spring.shardingsphere.rules.sharding.binding-tables[1]=# binding table rule list
spring.shardingsphere.rules.sharding.binding-tables[x]=# binding table rule list
Sharding strategies in Sharding-JDBC
Sharding-JDBC has many common sharding strategies built in. These algorithms mainly target two dimensions:
- Data source sharding
- Data table sharding
A Sharding-JDBC sharding strategy consists of a sharding key and a sharding algorithm.
- The sharding key is the database field used for sharding: the key field for splitting the database (table) horizontally. For example, if the order table is sharded by the mantissa of the order primary key modulo some number, the order primary key is the sharding field. If the SQL contains no sharding field, full routing is performed, with poor performance. Besides single sharding fields, ShardingSphere also supports sharding on multiple fields.
- Sharding algorithm is the calculation rule used to realize sharding.
Sharding-JDBC provides a variety of built-in sharding algorithms, falling into four types:
- Automatic sharding algorithm
- Standard sharding algorithm
- Compound sharding algorithm
- Hint sharding algorithm
Automatic sharding algorithm
An automatic sharding algorithm distributes data automatically according to the configured algorithm expression. Sharding-JDBC provides five automatic sharding algorithms:
- Modulo sharding algorithm
- Hash modulo sharding algorithm
- Range sharding algorithm based on sharding capacity
- Range sharding algorithm based on sharding boundary
- Automatic time slice algorithm
Modulo sharding algorithm
The most basic modulo algorithm: it takes the value of the sharding field modulo sharding-count to obtain the target shard.
ModShardingAlgorithm
# database-mod is a custom string name
spring.shardingsphere.rules.sharding.default-database-strategy.standard.sharding-algorithm-name=database-mod
# MOD is the modulo algorithm type
spring.shardingsphere.rules.sharding.sharding-algorithms.database-mod.type=MOD
# Number of shards
spring.shardingsphere.rules.sharding.sharding-algorithms.database-mod.props.sharding-count=2
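The MOD behavior can be mimicked with a one-line function (a sketch, not the ShardingSphere implementation): the target suffix is simply the sharding value modulo sharding-count. The t_order_N table name is assumed for illustration.

```java
public class ModShardingDemo {
    // shard suffix = sharding value % sharding-count
    public static String shard(long shardingValue, int shardingCount) {
        return "t_order_" + (shardingValue % shardingCount);
    }

    public static void main(String[] args) {
        System.out.println(shard(7, 2)); // t_order_1
        System.out.println(shard(8, 2)); // t_order_0
    }
}
```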
Hash modulo sharding algorithm
The same as the modulo algorithm; the only difference is that the sharding key is hashed first and the modulus is taken of the hash value.
HashModShardingAlgorithm
# database-hash-mod is a custom string name
spring.shardingsphere.rules.sharding.default-database-strategy.standard.sharding-algorithm-name=database-hash-mod
spring.shardingsphere.rules.sharding.sharding-algorithms.database-hash-mod.type=HASH_MOD
spring.shardingsphere.rules.sharding.sharding-algorithms.database-hash-mod.props.sharding-count=2
Range sharding algorithm based on sharding capacity
The sharding capacity range simply defines the value range that each shard holds. Suppose there is a requirement like the following; how do we configure it?

[0, 199]   saved in table 0
[200, 399] saved in table 1
[400, 599] saved in table 2
Build a t_order_volume_range table by referring to section 7.2.3, and use MyBatis-Plus to generate the related code, as shown in Figure 7-12.
Add the following configuration, and activate it with spring.profiles.active=volumn-range.
server.port=8080
spring.mvc.view.prefix=classpath:/templates/
spring.mvc.view.suffix=.html
spring.shardingsphere.datasource.names=ds-0
spring.shardingsphere.datasource.common.type=com.zaxxer.hikari.HikariDataSource
spring.shardingsphere.datasource.common.driver-class-name=com.mysql.jdbc.Driver
spring.shardingsphere.datasource.ds-0.username=root
spring.shardingsphere.datasource.ds-0.password=123456
spring.shardingsphere.datasource.ds-0.jdbc-url=jdbc:mysql://192.168.221.128:3306/shard01?serverTimezone=UTC&useSSL=false&useUnicode=true&characterEncoding=UTF-8
spring.shardingsphere.rules.sharding.tables.t_order_volume_range.actual-data-nodes=ds-0.t_order_volume_range_$->{0..2}
spring.shardingsphere.rules.sharding.tables.t_order_volume_range.table-strategy.standard.sharding-column=user_id
spring.shardingsphere.rules.sharding.tables.t_order_volume_range.table-strategy.standard.sharding-algorithm-name=t-order-volume-range
spring.shardingsphere.rules.sharding.tables.t_order_volume_range.key-generate-strategy.column=order_id
spring.shardingsphere.rules.sharding.tables.t_order_volume_range.key-generate-strategy.key-generator-name=snowflake
spring.shardingsphere.rules.sharding.sharding-algorithms.t-order-volume-range.type=VOLUME_RANGE
# Lower bound of the range: 0-200
spring.shardingsphere.rules.sharding.sharding-algorithms.t-order-volume-range.props.range-lower=200
# Upper bound of the range: 600; values above 600 raise an error
spring.shardingsphere.rules.sharding.sharding-algorithms.t-order-volume-range.props.range-upper=600
# Capacity of each table: 200
spring.shardingsphere.rules.sharding.sharding-algorithms.t-order-volume-range.props.sharding-volume=200
spring.shardingsphere.rules.sharding.key-generators.snowflake.type=SNOWFLAKE
spring.shardingsphere.rules.sharding.key-generators.snowflake.props.worker-id=123
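The mapping described in the requirement ([0,199] to table 0, [200,399] to table 1, [400,599] to table 2) can be sketched as below. This illustrates only the partitioning arithmetic, not the library's exact VOLUME_RANGE internals.

```java
public class VolumeRangeDemo {
    // Each table holds `volume` values (200 in the config above); values
    // outside [0, upper) are rejected, mirroring range-upper=600.
    public static String shard(long value, long volume, long upper) {
        if (value < 0 || value >= upper) {
            throw new IllegalArgumentException("value out of range: " + value);
        }
        return "t_order_volume_range_" + (value / volume);
    }

    public static void main(String[] args) {
        System.out.println(shard(150, 200, 600)); // t_order_volume_range_0
        System.out.println(shard(250, 200, 600)); // t_order_volume_range_1
        System.out.println(shard(599, 200, 600)); // t_order_volume_range_2
    }
}
```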
Range sharding algorithm based on sharding boundary
The sharding capacity range above is a balanced sharding method. For unbalanced scenarios, such as the following case:

[0, 1000)       save to table 0
[1000, 20000)   save to table 1
[20000, 300000) save to table 2
[300000, ∞)     save to table 3

we can use the range sharding algorithm based on sharding boundaries. The configuration is as follows.
BoundaryBasedRangeShardingAlgorithm
# BOUNDARY_RANGE is the sharding algorithm type
spring.shardingsphere.rules.sharding.sharding-algorithms.t-order-boundary-range.type=BOUNDARY_RANGE
# Range boundaries, comma-separated
spring.shardingsphere.rules.sharding.sharding-algorithms.t-order-boundary-range.props.sharding-ranges=1000,20000,300000
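The boundary lookup can be sketched as follows (illustration only, not the BOUNDARY_RANGE implementation): the boundaries 1000, 20000 and 300000 split the key space into four tables, and the first boundary greater than the value decides the target.

```java
public class BoundaryRangeDemo {
    // Boundaries from the example above; they divide the key space into tables 0..3
    private static final long[] BOUNDARIES = {1000L, 20000L, 300000L};

    public static int shard(long value) {
        int i = 0;
        while (i < BOUNDARIES.length && value >= BOUNDARIES[i]) {
            i++; // advance until the first boundary greater than the value
        }
        return i;
    }

    public static void main(String[] args) {
        System.out.println(shard(500));    // table 0
        System.out.println(shard(25000));  // table 2
        System.out.println(shard(400000)); // table 3
    }
}
```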
Automatic time slice algorithm
IntervalShardingAlgorithm
Shards data by time segment. Suppose you want to implement the following mapping:
[1970-01-01 23:59:59 ~ 2020-01-01 23:59:59) table 0
[2020-01-01 23:59:59 ~ 2021-01-01 23:59:59) table 1
[2021-01-01 23:59:59 ~ 2021-02-01 23:59:59) table 2
[2022-01-01 23:59:59 ~ 2024-01-01 23:59:59) table 3
The configuration below covers data in the time range 2010-01-01 to 2021-01-01, splitting the data into one table per year.
spring.shardingsphere.rules.sharding.tables.t_order_volume_range.actual-data-nodes=ds-0.t_order_volume_range_$->{0..2}
spring.shardingsphere.rules.sharding.tables.t_order_volume_range.table-strategy.standard.sharding-column=create_date
spring.shardingsphere.rules.sharding.tables.t_order_volume_range.table-strategy.standard.sharding-algorithm-name=t-order-auto-interval
spring.shardingsphere.rules.sharding.tables.t_order_volume_range.key-generate-strategy.column=order_id
spring.shardingsphere.rules.sharding.tables.t_order_volume_range.key-generate-strategy.key-generator-name=snowflake
spring.shardingsphere.rules.sharding.sharding-algorithms.t-order-auto-interval.type=AUTO_INTERVAL
# Lower bound of the time range, format: yyyy-MM-dd HH:mm:ss
spring.shardingsphere.rules.sharding.sharding-algorithms.t-order-auto-interval.props.datetime-lower=2010-01-01 23:59:59
# Upper bound of the time range, format: yyyy-MM-dd HH:mm:ss
spring.shardingsphere.rules.sharding.sharding-algorithms.t-order-auto-interval.props.datetime-upper=2021-01-01 23:59:59
# Maximum time carried by a single shard, in seconds; 31536000 is one year
spring.shardingsphere.rules.sharding.sharding-algorithms.t-order-auto-interval.props.sharding-seconds=31536000
Note that if you shard data by time segment, do not apply functions to the shard column in queries; otherwise the query results in full routing:
select * from t_order where to_date(create_date, 'yyyy-mm-dd') = ' '
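One way to keep the shard key visible to the router is to express day-equality as a half-open range on the raw column instead of wrapping it in a function. A sketch follows: the table and column names mirror the earlier example, and the string concatenation is for illustration only; a real query should use bind parameters.

```java
import java.time.LocalDate;
import java.time.LocalDateTime;

public class RangeQueryRewriteDemo {
    // Rewrites "to_date(create_date, ...) = day" into a range predicate on the
    // raw create_date column, which the sharding algorithm can route on.
    public static String rewrite(LocalDate day) {
        LocalDateTime start = day.atStartOfDay();
        LocalDateTime end = day.plusDays(1).atStartOfDay();
        return "select * from t_order where create_date >= '" + start
                + "' and create_date < '" + end + "'";
    }

    public static void main(String[] args) {
        System.out.println(rewrite(LocalDate.of(2021, 1, 1)));
    }
}
```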
Standard sharding algorithm
StandardShardingStrategy supports sub-database sub-table based on a single sharding key (field). Sharding-JDBC provides two algorithm implementations.
Row expression sharding algorithm
Type: INLINE
Uses Groovy expressions to support = and IN sharding in SQL statements. Only single sharding keys are supported. For simple sharding algorithms this avoids tedious Java code development: for example, t_user_$->{u_id % 8} means the t_user table is divided into eight tables by u_id modulo 8, named t_user_0 to t_user_7.
The configuration method is as follows.
spring.shardingsphere.rules.sharding.sharding-algorithms.database-inline.type=INLINE
spring.shardingsphere.rules.sharding.sharding-algorithms.database-inline.props.algorithm-expression=ds-$->{user_id % 2}
spring.shardingsphere.rules.sharding.sharding-algorithms.t-order-inline.type=INLINE
spring.shardingsphere.rules.sharding.sharding-algorithms.t-order-inline.props.algorithm-expression=t_order_$->{order_id % 2}
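What the two INLINE expressions above compute can be written out directly as plain Java (a sketch; the real algorithm evaluates the Groovy expression at runtime):

```java
public class InlineExpressionDemo {
    // ds-$->{user_id % 2}
    public static String dataSource(long userId) {
        return "ds-" + (userId % 2);
    }

    // t_order_$->{order_id % 2}
    public static String table(long orderId) {
        return "t_order_" + (orderId % 2);
    }

    public static void main(String[] args) {
        System.out.println(dataSource(7)); // ds-1
        System.out.println(table(10));     // t_order_0
    }
}
```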
Time range sharding algorithm
Similar to the automatic time segment sharding algorithm above.
Type: INTERVAL
Configurable properties:

| Property | Data type | Description | Default |
| --- | --- | --- | --- |
| datetime-pattern | String | Timestamp format of the shard key; must follow the Java DateTimeFormatter format, e.g. yyyy-MM-dd HH:mm:ss | - |
| datetime-lower | String | Lower bound of the time shard, in the format defined by datetime-pattern | - |
| datetime-upper (?) | String | Upper bound of the time shard, in the format defined by datetime-pattern | current time |
| sharding-suffix-pattern | String | Suffix format of the shard data source or real table; must follow the Java DateTimeFormatter format and be consistent with datetime-interval-unit, e.g. yyyyMM | - |
| datetime-interval-amount (?) | int | Shard key interval; after exceeding it, the next shard is entered | 1 |
| datetime-interval-unit (?) | String | Unit of the shard key time interval; must follow the Java ChronoUnit enumeration, e.g. MONTHS | - |
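How these properties fit together can be sketched with the JDK time API (assumed values: sharding-suffix-pattern of yyyyMM, datetime-interval-unit of MONTHS, datetime-interval-amount of 1; this is not ShardingSphere code):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.temporal.ChronoUnit;

public class IntervalSuffixDemo {
    // Formats the shard key with sharding-suffix-pattern to get the table suffix
    public static String tableSuffix(LocalDateTime value) {
        return value.format(DateTimeFormatter.ofPattern("yyyyMM"));
    }

    // Steps datetime-interval-amount units forward to reach the next shard
    public static LocalDateTime nextShard(LocalDateTime lower, int amount) {
        return lower.plus(amount, ChronoUnit.MONTHS);
    }

    public static void main(String[] args) {
        LocalDateTime t = LocalDateTime.of(2021, 1, 15, 0, 0);
        System.out.println(tableSuffix(t));               // 202101
        System.out.println(tableSuffix(nextShard(t, 1))); // 202102
    }
}
```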
Compound sharding algorithm
Usage scenario: SQL statements contain operators such as >, >=, <=, <, =, IN and BETWEEN AND. The compound sharding strategy supports operations on multiple sharding keys.
Sharding-JDBC has a built-in compound sharding algorithm implementation.
Type: COMPLEX_INLINE, implementation class: ComplexInlineShardingAlgorithm
| Property | Data type | Description | Default |
| --- | --- | --- | --- |
| sharding-columns (?) | String | Shard column names, separated by commas; validation fails if they do not match the configuration | - |
| algorithm-expression | String | Row expression of the sharding algorithm | - |
| allow-range-query-with-inline-sharding (?) | boolean | Whether range queries are allowed; note that range queries ignore the sharding policy and perform full routing | false |
This implementation has not yet shipped in the current release (it is already available in the GitHub repository), so if you want to use a complex sharding algorithm now, you need to implement it yourself.
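For reference, once that implementation is available, a COMPLEX_INLINE configuration would look roughly like the following. All table, column, and algorithm names here are hypothetical:

```properties
# Hypothetical composite sharding on user_id and order_id
spring.shardingsphere.rules.sharding.tables.t_order_complex.table-strategy.complex.sharding-columns=user_id,order_id
spring.shardingsphere.rules.sharding.tables.t_order_complex.table-strategy.complex.sharding-algorithm-name=t-order-complex-inline
spring.shardingsphere.rules.sharding.sharding-algorithms.t-order-complex-inline.type=COMPLEX_INLINE
spring.shardingsphere.rules.sharding.sharding-algorithms.t-order-complex-inline.props.sharding-columns=user_id,order_id
spring.shardingsphere.rules.sharding.sharding-algorithms.t-order-complex-inline.props.algorithm-expression=t_order_complex_$->{(user_id + order_id) % 2}
```

The row expression may reference all of the declared sharding columns, which is what distinguishes it from the single-column INLINE algorithm.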
Custom sharding algorithm
In addition to the built-in sharding algorithms, we can customize sharding algorithms according to actual requirements. Sharding-JDBC provides several types of extension points:
- Standard sharding strategy
- Complex sharding strategy
- Hint sharding strategy
- None sharding strategy
The sharding strategy interface is defined as follows. It has four implementations corresponding to the four strategies above, and you can extend a custom sharding strategy based on any of them.
```java
public interface ShardingStrategy {

    Collection<String> getShardingColumns();

    ShardingAlgorithm getShardingAlgorithm();

    Collection<String> doSharding(Collection<String> availableTargetNames,
                                  Collection<ShardingConditionValue> shardingConditionValues,
                                  ConfigurationProperties props);
}
```
Custom standard sharding algorithm
```java
public class StandardModTableShardAlgorithm implements StandardShardingAlgorithm<Long> {

    private Properties props = new Properties();

    /**
     * Handles sharding for the = and IN operators.
     * @param collection the collection of candidate target names
     * @param preciseShardingValue the sharding value and logical table information
     * @return the matched target name
     */
    @Override
    public String doSharding(Collection<String> collection, PreciseShardingValue<Long> preciseShardingValue) {
        for (String name : collection) {
            // Take order_id modulo 4 to locate the target table
            if (name.endsWith(String.valueOf(preciseShardingValue.getValue() % 4))) {
                return name;
            }
        }
        throw new UnsupportedOperationException();
    }

    /**
     * Handles range sharding. Without a RangeShardingAlgorithm,
     * BETWEEN AND in SQL would be routed to all targets.
     * @param collection the collection of candidate target names
     * @param rangeShardingValue the range of sharding values
     * @return the matched target names
     */
    @Override
    public Collection<String> doSharding(Collection<String> collection, RangeShardingValue<Long> rangeShardingValue) {
        Collection<String> result = new LinkedHashSet<>(collection.size());
        for (Long i = rangeShardingValue.getValueRange().lowerEndpoint(); i <= rangeShardingValue.getValueRange().upperEndpoint(); i++) {
            for (String name : collection) {
                if (name.endsWith(String.valueOf(i % 4))) {
                    result.add(name);
                }
            }
        }
        return result;
    }

    @Override
    public void init() {
    }

    /**
     * The type used in the sharding-algorithms configuration.
     */
    @Override
    public String getType() {
        return "STANDARD_MOD";
    }

    @Override
    public Properties getProps() {
        return this.props;
    }

    @Override
    public void setProps(Properties props) {
        this.props = props;
    }
}
```
Extend it through the SPI mechanism:

- Create a `META-INF/services/org.apache.shardingsphere.sharding.spi.ShardingAlgorithm` file in the resources directory.
- Write the fully qualified class name of the custom standard sharding algorithm into that file:

```
com.gupao.sharding.example.StandardModTableShardAlgorithm
```
Add the application-custom-standard.properties file
```properties
spring.shardingsphere.rules.sharding.tables.t_order_standard.actual-data-nodes=ds-0.t_order_standard_$->{0..3}
spring.shardingsphere.rules.sharding.tables.t_order_standard.table-strategy.standard.sharding-column=order_id
spring.shardingsphere.rules.sharding.tables.t_order_standard.table-strategy.standard.sharding-algorithm-name=standard-mod
spring.shardingsphere.rules.sharding.tables.t_order_standard.key-generate-strategy.column=order_id
spring.shardingsphere.rules.sharding.tables.t_order_standard.key-generate-strategy.key-generator-name=snowflake
spring.shardingsphere.rules.sharding.sharding-algorithms.standard-mod.type=STANDARD_MOD
spring.shardingsphere.rules.sharding.sharding-algorithms.standard-mod.props.algorithm-class-name=com.gupao.sharding.example.StandardModTableShardAlgorithm
```
STANDARD_MOD is the type of our custom sharding algorithm.
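The core of the custom algorithm's precise-sharding logic — matching a target table by the shard value modulo 4 — can be exercised on its own. The sketch below reimplements that selection without any ShardingSphere dependency; the class and table names are illustrative only:

```java
import java.util.Arrays;
import java.util.Collection;

public class ModShardDemo {

    // Mirrors the precise-sharding branch of StandardModTableShardAlgorithm:
    // pick the target whose name ends with (value % 4).
    public static String route(Collection<String> targets, long shardValue) {
        for (String name : targets) {
            if (name.endsWith(String.valueOf(shardValue % 4))) {
                return name;
            }
        }
        throw new UnsupportedOperationException("no matching target");
    }

    public static void main(String[] args) {
        Collection<String> tables = Arrays.asList(
                "t_order_standard_0", "t_order_standard_1",
                "t_order_standard_2", "t_order_standard_3");
        // order_id 7 -> 7 % 4 = 3 -> t_order_standard_3
        System.out.println(route(tables, 7L));
    }
}
```

This also makes the limitation of suffix matching visible: with more than ten shards, a table ending in `13` would also match the suffix `3`, which is why real implementations usually compare the full computed suffix rather than use `endsWith` alone.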
Table and code generation
Copy the t_order table as t_order_standard and generate the business code with MyBatis-Plus.
See the sharding-jdbc-springboot-example project for the code.
Distributed sequence algorithm
Sharding-JDBC provides two distributed sequence algorithms by default:

- UUID
- Snowflake algorithm

Both have been covered earlier, so they will not be repeated here.

The distributed sequence algorithm guarantees that keys remain globally unique after horizontal sharding. The snowflake algorithm is defined as follows.
Type: SNOWFLAKE
Configurable properties:
Attribute name | Data type | Description | Default value |
---|---|---|---|
worker-id (?) | long | Unique identifier of the working machine | 0 |
max-vibration-offset (?) | int | Maximum jitter upper limit, range [0, 4096]. Note: if the generated values are used as sharding values, this property should be configured; otherwise keys generated in different milliseconds are always 0 or 1 after modulo 2^n (2^n is usually the number of databases or tables) | 1 |
max-tolerate-time-difference-milliseconds (?) | long | Maximum tolerated clock rollback, in milliseconds | |
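To make the properties above concrete, here is a minimal sketch of the classic snowflake bit layout (41-bit timestamp, 10-bit worker id, 12-bit per-millisecond sequence). It is a simplified illustration, not ShardingSphere's actual implementation; in particular the epoch constant and the busy-wait on sequence exhaustion are assumptions of this sketch, and it does not handle clock rollback:

```java
// Simplified snowflake ID sketch: 41-bit timestamp | 10-bit worker | 12-bit sequence
public class SnowflakeSketch {

    private static final long EPOCH = 1609459200000L; // assumed custom epoch: 2021-01-01 UTC

    private final long workerId;   // must fit in 10 bits: [0, 1024)
    private long sequence = 0L;
    private long lastMillis = -1L;

    public SnowflakeSketch(long workerId) {
        if (workerId < 0 || workerId >= 1024) {
            throw new IllegalArgumentException("workerId out of range [0, 1024)");
        }
        this.workerId = workerId;
    }

    public synchronized long nextId() {
        long now = System.currentTimeMillis();
        if (now == lastMillis) {
            // Same millisecond: bump the 12-bit sequence
            sequence = (sequence + 1) & 0xFFF;
            if (sequence == 0) {
                // Sequence exhausted within this millisecond: spin until the clock advances
                while ((now = System.currentTimeMillis()) <= lastMillis) { }
            }
        } else {
            sequence = 0;
        }
        lastMillis = now;
        return ((now - EPOCH) << 22) | (workerId << 12) | sequence;
    }
}
```

This layout also shows why max-vibration-offset matters: in different milliseconds the sequence starts from 0 every time, so without jitter, ids modulo a power of two would cluster on a few shard values.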