Preface

As we all know, JDBC database operations revolve around a few core classes: DataSource, Connection, Statement, and ResultSet.

This chapter focuses on the creation of ShardingDataSource.

  • How are multiple data sources and a ShardingRuleConfiguration converted into the runtime DataSource?
  • What is the difference between a sharded data source and a normal data source?

First, ShardingDataSourceFactory

org.apache.shardingsphere.shardingjdbc.api.ShardingDataSourceFactory is used to create a ShardingDataSource. In Sharding-JDBC, everything under an api package is ultimately what gets exposed to the user.

  • dataSourceMap: mapping from data source names to data sources.
  • shardingRuleConfig: the sharding rule configuration.
  • props: properties, for example sql.show=true.
public final class ShardingDataSourceFactory {
    public static DataSource createDataSource(
            final Map<String, DataSource> dataSourceMap, final ShardingRuleConfiguration shardingRuleConfig, final Properties props) throws SQLException {
        return new ShardingDataSource(dataSourceMap, new ShardingRule(shardingRuleConfig, dataSourceMap.keySet()), props);
    }
}

Inside the factory method, a ShardingRule is first constructed, converting the ShardingRuleConfiguration into a ShardingRule. We can skip the details of this step: it parses the configuration (see Chapter 1) into a runtime object. Just remember that everything we configured can eventually be found in the ShardingRule.

public class ShardingRule implements BaseRule {
    private final ShardingRuleConfiguration ruleConfiguration;
    private final ShardingDataSourceNames shardingDataSourceNames;
    private final Collection<TableRule> tableRules;
    private final Collection<BindingTableRule> bindingTableRules;
    private final Collection<String> broadcastTables;
    private final ShardingStrategy defaultDatabaseShardingStrategy;
    private final ShardingStrategy defaultTableShardingStrategy;
    private final ShardingKeyGenerator defaultShardingKeyGenerator;
    private final Collection<MasterSlaveRule> masterSlaveRules;
    private final EncryptRule encryptRule;
    public ShardingRule(final ShardingRuleConfiguration shardingRuleConfig, final Collection<String> dataSourceNames) {
        Preconditions.checkArgument(null != shardingRuleConfig, "ShardingRuleConfig cannot be null.");
        Preconditions.checkArgument(null != dataSourceNames && !dataSourceNames.isEmpty(), "Data sources cannot be empty.");
        this.ruleConfiguration = shardingRuleConfig;
        shardingDataSourceNames = new ShardingDataSourceNames(shardingRuleConfig, dataSourceNames);
        tableRules = createTableRules(shardingRuleConfig);
        broadcastTables = shardingRuleConfig.getBroadcastTables();
        bindingTableRules = createBindingTableRules(shardingRuleConfig.getBindingTableGroups());
        defaultDatabaseShardingStrategy = createDefaultShardingStrategy(shardingRuleConfig.getDefaultDatabaseShardingStrategyConfig());
        defaultTableShardingStrategy = createDefaultShardingStrategy(shardingRuleConfig.getDefaultTableShardingStrategyConfig());
        defaultShardingKeyGenerator = createDefaultKeyGenerator(shardingRuleConfig.getDefaultKeyGeneratorConfig());
        masterSlaveRules = createMasterSlaveRules(shardingRuleConfig.getMasterSlaveRuleConfigs());
        encryptRule = createEncryptRule(shardingRuleConfig.getEncryptRuleConfig());
    }
}

Finally, the ShardingDataSource constructor is called to create the data source.

Second, ShardingDataSource

We skip the three interfaces in the java.sql package, which were already introduced in the HikariCP chapter.

1, WrapperAdapter

In Sharding-JDBC, basically all database-driver-related classes extend WrapperAdapter.

First, WrapperAdapter implements the java.sql.Wrapper interface, providing implementations of the isWrapperFor and unwrap methods.

public abstract class WrapperAdapter implements Wrapper {
    @Override
    public final <T> T unwrap(final Class<T> iface) throws SQLException {
        // Check whether the current object is an instance of iface
        if (isWrapperFor(iface)) {
            return (T) this;
        }
        throw new SQLException(String.format("[%s] cannot be unwrapped as [%s]", getClass().getName(), iface.getName()));
    }
    @Override
    public final boolean isWrapperFor(final Class<?> iface) {
        return iface.isInstance(this);
    }
}
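The unwrap/isWrapperFor contract is easy to see in isolation. Below is a minimal, self-contained sketch (MiniWrapperAdapter and WrapperDemo are invented names) that mirrors WrapperAdapter's logic:

```java
import java.sql.SQLException;
import java.sql.Wrapper;

// Stand-in mirroring WrapperAdapter's unwrap/isWrapperFor logic (class names invented)
abstract class MiniWrapperAdapter implements Wrapper {

    @SuppressWarnings("unchecked")
    @Override
    public final <T> T unwrap(final Class<T> iface) throws SQLException {
        if (isWrapperFor(iface)) {
            return (T) this;
        }
        throw new SQLException(String.format("[%s] cannot be unwrapped as [%s]", getClass().getName(), iface.getName()));
    }

    @Override
    public final boolean isWrapperFor(final Class<?> iface) {
        return iface.isInstance(this);
    }
}

public class WrapperDemo extends MiniWrapperAdapter {

    public static void main(String[] args) {
        WrapperDemo demo = new WrapperDemo();
        try {
            // Unwrapping to an implemented interface returns the object itself
            if (demo.unwrap(Wrapper.class) != demo) {
                throw new AssertionError("unwrap should return this");
            }
        } catch (SQLException e) {
            throw new AssertionError(e);
        }
        // Not a Runnable, so isWrapperFor is false; unwrap(Runnable.class) would throw SQLException
        System.out.println(demo.isWrapperFor(Runnable.class)); // false
    }
}
```

Calling unwrap with a type the object is not an instance of throws the formatted SQLException, exactly as in WrapperAdapter.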

Second, WrapperAdapter extends two public methods.

  • recordMethodInvocation: records a method invocation into the jdbcMethodInvocations collection.
  • replayMethodsInvocation: replays all method invocations recorded in the jdbcMethodInvocations collection against a target object.
public abstract class WrapperAdapter implements Wrapper {
    private final Collection<JdbcMethodInvocation> jdbcMethodInvocations = new ArrayList<>();
    @SneakyThrows
    public final void recordMethodInvocation(final Class<?> targetClass, final String methodName, final Class<?>[] argumentTypes, final Object[] arguments) {
        jdbcMethodInvocations.add(new JdbcMethodInvocation(targetClass.getMethod(methodName, argumentTypes), arguments));
    }
    public final void replayMethodsInvocation(final Object target) {
        for (JdbcMethodInvocation each : jdbcMethodInvocations) {
            each.invoke(target);
        }
    }
}

Why do most of Sharding-JDBC's java.sql-driver-related implementation classes extend WrapperAdapter?

For ShardingDataSource this is understandable: an implementation of java.sql.DataSource must also implement java.sql.Wrapper, and WrapperAdapter provides the default implementation. But implementing java.sql.Wrapper alone does not explain why Connection and the other classes extend it as well.

Besides, what are recordMethodInvocation and replayMethodsInvocation for?

With traditional JDBC, opening a transaction (setAutoCommit(false)) takes effect on the connection immediately. The Sharding-JDBC process is different:

The Connection implementation that Sharding-JDBC exposes to the user is ShardingConnection. When the user calls setAutoCommit(false), Sharding-JDBC does not yet know which actual data sources will be involved, so it can only record the call. Only when the user later calls connection.prepareStatement("xxx") and the SQL is parsed does Sharding-JDBC know which actual data sources are hit; it then obtains the actual connections and replays setAutoCommit on each of them.
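The deferred setAutoCommit flow can be reproduced with nothing but reflection. The sketch below is a simplified stand-in (FakeConnection and the other names are invented) for what WrapperAdapter, JdbcMethodInvocation, and ShardingConnection do together:

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

// Simplified record-and-replay sketch; the real classes are WrapperAdapter,
// JdbcMethodInvocation, and java.sql.Connection
public class RecordReplayDemo {

    // Recorded method + arguments, a stand-in for JdbcMethodInvocation
    static final class Invocation {
        final Method method;
        final Object[] arguments;

        Invocation(Method method, Object[] arguments) {
            this.method = method;
            this.arguments = arguments;
        }
    }

    // Fake "actual" connection standing in for a real JDBC Connection
    public static class FakeConnection {
        public boolean autoCommit = true;

        public void setAutoCommit(boolean autoCommit) {
            this.autoCommit = autoCommit;
        }
    }

    private final List<Invocation> recorded = new ArrayList<>();

    public void record(String methodName, Class<?>[] argTypes, Object[] args) throws Exception {
        recorded.add(new Invocation(FakeConnection.class.getMethod(methodName, argTypes), args));
    }

    public void replay(FakeConnection target) throws Exception {
        for (Invocation each : recorded) {
            each.method.invoke(target, each.arguments);
        }
    }

    public static void main(String[] args) {
        try {
            RecordReplayDemo sharding = new RecordReplayDemo();
            // User calls setAutoCommit(false) before the actual data sources are known: only record it
            sharding.record("setAutoCommit", new Class<?>[]{boolean.class}, new Object[]{false});
            // After SQL parsing determines the actual connections, replay the recorded calls on each
            FakeConnection actual = new FakeConnection();
            sharding.replay(actual);
            if (actual.autoCommit) {
                throw new AssertionError("replay should have disabled autoCommit");
            }
            System.out.println("autoCommit after replay: " + actual.autoCommit); // false
        } catch (Exception e) {
            throw new AssertionError(e);
        }
    }
}
```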

2, AbstractUnsupportedOperationDataSource

AbstractUnsupportedOperationDataSource provides default implementations for two methods of the DataSource interface.

public abstract class AbstractUnsupportedOperationDataSource extends WrapperAdapter implements DataSource {
    @Override
    public final int getLoginTimeout() throws SQLException {
        throw new SQLFeatureNotSupportedException("unsupported getLoginTimeout()");
    }
    @Override
    public final void setLoginTimeout(final int seconds) throws SQLException {
        throw new SQLFeatureNotSupportedException("unsupported setLoginTimeout(int seconds)");
    }
}

All classes in the org.apache.shardingsphere.shardingjdbc.jdbc.unsupported package follow the same pattern as AbstractUnsupportedOperationDataSource: since Sharding-JDBC does not implement some methods of the java.sql interfaces, it provides an abstract AbstractUnsupportedOperationXXX class for each. The purpose is to spare every concrete implementation class from re-declaring these unsupported methods; they simply throw SQLFeatureNotSupportedException.
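The pattern itself is thin; here is a self-contained sketch (the Mini* names are invented) of how such an abstract layer spares subclasses from re-declaring unsupported methods:

```java
import java.sql.SQLFeatureNotSupportedException;

// Illustrative sketch of the "unsupported operation" adapter pattern (names invented):
// the abstract layer throws for unsupported methods, so concrete subclasses
// only implement what they actually support.
public class UnsupportedAdapterDemo {

    interface MiniDataSource {
        int getLoginTimeout() throws SQLFeatureNotSupportedException;
    }

    abstract static class AbstractUnsupportedMiniDataSource implements MiniDataSource {
        @Override
        public final int getLoginTimeout() throws SQLFeatureNotSupportedException {
            throw new SQLFeatureNotSupportedException("unsupported getLoginTimeout()");
        }
    }

    // Concrete subclass inherits the throwing default and adds nothing
    static final class MiniShardingDataSource extends AbstractUnsupportedMiniDataSource {
    }

    public static void main(String[] args) {
        MiniDataSource ds = new MiniShardingDataSource();
        try {
            ds.getLoginTimeout();
            throw new AssertionError("expected SQLFeatureNotSupportedException");
        } catch (SQLFeatureNotSupportedException expected) {
            System.out.println("threw: " + expected.getMessage());
        }
    }
}
```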

3, AbstractDataSourceAdapter

AbstractDataSourceAdapter is the DataSource adaptation layer.

  • Provides the getLogWriter/setLogWriter implementations.
  • Provides getters for dataSourceMap (the multiple data sources) and databaseType.
  • Provides a default implementation for getConnection(username, password), which simply delegates to the no-arg getConnection().
@Getter
public abstract class AbstractDataSourceAdapter extends AbstractUnsupportedOperationDataSource implements AutoCloseable {
    private final Map<String, DataSource> dataSourceMap;
    private final DatabaseType databaseType;
    @Setter
    private PrintWriter logWriter = new PrintWriter(System.out);
    public AbstractDataSourceAdapter(final Map<String, DataSource> dataSourceMap) throws SQLException {
        this.dataSourceMap = dataSourceMap;
        databaseType = createDatabaseType();
    }
    @Override
    public final Connection getConnection(final String username, final String password) throws SQLException {
        return getConnection();
    }
}

In addition, it implements the AutoCloseable interface; the close method closes the underlying data sources.

public abstract class AbstractDataSourceAdapter extends AbstractUnsupportedOperationDataSource implements AutoCloseable {
    @Override
    public final void close() throws Exception {
        close(dataSourceMap.keySet());
    }
    public void close(final Collection<String> dataSourceNames) throws Exception {
        for (String each : dataSourceNames) {
            close(dataSourceMap.get(each));
        }
        getRuntimeContext().close();
    }
    
    private void close(final DataSource dataSource) {
        try {
            Method method = dataSource.getClass().getDeclaredMethod("close");
            method.setAccessible(true);
            method.invoke(dataSource);
        } catch (final ReflectiveOperationException ignored) {
        }
    }
    
    protected abstract RuntimeContext getRuntimeContext();
}
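The reflective close deserves a note: javax.sql.DataSource declares no close() method, so the adapter looks one up at runtime, which works for pools such as HikariDataSource that do declare it. A minimal sketch of the same trick, with invented names:

```java
import java.lang.reflect.Method;

public class ReflectiveCloseDemo {

    // Stand-in for a pooled DataSource such as HikariDataSource, which has close()
    // even though javax.sql.DataSource declares no such method
    public static class FakePooledDataSource {
        public boolean closed;

        public void close() {
            closed = true;
        }
    }

    // Same approach as AbstractDataSourceAdapter#close(DataSource):
    // look up close() reflectively and quietly skip data sources that lack it
    public static void closeQuietly(Object dataSource) {
        try {
            Method method = dataSource.getClass().getDeclaredMethod("close");
            method.setAccessible(true);
            method.invoke(dataSource);
        } catch (ReflectiveOperationException ignored) {
            // the data source has no close() method: nothing to do
        }
    }

    public static void main(String[] args) {
        FakePooledDataSource ds = new FakePooledDataSource();
        closeQuietly(ds);
        if (!ds.closed) {
            throw new AssertionError("close() should have been invoked reflectively");
        }
        System.out.println(ds.closed); // true
        // Object has no close(); NoSuchMethodException is caught and ignored
        closeQuietly(new Object());
    }
}
```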

In addition, subclasses need to implement the getRuntimeContext method to supply their RuntimeContext.

4, ShardingDataSource

@Getter
public class ShardingDataSource extends AbstractDataSourceAdapter {
    
    private final ShardingRuntimeContext runtimeContext;
    
    static {
        NewInstanceServiceLoader.register(RouteDecorator.class);
        NewInstanceServiceLoader.register(SQLRewriteContextDecorator.class);
        NewInstanceServiceLoader.register(ResultProcessEngine.class);
    }
    
    public ShardingDataSource(final Map<String, DataSource> dataSourceMap, final ShardingRule shardingRule, final Properties props) throws SQLException {
        super(dataSourceMap);
        checkDataSourceType(dataSourceMap);
        runtimeContext = new ShardingRuntimeContext(dataSourceMap, shardingRule, props, getDatabaseType());
    }
    
    private void checkDataSourceType(final Map<String, DataSource> dataSourceMap) {
        for (DataSource each : dataSourceMap.values()) {
            Preconditions.checkArgument(!(each instanceof MasterSlaveDataSource), "Initialized data sources can not be master-slave data sources.");
        }
    }
    
    @Override
    public final ShardingConnection getConnection() {
        return new ShardingConnection(getDataSourceMap(), runtimeContext, TransactionTypeHolder.get());
    }
}

Using the JDK SPI

The static block, via the JDK SPI, registers the implementation classes of three interfaces — RouteDecorator (routing), SQLRewriteContextDecorator (SQL rewriting), and ResultProcessEngine (result processing) — into NewInstanceServiceLoader#SERVICE_MAP.

public final class NewInstanceServiceLoader {
    private static final Map<Class, Collection<Class<?>>> SERVICE_MAP = new HashMap<>();
    public static <T> void register(final Class<T> service) {
        for (T each : ServiceLoader.load(service)) {
            registerServiceClass(service, each);
        }
    }
    private static <T> void registerServiceClass(final Class<T> service, final T instance) {
        Collection<Class<?>> serviceClasses = SERVICE_MAP.get(service);
        if (null == serviceClasses) {
            serviceClasses = new LinkedHashSet<>();
        }
        serviceClasses.add(instance.getClass());
        SERVICE_MAP.put(service, serviceClasses);
    }
}

Note that SERVICE_MAP does not hold global singleton instances of the implementation classes, but rather their Class objects. The implementation classes brought in through the SPI mechanism are not singletons: each call to NewInstanceServiceLoader#newServiceInstances creates fresh instances of all implementation classes, as the Javadoc of NewInstanceServiceLoader says:

SPI service loader for new instance for every call.

public static <T> Collection<T> newServiceInstances(final Class<T> service) {
    Collection<T> result = new LinkedList<>();
    if (null == SERVICE_MAP.get(service)) {
        return result;
    }
    for (Class<?> each : SERVICE_MAP.get(service)) {
        result.add((T) each.newInstance());
    }
    return result;
}
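Since ServiceLoader needs META-INF/services provider files, the self-contained sketch below registers classes by hand, but it preserves the key behavior: the map stores Class objects, and every newServiceInstances call instantiates them afresh (all names here are invented):

```java
import java.util.Collection;
import java.util.HashMap;
import java.util.LinkedHashSet;
import java.util.LinkedList;
import java.util.Map;

public class NewInstanceRegistryDemo {

    // Class-object registry, as in NewInstanceServiceLoader#SERVICE_MAP
    private static final Map<Class<?>, Collection<Class<?>>> SERVICE_MAP = new HashMap<>();

    // Simplified register: the real loader discovers implementations via ServiceLoader.load(service)
    public static <T> void register(Class<T> service, Class<? extends T> implementation) {
        SERVICE_MAP.computeIfAbsent(service, key -> new LinkedHashSet<>()).add(implementation);
    }

    @SuppressWarnings("unchecked")
    public static <T> Collection<T> newServiceInstances(Class<T> service) throws ReflectiveOperationException {
        Collection<T> result = new LinkedList<>();
        if (null == SERVICE_MAP.get(service)) {
            return result;
        }
        for (Class<?> each : SERVICE_MAP.get(service)) {
            result.add((T) each.getDeclaredConstructor().newInstance());
        }
        return result;
    }

    public interface Decorator { }

    public static class RouteDecoratorImpl implements Decorator { }

    public static void main(String[] args) {
        try {
            register(Decorator.class, RouteDecoratorImpl.class);
            Object first = newServiceInstances(Decorator.class).iterator().next();
            Object second = newServiceInstances(Decorator.class).iterator().next();
            if (first == second) {
                throw new AssertionError("expected a fresh instance per call");
            }
            System.out.println("fresh instance per call: " + (first != second)); // true
        } catch (ReflectiveOperationException e) {
            throw new AssertionError(e);
        }
    }
}
```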

The constructor

private final ShardingRuntimeContext runtimeContext;

public ShardingDataSource(final Map<String, DataSource> dataSourceMap, final ShardingRule shardingRule, final Properties props) throws SQLException {
    super(dataSourceMap);
    checkDataSourceType(dataSourceMap);
    runtimeContext = new ShardingRuntimeContext(dataSourceMap, shardingRule, props, getDatabaseType());
}

private void checkDataSourceType(final Map<String, DataSource> dataSourceMap) {
    for (DataSource each : dataSourceMap.values()) {
        Preconditions.checkArgument(!(each instanceof MasterSlaveDataSource), "Initialized data sources can not be master-slave data sources.");
    }
}
  • The constructor passes dataSourceMap to the parent class constructor.
  • checkDataSourceType verifies that none of the incoming DataSources is a MasterSlaveDataSource.
  • It constructs a ShardingRuntimeContext, providing the getRuntimeContext implementation.

Core method getConnection implementation

The getConnection method simply returns a new ShardingConnection to the user.

@Override
public final ShardingConnection getConnection() {
    return new ShardingConnection(getDataSourceMap(), runtimeContext, TransactionTypeHolder.get());
}

Third, ShardingRuntimeContext

ShardingRuntimeContext is the Sharding-JDBC runtime context object; it contains all the information needed at runtime.

1, RuntimeContext

RuntimeContext is the abstract interface for the Sharding-JDBC runtime context.

public interface RuntimeContext<T extends BaseRule> extends AutoCloseable {

    T getRule();

    ConfigurationProperties getProperties();
    
    DatabaseType getDatabaseType();
    
    ExecutorEngine getExecutorEngine();
    
    SQLParserEngine getSqlParserEngine();
}
  • getRule: gets the BaseRule; the most commonly used is ShardingRule, and the whole RuntimeContext is built around a particular BaseRule.
  • getProperties: gets the configuration, for example sql.show=true.
  • getDatabaseType: gets the DatabaseType, i.e. the data source type such as MySQL or Oracle.
  • getExecutorEngine: gets the execution engine; ExecutorEngine is its only implementation.
  • getSqlParserEngine: gets the SQL parsing engine; SQLParserEngine is its only implementation.

2, AbstractRuntimeContext

@Getter
public abstract class AbstractRuntimeContext<T extends BaseRule> implements RuntimeContext<T> {
    private final T rule;
    private final ConfigurationProperties properties;
    private final DatabaseType databaseType;
    private final ExecutorEngine executorEngine;
    private final SQLParserEngine sqlParserEngine;
    protected AbstractRuntimeContext(final T rule, final Properties props, final DatabaseType databaseType) {
        this.rule = rule;
        properties = new ConfigurationProperties(null == props ? new Properties() : props);
        this.databaseType = databaseType;
        executorEngine = new ExecutorEngine(properties.<Integer>getValue(ConfigurationPropertyKey.EXECUTOR_SIZE));
        sqlParserEngine = SQLParserEngineFactory.getSQLParserEngine(DatabaseTypes.getTrunkDatabaseTypeName(databaseType));
    }
    protected abstract ShardingSphereMetaData getMetaData();
    @Override
    public void close() throws Exception {
        executorEngine.close();
    }
}

AbstractRuntimeContext implements all RuntimeContext abstract methods.

The AbstractRuntimeContext constructor requires subclasses to provide BaseRule, Properties, and DatabaseType and creates ExecutorEngine and SQLParserEngine.

AbstractRuntimeContext also requires subclasses to implement getMetaData, which returns the ShardingSphereMetaData.

3, MultipleDataSourcesRuntimeContext

ShardingSphereMetaData

@RequiredArgsConstructor
@Getter
public final class ShardingSphereMetaData {
    
    private final DataSourceMetas dataSources;
    
    private final SchemaMetaData schema;
}

ShardingSphereMetaData encapsulates two member variables:

  • DataSourceMetas: manages all DataSourceMetaData, which stores host, port, catalog, and schema.
public final class DataSourceMetas {
    private final Map<String, DataSourceMetaData> dataSourceMetaDataMap;
}

public interface DataSourceMetaData {
    String getHostName();
    int getPort();
    // For MySQL, the catalog is the database name
    String getCatalog();
    // For MySQL, the schema is null
    String getSchema();
}
  • SchemaMetaData: Manages all table metadata.
public final class SchemaMetaData {
    // logical table name -> TableMetaData
    private final Map<String, TableMetaData> tables;
}

public final class TableMetaData {
    private final Map<String, ColumnMetaData> columns;
    private final Map<String, IndexMetaData> indexes;
    private final List<String> columnNames = new ArrayList<>();
    private final List<String> primaryKeyColumns = new ArrayList<>();
}
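A toy version of these nested maps (table and column names invented) makes the lookup path concrete — logical table name to table metadata to columns:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SchemaMetaDataDemo {

    // Toy TableMetaData: just column names and primary key columns
    public static class MiniTableMetaData {
        final List<String> columnNames;
        final List<String> primaryKeyColumns;

        MiniTableMetaData(List<String> columnNames, List<String> primaryKeyColumns) {
            this.columnNames = columnNames;
            this.primaryKeyColumns = primaryKeyColumns;
        }
    }

    public static void main(String[] args) {
        // logical table name -> table metadata, as in SchemaMetaData#tables
        Map<String, MiniTableMetaData> tables = new HashMap<>();
        tables.put("t_order", new MiniTableMetaData(
                Arrays.asList("order_id", "user_id", "status"),
                Arrays.asList("order_id")));
        // Routing and rewriting consult the map by *logical* name;
        // physical tables such as t_order_0 / t_order_1 share this metadata
        MiniTableMetaData meta = tables.get("t_order");
        if (!meta.primaryKeyColumns.contains("order_id")) {
            throw new AssertionError("expected order_id as primary key");
        }
        System.out.println(meta.columnNames); // [order_id, user_id, status]
    }
}
```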

MultipleDataSourcesRuntimeContext

On top of AbstractRuntimeContext, MultipleDataSourcesRuntimeContext implements the getter for ShardingSphereMetaData; in other words, MultipleDataSourcesRuntimeContext gains the ability to obtain DataSource and table metadata.

@Getter
public abstract class MultipleDataSourcesRuntimeContext<T extends BaseRule> extends AbstractRuntimeContext<T> {
    private final ShardingSphereMetaData metaData;
    protected MultipleDataSourcesRuntimeContext(final Map<String, DataSource> dataSourceMap, final T rule,
                                                final Properties props, final DatabaseType databaseType) throws SQLException {
        super(rule, props, databaseType);
        metaData = createMetaData(dataSourceMap, databaseType);
    }
    private ShardingSphereMetaData createMetaData(final Map<String, DataSource> dataSourceMap, final DatabaseType databaseType) throws SQLException {
        // 1. Create DataSourceMetas
        DataSourceMetas dataSourceMetas = new DataSourceMetas(databaseType, getDatabaseAccessConfigurationMap(dataSourceMap));
        // 2. Create SchemaMetaData
        SchemaMetaData schemaMetaData = loadSchemaMetaData(dataSourceMap);
        ShardingSphereMetaData result = new ShardingSphereMetaData(dataSourceMetas, schemaMetaData);
        return result;
    }
    private Map<String, DatabaseAccessConfiguration> getDatabaseAccessConfigurationMap(final Map<String, DataSource> dataSourceMap) throws SQLException {
        Map<String, DatabaseAccessConfiguration> result = new LinkedHashMap<>(dataSourceMap.size(), 1);
        for (Entry<String, DataSource> entry : dataSourceMap.entrySet()) {
            DataSource dataSource = entry.getValue();
            try (Connection connection = dataSource.getConnection()) {
                DatabaseMetaData metaData = connection.getMetaData();
                result.put(entry.getKey(), new DatabaseAccessConfiguration(metaData.getURL(), metaData.getUserName(), null));
            }
        }
        return result;
    }
    protected abstract SchemaMetaData loadSchemaMetaData(Map<String, DataSource> dataSourceMap) throws SQLException;
}

In the constructor, MultipleDataSourcesRuntimeContext also requires subclasses to provide the BaseRule, Properties, and DatabaseType.

It then calls createMetaData to create the ShardingSphereMetaData. The DataSourceMetas is created by MultipleDataSourcesRuntimeContext itself, while subclasses must implement the loadSchemaMetaData method to create the SchemaMetaData.

4, ShardingRuntimeContext

@Getter
public final class ShardingRuntimeContext extends MultipleDataSourcesRuntimeContext<ShardingRule> {
    private final CachedDatabaseMetaData cachedDatabaseMetaData;
    private final ShardingTransactionManagerEngine shardingTransactionManagerEngine;
    public ShardingRuntimeContext(final Map<String, DataSource> dataSourceMap, final ShardingRule shardingRule,
                                  final Properties props, final DatabaseType databaseType) throws SQLException {
        super(dataSourceMap, shardingRule, props, databaseType);
        cachedDatabaseMetaData = createCachedDatabaseMetaData(dataSourceMap);
        shardingTransactionManagerEngine = new ShardingTransactionManagerEngine();
        shardingTransactionManagerEngine.init(databaseType, dataSourceMap);
    }
}

CachedDatabaseMetaData

ShardingRuntimeContext caches java.sql.DatabaseMetaData.

CachedDatabaseMetaData is created from a DatabaseMetaData.

public final class ShardingRuntimeContext extends MultipleDataSourcesRuntimeContext<ShardingRule> {
    private final CachedDatabaseMetaData cachedDatabaseMetaData;
    public ShardingRuntimeContext(final Map<String, DataSource> dataSourceMap, final ShardingRule shardingRule,
                                  final Properties props, final DatabaseType databaseType) throws SQLException {
        super(dataSourceMap, shardingRule, props, databaseType);
        cachedDatabaseMetaData = createCachedDatabaseMetaData(dataSourceMap);
        // ...
    }
    private CachedDatabaseMetaData createCachedDatabaseMetaData(final Map<String, DataSource> dataSourceMap) throws SQLException {
        try (Connection connection = dataSourceMap.values().iterator().next().getConnection()) {
            return new CachedDatabaseMetaData(connection.getMetaData());
        }
    }
}

CachedDatabaseMetaData only provides getter methods; all of its attributes come from java.sql.DatabaseMetaData.

/**
 * Cached database meta data.
 */
@Getter
public final class CachedDatabaseMetaData {
    private final String url;
    private final String userName;
    private final String databaseProductName;
    private final String databaseProductVersion;
    private final String driverName;
    private final String driverVersion;
    // ... other properties retrieved from DatabaseMetaData omitted
}

ShardingTransactionManagerEngine

Because ShardingDataSource operates on multiple data sources, distributed transactions may arise, so the sharding transaction engine ShardingTransactionManagerEngine is introduced here.

public final class ShardingTransactionManagerEngine {
    private final Map<TransactionType, ShardingTransactionManager> transactionManagerMap = new EnumMap<>(TransactionType.class);
    public ShardingTransactionManagerEngine() {
        loadShardingTransactionManager();
    }
    private void loadShardingTransactionManager() {
        for (ShardingTransactionManager each : ServiceLoader.load(ShardingTransactionManager.class)) {
            if (transactionManagerMap.containsKey(each.getTransactionType())) {
                continue;
            }
            transactionManagerMap.put(each.getTransactionType(), each);
        }
    }
}

public enum TransactionType {
    LOCAL, 
    XA, 
    BASE
}

Each TransactionType corresponds to one ShardingTransactionManager instance loaded through the SPI: XA corresponds to XAShardingTransactionManager, and BASE corresponds to SeataATShardingTransactionManager.
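The EnumMap together with the containsKey/continue guard means the first manager loaded for a TransactionType wins and later duplicates are skipped. A self-contained sketch of that behavior with invented manager types:

```java
import java.util.Arrays;
import java.util.EnumMap;
import java.util.List;
import java.util.Map;

public class TransactionEngineDemo {

    enum TransactionType { LOCAL, XA, BASE }

    interface MiniTransactionManager {
        TransactionType getTransactionType();
    }

    static MiniTransactionManager manager(TransactionType type) {
        return () -> type;
    }

    public static void main(String[] args) {
        // Pretend the SPI discovered two XA managers and one BASE manager
        List<MiniTransactionManager> discovered = Arrays.asList(
                manager(TransactionType.XA), manager(TransactionType.XA), manager(TransactionType.BASE));
        Map<TransactionType, MiniTransactionManager> managers = new EnumMap<>(TransactionType.class);
        for (MiniTransactionManager each : discovered) {
            if (managers.containsKey(each.getTransactionType())) {
                continue; // first registration wins, duplicates are skipped
            }
            managers.put(each.getTransactionType(), each);
        }
        if (managers.size() != 2 || managers.get(TransactionType.XA) != discovered.get(0)) {
            throw new AssertionError("first registration should win");
        }
        System.out.println("registered transaction types: " + managers.size()); // 2
    }
}
```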

Implement loadSchemaMetaData

ShardingRuntimeContext implements the loadSchemaMetaData method defined by its parent class MultipleDataSourcesRuntimeContext.

@Override
protected SchemaMetaData loadSchemaMetaData(final Map<String, DataSource> dataSourceMap) throws SQLException {
    // Maximum number of connections a single query may use per database instance, default 1
    int maxConnectionsSizePerQuery = getProperties().<Integer>getValue(ConfigurationPropertyKey.MAX_CONNECTIONS_SIZE_PER_QUERY);
    // Whether to check sharded-table metadata consistency at startup, default false
    boolean isCheckingMetaData = getProperties().<Boolean>getValue(ConfigurationPropertyKey.CHECK_TABLE_METADATA_ENABLED);
    SchemaMetaData result = new ShardingMetaDataLoader(dataSourceMap, getRule(), maxConnectionsSizePerQuery, isCheckingMetaData).load(getDatabaseType());
    // Decorate the SchemaMetaData with ShardingTableMetaDataDecorator:
    // if a key generation strategy exists, SchemaMetaData.TableMetaData.ColumnMetaData.generated is set to true,
    // meaning the column needs a generated primary key ID
    result = SchemaMetaDataDecorator.decorate(result, getRule(), new ShardingTableMetaDataDecorator());
    // If encryption rules exist, decorate again with EncryptTableMetaDataDecorator:
    // encrypted columns' TableMetaData.ColumnMetaData are wrapped as EncryptColumnMetaData,
    // extended with three fields: the cipher column, the plain column, and the assisted query column
    if (!getRule().getEncryptRule().getEncryptTableNames().isEmpty()) {
        result = SchemaMetaDataDecorator.decorate(result, getRule().getEncryptRule(), new EncryptTableMetaDataDecorator());
    }
    return result;
}

Conclusion

  • ShardingRuleConfiguration is converted into the runtime ShardingRule, which contains all of the sharding configuration information.
  • The multiple data sources are a member variable of ShardingDataSource, managed by AbstractDataSourceAdapter, which provides the getter.
  • ShardingRuntimeContext is the runtime context; it holds the ShardingRule sharding rules, the various engines (SQL parsing engine, SQL execution engine, transaction engine), and metadata (data sources, tables).
  • Through WrapperAdapter, Sharding-JDBC records method invocations on its objects and replays them at the appropriate moment once SQL parsing has determined the actual targets. This lets users keep using the traditional java.sql interfaces without caring about the underlying sharded execution, and resolves the mismatch between when the user calls a method and when it can actually be executed.