Sharding-Jdbc source code analysis

Sharding-JDBC is one of the open source distributed database middleware products under ShardingSphere. It provides standardized data sharding, distributed transaction, and database governance functions, and can be applied to diverse application scenarios such as homogeneous Java applications, heterogeneous languages, and cloud native deployments. Sharding-JDBC provides additional services at the Java JDBC layer: the client connects directly to the database, and the services are delivered as a JAR package, with no extra deployment or dependencies. It can be understood as an enhanced version of the JDBC driver, fully compatible with JDBC and various ORM frameworks.

Apache ShardingSphere-JDBC analysis

The JDBC call process is as follows: SQL parsing > SQL routing > SQL rewriting > SQL execution > result merging.

Take a look at the overall architecture of the ShardingSphere-JDBC package:

shardingsphere-jdbc
├── shardingsphere-jdbc-core
├── shardingsphere-jdbc-governance
└── shardingsphere-jdbc-core-spring
    ├── shardingsphere-jdbc-governance-spring
    ├── shardingsphere-jdbc-spring-infra
    └── shardingsphere-jdbc-transaction-spring

Four important JDBC objects

Common classes in JDBC are:

DriverManager: registers driver classes. Loading a driver class executes its static initializer block, in which the driver registers itself with DriverManager.

Connection: the database connection (java.sql.Connection), from which Statement and PreparedStatement objects are obtained.

Statement: the Statement interface represents a static SQL statement, and is used to create and execute general SQL statements from a Java program.

ResultSet: the object returned to hold the results of a query; it can be said that a ResultSet is an object that stores query results. It is not limited to storage, however: it can also be used to navigate the data and even to update it.
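To ground these four objects, here is a minimal plain-JDBC sketch; the driver class, URL, credentials, and table name are placeholders, not values from this article.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public final class PlainJdbcExample {
    
    public static void main(final String[] args) throws Exception {
        // Loading the driver class runs its static block, which registers it with DriverManager.
        Class.forName("com.mysql.cj.jdbc.Driver");
        try (Connection connection = DriverManager.getConnection("jdbc:mysql://localhost:3306/demo", "user", "password");
             Statement statement = connection.createStatement();
             ResultSet resultSet = statement.executeQuery("SELECT id, name FROM t_user")) {
            while (resultSet.next()) {
                System.out.println(resultSet.getLong("id") + ": " + resultSet.getString("name"));
            }
        }
    }
}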

shardingsphere-jdbc-core contains:

ShardingSphereDataSource  
ShardingSphereConnection 
ShardingSphereResultSet 
ShardingSphereStatement 

Three of the four overridden JDBC objects inherit from the WrapperAdapter class, so let's start with a brief discussion of WrapperAdapter.

First, note that there is a JdbcObject interface, which generically refers to DataSource, Connection, Statement, and the other core interfaces of the JDBC API. As mentioned earlier, these interfaces inherit from the Wrapper interface. ShardingSphere provides an implementation class, WrapperAdapter, for the Wrapper interface. In the shardingsphere-jdbc-core project, the org.apache.shardingsphere.shardingjdbc.jdbc.adapter package contains all of the adapter-related implementation classes.

WrapperAdapter

/**
 * Record method invocation.
 *
 * @param targetClass target class
 * @param methodName method name
 * @param argumentTypes argument types
 * @param arguments arguments
 */
@SneakyThrows(ReflectiveOperationException.class)
public final void recordMethodInvocation(final Class<?> targetClass, final String methodName, final Class<?>[] argumentTypes, final Object[] arguments) {
    jdbcMethodInvocations.add(new JdbcMethodInvocation(targetClass.getMethod(methodName, argumentTypes), arguments));
}

/**
 * Replay methods invocation.
 *
 * @param target target object
 */
public final void replayMethodsInvocation(final Object target) {
    jdbcMethodInvocations.forEach(each -> each.invoke(target));
}

WrapperAdapter includes recordMethodInvocation and replayMethodsInvocation, both of which work with JdbcMethodInvocation objects. The JdbcMethodInvocation class uses reflection to invoke a method on a target object, based on the Method and argument objects passed in.

After understanding the JdbcMethodInvocation class, it is easy to understand what recordMethodInvocation and replayMethodsInvocation do: recordMethodInvocation records the methods and parameters that need to be executed, while replayMethodsInvocation executes those recorded methods, with their parameters, via reflection.
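For reference, here is a simplified sketch of JdbcMethodInvocation; the Lombok annotations match the style of the surrounding code, but treat the exact details as an approximation rather than the verbatim source.

import java.lang.reflect.Method;

import lombok.Getter;
import lombok.RequiredArgsConstructor;
import lombok.SneakyThrows;

// Simplified sketch of JdbcMethodInvocation; details may differ from the real source.
@RequiredArgsConstructor
public class JdbcMethodInvocation {
    
    @Getter
    private final Method method;
    
    @Getter
    private final Object[] arguments;
    
    /**
     * Invoke the recorded JDBC method on the given target via reflection.
     *
     * @param target target object
     */
    @SneakyThrows
    public void invoke(final Object target) {
        method.invoke(target, arguments);
    }
}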

To see what replayMethodsInvocation replays, we must first find the invocation entry points of recordMethodInvocation. Tracing the call relationships through the code, we can see that it is called in AbstractConnectionAdapter, specifically in the setAutoCommit, setReadOnly, and setTransactionIsolation methods. Here is the implementation of the setReadOnly method:

@Override
public final void setReadOnly(final boolean readOnly) throws SQLException {
    this.readOnly = readOnly;
    recordMethodInvocation(Connection.class, "setReadOnly", new Class[]{boolean.class}, new Object[]{readOnly});
    forceExecuteTemplate.execute(cachedConnections.values(), connection -> connection.setReadOnly(readOnly));
}

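The reason for this record-and-replay pair is that the physical connections behind a logical connection are created lazily, per data source, at execution time; any settings applied to the logical connection before routing must therefore be recorded and re-applied to each real connection when it is created. Below is a hedged sketch of that replay site; the method name and signature are assumptions, not the verbatim source:

// Hedged sketch: the method shape is an assumption, not the verbatim source.
private Connection createConnection(final DataSource dataSource) throws SQLException {
    Connection result = dataSource.getConnection();
    // Re-applies the recorded setAutoCommit/setReadOnly/setTransactionIsolation calls.
    replayMethodsInvocation(result);
    return result;
}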

On the other hand, looking at the class hierarchy, we can see that AbstractConnectionAdapter inherits directly from AbstractUnsupportedOperationConnection rather than from WrapperAdapter. AbstractUnsupportedOperationConnection is a set of methods that simply throw exceptions, as in the following code:

@Override
public final CallableStatement prepareCall(final String sql) throws SQLException {
    throw new SQLFeatureNotSupportedException("prepareCall");
}

Essentially, the AbstractUnsupportedOperationConnection class is used to declare which operations its subclass AbstractConnectionAdapter does not support. This separation of responsibilities is a good design practice.

ShardingSphereDataSource

First let's look at ShardingSphereDataSourceFactory, the factory class that creates ShardingSphereDataSource instances:

public final class ShardingSphereDataSourceFactory {
    
    /**
     * Create ShardingSphere data source.
     *
     * @param dataSourceMap data source map
     * @param configurations rule configurations
     * @param props properties
     * @return ShardingSphere data source
     * @throws SQLException SQL exception
     */
    public static DataSource createDataSource(final Map<String, DataSource> dataSourceMap, final Collection<RuleConfiguration> configurations, final Properties props) throws SQLException {
        return new ShardingSphereDataSource(dataSourceMap, configurations, props);
    }
    
    /**
     * Create ShardingSphere data source from a single underlying data source.
     */
    public static DataSource createDataSource(final DataSource dataSource, final Collection<RuleConfiguration> configurations, final Properties props) throws SQLException {
        Map<String, DataSource> dataSourceMap = new HashMap<>(1, 1);
        dataSourceMap.put(DefaultSchema.LOGIC_NAME, dataSource);
        return createDataSource(dataSourceMap, configurations, props);
    }
}

ShardingSphereDataSourceFactory implements two DataSource initialization methods. The three parameters Map<String, DataSource> dataSourceMap, Collection<RuleConfiguration> configurations, and Properties props initialize the ShardingSphereDataSource.
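A hedged usage sketch of the first overload; the data source names, the buildPhysicalDataSource helper, and the rule values are illustrative assumptions:

// Hedged sketch; names, helper, and rule values are assumptions.
Map<String, DataSource> dataSourceMap = new HashMap<>(2, 1);
dataSourceMap.put("ds_0", buildPhysicalDataSource("jdbc:mysql://localhost:3306/ds_0")); // hypothetical helper
dataSourceMap.put("ds_1", buildPhysicalDataSource("jdbc:mysql://localhost:3306/ds_1")); // hypothetical helper

ShardingRuleConfiguration shardingRuleConfig = new ShardingRuleConfiguration();
shardingRuleConfig.getTables().add(new ShardingTableRuleConfiguration("t_order", "ds_${0..1}.t_order_${0..1}"));

DataSource dataSource = ShardingSphereDataSourceFactory.createDataSource(
        dataSourceMap, Collections.<RuleConfiguration>singleton(shardingRuleConfig), new Properties());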

ShardingSphereDataSource holds references to two context interfaces:

private final MetaDataContexts metaDataContexts;

private final TransactionContexts transactionContexts;


First, take a look at the implementations of MetaDataContexts.

MetaDataContexts has two implementations, one in the governance package and one in the infra package.

StandardMetaDataContexts, the basic metadata context, is implemented in the infra package; it wires in ExecutorEngine, CalciteContextFactory, and other very important configuration and engine components.

@Getter
public final class StandardMetaDataContexts implements MetaDataContexts {
    
    private final Map<String, ShardingSphereMetaData> metaDataMap;
    
    private final ExecutorEngine executorEngine;
    
    private final CalciteContextFactory calciteContextFactory;
    
    private final Authentication authentication;
    
    private final ConfigurationProperties props;
    
    private final LockContext lockContext;
    
    private final StateContext stateContext;
    
    public StandardMetaDataContexts() {
        this(new LinkedHashMap<>(), null, new DefaultAuthentication(), new ConfigurationProperties(new Properties()));
    }
    
    public StandardMetaDataContexts(final Map<String, ShardingSphereMetaData> metaDataMap, 
                                    final ExecutorEngine executorEngine, final Authentication authentication, final ConfigurationProperties props) {
        this.metaDataMap = new LinkedHashMap<>(metaDataMap);
        this.executorEngine = executorEngine;
        calciteContextFactory = new CalciteContextFactory(metaDataMap);
        this.authentication = AuthenticationEngine.findSPIAuthentication().orElse(authentication);
        this.props = props;
        lockContext = new StandardLockContext();
        stateContext = new StateContext();
    }

Then there is TransactionContexts, the transaction configuration interface:

/**
 * Transaction contexts.
 */
public interface TransactionContexts extends AutoCloseable {
    
    /**
     * Get transaction manager engines.
     * 
     * @return transaction manager engines
     */
    Map<String, ShardingTransactionManagerEngine> getEngines();
    
    /**
     * Get default transaction manager engine.
     *
     * @return default transaction manager engine
     */
    ShardingTransactionManagerEngine getDefaultTransactionManagerEngine();
}

We will come back to the infra and governance packages for deeper analysis later; for now, let's return to the contents of the sharding-jdbc package. Start the ShardingSphereDataSourceFactoryTest test class and set a breakpoint where the DataSource is created: the createDataSource method is called, and the metaDataContexts reference context is populated from the test data.

private ShardingRuleConfiguration createShardingRuleConfiguration() {
    ShardingRuleConfiguration result = new ShardingRuleConfiguration();
    result.getTables().add(new ShardingTableRuleConfiguration("logicTable", "logic_db.table_${0..2}"));
    return result;
}

The test class generates the sharding table rule configuration and initializes the DataSource by passing it into the createDataSource method.

Alongside it is the YamlShardingSphereDataSourceFactory class. Its implementation first unmarshals the YAML file with the YAML rules engine, and then starts up through ShardingSphereDataSourceFactory.createDataSource:

public static DataSource createDataSource(final File yamlFile) throws SQLException, IOException {
    YamlRootRuleConfigurations configurations = YamlEngine.unmarshal(yamlFile, YamlRootRuleConfigurations.class);
    return ShardingSphereDataSourceFactory.createDataSource(DATASOURCE_SWAPPER.swapToDataSources(configurations.getDataSources()),
            SWAPPER_ENGINE.swapToRuleConfigurations(configurations.getRules()), configurations.getProps());
}
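A hedged usage sketch; the YAML file path is a placeholder:

// Hedged usage sketch; the YAML file path is a placeholder.
DataSource dataSource = YamlShardingSphereDataSourceFactory.createDataSource(new File("/etc/shardingsphere/sharding.yaml"));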

ShardingSphereConnection

ShardingSphereConnection extends AbstractConnectionAdapter implements ExecutorJDBCManager

ShardingSphereConnection inherits from the AbstractConnectionAdapter class and implements the ExecutorJDBCManager interface; at the bottom of this hierarchy is java.sql.Connection.

ShardingSphere wraps the underlying Connection implementation classes through AbstractConnectionAdapter. The adapter classes were discussed above, so there is no need to say more here.

Run the ShardingSphereConnectionTest test class to see how ShardingSphereConnection executes when getConnection() is called:

@Test
public void assertGetConnectionFromCache() throws SQLException {
    assertThat(connection.getConnection("test_primary_ds"), is(connection.getConnection("test_primary_ds")));
}


which then calls into:

public Connection getConnection(final String dataSourceName) throws SQLException {
    return getConnections(dataSourceName, 1, ConnectionMode.MEMORY_STRICTLY).get(0);
}

@Override
public List<Connection> getConnections(final String dataSourceName, final int connectionSize, final ConnectionMode connectionMode) throws SQLException {
    DataSource dataSource = dataSourceMap.get(dataSourceName);
    Preconditions.checkState(null != dataSource, "Missing the data source name: '%s'", dataSourceName);
    Collection<Connection> connections;
    synchronized (getCachedConnections()) {
        connections = getCachedConnections().get(dataSourceName);
    }

At the breakpoint we can see that dataSourceName is passed in as test_primary_ds and test_replica_ds.

The test program registers both the primary and the replica data source during initialization:

@BeforeClass
public static void init() throws SQLException {
    DataSource primaryDataSource = mockDataSource();
    DataSource replicaDataSource = mockDataSource();
    dataSourceMap = new HashMap<>(2, 1);
    dataSourceMap.put("test_primary_ds", primaryDataSource);
    dataSourceMap.put("test_replica_ds", replicaDataSource);
}

ShardingSphereConnection inherits directly from AbstractConnectionAdapter, and in AbstractConnectionAdapter we find a cachedConnections property, which is a Multimap object:

@Getter
private final Multimap<String, Connection> cachedConnections = LinkedHashMultimap.create();


This object caches the real Connection objects behind the wrapped ShardingSphereConnection. If an AbstractConnectionAdapter is reused, these cachedConnections remain cached until the close method is called.

That is why getConnection returns the cached connections for both data sources. In actual use, this class is also what obtains all of the data sources involved, which is how a single statement can query tables across multiple databases in a sharded setup.
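To close, a hedged end-to-end sketch of what that enables; the table and column names are assumptions:

// Hedged sketch; table and column names are assumptions.
try (Connection connection = dataSource.getConnection();
     PreparedStatement statement = connection.prepareStatement("SELECT order_id FROM t_order WHERE user_id = ?")) {
    statement.setLong(1, 10L);
    try (ResultSet resultSet = statement.executeQuery()) {
        while (resultSet.next()) {
            // Rows may come from several physical tables and databases; ShardingSphere merges them transparently.
            System.out.println(resultSet.getLong("order_id"));
        }
    }
}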