One, foreword

On an older system, as the volume of data grew, we noticed that some paging statements were slowing us down.

The old system does not use PageHelper or MyBatis-Plus, and the new system uses the Druid connection pool.

Two, old code

The old code uses a MyBatis plug-in for paging. The core paging code is as follows:

// Count SQL used to get the total number of records
String countSql = "select count(0) from (" + sql + ") tmp_count";
PreparedStatement countStmt = connection.prepareStatement(countSql);
BoundSql countBS = new BoundSql(mappedStatement.getConfiguration(), countSql, boundSql.getParameterMappings(), parameterObject);
setParameters(countStmt, mappedStatement, countBS, parameterObject);

It wraps a count SQL around the original SQL, as many plug-ins do.
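For example, with a hypothetical query, the wrapped count SQL ends up looking like this; the subquery still carries the full select list and the ORDER BY, which is pointless work when all we need is a row count:

// Hypothetical query, only to illustrate the wrapping the old plug-in performs
String sql = "select id, name from t_user where type = ? order by create_time desc";
String countSql = "select count(0) from (" + sql + ") tmp_count";
// countSql is now:
// select count(0) from (select id, name from t_user where type = ? order by create_time desc) tmp_count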

Three, Druid PagerUtils

Example SQL (with complex coordinate calculations)

SELECT g.*
    , ROUND(6378.138 * 2 * ASIN(SQRT(POW(SIN((? * PI() / 180 - t.latitude * PI() / 180) / 2), 2) + COS(? * PI() / 180) * COS(t.latitude * PI() / 180) * POW(SIN((? * PI() / 180 - t.longitude * PI() / 180) / 2), 2))), 2) AS distancecd
    , t.agentname, t.agentlogo, t.compaddress
FROM t_bas_integral_goods g
    LEFT JOIN t_bas_agent t ON g.agentid = t.AGENTID
WHERE t.AGENTTYPE = '2'
    AND t.pass = '0'
    AND t.dl_type = '4'
    AND g.type = 0
ORDER BY distancecd ASC

Use Druid's PagerUtils to generate the count SQL:

String countSql = PagerUtils.count(sql, DbType.mysql);
System.out.println(countSql);

Output:

SELECT COUNT(*)
FROM t_bas_integral_goods g
  LEFT JOIN t_bas_agent t ON g.agentid = t.AGENTID
WHERE t.AGENTTYPE = '2'
  AND t.pass = '0'
  AND t.dl_type = '4'
  AND g.type = 0

We can see that the optimized count SQL is very concise: the distance calculation and the ORDER BY are discarded. Note: PagerUtils also has a limit method that generates the paging (limit) statement.
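As a side note, here is a minimal sketch of PagerUtils.limit (assuming the same Druid version as above, where the DbType enum is available; the table name and offset/size values are only for illustration):

import com.alibaba.druid.DbType;
import com.alibaba.druid.sql.PagerUtils;

public class PagerLimitDemo {
    public static void main(String[] args) {
        String sql = "select * from t_bas_integral_goods where type = 0 order by id";
        // Append a MySQL LIMIT clause: skip 20 rows, fetch 10 (page 3 with a page size of 10)
        String pageSql = PagerUtils.limit(sql, DbType.mysql, 20, 10);
        System.out.println(pageSql);
        // Expected output, roughly: SELECT * FROM t_bas_integral_goods WHERE type = 0 ORDER BY id LIMIT 20, 10
    }
}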

Four, modifying the MyBatis paging plug-in

4.1 The first pitfall

Since Druid's PagerUtils.count already produces an optimized count SQL, I started wiring it into the old plug-in. At first I simply replaced countSql:

String countSql = PagerUtils.count(sql, dbType);
PreparedStatement countStmt = connection.prepareStatement(countSql);
BoundSql countBS = new BoundSql(mappedStatement.getConfiguration(), countSql, boundSql.getParameterMappings(), parameterObject);
setParameters(countStmt, mappedStatement, countBS, parameterObject);

It failed right away: the original SQL contains bound variables (the ? placeholders), but the optimized count SQL no longer has them, and the plug-in still goes ahead and sets all of the original variables on it. How can we solve this problem?

Let's go back to how PageHelper and MyBatis-Plus implement this: both of them parse and rewrite the SQL with JSqlParser.

Do we really need to pull in JSqlParser as well? That would make using PagerUtils.count rather pointless.

I read through the source code and wondered: does it really have to be this complicated? Is there a better way? In MyBatis, a DynamicSqlSource still carries the SQL with its #{xxx} placeholders before the parameters are bound.

Could I have Druid optimize the SQL while it still contains the MyBatis placeholders? Let's try:

Sample SQL:

select * from xxx where type = #{type} order by xx
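Feed it to PagerUtils.count exactly as before (a minimal sketch; the #{type} placeholder is handed to Druid untouched, same imports as in the earlier example):

String sql = "select * from xxx where type = #{type} order by xx";
String countSql = PagerUtils.count(sql, DbType.mysql);
System.out.println(countSql);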

Output:

SELECT COUNT(*)
FROM xxx
WHERE type = #{type}

Perfect!

4.2 Another pitfall

Debugging further, I found that SQL written in annotations, like the example below, is backed by RawSqlSource, where the #{xxx} placeholders have already been converted into ? form at initialization time.

@Select("select * from xxx where type = #{type} order by xx")
Object test(@Param("type") String type);

So I checked whether it could be extended and found that RawSqlSource is created in XMLLanguageDriver. That is easy to handle, because I have extended XMLLanguageDriver before and it is configurable. So I rewrote RawSqlSource, adding a rawSql field that keeps the MyBatis parameter placeholders (#{}):

import java.util.HashMap;
import org.apache.ibatis.builder.SqlSourceBuilder;
import org.apache.ibatis.mapping.SqlSource;
import org.apache.ibatis.scripting.xmltags.SqlNode;
import org.apache.ibatis.session.Configuration;

/**
 * Raw SQL source that keeps the original SQL, so the Druid tools can be used for pagination.
 *
 * @author L.cm
 */
public class MicaRawSqlSource implements SqlSource {
    private final String rawSql;
    private final SqlSource sqlSource;

    public MicaRawSqlSource(Configuration configuration, SqlNode rootSqlNode, Class<?> parameterType) {
        this(configuration, getSql(configuration, rootSqlNode), parameterType);
    }

    public MicaRawSqlSource(Configuration configuration, String sql, Class<?> parameterType) {
        SqlSourceBuilder sqlSourceParser = new SqlSourceBuilder(configuration);
        Class<?> clazz = parameterType == null ? Object.class : parameterType;
        this.rawSql = sql;
        this.sqlSource = sqlSourceParser.parse(sql, clazz, new HashMap<>());
    }

    public String getRawSql() {
        return this.rawSql;
    }

    // ... getSql(Configuration, SqlNode) helper and getBoundSql(...) omitted
}
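The snippet above does not show how MicaRawSqlSource gets wired in. Below is a minimal sketch of what an extended XMLLanguageDriver could look like and how it might be registered; the class name MicaXmlLanguageDriver and the exact override are my assumptions, not the original code:

import org.apache.ibatis.mapping.SqlSource;
import org.apache.ibatis.scripting.defaults.RawSqlSource;
import org.apache.ibatis.scripting.xmltags.XMLLanguageDriver;
import org.apache.ibatis.session.Configuration;

// Sketch only: swap the default RawSqlSource for MicaRawSqlSource on non-dynamic SQL
public class MicaXmlLanguageDriver extends XMLLanguageDriver {

    @Override
    public SqlSource createSqlSource(Configuration configuration, String script, Class<?> parameterType) {
        // Only the plain-string annotation case is sketched; the <script>/XML cases are left to the parent
        if (!script.startsWith("<script>")) {
            SqlSource sqlSource = super.createSqlSource(configuration, script, parameterType);
            if (sqlSource instanceof RawSqlSource) {
                // Keep the raw SQL (with #{} placeholders) so the paging plug-in can reach it
                return new MicaRawSqlSource(configuration, script, parameterType);
            }
            return sqlSource;
        }
        return super.createSqlSource(configuration, script, parameterType);
    }
}

// Registration, e.g. when building the MyBatis Configuration:
// configuration.setDefaultScriptingLanguage(MicaXmlLanguageDriver.class);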

Now that all the logic has been worked out, let’s take a look at the core code of our PagePlugin:

// Paging
Configuration configuration = mappedStatement.getConfiguration();
SqlSourceBuilder sqlSourceParser = new SqlSourceBuilder(configuration);
Class<?> parameterType = parameterObject.getClass();
Connection connection = (Connection) invocation.getArgs()[0];
// 1. Check the SQL: if it contains no ? placeholders, it can be processed directly
String boundRawSql = boundSql.getSql();
if (boundRawSql.indexOf(CharPool.QUESTION_MARK) == -1) {
    // No ? placeholders, so there are no bound parameters to re-map
    String countSql = PagerUtils.count(boundRawSql, dbType);
    SqlSource newSqlSource = sqlSourceParser.parse(countSql, parameterType, new HashMap<>());
    BoundSql newBoundSql = newSqlSource.getBoundSql(parameterObject);
    int count = getCount(connection, mappedStatement, parameterObject, newBoundSql);
    StringBuilder sqlBuilder = new StringBuilder(boundRawSql);
    Page page = getPageParam(parameterObject, sqlBuilder, count);
    String pageSql = generatePageSql(sqlBuilder.toString(), dbType, page);
    // Reflect paging SQL statements back to BoundSql.
    setField(boundSql, "sql", pageSql);
    return invocation.proceed();
}
// 2. Otherwise, handle the statement according to its SqlSource type
SqlSource sqlSource = mappedStatement.getSqlSource();
// Dynamic SQL in XML
int count;
if (sqlSource instanceof DynamicSqlSource) {
    SqlNode rootSqlNode = PagePlugin.getField(sqlSource, "rootSqlNode");
    DynamicContext context = new DynamicContext(configuration, parameterObject);
    rootSqlNode.apply(context);
    // Generate count SQL with #{XXX} variable
    String countSql = PagerUtils.count(context.getSql(), dbType);
    SqlSource newSqlSource = sqlSourceParser.parse(countSql, parameterType, context.getBindings());
    BoundSql newBoundSql = newSqlSource.getBoundSql(parameterObject);
    count = getCount(connection, mappedStatement, parameterObject, newBoundSql);
} else if (sqlSource instanceof MicaRawSqlSource) {
    String rawSql = ((MicaRawSqlSource) sqlSource).getRawSql();
    DynamicContext context = new DynamicContext(configuration, parameterObject);
    // Generate count SQL with #{XXX} variable
    String countSql = PagerUtils.count(rawSql, dbType);
    SqlSource newSqlSource = sqlSourceParser.parse(countSql, parameterType, context.getBindings());
    BoundSql newBoundSql = newSqlSource.getBoundSql(parameterObject);
    count = getCount(connection, mappedStatement, parameterObject, newBoundSql);
} else {
    throw new IllegalArgumentException("Unsupported SQL pagination, please use XML or annotations");
}
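For context, the core code above reads a Connection from invocation.getArgs()[0], which matches intercepting StatementHandler#prepare. Here is a minimal sketch of an interceptor declaration that would fit; the signature is inferred from that argument, not copied from the article:

import java.sql.Connection;
import org.apache.ibatis.executor.statement.StatementHandler;
import org.apache.ibatis.plugin.Interceptor;
import org.apache.ibatis.plugin.Intercepts;
import org.apache.ibatis.plugin.Invocation;
import org.apache.ibatis.plugin.Plugin;
import org.apache.ibatis.plugin.Signature;

// Sketch only: args = {Connection.class, Integer.class} is the StatementHandler#prepare signature in MyBatis 3.4+
@Intercepts({
    @Signature(type = StatementHandler.class, method = "prepare", args = {Connection.class, Integer.class})
})
public class PagePlugin implements Interceptor {

    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        // ... the pagination logic shown above ...
        return invocation.proceed();
    }

    @Override
    public Object plugin(Object target) {
        return Plugin.wrap(target, this);
    }

    // setProperties(Properties) omitted in this sketch
}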

Five, the conclusion

After switching the entire old service to the mica (deeply customized) microservice architecture, it is noticeably faster (the demo environment only runs a single service with a low memory configuration). We will continue to optimize it later.

I am Brother Spring (Dreamlu). The open-source version of mica is at gitee.com/596392912/m… Welcome to star it and come talk with us.