1. Causes of the problem

  • SQL Server limits a single INSERT ... VALUES statement to 1000 rows and a single statement to 2100 parameters
  • MySQL limits the length of a statement; the default max_allowed_packet is 4 MB
  • MyBatis itself places no limit on the size of dynamically generated SQL, so a single <foreach> insert can easily exceed the database limits above (see the sketch after this list)

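To make these limits concrete, here is a minimal sketch (the mapper name ExcelMapper, the method insertAll, and the table/column names are assumptions for illustration) of the typical <foreach> multi-row insert that runs into them; with a large enough list, the single generated INSERT blows past SQL Server's parameter limit or MySQL's packet size:

    import java.util.List;
    import org.apache.ibatis.annotations.Insert;
    import org.apache.ibatis.annotations.Param;

    public interface ExcelMapper {

        // One statement for the whole list: with N rows and 2 columns this binds
        // 2 * N parameters, exceeding SQL Server's 2100-parameter limit at around
        // 1000 rows, and a very long list can exceed MySQL's max_allowed_packet too.
        @Insert("<script>"
              + "INSERT INTO excel_data (col_a, col_b) VALUES "
              + "<foreach collection='list' item='item' separator=','>"
              + "(#{item.colA}, #{item.colB})"
              + "</foreach>"
              + "</script>")
        int insertAll(@Param("list") List<ExcelEntity> list);
    }
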
2. Solutions

Using ExecutorType in MyBatis

MyBatis has three built-in ExecutorTypes: SIMPLE, REUSE, and BATCH. The default is SIMPLE, which creates a new prepared statement for every execution and sends each SQL statement on its own; BATCH, by contrast, reuses prepared statements and sends all accumulated update statements to the database in batches, so its performance is clearly better. BATCH mode has problems of its own, however: for example, during INSERT operations the auto-increment ID cannot be obtained before the transaction is committed, which does not suit some business requirements.
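
If a whole code path should always run in batch mode, one alternative to opening sessions by hand is to declare a dedicated SqlSessionTemplate whose default executor is BATCH. A minimal sketch, assuming a Spring setup and a hypothetical configuration class:

    import org.apache.ibatis.session.ExecutorType;
    import org.apache.ibatis.session.SqlSessionFactory;
    import org.mybatis.spring.SqlSessionTemplate;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class MyBatisBatchConfig {

        // Mappers obtained through this template execute with the BATCH executor
        @Bean
        public SqlSessionTemplate batchSqlSessionTemplate(SqlSessionFactory sqlSessionFactory) {
            return new SqlSessionTemplate(sqlSessionFactory, ExecutorType.BATCH);
        }
    }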

Solution for inserting a large amount of data, using ExecutorType:

    @Autowired
    private SqlSessionTemplate sqlSessionTemplate;

    public void insertExcelData(List<ExcelEntity> orderList) {
        // Open a BATCH session with autoCommit = false; if auto commit were true
        // we could not control the number of commits ourselves
        SqlSession session = sqlSessionTemplate.getSqlSessionFactory()
                .openSession(ExecutorType.BATCH, false);
        try {
            // ExcelMapper is the mapper interface for ExcelEntity (name assumed);
            // importExeclDate2 inserts a single row
            ExcelMapper excelDao = session.getMapper(ExcelMapper.class);
            for (int i = 0; i < orderList.size(); i++) {
                excelDao.importExeclDate2(orderList.get(i));
                if ((i + 1) % 400 == 0 || i == orderList.size() - 1) {
                    // Manually commit every 400 entries; committed data cannot be rolled back
                    session.commit();
                    // Clear the cache to prevent memory overflow
                    session.clearCache();
                }
            }
        } catch (Exception e) {
            // Uncommitted data can be rolled back
            session.rollback();
        } finally {
            session.close();
        }
    }

A single-row insert statement is used here, called directly from a for loop, but because ExecutorType.BATCH is in effect nothing reaches the database until we commit manually, which is exactly what we want. Inside the loop we check whether 400 records have accumulated (or the last record has been reached); if so we commit, and then clear the session cache to prevent memory overflow. This effectively achieves batch inserting without running into the overflow problem.

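A second approach is to split the list into fixed-size chunks and pass each chunk to a mapper method that performs a multi-row update (typically built with <foreach>), so that each generated statement stays well under the limits described above:
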
        // Split readManagers into chunks of at most `limit` entries and send each
        // chunk to the DAO as one multi-row update
        int beginIndex = 0;
        if (readManagers.size() > 0) {
            int limit = 200;
            while (beginIndex < readManagers.size()) {
                // Exclusive end index of the current chunk, clamped to the list size
                int endIndex = Math.min(beginIndex + limit, readManagers.size());
                readBookOperatorDao.updateReadManagerOperator(readManagers.subList(beginIndex, endIndex));
                beginIndex = endIndex;
            }
        }
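
The DAO method used above would typically map to a <foreach>-based multi-row update; a minimal sketch (the interface name follows the readBookOperatorDao variable, while the table, column, and the ReadManager id property are assumptions for illustration):

    import java.util.List;
    import org.apache.ibatis.annotations.Param;
    import org.apache.ibatis.annotations.Update;

    public interface ReadBookOperatorDao {

        // One UPDATE per chunk: the <foreach> expands the ids of the chunk into an
        // IN (...) clause, so each call binds at most `limit` (200) parameters
        @Update("<script>"
              + "UPDATE read_manager SET operator_status = 1 "
              + "WHERE id IN "
              + "<foreach collection='list' item='item' open='(' separator=',' close=')'>"
              + "#{item.id}"
              + "</foreach>"
              + "</script>")
        int updateReadManagerOperator(@Param("list") List<ReadManager> readManagers);
    }

With a chunk size of 200, each statement binds only a couple of hundred parameters, comfortably below SQL Server's 2100-parameter limit and far smaller than MySQL's default packet size.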