Preface

This article shows how to integrate Spring Boot with MyBatis, Druid, and PageHelper, covering multiple data sources and pagination. Spring Boot's integration with MyBatis was covered in a previous article, so it is not repeated here. The focus is on how to configure multiple data sources with Druid and how to paginate with PageHelper.

SpringBoot + Mybatis + Druid + PageHelper

Druid introduction and use

Before using Druid, let's take a quick look at it.

Druid is a database connection pool, and arguably the best one available. It is favored by developers for its excellent functionality, performance, and scalability.

Druid is a database connection pool developed by Alibaba with monitoring in mind. It already backs more than 600 applications at Alibaba and has been rigorously exercised in large-scale production environments for over a year.

Druid is more than just a database connection pool. Its core has three main parts:

  • Plug-in system based on filter-chain mode.

  • DruidDataSource efficiently manages the database connection pool.

  • SQLParser

The main features of Druid are as follows:

  • An efficient, powerful, and scalable database connection pool.

  • Monitoring of database access performance.

  • Database password encryption.

  • SQL execution logging.

  • JDBC extensions.

That is enough introduction; see the official documentation for the details. Now let's get started with Druid.

First come the Maven dependencies. Only the Druid JAR needs to be added:

```xml
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid</artifactId>
    <version>1.1.8</version>
</dependency>
```

In terms of configuration, the main thing is to add the following to application.properties or application.yml.

Note: because two data sources are used here, the configuration differs slightly from the single-source case. The individual Druid settings are explained in the comments below.

```properties
# default (master) data source
master.datasource.url=jdbc:mysql://localhost:3306/springBoot?useUnicode=true&characterEncoding=utf8&allowMultiQueries=true
master.datasource.username=root
master.datasource.password=123456
master.datasource.driverClassName=com.mysql.jdbc.Driver

# second (cluster) data source
cluster.datasource.url=jdbc:mysql://localhost:3306/springBoot_test?useUnicode=true&characterEncoding=utf8
cluster.datasource.username=root
cluster.datasource.password=123456
cluster.datasource.driverClassName=com.mysql.jdbc.Driver

spring.datasource.type=com.alibaba.druid.pool.DruidDataSource
spring.datasource.initialSize=5
spring.datasource.minIdle=5
spring.datasource.maxActive=20
# maximum wait time when acquiring a connection, in milliseconds
spring.datasource.maxWait=60000
# how often the eviction check runs (idle connections must be closed), in milliseconds
spring.datasource.timeBetweenEvictionRunsMillis=60000
# minimum time a connection stays idle in the pool before it may be evicted, in milliseconds
spring.datasource.minEvictableIdleTimeMillis=300000
spring.datasource.validationQuery=SELECT 1 FROM DUAL
spring.datasource.testWhileIdle=true
spring.datasource.testOnBorrow=false
spring.datasource.testOnReturn=false
# enable PSCache and set its size per connection
spring.datasource.poolPreparedStatements=true
spring.datasource.maxPoolPreparedStatementPerConnectionSize=20
# filters for monitoring/statistics interception; 'wall' is the SQL firewall
spring.datasource.filters=stat,wall,log4j
# enable mergeSql and slow-SQL logging via connectProperties
spring.datasource.connectionProperties=druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000
```
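Several of the settings above (testOnBorrow, validationQuery, minEvictableIdleTimeMillis, timeBetweenEvictionRunsMillis) describe the pool's borrow/validate/evict cycle. As a purely illustrative, stdlib-only toy, and not Druid's actual implementation, that cycle can be sketched like this:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ToyPool {
    static class Conn {
        boolean open = true;
        long idleSince = System.currentTimeMillis();
        boolean validate() { return open; } // stands in for running the validationQuery
    }

    private final Deque<Conn> idle = new ArrayDeque<>();
    private final boolean testOnBorrow;
    private final long minEvictableIdleTimeMillis;

    ToyPool(boolean testOnBorrow, long minEvictableIdleTimeMillis) {
        this.testOnBorrow = testOnBorrow;
        this.minEvictableIdleTimeMillis = minEvictableIdleTimeMillis;
    }

    Conn borrow() {
        while (!idle.isEmpty()) {
            Conn c = idle.poll();
            if (!testOnBorrow || c.validate()) {
                return c; // reuse a pooled connection
            }
            // broken connection: discard it and try the next one
        }
        return new Conn(); // pool empty: "open" a fresh connection
    }

    void giveBack(Conn c) {
        c.idleSince = System.currentTimeMillis();
        idle.offer(c);
    }

    // What an eviction thread would do every timeBetweenEvictionRunsMillis
    int evictIdle(long now) {
        int evicted = 0;
        while (!idle.isEmpty() && now - idle.peek().idleSince >= minEvictableIdleTimeMillis) {
            idle.poll();
            evicted++;
        }
        return evicted;
    }

    public static void main(String[] args) {
        ToyPool pool = new ToyPool(true, 1000);
        Conn c1 = pool.borrow();
        pool.giveBack(c1);
        Conn c2 = pool.borrow();
        System.out.println("reused pooled connection: " + (c1 == c2)); // true
        c2.open = false;
        pool.giveBack(c2);
        System.out.println("broken connection discarded: " + (pool.borrow() != c2)); // true
    }
}
```

The real pool adds sizing (initialSize/minIdle/maxActive), maxWait blocking, statement caching, and the filter chain on top of this skeleton.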

Now that the configuration file is in place, let's write the Druid-related classes.

First is the MasterDataSourceConfig.java class, which configures the default data source.

```java
@Configuration
@MapperScan(basePackages = MasterDataSourceConfig.PACKAGE, sqlSessionFactoryRef = "masterSqlSessionFactory")
public class MasterDataSourceConfig {

    static final String PACKAGE = "com.pancm.dao.master";
    static final String MAPPER_LOCATION = "classpath:mapper/master/*.xml";

    @Value("${master.datasource.url}")
    private String url;

    @Value("${master.datasource.username}")
    private String username;

    @Value("${master.datasource.password}")
    private String password;

    @Value("${master.datasource.driverClassName}")
    private String driverClassName;

    @Value("${spring.datasource.initialSize}")
    private int initialSize;

    @Value("${spring.datasource.minIdle}")
    private int minIdle;

    @Value("${spring.datasource.maxActive}")
    private int maxActive;

    @Value("${spring.datasource.maxWait}")
    private int maxWait;

    @Value("${spring.datasource.timeBetweenEvictionRunsMillis}")
    private int timeBetweenEvictionRunsMillis;

    @Value("${spring.datasource.minEvictableIdleTimeMillis}")
    private int minEvictableIdleTimeMillis;

    @Value("${spring.datasource.validationQuery}")
    private String validationQuery;

    @Value("${spring.datasource.testWhileIdle}")
    private boolean testWhileIdle;

    @Value("${spring.datasource.testOnBorrow}")
    private boolean testOnBorrow;

    @Value("${spring.datasource.testOnReturn}")
    private boolean testOnReturn;

    @Value("${spring.datasource.poolPreparedStatements}")
    private boolean poolPreparedStatements;

    @Value("${spring.datasource.maxPoolPreparedStatementPerConnectionSize}")
    private int maxPoolPreparedStatementPerConnectionSize;

    @Value("${spring.datasource.filters}")
    private String filters;

    @Value("${spring.datasource.connectionProperties}")
    private String connectionProperties;

    @Bean(name = "masterDataSource")
    @Primary
    public DataSource masterDataSource() {
        DruidDataSource dataSource = new DruidDataSource();
        dataSource.setUrl(url);
        dataSource.setUsername(username);
        dataSource.setPassword(password);
        dataSource.setDriverClassName(driverClassName);
        // Pool settings
        dataSource.setInitialSize(initialSize);
        dataSource.setMinIdle(minIdle);
        dataSource.setMaxActive(maxActive);
        dataSource.setMaxWait(maxWait);
        dataSource.setTimeBetweenEvictionRunsMillis(timeBetweenEvictionRunsMillis);
        dataSource.setMinEvictableIdleTimeMillis(minEvictableIdleTimeMillis);
        dataSource.setValidationQuery(validationQuery);
        dataSource.setTestWhileIdle(testWhileIdle);
        dataSource.setTestOnBorrow(testOnBorrow);
        dataSource.setTestOnReturn(testOnReturn);
        dataSource.setPoolPreparedStatements(poolPreparedStatements);
        dataSource.setMaxPoolPreparedStatementPerConnectionSize(maxPoolPreparedStatementPerConnectionSize);
        try {
            dataSource.setFilters(filters);
        } catch (SQLException e) {
            e.printStackTrace();
        }
        dataSource.setConnectionProperties(connectionProperties);
        return dataSource;
    }

    @Bean(name = "masterTransactionManager")
    @Primary
    public DataSourceTransactionManager masterTransactionManager() {
        return new DataSourceTransactionManager(masterDataSource());
    }

    @Bean(name = "masterSqlSessionFactory")
    @Primary
    public SqlSessionFactory masterSqlSessionFactory(@Qualifier("masterDataSource") DataSource masterDataSource)
            throws Exception {
        final SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean();
        sessionFactory.setDataSource(masterDataSource);
        sessionFactory.setMapperLocations(new PathMatchingResourcePatternResolver()
                .getResources(MasterDataSourceConfig.MAPPER_LOCATION));
        return sessionFactory.getObject();
    }
}
```

Two annotations here deserve attention:

  • @Primary: when there are multiple candidate Beans of the same type, this one is chosen first. When configuring multiple data sources, note that exactly one of them must be the primary data source, marked with @Primary.

  • @MapperScan: scans the Mapper interfaces under the given packages and registers them in the container.

Note that sqlSessionFactoryRef binds these mappers to one specific SqlSessionFactory instance.
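Conceptually, the two @MapperScan declarations give you a package-prefix lookup: the package a mapper lives in decides which SqlSessionFactory (and therefore which data source) serves it. A minimal, stdlib-only sketch of that routing idea (illustration only; the package and bean names are the ones used in this article):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MapperRouting {
    // package prefix -> SqlSessionFactory bean name, mirroring the two @MapperScan declarations
    static final Map<String, String> ROUTES = new LinkedHashMap<>();
    static {
        ROUTES.put("com.pancm.dao.master", "masterSqlSessionFactory");
        ROUTES.put("com.pancm.dao.cluster", "clusterSqlSessionFactory");
    }

    // Resolve which factory a fully-qualified mapper class would be wired to
    static String factoryFor(String mapperClass) {
        for (Map.Entry<String, String> e : ROUTES.entrySet()) {
            if (mapperClass.startsWith(e.getKey() + ".")) {
                return e.getValue();
            }
        }
        throw new IllegalArgumentException("No SqlSessionFactory registered for " + mapperClass);
    }

    public static void main(String[] args) {
        System.out.println(factoryFor("com.pancm.dao.master.UserDao"));     // masterSqlSessionFactory
        System.out.println(factoryFor("com.pancm.dao.cluster.StudentDao")); // clusterSqlSessionFactory
    }
}
```

This is why the basePackages of each configuration class must point exactly at its own DAO directory: a mapper placed in the wrong package would silently run against the wrong database.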

With the above configuration done, Druid can be used as the connection pool. But Druid is not simply a connection pool; it is also a monitoring tool, with a Web interface that shows SQL information clearly.

To use Druid's monitoring in Spring Boot, you only need to register a StatViewServlet (the monitoring service) and a WebStatFilter (the filter rules). Both registrations can be written together in a single @Configuration class using @Bean methods.

For ease of understanding, configuration instructions are also written in the code, so I won’t go into details here.

The code is as follows:

```java
@Configuration
public class DruidConfiguration {

    @Bean
    public ServletRegistrationBean druidStatViewServlet() {
        ServletRegistrationBean servletRegistrationBean = new ServletRegistrationBean(
                new StatViewServlet(), "/druid/*");
        // Whitelist (empty means everyone may access; separate multiple IPs with commas)
        servletRegistrationBean.addInitParameter("allow", "127.0.0.1");
        // Blacklist (when an IP appears in both lists, deny takes precedence over allow)
        servletRegistrationBean.addInitParameter("deny", "127.0.0.2");
        // Login user name and password
        servletRegistrationBean.addInitParameter("loginUsername", "pancm");
        servletRegistrationBean.addInitParameter("loginPassword", "123456");
        // Whether the statistics can be reset
        servletRegistrationBean.addInitParameter("resetEnable", "false");
        return servletRegistrationBean;
    }

    @Bean
    public FilterRegistrationBean druidStatFilter() {
        FilterRegistrationBean filterRegistrationBean = new FilterRegistrationBean(new WebStatFilter());
        // Filter all requests
        filterRegistrationBean.addUrlPatterns("/*");
        // Resources to exclude from statistics
        filterRegistrationBean.addInitParameter("exclusions", "*.js,*.gif,*.jpg,*.png,*.css,*.ico,/druid/*");
        System.out.println("druid initialized successfully!");
        return filterRegistrationBean;
    }
}
```

After writing this, start the application and open http://127.0.0.1:8084/druid/index.html in a browser; after entering the user name and password configured above, you can access the Web interface.

Multi-data Source Configuration

Before configuring multiple data sources, run the following scripts in the springBoot and springBoot_test MySQL databases, respectively.

```sql
-- springBoot
CREATE TABLE `t_user` (
  `id` int(11) NOT NULL AUTO_INCREMENT COMMENT 'auto-increment id',
  `name` varchar(10) DEFAULT NULL COMMENT 'name',
  `age` int(2) DEFAULT NULL COMMENT 'age',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=15 DEFAULT CHARSET=utf8;

-- springBoot_test
CREATE TABLE `t_student` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `name` varchar(16) DEFAULT NULL,
  `age` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;
```

Note: out of laziness, the two tables were given the same structure, but that does not affect the test!

The information for these two data sources was already configured in application.properties above, so it is not posted again here.

Here we focus on the configuration of the second data source. It is similar to the MasterDataSourceConfig.java above; the differences are that it does not use the @Primary annotation and it uses different bean names. Note that the basePackages and mapper locations of this class must point exactly at the directories of the second data source.

The code is as follows:

```java
@Configuration
@MapperScan(basePackages = ClusterDataSourceConfig.PACKAGE, sqlSessionFactoryRef = "clusterSqlSessionFactory")
public class ClusterDataSourceConfig {

    static final String PACKAGE = "com.pancm.dao.cluster";
    static final String MAPPER_LOCATION = "classpath:mapper/cluster/*.xml";

    @Value("${cluster.datasource.url}")
    private String url;

    @Value("${cluster.datasource.username}")
    private String username;

    @Value("${cluster.datasource.password}")
    private String password;

    @Value("${cluster.datasource.driverClassName}")
    private String driverClass;

    @Bean(name = "clusterDataSource")
    public DataSource clusterDataSource() {
        DruidDataSource dataSource = new DruidDataSource();
        dataSource.setUrl(url);
        dataSource.setUsername(username);
        dataSource.setPassword(password);
        dataSource.setDriverClassName(driverClass);
        // The remaining pool settings are the same as in MasterDataSourceConfig
        return dataSource;
    }

    @Bean(name = "clusterTransactionManager")
    public DataSourceTransactionManager clusterTransactionManager() {
        return new DataSourceTransactionManager(clusterDataSource());
    }

    @Bean(name = "clusterSqlSessionFactory")
    public SqlSessionFactory clusterSqlSessionFactory(@Qualifier("clusterDataSource") DataSource clusterDataSource)
            throws Exception {
        final SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean();
        sessionFactory.setDataSource(clusterDataSource);
        sessionFactory.setMapperLocations(new PathMatchingResourcePatternResolver()
                .getResources(ClusterDataSourceConfig.MAPPER_LOCATION));
        return sessionFactory.getObject();
    }
}
```

After successfully writing the configuration, start the program and test it.

Use interfaces to add data to the springBoot and springBoot_test libraries, respectively.

t_user

```
POST http://localhost:8084/api/user
{"name": "Zhang San", "age": 25}
{"name": "Li Si", "age": 25}
{"name": "Wang Wu", "age": 25}
```

t_student

```
POST http://localhost:8084/api/student
{"name": "Student A", "age": 16}
{"name": "Student B", "age": 17}
{"name": "Student C", "age": 18}
```

After successfully adding the data, you can then call different interfaces for query.

Request:

GET http://localhost:8084/api/user?name=Li Si

Returns:

```json
{"id": 2, "name": "Li Si", "age": 25}
```

Request:

GET http://localhost:8084/api/student?name=Student C

Returns:

```json
{"id": 3, "name": "Student C", "age": 18}
```

As you can see from the data, multiple data sources have been configured successfully.

PageHelper paging implementation

PageHelper is a pagination plugin for MyBatis. It is very handy and highly recommended!

PageHelper is easy to use. Just add the PageHelper dependency to Maven.

Maven has the following dependencies:

```xml
<dependency>
    <groupId>com.github.pagehelper</groupId>
    <artifactId>pagehelper-spring-boot-starter</artifactId>
    <version>1.2.3</version>
</dependency>
```

Note: the Spring Boot starter version is used here! Other versions work as well.

After adding the dependency, configure PageHelper in any one of the following three ways.

The first way: add it to application.properties or application.yml:

```yaml
pagehelper:
  helperDialect: mysql
  offsetAsPageNum: true
  rowBoundsWithCount: true
  reasonable: false
```

The second way: add the plugin to the mybatis.xml configuration:

```xml
<bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
    <property name="dataSource" ref="dataSource" />
    <!-- Mapper locations -->
    <property name="mapperLocations" value="classpath:mapper/*.xml"></property>
    <!-- Pagination plugin -->
    <property name="plugins">
        <array>
            <bean class="com.github.pagehelper.PageHelper">
                <property name="properties">
                    <value>
                        helperDialect=mysql
                        offsetAsPageNum=true
                        rowBoundsWithCount=true
                        reasonable=false
                    </value>
                </property>
            </bean>
        </array>
    </property>
</bean>
```

The third way: configure it in code, using the @Bean annotation so it is initialized at startup.

```java
@Bean
public PageHelper pageHelper() {
    PageHelper pageHelper = new PageHelper();
    Properties properties = new Properties();
    // Database dialect
    properties.setProperty("helperDialect", "mysql");
    properties.setProperty("offsetAsPageNum", "true");
    properties.setProperty("rowBoundsWithCount", "true");
    properties.setProperty("reasonable", "false");
    pageHelper.setProperties(properties);
    return pageHelper;
}
```

Because multiple data sources are used here, the configuration is slightly different: the plugin has to be registered on each SqlSessionFactory. So MasterDataSourceConfig.java is modified accordingly.

In the masterSqlSessionFactory method, add the following code.

```java
@Bean(name = "masterSqlSessionFactory")
@Primary
public SqlSessionFactory masterSqlSessionFactory(@Qualifier("masterDataSource") DataSource masterDataSource)
        throws Exception {
    final SqlSessionFactoryBean sessionFactory = new SqlSessionFactoryBean();
    sessionFactory.setDataSource(masterDataSource);
    sessionFactory.setMapperLocations(new PathMatchingResourcePatternResolver()
            .getResources(MasterDataSourceConfig.MAPPER_LOCATION));
    // Register the PageHelper interceptor on this SqlSessionFactory
    Interceptor interceptor = new PageInterceptor();
    Properties properties = new Properties();
    // Database dialect
    properties.setProperty("helperDialect", "mysql");
    properties.setProperty("offsetAsPageNum", "true");
    properties.setProperty("rowBoundsWithCount", "true");
    properties.setProperty("reasonable", "false");
    interceptor.setProperties(properties);
    sessionFactory.setPlugins(new Interceptor[] {interceptor});
    return sessionFactory.getObject();
}
```

Note: when other data sources also need paging, just apply the same change to their SqlSessionFactory.

Note the reasonable parameter, which enables pagination rationalization and defaults to false. When it is true, a pageNum <= 0 returns the first page, and a pageNum greater than the total number of pages returns the last page. When it is false, the query runs with the parameters exactly as given.
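The clamping that reasonable=true performs can be modeled in a few lines (an illustration of the behavior described above, not PageHelper's source code):

```java
public class ReasonableDemo {
    // Mimics PageHelper's reasonable=true clamping; with reasonable=false,
    // pageNum would be passed through unchanged.
    static int clampPage(int pageNum, int pages) {
        if (pageNum <= 0) {
            return 1;      // below range -> first page
        }
        if (pageNum > pages) {
            return pages;  // beyond the last page -> last page
        }
        return pageNum;
    }

    public static void main(String[] args) {
        System.out.println(clampPage(0, 5)); // 1
        System.out.println(clampPage(9, 5)); // 5
        System.out.println(clampPage(3, 5)); // 3
    }
}
```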

To paginate, simply call PageHelper.startPage(pageNum, pageSize) before the query statement; the next SQL statement executed is paginated automatically. To get the total row count, call page.getTotal() after the query.

Code examples:

```java
public List<T> findByListEntity(T entity) {
    List<T> list = null;
    try {
        Page<?> page = PageHelper.startPage(1, 2);
        System.out.println(getClassName(entity) + " set the first page with two records!");
        list = getMapper().findByListEntity(entity);
        System.out.println("Total: " + page.getTotal() + " records, actually returned: " + list.size() + " records!");
    } catch (Exception e) {
        logger.error("Query " + getClassName(entity) + " failed! Reason: ", e);
    }
    return list;
}
```

Once the code is written, the final testing begins.

First, query all the data in the t_user table:

Request:

GET http://localhost:8084/api/user

Returns:

```json
[{"id": 1, "name": "Zhang San", "age": 25}, {"id": 2, "name": "Li Si", "age": 25}]
```

Console printing:

```
Start query...
User set the first page with two records!
2018-04-27 19:55:50.769 DEBUG 6152 --- [io-8084-exec-10] c.p.d.m.UserDao.findByListEntity_COUNT  : ==> Preparing: SELECT count(0) FROM t_user WHERE 1 = 1
2018-04-27 19:55:50.770 DEBUG 6152 --- [io-8084-exec-10] c.p.d.m.UserDao.findByListEntity_COUNT  : ==> Parameters:
2018-04-27 19:55:50.771 DEBUG 6152 --- [io-8084-exec-10] c.p.d.m.UserDao.findByListEntity_COUNT  : <== Total: 1
2018-04-27 19:55:50.772 DEBUG 6152 --- [io-8084-exec-10] c.p.dao.master.UserDao.findByListEntity : ==> Preparing: select id, name, age from t_user where 1=1 LIMIT ?
2018-04-27 19:55:50.773 DEBUG 6152 --- [io-8084-exec-10] c.p.dao.master.UserDao.findByListEntity : ==> Parameters: 2(Integer)
2018-04-27 19:55:50.774 DEBUG 6152 --- [io-8084-exec-10] c.p.dao.master.UserDao.findByListEntity : <== Total: 2
Total: 3 records, actually returned: 2 records!
```

Then query all the data in the t_student table:

Request:

GET http://localhost:8084/api/student

Returns:

```json
[{"id": 1, "name": "Student A", "age": 16}, {"id": 2, "name": "Student B", "age": 17}]
```

Console printing:

```
Start query...
Student set the first page with two records!
2018-04-27 19:54:56.155 DEBUG 6152 --- [nio-8084-exec-8] c.p.d.c.S.findByListEntity_COUNT    : ==> Preparing: SELECT count(0) FROM t_student WHERE 1 = 1
2018-04-27 19:54:56.155 DEBUG 6152 --- [nio-8084-exec-8] c.p.d.c.S.findByListEntity_COUNT    : ==> Parameters:
2018-04-27 19:54:56.156 DEBUG 6152 --- [nio-8084-exec-8] c.p.d.c.S.findByListEntity_COUNT    : <== Total: 1
2018-04-27 19:54:56.157 DEBUG 6152 --- [nio-8084-exec-8] c.p.d.c.StudentDao.findByListEntity : ==> Preparing: select id, name, age from t_student where 1=1 LIMIT ?
2018-04-27 19:54:56.157 DEBUG 6152 --- [nio-8084-exec-8] c.p.d.c.StudentDao.findByListEntity : ==> Parameters: 2(Integer)
2018-04-27 19:54:56.157 DEBUG 6152 --- [nio-8084-exec-8] c.p.d.c.StudentDao.findByListEntity : <== Total: 2
Total: 3 records, actually returned: 2 records!
```

Once you’ve done that, let’s look at Druid’s monitoring interface.

Open http://127.0.0.1:8084/druid/index.html in the browser.

You can see the operation record very clearly!

If you want to learn more about Druid, check out the official Druid documentation!

Conclusion

This article is finally finished. While writing the code I ran into quite a few problems, which were solved bit by bit through experimentation and research. This article only scratches the surface of these tools; practical applications can be more complex. If you have better ideas or suggestions, feel free to leave a comment for discussion!


Reference article: www.bysocket.com/?p=1712

Druid official address: github.com/alibaba/dru…

PageHelper official address: github.com/pagehelper/…