Yesterday we set up Druid's monitor project; today we implement connection pools whose database connection information comes from the database and the front end at runtime, rather than from static YAML configuration.
In today's demo we implement a utility class that manages the connection pools we create.
Implement dynamic connection pooling
```java
// These methods live in our utility class, DataSourceToolsUtils.

/**
 * Get the pool for the given connection name, creating it if it is not running yet.
 * @param connectName
 * @return
 */
public static DruidDataSource getDataSource(String connectName, String driver, String url,
        String username, String password) throws SQLException {
    DruidDataSource ds = getDruidDataSource(connectName, null);
    if (ds == null) {
        ds = createDataSource(connectName, driver, url, username, password);
    }
    return ds;
}

/**
 * Look up an already-started pool by name among all registered Druid data sources.
 * @param connectName
 * @param ds
 * @return
 */
public static DruidDataSource getDruidDataSource(String connectName, DruidDataSource ds) {
    for (DruidDataSource datasource : DruidDataSourceStatManager.getDruidDataSourceInstances()) {
        if (connectName.equals(datasource.getName())) {
            ds = datasource;
            break;
        }
    }
    return ds;
}
```
The first step is to look up the connection pool by name: given the JDBC connection properties, getDataSource returns an already-started pool directly if one exists, or calls the creation method to start a new pool if it does not.
```java
/**
 * Start a new connection pool from the given JDBC properties.
 * @param connectName
 * @param driver
 * @param url
 * @param username
 * @param password
 */
public static DruidDataSource createDataSource(String connectName, String driver, String url,
        String username, String password) throws SQLException {
    DruidDataSource dataSource = new DruidDataSource();
    dataSource.setName(connectName);
    dataSource.setDriverClassName(driver);
    dataSource.setUrl(url);
    dataSource.setUsername(username);
    dataSource.setPassword(password);
    // Initial size and minimum number of idle connections
    dataSource.setInitialSize(5);
    dataSource.setMinIdle(5);
    // Maximum time to wait for a connection, in milliseconds
    dataSource.setMaxWait(60000);
    // Maximum number of active connections
    dataSource.setMaxActive(30);
    // Validation query to weed out stale connections
    dataSource.setValidationQuery("select 1 from dual");
    dataSource.setTestOnBorrow(true);
    dataSource.setTestWhileIdle(true);
    // Interval between runs that test and close idle connections, in milliseconds
    dataSource.setTimeBetweenEvictionRunsMillis(60000);
    // Minimum time a connection must stay idle before it may be evicted, in milliseconds
    dataSource.setMinEvictableIdleTimeMillis(300000);
    // Disable the automatic reconnection mechanism
    dataSource.setBreakAfterAcquireFailure(true);
    dataSource.setConnectionErrorRetryAttempts(0);
    /*
    // Reclaim connections held longer than the time limit
    dataSource.setRemoveAbandoned(true);
    // Timeout in seconds: 180 seconds = 3 minutes
    dataSource.setRemoveAbandonedTimeout(180);
    // Log an error when an abandoned connection is closed
    dataSource.setLogAbandoned(true);
    */
    dataSource.init();
    return dataSource;
}
```
Next is the method that starts a connection pool from the given configuration. Some of these settings could be written to Nacos so they can be adjusted at any time; at the end, DruidDataSource.init() is called to actually create the pool we need.
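For example, the pool parameters above (initial size, max active, wait time, and so on) could be read from a Nacos config instead of being hard-coded. A minimal sketch, assuming the Nacos Java client is on the classpath; the dataId `druid-pool.properties`, the group name, and the property keys are made up for illustration:

```java
import java.io.StringReader;
import java.util.Properties;
import com.alibaba.nacos.api.NacosFactory;
import com.alibaba.nacos.api.config.ConfigService;

public class PoolConfigLoader {
    // Fetch pool settings published in Nacos as a properties file
    public static Properties loadPoolProperties(String serverAddr) throws Exception {
        Properties nacosProps = new Properties();
        nacosProps.put("serverAddr", serverAddr);
        ConfigService configService = NacosFactory.createConfigService(nacosProps);
        // dataId and group are assumed names; getConfig returns the raw config text
        String text = configService.getConfig("druid-pool.properties", "DEFAULT_GROUP", 5000);
        Properties poolProps = new Properties();
        if (text != null) {
            poolProps.load(new StringReader(text));
        }
        return poolProps;
    }
}
```

createDataSource would then read values such as `poolProps.getProperty("maxActive", "30")` instead of the literals above.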
```java
/**
 * Get monitoring statistics for a named pool.
 * @param dsName
 */
public static Map<String, Object> getDataSourceStat(String dsName) {
    DruidDataSource ds = getDruidDataSource(dsName, null);
    return ds != null ? ds.getStatData() : new HashMap<String, Object>();
}

/**
 * Close a named pool and release its resources.
 * @param dsName
 */
public static void closeDataSource(String dsName) {
    DruidDataSource dataSource = getDruidDataSource(dsName, null);
    if (dataSource != null) {
        dataSource.close();
    }
}

/**
 * Release resources in a finally block.
 * @param dataSource
 * @param connection
 * @param preparedStatement
 * @param resultSet
 */
public static void finallyExecute(DruidDataSource dataSource, DruidPooledConnection connection,
        PreparedStatement preparedStatement, ResultSet resultSet) {
    if (connection != null) {
        try {
            connection.close();
            dataSource.removeAbandoned();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    closeAll(preparedStatement, resultSet);
}

/**
 * Close the statement and result set.
 * @param statement
 * @param resultSet
 */
public static void closeAll(PreparedStatement statement, ResultSet resultSet) {
    try {
        if (resultSet != null) {
            resultSet.close();
        }
    } catch (Exception e) {
        throw new RuntimeException("result set resource release failed");
    }
    try {
        if (statement != null) {
            statement.close();
        }
    } catch (Exception e) {
        throw new RuntimeException("SQL statement resource release failed");
    }
}
```
Finally, there are methods to close a pool and to fetch its monitoring statistics, together with helpers that release connection resources back to the pool.
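A small usage sketch of these methods. The stat map keys `ActiveCount` and `PoolingCount` are my assumption about what `getStatData()` returns; inspect the full map in your own run:

```java
// Print a couple of metrics for the pool named "dynamic", then close it
Map<String, Object> stat = DataSourceToolsUtils.getDataSourceStat("dynamic");
System.out.println("active connections: " + stat.get("ActiveCount"));  // assumed key
System.out.println("pooled connections: " + stat.get("PoolingCount")); // assumed key
DataSourceToolsUtils.closeDataSource("dynamic");
```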
We can also use a cache key to coordinate multiple callers competing to create the same pool, so that only one connection pool init() succeeds at a time; this saves MySQL resources. The cache key can be implemented with Redis, as in the sketch below.
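A minimal sketch of that idea, assuming Jedis as the Redis client; the lock key prefix, the 10-second expiry, and the retry wait are illustrative choices, not part of the utility class above:

```java
import java.sql.SQLException;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public static DruidDataSource getDataSourceWithLock(Jedis jedis, String connectName, String driver,
        String url, String username, String password) throws SQLException, InterruptedException {
    DruidDataSource ds = getDruidDataSource(connectName, null);
    if (ds != null) {
        return ds;
    }
    String lockKey = "druid:init:" + connectName; // assumed key naming scheme
    // SET NX EX: only one caller acquires the lock; it auto-expires so a crashed holder cannot block others forever
    String acquired = jedis.set(lockKey, "1", SetParams.setParams().nx().ex(10));
    if ("OK".equals(acquired)) {
        try {
            return createDataSource(connectName, driver, url, username, password);
        } finally {
            jedis.del(lockKey);
        }
    }
    // Another caller is creating the pool: wait briefly, then look it up again
    Thread.sleep(200);
    return getDruidDataSource(connectName, null);
}
```

Now let's test our util class with a simple test method that queries and returns some rows: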
```java
public List<User> getDruid() {
    List<User> users = new ArrayList<>();
    PreparedStatement ps = null;
    ResultSet result = null;
    String sql = "select * from user";
    try {
        DruidDataSource druidDataSource = DataSourceToolsUtils.getDataSource("dynamic", "com.mysql.jdbc.Driver",
                "jdbc:mysql://localhost:3306/test?useUnicode=true&characterEncoding=utf-8&serverTimezone=UTC",
                "root", "root");
        Connection connection = druidDataSource.getConnection();
        ps = connection.prepareStatement(sql);
        result = ps.executeQuery();
        while (result.next()) {
            User user = new User();
            user.setId(result.getInt("id"));
            user.setAge(result.getInt("age"));
            user.setName(result.getString("name"));
            users.add(user);
        }
    } catch (SQLException e) {
        e.printStackTrace();
    }
    return users;
}
```
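One caveat: the test above never releases the connection, statement, or result set. A variant that returns resources through the utility's finallyExecute helper could look like this (a sketch reusing only the names defined earlier):

```java
DruidDataSource druidDataSource = null;
DruidPooledConnection connection = null;
PreparedStatement ps = null;
ResultSet result = null;
try {
    druidDataSource = DataSourceToolsUtils.getDataSource("dynamic", "com.mysql.jdbc.Driver",
            "jdbc:mysql://localhost:3306/test?useUnicode=true&characterEncoding=utf-8&serverTimezone=UTC",
            "root", "root");
    connection = druidDataSource.getConnection();
    ps = connection.prepareStatement("select * from user");
    result = ps.executeQuery();
    // ... read rows into the list as above ...
} catch (SQLException e) {
    e.printStackTrace();
} finally {
    // Close the result set and statement, and return the connection to the pool
    DataSourceToolsUtils.finallyExecute(druidDataSource, connection, ps, result);
}
```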
Running the test, the Druid connection pool initialized successfully and the interface returned the SQL query results. Repeated calls did not init a new connection pool, so the initial demo is a success.
Conclusion:
Today we implemented a demo that starts a connection pool from dynamic DB information. There is still plenty of room for improvement: validating the DB information, sharing and closing pools, and monitoring resources across the whole link. If you look, much of that is already implemented inside Druid's own source code. There are also things Druid does not cover, such as optimizations for our own custom query criteria; we will implement those in code later.