One, foreword
A “good” data-driven framework needs to strike a trade-off among three aspects: “time”, “manpower”, and “benefit”.
A script that took hours to write can become unrunnable because of some small change in the business system under test. At the same time, we need to keep an eye on “benefit”: chasing a 100% pass rate for its own sake does not reduce our workload, and may require a great deal of maintenance.
Therefore, it is not easy to strike a balance among these three aspects. To improve ROI (return on investment), we must start from two directions:
- Reducing development costs;
- Increasing ease of use.
To reduce development costs, we need to:
- Reduce persistence-layer development costs: cut development and maintenance time as much as possible by using existing tools and components;
- Reduce use-case entry costs: simplify how test cases are entered so that scripts and test data are decoupled, and ideally build tools to batch-generate test data;
- Reduce use-case maintenance costs: maintain cases by changing a few parameters instead of large amounts of code.
To increase ease of use, we need the following:
- Manual testers can use it too: test data can be prepared without developing any interface use-case logic;
- In the development and debugging stage, it can help us locate problems faster;
- In the test operation and maintenance process, it can help us record most of the abnormal information;
- Supports real-time monitoring and early warning of database test status.
So I developed a data-driven framework that implements some of my thinking on data-driven testing.
Two, current pain points
1. Test case management
It is not recommended to hardcode test cases directly into Java files, as this can cause many problems:
- Modifying test cases requires significant code changes;
- The code is hard to hand over to colleagues, because everyone has their own coding style and use-case design style, so whatever is handed over usually ends up being rewritten by the next person;
- If the test framework changes, the use-case data cannot be migrated automatically and must be moved by hand, which is very inefficient.
It is necessary to separate “test data” from “script”.
Many examples on the web drive data from Excel, but I recommend MySQL, a relational database. Why? Our scripts and code are generally committed to the company’s GitLab repository, and frequently modifying test cases there is clearly inconvenient with Excel, since we want version control and centralized management. With MySQL there is no such worry: the data is separated from the script, so you only modify the data, and on each run the script reads the latest use-case data from the database. It also avoids accidental changes to the code while editing cases.
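To make this concrete, here is a minimal sketch of pulling cases from MySQL at run time, assuming a TestNG-style runner; the connection details, table, and column names (`t_api_case`, `case_name`, etc.) are hypothetical, not the framework’s actual schema:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

import org.testng.annotations.DataProvider;

public class CaseProvider {

    // Hypothetical connection details.
    private static final String URL =
            "jdbc:mysql://localhost:3306/qa_cases?serverTimezone=UTC";

    @DataProvider(name = "dbCases")
    public static Object[][] dbCases() throws SQLException {
        List<Object[]> rows = new ArrayList<>();
        // Read the latest use-case rows on every run, so editing the
        // table changes the cases without touching any code.
        try (Connection conn = DriverManager.getConnection(URL, "root", "root");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT case_name, request_body, expected_code FROM t_api_case WHERE enabled = 1")) {
            while (rs.next()) {
                rows.add(new Object[] { rs.getString(1), rs.getString(2), rs.getInt(3) });
            }
        }
        return rows.toArray(new Object[0][]);
    }
}
```

A test method then declares `@Test(dataProvider = "dbCases", dataProviderClass = CaseProvider.class)` and receives one row per case.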
2. Multiple business data sources
As a test developer, automated interface testing often means connecting to N business data sources. For multiple data sources, the approach commonly given online is per-database configuration:
```properties
mybatis.config-location=classpath:mybatis/mybatis-config.xml

spring.datasource.test1.jdbc-url=jdbc:mysql://localhost:3306/test1?serverTimezone=UTC&useUnicode=true&characterEncoding=utf-8&useSSL=true
spring.datasource.test1.username=root
spring.datasource.test1.password=root
spring.datasource.test1.driver-class-name=com.mysql.cj.jdbc.Driver

spring.datasource.test2.jdbc-url=jdbc:mysql://localhost:3306/test2?serverTimezone=UTC&useUnicode=true&characterEncoding=utf-8&useSSL=true
spring.datasource.test2.username=root
spring.datasource.test2.password=root
spring.datasource.test2.driver-class-name=com.mysql.cj.jdbc.Driver
```
Then write one configuration class per database, one for test1 and one for test2. The test1 version:
```java
import javax.sql.DataSource;

import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.mybatis.spring.SqlSessionTemplate;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;

@Configuration
@MapperScan(basePackages = "com.zuozewei.mapper.test1", sqlSessionTemplateRef = "test1SqlSessionTemplate")
public class DataSource1Config {

    // Bind the spring.datasource.test1.* properties to this DataSource
    @Bean(name = "test1DataSource")
    @ConfigurationProperties(prefix = "spring.datasource.test1")
    @Primary
    public DataSource testDataSource() {
        return DataSourceBuilder.create().build();
    }

    @Bean(name = "test1SqlSessionFactory")
    @Primary
    public SqlSessionFactory testSqlSessionFactory(@Qualifier("test1DataSource") DataSource dataSource) throws Exception {
        SqlSessionFactoryBean bean = new SqlSessionFactoryBean();
        bean.setDataSource(dataSource);
        bean.setMapperLocations(
                new PathMatchingResourcePatternResolver().getResources("classpath:mybatis/mapper/test1/*.xml"));
        return bean.getObject();
    }

    @Bean(name = "test1TransactionManager")
    @Primary
    public DataSourceTransactionManager testTransactionManager(@Qualifier("test1DataSource") DataSource dataSource) {
        return new DataSourceTransactionManager(dataSource);
    }

    @Bean(name = "test1SqlSessionTemplate")
    @Primary
    public SqlSessionTemplate testSqlSessionTemplate(@Qualifier("test1SqlSessionFactory") SqlSessionFactory sqlSessionFactory) throws Exception {
        return new SqlSessionTemplate(sqlSessionFactory);
    }
}
```
This creates the DataSource, builds the SqlSessionFactory, sets up the transaction manager, and wraps it all in a SqlSessionTemplate. The mapper file location and the DAO-layer package of each sub-database must also be specified:
```java
@MapperScan(basePackages = "com.zuozewei.mapper.test1", sqlSessionTemplateRef = "test1SqlSessionTemplate")
```
This annotation scans the specified DAO package and injects the given SqlSessionTemplate into it. All @Bean names must be referenced correctly. DAO interfaces and XML mappers have to be split into separate directories per database: for example, the DAO layer of the test1 database lives in the com.zuozewei.mapper.test1 package, and that of test2 in com.zuozewei.mapper.test2.
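For instance, a mapper in the test1 package might look like this (a hypothetical sketch; `UserMapper` and its query are illustrative only):

```java
package com.zuozewei.mapper.test1;

import java.util.Map;
import org.apache.ibatis.annotations.Param;

// Because this interface sits in com.zuozewei.mapper.test1, the @MapperScan
// above binds it to test1SqlSessionTemplate, so its SQL (defined in
// classpath:mybatis/mapper/test1/UserMapper.xml) runs against the test1 database.
public interface UserMapper {
    Map<String, Object> selectById(@Param("id") Long id);
}
```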
This approach does work, but the drawback is that a separate package must be created for each data source; once a data source changes, its package has to change too. We also looked at dynamic data sources, and that is not what we want either.
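For context, a “dynamic data source” usually means routing all statements through a single DataSource selected by a thread-local key, along the lines of Spring’s AbstractRoutingDataSource. A minimal sketch of the idea (not the framework’s code):

```java
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

// A thread-local key decides which underlying DataSource each statement
// is routed to; the keys match those registered via setTargetDataSources(...).
public class RoutingDataSource extends AbstractRoutingDataSource {

    private static final ThreadLocal<String> KEY = new ThreadLocal<>();

    public static void use(String dataSourceKey) { KEY.set(dataSourceKey); }
    public static void clear() { KEY.remove(); }

    @Override
    protected Object determineCurrentLookupKey() {
        return KEY.get();
    }
}
```

With this pattern, every caller has to set (and clear) the key around each operation, e.g. `RoutingDataSource.use("test1")`.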
3. Persistence layer development
When using MyBatis, the DAO interface, the entity class, and the corresponding XML mapper for each entity all have to be written by hand, which is a lot of work and hard to maintain.
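To make the cost concrete: for every single table you typically hand-write an entity, a DAO interface, and an XML mapper. A hypothetical sketch for one table (names illustrative only), to be multiplied by the number of tables:

```java
// Entity class: one hand-written field plus getter/setter per column.
public class TestCase {
    private Long id;
    private String caseName;
    private String requestBody;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getCaseName() { return caseName; }
    public void setCaseName(String caseName) { this.caseName = caseName; }
    public String getRequestBody() { return requestBody; }
    public void setRequestBody(String requestBody) { this.requestBody = requestBody; }
}
// ...plus a TestCaseMapper interface and a TestCaseMapper.xml with the
// <resultMap> and CRUD statements — all of it repeated for every table.
```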
4. Log management
For a mature interface testing framework, log management is essential. During development and debugging, logging helps us locate problems faster; during test operation and maintenance, the logging system helps us record most abnormal information. Many test frameworks also collect log data to monitor and alert on interface test status in real time, for example flagging slow SQL.
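One way to surface slow SQL is a MyBatis plugin that times every statement; this is a minimal sketch under my own assumptions (the class name and threshold are hypothetical), not the framework’s actual implementation:

```java
import org.apache.ibatis.executor.Executor;
import org.apache.ibatis.mapping.MappedStatement;
import org.apache.ibatis.plugin.Interceptor;
import org.apache.ibatis.plugin.Intercepts;
import org.apache.ibatis.plugin.Invocation;
import org.apache.ibatis.plugin.Plugin;
import org.apache.ibatis.plugin.Signature;
import org.apache.ibatis.session.ResultHandler;
import org.apache.ibatis.session.RowBounds;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Logs a warning for any statement slower than the threshold,
// so slow SQL shows up in the test logs for monitoring.
@Intercepts({
    @Signature(type = Executor.class, method = "update",
        args = {MappedStatement.class, Object.class}),
    @Signature(type = Executor.class, method = "query",
        args = {MappedStatement.class, Object.class, RowBounds.class, ResultHandler.class})
})
public class SlowSqlInterceptor implements Interceptor {

    private static final Logger log = LoggerFactory.getLogger(SlowSqlInterceptor.class);
    private static final long SLOW_SQL_MILLIS = 1000; // hypothetical threshold

    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        long start = System.currentTimeMillis();
        try {
            return invocation.proceed();
        } finally {
            long cost = System.currentTimeMillis() - start;
            if (cost > SLOW_SQL_MILLIS) {
                MappedStatement ms = (MappedStatement) invocation.getArgs()[0];
                log.warn("Slow SQL detected: {} took {} ms", ms.getId(), cost);
            }
        }
    }

    @Override
    public Object plugin(Object target) {
        return Plugin.wrap(target, this);
    }
}
```

The interceptor would then be registered under `<plugins>` in mybatis-config.xml, or via `SqlSessionFactoryBean.setPlugins(...)`.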
5. Mainstream technology stack
The following aspects are mainly considered:
- Easier development;
- Easier testing;
- Simpler configuration;
- Simpler deployment;
- Built on mainstream frameworks;
- Competitive in the market.
Three, the main functions
So, to sum up the above requirements, I drew a diagram:
Four, function description
- Flexible support for multiple business data sources;
- Centralized test case management with structured data;
- Data-driven: scripts and test data are decoupled;
- Rich log management with exception monitoring, making development and debugging easier;
- Performance monitoring, such as slow SQL on business data sources;
- Convenient development that eliminates repetitive work and reduces development cost;
- Flexible extensibility to support custom data types;
- A mainstream technology stack that keeps pace with Internet technology and will not quickly become obsolete;
- Friendly code structure and comments, easy to read and to build on.
Five, summary
This article summarized some common pain points of data-driven frameworks along with the essential features one needs; these will determine how we design the framework going forward. I hope it inspires you.