Background

The best way to make a project's interface unit tests more realistic is to ditch mocks and embrace a real data environment. But how?

If we test directly against the shared test environment's data, we not only disrupt the normal development and testing workflow, but also end up using different data on every run. That makes it impossible to guarantee that each test is independent, and it makes problems harder to track down afterwards.

Along these lines, we can briefly summarize what we want:

1. A standalone, dedicated data environment for unit testing.

2. Ensure that the data is the same before each test.

Let’s consider how these two requirements can be implemented.

The first requirement is the data environment. Our project uses Redis and MySQL, so the straightforward approach is to set up an identical environment locally for testing. Keep in mind, though, that this environment is not just for your own use: everyone involved in the project should be able to use it easily to test their own code. In addition, the test can later be added to the CI process, so that the interface unit tests are triggered whenever code is pushed, further ensuring code quality.

Therefore, we choose Docker containers to build our data environment, combining the Redis and MySQL containers into a single data-environment service with docker-compose. As long as developers have Docker and docker-compose installed, a single command starts the data environment, which is far more convenient than building everything locally by hand.
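For example, once Docker and docker-compose are installed and the compose file described below sits in the project root, the whole data environment comes up with a single command:

docker-compose up -d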

With the first requirement point solved, let’s move on to the second.

To ensure that every test run uses the same data, we can export a data file from the test environment and use it as template data for subsequent tests. Then, before each test run, we start the data environment and import the template data into the database.
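As a sketch, the template data could be exported with mysqldump (the host name, credentials, and database name here are placeholders, not the project's actual values):

mysqldump -h <test-env-host> -uroot -p test_db > test_db.sql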

So much for the idea, let’s get started!

Setting up the Test Environment

Before we start, we need to install two basic tools: Docker and docker-compose. You can install them by following the official documentation or third-party tutorials; we won't go into the details here.

Since we only need publicly available images, all we have to do is compose the data-environment service with docker-compose. In other words, we just need to tell docker-compose what to do through a YML configuration file.

I'll show the configuration file first and then briefly explain the configuration items.
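A minimal sketch of such a docker-compose.yml is shown below. The container name project_database, the host port 6606, the root password 123456, and the container paths /usr/test_db.sql and /usr/init.sh come from the scripts later in this article; the image tags, the redis.conf location, and the host-side directories are illustrative assumptions.

version: '3'

services:
  redis:
    image: redis:5
    container_name: project_redis
    command: redis-server /usr/local/etc/redis/redis.conf
    ports:
      - '6379:6379'
    volumes:
      - ./test/config/redis.conf:/usr/local/etc/redis/redis.conf

  database:
    image: mysql:5.7
    container_name: project_database
    ports:
      - '6606:3306'
    volumes:
      - ./test/config/my.cnf:/etc/mysql/conf.d/my.cnf
      - ./test/data/test_db.sql:/usr/test_db.sql
      - ./test/scripts/init.sh:/usr/init.sh
    environment:
      MYSQL_ROOT_PASSWORD: '123456'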

Basic structure of the configuration file

1. version

Defines the compose file format version.

2. services

Defines the configuration of each service.

3. image

Defines the image the service is built from; if it does not exist locally, Docker attempts to pull it from the remote registry.

4. container_name

Defines the container name.

5. command

Defines the command to execute after the container starts. It overrides the default startup command defined in the image.

6. ports

Defines port mappings, exposing service ports in the container on the host. The mapping rule is (host port: container port).

7. volumes

Defines how directories on the host are mapped into the container (based on Docker's data volume mechanism). The mapping rule is (host directory: container directory).

8. environment

Defines environment variables.

Why this configuration, and what problems does it solve?

1. The Redis port in the container is mapped to the host, so why can we still not connect?

We tried using Medis on the host to connect to the Redis service in the container, but the connection failed. The first thing that came to mind was the password: since this environment is only for testing, security is not a concern, so can we simply run Redis without one?

After some searching, we found that password-free access can be enabled by setting the protected-mode option in the Redis configuration file to no.

This raises the question of what to do with the configuration file. Container services are disposable, so you can't add a configuration file by hand every time you start one. Instead, we use volumes to map files on the host into the container. That way, the configuration file lives in the project directory and is mounted into the container automatically when the service starts.

We then add the startup command through the command configuration item and point it at the configuration file. (Because the Redis configuration file is quite long, I only attached a link to download the template file.)

Redis configuration file:
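The relevant line is the following (excerpt; the rest of the file is the stock template):

# Allow connections without a password (test environment only)
protected-mode no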

YML configuration items:
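The corresponding lines for the redis service look roughly like this (the paths follow the sketch above and are assumptions):

command: redis-server /usr/local/etc/redis/redis.conf
volumes:
  - ./test/config/redis.conf:/usr/local/etc/redis/redis.conf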

But the problem doesn’t end there. You still can’t connect to the Redis service. Why?

It turns out there is also a bind option in the Redis configuration file, with a default value of 127.0.0.1, which means Redis only accepts local connections. So we cannot reach the Redis service inside the container from the host. The solution is to comment that option out.

Redis configuration file:
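In the same configuration file:

# bind 127.0.0.1   (commented out so connections from outside the container are accepted)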

2. Importing the SQL file fails with "Got a packet bigger than 'max_allowed_packet' bytes"

The error message tells us that the SQL file exceeds the import size limit, so we need to add a custom configuration file that raises the limit. Again, we put this configuration file in the project and use the volumes configuration item to map it into the container.

MySQL configuration file:

[mysqld]
# added to avoid err "Got a packet bigger than 'max_allowed_packet' bytes"
#
net_buffer_length=1000000 
max_allowed_packet=1000000000
innodb_buffer_pool_size = 2000000000
#

YML configuration items:
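The corresponding line in the mysql service's volumes (the host-side path is an assumption; /etc/mysql/conf.d is where the official MySQL image reads extra configuration from):

volumes:
  - ./test/config/my.cnf:/etc/mysql/conf.d/my.cnf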

Automating the test process

Once the YML file is configured, our data environment is ready. So what still stands between us and an automated test process? You may already have asked yourself while reading the previous section: how do we actually import the data into the database?

The answer is that we need scripts to create the database and import the template data before the test process begins.

1. Create a new database

Our project uses Knex to connect to the database, and Knex requires a database name when it initializes the connection. So what do we do if we want to create a brand-new database and then import data into it?

After some experimentation, the answer is to first connect to a database that already exists. In MySQL we can connect to the built-in mysql database, create the new database with a SQL statement, and then reconnect to it.

Our script is as follows:

const TEST_DB = 'test_db';
const cp = require('child_process');
const Knex = require('knex');

// Check whether a database with the given name already exists
const hasDB = (dbs = [], dbName) => {
  return dbs.some(item => item.Database === dbName);
};

const getDBConnectionInfo = ({
  host = '127.0.0.1',
  port = 6606,
  user = 'root',
  password = '123456',
  database = 'mysql',
}) => ({
  host,
  port,
  user,
  password,
  database,
});

const createDB = async () => {
  // Initialize the connection against the built-in mysql database
  let knex = Knex({
    client: 'mysql',
    connection: getDBConnectionInfo({ database: 'mysql' }),
  });

  // If the database we want to create already exists, drop it first
  const dbInfo = await knex.raw('show databases');
  if (hasDB(dbInfo[0], TEST_DB)) {
    await knex.raw(`drop database ${TEST_DB}`);
  }

  // Create the new database and connect to it
  await knex.raw(`create database ${TEST_DB}`);
  knex = Knex({
    client: 'mysql',
    connection: getDBConnectionInfo({ database: TEST_DB }),
  });
};

2. Import template data

Now that the database is created, it’s time to consider how to import the template data into the library (the template data is an SQL file exported from the Test environment).

Method 1: Execute the SQL file with Knex (failed)

The first idea was to read all the SQL statements from the SQL file and hand them to Knex to execute. Unfortunately, we could not find a way to get Knex to execute multiple SQL statements at once, so this approach failed.

Method 2: Import the SQL file directly inside the container (succeeded)

Since the first method failed, we had to import the SQL file into the database directly.

Here's the idea. First, we use docker ps with a filter to get the hash (container ID) of the MySQL container. Then we import the data by executing a command inside that container through docker exec.

Our script is as follows:

// Promisified wrapper around cp.exec (not shown in the original; a likely implementation).
// TEST_DB, cp, Knex, hasDB and getDBConnectionInfo come from the snippet above.
const execCommand = command =>
  new Promise((resolve, reject) => {
    cp.exec(command, (err, stdout) => (err ? reject(err) : resolve(stdout)));
  });

const createDB = async () => {
  // Initialize the connection against the built-in mysql database
  let knex = Knex({
    client: 'mysql',
    connection: getDBConnectionInfo({ database: 'mysql' }),
  });

  // If the database we want to create already exists, drop it first
  const dbInfo = await knex.raw('show databases');
  if (hasDB(dbInfo[0], TEST_DB)) {
    await knex.raw(`drop database ${TEST_DB}`);
  }

  // Create the new database and connect to it
  await knex.raw(`create database ${TEST_DB}`);
  knex = Knex({
    client: 'mysql',
    connection: getDBConnectionInfo({ database: TEST_DB }),
  });

  let containerHash;

  // Get the container hash
  try {
    containerHash = await execCommand(
      "docker ps --filter 'name=project_database' -q"
    );
  } catch (e) {
    console.log('Failed to get docker container hash', e);
  }

  // Inject the template data (strip the trailing newline from the hash first)
  try {
    await execCommand(
      `docker exec -i ${containerHash.replace('\n', '')} /usr/init.sh`
    );
  } catch (e) {
    console.log('Import data failed', e);
  }

  // Destroy the connection
  knex.destroy();
};

Two notes on constructing the docker exec command:

  1. The container hash we obtain ends with a newline character; if we leave it in, the command fails, so we need to strip it out.

  2. If the mysql import command is placed directly after docker exec, execution fails because the host shell interprets part of the command (notably the input redirection) before docker exec runs. So we add an init.sh script, map it into the container along with the SQL file, and simply execute the script instead, which sidesteps the problem (see the comparison below).
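For illustration, the difference looks roughly like this (the container hash is abbreviated):

# Fails: the host shell tries to read /usr/test_db.sql on the host before docker exec runs
docker exec -i <hash> mysql -uroot -p123456 test_db < /usr/test_db.sql

# Works: the redirection happens inside the container, where the file actually exists
docker exec -i <hash> /usr/init.sh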

The content of the init.sh file is as follows:

#!/bin/bash
# init.sh: inject the template data
mysql -uroot -p123456 test_db < /usr/test_db.sql

YML configuration file:
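The relevant volumes lines for the mysql service (the container-side paths match the script above; the host-side paths are assumptions):

volumes:
  - ./test/data/test_db.sql:/usr/test_db.sql
  - ./test/scripts/init.sh:/usr/init.sh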

Finally, we run the script before the unit tests start. Our project uses Jest for unit testing, so we call the createDB function inside Jest's beforeAll hook, which automatically injects the template data before the tests run.

beforeAll(async () => {
  await createDB();
  server = server.start(50000);
});

Adding it to the GitLab CI pipeline

With test automation in place, we can now add it to the CI pipeline. If you are not familiar with CI or with GitLab's CI configuration process, you can refer to this article.

We won't go through every CI configuration step here; we'll just show the main GitLab CI configuration file.

.gitlab-ci.yml configuration file:

image: docker:stable

services:
  - docker:stable-dind

before_script:
  - apk add --no-cache --quiet py-pip
  - pip install --quiet docker-compose~=1.23.0
  - apk add nodejs npm

test:
  stage: test
  script:
    - npm install --unsafe-perm=true --registry=http://r.cnpmjs.org/
    - nohup docker-compose up & npm run test

Because our GitLab CI runner environment is itself a Docker container, we declare through image that the runner environment is initialized from the docker image.

In addition, our data environment is also built with Docker, so we need to be able to run Docker inside the runner's Docker container to start it. That requires declaring an extra service under services; here we use docker:stable-dind (Docker-in-Docker), which lets us create additional container services inside the runner container.

Under before_script, before the job starts, we install docker-compose separately, along with the Node.js and npm needed to run the project.

With the environment ready, we move on to the test job: install the dependencies, start the data environment with docker-compose, and then run Jest for unit testing (Jest imports the template data into the database before running the tests).

Afterword

At this point, the mission is accomplished. Looking back on the whole process, the road was winding, but the result was rewarding. If you have any good ideas, suggestions, or questions, feel free to leave a comment.