Test platforms: some say they are chicken ribs (tasteless to eat, a pity to throw away), some say they are useful, and some say they are reinventing the wheel. Opinions vary, so it is better to start from your own situation when considering whether to build a test platform:

  • Stage 1: write interface automation with Python + Requests.
  • Stage 2: choose between unittest and pytest; being more familiar with pytest, pick pytest.
  • Stage 3: quickly build pytest project scaffolding and encapsulate the tep testing tool.
  • Stage 4: manage test scripts with Git and merge code across multiple branches.
  • Stage 5: remove the hassle of synchronizing local environments so teams can share scripts more easily.

At that point, you need a test platform.

Usage

Environment variables

Environment variables are globally scoped string key-value pairs, for example, different domain names in different environments:

Use: env_vars.name.
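To make the lookup concrete, here is a minimal standalone sketch of how an environment-variable object like env_vars can resolve a value per environment. The class and values are illustrative stand-ins, not tep's actual implementation:

```python
# Illustrative only: a plain-Python stand-in for the env_vars fixture.
class EnvVars:
    env = "qa"  # current environment, normally read from conf.yaml
    # One set of variables per environment
    mapping = {
        "qa": {"domain": "https://qa.com"},
        "release": {"domain": "https://release.com"},
    }
    # Expose the current environment's value as a plain attribute
    domain = mapping[env]["domain"]


env_vars = EnvVars()
print(env_vars.domain)  # https://qa.com
```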

fixtures

Fixtures here are pytest fixtures, which add custom functions for test cases to use. For example, wrapping the login interface to return a token:

tep.fixture provides a url fixture that automatically concatenates the environment variable env_vars.domain with the URI.
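A minimal sketch of what such a url helper could look like internally (an assumption based on the description above, not tep's source code):

```python
# Hypothetical: build a url() helper that prefixes the environment's domain.
def make_url(domain):
    def url(uri):
        # Concatenate domain + URI, e.g. "https://qa.com" + "/api/users/login"
        return domain + uri
    return url


url = make_url("https://qa.com")
print(url("/api/users/login"))  # https://qa.com/api/users/login
```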

Test cases

Code is written on the front-end page; one use case corresponds to one pytest test_name.py file. For example, using the login fixture to log in:

Writing locally

Use PyCharm to download the project code, including the environment variables and fixtures, from the platform, then write use cases locally.

The local and platform environments stay consistent, eliminating early setup and letting you focus on writing test cases.

Extensibility

Use cases are Python code, so in theory anything that can be written in Python can be supported by the platform, such as HTTP, WebSocket, Protobuf, and so on.

Principles

The pytest core

  1. vue2-ace-editor serves as the front-end code-editing component.
  2. The front end passes the code to the back end via HTTP requests.
  3. The back end stores the code in the MySQL database.
  4. When a use case is run, the code is pulled from the database and a pytest file is generated.
  5. A shell command invokes pytest -s test_name.py to run the tests.
  6. The back end returns the run logs to the front end for display.

The reason for involving the database is that files in the Docker container are cleared after each deployment, so the files can only be generated dynamically.

The tep scaffolding

The test platform's functions are extracted from the tep project scaffolding:

  • fixture_env_vars.py became the environment variables function.
  • fixture_login.py and similar files became the fixtures function.
  • tests became the test cases function.

Running use cases

The overall process is as follows:

tep startproject project_name

When running a use case, check whether the project directory exists; if it does not, call tep startproject project_name to create the project scaffolding.
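This check can be sketched as follows (assumed logic; the real back end may differ):

```python
import os
import subprocess


def ensure_project(project_name, workdir="."):
    """Create the tep scaffolding if the project directory is missing."""
    project_dir = os.path.join(workdir, project_name)
    if not os.path.exists(project_dir):
        # First run: let tep generate the scaffolding
        subprocess.run(["tep", "startproject", project_name], cwd=workdir, check=True)
    return project_dir
```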

Update env in conf.yaml

Update conf.yaml with the current runtime environment passed from the front end:

env: qa
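A minimal sketch of that update, rewriting only the env line rather than parsing the whole file (an assumption; the real platform may use a YAML library instead):

```python
def update_env(conf_text, env):
    """Replace the 'env:' line in conf.yaml content with the new environment."""
    lines = []
    for line in conf_text.splitlines():
        if line.startswith("env:"):
            line = f"env: {env}"
        lines.append(line)
    return "\n".join(lines)


print(update_env("env: release", "qa"))  # env: qa
```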

Dynamically generate or update the fixture_env_vars.py file

Dynamically generate the fixture_env_vars.py file based on the data from the environment variable function module:

#!/usr/bin/python
# encoding=utf-8

from tep.dao import mysql_engine
from tep.fixture import *


@pytest.fixture(scope="session")
def env_vars(config):
    class Clazz(TepVars):
        env = config["env"]

        """Variables define start"""
        # Environment and variables
        mapping = {
            "qa": {
                "domain": "https://qa.com",
            },
            "release": {
                "domain": "https://release.com",
            },
            # Add your environment and variables
        }
        # Define properties for auto display
        domain = mapping[env]["domain"]
        """Variables define end"""

    return Clazz()

Dynamically generate or update all files in the fixtures directory

All files in the fixtures directory, such as fixture_login.py, are dynamically generated based on the data from the fixtures module:

from tep.client import request
from tep.fixture import *


def _jwt_headers(token):
    return {"Content-Type": "application/json", "authorization": f"Bearer {token}"}


@pytest.fixture(scope="session")
def login(url):
    # Code your login
    logger.info("Administrator login")
    response = request(
        "post",
        url=url("/api/users/login"),
        headers={"Content-Type": "application/json"},
        json={
            "username": "admin",
            "password": "123456",
        },
    )
    assert response.status_code < 400
    response_token = jmespath.search("token", response.json())

    class Clazz:
        token = response_token
        jwt_headers = _jwt_headers(response_token)

    return Clazz

conftest.py automatically finds and imports them, so test cases can use them directly.
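One way such auto-discovery can work is to glob the fixtures directory and build import lines for every fixture file (an assumed mechanism; tep's actual conftest.py may differ):

```python
import glob
import os


def collect_fixture_imports(fixtures_dir):
    """Build 'from fixtures.fixture_x import *' lines for each fixture file."""
    package = os.path.basename(fixtures_dir)
    imports = []
    for path in sorted(glob.glob(os.path.join(fixtures_dir, "fixture_*.py"))):
        module = os.path.splitext(os.path.basename(path))[0]
        imports.append(f"from {package}.{module} import *")
    return imports
```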

Dynamically generate or update test_ files in tests

Get the use case code from the database and dynamically generate the test_ file.

The shell executes the pytest command

Get case_path from the previous step and call pytest -s case_path to run the test.
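The invocation can be sketched with subprocess as below; the pytest_cmd parameter is a hypothetical hook added here so the sketch can be exercised without pytest installed:

```python
import subprocess
import sys


def run_case(case_path, pytest_cmd=None):
    """Run one use case and return the exit code plus the log output."""
    cmd = (pytest_cmd or [sys.executable, "-m", "pytest"]) + ["-s", case_path]
    result = subprocess.run(cmd, capture_output=True, text=True)
    # result.stdout holds the run log that is returned to the front end
    return result.returncode, result.stdout
```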

Support for suites and markers for batch execution of use cases is planned for later.

Summary

This article introduces the usage and principles of the first test platform I built. The tech stack is Vue + Django + Django REST Framework + JWT + MySQL + pytest + Git + BitBucket + Drone + Nginx + Docker + K8s, and it has landed at my company. It has not yet been used at large scale; running use cases produces heavy disk I/O on the server, so performance under heavy use is still unknown. The platform's core is pytest, used via tep, hence the name teprunner.

References:

Github.com/dongfanger/…