
Today we introduce an open-source Python project: HttpRunner, an API test automation tool. This is my first post, so there are bound to be rough spots; please go easy on me~

Introduction:

HttpRunner is a simple, elegant, yet powerful HTTP(S) testing framework. Test cases are defined in YAML or JSON format, which keeps case descriptions consistent and maintainable. At run time, HttpRunner processes the YAML/JSON files supplied by the user, generates test files from a template, and executes the generated files via pytest.main([]). Users only need to maintain cases in the YAML/JSON files; they do not need to care how the program parses those files or how the test files are generated. Cases run quickly and simply through pytest, producing detailed test reports.
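That final execution step can be sketched in a couple of lines; the generated file name below is hypothetical:

# Sketch of the final execution step: HttpRunner hands the generated
# .py files to pytest. The file path here is hypothetical.
import pytest

exit_code = pytest.main(["testcases/demo_testcase_test.py", "-v"])
print(exit_code)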

Main Features:

  • Define test cases in YAML or JSON format, keeping case descriptions consistent and maintainable
    • testsuite > testcase > teststep (api)
    • Supports designing a series of test scenarios, each of which can contain multiple teststeps
    • Supports parameterization
    • Supports variables / extract / validate / hooks (extracting and validating JSON responses with JMESPath)
    • Supports auxiliary functions (debugtalk.py) for implementing complex dynamic logic in test scripts
  • Record and generate test cases from HAR files. (If you capture requests with Charles, the generated case files may need some manual cleanup.)
  • Uses pytest to execute test files, with hundreds of plugins available out of the box; with allure, test reports can be very powerful
    • run_testcase(): prepares the data before the request is sent
    • __run_step() > __run_step_request() (uses requests to send the API request and exports the variables referenced by other cases (__step_datas))
  • By reusing Locust, you can run performance tests without any extra work
  • Supports CLI commands and integrates neatly with CI/CD

How it works

The best way to see how HttpRunner works is to debug it. How do you step through HttpRunner's execution in debug mode? Follow along:

Debugging technique

Debugging is done in PyCharm, with the Python environment managed by virtualenv.

  • pip install httprunner
  • Find the hrun file in the venv/bin/ directory

  • Edit the hrun file to assign sys.argv: the first element is the absolute path to hrun, and the second is the absolute path to the test case file
  • Set a breakpoint at sys.exit(main_hrun_alias()), and you can start your debugging journey
# -*- coding: utf-8 -*-
import re
import sys
from httprunner.cli import main_hrun_alias
if __name__ == '__main__':
    sys.argv = ['/Users/boyizhang/PycharmProjects/apitest/venv/bin/hrun',
                '/Users/boyizhang/PycharmProjects/apitest/hruntests/testcases/testheader.yml']

    sys.argv[0] = re.sub(r'(-script\.pyw|\.exe)?$', '', sys.argv[0])
    sys.exit(main_hrun_alias())


Workflow

Debug walkthrough:

  • Enter main_hrun_alias(), which inserts a new entry "run" into the sys.argv list (because we are running a test case)

  • Enter main(); since sys.argv now contains "run", the case execution branch is taken

  • Enter main_run(extra_args), which converts the YAML/JSON files given by extra_args (the YAML/JSON paths) into Python test files

  • Enter main_make(), which checks whether the paths exist

  • Enter __make(), which loads the YAML/JSON file and prepares the data (copying data in priority order, from lowest to highest)

  • Enter make_testcase(), which renders the corresponding .py file from the YAML/JSON file based on the template

  • Finally, control returns to main_run() after main_make() completes. You can see that the program ends up passing the generated .py file to pytest.main() for execution

  • The above walks through how case files go from YAML/JSON to .py test files. You can continue debugging into the generated .py file from here.
  • This is a fairly simple demonstration; if you are interested, you can continue the debugging tour along the same lines, as sketched below.
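To tie the walkthrough together, here is a simplified sketch of the call chain we just stepped through. This is not HttpRunner's actual source; the names simply mirror the functions above:

# Simplified sketch (not HttpRunner's real source) of the chain:
# hrun -> main_run() -> main_make() -> pytest.main()
import pytest

def main_make(paths):
    """Stand-in for main_make()/__make()/make_testcase(): convert each
    YAML/JSON case into a generated pytest file and return the paths."""
    generated = []
    for path in paths:
        # the real code loads the YAML/JSON and renders a .py template;
        # here we only derive the generated file name
        generated.append(path.replace(".yml", "_test.py"))
    return generated

def main_run(extra_args):
    """Stand-in for main_run(): make the test files, then hand them to pytest."""
    test_files = main_make(extra_args)
    return pytest.main(test_files)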

Use case management

Key concepts:

Generally speaking, the core of the layered test case mechanism is to separate interface definitions, test steps, test cases, and test scenarios so that each can be described and maintained independently, minimizing the maintenance cost of automated test cases.

  • Test cases should be complete and independent; each test case should be runnable on its own
  • A test case (test scenario) is an ordered collection of test steps (teststeps), each of which corresponds to one API request description
  • A testsuite is an unordered set of test cases. The cases in the set should be independent of each other, with no ordering dependencies; if ordering dependencies do exist, they must be handled inside the test case
    • Multiple test scenarios compose a test suite; running a test suite executes multiple test scenarios at once
  • Interface definition: to manage interface descriptions better, each is stored in an independent file, i.e. one file per interface description

V2.0

We recommend using the latest version of HttpRunner, but if you are interested in how HttpRunner's architecture has evolved, read on through the V2.0 section.

Layered test case model

In version 2.0, there is the concept of an API definition.

  • All APIs are placed in the api folder, with one YAML/JSON file per API
  • A teststep in a testcase references the corresponding API

The advantage is convenient API management. The disadvantage is that a teststep in a testcase can reference an API definition directly, whether the testcase is a simple single-step scenario or a complex multi-step one. For simple single-step scenarios, the testcase and the API definition look almost identical, which leads to duplicated descriptions and confusion.

So, how do we reference an API in a teststep of a testcase? And how do we reference another testcase?

V2.0 case study

In a teststep, you can reference an API definition through the api field, and reference a testcase through the testcase field. Create a project skeleton quickly with httprunner startproject hruntest2.0 (syntax for HttpRunner 3.0 and later).

$ httprunner startproject hruntest2.0 && tree hruntest2.0
Start to create new project: hruntest2.0
CWD: /Users/boyizhang/PycharmProjects/apitest
created folder: hruntest2.0
created folder: hruntest2.0/api
created folder: hruntest2.0/testcases
created folder: hruntest2.0/testsuites
created folder: hruntest2.0/reports
created file: hruntest2.0/api/demo_api.yml
created file: hruntest2.0/testcases/demo_testcase.yml
created file: hruntest2.0/testsuites/demo_testsuite.yml
created file: hruntest2.0/debugtalk.py
created file: hruntest2.0/.env
created file: hruntest2.0/.gitignore
hruntest2.0
├── api
│   └── demo_api.yml
├── debugtalk.py
├── reports
├── testcases
│   └── demo_testcase.yml
├── testsuites
│   └── demo_testsuite.yml
├── .env
└── .gitignore
  • api definition
# api/demo_api.yml
name: demo api
variables:
    var1: value1
    var2: value2
request:
    url: /api/path/$var1
    method: POST
    headers:
        Content-Type: "application/json"
    json:
        key: $var2
validate:
    - eq: ["status_code".200]

  • testcase
config:
    name: "demo testcase"
    variables:
        device_sn: "ABC"
        username: ${ENV(USERNAME)}
        password: ${ENV(PASSWORD)}
    base_url: "http://127.0.0.1:5000"

teststeps:
-
    name: demo step 1
    api: path/to/api1.yml
    variables:
        user_agent: 'iOS/10.3'
        device_sn: $device_sn
    extract:
        - token: content.token
    validate:
        - eq: ["status_code".200]

Use the api field in a teststep to reference the API definition, then execute the testcases/demo_testcase.yml case file. (Note: an API definition is not itself a testcase, so you cannot directly execute the YAML/JSON files under api/.) Fields passed in the teststep take precedence over fields in the API definition: if the teststep sets a field, only the teststep's value is used; otherwise the API definition's value applies.
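This priority rule can be illustrated with a tiny sketch (a hypothetical helper, not HttpRunner source):

# Hypothetical illustration of the priority rule: teststep fields
# override same-named fields from the referenced API definition.
def merge_step_with_api(api_fields: dict, step_fields: dict) -> dict:
    merged = dict(api_fields)   # start from the API definition
    merged.update(step_fields)  # values set in the teststep win
    return merged

api_def = {"var1": "value1", "var2": "value2"}
step = {"var2": "override"}
print(merge_step_with_api(api_def, step))  # {'var1': 'value1', 'var2': 'override'}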


V3.0

For simplicity, the API concept from HttpRunner v2.x has been removed. You can define an API as a test case with only one request step (the key point here).

Layered test case model

  • The API concept is removed and folded into the test case concept: an API is defined as a test case with only one request step. A single unified concept, the test case, makes case maintenance more convenient.
  • The v2.0 model required maintaining both API definitions and testcases with only one test step, whereas in v3.0 we only maintain testcases. The v2 -> v3 change resolves dependencies between test cases without duplicated descriptions, ensuring each test case is independently runnable.

So, how do we reference one testcase from within a teststep of another testcase?

V3.0 case study

In a teststep, you can reference another test case through the testcase field, using the absolute or relative path of the corresponding testcase file. A relative path is recommended; it is resolved against the project root directory, i.e. the directory containing debugtalk.py.
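A tiny sketch of that resolution rule (a hypothetical helper, not HttpRunner source):

# Hypothetical sketch: relative testcase paths are resolved against the
# project root, i.e. the directory containing debugtalk.py.
import os

def resolve_testcase_path(project_root: str, testcase_path: str) -> str:
    if os.path.isabs(testcase_path):
        return testcase_path
    return os.path.join(project_root, testcase_path)

print(resolve_testcase_path("/path/to/project", "testcases/demo_testcase_request.yml"))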

Create a project quickly with httprunner startproject hruntest3.0.

$ tree hruntest3.0
(scaffold tree garbled in the original; it includes debugtalk.py, .env, .gitignore, and a testcases/ directory containing the demo case files)
# demo_testcase_request.yml
config:
    name: "request methods testcase with functions"
    variables:
        foo1: config_bar1
        foo2: config_bar2
        expect_foo1: config_bar1
        expect_foo2: config_bar2
    base_url: "https://postman-echo.com"
    verify: False
    export: ["foo3"]

teststeps:
-
    name: post form data
    variables:
        foo2: bar23
    request:
        method: POST
        url: /post
        headers:
            User-Agent: HttpRunner/${get_httprunner_version()}
            Content-Type: "application/x-www-form-urlencoded"
        data: "foo1=$foo1&foo2=$foo2&foo3=$foo3"
    validate:
        - eq: ["status_code".200]
        - eq: ["body.form.foo1"."$expect_foo1"]
        - eq: ["body.form.foo2"."bar23"]
        - eq: ["body.form.foo3"."bar21"]

# demo_testcase_ref.yml 
config:
    name: "request methods testcase: reference testcase"
    variables:
        foo1: testsuite_config_bar1
        expect_foo1: testsuite_config_bar1
        expect_foo2: config_bar2
    base_url: "https://postman-echo.com"
    verify: False

teststeps:
-
    name: request with functions
    variables:
        foo1: testcase_ref_bar1
        expect_foo1: testcase_ref_bar1
    testcase: testcases/demo_testcase_request.yml
    export:
        - foo3
-
    name: post form data
    variables:
        foo1: bar1
    request:
        method: POST
        url: /post
        headers:
            User-Agent: HttpRunner/${get_httprunner_version()}
            Content-Type: "application/x-www-form-urlencoded"
        data: "foo1=$foo1&foo2=$foo3"
    validate:
        - eq: ["status_code".200]
        - eq: ["body.form.foo1"."bar1"]
        - eq: ["body.form.foo2"."bar21"]


In demo_testcase_ref.yml, the step named request with functions references testcases/demo_testcase_request.yml through the testcase field. From the project root, run hrun testcases/demo_testcase_ref.yml; you can see that the program also generates the test file for the referenced case, testcases/demo_testcase_request.yml.

Hook mechanism

See the HttpRunner source and documentation for the details of hook handling.

Background

In automated testing, some setup needs to happen before a case executes and some cleanup after. Doing this by hand is impractical, so a hook mechanism is needed to run hook functions before and after case execution.

Usage

The Hook mechanism is divided into two levels:

  • Test case level (testcase)
    • At the test case level, two keywords, setup_hooks and teardown_hooks, are added under the config field. Here the main purpose is preparation before the test and cleanup after it.
  • Test step level (teststep)
    • The keywords setup_hooks and teardown_hooks can be added to each teststep, to preprocess the request content before it is sent and to modify the response after it is received.

Writing hook functions

  • Hook functions are defined in the project's debugtalk.py and are invoked from YAML/JSON in the usual ${func(a, $b)} form.
  • Custom parameters: test-case-level hook functions work like any other custom function in YAML/JSON and can be used flexibly with custom arguments.
  • $request and $response parameters: test-step-level hook functions can, in addition to custom arguments, receive information about the current step: the request (request headers, request body, request method, etc.) and the response (a requests.Response wrapped as HttpRunner's ResponseObject), for flexible use in more complex scenarios, e.g. ${print_req($request)}.
# ${print_request($request)}
2021-07-18 09:30:46.740 | DEBUG | httprunner.runner:__call_hooks:121 - call hook function: ${print_request($request)}
{'method': 'GET', 'url': '/get', 'params': {'foo1': 'bar11', 'foo2': 'bar21', 'sum_v': 3}, 'headers': {'User-Agent': 'HttpRunner/3.1.5', 'HRUN-Request-ID': 'HRUN-7768261f-0abf-4ce5-abf2-06327de85fd7-846739'}, 'req_json': None, 'data': None, 'cookies': {}, 'timeout': 120, 'allow_redirects': True, 'verify': False}

# ${print_req($response)}
2021-07-18 09:36:03.019 | DEBUG | httprunner.runner:__call_hooks:121 - call hook function: ${print_req($response)}
<httprunner.response.ResponseObject object at 0x109c87e50>
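A sketch of the debugtalk.py side, with hook names matching the log above (the bodies are illustrative, not the original author's code):

# debugtalk.py -- illustrative hook functions; names match the log above
def print_request(request):
    # step-level setup hook: receives the request dict shown in the log
    # (method, url, params, headers, req_json, data, ...)
    print(f"about to send: {request}")

def print_req(response):
    # step-level teardown hook: receives HttpRunner's ResponseObject
    print(f"response object: {response}")

They would then be referenced from a step's setup_hooks / teardown_hooks lists, as described above.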

Environment (.env)

The relevant source for the .env handling (see also the documentation):

# parser.py
def get_mapping_function(
    function_name: Text, functions_mapping: FunctionsMapping
) -> Callable:
    # omitted
    if function_name in functions_mapping:
        return functions_mapping[function_name]

    elif function_name in ["environ", "ENV"]:
        return utils.get_os_environ
    # omitted
    raise exceptions.FunctionNotFound(f"{function_name} is not found.")

# utils.py

def set_os_environ(variables_mapping):
    """ set variables mapping to os.environ """
    for variable in variables_mapping:
        os.environ[variable] = variables_mapping[variable]
        logger.debug(f"Set OS environment variable: {variable}")
        
def get_os_environ(variable_name):
    try:
        return os.environ[variable_name]
    except KeyError:
        raise exceptions.EnvNotFound(variable_name)

The .env file is loaded into the environment before the test cases are loaded; if a case references ENV or environ, the corresponding values are read via os.environ.
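A simplified sketch of that loading step, reusing set_os_environ() from the utils.py excerpt above (the real loader has more validation):

# Simplified sketch of loading .env before the cases run; reuses
# set_os_environ() from the utils.py excerpt above.
def load_dot_env_file(dot_env_path):
    env_variables_mapping = {}
    with open(dot_env_path, encoding="utf-8") as fp:
        for line in fp:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blank lines and comments
            name, _, value = line.partition("=")
            env_variables_mapping[name.strip()] = value.strip()
    set_os_environ(env_variables_mapping)  # defined in utils.py above
    return env_variables_mapping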

Parameterization

Implementation

HttpRunner's implementation of the parameterized data-driven mechanism: debugtalk.com/post/httpru…
HttpRunner revisits the parameterized data-driven mechanism: debugtalk.com/post/httpru…

use

Note that as of v2.0, parameterization is only supported at the testsuite level; parameterization in test case files is no longer supported.

Parameter Configuration Overview

  • Independent parameters: a parameter that has no association with any other parameter.
  • Associated parameters: when multiple parameters are defined in a test case, at run time the parameters are combined as a Cartesian product, covering all combinations (see the sketch below).

For detailed usage, see v2.httprunner.org/prepare/par…
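The combination behavior itself is just a Cartesian product; here is a sketch of the logic with made-up parameter values:

# Cartesian-product combination of parameters, as described above.
# Independent groups multiply; associated parameters vary together.
import itertools

user_ids = [1001, 1002]                                   # independent parameter
credentials = [("user1", "111111"), ("user2", "222222")]  # associated pair

for user_id, (username, password) in itertools.product(user_ids, credentials):
    print(user_id, username, password)  # 2 x 2 = 4 combinations in total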

extract/export

JMESPath is used under the hood to extract from and validate JSON responses, which makes extraction much easier. extract fetches fields from the response body, and export makes the fields extracted by the current case available to the cases that reference it.
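For instance, with the jmespath library (the same engine HttpRunner uses), extraction from a response body looks like this; the payload below is made up:

# Extracting fields from a JSON response body with JMESPath.
# The payload is a made-up example.
import jmespath

resp_body = {"code": 0, "data": {"token": "abc123", "user": {"id": 42}}}
token = jmespath.search("data.token", resp_body)      # -> 'abc123'
user_id = jmespath.search("data.user.id", resp_body)  # -> 42
print(token, user_id)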

Test reports

  • pytest built-in reports
    • hrun /path/to/testcase --html=report.html
  • Allure reports
    • pip install allure-pytest
    • Run hrun /path/to/testcase --alluredir=/tmp/my_allure_results
    • Serve the report: allure serve /tmp/my_allure_results, or generate an HTML report: allure generate reports/allure -o reports/allure/html

Conclusion

Make good use of HttpRunner; it covers 80% of everyday scenarios and is a great tool. Besides learning how to use it, we should also study the design ideas behind it, so that we have a chance to do better later. For a Java counterpart among API test automation tools, see REST Assured. HttpRunner's implementation is broadly similar to REST Assured's, except that instead of managing cases through JSON/YAML, REST Assured has you write the test files directly (similar to the .py files HttpRunner generates). The two are worth comparing and learning from~