This article was first published on: Walker AI

Pytest is a unit testing framework for Python. Compared with unittest, pytest is simpler and more efficient to use, and it is the first choice of most people who write test cases in Python.

In addition to the functionality provided by the framework itself, pytest supports hundreds of third-party plug-ins, and this extensibility makes it easier to meet the varied needs that arise in test case design. This article takes a closer look at five commonly used plug-ins.

1. Use case dependencies

When writing use cases, we aim for each use case to be independent, but some use cases are inherently related and cannot be fully decoupled. In such situations, the pytest-dependency plug-in can be used to declare dependencies between use cases: if use case A depends on use case B and B fails, A is automatically skipped. This avoids executing a use case that is bound to fail, with an effect similar to pytest.mark.skip.

(1) Installation:

pip install pytest-dependency

(2) Instructions:

First, the use case being depended on must be marked with the decorator @pytest.mark.dependency(), and it must be executed before the use cases that depend on it. A dependency can also be given an alias via the name parameter.

Second, the dependent use case must also carry the decorator, this time as @pytest.mark.dependency(depends=['use case name']). The difference is that here the depends parameter is required to establish the association, and multiple dependencies can be listed, separated by commas.

In addition, the scope parameter can be used to specify the scope in which the dependency is resolved; its valid values are session, package, module, and class, which are not detailed here.
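As an illustration, scope="session" lets a use case depend on one defined in another module. Below is a minimal, hypothetical sketch (the file names test_mod_01.py and test_mod_02.py are made up); in session scope, a dependency that has no alias is referenced by its full node ID, which includes the file path:

# test_mod_01.py
import pytest

@pytest.mark.dependency()
def test_a():
    assert True

# test_mod_02.py
import pytest

# The dependency is referenced by its node ID because no alias was set
@pytest.mark.dependency(depends=["test_mod_01.py::test_a"], scope="session")
def test_b():
    print("Runs only if test_a in test_mod_01.py passed.")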

The following example and its execution results illustrate this further.

(3) Analysis of examples and execution results

Example:

import pytest


class TestCase:

    # Mark the current use case as a dependency via the decorator @pytest.mark.dependency(); it must run before the use cases associated with it
    @pytest.mark.dependency()
    def test_01(self):
        print("Test case 01, execution failed.")
        assert 1 == 2

    # Associate a use case with its dependency by naming the dependency in the depends parameter
    @pytest.mark.dependency(depends=['test_01'])
    def test_02(self):
        print("Test case 02, skipped.")

    # An alias can be assigned with the name parameter when marking a dependency
    @pytest.mark.dependency(name="func_2")
    def test_03(self):
        print("Test case 03, executed successfully!")

    # The depends parameter can also reference the defined alias
    @pytest.mark.dependency(depends=['func_2'])
    def test_04(self):
        print("Test case 04, executed successfully!")

    # The depends parameter accepts multiple use cases, separated by commas
    @pytest.mark.dependency(depends=['test_01', 'func_2'])
    def test_05(self):
        print("Test case 05, skipped.")


if __name__ == '__main__':
    pytest.main(['-vs'])

The result is as follows:

We can see that a use case runs only if the use case it depends on executed successfully; otherwise it is skipped. When a use case depends on multiple use cases, it runs only if all of them succeed, and is skipped otherwise.

2. Rerun on failure

The pytest-rerunfailures plug-in allows a use case to be re-executed after a failure, with a configurable maximum number of reruns. This helps make the results of use case execution more reliable.

(1) Installation:

pip install pytest-rerunfailures

(2) Instructions:

Rerun-on-failure can be used in two ways: through a decorator or on the command line.

To use the decorator, add @pytest.mark.flaky(reruns=<max reruns>, reruns_delay=<delay in seconds>) to a use case. During execution, a decorated use case that fails is re-executed up to the configured number of times, with the configured delay between attempts.

On the command line, the same behavior is obtained with the --reruns and --reruns-delay options, in the form pytest --reruns <max reruns> --reruns-delay <delay in seconds>.

Note: reruns is the maximum number of reruns. If the use case passes before this number is reached, no further reruns occur and the use case is judged to have passed. If it is still failing after the maximum number of reruns, it is judged to have failed.
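For example, the following command (the numbers here are chosen arbitrarily) reruns each failed use case up to 3 times, waiting 1 second between attempts:

pytest --reruns 3 --reruns-delay 1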

The following example and its execution results illustrate this further.

(3) Analysis of examples and execution results

Example:

import pytest
import random


class TestCase:
    
    # Use the decorator to set the maximum number of reruns after a failure and the delay between attempts (in seconds)
    @pytest.mark.flaky(reruns=3, reruns_delay=1)
    def test_01(self):
        result = random.choice(['a', 'b', 'c', 'd', 'e'])
        print(f"result={result}")
        assert result == 'c'


if __name__ == '__main__':
    pytest.main(['-vs'])

The result is as follows:

As we can see, when the use case's assertion fails, it is re-executed until it passes or the configured maximum number of reruns is reached.

3. Specify the sequence of use cases

By default, pytest executes use cases in the order in which they appear in the file. When maintaining test cases, we sometimes need to change the execution order, but rearranging large blocks of use case code each time is hard to maintain. The pytest-ordering plug-in lets us set the execution order of a use case directly, so later maintenance only requires changing the corresponding order parameter.

(1) Installation:

pip install pytest-ordering

(2) Instructions:

Set the order of use cases by adding the decorator @pytest.mark.run(order=<execution order>) to them. At execution time, use cases carrying this decorator take precedence over undecorated ones, and decorated use cases run in ascending order of their order values.

The following example and its execution results illustrate this further.

(3) Analysis of examples and execution results

Example:

import pytest


class TestCase:

    def test_01(self):
        print("Test case 01")

    def test_02(self):
        print("Test case 02")

    # Use the decorator to set the execution order to 2
    @pytest.mark.run(order=2)
    def test_03(self):
        print("Test case 03")

    # Use the decorator to set the execution order to 1
    @pytest.mark.run(order=1)
    def test_04(self):
        print("Test case 04")


if __name__ == "__main__":
    pytest.main(['-vs'])

Execution Result:

We can see that the execution order matches expectations: the use cases with an order set run first, in ascending order, followed by the remaining use cases.

4. Distributed running

When a project has many use cases, execution often takes a long time; running them in a distributed fashion can significantly reduce the overall execution time. The pytest-xdist plug-in supports distributed execution of test cases.

(1) Installation:

pip install pytest-xdist

(2) Instructions:

When executing use cases on the command line, use the -n option to set the number of parallel worker processes. Besides a specific number, it can also be set to auto, in which case the number of workers matches the number of CPUs on the machine.

In addition, the --dist option controls how use cases are grouped, so that use cases in the same group are executed in the same process; see the example invocation after the list below.

  • --dist=loadscope: use cases in the same module or the same class are assigned to the same group, with class grouping taking precedence over module grouping.
  • --dist=loadfile: use cases in the same .py file are assigned to the same group.
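A typical invocation might look like the following (the worker count and distribution mode here are illustrative choices, not requirements):

pytest -vs -n auto --dist=loadscope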

The following example and its execution results illustrate this further.

(3) Analysis of examples and execution results

Example:

import pytest
from time import sleep


class TestCase1:

    @pytest.mark.parametrize('keyword', ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'])
    def test_baidu_search(self, keyword):
        sleep(1)
        print(f'searching for keyword {keyword}')


class TestCase2:

    @pytest.mark.parametrize('user', ['user1', 'user2', 'user3', 'user4', 'user5',
                                      'user6', 'user7', 'user8', 'user9', 'user10'])
    def test_login(self, user):
        sleep(1)
        print(f'user {user} logged in successfully')


if __name__ == '__main__':
    # pytest.main(['-vs'])  # run without pytest-xdist
    pytest.main(['-vs', '-n', '2'])  # run with pytest-xdist

Execution Result:

As the two results above show, distributed running significantly reduces the use cases' run time. The use cases in this example are unrelated to each other; in real projects where dependencies exist between use cases, the --dist option can be used to group them so that related use cases run in the same process.

5. Multiple assertions

Sometimes a use case needs to assert on several dimensions of a result, but with assert, once one assertion fails, the subsequent ones are never evaluated. The pytest-assume plug-in solves this problem by letting execution continue with subsequent assertions after one fails.

(1) Installation:

pip install pytest-assume

(2) Instructions:

In use cases, simply make assertions with pytest.assume() instead of assert.

The following example and its execution results illustrate this further.

(3) Analysis of examples and execution results

Example:

import pytest


class TestCase:
    
    # Use assert assertions
    def test_01(self):
        print("Assertion 1")
        assert 1 == 1
        print("Assertion 2")
        assert 2 == 1
        print("Assertion 3")
        assert 3 == 3
        print("End of use case")

    # Use pytest.assume() assertions
    def test_02(self):
        print("Assertion 1")
        pytest.assume(1 == 1)
        print("Assertion 2")
        pytest.assume(2 == 1)
        print("Assertion 3")
        pytest.assume(3 == 3)
        print("End of use case")


if __name__ == '__main__':
    pytest.main(['-vs'])

Execution Result:

As the execution results show, once an assert assertion fails, the rest of the use case is not executed; with pytest.assume(), execution continues past the failed assertion to the end of the use case. This makes it possible to collect all of a use case's failures in a single run.

6. Summary

This article introduced several commonly used pytest plug-ins that can help solve problems encountered in practice. Pytest currently supports as many as 868 plug-ins; beyond the five common ones covered here, many others address different needs. Try finding and using the plug-ins that match your own requirements, so you can better design test cases that fit your business scenarios.


PS: For more technical content, follow the official account [xingzhe_ai] and join the discussion with Walker AI!