HttpRunner 2.x
For details, see the HttpRunner 2.x user manual.
Tool introduction
- HttpRunner is a general-purpose testing framework for HTTP(S). You only need to write and maintain a single YAML/JSON script to cover automated testing, performance testing, online monitoring, and continuous integration.
Design philosophy
- Fully reuse excellent existing frameworks and assemble them into a more powerful and complete one
- Follow convention over configuration, a principle that is fully reflected in automated testing
- Pursue return on investment: one set of input should satisfy multiple testing needs
Core features
- Inherits all the functionality of Requests
- Uses YAML/JSON as the test case data carrier
- Supports auxiliary functions in debugtalk.py, adding flexibility to scripts and enabling dynamic calculation logic
- Supports test case layering for reuse: data layer and business (logic) layer
- Supports a setup/teardown mechanism like the unittest framework: the hook mechanism
- Supports generating test cases from HAR recordings (har2case)
- Supports distributed performance testing in combination with the Locust framework
- Supports CLI invocation, making it easy to integrate with Jenkins
- Presentation layer: test result statistics and a clean HTML report plus logs
- Extensibility: supports secondary development, e.g. building a web platform on top of it
On the idea of layered testing
Distinguish between automated test layering, test framework layering, and project structure layering.
- Automated test layering: ordered by return on investment from top to bottom: UI (system testing), service (API/integration testing), and unit testing at the bottom
- Test framework layering: applies to automated testing of any kind (API, UI, performance, etc.); test data is kept separate from scripts
- API automation framework layers: data layer (test data/cases), business layer (logic layer), common layer (shared methods called by the business layer)
- UI automation framework layers (taking the PO design pattern as an example): page object layer (including element locators), business logic layer (calls page objects), test data layer (data separated from scripts)
- Project structure layering follows the test layering: business layer (calls or assembles test cases), data layer (test case/script data), common base-class library (shared methods), utility layer (logging, database handling, report handling), presentation layer (HTML report display), persistence layer (whether test cases, HTML reports, etc. are stored in a database)
Setting up the development environment
We are testers, so why call it a development environment? Because all test scripting and data debugging is done locally, it is referred to as the development environment; the test environment is reserved for pre-release test validation.
Python environment
Although Python 2 and Python 3 are both officially supported, Python 3.6 or above is recommended for actual development.
HttpRunner installation
Why pin the installation to a specific version? Because the current stable version of HttpRunner hosted on PyPI is 3.x.
- pip install httprunner==2.3.2
Version update
- pip install -U HttpRunner
Installation check
- hrun -V or hrun --help
- har2case -V
About the Python development environment
In actual development it is advisable to build good habits: create a separate Python virtual environment for each project, which makes both management and project migration easier.
There are several ways to create virtual environments; Anaconda or python -m venv is recommended (see the sketch below):
- python -m venv /path/to/venv
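As a rough illustration, the standard-library venv module can do the same thing programmatically; the directory name used here is just an example:
import venv

# Create an isolated interpreter environment (with pip) for the HttpRunner project.
# "venv-httprunner" is an arbitrary example directory name.
venv.create("venv-httprunner", with_pip=True)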
Basic concepts
Tip: before trying to use a tool or framework, first understand its rationale and design philosophy, then the corresponding concepts.
- Test case (testcase), test step (teststep), test suite (testsuite)
- A testsuite contains multiple testcases, declared under the testcases key:
{
    "config": {
        "name": "Test case set: business name",
        "variables": {
            "token": "Store public variables that can be used by the testcases below"
        },
        "base_url": "${ENV(URL)}"
    },
    "testcases": [
        {
            "name": "It can be a test case or an API",
            "testcase": "testcases/homeIndex.json"
        },
        {
            "name": "Multiple test cases can be assembled",
            "testcase": "testcases/myPractiecs.json"
        }
    ]
}
- A testcase contains one or more test steps, and can also reference APIs and other testcases. The following is an example of teststeps generated by converting a recorded HAR file with har2case:
{
    "config": {
        "name": "Can describe a business scenario or a single-case business",
        "base_url": "https://www.xxx.com"
    },
    "teststeps": [
        {
            "name": "Interface business name description",
            "skip": "skip login",
            "request": {
                "url": "/xx/xx/passwordLogin",
                "method": "POST",
                "headers": {
                    "Content-Type": "application/json; charset=UTF-8",
                    "User-Agent": "IeltsBroV3/10.0.0 (com.yasiBro.v2; build:5; iOS 13.3.0) Alamofire/4.9.0"
                },
                "json": {
                    "password": "111111",
                    "deviceid": "99B5D095-67AD-4B79-A995-6C869A895873",
                    "mobile": "13800138000",
                    "pushToken": "d90e1e770a37ac3464930c968fed9885",
                    "deviceName": "iPhone 8 Plus",
                    "verifyCode": "",
                    "loginType": 0,
                    "channel": 0,
                    "deviceType": "ios",
                    "zone": "86"
                }
            },
            "extract": [
                {"token": "content.content.token"}
            ],
            "validate": [
                {"eq": ["status_code", 200]}
            ]
        }
    ]
}
- A typical demo in the framework looks like this:
[
    {
        "config": {
            "name": "Global settings",
            "variables": {
                "user_agent": "Store flexible variables"
            },
            "verify": false
        }
    },
    {
        "test": {
            "name": "Call API to get the response token",
            "api": "api/login.yml",
            "extract": {
                "token": "content.content.token"
            },
            "validate": [
                {"eq": ["status_code", 200]}
            ]
        }
    },
    {
        "test": {
            "name": "Request a business interface",
            "api": "api/get_status.yml",
            "validate": [
                {"eq": ["status_code", 200]}
            ]
        }
    }
]
- An API definition looks like this:
{
    "name": "Login interface",
    "base_url": "${ENV(URL)}",
    "variables": {
        "expected_status_code": 200
    },
    "request": {
        "url": "/xxx/xxx/passwordLogin",
        "method": "POST",
        "json": {
            "appVersion": 9,
            "channel": 1,
            "deviceName": "HUAWEI EML-AL00",
            "deviceType": "android",
            "deviceid": "0795cd72-3e05-40ba-9733-5df1c1fa0970",
            "loginType": 0,
            "mobile": "${ENV(MOBILE)}",
            "password": "${ENV(PASSWORD)}",
            "pushToken": "25b9b066dc939b9863fe9feb3fca654d",
            "systemVersion": "8.1.0",
            "zone": 86
        }
    },
    "validate": [
        {"eq": ["status_code", 200]}
    ]
}
The framework nevertheless recommends using YAML rather than JSON as the test case data carrier: JSON quickly becomes confusing and hard to maintain, while YAML expresses the same dict/list structure and is much more readable.
- An example of an API written in YAML:
name: Login interface
base_url: ${ENV(URL)}
variables:
    expected_status_code: 200
request:
    url: /hcp/apiLogin/passwordLogin
    method: POST
    json:
        appVersion: 9.0
        channel: 1
        deviceName: HUAWEI EML-AL00
        deviceType: android
        deviceid: 0795cd72-3e05-40ba-9733-5df1c1fa0970
        loginType: 0
        mobile: ${ENV(MOBILE)}
        password: ${ENV(PASSWORD)}
        pushToken: 25b9b066dc939b9863fe9feb3fca654d
        systemVersion: 8.1.0
        zone: 86
validate:
    - eq: [status_code, 200]
Rapid application
The basic concepts of the HttpRunner framework have been described above. Using these features, you can get productive quickly.
- A brief description of how to use the framework quickly (text description):
Step 1: record the target requests and export them as a .har file. Step 2: convert with har2case; it produces JSON by default, and the -2y parameter produces YAML. Step 3: execute with hrun xxx.yml; after execution a reports/xxx.html report is generated in the current directory. A scripted sketch of these steps follows.
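As an illustration only, the steps can also be scripted; this assumes HttpRunner 2.x is installed and that a recording named demo.har (a placeholder name) has already been exported from a capture tool:
import subprocess

# Step 2: convert the recording to a YAML test case (har2case produces JSON by default, -2y gives YAML).
subprocess.run(["har2case", "demo.har", "-2y"], check=True)

# Step 3: run the generated test case; the HTML report is written under reports/ in the current directory.
subprocess.run(["hrun", "demo.yml"], check=True)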
Key points to master
Hook mechanism
HttpRunner adds the keywords setup_hooks and teardown_hooks at both the testcase config level and the teststep level; they are used to prepare data before execution and clean up afterwards.
- setup_hooks
- teardown_hooks
- Testcase layer (config):
- config:
    name: Basic configuration
    request:
        base_url: http://127.0.0.1:8080/
    setup_hooks:
        - ${hook_print(setup)}
    teardown_hooks:
        - ${hook_print(teardown)}
- Teststep layer (the referenced hook functions are sketched after this example):
request:
    url: /get_status
    method: GET
teardown_hooks:
    - ${get_status($response)}
validate:
    - eq: ["status_code", 500]
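The hook expressions above must map to functions defined in debugtalk.py. A minimal sketch of what hook_print and get_status could look like; only the function names come from the YAML above, their bodies here are illustrative assumptions:
# debugtalk.py (illustrative sketch)

def hook_print(msg):
    """Config-level hook: log which phase (setup/teardown) is running."""
    print("hook_print: %s" % msg)


def get_status(response):
    """Teststep-level teardown hook: inspect the response passed in as $response."""
    print("response status code: %s" % response.status_code)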
Environment variables
Application scenario: besides information security considerations, you also need an easy way to switch environments and configurations, which is where global and local environment variables come in.
- A .env file can be added to the project and used to set environment variables:
ACCOUNT=13800013800
PASSWD=111111
BASEURL=https://www.baidu.com
- Reference them in test cases as ${ENV(ACCOUNT)}, ${ENV(PASSWD)}, ${ENV(BASEURL)} (a debugtalk.py sketch follows)
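Since HttpRunner 2.x loads the .env file into the process environment at startup, a custom function in debugtalk.py can read the same values; a minimal sketch (the function name is hypothetical):
import os

def get_login_account():
    """Read the account and password configured in the .env file shown above."""
    return os.environ.get("ACCOUNT"), os.environ.get("PASSWD")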
Data driven
Data driving is not demonstrated in detail here; detailed usage instructions can be found in other articles or online. Only the supported approaches are listed.
- Specify the parameter list directly in YAML/JSON, using the example from the HttpRunner documentation:
config:
    name: "demo"

testcases:
    testcase1_name:
        testcase: /path/to/testcase1
        parameters:
            # a single parameter: specify its list of values directly
            user_agent: ["iOS/10.1", "iOS/10.2", "iOS/10.3"]
    testcase2_name:
        testcase: /path/to/testcase2
        parameters:
            # associated parameters: join the names with "-", e.g. user_agent-version, and give a list of value lists
            user_agent-version:
                - ["iOS/10.1", "10.1"]
                - ["iOS/10.2", "10.2"]
- The CSV file is referenced through the built-in parameterize (abbreviated to P) function
config:
    name: "demo"

testcases:
    testcase1_name:
        testcase: /path/to/testcase1
        parameters:
            # use this when there is a large amount of data (a sample CSV layout is sketched below)
            user_id: ${P(data/user_id.csv)}
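For reference, the CSV referenced by ${P(...)} is expected to use the parameter name as its header row; a hypothetical data/user_id.csv could look like this (the values are made up):
user_id
1001
1002
1003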
- Call a custom function in debugtalk.py to generate the parameter list; this is the most flexible method:
def get_account(num):
    accounts = []
    for index in range(1, num + 1):
        accounts.append(
            {"username": "user%s" % index, "password": str(index) * 6}
        )
    return accounts
- The function defined in debugtalk.py is referenced in the test case file as ${func_name(parameters)}:
config:
    name: "demo"

testcases:
    testcase1_name:
        testcase: /path/to/testcase1
        parameters:
            username-password: ${get_account(10)}
Keyword application
The keywords described here are used in, but not limited to, test case files.
- extract: extract values from the current interface's response for use as request parameters of later interfaces; used for parameter correlation (see the sketch after this list)
- validate: response assertions
- parameters: global parameters for data driving
- setup_hooks / teardown_hooks: call auxiliary functions defined in debugtalk.py before and after execution
- variables: variables defined in the current file, scoped to that file; a testsuite can override a testcase's variables
- base_url: when base_url is set in a testcase, the url field only needs the path part; without it, url must be a full address
- api: a testcase can reference API-layer definitions
- testcase: referencing another testcase works the same way as referencing an api
- output: the list of parameters output by the whole use case
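As a side note, an extraction expression such as content.content.token (used throughout this article) simply walks the parsed JSON response body. A rough Python equivalent, assuming a login response shaped like the one implied above, is:
import json

# Hypothetical login response body matching the shape implied by "content.content.token".
raw = '{"message": "ok", "content": {"token": "abc123"}}'
body = json.loads(raw)

# The leading "content" refers to the response body; each further dot descends one key.
token = body["content"]["token"]
print(token)  # -> abc123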
The use case files shown here are in YAML format.
An API demo example covering the basic keywords above; first, write the debugtalk.py functions:
def set_up():
    """Do preparation work at the config layer."""
    print("I'm preparing at the config layer.")


def tear_down():
    """Do cleanup work at the config layer."""
    print("I'm cleaning up data at the config layer.")


def output_request(request):
    """setup hook: print the request parameters."""
    print(request)


def output_response(response):
    """teardown hook: print the response result."""
    print(response.text)
Organize the files in two parts: a single API definition and a test case.
- A single API:
name: Login interface
base_url: Interface request address
variables:
    expected_status_code: 200
    mobile: your account
    passwd: your password
request:
    url: /xxx/xxx/passwordLogin
    method: POST
    json:
        appVersion: 9.0
        channel: 1
        deviceName: HUAWEI EML-AL00
        deviceType: android
        deviceid: 0795cd72-3e05-40ba-9733-5df1c1fa0970
        loginType: 0
        mobile: $mobile
        password: $passwd
        pushToken: 25b9b066dc939b9863fe9feb3fca654d
        systemVersion: 8.1.0
        zone: 86
validate:
    - eq: [status_code, $expected_status_code]
- The test case:
- config:
    name: "Combined keyword usage"
    variables:
        user_agent: 'iOS/10.3'
    verify: False
    setup_hooks:
        - ${set_up()}
    teardown_hooks:
        - ${tear_down()}
    output:
        - user_agent

- test:
    name: First login
    api: api/login.yml
    extract:
        token: content.content.token
    validate:
        - eq: ["status_code", 200]
    output:
        - token

- test:
    name: Second login
    api: api/login.yml
    setup_hooks:
        - ${output_request($request)}
    teardown_hooks:
        - ${output_response($response)}
    extract:
        token: content.content.token
    validate:
        - eq: ["status_code", 200]
    # the output here is invalid
    output:
        - token
Execute it and check the results in the console.
Project structure
The api directory, debugtalk.py, and the testcases directory must sit in the same project directory.
The scaffold created by hrun --startproject contains api, testcases, testsuites, .env, and so on, and a standalone api/xxx.yml can be run on its own. How, then, does the testcases layer reference the api layer without an api directory sitting inside testcases, and how can the testsuites layer reach through the testcases layer down to the api layer?
One might conclude that when a testsuite drives a testcase, the login API is actually called inside the testsuite and its result passed down to the testcases, but that is wrong.
Conclusion: httprunner --startproject generates api, testcases, and testsuites, and the relative references inside the YAML test case files under testsuites, testcases, and api are resolved from the project root, so those files need to be executed from the project root path <project> (the scaffold layout is sketched below).
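For reference, the scaffold generated by the 2.x --startproject command looks roughly like this (the exact file list may differ slightly between minor versions):
<project>/
    api/            # API-layer definitions (e.g. api/login.yml)
    testcases/      # testcase layer, references the api layer
    testsuites/     # testsuite layer, assembles testcases
    reports/        # generated HTML reports
    debugtalk.py    # auxiliary functions and hooks
    .env            # environment variables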
HttpRunner extension
Sometimes the framework's own API classes are also used to execute test cases programmatically.
- Execute test cases: the argument can be a directory, a file, or a mix of directories and case files, e.g. ["path", "test_file"]
from httprunner.api import HttpRunner
runner = HttpRunner(
failfast=True,
save_tests=True,
log_level="INFO",
log_file="test.log"
)
summary = runner.run(path_or_tests)
- You can also generate a test report by calling the API, because the run summary contains the elements used in the HTML report:
from httprunner import report
report_path = report.gen_html_report(
summary,
report_template="/path/to/custom_report_template",
report_dir="/path/to/reports_dir",
report_file="/path/to/report_file_path"
)