Framework introduction

1. HttpRunner is a universal testing framework for the HTTP(S) protocol. You only need to write and maintain one set of YAML/JSON scripts to cover automated testing, performance testing, online monitoring, continuous integration, and other testing needs.

2. Locust is an easy-to-use distributed user load testing tool. It is used to load test a web site (or other system) and determine how many concurrent users the system can handle. HttpRunner reuses Locust to run performance tests directly, without any modification to the YAML/JSON scripts.

3. HttpRunner user manual: cn.httprunner.org/

Environment installation

1. Install HttpRunner: pip install httprunner==2.2.5

2. Install har2case: pip install har2case

3. Run hrun -h / hrun -v and har2case -h / har2case -v to check whether the installation succeeded.

4. Install Locust: pip install locustio

New commands

After HttpRunner is installed successfully, the following commands become available:

1. httprunner: the core command

2. hrun: an alias for httprunner, with the same function

3. locusts: runs performance tests based on Locust

4. har2case: a helper tool that converts the standard HAR format (HTTP Archive) into YAML/JSON test cases

Generating test cases

(1) Use Fiddler/Charles to capture the interface traffic and export the result as an xx.har file

(2) Convert the exported xx.har file into a JSON or YAML file

HAR to YAML: har2case xx.har -2y (or har2case xx.har --to-yml)

(3) A YAML file produced by a successful conversion contains fields like the following:

status_code: the response status code to validate

headers.Content-Type: validates the Content-Type of the response headers

content.msg: validates a keyword in the response content

config: global configuration for the entire set of test cases, including variables and name

test: a single test step (test case)

name: the name of the test (case name)

request: the HTTP request sent by the test, including:

url: the request path (if base_url is defined in config, base_url + url is used)

method: the request method, e.g. POST, GET

headers: the request headers

json: the request body data, in JSON format

validate: the validations performed after the request completes. If every validation passes, the test passes; otherwise it fails.
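Below is a minimal sketch of what a converted test case file might look like (the URL, field values, and expected responses are illustrative, not taken from a real capture):

- config:
    name: testcase converted from xx.har
    base_url: http://example.com
    variables: {}

- test:
    name: /api/login
    request:
        url: /api/login
        method: POST
        headers:
            Content-Type: application/json
        json:
            username: admin
            password: "123456"
    validate:
        - eq: ["status_code", 200]
        - eq: ["headers.Content-Type", "application/json"]
        - eq: ["content.msg", "login success"]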

Parameterization:

testcases:
-
    name: call demo_testcase with data 1
    testcase: testcases/test_login.yml
    parameters:
        # username: ["admin1", "admin"]
        - username:
            - ["admin1"]
            - ["admin"]

Nesting testcases within testcases

A test case set is an unordered collection of test cases, and the cases in the set should be independent of each other, with no ordering dependencies. What if there really is an ordering dependency, for example login before placing an order? The correct approach is to make login a preceding step of the ordering test case, as in the example below.

- config:
    name: order product
- test:
    name: login
    testcase: testcases/login.yml
- test:
    name: add to cart
    api: api/add_cart.yml
- test:
    name: make order
    api: api/make_order.yml

Variable scope:

Inside a test case, HttpRunner divides the variable space (context) into two layers.

  • config: the global configuration of the whole test case, scoped to the entire test case;
  • test: the variable space (context) of each test step inherits from or overrides what is defined in config;
    • if a variable is defined in config but not in test, the test step inherits it;
    • if a variable is defined in both config and test, the test step uses its own value.
  • The variable space of each test step is independent; steps do not affect each other.
  • To pass values across test steps, use the extract keyword; values can only be passed forward to later steps (see the sketch below).
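Below is a minimal sketch of passing a value between steps with extract (the login URL, the token field, and the header name are illustrative assumptions):

- test:
    name: login
    request:
        url: /api/login
        method: POST
        json: {"username": "admin", "password": "123456"}
    extract:
        - token: content.token         # pull the token out of the response body
    validate:
        - eq: ["status_code", 200]

- test:
    name: get user info
    request:
        url: /api/user/info
        method: GET
        headers:
            token: $token              # reference the value extracted in the previous step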

Extracting values from the response headers and response body:

// response headers:
{
    "Content-Type": "application/json",
    "Content-Length": 69
}

// response body:
{
    "success": false,
    "person": {
        "name": {
            "first_name": "cs",
            "last_name": "css"
        },
        "age": 29,
        "cities": ["Guangzhou", "Shenzhen"]
    }
}

The corresponding fields can then be extracted as follows:

"Headers. The content-type = >""application/json""Headers. The content - length = >"69"Body. Success"/"content.success"/"text.success=>false

"content.person.name.first_name"= >"cs"
"content.person.age"= >29
"content.person.cities"= > ["Guangzhou"."Shenzhen"]
"content.person.cities.0"= >"Guangzhou"
"content.person.cities.1"= >"Shenzhen"
Copy the code

As you can see, with the dot (.) operator we can navigate down to a specific field:

  • When the next level is a dictionary, use .key to specify the next node, for example .person selects the person node under content;
  • When the next level is a list, use .index to specify the next node, for example .0 selects the first element under cities.

Extracting content from HTML (regular expressions)
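When the response body is HTML rather than JSON, a field can be extracted with a regular expression containing exactly one capture group; the expression is matched against the response text. A minimal sketch (the page and the pattern are illustrative):

- test:
    name: get index page
    request:
        url: /index.html
        method: GET
    extract:
        - page_title: "<title>(.*)</title>"    # regex with one capture group, matched against the HTML body
    validate:
        - eq: ["status_code", 200]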

File upload scenario
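HttpRunner 2.x supports multipart file upload through the upload keyword (this requires the requests-toolbelt and filetype packages to be installed). A minimal sketch, with an assumed upload endpoint and file path:

- test:
    name: upload file
    request:
        url: /api/upload
        method: POST
        headers:
            Cookie: session=xxx
        upload:
            file: "data/test_file.txt"    # path of the file to upload, relative to the project root
            field1: "value1"              # extra form fields, if the interface needs them
    validate:
        - eq: ["status_code", 200]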

Generating the project scaffold

1. Go to drive D

2. Run hrun --startproject <project name>

testcases: stores test cases

testsuites: stores test suites

reports: automatically generated reports are stored in this directory

3. Go into the project directory and view the generated files and directories

You can see the generated directories and a .py file: api, testcases, testsuites, and debugtalk.py
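For reference, a freshly generated project (named demo here for illustration) typically looks like this; depending on the HttpRunner version, the reports directory may only appear after the first run:

demo/
├── api/            # individual API definitions
├── testcases/      # test cases
├── testsuites/     # test suites
├── reports/        # generated HTML reports
├── debugtalk.py    # custom helper functions (hooks, custom checks, etc.)
└── .env            # environment variables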

4. Place the converted test case file (the YAML/JSON generated from xx.har) into the testcases directory of the project folder

(1) Run a single test case: specify a specific xx.yml or xx.json file

hrun <file path>  or  hrun <file path> --log-level info

(2) Run multiple test cases: hrun <testcases directory of the project>

(3) Run the testsuites directory: runs all YAML test cases in the testsuites directory

Parameterization: there are three ways to supply multiple parameters, as sketched below
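A sketch of the three parameter sources in a config block (the CSV file name and the debugtalk.py function name are illustrative assumptions):

config:
    name: parameterization demo
    parameters:
        # 1. a list written directly in the YAML file
        - user_agent: ["iOS/10.1", "iOS/10.2"]
        # 2. rows read from a CSV data file placed in the project's data directory
        - username-password: ${P(account.csv)}
        # 3. values returned by a custom function defined in debugtalk.py
        - app_version: ${get_app_version()}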

Viewing the report

An HTML test report is generated in the project's reports folder, including logs that record the request and response data. Notes:

(1) For test cases with strong dependencies, once the current test case fails, all subsequent test cases will fail as well, so there is no need to keep executing them. To stop on the first failure, run:

$ hrun filepath/testcase.yml --failfast

(2) View more detailed response information in run logs

$ hrun docs/data/demo-quickstart-6.json --log-level debug
$ hrun xx.yaml --log-level debug

(3) Save the intermediate data from a run to a log file: hrun xx.yaml --log-file xx.log

Interface performance testing with locusts

1. Run the performance test cases: locusts -f <file path (relative path)> --processes. (To use multiple processor cores, Locust needs the --processes parameter; it starts one master and multiple slaves at a time. If no value is given after --processes, the number of slaves started equals the number of CPU cores on the machine.)

2. After the cases start executing, open http://localhost:8089/ in a browser to view the Locust web page

  • Number of total users to simulate: the number of concurrent users
  • Hatch rate (users spawned/second): how many users are started per second
  • Host: the host address of the interface under test

Locusts then performs the requests.

3. View the performance test report:
(1) RPS: requests per second
(2) Response times, in ms
(3) Number of virtual users over time: stabilizes at a fixed value after a certain period
(4) Check the CPU pressure under concurrency: similar to load balancing, CPU usage
(5) Parameter description

host: http://10.0.10.27:10080    request path (name): /api/login

(6) Script execution result

As you can see from the test results, there are not actually 10 requests per second (RPS is less than 10), because there is a wait mechanism in locustfile.py: min_wait = 1000 (minimum wait time 1 s) and max_wait = 5000 (maximum wait time 5 s). Changing max_wait to 1000 brings the RPS close to 10. When locusts is started, a locustfile.py file is automatically generated in the current directory; this is the Locust script file.

Execute the locustfile.py script file with locust:

locust -f locustfile.py

The test result shows RPS = 9.9, which is close to 10.
