Background

Recently, the team has wanted to strengthen its testing, reduce repetitive work by automating what can be automated, and at the same time start putting performance requirements on the interfaces the developers write.

Since no one on the team has experience in test development, the early approach is tool-based, with programming taking a back seat.

Speaking of tools, JMeter is widely used and can handle both automated testing and performance testing.

I will also share my experience with JMeter here from time to time.

As always, the beginning is a simple one.

So this article mainly takes one HTTP interface, configures JMeter to call it successfully, asserts whether the request succeeded, and checks the result.

Finally, the test plan will be run from the CLI, outputting a report in HTML format.

Here we go!

Prepare an HTTP interface

Here we create an ASP.NET Core Web API project and write a simple interface that returns a JSON string.

    using Microsoft.AspNetCore.Mvc;

    [ApiController]
    [Route("[controller]")]
    public class RunController : ControllerBase
    {
        // GET /run: returns a simple JSON body with a code and a message
        [HttpGet]
        public IActionResult Get()
        {
            return Ok(new { code = 0, msg = "ok" });
        }
    }

Start the project, exposing port 8532.
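
One way to do this (just an example; the port may also be configured in launchSettings.json, and the exact command depends on how the project is set up) is to pass the URL when starting the app:

    dotnet run --urls "http://localhost:8532"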

Now it’s time to configure and debug this interface in JMeter.

Adding a Thread Group

Add a Thread Group to the test plan

After adding, you can see the following

In the Thread Group, the most important configuration is Thread Properties, but we will not change it here and will leave the defaults.

That is because the first thing to do is to debug the interface under test! If the request itself is not working, there is no point in configuring more threads!

So I just changed the name to sample1.

With the Thread Group in place we have a skeleton; what follows is filling it with content to flesh it out.

Since we are primarily testing the HTTP interface, most of the content will be HTTP related.

Adding HTTP Request Defaults

Add HTTP Request Defaults to the thread group.

This element generally holds settings that rarely change, typically the host of the interface; once it is specified here, you no longer need to fill in the host on each request.

Now fill in the IP and port of the test interface.
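
For the API above, running locally, the relevant fields in the Web Server section would look something like this (localhost is an assumption here; use the actual address of your test environment):

    Server Name or IP: localhost
    Port Number:       8532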

Adding an HTTP Header Manager

Most interfaces require request headers; the most common one is Content-Type.

Here you can add an HTTP Header Manager to manage these request headers.

The test interface uses JSON, so Content-Type needs to be set to application/json.
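
The entry added in the HTTP Header Manager is just a name/value pair:

    Name:  Content-Type
    Value: application/json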

The next step is the actual request.

Adding an HTTP Request

Add an HTTP Request Sampler to the thread group.

In this step, fill in the information for the interface under test.

Take the test interface as an example:

  1. You don’t need to fill in the Web Server section, since it is already configured in HTTP Request Defaults.
  2. The test interface is a GET request, and its relative path is /run.
  3. The parameters are a=b&c=d; since this is a GET request, they can simply be appended to the relative path.

At this point, the request content for this interface is ready.
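
For reference, the request JMeter will send here is roughly equivalent to the following command (an illustration only, using the host and port assumed earlier):

    curl "http://localhost:8532/run?a=b&c=d"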

How do you determine if the request for this interface was successful?

Whether an interface call succeeded is generally judged in one of a few ways: either a 2xx status code counts as success, or the returned JSON contains a code field and its value is used to judge the result.

The test interface above falls into the second category, so what we need to check is the value of code in the returned content.

Thinking back to unit tests, there is an assertion step to determine whether the expected results were achieved.

JMeter has assertions as well; here we choose the JSON Assertion.

Adding a JSON Assertion

Add a JSON Assertion to the thread group

The sample interface is successful when it returns code 0, so you can fill it in this way

The first part asserts that the JSON path exists; the second part additionally asserts its value, where you fill in the expected value.
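
Concretely, the assertion settings look something like this (field labels are approximate and may differ slightly between JMeter versions):

    Assert JSON Path exists:   $.code
    Additionally assert value: checked
    Expected Value:            0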

Now that we have the request and the assertion, how do we see the results?

This is where the magic listener comes in.

Adding listeners

There are many types of listeners. Here we choose View Results Tree and Aggregate Report.

At this point, the entire test plan looks like this:

Run it, open the result tree, and you can see that the test interface has run successfully, and the code returned is indeed 0.

If you set the expected code in the JSON Assertion to 1, you will see an assertion error in the results tree:

Let’s see what the aggregate report looks like:

Average response time, median, error rate, and throughput are the usual metrics.

Does this end there?

Of course not. From the results above it is clear that we have only made a single request to the interface; how do we actually put it under load?

In fact, up to the point of seeing a successful request that returns normal data, we had only debugged the interface configuration; we had not really load tested the interface.

We created the Thread Group with the default value of 1. Now we can adjust its configuration to actually apply load.

Let’s say you set the number of threads to 100 and loop 100 times.
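
In the Thread Group’s Thread Properties, that corresponds to roughly the following (the ramp-up period is not discussed here, so set it to whatever suits your scenario):

    Number of Threads (users): 100
    Loop Count:                100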

Running JMeter from the CLI

When you start JMeter, you will see the following notice.

Let’s run the load test not from the graphical interface, but from the command line.

To do this from the command line, you still need the configuration, which is simply the JMX file in which the test plan is saved.

Here are some common parameter descriptions:

Parameter   Meaning
-n          Run JMeter in CLI (non-GUI) mode
-t          Path to the JMX file containing the test plan
-l          Path to the JTL file that records the test results
-j          Path to the JMeter run log file
-g          Generate the report from an existing results file only, without running a test
-e          Generate the test report in HTML format after the load test finishes
-o          Output folder for the generated report (must not exist yet or be empty)

Let’s run it from the CLI and generate an HTML report.

    .\jmeter.bat -n -t ..\..\jmeterfiles\jmx\sample1.jmx -l result\sample1.jtl -e -o result\sample1

sample1.jmx is the test plan file saved above.

Also look at the test report output

Open index.html to see the test report.

The contents of this panel are quite detailed.
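
As a side note, since the .jtl results file is kept, the HTML dashboard can also be regenerated later without rerunning the test, roughly like this (the output folder name here is just an example and must be empty or not yet exist):

    .\jmeter.bat -g result\sample1.jtl -o result\sample1-report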

Wrapping up

This article is fairly basic; it just walks through the basic operation of JMeter.

Common topics such as parameterization and referencing custom JAR packages are not covered here.

Overall, JMeter works reasonably well for these test scenarios.