I have previously written two articles sharing my own testing of several performance testing frameworks: a first look at performance testing framework comparison, and Which Performance Framework Is Strong: a JMeter, K6, Locust, and FunTester horizontal comparison.
In the last test, I started a service on the LAN based on the FunTester Moco Server framework, and the single-machine QPS hit a bottleneck around 15K. I preliminarily determined that this was caused by LAN bandwidth, but due to time constraints I did not investigate in depth. Because a friend wanted to know how the Gatling performance testing framework compares with the others in practice, I set up a local Moco service over the weekend to test the performance of K6, Gatling, and FunTester at the 100K-QPS level.
Preparation
The machine's hardware is a 2.6 GHz six-core Intel Core i7. CPU statistics come from Activity Monitor, where 100% means one fully consumed CPU thread, so total CPU capacity is notionally 1200%; the memory figures also come from Activity Monitor.
First of all, I used the FunTester Moco Server test framework (see the FunTester test framework architecture diagram) to set up a test service locally, with only a single fallback interface. The Groovy script looks like this:
```groovy
import com.mocofun.moco.MocoServer

class TestDemo extends MocoServer {

    static void main(String[] args) {
        def log = getServerNoLog(12345)
        log.response("hello funtester!!!")
        def run = run(log)
        waitForKey("fan")
        run.stop()
    }
}
```
Since the service runs on the same machine, network bandwidth is essentially no longer a concern. In a self-test, the measured QPS reached up to 120,000, so this round of results is basically at the 100K-QPS level. The single-machine thread count starts low, at 1 concurrency; by 10 concurrency the local CPU is already saturated (the service under test consumes about 25% of the CPU).
Since Scala (used by Gatling) and Groovy (used by the FunTester testing framework) are both JVM-based languages, I tested with default configuration and did not tune JVM parameters, mainly to match Gatling, which also runs with default JVM parameters.
Script preparation
K6
The script is the same as in the earlier article Which Performance Framework Is Strong: JMeter, K6, Locust, FunTester Horizontal Comparison.
FunTester
Groovy SDK version: 3.0.8. JVM heap memory was set to 1 GB; other parameters were left at defaults.
The script is the same as in the earlier article Which Performance Framework Is Strong: JMeter, K6, Locust, FunTester Horizontal Comparison.
Gatling
The script is adapted from the Gatling template, as follows:
```scala
package computerdatabase

import scala.concurrent.duration._
import io.gatling.core.Predef._
import io.gatling.http.Predef._

class FunTester extends Simulation {

  // baseUrl without the /m suffix, so the request below resolves to /m
  val httpProtocol = http
    .baseUrl("http://localhost:12345")

  val scn = scenario("FunTester").repeat(120000) {
    exec(http("FunTester").get("/m"))
  }

  setUp(scn.inject(atOnceUsers(10)).protocols(httpProtocol))
}
```
Running the tests
As I mentioned earlier, it doesn't make much sense to keep increasing concurrency, because QPS is so high that even very few threads max out the local CPU. So I ran the local tests at lower thread counts.
A note on terminology: what some frameworks call users, others call threads or concurrency. In this article I will uniformly refer to it as concurrency.
Since every framework reports average response time (RT) in milliseconds, I record the average RT as 1 ms whenever it is below 1 ms.
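Those sub-millisecond RTs are plausible even at five-figure QPS. As a rough sanity check (my own illustration, not part of any of the frameworks), Little's Law for a closed-loop load model says concurrency ≈ QPS × RT, so the implied average RT is concurrency divided by QPS:

```java
// Sanity check via Little's Law (closed-loop model): RT_ms ≈ concurrency / QPS * 1000.
// Class and method names are my own illustration, not part of any framework.
public class LittleLawCheck {

    // Implied average response time in milliseconds for a given load level.
    static double impliedRtMs(int concurrency, double qps) {
        return concurrency / qps * 1000.0;
    }

    public static void main(String[] args) {
        // FunTester's 10-concurrency result from the tables below: ~91,360 QPS.
        double rt = impliedRtMs(10, 91360);
        System.out.printf("implied RT = %.3f ms%n", rt); // ~0.109 ms, recorded as 1 ms
    }
}
```

At every load level measured here, the implied RT stays well below 1 ms, which is why the RT column in the tables is uniformly 1.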
1 concurrent
Test results:
Framework | CPU (%) | Memory (MB) | QPS | RT (ms) |
---|---|---|---|---|
K6 | 136.75 | 97 | 10543 | 1 |
Gatling | 88.01 | 344 | 19506 | 1 |
FunTester | 56.12 | 539 | 18859 | 1 |
I was a little surprised that the Gatling testing framework consumed quite a lot of CPU while computing results and generating the test report. K6's measured QPS is low here, which was also a bit unexpected. FunTester's memory consumption is high, but acceptable.
5 concurrent
Test results:
Framework | CPU (%) | Memory (MB) | QPS | RT (ms) |
---|---|---|---|---|
K6 | 449.15 | 139.5 | 37219 | 1 |
Gatling | 341.19 | 350.5 | 63624 | 1 |
FunTester | 243.19 | 945.0 | 71930 | 1 |
Gatling's result calculation and report generation consumed about the same CPU as in the single-thread case, around 100%, but took noticeably longer. So far FunTester's performance is fine; I attribute its high memory usage to storing test data in memory during the run. The K6 framework measured roughly half the QPS of the other two.
10 concurrent
Test results:
Framework | CPU (%) | Memory (MB) | QPS | RT (ms) |
---|---|---|---|---|
K6 | 702.05 | 299.9 | 61087 | 1 |
Gatling | 524.70 | 350.2 | 94542 | 1 |
FunTester | 460.13 | 1170 | 91360 | 1 |
Gatling's report generation took rather long; the time needed to process 3 million records is a little hard to accept. K6 consumed a bit more CPU at this point, but its QPS remained low. FunTester now occupies more than 1 GB of memory.
At this point the machine's CPU usage exceeds 90%. I added a 20-concurrency round to see how things behave once CPU headroom is nearly exhausted.
20 concurrent
Test results:
Framework | CPU (%) | Memory (MB) | QPS | RT (ms) |
---|---|---|---|---|
K6 | 718.74 | 370.0 | 75980 | 1 |
Gatling | 585.97 | 350.0 | 113355 | 1 |
FunTester | 528.03 | 1770 | 104375 | 1 |
The tests are complete. K6's performance in this round was somewhat poor; a CPU bottleneck likely caused its low QPS. Gatling and FunTester, both JVM-based, produced essentially the same numbers, though FunTester consumed more resources. I don't think this has a great impact, so I will not optimize it for now.
PS: I privately tested higher concurrency levels; the results were similar to 20 concurrency.
Conclusion
One phenomenon stood out in this test: Gatling's QPS is slightly higher than FunTester's. I summarize the reasons as follows:
- FunTester does more adaptation work when marking objects
- FunTester performs more synchronous checks, reflected in its termination conditions
- FunTester stores test data synchronously
What I have observed is that the FunTester framework uses more memory, Gatling creates more threads (I suspect it handles some work asynchronously), and Gatling does not carry compatibility features at the business level (such as object marking or personalized error logging).
Based on this, I have listed several FunTester optimization directions:
- Make non-essential processing asynchronous
- Try changing the test metadata storage mode
- Phase out business-related compatibility code (completed)
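The first two directions can be sketched together. The snippet below is a minimal illustration of my own (the class name and structure are hypothetical, not FunTester internals): load-generating threads push per-request costs into a queue, and a single background thread drains it, so the hot path never touches the shared result list.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of "make non-essential processing asynchronous":
// worker threads only do an O(1) queue offer; a drainer thread stores data.
public class AsyncCostRecorder {
    private final BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();
    private final List<Integer> costs = new ArrayList<>();
    private final Thread drainer;
    private volatile boolean running = true;

    public AsyncCostRecorder() {
        drainer = new Thread(() -> {
            // Keep draining until stopped AND the queue is empty.
            while (running || !queue.isEmpty()) {
                try {
                    Integer c = queue.poll(10, TimeUnit.MILLISECONDS);
                    if (c != null) costs.add(c);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    break;
                }
            }
        });
        drainer.start();
    }

    // Called from load threads; no contention on the costs list itself.
    public void record(int costMs) {
        queue.offer(costMs);
    }

    // Stop the drainer and return everything recorded so far.
    public List<Integer> stop() {
        running = false;
        try {
            drainer.join(); // join gives happens-before visibility of costs
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return costs;
    }

    public static void main(String[] args) {
        AsyncCostRecorder rec = new AsyncCostRecorder();
        for (int i = 0; i < 1000; i++) rec.record(i % 5);
        System.out.println(rec.stop().size()); // 1000
    }
}
```

Whether this actually helps depends on the cost of the queue hand-off versus the synchronous list add; it mainly pays off when storage involves heavier work than an in-memory append.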
First take a look at the core execution code:
```java
@Override
public void run() {
    try {
        before();
        long ss = Time.getTimeStamp();
        long et = ss;
        while (true) {
            try {
                executeNum++;
                long s = Time.getTimeStamp();
                doing();
                et = Time.getTimeStamp();
                int diff = (int) (et - s);
                costs.add(diff);
            } catch (Exception e) {
                logger.warn("Mission failed!", e);
                errorNum++;
            } finally {
                // Stop on reaching the limit (times mode counts executions,
                // otherwise elapsed time), a global abort, or a status flag.
                if ((isTimesMode ? executeNum >= limit : (et - ss) >= limit)
                        || ThreadBase.needAbort() || status())
                    break;
            }
        }
        long ee = Time.getTimeStamp();
        if ((ee - ss) / 1000 > RUNUP_TIME + 3)
            logger.info("Thread: {}, executions: {}, errors: {}, total time: {} s",
                    threadName, executeNum, errorNum, (ee - ss) / 1000.0);
        // Merge this thread's data into the global collections.
        Concurrent.allTimes.addAll(costs);
        Concurrent.requestMark.addAll(marks);
    } catch (Exception e) {
        logger.warn("Mission failed!", e);
    } finally {
        after();
    }
}
```
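For context on where the table numbers come from: once each thread has merged its per-request costs into the global collections, QPS and average RT can be derived from the merged data. A simplified sketch of that derivation (the `Stat` class below is my own illustration, not FunTester's API):

```java
import java.util.List;

// Simplified, hypothetical derivation of the two headline metrics:
// QPS from request count over wall-clock time, RT from per-request costs.
public class Stat {

    // requestCount requests completed in wallMs milliseconds of wall-clock time.
    static double qps(int requestCount, long wallMs) {
        return requestCount * 1000.0 / wallMs;
    }

    // Average of the per-request costs (in ms), as collected by each thread.
    static double avgRtMs(List<Integer> costs) {
        return costs.stream().mapToInt(Integer::intValue).average().orElse(0);
    }

    public static void main(String[] args) {
        List<Integer> costs = List.of(1, 2, 1, 1, 3); // 5 requests in 1 second
        System.out.printf("QPS=%.0f avgRT=%.1fms%n", qps(5, 1000), avgRtMs(costs));
    }
}
```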
Have Fun ~ FunTester!
FunTester: Tencent Cloud Author of the Year, BOSS Zhipin contract author, official GDevOps media partner, non-famous test developer. Welcome to follow.
- FunTester test framework architecture diagram
- FunTester will share the second video review
- Three ways to first meet Postman, SayHi
- How to become a Full stack automation engineer
- JsonPath utility class unit test
- Selenium Automation: Code and no-code testing
- Automated Testing trends in 2021
- Java thread synchronizes three musketeers
- Bind mobile phone number performance test
- Java multithreaded programming applied in JMeter
- Moco framework interface hit ratio statistics practice
- Gets the Java utility class for the JVM dump file
- Handle JMeter variables with Groovy