Preface

Previously, we explored nested objects and parent objects in ElasticSearch.

This time, let's use a benchmark to test ElasticSearch's performance.

The benchmark

Preparing the environment

Since no suitable Linux server is available, I will run the test on my own Windows machine.

  • Machine: Windows 10, 6 cores, 16 GB memory
  • ElasticSearch startup parameters: JVM heap 1 GB
  • The mapping file:
{
  "mapping": {
    "doc": {
      "properties": {
        "message": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        },
        "postDate": {
          "type": "date"
        },
        "user": {
          "type": "text",
          "fields": {
            "keyword": {
              "type": "keyword",
              "ignore_above": 256
            }
          }
        }
      }
    }
  }
}

Then we start ElasticSearch, Kibana, and Jmeter on the machine. Kibana is used to monitor ElasticSearch performance, and Jmeter is used to send the HTTP requests that index documents.

Configuring Jmeter

Create an Http Request component and configure the IP address and port required by ElasticSearch.

Note that at this point the request cannot be sent successfully: the default text/plain Content-Type header is not accepted.

The solution is to add an HTTP Header Manager and set Content-Type to application/json there.
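
Putting the two pieces together, the request Jmeter ends up sending looks roughly like the sketch below. The index name tweets, the host and port, and the field values are hypothetical; only the field names come from the mapping above, and the /index/type path assumes an ES 6.x-style doc type (on ES 7 it would be /tweets/_doc).

POST /tweets/doc HTTP/1.1
Host: 127.0.0.1:9200
Content-Type: application/json

{
  "user": "shane",
  "postDate": "2019-08-31T12:00:00",
  "message": "benchmarking ElasticSearch with Jmeter"
}

In Jmeter, the host, port, method, path, and body go into the HTTP Request sampler, and the Content-Type line is what the HTTP Header Manager contributes.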

ElasticSearch itself is already monitored in Kibana, but TPS can also be monitored on the Jmeter side by adding the Summary Report and View Results Tree listeners.

After Jmeter is started, the Index requests are sent successfully and new documents are created in ElasticSearch.
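
One quick way to confirm that documents are arriving is a count query against the (hypothetical) tweets index from the sketch above, sent from Kibana Dev Tools or any HTTP client:

GET /tweets/_count HTTP/1.1
Host: 127.0.0.1:9200

The count field in the JSON response should keep growing while Jmeter is running.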

Running into a problem

My initial attempt was to use 100 threads to stress ElasticSearch.

The following error information is displayed in the Summary Report.

java.net.BindException: Address already in use: connect

The port is already in use. My preliminary analysis was that too many threads were opened on the system, exhausting the available local ports. But which parameters need to be modified?
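
A rough way to confirm this suspicion on Windows is to count the connections stuck in TIME_WAIT while the test is running; if the number climbs into the tens of thousands, the ephemeral port range is being exhausted. A sketch, run from a cmd prompt:

netstat -ano | find /c "TIME_WAIT"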

There is a saying in the industry

What you're dealing with, someone's been dealing with for 800 years.

So, let's Google it. StackOverflow is a powerful source of knowledge, and it has a related question: apache-multiple-requisition-with-jmeter.

The StackOverflow answer points to a blog post with a solution to this problem.

  1. Open the Registry Editor by running regedit
  2. Navigate to the Parameters key under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip
  3. Set the MaxUserPort value there to 65534, as in the command sketched below
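
For reference, a sketch of the equivalent change from an elevated command prompt (the key path and value are the standard Windows TCP/IP parameters, not anything specific to this benchmark):

reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort /t REG_DWORD /d 65534 /f

The new setting typically only takes effect after a reboot.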

After this modification, Jmeter can be used for stress testing on Windows, at least for the time being.

That said, as I later increased the number of threads, the problem still appeared from time to time. StackOverflow also recommends deploying Jmeter on a Linux machine when stress testing other applications.

100 threads

Okay, with that out of the way, let's start the first stress test scenario.

The index monitoring module in Kibana shows that the Index Rate is around 300. At this point there are 400K documents, taking up 21.6MB of storage.

Check the ElasticSearch node status.

CPU usage is rising quickly, while the JVM remains in a relatively stable state.

The Segment Count keeps fluctuating: new segments are created as documents are indexed, and when the number of segments reaches a certain threshold ES automatically merges them, so the count keeps going up and down.
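
The per-segment detail behind that chart can also be pulled directly from the cat API (again assuming the hypothetical tweets index from the earlier sketch):

GET /_cat/segments/tweets?v HTTP/1.1
Host: 127.0.0.1:9200

Each row is one Lucene segment; watching rows appear and then collapse after a merge matches the up-and-down pattern in the chart.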

Latency stays within milliseconds, indicating that ES latency is still stable under this thread count and request load; this is not yet putting real pressure on ES.

500 threads

Clearly, the thread count above is still well within what the current ES instance can handle, so let's try increasing it to 500.

You can see that CPU usage increases again at 500 threads compared with the earlier run. Part of the bump is likely because Jmeter and ElasticSearch are running on the same machine; next time it would be better to run Jmeter on a separate machine from the ElasticSearch under test. It is not a big problem here, though, since CPU usage is only about 40%, far below the CPU bottleneck.

You can see that JVM usage continues to increase as new documents are indexed.

In Index Memory - Lucene, the Lucene Total memory keeps increasing while Doc Values barely changes; the likely cause becomes clear below.

The description of Index Memory - Lucene notes that this Lucene memory occupies a portion of the JVM's memory.

So what causes the Index Memory - Lucene increase? The second chart shows that the Lucene Total and Terms curves keep roughly the same spacing, so the growth can be attributed to the increase in Terms.
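
The same breakdown can be fetched outside Kibana via the index stats API (still assuming the hypothetical tweets index; field names may vary across ES versions):

GET /tweets/_stats/segments HTTP/1.1
Host: 127.0.0.1:9200

On the 6.x-era versions that still use a doc mapping type, the segments section of the response includes memory_in_bytes, terms_memory_in_bytes, and doc_values_memory_in_bytes, which line up with the Lucene Total, Terms, and Doc Values curves.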

Back on the index monitoring page, you can see that the Request Rate has increased significantly, from about 300 tps to about 600 tps.

1000 threads

Two problems showed up at 1000 threads. First, Jmeter started flooding the following error; tuning the registry value again didn't help, so the proper fix is probably to move Jmeter to a Linux environment.

java.net.BindException: Address already in use: connect

The second problem is the need to reduce Jmeter's impact on the machine's CPU; otherwise the test results may not be accurate.

Still, without removing the above factors, a TPS of around 620 seems to be the limit.

About writing

From now on, I will write an article here every day, with no limit on subject matter, content, or word count, and try to put my daily thoughts into it.

If this article has helped you, give it a like, or better yet, follow me.

If neither of those, you could leave a comment with what you want to say after reading; effective feedback and your encouragement are the biggest help to me.

I'm also going to start maintaining my blog again; feel free to visit and look around.

I'm Shane. Today is August 31, 2019, the thirty-eighth day of the hundred-day writing project: 38/100.