I think today's case will level you up: no matter how familiar you are with the JVM, this article should still surprise you a little. The point I want to make by sharing it is that no matter how strange a phenomenon looks, hold on and dig into it, and it will slowly make you stronger. I have always been curious about strange phenomena, so if you run into any weird problems, please send them to me, preferably JVM-related ones.

The problem

Because editing it would have been troublesome, the question was sent to me directly as screenshots, with the test source code attached. The problem description is very detailed and comes with code that reproduces it simply. I prefer questions asked this way: simplifying the problem is a very important step, it saves a lot of time and makes analysis much easier, and you don't even need to describe every phenomenon at length before the key point becomes clear.

Analysis

Simplifying the program

Some int[] objects were appearing out of nowhere, so I ran the reporter's example myself. The phenomenon was real. I kept simplifying his code and finally realized that the number of int[] objects is related to the byte array created at the end of the thread's run method.
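The original test code isn't reproduced here, but a minimal sketch of the pattern looks something like the following. The class name, thread count, and array sizes are my own assumptions, not the reporter's code; the point is simply that each thread keeps allocating short-lived byte arrays inside run, churning through its TLABs.

```java
// Hypothetical repro sketch: class name, thread count and sizes are
// assumptions, not the reporter's original demo.
public class TlabChurnDemo {
    // Volatile sink so escape analysis cannot eliminate the allocations.
    static volatile byte[] sink;

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) {
                    // The byte array allocated in run(): each allocation
                    // eats into the current thread's TLAB.
                    byte[] b = new byte[512];
                    if (j % 97 == 0) sink = b;
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println("done");
    }
}
```

While a program like this is running, `jmap -histo <pid>` shows the `[I` (int[]) line growing, even though the code never allocates an int array explicitly.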

Initial suspicion

jvisualvm communicates with the target process, so it is quite normal for some data to be transferred into that process. I was busy at the time, so that was my first reply. But the problem still reproduced even with the JMX port disabled, which you can verify yourself with `jmap -histo`.

Doubts again

Today I suddenly received another email from this classmate. He had also noticed that the byte array had a certain relationship with the int[] objects, and that `jmap -histo` reported a large number of int arrays while the program was running, but the count dropped after adding the live parameter. The drop is easy to explain: the live parameter triggers a full GC first, which reclaims dead objects.
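The effect of the live parameter can be illustrated with a small sketch of my own (not from the original report): `jmap -histo:live` forces a full GC before counting instances, and a full GC reclaims objects with no remaining strong references, much as the `System.gc()` request does here.

```java
import java.lang.ref.WeakReference;

public class LiveOptionDemo {
    public static void main(String[] args) throws InterruptedException {
        Object dead = new byte[1024 * 1024];
        WeakReference<Object> ref = new WeakReference<>(dead);
        dead = null; // drop the only strong reference

        // jmap -histo:live triggers a full GC before counting instances;
        // System.gc() is only a request, so retry a few times.
        for (int i = 0; i < 10 && ref.get() != null; i++) {
            System.gc();
            Thread.sleep(50);
        }
        System.out.println("reclaimed = " + (ref.get() == null));
    }
}
```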

Analyzing again

I then ran his demo again, following these steps:

  • Execute jmap -histo and see a large number of int arrays
  • Execute jmap -histo:live and see the int array count drop
  • Continue running jmap -histo and see the int array count climb again

At this point I started to suspect jmap itself. Was jmap the cause? So I went through its implementation, both the JDK side and the JVM side, looking for anywhere an int array could be created. The JDK side can basically be ruled out: I could not think of any logic there that creates an int array, since it does nothing but send a command to the target JVM process. So I focused on the JVM side, where the following logic runs when we take a dump with jmap or when a GC occurs.

During a GC or a memory dump, the JVM must traverse the heap, so it suspends the Java threads to prevent objects from being allocated mid-traversal. However, each thread allocates memory preferentially from its TLAB, a small per-thread block carved out of Eden. So that objects can be traversed quickly, with no gaps of unparseable free memory, the JVM fills the unallocated remainder of each TLAB with a dummy object, and (as the HotSpot source shows) that filler object is an int array. In other words, while the system is running it may be accompanied by a lot of these useless filler objects. Surprised, now that you've read this far?
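The arithmetic behind the filler can be sketched as follows. The header size here is an assumption for a typical 64-bit HotSpot with compressed oops; the exact figure is VM-dependent.

```java
// Sketch of TLAB retirement: the unused tail of the buffer is overwritten
// with a dummy int[] whose total footprint exactly covers the gap, so a
// heap walk sees one well-formed object instead of unparseable free space.
public class TlabFillerSketch {
    static final int ARRAY_HEADER_BYTES = 16; // assumed; VM-dependent
    static final int INT_BYTES = 4;

    // Length of the int[] filler needed to plug `unusedTailBytes` of TLAB.
    static int fillerLength(int unusedTailBytes) {
        return (unusedTailBytes - ARRAY_HEADER_BYTES) / INT_BYTES;
    }

    public static void main(String[] args) {
        // A 4 KB unused tail would be plugged with int[1020]:
        // (4096 - 16) / 4 = 1020
        System.out.println(fillerLength(4096));
    }
}
```

This also hints at why more allocation churn means more filler: every retired TLAB can leave behind one of these int arrays until a GC cleans them up.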

Can you explain the following questions?

  • The more threads there are, the faster the int array count grows
  • If no byte arrays are allocated, the int array count grows very slowly, or not at all
  • jmap is not the only trigger

This case is quite interesting, so I'll leave the questions above for you. You can answer them in the comments below; if no one answers, or no one answers correctly, I will post the explanation in a comment myself. Let's see how enthusiastic you are. Please forward this to more people.

Welcome to the PerfMa community. Recommended reading: Java Multithreading Knowledge Cheat Sheet (1), and CMS Tuning Notes for a Server with Massive Connections.