Dynamic object age determination
The example in this article continues from the previous article; if anything is unclear, please read that article first.
To better adapt to the memory conditions of different applications, the HotSpot virtual machine does not always require an object's age to reach the value of -XX:MaxTenuringThreshold before it can be promoted to the old generation. If the total size of all objects of the same age in the Survivor space is greater than half of the Survivor space, objects whose age is greater than or equal to that age can enter the old generation directly, without waiting for the age required by -XX:MaxTenuringThreshold.
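The rule can be sketched in a few lines of Java (a simplified illustration of the rule as stated above, not HotSpot source code; the method and parameter names are my own):

static int effectiveTenuringThreshold(long[] bytesOfAge,      // bytesOfAge[a] = total size of age-a objects in Survivor
                                      long survivorCapacity,  // capacity of one Survivor space, in bytes
                                      int maxTenuringThreshold) {
    for (int age = 0; age < bytesOfAge.length; age++) {
        // if all objects of this age together occupy at least half of Survivor,
        // this age and every older age are promoted, ignoring MaxTenuringThreshold
        if (bytesOfAge[age] >= survivorCapacity / 2) {
            return age;
        }
    }
    return maxTenuringThreshold; // otherwise fall back to -XX:MaxTenuringThreshold
}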
Let’s look at the memory after execution:
Heap
def new generation total 9216K, used 4316K [0x00000000fec00000, 0x00000000ff600000, 0x00000000ff600000)
eden space 8192K, 52% used [0x00000000fec00000, 0x00000000ff037008, 0x00000000ff400000)
from space 1024K, 0% used [0x00000000ff400000, 0x00000000ff4002d8, 0x00000000ff500000)
to space 1024K, 0% used [0x00000000ff500000, 0x00000000ff500000, 0x00000000ff600000)
tenured generation total 10240K, used 4949K [0x00000000ff600000, 0x0000000100000000, 0x0000000100000000)
the space 10240K, 48% used [0x00000000ff600000, 0x00000000ffad5400, 0x00000000ffad5400, 0x0000000100000000)
Metaspace used 3265K, capacity 4496K, committed 4864K, reserved 1056768K
class space used 354K, capacity 388K, committed 512K, reserved 1048576K
The old generation is 48% used, 8% more than the 40% that the 4MB allocation3 object alone would account for in the 10MB old generation. In other words, both allocation1 and allocation2 went directly into the old generation without waiting for the critical age of 15, because the two objects together occupy 512KB and are of the same age, satisfying the rule that objects of the same age total at least half of the Survivor space.
Let’s explain this by walking through the following example code:
private static final int _1MB = 1024 * 1024;

public static void testTenuringThreshold() {
    byte[] allocation1, allocation2, allocation3, allocation4;
    allocation1 = new byte[_1MB / 4];
    // allocation1 + allocation2 together reach half of the Survivor space
    allocation2 = new byte[_1MB / 4];
    allocation3 = new byte[4 * _1MB];
    allocation4 = new byte[4 * _1MB];
    allocation4 = null;
    allocation4 = new byte[4 * _1MB];
}
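For reference, the heap layout printed above (a 10MB new generation split 8:1:1 between Eden and the two Survivor spaces, a 10MB tenured generation, and the Serial collector) corresponds to running this method with VM options like the following; the exact flags are my assumption based on that output, not stated in the article:

-verbose:gc -XX:+PrintGCDetails -Xms20M -Xmx20M -Xmn10M
-XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=15 -XX:+UseSerialGC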
allocation1, allocation2, and allocation3 can all fit in Eden. When allocation4 requests its allocation, Eden no longer has enough space, which triggers the first GC.
allocation1 and allocation2 enter S1, while allocation3 goes directly into the old generation:
Then the last two lines of code are executed:
allocation4 = null;
allocation4 = new byte[4 * _1MB];
A second GC is triggered. Since allocation1 and allocation2 together have reached 512KB, half of the 1MB Survivor space, and they are of the same age, they satisfy the rule that objects of the same age total half of the Survivor space. By the dynamic age rule, they go directly into the old generation this time:
Space allocation guarantee
As we mentioned earlier, if the surviving objects in Eden cannot fit into a Survivor space, they enter the old generation directly through the space allocation guarantee.
But!!! Have you ever thought about this question: what if the old generation doesn’t have enough space for these objects either? What then? Don’t worry, we’ll go through it step by step.
The old generation has enough space
First: before a Minor GC occurs, the virtual machine checks whether the maximum available contiguous space of the old generation is greater than the total space of all objects in the new generation. If this condition holds, the Minor GC is guaranteed to be safe.
In the extreme case, every object survives the Minor GC and all of them must enter the old generation. If the old generation's remaining space is determined to be larger than the total size of all those objects, promoting them is safe.
The old generation does not have enough space
However, if before the Minor GC the available memory of the old generation is less than the total size of the new generation's objects, it is possible that all new-generation objects survive the collection and need to move into the old generation, yet the old generation cannot hold them all. So when the JVM finds, before a Minor GC, that the old generation's available memory is less than the total size of the new generation's objects, it looks at one parameter: whether **-XX:HandlePromotionFailure** is set. If it is set, the JVM goes on to check whether the maximum available contiguous space of the old generation is greater than the average size of objects promoted to the old generation across previous collections. If that average size is less than the available old-generation memory, a Minor GC is attempted, even though this Minor GC is risky; if the available memory is less than the average size, or -XX:HandlePromotionFailure is not set, a Full GC is performed first instead.
For example, if after each previous Minor GC roughly 10MB of objects ended up in the old generation on average, then as long as the old generation currently has more than 10MB available, it is very likely that this Minor GC will also promote about 10MB of objects, and the old-generation space will be sufficient.
Taking the historical average is still a gamble: if the number of objects surviving some Minor GC is significantly higher than the historical average, the guarantee will still fail. When a guarantee failure does happen, a Full GC has to be launched after all, which means a long pause. Although the detour taken on a guarantee failure is the longest, the -XX:HandlePromotionFailure switch is usually kept on to avoid Full GCs happening too often.
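To tie the steps above together, here is a minimal sketch in Java of the pre-Minor-GC decision just described (the method and parameter names are illustrative, not HotSpot internals):

// Sketch of the space allocation guarantee check before a Minor GC
static boolean minorGcIsSafe(long oldGenContiguousFree,   // max contiguous free space in the old generation
                             long youngGenUsed,           // total size of all new-generation objects
                             long avgPromotedSize,        // historical average size promoted per Minor GC
                             boolean handlePromotionFailure) {
    if (oldGenContiguousFree > youngGenUsed) {
        return true;   // even if every object survives, the old generation can hold them all
    }
    if (handlePromotionFailure && oldGenContiguousFree > avgPromotedSize) {
        return true;   // risky Minor GC: bet that this collection matches the historical average
    }
    return false;      // otherwise a Full GC is performed first
}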
Here is a complete flow chart to help you sort out the entire JVM space allocation guarantee process:
Summary: from the analysis above, we can see there are two points at which old-generation garbage collection can be triggered:
- Before the Minor GC, the check finds that the objects being promoted may be too many for the old generation to hold, so a Full GC is triggered first and the Minor GC is carried out afterwards
- After the Minor GC, the surviving objects turn out to be too many for the old generation to store, which triggers a Full GC
Old generation garbage collection algorithm – the mark-compact algorithm
So what algorithm does old-generation garbage collection use?
The efficiency of the mark-copy algorithm drops as the object survival rate rises. More importantly, if you don't want to waste 50% of the space, you need extra space for allocation guarantees to cope with the extreme case where 100% of the objects in the used memory survive, so the old generation generally cannot use this algorithm directly.
In 1974, Edward Lueders proposed another "mark-compact" algorithm targeted at the survival characteristics of old-generation objects. Its marking phase is the same as in the "mark-sweep" algorithm, but instead of directly reclaiming the recyclable objects afterwards, it moves all surviving objects toward one end of the memory space and then directly clears the memory beyond that boundary. A schematic of the mark-compact algorithm is shown below.
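As a toy illustration of the compaction step (a simplified model, not a real collector, and the names are my own), the following Java snippet slides the marked survivors of a simulated heap toward the low end and clears everything beyond the boundary:

// Toy model of mark-compact: heap[i] is an "object" slot, marked[i] says it survived marking
static void compact(Object[] heap, boolean[] marked) {
    int free = 0;                           // next destination slot at the low end
    for (int i = 0; i < heap.length; i++) {
        if (marked[i]) {
            heap[free] = heap[i];           // move each surviving object toward one end
            marked[free] = true;            // (a real collector would also update all references to it)
            free++;
        }
    }
    for (int i = free; i < heap.length; i++) {
        heap[i] = null;                     // directly clear the memory beyond the boundary
        marked[i] = false;
    }
}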
The essential difference between the mark-sweep algorithm and the mark-compact algorithm is that the former is a non-moving collection algorithm while the latter is a moving one. Whether to move the surviving objects during collection is a risky decision with both advantages and disadvantages:
If surviving objects are moved, especially in the old generation, where a large number of objects survive every collection, then moving them and updating all references to them is an extremely heavy operation, and the moving must completely pause the user application while it happens, which is a drawback users have to weigh carefully. Pauses like this were vividly described by the early virtual machine designers as "Stop The World."
Old-generation garbage collection is at least ten times slower than new-generation garbage collection! If old-generation Full GCs happen frequently, system performance is seriously affected and frequent stalls appear!
In the examples that follow, I will show you step by step how to analyze why Full GC is triggered frequently in the production failures of various business systems, and how to optimize by tuning JVM parameters!
If you have thoroughly understood the JVM principles covered in the recent articles, you'll see that so-called JVM optimization means keeping object allocation and collection in the new generation as much as possible, avoiding frequent old-generation Full GCs, and at the same time giving the system enough memory to avoid frequent new-generation garbage collection, so as to better ensure the system's operating efficiency.
There will be plenty of examples of how to optimize the JVM.