From the JVM's perspective, escape analysis enables several optimizations:

- Stack allocation: allocating object space on the stack instead of the heap, which reduces young GC (YGC) pressure and improves performance
- Synchronization elimination: removing the lock on a synchronized block whose lock object never escapes the method
- Scalar replacement: breaking an object down into its scalar fields, which provides the basis for stack allocation; the two work together

These optimizations are controlled by JVM flags, but your code also has to cooperate; otherwise the JVM's optimizations never kick in — they are provided for nothing if you don't use them.

With this many optimization points, it is worth slowing down to understand a few of them...
On-stack allocation

What on-stack allocation is was covered in "The JVM interview questions [intermediate]", so I won't repeat it here.

Keeping variables local to a method, so that the JVM can allocate them on the stack, really does pay off. Here's an example:
```java
public class Max {
    public static void main(String[] args) {
        long startTime = System.currentTimeMillis();
        for (int i = 0; i < 10000000; i++) {
            newValue();
        }
        long endTime = System.currentTimeMillis();
        System.out.println("Time:" + (endTime - startTime));
        // Keep the process alive so a memory snapshot can be taken
        try {
            Thread.sleep(100000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static void newValue() {
        Dog dog = new Dog();
    }
}
```
With on-stack allocation turned off
- The JVM configuration:
-Xms256m -Xmx256m -XX:-DoEscapeAnalysis -XX:+PrintGCDetails
- Log: 2 YGCs, 91 ms — pretty slow w(゚д゚)w

```
[GC (Allocation Failure) [PSYoungGen: 65536K->761K] 65536K->769K(251392K), 0.0016895 secs] [Times: user=0.01 sys=0.00, real=0.00 secs]
[GC (Allocation Failure) [PSYoungGen: ...] [Times: user=0.00 sys=0.00, real=0.01 secs]
Time:91
```
- Memory snapshot: about 1.94 million Dog objects survive — and that is after GC
- GC snapshot: the 64M young generation has 47M used. Creating that many objects in such a short time means that, if their lifecycle were any longer, they would spill straight into the old generation and probably cause an OOM — my heap here is only 256M
With on-stack allocation turned on
- The JVM configuration:
-Xms256m -Xmx256m -XX:+DoEscapeAnalysis -XX:+PrintGCDetails
- Log: no GC at all, 7 ms — quite a difference O(≧ ≦)O

```
Time:7
```
- Memory snapshot: only about 70,000 Dog objects. Of course, practice diverges a bit from theory — in theory there should be no Dog objects in the heap at all
- GC snapshot: the 64M young generation has only 25M used, a big gap from the run above
conclusion

Don't dismiss the test just because the method runs 10 million times and so "has no practical significance". In the programs we write, a method you assume is called rarely may well turn out to be hot, and when on-stack allocation applies, the gap against heap allocation is dramatic — the numbers above speak for themselves!
---
Escape analysis as a technique dates back to 1999, but it was not until JDK 1.6 that HotSpot started to support an initial version of it, and even now the technique is not fully mature. The main reason is that escape analysis itself is computationally expensive: there is no guarantee that the performance it buys exceeds what it costs, and in practice, especially in large applications, it can behave unstably. It was not enabled by default until JDK 7, and then only for Java programs running in server mode.

Note also that even with escape analysis enabled, there were still quite a few Dog objects in the heap, which differs noticeably from the pure theory.
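To make "escape" concrete, here is a minimal sketch (the class and method names are mine, chosen for illustration) of which allocations can and cannot benefit — an object escapes when its reference becomes visible outside the method, e.g. via a field or a return value:

```java
public class EscapeDemo {
    static class Dog { int age = 10; }

    static Dog globalDog;

    // Escapes: the reference is stored in a static field,
    // so it stays reachable after the method returns
    static void fieldEscape() { globalDog = new Dog(); }

    // Escapes: the reference is handed to the caller
    static Dog returnEscape() { return new Dog(); }

    // Does not escape: the object never leaves this method,
    // so it is a candidate for stack allocation / scalar replacement
    static int noEscape() {
        Dog dog = new Dog();
        return dog.age;
    }

    public static void main(String[] args) {
        fieldEscape();
        System.out.println(noEscape());
    }
}
```

Only `noEscape` qualifies for the optimizations in this article; the other two force a heap allocation.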
Synchronization elimination

In short: when escape analysis proves a lock object can only ever be touched by one thread, the JIT removes the synchronization on it. A typical example:
```java
public void test() {
    Dog dog = new Dog();
    synchronized (dog) {
        System.out.println(dog);
    }
}
```
For the method above, the JIT compiler removes the lock for us automatically — but only at runtime; the lock instructions are still visible in the compiled bytecode.

Also, even with this JIT optimization, the result is still not as fast as simply not writing the lock at all. We would do better to write it like this:
```java
public void test() {
    Dog dog = new Dog();
    System.out.println(dog);
}
```
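A quick way to convince yourself — a rough sketch, not a rigorous benchmark, and the class and method names are mine — is to lock on a method-local object that never escapes and check that the result matches the lock-free version; with escape analysis on, the JIT can elide the lock entirely:

```java
public class LockElision {
    // Sums 0..n-1 while locking on a method-local object each iteration.
    // The lock object never escapes, so the JIT may elide the lock.
    static long lockedCount(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            Object local = new Object();        // never leaves this method
            synchronized (local) {              // candidate for lock elision
                sum += i;
            }
        }
        return sum;
    }

    // The same sum with no lock at all
    static long plainCount(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(lockedCount(1000) == plainCount(1000)); // true
    }
}
```

Both methods compute the same sum; timing them under `-XX:+DoEscapeAnalysis` versus `-XX:-DoEscapeAnalysis` is left as an exercise, since micro-timings here are noisy.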
Scalar replacement
In Java:

- Scalar: a piece of data that cannot be broken down any further, such as a primitive type
- Aggregate: by contrast, something that can still be decomposed, such as an object

The flag is -XX:+EliminateAllocations, enabled by default since JDK 7.

In the JIT phase, if an object in a method passes the on-stack allocation rules after escape analysis, the object is stored on the stack as its scalar parts, saving the object allocation entirely.
For example, an object:
```java
public class Dog extends Max {
    public int age = 10;
    public String name = "AA";
}
```
After scalar substitution, a Dog object is stored in the local variable list in this way:
```java
public void test() {
    // What used to be a Dog object is now just its fields:
    int age = 10;
    String name = "AA";
}
```
Scalar replacement provides a good basis for on-stack allocation (⊙﹏⊙)

If scalar replacement is disabled in the JVM configuration, on-stack allocation does not work either: both settings must be on for the effect to kick in.
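To verify that dependency yourself, you could rerun the Max benchmark with escape analysis on but scalar replacement off — the flags below are my assumed configuration for that experiment — and the YGCs from the first run should come back:

```
-Xms256m -Xmx256m -XX:+DoEscapeAnalysis -XX:-EliminateAllocations -XX:+PrintGCDetails
```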
PS: It makes me want to swear — the deeper a technology goes, the more every point drags up others, like pulling a radish out with the mud still on it, one crop after another, dizzyingly so. If only the reference material were complete... (">_<)