
Lock optimization

The JVM improves the efficiency of synchronized code with optimizations such as spin locks, adaptive spinning, lock elimination, and lock coarsening.

Spin locks and adaptive spinning

Most processors today are multi-core. When two or more threads run in parallel on a multi-core processor, we can ask a thread that is waiting for a lock not to give up its processor time immediately: instead of blocking, it waits for a bounded time to see whether the thread holding the lock releases it quickly. During this wait it executes an empty loop, so the current thread keeps occupying its CPU time slice. This is called a spin lock.

Spinning can be turned on in the JVM with the -XX:+UseSpinning flag, and it is enabled by default since JDK 1.6. Because spinning makes the threads competing for a lock consume extra processor time, the JVM limits the number of spins with a parameter, which can be set with -XX:PreBlockSpin (10 by default).
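
To make the busy-waiting idea concrete, here is a minimal sketch of a spin lock built on AtomicBoolean. This is only an illustration of the concept, not the JVM's internal implementation; the class name SimpleSpinLock is invented for the example.

import java.util.concurrent.atomic.AtomicBoolean;

public class SimpleSpinLock {
    // false = unlocked, true = locked
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Busy-wait instead of blocking: keep retrying the CAS while the lock is held,
        // so the waiting thread keeps using its CPU time slice.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // spin hint to the processor (available since JDK 9)
        }
    }

    public void unlock() {
        locked.set(false);
    }
}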

(Figure: the state transitions between biased locks and lightweight locks, and their relationship to the object's Mark Word.)

Lock elimination

Lock elimination is an optimization performed by the virtual machine's just-in-time compiler: it removes locks from code that requests synchronization but where the compiler can detect that no contention on shared data is possible. The main basis for this judgment is escape analysis: if the compiler determines that none of the data a piece of code uses on the heap can escape and be accessed by another thread, that data can be treated as if it lived on the stack, i.e. as thread-private, and no synchronization lock is needed.

Here is an example that concatenates three strings x, y, and z, with no synchronization in either the source code or the logic:

public String concatStr(String x, String y, String z) {
    return  x + y + z;
}

String is an immutable class, and concatenating strings always produces new String objects, so the javac compiler optimizes string concatenation automatically: before JDK 5 it was converted into StringBuffer operations, and from JDK 5 onward it is converted into a chain of consecutive append() calls on a StringBuilder object.

public String concatStr(String x, String y, String z) {
    StringBuilder sb = new StringBuilder();
    sb.append(x);
    sb.append(y);
    sb.append(z);
    return  sb.toString();
}

Let's look at the result of decompiling this with javap. You might worry that StringBuilder is not thread-safe and that its operations could be a problem. The answer is no: after escape analysis, the dynamic scope of the x + y + z operation is limited to the concatStr method. In other words, all of the StringBuilder operations actually executed stay inside concatStr, and no other thread can ever access that object. So although there is a lock here, it can be safely eliminated, and after compilation the code ignores the synchronization and executes directly.
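
As another hedged illustration (not from the original example), here is the kind of lock the JIT compiler is allowed to eliminate: localLock never escapes the method, so no other thread can ever synchronize on it, and the synchronized block can be removed entirely.

public class LockElisionExample {
    public int sumLocally(int[] values) {
        Object localLock = new Object(); // never escapes this method
        int sum = 0;
        // Synchronizing on a non-escaping object cannot be observed by other threads,
        // so the JIT compiler may elide this lock after escape analysis.
        synchronized (localLock) {
            for (int v : values) {
                sum += v;
            }
        }
        return sum;
    }
}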

Lock coarsening

In principle, when writing code we always recommend keeping the scope of a synchronized block as small as possible: synchronize only over the actual operations on the shared data. This keeps the amount of work done while holding the lock small, so that even when there is contention, a waiting thread can acquire the lock quickly. Most of the time this principle holds. However, if a series of consecutive operations repeatedly locks and unlocks the same object, or the locking appears inside a loop body, the frequent lock and unlock operations cause unnecessary performance loss even when there is no contention between threads.

StringBuffer buffer = new StringBuffer();

/** Lock coarsening example */
public void append() {
    buffer.append("aaa").append(" bbb").append(" ccc");
}

The code above locks and unlocks buffer on every buffer.append call. When the JVM detects a string of consecutive lock and unlock operations on the same object, it coarsens them into one larger locked region: the lock is acquired before the first append call and released only after the last one.
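
Conceptually, the coarsened code behaves as if the three append calls were wrapped in one synchronized block on buffer, so the lock is taken once instead of three times. The sketch below only illustrates the effect; the JIT performs this merging in the compiled code, not in your source.

public void appendCoarsened() {
    // One lock/unlock pair around all three appends instead of one pair per call.
    synchronized (buffer) {
        buffer.append("aaa");
        buffer.append(" bbb");
        buffer.append(" ccc");
    }
}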

Escape analysis

Escape analysis is an inter-procedural data-flow analysis that can reduce synchronization overhead and heap allocation pressure in Java programs. Through escape analysis, the Java HotSpot compiler analyzes the scope in which a newly created object's references are used and decides whether the object really has to be allocated on the heap. Its basic job is to analyze the dynamic scope of an object.

Method escape

When an object defined inside a method may be referenced by code outside that method, for example by being passed as an argument to another method, this is called method escape.
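
A small, hedged illustration of method escape (the names are invented for the example): the StringBuilder created in buildGreeting escapes because it is passed to another method and also returned to the caller.

public class MethodEscapeExample {
    public StringBuilder buildGreeting(String name) {
        StringBuilder sb = new StringBuilder();
        appendGreeting(sb, name); // escapes as a call argument
        return sb;                // escapes through the return value
    }

    private void appendGreeting(StringBuilder sb, String name) {
        sb.append("Hello, ").append(name);
    }
}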

Thread escape

When an object can be accessed by another thread, for example because it is assigned to an instance variable that other threads can read, this is called thread escape.
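
A matching hedged illustration of thread escape: publishing the object through an instance field makes it reachable from any thread that holds a reference to the enclosing object. The names are again invented for the example.

import java.util.ArrayList;
import java.util.List;

public class ThreadEscapeExample {
    private List<String> shared; // readable by other threads

    public void init() {
        List<String> local = new ArrayList<>();
        local.add("visible to other threads");
        this.shared = local; // thread escape: the object is published via a field
    }
}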

The compiler optimizes code through escape analysis

If it can be proved that an object never escapes the method or the thread (no other method or thread can reach it in any way), or if its degree of escape is low (it escapes the method but not the thread), different degrees of optimization can be applied to the object:

1. Stack allocation: an object that does not escape the method or the thread can be allocated on the stack instead of the heap, so it is destroyed automatically when the method returns, which reduces pressure on the garbage collector.

2. Scalar replacement: if an object is never used as a whole (it does not need to exist as one contiguous structure), some or all of its fields can be broken apart and kept not in heap memory but in CPU registers or on the stack (a sketch follows this list).

3. Synchronization elimination: if an object is found to be accessible from only one thread, synchronization on that object can be safely eliminated.
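
As a hedged sketch of scalar replacement (referenced from item 2 above): the Point below never escapes sumOfSquares, so with escape analysis enabled (HotSpot flags such as -XX:+DoEscapeAnalysis and -XX:+EliminateAllocations, both on by default in modern HotSpot) the JIT may break the object into its two int fields and skip the heap allocation entirely. The class and method names are invented for the example.

public class ScalarReplacementExample {
    static class Point {
        final int x;
        final int y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    // The Point never escapes this method, so after escape analysis the JIT may
    // keep x and y in registers / on the stack instead of allocating the object.
    static int sumOfSquares(int x, int y) {
        Point p = new Point(x, y);
        return p.x * p.x + p.y * p.y;
    }

    public static void main(String[] args) {
        long total = 0;
        for (int i = 0; i < 1_000_000; i++) {
            total += sumOfSquares(i % 1_000, (i + 1) % 1_000); // hot loop so the JIT compiles it
        }
        System.out.println(total);
    }
}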
