As we know, Java has heavily optimized the Synchronized lock since JDK1.6, and in some cases its performance now even surpasses that of Lock. So let's see how it was optimized.

The original problem

Synchronized was originally implemented on top of the underlying operating system's Mutex Lock. Every lock acquisition and release triggers a switch between user mode and kernel mode, which adds system performance overhead.

As a result, Synchronized locks perform poorly under lock contention and are often referred to as heavyweight locks.

In JDK1.5, the Lock interface was added to the java.util.concurrent.locks package to provide locking. It offers synchronization similar to the Synchronized keyword, but the lock must be acquired and released explicitly.
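As a minimal sketch of that explicit style (the class and method names here are made up for illustration), a ReentrantLock has to be released manually, typically in a finally block:

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Counter {

    private final Lock lock = new ReentrantLock(); // explicit lock object
    private int count = 0;

    public void increment() {
        lock.lock();       // explicit lock acquisition
        try {
            count++;       // critical section
        } finally {
            lock.unlock(); // explicit lock release, always in finally
        }
    }
}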

In JDK1.5, Lock performed much better than Synchronized when a single thread repeatedly requested the same lock, because Synchronized had not yet been optimized for that reentrant, single-thread case.

So how did Synchronized work back then, and why was repeated acquisition by the same thread so expensive?

Principle of Synchronized

Synchronization in the JVM is implemented by entering and exiting a Monitor object. Each object instance has a Monitor, which is created and destroyed along with the object.

When multiple threads access a piece of synchronized code at the same time, the threads that fail to acquire the lock are placed in the EntryList (also known as the blocking queue), where they sit in the BLOCKED state.

When a thread acquires the object's Monitor, the Monitor relies on the underlying operating system's Mutex Lock to achieve mutual exclusion: if the thread successfully acquires the Mutex, it holds it, and no other thread can obtain it.

If a thread calls wait(), it releases the Mutex it currently holds and enters the WaitSet (also known as the wait queue), waiting to be woken up later; while there, the thread is in the WAITING or TIMED_WAITING state.
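As a rough sketch of that behavior (the class and field names are made up for illustration), the thread calling wait() below gives up the Monitor so that another thread can enter the same synchronized methods and notify it:

public class MailBox {

    private String message; // shared state guarded by this object's Monitor

    public synchronized String take() throws InterruptedException {
        while (message == null) {
            wait();   // releases the Monitor and enters the WaitSet (WAITING state)
        }
        String received = message;
        message = null;
        notifyAll();  // wake any producer waiting for the slot to free up
        return received;
    }

    public synchronized void put(String m) throws InterruptedException {
        while (message != null) {
            wait();   // wait until the previous message has been taken
        }
        message = m;
        notifyAll();  // wake a consumer waiting in the WaitSet
    }
}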

The Mutex is also released when the thread finishes executing the synchronized method or block.

That, in general, is how synchronized locking is implemented. Because the Monitor relies on the underlying operating system, each lock operation involves a switch between user mode and kernel mode (which can be thought of as a context switch), and this switching increases the performance overhead.
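You can see this Monitor usage directly in the bytecode: if you compile a small class like the one below (the name is just for illustration) and disassemble it with javap -c, the synchronized block shows up as a monitorenter/monitorexit pair around the critical section.

public class SyncDemo {

    private final Object lock = new Object();
    private int count = 0;

    public void increment() {
        synchronized (lock) { // compiles to monitorenter
            count++;
        }                     // compiles to monitorexit
    }
}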

Lock escalation

To improve performance, JDK1.6 introduced the concepts of biased locks, lightweight locks, and heavyweight locks to reduce the context switching caused by lock contention, and it is the redesigned Java object header that makes lock escalation possible.

By lock escalation, I mean the following:

A Synchronized lock starts out as a biased lock. As contention between threads increases, the biased lock is upgraded to a lightweight lock and eventually to a heavyweight lock.

Biased locking

The biased lock mainly optimizes the case where the same thread repeatedly applies for the same lock; in other words, it is what gives the Synchronized lock its efficient reentrant behavior.

Why biased locking? In many applications, the same thread acquires a given lock most of the time (for example, a single thread operating on a thread-safe container). If that thread had to fully acquire and release the lock on every access, it would constantly switch between kernel mode and user mode.

With a biased lock, when the thread enters the synchronized code or method again, it only needs to check the object header to see whether it is the thread that currently holds the biased lock; if so, it proceeds without any further locking.
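As a concrete example of the pattern biased locking targets (a toy sketch; whether the lock is actually biased also depends on the JVM version and its startup settings), the loop below has a single thread calling a synchronized method over and over, so the same thread keeps re-acquiring the same lock:

public class BiasedLockDemo {

    public static void main(String[] args) {
        // StringBuffer's methods are synchronized, so every append takes the same lock.
        StringBuffer buffer = new StringBuffer();

        // Only the main thread ever touches the buffer: with a biased lock,
        // after the first acquisition each call only has to check that the
        // thread ID recorded in the object header is its own.
        for (int i = 0; i < 1_000_000; i++) {
            buffer.append('a');
        }
        System.out.println(buffer.length());
    }
}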

The biased lock is revoked as soon as another thread competes for the lock. Revoking a biased lock requires waiting for a global safepoint (the JVM's stop-the-world pause), suspending the thread that holds the lock, and checking whether that thread is still executing inside the synchronized block or method. If it is, the lock is upgraded; if not, the lock can be acquired by the competing thread.

Lightweight lock

When another thread competes for a lock that is still biased and finds that the thread ID in the object header is not its own, it performs a CAS operation to acquire the lock. If the CAS succeeds, the thread ID in the object header is simply replaced with its own and the lock remains biased. If the CAS fails, it means there is real contention for the lock, and the biased lock is upgraded to a lightweight lock.

Lightweight locks suit scenarios where threads execute synchronized blocks alternately and, for most of the synchronization cycle, the lock is not contended for long.

Lightweight locks also support spinning: if another thread's CAS attempt fails, it does not block immediately but keeps spinning and retrying.

Spinning is worthwhile because, as mentioned earlier, the lock is normally held only for a short time; suspending and blocking the thread would cost more than briefly spinning.

If the lock still cannot be acquired after the spin retries are exhausted, the lock is upgraded to a heavyweight lock.
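This is not how HotSpot implements it internally, but as a rough illustration of the spin-then-escalate idea, the toy lock below first retries with CAS a bounded number of times and only falls back to parking (blocking) the thread when the spin budget is used up:

import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

public class SpinThenBlockLock {

    private static final int SPIN_LIMIT = 100;               // bounded spin retries
    private final AtomicBoolean locked = new AtomicBoolean(false);
    private final ConcurrentLinkedQueue<Thread> waiters = new ConcurrentLinkedQueue<>();

    public void lock() {
        // "Lightweight" phase: spin and try to grab the lock with CAS.
        for (int i = 0; i < SPIN_LIMIT; i++) {
            if (locked.compareAndSet(false, true)) {
                return;                                       // acquired without blocking
            }
        }
        // "Heavyweight" phase: stop spinning, queue up and park until released.
        Thread current = Thread.currentThread();
        waiters.add(current);
        while (!locked.compareAndSet(false, true)) {
            LockSupport.park(this);                           // may wake spuriously, so re-check
        }
        waiters.remove(current);
    }

    public void unlock() {
        locked.set(false);
        Thread next = waiters.peek();
        if (next != null) {
            LockSupport.unpark(next);                         // wake one blocked waiter
        }
    }
}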

Heavyweight lock

In this state, any thread that fails to acquire the lock enters the Monitor and blocks in its EntryList; at that point we are back to the pre-optimization Synchronized lock.

JVM parameter optimization

A stop-the-world pause occurs when a biased lock is revoked and upgraded to a lightweight lock. If your system usually has multiple threads contending for locks, disabling biased locking may be the better choice, which can be done with the following JVM parameters:

// Disable biased locking (enabled by default)
-XX:-UseBiasedLocking
// Use heavyweight locks directly
-XX:+UseHeavyMonitors
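For example, the flags can be passed when launching the application (the jar name below is only a placeholder, and flag availability depends on the JDK version and build):

java -XX:-UseBiasedLocking -XX:+UseHeavyMonitors -jar your-app.jar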

Lightweight locks spin, so if the thread holding the lock keeps it for a long time, the competing threads will spin the whole time, consuming CPU and increasing system overhead. In that case it may be better to turn off spinning:

-XX:-UseSpinning

Conclusion

These are Java's optimizations for Synchronized locks, and they are the reason ConcurrentHashMap went back to using Synchronized in JDK1.8. If you have any thoughts, feel free to leave them in the comments below.

If you are interested, you can visit my blog or follow my WeChat official account and Toutiao account; there may be some unexpected surprises.

death00.github.io/