
Using locks under high concurrency

Synchronized calls should account for the performance cost of locking. If a lock-free data structure will do, do not use a lock; if locking a block is enough, do not lock the whole method body.
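
As a minimal sketch of that advice (the Counter class and its fields are made up for illustration), compare locking the whole method with locking only the shared-state update:

```java
public class Counter {
    private final Object lock = new Object();
    private long count;

    // Locks the whole method, including work that never touches shared state.
    public synchronized void slowIncrement() {
        doExpensiveLocalWork();   // does not need the lock
        count++;
    }

    // Locks only the shared-state update; the expensive work runs unlocked.
    public void fastIncrement() {
        doExpensiveLocalWork();
        synchronized (lock) {
            count++;
        }
    }

    private void doExpensiveLocalWork() {
        // purely local computation, no shared state
    }
}
```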

Lock upgrade process

Synchronized locks: a synchronized lock reuses the Mark Word in the object header; the lock flag bits there record the current lock state, and the lock is upgraded step by step according to the upgrade strategy.

In Java 5, only heavyweight locks were available.

Java 6 introduced biased locks and lightweight locks.

Since then, synchronized no longer jumps straight to the heavyweight lock.

Why every object can be a lock

Every Java object is born with an invisible lock, called the intrinsic lock or monitor lock.

A monitor ultimately relies on the Mutex Lock of the underlying operating system. Suspending and switching threads through the operating system requires a transition from user mode to kernel mode, which is very expensive.
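
A minimal sketch of how a synchronized block maps onto the object's monitor (the comments describe the bytecode that javap shows for such a block):

```java
public class MonitorDemo {
    private final Object lock = new Object();

    public void doWork() {
        synchronized (lock) {   // compiles to monitorenter on the lock object
            // critical section guarded by the lock's monitor
        }                       // compiles to monitorexit (plus one more on the exception path)
    }
}
```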

Types of synchronized locks and upgrade steps

1 Upgrade Process

The lock state of synchronized is stored in the Mark Word, which is part of the Java object header.

Lock upgrading mainly depends on the lock flag bits and the biased-lock bit in the Mark Word.

(Figure: 64-bit Mark Word layout for each lock state; the analysis below refers to it.)

2 Unlocked

Direct code execution without locking.

Reading the printout against the 64-bit layout above (only the first two rows, which contain the hashCode, matter here): the bytes from 00100011 through 01011110 are the hashCode bits, and the highlighted 001 in front of them is the biased-lock bit plus the two lock flag bits, indicating the unlocked state. The three all-zero bytes 00000000 that follow correspond to the 25 unused bits in the layout; since three bytes supply only 24 bits, a leading 0 is padded to make up the 25th.
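
A sketch of how such an object-header printout can be produced, assuming the JOL library (org.openjdk.jol:jol-core) is on the classpath:

```java
import org.openjdk.jol.info.ClassLayout;

public class MarkWordDemo {
    public static void main(String[] args) {
        Object o = new Object();
        // Fresh, unlocked object: biased-lock bit 0 plus lock flag bits 01 -> "001".
        System.out.println(ClassLayout.parseInstance(o).toPrintable());

        // Calling hashCode() writes the identity hash into the Mark Word,
        // producing the hashCode section analysed above.
        o.hashCode();
        System.out.println(ClassLayout.parseInstance(o).toPrintable());
    }
}
```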

3 Biased Locking

Function:

When a piece of synchronized code is accessed repeatedly by the same thread and no other thread is involved, that thread acquires the lock automatically on subsequent accesses.

In practice, most locks are not only uncontended but are also acquired many times by the same thread; biased locking was introduced for exactly this case.

This means that there is only one thread that uses the lock from beginning to end.

However, once there is contention, say thread 1 and thread 2 compete for the resource, the lock can no longer be biased toward a single thread, so biasing no longer applies and the lock must be upgraded to a lightweight lock to keep things fair. (Biased locks are not released automatically.)

**Purpose:** to improve performance when synchronization is only ever performed by a single thread.

**Biased lock implementation:** the thread ID is written into the Mark Word via CAS.
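
The Mark Word update itself happens inside the JVM, but the CAS idea can be sketched with a plain AtomicLong standing in for the Mark Word (purely an analogy, not the JVM's real code; the class and field names are made up):

```java
import java.util.concurrent.atomic.AtomicLong;

public class BiasCasAnalogy {
    // 0 stands for "no owner recorded yet" (thread IDs start at 1).
    private final AtomicLong biasedOwnerId = new AtomicLong(0);

    public boolean tryBias() {
        long self = Thread.currentThread().getId();
        // Fast path: already biased toward us, nothing to do.
        if (biasedOwnerId.get() == self) {
            return true;
        }
        // One-shot CAS: record our thread ID only if no owner is recorded yet.
        return biasedOwnerId.compareAndSet(0, self);
    }
}
```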

Execution process (using an Account object as an example)

Biased locking never has to involve the operating system: there is no user-to-kernel mode switch, and no need to jump straight to the heaviest lock. Let's take the object header of an Account object as an example.

When a thread first enters the synchronized block, the JVM uses CAS to record the thread ID in the Mark Word and sets the bias flag to indicate that this thread now holds the lock. The lock object becomes a biased lock (by modifying the lock flag bits in the object header via CAS), which literally means "biased in favor of the first thread to acquire it."

The thread does not actively release the biased lock after finishing the synchronized block. When it reaches the block a second time, the JVM checks the Mark Word of the Account object to see whether the thread ID recorded there is still the current thread. If it is, the thread already holds the lock and can enter the critical section directly; since the lock was never released, there is nothing to re-acquire. As long as a single thread uses the lock, biased locking adds almost no overhead and performs very well.

Conclusion: the JVM never has to ask the operating system to set up a Mutex (in the kernel); it only records the thread ID to mark that the lock has been acquired. That is the essence of a biased lock: bias the lock toward the current thread, and as long as no other thread competes, that thread keeps running with no further synchronization cost.
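
A sketch of the Account example, again assuming JOL is on the classpath and the JVM is started with -XX:BiasedLockingStartupDelay=0 so bias is active immediately (the Account class here is just a stand-in):

```java
import org.openjdk.jol.info.ClassLayout;

public class BiasedLockDemo {
    static class Account { }

    public static void main(String[] args) {
        Account account = new Account();
        // Before any synchronization: biasable but not yet biased (flag bits 101, no thread ID).
        System.out.println(ClassLayout.parseInstance(account).toPrintable());

        synchronized (account) {
            // Inside the block: the Mark Word now records this thread's ID (biased lock).
            System.out.println(ClassLayout.parseInstance(account).toPrintable());
        }

        // After exit: the bias is NOT released; the thread ID is still recorded.
        System.out.println(ClassLayout.parseInstance(account).toPrintable());
    }
}
```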

Biased lock JVM flags

java -XX:+PrintFlagsInitial | grep BiasedLock (lists the JVM's biased-locking related flags)

Biased locking has been enabled by default since JDK 1.6, but its activation is delayed for a few seconds after startup.

So to have it take effect as soon as the application starts, add the parameter -XX:BiasedLockingStartupDelay=0.

Enable biased locking:

-XX:+UseBiasedLocking -XX:BiasedLockingStartupDelay=0

Disable biased locking:

With biased locking disabled, the application goes directly to the lightweight lock state.

-XX:-UseBiasedLocking

Remove the startup delay (to demonstrate biased locking right away):

-XX:BiasedLockingStartupDelay=0

Note: since JDK 1.6, biased locking is not activated until a few seconds after the program starts; during that window the JVM keeps it off. Once a lock is actually biased, the Mark Word flag bits read 101.

Biased lock revocation

When a competing thread's CAS attempt to update the object header fails, the JVM waits until the global safepoint (at which no bytecode is executing) to revoke the biased lock.

Biased locks use a lazy release policy: the lock is only given up when contention actually occurs. The bias toward the original holding thread is revoked only when another thread competes for the lock.

Revocation waits for the global safepoint (at which no bytecode is executing) and then checks whether the thread holding the biased lock is still inside the synchronized block:

① If the first thread is still executing the synchronized method (inside the synchronized block) when another thread tries to grab the lock, the bias is revoked and the lock is upgraded. The lightweight lock is then held by the thread that originally held the biased lock, which keeps executing its synchronized code, while the competing thread spins, waiting to acquire the lightweight lock.

② If the first thread has already finished the synchronized method (exited the synchronized block), the object header is reset to the unlocked state, the biased lock is revoked, and the lock can be biased again (toward the new thread).

(Figure: the red line shows the process of biased lock acquisition and revocation.)
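
A sketch of revocation in action: a second thread touching the same lock forces the bias to be revoked and the lock to be acquired as a lightweight lock instead (JOL and -XX:BiasedLockingStartupDelay=0 assumed, as above):

```java
import org.openjdk.jol.info.ClassLayout;

public class BiasRevocationDemo {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();

        // Thread 1 biases the lock toward itself.
        Thread t1 = new Thread(() -> {
            synchronized (lock) {
                System.out.println("t1 holds the biased lock:");
                System.out.println(ClassLayout.parseInstance(lock).toPrintable());
            }
        });
        t1.start();
        t1.join();

        // Thread 2 now wants the same lock; the bias is revoked at a safepoint
        // and the lock is acquired as a lightweight (thin) lock instead.
        Thread t2 = new Thread(() -> {
            synchronized (lock) {
                System.out.println("t2 after bias revocation:");
                System.out.println(ClassLayout.parseInstance(lock).toPrintable());
            }
        });
        t2.start();
        t2.join();
    }
}
```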

4 Lightweight Lock

Essence: spin lock

Lock flag bits: 00.

Upgrade to heavyweight lock: Spin first, then block.

Spinning for a long time burns CPU for nothing, so the spin is capped at a certain number of attempts; if the lock still cannot be acquired, it is upgraded to a heavyweight lock.

Before JDK 6: spinning is enabled by default with a fixed spin count of 10, and spinning also stops once the number of spinning threads exceeds half the number of CPU cores.

From JDK 6 on: adaptive spinning; the spin count is decided from the result of the previous spin on the same lock and the state of the thread that currently owns it.
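
For intuition only, here is a hand-rolled bounded spin lock; this is not how HotSpot implements lightweight locks, just an illustration of "spin a limited number of times, then give up and escalate":

```java
import java.util.concurrent.atomic.AtomicReference;

public class BoundedSpinLock {
    private static final int MAX_SPINS = 10;   // mirrors the pre-JDK 6 default mentioned above
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    /** Returns true if the lock was grabbed within the spin budget; false means the caller should fall back (e.g. block). */
    public boolean tryLock() {
        Thread self = Thread.currentThread();
        for (int i = 0; i < MAX_SPINS; i++) {
            if (owner.compareAndSet(null, self)) {
                return true;        // CAS succeeded: we own the lock
            }
            Thread.onSpinWait();    // busy-wait hint to the CPU (Java 9+)
        }
        return false;               // spin budget exhausted: caller should escalate
    }

    public void unlock() {
        owner.compareAndSet(Thread.currentThread(), null);
    }
}
```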

Differences between lightweight locks and biased locks

1 When acquisition of a lightweight lock fails, the thread spins and keeps trying to grab the lock.

2 A lightweight lock is released every time the thread exits the synchronized block, whereas a biased lock is released only when contention occurs.

5 Heavyweight Locks

When do you need it?

When a large number of threads are competing for the same lock, synchronized escalates to a heavyweight lock.

Conclusion

Synchronized upgrade process: spin first; block only if spinning does not succeed.

Lock coarsening and lock elimination

JIT: Just-In-Time compiler

```java
// Lock elimination
public class SafeDoubleCheckTwo {
    static Object object = new Object();

    public void test() {
        // The lock object is created locally and never escapes this method,
        // so the JIT compiler can eliminate the lock entirely.
        Object object = new Object();
        synchronized (object) {
            // ...
        }
    }

    public static void main(String[] args) {
    }
}
```

The object locked inside this method is never shared: each call creates its own new object and locks on that, so it makes no difference which thread runs it, and the lock can never actually be contended.

The JIT therefore drops the lock entirely; this is lock elimination.

Lock coarsening: if a method contains back-to-back synchronized blocks on the same lock object, the JIT compiler merges them into one larger block, widening the lock's scope so that the lock is acquired and released only once instead of repeatedly, which improves performance.
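
A sketch of code the JIT may coarsen (the class and field names are made up for illustration):

```java
public class LockCoarseningDemo {
    private final Object lock = new Object();
    private int a, b, c;

    public void update() {
        // Three back-to-back blocks on the same lock object...
        synchronized (lock) { a++; }
        synchronized (lock) { b++; }
        synchronized (lock) { c++; }
        // ...which the JIT may merge into the equivalent of:
        // synchronized (lock) { a++; b++; c++; }
    }
}
```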