Introduction

In this article, we continue our review of the synchronized keyword. Synchronized is one of the simplest and most commonly used tools for solving Java concurrency problems; its main function is to guarantee mutually exclusive access to synchronized code across threads.

1. Principle of synchronized

Synchronized is implemented through a monitor object. In Java, every object has a monitor. The monitor's counter starts at 0, increases by 1 when a thread enters, and decreases by 1 when that thread exits. Once the counter returns to 0, other threads are allowed to enter again.

In Java, each object has one and only one synchronized lock, which means synchronized locks are tied to objects, not to code.
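This counting also makes synchronized reentrant: the same thread can acquire the same monitor again, which simply bumps the count. A minimal sketch (the class and method names are mine):

```java
public class ReentrantDemo {
    static boolean outerAndInner() {
        Object monitor = new Object();
        synchronized (monitor) {        // monitorenter: count 0 -> 1
            synchronized (monitor) {    // same thread re-enters: 1 -> 2
                return Thread.holdsLock(monitor);
            }                           // 2 -> 1
        }                               // 1 -> 0: other threads may now enter
    }

    public static void main(String[] args) {
        System.out.println(outerAndInner()); // prints "true"
    }
}
```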

One more point: because Java threads map one-to-one onto native operating system threads, blocking or waking a thread always involves a switch from user mode to kernel mode. In terms of memory semantics, acquiring the lock clears the shared variables from the thread's working memory and re-reads them from main memory, while releasing the lock writes the shared variables from working memory back to main memory. These transitions are very performance-intensive.

Let's look at the underlying implementation in the bytecode by decompiling a piece of code:

public class SynchronizedTest {
    public void test() {
        synchronized (this) {
        }
    }
}

Decompiled bytecode:

// omit some bytecode above......
public void test();
    descriptor: ()V
    flags: ACC_PUBLIC
    Code:
      stack=2, locals=3, args_size=1
         0: aload_0
         1: dup
         2: astore_1
         3: monitorenter
         4: aload_1
         5: monitorexit
         6: goto          14
         9: astore_2
        10: aload_1
        11: monitorexit
        12: aload_2
        13: athrow
        14: return
// Omit part of bytecode below...

From the decompiled bytecode above, we can see that monitorenter at offset 3 acquires ownership of the monitor and monitorexit at offset 5 releases it. But there is a second monitorexit at offset 11. Wouldn't two exits decrement the monitor twice, leaving it at -1? No: the monitorexit at offset 11 executes only when the block exits abnormally (note the athrow that follows it), so the monitor's ownership is released no matter how the program exits.
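The effect of that second monitorexit can be observed directly: even when a synchronized block exits through an exception, the lock is released and other threads can still acquire it. A small sketch, with names of my own choosing:

```java
public class MonitorExitDemo {
    private static final Object LOCK = new Object();

    static boolean lockReleasedAfterException() {
        Thread t = new Thread(() -> {
            try {
                synchronized (LOCK) {
                    // Abnormal exit: the exception path's monitorexit releases the monitor.
                    throw new RuntimeException("boom");
                }
            } catch (RuntimeException ignored) {
                // by the time we get here, the monitor has already been released
            }
        });
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            return false;
        }
        // If the exception path had not released the monitor, this would block forever.
        synchronized (LOCK) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(lockReleasedAfterException()); // prints "true"
    }
}
```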

2. In-depth source code

Synchronized actually maintains two queues, waitSet and entryList.

  1. When multiple threads try to enter a synchronized block, they first enter the entryList.
  2. When a thread acquires the monitor lock, the monitor's owner is set to that thread and the counter is incremented.
  3. If the thread calls wait(), it releases the lock, the owner is set to null, the counter is decremented, and the thread enters the waitSet to wait to be awakened. After notify() or notifyAll() is called, the thread moves back to the entryList to compete for the lock again.
  4. When the thread finishes executing, it likewise releases the lock: the counter is decremented and the owner is set to null.
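The steps above can be sketched with a minimal wait/notify hand-off (the class and method names are my own; only Object.wait/notifyAll come from the standard API):

```java
public class WaitNotifyDemo {
    private final Object monitor = new Object();
    private boolean ready = false;

    // Step 3: the waiting thread releases the monitor and parks in the waitSet.
    void await() {
        synchronized (monitor) {
            while (!ready) {                  // guard against spurious wakeups
                try {
                    monitor.wait();           // releases the lock, enters the waitSet
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }
    }

    // notifyAll moves waiters back to the entryList to re-compete for the lock.
    void signal() {
        synchronized (monitor) {
            ready = true;
            monitor.notifyAll();
        }
    }

    static boolean handOff() {
        WaitNotifyDemo d = new WaitNotifyDemo();
        Thread waiter = new Thread(d::await);
        waiter.start();
        d.signal();                 // safe even if it runs before await(): the flag is checked under the lock
        try {
            waiter.join(5000);
        } catch (InterruptedException e) {
            return false;
        }
        return !waiter.isAlive();   // true: the waiter was woken and finished
    }

    public static void main(String[] args) {
        System.out.println(handOff()); // prints "true"
    }
}
```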

3. Lock optimization mechanism

Since JDK 1.6, synchronized has been steadily improving its locking mechanism, and in some cases it is not a heavyweight lock at all. The optimizations include adaptive spinning, spin locks, lock elimination, lock coarsening, lightweight locks, and biased locks.

The lock states from low to high are: no lock -> biased lock -> lightweight lock -> heavyweight lock. Upgrading proceeds from low to high; under certain conditions, downgrading is also possible.

Spin locks: most of the time a lock is held only briefly, and shared variables are locked only briefly, so there is no need to suspend the thread; the back-and-forth context switching between user mode and kernel mode would seriously hurt performance. The idea of spinning is to let the thread execute a busy loop, essentially doing nothing for a moment, to avoid the transition from user mode to kernel mode. Spin locks can be enabled with -XX:+UseSpinning; the default spin count is 10, and it can be changed with -XX:PreBlockSpin.

Adaptive spinning: an adaptive spin lock does not spin for a fixed time; the spin time is determined by the previous spin time on the same lock and the state of the lock's owner.

Lock elimination: when the JVM detects that a synchronized block of code cannot possibly experience data contention, the lock is unnecessary, and the JVM eliminates it.

Lock coarsening: when a sequence of operations repeatedly locks the same object, the JVM coarsens the lock by extending its scope to cover the entire sequence, acquiring and releasing it once instead of many times.
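Lock coarsening is easiest to picture with a synchronized class such as StringBuffer: each append() call locks the same object, and the JIT may merge the repeated lock/unlock pairs into one acquire/release around the whole sequence. The coarsening happens inside the JIT, so the source code looks ordinary:

```java
public class CoarseningDemo {
    // Each append() is itself synchronized on sb; a JIT compiler may coarsen
    // the four lock/unlock pairs into a single acquire/release around all four calls.
    static String buildLine() {
        StringBuffer sb = new StringBuffer();
        sb.append("a");
        sb.append("b");
        sb.append("c");
        sb.append("d");
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildLine()); // prints "abcd"
    }
}
```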

Biased locks: when a thread accesses a synchronized block, the thread ID is stored in the lock record in the object header and in the stack frame. From then on, the biased thread can re-enter the synchronized block without any CAS operations to lock or unlock. A biased lock is biased toward the first thread that acquires it; if no other thread ever contends for the lock, the holding thread never needs to synchronize at all. Conversely, when another thread does contend for the biased lock, the holding thread releases the bias. Biased locking can be enabled with -XX:+UseBiasedLocking.

Lightweight locks: the JVM's object header contains lock flag bits. When code enters a synchronized block, the JVM attempts to acquire the lock with a CAS; if the update succeeds, the flag bits in the object header are marked as a lightweight lock.

The whole lock upgrade process is very complicated; I have tried to strip out the non-essential details and describe just the overall upgrade mechanism.

In simple terms: a biased lock is handled by comparing the biased thread ID in the object header, without even needing a CAS. A lightweight lock is implemented mainly by using CAS to modify the lock record in the object header, plus spinning. With a heavyweight lock, every thread except the one holding the lock is blocked.
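The CAS-plus-spin idea behind lightweight locks can be illustrated with a toy spin lock built on AtomicReference. This is only an analogy for the mechanism, not how the JVM actually implements lightweight locks:

```java
import java.util.concurrent.atomic.AtomicReference;

public class SpinLockDemo {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    void lock() {
        Thread current = Thread.currentThread();
        // Spin until the CAS installs this thread as the owner,
        // analogous to a lightweight lock's CAS on the object header.
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait();
        }
    }

    void unlock() {
        owner.compareAndSet(Thread.currentThread(), null);
    }

    // Two threads incrementing a shared counter under the spin lock.
    static int raceFree(int perThread) {
        SpinLockDemo lock = new SpinLockDemo();
        int[] counter = {0};
        Runnable task = () -> {
            for (int i = 0; i < perThread; i++) {
                lock.lock();
                try {
                    counter[0]++;
                } finally {
                    lock.unlock();
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            return -1;
        }
        return counter[0];
    }

    public static void main(String[] args) {
        System.out.println(raceFree(1000)); // prints "2000"
    }
}
```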

4. Object and class locks

Object locks and class locks are probably the most confusing part in development, as illustrated by a piece of pseudocode:

public class ObjectTest {
    // method 1
    public synchronized void syncA() {}
    // method 2
    public synchronized void syncB() {}
    // method 3
    public static synchronized void syncC() {}
    // method 4
    public static synchronized void syncD() {}
}

Methods 1 and 2 are both object locks, while methods 3 and 4 are static class locks. Suppose x and y are two instances of ObjectTest; consider whether each of the following pairs can run at the same time:

  1. x.syncA() and x.syncB()
  2. x.syncA() and y.syncA()
  3. x.syncC() and y.syncD()
  4. x.syncA() and ObjectTest.syncC()

The verification code is omitted for space; here are the conclusions:

  1. Cannot run simultaneously, because syncA() and syncB() both acquire the synchronized lock of the same object (object x).
  2. Can run simultaneously, because x.syncA() acquires the lock of x while y.syncA() acquires the lock of y; they do not touch the same object's lock.
  3. Cannot run simultaneously, because syncC() and syncD() are static: x.syncC() is really ObjectTest.syncC() and y.syncD() is really ObjectTest.syncD(), so they share one synchronized lock.
  4. Can run simultaneously, because syncA() is an instance method, so x.syncA() uses the lock of object x, while syncC() is static and ObjectTest.syncC() can be understood as using a "class lock". The two locks are different, so the calls can proceed at the same time.
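Although the full verification code is omitted, the heart of conclusion 4 can be checked with Thread.holdsLock(). The class ClassLockDemo below is my own sketch, not the author's code:

```java
public class ClassLockDemo {
    // Instance method: synchronized on `this`, i.e. the object lock.
    public synchronized boolean holdsObjectLock() {
        return Thread.holdsLock(this) && !Thread.holdsLock(ClassLockDemo.class);
    }

    // Static method: synchronized on ClassLockDemo.class, i.e. the class lock.
    public static synchronized boolean holdsClassLock() {
        return Thread.holdsLock(ClassLockDemo.class);
    }

    public static void main(String[] args) {
        ClassLockDemo x = new ClassLockDemo();
        System.out.println(x.holdsObjectLock()); // true: holds x's lock, not the class lock
        System.out.println(holdsClassLock());    // true: holds the class lock
    }
}
```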

5. Summary

Synchronized is optimized at the bottom of the JDK, and the details vary from version to version. Just remember that in Java each object has exactly one synchronized lock, and that calling a synchronized method of an object acquires that object's lock.

If you found this post helpful, please give it a thumbs up and a follow.