synchronized is one of the oldest locks in Java multithreading and one of the most frequently asked about in interviews.

In Java, synchronized is probably one of the first locks we encounter. Prior to JDK 1.5, synchronized was a heavyweight lock, relatively heavy compared with the locks in the java.util.concurrent (JUC) package.

The good news is that the JVM has made major optimizations to synchronized since Java 6, so its lock efficiency is now quite good.

I. Use of synchronized

1. The role of synchronized

There are three main functions of synchronized:

  • (1) Atomicity: atomicity means that an operation, or a group of operations, either executes completely without interruption by any factor or does not execute at all. All operations on a class or object modified by synchronized are atomic, because the lock on that class or object must be acquired before the operation runs and cannot be released until it completes. (A minimal counter sketch follows this list.)
  • (2) Visibility: visibility means that when multiple threads access a resource, the state and value of that resource are visible to the other threads. Both synchronized and volatile provide visibility. synchronized locks a class or object, so a thread must acquire the lock before accessing it, and the lock state is visible to all other threads; changes to variables are flushed to shared memory before the lock is released, which guarantees the visibility of the resource's variables.
  • (3) Orderliness: orderliness means the program executes in the order the code is written. Both synchronized and volatile provide orderliness. Java allows the compiler and processor to reorder instructions; reordering does not affect the order observed within a single thread, but it does affect the order of concurrent execution across multiple threads. synchronized guarantees that only one thread accesses the synchronized block at any moment, which forces threads to execute the block sequentially and thus preserves orderliness.
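
As referenced in point (1), here is a minimal sketch (the class name SyncCounterDemo and its methods are illustrative, not from the original article): two threads each increment a shared counter 10,000 times, and because increment() is synchronized the read-increment-write sequence is atomic, so the result is always 20,000.

public class SyncCounterDemo {
    private int count = 0;

    // The lock on `this` is held for the whole read-increment-write, so no update is lost.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        SyncCounterDemo counter = new SyncCounterDemo();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                counter.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter.get()); // always prints 20000
    }
}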

2. Use of synchronized

Synchronized has three main uses:

  • (1) Modify an instance method: this locks on the current object instance; the lock on the instance must be acquired before entering the synchronized code.

    synchronized void method() {
      // Business code
    }
  • (2) Modify a static method: this locks on the current class and applies to all object instances of the class; the class lock must be acquired before entering the synchronized code. Static members do not belong to any instance object, they are members of the class (static means the resource belongs to the class itself, so no matter how many objects are created, there is only one copy). Therefore, if thread A calls a non-static synchronized method of an instance object while thread B calls a static synchronized method of the class the instance belongs to, this is allowed and no mutual exclusion occurs, because the lock used to access the static synchronized method is the class lock, while the lock used to access the non-static synchronized method is the instance lock. (A sketch of this non-exclusion case follows the code blocks below.)

synchronized static void method() {
  // Business code
}
  • (3) Modify a code block: this locks on the given object or class. synchronized(this | object) means the lock on the given object must be acquired before entering the synchronized block; synchronized(SomeClass.class) means the current class lock must be acquired before entering the synchronized block.
synchronized(this) {
  // Business code
}
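
As referenced in point (2), here is a minimal sketch (class and method names are illustrative) showing that a static synchronized method and a non-static synchronized method of the same class do not exclude each other, because one takes the class lock and the other takes the instance lock.

public class LockScopeDemo {
    // Takes the class lock (LockScopeDemo.class).
    public static synchronized void staticMethod() throws InterruptedException {
        System.out.println("static synchronized: holding the class lock");
        Thread.sleep(1000);
    }

    // Takes the instance lock (this).
    public synchronized void instanceMethod() throws InterruptedException {
        System.out.println("instance synchronized: holding the instance lock");
        Thread.sleep(1000);
    }

    public static void main(String[] args) {
        LockScopeDemo demo = new LockScopeDemo();
        // The two threads run concurrently: different locks, so no mutual exclusion.
        new Thread(() -> {
            try { staticMethod(); } catch (InterruptedException ignored) { }
        }).start();
        new Thread(() -> {
            try { demo.instanceMethod(); } catch (InterruptedException ignored) { }
        }).start();
    }
}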

A quick summary:

Static synchronized methods and synchronized(SomeClass.class) blocks lock the class.

The synchronized keyword on an instance method locks the object instance.

Here’s a classic example of synchronized: a thread-safe singleton implemented with double-checked locking:

public class Singleton {
    // Ensure ordering and prevent instruction reordering
    private volatile static Singleton uniqueInstance;

    private Singleton() {}

    public static Singleton getUniqueInstance() {
        // Check whether the object has been instantiated before entering the locked code
        if (uniqueInstance == null) {
            // Lock on the class object
            synchronized (Singleton.class) {
                if (uniqueInstance == null) {
                    uniqueInstance = new Singleton();
                }
            }
        }
        return uniqueInstance;
    }
}
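
A small usage sketch (the class name SingletonUsage is illustrative, not from the original): both threads obtain the same instance, and the volatile field plus double-checked locking ensure neither thread ever sees a half-constructed object.

public class SingletonUsage {
    public static void main(String[] args) {
        // Both threads print the same object identity.
        Runnable task = () -> System.out.println(Singleton.getUniqueInstance());
        new Thread(task).start();
        new Thread(task).start();
    }
}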

II. Principle of synchronized

Data synchronization depends on locks, so what does lock synchronization itself depend on? synchronized's answer is the JVM, at the software level; the Lock implementations in java.util.concurrent answer with special CPU instructions (CAS), at the hardware level.

1. Principle of synchronized statement block

public class SynchronizedDemo {
    public void method() {
        synchronized (this) {
            System.out.println("Synchronized code block");
        }
    }
}

Use the javap command to inspect the bytecode of the SynchronizedDemo class: first compile with javac to generate the .class file, then execute javap -c -s -v -l SynchronizedDemo.class.


From the javap output we can see:

Synchronized blocks are implemented with the monitorenter and monitorexit instructions, where monitorenter marks the start of the synchronized block and monitorexit marks its end.

When monitorenter executes, the thread attempts to acquire the object's monitor, that is, the object's lock.

In the HotSpot JVM, the monitor is implemented in C++ by ObjectMonitor; every Java object has an ObjectMonitor associated with it.

The wait/notify methods also depend on the monitor object, which is why they can only be called inside a synchronized block or method; otherwise a java.lang.IllegalMonitorStateException is thrown, as sketched below.
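
A minimal sketch of this rule (the class name WaitNotifyDemo is illustrative): wait/notify must be called while holding the monitor of the object they are called on.

public class WaitNotifyDemo {
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {            // must hold lock's monitor before wait()
                try {
                    System.out.println("waiting...");
                    lock.wait();             // releases the monitor and blocks
                    System.out.println("notified");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        waiter.start();
        Thread.sleep(500);
        synchronized (lock) {                // must hold lock's monitor before notify()
            lock.notify();
        }
        // Calling lock.wait() or lock.notify() outside a synchronized (lock) block
        // throws java.lang.IllegalMonitorStateException.
    }
}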

When monitorenter executes, the thread attempts to acquire the lock on the object. If the lock counter is 0, the lock is free; when the thread acquires it, the counter is incremented to 1.

When the monitorexit instruction executes, the lock counter is set back to 0, indicating that the lock is released. If acquiring the object lock fails, the current thread blocks and waits until the lock is released by the thread holding it.
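
The counter also makes synchronized reentrant: a thread that already holds the monitor can enter another synchronized region guarded by the same object, incrementing the counter instead of blocking on itself. A minimal sketch with illustrative names:

public class ReentrantSyncDemo {
    public synchronized void outer() {
        System.out.println("outer: monitor acquired, counter = 1");
        inner();  // same thread, same monitor: counter goes to 2 instead of blocking
    }

    public synchronized void inner() {
        System.out.println("inner: re-entered the same monitor, counter = 2");
    }

    public static void main(String[] args) {
        new ReentrantSyncDemo().outer();
        // When inner() and then outer() return, the counter drops 2 -> 1 -> 0
        // and the lock is fully released.
    }
}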

2. Principle of synchronized-modified methods

public class SynchronizedDemo2 {
    public synchronized void method() {
        System.out.println("A synchronized method");
    }
}

Decompiling it with javap:

A synchronized-modified method has no monitorenter and monitorexit instructions; instead it carries the ACC_SYNCHRONIZED access flag, which marks the method as synchronized. The JVM uses the ACC_SYNCHRONIZED flag to recognize that a method is declared synchronized and performs the corresponding monitor acquisition and release around the call.

A quick summary:

Synchronized blocks are implemented with the monitorenter and monitorexit instructions, where monitorenter marks the start of the synchronized block and monitorexit marks its end.

A synchronized-modified method has no monitorenter and monitorexit instructions; instead it carries the ACC_SYNCHRONIZED flag, which marks it as a synchronized method.

But the essence of both is to acquire the object's monitor, as the sketch below illustrates.
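
A minimal sketch of this equivalence (the class name is illustrative): both methods end up acquiring the monitor of the same instance, so a thread inside one blocks a thread trying to enter the other.

public class EquivalentSyncDemo {
    // Compiled with the ACC_SYNCHRONIZED flag; the JVM acquires this object's
    // monitor on entry and releases it on exit.
    public synchronized void syncMethod() {
        // critical section
    }

    // Compiled to monitorenter/monitorexit around the block, on the same monitor.
    public void syncBlock() {
        synchronized (this) {
            // critical section
        }
    }
}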

III. The concepts behind synchronized

1. Java object header

In the JVM, an object's memory layout has three areas: the object header, instance data, and alignment padding.

The lock that synchronized uses is stored in the Java object header.

HotSpot has two types of object headers:

  • Array type: if the object is an array, the virtual machine stores the object header in 3 words.
  • Non-array type: if the object is not an array, the object header is stored in 2 words.

The object header consists of two parts:

  • Mark Word: stores the object's runtime data, such as the hash code, GC age, lock-related information, and so on.
  • Klass Pointer: a pointer to the object's class metadata.

On a 64-bit VM, the Mark Word is 64 bits wide. The data stored in the Mark Word changes as the lock flag bits change.
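
To actually look at the object header, one option is the OpenJDK JOL (Java Object Layout) tool; the sketch below assumes the org.openjdk.jol:jol-core dependency is on the classpath and simply prints the layout, including the Mark Word, before and while holding the lock.

import org.openjdk.jol.info.ClassLayout;

public class ObjectHeaderDemo {
    public static void main(String[] args) {
        Object obj = new Object();
        // Prints the object layout: mark word, class pointer, padding.
        System.out.println(ClassLayout.parseInstance(obj).toPrintable());

        synchronized (obj) {
            // The mark word's lock bits change while the lock is held.
            System.out.println(ClassLayout.parseInstance(obj).toPrintable());
        }
    }
}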

2. Monitor

Any object has a monitor associated with it, and when the monitor is held, the object is locked. The JVM implements method synchronization and code-block synchronization by entering and exiting the object's monitor; although the implementation details differ, both are built on the pair of MonitorEnter and MonitorExit operations.

  1. MonitorEnter instruction: inserted at the start of the synchronized block; when execution reaches MonitorEnter, the thread attempts to acquire ownership of the object's monitor, i.e., the lock on the object.
  2. MonitorExit instruction: inserted at the end of the method and at exception exits; the JVM guarantees that every MonitorEnter has a matching MonitorExit.

So what is a monitor? It can be understood as a synchronization tool, or described as a synchronization mechanism, and it is usually represented as an object.

Every Java object is born a potential monitor: by design, each Java object carries an invisible lock, called the intrinsic lock or monitor lock.

When the Mark Word's lock flag bits are 10, its pointer points to the starting address of the monitor object. In the HotSpot JVM, the monitor is implemented by ObjectMonitor.

IV. Synchronized optimization

JDK 5 introduced CAS atomic operations based on support in modern processors (the synchronized keyword itself was not optimized in JDK 5, which is why the concurrent package performed better in that version). Starting with JDK 6, in addition to CAS spinning, optimizations such as adaptive spinning, lock elimination, lock coarsening, biased locking, and lightweight locking were added. These optimizations greatly improved the keyword's performance; its semantics are clear, its use is simple, and it requires no manual release, so it is recommended to use synchronized where possible, and there is still room for further performance optimization.

A lock can be upgraded from a biased lock to a lightweight lock and then to a heavyweight lock. However, the upgrade of the lock is one-way, that is, it can only be upgraded from low to high, and there is no degradation of the lock.

1. Biased lock

Biased locking is an important optimization introduced in JDK 6. The HotSpot authors found in practice that in most cases a lock not only sees no multi-thread contention but is also always acquired by the same thread many times; biased locking was introduced to make lock acquisition cheaper for that thread.

When a thread accesses a synchronized block and acquires the lock, it stores the biased thread ID in the object header and in the lock record in its stack frame. Afterwards, when that thread enters or exits the synchronized block, it no longer needs a CAS operation to lock or unlock; it simply checks whether the Mark Word in the object header still stores a bias pointing to the current thread.

If the check succeeds, the thread has acquired the lock. If it fails, the thread then checks whether the biased-lock flag in the Mark Word is set to 1 (indicating that the lock is currently biased). If it is not set, the thread uses CAS to compete for the lock; if it is set, the thread tries to use CAS to point the object header's bias to the current thread.

Biased locking releases the lock lazily, waiting until contention actually occurs: the thread holding the biased lock releases it only when another thread attempts to contend for it.

Revoking a biased lock requires waiting for a global safepoint (a point at which no bytecode is being executed). The JVM first suspends the thread holding the biased lock and checks whether it is still alive. If the thread is not active, the object header is set to the lock-free state. If the thread is still alive, the stack holding the biased lock is walked and the lock records of the biased object are traversed; the lock records in the stack and the Mark Word of the object header are then either re-biased to another thread, reverted to the lock-free state, or marked to show that the object is unsuitable for biased locking. Finally, the suspended thread is resumed.

[Figure: thread 1 illustrates the biased lock acquisition process; thread 2 illustrates the biased lock revocation process.]

2. Lightweight locks

The main purpose of lightweight locks is to reduce, in the absence of real multi-thread contention, the performance cost of traditional heavyweight locks that rely on operating system mutexes. When biased locking is disabled, or when multiple threads contend for a biased lock and it is upgraded to a lightweight lock, threads attempt to acquire the lightweight lock.

(1) Lightweight lock: locking

Before a thread executes a synchronized block, the JVM creates space for a lock record in the current thread's stack frame and copies the Mark Word from the object header into that lock record; this copy is officially called the Displaced Mark Word. The thread then tries to use CAS to replace the Mark Word in the object header with a pointer to the lock record. If this succeeds, the current thread acquires the lock; if it fails, another thread is competing for the lock and the current thread attempts to acquire it by spinning.

(2) Lightweight lock: unlocking

When a lightweight lock is unlocked, an atomic CAS operation is used to copy the Displaced Mark Word back into the object header. If this succeeds, there was no contention. If it fails, the lock is being contended, and it inflates into a heavyweight lock.

[Figure: flow of two threads competing for the lock at the same time, resulting in lock inflation.]

Because spin consumes CPU, once a lock is upgraded to a heavyweight lock, it does not revert to a lightweight lock state in order to avoid unwanted spin (such as when the thread that acquired the lock is blocked). When the lock is in this state, other threads trying to acquire the lock will be blocked. When the thread holding the lock releases the lock, these threads will be awakened, and the awakened thread will start a new round of contention for the lock.

3. Comparing the advantages and disadvantages of the locks

The lock states are not substitutes for one another but different choices for different scenarios, and heavyweight locks are by no means always inappropriate. A lock can only be upgraded, never downgraded, following biased lock -> lightweight lock -> heavyweight lock, and each step up brings increasing overhead.

If the lock is only ever used by a single thread, biased locking is undoubtedly the cheapest: it solves the problem without CAS, merely by comparing the object header in memory.

If there are competing threads, biased locks are upgraded to lightweight locks;

If other threads fail a certain number of CAS attempts, the heavyweight lock is entered;

The advantages and disadvantages of the locks are compared below:

  • Biased lock. Advantages: locking and unlocking add no extra cost; the gap versus executing a non-synchronized method is only on the nanosecond scale. Disadvantages: if threads contend for the lock, there is the extra cost of lock revocation. Applicable scenario: only one thread ever accesses the synchronized block.
  • Lightweight lock. Advantages: competing threads do not block, which improves the program's response speed. Disadvantages: a thread that never wins the lock contention wastes CPU by spinning. Applicable scenario: response time matters and the synchronized block executes quickly.
  • Heavyweight lock. Advantages: contending threads do not spin and therefore do not consume CPU. Disadvantages: threads block and response time is slow. Applicable scenario: throughput matters and the synchronized block takes a long time to execute.




