Synchronized (version 1.8)

Java concurrent source code, version 1.8


  • Synchronized (version 1.8)
    • 1. Java lock
      • 1.1 Memory semantics of locking
      • 1.2 Lock Release
      • 1.3 Obtaining locks
      • 1.4 Release and obtain locks
    • 2. Review of Synchronized
    • 3. The use of Synchronized
      • 3.1 Three application methods of Synchronized
      • 3.2 Rules for using Synchronized
        • 3.2.1 Plain methods do not block synchronized method calls
        • 3.2.2 All synchronization methods can only be accessed by one thread
        • 3.2.3 A block of synchronized code for a lock can only be accessed by one thread at a time
        • 3.2.4 The execution order of multiple synchronization codes accessing the same lock at the same time between threads is variable
        • 3.2.5 Non-blocking access between different locks
      • 3.3 Reentrancy of Synchronized
    • 4. Implementation principle of Synchronized
      • 4.1 Synchronization
      • 4.2 decompiling
        • Preparation
        • 4.2.1 Generating. Class Files
        • 4.2.2 javap Decompilation
      • 4.3 Synchronizing Code Blocks Synchronizing principles
        • 4.3.1 The Monitor
        • 4.3.2 monitorenter instruction
        • 4.3.3 monitorexit instruction
        • 4.3.4 Implementation Principle
        • 4.3.5 Supplement
      • 4.4 Synchronization Methods Synchronization principles
    • 5. Advanced principles
      • 5.1 Monitor Object pattern
        • 5.1.1 Monitor Object pattern overview
        • 5.1.2 Monitor Object pattern architecture
        • 5.1.3 Monitor Object pattern collaboration process
      • 5.2 Object header
        • 5.2.1 Objects in JVM memory
        • 5.2.2 Overview of object headers
        • 5.2.3 Storage structure of Mark Word
      • 5.3 Monitor Record
        • 5.3.1 Monitor Record Overview
        • 5.3.2 Monitor Record Structure
        • 5.3.3 Working mechanism of Monitor Record
    • 6. Lock optimization
      • 6.1 Spin lock
      • 6.2 Adaptive spin lock
      • 6.3 Blocking locks
        • 6.3.1 Blocking locks
        • 6.3.2 Fair lock
        • 6.3.3 Unfair lock
      • 6.4 Lock coarsening
      • 6.5 Lock elimination
      • 6.6 Upgrading locks
      • 6.7 Heavyweight Locks
      • 6.8 Lightweight Lock
        • 6.8.1 Lightweight Locks Overview
        • 6.8.2 Lightweight Lock Flowchart
        • 6.8.3 Lightweight Locks Adding Locks
        • 6.8.4 Lightweight Lock Unlocking
        • 6.8.5 Precautions for Lightweight Locks
      • 6.9 Biased locking
        • 6.9.1 Overview of biased locking
        • 6.9.2 Flowchart of biased locking
        • 6.9.3 Biased Lock Initialization
        • 6.9.4 Revoke a lock based on bias
        • 6.9.5 Biased Lock Closes the lock
        • 6.9.6 Precautions for biased locking
      • 6.10 Biased Locks vs. Lightweight locks vs. Heavyweight Locks
  • Please read the author’s concurrent post @Thread (version 1.7) first.
  • KiraSally’s Nuggets (Juejin) blog. Thanks for your support.

1. Java lock

1.1 Memory semantics of locking

  • A lock lets critical sections execute mutually exclusively, and it lets the thread releasing the lock send a message to a thread that subsequently acquires the same lock
  • Lock release follows the happens-before principle (monitor lock rule: an unlock must happen-before every subsequent lock of the same monitor)
  • In Java, locks take two concrete forms: `synchronized` and `Lock`

1.2 Lock Release

  • After thread A releases the lock, its changes to shared variables are flushed to main memory

1.3 Obtaining locks

  • When thread B acquires the lock, the JMM invalidates the thread’s local memory, and the critical section code protected by the monitor must read the shared variable from main memory

1.4 Release and obtain locks

  • Lock acquisition has the same memory semantics as a volatile read; see my concurrency article @Java Memory Model & Volatile (version 1.7)
  • When thread A releases a lock, it is essentially sending a message to the next thread that acquires the lock, saying that it has changed the shared variables
  • When thread B acquires the lock, it is essentially receiving the message thread A sent (before releasing the lock) that the shared variables have changed
  • Thread A releases the lock and thread B then competes for it; essentially, thread A sends a message to thread B through main memory informing it that the shared variables have changed
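This message-passing view can be sketched in code (a minimal illustration; the class and field names are my own): thread A writes a shared field inside a synchronized block and releases the lock, and thread B, after acquiring the same lock, is guaranteed by the JMM to see that write.

```java
public class LockVisibilityDemo {
    private final Object lock = new Object();
    private int shared = 0; // guarded by lock

    public void writer() {              // runs in thread A
        synchronized (lock) {
            shared = 42;                // flushed to main memory when the lock is released
        }
    }

    public int reader() {               // runs in thread B
        synchronized (lock) {
            return shared;              // local memory invalidated when the lock is acquired
        }
    }

    public static void main(String[] args) throws InterruptedException {
        LockVisibilityDemo demo = new LockVisibilityDemo();
        Thread a = new Thread(demo::writer);
        a.start();
        a.join();                       // thread A has released the lock by now
        System.out.println(demo.reader());
    }
}
```

The `join()` only sequences the two threads for the demo; the visibility guarantee between `writer` and `reader` comes from the release/acquire pair on `lock`.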

2. Review of Synchronized

  • Definition: Synchronized is Java’s built-in synchronization mechanism; the lock it acquires is a mutual-exclusion (mutex) lock
  • Mutex: a lock can be assigned to and held by only one thread at a time
  • Role: Synchronized ensures that only one thread can enter the critical section at a time, and guarantees the visibility, atomicity, and orderliness of shared variables
  • Use: when a thread attempts to enter a synchronized method (block), it must first acquire the lock; it releases the lock when it exits normally or throws an exception

3. The use of Synchronized

3.1 Three application methods of Synchronized
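The original post illustrates the three forms with a figure; as a minimal sketch (class and method names are illustrative), synchronized can be applied to an instance method, a static method, or a code block, and each form uses a different lock object:

```java
public class SyncForms {
    private static int staticCount = 0;
    private int instanceCount = 0;
    private int blockCount = 0;
    private final Object lock = new Object();

    // 1. Synchronized instance method: the lock is the current instance (this)
    public synchronized int instanceMethod() {
        return ++instanceCount;
    }

    // 2. Synchronized static method: the lock is the Class object (SyncForms.class)
    public static synchronized int staticMethod() {
        return ++staticCount;
    }

    // 3. Synchronized block: the lock is whatever object reference is supplied
    public int blockMethod() {
        synchronized (lock) {
            return ++blockCount;
        }
    }
}
```

Which form blocks which is determined entirely by the lock object: two calls contend only if they lock the same object.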

3.2 Rules for using Synchronized

```java
/**
 * Test template class.
 */
public class SynchronizedDemo {

    // Static synchronized method: the lock is SynchronizedDemo.class
    public static synchronized void staticMethod() {
        System.out.println(Thread.currentThread().getName() + " accesses static synchronized method staticMethod");
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread().getName() + " ends access to staticMethod");
    }

    // Plain static method containing a synchronized block on the Class object
    public static void staticMethod2() {
        System.out.println(Thread.currentThread().getName() + " accesses staticMethod2");
        synchronized (SynchronizedDemo.class) {
            System.out.println(Thread.currentThread().getName() + " obtains SynchronizedDemo.class in staticMethod2");
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    // Instance synchronized method: the lock is the current instance (this)
    public synchronized void synMethod() {
        System.out.println(Thread.currentThread().getName() + " accesses synchronized method synMethod");
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread().getName() + " ends access to synMethod");
    }

    public synchronized void synMethod2() {
        System.out.println(Thread.currentThread().getName() + " accesses synchronized method synMethod2");
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread().getName() + " ends access to synMethod2");
    }

    // Plain method: no lock at all
    public void method() {
        System.out.println(Thread.currentThread().getName() + " accesses plain method method");
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread().getName() + " ends access to plain method method");
    }

    private Object lock = new Object();

    // Plain method containing a synchronized block on a private lock object
    public void chunkMethod() {
        System.out.println(Thread.currentThread().getName() + " accesses chunkMethod");
        synchronized (lock) {
            System.out.println(Thread.currentThread().getName() + " obtains lock in chunkMethod");
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public void chunkMethod2() {
        System.out.println(Thread.currentThread().getName() + " accesses chunkMethod2");
        synchronized (lock) {
            System.out.println(Thread.currentThread().getName() + " obtains lock in chunkMethod2");
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    // Plain method containing a synchronized block on this
    public void chunkMethod3() {
        System.out.println(Thread.currentThread().getName() + " accesses chunkMethod3");
        synchronized (this) {
            System.out.println(Thread.currentThread().getName() + " obtains this in chunkMethod3");
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}
```

3.2.1 Plain methods do not block synchronized method calls

  • While one thread is inside a synchronized method, other threads can still access the non-synchronized methods normally
```java
public static void main(String[] args) {
    SynchronizedDemo synDemo = new SynchronizedDemo();
    // Call the plain method
    Thread thread1 = new Thread(() -> synDemo.method());
    // Call the synchronized method
    Thread thread2 = new Thread(() -> synDemo.synMethod());
    thread1.start();
    thread2.start();
}
// Output:
// Thread-1 accesses synchronized method synMethod
// Thread-0 accesses plain method method
// Thread-0 ends access to plain method method
// Thread-1 ends access to synMethod
// Analysis: the plain method and the synchronized method execute without blocking each other
```

3.2.2 All synchronization methods can only be accessed by one thread

  • While one thread is executing a synchronized method, no other thread can access any synchronized method of the same instance
```java
public static void main(String[] args) {
    SynchronizedDemo synDemo = new SynchronizedDemo();
    Thread thread1 = new Thread(() -> {
        synDemo.synMethod();
        synDemo.synMethod2();
    });
    Thread thread2 = new Thread(() -> {
        synDemo.synMethod2();
        synDemo.synMethod();
    });
    thread1.start();
    thread2.start();
}
// Output:
// Thread-0 accesses synchronized method synMethod
// Thread-0 ends access to synMethod
// Thread-0 accesses synchronized method synMethod2
// Thread-0 ends access to synMethod2
// Thread-1 accesses synchronized method synMethod2
// Thread-1 ends access to synMethod2
// Thread-1 accesses synchronized method synMethod
// Thread-1 ends access to synMethod
// Analysis: execution is blocked; Thread-1 must wait until Thread-0 finishes before proceeding
```

3.2.3 A block of synchronized code for a lock can only be accessed by one thread at a time

  • When synchronized blocks all use the same lock, the enclosing methods can still be entered by any thread, but the synchronized blocks guarded by that lock can only be executed by one thread at a time
```java
public static void main(String[] args) {
    SynchronizedDemo synDemo = new SynchronizedDemo();
    Thread thread1 = new Thread(() -> {
        synDemo.chunkMethod();
        synDemo.chunkMethod2();
    });
    Thread thread2 = new Thread(() -> {
        synDemo.chunkMethod();
        synDemo.chunkMethod2();
    });
    thread1.start();
    thread2.start();
}
// Output:
// Thread-0 accesses chunkMethod
// Thread-1 accesses chunkMethod
// Thread-0 obtains lock in chunkMethod
// ... pause and wait ...
// Thread-1 obtains lock in chunkMethod
// ... pause and wait ...
// Thread-0 accesses chunkMethod2
// Thread-0 obtains lock in chunkMethod2
// ... pause and wait ...
// Thread-1 accesses chunkMethod2
// Thread-1 obtains lock in chunkMethod2
// ... pause and wait ...
// Analysis:
// 1. Both threads enter chunkMethod right away: even though a plain method contains a
//    synchronized block, entering the method itself is non-blocking
// 2. Only one thread at a time gets inside a synchronized block guarded by the same lock:
//    access to those blocks is blocking
```

3.2.4 The execution order of multiple synchronization codes accessing the same lock at the same time between threads is variable

  • When multiple threads access several synchronized blocks guarded by the same lock, the blocks execute in a variable order, even though the lock object is identical; this differs noticeably from synchronized methods
  • Readers may want to pause and consider why this happens
```java
public static void main(String[] args) {
    SynchronizedDemo synDemo = new SynchronizedDemo();
    Thread thread1 = new Thread(() -> {
        synDemo.chunkMethod();
        synDemo.chunkMethod2();
    });
    Thread thread2 = new Thread(() -> {
        synDemo.chunkMethod2();
        synDemo.chunkMethod();
    });
    thread1.start();
    thread2.start();
}
// Output (one possible interleaving):
// Thread-0 accesses chunkMethod
// Thread-1 accesses chunkMethod2
// Thread-0 obtains lock in chunkMethod
// ... pause and wait ...
// Thread-0 accesses chunkMethod2
// Thread-1 obtains lock in chunkMethod2
// ... pause and wait ...
// Thread-1 accesses chunkMethod
// Thread-0 obtains lock in chunkMethod2
// ... pause and wait ...
// Thread-1 obtains lock in chunkMethod
// ... pause and wait ...
// Analysis: entering the plain methods is never blocked, and whenever a thread leaves a
// synchronized block it releases the lock, so the other thread may grab it first; the
// execution order of the blocks is therefore variable. This differs from synchronized
// methods, where the thread holds the instance lock for the whole method body, so the
// other thread stays blocked on every synchronized method of the same object until it
// is released.
```

3.2.5 Non-blocking access between different locks

  • Because the three lock objects involved are all different, the methods do not affect one another
  • Except in two cases:

    • 1. When a synchronized block locks on the Class object, it shares that lock with the static synchronized methods; the rule of 3.2.3 then applies
    • 2. When a synchronized block locks on this, it shares that lock with the instance's synchronized methods; the rules of 3.2.2 and 3.2.3 then apply
```java
public static void main(String[] args) {
    SynchronizedDemo synDemo = new SynchronizedDemo();
    Thread thread1 = new Thread(() -> synDemo.chunkMethod());   // lock object: lock
    Thread thread2 = new Thread(() -> synDemo.chunkMethod3());  // lock object: this
    Thread thread3 = new Thread(() -> staticMethod());          // lock object: SynchronizedDemo.class
    Thread thread4 = new Thread(() -> staticMethod2());         // lock object: SynchronizedDemo.class
    thread1.start();
    thread2.start();
    thread3.start();
    thread4.start();
}
// Output (one possible interleaving):
// Thread-1 accesses chunkMethod3
// Thread-2 accesses static synchronized method staticMethod
// Thread-0 accesses chunkMethod
// Thread-0 obtains lock in chunkMethod
// Thread-3 accesses staticMethod2
// Thread-1 obtains this in chunkMethod3
// ... pause and wait ...
// Thread-2 ends access to staticMethod
// Thread-3 obtains SynchronizedDemo.class in staticMethod2
// ... pause and wait ...
// Analysis:
// 1. chunkMethod (lock), chunkMethod3 (this) and the static methods (SynchronizedDemo.class)
//    use three different lock objects, so they do not block one another
// 2. staticMethod and staticMethod2 both lock SynchronizedDemo.class, so they block each
//    other: Thread-3 must wait until Thread-2 releases the Class lock
```

3.3 Reentrancy of Synchronized

  • Reentrant lock: when a thread that already holds an object lock requests that lock again, the request succeeds; such a lock is called reentrant
  • Implementation: a thread that has acquired an object lock is allowed to request the same lock again; for each reentry, the monitor entry count is incremented by 1
```java
public static void main(String[] args) {
    SynchronizedDemo synDemo = new SynchronizedDemo();
    Thread thread1 = new Thread(() -> {
        synDemo.synMethod();
        synDemo.synMethod2();
    });
    Thread thread2 = new Thread(() -> {
        synDemo.synMethod2();
        synDemo.synMethod();
    });
    thread1.start();
    thread2.start();
}
// Output:
// Thread-0 accesses synchronized method synMethod
// Thread-0 ends access to synMethod
// Thread-0 accesses synchronized method synMethod2
// Thread-0 ends access to synMethod2
// Thread-1 accesses synchronized method synMethod2
// Thread-1 ends access to synMethod2
// Thread-1 accesses synchronized method synMethod
// Thread-1 ends access to synMethod
// Analysis: when one synchronized method calls another synchronized method of the same
// instance, the thread requests the instance lock it already holds and is allowed to
// proceed; this is the most direct manifestation of a reentrant lock
```
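The reentrancy described above can also be shown directly (a minimal sketch; class and method names are illustrative): a synchronized method calls another synchronized method on the same object, re-entering the monitor it already owns instead of deadlocking.

```java
public class ReentrantDemo {
    private int depth = 0;

    public synchronized int outer() {
        depth++;            // monitor entry count is now 1
        return inner();     // re-acquires the same monitor: entry count becomes 2
    }

    public synchronized int inner() {
        depth++;
        return depth;       // no deadlock: the owning thread may re-enter freely
    }

    public static void main(String[] args) {
        System.out.println(new ReentrantDemo().outer()); // prints 2
    }
}
```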

4. Implementation principle of Synchronized

4.1 Synchronization

Synchronization in the Java Virtual Machine is implemented by monitor entry and exit, either explicitly (by use of the monitorenter and monitorexit instructions) or implicitly (by the method invocation and return instructions).


For code written in the Java programming language, perhaps the most common form of synchronization is the synchronized method. A synchronized method is not normally implemented using monitorenter and monitorexit. Rather, it is simply distinguished in the run-time constant pool by the ACC_SYNCHRONIZED flag, which is checked by the method invocation instructions (§2.11.10).


  • The Java® Virtual Machine Specification 3.14. Synchronization
  • In the JVM, synchronization is implemented by monitor entry and exit: either explicitly, through the `monitorenter` and `monitorexit` instructions, or implicitly, through the method invocation and return instructions
  • Perhaps the most common form of synchronization in Java code is the synchronized method. Synchronized code blocks are implemented with `monitorenter` and `monitorexit`, while synchronized methods are implemented implicitly with the `ACC_SYNCHRONIZED` flag: the method invocation instruction checks whether the method's entry in the run-time constant pool carries the `ACC_SYNCHRONIZED` flag
  • This article does not analyze the bytecode implementation of Synchronized any further; interested readers may refer to "Synchronized implementation: JVM source analysis" (the author may revisit this when walking through the JVM sources)

4.2 decompiling

Preparation

To get an idea of how Synchronized works, let's decompile the SynchronizedDemo class's .class file and see what it contains

```java
package concurrent;

public class SynchronizedDemo {

    public static synchronized void staticMethod() throws InterruptedException {
        System.out.println("Static synchronized method starts");
        Thread.sleep(1000);
        System.out.println("Static synchronized method ends");
    }

    public synchronized void method() throws InterruptedException {
        System.out.println("Instance synchronized method starts");
        Thread.sleep(1000);
        System.out.println("Instance synchronized method ends");
    }

    public synchronized void method2() throws InterruptedException {
        System.out.println("Instance synchronized method2 starts");
        Thread.sleep(3000);
        System.out.println("Instance synchronized method2 ends");
    }

    public static void main(String[] args) {
        final SynchronizedDemo synDemo = new SynchronizedDemo();
        Thread thread1 = new Thread(() -> {
            try {
                synDemo.method();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        });
        Thread thread2 = new Thread(() -> {
            try {
                synDemo.method2();
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        });
        thread1.start();
        thread2.start();
    }
}
```

4.2.1 Generating. Class Files

```
javac SynchronizedDemo.java
```
  • Note: Since the default encoding of the author’s OS is UTF-8, the following error may occur

  • The solution is as follows: Just specify -encoding to specify the encoding mode
```
javac -encoding UTF-8 SynchronizedDemo.java
```

We end up with a .class file: SynchronizedDemo.class

4.2.2 javap Decompilation

```
javap -v SynchronizedDemo
```

Decompilation gives us different compilation results for constant pools, synchronized methods, and synchronized code blocks, which we will cover later

  • Constant pool diagram

  • In addition to primitive types and constant values for strings and arrays, the constant pool also contains symbolic references in text form:

    • Fully qualified names of classes and interfaces
    • The name and descriptor of the field
    • Method names and descriptors
  • Synchronization method diagram

  • A synchronized method carries an `ACC_SYNCHRONIZED` flag
  • Synchronized code block icon

  • For a synchronized code block, the compiler inserts `monitorenter` and `monitorexit` instructions into the bytecode
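For reference, the `javap -c` output for a method containing a synchronized block typically has the following shape (offsets and constant-pool indexes are illustrative); note the single `monitorenter` and the two `monitorexit` instructions, the second of which covers the exception path:

```
 0: aload_0
 1: getfield      #2    // Field lock:Ljava/lang/Object;
 4: dup
 5: astore_1
 6: monitorenter        // enter the monitor of the lock object
   ...                  // body of the synchronized block
12: aload_1
13: monitorexit         // normal exit
14: goto          22
17: astore_2
18: aload_1
19: monitorexit         // exit on the exception path
20: aload_2
21: athrow
```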

4.3 Synchronizing Code Blocks Synchronizing principles

4.3.1 The Monitor

  • Every object has a monitor associated with it; inside a synchronized code block the JVM implements lock acquisition and release through the `monitorenter` and `monitorexit` instructions
  • A thread acquires the synchronization lock by acquiring the monitor

4.3.2 monitorenter instruction

Each object is associated with a monitor. A monitor is locked if and only if it has an owner. The thread that executes monitorenter attempts to gain ownership of the monitor associated with objectref, as follows:


• If the entry count of the monitor associated with objectref is zero, the thread enters the monitor and sets its entry count to one. The thread is then the owner of the monitor.


• If the thread already owns the monitor associated with objectref, it reenters the monitor, incrementing its entry count.


• If another thread already owns the monitor associated with objectref, the thread blocks until the monitor’s entry count is zero, then tries again to gain ownership.


  • Each object is associated with a monitor. The monitor is locked while it is owned (acquiring the monitor acquires the synchronization lock). When a thread executes the `monitorenter` instruction, it attempts to take ownership of the monitor as follows:

    • If the monitor's entry count is 0, the thread enters the monitor and sets the entry count to 1; the thread is now the owner of the monitor
    • If the thread already owns the monitor and re-enters it, the entry count is incremented by 1
    • If the monitor is already owned by another thread, the thread blocks until the entry count drops to 0, then competes again for ownership of the monitor
    • Only the thread that already holds the monitor is allowed to acquire it again (reentrancy)

4.3.3 monitorexit instruction

The thread that executes monitorexit must be the owner of the monitor associated with the instance referenced by objectref.


The thread decrements the entry count of the monitor associated with objectref. If as a result the value of the entry count is zero, the thread exits the monitor and is no longer its owner. Other threads that are blocking to enter the monitor are allowed to attempt to do so.


  • Executing the Monitorexit directive follows the following steps:

    • The thread executing the Monitorexit directive must be the owner of the monitor to which the object instance corresponds
    • When the instruction executes, the thread decrements the entry count by 1; if the count reaches 0, the thread exits the monitor (that is, releases the lock)
    • Other threads that block on the monitor can recontest ownership of the monitor
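The entry-count bookkeeping performed by `monitorenter` and `monitorexit` can be sketched as a toy monitor (a simplified model for illustration only, not the JVM's actual implementation; it spins where the real JVM would block the thread):

```java
import java.util.concurrent.atomic.AtomicReference;

public class ToyMonitor {
    private final AtomicReference<Thread> owner = new AtomicReference<>();
    private int entryCount = 0;

    public void enter() {                       // models monitorenter
        Thread current = Thread.currentThread();
        if (owner.get() == current) {           // already the owner: re-enter
            entryCount++;
            return;
        }
        // spin until the monitor is free (a real JVM would park the thread)
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait();
        }
        entryCount = 1;                         // first entry
    }

    public void exit() {                        // models monitorexit
        if (owner.get() != Thread.currentThread()) {
            throw new IllegalMonitorStateException();
        }
        if (--entryCount == 0) {
            owner.set(null);                    // fully released: others may compete
        }
    }

    public int entryCount() {
        return entryCount;
    }
}
```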

4.3.4 Implementation Principle

  • For a synchronized code block, the JVM implements lock acquisition and release through the `monitorenter` and `monitorexit` instructions
  • The `monitorenter` instruction is inserted at the start of the synchronized block during compilation
  • `monitorexit` instructions are inserted at the end of the block and on every exception exit path
  • The JVM guarantees that every `monitorenter` has a matching `monitorexit`
  • Any object has a monitor associated with it; when the monitor is held, the object is locked
  • A thread executing `monitorenter` attempts to take ownership of the object's monitor, that is, to acquire the object's lock
  • A thread executing `monitorexit` decrements the monitor's entry count; when the count reaches 0, the monitor is released
  • Only one thread can succeed at a time; threads that fail enter the BLOCKED state and are placed in a synchronization queue

4.3.5 Supplement

  • The `objectref` operand of these instructions identifies the object whose monitor is used as the lock
  • Because methods such as `wait`/`notify` are implemented on top of the monitor, they can only be called inside a synchronized method (block); otherwise `java.lang.IllegalMonitorStateException` is thrown
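This rule is easy to demonstrate (class and method names are illustrative): calling `wait` without holding the object's monitor throws `IllegalMonitorStateException`, while the same call inside a synchronized block on that object is legal.

```java
public class WaitWithoutMonitor {

    public static boolean waitOutsideMonitor() {
        Object lock = new Object();
        try {
            lock.wait(10);                      // monitor not held: illegal
            return false;
        } catch (InterruptedException e) {
            return false;
        } catch (IllegalMonitorStateException e) {
            return true;                        // thrown because wait() requires the monitor
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(waitOutsideMonitor()); // true
        Object lock = new Object();
        synchronized (lock) {
            lock.wait(10);                      // legal: monitor is held; returns after timeout
        }
    }
}
```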

4.4 Synchronization Methods Synchronization principles

  • Unlike synchronized code blocks, which use explicit monitor instructions, synchronized methods are implemented implicitly through the `ACC_SYNCHRONIZED` flag
  • The principle: the method invocation instruction checks whether the method's entry in the run-time constant pool carries the `ACC_SYNCHRONIZED` flag; if so, the JVM requires the thread to acquire the monitor lock before invoking the method

5. Advanced principles

5.1 Monitor Object pattern

5.1.1 Monitor Object pattern overview

  • A monitor is a synchronization tool, or synchronization mechanism, usually described as an object; its defining features are mutual exclusion and signaling (cooperation)
  • Mutual exclusion: a monitor lock can be held by only one thread at a time; no other thread can hold it concurrently
  • Signaling: a thread that cannot make progress while holding the monitor lock temporarily gives up the contest and waits for a predicate to become true (a condition variable); when a thread makes the condition true, it releases the lock and notifies the threads waiting on that condition variable so they can compete for the lock again
  • Mesa signal semantics
  • A Mesa signal is non-blocking for the signaler: notifying does not hand the lock over immediately
  • When the thread holding the monitor lock issues a notification, it does not immediately give up the lock; the notified threads wait in the queue and re-contend for the lock
  • Under this mechanism, by the time a waiter re-acquires the lock it cannot know whether another waiter entered the monitor in the meantime, so the predicate is not guaranteed to still hold and the condition must be re-checked in a `while` loop
  • Java's `notify` follows Mesa-style signal semantics
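The `while`-loop re-check demanded by Mesa semantics is the classic guarded-block idiom (a minimal sketch; the class and method names are illustrative):

```java
public class GuardedFlag {
    private final Object lock = new Object();
    private boolean ready = false;

    public void await() throws InterruptedException {
        synchronized (lock) {
            // Mesa semantics: the condition may be false again by the time the
            // notified thread re-acquires the lock, so re-check it in a loop
            while (!ready) {
                lock.wait();
            }
        }
    }

    public void signal() {
        synchronized (lock) {
            ready = true;
            lock.notifyAll();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        GuardedFlag flag = new GuardedFlag();
        Thread waiter = new Thread(() -> {
            try {
                flag.await();
            } catch (InterruptedException ignored) {
            }
            System.out.println("condition observed");
        });
        waiter.start();
        Thread.sleep(100);
        flag.signal();
        waiter.join();
    }
}
```

Using `if` instead of `while` would be incorrect: between the `notifyAll` and the waiter re-acquiring the lock, another thread could have made the condition false again.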

5.1.2 Monitor Object pattern architecture

In the Monitor Object pattern there are four main participants: the Monitor Object (which exposes synchronized methods as its public interface), the Synchronized Methods (the critical sections), the Monitor Lock (one per monitor object, serializing method invocations), and the Monitor Condition (which lets threads suspend and resume inside synchronized methods):

5.1.3 Monitor Object pattern collaboration process

  • 1. Synchronous method invocation and serialization:

    • When a client thread calls a synchronous method on a monitor object, it must first acquire its monitor lock
    • The fetch operation will not succeed as long as there are other synchronized methods being executed on the monitor object
    • When the monitor object is occupied by a thread (that is, a synchronized method is executing), the client thread blocks until it acquires the monitor lock
    • When the client thread successfully acquires the monitor lock, it enters the critical section and executes the service implemented by the method
    • Once the synchronized method completes, the monitor lock is automatically released, giving other client threads the opportunity to call synchronized methods on the monitor object
  • 2. Synchronous method thread suspends: If the client thread calling the synchronous method must be blocked or cannot proceed immediately for some other reason, it can wait on a Monitor Condition, which causes the client thread to temporarily release the Monitor lock and be suspended on the Monitor Condition
  • 3. Monitor condition notification: a client thread can notify a monitor condition in order to wake a thread blocked waiting on that condition so it can resume running
  • 4. Synchronization method thread recovery:

    • Once a synchronous method thread that was previously suspended on the monitoring condition gets the notification, it continues execution at the original point where the monitoring condition was awaited
    • Monitor locks are acquired automatically (threads automatically compete for locks) before notified threads are allowed to resume executing synchronous methods
  • The author will elaborate further on monitors in the companion post @ReentrantLock

5.2 Object header

5.2.1 Objects in JVM memory

  • In the JVM, an object's in-memory layout is divided into three areas: object header, instance data, and alignment padding
  • Object header: mainly stores the object's hashCode, lock information, type pointer, and array length (if the object is an array)
  • Instance data: stores the class's field data, including fields inherited from parent classes, aligned on 4-byte boundaries
  • Alignment padding: the JVM requires an object's starting address to be a multiple of 8 bytes, so padding is added automatically when the size is not a multiple of 8 (the padding carries no data; it exists only for byte alignment)

5.2.2 Overview of object headers

  • The locks used by synchronized are stored in the Java object header
  • The JVM uses 3 machine words to store the object header if the object is an array type, and 2 words otherwise
  • On a 32-bit VM a word is 4 bytes (32 bits); on a 64-bit VM it is 8 bytes (64 bits)

5.2.3 Storage structure of Mark Word

The default Mark Word storage structure on a 32-bit JVM in the unlocked state is, from high to low bits: object hashCode (25 bits) | GC generational age (4 bits) | biased-lock bit (1 bit, 0) | lock flag (2 bits, 01).

At run time, the data stored in the Mark Word changes with the lock flag bits (the total remains 32 bits): it may instead hold a pointer to a lock record in the stack (lightweight lock, flag 00), a pointer to a heavyweight monitor (flag 10), or the owning thread ID plus an epoch (biased lock, flag 01 with the biased bit set to 1).

On a 64-bit JVM the unlocked Mark Word leaves 25 bits unused and stores a 31-bit hashCode, plus the same age, biased-lock, and lock-flag bits.

5.3 Monitor Record

5.3.1 Monitor Record Overview

  • Monitor Records (MRs) are a thread-private data structure: each thread has a list of available MRs, and there is also a global list of available MRs
  • Each locked object is associated with one MR (the lock word in the object header's Mark Word points to the start address of the MR)
  • The Owner field in the MR holds the unique identity of the thread that owns the lock, indicating that the lock is occupied by that thread

5.3.2 Monitor Record Structure

5.3.3 Working mechanism of Monitor Record

  • A thread that successfully obtains a monitor lock becomes the owner of the monitor lock object
  • At any given time, the monitor object belongs to only one active thread.
  • The owner can call the wait method to automatically release the watch lock and enter the wait state

6. Lock optimization

6.1 Spin lock

  • Pain point: Since thread blocking/waking requires the CPU to switch between user and kernel states, frequent transitions place a heavy burden on the CPU, which in turn has a significant impact on concurrency performance
  • Symptom: Through extensive analysis, it has been found that the lock state of an object lock usually lasts only a short period of time, and there is no need to frequently block and wake up threads
  • How it works: By executing a meaningless empty loop to make a thread wait a certain amount of time without being immediately suspended, the thread holding the lock can release the lock quickly. If the lock is released quickly, the current thread has a chance to acquire the lock without blocking, thus reducing switching and improving performance
  • Pitfalls: Spin efficiency is good if the lock can be released quickly (the fewer spins actually performed, the better the efficiency, the lower the wait time); However, if the lock is held all the time, then the spin is not doing anything meaningful but is wasting CPU resources
  • Note: there must be a limit to the number of spins (or spin time). If you exceed the number of spins (time) without obtaining the lock, you will be blocked and suspended
  • Use: spinning is enabled by default since JDK 1.6 (`-XX:+UseSpinning`); the number of spins can be adjusted with `-XX:PreBlockSpin` (default 10)
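The idea can be illustrated with a user-level spin lock built on CAS (a sketch only; the JVM performs its spinning inside the synchronized implementation, not in user code, and `Thread.onSpinWait()` requires Java 9+):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // busy-wait instead of blocking: cheap if the lock is held only briefly,
        // wasteful if the holder keeps it for a long time
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();   // hint to the CPU that we are spinning
        }
    }

    public void unlock() {
        locked.set(false);
    }

    public static void main(String[] args) throws InterruptedException {
        SpinLock lock = new SpinLock();
        int[] counter = {0};
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                lock.lock();
                try {
                    counter[0]++;   // critical section
                } finally {
                    lock.unlock();
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter[0]); // 20000
    }
}
```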

6.2 Adaptive spin lock

  • Pain point: Because the spin lock can only specify a fixed number of spins, but due to the difference in the task, the optimal number of spins for each time varies
  • Principle: By introducing the concept of “intelligent learning”, the number of spins is determined by the previous spin time on the same lock and the state of the lock holder. In other words, the number of spins is not fixed, but can be calculated by analyzing the last time, the next time, more intelligent
  • Implementation: if a thread recently succeeded in acquiring a given lock by spinning, the JVM allows more spins the next time (it treats the past success as a predictor of future success); conversely, if spinning rarely succeeds for a lock, the spin count is reduced (to cut wasted cycles) or spinning is skipped entirely and the thread blocks directly (since spinning would be pointless)
  • Add: With adaptive spin locks, JVMS become smarter and more accurate at predicting lock conditions as application execution and performance monitoring information improves
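The "grow on success, shrink on failure" policy above can be modeled with a small counter. This is an assumed toy model of the idea only: the doubling/halving factors, the cap of 1000, and the starting budget of 10 are invented for illustration, not HotSpot's actual heuristics.

```java
// Toy adaptive-spin policy (assumed model, not JVM internals): widen the
// spin budget after a successful spin acquisition, shrink it after a
// failure, and skip spinning entirely once the budget reaches zero.
class AdaptiveSpinPolicy {
    private int spinBudget = 10; // hypothetical starting budget

    public void recordSpinSuccess() {
        spinBudget = Math.min(spinBudget * 2, 1000); // success: allow longer spins next time
    }

    public void recordSpinFailure() {
        spinBudget = Math.max(spinBudget / 2, 0); // failure: spin less, maybe not at all
    }

    public boolean shouldSpin() {
        return spinBudget > 0; // zero budget means block directly
    }

    public int budget() {
        return spinBudget;
    }
}
```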

6.3 blocking locks

6.3.1 blocking locks

  • Lock success: When a lock contention occurs, only the thread that acquired the lock can continue execution
  • Lock failure: A thread that fails to compete is blocked from running and placed in a wait queue associated with the target lock
  • Unlock: When the thread holding the lock exits the critical section, the lock is released and a blocked thread in the waiting queue is awakened to re-participate in the lock contention
  • Note: This article does not cover a specific JVM model, but readers interested in HotSpot JVMs can check out "JVM locking mechanism 1 - synchronized"

6.3.2 fair lock

  • In terms of implementation, when a thread competes for an object lock under a fair policy, as long as the waiting queue for the lock is not empty, the thread must be blocked and appended to the end of the queue (the tail insertion is usually done with a CAS operation so that a concurrent lock release during the insertion is not missed).

6.3.3 Unfair Lock

  • In contrast, under an unfair lock each arriving thread first competes for the lock directly, and is put into the waiting queue only when that attempt fails or the lock is already held. With this implementation, a late-arriving thread may acquire the lock without ever entering the waiting queue (barging, which introduces randomness).
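Although synchronized itself is always unfair, the JDK's `ReentrantLock` exposes both policies, which makes the distinction easy to see:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        // A fair lock hands the lock to the longest-waiting thread (FIFO);
        // an unfair lock (the default, and how synchronized behaves) lets a
        // newly arrived thread barge ahead if the lock happens to be free.
        ReentrantLock fair = new ReentrantLock(true);
        ReentrantLock unfair = new ReentrantLock(); // unfair by default

        System.out.println(fair.isFair());   // prints true
        System.out.println(unfair.isFair()); // prints false
    }
}
```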

6.4 lock coarsening

  • Pain point: Repeated, back-to-back lock and unlock operations on the same object
  • Principle: A run of consecutive lock/unlock pairs on the same object is merged into a single pair, expanding several small locked regions into one larger locked region
  • Use: Merge multiple adjacent synchronized blocks into one block, or multiple synchronized methods into one method
  • Add-on: In the JDK, StringBuffer, Vector, and Hashtable all perform implicit locking operations that can be merged this way
```java
/**
 * StringBuffer is a thread-safe string handling class: each call to
 * stringBuffer.append requires locking and unlocking. If the virtual machine
 * detects a series of locking and unlocking operations on the same object in
 * a row, it combines them into a single, larger locking and unlocking
 * operation.
 */
StringBuffer stringBuffer = new StringBuffer();

public void append() {
    stringBuffer.append("kira");
    stringBuffer.append("sally");
    stringBuffer.append("mengmeng");
}
```

6.5 lock elimination

  • Pain point: Locks are sometimes taken on heap data that, according to escape analysis, will never escape the current thread; for such data the code is inherently thread-safe and the locking is pure overhead
  • How it works: Using the runtime context, the JIT compiler identifies locks that can never be contended (the locked object is confined to one thread) and removes those lock operations at compile time, saving the cost of meaningless lock requests
  • Use: Escape analysis and lock elimination are controlled by the parameters `-XX:+DoEscapeAnalysis` and `-XX:+EliminateLocks` respectively (lock elimination only takes effect in `-server` mode)
  • Note: The implicit locking in JDK classes such as StringBuffer, Vector, and Hashtable can be eliminated this way
```java
public static void main(String[] args) {
    SynchronizedDemo synchronizedDemo = new SynchronizedDemo();
    for (int i = 0; i < 10000; i++) {
        synchronizedDemo.append("kira", "sally");
    }
}

public void append(String str1, String str2) {
    // Because the StringBuffer object is confined to this method, there can be
    // no contention for it as a shared resource, and the compiler deletes the
    // lock operations. If you already know there will be no thread-safety
    // issues, prefer StringBuilder at coding time; without lock elimination
    // enabled, StringBuffer will not be optimized.
    StringBuffer stringBuffer = new StringBuffer();
    stringBuffer.append(str1).append(str2);
}
```

6.6 Upgrading locks

  • As of JDK 1.6, there are four lock states: unlocked, biased lock, lightweight lock, and heavyweight lock
  • The state of a lock escalates as contention increases; a lock may be upgraded but never downgraded
  • Degradation is disallowed in order to improve the efficiency of acquiring and releasing locks
  • I will explain them in reverse order, heavyweight -> lightweight -> biased, because each lighter lock is an optimization of the heavier one
  • Lock upgrade process

6.7 Heavyweight Locks

  • Heavyweight locking is implemented through the monitor inside the object (see the Monitor Object pattern above)
  • The essence of the monitor relies on the mutex lock (Mutex Lock) of the underlying operating system. Thread switching implemented by the OS requires transitions between user mode and kernel mode, and the switching cost is very high
  • At the heart of a mutex lock is the idea: try to acquire the lock; if it is available, possess it; if not, go to sleep and wait
  • Interested readers can read "A brief introduction to Mutex (Lock)", which gives a good explanation of Linux's mutex lock
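The "try to acquire; if unavailable, sleep and wait" idea can be sketched in pure Java with a monitor, rather than the OS primitive itself. This is a minimal illustrative sketch of the concept, not how the OS mutex is implemented:

```java
// Minimal sketch of the mutex idea using wait/notify (illustrative only):
// try to take the lock; if it is held, go to sleep on the monitor until the
// holder wakes us up.
class SimpleMutex {
    private boolean held = false;

    public synchronized void lock() {
        boolean interrupted = false;
        while (held) {
            try {
                wait(); // not available: sleep until notified
            } catch (InterruptedException e) {
                interrupted = true; // keep waiting; restore the flag on exit
            }
        }
        held = true; // available: possess it
        if (interrupted) {
            Thread.currentThread().interrupt();
        }
    }

    public synchronized void unlock() {
        held = false;
        notify(); // wake one sleeper to retry
    }

    public synchronized boolean isHeld() {
        return held;
    }
}
```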

6.8 Lightweight Lock

6.8.1 Lightweight Locks Overview

  • Pain point: Since thread blocking/waking requires the CPU to switch between user and kernel states, frequent transitions place a heavy burden on the CPU, which in turn has a significant impact on concurrency performance
  • Main purpose: To avoid the performance cost of traditional heavyweight locks (which use the OS mutex) in the absence of multithreaded contention
  • Upgrade timing: When biased locking is disabled, or multiple threads compete for a biased lock, the biased lock is upgraded to a lightweight lock
  • Principle: Uses CAS operations in user space instead of the OS mutex, improving performance when threads execute a synchronized block alternately without actual contention
  • Data structure: The object header holds a pointer to the lock record in the stack, plus the lock flag bits
  • Supplement: Readers are advised to first read the stack-frame material in Chapter 8, "Virtual Machine Bytecode Execution Engine", of << Understanding the Java Virtual Machine >>
  • Supplement: For CAS, please refer to the author's Concurrent@AbstractQueuedSynchronizer (planned)

6.8.2 Lightweight Lock Flowchart

  • Thread 1 and thread 2 compete for the lock at the same time, causing the lock to swell into a heavyweight lock

6.8.3 Lightweight Locks Adding Locks

  • 1. Before the thread executes the synchronized block, the JVM creates space for storing the lock record in the current thread's stack frame and copies the Mark Word of the object header into the lock record (the copy is officially called the Displaced Mark Word)
  • 2. After the copy succeeds, the thread attempts a CAS to replace the Mark Word in the object header with a pointer to the lock record (the Mark Word in the object header is updated to point to the lock record, and the Owner pointer in the lock record is set to the object's Mark Word):

    • If the update succeeds, the current thread acquires the lock and continues to execute the synchronization method
    • If the update fails, other threads compete for the lock, and the current thread attempts to acquire the lock using spin. If the spin fails, the lightweight lock is upgraded to a heavyweight lock, and the current thread is blocked

6.8.4 Lightweight Lock Unlocking

  • When unlocking, a CAS operation is used to replace the Displaced Mark Word stored in the lock record back into the object header:

    • If the unlock succeeds, no competition occurs
    • If the CAS fails, the lock is contended and has already inflated into a heavyweight lock; when releasing the lock, the blocked threads must be woken up, and all threads then re-compete for the lock under the heavyweight-lock rules
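The locking steps in 6.8.3 and the unlocking CAS above can be modeled with an `AtomicReference` standing in for the object header's Mark Word. This is an assumed toy model for illustration only; real headers, lock records, and inflation live inside the JVM:

```java
import java.util.concurrent.atomic.AtomicReference;

// Toy model of lightweight locking (illustrative, not JVM internals): the
// header's Mark Word is swapped via CAS for a pointer to the acquiring
// thread's on-stack lock record, which keeps the Displaced Mark Word.
class ToyLightweightLock {
    static class LockRecord {
        final Object displacedMarkWord; // step 1: copy of the original header word
        final Thread owner;
        LockRecord(Object mark, Thread owner) {
            this.displacedMarkWord = mark;
            this.owner = owner;
        }
    }

    private final Object unlockedMark = new Object(); // stands in for the neutral Mark Word
    private final AtomicReference<Object> header = new AtomicReference<>(unlockedMark);

    /** Returns the lock record on success, or null if the CAS lost (contention). */
    public LockRecord tryLock() {
        Object mark = header.get();
        if (mark != unlockedMark) {
            return null; // header already points at someone's lock record
        }
        LockRecord record = new LockRecord(mark, Thread.currentThread());
        return header.compareAndSet(mark, record) ? record : null; // step 2: CAS header -> record
    }

    /** CAS the Displaced Mark Word back into the header; failure would mean inflation. */
    public boolean unlock(LockRecord record) {
        return header.compareAndSet(record, record.displacedMarkWord);
    }
}
```

A failed `tryLock` is where the real JVM would spin and then inflate to a heavyweight lock; the toy simply reports the failure.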

6.8.5 Precautions for Lightweight Locks

  • Hidden danger: Lightweight locking rests on the premise of "no multi-threaded contention". Once that premise is broken, the cost of the extra CAS operations is added on top of the mutex cost, so in a contended environment a lightweight lock is actually slower than a heavyweight lock

6.9 biased locking

6.9.1 Overview of biased locking

  • Pain point: The HotSpot authors found that in most cases there is not only no multithreaded contention, but the same lock is acquired repeatedly by the same thread. Biased locking was designed to make acquisition cheaper for that thread (whether it pays off depends heavily on the actual usage pattern)
  • Main purpose: To minimize the unnecessary lightweight-lock execution path in the absence of multithreaded contention
  • Principle: When only one thread executes a synchronized block, a simple identity check replaces the CAS operations, further improving performance
  • Data structure: Includes the ID of the thread holding the bias, a biased-lock flag, an epoch (a timestamp for the bias), the generational age of the object, and the lock flag bits

6.9.2 Flowchart of biased locking

  • Thread 1 demonstrates the initialization of a biased lock and thread 2 demonstrates the unlocking of a biased lock

6.9.3 Biased Lock Initialization

  • When a thread accesses a synchronized block and acquires the lock, it stores the biased thread ID in the object header and in the lock record in the stack frame. Afterwards, when that thread enters and exits the synchronized block, no CAS operation is needed to lock or unlock; it simply checks whether the object header's Mark Word stores a bias toward the current thread:

    • If yes, the thread has obtained the lock and continues to execute the task
    • If it is not stored, it further checks whether the current lock is in biased mode (that is, whether the biased-lock flag in the object header is set to 1 and the lock flag bits are 01):

      • If it is not set, CAS is used to compete for the lock.
      • If set, try using CAS to point the bias lock of the object header to the current thread, which is the thread ID in the structure
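The checks above amount to: one CAS to install the bias, then plain reads on every re-entry. A toy model of that fast path (assumed for illustration only; the real bias lives in the Mark Word, not in a separate field):

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy model of the biased-lock fast path (illustrative, not JVM internals):
// the "header" stores the owning thread's id; re-entry by the same thread
// needs only a read, and the single CAS happens on the first bias.
class ToyBiasedLock {
    private static final long UNBIASED = 0L; // thread ids are positive, so 0 means "no bias"
    private final AtomicLong biasedThreadId = new AtomicLong(UNBIASED);

    /** Returns true if the current thread may enter without revocation. */
    public boolean enter() {
        long self = Thread.currentThread().getId();
        long owner = biasedThreadId.get();
        if (owner == self) {
            return true; // fast path: already biased toward us, no CAS at all
        }
        if (owner == UNBIASED) {
            return biasedThreadId.compareAndSet(UNBIASED, self); // one-time CAS installs the bias
        }
        return false; // biased toward another thread: revocation would be required
    }
}
```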

6.9.4 Revoke a lock based on bias

  • Biased locking uses a mechanism that waits until a contention occurs to release the lock, and only when other threads compete for the lock will the thread holding the biased lock release the lock
  • Revocation of bias locks requires waiting for the global safe point (at which no bytecode is being executed)
  • To undo biased locks, follow these steps:

    • First, suspend the thread that owns the biased lock and check whether the thread is alive:

      • If the thread is inactive, set the object header to lock free (other threads will regain the bias lock)
      • If the thread is active, the stack holding the biased lock is walked, the lock records of the biased object are traversed, and the lock records in the stack and the Mark Word of the object header are reset:

        • Or re-bias to another thread (giving the bias lock to another thread is equivalent to the current thread “being” released)
        • Either revert to no lock or mark the lock object as unsuitable as a bias lock (at which point the lock is upgraded to lightweight)
    • Finally, the suspended thread is woken up, and the thread blocked at the safe point continues to execute the synchronized code block

6.9.5 Biased Lock Closes the lock

  • Bias locking is enabled by default in JDK1.6 or higher and will be activated several seconds after the application starts
  • If necessary, the startup delay can be disabled with the JVM parameter `-XX:BiasedLockingStartupDelay=0`
  • If you know the lock is normally contended, biased locking can be disabled with the JVM parameter `-XX:-UseBiasedLocking`; with biased locking off, the JVM goes straight to lightweight locking by default

6.9.6 Precautions for biased locking

  • Advantage: A biased lock needs a CAS atomic instruction only once, when the ThreadID is first installed; afterwards no CAS is required at all (unlike the other locks)
  • Hidden danger: Since a biased lock must be revoked once multithreaded contention occurs, biased locking only pays off if the performance cost of revocation is lower than the cost of the CAS instructions it saves (which is usually only known through extensive stress testing)
  • Comparison: Lightweight locking is designed to improve performance when threads alternately execute synchronized blocks, while biased locking is designed to further improve performance when only one thread executes synchronized blocks

6.10 Biased Locks vs. Lightweight locks vs. Heavyweight Locks

  • Explore the Java synchronization mechanism and talk about concurrency (ii) — Synchronized in Java SE1.6