
0. Common interview questions

Let's start with interview questions from Alibaba and other large companies:

  • Talk about your understanding of synchronized

  • Is synchronized necessarily inferior to Lock?

  • When is the monitor object for synchronized created? Do you know how monitorenter and monitorexit ensure synchronization, and how these operations are carried out at the bottom of the machine?

  • You mentioned the optimization process for synchronized. What is the difference between biased locks and lightweight locks?

Background of synchronized lock optimization: using locks guarantees data safety but degrades performance, while lock-free code exploits thread parallelism for performance at the cost of safety. How do we balance the two?
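To make the safety half of this trade-off concrete, here is a minimal sketch (my own example, not from the original post): two threads increment a shared counter under the same monitor, and the lock guarantees a correct result at the cost of serializing the increments.

```java
public class CounterDemo {
    static final Object LOCK = new Object();
    static int safeCount;

    // Two threads each add 100_000 under the same monitor; the lock
    // serializes the increments so no update is lost.
    public static int run() {
        safeCount = 0;
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                synchronized (LOCK) {
                    safeCount++;
                }
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException ignored) { }
        return safeCount;
    }

    public static void main(String[] args) {
        System.out.println("safeCount = " + run()); // always 200000
    }
}
```

Removing the synchronized block makes lost updates possible, which is exactly the safety that the lock buys.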

The lock upgrade path is: no lock -> biased lock -> lightweight lock -> heavyweight lock

Synchronized lock: the Mark Word in the object header is reused according to the different lock flag bits and the lock upgrade policy

1. Performance changes of Synchronized

1.1 Before Java 5

Before Java 5 there was only synchronized, a heavyweight operation at the operating-system level.

With a heavyweight lock, performance degrades when contention is fierce, because before Java 5 every block and wakeup meant switching between user mode and kernel mode. (Put bluntly, a heavyweight lock requires dropping into the operating-system kernel, which costs performance.)

A Java thread is mapped to a native thread of the operating system. Blocking or waking a thread therefore requires the operating system to intervene and switch between user mode and kernel mode, which consumes a lot of system resources: user mode and kernel mode each have their own dedicated memory space, special registers, and so on. Many variables and parameters must be passed to the kernel when switching from user mode to kernel mode, and the kernel must also save some register values and variables, so that when the kernel-mode call finishes, execution can switch back to user mode and continue.

In early versions of Java, synchronized was a heavyweight lock and was inefficient because the monitor lock depended on the underlying operating system's Mutex Lock: both suspending and resuming a thread had to be done in kernel mode. Blocking or waking a Java thread requires the operating system to switch CPU state, and that switch costs processor time. If the code inside a synchronized block is trivial, the state switch can take longer than the user code itself. That is why early synchronized was inefficient.
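The blocking described above is visible from Java code. In this stdlib-only sketch (class and method names are my own), one thread holds a monitor while a second thread tries to enter the same block; the second thread is parked by the JVM/OS and reports Thread.State.BLOCKED.

```java
public class BlockedStateDemo {
    static final Object LOCK = new Object();

    // While 'holder' sleeps inside the monitor, 'waiter' tries to enter the
    // same block; the JVM parks it and its state becomes BLOCKED.
    public static Thread.State contendedState() {
        Thread holder = new Thread(() -> {
            synchronized (LOCK) {
                sleep(600);
            }
        });
        Thread waiter = new Thread(() -> {
            synchronized (LOCK) { }
        });
        holder.start();
        sleep(100);           // let the holder take the lock
        waiter.start();
        sleep(100);           // give the waiter time to block on the monitor
        Thread.State s = waiter.getState();
        try { holder.join(); waiter.join(); } catch (InterruptedException ignored) { }
        return s;
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }

    public static void main(String[] args) {
        System.out.println(contendedState()); // BLOCKED
    }
}
```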

After Java 6, lightweight locks and biased locks were introduced to reduce the performance cost of acquiring and releasing locks.

1.2 Why can every object be a lock?

The object-header layout is defined in HotSpot's markOop.hpp:

A monitor can be understood as a synchronization tool or mechanism, and it is usually described in terms of Java objects. Java objects are natural monitors: every Java object can become one, because in Java's design every object is born with an invisible lock, called the internal lock or monitor lock.

The monitor in turn relies on the underlying operating system's Mutex Lock. Thread switching implemented by the operating system requires a transition from user mode to kernel mode, which is very expensive.

Mutex Lock: the monitor is implemented in C++ inside the JVM; in essence it is built on the operating system's Mutex Lock. Because the thread switching this requires involves a user-mode to kernel-mode transition that costs a lot of processor time, synchronized is a heavyweight operation in the Java language.
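One observable consequence of every object carrying a monitor is that wait() and notify() are monitor operations: the JVM rejects them when the caller does not own the object's lock. A minimal sketch (names are my own):

```java
public class MonitorDemo {
    // Calling notify() without holding o's monitor is illegal;
    // the JVM throws IllegalMonitorStateException.
    public static boolean notifyWithoutLockFails(Object o) {
        try {
            o.notify();          // caller does not own o's monitor
            return false;
        } catch (IllegalMonitorStateException e) {
            return true;         // monitor ownership is enforced
        }
    }

    public static void main(String[] args) {
        Object o = new Object();
        System.out.println(notifyWithoutLockFails(o)); // true
        synchronized (o) {
            o.notify();          // legal: the monitor is held here
        }
    }
}
```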

How does Monitor relate to Java objects and threads?

  • When a Java object is locked by a thread, the LockWord field in the object's Mark Word points to the start address of the monitor.
  • The monitor's Owner field stores the ID of the thread that holds the object's lock.

Switching to the Mutex Lock requires a transition from user mode to kernel mode, so the state transition costs a lot of processor time.

1.3 Java 6 and later

Java 6 began optimizing synchronized: lightweight locks and biased locks were introduced to reduce the performance cost of acquiring and releasing locks.

2. Types of synchronized locks and upgrade steps

2.1 Multi-threaded access

The three cases are as follows:

  • Only one thread ever accesses the lock
  • Two threads, A and B, access it alternately
  • Many threads access it competitively

2.2 The upgrade process

The lock used by synchronized is the Mark Word in the Java object header. Lock upgrading depends mainly on the lock flag bits and the biased-lock flag bit in the Mark Word. The 64-bit Mark Word layout:

2.3 No lock

The lock-free demo is shown below (it uses the JOL library to print the object header):

import org.openjdk.jol.info.ClassLayout;

public class MyObject
{
    public static void main(String[] args)
    {
        Object o = new Object();

        System.out.println("Decimal hash code: " + o.hashCode());
        System.out.println("Hexadecimal hash code: " + Integer.toHexString(o.hashCode()));
        System.out.println("Binary hash code: " + Integer.toBinaryString(o.hashCode()));
        System.out.println(ClassLayout.parseInstance(o).toPrintable());
    }
}

This program has no lock contention:

2.4 Biased lock

2.4.1 Main Functions

  • When a synchronized block is accessed repeatedly by the same thread, and there is only that one thread, the thread automatically acquires the lock on subsequent accesses.

  • It is like a regular customer at a shop: a familiar face walks straight in.

  • Think of the multithreaded ticket-selling demos: in practice the same thread often acquires the lock again and again.

  • The HotSpot authors found that in most cases not only is there no multi-thread contention for a lock, but the same lock is acquired many times by the same thread. Biased locking was introduced for exactly this case: it improves performance when only one thread executes the synchronized block.
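Biased locking itself is invisible to Java code, but the pattern it targets, one thread taking the same lock over and over, is easy to sketch; reentrancy is the extreme case (my own illustrative example, not from the original post):

```java
public class ReentrantDemo {
    static final Object LOCK = new Object();
    static int depth;

    // The same thread re-enters the same monitor without blocking itself;
    // with biased locking enabled this path avoids a CAS on every entry.
    public static int enterThreeTimes() {
        synchronized (LOCK) {
            synchronized (LOCK) {
                synchronized (LOCK) {
                    depth = 3;
                }
            }
        }
        return depth;
    }

    public static void main(String[] args) {
        System.out.println(enterThreeTimes()); // 3
    }
}
```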

2.4.2 The 64-bit Mark Word diagram, revisited

A biased lock records the thread ID in the Mark Word via CAS.

2.4.3 Holding a biased lock

From theory to practice:

In real applications it turns out that "the lock is almost always held by the same thread and rarely contended": the lock stays with the first thread that acquired it, and that thread is the lock's biased thread.

So it is enough to record the biased thread's ID when the lock is first acquired; that thread then holds the lock from then on. When it enters and exits the synchronized block, it does not need to lock and unlock again; it simply checks whether the object header stores a bias toward the current thread.

If they are equal, the lock is biased toward the current thread and there is no need to acquire it again until contention occurs: on each entry the thread only compares the biased thread ID in the header with its own, and if they match it enters the synchronized code directly, without a CAS on the object header for every lock and unlock. If a single thread uses the lock the whole time, biased locking clearly adds little overhead and performs well.

If they differ, there is contention: the lock no longer always favors the same thread and may need to be upgraded to a lightweight lock to guarantee fair competition between threads. A thread holding a biased lock never releases it proactively; it releases the lock only when other threads try to compete for it.

Technical implementation: when a synchronized method is first entered by a thread, the Mark Word of the lock object switches its status bits to the biased-lock state and uses its first 54 bits to store a pointer to the thread as its identity. If the same thread enters the synchronized method again, it only needs to check whether the biased lock in the object header's Mark Word points to its own ID, without entering the monitor to compete for the object.

A detailed example using an Account object follows:

A biased lock does not touch the operating system at all: there is no user-to-kernel transition and no direct jump to the heaviest lock level. Take the object header of an Account object as an example.

When a thread executes the synchronized block, the JVM uses CAS to record the thread pointer in the Mark Word and sets the bias flag, indicating that the current thread has acquired the lock. The lock object becomes a biased lock (the lock flag bits in the object header are modified via CAS), which literally means "biased in favor of the first thread that acquires it". The thread does not release the biased lock after finishing the synchronized block.

At this point the thread has the lock and can execute the synchronized block. When it reaches the block a second time, the JVM checks the Mark Word of the Account object to see whether the biased thread ID is still its own, i.e. whether the thread holding the lock is still itself. If so, it continues working in the critical section without locking again, since the lock was never released. If a single thread uses the lock the whole time, biased locking clearly adds little overhead and performs well.

Conclusion: the JVM does not need to ask the operating system to set up a mutex (in the kernel); it only needs to record the thread ID to mark that the lock has been acquired. No operating-system involvement is required.

That is biased locking: bias the lock toward the current thread and, as long as no other thread competes for it, let that thread keep executing.

2.4.4 Biased-lock JVM flags

java -XX:+PrintFlagsInitial | grep BiasedLock

The biased-lock JVM parameters are as follows:

  • Biased locking is actually enabled by default since JDK 1.6, but its activation is delayed after startup.

  • So add the parameter -XX:BiasedLockingStartupDelay=0 to activate it as soon as the application starts.

  • Enable biased locking: -XX:+UseBiasedLocking -XX:BiasedLockingStartupDelay=0

  • Disable biased locking: -XX:-UseBiasedLocking (the application then goes straight to lightweight locking by default)

2.4.5 Code demonstration

With the default settings, the demo code is as follows:

import org.openjdk.jol.info.ClassLayout;

public class MyObject
{
    public static void main(String[] args)
    {
        Object o = new Object();

        new Thread(() -> {
            synchronized (o) {
                System.out.println(ClassLayout.parseInstance(o).toPrintable());
            }
        }, "t1").start();
    }
}

Because these parameters take effect by default:

-XX:+UseBiasedLocking             enable biased locking (on by default)
-XX:-UseBiasedLocking             disable biased locking
-XX:BiasedLockingStartupDelay=0   remove the startup delay (needed to demo biased locking)

Parameter notes: biased locking is enabled by default in JDK 1.6 and above, but it is only activated a few seconds after the program starts; the delay can be removed with the JVM parameter -XX:BiasedLockingStartupDelay=0.

If you are sure the lock is usually contended, biased locking can be disabled with the JVM parameter -XX:-UseBiasedLocking, and the program then uses lightweight locking by default.

To remove the delay so the feature is active immediately: -XX:BiasedLockingStartupDelay=0

Good times never last… o(╥﹏╥)o: a second thread arrives to grab the lock.

2.4.6 Revocation of biased lock

When other threads start competing for the lock, the biased lock can no longer be used and must be upgraded to a lightweight lock. The competing thread fails its CAS attempt to update the object header, and the biased lock is revoked at the next global safe point (a point at which no bytecode is executing).

Biased locking only releases the lock when contention occurs: a thread holding a biased lock keeps it until another thread competes for it. Revocation waits for the global safe point (where no bytecode is executing) and then checks whether the thread holding the biased lock is still executing:

  • If the first thread is still executing the synchronized method (inside the synchronized block) when another thread tries to seize the lock, the biased lock is revoked and the lock is upgraded. The resulting lightweight lock is held by the thread that originally held the biased lock, which keeps executing its synchronized code, while the competing thread spins, waiting to acquire the lightweight lock.

  • If the first thread has finished the synchronized method (exited the synchronized block), the object header is set back to the lock-free state, the biased lock is revoked, and the object can be rebiased.

2.4.7 General Procedure flow diagram

2.5 Lightweight lock

2.5.1 Role

  • Threads compete for the lock, but conflicts are resolved very quickly.
  • It is essentially a spin lock.

2.5.2 The 64-bit Mark Word diagram, revisited

2.5.3 Obtaining a lightweight lock

Lightweight locks are designed to improve performance when threads execute synchronized blocks almost alternately.

Main purpose: when there is no real multi-thread contention, use CAS to avoid the cost of the OS mutex used by heavyweight locks; in short, spin first, block later.

Upgrade timing: when biased locking is disabled, or when multiple threads compete for a biased lock, the biased lock is upgraded to a lightweight lock.

Suppose thread A has already acquired the lock and thread B comes to grab it. Because the object's lock is already held by thread A, the lock is currently biased.

When thread B sees that the thread ID in the object's Mark Word is not its own (it is thread A's), thread B performs a CAS in the hope of acquiring the lock.

There are two cases in thread B’s operation:

  • If B's CAS succeeds, the thread ID in the Mark Word is simply replaced with B's own (A → B) and the lock is rebiased to the new thread: the biased lock is handed over, which effectively means the previous thread has "released" it. The lock stays in the biased state; thread A is done, thread B takes over.

  • If B's CAS fails, the biased lock is upgraded to a lightweight lock. The lightweight lock is held by the thread that originally held the biased lock, which keeps executing its synchronized code, while the competing thread B spins, waiting to acquire the lightweight lock.
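The CAS-plus-spin idea behind lightweight locks can be imitated at the user level with AtomicBoolean. This is my own sketch of a spin lock, not the JVM's internal implementation:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLockDemo {
    private final AtomicBoolean locked = new AtomicBoolean(false);
    static int shared;

    public void lock() {
        // CAS loop: keep retrying instead of parking the thread
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // CPU hint while spinning (Java 9+)
        }
    }

    public void unlock() {
        locked.set(false);
    }

    public static int run() {
        shared = 0;
        SpinLockDemo lock = new SpinLockDemo();
        Runnable task = () -> {
            for (int i = 0; i < 50_000; i++) {
                lock.lock();
                try { shared++; } finally { lock.unlock(); }
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        try { a.join(); b.join(); } catch (InterruptedException ignored) { }
        return shared;
    }

    public static void main(String[] args) {
        System.out.println(run()); // 100000
    }
}
```

A spin lock never blocks the loser, which is cheap for short critical sections but wastes CPU under long contention, exactly the trade-off the lock-upgrade policy manages.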

2.5.4 Code demonstration

If biased locking is turned off, the program goes straight to lightweight locking: -XX:-UseBiasedLocking

2.5.5 Spinning beyond a certain point

When spinning exceeds a certain number of attempts, the lock is upgraded to a heavyweight lock. The threshold depends on the JVM version.

Before Java 6 the limit was fixed: -XX:PreBlockSpin=10 spins by default, or the lock was inflated when the number of spinning threads exceeded half the number of CPU cores.

After Java 6, spinning became adaptive: the number of spins is not fixed but is decided based on:

  • the result of the previous spin on the same lock, and
  • the state of the thread that holds the lock.
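The "spin first, then block" policy can likewise be sketched in user code: try a CAS a bounded number of times, then stop burning CPU and park. The spin limit is fixed here, whereas HotSpot adapts it; this is my own sketch, not HotSpot's algorithm:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

public class SpinThenBlockLock {
    private static final int MAX_SPINS = 100; // fixed here; HotSpot adapts this
    private final AtomicBoolean locked = new AtomicBoolean(false);
    static int counter;

    public void lock() {
        for (int i = 0; i < MAX_SPINS; i++) {
            if (locked.compareAndSet(false, true)) return; // fast path: spin won
        }
        // Slow path: stop burning CPU, sleep briefly between retries
        while (!locked.compareAndSet(false, true)) {
            LockSupport.parkNanos(1_000);
        }
    }

    public void unlock() { locked.set(false); }

    public static int run() {
        counter = 0;
        SpinThenBlockLock lock = new SpinThenBlockLock();
        Runnable task = () -> {
            for (int i = 0; i < 20_000; i++) {
                lock.lock();
                try { counter++; } finally { lock.unlock(); }
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        try { a.join(); b.join(); } catch (InterruptedException ignored) { }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(run()); // 40000
    }
}
```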

2.5.6 Differences between lightweight locks and biased locks

  • With a lightweight lock, a thread that fails to get the lock spins and tries again.
  • A lightweight lock must be released every time the synchronized block is exited, whereas a biased lock is released only when contention occurs.

2.6 Heavyweight lock

A large number of threads compete for the lock and conflicts are frequent. The lock flag bits are as follows:

Code demo:

2.7 small summary

Lock | Advantages | Disadvantages | Applicable scenario
Biased lock | Locking and unlocking cost nothing extra; only nanoseconds slower than an unsynchronized method | If threads contend for the lock, revoking the bias adds extra cost | Only one thread ever accesses the synchronized block
Lightweight lock | Competing threads do not block, which improves response time | A thread that never wins the lock burns CPU while spinning | Response time matters; the synchronized block executes quickly
Heavyweight lock | Contending threads do not spin, so no CPU is wasted | Threads block and response time suffers | Throughput matters; the synchronized block executes slowly

Summary of the synchronized lock upgrade process: in a word, spin first, block later.

In effect, the earlier pessimistic lock (the heavyweight lock) is replaced, under suitable conditions, by biased locks and lightweight locks (CAS-based spin locks).

Synchronized methods and synchronized blocks compile to different bytecode, but both are implemented internally via the Mark Word in the object header.

Before JDK 1.6 synchronized used only heavyweight locks; since then it has been optimized into the no lock -> biased lock -> lightweight lock -> heavyweight lock progression instead of using a heavyweight lock in every case.

Biased lock: suits the single-threaded case; when there is no lock contention, the synchronized method/block is entered with a biased lock.

Lightweight lock: suits low contention (roughly the scenarios where optimistic locking applies). When contention appears, the biased lock is upgraded to a lightweight lock, which spins. If the synchronized method/block executes very quickly, spinning costs some CPU but is still more efficient than a heavyweight lock.

Heavyweight lock: suits high contention, where the synchronized method/block takes a long time to execute and the spinning cost of a lightweight lock would exceed the cost of a heavyweight lock, so the lock is upgraded to a heavyweight lock.

3. Optimization of lock by JIT compiler

JIT stands for Just-In-Time compiler.

Lock elimination: from the JIT's point of view, the synchronized (o) below might as well not exist, since the lock object is never shared with or escapes to other threads. In the extreme case no machine code is generated for the lock at all and its use is eliminated. The code is shown below:

public class LockClearUPDemo {
    static Object objectLock = new Object(); // a normal lock object

    public void m1() {
        // Lock elimination: the JIT sees that o never escapes this method,
        // so the synchronized below is effectively removed.
        Object o = new Object();
        synchronized (o) {
            System.out.println("-----hello LockClearUPDemo" + "\t" + o.hashCode() + "\t" + objectLock.hashCode());
        }
    }

    public static void main(String[] args) {
        LockClearUPDemo demo = new LockClearUPDemo();
        for (int i = 1; i <= 10; i++) {
            new Thread(() -> {
                demo.m1();
            }, String.valueOf(i)).start();
        }
    }
}

Lock coarsening: if consecutive synchronized blocks in a method lock the same object, the JIT compiler merges them into one larger block, widening the locked range so the lock is acquired once instead of being repeatedly acquired and released, which improves performance. The code is shown below:
public class LockBigDemo
{
    static Object objectLock = new Object();


    public static void main(String[] args)
    {
        new Thread(() -> {
            synchronized (objectLock) {
                System.out.println("11111");
            }
            synchronized (objectLock) {
                System.out.println("22222");
            }
            synchronized (objectLock) {
                System.out.println("33333");
            }
        }, "a").start();

        new Thread(() -> {
            synchronized (objectLock) {
                System.out.println("44444");
            }
            synchronized (objectLock) {
                System.out.println("55555");
            }
            synchronized (objectLock) {
                System.out.println("66666");
            }
        }, "b").start();
    }
}
