1. The 7 categories of locks

First, note that these categories are different criteria for evaluating the same thing, much as a city can be judged by population size, economic development, or urban area. A city may satisfy several criteria at once: Beijing, for example, has a large population, a developed economy, and a large urban area.

Similarly, a lock in Java can satisfy multiple criteria and belong to several categories at once; a ReentrantLock, for example, is both interruptible and reentrant.

Based on these classification criteria, we divide locks into the following 7 categories:

  • Biased/lightweight/heavyweight locks;
  • Reentrant/non-reentrant locks;
  • Shared/exclusive locks;
  • Fair/unfair locks;
  • Pessimistic/optimistic locks;
  • Spin/non-spin locks;
  • Interruptible/non-interruptible locks.
1.1 Java Object Headers

Synchronized is a pessimistic lock: a thread must acquire the lock on the synchronized resource before operating on it. This lock is stored in the Java object header.

In HotSpot, the object header contains two parts of data: a Mark Word (marker field) and a Klass Pointer (type pointer).

Mark Word: by default stores the object's hash code, generational age, and lock flag bits. This information is unrelated to the object's own definition, so the Mark Word is designed as a non-fixed data structure that stores as much data as possible in a very small amount of memory. It reuses its storage space according to the object's state, which means the data stored in the Mark Word changes as the lock flag bits change at runtime.

Klass Pointer: a pointer to the object's class metadata, which the virtual machine uses to determine which class the object is an instance of.

Array length: present only if the object is an array; this part of the data occupies 32 bits.

The first category is biased/lightweight/heavyweight locks. These three specifically refer to the states of the synchronized lock, which are indicated by the Mark Word in the object header.

1.2 The Monitor lock

Monitor can be understood as a synchronization tool or a synchronization mechanism and is often described as an object. Each Java object has an invisible lock, called an internal lock or Monitor lock.

Monitor is a thread-private data structure, and each thread has a list of available Monitor Records, as well as a global list of available records. Each locked object is associated with a Monitor, and an Owner field in the monitor stores the unique identity of the thread that owns the lock, indicating that the lock is occupied by the thread.

Back to synchronized: synchronized implements thread synchronization through the Monitor, which in turn relies on the underlying operating system's Mutex Lock.

As mentioned in the discussion of spin locks, blocking or waking a Java thread requires the operating system to switch CPU state, and that state transition takes processor time. If the contents of a synchronized block are simple enough, the state transition can take longer than the user code itself. This is how synchronized originally worked, and it is why synchronized was inefficient prior to JDK 6. This type of lock, which relies on the operating system's Mutex Lock, is called a "heavyweight lock." JDK 6 introduced "biased locks" and "lightweight locks" to reduce the performance cost of acquiring and releasing locks.

1.2.1 When the Monitor lock is acquired and released

The simplest way to synchronize a code block or method is to modify it with the synchronized keyword; the only way to acquire the monitor lock is to enter the synchronized block or method it protects. A thread automatically acquires the lock before entering synchronized-protected code, and automatically releases it on exit, whether through the normal path or by throwing an exception.

public synchronized void method() {
    // method body
}

We see that method() is modified by synchronized. To understand the principle behind it, we can rewrite the code above as equivalent pseudo-code.

public void method() {
    this.intrinsicLock.lock();
    try {
        // method body
    } finally {
        this.intrinsicLock.unlock();
    }
}

In this expanded form, the built-in lock is acquired immediately on entering the method, the method body is protected by a try block, and the lock is released in finally. The intrinsicLock here is the monitor lock. With this pseudo-code expansion, you get a clearer picture of what synchronized does.

1.2.2 Viewing the disassembly with javap

JVM implementations of synchronized methods and synchronized blocks differ in detail, so let’s take a look at each implementation.

Synchronized code block

public class SynTest {
    public void synBlock() {
        synchronized (this) {
            System.out.println("wj");
        }
    }
}

Compiling this produces a bytecode file named SynTest.class. We then run javap -verbose SynTest.class to see the corresponding disassembly.

flags: ACC_PUBLIC
Code:
  stack=2, locals=3, args_size=1
     0: aload_0
     1: dup
     2: astore_1
     3: monitorenter
     4: getstatic     #2    // Field java/lang/System.out:Ljava/io/PrintStream;
     7: ldc           #3    // String wj
     9: invokevirtual #4    // Method java/io/PrintStream.println:(Ljava/lang/String;)V
    12: aload_1
    13: monitorexit
    14: goto          22
    17: astore_2
    18: aload_1
    19: monitorexit
    20: aload_2
    21: athrow
    22: return

As you can see, the synchronized block compiles to monitorenter and monitorexit instructions, at offsets 3, 13, and 19. The reason there is one monitorenter but two monitorexit instructions is that the JVM guarantees every monitorenter has a matching monitorexit. The monitorenter instruction is inserted at the beginning of the synchronized block, while monitorexit must be inserted both at the normal end of the block and on the exception path, to ensure the lock is released if an exception is thrown.

monitorenter can be understood as acquiring the lock and monitorexit as releasing it. Each object maintains a counter that records how many times it has been locked; for an unlocked object this counter is 0. Let's look at monitorenter and monitorexit in detail.

monitorenter: one of three things happens when the thread executing monitorenter tries to take ownership of the monitor:

  • If the monitor's count is 0, the thread acquires the monitor and sets its count to 1; the thread is then the owner of the monitor.

  • If the thread already owns the monitor, it re-enters and the count is incremented.

  • If another thread already owns the monitor, the thread blocks until the monitor's count drops to zero, indicating that the monitor has been released, and then tries again to acquire ownership.

monitorexit: each monitorexit decreases the monitor's counter by one. When the counter reaches zero, the monitor has been released and no thread owns it anymore; other threads waiting for the monitor can then try again to acquire ownership.
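To make the counter behavior concrete, here is a minimal sketch (the class name is ours, not from the JVM spec): the same thread passes monitorenter twice on one object, and the monitor count simply rises to 2 and falls back to 0.

public class MonitorReentry {
    private static final Object lock = new Object();

    public static void main(String[] args) {
        synchronized (lock) {         // monitorenter: count goes 0 -> 1
            synchronized (lock) {     // same owner re-enters: count goes 1 -> 2
                System.out.println("re-entered the same monitor");
            }                         // monitorexit: count goes 2 -> 1
        }                             // monitorexit: count goes 1 -> 0, monitor released
    }
}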

Synchronized methods

As shown above, synchronized blocks are implemented with the monitorenter and monitorexit instructions. Synchronized methods, however, are not: disassembled with javap, a synchronized method looks mostly the same as an ordinary method, except that it carries the ACC_SYNCHRONIZED flag to indicate that it is a synchronized method.

public synchronized void synMethod() {}

The corresponding disassembly is as follows:

 public synchronized void synMethod();
    descriptor: ()V
    flags: ACC_PUBLIC, ACC_SYNCHRONIZED
    Code:
      stack=0, locals=1, args_size=1
         0: return
      LineNumberTable:
        line 10: 0


Synchronized methods have the ACC_SYNCHRONIZED flag. When a thread invokes a method, it first checks whether the method carries ACC_SYNCHRONIZED; if so, it must acquire the monitor lock before executing the method and release it after the method completes. In other respects the synchronized method behaves like the synchronized block: for example, another thread requesting to execute the method blocks because it cannot acquire the monitor lock.

1.2.3 Monitor implementation

In the HotSpot Java virtual machine, the monitor is implemented by ObjectMonitor. Its main data structure is as follows (located in the HotSpot source file objectMonitor.hpp and implemented in C++; some fields are omitted):

ObjectMonitor() {
    _count       = 0;    // record count
    _recursions  = 0;    // number of lock re-entries
    _owner       = NULL; // points to the thread holding this ObjectMonitor
    _WaitSet     = NULL; // threads that called wait() are added to _WaitSet
    _EntryList   = NULL; // threads waiting to acquire the lock are added to this list
}

For a synchronized-modified method (or block):

  1. When multiple threads access the method at the same time, they are placed in the _EntryList queue and blocked.
  2. When a thread acquires the object's monitor, the ObjectMonitor's _owner is set to point to that thread and _count is incremented by 1, indicating that the lock is now held by a thread.
  3. If the thread holding the lock calls wait(), _owner is reset to null, _count is decremented by 1, and the thread enters the _WaitSet queue, where it stays until some thread calls notify() to wake it; it then moves back to the _EntryList queue and contends for the lock again.
  4. When the current thread finishes, it releases the monitor: the ObjectMonitor's _owner is reset to null and _count is decremented by 1.

Seen this way, the object header of every Java object stores a pointer to a monitor object. This is how synchronized acquires its lock, and it is why any object in Java can be used as a lock. It is also why the wait/notify/notifyAll methods live in the top-level Object class.
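This also explains why wait() must be called while the object's monitor is held; a minimal hedged sketch (class name ours):

public class WaitNeedsMonitor {
    public static void main(String[] args) throws InterruptedException {
        Object anyObject = new Object(); // any Java object can serve as a lock

        synchronized (anyObject) {
            // Legal only because we hold anyObject's monitor; calling wait()
            // without owning the monitor throws IllegalMonitorStateException
            anyObject.wait(100); // waits up to 100 ms
        }
    }
}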


2. Synchronized

Synchronized guarantees visibility, atomicity, and ordering all at once. It is often used to solve concurrency problems, as are many other tools such as volatile; but volatile only guarantees visibility and ordering, not atomicity.
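A minimal sketch of that difference (class and method names are ours): volatile makes writes visible to other threads but does not make compound actions atomic, while synchronized does both.

public class VolatileVsSynchronized {
    private volatile int volatileCount = 0;
    private int syncCount = 0;

    // Not atomic: ++ is a read-modify-write, so two threads can lose updates,
    // even though volatile makes each read see the latest written value
    void unsafeIncrement() {
        volatileCount++;
    }

    // Atomic: the intrinsic lock makes the read-modify-write indivisible
    synchronized void safeIncrement() {
        syncCount++;
    }
}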

Synchronized can be used in the following ways

  1. Modifying an instance method, which locks the current instance object (this)

    public class SynchronizedDemo {
        public synchronized void methodOne() {}
    }
  2. Modifying a static method, which locks the Class object of the current class

    public class SynchronizedDemo {
        public static synchronized void methodTwo() {}
    }
  3. Modifying a code block, which locks the specified object

    public class SynchronizedDemo {

        public void methodThree() {
            // Lock the current instance object this
            synchronized (this) {
            }
        }

        public void methodFour() {
            // Lock the class object
            synchronized (SynchronizedDemo.class) {
            }
        }
    }
2.1 Biased locks

If a lock is never contended, there is no need to actually lock it; simply marking it is enough. That is the idea behind biased locking. After an object is initialized and before any thread has acquired its lock, it is biasable. When the first thread accesses it and tries to acquire the lock, the object records that thread. From then on, if the thread trying to acquire the lock is the owner of the bias, it acquires the lock directly. The overhead is very small, and the performance is the best of the three.

2.2 Lightweight Lock

JVM developers found that in many cases synchronized code is executed by multiple threads alternately rather than simultaneously, meaning there is no actual contention, or only contention brief enough for CAS to resolve. In such cases a fully mutually exclusive heavyweight lock is unnecessary. A lightweight lock arises when a biased lock is accessed by a second thread, indicating contention: the biased lock is upgraded to a lightweight lock, and threads try to acquire it by spinning instead of blocking.

2.3 Heavyweight Locks

Heavyweight locks are mutex locks implemented with the operating system's synchronization mechanisms, so their overhead is relatively high. When multiple threads contend for a long time and the lightweight lock can no longer cope, the lock inflates into a heavyweight lock. A heavyweight lock blocks any thread that requests it and fails to get it.

This gives the lock upgrade path: no lock → biased lock → lightweight lock → heavyweight lock.

In summary, a biased lock performs best because it avoids even CAS operations. A lightweight lock uses spinning and CAS to avoid blocking and waking threads, with moderate performance. A heavyweight lock blocks threads that fail to acquire it and performs worst.

2.4 Why is there a lock upgrade?

Originally, synchronized was a heavyweight (and pessimistic) lock: on each lock request, if the resource was occupied by another thread, the current thread was moved to the blocking queue and its working cache cleared; only when the lock was released would notify() or notifyAll() wake the thread and return it to the ready state.

Switching threads back and forth is very costly for the system, and sometimes the lock is released just after the thread has been suspended. Java threads are mapped onto native operating-system threads, so every block or wakeup requires a transition from user mode to kernel mode or back, which wastes resources and degrades performance.

The JVM therefore optimizes synchronized into three levels of locking: biased, lightweight, and heavyweight.

The lock flag bits in the Java object header's Mark Word are: biased lock (01), lightweight lock (00), heavyweight lock (10).
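If you want to observe these bits yourself, the OpenJDK JOL tool can print an object's header. A sketch, assuming the org.openjdk.jol:jol-core dependency is on the classpath (the class name is ours):

import org.openjdk.jol.info.ClassLayout;

public class MarkWordDemo {
    public static void main(String[] args) {
        Object o = new Object();
        // Prints the object layout, including the Mark Word and its lock flag bits
        System.out.println(ClassLayout.parseInstance(o).toPrintable());

        synchronized (o) {
            // Inside the synchronized block the lock flag bits change
            System.out.println(ClassLayout.parseInstance(o).toPrintable());
        }
    }
}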

2.5 Synchronized Upgrade mechanism

No lock → biased lock:

Before a synchronized method is first called, its lock object is in the initial lock-free state, with lock flag bits 01. When thread A calls the method, it uses CAS to write its thread ID into the Mark Word. Biased locking is enabled by default, but its start is delayed, usually a few seconds after application startup; if you do not want this delay, you can use -XX:BiasedLockingStartupDelay=0.

If you do not want biased locking at all, you can disable it with -XX:-UseBiasedLocking.

Biased lock → biased lock:

Because thread 1 does not actively reset the object header after acquiring a biased lock, the header of a previously locked object remains biased even if thread 1 has died. If thread 2 wants to enter the synchronized method, it checks whether thread 1 is still alive; if thread 1 is dead, it restores the object header to the lock-free state and then repeats the no-lock → biased-lock process.

Biased lock → lightweight lock:

Unlike the case in 2.2, if thread 2 needs to enter the synchronized method while thread 1 still holds the lock, the biased lock → lightweight lock upgrade begins: thread 2's CAS replacement fails, the object header is changed to a lightweight lock, spinning is enabled, and the replacement attempt is repeated.

Thread 1 first copies the lock object's Mark Word into a lock-record space in thread 1's stack frame (called the Displaced Mark Word), then uses CAS to replace the contents of the object header with the address of thread 1's lock record.

If, while thread 1 is copying the object header (before thread 1's CAS), thread 2 is also preparing to acquire the lock and copies the object header into its own lock-record space, then when thread 2 attempts its CAS it finds that thread 1 has already changed the object header, and its CAS fails. Thread 2 then uses a spin lock to wait for thread 1 to release the lock. A spin lock simply means that thread 2 keeps retrying the CAS in a loop.

But spinning for too long is also unacceptable, because spinning consumes CPU. The number of spins is therefore limited, say 10 or 100 iterations. If thread 1 still has not released the lock after the spin limit, or thread 1 is still executing while thread 2 is still spinning and yet another thread 3 comes to contend for the lock object, the lightweight lock inflates into a heavyweight lock. A heavyweight lock blocks all threads except the one holding the lock, preventing the CPU from spinning idly.

Lightweight lock → heavyweight lock:

The lightweight lock is upgraded to a heavyweight lock after a certain number of failed replacement attempts (default: 10).

Note that if thread 2 is spinning and a thread 3 also comes to access the synchronized method, the lightweight lock immediately inflates into a heavyweight lock.

Java 6 introduced adaptive spinning: the number of spins is no longer fixed, but is based on the time of the previous spin on the same lock and the state of the lock owner. If a spin wait recently succeeded in acquiring the same lock, and the thread holding the lock is running, the virtual machine assumes this spin is also likely to succeed and allows it to last relatively longer.

2.6 Advantages and disadvantages of the three locks

  • Biased lock: acquiring the lock costs almost nothing, not even a CAS, but revoking the bias is expensive once contention appears; it suits code that only one thread ever enters.
  • Lightweight lock: uses spin and CAS to avoid blocking and waking threads, but spinning consumes CPU; it suits alternating access or brief contention.
  • Heavyweight lock: blocking avoids wasting CPU on spinning, but suspending and waking threads is expensive; it suits long, heavy contention.

2.7 Lock optimization

JDK 6 introduced a number of optimizations, such as spin locks, adaptive spinning, lock elimination, lock coarsening, biased locks, and lightweight locks. Locks have four states, in order: no lock, biased lock, lightweight lock, heavyweight lock; they are gradually upgraded as contention intensifies. Note, however, that a lock cannot be downgraded.
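Lock elimination, for instance, lets the JIT remove synchronization on an object that escape analysis proves never leaves one thread; a minimal hedged sketch (class and method names ours):

public class LockEliminationDemo {
    // sb never escapes this method, so escape analysis lets the JIT
    // eliminate the locking inside StringBuffer's synchronized methods
    public String concat(String s1, String s2) {
        StringBuffer sb = new StringBuffer();
        sb.append(s1);
        sb.append(s2);
        return sb.toString();
    }
}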

2.7.1 Lock coarsening

As a rule, when using a lock we keep its scope as small as possible, so that as little code as possible executes under the lock, the lock is held briefly, and waiting threads can acquire it sooner. This is the right thing to do in most cases. But repeated back-to-back locking and unlocking, as in the for loop below, causes unnecessary performance loss.

// Before coarsening:
for (...) {
    synchronized (obj) {
        // some operations
    }
}

// After coarsening:
synchronized (obj) {
    for (...) {
        // some operations
    }
}

3. Synchronized and Lock

3.1 Similarities
  • Both synchronized and Lock are used to protect resources and ensure thread safety.

  • Both guarantee visibility.

    For synchronized, whatever thread A does before or within a synchronized block is visible to a subsequent thread B that acquires the same monitor lock: thread B can see all of thread A's previous actions. This reflects the happens-before rule for synchronized.

For Lock, the guarantee is the same as for synchronized: all operations before an unlock are visible to all operations after a subsequent lock.

  • Synchronized and ReentrantLock both have reentrant features.

    ReentrantLock is the main implementation class of the Lock interface, so when comparing synchronized with Lock it is the class usually chosen for comparison. Reentrancy means that if a thread that already holds a lock requests it again, it does not have to release the lock first; it can simply continue using the lock it holds. If the lock had to be released before it could be requested again, it would be non-reentrant. Both synchronized and ReentrantLock are reentrant, as the sketch below shows.
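A minimal sketch of reentrancy with ReentrantLock (class and method names are ours); the same nested pattern works with synchronized on one object:

import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public void outer() {
        lock.lock();
        try {
            inner(); // re-acquiring a lock we already hold does not block
        } finally {
            lock.unlock();
        }
    }

    private void inner() {
        lock.lock();
        try {
            System.out.println("hold count = " + lock.getHoldCount()); // prints 2
        } finally {
            lock.unlock();
        }
    }
}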

3.2 Differences
  • Usage differences

    The synchronized keyword can be added to a method without specifying a lock object (the lock object is this), or used as a synchronized block with a custom monitor lock object. The Lock interface, by contrast, requires an explicit lock object: locking starts with lock() and ends with unlock(), and unlock() is generally placed in a finally block to guarantee release and prevent deadlocks.

    Unlike Lock's explicit locking and unlocking, synchronized unlocks implicitly; in particular, it guarantees release of the lock even when an exception is thrown, with no visible code for this in the Java source.

  • Locking and unlocking order differs

    With Lock, multiple locks do not have to be released in the reverse order of acquisition. For example, we can acquire lock1 first and then lock2, yet release lock1 first and then lock2:

    lock1.lock();
    lock2.lock();
    ...
    lock1.unlock();
    lock2.unlock();

    Synchronized cannot do this: with synchronized, unlocking must happen in the reverse order of locking. For example:

    synchronized (obj1) {
        synchronized (obj2) {
            ...
        }
    }

    In this case the sequence is: lock obj1, lock obj2, unlock obj2, unlock obj1. This is because synchronized is implemented by the JVM, which unlocks automatically as execution leaves each block, so unlocking follows the nesting order and cannot be controlled manually.

  • Synchronized locks are not flexible enough

    Once a synchronized lock has been acquired by one thread, other threads trying to acquire it can only block until the holder finishes running or releases the lock via an exception. If the holder keeps the lock for a long time, the whole program becomes less efficient; and if the holder never releases it, waiting threads wait forever.

    The Lock interface, by contrast, offers lockInterruptibly(), which lets a thread that has waited too long interrupt and give up, and tryLock(), which attempts to obtain the lock and lets the thread do something else if the lock is unavailable (see the sketch after this list).

  • Synchronized locks can only be owned by one thread at a time, but Lock locks do not have this limitation

    Read locks, for example, can be held by multiple threads at the same time, whereas synchronized cannot.

  • The implementation principles differ

    Synchronized is a built-in lock: the JVM implements lock acquisition and release, with the biased, lightweight, and heavyweight variants described above. Lock implementations work differently; ReentrantLock, for example, acquires and releases the lock internally via AQS.

  • Whether fair/unfair can be set

    Fair lock means that multiple threads waiting for the same lock obtain the lock in sequence on a first-come-first-served basis. Lock implementation classes such as ReentrantLock can set fair or unfair as needed, whereas synchronized cannot.

  • Performance differs

    In Java 5 and earlier, synchronized performance was relatively poor. In Java 6, however, the JDK implemented many optimizations for synchronized, such as adaptive spinning, lock elimination, lock coarsening, lightweight locks, and biased locks, so synchronized in later versions of Java performs no worse than Lock.
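To make the flexibility difference above concrete, here is a minimal hedged sketch (class and method names are ours) using tryLock() with a timeout; lockInterruptibly() is used the same way when the wait itself should be interruptible:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public void doWork() throws InterruptedException {
        // Wait at most 1 second for the lock instead of blocking forever
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                System.out.println("got the lock, working");
            } finally {
                lock.unlock();
            }
        } else {
            System.out.println("could not get the lock, doing something else");
        }
    }
}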

3.3 How to Choose
  • Use neither Lock nor synchronized if you can avoid it. In many cases the mechanisms in the java.util.concurrent package handle all locking and unlocking for you, so prefer those utility classes.
  • If the synchronized keyword is appropriate for your program, use it: it reduces the amount of code you write and the chance of error, since a forgotten unlock() in a finally block can go badly wrong, while synchronized is safer.
  • Use Lock only if you need its special features, such as trying to acquire the lock, interruptibility, or timeouts.
3.4 The use of Condition

Synchronized combined with wait() and notify()/notifyAll() implements the wait/notify model. ReentrantLock can implement it too, using Condition, and Condition is more flexible in the following ways:

  1. Multiple Condition instances can be created within one Lock, enabling multiple independent notifications.
  2. With notify(), the notified thread is chosen at random by the Java virtual machine, but ReentrantLock together with Condition allows selective notification.
package com.timwang.concurrent.core.lock;

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

/**
 * @author wangjun
 * @date 2020-07-02
 */
public class LockWaitNotify {
    private static ReentrantLock reentrantLock = new ReentrantLock();
    private static Condition conditionA = reentrantLock.newCondition();
    private static Condition conditionB = reentrantLock.newCondition();

    public static void main(String[] args) throws Exception {
        Thread waitThreadA = new Thread(new WaitA(), "threadA");
        waitThreadA.start();
        Thread waitThreadB = new Thread(new WaitB(), "threadB");
        waitThreadB.start();
        TimeUnit.SECONDS.sleep(2);
        reentrantLock.lock();
        try {
            // Only conditionA is signalled; threadB keeps waiting on conditionB
            conditionA.signal();
        } finally {
            reentrantLock.unlock();
        }
    }

    static class WaitA implements Runnable {
        @Override
        public void run() {
            reentrantLock.lock();
            try {
                System.out.println(Thread.currentThread() + " begin await = " + System.currentTimeMillis());
                conditionA.await();
                System.out.println(Thread.currentThread() + " end await = " + System.currentTimeMillis());
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                reentrantLock.unlock();
            }
        }
    }

    static class WaitB implements Runnable {
        @Override
        public void run() {
            reentrantLock.lock();
            try {
                System.out.println(Thread.currentThread() + ", begin await = " + System.currentTimeMillis());
                conditionB.await();
                System.out.println(Thread.currentThread() + ", end await = " + System.currentTimeMillis());
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                reentrantLock.unlock();
            }
        }
    }
}

waitThreadB remains waiting because conditionB is never signalled.

4. Fair and unfair locks

4.1 What are fair and unfair locks

First, let's look at what fair and unfair locks are:

  • A fair lock allocates the lock in the order in which threads requested it;
  • An unfair lock does not strictly follow the request order: it allows queue-jumping under certain circumstances. Note that unfair does not mean completely random; threads cannot jump the queue at will, only "when appropriate."

So when is it appropriate? If, at the moment the current thread requests the lock, the previous holder has just released it, the current thread may choose to jump the queue immediately, regardless of the threads already waiting. If the lock has not yet been released when the current thread requests it, the thread still joins the queue.

To better understand fair and unfair locks, take an example from daily life. Suppose we are back at school, queuing in the canteen, and I am second in line with one student ahead of me. When that student finishes and it is my turn, I am distracted and do not notice. The student ahead suddenly comes back, jumps the queue, and says, "Sorry, auntie, one more chicken leg please." That kind of behavior maps onto our fair and unfair locks.

Reading this, you might wonder why an unfair policy exists at all, given that unfair is ReentrantLock's default: if we set nothing, the lock is unfair. Is all my time waiting in line wasted? After all, fairness sounds like good behavior and unfairness like bad behavior.

Consider a case. Thread A holds a lock and thread B requests it; because A already holds the lock, B waits and is suspended into the blocked state. When A releases the lock, it should be B's turn, but if a thread C suddenly jumps in and requests the lock, the unfair policy may give the lock to C. Waking B is expensive, and C may well acquire the lock, finish its task, and release the lock before B is even fully awake. This is a win-win: C avoids waiting entirely, improving its efficiency, and B is not actually delayed in acquiring the lock, because by the time it wakes up C has already released it, since C's execution is faster than B's wakeup. This is why Java's designers made locks unfair by default: to improve overall throughput.

4.2 Fair scenario

Let's illustrate the fair and unfair scenarios, starting with the fair case. Suppose we create a fair lock and four threads request it in sequence. After thread 1 takes the lock, threads 2, 3, and 4 wait in the wait queue. After thread 1 releases the lock, threads 2, 3, and 4 acquire it in turn; thread 2 goes first because it has waited the longest.


4.3 Unfair Scenario

Now the unfair case. Suppose that just as thread 1 releases the lock, a thread 5 attempts to acquire it. Under the unfair policy, thread 5 can get the lock even though it never entered the wait queue, and even though threads 2, 3, and 4 have waited longer; for overall efficiency, the lock still goes to thread 5.

4.4 Code example
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Description: demonstrates fair and unfair locks. With an unfair lock, the
 * thread that currently holds the lock gets priority to acquire it again.
 * Code adapted from the Java 7 Concurrency Cookbook, recipe 2.7.
 */
public class FairAndUnfair {
    public static void main(String[] args) {
        PrintQueue printQueue = new PrintQueue();

        Thread[] thread = new Thread[10];
        for (int i = 0; i < 10; i++) {
            thread[i] = new Thread(new Job(printQueue), "Thread " + i);
        }

        for (int i = 0; i < 10; i++) {
            thread[i].start();
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

class Job implements Runnable {
    private PrintQueue printQueue;

    public Job(PrintQueue printQueue) {
        this.printQueue = printQueue;
    }

    @Override
    public void run() {
        System.out.printf("%s: Going to print a job\n", Thread.currentThread().getName());
        printQueue.printJob(new Object());
        System.out.printf("%s: The document has been printed\n", Thread.currentThread().getName());
    }
}

class PrintQueue {
    private final Lock queueLock = new ReentrantLock(false); // false = unfair, true = fair

    public void printJob(Object document) {
        // Each job acquires the lock twice, so other threads can barge in
        // between the two critical sections when the lock is unfair
        queueLock.lock();
        try {
            Long duration = (long) (Math.random() * 10000);
            System.out.printf("%s: PrintQueue: Printing a Job during %d seconds\n",
                    Thread.currentThread().getName(), (duration / 1000));
            Thread.sleep(duration);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            queueLock.unlock();
        }

        queueLock.lock();
        try {
            Long duration = (long) (Math.random() * 10000);
            System.out.printf("%s: PrintQueue: Printing a Job during %d seconds\n",
                    Thread.currentThread().getName(), (duration / 1000));
            Thread.sleep(duration);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            queueLock.unlock();
        }
    }
}

We choose a fair or unfair lock via the boolean argument in new ReentrantLock(...). With a fair lock (true), the code above outputs:

Thread 0: Going to print a job
Thread 0: PrintQueue: Printing a Job during 5 seconds
Thread 1: Going to print a job
Thread 2: Going to print a job
Thread 3: Going to print a job
Thread 4: Going to print a job
Thread 5: Going to print a job
Thread 6: Going to print a job
Thread 7: Going to print a job
Thread 8: Going to print a job
Thread 9: Going to print a job
Thread 1: PrintQueue: Printing a Job during 3 seconds
Thread 2: PrintQueue: Printing a Job during 4 seconds
Thread 3: PrintQueue: Printing a Job during 3 seconds
Thread 4: PrintQueue: Printing a Job during 9 seconds
Thread 5: PrintQueue: Printing a Job during 5 seconds
Thread 6: PrintQueue: Printing a Job during 7 seconds
Thread 7: PrintQueue: Printing a Job during 3 seconds
Thread 8: PrintQueue: Printing a Job during 9 seconds
Thread 9: PrintQueue: Printing a Job during 5 seconds
Thread 0: PrintQueue: Printing a Job during 8 seconds
Thread 0: The document has been printed
Thread 1: PrintQueue: Printing a Job during 1 seconds
Thread 1: The document has been printed
Thread 2: PrintQueue: Printing a Job during 8 seconds
Thread 2: The document has been printed
Thread 3: PrintQueue: Printing a Job during 2 seconds
Thread 3: The document has been printed
Thread 4: PrintQueue: Printing a Job during 0 seconds
Thread 4: The document has been printed
Thread 5: PrintQueue: Printing a Job during 7 seconds
Thread 5: The document has been printed
Thread 6: PrintQueue: Printing a Job during 3 seconds
Thread 6: The document has been printed
Thread 7: PrintQueue: Printing a Job during 9 seconds
Thread 7: The document has been printed
Thread 8: PrintQueue: Printing a Job during 5 seconds
Thread 8: The document has been printed
Thread 9: PrintQueue: Printing a Job during 9 seconds
Thread 9: The document has been printed


As you can see, the order in which the threads acquire the lock is completely fair: first come, first served.

The output of the above code in the unfair case looks like this:

Thread 0: Going to print a job
Thread 0: PrintQueue: Printing a Job during 6 seconds
Thread 1: Going to print a job
Thread 2: Going to print a job
Thread 3: Going to print a job
Thread 4: Going to print a job
Thread 5: Going to print a job
Thread 6: Going to print a job
Thread 7: Going to print a job
Thread 8: Going to print a job
Thread 9: Going to print a job
Thread 0: PrintQueue: Printing a Job during 8 seconds
Thread 0: The document has been printed
Thread 1: PrintQueue: Printing a Job during 9 seconds
Thread 1: PrintQueue: Printing a Job during 8 seconds
Thread 1: The document has been printed
Thread 2: PrintQueue: Printing a Job during 6 seconds
Thread 2: PrintQueue: Printing a Job during 4 seconds
Thread 2: The document has been printed
Thread 3: PrintQueue: Printing a Job during 9 seconds
Thread 3: PrintQueue: Printing a Job during 8 seconds
Thread 3: The document has been printed
Thread 4: PrintQueue: Printing a Job during 4 seconds
Thread 4: PrintQueue: Printing a Job during 2 seconds
Thread 4: The document has been printed
Thread 5: PrintQueue: Printing a Job during 2 seconds
Thread 5: PrintQueue: Printing a Job during 5 seconds
Thread 5: The document has been printed
Thread 6: PrintQueue: Printing a Job during 2 seconds
Thread 6: PrintQueue: Printing a Job during 6 seconds
Thread 6: The document has been printed
Thread 7: PrintQueue: Printing a Job during 6 seconds
Thread 7: PrintQueue: Printing a Job during 4 seconds
Thread 7: The document has been printed
Thread 8: PrintQueue: Printing a Job during 3 seconds
Thread 8: PrintQueue: Printing a Job during 6 seconds
Thread 8: The document has been printed
Thread 9: PrintQueue: Printing a Job during 3 seconds
Thread 9: PrintQueue: Printing a Job during 5 seconds
Thread 9: The document has been printed


As you can see, under the unfair policy lock-grabbing "queue-jumping" occurs: Thread 0, for example, reacquires the lock right after releasing it, even though threads 1 through 9 are already waiting in the queue.

4.5 Advantages and disadvantages of fair and unfair locks

  • The advantage of a fair lock is that all threads are treated equally, and every thread gets a chance to run after waiting its turn. Its disadvantage is slower overall execution and lower throughput.
  • An unfair lock, conversely, executes faster overall with higher throughput, but it can cause thread starvation: if other threads keep jumping the queue, a thread in the wait queue may not run for a long time.
4.6 Source Code Analysis

Now let's analyze the source code of fair and unfair locks to see how each is implemented. ReentrantLock contains a Sync class, which extends AQS (AbstractQueuedSynchronizer):

public class ReentrantLock implements Lock, java.io.Serializable {

    private static final long serialVersionUID = 7373984872572414699L;

    /** Synchronizer providing all implementation mechanics */
    private final Sync sync;

Sync class code:

abstract static class Sync extends AbstractQueuedSynchronizer {
    ...
}

Sync has two subclasses: FairSync and NonfairSync.

static final class NonfairSync extends Sync {
    ...
}

static final class FairSync extends Sync {
    ...
}

Now let's look at the lock-acquisition source of the fair and the unfair lock.

The fair lock's acquisition source code is as follows:

protected final boolean tryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        if (!hasQueuedPredecessors() && // fair: check whether any thread is queued ahead
                compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    } else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0) {
            throw new Error("Maximum lock count exceeded");
        }
        setState(nextc);
        return true;
    }
    return false;
}

The unfair lock's acquisition source code is as follows:

final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        if (compareAndSetState(0, acquires)) { // unfair: no hasQueuedPredecessors() check
            setExclusiveOwnerThread(current);
            return true;
        }
    } else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0) // overflow
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}

The fair version adds a hasQueuedPredecessors() check. What does it test? It returns true when some other thread is queued ahead of the current thread in the synchronization queue, and false when the current thread is at the head of the queue or the queue is empty. So a fair lock only attempts the CAS when no one is ahead of it.
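For reference, the AQS implementation of hasQueuedPredecessors() in JDK 8 looks like this; it returns true exactly when the queue is non-empty and the first waiting thread is not the current thread:

public final boolean hasQueuedPredecessors() {
    Node t = tail; // Read fields in reverse initialization order
    Node h = head;
    Node s;
    return h != t &&
        ((s = h.next) == null || s.thread != Thread.currentThread());
}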

4.7 Walking through the source code
4.7.1 Unfair Lock

Thread A arrives to acquire the lock. It checks state, finds it is 0, so its CAS succeeds and it sets the lock's owning thread to itself.

Thread B also checks state, finds it is 1, so its CAS fails and it enters the wait queue.

After holding the lock for a while, A is ready to release it and give other threads a chance, so it resets state, clears the owning-thread field, and wakes up B.

At this moment thread C arrives, sees that state is 0, successfully CASes it to 1, and takes the lock, jumping the queue ahead of B.

Thread B, woken by A, checks state, finds it is 1, its CAS fails, and it can only return to the wait queue in frustration.

That is how a thread takes an unfair lock. Under this policy a thread like B may fail to obtain the resource for a long time; the advantage is that a lucky thread avoids the wait entirely, which improves utilization.

The default is unfair. To get a fair lock, pass true to the constructor.

4.7.2 Fair lock

Thread A wants to acquire the lock. It checks state and finds it is 0; it then checks the queue, finds no one waiting, and sets the thread holding the lock to itself.

Thread B comes along and checks state: state = 1, so its CAS fails and it enters the wait queue.

Then thread C arrives. It checks state, finds it is 0, and thinks it has a chance. But it then checks the queue, finds someone already waiting ahead of it, and, as a good citizen, joins the queue.

When thread A releases the lock, it wakes thread B. B checks state, finds it is 0, sees that it is first in the queue, and takes the lock.

5. ReadWriteLock

Before read/write locks, suppose we used a plain ReentrantLock. That is thread-safe, but it wastes resources: if multiple reads run at the same time there is no thread-safety problem, so we could allow multiple reads to run in parallel and improve program efficiency.

Writes, however, are not safe to share: thread-safety problems arise if multiple threads write at the same time, or if a read and a write happen at the same time.

Our read-write lock solves this problem by setting up a set of rules that ensures efficient simultaneous reads by multiple threads while ensuring thread-safe writes.

The overall idea is that there are two locks. The first is the write lock: holding it, a thread may both read and modify the data. The second is the read lock: holding it, a thread may only read the data, not modify it. The read lock can be held by multiple threads, so multiple threads can read the data simultaneously.

Using the read lock where you read and the write lock where you write, as appropriate, improves the program's execution efficiency.

5.1 Rules for acquiring read and write locks

We follow these acquisition rules when using read/write locks:

  1. If a thread has occupied the read lock, other threads can still successfully acquire the read lock.
  2. If a thread has occupied the read lock, a thread that applies for the write lock must wait for the read lock to be released, because reads and writes cannot proceed simultaneously.
  3. If a thread has occupied the write lock, other threads must wait for it to be released, again because reads and writes cannot proceed simultaneously, and two threads must not write at the same time.

We can sum it up in one sentence: either one or more threads hold the read lock, or one thread holds the write lock, but never both at once. It can also be summarized as: reads share; everything else is mutually exclusive (write-write, read-write, and write-read are all mutually exclusive).

5.2 Usage example

The main implementation class of ReadWriteLock is ReentrantReadWriteLock. Let's demonstrate its usage with an example.

import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Description: demonstrates the use of read/write locks
 */
public class ReadWriteLockDemo {

    private static final ReentrantReadWriteLock reentrantReadWriteLock =
            new ReentrantReadWriteLock(false);
    private static final ReentrantReadWriteLock.ReadLock readLock =
            reentrantReadWriteLock.readLock();
    private static final ReentrantReadWriteLock.WriteLock writeLock =
            reentrantReadWriteLock.writeLock();

    private static void read() {
        readLock.lock();
        try {
            System.out.println(Thread.currentThread().getName() + " got the read lock, reading");
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            readLock.unlock();
            System.out.println(Thread.currentThread().getName() + " released the read lock");
        }
    }

    private static void write() {
        writeLock.lock();
        try {
            System.out.println(Thread.currentThread().getName() + " got the write lock, writing");
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            writeLock.unlock();
            System.out.println(Thread.currentThread().getName() + " released the write lock");
        }
    }

    public static void main(String[] args) throws InterruptedException {
        new Thread(() -> read()).start();
        new Thread(() -> read()).start();
        new Thread(() -> write()).start();
        new Thread(() -> write()).start();
    }
}

The result of the program is:

Thread-0 got the read lock, reading
Thread-1 got the read lock, reading
Thread-0 released the read lock
Thread-1 released the read lock
Thread-2 got the write lock, writing
Thread-2 released the write lock
Thread-3 got the write lock, writing
Thread-3 released the write lock

As you can see, read locks can be acquired by multiple threads at the same time, while write locks cannot.

5.3 The read lock's queue-jumping strategy

With ReentrantLock, if the lock is set to unfair, a thread can jump the queue at the moment the previous holder releases the lock, without queuing first. Is the policy the same for read/write locks?

First, we can see that ReentrantReadWriteLock can be set to fair or not, as follows:

Fair lock:

ReentrantReadWriteLock reentrantReadWriteLock = new ReentrantReadWriteLock(true);

Unfair lock:

ReentrantReadWriteLock reentrantReadWriteLock = new ReentrantReadWriteLock(false);

For a fair lock we pass true to the constructor, and for an unfair lock we pass false; the default is unfair. Before acquiring the read lock a thread checks readerShouldBlock(), and before acquiring the write lock it checks writerShouldBlock(), to decide whether it must queue or may jump the queue.

Let’s first look at the implementation of fair locking for these two methods:

final boolean writerShouldBlock() {
    return hasQueuedPredecessors();
}
final boolean readerShouldBlock() {
    return hasQueuedPredecessors();
}

Clearly, under the fair policy, whenever a thread is waiting in the queue, that is, whenever hasQueuedPredecessors() returns true, both writerShouldBlock() and readerShouldBlock() return true: no one is allowed to jump the queue. This is consistent with the idea of a fair lock.

Let’s look at the implementation of an unfair lock:

final boolean writerShouldBlock() {
    return false; // writers can always barge
}
final boolean readerShouldBlock() {
    return apparentlyFirstQueuedIsExclusive();
}

The writerShouldBlock() method always returns false, so, just as with an unfair ReentrantLock, a writer can try to jump the queue at any time. The read lock is different, and the strategy implemented here is interesting. Consider the following scenario:

Suppose threads 2 and 4 are currently reading at the same time, and thread 3 wants to write; since threads 2 and 4 already hold the read lock, thread 3 enters the wait queue. At this point a thread 5 suddenly arrives wanting the read lock:

5.3.1 The first strategy: allow queue-jumping

Since threads are already reading, letting thread 5 join them adds little burden, because threads can share the read lock. So the first strategy is to let thread 5 join threads 2 and 4 directly and read.

This strategy appears to improve efficiency, but it has a serious problem: if reading threads keep arriving, say a thread 6 that also jumps the queue, the read lock may not be fully released for a long time, so thread 3 cannot get the write lock. The thread that needs the write lock falls into "starvation": it does not get to run for a very long time.

5.3.2 The second strategy: no queue-jumping

Under this strategy, since thread 3 is already waiting, thread 5 must queue even though letting it jump would gain some efficiency:

Thread 5 enters the wait queue behind thread 3, and thread 3 executes before thread 5. This avoids starvation and is good for the robustness of the program: once thread 3 finishes, thread 5 gets its chance, and no one waits too long.

So we can see that even when the lock is unfair, as long as the head of the wait queue is a thread trying to acquire the write lock, the read lock still cannot jump the queue; this avoids starvation.
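This is precisely what apparentlyFirstQueuedIsExclusive(), the method readerShouldBlock() delegates to, checks. For reference, the AQS helper in JDK 8 looks like this; it returns true when the first queued node is waiting in exclusive (writer) mode:

final boolean apparentlyFirstQueuedIsExclusive() {
    Node h, s;
    return (h = head) != null &&
        (s = h.next)  != null &&
        !s.isShared()         &&
        s.thread != null;
}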

5.4 Demonstration of Policy Selection

Which strategy is chosen depends on the lock implementation; ReentrantReadWriteLock chose strategy 2, no queue-jumping, as the following demo shows.

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadLockJumpQueue {

    private static final ReentrantReadWriteLock reentrantReadWriteLock =
            new ReentrantReadWriteLock();
    private static final ReentrantReadWriteLock.ReadLock readLock =
            reentrantReadWriteLock.readLock();
    private static final ReentrantReadWriteLock.WriteLock writeLock =
            reentrantReadWriteLock.writeLock();

    private static void read() {
        readLock.lock();
        try {
            System.out.println(Thread.currentThread().getName() + " got the read lock, reading");
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            readLock.unlock();
            System.out.println(Thread.currentThread().getName() + " released the read lock");
        }
    }

    private static void write() {
        writeLock.lock();
        try {
            System.out.println(Thread.currentThread().getName() + " got the write lock, writing");
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            writeLock.unlock();
            System.out.println(Thread.currentThread().getName() + " released the write lock");
        }
    }

    public static void main(String[] args) throws InterruptedException {
        new Thread(() -> read(), "Thread-2").start();
        new Thread(() -> read(), "Thread-4").start();
        new Thread(() -> write(), "Thread-3").start();
        new Thread(() -> read(), "Thread-5").start();
    }
}

The result of the above code is:

Thread-2 got the read lock, reading
Thread-4 got the read lock, reading
Thread-2 released the read lock
Thread-4 released the read lock
Thread-3 got the write lock, writing
Thread-3 released the write lock
Thread-5 got the read lock, reading
Thread-5 released the read lock

From this result we can see that ReentrantReadWriteLock implements the "no queue-jumping" strategy, which greatly reduces the probability of starvation. (If your output differs, add a 100 ms sleep after starting each thread so the threads run in order.)

5.5 Upgrade and downgrade of locks

Let's look at the upgrading and downgrading of locks, starting with this code, which demonstrates how to take advantage of lock downgrading when updating a cache.

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CachedData {

    Object data;
    volatile boolean cacheValid;
    final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

    void processCachedData() {
        rwl.readLock().lock();
        if (!cacheValid) {
            // Before a write lock can be acquired, the read lock must be released.
            rwl.readLock().unlock();
            rwl.writeLock().lock();
            try {
                // Check validity again: another thread may have modified the data
                // between our releasing the read lock and acquiring the write lock.
                if (!cacheValid) {
                    data = new Object();
                    cacheValid = true;
                }
                // Acquire the read lock before releasing the write lock:
                // this is the downgrading of the write lock.
                rwl.readLock().lock();
            } finally {
                // Release the write lock while still holding the read lock
                rwl.writeLock().unlock();
            }
        }
        try {
            System.out.println(data);
        } finally {
            // Release the read lock
            rwl.readLock().unlock();
        }
    }
}

This code has a read/write lock, and the important part is the processCachedData method. The method first acquires the read lock with rwl.readLock().lock(), then checks whether the current cache is valid. If it is, the whole if statement is skipped. If it has expired, we need to update the cache, and the read lock we already hold is not enough: we need the write lock.

Before acquiring the write lock, we first release the read lock, then obtain the write lock with rwl.writeLock().lock(), and enter the classic try/finally pattern. Inside try we check cache validity again, because between releasing the read lock and acquiring the write lock another thread may have modified the data, so a second check is required.

If the cache is indeed invalid, we assign new Object() to represent the new data content and set the cache flag to true. Since we still want to print the value of data afterwards, we cannot release every lock here; instead we acquire the read lock while still holding the write lock (the rwl.readLock().lock() call), then release the write lock while keeping the read lock, and finally print data in the bottom try block.

5.5.1 Why is lock degradation required?

If we had held the write lock the whole time and released it only at the end, the code would still be thread-safe, but it would be unnecessary, because only one line modifies the data:

data = new Object();

After that we only read data. If the write lock were held throughout, multiple threads could not read at the same time; holding an exclusive write lock just to read wastes resources and reduces overall efficiency, so using lock downgrading here is a good way to improve overall performance.

5.5.2 Lock degradation is supported, but upgrade is not

If we run the following code and try to acquire the write lock without releasing the read lock, that is, attempt a lock upgrade, the thread blocks immediately and the program never completes.

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class UpgradeDemo {
    static final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

    public static void main(String[] args) {
        upgrade();
    }

    public static void upgrade() {
        rwl.readLock().lock();
        System.out.println("Got the read lock");
        rwl.writeLock().lock(); // blocks forever: read -> write upgrade is not supported
        System.out.println("Upgrade successful");
    }
}

This code prints "Got the read lock" but never "Upgrade successful", because ReentrantReadWriteLock does not support upgrading a read lock to a write lock.

5.5.3 Why Is Lock Upgrade Not Supported?

We know that a read lock can be held by multiple threads at the same time, while a write lock can be held by only one thread, and read and write locks cannot be held by different threads at the same time.

Because read and write locks cannot coexist, upgrading to the write lock can only happen after all read locks have been released.

Suppose three threads A, B, and C all hold the read lock, and thread A tries to upgrade to the write lock. It must then wait for B and C to release the read locks they have acquired. If, in time, B and C gradually release their read locks, thread A can indeed upgrade successfully and acquire the write lock.

But consider a special case: suppose threads A and B both want to upgrade to the write lock. Thread A must wait for all other threads, including B, to release their read locks; thread B likewise waits for all threads, including A, to release theirs. This is a classic deadlock: neither is willing to be the first to release the lock in its own hands.

That said, upgrading a read/write lock is not impossible in principle: it can be implemented if only one thread at a time is allowed to upgrade. The most common implementation, ReentrantReadWriteLock, simply does not support it.
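For the record, upgrading is available elsewhere in the JDK: java.util.concurrent.locks.StampedLock (JDK 8+) provides tryConvertToWriteLock(), which attempts exactly this read-to-write conversion and, instead of deadlocking, simply fails by returning a zero stamp when the conversion is impossible. A minimal sketch (the class and the value field are my own illustration):

import java.util.concurrent.locks.StampedLock;

public class StampedUpgrade {
    private final StampedLock sl = new StampedLock();
    private int value;

    public void increment() {
        long stamp = sl.readLock();
        try {
            int current = value;
            // Try to convert the read stamp into a write stamp without releasing it.
            long writeStamp = sl.tryConvertToWriteLock(stamp);
            if (writeStamp != 0L) {
                stamp = writeStamp;     // conversion succeeded: we now hold the write lock
                value = current + 1;
            } else {
                sl.unlockRead(stamp);   // conversion failed: release the read lock...
                stamp = sl.writeLock(); // ...and acquire the write lock the ordinary way
                value = value + 1;
            }
        } finally {
            sl.unlock(stamp);           // unlock() accepts either a read or a write stamp
        }
    }
}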

5.6 Read/Write Lock Summary
  • Queue-jumping (barging) strategy (the policy is fixed when the lock is constructed; see the snippet after this list)
    • Under the fair policy, queue jumping is never allowed as long as there are threads already waiting in the queue.
    • Under the unfair policy:
      • The read lock is not allowed to jump the queue when the thread at the head of the wait queue is trying to acquire the write lock. Because a read lock can be held by many threads at once, freely barging readers could keep succeeding in a steady stream, the read lock would never be fully released, and the write lock would wait forever; disallowing reader barging in this case prevents writer “starvation”.
      • The write lock may jump the queue at any time, because a write-lock barge is hard to pull off anyway: it succeeds only when no other thread currently holds either the read lock or the write lock, and a failed barge simply joins the wait queue. Writer barging therefore rarely causes starvation, and allowing it improves efficiency.
  • Upgrade/downgrade policy: a write lock can be downgraded to a read lock, but a read lock cannot be upgraded to a write lock.
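For reference, whether a ReentrantReadWriteLock is fair or unfair is fixed when it is constructed, via a boolean constructor argument; a minimal illustration (the class and variable names are mine):

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class FairnessDemo {
    // Nonfair (the default): barging is permitted under the rules above.
    static final ReentrantReadWriteLock nonfairLock = new ReentrantReadWriteLock();

    // Fair: requests are granted strictly in queue order, so no barging at all.
    static final ReentrantReadWriteLock fairLock = new ReentrantReadWriteLock(true);
}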

Reentrant and non-reentrant locks

6.1 Non-reentrant lock

A non-reentrant lock means that if the current thread has already acquired the lock in one method, an attempt to acquire the same lock again inside that method will block, in fact deadlock. Let's try to design a non-reentrant lock:

public class Lock {
    private boolean isLocked = false;

    public synchronized void lock() throws InterruptedException {
        while (isLocked) {
            wait();
        }
        isLocked = true;
    }

    public synchronized void unlock() {
        isLocked = false;
        notify();
    }
}
public class Count {
    Lock lock = new Lock();

    public void print() throws InterruptedException {
        lock.lock();
        doAdd();
        lock.unlock();
    }

    public void doAdd() throws InterruptedException {
        lock.lock();
        // do something
        lock.unlock();
    }
}

If the current thread acquires the lock by calling print(), the nested call to doAdd() can never get past lock.lock(): the thread would have to release the lock it already holds first, but it is stuck waiting for that very lock, so it deadlocks against itself. This is exactly the problem with a non-reentrant lock.

6.2 Reentrant lock
public class Lock {
    boolean isLocked = false;
    Thread lockedBy = null;
    int lockedCount = 0;

    public synchronized void lock() throws InterruptedException {
        Thread thread = Thread.currentThread();
        while (isLocked && lockedBy != thread) {
            wait();
        }
        isLocked = true;
        lockedCount++;
        lockedBy = thread;
    }

    public synchronized void unlock() {
        if (Thread.currentThread() == this.lockedBy) {
            lockedCount--;
            if (lockedCount == 0) {
                isLocked = false;
                notify();
            }
        }
    }
}

Reentrancy means that a thread can re-enter a code block guarded by a lock it already holds.

Imagine two threads calling print(). The first thread calls print(), which calls lock(). Since lockedBy is initially null, the while condition is false, so the thread is not suspended; it sets isLocked, increments lockedCount, and records lockedBy as itself. The first thread then enters doAdd(); by the same logic it again skips the while loop and increments lockedCount once more. When the second thread tries to lock, isLocked is true and lockedBy is not itself, so it waits. Only after the first thread has called unlock() twice, decrementing lockedCount to 0, is isLocked set to false and the waiting thread notified.

Interruptible lock/non-interruptible lock

ReentrantLock's lockInterruptibly() method lets a thread respond to interruption while it is blocked waiting for a lock. For example, suppose thread t1 holds a ReentrantLock and is performing a long task, and thread t2 is blocked in lockInterruptibly() waiting for the same lock: calling t2.interrupt() makes t2 abandon the wait immediately with an InterruptedException. By contrast, a thread that blocks waiting for a lock via ReentrantLock's plain lock() method or via synchronized does not respond to interrupt() until it has acquired the lock.
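A minimal sketch of the difference (the thread roles and timings are my own illustration); t2 below escapes from waiting as soon as it is interrupted, which would be impossible had it blocked in plain lock():

import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleLockDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            lock.lock();              // t1 takes the lock and holds it for a long task
            try {
                Thread.sleep(5000);   // simulate the long task
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                lock.unlock();
            }
        });

        Thread t2 = new Thread(() -> {
            try {
                lock.lockInterruptibly();   // blocks, but stays responsive to interrupts
                try {
                    System.out.println("t2 got the lock");
                } finally {
                    lock.unlock();
                }
            } catch (InterruptedException e) {
                System.out.println("t2 was interrupted and gave up waiting");
            }
        });

        t1.start();
        Thread.sleep(100);   // let t1 acquire the lock first
        t2.start();
        Thread.sleep(100);   // let t2 block inside lockInterruptibly()
        t2.interrupt();      // t2 wakes immediately with an InterruptedException
    }
}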

Pessimistic lock/optimistic lock

8.1 Pessimistic lock

A pessimistic lock is, as the name suggests, pessimistic: it assumes that if the resource is not locked, other threads will contend for it and corrupt the data. So, to guarantee a correct result, a pessimistic lock locks the data every time it accesses or modifies it, preventing other threads from touching the data and thereby keeping its contents safe.

This mirrors the pessimist in human character: a pessimist worries before acting, so he guards strictly and makes sure no one else can touch his things. That is where the name "pessimistic lock" comes from.

For example, suppose threads A and B both use pessimistic locking; then each must acquire the lock before operating on the synchronized resource.

Say thread A gets the lock first and is manipulating the synchronized resource; thread B must wait.

When thread A finishes, the CPU wakes up thread B, which is waiting for the lock, to try again.

If thread B now acquires the lock, it can perform its own operations on the synchronized resource. This is how pessimistic locking works.

8.2 Optimistic lock

An optimistic lock is optimistic: it assumes that no other thread will interfere while it is operating on the resource, so it does not lock the object or keep other threads away. To still guarantee correctness, before applying its update it checks whether the data was modified by another thread in the meantime. If not, it really was the only one operating, and it can apply the update normally. If the data differs from what it originally read, another thread modified it during that period; it is one step too late, so it abandons the change and reports an error, retries, or falls back to some other strategy.

This is like the optimists in our lives: an optimist does not worry about things that have not happened yet and sees the future as rosy, so he does not lock the data before changing it. Of course, an optimist does not act blindly either: if things turn out differently from what he expected, he responds accordingly rather than sitting idly by.

Optimistic locking is generally implemented with the CAS (compare-and-swap) algorithm. For example, say thread A uses an optimistic lock: when it operates on a synchronized resource, it does not need to acquire a lock in advance; it simply reads the resource and performs its computation in its own thread.

When it is finished and ready to update the synchronized resource, it determines whether the resource has been modified by another thread.

If the synchronization resource has not been updated by other threads at this time, that is to say, the data at this time is the same as the data received by thread A at the beginning, then thread A will update the synchronization resource to complete the modification process.

If the synchronization resource has been modified and updated by another thread, thread A will find that the data at this time is inconsistent with the original data. Thread A will not continue to modify the data, but will choose to report an error or retry according to different business logic.

Pessimistic and optimistic locking concepts are not unique to Java, but are a broad concept that can be applied to other areas, such as databases, as well.

8.3 Typical Cases
  • Pessimistic lock: the synchronized keyword and the Lock interface

    Pessimistic lock implementations in Java include the synchronized keyword and the Lock-related classes. Take the Lock interface as an example, e.g. its implementation class ReentrantLock: the lock() method acquires the lock and the unlock() method releases it. This is the typical pessimistic pattern: lock the resource before processing it, then unlock it afterwards (see the first sketch after this list).

  • Optimistic locks: Atomic classes

    A typical example of optimistic locking is the atomic classes, such as AtomicInteger, which apply the optimistic idea when updating data: multiple threads can operate on the same atomic variable at the same time without taking any explicit lock (see the second sketch after this list).

  • Both pessimistic and optimistic: databases

    There are both pessimistic and optimistic locking ideas in the database.

    Pessimistic locking: for example, a SELECT ... FOR UPDATE statement in MySQL is pessimistic locking. It prevents anyone else from modifying the data before the transaction commits, which naturally costs some performance and can be undesirable under high concurrency.

    Optimistic locking: by contrast, we can implement optimistic locking in the database with a version field. No lock is taken while reading or computing; when we are ready to write the update, we check whether the version number still matches the one we read when fetching the data. If it matches, we update directly; if not, the data was modified by some other thread while we were computing, and we can fetch the data again, recompute, and retry the update.

    The following is an example SQL statement (assuming version was 1 when the data was fetched):

    UPDATE student SET name = 'xiaoli', version = 2 WHERE id = 100 AND version = 1;
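Here is the first sketch referenced above: the canonical lock/try/finally pattern of pessimistic locking (the class and field names are my own illustration):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class PessimisticCounter {
    private final Lock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();       // lock before touching the shared data
        try {
            count++;       // critical section
        } finally {
            lock.unlock(); // always unlock, even if the body throws
        }
    }
}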
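And the second sketch: an atomic class updating shared data without any explicit lock (the class name is my own illustration):

import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticCounter {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);

        // No explicit lock: internally this is a CAS loop that re-reads and
        // retries until the write succeeds without interference.
        counter.incrementAndGet();

        // The same idea spelled out: write only if the value is still the one
        // we read; otherwise the CAS fails and the caller may retry.
        int expected = counter.get();
        boolean updated = counter.compareAndSet(expected, expected + 1);
        System.out.println("updated = " + updated + ", value = " + counter.get());
    }
}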
8.4 Advantages and Disadvantages

One view holds that because pessimistic locks involve heavyweight operations, prevent threads from running in parallel, and incur context switches and the like, their performance must be worse than optimistic locks and they should be avoided whenever possible. This view is incorrect.

While pessimistic locks do block threads that fail to get the lock, that overhead is fixed. The upfront cost of a pessimistic lock is indeed higher than that of an optimistic lock, but it is paid once and for all: even if the lock is never obtained, there is no additional, growing cost.

Optimistic locks, on the other hand, start out cheaper, but if the lock stays contended, or concurrency is high and competition fierce so that retries never stop, resource consumption keeps growing and can eventually exceed that of a pessimistic lock.

So the same lock can behave completely differently in different scenarios: a good choice in today's scenario may be a bad one in tomorrow's. This is exactly a case of "one man's honey is another man's arsenic."

So let's look at the usage scenarios of the two kinds of locks, use the right lock in the right scenario, and put the right resources in the right place.

8.5 Application Scenarios

Pessimistic locks suit scenarios with many concurrent writes, complex critical sections, and fierce contention; there they avoid the cost of large numbers of useless retries.

Optimistic locks suit scenarios where reads dominate and writes are rare, and also read-write workloads where contention is mild. In these scenarios, the lock-free nature of optimistic locking yields significant performance gains.

Exclusive and shared locks

9.1 Exclusive lock

An exclusive lock can be held by only one thread at a time. Both ReentrantLock and synchronized are exclusive locks.

9.2 Shared lock

A shared lock can be held by multiple threads at the same time. In ReentrantReadWriteLock, the read lock is shared while the write lock is exclusive.

The shared read lock makes concurrent reads very efficient, while read-write, write-read, and write-write access remain mutually exclusive. A minimal sketch of shared reading follows.
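In the sketch below (the class and thread names are my own illustration), both readers enter the critical section at roughly the same time, which a write lock would never allow:

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedReadDemo {
    private static final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

    public static void read() {
        rwl.readLock().lock();
        try {
            System.out.println(Thread.currentThread().getName() + " got the read lock");
            Thread.sleep(1000);   // both readers sleep here concurrently
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            rwl.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        new Thread(SharedReadDemo::read, "reader-1").start();
        new Thread(SharedReadDemo::read, "reader-2").start();
    }
}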

Spin lock/non-spin lock

10.1 What is Spin

First of all, what is spin? "Spin" can be understood as "rotating in place": here it means looping, as in a while loop or a for loop, repeating the attempt until the goal is achieved. This is unlike a normal lock, where the thread blocks if the lock cannot be acquired.

10.1.1 Comparing the process of obtaining a lock with spin and non-spin

Let's compare the process of acquiring a lock with spin against acquiring one without spin.

First, the spin lock does not abandon the CPU time slice, but rather waits for the lock to be released by spinning. That is, it will try again and again to acquire the lock, and if it fails, it will try again until it succeeds.

A non-spin lock behaves quite differently. If a non-spin lock finds the lock unavailable, it switches the thread's state and puts the thread to sleep, so the CPU can spend that time doing plenty of other work, until the lock-holding thread releases the lock. The sleeping thread is then brought back and tries to acquire the lock again: if it fails again it goes back to sleep, and if it succeeds it has acquired the lock on the synchronized resource.

As you can see, the biggest difference is that when the lock is unavailable, a non-spin lock blocks the thread until it is woken up, while a spin lock keeps trying. So what is the advantage of this persistent trying?

10.2 Benefits of spinlocks

First, both blocking and waking up threads are expensive, and if synchronizing the contents of a code block is not complex, the overhead of switching threads may be greater than the actual business code execution.

In many situations our synchronized block does little work and executes very quickly. Switching the thread's state just for that brief moment is not worth it; it is better to keep the thread active and let it spin, attempting to acquire the lock while waiting for the holder to release it. Often it only needs to wait a moment, and the context-switch overhead is avoided, improving efficiency.

To sum up, a spin lock uses a loop to keep attempting to acquire the lock, keeping the thread in the Runnable state and saving the overhead of thread state switching. A minimal sketch of the idea follows.
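Here is that minimal sketch: the simplest possible non-reentrant spin lock, built on an AtomicBoolean (my own illustration; a reentrant version follows in section 10.4):

import java.util.concurrent.atomic.AtomicBoolean;

public class SimpleSpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Spin: keep retrying the CAS until the flag flips from false to true.
        // The thread stays Runnable the whole time; it never blocks.
        while (!locked.compareAndSet(false, true)) {
            // busy-wait
        }
    }

    public void unlock() {
        locked.set(false);
    }
}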

10.3 AtomicLong implementation

In the java.util.concurrent package, available since Java 1.5, the atomic classes are essentially spin-lock implementations.

AtomicLong, for example, has a getAndIncrement method whose source code is as follows:

public final long getAndIncrement() {
    return unsafe.getAndAddLong(this, valueOffset, 1L);
}

You can see that it calls unsafe.getAndAddLong, so let's look at that method:

public final long getAndAddLong(Object var1, long var2, long var4) {
    long var6;
    do {
        var6 = this.getLongVolatile(var1, var2);
    } while (!this.compareAndSwapLong(var1, var2, var6, var6 + var4));
    return var6;
}


This method uses a do-while loop. Here is the key part:

do {
    var6 = this.getLongVolatile(var1, var2);
} while (!this.compareAndSwapLong(var1, var2, var6, var6 + var4));


Here the do-while loop is the spin operation: if the CAS fails because another thread modified the value in the meantime, the loop simply keeps retrying until the update succeeds.
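From the caller's point of view this gives lock-free atomic updates; a quick usage sketch (the class name is my own illustration):

import java.util.concurrent.atomic.AtomicLong;

public class AtomicLongDemo {
    public static void main(String[] args) {
        AtomicLong counter = new AtomicLong(0);
        // Each call spins internally (the do-while above) until its CAS wins,
        // so concurrent increments never lose an update and never block.
        long previous = counter.getAndIncrement();
        System.out.println("previous = " + previous + ", now = " + counter.get());
    }
}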

10.4 Implement a reentrant spin lock yourself

Let's look at a reentrant spin lock implemented by hand.

The code looks like this:

import java.util.concurrent.atomic.AtomicReference;

/**
 * Description: implement a reentrant spin lock
 */
public class ReentrantSpinLock {

    private AtomicReference<Thread> owner = new AtomicReference<>();

    // Reentrant count
    private int count = 0;

    public void lock() {
        Thread t = Thread.currentThread();
        if (t == owner.get()) {
            ++count;
            return;
        }
        // Spin until the lock is acquired
        while (!owner.compareAndSet(null, t)) {
            System.out.println("Spinning.");
        }
    }

    public void unlock() {
        Thread t = Thread.currentThread();
        // Only the thread holding the lock can unlock it
        if (t == owner.get()) {
            if (count > 0) {
                --count;
            } else {
                // No CAS needed here: only the owner can reach this line, so there is no race
                owner.set(null);
            }
        }
    }

    public static void main(String[] args) {
        ReentrantSpinLock spinLock = new ReentrantSpinLock();
        Runnable runnable = new Runnable() {
            @Override
            public void run() {
                System.out.println(Thread.currentThread().getName() + " starts trying to get the spin lock");
                spinLock.lock();
                try {
                    System.out.println(Thread.currentThread().getName() + " got the spin lock");
                    Thread.sleep(4000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                } finally {
                    spinLock.unlock();
                    System.out.println(Thread.currentThread().getName() + " released the spin lock");
                }
            }
        };
        Thread thread1 = new Thread(runnable);
        Thread thread2 = new Thread(runnable);
        thread1.start();
        thread2.start();
    }
}

The result of this code is:

Spinning.
Spinning.
Spinning.
...
Thread-0 released the spin lock
Thread-1 got the spin lock

A long run of "Spinning." lines is printed first, showing that the CPU keeps running during the spin.

10.5 Disadvantages

So are there any downsides to spin locks? Spin-locks have their drawbacks. The biggest disadvantage is that while it avoids the overhead of thread switching, it also introduces new overhead as it constantly has to try to acquire the lock. If the lock can never be released, then this attempt is a futile attempt and a waste of processor resources. In other words, although the cost of a spinlock is lower than that of a thread switch at first, it increases over time and can even exceed the cost of a thread switch later on.

10.6 Application Scenarios

So let’s take a look at where spin locks work. First, spinlocks are suitable for scenarios where concurrency is not particularly high and where critical sections are short, so we can improve efficiency by avoiding thread switching.

However, if the critical section is large and threads hold the lock for a long time, spinning is not suitable: the spin keeps burning CPU without getting the lock, wasting resources.

10.7 Adaptive spin lock

JDK 1.6 introduced a smarter variant: the adaptive spin lock, whose spin count is no longer fixed. Roughly speaking, if spinning on a given lock succeeded recently, the virtual machine assumes it is likely to succeed again and allows the thread to spin longer this time. Conversely, if spinning on a lock rarely succeeds, later attempts spin less or skip spinning altogether, to avoid wasting processor resources. Doesn't that feel a lot smarter?