1. Foundation of Multi-threading (3)

1.1 Lock classification in Java

Optimistic lock/pessimistic lock

Pessimistic and optimistic locks are lock design concepts rather than specific lock implementations.

Pessimistic locking

For concurrent operations on the same data, a pessimistic lock assumes that other threads will modify the data while it is in use, so it locks the data before reading or writing it, guaranteeing that no other thread can modify it in the meantime. In Java, the synchronized keyword and the implementation classes of the Lock interface are pessimistic locks.

Optimistic locking

An optimistic lock assumes that no other thread will modify the data while it is in use, so it does not lock at all. It only checks, at update time, whether another thread has modified the data in the meantime. If not, the current thread writes its modified value successfully; if the data has already been updated by another thread, it reacts according to the implementation (for example, reporting an error or retrying automatically). In Java, optimistic locking is implemented with lock-free programming, most commonly the CAS algorithm; for example, the increment operations of the java.util.concurrent.atomic classes are implemented with a CAS spin.
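As a minimal sketch of this behavior, two threads can increment an AtomicInteger concurrently without taking any lock; the class and method names below are ours, for illustration only:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    static int runCounters() throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                counter.incrementAndGet(); // CAS spin inside: read, add, compareAndSet, retry on conflict
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return counter.get(); // no increment is ever lost, without any lock
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runCounters()); // prints 2000
    }
}
```

If the same counter were a plain `int num` with `num++`, some increments could be lost, because `num++` is not atomic.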

Exclusive lock (X)/Shared lock (S)

Exclusive lock

An exclusive lock, also called an X lock or mutex, can be held by only one thread at a time. If thread T holds an exclusive lock on data A, no other thread can hold any type of lock on A. In Java, synchronized in the JDK and the implementation classes of Lock in JUC are exclusive locks.

Shared lock

A shared lock, also called an S lock, can be held by multiple threads at once. If thread T adds a shared lock to data A, other threads can only add shared locks to A, not an exclusive lock.

In terms of compatibility, only shared locks are compatible with each other (S/S); an exclusive lock is incompatible with any other lock (X/X and X/S).

These two types of locks are also conceptual.

Mutex/read-write lock

These two types of locks are implementations of exclusive and shared locks.

Mutex: ReentrantLock.

Read/write lock: ReadWriteLock.

Reentrant lock

A reentrant lock, also known as a recursive lock, means that if a thread acquires the lock in an outer method, an inner method of the same thread acquires the lock automatically (provided the lock object is the same object or class); the thread is not blocked just because a lock it already holds has not been released. In Java, both ReentrantLock and synchronized are reentrant locks. One advantage of reentrancy is that it avoids deadlock to some extent.

Take the reentrancy of synchronized as an example. Suppose class A has two synchronized methods, B and C, and B calls C. When a thread calls B, it must acquire the synchronization lock; when B then calls C, the thread already holds that lock, so it acquires it again automatically, which is equivalent to adding another layer of the same lock. The layers are released one by one as the methods return. If synchronized were not reentrant, the call to C could not proceed after B acquired the lock, because the lock is held by the calling thread itself; to enter C, the thread would have to wait for a lock that only it can release, which is a deadlock.
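The scenario above can be sketched with two synchronized instance methods (the class, method, and trace names are ours, for illustration):

```java
public class ReentrantDemo {
    private final StringBuilder trace = new StringBuilder();

    public synchronized String outer() {   // acquires the lock on `this` (method B above)
        trace.append("outer;");
        inner();                           // same thread re-enters: hold count goes 1 -> 2
        return trace.toString();           // both holds are released as the methods return
    }

    public synchronized void inner() {     // method C above: acquired automatically, no blocking
        trace.append("inner;");
    }

    public static void main(String[] args) {
        System.out.println(new ReentrantDemo().outer()); // prints outer;inner;
    }
}
```

If synchronized were not reentrant, the call from `outer()` to `inner()` would block forever on the lock the thread itself already holds.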

Fair lock/unfair lock

Fair lock

A fair lock means that threads acquire the lock in the order in which they requested it: a requesting thread joins the tail of a queue, and only the thread at the head of the queue can obtain the lock. The advantage of a fair lock is that waiting threads do not starve. The disadvantage is that overall throughput is lower than with an unfair lock: every thread except the head of the queue blocks, and the CPU cost of waking blocked threads is higher than with an unfair lock.

Unfair lock

With an unfair lock, a thread first tries to acquire the lock directly and only joins the tail of the waiting queue if that attempt fails. If the lock happens to be free at that moment, the thread acquires it immediately without blocking, so a thread that requested the lock later may obtain it first. The advantage of unfair locking is that it reduces thread-wakeup overhead and gives higher overall throughput, because a thread has a chance to get the lock without blocking and the CPU does not have to wake every waiter. The downside is that threads in the waiting queue may starve or wait a very long time for the lock.

Segmented lock

A segmented lock splits the protected data into segments and locks only the segment being operated on. The granularity of the lock becomes smaller and the probability of contention between threads is lower. Segmenting is a way of designing locks rather than a specific lock; a well-known implementation is ConcurrentHashMap (which used segment locking in JDK 7 and earlier).
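A minimal sketch of the segmenting idea is a striped counter, where each key hashes to one of several independently locked stripes. This is illustrative only and does not mirror ConcurrentHashMap's actual internals:

```java
import java.util.concurrent.locks.ReentrantLock;

public class StripedCounter {
    private static final int STRIPES = 16;              // number of independent segments
    private final ReentrantLock[] locks = new ReentrantLock[STRIPES];
    private final long[] counts = new long[STRIPES];

    public StripedCounter() {
        for (int i = 0; i < STRIPES; i++) locks[i] = new ReentrantLock();
    }

    public void increment(Object key) {
        int stripe = (key.hashCode() & 0x7fffffff) % STRIPES;
        locks[stripe].lock();                           // only this stripe is locked;
        try {                                           // other stripes stay contention-free
            counts[stripe]++;
        } finally {
            locks[stripe].unlock();
        }
    }

    public long total() {                               // lock every stripe for a consistent sum
        long sum = 0;
        for (ReentrantLock l : locks) l.lock();
        try {
            for (long c : counts) sum += c;
        } finally {
            for (ReentrantLock l : locks) l.unlock();
        }
        return sum;
    }
}
```

Two threads incrementing keys that fall into different stripes never contend, which is exactly the benefit of the smaller lock granularity described above.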

No lock/biased lock/lightweight lock/heavyweight lock

These four refer to states of a lock, specifically the lock states of synchronized.

Pre-concept:

How does synchronized achieve thread synchronization?

synchronized implements thread synchronization through a Monitor, which in turn relies on the underlying operating system's Mutex Lock.

Java object header

Synchronized is a pessimistic lock that requires a lock on a synchronized resource before operating on it. This lock is stored in the Java object header.

Let’s take Hotspot virtual machine as an example. Hotspot’s object header contains two parts of data: Mark Word (Mark field) and Klass Pointer (type Pointer).

Mark Word: by default stores the object's HashCode, generational age, and lock flag bits. This information is unrelated to the definition of the object itself, so the Mark Word is designed as a non-fixed data structure that stores as much data as possible in a very small space, reusing its storage bits according to the object's state. This means the data stored in the Mark Word changes at runtime as the lock flag bits change.

Klass Pointer: a pointer to the object's class metadata, which the virtual machine uses to determine which class the object is an instance of.

Monitor

Monitor can be understood as a synchronization tool or a synchronization mechanism and is often described as an object. Each Java object has an invisible lock, called an internal lock or Monitor lock.

Monitor is a thread-private data structure: each thread has a list of available Monitor Records, and there is also a global list of available records. Each locked object is associated with a Monitor, and the Owner field in that Monitor stores the unique identity of the thread that holds the lock, indicating that the lock is occupied by that thread.
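In user code, the monitor of an object is reached through synchronized blocks and Object.wait/notify. A minimal sketch of that interaction, with class and field names of our own choosing:

```java
public class MonitorDemo {
    private final Object monitor = new Object();  // any Java object can serve as a monitor
    private boolean ready = false;

    public void produce() {
        synchronized (monitor) {        // acquire the object's monitor lock
            ready = true;
            monitor.notify();           // wake one thread waiting on this monitor
        }
    }

    public void consume() throws InterruptedException {
        synchronized (monitor) {
            while (!ready) {            // guard against spurious wakeups
                monitor.wait();         // release the monitor and block until notified
            }
        }
    }
}
```

`wait()` atomically releases the monitor and suspends the thread; after `notify()`, the woken thread must re-acquire the monitor before it returns from `wait()`.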

Blocking or waking a Java thread requires the operating system to switch CPU state, which takes processor time. If the contents of a synchronized block are very simple, these state transitions can take longer than the user code itself. This is how synchronized was originally implemented, which is why it was inefficient prior to JDK 6. This type of lock, which relies on the operating system's Mutex Lock, is called a "heavyweight lock." In JDK 6, "biased locks" and "lightweight locks" were introduced to reduce the cost of acquiring and releasing locks.

There are four lock states, from lowest to highest: no lock, biased lock, lightweight lock, and heavyweight lock. The lock status can only be upgraded, not degraded.

No lock

In the no-lock state, no resource is locked: all threads can access and attempt to modify the same resource, but only one thread will succeed in modifying it at a time.

The characteristic of lock-free operation is that the modification happens inside a loop: the thread repeatedly tries to modify the shared resource. If there is no conflict, the modification succeeds and the loop exits; otherwise it retries. If multiple threads modify the same value, one succeeds and the others retry until they succeed. The CAS mechanism described above is an implementation of lock-free operation. Lock-free operation cannot completely replace locking, but its performance is very high in some scenarios.
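The loop just described can be written explicitly with compareAndSet; the helper method below is ours, for illustration:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasSpinDemo {
    static final AtomicInteger value = new AtomicInteger(0);

    // The modification occurs in a loop: read, compute, compareAndSet, retry on conflict.
    static void addTen() {
        int prev, next;
        do {
            prev = value.get();
            next = prev + 10;
        } while (!value.compareAndSet(prev, next)); // fails only if another thread got in between
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(CasSpinDemo::addTen);
        Thread t2 = new Thread(CasSpinDemo::addTen);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(value.get()); // prints 20
    }
}
```

`compareAndSet(prev, next)` only writes `next` if the current value is still `prev`, which is exactly the "check whether another thread updated it first" step of optimistic locking.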

Biased locking

Biased locking means that a piece of synchronized code is only ever accessed by one thread, so that thread acquires the lock automatically, reducing the cost of acquisition.

In most cases, the lock is always acquired multiple times by the same thread, and there is no multithreading competition, so biased locking occurs. The goal is to improve performance when only one thread executes a synchronized code block.

When a thread first accesses a synchronized block and acquires the lock, its thread ID is stored in the Mark Word as the lock's bias. Thereafter, when that thread enters or exits the synchronized block, it does not use CAS to lock or unlock; it merely checks whether the Mark Word stores a bias pointing at the current thread. Biased locks were introduced to minimize the unnecessary lightweight-lock path when there is no contention: acquiring and releasing a lightweight lock requires multiple CAS atomic instructions, while a biased lock needs only one CAS instruction, when the ThreadID is first installed.

A thread holding a biased lock never releases it proactively; the bias is revoked only when another thread attempts to compete for the lock. Revocation requires waiting for a global safepoint (a point at which no bytecode is executing): the thread holding the biased lock is suspended, and the JVM checks whether the lock object is currently locked. After the biased lock is revoked, the object reverts to the no-lock state (flag bits "01") or is upgraded to the lightweight-lock state (flag bits "00").

Biased locking is enabled by default in JDK 6 and later JVMs. It can be turned off with the JVM argument -XX:-UseBiasedLocking. When it is off, the program enters the lightweight-lock state directly.

Lightweight lock

A lightweight lock means that when a biased lock is accessed by a second thread, the biased lock is upgraded to a lightweight lock: the other thread tries to acquire the lock by spinning rather than blocking, which improves performance.

When code enters a synchronized block and the lock state of the synchronization object is no-lock (lock flag bits "01", biased-lock bit "0"), the virtual machine first creates a space called a Lock Record in the current thread's stack frame, used to hold a copy of the lock object's current Mark Word, and then copies the Mark Word from the object header into that lock record.

After the copy succeeds, the VM uses a CAS operation to try to update the object's Mark Word to a pointer to the Lock Record, and sets the owner pointer in the Lock Record to the object's Mark Word.

If the update succeeds, the thread owns the lock on the object, and the object’s Mark Word lock bit is set to 00, indicating that the object is in a lightweight locked state.

If the lightweight lock fails to update, the virtual machine first checks whether the object’s Mark Word refers to the current thread’s stack frame. If it does, the current thread already owns the lock and can proceed directly to the synchronization block. Otherwise, multiple threads compete for the lock.

If there is currently only one waiting thread, that thread waits by spinning. But when the spin exceeds a certain number of iterations, or when one thread holds the lock, a second is spinning, and a third thread arrives to compete, the lightweight lock is upgraded to a heavyweight lock.

Heavyweight lock

When upgrading to a heavyweight lock, the status value of the lock flag changes to “10”, then the pointer to the heavyweight lock is stored in the Mark Word, and all the threads waiting for the lock will enter the blocking state.

To sum up: a biased lock handles locking by comparing the Mark Word, avoiding CAS operations entirely; a lightweight lock uses CAS and spinning, avoiding the performance cost of blocking and waking threads; a heavyweight lock blocks every thread except the one that owns the lock.

Spin lock/adaptive spin lock

Spin lock

Blocking or waking up a Java thread requires the operating system to switch CPU state, which takes processor time. If synchronizing the contents of a code block is too simple, state transitions can take longer than user code execution.

In many scenarios, synchronization resources are locked for a short period of time, and the cost of thread suspension and site recovery may not be worth the cost to the system to switch threads for this short period of time. If the physical machine has multiple processors that allow two or more threads to execute in parallel at the same time, we can let the later thread that requests the lock not give up CPU execution time to see if the thread that holds the lock will release the lock soon.

In order for the current thread to “hold on”, we need to spin the current thread. If the thread that held the synchronized resource before spinning has released the lock, the current thread can directly acquire the synchronized resource without blocking, thus avoiding the overhead of switching threads. This is the spin lock.

Basically, the thread keeps trying to acquire the lock in a loop.

Spin locks have their own drawbacks; they are no substitute for blocking. Spin waiting avoids the overhead of thread switching, but it occupies processor time. If the lock is held for a short period, spin waiting works very well; conversely, if the lock is held for a long time, the spinning thread wastes processor resources. Therefore, the spin wait must be bounded: the thread should be suspended if it has not acquired the lock when the spin limit is reached (the default is 10 spins, which can be changed with -XX:PreBlockSpin).
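A minimal spin lock can be sketched with an atomic owner reference. This is illustrative only: it is not reentrant and burns CPU while waiting, so real code should prefer the java.util.concurrent locks, which park waiting threads instead:

```java
import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        while (!owner.compareAndSet(null, current)) {
            // busy-wait: keep retrying the CAS instead of blocking
        }
    }

    public void unlock() {
        owner.compareAndSet(Thread.currentThread(), null); // only the owner may release
    }
}
```

`lock()` succeeds only for the thread whose CAS installs itself as owner; everyone else loops, which is exactly the "hold on without giving up the CPU" behavior described above.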

Adaptive spin lock

Adaptive means that the spin time (number of spins) is no longer fixed, but is determined by the previous spin time on the same lock and the state of the lock owner. If the spin wait has just successfully acquired the lock on the same lock object, and the thread holding the lock is running, the virtual machine will assume that the spin wait is likely to succeed again, and it will allow the spin wait to last a relatively long time. If spin is rarely successfully acquired for a lock, it is possible to omit the spin process and block the thread directly in future attempts to acquire the lock, avoiding wasting processor resources.

1.2 Lock examples in Java

ReentrantLock

Basic usage

public class TestThread {
    private static ReentrantLock reentrantLock = new ReentrantLock();

    private static int num = 10;

    public static void main(String[] args) {
        new Thread(() -> {
            reentrantLock.lock();
            try {
                for (int i = 0; i < 10; i++) {
                    System.out.println(String.format("%s:%d", Thread.currentThread().getName(), num));
                    Thread.sleep(100);
                    num++;
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                reentrantLock.unlock();
            }
        }).start();

        new Thread(() -> {
            reentrantLock.lock();
            try {
                for (int i = 0; i < 10; i++) {
                    System.out.println(String.format("%s:%d", Thread.currentThread().getName(), num));
                    Thread.sleep(100);
                    num--;
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                reentrantLock.unlock();
            }
        }).start();
    }
}

ReentrantLock, like synchronized, is a reentrant lock, so how does it differ from synchronized?

You can try to obtain the lock

With synchronized, a thread that tries to acquire a lock held by another thread blocks until the lock is acquired. With ReentrantLock, a thread can merely attempt to acquire the lock and, if the attempt fails, go and do other work instead.

public class TestThread {

    private static ReentrantLock reentrantLock = new ReentrantLock();

    private static int num = 10;

    public static void main(String[] args) {
        new Thread(() -> {
            boolean lockStatus = reentrantLock.tryLock();
            if (!lockStatus){
                System.out.println(String.format("%s failed to acquire the lock",Thread.currentThread().getName()));
                return;
            }else{
                try{
                    System.out.println(String.format("%s:%d",Thread.currentThread().getName(),num));
                }finally {
                    reentrantLock.unlock();
                }
            }
        }).start();

        new Thread(() -> {
            reentrantLock.lock();
            try {
                for (int i = 0; i < 10; i++) {
                    System.out.println(String.format("%s:%d",Thread.currentThread().getName(),num));
                    Thread.sleep(100);
                    num++;
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                reentrantLock.unlock();
            }
        }).start();


    }
}

Note: if tryLock() succeeds, the resource is locked, so you must not call lock() again. tryLock() attempts to acquire the lock and, if the attempt fails, returns immediately without waiting. The overload tryLock(long timeout, TimeUnit unit) throws InterruptedException: if the lock is not immediately available, it blocks for up to the specified time, returning as soon as the lock is acquired (or the timeout elapses), and it can be interrupted while waiting.
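The timed variant can be sketched as follows; the class name, timings, and helper method are ours, for illustration:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockTimeoutDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    static boolean acquireWithTimeout() throws InterruptedException {
        Thread holder = new Thread(() -> {
            lock.lock();
            try {
                TimeUnit.MILLISECONDS.sleep(200); // hold the lock briefly
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                lock.unlock();
            }
        });
        holder.start();
        TimeUnit.MILLISECONDS.sleep(50);          // let the holder grab the lock first

        // Wait up to 2 seconds; succeeds once the holder releases within that window.
        boolean acquired = lock.tryLock(2, TimeUnit.SECONDS);
        if (acquired) {
            lock.unlock();
        }
        holder.join();
        return acquired;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(acquireWithTimeout()); // prints true
    }
}
```

A plain `tryLock()` at the same point would return false immediately, because the holder still owns the lock when the attempt is made.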

Get a fair lock

The no-argument constructor creates an unfair lock:

public ReentrantLock() {
    this.sync = new ReentrantLock.NonfairSync();
}

public ReentrantLock(boolean fair) {
    this.sync = (ReentrantLock.Sync)(fair ? new ReentrantLock.FairSync() : new ReentrantLock.NonfairSync());
}

If you want a fair lock, use the one-argument constructor: new ReentrantLock(true).

Read/write lock (ReentrantReadWriteLock)

This lock is split into a read lock and a write lock. Threads take the read lock to read data; if other threads are already reading, a new reader can still acquire the read lock without waiting for it to be released. Threads take the write lock to write data, and the write lock prevents all other threads from reading or writing.

Basic usage

public class TestThread {
    private static ReadWriteLock readWriteLock = new ReentrantReadWriteLock();
    private static Lock readLock = readWriteLock.readLock();
    private static Lock writeLock = readWriteLock.writeLock();

    public static void main(String[] args) {
        new Thread(() -> {
            String name = Thread.currentThread().getName();
            System.out.println(name + " read lock: entering");
            readLock.lock();
            try {
                Thread.sleep(5000);
                System.out.println(String.format("%s: done", name));
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                readLock.unlock();
            }
        }).start();

        new Thread(() -> {
            String name = Thread.currentThread().getName();
            System.out.println(name + " write lock: entering");
            writeLock.lock();
            try {
                System.out.println(String.format("%s: done", name));
            } finally {
                writeLock.unlock();
            }
        }).start();
    }
}

Result (the write lock is not granted until the read lock is released):

Thread-0 read lock: entering
Thread-1 write lock: entering
Thread-0: done
Thread-1: done

Note: a write lock cannot be acquired while a read lock is held.

public class TestThread {
    private static ReadWriteLock readWriteLock = new ReentrantReadWriteLock();
    private static Lock readLock = readWriteLock.readLock();
    private static Lock writeLock = readWriteLock.writeLock();

    public static void main(String[] args) {
        new Thread(() -> {
            String name = Thread.currentThread().getName();
            System.out.println(name + " read lock: entering");
            readLock.lock();
            try {
                Thread.sleep(5000);
                System.out.println(String.format("%s: done", name));
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                readLock.unlock();
            }
        }).start();

        new Thread(() -> {
            String name = Thread.currentThread().getName();
            System.out.println(name + " second read lock: entering");
            readLock.lock();
            try {
                System.out.println(String.format("%s: done", name));
            } finally {
                readLock.unlock();
            }
        }).start();
    }
}

Result (the second read lock is granted immediately, so Thread-1 finishes first):

Thread-0 read lock: entering
Thread-1 second read lock: entering
Thread-1: done
Thread-0: done

Note: while a read lock is held, other threads can still acquire the read lock.

