Many articles on concurrency mention various kinds of locks, such as fair locks and optimistic locks. This article introduces the common lock classifications, as follows:

1. Fair lock/unfair lock

2. Reentrant lock/non-reentrant lock

3. Exclusive or shared lock

4. Mutex/read-write lock

5. Optimistic/pessimistic lock

6. Block lock

7. Bias lock/lightweight lock/heavyweight lock

8. The spin lock

These categories do not all refer to the state of a lock: some describe a lock's characteristics, and some describe a lock's design. The following sections explain what each of these terms means.

Fair lock/unfair lock

Fair lock

A fair lock is one in which multiple threads acquire the lock in the order in which they requested it.

Unfair lock

An unfair lock allows threads to acquire the lock in an order different from the order in which they requested it. This can lead to priority inversion or starvation.

In the case of Java's ReentrantLock, the constructor specifies whether the lock is fair, and the default is unfair. The advantage of an unfair lock is that its throughput is greater than that of a fair lock. synchronized is also an unfair lock; since it does not implement thread scheduling via AQS the way ReentrantLock does, there is no way to make it fair.
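As a minimal sketch of that constructor flag (the class name FairnessDemo is illustrative, not from the original article), ReentrantLock even exposes the choice via isFair():

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock unfair = new ReentrantLock();   // default constructor: unfair
        ReentrantLock fair = new ReentrantLock(true); // true: waiting threads acquire in FIFO order

        System.out.println(unfair.isFair()); // false
        System.out.println(fair.isFair());   // true
    }
}
```

The fair variant hands the lock to the longest-waiting thread, at the cost of extra scheduling overhead, which is why the unfair default yields higher throughput.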

Reentrant lock/non-reentrant lock

Reentrant lock

Broadly speaking, a reentrant lock is one that can be acquired repeatedly and recursively: after the lock is taken in an outer method, an inner method can still take it without deadlock (provided it is the same object or class). Such a lock is called a reentrant lock. ReentrantLock and synchronized are both reentrant locks.

synchronized void setA() throws Exception{
   Thread.sleep(1000);
   setB();
}
synchronized void setB() throws Exception{
   Thread.sleep(1000);
}

The code above illustrates reentrancy. Without a reentrant lock, setB could not be entered by the current thread (which already holds the lock in setA), causing a deadlock.

Non-reentrant lock

A non-reentrant lock, as opposed to a reentrant lock, cannot be acquired recursively; a recursive call deadlocks. A classic illustration uses a spin lock to simulate a non-reentrant lock, with code as follows:

import java.util.concurrent.atomic.AtomicReference;

public class UnreentrantLock {

    private AtomicReference<Thread> owner = new AtomicReference<Thread>();

    public void lock() {
        Thread current = Thread.currentThread();
        // This is the classic "spin" idiom, also found in AtomicInteger
        for (;;) {
            if (owner.compareAndSet(null, current)) {
                return;
            }
        }
    }

    public void unlock() {
        Thread current = Thread.currentThread();
        owner.compareAndSet(current, null);
    }
}

If the same thread calls lock() twice without calling unlock() in between, the second call deadlocks. Because the lock is not reentrant, the same thread must release the lock and reacquire it every time, and such scheduling switches are resource-intensive.

Turn it into a reentrant lock:

import java.util.concurrent.atomic.AtomicReference;

public class ReentrantLockDemo {

    private AtomicReference<Thread> owner = new AtomicReference<Thread>();
    private int state = 0;

    public void lock() {
        Thread current = Thread.currentThread();
        if (current == owner.get()) {
            state++;
            return;
        }
        // This is the classic "spin" idiom, also found in AtomicInteger
        for (;;) {
            if (owner.compareAndSet(null, current)) {
                return;
            }
        }
    }

    public void unlock() {
        Thread current = Thread.currentThread();
        if (current == owner.get()) {
            if (state != 0) {
                state--;
            } else {
                owner.compareAndSet(current, null);
            }
        }
    }
}

Before each operation, check whether the current lock holder is the current thread, and use the state counter instead of releasing and reacquiring the lock each time.

Reentrancy implementation in ReentrantLock

Here is the lock acquisition method for an unfair lock:

final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        if (compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    // Here it is: the reentrant path
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0) // overflow
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}

AQS maintains a private volatile int state to count reentrant acquisitions, avoiding frequent acquire/release cycles, improving efficiency, and avoiding deadlock.

Exclusive lock/shared lock

If you read ReentrantLock and ReentrantReadWriteLock in the J.U.C (java.util.concurrent) package, you will find that one is exclusive and the other is shared.

Exclusive lock: This lock can only be held by one thread at a time.

Shared lock: This lock can be shared by multiple threads. A typical example is the read lock in ReentrantReadWriteLock. The read lock can be shared, but the write lock can only be used exclusively each time.

In addition, a shared read lock makes concurrent reads very efficient, but read/write and write/read access are mutually exclusive.

Exclusive and shared locks are also implemented through AQS; by implementing different methods, either exclusive or shared behavior is achieved. synchronized, of course, is an exclusive lock.
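A small sketch of the shared/exclusive split using ReentrantReadWriteLock (the demo class name is illustrative): the read lock can be held multiple times simultaneously, while the write lock cannot be taken while any read lock is held (ReentrantReadWriteLock also does not support upgrading a read lock to a write lock):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedReadDemo {
    public static void main(String[] args) {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

        // The read lock is shared: it can be held more than once at the same time.
        rw.readLock().lock();
        rw.readLock().lock();
        System.out.println(rw.getReadLockCount()); // 2
        rw.readLock().unlock();
        rw.readLock().unlock();

        // The write lock is exclusive: while a read lock is held,
        // tryLock on the write lock fails (read and write are mutually exclusive).
        rw.readLock().lock();
        System.out.println(rw.writeLock().tryLock()); // false
        rw.readLock().unlock();
    }
}
```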

Mutex/read-write lock

Mutex

A shared resource is locked before being accessed and unlocked afterward. Once locked, any other thread that tries to lock it blocks until the current thread unlocks it.

If more than one thread is blocked, all threads waiting on the lock become ready when it is released. The first thread to run performs the lock operation, and the other threads return to waiting. In this way, only one thread at a time can access the resource protected by the mutex.

Read-write lock

A read-write lock is both a mutex and a shared lock: read mode is shared, and write mode is mutually exclusive (an exclusive lock).

A read-write lock has three states: read-locked, write-locked, and unlocked.

The Java implementation of a read-write lock is ReentrantReadWriteLock (via the ReadWriteLock interface).

Only one thread at a time can hold a read-write lock in write mode, but multiple threads can hold it in read mode simultaneously, which is why it achieves high concurrency. When the lock is in the write state, any thread attempting to acquire it blocks until the write lock is released. When it is in the read state, other threads may acquire its read lock but not its write lock, until all read locks are released. To prevent writer starvation, when the read-write lock senses that a thread is trying to acquire the write lock, it blocks all subsequent threads attempting to acquire the read lock. Read-write locks are therefore ideal when reads of a resource far outnumber writes.

Optimistic lock/pessimistic lock

Pessimistic locking

Pessimistic locking always assumes the worst case: every time data is fetched, it assumes someone else will modify it, so the data is locked on every access, and anyone else who wants it blocks until they obtain the lock (the shared resource is used by only one thread at a time; other threads block and receive the resource afterward). Traditional relational databases use many such locking mechanisms — row locks, table locks, read locks, write locks — all of which lock before operating. Exclusive locks such as synchronized and ReentrantLock in Java are implementations of the pessimistic-locking idea.

Optimistic locking

Optimistic locking always assumes the best case: every time data is fetched, it assumes no one else will modify it, so it does not lock. Only when updating does it check whether anyone else has updated the data in the meantime, typically via a version-number mechanism or the CAS algorithm. Optimistic locks suit read-heavy applications and improve throughput; databases provide mechanisms similar to write_condition that are optimistic locks. In Java, the atomic variable classes in the java.util.concurrent.atomic package are implemented using CAS, one form of optimistic locking.
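The optimistic read-compute-CAS-retry cycle can be sketched with AtomicInteger (the OptimisticCounter class is a hypothetical example, not from the article):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Optimistic update: read without locking, compute, then publish via CAS;
    // on conflict, retry instead of blocking.
    public int increment() {
        for (;;) {
            int current = value.get();                 // 1. read the current value (no lock)
            int next = current + 1;                    // 2. compute the new value
            if (value.compareAndSet(current, next)) {  // 3. publish only if nobody changed it
                return next;
            }
            // Another thread won the race; loop and retry.
        }
    }

    public int get() {
        return value.get();
    }
}
```

Note that AtomicInteger.incrementAndGet() already performs exactly this loop internally; the explicit version just makes the optimistic retry visible.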

Segmented lock

Segmented locking is actually a lock design rather than a specific lock. In ConcurrentHashMap, for example, efficient concurrent operation is achieved in the form of segmented locking.

The locking mechanism of the concurrent container classes is based on segmented locking with finer granularity; segmented locking is also one of the important means of improving the performance of highly concurrent programs.

In concurrent programs, serial operations reduce scalability and context switching reduces performance. Both problems are aggravated by lock contention: when an exclusive lock protects a shared resource, access is essentially serial — only one thread can proceed at a time. So the biggest threat to scalability is the exclusive lock.

There are three ways to reduce lock contention: 1. reduce the time a lock is held; 2. reduce the frequency of lock requests; 3. replace exclusive locks with coordination mechanisms that allow greater concurrency.

In some cases we can extend the lock-decomposition technique further and decompose the lock over a set of independent objects; this is called segmented locking.

In fact, to put it simply:

The container holds multiple locks, each guarding part of the container's data. When multiple threads access data in different segments, there is no lock contention between them, which effectively improves the efficiency of concurrent access. This is the lock-segmentation technique used by ConcurrentHashMap: the data is first divided into segments, and each segment is assigned a lock. While one thread holds a segment's lock to access that segment's data, the other segments remain accessible to other threads.

For example, ConcurrentHashMap uses an array of 16 locks, each securing 1/16 of the hash buckets, where the Nth hash bucket is protected by lock (N mod 16). Assuming a proper hash algorithm distributes the keys evenly, this reduces the demand on each lock to about 1/16 of what it would otherwise be. It is this technique that allows ConcurrentHashMap to support up to 16 concurrent writing threads.
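The scheme above can be sketched as a simplified lock-striping container — this is a hypothetical illustration modeled on the 16-segment idea, not ConcurrentHashMap's actual implementation:

```java
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical striped container: bucket N is guarded by lock (N mod 16),
// so writers to buckets in different stripes never contend.
public class StripedCounterMap {
    private static final int STRIPES = 16;
    private final ReentrantLock[] locks = new ReentrantLock[STRIPES];
    private final long[] buckets = new long[1024];

    public StripedCounterMap() {
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new ReentrantLock();
        }
    }

    private ReentrantLock lockFor(int bucket) {
        return locks[bucket % STRIPES]; // N mod 16 picks the stripe
    }

    public void add(int bucket, long delta) {
        ReentrantLock lock = lockFor(bucket);
        lock.lock(); // contends only with threads touching the same stripe
        try {
            buckets[bucket] += delta;
        } finally {
            lock.unlock();
        }
    }

    public long get(int bucket) {
        ReentrantLock lock = lockFor(bucket);
        lock.lock();
        try {
            return buckets[bucket];
        } finally {
            lock.unlock();
        }
    }
}
```

Up to 16 threads can update disjoint stripes simultaneously, whereas a single exclusive lock over the whole array would serialize them all.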

Bias lock/lightweight lock/heavyweight lock

Lock status:

1. There is no lock

2. Bias to the lock state

3. Lightweight lock status

4. Heavyweight lock status

The state of the lock is indicated by fields in the object header. The four states escalate gradually as contention increases, and the process is irreversible — a lock cannot be downgraded. None of these four states are locks in the Java language itself; they are optimizations the JVM makes to improve the efficiency of acquiring and releasing locks (when synchronized is used).

Biased locking

A biased lock means that when a piece of synchronized code is always accessed by the same thread, that thread acquires the lock automatically, reducing the cost of acquisition.

Lightweight lock

A lightweight lock means that when a biased lock is accessed by a different thread, the biased lock is upgraded to a lightweight lock. The other thread tries to acquire it by spinning, which avoids blocking and improves performance.

Heavyweight lock

A heavyweight lock means that while the lock is lightweight, another thread spins to acquire it, but the spinning cannot continue indefinitely. When the spin count reaches a threshold without acquiring the lock, the thread blocks and the lock inflates to a heavyweight lock. A heavyweight lock blocks other threads that request it and degrades performance.

Spin lock

We know that the CAS algorithm is one implementation of optimistic locking, and CAS also involves spinning, so let's look at what a spin lock is.

A quick review of the CAS algorithm

CAS stands for Compare And Swap, a well-known lock-free algorithm. Lock-free programming means synchronizing variables between multiple threads without using locks; it is also called non-blocking synchronization because no thread is blocked. The CAS algorithm involves three operands:

1. Memory value V to be read or written

2. The comparison value A

3. New value B to be written

When updating a variable, the value at memory address V is changed to B only if the expected value A matches the actual value at V; otherwise nothing is done. Typically this is a spin operation, i.e. repeated retries.
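The three operands can be seen directly in AtomicInteger.compareAndSet(expected, newValue) — a minimal demonstration (the CasDemo class name is illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger v = new AtomicInteger(10); // memory value V = 10

        // Expected value A (10) matches V, so the swap to B (20) succeeds.
        System.out.println(v.compareAndSet(10, 20)); // true
        System.out.println(v.get());                 // 20

        // Expected value A (10) no longer matches V (20): no operation is performed.
        System.out.println(v.compareAndSet(10, 30)); // false
        System.out.println(v.get());                 // 20
    }
}
```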

What is a spin lock?

Spin lock: when a thread tries to acquire a lock that has already been taken by another thread, the thread waits in a loop, repeatedly checking whether the lock can be acquired, and exits the loop only when it succeeds.

It is a locking mechanism for protecting shared resources. In fact, spin locks are similar to mutexes in that both solve mutual exclusion over a resource. Whether it is a mutex or a spin lock, there can be at most one holder at any one time — at most one execution unit can hold the lock. But the scheduling mechanism differs: with a mutex, if the resource is already occupied, the applicant goes to sleep; a spin lock does not cause the caller to sleep. If the spin lock is held by another execution unit, the caller keeps looping to see whether the holder has released the lock, hence the name "spin".

How does Java implement spin locking?

Here’s a simple example:

import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {
    private AtomicReference<Thread> cas = new AtomicReference<Thread>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Spin with CAS
        while (!cas.compareAndSet(null, current)) {
            // Do nothing
        }
    }

    public void unlock() {
        Thread current = Thread.currentThread();
        cas.compareAndSet(current, null);
    }
}

If thread A holds the lock and has not released it, another thread B that tries to acquire it will fail the CAS and enter the while loop, continuously retrying the CAS until thread A calls unlock to release the lock.
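A self-contained usage sketch (the demo class and the 100,000-iteration workload are illustrative): the spin lock from above, reproduced here, guards a plain int counter incremented by two threads, so no increments are lost:

```java
import java.util.concurrent.atomic.AtomicReference;

public class SpinLockDemo {
    static class SpinLock {
        private final AtomicReference<Thread> cas = new AtomicReference<Thread>();

        void lock() {
            Thread current = Thread.currentThread();
            while (!cas.compareAndSet(null, current)) { /* spin until the holder releases */ }
        }

        void unlock() {
            cas.compareAndSet(Thread.currentThread(), null);
        }
    }

    static final SpinLock LOCK = new SpinLock();
    static int counter;

    static int runDemo() {
        counter = 0;
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                LOCK.lock();
                try {
                    counter++; // plain int: safe only because the lock serializes access
                } finally {
                    LOCK.unlock();
                }
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start(); b.start();
        try {
            a.join(); b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(runDemo()); // 200000
    }
}
```

Without the lock, the unsynchronized counter++ would lose updates under contention.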

Problems with spin locks

1. If a thread holds the lock for too long, the other threads waiting to acquire it keep spinning, consuming CPU; this can drive CPU usage very high.

2. The Java spin lock above is not fair: it does not let the thread that has waited longest acquire the lock first. Unfair locks can lead to thread starvation.

Advantages of spin locks

1. A spin lock does not cause a thread state switch; the thread stays in user mode, i.e. it is always active.

2. A non-spin (blocking) lock puts the thread into the blocked state when the lock cannot be obtained, entering kernel mode; when the lock is finally obtained, the thread must be restored from kernel mode, which requires a thread context switch. (After a thread blocks, it enters the kernel (Linux) scheduling state, causing the system to switch back and forth between user mode and kernel mode, seriously hurting lock performance.)

Reentrant spinlocks and non-reentrant spinlocks

Looking at the spin lock code shown earlier, you can see that it is not reentrant: once a thread has acquired the lock, an attempt to acquire it again before releasing it cannot succeed. Since the CAS cannot be satisfied, the second acquisition spins in the while loop forever; with a reentrant lock, the second acquisition would also succeed.

Moreover, even if the second acquisition could succeed, the first unlock would also release the lock taken by the second acquisition, which is not reasonable.

To implement a reentrant lock, we need to introduce a counter that records the number of threads acquiring the lock.

import java.util.concurrent.atomic.AtomicReference;

public class ReentrantSpinLock {
    private AtomicReference<Thread> cas = new AtomicReference<Thread>();
    private int count;

    public void lock() {
        Thread current = Thread.currentThread();
        if (current == cas.get()) { // The current thread already holds the lock: bump the count and return
            count++;
            return;
        }
        // Not holding the lock yet: spin via CAS
        while (!cas.compareAndSet(null, current)) {
            // Do nothing
        }
    }

    public void unlock() {
        Thread cur = Thread.currentThread();
        if (cur == cas.get()) {
            if (count > 0) { // Greater than 0 means the current thread acquired the lock multiple times; release by decrementing
                count--;
            } else { // count == 0: the lock can actually be released, so acquisitions and releases match
                cas.compareAndSet(cur, null);
            }
        }
    }
}

Spin locks vs. mutexes

1. Both spin locks and mutexes are used to protect shared resources.

2. Either a spin lock or a mutex can have at most one holder at any time.

3. A thread that fails to acquire an occupied mutex goes to sleep; a thread that fails to acquire a spin lock does not sleep, but loops, waiting for the lock to be released.

Spin lock summary

1. Spin lock: When a thread acquires a lock, if the lock is held by another thread, the current thread loops until it acquires the lock.

2. The state of the thread does not change during the spin lock wait. The thread remains in user mode and active.

3. Holding a spin lock for too long will cause the other threads waiting to acquire it to waste CPU.

4. Spin locking by itself does not guarantee fairness, nor does it guarantee reentrancy.

5. Locks with fairness and reentrancy properties can be built on top of spin locks.

(End)

From: segmentfault.com/a/1190000017766364