Basic concepts of locking
Fair and unfair locks
Fair lock
A fair lock means that threads acquire the lock in the order in which they requested it: threads queue up, and only the first thread in the queue can obtain the lock.
The advantage of a fair lock is that no waiting thread starves. The disadvantage is that overall throughput is lower than with an unfair lock, because every thread in the wait queue except the first one blocks, and blocked threads must be woken by the CPU.
Unfair lock
An unfair lock means that a thread first tries to acquire the lock directly; only if that attempt fails does it join the end of the wait queue.
The advantage of an unfair lock is that it reduces the overhead of waking threads, so overall throughput is higher: a thread has a chance to acquire the lock without blocking, and the CPU does not have to wake every waiting thread. The downside is that threads in the wait queue may starve or wait a long time before acquiring the lock.
Reentrant and non-reentrant locks
A reentrant lock, also known as a recursive lock, allows the same thread to acquire the same lock (critical resource) repeatedly. The advantage of reentrancy is that it avoids deadlock during recursive acquisition, for example when a synchronized method in a subclass calls a synchronized method in its parent class; without reentrancy, that call would deadlock.
The idea behind reentrant locks is to associate each lock with an acquisition counter and an owner thread:
- When the count is 0, the lock is not held by any thread.
- When a thread requests a lock that is not held, the lock records that thread as its holder and sets the acquisition count to 1.
- If the same thread acquires the lock again, the count is incremented; when the synchronized block exits, the count is decremented.
- When the count reaches zero, the lock is released.
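The counting behavior described above can be observed directly on Java's `ReentrantLock` via `getHoldCount()` (a minimal sketch; the class name `ReentrantDemo` is just for illustration):

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();                             // count: 0 -> 1, owner recorded
        lock.lock();                             // same thread re-enters, count: 1 -> 2
        System.out.println(lock.getHoldCount()); // 2
        lock.unlock();                           // count: 2 -> 1
        lock.unlock();                           // count: 1 -> 0, lock released
        System.out.println(lock.getHoldCount()); // 0
    }
}
```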
Optimistic and pessimistic locks
Pessimistic locking assumes that other threads will modify the data while it is in use, so it locks the data before reading it to ensure no other thread can modify it concurrently.
Optimistic locking assumes that no other thread will modify the data while it is in use. Only at update time does it check whether the data has changed: if not, the current thread writes its modification successfully; if another thread has already updated it, the implementation decides what to do next (for example, report an error or retry automatically).
Optimistic locking is generally implemented using CAS.
CAS
CAS stands for Compare And Swap. It synchronizes a variable between multiple threads without locking, so no thread blocks. It is implemented on top of the CAS instruction provided by the CPU.
The CAS algorithm involves three operands:
- The memory address of the shared variable A
- The value B used for comparison
- The new value C of the shared variable
The value at address A in memory can be updated to the new value C only when the value at address A in memory is equal to B.
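In Java this is exposed through the `java.util.concurrent.atomic` classes; a minimal sketch with `AtomicInteger.compareAndSet` (the class name `CasDemo` is just for illustration):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(10);
        // Succeeds: the current value equals the expected value 10
        boolean first = value.compareAndSet(10, 20);
        // Fails: the current value is now 20, not the expected 10
        boolean second = value.compareAndSet(10, 30);
        System.out.println(first + " " + second + " " + value.get()); // true false 20
    }
}
```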
CAS also has some problems:
- It cannot detect the ABA problem on its own. CAS checks whether the memory value has changed before updating it. But if the value changes from A to B and then back to A, CAS sees an unchanged value even though it was modified. The solution is to attach a version number to the variable and increment it on every update, so the sequence changes from “A-B-A” to “1A-2B-3A.”
- If a CAS operation keeps failing, it spins continuously, which incurs high CPU overhead.
- CAS guarantees atomic operations on one shared variable, but cannot guarantee atomic operations on multiple shared variables.
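The version-number fix for the ABA problem is what `AtomicStampedReference` implements; a minimal sketch (the class name `AbaDemo` is just for illustration):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        // Value "A" with version (stamp) 1
        AtomicStampedReference<String> ref = new AtomicStampedReference<>("A", 1);
        int stamp = ref.getStamp(); // observed version: 1

        // Another thread performs A -> B -> A, bumping the stamp each time
        ref.compareAndSet("A", "B", 1, 2);
        ref.compareAndSet("B", "A", 2, 3);

        // The value is "A" again, but the stale stamp exposes the ABA change
        boolean updated = ref.compareAndSet("A", "C", stamp, stamp + 1);
        System.out.println(updated + " " + ref.getReference()); // false A
    }
}
```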
Deadlocks and live locks
Deadlock: A condition in which two or more processes (threads) wait for each other while competing for critical resources.
Livelock: the thread is not blocked, and may even voluntarily release resources, while competing for critical resources; but because some condition is never met, it keeps repeating a try-fail-try-fail cycle.
The solution to livelock is simply to wait a random amount of time before retrying to acquire the critical resource. Since the wait time is random, different threads' attempts are staggered, which reduces the probability of livelock. The "wait a random time" scheme is simple but very effective, and is used in well-known distributed consensus algorithms such as Raft.
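A minimal sketch of the random-backoff retry idea, built on `ReentrantLock.tryLock()` (the class name `BackoffDemo` and the 1-10 ms backoff window are illustrative choices, not fixed by the text):

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.locks.LockSupport;
import java.util.concurrent.locks.ReentrantLock;

public class BackoffDemo {
    static final ReentrantLock lock = new ReentrantLock();

    // Try to take the lock, backing off for a random interval between attempts
    // so that competing threads stagger their retries instead of livelocking.
    static boolean acquireWithBackoff(int maxRetries) {
        for (int i = 0; i < maxRetries; i++) {
            if (lock.tryLock()) {
                return true; // caller must eventually call lock.unlock()
            }
            // Wait a random 1-10 ms before the next attempt
            LockSupport.parkNanos(ThreadLocalRandom.current().nextLong(1_000_000, 10_000_000));
        }
        return false;
    }
}
```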
The commonly used locks
ReentrantLock
ReentrantLock uses an unfair lock by default; a fair lock can be requested explicitly via the constructor argument.
```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    public void fun() throws InterruptedException {
        Lock lock = new ReentrantLock();

        // lock(): if the lock has already been acquired by another thread, wait
        lock.lock();
        try {
            // do something
        } finally {
            lock.unlock();
        }

        // tryLock(): returns false immediately if the lock cannot be acquired
        if (lock.tryLock()) {
            try {
                // do something
            } finally {
                lock.unlock();
            }
        }

        // lockInterruptibly(): like lock(), but the wait can be interrupted
        lock.lockInterruptibly();
        try {
            // do something
        } finally {
            lock.unlock();
        }
    }
}
```
ReentrantReadWriteLock
Reentrant read-write locks, read locks are shareable, and write locks are mutually exclusive.
```java
public void fun() throws InterruptedException {
    ReadWriteLock reentrantReadWriteLock = new ReentrantReadWriteLock();

    // Read lock: shareable
    Lock readLock = reentrantReadWriteLock.readLock();
    readLock.lock();
    try {
        // do something
    } finally {
        readLock.unlock();
    }

    // Write lock: exclusive
    Lock writeLock = reentrantReadWriteLock.writeLock();
    writeLock.lock();
    try {
        // do something
    } finally {
        writeLock.unlock();
    }
}
```
StampedLock
StampedLock supports three modes: write lock, pessimistic read lock, and optimistic read.
The semantics of write locks and pessimistic read locks are very similar to those of ReadWriteLock. Multiple threads are allowed to acquire pessimistic read locks at the same time, but only one thread is allowed to acquire write locks. Write locks and pessimistic read locks are mutually exclusive. StampedLock write locks and pessimistic read locks return a stamp after being successfully locked. Then when unlocking, you need to pass in the stamp.
StampedLock performs somewhat better than ReadWriteLock because it supports optimistic reads. ReadWriteLock allows multiple threads to read simultaneously, but while any read lock is held, all writes are blocked. StampedLock's optimistic read holds no lock at all, so a thread can still acquire the write lock while optimistic reads are in progress; reads do not block writes.
```java
import java.util.concurrent.locks.StampedLock;

class Point {
    private int x, y;
    final StampedLock sl = new StampedLock();

    double distanceFromOrigin() {
        // Optimistic read: no lock is actually held
        long stamp = sl.tryOptimisticRead();
        int curX = x, curY = y;
        // Check whether a write occurred during the read.
        // If so, fall back to a pessimistic read lock and re-read.
        if (!sl.validate(stamp)) {
            stamp = sl.readLock();
            try {
                curX = x;
                curY = y;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return Math.sqrt(curX * curX + curY * curY);
    }
}
```
StampedLock is not reentrant, and if a thread blocks in StampedLock's readLock() or writeLock(), calling interrupt() on that blocked thread will cause a CPU spike; when interruption is needed, use readLockInterruptibly() or writeLockInterruptibly() instead.
Semaphore
A Semaphore, introduced in JDK 1.5, is used to control the number of threads that can access a critical resource at the same time.
```java
import java.util.concurrent.Semaphore;

public class LockDemo {
    Semaphore semaphore = new Semaphore(10);

    public void fun() throws InterruptedException {
        // Acquire outside the try block: if acquire() is interrupted,
        // we must not release a permit we never obtained
        semaphore.acquire();
        try {
            // do something
        } finally {
            semaphore.release();
        }
    }
}
```
Semaphore is useful for flow control, especially when a shared resource is limited, such as a pool of database connections. For example, suppose we need to read tens of thousands of files into a database. Since reading files is IO-intensive, we can start dozens of threads to read concurrently; but if only 10 database connections are available, we must ensure that at most 10 threads hold a connection at any time. A Semaphore handles exactly this kind of flow control.
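A minimal sketch of the connection-pool scenario, showing how permits track the 10 available connections (the class name `FlowControlDemo` is just for illustration):

```java
import java.util.concurrent.Semaphore;

public class FlowControlDemo {
    public static void main(String[] args) {
        // 10 permits model the 10 database connections
        Semaphore connections = new Semaphore(10);
        System.out.println(connections.availablePermits());            // 10
        boolean got = connections.tryAcquire();                        // one thread takes a connection
        System.out.println(got + " " + connections.availablePermits()); // true 9
        connections.release();                                         // connection returned
        System.out.println(connections.availablePermits());            // 10
    }
}
```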
AQS
AbstractQueuedSynchronizer (AQS) is a simple framework that provides atomic management of synchronization state, blocking and waking of threads, and a FIFO wait-queue model. Most synchronization classes in Java (ReentrantLock, Semaphore, CountDownLatch, etc.) are implemented on top of AQS.
Composition of AQS framework
AQS has several important components:
- Wait queue, used to hold threads trying to acquire critical resources
- State: indicates the critical resource status
- The thread currently occupying the critical resource
AQS has two modes, exclusive and shared. The two modes differ in what the state of the critical resource means and in how threads occupy it.
AQS in exclusive mode
In exclusive mode, critical resources can only be occupied by one thread.
In general, the initial value of state is set to 0, indicating idle. When a thread acquires synchronization status, the CAS operation increments state by one to indicate that it is not idle.
When acquiring a resource, there are two methods: tryAcquire and acquire:
- tryAcquire attempts to acquire the resource exclusively and returns true on success, false otherwise.
- acquire acquires the resource exclusively. Since the default implementation is unfair, the thread first tries to grab the critical resource directly; if it is already occupied, the thread joins the wait queue and is suspended until the resource is released.
The head element of the queue represents the thread that occupies the critical resource. When the critical resource is available, AQS wakes up the waiting thread and points the head to the node that grabs the critical resource.
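The exclusive-mode mechanics above (CAS state from 0 to 1, record the owner thread) can be sketched as a minimal non-reentrant mutex built on AQS, in the style of the example in the AQS Javadoc (the class names `Mutex`/`Sync` are illustrative; a real implementation would also validate ownership on release):

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class Mutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            // CAS state 0 -> 1: the resource was idle and we took it
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false; // already occupied
        }

        @Override
        protected boolean tryRelease(int arg) {
            // Simplified: clear the owner and mark the resource idle again
            setExclusiveOwnerThread(null);
            setState(0);
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()      { sync.acquire(1); }          // queue and park on failure
    public boolean tryLock(){ return sync.tryAcquire(1); } // single attempt, no queuing
    public void unlock()    { sync.release(1); }           // wake the head's successor
}
```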
AQS in shared mode
In shared mode, critical resources can be occupied by multiple threads.
In general, the initial value of state can be set to N (N > 0), indicating idle. Each time a thread obtains a synchronous state, the CAS operation is used to decrease state by one until it reaches zero to indicate non-idle.
To acquire a resource, there are two methods: tryAcquireShared and acquireShared:
- tryAcquireShared returns an int:
  - a negative value means acquisition failed;
  - 0 means acquisition succeeded but no resources remain;
  - a positive value means acquisition succeeded and resources remain for other threads to acquire.
- acquireShared, similar to exclusive mode, is unfair by default: the thread first tries to acquire the critical resource directly; if that fails, it joins the wait queue and waits to be woken when the resource becomes available.
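The shared-mode mechanics (state starts at N, CAS it downward until 0) can be sketched as a minimal Semaphore-like synchronizer (the class names `SharedSlots`/`Sync` are illustrative):

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class SharedSlots {
    private static class Sync extends AbstractQueuedSynchronizer {
        Sync(int slots) { setState(slots); } // state = N available slots

        @Override
        protected int tryAcquireShared(int arg) {
            for (;;) {
                int available = getState();
                int remaining = available - arg;
                // Negative: acquisition failed; >= 0: acquired, value reports what is left
                if (remaining < 0 || compareAndSetState(available, remaining))
                    return remaining;
            }
        }

        @Override
        protected boolean tryReleaseShared(int arg) {
            for (;;) {
                int current = getState();
                // CAS state back up; waiting threads are then woken
                if (compareAndSetState(current, current + arg))
                    return true;
            }
        }
    }

    private final Sync sync;

    public SharedSlots(int slots) { sync = new Sync(slots); }
    public void acquire()         { sync.acquireShared(1); }
    public void release()         { sync.releaseShared(1); }
    public boolean tryAcquire()   { return sync.tryAcquireShared(1) >= 0; }
}
```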
Whether in shared or exclusive mode, AQS does not respond to interrupts while waiting: a thread already in the sync queue that is woken by an interrupt does not immediately return or throw InterruptedException. Instead, it checks again whether its predecessor is the head node and, if so, competes for the synchronization state; if its predecessor is not the head, or it loses the race, it is suspended again. Only after it finally acquires the critical resource does it re-interrupt itself, via the selfInterrupt call in the following code.
```java
public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}
```