What is a lock?

A device placed on an openable object that is opened with a key or a code. (Baidu Encyclopedia)

For threads, a lock is a tool used to control access to a shared resource among multiple threads. Typically, a lock provides exclusive access: only one thread can hold the lock at a time, and all access to the shared resource requires acquiring the lock first. However, some locks allow concurrent access to a shared resource, such as the read lock of a ReadWriteLock.
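
As a minimal illustration of exclusive access (the Counter class below is made up for the example, not taken from any library), only one thread at a time can execute the synchronized methods on the same instance:

// Minimal sketch: a shared counter protected by an exclusive lock.
public class Counter {
    private int count = 0;

    // Only one thread at a time can run this method on the same instance,
    // because synchronized acquires the intrinsic lock of `this`.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}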

Pessimistic locks and optimistic locks

Pessimistic locking

A pessimistic lock takes a pessimistic view of the thread-safety problems caused by concurrent operations: it assumes contention will always occur, so every time it operates on a resource it holds an exclusive lock, just like synchronized. Other threads that want to manipulate the data cannot acquire the lock and have to block.

Here’s an example from life:

We go to the hospital for an appointment with a specialist; once our number is called and we enter the consultation room, we close the door behind us, afraid that someone else might come in.

Optimistic locking

An optimistic lock takes an optimistic view of the same thread-safety problem: it assumes contention will not always happen, so it does not hold a lock. Instead, it tries to modify the variable in memory with compare-and-swap, treating the compare and the swap as one atomic operation. If the attempt fails, a conflict has occurred, and there should be corresponding retry logic.

Here’s an example from life:

We go to the hospital for an appointment with a specialist; while waiting for our number and then entering the clinic, we leave the door behind us open, because we are not afraid that someone else will come in.
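
A minimal sketch of the compare-and-swap retry described above, using AtomicInteger from the JDK (the OptimisticCounter class name is made up for the example):

import java.util.concurrent.atomic.AtomicInteger;

// Optimistic update with compare-and-swap (CAS).
// If compareAndSet fails, another thread changed the value; we simply retry.
public class OptimisticCounter {
    private final AtomicInteger count = new AtomicInteger(0);

    public void increment() {
        int current;
        do {
            current = count.get();                               // read the current value
        } while (!count.compareAndSet(current, current + 1));    // retry on conflict
    }

    public int get() {
        return count.get();
    }
}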

Two types of lock usage scenarios

  • Pessimistic locking suits scenarios with many writes and few reads, that is, where conflicts are frequent and contention between threads is fierce. Using optimistic locking there would cause threads to retry repeatedly, which may also hurt performance.

  • Optimistic locking suits scenarios with few writes (and therefore few conflicts), because no lock needs to be acquired or released, which reduces locking overhead and improves throughput.

Exclusive locks and shared locks

Exclusive lock

An exclusive lock can be held by only one thread at a time. If one thread holds an exclusive lock on data, no other thread can hold any type of lock on that data. The thread that acquires the exclusive lock can both read and modify the data.

synchronized and the JDK implementation classes of Lock in java.util.concurrent (JUC) are exclusive locks.
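
For illustration, here is a minimal sketch of the same kind of exclusive access expressed with a JUC Lock (the LockedCounter class is hypothetical):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

// Exclusive access via an explicit Lock instead of synchronized.
public class LockedCounter {
    private final Lock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();        // blocks until the exclusive lock is acquired
        try {
            count++;
        } finally {
            lock.unlock();  // always release in finally
        }
    }
}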

Shared lock

A shared lock can be held by multiple threads at the same time. If one thread adds a shared lock to data, other threads can only add shared locks to it, not an exclusive lock. A thread that acquires a shared lock can only read the data, not modify it.

The read lock of ReentrantReadWriteLock in the JDK is a shared lock (its write lock is exclusive).

Mutex and read-write locks

Mutex

A mutex is a common implementation of an exclusive lock: it allows only one visitor to access a resource at a time, and that access is unique and exclusive.

Read-write lock

A read-write lock is an implementation of a shared lock. It manages a pair of locks: a read lock and a write lock.

The read lock can be held by multiple threads as long as no write lock is held, while the write lock is exclusive. The write lock has priority over the read lock, and a thread that acquires the read lock must be able to see all updates made before the previous release of the write lock.

Read-write locks allow more concurrency than a mutex: there can be only one writer thread at a time, but multiple threads can read concurrently.

The JDK defines an interface for reading and writing locks: ReadWriteLock
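
A minimal sketch of how ReadWriteLock is typically used, with ReentrantReadWriteLock guarding a simple map-based cache (the SimpleCache class is made up for the example):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Many readers may hold the read lock concurrently; the write lock is exclusive.
public class SimpleCache {
    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();
    private final Map<String, String> map = new HashMap<>();

    public String get(String key) {
        rwLock.readLock().lock();       // shared lock: readers do not block each other
        try {
            return map.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        rwLock.writeLock().lock();      // exclusive lock: blocks readers and writers
        try {
            map.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}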

Fair locks and unfair locks

Fair lock

Multiple threads acquire the lock in the order in which they requested it. This is like queuing for a vaccination: the first person to arrive gets the shot first, and latecomers join the end of the queue, which is fair.

Unfair lock

The order in which threads acquire the lock is not the order in which they requested it: a thread that asks later may get the lock before a thread that asked earlier. Under high concurrency, this can cause priority inversion or starvation (a thread that never acquires the lock).
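
For reference, ReentrantLock lets you choose between the two policies in its constructor; the demo class below is illustrative only:

import java.util.concurrent.locks.ReentrantLock;

public class LockFairnessDemo {
    // Fair lock: waiting threads acquire the lock roughly in FIFO order.
    private final ReentrantLock fairLock = new ReentrantLock(true);

    // Unfair lock (the default): a newly arriving thread may "barge in"
    // ahead of queued threads, which usually gives higher throughput.
    private final ReentrantLock unfairLock = new ReentrantLock();
}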

Reentrant lock

A reentrant lock, also known as a recursive lock, means that when a thread has acquired the lock in an outer method, it automatically acquires the same lock again in an inner method without blocking on itself. Reentrant locks prevent deadlocks to some extent.

In Java, ReentrantLock is, as its name says, a reentrant lock; synchronized is also reentrant.
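
A minimal sketch of reentrancy with ReentrantLock (the ReentrantDemo class and its methods are made up for the example); the inner call would deadlock if the lock were not reentrant:

import java.util.concurrent.locks.ReentrantLock;

public class ReentrantDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public void outer() {
        lock.lock();
        try {
            inner();        // re-acquires the same lock held by this thread
        } finally {
            lock.unlock();
        }
    }

    private void inner() {
        lock.lock();        // hold count goes from 1 to 2 for this thread
        try {
            // ... protected work ...
        } finally {
            lock.unlock();
        }
    }
}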

Spin lock

When a thread fails to acquire a lock, instead of being suspended immediately, it executes a busy loop; this is called spinning. The purpose of a spin lock is to reduce the chance that a thread is suspended, because suspending and waking threads are themselves costly operations.

If the lock is held by another thread for a long time and the current thread ends up suspended anyway after spinning, the busy loop just wastes system resources and degrades overall performance. Spin locks are therefore not suitable for concurrent scenarios where a lock is held for a long time.
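
Below is a minimal, illustrative spin lock built on an AtomicReference CAS loop (this SpinLock class is a sketch, not a production implementation; it is not reentrant):

import java.util.concurrent.atomic.AtomicReference;

// A thread that fails to acquire the lock busy-loops on CAS instead of being suspended.
public class SpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Spin until the CAS from null -> current succeeds.
        while (!owner.compareAndSet(null, current)) {
            // busy wait
        }
    }

    public void unlock() {
        Thread current = Thread.currentThread();
        // Only the owning thread can release the lock.
        owner.compareAndSet(current, null);
    }
}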

Segmented lock

Segmented locking is a lock design, not a specific lock.

Segment locking is designed to further refine the granularity of the lock. When the operation does not need to update the entire array, only one item in the array is locked.

ConcurrentHashMap in the JDK (prior to Java 8) implements segmented locking through Segment objects, so that different segments can be operated on concurrently.
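
Here is a hypothetical sketch of the idea, independent of the JDK's implementation: the data is split into segments, each guarded by its own lock, so threads touching different segments do not block each other.

// Illustrative only; the class name and segment count are made up.
public class SegmentedCounter {
    private static final int SEGMENTS = 16;
    private final long[] counts = new long[SEGMENTS];
    private final Object[] locks = new Object[SEGMENTS];

    public SegmentedCounter() {
        for (int i = 0; i < SEGMENTS; i++) {
            locks[i] = new Object();
        }
    }

    public void increment(int key) {
        int segment = (key & 0x7fffffff) % SEGMENTS;  // pick a segment by key
        synchronized (locks[segment]) {               // lock only that segment
            counts[segment]++;
        }
    }

    public long total() {
        long sum = 0;
        for (int i = 0; i < SEGMENTS; i++) {
            synchronized (locks[i]) {
                sum += counts[i];
            }
        }
        return sum;
    }
}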

Lock escalation

In order to improve performance and reduce the cost of acquiring and releasing locks, JDK 1.6 introduced four lock states: no lock, biased lock, lightweight lock, and heavyweight lock. As contention among threads increases, a lock can be upgraded through these states but not downgraded.

No lock

The no-lock state essentially relies on optimistic locking (CAS) rather than locking the resource.

Biased lock

A biased lock is biased toward the first thread that accesses it. If only one thread ever accesses the locked resource and there is no multi-threaded contention, that thread does not need to repeatedly acquire the lock; instead, the lock is biased toward it.

A biased lock is implemented through the Mark Word in the object header. If the lock is in the biased state, the JVM further checks whether the thread ID stored in the object header matches the current thread's ID; if it does, the thread enters the critical section directly.

Lightweight lock

When thread contention becomes more intense, a biased lock is upgraded to a lightweight lock. A lightweight lock assumes that although contention exists, its degree is ideally very low, so a thread waits for the owning thread to release the lock by spinning.

Heavyweight lock

If contention intensifies further, for example a thread spins more than a certain number of times, or one thread holds the lock, another is spinning, and then a third thread arrives (in short, contention keeps increasing), the lightweight lock inflates into a heavyweight lock. A heavyweight lock blocks every thread except the one that owns the lock.

A heavyweight lock is essentially a mutex: one thread holds the lock while all other threads block.

In Java, the synchronized keyword is implemented with this escalation process: no lock -> biased lock -> lightweight lock -> heavyweight lock.

Lock optimization technique

Lock coarsening

Lock coarsening reduces the number of synchronized blocks by extending the scope of a single synchronized block; essentially, it merges multiple lock and unlock requests into one.

In Java, StringBuffer is a thread-safe class, so its most commonly used method, append(), is synchronized. If our code appends strings over and over in a row, it locks and unlocks again and again, which is bad for performance, because repeatedly acquiring and releasing the lock may force the thread to keep switching between user mode and kernel mode. The JVM therefore coarsens the lock across the multiple append() calls, acquiring it before the first append and releasing it after the last one, so that all of the append operations run inside one large synchronized block. This reduces the number of lock/unlock operations and effectively improves execution efficiency.
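
A small illustration of the kind of code the JIT may coarsen (whether coarsening actually happens depends on the JVM and its settings; the demo class is made up):

public class CoarseningDemo {
    public String build() {
        StringBuffer sb = new StringBuffer();
        sb.append("a");   // lock -> append -> unlock
        sb.append("b");   // lock -> append -> unlock
        sb.append("c");   // lock -> append -> unlock
        // After coarsening, this behaves roughly like:
        // lock -> append("a"); append("b"); append("c") -> unlock
        return sb.toString();
    }
}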

Lock elimination

Lock elimination refers to the virtual machine's just-in-time (JIT) compiler removing locks on code that is synchronized in the source but that the compiler detects cannot possibly contend for shared data.
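
A small illustration of code where lock elimination can apply: the StringBuffer below never escapes the method, so no other thread can contend for its lock, and the JIT can remove the synchronization inside append() (whether it actually does depends on the JVM; the demo class is made up):

public class EliminationDemo {
    public String concat(String s1, String s2) {
        StringBuffer sb = new StringBuffer(); // thread-confined, never escapes
        sb.append(s1);                        // these internal locks can be elided
        sb.append(s2);
        return sb.toString();
    }
}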