1. Optimistic locks vs. pessimistic locks

Optimistic locking and pessimistic locking are two different mindsets for dealing with concurrent access.

Optimistic locking: assumes that no other thread will modify the data while you are using it, so no lock is taken. When updating, if the data has not been modified by anyone else, the update succeeds; if it has been modified and the update fails, some other action is taken, such as retrying or reporting an error.

Pessimistic locking: assumes that some other thread will modify the data while you are using it, so the data is locked up front.

Pessimistic locking is suitable for write-heavy scenarios. Common implementations: synchronized, Lock.

Optimistic locking is suitable for read-heavy scenarios. Common implementations: the atomic classes in the java.util.concurrent.atomic package.

So how does an optimistic lock stay correct without locking? Through compare-and-swap (CAS), a lock-free algorithm, as sketched below.
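A minimal sketch of the two mindsets in Java (the class and method names here are invented for illustration): the pessimistic counter locks on every update, while the optimistic counter updates with CAS and retries if another thread changed the value in between.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CounterComparison {

    // Pessimistic: every increment takes the monitor lock, even when there is no contention.
    static class PessimisticCounter {
        private int value;
        public synchronized void increment() { value++; }
        public synchronized int get() { return value; }
    }

    // Optimistic: no lock is taken; read the current value, then CAS it to value + 1.
    // If another thread got there first, the CAS fails and we simply retry.
    static class OptimisticCounter {
        private final AtomicInteger value = new AtomicInteger();
        public void increment() {
            int current;
            do {
                current = value.get();
            } while (!value.compareAndSet(current, current + 1));
        }
        public int get() { return value.get(); }
    }
}
```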

2. Spinlocks vs. adaptive spinlocks

Spin lock: blocking and waking up a thread requires the operating system to switch CPU state, which is expensive. Normally, a thread that fails to acquire a lock blocks. With spinning we delay blocking: if the lock is released during that delay, the current thread acquires it and avoids the performance penalty of blocking. That busy-waiting delay is the spin.

But spinning has drawbacks: the spin itself consumes processor time. If the lock becomes available within the spin window, time is saved; if it is not released within the spin window, the spin fails, the thread blocks anyway, and the CPU time spent spinning is wasted. So the spin time has to be limited.

How spinning works: CAS (optimistic locking is implemented the same way).
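As a rough illustration of "spin = keep retrying CAS", here is a toy spin lock (the class name is invented, and it is not production code, since it never caps the spin or falls back to blocking):

```java
import java.util.concurrent.atomic.AtomicReference;

// A bare-bones spin lock built on CAS: the lock is "held" by whichever thread
// managed to CAS itself into the owner field; everyone else loops (spins) until
// the owner clears the field again.
public class SimpleSpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Busy-wait until the CAS from null -> current succeeds.
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait(); // hint to the CPU that we are spinning (Java 9+)
        }
    }

    public void unlock() {
        Thread current = Thread.currentThread();
        // Only the owning thread can release the lock.
        owner.compareAndSet(current, null);
    }
}
```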

Adaptive spin lock: the spin time is not fixed; it is determined by the previous spin time on the same lock and the state of the lock's owner. If spinning recently succeeded in acquiring a given lock, the JVM assumes the current thread is likely to succeed again and allows it to spin. If the spin success rate on a lock is low, the JVM skips spinning and blocks immediately, to avoid wasting resources on spinning.

3. synchronized: no lock vs. biased lock vs. lightweight lock vs. heavyweight lock

These four "locks" are really lock states, introduced specifically to optimize synchronized. From lowest to highest: no lock, biased lock, lightweight lock, heavyweight lock.

No lock: multiple threads can access the resource at the same time, but only one thread's modification succeeds at a time. A thread whose modification fails simply retries until it succeeds. This is suitable when a failed modification can simply be retried.

How it works: spinning (CAS retry).
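A hedged sketch of the retry-until-success pattern (the class and method names are invented): a lock-free "keep the maximum seen so far" update that loops on compareAndSet instead of taking a lock.

```java
import java.util.concurrent.atomic.AtomicLong;

public class MaxTracker {
    private final AtomicLong max = new AtomicLong(Long.MIN_VALUE);

    // Lock-free update: read the current maximum and, if our candidate is larger,
    // try to CAS it in. If another thread changed the value in the meantime, the
    // CAS fails and we re-read and retry until the update sticks or is unnecessary.
    public void trackMax(long candidate) {
        long current;
        do {
            current = max.get();
            if (candidate <= current) {
                return; // nothing to do, a larger value is already recorded
            }
        } while (!max.compareAndSet(current, candidate));
    }

    public long get() {
        return max.get();
    }
}
```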

Biased lock: once a thread has acquired the lock, that same thread can re-acquire it automatically afterwards, saving the cost of acquisition.

Application scenario: in a multi-threaded environment where one thread is in practice the only one accessing the synchronized resource, that thread keeps obtaining the biased lock cheaply.

The object header contains a threadId field, which is empty by default. When the lock is acquired for the first time, the acquiring thread's ID is written into the lock object's header and the bias flag is set (the lock flag bits are 01). On subsequent acquisitions, the current thread's ID is compared with the threadId in the object header; if they match, the thread acquires the lock automatically.

A thread never releases a biased lock voluntarily; only when another thread competes for it does the holder give it up (bias revocation).

Bias revocation: the JVM waits for a global safepoint (no bytecode is executing at that moment), suspends the thread holding the bias, checks whether the lock object is still locked, and finally inflates the lock to a lightweight lock.

How it works: the threadId in the object header is updated via CAS.

Lightweight lock: while a lock is biased, it is inflated to a lightweight lock as soon as another thread competes for it. Application scenario: multiple threads access the synchronized resource alternately, so there is no real contention at the same instant.

Lightweight lock: when a thread acquires the lock, it creates a Lock Record in its current stack frame, copies the lock object's Mark Word into the Lock Record via CAS, and updates the Mark Word to a pointer to that Lock Record; the owner field in the Lock Record is also pointed at the lock object's Mark Word. The next time a thread requests the lock, it first checks whether the lock object's Mark Word points into the current thread's stack frame: if so, it already holds the lock; if not, it spin-waits. If two threads are already involved, one holding the lock and one spinning, and a third thread arrives, the lightweight lock inflates to a heavyweight lock.

How it works: the Mark Word is updated via CAS (more often than a biased lock updates the threadId), plus spinning.

Heavyweight lock: threads waiting for the lock are blocked.

How it works: the operating system's underlying mutex is used.
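If you want to observe these lock states yourself, one option is the OpenJDK JOL tool (this sketch assumes the org.openjdk.jol:jol-core dependency is on the classpath): the object header printed by ClassLayout changes as synchronized escalates the lock. The exact output depends on the JDK version and flags such as -XX:+UseBiasedLocking, since biased locking has been deprecated and removed in recent JDKs.

```java
import org.openjdk.jol.info.ClassLayout;

public class LockStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();

        // Fresh object: the header shows the unlocked (or biasable) state.
        System.out.println(ClassLayout.parseInstance(lock).toPrintable());

        // A single thread entering the monitor: typically a biased or lightweight (thin) lock.
        synchronized (lock) {
            System.out.println(ClassLayout.parseInstance(lock).toPrintable());
        }

        // A second thread contending for the same monitor forces inflation to a
        // heavyweight lock (the header then points to a monitor object).
        Thread contender = new Thread(() -> {
            synchronized (lock) {
                System.out.println(ClassLayout.parseInstance(lock).toPrintable());
            }
        });
        synchronized (lock) {
            contender.start();
            Thread.sleep(100); // hold the lock long enough for the other thread to contend
        }
        contender.join();
    }
}
```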

4. Fair locks vs. unfair locks

Fair lock: threads join the wait queue in the order in which they requested the lock, and when the lock is released it is granted in that same order.

Unfair lock: a thread tries to grab the lock directly at the moment it asks for it, so a late-arriving thread may acquire the lock before threads that requested it earlier.

Fair lock: no waiting thread starves (i.e., is never able to get the lock). However, throughput is lower: every thread except the one at the head of the queue is blocked, and waking threads in strict order is costly.

Unfair lock: higher throughput, but threads may starve.

ReentrantLock has two inner classes (FairSync and NonfairSync) that implement fair and unfair locking, respectively.
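Both policies come from the same class; the constructor argument selects which inner Sync implementation is used. A small sketch (the class and method names here are just for illustration):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    // Passing true selects the fair policy; the no-arg constructor (or false) is unfair.
    private static final ReentrantLock fairLock = new ReentrantLock(true);
    private static final ReentrantLock unfairLock = new ReentrantLock();

    static void doWork(ReentrantLock lock) {
        lock.lock();
        try {
            System.out.println(Thread.currentThread().getName() + " got the lock");
        } finally {
            lock.unlock(); // always release in finally
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            new Thread(() -> doWork(fairLock), "fair-" + i).start();
            new Thread(() -> doWork(unfairLock), "unfair-" + i).start();
        }
    }
}
```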

5. Reentrant locks vs. non-reentrant locks

Reentrant lock: also known as a recursive lock. If a thread has already acquired the lock in an outer method, it does not need to re-acquire it when it calls an inner method, provided both use the same lock object. This avoids deadlocking against yourself.

ReentrantLock and synchronized are both reentrant locks.
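A quick illustration (the outer/inner method names are invented): with a reentrant lock, the same thread can lock again inside a nested call and the hold count simply increases; a non-reentrant lock would deadlock on the second lock().

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public void outer() {
        lock.lock();
        try {
            // The same thread takes the same lock again without blocking;
            // the hold count goes from 1 to 2 and back down as we unlock.
            inner();
        } finally {
            lock.unlock();
        }
    }

    public void inner() {
        lock.lock();
        try {
            System.out.println("hold count inside inner: " + lock.getHoldCount());
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        new ReentrancyDemo().outer(); // prints "hold count inside inner: 2"
    }
}
```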

Non-reentrant lock: the opposite of a reentrant lock; a thread that already holds the lock and tries to acquire it again deadlocks on itself. An example is a NonReentrantLock implementation.

6. Exclusive locks vs. shared locks

Exclusive lock: also called a mutex; it can be held by only one thread at a time. synchronized and Lock are exclusive locks.

Shared lock: the lock can be held by multiple threads at the same time. A thread that acquires a shared lock can only read the data, not modify it.

The ReentrantReadWriteLock source code is worth studying: its read lock is a shared lock, while its write lock is an exclusive lock.
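For example (a minimal sketch, with the ReadWriteDemo class invented for illustration): the read lock lets many readers proceed concurrently, while the write lock excludes all other readers and writers.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteDemo {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int value;

    // Many threads may hold the read (shared) lock at the same time.
    public int read() {
        rwLock.readLock().lock();
        try {
            return value;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    // The write (exclusive) lock excludes all readers and other writers.
    public void write(int newValue) {
        rwLock.writeLock().lock();
        try {
            value = newValue;
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}
```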

Conclusion

These different "locks" are simply different angles on the same problem. Most of the locks we actually use belong to several categories at once: a ReentrantLock, for instance, is both a reentrant lock and an exclusive (mutex) lock, and it can be configured as either a fair or an unfair lock through its internal implementations. Within a single category, though, a lock can only be one thing: a ReentrantLock cannot be both optimistic and pessimistic.
