
1. Fair locks vs. unfair locks

Fair lock: multiple threads acquire the lock in the order in which they requested it. A thread joins the wait queue directly, and the first thread in the queue gets the lock. It's like standing in line for dinner: first come, first served.

Unfair lock: multiple threads do not acquire the lock in the order in which they requested it; a thread that asked later may get the lock before one that asked earlier.

Comparison

A fair lock is, as the name says, fair: in a concurrent environment, each thread acquiring the lock first checks the wait queue the lock maintains. If the queue is empty, or the current thread is first in the queue, it takes the lock; otherwise it joins the wait queue and later takes its turn in FIFO order.

An unfair lock is cruder: it tries to grab the lock directly first, and only if that attempt fails does it fall back to something like the fair-lock approach of queuing.

The advantage of a fair lock is that threads waiting for the lock do not starve.

The advantage of an unfair lock is higher throughput than a fair lock. However, under high concurrency it can cause priority inversion or starvation.

A look at the source

When creating a ReentrantLock from the java.util.concurrent package, you can pass a boolean to the constructor to get a fair or an unfair lock; the default is an unfair lock.

Looking at ReentrantLock, you can see an inner class Sync that extends AbstractQueuedSynchronizer; most of the lock and unlock operations are actually implemented in Sync. It has two subclasses, FairSync and NonfairSync.

public class ReentrantLock implements Lock, java.io.Serializable {
    private static final long serialVersionUID = 7373984872572414699L;
    private final Sync sync;

    public ReentrantLock() {
        sync = new NonfairSync();
    }

    public ReentrantLock(boolean fair) {
        sync = fair ? new FairSync() : new NonfairSync();
    }

    abstract static class Sync extends AbstractQueuedSynchronizer {
        private static final long serialVersionUID = -5179523762034025860L;

        abstract void lock();
        // ...
    }

    static final class NonfairSync extends Sync {
        // ...
    }

    static final class FairSync extends Sync {
        // ...
    }
}

Comparing the two Sync subclasses shows the difference between fair and unfair locks:

  • Unfair lock: if the lock is free at that moment, the thread grabs it directly and returns; otherwise it falls back to the fair-lock path, entering the blocking queue and waiting to be woken up
  • Fair lock: tryAcquire() adds an extra constraint on acquiring the synchronization state, !hasQueuedPredecessors(), which checks whether the current thread is at the head of the synchronization queue (sketched below)
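
For reference, here is how that difference looks in code, abridged from the JDK 8 source of ReentrantLock (the reentrant-acquisition and failure paths are trimmed for brevity):

static final class NonfairSync extends Sync {
    final void lock() {
        // Barge in: try to grab the lock with one CAS before queuing
        if (compareAndSetState(0, 1))
            setExclusiveOwnerThread(Thread.currentThread());
        else
            acquire(1); // fall back to AQS queuing
    }
    // ...
}

static final class FairSync extends Sync {
    final void lock() {
        acquire(1); // always goes through tryAcquire below
    }

    protected final boolean tryAcquire(int acquires) {
        final Thread current = Thread.currentThread();
        if (getState() == 0) {
            // The extra fairness check: give way if anyone is queued ahead
            if (!hasQueuedPredecessors() && compareAndSetState(0, acquires)) {
                setExclusiveOwnerThread(current);
                return true;
            }
        }
        // reentrant path omitted
        return false;
    }
}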

The synchronized keyword is also an unfair lock.

2. Optimistic locks vs. pessimistic locks

Optimistic locking and pessimistic locking are broad concepts that reflect different attitudes toward thread synchronization. Both Java and databases put these concepts into practice.

  • Pessimistic locking takes a pessimistic view: it assumes that whenever it uses data, other threads are bound to modify it. It therefore locks the resource or data before using it, so that any other thread requesting the resource blocks until the pessimistic lock releases it. Traditional relational databases use this locking mechanism heavily, such as row locks, table locks, read locks, and write locks, all of which are acquired before the operation. The implementation of pessimistic locking often relies on the locking facilities of the database itself.

    In Java, the synchronized keyword and the implementation classes of Lock are pessimistic locks.

  • Optimistic locks, on the other hand, assume that no other thread will modify the data while they use it, so they do not lock; they merely check, before writing, whether another thread has updated the data in the meantime. If not, the current thread writes its modified data successfully; if the data has been updated, different actions are taken depending on the implementation (such as reporting an error or automatically retrying).

    Generally speaking, optimistic locking has two implementation schemes: the version-number mechanism and CAS.

    In Java, the increment operations of the atomic variable classes in the java.util.concurrent.atomic package are implemented with CAS, a form of optimistic locking.
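
As a minimal sketch of this CAS-based optimistic locking, in the spirit of AtomicInteger.getAndIncrement() (the CasCounter class name is just for illustration):

import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        int current;
        do {
            current = value.get();      // optimistic read, no lock taken
        } while (!value.compareAndSet(current, current + 1)); // retry if another thread won
        return current + 1;
    }
}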

Comparison

Pessimistic lock: suitable for write-heavy scenarios. With many read operations, taking the lock on every read adds lock overhead and lowers system throughput.

Optimistic lock: suitable for read-heavy scenarios. With many write operations, data conflicts become more likely; to keep data consistent, the application layer must keep re-reading and retrying, adding a large number of extra queries and lowering system throughput.

Pessimistic locking fits scenarios that need strong consistency, but it is less efficient, especially for concurrent reads. Optimistic locking fits read-mostly scenarios with few concurrent conflicts.

Common problems with optimistic locking:

  • The ABA problem (see the AtomicStampedReference sketch below)
  • Long spin times mean high CPU overhead
  • Atomicity is guaranteed for only one shared variable
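
As an illustration of the ABA problem and the stock JDK fix, here is a small sketch using AtomicStampedReference, which pairs the value with a version stamp (the class and the values are made up for the demo):

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        // Pair the value with a version stamp; a CAS succeeds only if BOTH match.
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);

        int[] stampHolder = new int[1];
        Integer oldValue = ref.get(stampHolder);   // read value and stamp together
        int oldStamp = stampHolder[0];

        // Simulate another thread changing 100 -> 101 -> 100: the value looks
        // unchanged, but the stamp has advanced from 0 to 2.
        ref.compareAndSet(100, 101, 0, 1);
        ref.compareAndSet(101, 100, 1, 2);

        // A plain value comparison would succeed here; the stamp check catches ABA.
        boolean success = ref.compareAndSet(oldValue, 200, oldStamp, oldStamp + 1);
        System.out.println("CAS succeeded? " + success);   // prints false
    }
}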

3. Reentrant lock (recursive lock)

A reentrant lock, also known as a recursive lock, means that if a thread acquires the lock in an outer method, an inner method of the same thread automatically acquires the lock too (provided the lock object is the same object or class); it does not block just because the lock was acquired earlier and has not yet been released.

That is, a thread can enter any block of code synchronized on a lock it already owns.

The most important function of a reentrant lock is to avoid deadlocks to a certain extent. ReentrantLock and synchronized are both typical reentrant locks.

public class Wget {
    public synchronized void doSomething() {
        System.out.println("Method 1 executes...");
        doOthers();
    }

    public synchronized void doOthers() {
        System.out.println("Method 2 executes...");
    }
}

In the code above, both methods are marked with the built-in synchronized, and doSomething() calls doOthers(). Because the built-in lock is reentrant, the same thread can acquire the current object's lock again when calling doOthers() and enter doOthers() directly.

With a non-reentrant lock, the current thread would have to release the lock on the current object, acquired when entering doSomething(), before it could call doOthers(). But that lock is held by the current thread itself and cannot be released yet, so a deadlock occurs.

Spin locks

A spin lock is one in which the thread attempting to acquire the lock does not block immediately, but instead tries to acquire it in a loop. The advantage is reduced thread context-switch overhead; the disadvantage is that the spinning loop consumes CPU.

import java.util.concurrent.atomic.AtomicReference;

public class SpinLockDemo {

    AtomicReference<Thread> lock = new AtomicReference<>();

    public void myLock() {
        Thread thread = Thread.currentThread();
        // If the lock is not free, spin
        while (!lock.compareAndSet(null, thread)) {

        }
    }

    public void myUnlock() {
        Thread thread = Thread.currentThread();
        // When unlocking, set the lock back to null
        lock.compareAndSet(thread, null);
    }
}

The advantages and disadvantages

Disadvantages:

  1. If a thread holds the lock for too long, the other threads waiting for it keep looping and consuming CPU, which can drive CPU usage very high.
  2. The Java implementation above is not fair: the thread that has waited longest is not guaranteed to get the lock first. Unfair locks can lead to thread starvation.

Advantages:

  1. A spin lock does not trigger a thread state switch; the thread stays in user mode and remains active. It never puts the thread into the blocked state, which avoids unnecessary context switches and speeds up execution.
  2. When a non-spin lock cannot be acquired, the thread blocks and enters kernel mode; when the lock becomes available, it must be woken out of kernel mode, requiring a thread context switch. (After a thread blocks, it enters the kernel (Linux) scheduling state, causing the system to switch back and forth between user mode and kernel mode, which seriously hurts lock performance.)

Reentrant spinlocks and non-reentrant spinlocks

The spin lock above is not reentrant: once a thread has acquired the lock, it cannot acquire it a second time before releasing it. Since the CAS condition is never satisfied, the second acquisition spins forever in the while loop; with a reentrant lock, the second acquisition would also succeed.

Moreover, even if the second acquisition did succeed, the lock would be freed as soon as it is released the first time, leaving the outer critical section unprotected, which is not reasonable.

To make such a lock reentrant, we need to introduce a counter that records how many times the owning thread has acquired the lock, as sketched below.
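
A minimal sketch of such a reentrant spin lock, extending the SpinLockDemo idea above with an ownership counter (the counter is touched only by the owning thread, so a plain int is enough):

import java.util.concurrent.atomic.AtomicReference;

public class ReentrantSpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();
    private int count = 0;   // reentrancy count, modified only by the owning thread

    public void lock() {
        Thread current = Thread.currentThread();
        if (owner.get() == current) {   // already held by us: just bump the counter
            count++;
            return;
        }
        while (!owner.compareAndSet(null, current)) {
            // spin until the lock is free
        }
    }

    public void unlock() {
        Thread current = Thread.currentThread();
        if (owner.get() != current) {
            return;                     // not the owner; ignore (or throw)
        }
        if (count > 0) {
            count--;                    // release one level of reentrancy
        } else {
            owner.set(null);            // outermost unlock actually frees the lock
        }
    }
}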

Spin locks vs. mutexes

  • Both spin locks and mutexes are designed to protect shared resources.
  • At any moment, either kind of lock can have at most one holder.
  • A thread that fails to acquire a mutex goes to sleep until the lock is free; a thread that fails to acquire a spin lock does not sleep, but loops, waiting for the lock to be released.

Conclusion:

  • Spin lock: when a thread tries to acquire the lock and it is held by another thread, the current thread loops until it gets the lock.
  • A thread's state does not change while it waits on a spin lock; it stays in user mode and remains active.
  • A spin lock held for too long makes the other waiting threads burn CPU.
  • A spin lock by itself guarantees neither fairness nor reentrancy.
  • On top of a basic spin lock, fair and reentrant variants can be built.

4. Exclusive lock (mutex/write lock), shared lock (read lock)

Exclusive lock: a lock that can be held by only one thread at a time. Both ReentrantLock and synchronized are exclusive locks.

Shared lock: a lock that can be held by multiple threads at once.

For ReentrantReadWriteLock, the read lock is shared and the write lock is exclusive.

The shared nature of the read lock makes concurrent reads very efficient; read/write, write/read, and write/write operations are mutually exclusive.
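
A small sketch of those rules with ReentrantReadWriteLock (the cache class here is just an illustration):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwCache {
    private final Map<String, Object> map = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    // Many threads may read concurrently under the shared read lock.
    public Object get(String key) {
        rwLock.readLock().lock();
        try {
            return map.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    // Writers take the exclusive write lock, blocking readers and other writers.
    public void put(String key, Object value) {
        rwLock.writeLock().lock();
        try {
            map.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}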

5. Unlocked vs. biased vs. lightweight vs. heavyweight locks

These four refer to lock states, specifically the states of synchronized. Before introducing the four lock states, there is some background to cover.

How does synchronized achieve thread synchronization in the first place?

Before we can answer this question, we need to understand two important concepts: “Java object headers” and “Monitor.”

Unlocked

Unlocked means no resource is locked: all threads can access and modify the same resource, but only one thread can modify it successfully at a time.

The lock-free approach modifies data inside a loop: a thread keeps trying to modify the shared resource. If there is no conflict, the modification succeeds and the loop exits; otherwise it tries again. If multiple threads modify the same value, one succeeds and the others retry until they succeed. The CAS principle and applications described above are an implementation of lock-free synchronization. Lock-free cannot entirely replace locking, but in some cases its performance is very high.

Biased locking

Biased locking means that if a piece of synchronized code is always accessed by the same thread, that thread acquires the lock automatically, lowering the cost of acquisition.

In most cases, a lock is acquired repeatedly by the same thread with no multi-thread contention, hence biased locking: its goal is to improve performance when only one thread executes the synchronized block.

When a thread accesses a synchronized block and acquires the lock, the thread ID of the bias is stored in the Mark Word. Afterwards, when the thread enters or exits the block, it does not lock and unlock with CAS; it simply checks whether the Mark Word holds a biased lock pointing to the current thread. Biased locks were introduced to minimize the unnecessary lightweight-lock path when there is no contention: acquiring and releasing a lightweight lock requires multiple CAS atomic instructions, while a biased lock needs only one CAS, when installing the thread ID.

A thread holding a biased lock never releases it proactively; it releases the lock only when another thread tries to compete for it. Revoking a biased lock requires waiting for a global safepoint (a point at which no bytecode is executing): the thread holding the biased lock is suspended first, and the JVM checks whether the lock object is still locked. After revocation, the lock reverts to the unlocked state (flag bits "01") or the lightweight-lock state (flag bits "00").

Biased locking is enabled by default in JDK 6 and later JVMs. It can be turned off with the JVM argument -XX:-UseBiasedLocking; once disabled, the program goes straight to the lightweight-lock state by default.

Lightweight lock

This means that when a biased lock is accessed by another thread, it is upgraded to a lightweight lock. The other threads try to acquire the lock by spinning instead of blocking, which improves performance.

When code enters the synchronized block, if the synchronization object is in the unlocked state (lock flag bits "01", biased-lock bit "0"), the virtual machine first creates a space called a Lock Record in the current thread's stack frame, used to store a copy of the lock object's current Mark Word, and then copies the Mark Word from the object header into the lock record.

After the copy is successful, the VM attempts to update the Mark Word of the object to a pointer to the Lock Record using the CAS operation, and sets the owner pointer in the Lock Record to the Mark Word of the object.

If the update succeeds, the thread owns the lock on the object, and the object’s Mark Word lock bit is set to 00, indicating that the object is in a lightweight locked state.

If the CAS update fails, the virtual machine first checks whether the object's Mark Word points to the current thread's stack frame. If it does, the current thread already owns the lock and can enter the synchronized block directly; otherwise, multiple threads are competing for the lock.

If there is currently only one waiting thread, it waits by spinning. But when the spinning exceeds a certain number of attempts, or when one thread holds the lock, another is spinning, and a third arrives to compete, the lightweight lock is upgraded to a heavyweight lock.

Heavyweight lock

When the lock is upgraded to a heavyweight lock, the lock flag's status value becomes "10", the Mark Word stores a pointer to the heavyweight lock (the monitor), and all threads waiting for the lock enter the blocked state.

To sum up: a biased lock handles locking by comparing the Mark Word, avoiding even the CAS operation; a lightweight lock handles contention with CAS and spinning, avoiding the thread blocking and waking that hurt performance; a heavyweight lock blocks all threads except the one holding the lock.