Art is long, life is short.

Summary:

Java has many kinds of locks, and they can be classified along several dimensions. Below is a classification of the common locks in Java, with a brief overview of each:

  • Whether a thread locks the resource before using it: pessimistic locks vs. optimistic locks
  • Whether a thread blocks when it fails to lock the resource: a lock that busy-waits instead of blocking is a spin lock
  • How synchronized handles contention from multiple threads, i.e. its lock states: lock-free, biased, lightweight, and heavyweight
  • Whether the order of lock acquisition is fair: fair locks vs. unfair locks
  • Whether the same thread can acquire the same lock again: reentrant locks vs. non-reentrant locks
  • Whether multiple threads can hold the same lock at once: shared locks vs. exclusive locks

Optimistic locks and pessimistic locks

  • An optimistic lock embodies an optimistic assumption: reads far outnumber writes, so concurrent writes are unlikely. A thread therefore fetches data without locking, assuming nobody else will change it, and only checks for conflicts when it writes back: it first reads the current version number, then performs the update only if the version number is still the same (compare, then update), and repeats the read-compare-write cycle if the check fails. Optimistic locking in Java is basically implemented with CAS (compare-and-swap), an atomic update operation: CAS compares the current value with the expected value and performs the update only if they are equal, otherwise it fails.
  • A pessimistic lock embodies the opposite assumption: concurrent writes are likely, so every time a thread goes to fetch the data it assumes someone else may modify it and therefore locks it for every read and write; any other thread that tries to read or write the data blocks until the lock is released. synchronized is the pessimistic lock in Java. In the AQS framework, a CAS-based optimistic attempt is tried first to acquire the lock, and only if that fails does it fall back to pessimistic (blocking) acquisition, as in ReentrantLock. A sketch of version-based optimistic updating follows this list.
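
As a concrete illustration, here is a minimal sketch (class, field, and method names are just for the example) of version-number based optimistic updating using java.util.concurrent.atomic.AtomicStampedReference, where the stamp plays the role of the version number described above:

import java.util.concurrent.atomic.AtomicStampedReference;

public class OptimisticAccount {
    // The stamp acts as the version number; it is bumped on every successful write.
    private final AtomicStampedReference<Integer> balance =
            new AtomicStampedReference<>(100, 0);

    public void deposit(int amount) {
        while (true) {
            int[] stampHolder = new int[1];
            Integer current = balance.get(stampHolder);   // read value and version together
            int version = stampHolder[0];
            Integer next = current + amount;
            // Write back only if the version is unchanged; otherwise read-compare-write again.
            if (balance.compareAndSet(current, next, version, version + 1)) {
                return;
            }
        }
    }
}

The pessimistic equivalent would simply declare deposit as a synchronized method, blocking every other reader and writer while it runs.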

Fair locks and unfair locks

  • A fair lock means the lock allocation mechanism is fair: the thread that requested the lock first is granted it first. ReentrantLock lets you create a fair lock through a constructor parameter that decides whether the lock is fair.
  • An unfair lock lets the JVM allocate the lock opportunistically (randomly, or to a nearby thread) regardless of waiting order. ReentrantLock's constructor parameter defaults to an unfair lock. In practice unfair locks perform considerably better than fair locks, so unless the program has a specific fairness requirement, the unfair allocation mechanism is the usual choice. A usage sketch follows this list.
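
A minimal sketch (field names are illustrative) of how fairness is chosen when constructing a ReentrantLock:

import java.util.concurrent.locks.ReentrantLock;

public class FairnessExample {
    // Default constructor: an unfair lock.
    private final ReentrantLock unfairLock = new ReentrantLock();

    // Passing true: a fair lock that grants the lock in FIFO order.
    private final ReentrantLock fairLock = new ReentrantLock(true);
}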

Reentrant lock (recursive lock)

  • This section is about reentrant locks in the broad sense, not just ReentrantLock in Java. A reentrant lock, also known as a recursive lock, means that after an outer method of a thread has acquired the lock, an inner or recursive method of the same thread can acquire the same lock again without being blocked by it. Both ReentrantLock and synchronized are reentrant locks in Java. A small sketch follows.
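
A minimal sketch of reentrancy (class and method names are illustrative): the outer synchronized method calls the inner synchronized method on the same object, and the thread re-enters the monitor it already holds instead of deadlocking on itself:

public class ReentrancyExample {

    public synchronized void outer() {
        // The current thread already holds this object's monitor here...
        inner();
    }

    public synchronized void inner() {
        // ...so it can enter this synchronized method again without blocking.
        System.out.println("re-entered by " + Thread.currentThread().getName());
    }

    public static void main(String[] args) {
        new ReentrancyExample().outer();
    }
}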

Shared and exclusive locks

  • Shared lock: if transaction T holds a shared lock on data A, other transactions can only acquire a shared lock on A, not an exclusive lock. A transaction holding a shared lock may only read the data, not modify it.

  • Exclusive lock: if transaction T holds an exclusive lock on data A, no other transaction can acquire any lock on A. The transaction holding the exclusive lock can both read and modify the data.

Read-write lock

  • A read-write lock allows a resource to be accessed by multiple reader threads at the same time, or by a single writer thread, but reading and writing cannot happen simultaneously. The read-write lock in Java is implemented by ReentrantReadWriteLock. A usage sketch follows.
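
A minimal sketch of a read-write-locked cache (class and field names are illustrative): many threads may hold the read lock at once, while the write lock is exclusive:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteCache {
    private final Map<String, String> data = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    public String get(String key) {
        rwLock.readLock().lock();       // shared: many readers may hold this at once
        try {
            return data.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void put(String key, String value) {
        rwLock.writeLock().lock();      // exclusive: blocks all readers and writers
        try {
            data.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}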

The mutex

  • A mutex (mutual exclusion lock) can be held by at most one thread at a time. In the JDK, synchronized and the JUC Lock implementations are mutexes.

Spin locks

  • The idea behind a spin lock is simple: if the thread holding the lock will release it within a very short time, the threads waiting for the lock do not need to switch between kernel mode and user mode to enter a blocked, suspended state; they only have to wait (spin) for a moment, and as soon as the holder releases the lock they acquire it, avoiding the cost of switching between user and kernel threads. Of course a thread cannot spin doing useless work indefinitely, so a maximum spin wait time has to be set. If the thread holding the lock runs past the maximum spin wait time without releasing it, the contending threads still cannot acquire the lock within that time, so they stop spinning and enter the blocked state. A minimal spin-lock sketch follows.
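
A minimal, illustrative spin lock built on CAS (a sketch of the idea rather than production code; a real spin lock would also bound the spin time as described above):

import java.util.concurrent.atomic.AtomicReference;

public class SimpleSpinLock {
    // Holds the thread that currently owns the lock, or null if the lock is free.
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Busy-wait (spin) until the CAS from null to the current thread succeeds.
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait(); // JDK 9+: hint to the JVM/CPU that this is a spin loop
        }
    }

    public void unlock() {
        Thread current = Thread.currentThread();
        // Only the owning thread releases the lock.
        owner.compareAndSet(current, null);
    }
}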

Advantages and disadvantages of spin locks

  • Spin locks reduce thread blocking as much as possible. For code blocks where lock contention is light and the lock is held only very briefly, this is a big performance win, because the cost of spinning is lower than the cost of blocking, suspending, and waking the thread, which forces two context switches. But if contention on the lock is fierce, or the thread holding the lock occupies it for a long time inside the synchronized block, a spin lock is not suitable: the spinning threads keep the CPU busy doing useless work before they get the lock, and with many threads contending for one lock and acquisition taking a long time, the cost of spinning exceeds the cost of blocking and suspending, while other threads that need the CPU cannot get it, wasting CPU. In that case the spin lock should be turned off.

Spin-lock time threshold (adaptive spin locks were introduced in 1.6)

  • The point of a spin lock is to keep holding CPU resources until the lock is acquired, without giving them up. But how long should the spin last? If the spin runs too long, a large number of spinning threads will occupy CPU resources and hurt overall system performance, so the spin duration matters a great deal!
  • JDK 1.6 introduced adaptive spinning. Adaptive means the spin time on a given lock is not fixed but is decided by the previous spin time on the same lock and the state of the lock's owner. The time of one thread context switch is generally considered the ideal spin duration, and the JVM also adapts to the current CPU load: if the average load is below the number of CPUs, keep spinning; if more than (CPUs/2) threads are already spinning, later threads block directly; if a spinning thread sees the lock's owner change, it delays its spin (spin count) or blocks; if the CPU is in power-saving mode, it stops spinning. The worst-case spin time is the CPU-to-CPU memory latency (the gap between CPU A storing a value and CPU B reading it directly). Differences in thread priority are appropriately ignored while spinning.

Enabling spin locks:

  • -XX:+UseSpinning enables spinning (on by default in JDK 1.6);

  • -XX:PreBlockSpin=10 sets the number of spins before blocking;

  • Since JDK 1.7 these parameters have been removed and spinning is controlled by the JVM itself.

Lock-free / biased lock / lightweight lock / heavyweight lock

Lock-free

  • Lock-free means no resource is locked: all threads can access and try to modify the same resource, but only one thread's modification succeeds at a time.

    The hallmark of lock-free code is that the modification happens inside a loop: the thread keeps trying to modify the shared resource. If there is no conflict, the modification succeeds and the loop exits; otherwise it keeps retrying. When multiple threads modify the same value, one succeeds and the others retry until they succeed too. The CAS principle and usage described above is exactly this lock-free technique. Lock-free cannot completely replace locking, but in some scenarios its performance is very high. A sketch of the retry loop follows.
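
A minimal sketch of the lock-free retry loop (class and method names are illustrative); this mirrors the classic CAS retry pattern used by the java.util.concurrent.atomic classes:

import java.util.concurrent.atomic.AtomicInteger;

public class LockFreeCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Read, compute, try to swap, and retry on conflict.
    public int increment() {
        int current;
        int next;
        do {
            current = value.get();   // snapshot the current value
            next = current + 1;      // compute the new value
        } while (!value.compareAndSet(current, next)); // another thread won the race: retry
        return next;
    }
}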

Biased locking:

  • Biased locking is a lock optimization introduced in JDK 6. In most cases a lock is not contended by multiple threads and is always acquired repeatedly by the same thread, so biased locking was introduced to make those repeated acquisitions cheaper. A biased lock is biased toward the first thread that acquires it; if the lock is never acquired by another thread during subsequent execution, the thread holding the biased lock never needs to synchronize again.

Lightweight lock

  • A biased lock is promptly upgraded to a lightweight lock once it becomes apparent that another thread is trying to acquire the lock.

Heavyweight lock

  • This refers to the original synchronized implementation: with a heavyweight lock, other threads that try to acquire the lock block, and they are woken up only when the thread holding the lock releases it.

Synchronized synchronization lock

  • Synchronized treats any non-null object as a lock. It is an exclusive pessimistic lock and a reentrant lock.

Scope of Synchronized:

  1. When applied to an instance method, it locks the object instance (this);

  2. When applied to a static method, it locks the Class object. Because the data associated with the Class lives in the permanent generation (the metaspace in JDK 1.8), which is shared globally, a static synchronized method is effectively a global lock on that Class and blocks every thread that calls the method.

  3. When applied to an object instance in a synchronized block, it locks all code blocks that synchronize on that same object. The object's monitor has multiple queues, and when several threads access the monitor together, it stores those threads in different containers. A short sketch of the three forms follows.
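
A short sketch of the three forms (class and method names are illustrative):

public class SyncScopes {

    // 1. Instance method: locks the current instance (this).
    public synchronized void instanceMethod() {
        // only one thread per instance can be in here at a time
    }

    // 2. Static method: locks the Class object (SyncScopes.class), shared by all instances.
    public static synchronized void staticMethod() {
        // effectively a global lock for this class
    }

    // 3. Synchronized block: locks the given object; here, a dedicated lock object.
    private final Object lock = new Object();

    public void blockMethod() {
        synchronized (lock) {
            // all blocks synchronizing on the same 'lock' object exclude each other
        }
    }
}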

ReentrantLock

  • ReentrantLock implements the Lock interface and the methods it defines, and it is a reentrant lock. Besides doing everything synchronized can do, ReentrantLock also provides features such as responding to interruption while waiting for the lock, polling for the lock, and timed lock acquisition, which help avoid deadlocks among multiple threads.
  • ReentrantLock supports choosing between fair and unfair acquisition. A sketch of the extra acquisition modes follows.
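
A minimal sketch (class and method names are illustrative) of the acquisition modes that synchronized does not offer, a timed tryLock and an interruptible acquire:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class AcquisitionModes {
    private final ReentrantLock lock = new ReentrantLock();

    // Timed / polling acquisition: give up instead of blocking forever.
    public boolean tryDoWork() throws InterruptedException {
        if (lock.tryLock(500, TimeUnit.MILLISECONDS)) {
            try {
                // ... critical section ...
                return true;
            } finally {
                lock.unlock();   // always release in finally
            }
        }
        return false;            // could not get the lock in time
    }

    // Interruptible acquisition: another thread can interrupt us while we wait.
    public void doWorkInterruptibly() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            // ... critical section ...
        } finally {
            lock.unlock();
        }
    }
}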

lock:

The lock entry is the lock() method:

public void lock() {
    sync.lock();
}

Sync is an abstract static inner class of ReentrantLock that extends AQS (AbstractQueuedSynchronizer) to implement ReentrantLock's logic. Sync has two concrete subclasses: NonfairSync and FairSync.

NonfairSync

Take a look at NonfairSync:

static final class NonfairSync extends Sync {
    private static final long serialVersionUID = 7316153563782823691L;

    /**
     * Performs lock. Try immediate barge, backing up to normal
     * acquire on failure.
     */
    // Override Sync's lock method
    final void lock() {
    	// Try to preempt the CAS lock
        if (compareAndSetState(0, 1))
        	// If the preemption succeeds, the lock is acquired
            setExclusiveOwnerThread(Thread.currentThread());
        else
        	// If no preemption succeeds, call acquire() and follow the logic inside
            acquire(1);
    }
    // Overrides AQS's tryAcquire method
    protected final boolean tryAcquire(int acquires) {
        return nonfairTryAcquire(acquires);
    }
}

FairSync

Take a look at FairSync:

static final class FairSync extends Sync {
    private static final long serialVersionUID = -3000897897090466540L;

	// Override Sync's lock method
    final void lock() {
        acquire(1);
    }

    /**
     * Fair version of tryAcquire. Don't grant access unless
     * recursive call or no waiters or is first.
     */
    // Overrides AQS's tryAcquire method
    protected final boolean tryAcquire(int acquires) {
    	// Get the current thread of execution
        final Thread current = Thread.currentThread();
        // Get the value of state
        int c = getState();
        // In the unlocked state
        if (c == 0) {
        	// There are no precursor nodes and the value of state has been successfully replaced
            if (!hasQueuedPredecessors() && compareAndSetState(0, acquires)) {
                // Save the current thread that acquired the lock. The next time you come back, you don't need to try to compete for the lock
                setExclusiveOwnerThread(current);
                return true;
            }
        } else if (current == getExclusiveOwnerThread()) {
        	// If the lock is acquired by the same thread, simply increase the reentrant count
            int nextc = c + acquires;
            // If nextc is less than 0 (overflow), an exception is thrown
            if (nextc < 0)
                throw new Error("Maximum lock count exceeded");
            setState(nextc);
            // Succeeded in obtaining the lock
            return true;
        }
        // Failed to obtain the lock
        return false;
    }
}

Summary of NonfairSync and FairSync

At this point, it should be clear that Sync has two concrete implementation classes:

  • NonfairSync: the lock can be barged. When lock() is called, it first attempts a CAS regardless of whether other threads are waiting in the queue; if the CAS succeeds the lock is acquired, and if it fails it falls through to the acquire() logic. When acquiring the lock resource it uses Sync's nonfairTryAcquire method.
  • FairSync: all threads acquire the lock in FIFO order. Its lock() method makes no CAS attempt and goes straight to the acquire() logic. When acquiring the resource it goes through its own tryAcquire logic (which checks hasQueuedPredecessors).

ReentrantLock is different from synchronized

  • synchronized is a construct provided at the Java language level, so the lock is released automatically even if an exception is thrown; ReentrantLock is a lock implemented in Java code, so after acquiring the lock you must make sure it gets released properly (typically in a finally block);

  • With synchronized, a thread must wait until it gets the lock and has no way to merely attempt acquisition; ReentrantLock can try to acquire the lock (tryLock) and back off or time out;

  • ReentrantLock supports choosing fair or unfair acquisition when the lock is created. A sketch of the release-in-finally idiom compared with synchronized follows.
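
A minimal sketch (class and method names are illustrative) of the first difference: synchronized releases the monitor automatically even when an exception is thrown, while ReentrantLock must be released explicitly in a finally block:

import java.util.concurrent.locks.ReentrantLock;

public class ReleaseComparison {
    private final Object monitor = new Object();
    private final ReentrantLock lock = new ReentrantLock();

    public void withSynchronized() {
        synchronized (monitor) {
            // If this throws, the monitor is released automatically on exit.
            doWork();
        }
    }

    public void withReentrantLock() {
        lock.lock();
        try {
            // If this throws, the finally block still releases the lock.
            doWork();
        } finally {
            lock.unlock();
        }
    }

    private void doWork() { /* ... */ }
}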