Optimistic locks and pessimistic locks

Pessimistic locking

The pessimistic lock corresponds to the pessimists in life, who always assume that things will go wrong.

For example, if a restroom has only one stall, a pessimist locks the door the moment they go in, so everyone else has to wait outside. This state is called "blocking".

Back in the code world, a pessimistic lock is placed on shared data. Every time a thread wants to operate on the data, it assumes that other threads will try to operate on it too, so it locks first; other threads that want the data then cannot get the lock and have to block.

In Java, synchronized and ReentrantLock are typical pessimistic locks, and container classes built on the synchronized keyword, such as Hashtable, are pessimistic locks as well.
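
To make the idea concrete, here is a minimal sketch of pessimistic locking with synchronized; the SynchronizedCounter class and its count field are illustrative names made up for this example, not from any library.

public class SynchronizedCounter {
    private int count = 0;

    // Every access locks first, so any other thread calling these methods
    // at the same time blocks until the lock is released.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}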

Optimistic locking

The optimistic lock corresponds to the optimists in life, who always assume that things will go well.

For example, with only one stall in the restroom, the optimistic lock thinks: there are so few people around that nobody will grab my stall, and locking the door every time is a waste of time, so it simply does not lock. You see, the optimistic lock is born optimistic!

Back in the code world, an optimistic lock operates on the data without locking it; only when it updates the data does it check whether another thread has modified the data in the meantime.

Optimistic locking can be implemented with a version number mechanism or with the CAS algorithm. In Java, the atomic classes under the java.util.concurrent.atomic package are optimistic locks implemented with CAS.
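
As a minimal sketch of the CAS style of optimistic locking, the counter below uses AtomicInteger from java.util.concurrent.atomic; the OptimisticCounter class name is an illustrative assumption. No lock is taken: the thread reads the current value and retries if compareAndSet reports that another thread changed the value in the meantime.

import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticCounter {
    private final AtomicInteger count = new AtomicInteger(0);

    public void increment() {
        int current;
        do {
            current = count.get();                             // read the current value without locking
        } while (!count.compareAndSet(current, current + 1));  // retry if another thread updated it first
    }
}

In practice you would simply call AtomicInteger.incrementAndGet(), which performs the same read-and-retry loop internally; the loop is spelled out here only to show the idea.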

Two types of lock usage scenarios

Pessimistic locks and optimistic locks are not inherently better or worse than each other; each has its own suitable scenarios.

Optimistic locking is suitable for scenarios with few writes (and therefore few conflicts), because it saves the overhead of acquiring and releasing locks and thus improves throughput.

Pessimistic locking is suitable for scenarios with many writes and few reads, that is, where conflicts are serious and contention between threads is intense. There, optimistic locking would cause threads to retry continuously, which can actually reduce performance.

Exclusive locks and shared locks

An exclusive lock

An exclusive lock means that the lock can only be held by one thread at a time. If one thread holds an exclusive lock on data, no other thread can hold any type of lock on that data. The thread that acquires the exclusive lock can both read and modify data.

synchronized and the implementation classes of Lock in java.util.concurrent (JUC), such as ReentrantLock, are exclusive locks.
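
A minimal sketch of an exclusive lock using ReentrantLock, assuming a simple counter as the shared data (the ExclusiveCounter name is made up for illustration); only one thread at a time can be between lock() and unlock().

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ExclusiveCounter {
    private final Lock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();        // only one thread can hold the lock at a time
        try {
            count++;        // the holder may both read and modify the shared data
        } finally {
            lock.unlock();  // always release in finally
        }
    }
}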

Shared lock

A shared lock means that the lock can be held by multiple threads. If one thread adds a shared lock to data, other threads can only add a shared lock to data, not an exclusive lock. The thread that acquires the shared lock can only read the data, not modify it.

In the JDK, the read lock of ReentrantReadWriteLock is a shared lock.

Mutex and read-write locks

The mutex

A mutex is a common implementation of an exclusive lock: it allows only one visitor to access a resource at a time, so access is unique and exclusive.

Only one thread can own the mutex at a time; all other threads have to wait.

Read-write lock

Read-write locks are an implementation of shared locks. A read-write lock manages a pair of locks: a read lock and a write lock.

A read lock can be held by multiple threads as long as no write lock is held, while a write lock is exclusive. Writes take priority in the sense that a thread acquiring the read lock must be able to see all updates made before the previous release of the write lock.

Read-write locks allow more concurrency than a mutex: there is at most one writer thread at a time, but multiple threads can read concurrently.

The JDK defines an interface for read-write locks: ReadWriteLock

public interface ReadWriteLock {
    /** Get the read lock */
    Lock readLock();

    /** Get the write lock */
    Lock writeLock();
}

ReentrantReadWriteLock implements the ReadWriteLock interface. The specific implementation is not expanded here.
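
As a usage sketch rather than the internals, the cache-like class below protects a HashMap with a ReentrantReadWriteLock; the SimpleCache name and its fields are assumptions made up for this example. Multiple readers can hold the read lock at the same time, while the write lock is exclusive.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SimpleCache {
    private final Map<String, String> data = new HashMap<>();
    private final ReadWriteLock rwLock = new ReentrantReadWriteLock();

    // Several threads may hold the read lock at the same time.
    public String get(String key) {
        rwLock.readLock().lock();
        try {
            return data.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    // The write lock is exclusive: it is granted only when no reader or writer holds the lock.
    public void put(String key, String value) {
        rwLock.writeLock().lock();
        try {
            data.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}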

Fair locks and unfair locks

Fair lock

A fair lock means that multiple threads acquire the lock in the order in which they requested it. It is like queuing for tickets: whoever arrives first buys first, and latecomers stand at the end of the queue.

In Java, a ReentrantLock can be created as a fair lock through its constructor:

// Create a reentrant lock; true means fair lock, false means unfair lock. The default is unfair.
Lock lock = new ReentrantLock(true);

Unfair lock

An unfair lock means that threads do not necessarily acquire the lock in the order in which they requested it. Under high concurrency this may cause priority inversion or starvation (a thread that never manages to acquire the lock).

In Java, the synchronized keyword is an unfair lock, and ReentrantLock is an unfair lock by default.

// Create a reentrant lock; true means fair lock, false means unfair lock. The default is unfair.
Lock lock = new ReentrantLock(false);

Reentrant lock

A reentrant lock, also known as a recursive lock, means that when a thread has acquired the lock in an outer method, it automatically acquires the same lock when entering an inner method that also requires it.

In Java, ReentrantLock is, as its name says, a reentrant lock, and synchronized is also reentrant.

Knock on the blackboard: one advantage of reentrant locks is that they avoid deadlocks to some extent.

Using synchronized as an example, take a look at the following code:

public synchronized void methodA() throws Exception {
    // Do some magic things
    methodB();
}

public synchronized void methodB() throws Exception {
    // Do some magic things
}

When the current thread executes methodA, it acquires the lock on the current object; methodA then calls methodB, which needs the same lock. Because synchronized is reentrant, the current thread can enter methodB directly. If the lock were not reentrant, the current thread could not enter methodB and would wait forever for a lock it already holds, causing a deadlock.
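
The same reentrancy applies to ReentrantLock. Below is a minimal sketch, assuming two methods that share one lock instance (the class and method names are illustrative): the thread already holding the lock may call lock() again, as long as every lock() is matched by an unlock().

import java.util.concurrent.locks.ReentrantLock;

public class ReentrantExample {
    private final ReentrantLock lock = new ReentrantLock();

    public void methodA() {
        lock.lock();
        try {
            methodB();      // the current thread re-acquires the same lock without blocking
        } finally {
            lock.unlock();
        }
    }

    public void methodB() {
        lock.lock();        // the hold count goes from 1 to 2 for the same thread
        try {
            // Do some magic things
        } finally {
            lock.unlock();
        }
    }
}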

Spin lock

A spin lock means that when a thread fails to acquire the lock, it is not suspended immediately; instead it executes a busy loop, and this busy waiting is called spinning.

The purpose of a spin lock is to reduce the chance that a thread will be suspended, since suspending and waking up threads are also costly operations.

If the lock is held by another thread for a long time and the current thread ends up suspended anyway after spinning, the busy loop just wastes CPU and degrades overall performance. Therefore spin locks are not suitable for scenarios where the lock is held for a long time.

In Java, the AtomicInteger class spins when it updates its value. Let's look at the underlying code:

public final int getAndAddInt(Object o, long offset, int delta) {
    int v;
    do {
        v = getIntVolatile(o, offset);
    } while (!compareAndSwapInt(o, offset, v, v + delta));
    return v;
}

If the CAS operation fails, the loop retrieves the current value and tries again.
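
To show the idea outside the JDK, here is a minimal hand-rolled spin lock built on AtomicBoolean. This is an illustrative sketch, not a class from the standard library, and a real implementation would also need to consider fairness and CPU usage.

import java.util.concurrent.atomic.AtomicBoolean;

public class SimpleSpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Busy-loop (spin) until the CAS from false to true succeeds;
        // the waiting thread is never suspended.
        while (!locked.compareAndSet(false, true)) {
            // keep spinning
        }
    }

    public void unlock() {
        locked.set(false);
    }
}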

Adaptive spin locks are also worth knowing about.

JDK 1.6 introduced adaptive spinning, which is smarter: the spin time is no longer fixed, but is determined by the previous spin time on the same lock and the state of the lock owner. If the virtual machine thinks a spin is very likely to succeed again, it allows the spin to last longer; if spins rarely succeed on a given lock, it may skip spinning altogether to avoid wasting processor resources.

Segmented lock

A segmented lock is a lock design, not a specific lock.

Segmented locking is designed to refine lock granularity further: when an operation does not need to update the entire array, only the single item (segment) of the array being updated is locked.

In Java, ConcurrentHashMap (before JDK 1.8) uses Segment locks underneath; each Segment is locked independently, so operations on different segments can run concurrently.
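
Below is a minimal sketch of the segmented-lock idea, independent of ConcurrentHashMap: the counter is split into a fixed number of segments, each guarded by its own lock, so updates that hash to different segments do not block each other. The StripedCounter name and the segment count are assumptions made for illustration.

public class StripedCounter {
    private static final int SEGMENTS = 16;
    private final Object[] locks = new Object[SEGMENTS];
    private final long[] counts = new long[SEGMENTS];

    public StripedCounter() {
        for (int i = 0; i < SEGMENTS; i++) {
            locks[i] = new Object();
        }
    }

    // Only the segment the key hashes to is locked,
    // so updates to different segments can run concurrently.
    public void increment(Object key) {
        int index = (key.hashCode() & 0x7fffffff) % SEGMENTS;
        synchronized (locks[index]) {
            counts[index]++;
        }
    }

    // Summing locks one segment at a time rather than the whole structure at once.
    public long total() {
        long sum = 0;
        for (int i = 0; i < SEGMENTS; i++) {
            synchronized (locks[i]) {
                sum += counts[i];
            }
        }
        return sum;
    }
}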

Lock escalation (no lock | biased lock | lightweight lock | heavyweight lock)

To improve performance and reduce the cost of acquiring and releasing locks, JDK 1.6 introduced four lock states: no lock, biased lock, lightweight lock, and heavyweight lock. As contention between threads increases, the lock can be upgraded through these states but never downgraded.

No lock

The no-lock state is essentially the optimistic locking described above, so it is not repeated here.

Biased locking

A biased lock is biased toward the first thread that acquires it. If only one thread ever accesses the locked resource and there is no contention from other threads, that thread does not need to go through the full lock acquisition every time; the lock is simply biased to it.

Biased locking is implemented via the lock flag bits in the object's Mark Word. If the current state is biased, the JVM further checks whether the thread ID stored in the object header matches the current thread's ID; if it does, the thread enters the synchronized block directly.

Lightweight lock

When contention between threads appears, the biased lock is upgraded to a lightweight lock. A lightweight lock assumes that although contention exists, it is ideally very low, and it waits for the holding thread to release the lock by spinning.

Heavyweight lock

If contention intensifies further (a thread spins more than a certain number of times, or while one thread holds the lock and a second is spinning a third thread arrives to compete), the lightweight lock inflates into a heavyweight lock. A heavyweight lock blocks every thread except the one that owns the lock.

After the upgrade, a heavyweight lock is essentially a mutex: one thread holds the lock while all other threads block.

In Java, the synchronized keyword is implemented exactly as this escalation process: no lock > biased lock > lightweight lock > heavyweight lock. The process will be explained in more detail in a later article on how synchronized works.

Lock optimization techniques (lock coarsening, lock elimination)

Lock coarsening

Lock coarsening means reducing the number of synchronized blocks by expanding the scope of a single block; in essence, it merges multiple lock and unlock requests into one.

For example, if a synchronized block sits inside a loop body, every iteration of the loop locks and unlocks:

private static final Object LOCK = new Object();

for (int i = 0; i < 100; i++) {
    synchronized (LOCK) {
        // do some magic things
    }
}

After coarsening the lock, it looks like this:

synchronized (LOCK) {
    for (int i = 0; i < 100; i++) {
        // do some magic things
    }
}

Lock elimination

Lock elimination means that the virtual machine's just-in-time compiler removes locks on shared data when it detects at run time that they cannot actually be contended.

Let me give you an example to help you understand.

public String test(String s1, String s2){
    StringBuffer stringBuffer = new StringBuffer();
    stringBuffer.append(s1);
    stringBuffer.append(s2);
    return stringBuffer.toString();
}

The above code has a test method that concatenates the strings s1 and s2.

The three variables s1, s2, and stringBuffer in the test method are all local variables, and local variables live on the stack. The stack is thread-private, so the test method is thread-safe even when multiple threads call it.

We all know that StringBuffer is thread-safe and that append is a synchronized method, yet test is inherently thread-safe. To improve efficiency, the virtual machine eliminates these synchronization locks; this process is called lock elimination.

The source of StringBuffer.append looks like this:

public synchronized StringBuffer append(String str) {
    toStringCache = null;
    super.append(str);
    return this;
}

