The classification of locks
Optimistic and pessimistic locks
At the broadest level, locks are classified as optimistic or pessimistic. Optimistic and pessimistic locks are not concrete locks (Java has no class by either name) but two different strategies for handling concurrency.
An optimistic lock is, as the name suggests, optimistic: it assumes that no one else will modify the data between reads, so it does not lock at all. When it wants to update the data, it first checks whether anyone else changed the data between the read and the update. If so, it reads again and retries the update, repeating these steps until the update succeeds (a thread that fails may, of course, instead abandon the update).
A pessimistic lock is pessimistic: it assumes that someone else will modify the data every time it is fetched, so it locks before every access. Any other thread that wants the data then blocks until the pessimistic lock is released, at which point it acquires the lock first and only then the data.
Pessimistic locks block competing threads, while optimistic locks roll back and retry. Each has its place. Optimistic locking suits scenarios with few writes, where conflicts are genuinely rare; it eliminates locking overhead and increases overall system throughput. But if conflicts are frequent, the upper-layer application retries constantly and performance degrades, so those scenarios call for pessimistic locking. In short: optimistic locking for few writes and few conflicts; pessimistic locking for many writes and many conflicts.
The foundation of optimistic locking – CAS
One concept we must understand in the implementation of optimistic locking is CAS.
What is CAS? Compare-and-swap (also read as compare-and-set): compare a value, and swap in a new one only if the comparison succeeds. It has two steps:
- Compare: read the value A and, before updating it to B, check that it is still A (i.e., it has not been modified by another thread; the ABA problem is ignored here).
- Swap: if it is still A, update it to B and finish; otherwise, do not update.
These two steps together form a single atomic operation: from the CPU's point of view, they happen as one indivisible step.
With CAS, you can implement an optimistic lock:
```java
public class OptimisticLockSample {
    private int data = 123; // shared data

    public void test() {
        // The thread that updates data does the following
        for (;;) {
            int oldData = data;
            int newData = doSomething(oldData); // doSomething(...) is assumed to be defined elsewhere
            // The following simulates a CAS update, attempting to update the value of data
            if (data == oldData) { // compare
                data = newData;    // swap
                break;             // finished
            }
            // else: do nothing and loop again
        }
    }

    /*
     * Obviously, the code in test() is not atomic at all; it only illustrates
     * the flow of CAS. Real CAS relies on a CPU instruction.
     */
}
```
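The same retry loop can be written against the JDK's real CAS primitive, AtomicInteger.compareAndSet. A minimal sketch (the class name CasDemo and method doubleWithCas are made up for illustration; compareAndSet is standard JDK API):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    // CAS retry loop: read the current value, compute a new one, try to swap it in.
    static int doubleWithCas(AtomicInteger data) {
        int oldData, newData;
        do {
            oldData = data.get();
            newData = oldData * 2; // stand-in for doSomething(oldData)
        } while (!data.compareAndSet(oldData, newData));
        return newData;
    }

    public static void main(String[] args) {
        AtomicInteger data = new AtomicInteger(123);
        System.out.println(doubleWithCas(data)); // prints 246
    }
}
```

If another thread changes the value between `get()` and `compareAndSet()`, the CAS fails and the loop simply reads and retries.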
In the JDK itself, CAS is implemented via native methods in sun.misc.Unsafe.
```java
public final class Unsafe {
    ...
    public final native boolean compareAndSwapObject(Object var1, long var2, Object var4, Object var5);
    public final native boolean compareAndSwapInt(Object var1, long var2, int var4, int var5);
    public final native boolean compareAndSwapLong(Object var1, long var2, long var4, long var6);
    ...
}
```
The code above is a simple, intuitive implementation of optimistic locking (or rather, of the optimistic locking process). It allows multiple threads to read simultaneously (because there is no locking at all), and if the data is updated, exactly one thread succeeds, forcing the other threads to roll back and retry. CAS uses a CPU instruction to guarantee atomicity at the hardware level, achieving a lock-like effect.
As can be seen from the whole process of optimistic locking, there is no operation of locking and unlocking, so optimistic locking strategy is also called lockless programming. In other words, optimistic locking is not really a “lock”, it is just a CAS algorithm with repeated retries.
Spin locks
A thread that fails to acquire a spin lock is not blocked; instead it loops ("spins"), repeatedly checking whether the lock has been released. In HotSpot, spinning shows up as the lightweight-lock stage of synchronized, described in the next section.
Synchronized with the Lock interface
There are two ways to implement locking in Java: one is using the synchronized keyword, and the other is using the implementation class of the Lock interface.
The synchronized keyword is like an automatic transmission: it satisfies most driving needs. But if you want more advanced maneuvers, such as drifting, you need a manual gearbox, which is what the implementation classes of the Lock interface provide.
synchronized, meanwhile, has grown more efficient with each Java release thanks to various optimizations; it is just not as flexible to use as the Lock implementations.
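A minimal sketch of the "manual gear": unlike synchronized, a Lock must be released explicitly, conventionally in a finally block (the class name CounterDemo is illustrative; lock()/unlock() are the real Lock API):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class CounterDemo {
    private final Lock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();       // like entering a synchronized block
        try {
            count++;
        } finally {
            lock.unlock(); // must release manually, even if an exception is thrown
        }
    }

    public int get() {
        return count;
    }

    public static void main(String[] args) {
        CounterDemo c = new CounterDemo();
        for (int i = 0; i < 5; i++) c.increment();
        System.out.println(c.get()); // prints 5
    }
}
```

The try/finally pattern is the price of the extra flexibility: forgetting unlock() leaves the lock held forever, which synchronized makes impossible.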
The synchronized lock-upgrade process is the core of its optimization: biased lock -> lightweight lock -> heavyweight lock.
```java
class Test {
    private static final Object object = new Object();

    public void test() {
        synchronized (object) {
            // do something
        }
    }
}
```
When a block of code is locked with the synchronized keyword, the lock object (object in the code above) starts out not as a heavyweight lock but as a biased lock. A biased lock is, literally, a lock "biased in favor of the first thread to acquire it." The thread does not actively release the biased lock after executing the synchronized block. On reaching the synchronized block a second time, it checks whether the thread holding the lock is itself (the holding thread's ID is stored in the object header) and, if so, proceeds normally. Since the lock was never released, there is no need to re-acquire it. If the lock is used by a single thread from start to finish, the biased lock clearly has very little overhead and high performance.
Once a second thread enters lock contention, the biased lock is upgraded to a lightweight lock (a spin lock). Note what counts as contention: there is none if multiple threads take turns acquiring the lock but never have to wait. Lock contention occurs only when a thread tries to acquire a lock, finds it already held, and must wait for its release.
In the lightweight-lock state, the contention continues: the threads that did not get the lock spin, i.e., loop repeatedly checking whether the lock can be acquired. Acquiring the lock means modifying the lock flag bits in the object header via a CAS operation: compare the current flag to see whether the lock is free, and if so, set it to the locked state. The compare-and-set is atomic, guaranteed at the JVM level. The thread then holds the lock and records itself as the lock owner.
If the thread holding the lock runs for a long time, say a complex computation or a large network transfer, the other threads waiting for the lock keep spinning, which can be very wasteful: only one thread does useful work while the rest burn CPU, a phenomenon known as busy-waiting. So when multiple threads use an exclusive lock but contention is absent or very light, synchronized stays a lightweight lock and tolerates a short period of busy-waiting. The idea is a trade-off: a short busy-wait in exchange for avoiding the overhead of switching threads between user mode and kernel mode.
Obviously, there is a limit on spinning (the JVM keeps a spin counter, allowing 10 iterations by default, adjustable via a VM parameter). If lock contention is heavy and a thread reaches the maximum spin count, the lightweight lock is upgraded to a heavyweight lock (still by changing the lock flag bits via CAS, without changing the holding thread's ID). When a later thread tries to acquire the lock and finds it is a heavyweight lock, it suspends itself and waits to be woken by the thread that releases the lock. Before JDK 1.6, synchronized went straight to a heavyweight lock; it is clear by now that this series of optimizations has improved performance significantly.
In the JVM, a synchronized lock can only be upgraded (also known as lock inflation) in the order biased -> lightweight -> heavyweight; it is never downgraded.
Reentrant lock (recursive lock)
A reentrant lock literally means a lock that can be "entered again": the same thread may acquire the same lock multiple times. For example, in a recursive function that acquires a lock, will the lock block the thread that already holds it? If not, the lock is reentrant (and for this reason also called a recursive lock).
In Java, every lock whose name starts with Reentrant is reentrant, and in fact all lock implementation classes provided by the JDK, including the synchronized keyword lock, are reentrant. If you really need a non-reentrant lock, you must implement it yourself; an online search turns up many examples, and they are simple to write.
A non-reentrant lock would deadlock in a recursive function, which is why locks in Java are basically all reentrant; non-reentrant locks have little practical value, and I cannot think of a scenario that needs one. Note: if you know of a scenario that requires a non-reentrant lock, please leave a comment.
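The recursive case can be sketched with a synchronized method calling itself; because synchronized is reentrant, the nested call simply re-enters the lock instead of deadlocking (the class name ReentrantDemo is illustrative):

```java
public class ReentrantDemo {
    // A recursive method that re-acquires the lock on `this` at every call.
    // A non-reentrant lock would block forever on the second call.
    synchronized int factorial(int n) {
        if (n <= 1) return 1;
        return n * factorial(n - 1); // re-enters the lock this thread already holds
    }

    public static void main(String[] args) {
        System.out.println(new ReentrantDemo().factorial(5)); // prints 120
    }
}
```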
The following figure shows the Lock implementation classes:
Fair locks and unfair locks
With a fair lock, threads acquire the lock in the order they requested it, first come, first served, which is fair. With an unfair lock, a thread that arrives later may acquire the lock first, randomly or otherwise, depending on the implementation algorithm.
For the ReentrantLock class, the constructor specifies whether the lock is fair; the default is unfair. In most cases, the throughput of an unfair lock exceeds that of a fair lock, so unless there is a specific requirement, prefer the unfair lock.
synchronized, by contrast, is only ever an unfair lock, and there is no way to make it fair. This is one advantage of ReentrantLock over synchronized: it is more flexible.
Here is the ReentrantLock constructor code:
```java
/**
 * Creates an instance of {@code ReentrantLock} with the
 * given fairness policy.
 *
 * @param fair {@code true} if this lock should use a fair ordering policy
 */
public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}
```
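To see the fairness flag in action (the class name FairnessDemo is illustrative; isFair() is real ReentrantLock API):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock unfair = new ReentrantLock();   // default constructor: unfair
        ReentrantLock fair = new ReentrantLock(true); // fair: FIFO hand-off to waiters

        System.out.println(unfair.isFair()); // prints false
        System.out.println(fair.isFair());   // prints true
    }
}
```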
Internally, ReentrantLock implements the FairSync and NonfairSync inner classes to provide fair and unfair locking. A source-code analysis will follow in later sections.
Interruptible lock
It literally means “interrupt-responsive lock”.
First, we need to understand what an interrupt is. Java provides no direct way to terminate a thread, only an interrupt mechanism. What is that? Thread A asks thread B to stop by calling threadB.interrupt() (thread B can also send an interrupt request to itself via Thread.currentThread().interrupt()). Thread B does not stop running immediately; at a moment of its choosing, it responds to the interrupt in its own way, or ignores it. In other words, a Java interrupt does not terminate the thread directly; it only sets a flag, leaving it to the interrupted thread to decide what to do. It is like a teacher telling a student to review homework during evening self-study: whether and how the student reviews is entirely up to the student.
Back to locks: suppose thread A holds a lock and thread B is waiting to acquire it. If thread A holds the lock for too long and thread B no longer wants to wait, we can have thread B interrupt itself, or have another thread interrupt it, so that B abandons the wait. A lock that supports this is an interruptible lock.
In Java, the synchronized lock is non-interruptible, while the Lock implementation classes are interruptible. The JDK's own Lock implementations are more flexible, which is one reason there are so many lock classes beyond synchronized.
Lock interface definitions:
```java
public interface Lock {
    void lock();
    void lockInterruptibly() throws InterruptedException;
    boolean tryLock();
    boolean tryLock(long time, TimeUnit unit) throws InterruptedException;
    void unlock();
    Condition newCondition();
}
```
lockInterruptibly() is the method used to acquire the lock interruptibly.
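A sketch of responding to an interrupt while waiting for a lock (the class name InterruptibleDemo and method tryWhileHeld are made up; lockInterruptibly() is the real API, and the 100 ms sleep is a timing assumption to give thread B a chance to block first):

```java
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleDemo {
    // Returns true if thread B acquired the lock,
    // false if it was interrupted while waiting and gave up.
    static boolean tryWhileHeld() {
        ReentrantLock lock = new ReentrantLock();
        lock.lock(); // the calling thread (thread A) holds the lock throughout
        final boolean[] acquired = {false};

        Thread b = new Thread(() -> {
            try {
                lock.lockInterruptibly(); // blocks, but responds to interrupt
                acquired[0] = true;
                lock.unlock();
            } catch (InterruptedException e) {
                // interrupted while waiting: abandon the attempt instead of blocking forever
            }
        });
        b.start();
        try {
            Thread.sleep(100); // give B time to block on the lock (timing assumption)
            b.interrupt();     // ask B to stop waiting
            b.join();
        } catch (InterruptedException ignored) {
        }
        lock.unlock();
        return acquired[0];
    }

    public static void main(String[] args) {
        System.out.println(tryWhileHeld()); // prints false: B abandoned the wait
    }
}
```

Had thread B used plain lock() instead, the interrupt flag would be set but B would stay blocked until A released the lock.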
Shared lock
A shared lock can be held by multiple threads at once. Shared locks are typically used for reading data: for example, to allow up to 10 threads to read a piece of data simultaneously, we can use a shared lock with 10 permits.
Java also has concrete shared-lock implementations, such as Semaphore. A source-code analysis of this class will follow in later sections.
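A minimal Semaphore sketch, treating the permits as "reader slots" (the class name SharedLockDemo is illustrative; acquireUninterruptibly(), release(), and availablePermits() are real Semaphore API):

```java
import java.util.concurrent.Semaphore;

public class SharedLockDemo {
    public static void main(String[] args) {
        Semaphore semaphore = new Semaphore(10); // 10 permits: up to 10 readers at once

        semaphore.acquireUninterruptibly();               // one reader takes a permit
        System.out.println(semaphore.availablePermits()); // prints 9
        semaphore.release();                              // the reader gives it back
        System.out.println(semaphore.availablePermits()); // prints 10
    }
}
```

An eleventh thread calling acquire while all 10 permits are taken would block until some reader releases one.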
Mutex (exclusive lock)
This literally means a lock that is mutually exclusive between threads: the lock can be held by only one thread at a time.
In Java, ReentrantLock and synchronized lock are mutually exclusive.
Read-write lock
Read/write locks are actually a pair of locks, a read lock (shared lock) and a write lock (mutex, exclusive lock).
In Java, the ReadWriteLock interface specifies only two methods, one that returns a read lock and one that returns a write lock.
```java
public interface ReadWriteLock {
    /**
     * Returns the lock used for reading.
     *
     * @return the lock used for reading
     */
    Lock readLock();

    /**
     * Returns the lock used for writing.
     *
     * @return the lock used for writing
     */
    Lock writeLock();
}
```
As mentioned earlier in the optimistic locking strategy (see "The foundation of optimistic locking – CAS"), all threads can read at any time, and only before writing is a check made on whether the value has changed.
Read-write locks pursue the same goal with a slightly different strategy. In many cases, a thread already knows whether it is reading data in order to change it, so why not make that explicit when locking? If I read a value in order to update it, I take the write lock directly, and while I hold it, other threads wait whether they want to read or write. If I read data only to display it, I explicitly take the read lock, and other threads can acquire the read lock as well without waiting (the read-lock counter is incremented by 1).
Although read-write locks feel similar to optimistic locking, a read-write lock is a pessimistic strategy: it does not check before updating whether the value was modified; instead, it decides before locking whether to take the read or the write lock. Optimistic locking refers specifically to lock-free programming.
The JDK ships exactly one implementation of the ReadWriteLock interface: ReentrantReadWriteLock. As the name says, it provides a read-write lock and is also reentrant.
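A minimal sketch of ReentrantReadWriteLock usage (the class name ReadWriteDemo is illustrative; readLock()/writeLock() are the real API):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteDemo {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int value = 0;

    public int read() {
        rwLock.readLock().lock();   // shared: many readers may hold this at once
        try {
            return value;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void write(int newValue) {
        rwLock.writeLock().lock();  // exclusive: blocks all other readers and writers
        try {
            value = newValue;
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        ReadWriteDemo d = new ReadWriteDemo();
        d.write(42);
        System.out.println(d.read()); // prints 42
    }
}
```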
Conclusion
The locks used in Java are basically all pessimistic. Are there optimistic locks in Java? Yes: the atomic classes under java.util.concurrent.atomic are implemented with optimistic locking, as the following JDK source (Unsafe.getAndAddInt) shows:
```java
public final int getAndAddInt(Object var1, long var2, int var4) {
    int var5;
    do {
        var5 = this.getIntVolatile(var1, var2);
    } while (!this.compareAndSwapInt(var1, var2, var5, var5 + var4));
    return var5;
}
```
As you can see from the above source code, CAS is repeated in a loop until it succeeds.
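A simple way to see this lock-free, optimistic approach hold up under real contention is AtomicInteger: many threads increment concurrently, and the CAS retries keep the count exact (the class name AtomicDemo and method countWith are illustrative; incrementAndGet is real JDK API):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {
    // nThreads threads each increment the counter perThread times, with no locks.
    static int countWith(int nThreads, int perThread) {
        AtomicInteger counter = new AtomicInteger(0);
        Thread[] threads = new Thread[nThreads];
        for (int i = 0; i < nThreads; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    counter.incrementAndGet(); // CAS retry loop under the hood
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            try { t.join(); } catch (InterruptedException ignored) {}
        }
        return counter.get();
    }

    public static void main(String[] args) {
        System.out.println(countWith(10, 1000)); // prints 10000
    }
}
```

A plain `int counter++` here would lose updates under contention; the CAS loop retries instead of overwriting.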
VM parameters

```
-XX:-UseBiasedLocking    Disable biased locking (available since JDK 1.6)
-XX:+UseSpinning         Enable spin locking
-XX:PreBlockSpin=10      Set the spin count
```