Locks exist to ensure that when a shared resource is contended by multiple threads, the operations on it behave atomically and the final result is correct.
The previous article covered synchronized: what the keyword means in Java and how its internal lock-upgrade process works. In our work and study we run into many lock concepts, such as pessimistic lock, optimistic lock, spin lock, read-write lock, exclusive lock, mutex, shared lock, reentrant lock, non-reentrant lock, biased lock, fair lock, unfair lock, lightweight lock, heavyweight lock, and segmented lock. With so many names, how do we tell them apart and use them well? In fact, many of them describe the same thing; once we understand the underlying ideas, the names fall into place.
Pessimistic lock & Optimistic lock
In terms of how a lock is ultimately implemented, locks are generally divided into pessimistic locks and optimistic locks.
Pessimistic locking
A pessimistic lock pessimistically assumes that whenever the current thread operates on data, some other thread will be trying to change it at the same time. So a pessimistic lock acquires the lock before doing anything: threads compete for the lock, only one thread can hold it at a time, and access to the resource inside the critical section is therefore synchronized. Both synchronized and Lock are implementations of pessimistic locking.
How to understand locking?
Synchronized can be applied to an entire method or to a block of code, but in the end synchronized always uses some object as the lock. "Locking" means that a method or a piece of code is made synchronized; when you specify an object for the synchronized scope, that object is the lock being added.
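As a small sketch of the forms described above (the class and field names here are illustrative, not from the original article):

```java
// Three common ways to use synchronized; each ultimately locks on an object.
public class Counter {
    private final Object lock = new Object();
    private int count;

    // 1. Instance method: the lock object is `this`.
    public synchronized void incrementA() {
        count++;
    }

    // 2. Static method: the lock object is Counter.class.
    public static synchronized void staticWork() {
    }

    // 3. Block: the lock object is whatever you pass in (`lock` here).
    public void incrementB() {
        synchronized (lock) {
            count++;
        }
    }

    public int get() {
        return count;
    }
}
```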
How to understand to get a lock?
For pessimistic locks, acquiring the lock actually means changing a flag in the object header (which points to the monitor) in memory, marking that the current thread now occupies the object lock. All threads try to change this lock mark; the thread finally recorded in the object lock is the one that holds it, and only that thread may execute the code block guarded by the object lock.
Optimistic locking
An optimistic lock optimistically assumes that while the current thread is operating, no other thread will change the same data, so it skips the locking process described above. Only at commit time does it check whether the resource still holds the value it read earlier; if not, the update fails, and the caller then decides whether to retry or give up. In this way atomic updates are achieved without locking. This operation is called CAS and is used in many places; Java's Atomic classes implement atomic operations this way.
CAS
CAS stands for Compare And Swap, a logical operation used to implement atomic updates. It is used in many places in Java: besides the Atomic classes mentioned above, even the process of competing for a lock uses CAS to make acquiring the lock atomic. A database can achieve a similar atomic effect by updating a row conditionally, and a single UPDATE statement illustrates the idea well.
update user set name = 'Bill' where name = 'Zhang San';
If the user's name is still 'Zhang San', it is changed to 'Bill'; otherwise nothing happens.
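The same compare-then-swap semantics can be seen directly in Java's Atomic classes. A minimal sketch using AtomicInteger (the variable names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

// CAS in action: compareAndSet only writes the new value
// if the current value still equals the expected value.
public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(10);

        // Succeeds: the current value matches the expected value 10.
        boolean first = counter.compareAndSet(10, 20);

        // Fails: the value is now 20, not 10, so nothing is changed.
        boolean second = counter.compareAndSet(10, 30);

        System.out.println(first + " " + second + " " + counter.get()); // true false 20
    }
}
```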
Spin lock
An optimistic lock uses a CAS operation; when the operation fails, the caller can decide whether to retry or stop. A spin lock is one such retry strategy. Normally, when we modify data we want the modification to eventually succeed, and giving up after one failure is usually not the result we want. A spin lock therefore does not stop and release the CPU, but keeps spinning, executing CAS until it succeeds. Before 1.6, Java required the -XX:+UseSpinning flag to enable spin locking; since 1.6 it is enabled automatically.
Advantages and disadvantages
- Advantage: since we want to guarantee that the resource is eventually changed, the thread does not release the CPU after a failed attempt and wait for its next time slice; by holding on to the CPU and retrying immediately, it reduces the cost of context switching.
- Disadvantage: the CPU is never released. If the spin lock keeps failing, it occupies the CPU the whole time; if too many threads spin like this, the CPU becomes saturated by spinning, overloading it and even starving other work that needs the CPU.
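A minimal spin-lock sketch shows the idea: the thread loops on CAS instead of blocking. The class and field names are illustrative, not from any JDK API (Thread.onSpinWait requires JDK 9+):

```java
import java.util.concurrent.atomic.AtomicReference;

// A minimal spin lock: lock() busy-waits on a CAS until it wins.
public class SpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Spin until the CAS from null -> current succeeds.
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait(); // hint to the CPU that we are busy-waiting (JDK 9+)
        }
    }

    public void unlock() {
        // Only the owner may release the lock.
        owner.compareAndSet(Thread.currentThread(), null);
    }
}
```

Note this is not reentrant: if the owning thread calls lock() again it will spin forever, which connects to the reentrancy discussion later in this article.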
Adaptive spin lock
To address the disadvantage above, Java 1.6 not only enabled spin locking by default but also optimized it so the CPU is not held indefinitely. The optimization is adaptive: if the current thread has been succeeding during its spins, it is allowed to spin longer; if it is found to fail with high probability, its spin time is reduced or it is blocked directly. By bounding the spin time, the CPU is protected from threads that would otherwise spin forever.
Read-write lock
ReadWriteLock is a Java interface with two lock implementations: a read lock and a write lock. ReentrantReadWriteLock implements the read-write lock on top of AQS and switches between the two. As mentioned above, Lock in Java is an implementation of pessimistic locking. A pessimistic lock guarantees atomicity, but no matter what the critical section does, only one thread can execute at a time, even for pure reads. Clearly that leaves performance on the table, and the read-write lock was created as a result. At any moment the lock is either in the read state or the write state: in the read state, multiple threads can perform read operations at the same time; in the write state, only one thread can modify the data. The read lock and write lock cannot be held at the same time and must compete with each other. This is only a brief introduction; a later article will detail the implementation of read-write locks and the switching process.
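A minimal usage sketch of ReentrantReadWriteLock (the Cache class and its field are illustrative): many readers may hold the read lock concurrently, while the write lock is exclusive.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Reads take the shared read lock; writes take the exclusive write lock.
public class Cache {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int value;

    public int read() {
        rwLock.readLock().lock();
        try {
            return value;            // many threads may be here at once
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void write(int newValue) {
        rwLock.writeLock().lock();
        try {
            value = newValue;        // only one thread may be here
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}
```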
Exclusive, mutex, exclusive & shared locks
Exclusive and mutually exclusive ("mutex") mean the same thing: the lock can be operated by only one thread at a time. These three names describe one concept. A shared lock, by contrast, can be used by multiple threads at once even while locked. The read-write lock above illustrates both: the write lock allows only one writing thread, so the write lock is an exclusive lock, also called a mutex; the read lock allows multiple threads to read at the same time, so the read lock is a shared lock. Apart from the read lock of a read-write lock, any other form of lock that allows only one thread to operate at a time is an exclusive (mutex) lock.
Reentrant locks & non-reentrant locks
Normally we simply add a lock to a code block or a method. But if several methods lock on the same object and call one another, or even recurse, then after a thread acquires the lock it will encounter another locking operation for the same lock. Whether the thread is then allowed to acquire the lock again is what reentrancy describes.
What problems do non-reentrant locks cause?
In the situation described above, if the rule is that a thread cannot enter the same lock again, then the current thread holds the object lock but cannot acquire it a second time: it must wait for the lock to be released to continue, yet it is the very thread holding the lock. The thread can therefore never get the second lock, and a deadlock occurs.
Reentrant lock
The monitor object has a _recursions field that counts how many times the lock has been entered: +1 on each entry, -1 on each release, and 0 when the lock is fully released. In other words, synchronized is a reentrant lock: it supports the same thread repeatedly entering code blocks guarded by the same lock.
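A small sketch of reentrancy in action (the class and its depth field are illustrative): outer() already holds the lock on `this` when it calls inner(), and the thread is allowed in again.

```java
// If synchronized were not reentrant, the call to inner() below
// would deadlock, because the thread already holds the lock on `this`.
public class ReentrantDemo {
    private int depth = 0;

    public synchronized int outer() {
        depth++;
        return inner(); // re-enters the same lock held by this thread
    }

    public synchronized int inner() {
        depth++;
        return depth;
    }
}
```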
Biased lock, lightweight lock, heavyweight lock
These three locks are concepts that arise from the synchronized lock-upgrade process, which was explained in detail in the previous article; they are only briefly introduced here.
Biased locking
Normally, if one thread in our program holds an object lock, in most cases the same thread will be the next to hold it again. Biased locking is an optimization of this pattern: without it, even a thread that holds the lock over and over would have to compete for it every time. Biased locking has two behaviors:
- After the first thread acquires the lock, the bias flag is set in the object header and the current thread's ID is recorded there. The next time that thread comes to acquire the lock, it sees that the object is biased and that the recorded thread ID is its own, so no further operation is needed: it can enter the guarded resource directly.
- When a different thread comes to acquire the lock, it sees that the lock is biased but the recorded thread ID is not its own. It then checks whether the recorded thread is still using the lock; if not, it changes the thread ID to its own and enters the guarded resource.
Lightweight lock
A lightweight lock means there is competition for the lock, but the competition is light: only the current thread is trying to take the lock from its holder, so it can be handled by spinning rather than blocking.
The lightweight lock arises after the biased-lock behavior. When the second biased-lock case occurs and the recorded thread is found to still be using the lock, the lock is upgraded to a lightweight lock. The lock is then acquired by repeating the CAS operation described above on the _owner field of the monitor object: if the CAS succeeds in changing that field to the current thread's ID, the lock is acquired.
Heavyweight lock
The heavyweight lock exists to prevent the CPU from being occupied pointlessly by the spinning competition of the lightweight-lock stage: it blocks the threads contending for the lock so the program can keep running normally.
During the lightweight-lock stage, when the current thread tries to acquire the lock: if the _owner field already holds its own thread information, this is a reentry and it continues operating; if the field holds another thread's information that has not been released, the CAS cannot succeed, the lock is upgraded to a heavyweight lock, and the thread blocks itself. A blocked thread must later be woken, so when the lock holder releases the lock and finds that other threads competed for it without success, it wakes the previously blocked threads.
Fair locks & Unfair locks
Fairness is about order: if multiple threads compete for a lock and are blocked, is there a defined order in which they wake up and acquire the lock again?
Fair lock
The thread that started competing for the lock first is woken first and gets the lock first: threads blocked earlier are guaranteed to be released earlier.
Unfair lock
There is no order; there is no telling which thread will get the lock next. Synchronized is an unfair lock.
Implementation
abstract static class Sync extends AbstractQueuedSynchronizer { ... }
static final class NonfairSync extends Sync { ... }
static final class FairSync extends Sync { ... }
Besides synchronized, the other implementation of fair and unfair locks in Java is built on AQS: the Sync class in ReentrantLock extends AQS and has two subclasses, NonfairSync for the unfair lock and FairSync for the fair lock. The implementation details will be covered in a later article.
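At the API level this choice is exposed through the ReentrantLock constructor: the boolean argument selects FairSync (true) or NonfairSync (false, the default). A minimal sketch:

```java
import java.util.concurrent.locks.ReentrantLock;

// The constructor argument picks the Sync subclass used internally.
public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock unfair = new ReentrantLock();      // NonfairSync (default)
        ReentrantLock fair   = new ReentrantLock(true);  // FairSync

        System.out.println(unfair.isFair()); // false
        System.out.println(fair.isFair());   // true
    }
}
```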
Segmented lock
final V putVal(K key, V value, boolean onlyIfAbsent) {
    if (key == null || value == null) throw new NullPointerException();
    int hash = spread(key.hashCode());
    int binCount = 0;
    for (Node<K,V>[] tab = table;;) {
        Node<K,V> f; int n, i, fh;
        if (tab == null || (n = tab.length) == 0)
            tab = initTable();
        else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
            if (casTabAt(tab, i, null, new Node<K,V>(hash, key, value, null)))
                break;                   // no lock when adding to empty bin
        }
        else if ((fh = f.hash) == MOVED)
            tab = helpTransfer(tab, f);
        else {
            V oldVal = null;
            synchronized (f) {           // lock only the head node of this bin
                if (tabAt(tab, i) == f) {
                    if (fh >= 0) {
                        binCount = 1;
                        // ... rest of the method omitted
"Segmented lock" is really a concept rather than a concrete lock class, and ConcurrentHashMap is its best-known use. In JDK 7 the map was internally divided into 16 segments, each guarded by its own lock, so value changes were controlled at segment granularity and up to 16 threads could write at the same time, improving concurrency. In JDK 8 the segments were replaced by even finer-grained control: as the putVal method above shows, only the head node of the affected bucket is locked, and that locking is done with synchronized, which has already been discussed at length.
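A small usage sketch (the thread counts and key ranges are arbitrary): several threads put into the same ConcurrentHashMap concurrently with no external lock, because the fine-grained locking happens inside putVal.

```java
import java.util.concurrent.ConcurrentHashMap;

// Four threads write disjoint key ranges concurrently; the map's
// internal per-bin locking keeps the result correct.
public class ChmDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<Integer, Integer> map = new ConcurrentHashMap<>();
        Thread[] threads = new Thread[4];
        for (int t = 0; t < 4; t++) {
            final int offset = t * 1000;
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 1000; i++) {
                    map.put(offset + i, i);
                }
            });
            threads[t].start();
        }
        for (Thread t : threads) t.join();
        System.out.println(map.size()); // 4000
    }
}
```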
Conclusion
This article introduced the common lock concepts in Java and the purpose and usage of each type of lock. Using locks appropriately in concurrent scenarios helps us preserve system performance while keeping the system running stably and correctly.