This time, let's look at locks

When it comes to locks, we naturally think of the locks in daily life: the locks on our phones, the locks on our computers, the locks everywhere else in our lives. These are all locks.

What does a lock do?

Still not sure what locks are for? Think about why we use them: we lock our phones to protect our privacy, and we lock our doors to protect our property. Precisely speaking, we use locks for security. In daily life, locks protect our privacy and our property, so what are locks for in Java?

Locks in Java

Locks in Java also exist for safety, but the difference is that locks in Java protect concurrency. Locking in Java is precisely about guaranteeing concurrency safety: it addresses the three concurrency problems of memory visibility, atomicity, and ordering. Java offers a variety of locks, each with its own characteristics and applicable scenarios, so we should understand their differences and underlying principles in order to use them correctly.

Optimistic locks and pessimistic locks

Pessimistic locking

I already wrote an article about optimistic and pessimistic locks when I first started writing; here is a fresh introduction.

Pessimistic locking is, as its name suggests, pessimistic: it assumes that every time data is accessed it may be modified by another thread, so it locks the resource before accessing it. This guarantees that the resource cannot be modified by other threads while it is being used; any other thread that wants the resource must block until the current thread releases the lock. In Java, the synchronized keyword and the implementation classes of the Lock interface are all pessimistic locks. Let's see how pessimistic locking actually works.



After thread A preempts the resource, thread B is blocked and waits for thread A to release the resource.

When thread A releases the resource, thread B acquires the lock and operates on the resource. Pessimistic locking ensures that only one thread can operate on the resource at a time.
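A minimal sketch of pessimistic locking in Java, once with synchronized and once with a ReentrantLock (the counter class itself is illustrative, not code from the original article):

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class PessimisticCounter {

    private int value = 0;
    private final Lock lock = new ReentrantLock();

    // Pessimistic locking with the synchronized keyword:
    // only one thread at a time can enter this method on the same instance.
    public synchronized void incrementWithSynchronized() {
        value++;
    }

    // Pessimistic locking with an explicit Lock:
    // other threads block inside lock() until the holder calls unlock().
    public void incrementWithLock() {
        lock.lock();
        try {
            value++;
        } finally {
            lock.unlock();
        }
    }

    public synchronized int get() {
        return value;
    }
}
```

In both variants, a thread that wants the resource while another thread holds it simply waits; that waiting is exactly what makes the approach pessimistic.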

Optimistic locking

In contrast to pessimistic locks, an optimistic lock assumes that the data will not be modified while it is being used (hence "optimistic"), so it does not lock the resource on access; instead, it checks at commit time whether the data has been modified in the meantime. In a database this can be implemented with a version column (a sketch of that idea follows below); in Java we use CAS. Let's look at how optimistic locking executes.
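A minimal sketch of the version-number idea in plain Java. The VersionedAccount class is illustrative and mirrors the database pattern of updating only when the version still matches; it is not a standard API.

```java
// Version-number optimistic locking: no lock is held while a thread works
// on the data, and conflicts are detected only at commit time.
public class VersionedAccount {

    /** Immutable snapshot a thread reads before doing its work. */
    public static final class Snapshot {
        final long balance;
        final long version;
        Snapshot(long balance, long version) {
            this.balance = balance;
            this.version = version;
        }
    }

    private long balance = 0;
    private long version = 0;

    // synchronized only keeps each individual read/commit step atomic;
    // between read() and commit() the caller holds no lock at all,
    // which is what makes the scheme optimistic.
    public synchronized Snapshot read() {
        return new Snapshot(balance, version);
    }

    public synchronized boolean commit(Snapshot read, long newBalance) {
        if (read.version != version) {
            return false;          // someone else changed the data: re-read and retry
        }
        balance = newBalance;
        version++;                 // bump the version on every successful write
        return true;
    }
}
```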

CAS

Compare And Swap (CAS) is a lock-free algorithm: a non-blocking atomic operation provided by Java that lets multiple threads synchronize without locking. The atomic classes in the concurrency package (java.util.concurrent.atomic) use CAS to implement optimistic locking. CAS relies on the hardware to guarantee the atomicity of the compare-and-update step, and in the JDK the Unsafe class provides a series of compareAndSwap* methods (Unsafe itself is not covered here). A CAS operation compares the value in memory with the expected value; if the two are equal, the value is replaced with the new value, otherwise nothing is done. CAS involves three operands:

  • A: the expected value
  • V: the value currently in memory
  • B: the new value to be written

In simple terms, a CAS update runs in a loop: it checks whether the expected value equals the value in memory. If they are equal, the new value is written; if not, it loops and tries again until the update succeeds and the loop exits.
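A minimal sketch of that retry loop using AtomicInteger (the class and method names around it are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasLoopDemo {

    private final AtomicInteger value = new AtomicInteger(0);

    // Classic CAS retry loop: read the current value, compute the new one,
    // and only write it if memory still holds what we read; otherwise retry.
    public int increment() {
        while (true) {
            int expected = value.get();     // V: the value currently in memory
            int update = expected + 1;      // B: the new value we want to write
            if (value.compareAndSet(expected, update)) {
                return update;              // succeeded: memory still equalled A (expected)
            }
            // another thread changed the value in between; loop and try again
        }
    }

    public static void main(String[] args) {
        CasLoopDemo demo = new CasLoopDemo();
        System.out.println(demo.increment());  // prints 1
    }
}
```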

Problems with CAS

CAS is powerful, but it has some problems, such as the ABA problem. Suppose a shared variable X in memory currently holds the value A. Thread 1 reads X, sees A, and intends to use CAS to change it to B. Before thread 1 performs its CAS, thread 2 changes X from A to B and then back to A. Thread 1's CAS now succeeds, because X still equals A, but that A is no longer the A thread 1 originally read. This is the classic ABA problem. It arises because the variable's state changes in a circle: A can go to B and B can go back to A. If the value only ever moved forward, say from A to B and then to C, the problem would not occur.

Solution: JDK 1.5 added the AtomicStampedReference class, which attaches a stamp (a version number) to the reference so that ABA changes can be detected.
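A small illustrative sketch of how AtomicStampedReference catches the A -> B -> A change by checking the stamp as well as the value (the values and stamps here are made up for demonstration):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {

    public static void main(String[] args) {
        // value "A" with initial stamp (version) 0
        AtomicStampedReference<String> ref = new AtomicStampedReference<>("A", 0);

        int[] stampHolder = new int[1];
        String current = ref.get(stampHolder);   // read value and stamp together
        int stamp = stampHolder[0];

        // Simulate another thread doing A -> B -> A, bumping the stamp each time
        ref.compareAndSet("A", "B", stamp, stamp + 1);
        ref.compareAndSet("B", "A", stamp + 1, stamp + 2);

        // Our CAS fails even though the value is "A" again,
        // because the stamp no longer matches what we read.
        boolean success = ref.compareAndSet(current, "C", stamp, stamp + 1);
        System.out.println("CAS succeeded? " + success);  // prints false
    }
}
```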


CAS also has the problem of spin overhead: it keeps looping until the value in memory matches the expected value and the update succeeds, which can burn a lot of CPU under heavy contention. In addition, a single CAS can only guarantee atomicity for one shared variable. JDK 1.5 added the AtomicReference class, which lets you wrap several variables in one object and update the reference atomically.
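For instance, here is a minimal sketch (the Position class and move method are illustrative) of using AtomicReference to update two related fields in one atomic step:

```java
import java.util.concurrent.atomic.AtomicReference;

public class MultiFieldCasDemo {

    // Wrap the two related fields in one immutable object so a single
    // CAS on the reference updates both of them atomically together.
    static final class Position {
        final int x;
        final int y;
        Position(int x, int y) { this.x = x; this.y = y; }
    }

    private final AtomicReference<Position> position =
            new AtomicReference<>(new Position(0, 0));

    public void move(int dx, int dy) {
        while (true) {
            Position current = position.get();
            Position next = new Position(current.x + dx, current.y + dy);
            if (position.compareAndSet(current, next)) {
                return;  // both coordinates updated in one atomic step
            }
            // another thread moved first; retry with the fresh state
        }
    }
}
```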

Use pessimistic locks and optimistic locks

Pessimistic locks can be implemented with the synchronized keyword, while optimistic locks can use the atomic classes provided in the java.util.concurrent (JUC) package.

Fair locks and unfair locks

After pessimistic and optimistic locks, let's turn to fair and unfair locks, because locks can also be fair or unfair. A fair lock is, as its name says, fair: when many threads apply for the lock at the same time, they are placed in a queue, and the thread at the front of the queue gets the lock first. In short, first come, first served. Think of lunch at the school canteen: classmates rush to the canteen as soon as class ends so they can eat sooner, everyone queues up, and nobody misses a meal. The canteen lady is fair: everyone in the queue gets food, and threads behave the same way under a fair lock. Now suppose someone cuts in line and the canteen lady serves them anyway instead of calling them out. The canteen lady is not fair: first come is no longer first served. Fair locks do have a drawback, though. While one thread holds the resource, the other queued threads can only block, and waking a blocked thread is expensive for the CPU, so a fair lock is noticeably less efficient than an unfair one. Let's look at an example.

In Java, ReentrantLock provides implementations of both fair and unfair locks. So how do we use ReentrantLock as a fair or an unfair lock?

Use fair and unfair locks

ReentrantLock is an unfair lock by default. Let’s look at an example of a fair lock:
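A minimal sketch of what such a fair-lock example can look like (the class and thread names are illustrative, not necessarily the original listing): three threads repeatedly acquire a ReentrantLock created with fairness enabled and print when they get it.

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {

    // Passing true asks ReentrantLock for fair (first come, first served) ordering.
    private static final ReentrantLock LOCK = new ReentrantLock(true);

    public static void main(String[] args) {
        Runnable task = () -> {
            for (int i = 0; i < 2; i++) {
                LOCK.lock();
                try {
                    System.out.println(Thread.currentThread().getName() + " acquired the lock");
                } finally {
                    LOCK.unlock();
                }
            }
        };
        for (int i = 1; i <= 3; i++) {
            new Thread(task, "thread-" + i).start();
        }
    }
}
```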



Take a look at the output:
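With the fair lock, the queued threads take turns in the order they requested the lock, so the output looks something like this (illustrative only; the exact lines depend on the sketch above and on the run):

```
thread-1 acquired the lock
thread-2 acquired the lock
thread-3 acquired the lock
thread-1 acquired the lock
thread-2 acquired the lock
thread-3 acquired the lock
```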



We can see that with a fair lock the output is in order: first come, first served.

Let’s look at an example of an unfair lock:
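A minimal sketch of the unfair-lock version, identical to the fair one except that the lock is created without fairness (again, names are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class UnfairLockDemo {

    // The no-argument constructor (equivalent to new ReentrantLock(false)) gives an unfair lock.
    private static final ReentrantLock LOCK = new ReentrantLock();

    public static void main(String[] args) {
        Runnable task = () -> {
            for (int i = 0; i < 2; i++) {
                LOCK.lock();
                try {
                    System.out.println(Thread.currentThread().getName() + " acquired the lock");
                } finally {
                    LOCK.unlock();
                }
            }
        };
        for (int i = 1; i <= 3; i++) {
            new Thread(task, "thread-" + i).start();
        }
    }
}
```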



Output result:
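One possible interleaving (illustrative only; an unfair lock lets a thread that just released the lock barge back in ahead of threads that asked earlier, so the order varies from run to run):

```
thread-1 acquired the lock
thread-1 acquired the lock
thread-3 acquired the lock
thread-3 acquired the lock
thread-2 acquired the lock
thread-2 acquired the lock
```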



We can see that with an unfair lock the output is completely unordered: first come is not necessarily first served.

So with a fair lock, if thread 1 holds the lock and thread 2 requests it, thread 2 suspends and waits until thread 1 releases the lock, and only then can thread 2 acquire it. Now suppose another thread, thread 3, also wants the lock. With an unfair lock, either thread 2 or thread 3 may get it. With a fair lock, thread 3 can only suspend and wait until thread 2 has acquired the lock and released it in turn.

When to use fair and unfair locks

Use fair locks in scenarios where resources must be allocated fairly; if no special fairness is required, use unfair locks, because fair locks incur extra performance overhead.

Exclusive locks and shared locks

ReentrantLock is an exclusive lock: only one thread can hold it at a time, and other threads must wait for that thread to release it before they can acquire it. Strictly speaking, an exclusive lock is a pessimistic lock, since a thread can only obtain the resource after the holder releases it. A shared lock, by contrast, relaxes the locking policy in the spirit of optimistic locking and allows multiple threads to hold the lock at the same time. ReadWriteLock in the concurrency package is a typical shared lock: it allows a resource to be accessed by multiple read operations at once, or by a single write operation, but not both at the same time.
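A minimal sketch using ReentrantReadWriteLock (the surrounding class is illustrative): many threads can read concurrently, while writes are exclusive.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SharedLockDemo {

    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int value = 0;

    // Multiple threads may hold the read lock at the same time,
    // as long as no thread holds the write lock.
    public int read() {
        rwLock.readLock().lock();
        try {
            return value;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    // The write lock is exclusive: it waits until all readers and
    // any other writer have released their locks.
    public void write(int newValue) {
        rwLock.writeLock().lock();
        try {
            value = newValue;
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}
```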

Spin locks

What is a spin lock? With a spin lock, when a thread tries to acquire a lock that is already held by another thread, it does not suspend immediately; instead it keeps retrying (spinning) without giving up the CPU. The default number of spins is 10. If acquiring the lock takes too long, the spinning threads simply waste CPU resources, since they keep consuming CPU without making progress. Spin locks are also unfair.
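The JDK does not expose a standalone spin-lock class, but the idea is easy to sketch with CAS; the SpinLock class below is an illustrative hand-rolled sketch, not a JDK API.

```java
import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {

    // Holds the thread that currently owns the lock, or null if the lock is free.
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Busy-wait (spin) until the CAS from null -> current succeeds.
        // The thread never blocks; it just keeps retrying on the CPU.
        while (!owner.compareAndSet(null, current)) {
            // spin
        }
    }

    public void unlock() {
        Thread current = Thread.currentThread();
        // Only the owning thread can release the lock.
        owner.compareAndSet(current, null);
    }
}
```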

Advantages

A spin lock does not cause a thread state switch: the thread stays in user mode and remains active. Because the thread never enters the blocked state, unnecessary context switches are avoided and execution is faster.

Just as life is full of unexpected situations, Java is full of unexpected exceptions. Next time we will talk about exceptions in Java. Please stay tuned.