ReentrantLock implementation layers and dependencies
1. CAS (compareAndSet)
LockSupport
Basic methods
park
park causes the current thread to give up the CPU and enter a waiting state; the operating system no longer schedules it.
It stays there until another thread calls the unpark method on it, which restores the thread specified by the parameter to a runnable state.
[1] How park differs from Thread.yield()
-
yield merely tells the operating system that other threads may run first; the current thread itself remains runnable
-
park gives up the thread's right to run and puts it into the WAITING state
[2] Response to interrupts
The park method responds to interrupts: it returns when an interrupt occurs, without throwing InterruptedException, and it does not clear the thread's interrupt status
[3] Two timed variants
-
parkNanos: the maximum time to wait can be specified in nanoseconds, relative to the current time
-
parkUntil: specifies the maximum wait time as an absolute deadline, in milliseconds since the epoch
When the wait times out, the method returns. There are also overloads that additionally take a blocker object, used for debugging; this is usually this. The blocker set this way can later be inspected with LockSupport.getBlocker(Thread).
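A minimal usage sketch of park/unpark (the class name and printed messages are made up for illustration): the main thread parks itself with a blocker object, and a second thread unparks it after a short delay.

```java
import java.util.concurrent.locks.LockSupport;

public class ParkUnparkDemo {
    public static void main(String[] args) {
        Thread mainThread = Thread.currentThread();

        Thread waker = new Thread(() -> {
            try {
                Thread.sleep(1000);              // simulate some work first
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("unparking main thread");
            LockSupport.unpark(mainThread);      // makes mainThread runnable again
        });
        waker.start();

        System.out.println("main thread parking");
        // The blocker object (here, the class itself) is only for debugging/inspection.
        // If unpark happened to run first, this park would return immediately.
        LockSupport.park(ParkUnparkDemo.class);
        System.out.println("main thread resumed, interrupted="
                + Thread.currentThread().isInterrupted());
    }
}
```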
2. AQS (AbstractQueuedSynchronizer)
- Provides a state field that is volatile, ensuring memory visibility and ordering
- Maintains an internal wait queue and uses CAS to update state, implementing a non-blocking update algorithm (a minimal sketch follows)
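To see how these pieces fit together, here is a minimal, non-reentrant mutex built on AQS, in the spirit of the usage example in the AbstractQueuedSynchronizer javadoc. It is a toy, not ReentrantLock's actual Sync class:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// state == 0 means unlocked, state == 1 means locked.
public class SimpleMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            // CAS the state from 0 to 1; only one thread can win.
            return compareAndSetState(0, 1);
        }

        @Override
        protected boolean tryRelease(int releases) {
            // state is volatile, so a plain write is enough here.
            setState(0);
            return true;
        }

        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }
    public void unlock() { sync.release(1); }
}
```

acquire and release are provided by AQS itself: they call our tryAcquire/tryRelease and take care of the queuing, parking and waking.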
3. ReentrantLock
- Sync is an abstract class
- NonfairSync is the class used when fair is false (the default)
- FairSync is the class used when fair is true
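A simplified sketch of how the constructor picks between the two, paraphrased from the OpenJDK ReentrantLock source (in the real class, Sync extends AbstractQueuedSynchronizer and contains the acquire/release logic; details vary by JDK version):

```java
public class ReentrantLockSketch {
    abstract static class Sync { /* extends AbstractQueuedSynchronizer in the JDK */ }
    static final class NonfairSync extends Sync { }
    static final class FairSync extends Sync { }

    private final Sync sync;

    public ReentrantLockSketch() {
        sync = new NonfairSync();                      // default: non-fair
    }

    public ReentrantLockSketch(boolean fair) {
        sync = fair ? new FairSync() : new NonfairSync();
    }
}
```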
The lock implementation
ReentrantLock.lock() delegates to sync.lock(), which is abstract in Sync and overridden by the NonfairSync and FairSync subclasses.
If the lock is not held, it is acquired with a CAS on state; if the current thread already holds the lock, the hold count is simply incremented (see the sketch below).
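A simplified sketch of this non-fair acquire logic, paraphrased from ReentrantLock.Sync.nonfairTryAcquire in OpenJDK (not the exact source; wrapped in a standalone AQS subclass so it compiles on its own):

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

class NonfairAcquireSketch extends AbstractQueuedSynchronizer {
    @Override
    protected boolean tryAcquire(int acquires) {
        final Thread current = Thread.currentThread();
        int c = getState();
        if (c == 0) {
            // Lock is free: try to take it with a single CAS on state.
            if (compareAndSetState(0, acquires)) {
                setExclusiveOwnerThread(current);
                return true;
            }
        } else if (current == getExclusiveOwnerThread()) {
            // Current thread already owns the lock: reentrant acquisition, bump the count.
            int next = c + acquires;
            if (next < 0)                              // hold count overflow
                throw new Error("Maximum lock count exceeded");
            setState(next);
            return true;
        }
        return false;                                  // lock held by another thread
    }
}
```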
If the tryAcquire method returns false, acquireQueued(addWaiter(Node.EXCLUSIVE), arg) is called.
addWaiter creates a new Node representing the current thread and appends it to the internal wait queue; acquireQueued is then called to try to acquire the lock.
acquireQueued is essentially an infinite loop. In each iteration it first checks whether the current node is the first waiting node; if it is and the lock can be acquired, the node is removed from the wait queue and the method returns. Otherwise the thread gives up the CPU by calling LockSupport.park (via the parkAndCheckInterrupt method) and enters the waiting state. After being woken up, it checks whether an interrupt occurred, records the interrupt flag, and keeps looping; the recorded flag is eventually returned.
To summarize lock(): if the lock can be obtained immediately, it is; otherwise the thread joins the wait queue. After being woken up, the thread checks whether it is the first waiter; if it is and it can acquire the lock, it returns, otherwise it continues to wait. If an interrupt occurs during this process, lock() only records the interrupt flag; it neither returns early nor throws an exception (a self-contained toy model of this loop follows).
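To make the flow concrete, here is a self-contained toy FIFO lock that mimics the behaviour just described: enqueue the current thread, acquire only when first in line, park otherwise, and record (but not act on) interrupts. This is a didactic model only; it is much simpler than the real AQS node queue:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

class ToyQueuedLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);
    private final Queue<Thread> waiters = new ConcurrentLinkedQueue<>();

    public void lock() {
        boolean interrupted = false;
        Thread current = Thread.currentThread();
        waiters.add(current);                          // "addWaiter": join the wait queue
        while (waiters.peek() != current               // only the first waiter may try
                || !locked.compareAndSet(false, true)) {
            LockSupport.park(this);                    // give up the CPU and wait
            if (Thread.interrupted())                  // woken up: record the interrupt flag
                interrupted = true;
        }
        waiters.remove();                              // acquired: leave the queue
        if (interrupted)
            current.interrupt();                       // re-assert the recorded interrupt
    }

    public void unlock() {
        locked.set(false);
        LockSupport.unpark(waiters.peek());            // wake the first waiting thread (no-op if none)
    }
}
```

Like lock() on a real ReentrantLock, the model re-asserts the interrupt flag at the end instead of throwing, so the caller can still observe it.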
Unlock implementation
The tryRelease method decrements the count stored in state and frees the lock when it reaches zero; unparkSuccessor then calls LockSupport.unpark to wake up the first waiting thread (a simplified sketch follows).
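A simplified sketch of the release side, paraphrased from ReentrantLock.Sync.tryRelease in OpenJDK (not the exact source):

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

class ReleaseSketch extends AbstractQueuedSynchronizer {
    @Override
    protected boolean tryRelease(int releases) {
        int c = getState() - releases;
        if (Thread.currentThread() != getExclusiveOwnerThread())
            throw new IllegalMonitorStateException();  // only the owner may unlock
        boolean free = false;
        if (c == 0) {                                  // last hold released: lock becomes free
            free = true;
            setExclusiveOwnerThread(null);
        }
        setState(c);                                   // volatile write publishes the new count
        return free;
        // When tryRelease returns true, AQS's release() calls unparkSuccessor(head),
        // which uses LockSupport.unpark to wake up the first waiting thread.
    }
}
```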
Fair locks and unfair locks
In the source, a fair lock performs one extra check compared with a non-fair lock: the lock is acquired only if no other thread has been waiting longer (the hasQueuedPredecessors check, sketched below).
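A simplified sketch of the fair acquire path, paraphrased from ReentrantLock.FairSync in OpenJDK; compared with the non-fair sketch above, the only difference is the hasQueuedPredecessors() check before the CAS:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

class FairAcquireSketch extends AbstractQueuedSynchronizer {
    @Override
    protected boolean tryAcquire(int acquires) {
        final Thread current = Thread.currentThread();
        int c = getState();
        if (c == 0) {
            // Fair lock: only attempt the CAS if no other thread has waited longer.
            if (!hasQueuedPredecessors() && compareAndSetState(0, acquires)) {
                setExclusiveOwnerThread(current);
                return true;
            }
        } else if (current == getExclusiveOwnerThread()) {
            // Reentrant branch, identical to the non-fair sketch above.
            int next = c + acquires;
            if (next < 0)
                throw new Error("Maximum lock count exceeded");
            setState(next);
            return true;
        }
        return false;
    }
}
```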
Fair lock model
At initialization, state = 0, indicating that no thread holds the lock. Thread A then requests the lock, occupies it, and state is incremented by 1.
Thread A acquires the lock by atomically incrementing state, so state becomes 1, and then continues with its other work. Thread B then requests the lock; it cannot obtain it, so a node is created for it and it queues up.
When the queue is initialized, an empty head node is generated, and thread B's node is linked behind it. If thread A requests the lock again, does it have to queue? Of course not; otherwise the lock would simply deadlock itself. So what happens when A requests the lock again?
This is where reentrancy comes in: when a thread that already holds the lock acquires it again, it simply increments the state value. Conversely, if thread A releases the lock once, state is decremented by 1.
Only when thread A has released the lock completely and state has dropped back to 0 do other threads get a chance to acquire it. When A fully releases the lock, state returns to 0 and the queue is notified to wake up thread B's node so that B can compete for the lock again. If thread C is queued behind thread B, C simply keeps sleeping until it is notified in turn once B has finished. Note that when a thread's node is woken up and acquires the lock, that node is removed from the queue (a short demo of the reentrant counting follows).
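A short runnable demo of this reentrant counting, using getHoldCount() as a stand-in for the internal state value (class name is illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantCountDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock(true);                 // fair lock

        lock.lock();                                                  // state goes 0 -> 1
        lock.lock();                                                  // reentrant: 1 -> 2
        System.out.println("hold count = " + lock.getHoldCount());    // 2

        lock.unlock();                                                // 2 -> 1, lock still held
        System.out.println("still held = " + lock.isHeldByCurrentThread()); // true

        lock.unlock();                                                // 1 -> 0, fully released
        System.out.println("hold count = " + lock.getHoldCount());    // 0
    }
}
```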
Unfair lock model
When thread A finishes, it takes time to wake up thread B, and B still has to compete for the lock again once it wakes. So if thread C arrives during this hand-over, it may well acquire the lock first; if it does, B simply goes back to sleep.
Why the default is not a fair lock
Guaranteeing fairness lowers overall performance, not because the extra check is slow, but because active threads cannot grab the lock and must enter a wait state, which causes frequent context switches and reduces overall throughput.
Even on a fair lock, ReentrantLock's no-argument tryLock() method uses the non-fair acquisition path (see the sketch below).
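A small usage sketch (class name and messages are illustrative): even on a lock constructed as fair, the no-argument tryLock() barges, while the timed tryLock(timeout, unit) honours fairness instead.

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock(true);   // fair lock

        if (lock.tryLock()) {                           // non-blocking, non-fair attempt
            try {
                System.out.println("got the lock without queuing");
            } finally {
                lock.unlock();
            }
        } else {
            System.out.println("lock is held by another thread, giving up immediately");
        }
    }
}
```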
Compared with synchronized
-
ReentrantLock implements the same basic semantics as synchronized and additionally supports non-blocking lock acquisition, interruptible and time-limited blocking, and more, making it more flexible; synchronized is simpler to use and needs less code (see the sketch after this list)
-
synchronized represents a declarative style of programming: the Java runtime is responsible for the implementation and the programmer does not deal with the details; explicit locking represents an imperative style that requires the user to handle all the details
-
Besides simplicity, declarative programming can also pay off in performance. On recent JVMs, ReentrantLock and synchronized performance is close, and Java compilers and virtual machines keep optimizing the synchronized implementation, for example by automatically analyzing its use and eliding lock acquire/release calls when there is no contention
-
Use synchronized when you can, and ReentrantLock when you can’t
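A side-by-side sketch of the two styles compared above (field and method names are made up for illustration):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class CounterComparison {
    private final Object monitor = new Object();
    private final ReentrantLock lock = new ReentrantLock();
    private int count;

    // Declarative style: the JVM acquires and releases the monitor for us.
    public void incrementWithSynchronized() {
        synchronized (monitor) {
            count++;
        }
    }

    // Imperative style: we manage the lock explicitly, which also gives us
    // extras such as timed or interruptible acquisition.
    public boolean incrementWithLock() throws InterruptedException {
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {  // time-limited blocking
            try {
                count++;
                return true;
            } finally {
                lock.unlock();                            // must release manually
            }
        }
        return false;                                     // could not get the lock in time
    }
}
```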
Code resources
https://gitee.com/pingfanrenbiji/myconcurrent/blob/master/src/main/java/pers/hanchao/concurrent/reentrantLock/LockSupportTest.java
Reference articles
https://blog.csdn.net/u011669700/article/details/80070892
https://blog.csdn.net/yanyan19880509/article/details/52345422