synchronized
Since Java SE 1.6, a lock can be in one of four states: the unlocked state, the biased lock state (revoked when contention appears), the lightweight lock state (CAS spinning consumes CPU), and the heavyweight lock state. A lock can be upgraded but never downgraded, which improves the efficiency of acquiring and releasing locks.
synchronized is the basis for implementing synchronization in Java.
Every object in Java can be used as a synchronization lock. This takes three concrete forms (see the sketch below):
- For a normal synchronized method, the lock is the current instance object;
- For a static synchronized method, the lock is the Class object of the current class;
- For a synchronized block, the lock is the object written in the parentheses of synchronized.
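A minimal sketch of the three forms (class and field names are illustrative):

public class SyncForms {
    private static final Object LOCK = new Object();

    // Normal synchronized method: the lock is the current instance (this)
    public synchronized void instanceMethod() { /* ... */ }

    // Static synchronized method: the lock is SyncForms.class
    public static synchronized void staticMethod() { /* ... */ }

    // Synchronized block: the lock is the object in the parentheses
    public void blockMethod() {
        synchronized (LOCK) { /* ... */ }
    }
}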
When a thread attempts to access a synchronized block or method, it must first acquire the built-in lock, otherwise it is blocked and suspended. The built-in lock is released when the synchronized code exits normally, when an exception propagates out of it, or when one of the wait() family of methods is called on the lock object inside the synchronized block.
Note:
- synchronized is not inherited: an overriding method is not synchronized unless it declares the modifier itself (illustrated below), and the synchronized keyword cannot be applied to interface methods.
- Constructors cannot be declared synchronized either.
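A small illustration of the first note (hypothetical classes): an override of a synchronized method is not synchronized unless the modifier is declared again.

class Parent {
    public synchronized void doWork() { /* holds the lock on this */ }
}

class Child extends Parent {
    // The synchronized modifier of the parent method is NOT inherited;
    // this override runs without acquiring any lock unless it is declared
    // synchronized again.
    @Override
    public void doWork() { /* unsynchronized */ }
}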
Synchronized memory semantics
It solves the memory-visibility problem for shared variables:
- The semantics of entering a synchronized block: variables used inside the block are cleared from the thread's local (working) memory, so they are read directly from main memory instead of from stale local copies.
- The semantics of exiting a synchronized block: changes made to shared variables inside the block are flushed back to main memory.
Disadvantage: the thread-scheduling overhead caused by thread context switches.
The effect of synchronized
- Atomicity: synchronized ensures that the operations inside the block execute as an atomic unit (see the counter sketch below).
- Visibility: synchronized ensures that changes to a shared variable become visible to other threads in time (the variable is flushed back to main memory before the unlock is performed).
- Ordering: synchronized guarantees ordering (by allowing only one thread at a time to lock a variable).
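A classic illustration of these guarantees, using a hypothetical SafeCounter class: a shared counter whose increment (a read-modify-write) is made atomic and visible by synchronized.

public class SafeCounter {
    private int count = 0; // shared variable

    // Without synchronized, count++ is not atomic and its result may not be
    // visible to other threads in time; with it, increments never get lost.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}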
Implementation principle of synchronized
Before 1.6: the underlying implementation relied on the Monitor built into every Object, locking with monitorenter and unlocking with monitorexit. The Monitor depends on a mutex of the underlying operating system to achieve mutual exclusion; this requires a user-mode to kernel-mode transition, so it is costly and inefficient.
Both synchronized methods and synchronized code blocks are implemented on top of the monitor. The difference between the two:
- A synchronized code block is implemented with the monitorenter and monitorexit instructions.
- A synchronized method is implemented by setting the ACC_SYNCHRONIZED flag in the method's access_flags. (A synchronized method could also be implemented with the two instructions, but the JVM instead marks the method's access flags with ACC_SYNCHRONIZED and performs the synchronization itself.)
The Java® Virtual Machine Specification
Method-level synchronization is performed implicitly, as part of method invocation and return. A synchronized method is distinguished in the run-time constant pool's method_info structure by the ACC_SYNCHRONIZED flag, which is checked by the method invocation instructions. When invoking a method for which ACC_SYNCHRONIZED is set, the executing thread enters a monitor, invokes the method itself, and exits the monitor whether the method invocation completes normally or abruptly. During the time the executing thread owns the monitor, no other thread may enter it. If an exception is thrown during invocation of the synchronized method and the synchronized method does not handle the exception, the monitor for the method is automatically exited before the exception is rethrown out of the synchronized method.
Method-level synchronization is implicit. A synchronized method carries the ACC_SYNCHRONIZED flag in its method_info structure in the run-time constant pool. When a thread invokes the method, it checks for the ACC_SYNCHRONIZED flag and, if it is set, acquires the corresponding Monitor before executing the method. The Monitor is released when the method finishes (either by returning normally or by throwing an exception). If another thread tries to invoke the method in the meantime, it blocks because it cannot obtain the Monitor. If an exception is thrown inside a synchronized method and is not caught there, the acquired Monitor is released before the exception propagates out of the method.
Synchronized code block: the JVM uses the monitorenter and monitorexit instructions, i.e. it emits these two bytecode instructions before and after the block to implement synchronization.
- monitorenter:
Each object is associated with a monitor. A monitor is locked if and only if it has an owner. The thread that executes monitorenter attempts to gain ownership of the monitor associated with objectref, as follows:
- If the entry count of the monitor associated with objectref is zero, the thread enters the monitor and sets its entry count to one. The thread is then the owner of the monitor. If the thread already owns the monitor associated with objectref, it reenters the monitor, incrementing its entry count.
- If another thread already owns the monitor associated with objectref, the thread blocks until the monitor’s entry count is zero, then tries again to gain ownership.
The general meaning is as follows:
Each object is associated with a monitor. A monitor is locked when it has an owner. When a thread executes the monitorenter instruction, it tries to acquire ownership of the monitor, as follows:
- Each monitor maintains an entry counter. The counter is 0 for an unowned monitor; when a thread acquires the monitor (executes monitorenter), the counter becomes 1.
- If the same thread acquires the monitor again, the counter is incremented again;
- If a different thread attempts to acquire the monitor, it blocks.
- When the owning thread releases the monitor (executes the monitorexit instruction), the counter is decremented; when the counter reaches zero, the monitor is released and can be acquired by other threads.
- monitorexit
The thread decrements the entry count of the monitor associated with objectref. If as a result the value of the entry count is zero, the thread exits the monitor and is no longer its owner. Other threads that are blocking to enter the monitor are allowed to attempt to do so.
When a thread executes monitorexit, it decrements the monitor's entry counter by one. If the result is 0, the thread no longer owns the monitor, and other threads may then acquire it.
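The two forms can be inspected with javap -v on a small class like the following (illustrative class name; the exact javap output varies between JDK versions): the synchronized block compiles to explicit monitorenter/monitorexit instructions, while the synchronized method only gains the ACC_SYNCHRONIZED flag.

public class SyncBytecode {
    // javap -v shows ACC_SYNCHRONIZED among this method's flags;
    // its bytecode contains no monitorenter/monitorexit.
    public synchronized void syncMethod() {
        // ...
    }

    // javap -c shows a monitorenter before the block body and a monitorexit
    // after it, plus an extra monitorexit on the exception path.
    public void syncBlock() {
        synchronized (this) {
            // ...
        }
    }
}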
After Java SE 1.6: Biased locking and lightweight locking were introduced to reduce the performance cost of acquiring and releasing locks:
There are four lock states, from lowest to highest: unlocked, biased lock, lightweight lock, and heavyweight lock. A lock can be upgraded but not downgraded. This lock optimization is implemented through flag bits in the Java object header; acquiring and changing a lock essentially operates on the object header.
unlocked
Unlocked means the resource is not locked: all threads can access the same resource, but only one thread can modify it successfully at a time.
If the lock state of the synchronization object is unlocked (the lock flag bits are "01" and the biased-lock bit is "0"), the virtual machine first creates a space called a Lock Record in the current thread's stack frame, used to store a copy of the lock object's current Mark Word, officially called the Displaced Mark Word.
Biased locking
As the object-header layout shows, a biased lock adds a thread ID and an epoch field compared with the unlocked state.
The HotSpot authors found through research that in most cases a lock is not only free of multi-thread contention but is also acquired many times by the same thread. Biased locking was introduced for this case: its main purpose is to optimize repeated acquisition of the same lock by a single thread.
Biased lock acquisition
When a thread reaches a synchronized method, it determines the current lock state by checking the lock flag bits of the object header's Mark Word. If they are 01, the object is either unlocked or biased; the biased-lock bit then distinguishes the two cases.
The thread then uses CAS to try to lock the object. If the CAS that installs its ThreadID succeeds, this is the first time the object has been locked: the current thread acquires the biased lock of the object, and the current ThreadID and the epoch at which the lock was acquired are recorded in the object header's Mark Word. The synchronized block is then executed.
When the thread later enters the same synchronized method again, it checks whether the object's Mark Word still marks a biased lock and whether it points to its own thread ID. If so, the thread does not need to enter the Monitor at all and goes straight into the method body.
If a first thread has already acquired the biased lock of the object and a second thread then tries to grab it, the second thread will find that the Mark Word in the object header marks a biased lock whose thread ID is not its own. It then performs a CAS (Compare and Swap) to try to obtain the lock, with two possible outcomes:
- Lock acquired successfully: the thread ID in the Mark Word is simply changed from the first thread to the second (the biased-lock marker stays unchanged), so the object remains in the biased-lock state.
- Lock acquisition fails (the number of retries reaches the threshold): this means several threads may be trying to acquire the object's lock at the same time, so the biased lock is upgraded to a lightweight lock.
Biased lock revocation
A biased lock uses a mechanism that only releases the lock when contention occurs, so a thread holding a biased lock releases it only when another thread attempts to compete for it. Revoking a biased lock requires waiting for a global safe point (a point in time when no bytecode is executing). The thread holding the biased lock is first suspended; the JVM then checks whether the lock object is still locked, and after the biased lock is revoked the object reverts either to the unlocked state (flag bits "01") or to the lightweight-lock state (flag bits "00").
SafePoint: a safe point is a place in Java code where a thread can pause.
Disabling biased locking
Biased locking is enabled by default in Java 6 and Java 7. Since biased locking is intended to improve performance when only one thread executes a synchronized block, if you are sure that the locks in your application are normally contended, you can turn it off with the JVM option -XX:-UseBiasedLocking; the application will then go straight to lightweight locking.
Enabling biased locking
-XX:+UseBiasedLocking -XX:BiasedLockingStartupDelay=0
Lightweight lock
Locking process
When thread 1 acquires the lightweight lock, the JVM first creates a space (Lock Record) in thread 1's stack frame for storing the lock record, copies the Mark Word of the lock object's header into it (the Displaced Mark Word), and then uses CAS to replace the Mark Word in the object header with a pointer to thread 1's lock record.
If, while thread 1 is copying the object header (before thread 1's CAS), thread 2 also prepares to acquire the lock and copies the object header into its own lock record, then when thread 2 performs its CAS it finds that thread 1 has already changed the object header, so thread 2's CAS fails. Thread 2 then spins, waiting for thread 1 to release the lock.
Spinning cannot go on forever, because it consumes CPU, so the number of spins is limited, for example 10 or 100 times. If thread 1 still has not released the lock after the spin limit, or thread 1 is still executing while thread 2 keeps spinning and a third thread also contends for the lock object, the lightweight lock inflates to a heavyweight lock. A heavyweight lock blocks all threads except the owner, preventing the CPU from spinning idly.
Spin
The idea behind spinning: if the owner can release the lock in a very short time, contending threads can simply wait a little (the so-called spin). Spinning stays in user mode, saving the cost of a mode switch; when the owner releases the lock, a spinning thread may acquire it immediately, avoiding a system-level block. However, if the owner runs longer than a threshold and a contending thread has spun for some time without success, it stops spinning and blocks. The general strategy is therefore spin first, block later, minimizing the likelihood of blocking, which is a large performance win for code blocks with short execution times. Obviously, spinning only makes sense on multi-processor (multi-core) machines.
As for the number of spins, the JDK 1.5 default is 10. JDK 1.6 introduced adaptive spinning: the spin time is not fixed but is decided by the previous spin time on the same lock and the state of the lock owner; the basic heuristic is that one thread context-switch time is the ideal spin duration.
Flowchart (biased-lock acquisition included)
Heavyweight lock
Default lock form before JDK1.5.
If spinning fails (the lock still cannot be acquired), the lock inflates to a heavyweight lock; any thread that cannot acquire it then enters the Monitor (kernel state) and blocks.
A heavyweight lock is implemented through the object's internal Monitor, which in essence relies on the Mutex Lock of the underlying operating system. Switching between threads then requires switching from user mode to kernel mode, which is very expensive.
The complete locking flowchart
Comparison of the advantages and disadvantages of the lock types
Principle of synchronized reentrant implementation
ObjectMonitor() {
    _header       = NULL;
    _count        = 0;
    _waiters      = 0;
    _recursions   = 0;
    _object       = NULL;
    _owner        = NULL;
    _WaitSet      = NULL;
    _WaitSetLock  = 0;
    _Responsible  = NULL;
    _succ         = NULL;
    _cxq          = NULL;
    FreeNext      = NULL;
    _EntryList    = NULL;
    _SpinFreq     = 0;
    _SpinClock    = 0;
    OwnerIsThread = 0;
}
/* Key ObjectMonitor fields:
   _header            MarkOop object header
   _waiters           number of waiting threads
   _recursions        reentrancy count
   _owner             the thread that currently owns this ObjectMonitor
   _WaitSet           threads that called wait() inside the block go here
   _cxq / _EntryList  threads blocked while trying to acquire the lock */
synchronized is implemented on top of the monitor, which has a counter, _recursions, that is incremented by 1 each time the monitor is entered and decremented by 1 each time it is exited.
Note: when a subclass calls its parent class's synchronized method, the lock is reentrant as well: the counter in the Monitor is simply incremented by 1 again (see the sketch below).
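A sketch of that reentrant call (hypothetical classes):

class Father {
    public synchronized void doSomething() {
        System.out.println("father doSomething");
    }
}

class Son extends Father {
    @Override
    public synchronized void doSomething() {
        System.out.println("son doSomething");
        // Reentrant: the current thread already owns this object's monitor,
        // so _recursions is incremented instead of the thread blocking.
        super.doSomething();
    }
}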
Lock
Lock is an interface; ReentrantLock is its implementation.
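Typical usage (illustrative class): lock() and unlock() must be paired manually, with unlock() placed in a finally block.

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private final Lock lock = new ReentrantLock();
    private int value;

    public void update() {
        lock.lock();       // acquire the lock explicitly
        try {
            value++;       // critical section
        } finally {
            lock.unlock(); // always release in finally
        }
    }
}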
ReentrantLock is built on top of AQS (AbstractQueuedSynchronizer, the abstract queued synchronizer), whose important fields are as follows:
Node {
    volatile Node prev;
    volatile Node next;
    volatile Thread thread;
}
private transient volatile Node head; // head of the wait queue
private transient volatile Node tail; // tail of the wait queue
private volatile int state; // lock state: 0 = unlocked, 1 = locked, +1 for each reentrant acquisition
The Lock locking process
Calling lock.lock() invokes ReentrantLock's lock() method, which delegates to the abstract lock() method of the inner abstract class Sync (which extends AQS):
public void lock() {
    sync.lock();
}
The abstract lock() method of Sync has two overrides: the fair lock's lock() and the unfair lock's lock().
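Which of the two is used depends on the ReentrantLock constructor; a small sketch (illustrative class name):

import java.util.concurrent.locks.ReentrantLock;

public class LockFairnessDemo {
    // The no-arg constructor creates an unfair lock (NonfairSync)
    private final ReentrantLock unfairLock = new ReentrantLock();
    // Passing true creates a fair lock (FairSync)
    private final ReentrantLock fairLock = new ReentrantLock(true);

    public void show() {
        System.out.println(unfairLock.isFair()); // false
        System.out.println(fairLock.isFair());   // true
    }
}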
The lock() method of an unfair lock
/* Try to CAS the state directly; if the CAS fails to acquire the lock, fall back to acquire() */
final void lock() {
    if (compareAndSetState(0, 1))
        setExclusiveOwnerThread(Thread.currentThread());
    else
        acquire(1);
}
compareAndSetState()
/* compareAndSetState() takes the expected value and the new value and performs a CAS to try to acquire the lock */
protected final boolean compareAndSetState(int expect, int update) {
    // Delegates to a native method of Java's Unsafe class to perform the CAS
    return unsafe.compareAndSwapInt(this, stateOffset, expect, update);
}
The lock() method for fair locking
final void lock() {
    acquire(1);
}
acquire(1): the thread either acquires the lock or is enqueued.
public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}
tryAcquire()
Try to acquire the lock; AQS itself leaves the implementation to subclasses:
protected boolean tryAcquire(int arg) {
    throw new UnsupportedOperationException();
}
tryAcquire() has four overriding implementations:
FairSync in ReentrantLock
NonfairSync in ReentrantLock
Sync in ReentrantReadWriteLock
Worker in ThreadPoolExecutor
Fair lock
The fair lock's tryAcquire() adds a hasQueuedPredecessors() check, which guarantees fairness by ensuring that only the first thread in the AQS queue may attempt to acquire the lock resource.
protected final boolean tryAcquire(int acquires) {
    // Get the current thread
    final Thread current = Thread.currentThread();
    // Synchronization state: 0 means free (no thread holds the lock), non-zero means held
    int c = getState();
    // Check whether the synchronization state is free (c == 0)
    if (c == 0) {
        // 1. Decide whether the thread needs to queue:
        //    hasQueuedPredecessors() returns false -> no need to queue, try the CAS;
        //    if the CAS succeeds, tryAcquire() returns true; if it fails, tryAcquire() returns false.
        // 2. hasQueuedPredecessors() returns true -> threads are already queued,
        //    do nothing and tryAcquire() returns false.
        if (!hasQueuedPredecessors() &&
            compareAndSetState(0, acquires)) {
            // Not queued and the CAS succeeded: record the current thread as the lock owner
            setExclusiveOwnerThread(current);
            // protected final void setExclusiveOwnerThread(Thread thread) {
            //     exclusiveOwnerThread = thread;
            // }
            return true;
        }
    }
    // c != 0: if the thread holding the lock is the current thread,
    // ReentrantLock is reentrant, so the counter is increased
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        // When c == Integer.MAX_VALUE, adding 1 overflows to a negative value
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        // Update the reentrant count
        setState(nextc);
        return true;
    }
    // The lock is held by another thread: return false
    return false;
}
hasQueuedPredecessors()
The core of fairness: it decides whether the current thread may contend for the lock, based on whether the queue is empty and whether the node after the head node belongs to the current thread.
public final boolean hasQueuedPredecessors() {
    Node t = tail; // tail node
    Node h = head; // the head node's thread is always null
    Node s;
    /*
     * 1. Queue not initialized (h and t are both null): h != t is false,
     *    the method returns false and the thread does not need to queue.
     * 2. Queue initialized: h != t is true, then:
     *    (s = h.next) == null  -> another thread is in the middle of
     *    enqueueing (initialization done, the link not yet set), so the
     *    current thread must queue and is not eligible to CAS for the lock;
     *    s.thread != Thread.currentThread() -> someone else is first in
     *    the queue, so the current thread must queue;
     *    s.thread == Thread.currentThread() -> the current thread is the
     *    first queued thread (after the head node) and is eligible to CAS
     *    for the lock, so the method returns false.
     */
    return h != t &&
        ((s = h.next) == null || s.thread != Thread.currentThread());
}
The tryAcquire() method of the unfair lock (NonfairSync extends Sync)
protected final boolean tryAcquire(int acquires) {
    // Delegate to nonfairTryAcquire() in the parent class Sync
    return nonfairTryAcquire(acquires);
}
nonfairTryAcquire() works on the same principle as the fair lock's tryAcquire(), except that it does not check the queue before locking: it competes for the lock directly with CAS.
final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        // No queue check here: CAS for the lock directly
        if (compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    // Reentrancy: the current thread already holds the lock
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0) // overflow
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        // protected final void setState(int newState) {
        //     state = newState;
        // }
        return true;
    }
    return false;
}
acquireQueued(addWaiter(Node.EXCLUSIVE), arg)
If the node is not allowed to compete for the lock, or it competes and fails, it is enqueued. After enqueueing it spins once more to try to acquire the lock; if it is still not its turn or the attempt fails again, park is called to block the thread.
addWaiter(Node.EXCLUSIVE): enqueue the node
private Node addWaiter(Node mode) {
    // Create a Node for the current thread
    Node node = new Node(Thread.currentThread(), mode);
    // Read the current tail into a local variable
    Node pred = tail;
    // If the tail is not null, the queue is already initialized: insert the node directly
    if (pred != null) {
        // Set the node's predecessor to the old tail
        node.prev = pred;
        // CAS the tail pointer; the CAS protects against several threads
        // enqueueing at the same time, so nodes are enqueued atomically
        if (compareAndSetTail(pred, node)) {
            // Set the old tail's successor to the new node
            pred.next = node;
            return node;
        }
    }
    // The tail is null (queue not initialized) or the CAS failed:
    // enq() initializes the queue if necessary and inserts the node
    enq(node);
    // The caller then invokes acquireQueued() to try the lock once more;
    // if that fails, park blocks the thread
    return node;
}
enq(node): if the queue has not been initialized, enq() spins to initialize it and then inserts the node into the queue.
private Node enq(final Node node) {
    // Spin
    for (;;) {
        // Read the current tail into a local variable
        Node t = tail;
        // The tail is null: the queue has not been initialized yet
        if (t == null) { // Must initialize
            // CAS-install a head node whose thread is null;
            // new Node() calls the no-argument constructor
            if (compareAndSetHead(new Node()))
                tail = head;
        } else {
            // Insert into the queue, setting predecessor and successor
            node.prev = t;
            if (compareAndSetTail(t, node)) {
                t.next = node;
                // Returning terminates the loop
                return t;
            }
        }
    }
}
After the node is enqueued, acquireQueued() is called with it: the node spins once to try to acquire the lock (checking whether the previous thread has released it), and if it is not its turn to compete or the attempt fails, park is called to block the thread.
final boolean acquireQueued(final Node node, int arg) {
    boolean failed = true;
    try {
        boolean interrupted = false;
        // Start spinning
        for (;;) {
            // Get the node's predecessor
            final Node p = node.predecessor();
            // If the predecessor is the head node, this node is eligible to
            // compete for the lock, so try to acquire it once
            if (p == head && tryAcquire(arg)) {
                // Make this node the new head:
                // private void setHead(Node node) {
                //     head = node;
                //     node.thread = null;
                //     node.prev = null;
                // }
                setHead(node);
                // Unlink the old head so it can be garbage collected
                p.next = null; // help GC
                failed = false;
                // The lock was acquired; the thread continues normally
                return interrupted;
            }
            // The predecessor is not the head (someone is queued ahead) or the
            // acquire attempt failed:
            // shouldParkAfterFailedAcquire() sets the predecessor's waitStatus
            // to SIGNAL; once that is done, park blocks the current thread
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                interrupted = true;
        }
    } finally {
        if (failed)
            cancelAcquire(node);
    }
}
shouldParkAfterFailedAcquire() sets the predecessor node's waitStatus to SIGNAL (-1), meaning "wake me up when you release the lock". A thread marks its predecessor before blocking, not its own node.
private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) {
    int ws = pred.waitStatus;
    if (ws == Node.SIGNAL)
        /*
         * The predecessor is already set to signal this node,
         * so the thread can safely park.
         */
        return true;
    if (ws > 0) {
        /*
         * Predecessor was cancelled. Skip over predecessors and
         * indicate retry.
         */
        do {
            node.prev = pred = pred.prev;
        } while (pred.waitStatus > 0);
        pred.next = node;
    } else {
        /*
         * waitStatus must be 0 or PROPAGATE. Indicate that we
         * need a signal, but don't park yet. Caller will need to
         * retry to make sure it cannot acquire before parking.
         */
        compareAndSetWaitStatus(pred, ws, Node.SIGNAL);
    }
    return false;
}
parkAndCheckInterrupt(): blocks the current thread until unpark() wakes it up.
private final boolean parkAndCheckInterrupt() {
    // Block the current thread
    LockSupport.park(this);
    return Thread.interrupted();
}
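A standalone sketch of the LockSupport primitive that parkAndCheckInterrupt() relies on (an illustrative demo, not AQS source):

import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            System.out.println("worker: parking");
            LockSupport.park();           // blocks here, like a queued AQS node
            System.out.println("worker: unparked, continuing");
        });
        worker.start();

        Thread.sleep(1000);               // simulate the lock being held for a while
        LockSupport.unpark(worker);       // like release() waking the successor node
    }
}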
The unlock process
Unlocking starts with lock.unlock():
public void unlock() {
    sync.release(1);
}
release(1);
public final boolean release(int arg) {
    if (tryRelease(arg)) {
        Node h = head;
        if (h != null && h.waitStatus != 0)
            unparkSuccessor(h);
        return true;
    }
    return false;
}
tryRelease(): attempt to release the lock
protected final boolean tryRelease(int releases) {
    // Decrement the AQS state by the number of releases
    int c = getState() - releases;
    // If the current thread is not the thread holding the lock, throw an exception
    if (Thread.currentThread() != getExclusiveOwnerThread())
        throw new IllegalMonitorStateException();
    boolean free = false;
    // If the reentrant count drops to 0, the owner is cleared;
    // otherwise only the thread's reentrant count is decremented
    if (c == 0) {
        free = true;
        setExclusiveOwnerThread(null);
    }
    // Update the AQS state
    setState(c);
    // Return whether the lock has been fully released
    return free;
}
Differences between ReentrantLock and synchronized
Similarities:
1. Both are mutually exclusive synchronization locks;
2. Both are reentrant locks;
3. Both are blocking forms of synchronization: once a thread has acquired the lock and entered the synchronized block, other threads must block outside the block and wait. Blocking and waking a thread is relatively expensive, because the operating system needs to switch back and forth between user mode and kernel mode, although lock optimizations can help reduce this cost.
Differences:
- Original form: synchronized is a Java language keyword, a mutex at the native syntax level implemented by the JVM; ReentrantLock (which implements the Lock interface) is an API-level mutex class provided since JDK 1.5.
- Locking and unlocking: synchronized is locked and unlocked by the JVM; with Lock this happens at the API level, in code.
- Releasing the lock: synchronized does not require the user to release the lock manually; when the synchronized method or block finishes, the JVM automatically releases the thread's hold on the lock, which is safer. ReentrantLock requires the user to release the lock manually; if it is never released, a deadlock can occur, so lock() and unlock() must be paired with a try/finally block.
- Flexibility: with synchronized the scope of the lock is the whole method or the synchronized block; Lock is more flexible because it is an ordinary method call and can span methods.
- Interruptible waiting: a thread waiting on synchronized cannot be interrupted unless an exception is thrown (the lock is released when 1. the code finishes normally, or 2. an exception is thrown and the JVM exits the block). With ReentrantLock, if the thread holding the lock does not release it for a long time, the waiting thread can choose to give up: 1. tryLock(long timeout, TimeUnit unit); 2. lockInterruptibly() combined with interrupt() (interrupt() does not break a synchronized wait).
- Fairness: synchronized is an unfair lock; ReentrantLock's constructor accepts a boolean, true for a fair lock and false for an unfair lock.
- Condition: ReentrantLock can obtain multiple Condition objects through newCondition(), which makes it easy to implement more complex thread coordination.
- Advanced features: ReentrantLock provides many methods for inspecting the current lock, such as getHoldCount(), getQueueLength(), isFair(), isHeldByCurrentThread(), isLocked().
- Convenience: synchronized is simpler and more convenient to use, and the compiler and JVM guarantee that the lock is acquired and released; ReentrantLock requires explicit declarations to lock and release.
- Applicability: synchronized suits situations where contention for resources is not fierce and synchronization is only occasional, because the JVM optimizes it as much as possible and the code remains very readable. ReentrantLock provides a variety of synchronization modes, such as timed and interruptible acquisition; under low contention its performance is slightly worse than synchronized, but when contention is intense synchronized performance can drop by tens of times while ReentrantLock remains stable.
Important differences between Lock and the synchronized keyword in how locks are handled:
Lock acquisition: the former is acquired explicitly by the developer in program code; the latter is acquired by the JVM (no developer intervention).
Implementation: the former is implemented in Java code; the latter is implemented inside the JVM (the developer does not need to care).
Release: the former must be released manually in a finally block with unlock(); the latter is released by the JVM (no developer intervention).
Specific lock types: the former provides several kinds of locks, such as fair and unfair locks; both the former and the latter provide reentrant locks.
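A sketch of the two interruptible-waiting options listed above (illustrative class; the critical sections are placeholders):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleWaitDemo {
    private final ReentrantLock lock = new ReentrantLock();

    // Give up after a timeout instead of blocking forever
    public boolean doWithTimeout() throws InterruptedException {
        if (lock.tryLock(1, TimeUnit.SECONDS)) {
            try {
                return true;      // critical section
            } finally {
                lock.unlock();
            }
        }
        return false;             // could not acquire the lock within 1 second
    }

    // The wait itself can be cancelled via thread.interrupt()
    public void doInterruptibly() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            // critical section
        } finally {
            lock.unlock();
        }
    }
}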