This is the third article in the Synchronized low-level implementation series, covering the heavyweight lock implementation.

This series provides a comprehensive analysis of HotSpot's synchronized implementation, including biased locking, lightweight locking, heavyweight locking, lock release, and the lock-upgrade process, with both the underlying principles and source-code analysis. I hope it helps anyone studying synchronized. It consists of the following articles:

Synchronized low-level implementation — An introduction

Synchronized low-level implementation — biased locking

Synchronized low-level implementation — lightweight locking

Synchronized low-level implementation — heavyweight locking

More articles can be found on my blog: github.com/farmerjohng…

Heavyweight lock inflation and acquisition

When multiple threads compete for the lock at the same time, the synchronizer.cpp#slow_enter method is entered:

void ObjectSynchronizer::slow_enter(Handle obj, BasicLock* lock, TRAPS) {
  markOop mark = obj->mark();
  assert(!mark->has_bias_pattern(), "should not see bias pattern here");
  // Lock-free state
  if (mark->is_neutral()) {
    // Save the mark word into the Lock Record and try to CAS the object header
    // to point to the Lock Record (lightweight locking)
    lock->set_displaced_header(mark);
    if (mark == (markOop) Atomic::cmpxchg_ptr(lock, obj()->mark_addr(), mark)) {
      TEVENT(slow_enter: release stacklock);
      return;
    }
    // Fall through to inflate() ...
  } else if (mark->has_locker() && THREAD->is_lock_owned((address)mark->locker())) {
    // Lightweight lock reentry
    assert(lock != mark->locker(), "must not re-lock the same lock");
    assert(lock != (BasicLock*)obj->mark(), "don't relock with same BasicLock");
    lock->set_displaced_header(NULL);
    return;
  }
  ...
  // At this point the lock must be inflated to a heavyweight lock. Set the
  // Displaced Mark Word to a special value indicating the lock now uses a
  // heavyweight monitor.
  lock->set_displaced_header(markOopDesc::unused_mark());
  // inflate() returns an ObjectMonitor object; then call its enter() method
  ObjectSynchronizer::inflate(THREAD, obj())->enter(THREAD);
}

The inflation process is carried out in inflate:

ObjectMonitor* ATTR ObjectSynchronizer::inflate(Thread* Self, oop object) {
  ...
  for (;;) {
      const markOop mark = object->mark();
      assert(!mark->has_bias_pattern(), "invariant");
      // The mark word is in one of the following states:
      // * inflated (heavyweight lock)     - return directly
      // * stack-locked (lightweight lock) - inflate
      // * INFLATING (busy)                - busy-wait until inflation completes
      // * neutral (no lock)               - inflate
      // * BIASED (biased lock)            - illegal state, cannot appear here

      // CASE: inflated
      if (mark->has_monitor()) {
          // Already in the heavyweight-locked state
          ObjectMonitor* inf = mark->monitor();
          ...
          return inf;
      }

      // CASE: inflation in progress
      if (mark == markOopDesc::INFLATING()) {
         // Another thread is currently inflating this lock; retry
         TEVENT(Inflate: spin while INFLATING);
         // Spin/yield/park until inflation is complete
         ReadStableMark(object);
         continue;
      }

      // CASE: stack-locked (lightweight lock)
      if (mark->has_locker()) {
          // Allocate an ObjectMonitor object and initialize it
          ObjectMonitor* m = omAlloc(Self);

          m->Recycle();
          m->_Responsible  = NULL;
          m->OwnerIsThread = 0;
          m->_recursions   = 0;
          m->_SpinDuration = ObjectMonitor::Knob_SpinLimit;   // Consider: maintain by type/class
          // CAS the lock object's mark word to INFLATING (0)
          markOop cmp = (markOop) Atomic::cmpxchg_ptr(markOopDesc::INFLATING(), object->mark_addr(), mark);
          if (cmp != mark) {
             omRelease(Self, m, true);
             continue;       // Interference -- just retry
          }

          // Read the Displaced Mark Word stored in the owner's Lock Record
          markOop dmw = mark->displaced_mark_helper();
          assert(dmw->is_neutral(), "invariant");
          // Set the monitor's header field to the Displaced Mark Word
          m->set_header(dmw);
          // The owner is the Lock Record
          m->set_owner(mark->locker());
          m->set_object(object);
          ...
          // Set the lock object's header to the heavyweight-locked state
          object->release_set_mark(markOopDesc::encode(m));
          ...
          return m;
      }

      // CASE: neutral (no lock)

      // Allocate and initialize an ObjectMonitor object
      ObjectMonitor* m = omAlloc(Self);
      // prepare m for installation - set monitor to initial state
      m->Recycle();
      m->set_header(mark);
      // owner is NULL
      m->set_owner(NULL);
      m->set_object(object);
      m->OwnerIsThread = 1;
      m->_recursions   = 0;
      m->_Responsible  = NULL;
      m->_SpinDuration = ObjectMonitor::Knob_SpinLimit;       // consider: keep metastats by type/class
      // CAS the object header's mark word to the heavyweight-locked state
      if (Atomic::cmpxchg_ptr(markOopDesc::encode(m), object->mark_addr(), mark) != mark) {
          // Failure means another thread completed the inflation; release the monitor object
          m->set_object(NULL);
          m->set_owner(NULL);
          m->OwnerIsThread = 0;
          m->Recycle();
          omRelease(Self, m, true);
          m = NULL;
          continue;
          // interference - the markword changed - just retry.
          // The state-transitions are one-way, so there's no chance of
          // live-lock -- "Inflated" is an absorbing state.
      }
      ...
      return m;
  }
}

Inside inflate is a for loop, mainly to handle the case where multiple threads call inflate simultaneously. The lock object is then handled differently according to its state:

1. If it is already in the heavyweight state, inflation is complete; return the monitor

2. If it is a lightweight lock, inflate it

3. If it is currently being inflated (INFLATING), busy-wait

4. If it is lock-free, inflate it

Both the lightweight-lock and lock-free states need inflation. The lightweight-lock inflation process is as follows:

1. Call omAlloc to allocate an ObjectMonitor object (monitor). omAlloc first allocates from the current thread's omFreeList; if that list is empty, a batch of monitors is allocated into omFreeList from the JVM's global gFreeList.

2. Initialize the Monitor object

3. Set the status to INFLATING

4. Set the header field of the monitor to the Displaced Mark Word, the owner field to the Lock Record, and the obj field to the lock object

5. Set the mark Word of the lock object header to the heavyweight lock state, pointing to the Monitor object assigned in the first step

The inflation process in the lock-free state is as follows:

1. Call omAlloc to assign an ObjectMonitor object (monitor)

2. Initialize the Monitor object

3. Set the header field of monitor to Mark word, owner field to NULL, and obj field to lock object

4. Set the mark Word of the lock object header to the heavyweight lock state, pointing to the Monitor object assigned in the first step
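The shape of this loop can be sketched as a simplified Java model (illustrative only, not HotSpot code; the class and field names here are made up). The "mark word" is modeled as an AtomicReference that is CASed from its current state to a monitor pointer, retrying on interference. The stack-locked path's intermediate CAS to INFLATING is omitted; only the busy-wait on INFLATING is shown:

```java
import java.util.concurrent.atomic.AtomicReference;

// A simplified model of the inflation loop (illustrative only, NOT HotSpot code).
// InflationSketch, Monitor, NEUTRAL and INFLATING are all hypothetical names.
public class InflationSketch {
    static final Object NEUTRAL   = "NEUTRAL";    // no lock
    static final Object INFLATING = "INFLATING";  // plays the role of the 0 value

    static class Monitor { Object header; }       // stands in for ObjectMonitor

    final AtomicReference<Object> mark = new AtomicReference<>(NEUTRAL);

    Monitor inflate() {
        for (;;) {
            Object m = mark.get();
            if (m instanceof Monitor) return (Monitor) m; // CASE: already inflated
            if (m == INFLATING) continue;                 // CASE: busy-wait (ReadStableMark)
            // CASE: neutral - allocate and initialize a monitor, then CAS it in
            Monitor mon = new Monitor();
            mon.header = m;                               // save the old mark word
            if (mark.compareAndSet(m, mon)) return mon;   // "inflated" is an absorbing state
            // CAS failed: another thread interfered - just retry
        }
    }
}
```

Every thread that races through inflate ends up returning the same Monitor instance, mirroring how "inflated" is an absorbing state in the real code.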

As for why lightweight locks need an INFLATING state, a comment in the code reads:

// Why do we CAS a 0 into the mark-word instead of just CASing the
// mark-word from the stack-locked value directly to the new inflated state?
// Consider what happens when a thread unlocks a stack-locked object.
// It attempts to use CAS to swing the displaced header value from the
// on-stack basiclock back into the object header. Recall also that the
// header value (hashcode, etc) can reside in (a) the object header, or
// (b) a displaced header associated with the stack-lock, or (c) a displaced
// header in an objectMonitor. The inflate() routine must copy the header
// value from the basiclock on the owner's stack to the objectMonitor, all
// the while preserving the hashCode stability invariants. If the owner
// decides to release the lock while the value is 0, the unlock will fail
// and control will eventually pass from slow_exit() to inflate. The owner
// will then spin, waiting for the 0 value to disappear. Put another way,
// the 0 causes the owner to stall if the owner happens to try to
// drop the lock (restoring the header from the basiclock to the object)
// while inflation is in-progress. This protocol avoids races that might
// would otherwise permit hashCode values to change or "flicker" for an object.
// Critically, while object->mark is 0 mark->displaced_mark_helper() is stable.
// 0 serves as a "BUSY" inflate-in-progress indicator.

I don't fully understand this part myself; if you do, please enlighten me ~

After inflation completes, the enter method is called to acquire the lock:

void ATTR ObjectMonitor::enter(TRAPS) {

  Thread * const Self = THREAD;
  void * cur;
  // If owner is NULL and the CAS succeeds, the current thread acquires the lock directly
  cur = Atomic::cmpxchg_ptr(Self, &_owner, NULL);
  if (cur == NULL) {
     ...
     return;
  }
  // Reentrant case
  if (cur == Self) {
     // TODO-FIXME: check for integer overflow!  BUGID 6557169.
     _recursions++;
     return;
  }
  // The current thread is the one that previously held the lightweight lock. This is
  // the first enter() call since inflation from the lightweight lock, so cur is a
  // pointer to the Lock Record
  if (Self->is_lock_owned((address)cur)) {
    assert(_recursions == 0, "internal state error");
    // Reset the reentrancy count to 1
    _recursions = 1;
    // Set the owner field to the current thread (it was a pointer to the Lock Record)
    _owner = Self;
    OwnerIsThread = 1;
    return;
  }
  ...
  // Try spinning for the lock before falling back to an OS-level synchronization operation
  if (Knob_SpinEarly && TrySpin(Self) > 0) {
     ...
     // Acquired the lock while spinning; return directly
     Self->_Stalled = 0;
     return;
  }
  ...
  {
    ...
    for (;;) {
      jt->set_suspend_equivalent();
      // The OS-level synchronization operation happens inside this method
      EnterI(THREAD);
      ...
    }
    Self->set_current_pending_monitor(NULL);
  }
  ...
}
To summarize the enter method:

  1. If the lock is currently free, the lock is being re-entered, or the current thread is the one that previously held the lightweight lock, return directly;
  2. Try to acquire the lock by spinning first, to reduce the overhead of performing an OS-level synchronization operation;
  3. Call the EnterI method to acquire the lock or block.
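The fast paths above can be sketched as a simplified Java model (not HotSpot code; EnterSketch and its members are hypothetical names): an owner field updated with CAS, plus a reentrancy counter:

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch of the enter()/exit() fast paths - NOT HotSpot code.
public class EnterSketch {
    final AtomicReference<Thread> owner = new AtomicReference<>();
    int recursions = 0;   // only touched by the owning thread

    boolean tryEnter() {
        Thread self = Thread.currentThread();
        if (owner.compareAndSet(null, self)) return true;  // lock free: acquired via CAS
        if (owner.get() == self) {                         // re-entry: bump the counter
            recursions++;
            return true;
        }
        return false;  // contended: the real code would spin, then enqueue and park (EnterI)
    }

    void exit() {
        if (recursions != 0) { recursions--; return; }     // unwind one re-entry
        owner.set(null);                                   // drop the lock
    }
}
```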

The EnterI method is quite long, but before we look at it, let’s explain the general principle:

An ObjectMonitor object contains several key fields: CXQ (ContentionList in the figure below), EntryList, WaitSet, and Owner.

CXQ, EntryList, and WaitSet are all linked lists of ObjectWaiter, with owner pointing to the thread holding the lock.

When a thread attempts to acquire the lock and the lock is already held, the thread is wrapped in an ObjectWaiter object, inserted at the head of the cxq queue, and the park function is called to suspend the current thread. On Linux, park ultimately calls pthread_cond_wait from glibc, and the JDK's ReentrantLock suspends threads the same way. More details can be found in my previous two articles: "A few thoughts on synchronization" and "Futex, a Linux kernel-level synchronization mechanism".
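On the JDK side, the same suspend/resume primitive is exposed as LockSupport.park/unpark. A minimal demonstration (the 100 ms sleep is only there to let the thread park before it is unparked; park would also return immediately if the permit were granted first):

```java
import java.util.concurrent.locks.LockSupport;

// Minimal demonstration of the park/unpark primitive that both the JVM monitor
// and java.util.concurrent use to suspend and resume threads.
public class ParkDemo {
    public static String run() {
        StringBuilder log = new StringBuilder();
        Thread waiter = new Thread(() -> {
            log.append("parking;");
            LockSupport.park();      // suspend until unparked
            log.append("resumed");
        });
        waiter.start();
        try { Thread.sleep(100); } catch (InterruptedException ignored) {}
        LockSupport.unpark(waiter);  // grant the permit, waking the parked thread
        try { waiter.join(); } catch (InterruptedException ignored) {}
        return log.toString();       // join() makes the appends visible to us
    }
}
```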

When the lock is released, one thread is picked from cxq or EntryList and woken; the chosen thread is called the heir presumptive. The woken heir then tries to acquire the lock. Synchronized is unfair, though, so the presumed heir doesn't necessarily get the lock (which is why it's called the "presumed" heir).

If a thread calls Object#wait after it has acquired the lock, it is added to the WaitSet, and when awakened by Object#notify it is moved from the WaitSet to cxq or EntryList. Note that calling wait or notify on a lock object that is currently in the biased or lightweight-lock state will first inflate it to a heavyweight lock.

Synchronized’s Monitor locking mechanism is similar to JDK ReentrantLock and Condition. ReentrantLock also has a linked list of threads waiting to acquire the lock. The Condition also has a waitSet-like collection to hold threads that have called await. If you are familiar with ReentrantLock, it should be easy to understand Monitor.

Back to the code, start analyzing the EnterI method:

void ATTR ObjectMonitor::EnterI(TRAPS) {
    Thread * Self = THREAD;
    ...
    // Try to acquire the lock
    if (TryLock(Self) > 0) {
        ...
        return;
    }

    DeferredInitialize();

    // Spin
    if (TrySpin(Self) > 0) {
        ...
        return;
    }
    ...
    // Wrap the current thread in a node
    ObjectWaiter node(Self);
    Self->_ParkEvent->reset();
    node._prev   = (ObjectWaiter *) 0xBAD;
    node.TState  = ObjectWaiter::TS_CXQ;

    // Insert the node at the head of the _cxq queue; cxq is a singly linked list
    ObjectWaiter * nxt;
    for (;;) {
        node._next = nxt = _cxq;
        if (Atomic::cmpxchg_ptr(&node, &_cxq, nxt) == nxt) break;

        // The CAS failed; try to acquire the lock again, to reduce the frequency of
        // inserts into the _cxq queue
        if (TryLock(Self) > 0) {
            ...
            return;
        }
    }

    // SyncFlags defaults to 0; if there are no other waiting threads, set _Responsible to self
    if ((SyncFlags & 16) == 0 && nxt == NULL && _EntryList == NULL) {
        Atomic::cmpxchg_ptr(Self, &_Responsible, NULL);
    }

    TEVENT(Inflated enter - Contention);
    int nWakeups = 0;
    int RecheckInterval = 1;

    for (;;) {

        if (TryLock(Self) > 0) break;
        assert(_owner != Self, "invariant");
        ...
        // park self
        if (_Responsible == Self || (SyncFlags & 1)) {
            // The current thread is _Responsible, so call the timed park
            TEVENT(Inflated enter - park TIMED);
            Self->_ParkEvent->park((jlong) RecheckInterval);
            // Increase the RecheckInterval, but clamp the value.
            RecheckInterval *= 8;
            if (RecheckInterval > 1000) RecheckInterval = 1000;
        } else {
            // Otherwise call park directly to suspend the current thread
            TEVENT(Inflated enter - park UNTIMED);
            Self->_ParkEvent->park();
        }

        if (TryLock(Self) > 0) break;
        ...
        if ((Knob_SpinAfterFutile & 1) && TrySpin(Self) > 0) break;
        ...
        // When the lock is released, _succ is set to a thread in EntryList or cxq
        if (_succ == Self) _succ = NULL;

        // Invariant: after clearing _succ a thread *must* retry _owner before parking.
        OrderAccess::fence();
    }

    // The lock has been acquired

    assert(_owner == Self, "invariant");
    assert(object() != NULL, "invariant");
    // Remove the current thread's node from cxq or EntryList
    UnlinkAfterAcquire(Self, &node);
    if (_succ == Self) _succ = NULL;
    if (_Responsible == Self) {
        _Responsible = NULL;
        OrderAccess::fence();
    }
    ...
    return;
}

There are three main steps:

  1. Insert the current thread at the head of the cxq queue
  2. Park the current thread
  3. When woken, try to acquire the lock again
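Step 1 can be sketched with an AtomicReference head pointer (a simplified model with hypothetical names; as the code above shows, the real implementation also retries TryLock between failed CASes):

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of step 1 of EnterI: pushing a waiter node onto the head of the cxq,
// a singly linked list updated with CAS (Treiber-stack style). Illustrative only.
public class CxqSketch {
    static class Node {
        final Thread thread;
        Node next;
        Node(Thread t) { thread = t; }
    }

    final AtomicReference<Node> cxq = new AtomicReference<>();

    void push(Node node) {
        for (;;) {
            Node head = cxq.get();
            node.next = head;                    // link the node to the current head
            if (cxq.compareAndSet(head, node)) { // publish the new head atomically
                return;
            }
            // CAS failed: another thread pushed concurrently; the real code would
            // re-attempt TryLock here before looping, to reduce queue pressure
        }
    }
}
```

Because every waiter is pushed at the head, the most recently arrived thread sits first in cxq, which matters for the wakeup order discussed later.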

In particular, the _Responsible and _succ fields are used:

When contention occurs, one thread is chosen as _Responsible. The _Responsible thread calls the time-limited park method, to prevent threads from being stranded (a missed wakeup leaving every waiter parked forever).

The _succ field is set when a thread releases the lock; it designates the heir presumptive, i.e. the presumed heir described above.
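The RecheckInterval backoff used by the _Responsible thread can be sketched as follows (the park call itself is elided; the interval values mirror the `*= 8` and clamp-at-1000 logic in the EnterI code above):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the _Responsible thread's timed-park backoff in EnterI.
// Illustrative only; BackoffSketch is a hypothetical name.
public class BackoffSketch {
    static List<Integer> intervals(int rounds) {
        List<Integer> out = new ArrayList<>();
        int recheckInterval = 1;
        for (int i = 0; i < rounds; i++) {
            out.add(recheckInterval);   // Self->_ParkEvent->park(recheckInterval) would go here
            recheckInterval *= 8;       // increase the interval ...
            if (recheckInterval > 1000) recheckInterval = 1000;  // ... but clamp it
        }
        return out;
    }
}
```

So the timed parks grow as 1, 8, 64, 512 ms and then stay capped at 1000 ms.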

Release of heavyweight locks

The heavyweight lock release code is in ObjectMonitor::exit:

void ATTR ObjectMonitor::exit(bool not_suspended, TRAPS) {
   Thread * Self = THREAD;
   // If _owner is not the current thread
   if (THREAD != _owner) {
     // The current thread is the one that previously held the lightweight lock, and
     // enter() has not been called since inflation from the lightweight lock, so
     // _owner is still a pointer to the Lock Record
     if (THREAD->is_lock_owned((address) _owner)) {
       assert(_recursions == 0, "invariant");
       _owner = THREAD;
       _recursions = 0;
       OwnerIsThread = 1;
     } else {
       // Exception: the current thread does not hold the lock
       TEVENT(Exit - Throw IMSX);
       assert(false, "Non-balanced monitor enter/exit!");
       if (false) {
          THROW(vmSymbols::java_lang_IllegalMonitorStateException());
       }
       return;
     }
   }
   // If the reentrancy counter is not 0, decrement it and return
   if (_recursions != 0) {
     _recursions--;        // this is simple recursive enter
     TEVENT(Inflated exit - recursive);
     return;
   }

   // Set _Responsible to NULL
   if ((SyncFlags & 4) == 0) {
      _Responsible = NULL;
   }
   ...
   for (;;) {
      assert(THREAD == _owner, "invariant");
      // Knob_ExitPolicy defaults to 0
      if (Knob_ExitPolicy == 0) {
         // code 1: release the lock first; another thread can now acquire it
         OrderAccess::release_store_ptr(&_owner, NULL);   // drop the lock
         OrderAccess::storeload();                        // See if we need to wake a successor
         // code 2: no waiting threads, or a presumed heir already exists
         if ((intptr_t(_EntryList)|intptr_t(_cxq)) == 0 || _succ != NULL) {
            TEVENT(Inflated exit - simple egress);
            return;
         }
         TEVENT(Inflated exit - complex egress);

         // code 3: the lock must be reacquired (set _owner back to the current thread)
         // so the queues can be manipulated below
         if (Atomic::cmpxchg_ptr(THREAD, &_owner, NULL) != NULL) {
            return;
         }
         TEVENT(Exit - Reacquired);
      }
      ...
      ObjectWaiter * w = NULL;
      // code 4: different wakeup strategies depending on QMode; the default is 0
      int QMode = Knob_QMode;

      if (QMode == 2 && _cxq != NULL) {
          // QMode == 2: cxq threads have higher priority; wake the head of cxq directly
          w = _cxq;
          assert(w != NULL, "invariant");
          assert(w->TState == ObjectWaiter::TS_CXQ, "Invariant");
          ExitEpilog(Self, w);
          return;
      }

      if (QMode == 3 && _cxq != NULL) {
          // Append the cxq elements to the tail of the EntryList
          w = _cxq;
          for (;;) {
             assert(w != NULL, "Invariant");
             ObjectWaiter * u = (ObjectWaiter *) Atomic::cmpxchg_ptr(NULL, &_cxq, w);
             if (u == w) break;
             w = u;
          }
          assert(w != NULL, "invariant");
          ObjectWaiter * q = NULL;
          ObjectWaiter * p;
          for (p = w; p != NULL; p = p->_next) {
              guarantee(p->TState == ObjectWaiter::TS_CXQ, "Invariant");
              p->TState = ObjectWaiter::TS_ENTER;
              p->_prev = q;
              q = p;
          }

          // Append the RATs to the EntryList
          // TODO: organize EntryList as a CDLL so we can locate the tail in constant-time.
          ObjectWaiter * Tail;
          for (Tail = _EntryList; Tail != NULL && Tail->_next != NULL; Tail = Tail->_next);
          if (Tail == NULL) {
              _EntryList = w;
          } else {
              Tail->_next = w;
              w->_prev = Tail;
          }

          // Fall thru into code that tries to wake a successor from EntryList
      }

      if (QMode == 4 && _cxq != NULL) {
          // Prepend the cxq elements to the head of the EntryList
          w = _cxq;
          for (;;) {
             assert(w != NULL, "Invariant");
             ObjectWaiter * u = (ObjectWaiter *) Atomic::cmpxchg_ptr(NULL, &_cxq, w);
             if (u == w) break;
             w = u;
          }
          assert(w != NULL, "invariant");
          ObjectWaiter * q = NULL;
          ObjectWaiter * p;
          for (p = w; p != NULL; p = p->_next) {
              guarantee(p->TState == ObjectWaiter::TS_CXQ, "Invariant");
              p->TState = ObjectWaiter::TS_ENTER;
              p->_prev = q;
              q = p;
          }

          // Prepend the RATs to the EntryList
          if (_EntryList != NULL) {
              q->_next = _EntryList;
              _EntryList->_prev = q;
          }
          _EntryList = w;

          // Fall thru into code that tries to wake a successor from EntryList
      }

      w = _EntryList;
      if (w != NULL) {
          // If the EntryList is not empty, wake its head element directly
          assert(w->TState == ObjectWaiter::TS_ENTER, "invariant");
          ExitEpilog(Self, w);
          return;
      }

      // The EntryList is empty; process the cxq elements
      w = _cxq;
      if (w == NULL) continue;

      // Detach cxq (set the field to NULL) because its elements will be moved to the
      // EntryList below
      for (;;) {
          assert(w != NULL, "Invariant");
          ObjectWaiter * u = (ObjectWaiter *) Atomic::cmpxchg_ptr(NULL, &_cxq, w);
          if (u == w) break;
          w = u;
      }
      TEVENT(Inflated exit - drain cxq into EntryList);
      assert(w != NULL, "invariant");
      assert(_EntryList == NULL, "invariant");

      if (QMode == 1) {
         // QMode == 1: transfer the cxq elements to the EntryList, reversing their order
         ObjectWaiter * s = NULL;
         ObjectWaiter * t = w;
         ObjectWaiter * u = NULL;
         while (t != NULL) {
             guarantee(t->TState == ObjectWaiter::TS_CXQ, "invariant");
             t->TState = ObjectWaiter::TS_ENTER;
             u = t->_next;
             t->_prev = u;
             t->_next = s;
             s = t;
             t = u;
         }
         _EntryList = s;
         assert(s != NULL, "invariant");
      } else {
         // QMode == 0 or QMode == 2
         // Move the cxq elements to the EntryList, preserving their order
         _EntryList = w;
         ObjectWaiter * q = NULL;
         ObjectWaiter * p;
         for (p = w; p != NULL; p = p->_next) {
             guarantee(p->TState == ObjectWaiter::TS_CXQ, "Invariant");
             p->TState = ObjectWaiter::TS_ENTER;
             p->_prev = q;
             q = p;
         }
      }

      // _succ is not NULL: a presumed heir already exists, so the current thread does
      // not need to wake anyone, reducing context switches
      if (_succ != NULL) continue;

      w = _EntryList;
      // Wake the head element of the EntryList
      if (w != NULL) {
          guarantee(w->TState == ObjectWaiter::TS_ENTER, "invariant");
          ExitEpilog(Self, w);
          return;
      }
   }
}

After the necessary reentrancy handling, the main logic begins:

Code 1 sets _owner to NULL, releasing the lock; at this point any other thread can acquire it. This is an unfair-lock optimization;

Code 2 simply returns if no threads are currently waiting, since nobody needs to be woken. Likewise, if _succ is not NULL, an "heir" thread is already awake, so the current thread need not wake anyone;

Code 3 makes the current thread reacquire the lock, because the cxq and EntryList queues are manipulated and a thread is woken below;

Code 4 executes a different wakeup strategy depending on QMode;

The different QMode values are handled as follows:

  1. QMode == 2 and cxq is not empty: take the ObjectWaiter at the head of cxq and call ExitEpilog, which wakes that ObjectWaiter's thread, then return immediately;
  2. QMode == 3 and cxq is not empty: append the cxq queue to the tail of the EntryList;
  3. QMode == 4 and cxq is not empty: prepend the cxq queue to the head of the EntryList;
  4. QMode == 0: nothing yet; read on.

If QMode is not 2 (QMode 2 has already returned above), the following logic is then executed:

1. If the head of the EntryList is not empty, call ExitEpilog on it, which wakes that ObjectWaiter's thread and returns immediately;

2. If the head of the EntryList is empty, move all the cxq elements into the EntryList, then call ExitEpilog on the head of the EntryList and return immediately.
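The order-sensitive part of this drain can be sketched in Java (a simplified model with hypothetical names; cxq is represented as a plain list from head to tail):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Sketch of draining cxq into the EntryList on exit (illustrative, not HotSpot code).
// With the default QMode (0/2) the order is preserved; with QMode == 1 it is reversed.
public class DrainSketch {
    static List<String> drain(List<String> cxq, int qMode) {
        List<String> entryList = new ArrayList<>(cxq);  // copy head-to-tail
        if (qMode == 1) {
            Collections.reverse(entryList);             // QMode 1: reverse the order
        }
        return entryList;                               // default: same order as cxq
    }
}
```

Since waiters are pushed at the head of cxq, preserving the order (the default) means the most recently arrived thread is woken first, while QMode 1 wakes the longest-waiting thread first.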

The QMode summary above draws on this article. Also, if you want to compile the JVM yourself, see this Docker-based article.

QMode defaults to 0. The following example illustrates the resulting wakeup order:

public class SyncDemo {

    public static void main(String[] args) {

        SyncDemo syncDemo1 = new SyncDemo();
        syncDemo1.startThreadA();
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        syncDemo1.startThreadB();
        try {
            Thread.sleep(100);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        syncDemo1.startThreadC();

    }

    final Object lock = new Object();

    public void startThreadA() {
        new Thread(() -> {
            synchronized (lock) {
                System.out.println("A get lock");
                try {
                    Thread.sleep(500);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println("A release lock");
            }
        }, "thread-A").start();
    }

    public void startThreadB() {
        new Thread(() -> {
            synchronized (lock) {
                System.out.println("B get lock");
            }
        }, "thread-B").start();
    }

    public void startThreadC() {
        new Thread(() -> {
            synchronized (lock) {
                System.out.println("C get lock");
            }
        }, "thread-C").start();
    }
}

By default, thread C acquires the lock first after A releases it. A thread acquiring a contended lock is inserted at the head of cxq; when the lock is released, the default policy (with the EntryList empty) moves the cxq elements into the EntryList in their existing order and wakes the first thread. That is, when the EntryList is empty, later-arriving threads acquire the lock first. The JDK's Lock mechanism differs in this respect.

The difference between Synchronized and ReentrantLock

The main differences between the two are as follows:

  1. Synchronized is a JVM-level lock implementation; ReentrantLock is a JDK-level lock implementation;
  2. A synchronized lock's state cannot be queried directly in code, whereas ReentrantLock offers ReentrantLock#isLocked;
  3. Synchronized is an unfair lock; ReentrantLock can be either fair or unfair;
  4. A thread waiting on synchronized cannot be interrupted, whereas the ReentrantLock#lockInterruptibly method can be;
  5. Synchronized automatically releases the lock when an exception occurs (handled by the javac-generated bytecode), whereas with ReentrantLock the developer must explicitly release the lock in a finally block;
  6. ReentrantLock can attempt to acquire the lock in various ways, such as tryLock(), which returns immediately, or a timed tryLock that waits a specified length of time;
  7. With synchronized, a later-arriving thread may acquire the lock before threads already waiting (as shown above), whereas a fair ReentrantLock grants the lock to the longest-waiting thread first.
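Several of these points (fairness, state inspection, immediate and timed tryLock) can be seen in a short sketch:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Small demonstration of the ReentrantLock capabilities listed above.
public class LockFeatures {
    public static boolean demo() {
        ReentrantLock lock = new ReentrantLock(true);  // fair mode: synchronized has no equivalent
        try {
            boolean got = lock.tryLock();              // immediate attempt, never blocks
            boolean held = lock.isLocked();            // state inspection (point 2 above)
            boolean again = lock.tryLock(10, TimeUnit.MILLISECONDS); // timed attempt (re-entrant here)
            boolean ok = got && held && again;
            lock.unlock();                             // must release explicitly (point 5) ...
            lock.unlock();                             // ... once per successful acquisition
            return ok && !lock.isLocked();
        } catch (InterruptedException e) {
            return false;
        }
    }
}
```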

End

In general, there are many similarities between Synchronized heavyweight locks and the ReentrantLock implementation, including their data structures and the way threads are suspended. For everyday use without special requirements, Synchronized is sufficient. Once you understand the implementation of one of the two, the other locking mechanism becomes much easier to understand; this is what people mean when they say techniques carry over.