Introduction

If you are not deeply familiar with source code analysis, you may need to chew through this article carefully; don't approach it expecting to understand everything at a glance, or you will get nothing out of it in the end. If you just want to explore the principles of Synchronized, check out my previous article: Understanding the Principles of Synchronized in Java Concurrency from the Ground Up.

5. An in-depth interpretation of the Synchronized keyword in the HotSpot source

Start reading the source code with monitorenter and monitorexit, the instructions the JVM interprets and executes after the bytecode is loaded into memory. The parsing of the monitorenter and monitorexit instructions is implemented by two methods in interpreterRuntime.cpp:

// JavaThread: the current thread acquiring the lock
// BasicObjectLock: the object-based lock
InterpreterRuntime::monitorenter(JavaThread* thread, BasicObjectLock* elem)
InterpreterRuntime::monitorexit(JavaThread* thread, BasicObjectLock* elem)
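For reference, these two instructions are what javac emits for a synchronized block. Disassembling a method containing synchronized(this) {} with javap -c yields roughly the following (offsets are illustrative and vary with the surrounding code; annotations added):

 0: aload_0
 1: dup
 2: astore_1        // stash the lock object in a local slot
 3: monitorenter    // acquire the monitor
 4: aload_1
 5: monitorexit     // release on the normal path
 6: goto 14
 9: astore_2
10: aload_1
11: monitorexit     // release on the exception path
12: aload_2
13: athrow
14: return

Note the second monitorexit: the compiler emits an exception handler so the monitor is released even if the block throws.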

5.1 Source locations related to the Synchronized keyword in HotSpot

If you are not familiar with C/C++ or the HotSpot source tree, you can find the corresponding implementations at the paths below:

  • Monitor: openjdk\hotspot\src\share\vm\runtime\objectMonitor.hpp
  • MarkWord: openjdk\hotspot\src\share\vm\oops\markOop.hpp
  • monitorenter/monitorexit instructions: openjdk\hotspot\src\share\vm\interpreter\interpreterRuntime.cpp
  • BiasedLocking: openjdk\hotspot\src\share\vm\runtime\biasedLocking.cpp

Using monitorenter as an entry point, we analyze synchronized along the path of lock-free > biased > lightweight > heavyweight locks:

IRT_ENTRY_NO_ASYNC(void, InterpreterRuntime::monitorenter(JavaThread* thread, BasicObjectLock* elem))
#ifdef ASSERT
  thread->last_frame().interpreter_frame_verify_monitor(elem);
#endif
  ...
  if (UseBiasedLocking) {
    // Retry fast entry if bias is revoked to avoid unnecessary inflation
    ObjectSynchronizer::fast_enter(h_obj, elem->lock(), true, CHECK);
  } else {
    ObjectSynchronizer::slow_enter(h_obj, elem->lock(), CHECK);
  }
  ...
#ifdef ASSERT
  thread->last_frame().interpreter_frame_verify_monitor(elem);
#endif
IRT_END

If biased locking is enabled, ObjectSynchronizer::fast_enter is executed; if it is not enabled, ObjectSynchronizer::slow_enter is executed instead, bypassing the biased lock and going straight to the lightweight lock. (UseBiasedLocking corresponds to the -XX:+UseBiasedLocking VM flag, which is on by default in the JDK versions this source comes from; note that biasing normally only activates a few seconds after startup, governed by -XX:BiasedLockingStartupDelay.)
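Throughout the rest of the analysis, the mark word's low-order tag bits decide which path is taken. For orientation, here is a condensed sketch of those bits following the definitions in markOop.hpp (simplified; the exact field layout differs between 32-bit and 64-bit VMs):

// Tag bits in the object's mark word (values as in markOop.hpp)
enum MarkWordTagSketch {
  locked_value        = 0,  // lock:00  - lightweight (stack) locked
  unlocked_value      = 1,  // lock:01  - unlocked ("neutral")
  monitor_value       = 2,  // lock:10  - heavyweight locked; mark encodes an ObjectMonitor*
  marked_value        = 3,  // lock:11  - used by the GC
  biased_lock_pattern = 5   // biased_lock:1 + lock:01 - biased or biasable
};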

5.2 Execution logic when biased locking is enabled

ObjectSynchronizer::fast_enter is implemented in synchronizer.cpp as follows:

void ObjectSynchronizer::fast_enter(Handle obj, BasicLock* lock, bool attempt_rebias, TRAPS) {
  // Check whether biased locking is enabled
  if (UseBiasedLocking) {
    // If not at a safepoint
    if (!SafepointSynchronize::is_at_safepoint()) {
      // Try to acquire the biased lock via revoke_and_rebias
      BiasedLocking::Condition cond = BiasedLocking::revoke_and_rebias(obj, attempt_rebias, THREAD);
      // If the bias was revoked and rebiased, return directly
      if (cond == BiasedLocking::BIAS_REVOKED_AND_REBIASED) {
        return;
      }
    } else {
      // At a safepoint: revoke the biased lock
      assert(!attempt_rebias, "can not rebias toward VM thread");
      BiasedLocking::revoke_at_safepoint(obj);
    }
    assert(!obj->mark()->has_bias_pattern(), "biases should be revoked by now");
  }
  slow_enter(obj, lock, THREAD);
}

The fast_enter flow is simple:

  • Check again whether biased locking is enabled
  • When not at a safepoint, revoke_and_rebias attempts to acquire the biased lock; it returns on success, or falls through to lightweight lock acquisition on failure
  • The biased lock acquisition logic of revoke_and_rebias lives in biasedLocking.cpp
  • If biased locking is not enabled, slow_enter is entered to acquire a lightweight lock

5.3 Biased lock acquisition logic

BiasedLocking::revoke_and_rebias is used to revoke, and possibly re-establish, the biased state of the current object (after the bias is revoked the object may be rebiased). The logic of this method in biasedLocking.cpp is as follows:

BiasedLocking::Condition BiasedLocking::revoke_and_rebias(Handle obj, bool attempt_rebias, TRAPS) {
  assert(!SafepointSynchronize::is_at_safepoint(), "must not be called while at safepoint");
  markOop mark = obj->mark();
  if (mark->is_biased_anonymously() && !attempt_rebias) {
    // This branch is entered e.g. when computing the object's hashCode;
    // the bias is revoked at a non-global safepoint
    markOop biased_value = mark;
    // Build an unbiased mark word
    markOop unbiased_prototype = markOopDesc::prototype()->set_age(mark->age());
    // Atomic::cmpxchg_ptr is a CAS operation
    markOop res_mark = (markOop) Atomic::cmpxchg_ptr(unbiased_prototype, obj->mark_addr(), mark);
    if (res_mark == biased_value) {
      // CAS succeeded: the bias has been revoked
      return BIAS_REVOKED;
    }
  } else if (mark->has_bias_pattern()) {
    // The lock object is in a biased state (regardless of whether
    // the thread id is null); try to revoke or rebias
    Klass* k = obj->klass();
    markOop prototype_header = k->prototype_header();
    if (!prototype_header->has_bias_pattern()) {
      // Biasing has been disabled for this class; revoke this object's bias
      markOop biased_value = mark;
      markOop res_mark = (markOop) Atomic::cmpxchg_ptr(prototype_header, obj->mark_addr(), mark);
      assert(!(*(obj->mark_addr()))->has_bias_pattern(), "even if we raced, should still be revoked");
      return BIAS_REVOKED;
    } else if (prototype_header->bias_epoch() != mark->bias_epoch()) {
      // The epoch has expired
      if (attempt_rebias) {
        // Rebiasing is allowed
        assert(THREAD->is_Java_thread(), "");
        markOop biased_value = mark;
        markOop rebiased_prototype = markOopDesc::encode((JavaThread*) THREAD, mark->age(), prototype_header->bias_epoch());
        // Rebias toward the current thread via CAS
        markOop res_mark = (markOop) Atomic::cmpxchg_ptr(rebiased_prototype, obj->mark_addr(), mark);
        if (res_mark == biased_value) {
          // CAS succeeded: revoked and rebiased
          return BIAS_REVOKED_AND_REBIASED;
        }
      } else {
        // Rebiasing is not allowed; revoke the bias
        markOop biased_value = mark;
        markOop unbiased_prototype = markOopDesc::prototype()->set_age(mark->age());
        markOop res_mark = (markOop) Atomic::cmpxchg_ptr(unbiased_prototype, obj->mark_addr(), mark);
        if (res_mark == biased_value) {
          // CAS succeeded: the bias has been revoked
          return BIAS_REVOKED;
        }
      }
    }
  }
  ... // omitted
}

5.4 Biased lock revocation logic

When a biased lock needs to be revoked at a safepoint, the revoke_at_safepoint method is called; it is also defined in biasedLocking.cpp:

void BiasedLocking::revoke_at_safepoint(Handle h_obj) {
  assert(SafepointSynchronize::is_at_safepoint(), "must only be called while at safepoint");
  oop obj = h_obj();
  // Update the revocation heuristics counter and return the heuristics result
  HeuristicsResult heuristics = update_heuristics(obj, false);
  if (heuristics == HR_SINGLE_REVOKE) {
    // Revoke the bias of this single object
    revoke_bias(obj, false, false, NULL);
  } else if ((heuristics == HR_BULK_REBIAS) || (heuristics == HR_BULK_REVOKE)) {
    // Repeated revocations or rebiases: handle the whole class in bulk
    bulk_revoke_or_rebias_at_safepoint(obj, (heuristics == HR_BULK_REBIAS), false, NULL);
  }
  clean_up_cached_monitor_info();
}

A biased lock is revoked by waiting for the global safepoint (at which no bytecode is executing). The thread holding the biased lock is suspended first, and the JVM then checks whether that thread is still alive: if not, the object header is set to the lock-free state; if it is still alive, the lock is upgraded to a lightweight lock by iterating over the lock records biased toward the object, and the Mark Word in the stack frame's lock records and in the object header is either rebiased to another thread, reverted to lock-free, or marked to indicate the object is unsuitable for biased locking. Finally the suspended thread is woken up.
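The actual per-object revocation is done by revoke_bias, which is not reproduced here. A minimal sketch of its simplest decision, under the assumption of a simplified model (is_active() is a hypothetical liveness test, not a HotSpot API):

// If the biasing thread is no longer alive, the bias can be dropped by
// installing a fresh unbiased mark word that preserves the object's GC age.
markOop unbias_if_dead_sketch(oop obj, JavaThread* biased_thread) {
  if (biased_thread == NULL || !is_active(biased_thread)) {
    return markOopDesc::prototype()->set_age(obj->mark()->age());
  }
  // Thread still alive: fall through to the stack walk described above (omitted)
  return obj->mark();
}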

The JVM maintains a bias revocation counter for each class. When this value reaches a specified threshold, the JVM considers the class's biased locking problematic and rebiases all objects of that class; this operation is called bulk rebias. When a bulk rebias is performed, the class's epoch value is incremented; the same epoch is stored in the epoch field of each object header. The conditions for an object to take the biased fast path are: the mark word has biased_lock:1 and lock:01, its thread id equals the current thread id, and its epoch field equals the epoch of its class. If the epoch differs, the bias is either revoked or rebiased. If the class's revocation counter keeps growing past a second threshold, the JVM decides the class is unsuitable for biased locking and performs a bulk revoke.
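Putting those conditions together, here is a minimal sketch of the biased fast-path test. It uses real markOop/Klass accessors, but it is only an illustration of the check described above, not the actual interpreter fast path (which is generated assembly):

// Can the current thread enter this biased lock without any CAS?
bool biased_fast_path_sketch(oop obj, JavaThread* self) {
  markOop mark = obj->mark();
  Klass*  k    = obj->klass();
  return mark->has_bias_pattern()                                   // biased_lock:1, lock:01
      && mark->biased_locker() == self                              // thread id in the mark word matches
      && mark->bias_epoch() == k->prototype_header()->bias_epoch(); // epoch still valid for the class
}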

5.5 Lightweight lock acquisition logic

The lightweight lock is acquired by calling the slow_enter method, which is also in synchronizer.cpp and is implemented as follows:

void ObjectSynchronizer::slow_enter(Handle obj, BasicLock* lock, TRAPS) {
  markOop mark = obj->mark();
  assert(!mark->has_bias_pattern(), "should not see bias pattern here");
  // If the mark word is neutral (biased_lock:0, lock:01), i.e. unlocked
  if (mark->is_neutral()) {
    // Save the mark word into the lock record's _displaced_header field
    lock->set_displaced_header(mark);
    // Update the mark word via CAS to point to the BasicLock object
    if (mark == (markOop) Atomic::cmpxchg_ptr(lock, obj()->mark_addr(), mark)) {
      TEVENT(slow_enter: release stacklock);
      return;
    }
    // Fall through to inflate() ...
  }
  // If the mark word is locked and its pointer points into the current
  // thread's stack frame, this is a reentrant acquisition
  else if (mark->has_locker() && THREAD->is_lock_owned((address)mark->locker())) {
    assert(lock != mark->locker(), "must not re-lock the same lock");
    assert(lock != (BasicLock*)obj->mark(), "don't relock with same BasicLock");
    lock->set_displaced_header(NULL);
    return;
  }

#if 0
  // The following optimization isn't particularly useful.
  if (mark->has_monitor() && mark->monitor()->is_entered(THREAD)) {
    lock->set_displaced_header(NULL);
    return;
  }
#endif

  // Multiple threads are competing for the lightweight lock;
  // inflate it to a heavyweight lock via inflate()
  lock->set_displaced_header(markOopDesc::unused_mark());
  ObjectSynchronizer::inflate(THREAD, obj())->enter(THREAD);
}

A quick summary of the lightweight lock acquisition logic:

  • mark->is_neutral() (defined in markOop.hpp) returns true if biased_lock:0 and lock:01, i.e. the unlocked state (see the bit-pattern sketch in section 5.1 above)
  • If the mark word is unlocked, go to the next step; otherwise jump to the last step
  • Save the mark word into the _displaced_header field of the BasicLock object
  • Try to update the mark word via CAS to a pointer to the BasicLock object; if the update succeeds, the lock has been acquired and the synchronized code executes, otherwise go to the next step
  • If the mark word is locked and its pointer points into the current thread's stack frame, this is a reentrant acquisition and the synchronized code executes; otherwise multiple threads are competing for the lightweight lock, which must be inflated to a heavyweight lock (a sketch of the reentrancy test follows this list)
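The reentrancy test in the last step works because a stack-locked mark word points at a BasicLock living in some thread's stack frame: if that address falls within the current thread's own stack, the current thread must already hold the lock. A minimal sketch of the check, assuming simplified stack-bounds accessors (the real test is Thread::is_lock_owned):

// Does 'locker' point into this thread's stack? If so, the stack lock is ours.
bool is_lock_owned_sketch(JavaThread* self, address locker) {
  address base = self->stack_base();              // highest address of the stack
  size_t  size = self->stack_size();
  return base >= locker && locker > base - size;  // stacks grow downward
}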

5.6 Lightweight lock release logic

The lightweight lock is released through monitorexit, as follows:

IRT_ENTRY_NO_ASYNC(void, InterpreterRuntime::monitorexit(JavaThread* thread, BasicObjectLock* elem))
#ifdef ASSERT
  thread->last_frame().interpreter_frame_verify_monitor(elem);
#endif
  Handle h_obj(thread, elem->obj());
  assert(Universe::heap()->is_in_reserved_or_null(h_obj()),
         "must be NULL or an object");
  if (elem == NULL || h_obj()->is_unlocked()) {
    THROW(vmSymbols::java_lang_IllegalMonitorStateException());
  }
  ObjectSynchronizer::slow_exit(h_obj(), elem->lock(), thread);
  // Free entry. This must be done here, since a pending exception might be installed on
  // exit. If it is not cleared, the exception handling code will try to unlock the monitor again.
  elem->set_obj(NULL);
#ifdef ASSERT
  thread->last_frame().interpreter_frame_verify_monitor(elem);
#endif
IRT_END

ObjectSynchronizer::slow_exit simply forwards to fast_exit:

void ObjectSynchronizer::slow_exit(oop object, BasicLock* lock, TRAPS) {
  fast_exit (object, lock, THREAD) ;
}

The code for ObjectSynchronizer::fast_exit is as follows:

void ObjectSynchronizer::fast_exit(oop object, BasicLock* lock, TRAPS) {
  assert(!object->mark()->has_bias_pattern(), "should not see bias pattern here");
  // If the displaced header is NULL, the previous enter was a recursive enter: no-op
  markOop dhw = lock->displaced_header();  // the saved object header
  markOop mark;
  if (dhw == NULL) {
    // Recursive stack-lock.
    // Diagnostics -- Could be: stack-locked, inflating, inflated.
    mark = object->mark();
    assert(!mark->is_neutral(), "invariant");
    if (mark->has_locker() && mark != markOopDesc::INFLATING()) {
      assert(THREAD->is_lock_owned((address)mark->locker()), "invariant");
    }
    if (mark->has_monitor()) {
      ObjectMonitor * m = mark->monitor();
      assert(((oop)(m->object()))->mark() == mark, "invariant");
      assert(m->is_entered(THREAD), "invariant");
    }
    return;
  }

  mark = object->mark();

  // If the object is stack-locked by the current thread, try to
  // swing the displaced header from the box back to the mark.
  if (mark == (markOop) lock) {
    assert(dhw->is_neutral(), "invariant");
    // Write the displaced header back into the object header via CAS;
    // if the swap succeeds, the lock has been released
    if ((markOop) Atomic::cmpxchg_ptr(dhw, object->mark_addr(), mark) == mark) {
      TEVENT(fast_exit: release stacklock);
      return;
    }
  }

  // The lock has been inflated: call the heavyweight lock's release method
  ObjectSynchronizer::inflate(THREAD, object)->exit(true, THREAD);
}

Lightweight lock release is relatively simple: the Mark Word saved in the lock record of the current thread's stack frame is swapped back into the lock object's header via CAS. If the swap succeeds, the lock has been released; otherwise the lock has been inflated to a heavyweight lock, and the heavyweight release logic runs instead.

5.7 Analysis of the lock inflation process

A heavyweight lock is implemented through the object's internal monitor, which at bottom relies on the operating system's mutex locks. Let's first look at the lock inflation process; from the previous analysis we already know that inflation happens in ObjectSynchronizer::inflate, whose code is as follows:

ObjectMonitor * ATTR ObjectSynchronizer::inflate(Thread * Self, oop object) {
  // Inflate mutates the heap ...
  // Relaxing assertion for bug 6320749.
  assert(Universe::verify_in_progress() ||
         !SafepointSynchronize::is_at_safepoint(), "invariant");

  for (;;) {
    const markOop mark = object->mark();
    assert(!mark->has_bias_pattern(), "invariant");

    // has_monitor is a method in markOop.hpp that tests whether lock:10
    if (mark->has_monitor()) {
      // Already inflated: return the heavyweight lock's ObjectMonitor
      ObjectMonitor * inf = mark->monitor();
      assert(inf->header()->is_neutral(), "invariant");
      assert(inf->object() == object, "invariant");
      assert(ObjectSynchronizer::verify_objmon_isinpool(inf), "monitor is invalid");
      return inf;
    }

    if (mark == markOopDesc::INFLATING()) {
      // Another thread is inflating this lock; wait and retry the loop
      TEVENT(Inflate: spin while INFLATING);
      ReadStableMark(object);
      continue;
    }

    if (mark->has_locker()) {
      // Currently stack-locked: allocate an available ObjectMonitor
      ObjectMonitor * m = omAlloc(Self);
      // Optimistically prepare the ObjectMonitor before the CAS in order
      // to minimize the length of time in which INFLATING appears in the mark.
      m->Recycle();
      m->_Responsible  = NULL;
      m->OwnerIsThread = 0;
      m->_recursions   = 0;
      m->_SpinDuration = ObjectMonitor::Knob_SpinLimit;   // Consider: maintain by type/class

      /** Compare *object->mark_addr() with mark; if they are equal, set it to
          markOopDesc::INFLATING() and return mark, otherwise return the
          current value of *object->mark_addr() **/
      markOop cmp = (markOop) Atomic::cmpxchg_ptr(markOopDesc::INFLATING(), object->mark_addr(), mark);
      if (cmp != mark) {
        // CAS failed: release the monitor and retry
        omRelease(Self, m, true);
        continue;
      }

      // Fetch the displaced mark word saved in the stack lock record
      markOop dmw = mark->displaced_mark_helper();
      assert(dmw->is_neutral(), "invariant");

      // Set the ObjectMonitor's fields
      m->set_header(dmw);
      m->set_owner(mark->locker());
      m->set_object(object);
      // TODO-FIXME: assert BasicLock->dhw != 0.

      guarantee(object->mark() == markOopDesc::INFLATING(), "invariant");
      object->release_set_mark(markOopDesc::encode(m));

      if (ObjectMonitor::_sync_Inflations != NULL) ObjectMonitor::_sync_Inflations->inc();
      TEVENT(Inflate: overwrite stacklock);
      if (TraceMonitorInflation) {
        if (object->is_instance()) {
          ResourceMark rm;
          tty->print_cr("Inflating object " INTPTR_FORMAT " , mark " INTPTR_FORMAT " , type %s",
            (void *) object, (intptr_t) object->mark(), object->klass()->external_name());
        }
      }
      return m;
    }

    // The object is unlocked (neutral)
    assert(mark->is_neutral(), "invariant");
    ObjectMonitor * m = omAlloc(Self);   // get an available ObjectMonitor
    // Set the ObjectMonitor's fields
    m->Recycle();
    m->set_header(mark);
    m->set_owner(NULL);
    m->set_object(object);
    m->OwnerIsThread  = 1;
    m->_recursions    = 0;
    m->_Responsible   = NULL;
    m->_SpinDuration  = ObjectMonitor::Knob_SpinLimit;   // consider: keep metastats by type/class

    /** CAS the encoded monitor pointer markOopDesc::encode(m) into the mark
        word; on failure, release the monitor and retry **/
    if (Atomic::cmpxchg_ptr(markOopDesc::encode(m), object->mark_addr(), mark) != mark) {
      m->set_object(NULL);
      m->set_owner(NULL);
      m->OwnerIsThread = 0;
      m->Recycle();
      omRelease(Self, m, true);
      m = NULL;
      continue;
      // interference - the markword changed - just retry.
      // The state-transitions are one-way, so there's no chance of
      // live-lock -- "Inflated" is an absorbing state.
    }

    if (ObjectMonitor::_sync_Inflations != NULL) ObjectMonitor::_sync_Inflations->inc();
    TEVENT(Inflate: overwrite neutral);
    if (TraceMonitorInflation) {
      if (object->is_instance()) {
        ResourceMark rm;
        tty->print_cr("Inflating object " INTPTR_FORMAT " , mark " INTPTR_FORMAT " , type %s",
          (void *) object, (intptr_t) object->mark(), object->klass()->external_name());
      }
    }
    return m;   // return the ObjectMonitor
  }
}

The lock inflation process is a bit involved; the whole thing is driven by spinning (CAS retry loops). The logic is summarized as follows:

  • mark->has_monitor() determines whether the current lock object is already a heavyweight lock, i.e. lock:10; if so, step 2 executes, otherwise step 3.
  • The heavyweight lock's ObjectMonitor is obtained via mark->monitor() and returned; lock inflation is complete.
  • If the current lock state is INFLATING, another thread is inflating this lock, and the current thread spins (retries the loop) until the other thread finishes.
  • If the current state is lightweight-locked (mark->has_locker()), inflation is performed: first an available ObjectMonitor is obtained via omAlloc and initialized, then the object header is set via CAS to markOopDesc::INFLATING() to signal that the lock is being inflated; if the CAS fails, the loop retries.
  • If the state is unlocked, the logic is similar to step 4 (see the condensed sketch after this list).
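A condensed, hypothetical sketch of that loop, with assertions and bookkeeping stripped out; cas_mark, new_monitor and release_monitor are illustrative stand-ins for HotSpot's Atomic::cmpxchg_ptr, omAlloc and omRelease:

// Illustrative inflation loop -- not the HotSpot implementation
ObjectMonitor* inflate_sketch(Thread* self, oop obj) {
  for (;;) {
    markOop mark = obj->mark();
    if (mark->has_monitor())                      // already inflated
      return mark->monitor();
    if (mark == markOopDesc::INFLATING())         // someone else is inflating
      continue;                                   // spin until they finish
    ObjectMonitor* m = new_monitor(self);         // omAlloc in HotSpot
    if (mark->has_locker()) {                     // stack-locked: two-step publish
      if (!cas_mark(obj, mark, markOopDesc::INFLATING())) { release_monitor(self, m); continue; }
      m->set_header(mark->displaced_mark_helper());  // the saved unlocked header
      m->set_owner(mark->locker());                  // owner = the stack lock record
      m->set_object(obj);
      obj->release_set_mark(markOopDesc::encode(m)); // publish the monitor pointer
      return m;
    }
    // Neutral (unlocked): install the monitor with a single CAS
    m->set_header(mark);  m->set_owner(NULL);  m->set_object(obj);
    if (!cas_mark(obj, mark, markOopDesc::encode(m))) { release_monitor(self, m); continue; }
    return m;
  }
}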

The inflation process, then, is really about obtaining an ObjectMonitor for the object; the actual logic for competing for the lock lives in the ObjectMonitor::enter method.

5.8 Competition logic for heavyweight locks

Heavyweight lock acquisition is implemented in ObjectMonitor::enter(). Its main flow:

  • Use CAS to set the monitor's _owner field to the current thread; return directly if the CAS succeeds.
  • If _owner already points to the current thread, this is a reentrant entry: _recursions++ increments the reentry count.
  • If _owner points to a BasicLock on the current thread's own stack (the thread had stack-locked the object before inflation), set _recursions to 1 and _owner to the current thread.
  • If acquisition fails, wait for the lock to be released.
void ATTR ObjectMonitor::enter(TRAPS) {
  // The following code is ordered to check the most common cases first
  // and to reduce RTS->RTO cache line upgrades on SPARC and IA32 processors.
  Thread * const Self = THREAD;
  void * cur;

  cur = Atomic::cmpxchg_ptr(Self, &_owner, NULL);
  if (cur == NULL) {
    // CAS succeeded: the lock is ours
    // Either ASSERT _recursions == 0 or explicitly set _recursions = 0.
    assert(_recursions == 0, "invariant");
    assert(_owner == Self, "invariant");
    // CONSIDER: set or assert OwnerIsThread == 1
    return;
  }

  if (cur == Self) {
    // Reentrant entry
    // TODO-FIXME: check for integer overflow! BUGID 6557169.
    _recursions++;
    return;
  }

  if (Self->is_lock_owned((address)cur)) {
    // _owner is a stack lock record on our own stack (set during inflation)
    assert(_recursions == 0, "internal state error");
    _recursions = 1;
    // Commute owner from a thread-specific on-stack BasicLockObject address to
    // a full-fledged "Thread *".
    _owner = Self;
    OwnerIsThread = 1;
    return;
  }

  // We've encountered genuine contention.
  assert(Self->_Stalled == 0, "invariant");
  Self->_Stalled = intptr_t(this);

  // Try one round of spinning *before* enqueueing Self
  // and before going through the awkward and expensive state
  // transitions. The following spin is strictly optional ...
  // Note that if we acquire the monitor from an initial spin
  // we forgo posting JVMTI events and firing DTRACE probes.
  if (Knob_SpinEarly && TrySpin(Self) > 0) {
    assert(_owner == Self, "invariant");
    assert(_recursions == 0, "invariant");
    assert(((oop)(object()))->mark() == markOopDesc::encode(this), "invariant");
    Self->_Stalled = 0;
    return;
  }

  assert(_owner != Self, "invariant");
  assert(_succ != Self, "invariant");
  assert(Self->is_Java_thread(), "invariant");
  JavaThread * jt = (JavaThread *) Self;
  assert(!SafepointSynchronize::is_at_safepoint(), "invariant");
  assert(jt->thread_state() != _thread_blocked, "invariant");
  assert(this->object() != NULL, "invariant");
  assert(_count >= 0, "invariant");

  // Prevent deflation at STW-time. See deflate_idle_monitors() and is_busy().
  // Ensure the object-monitor relationship remains stable while there's contention.
  Atomic::inc_ptr(&_count);

  EventJavaMonitorEnter event;

  {
    // Change java thread status to indicate blocked on monitor enter.
    JavaThreadBlockedOnMonitorEnterState jtbmes(jt, this);

    DTRACE_MONITOR_PROBE(contended__enter, this, object(), jt);
    if (JvmtiExport::should_post_monitor_contended_enter()) {
      JvmtiExport::post_monitor_contended_enter(jt, this);
    }

    OSThreadContendState osts(Self->osthread());
    ThreadBlockInVM tbivm(jt);

    Self->set_current_pending_monitor(this);

    // TODO-FIXME: change the following for(;;) loop to straight-line code.
    for (;;) {
      jt->set_suspend_equivalent();
      // cleared by handle_special_suspend_equivalent_condition()
      // or java_suspend_self()

      EnterI(THREAD);

      if (!ExitSuspendEquivalent(jt)) break;

      //
      // We have acquired the contended monitor, but while we were
      // waiting another thread suspended us. We don't want to enter
      // the monitor while suspended because that would surprise the
      // thread that suspended us.
      //
      _recursions = 0;
      _succ = NULL;
      exit(false, Self);

      jt->java_suspend_self();
    }
    Self->set_current_pending_monitor(NULL);
  }
  ... // countless lines of code omitted here

If the lock cannot be acquired, the thread waits for it to be released by spinning; the spinning is performed in ObjectMonitor::EnterI. Part of the principle and code follows:

  • Wrap the current thread in an ObjectWaiter node and set its state to TS_CXQ.
  • Push the node onto the _cxq queue in a spin loop.
  • After the node is on the _cxq queue, continue spinning to try to acquire the lock; if the lock is not acquired within the specified threshold, suspend the current thread via park and wait to be woken.
void ATTR ObjectMonitor::EnterI(TRAPS) {
  Thread * Self = THREAD;
  ...
  ObjectWaiter node(Self);
  Self->_ParkEvent->reset();
  node._prev = (ObjectWaiter *) 0xBAD;
  node.TState = ObjectWaiter::TS_CXQ;

  // Push "Self" onto the front of the _cxq.
  // Once on cxq/EntryList, Self stays on-queue until it acquires the lock.
  // Note that spinning tends to reduce the rate at which threads
  // enqueue and dequeue on EntryList|cxq.
  ObjectWaiter * nxt;
  for (;;) {
    // Spin to push the node onto the _cxq queue
    node._next = nxt = _cxq;
    if (Atomic::cmpxchg_ptr(&node, &_cxq, nxt) == nxt) break;

    // Interference - the CAS failed because _cxq changed. Just retry.
    // As an optional optimization we retry the lock.
    if (TryLock (Self) > 0) {
      assert(_succ != Self, "invariant");
      assert(_owner == Self, "invariant");
      assert(_Responsible != Self, "invariant");
      return;
    }
  }
  ...
  // After the node has been pushed onto _cxq, keep spinning to acquire the
  // lock; if it cannot be acquired within the threshold, park (suspend) the
  // current thread and wait to be woken.
  for (;;) {
    if (TryLock (Self) > 0) break;
    assert(_owner != Self, "invariant");

    if ((SyncFlags & 2) && _Responsible == NULL) {
      Atomic::cmpxchg_ptr(Self, &_Responsible, NULL);
    }

    // Suspend the current thread via park
    if (_Responsible == Self || (SyncFlags & 1)) {
      TEVENT(Inflated enter - park TIMED);
      Self->_ParkEvent->park((jlong) RecheckInterval);
      // Increase the RecheckInterval, but clamp the value.
      RecheckInterval *= 8;
      if (RecheckInterval > 1000) RecheckInterval = 1000;
    } else {
      TEVENT(Inflated enter - park UNTIMED);
      Self->_ParkEvent->park();
    }

    if (TryLock(Self) > 0) break;   // woken up: try the lock again

    TEVENT(Inflated enter - Futile wakeup);
    if (ObjectMonitor::_sync_FutileWakeups != NULL) {
      ObjectMonitor::_sync_FutileWakeups->inc();
    }
    ++nWakeups;

    if ((Knob_SpinAfterFutile & 1) && TrySpin(Self) > 0) break;

    if ((Knob_ResetEvent & 1) && Self->_ParkEvent->fired()) {
      Self->_ParkEvent->reset();
      OrderAccess::fence();
    }
    if (_succ == Self) _succ = NULL;

    // Invariant: after clearing _succ a thread *must* retry _owner before parking.
    OrderAccess::fence();
  }
  ... // a lot of code omitted
}

The lock-grabbing primitive used above is defined in ObjectMonitor::TryLock.

It uses CAS to set the monitor's _owner field to the current thread: if the CAS succeeds, the lock has been acquired; if it fails, the caller goes back to spinning or parking.

int ObjectMonitor::TryLock (Thread * Self) {
   for (;;) {
      void * own = _owner ;
      if (own != NULL) return 0 ;
      if (Atomic::cmpxchg_ptr (Self, &_owner, NULL) == NULL) {
         // Either guarantee _recursions == 0 or set _recursions = 0.
         assert (_recursions == 0, "invariant") ;
         assert (_owner == Self, "invariant") ;
         // CONSIDER: set or assert that OwnerIsThread == 1
         return 1 ;
      }
      // The lock had been free momentarily, but we lost the race to the lock.
      // Interference -- the CAS failed.
      // We can either return -1 or retry.
      // Retry doesn't make as much sense because the lock was just acquired.
      if (true) return -1 ;
   }
}

5.9 Release of heavyweight locks

A heavyweight lock is released via ObjectMonitor::exit; after the release, blocked threads are woken to compete for the lock again:

  • If _owner in the current lock object does not point to the current thread, but the BasicLock that _owner points to is on the current thread's stack (the thread stack-locked the object before inflation), then point _owner at the current thread.
  • If _owner points to the current thread and the reentry count is not 0, decrement it and return; ObjectMonitor::exit runs again on subsequent exits until the reentry count reaches 0.
  • Release the current lock, and based on the QMode policy decide whether to wake a suspended thread from _cxq, or do something else.
void ATTR ObjectMonitor::exit(bool not_suspended, TRAPS) {
   Thread * Self = THREAD ;
   if (THREAD != _owner) { // _owner in the lock object does not point to the current thread
     // If the BasicLock that _owner points to is on the current thread's
     // stack, point _owner at the current thread
     if (THREAD->is_lock_owned((address) _owner)) {
       // Transmute _owner from a BasicLock pointer to a Thread address.
       // We don't need to hold _mutex for this transition.
       // Non-null to Non-null is safe as long as all readers can
       // tolerate either flavor.
       assert (_recursions == 0, "invariant") ;
       _owner = THREAD ;
       _recursions = 0 ;
       OwnerIsThread = 1 ;
     } else {
       // NOTE: we need to handle unbalanced monitor enter/exit
       // in native code by throwing an exception.
       // TODO: Throw an IllegalMonitorStateException ?
       TEVENT (Exit - Throw IMSX) ;
       assert(false, "Non-balanced monitor enter/exit!");
       if (false) {
          THROW(vmSymbols::java_lang_IllegalMonitorStateException());
       }
       return;
     }
   }
   // If the current thread's reentry count is not 0, decrement it and
   // return; ObjectMonitor::exit runs again until the count reaches 0
   if (_recursions != 0) {
     _recursions--;        // this is simple recursive enter
     TEVENT (Inflated exit - recursive) ;
     return ;
   }
  ... // a lot of code omitted here
  for (;;) {
    if (Knob_ExitPolicy == 0) {
      OrderAccess::release_store(&_owner, (void*)NULL);   // release the lock
      OrderAccess::storeload();                        // See if we need to wake a successor
      if ((intptr_t(_EntryList)|intptr_t(_cxq)) == 0 || _succ != NULL) {
        TEVENT(Inflated exit - simple egress);
        return;
      }
      TEVENT(Inflated exit - complex egress);
      // some code omitted...
    }
    // some code omitted...
    ObjectWaiter * w = NULL;
    int QMode = Knob_QMode;
    // Decide based on the QMode policy:
    // if QMode == 2, wake a thread directly from the ones parked on _cxq
    if (QMode == 2 && _cxq != NULL) {
      w = _cxq;
      ExitEpilog(Self, w);
      return;
    }
    // some code omitted... the omitted code implements the
    // different wake-up mechanisms for the other QMode values
  }
}

Depending on the policy (selected by QMode), a head node is taken from _cxq or the EntryList, and the thread wrapped by that node is woken via the ObjectMonitor::ExitEpilog method; the wake-up is ultimately performed by unpark.

void ObjectMonitor::ExitEpilog (Thread * Self, ObjectWaiter * Wakee) {
   assert (_owner == Self, "invariant") ;

   // Exit protocol:
   // 1. ST _succ = wakee
   // 2. membar #loadstore|#storestore;
   // 2. ST _owner = NULL
   // 3. unpark(wakee)

   _succ = Knob_SuccEnabled ? Wakee->_thread : NULL ;
   ParkEvent * Trigger = Wakee->_event ;

   // Hygiene -- once we've set _owner = NULL we can't safely dereference Wakee again.
   // The thread associated with Wakee may have grabbed the lock and "Wakee" may be
   // out-of-scope (non-extant).
   Wakee = NULL ;

   // Drop the lock
   OrderAccess::release_store_ptr (&_owner, NULL) ;
   OrderAccess::fence() ;                            // ST _owner vs LD in unpark()

   if (SafepointSynchronize::do_call_back()) {
      TEVENT (unpark before SAFEPOINT) ;
   }

   DTRACE_MONITOR_PROBE(contended__exit, this, object(), Self);
   Trigger->unpark() ;                               // wake the suspended thread

   // Maintain stats and report events to JVMTI
   if (ObjectMonitor::_sync_Parks != NULL) {
      ObjectMonitor::_sync_Parks->inc() ;
   }
}

One more reminder: if you are not deeply familiar with source-code analysis, you may need to chew through this article carefully rather than expecting to understand it in a single skim, otherwise it will have no effect in the end.

6. References

  • Deep Understanding of the Java Virtual Machine
  • The Beauty of Concurrent Programming in Java
  • Java High Concurrency Programming
  • Core Technology of Website Architecture with 100 million Traffic
  • Java Concurrent Programming