This is the second article in the synchronized series. It analyzes the implementation of biased locking.

The background and basic principles of biased locking were covered previously; it is strongly recommended to read this article after reading the previous one.

More articles can be found on my blog: github.com/farmerjohng…

This series gives a comprehensive analysis of HotSpot's synchronized implementation, covering biased locking, lightweight locking, heavyweight locking, lock release, and the lock upgrade process, with both principles and source-code analysis. I hope it helps readers studying synchronized. It includes the following articles:

Synchronized low-level implementation — An introduction

Synchronized low-level implementation — biased locking

Synchronized low-level implementation — lightweight locking

Synchronized low-level implementation — heavyweight locking

This article is divided into several parts:

1. Biased lock entry

2. Biased lock acquisition process

3. Biased lock revocation process

4. Biased lock release process

5. Batch rebias and batch revocation of biased locks

The JVM version analyzed in this article is JDK 8; the exact version number and code can be seen here.

Biased lock entry

Many articles on the web currently point to the wrong source entry for biased locking, which long kept me from making sense of much of its logic and cost me many detours.

Synchronized comes in two forms, synchronized blocks and synchronized methods; the underlying lock-acquisition logic is the same. This article explains the implementation of synchronized blocks. As mentioned in the previous article, synchronized blocks are implemented with the monitorenter and monitorexit instructions.

Regarding the lock-acquisition entry in the HotSpot virtual machine, many articles on the web point to either InterpreterRuntime.cpp#monitorenter or bytecodeInterpreter.cpp#1816 as the entry. The article by Zhan Xiao Wolf also has some problems with the location of the lock entry (the article is otherwise very good, and it helped me a lot when I first started studying synchronized).

To find the lock entry, you must find where the monitorenter instruction is parsed in the source. monitorenter is parsed in two places in HotSpot: bytecodeInterpreter.cpp#1816 and templateTable_x86_64.cpp#3667.

The former is the JVM's bytecode interpreter (bytecodeInterpreter), which implements every JVM instruction (monitorenter, invokevirtual, and so on) in C++. Its advantage is being relatively simple and easy to understand; its drawback is slow execution. The latter is the template interpreter (templateInterpreter), which generates a piece of assembly code for each instruction and, at startup, binds each instruction to its assembly entry, making it extremely fast. For the template interpreter implementation see this article; while researching I also asked its author, 'Mr. Wang', some questions, for which I am grateful.

HotSpot uses only the template interpreter; the bytecode interpreter is essentially not used at all, as the reading notes show.

So the parsing entry for monitorenter is in the template interpreter, in templateTable_x86_64.cpp#3667. Following the call path templateTable_x86_64#monitorenter -> interp_masm_x86_64#lock_object -> macroAssembler_x86#biased_locking_enter, you can see where the corresponding assembly code is generated. Note that biased_locking_enter is not called every time a monitorenter instruction is parsed; the method is called only at JVM startup to generate the assembly code, after which instructions are handled by executing that assembly directly.

The logic of the bytecodeInterpreter is essentially the same as that of the templateInterpreter, and since the latter is assembly and somewhat obscure, reading the bytecodeInterpreter implementation is easier to follow. The catch is that before JDK 8u the bytecodeInterpreter did not implement biased locking: the jdk8-87ee5ee27509 version I read earlier lacks the biased-lock logic, which confused me for a long time. In this commit, the bytecodeInterpreter added support for biased locking; comparing it with the templateInterpreter, apart from the stack structure the logic is basically identical. So the biased-lock logic below is explained using the bytecodeInterpreter. The templateInterpreter source is annotated in English, so with a basic knowledge of assembly the annotations make it manageable.

Biased lock acquisition process

Here we begin the analysis of the biased lock acquisition process, at bytecodeInterpreter.cpp#1816. Note that the code has been trimmed.

CASE(_monitorenter): {
  // The lockee is the lock object
  oop lockee = STACK_OBJECT(-1);
  // derefing's lockee ought to provoke implicit null check
  CHECK_NULL(lockee);
  // code 1: find a free Lock Record
  BasicObjectLock* limit = istate->monitor_base();
  BasicObjectLock* most_recent = (BasicObjectLock*) istate->stack_base();
  BasicObjectLock* entry = NULL;
  while (most_recent != limit) {
    if (most_recent->obj() == NULL) entry = most_recent;
    else if (most_recent->obj() == lockee) break;
    most_recent++;
  }
  // entry is not NULL, meaning a free Lock Record was found
  if (entry != NULL) {
    // code 2: point the Lock Record's obj field at the lock object
    entry->set_obj(lockee);
    int success = false;
    uintptr_t epoch_mask_in_place = (uintptr_t)markOopDesc::epoch_mask_in_place;
    // markOop is the mark word of the object header
    markOop mark = lockee->mark();
    intptr_t hash = (intptr_t) markOopDesc::no_hash;
    // code 3: if the lock object's mark word is in biased mode
    if (mark->has_bias_pattern()) {
      uintptr_t thread_ident;
      uintptr_t anticipated_bias_locking_value;
      thread_ident = (uintptr_t)istate->thread();
      // code 4: several steps here, analyzed below
      anticipated_bias_locking_value =
        (((uintptr_t)lockee->klass()->prototype_header() | thread_ident) ^ (uintptr_t)mark) &
        ~((uintptr_t) markOopDesc::age_mask_in_place);
      // code 5: the biased thread is this thread and the epoch equals the class's epoch
      if (anticipated_bias_locking_value == 0) {
        // already biased towards this thread, nothing to do
        if (PrintBiasedLockingStatistics) {
          (*BiasedLocking::biased_lock_entry_count_addr())++;
        }
        success = true;
      }
      // code 6: biased mode is turned off for the class; try to revoke the biased lock
      else if ((anticipated_bias_locking_value & markOopDesc::biased_lock_mask_in_place) != 0) {
        markOop header = lockee->klass()->prototype_header();
        if (hash != markOopDesc::no_hash) {
          header = header->copy_set_hash(hash);
        }
        // use CAS to replace the mark word with the class's prototype header
        if (Atomic::cmpxchg_ptr(header, lockee->mark_addr(), mark) == mark) {
          if (PrintBiasedLockingStatistics)
            (*BiasedLocking::revoked_lock_entry_count_addr())++;
        }
      }
      // code 7: the epoch differs from the class's epoch; try to rebias
      else if ((anticipated_bias_locking_value & epoch_mask_in_place) != 0) {
        // construct a mark word biased towards the current thread
        markOop new_header = (markOop) ( (intptr_t) lockee->klass()->prototype_header() | thread_ident);
        if (hash != markOopDesc::no_hash) {
          new_header = new_header->copy_set_hash(hash);
        }
        // CAS the new mark word into the object header
        if (Atomic::cmpxchg_ptr((void*)new_header, lockee->mark_addr(), mark) == mark) {
          if (PrintBiasedLockingStatistics)
            (*BiasedLocking::rebiased_lock_entry_count_addr())++;
        }
        else {
          // if rebiasing fails, call monitorenter to upgrade the lock
          CALL_VM(InterpreterRuntime::monitorenter(THREAD, entry), handle_exception);
        }
        success = true;
      }
      else {
        // the lock is biased to another thread, or anonymously biased
        // code 8: build an anonymously biased mark word, then try to CAS in a mark word biased to this thread
        markOop header = (markOop) ((uintptr_t) mark &
                                    ((uintptr_t)markOopDesc::biased_lock_mask_in_place |
                                     (uintptr_t)markOopDesc::age_mask_in_place |
                                     epoch_mask_in_place));
        if (hash != markOopDesc::no_hash) {
          header = header->copy_set_hash(hash);
        }
        markOop new_header = (markOop) ((uintptr_t) header | thread_ident);
        // debugging hint
        DEBUG_ONLY(entry->lock()->set_displaced_header((markOop) (uintptr_t) 0xdeaddead);)
        if (Atomic::cmpxchg_ptr((void*)new_header, lockee->mark_addr(), header) == header) {
          // the CAS succeeded
          if (PrintBiasedLockingStatistics)
            (*BiasedLocking::anonymously_biased_lock_entry_count_addr())++;
        }
        else {
          // the CAS failed: multiple threads are competing, go to monitorenter
          CALL_VM(InterpreterRuntime::monitorenter(THREAD, entry), handle_exception);
        }
        success = true;
      }
    }
    // success == false: the lock is biased to another thread, or biased mode is not on
    if (!success) {
      // lightweight locking logic
      // code 9: construct an unlocked mark word and set it as the Lock Record's displaced header
      markOop displaced = lockee->mark()->set_unlocked();
      entry->lock()->set_displaced_header(displaced);
      // -XX:+UseHeavyMonitors forces call_vm == true, disabling biased and lightweight locks
      bool call_vm = UseHeavyMonitors;
      // CAS a pointer to the Lock Record into the object header's mark word
      if (call_vm || Atomic::cmpxchg_ptr(entry, lockee->mark_addr(), displaced) != displaced) {
        // check whether this is a reentrant acquisition
        if (!call_vm && THREAD->is_lock_owned((address) displaced->clear_lock_bits())) {
          // code 10: reentrant: set this Lock Record's displaced header to NULL
          entry->lock()->set_displaced_header(NULL);
        } else {
          CALL_VM(InterpreterRuntime::monitorenter(THREAD, entry), handle_exception);
        }
      }
    }
    UPDATE_PC_AND_TOS_AND_CONTINUE(1, -1);
  } else {
    // no free Lock Record: allocate more monitors and re-execute
    istate->set_msg(more_monitors);
    UPDATE_PC_AND_RETURN(0); // Re-execute
  }
}

Review the format of the mark word in the object header:

Each class in the JVM also has a prototype_header, similar to a mark word, that records information such as the class's epoch and biased-mode switch; lockee->klass()->prototype_header() retrieves it.

Code 1: find a free Lock Record (of type BasicObjectLock) in the current thread's stack; a Lock Record is free when its obj field is NULL. Note that the traversal runs in ascending order of memory address and keeps the last free Lock Record found, in other words the highest free one.

Code 2: once a Lock Record is obtained, the first step is to assign its obj field.

Code 3, determine whether the mark word of the lock object is in biased mode, that is, whether the lower 3 bits are 101.

Code 4 is a multi-step bit operation: anticipated_bias_locking_value = (((uintptr_t)lockee->klass()->prototype_header() | thread_ident) ^ (uintptr_t)mark) & ~((uintptr_t)markOopDesc::age_mask_in_place); It can be broken into three parts.

The first part, ((uintptr_t)lockee->klass()->prototype_header() | thread_ident), ORs the current thread id into the class's prototype_header. The result is (current thread id + epoch + biased flag + lock bits of the prototype_header). Note that the four age bits of the prototype_header are 0.

The second part, ^ (uintptr_t)mark, XORs the result above with the lock object's markOop; every bit that matches becomes 0, leaving only the differing bits.

The third part, & ~((uintptr_t)markOopDesc::age_mask_in_place), clears the age. age_mask_in_place is ...0001111000; inverted it becomes ...1110000111, i.e. every bit is 1 except the four age bits. ANDing the XOR result with it discards the generational age, so age differences are ignored.

Code 5, anticipated_bias_locking_value==0 means that the biased thread is the current thread and the Mark Word epoch equals the class epoch, in which case nothing needs to be done.

Code 6: (anticipated_bias_locking_value & markOopDesc::biased_lock_mask_in_place) != 0 means the biased-mode bits differ. We already know mark->has_bias_pattern() is true, so this means the class's prototype_header is no longer in biased mode, i.e. biased locking has been turned off for the class.

In that case the CAS operation Atomic::cmpxchg_ptr(header, lockee->mark_addr(), mark) == mark is used to revoke the biased lock, replacing the object's mark word with the class's prototype_header. Note the argument order of cmpxchg_ptr: the new value comes first, the destination address second, and the expected value third. The method returns the value actually found at the destination; if that equals the expected value, the modification succeeded.

Code 7, if the epoch has expired, re-bias is required, and CAS instruction is used to replace the mark Word of the lock object with a new Mark word biased to the current thread and whose epoch is the epoch of the class.

In code 8, CAS changes the bias thread to the current thread. If the current bias is anonymous, the modification can be successful; otherwise, the lock upgrade logic will be entered.

Code 9 is already lightweight-lock logic. As the mark word format above shows, under a lightweight lock the mark word stores a pointer to a Lock Record. Here an unlocked mark word is constructed and stored into the Lock Record (see the first article for the Lock Record format). It is set to the unlocked state because, when the lightweight lock is released, this displaced mark word is written back into the object header, which must then read as unlocked.

Code 10: for a reentrant acquisition, the Lock Record's Displaced Mark Word is set to NULL; such NULL records serve as the reentrancy count.

The above is the biased lock acquisition process (including part of the lightweight-lock path). If the lock is already biased to another thread, or the epoch is out of date, or biased mode is off, or a concurrent conflict occurs during biased locking, execution enters the InterpreterRuntime::monitorenter method, where the biased lock is revoked and upgraded.

Biased lock revocation

Revocation here means turning the lock object into a non-biased state because some condition was not met while acquiring the biased lock. Release is the process of exiting a synchronized block; the release logic is described in the next section. Please keep the distinction between revocation and release in mind throughout this article.

When acquisition of the biased lock fails, execution enters InterpreterRuntime::monitorenter:

IRT_ENTRY_NO_ASYNC(void, InterpreterRuntime::monitorenter(JavaThread* thread, BasicObjectLock* elem))
  ...
  Handle h_obj(thread, elem->obj());
  assert(Universe::heap()->is_in_reserved_or_null(h_obj()), "must be NULL or an object");
  if (UseBiasedLocking) {
    // Retry fast entry if bias is revoked to avoid unnecessary inflation
    ObjectSynchronizer::fast_enter(h_obj, elem->lock(), true, CHECK);
  } else {
    ObjectSynchronizer::slow_enter(h_obj, elem->lock(), CHECK);
  }
  ...
IRT_END

You can see that if JVM bias locking is enabled, it goes into ObjectSynchronizer::fast_enter.

void ObjectSynchronizer::fast_enter(Handle obj, BasicLock* lock, bool attempt_rebias, TRAPS) {
 if (UseBiasedLocking) {
    if (!SafepointSynchronize::is_at_safepoint()) {
      BiasedLocking::Condition cond = BiasedLocking::revoke_and_rebias(obj, attempt_rebias, THREAD);
      if (cond == BiasedLocking::BIAS_REVOKED_AND_REBIASED) {
        return;
      }
    } else {
      assert(!attempt_rebias, "can not rebias toward VM thread");
      BiasedLocking::revoke_at_safepoint(obj);
    }
    assert(!obj->mark()->has_bias_pattern(), "biases should be revoked by now");
 }

 slow_enter(obj, lock, THREAD);
}

A normal Java thread follows the logic above into BiasedLocking::revoke_and_rebias; a VM thread instead takes the BiasedLocking::revoke_at_safepoint branch. We mainly look at BiasedLocking::revoke_and_rebias, whose job is just what its name says: revoke or rebias. The first argument is a handle wrapping the lock object and the current thread; the second indicates whether rebiasing is allowed, which here is true.

BiasedLocking::Condition BiasedLocking::revoke_and_rebias(Handle obj, bool attempt_rebias, TRAPS) {
  assert(!SafepointSynchronize::is_at_safepoint(), "must not be called while at safepoint");

  markOop mark = obj->mark();
  if (mark->is_biased_anonymously() && !attempt_rebias) {
    // Anonymously biased and attempt_rebias == false. This happens when the lock
    // object's hashCode method is called; the biased lock must be removed.
    markOop biased_value       = mark;
    markOop unbiased_prototype = markOopDesc::prototype()->set_age(mark->age());
    markOop res_mark = (markOop) Atomic::cmpxchg_ptr(unbiased_prototype, obj->mark_addr(), mark);
    if (res_mark == biased_value) {
      return BIAS_REVOKED;
    }
  } else if (mark->has_bias_pattern()) {
    Klass* k = obj->klass();
    markOop prototype_header = k->prototype_header();
    // code 1: biased mode has been turned off for the class; revoke with CAS
    if (!prototype_header->has_bias_pattern()) {
      markOop biased_value = mark;
      markOop res_mark = (markOop) Atomic::cmpxchg_ptr(prototype_header, obj->mark_addr(), mark);
      assert(!(*(obj->mark_addr()))->has_bias_pattern(), "even if we raced, should still be revoked");
      return BIAS_REVOKED;
    // code 2: the epoch has expired
    } else if (prototype_header->bias_epoch() != mark->bias_epoch()) {
      if (attempt_rebias) {
        assert(THREAD->is_Java_thread(), "");
        markOop biased_value       = mark;
        markOop rebiased_prototype = markOopDesc::encode((JavaThread*) THREAD, mark->age(), prototype_header->bias_epoch());
        markOop res_mark = (markOop) Atomic::cmpxchg_ptr(rebiased_prototype, obj->mark_addr(), mark);
        if (res_mark == biased_value) {
          return BIAS_REVOKED_AND_REBIASED;
        }
      } else {
        markOop biased_value       = mark;
        markOop unbiased_prototype = markOopDesc::prototype()->set_age(mark->age());
        markOop res_mark = (markOop) Atomic::cmpxchg_ptr(unbiased_prototype, obj->mark_addr(), mark);
        if (res_mark == biased_value) {
          return BIAS_REVOKED;
        }
      }
    }
  }

  // code 3: decide between single revocation, bulk rebias and bulk revocation
  HeuristicsResult heuristics = update_heuristics(obj(), attempt_rebias);
  if (heuristics == HR_NOT_BIASED) {
    return NOT_BIASED;
  } else if (heuristics == HR_SINGLE_REVOKE) {
    // code 4: revoke a single object's bias
    Klass *k = obj->klass();
    markOop prototype_header = k->prototype_header();
    if (mark->biased_locker() == THREAD &&
        prototype_header->bias_epoch() == mark->bias_epoch()) {
      // The revoking thread is the biased thread itself, e.g. when Object#hashCode
      // is called. Only the current thread's stack needs to be walked, so there is
      // no need to wait for a safepoint.
      ResourceMark rm;
      if (TraceBiasedLocking) {
        tty->print_cr("Revoking bias by walking my own stack:");
      }
      BiasedLocking::Condition cond = revoke_bias(obj(), false, false, (JavaThread*) THREAD);
      ((JavaThread*) THREAD)->set_cached_monitor_info(NULL);
      assert(cond == BIAS_REVOKED, "why not?");
      return cond;
    } else {
      // Otherwise push the operation to the VM thread, which eventually calls
      // revoke_bias at a safepoint
      VM_RevokeBias revoke(&obj, (JavaThread*) THREAD);
      VMThread::execute(&revoke);
      return revoke.status_code();
    }
  }

  assert((heuristics == HR_BULK_REVOKE) || (heuristics == HR_BULK_REBIAS), "?");
  // code 5: bulk revocation and bulk rebias logic
  VM_BulkRevokeBias bulk_revoke(&obj, (JavaThread*) THREAD,
                                (heuristics == HR_BULK_REBIAS), attempt_rebias);
  VMThread::execute(&bulk_revoke);
  return bulk_revoke.status_code();
}

There is a lot of logic leading into this method; we will analyze only the most common case: suppose the lock is already biased to thread A, and thread B tries to acquire it.

Thread B will not take the code 1 and code 2 paths above and eventually reaches code 4. If the thread requesting revocation is the biased thread itself, revoke_bias is called directly; otherwise the operation is pushed to the VM thread, which executes it at a safepoint.

A few words about the VM thread: the JVM has a dedicated VM thread that continuously pulls operations, such as GC operations, from the VMOperationQueue. Operations that require a safepoint (those whose VM_Operation::evaluate_at_safepoint returns true) must wait until all Java threads have entered a safepoint before executing. See this article for more on safepoints.

Let's focus on the revoke_bias method. The first argument is the lock object; the second and third arguments are both false here:

static BiasedLocking::Condition revoke_bias(oop obj, bool allow_rebias, bool is_bulk, JavaThread* requesting_thread) {
  markOop mark = obj->mark();
  // If biased mode is not on, return NOT_BIASED directly
  if (!mark->has_bias_pattern()) {
    ...
    return BiasedLocking::NOT_BIASED;
  }

  uint age = mark->age();
  // Build two mark words: one anonymously biased (101), one unlocked (001)
  markOop biased_prototype   = markOopDesc::biased_locking_prototype()->set_age(age);
  markOop unbiased_prototype = markOopDesc::prototype()->set_age(age);
  ...
  JavaThread* biased_thread = mark->biased_locker();
  if (biased_thread == NULL) {
    // Anonymously biased. This can happen when the lock object's hashCode() method is called.
    // If rebiasing is not allowed, set the object's mark word to the unlocked state
    if (!allow_rebias) {
      obj->set_mark(unbiased_prototype);
    }
    ...
    return BiasedLocking::BIAS_REVOKED;
  }

  // code 1: check whether the biased thread is still alive
  bool thread_is_alive = false;
  // If the current thread is the biased thread
  if (requesting_thread == biased_thread) {
    thread_is_alive = true;
  } else {
    // Iterate over all of the JVM's threads; if found, the biased thread is still alive
    for (JavaThread* cur_thread = Threads::first(); cur_thread != NULL; cur_thread = cur_thread->next()) {
      if (cur_thread == biased_thread) {
        thread_is_alive = true;
        break;
      }
    }
  }
  // If the biased thread is no longer alive
  if (!thread_is_alive) {
    // Set the mark word to anonymously biased if rebiasing is allowed, otherwise to unlocked
    if (allow_rebias) {
      obj->set_mark(biased_prototype);
    } else {
      obj->set_mark(unbiased_prototype);
    }
    ...
    return BiasedLocking::BIAS_REVOKED;
  }

  // The thread is alive: walk all Lock Records in its stack
  GrowableArray<MonitorInfo*>* cached_monitor_info = get_or_compute_monitor_info(biased_thread);
  BasicLock* highest_lock = NULL;
  for (int i = 0; i < cached_monitor_info->length(); i++) {
    MonitorInfo* mon_info = cached_monitor_info->at(i);
    // A matching Lock Record means the biased thread is still inside the synchronized block
    if (mon_info->owner() == obj) {
      ...
      // Upgrade to a lightweight lock by modifying the Lock Records in the biased
      // thread's stack directly. To handle reentrancy, set each Lock Record's
      // displaced header to NULL; the first (highest) Lock Record is handled below
      markOop mark = markOopDesc::encode((BasicLock*) NULL);
      highest_lock = mon_info->lock();
      highest_lock->set_displaced_header(mark);
    } else {
      ...
    }
  }

  if (highest_lock != NULL) {
    // Set the highest Lock Record's displaced header to the unlocked state, then
    // point the object's mark word at that Lock Record
    highest_lock->set_displaced_header(unbiased_prototype);
    obj->release_set_mark(markOopDesc::encode(highest_lock));
    ...
  } else {
    // The biased thread is no longer inside the synchronized block
    if (allow_rebias) {
      // Set to the anonymously biased state
      obj->set_mark(biased_prototype);
    } else {
      // Set the mark word to the unlocked state
      obj->set_mark(unbiased_prototype);
    }
  }
  return BiasedLocking::BIAS_REVOKED;
}

It is important to note that calling a lock object's Object#hashCode or System.identityHashCode() method causes the biased lock or lightweight lock on that object to be upgraded. This is because in Java an object's identity hash code is generated when these methods are called: in the unlocked state it is stored in the mark word, and for a heavyweight lock it is stored in the monitor, but a biased lock has nowhere to store it, so the lock must be upgraded. For details, see the section on hashCode()'s effect on biased locking in this article (note: that article's description of biased locking contains some errors). I also asked its author some questions, which he kindly answered. Thank you!

The logic of the revoke_bias method:

  1. Check whether the biased thread is still alive. If not, revoke the biased lock directly. The JVM maintains a collection of all living threads and walks it to decide.
  2. Check whether the biased thread is still inside the synchronized block; if not, revoke the biased lock. Recall the biased-lock acquisition process: each entry into the synchronized block (each monitorenter) finds the highest free Lock Record in the stack and points its obj field at the lock object, and each exit (each monitorexit) releases the lowest matching Lock Record. So walking the thread stack's Lock Records tells us whether the thread is still inside the synchronized block.
  3. Set the Displaced Mark Word of every Lock Record associated with the biased thread to NULL, then set the highest Lock Record's Displaced Mark Word to the unlocked state; the highest Lock Record is the one used the first time the lock was acquired (first in the reentrant sequence). Then point the object header at the highest Lock Record. No CAS instruction is needed because this runs at a safepoint. Once that is done, the lock has been upgraded to a lightweight one, and all the biased thread's Lock Records now behave as lightweight-lock records. If this is unclear, review the lightweight locking process in the previous article.

Biased lock release

The biased lock release entry is at bytecodeInterpreter.cpp#1923:

CASE(_monitorexit): {
  oop lockee = STACK_OBJECT(-1);
  CHECK_NULL(lockee);
  // derefing's lockee ought to provoke implicit null check
  // find our monitor slot
  BasicObjectLock* limit = istate->monitor_base();
  BasicObjectLock* most_recent = (BasicObjectLock*) istate->stack_base();
  // Walk the Lock Records in the stack from low to high address
  while (most_recent != limit) {
    // If this Lock Record is associated with the lock object
    if ((most_recent)->obj() == lockee) {
      BasicLock* lock = most_recent->lock();
      markOop header = lock->displaced_header();
      // Release the Lock Record
      most_recent->set_obj(NULL);
      // In biased mode releasing the Lock Record is enough; otherwise go through
      // the lightweight/heavyweight release path
      if (!lockee->mark()->has_bias_pattern()) {
        bool call_vm = UseHeavyMonitors;
        // header != NULL means this is not a reentrant exit, so the displaced
        // mark word must be written back into the object header
        if (header != NULL || call_vm) {
          if (call_vm || Atomic::cmpxchg_ptr(header, lockee->mark_addr(), lock) != lock) {
            // CAS failure or heavyweight lock: restore obj first, then call monitorexit
            most_recent->set_obj(lockee);
            CALL_VM(InterpreterRuntime::monitorexit(THREAD, most_recent), handle_exception);
          }
        }
      }
      // Execute the next instruction
      UPDATE_PC_AND_TOS_AND_CONTINUE(1, -1);
    }
    // Check the next Lock Record
    most_recent++;
  }
  // No matching Lock Record: throw an illegal monitor state exception
  CALL_VM(InterpreterRuntime::throw_illegal_monitor_state_exception(THREAD), handle_exception);
  ShouldNotReachHere();
}

With the comments, the code above should be easy to follow. Releasing a biased lock is very simple: just release the corresponding Lock Record. A lightweight lock additionally needs to write the displaced mark word back into the object header. If the CAS fails, or the lock is a heavyweight lock, execution enters InterpreterRuntime::monitorexit, which is covered in the articles on lightweight and heavyweight locks.

Batch rebias and batch revocation

The background of batch rebias and batch revocation was covered in the previous article. The relevant implementation is in BiasedLocking::revoke_and_rebias:

BiasedLocking::Condition BiasedLocking::revoke_and_rebias(Handle obj, bool attempt_rebias, TRAPS) {
  ...
  // code 1: decide between single revocation, bulk rebias and bulk revocation
  HeuristicsResult heuristics = update_heuristics(obj(), attempt_rebias);
  // single-revocation logic
  ...
  assert((heuristics == HR_BULK_REVOKE) || (heuristics == HR_BULK_REBIAS), "?");
  // code 2: bulk revocation and bulk rebias logic
  VM_BulkRevokeBias bulk_revoke(&obj, (JavaThread*) THREAD,
                                (heuristics == HR_BULK_REBIAS), attempt_rebias);
  VMThread::execute(&bulk_revoke);
  return bulk_revoke.status_code();
}

Every revocation of a biased lock is recorded by the update_heuristics method. When the revocation count for objects of a class reaches a threshold, the JVM decides that the class is unsuitable for biased mode, or that its objects need to be rebiased toward other threads, and update_heuristics returns HR_BULK_REVOKE or HR_BULK_REBIAS to trigger a batch revocation or batch rebias.

Start with the update_heuristics method.

static HeuristicsResult update_heuristics(oop o, bool allow_rebias) {
  markOop mark = o->mark();
  // Return if it is not biased
  if (!mark->has_bias_pattern()) {
    return HR_NOT_BIASED;
  }
 
  // Lock the class of the object
  Klass* k = o->klass();
  // The current time
  jlong cur_time = os::javaTimeMillis();
  // The last batch undo time of the class
  jlong last_bulk_revocation_time = k->last_biased_lock_bulk_revocation_time();
  // The number of times this type of bias lock is revoked
  int revocation_count = k->biased_lock_revocation_count();
  // BiasedLockingBulkRebiasThreshold is the bulk rebias threshold (default 20),
  // BiasedLockingBulkRevokeThreshold is the bulk revocation threshold (default 40),
  // and BiasedLockingDecayTime (default 25000 ms) is the minimum delay between
  // bulk rebias operations. That is, after a bulk rebias, if it takes a long
  // time (>= BiasedLockingDecayTime) for the revocation counter to exceed the
  // rebias threshold again, the counter is reset instead.
  if ((revocation_count >= BiasedLockingBulkRebiasThreshold) &&
      (revocation_count < BiasedLockingBulkRevokeThreshold) &&
      (last_bulk_revocation_time != 0) &&
      (cur_time - last_bulk_revocation_time >= BiasedLockingDecayTime)) {
    // This is the first revocation we've seen in a while of an
    // object of this type since the last time we performed a bulk
    // rebiasing operation. The application is allocating objects in
    // bulk which are biased toward a thread and then handing them
    // off to another thread. We can cope with this allocation
    // pattern via the bulk rebiasing mechanism so we reset the
    // klass's revocation count rather than allow it to increase
    // monotonically. If we see the need to perform another bulk
    // rebias operation later, we will, and if subsequently we see
    // many more revocation operations in a short period of time we
    // will completely disable biasing for this type.
    k->set_biased_lock_revocation_count(0);
    revocation_count = 0;
  }

  // Atomically increment the revocation counter
  if (revocation_count <= BiasedLockingBulkRevokeThreshold) {
    revocation_count = k->atomic_incr_biased_lock_revocation_count();
  }
  // If the batch revoke threshold is reached, HR_BULK_REVOKE is returned
  if (revocation_count == BiasedLockingBulkRevokeThreshold) {
    return HR_BULK_REVOKE;
  }
  // If the bulk rebias threshold is reached, HR_BULK_REBIAS is returned
  if (revocation_count == BiasedLockingBulkRebiasThreshold) {
    return HR_BULK_REBIAS;
  }
  // Unlock a single object if the threshold is not reached
  return HR_SINGLE_REVOKE;
}
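The decision logic above can be condensed into a short sketch. This is a simplified, illustrative Java model of `update_heuristics` — the class and method names are invented for this article; only the default thresholds and the order of the checks mirror the HotSpot source:

```java
// Simplified model of HotSpot's update_heuristics (illustrative only).
public class BiasRevocationHeuristics {
    static final int BULK_REBIAS_THRESHOLD = 20;  // BiasedLockingBulkRebiasThreshold
    static final int BULK_REVOKE_THRESHOLD = 40;  // BiasedLockingBulkRevokeThreshold
    static final long DECAY_TIME_MS = 25_000;     // BiasedLockingDecayTime

    enum Decision { SINGLE_REVOKE, BULK_REBIAS, BULK_REVOKE }

    int revocationCount = 0;
    long lastBulkOpTime = 0;

    Decision onRevocation(long nowMs) {
        // Decay: the last bulk rebias was long enough ago, so reset the counter
        // instead of letting it creep up to the bulk-revoke threshold.
        if (revocationCount >= BULK_REBIAS_THRESHOLD
                && revocationCount < BULK_REVOKE_THRESHOLD
                && lastBulkOpTime != 0
                && nowMs - lastBulkOpTime >= DECAY_TIME_MS) {
            revocationCount = 0;
        }
        if (revocationCount <= BULK_REVOKE_THRESHOLD) {
            revocationCount++;
        }
        if (revocationCount == BULK_REVOKE_THRESHOLD) {
            lastBulkOpTime = nowMs;  // in HotSpot this is set during the bulk operation
            return Decision.BULK_REVOKE;
        }
        if (revocationCount == BULK_REBIAS_THRESHOLD) {
            lastBulkOpTime = nowMs;
            return Decision.BULK_REBIAS;
        }
        return Decision.SINGLE_REVOKE;
    }
}
```

Running this model, the 20th revocation of a class triggers a bulk rebias, the 40th a bulk revoke, and a quiet period of at least 25 seconds after a bulk rebias resets the counter.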

bulk_revoke_or_rebias_at_safepoint is invoked by the VM thread at a safepoint. If bulk_rebias is true, a bulk rebias is performed; otherwise, a bulk revocation is performed. attempt_rebias_of_object indicates whether the lock object o being operated on should itself be rebiased, which is true in this case.

static BiasedLocking::Condition bulk_revoke_or_rebias_at_safepoint(oop o,
                                                                   bool bulk_rebias,
                                                                   bool attempt_rebias_of_object,
                                                                   JavaThread* requesting_thread) {... jlong cur_time = os::javaTimeMillis(); o->klass()->set_last_biased_lock_bulk_revocation_time(cur_time); Klass* k_o = o->klass(); Klass* klass = k_o;if (bulk_rebias) {
    // Batch bias logic
    if (klass->prototype_header()->has_bias_pattern()) {
      // Increments the epoch in the previous class
      int prev_epoch = klass->prototype_header()->bias_epoch();
      // code 1: the epoch in the class is incremented
      klass->set_prototype_header(klass->prototype_header()->incr_bias_epoch());
      int cur_epoch = klass->prototype_header()->bias_epoch();

      // code 2: iterate through the stack of all threads and update the epoch of all lock instances of type klass
      for (JavaThread* thr = Threads::first(); thr != NULL; thr = thr->next()) {
        GrowableArray<MonitorInfo*>* cached_monitor_info = get_or_compute_monitor_info(thr);
        for (int i = 0; i < cached_monitor_info->length(); i++) {
          MonitorInfo* mon_info = cached_monitor_info->at(i);
          oop owner = mon_info->owner();
          markOop mark = owner->mark();
          if ((owner->klass() == k_o) && mark->has_bias_pattern()) {
            // We might have encountered this object already in the case of recursive locking
            assert(mark->bias_epoch() == prev_epoch || mark->bias_epoch() == cur_epoch,
                   "error in bias epoch adjustment");
            owner->set_mark(mark->set_bias_epoch(cur_epoch));
          }
        }
      }
    }
    // Then re-bias the current lock object
    revoke_bias(o, attempt_rebias_of_object && klass->prototype_header()->has_bias_pattern(), true, requesting_thread);
  } else {
    ...
    // code 3: markOopDesc::prototype() returns a prototype header with the bias pattern turned off
    klass->set_prototype_header(markOopDesc::prototype());

    // code 4: walk through the stack of all threads and undo the bias of all locks in that class
    for (JavaThread* thr = Threads::first(); thr != NULL; thr = thr->next()) {
      GrowableArray<MonitorInfo*>* cached_monitor_info = get_or_compute_monitor_info(thr);
      for (int i = 0; i < cached_monitor_info->length(); i++) {
        MonitorInfo* mon_info = cached_monitor_info->at(i);
        oop owner = mon_info->owner();
        markOop mark = owner->mark();
        if ((owner->klass() == k_o) && mark->has_bias_pattern()) {
          revoke_bias(owner, false, true, requesting_thread);
        }
      }
    }
    // Revoke the bias of the current lock object
    revoke_bias(o, false, true, requesting_thread);
  }
  ...
  BiasedLocking::Condition status_code = BiasedLocking::BIAS_REVOKED;

  if (attempt_rebias_of_object &&
      o->mark()->has_bias_pattern() &&
      klass->prototype_header()->has_bias_pattern()) {
    // Construct a Mark word biased toward the request thread
    markOop new_mark = markOopDesc::encode(requesting_thread, o->mark()->age(),
                                           klass->prototype_header()->bias_epoch());
    // Update the mark word of the current lock object
    o->set_mark(new_mark);
    status_code = BiasedLocking::BIAS_REVOKED_AND_REBIASED;
    ...
  }
  ...
  return status_code;
}


The method contains two branches of logic: bulk rebias and bulk revocation.

First look at the bulk rebias branch, which consists of two steps:

Code 1 increments the epoch in the class by one; afterwards, when an existing instance of the class acquires the lock, a rebias is attempted, as described in the section on the biased lock acquisition process.

Code 2 handles the lock objects currently in use: it walks the stacks of all live threads, finds every biased lock object of this class that is in use, and updates its epoch to the new value. This keeps the bias of in-use locks valid, so they are not treated as rebiasable; otherwise the thread safety of those locks would be compromised.
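The epoch mechanism can be illustrated with a toy model. The class and field names below are invented for this article; in HotSpot the epoch lives in the object's mark word and in the class's prototype header:

```java
import java.util.List;

// Toy model of a class's prototype header epoch (illustrative only).
class KlassModel {
    int prototypeEpoch = 0;

    // Bulk rebias: bump the class epoch (code 1) and refresh the epoch of
    // every instance that is currently locked (code 2) so its bias stays valid.
    void bulkRebias(List<LockObjModel> currentlyLocked) {
        prototypeEpoch++;
        for (LockObjModel o : currentlyLocked) {
            o.epoch = prototypeEpoch;
        }
    }
}

// Toy model of a lock object's mark word epoch.
class LockObjModel {
    int epoch;

    LockObjModel(KlassModel k) { epoch = k.prototypeEpoch; }

    // A stale epoch means the bias is no longer considered valid, so the
    // object may be rebiased to the next locking thread instead of revoked.
    boolean rebiasable(KlassModel k) { return epoch != k.prototypeEpoch; }
}
```

After a bulk rebias, an idle instance (stale epoch) becomes rebiasable, while an instance that was locked at the time keeps a current epoch and stays validly biased.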

The bulk revocation logic is as follows:

Code 3 turns off the bias pattern in the class's prototype header; afterwards, when existing instances of the class acquire the lock, they are upgraded to lightweight locks, and the mark word of newly allocated objects of this class starts in the lock-free (neutral) state.

Code 4 handles the lock objects currently in use: it walks the stacks of all live threads, finds every biased lock object of this class that is in use, and revokes its bias.
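The revocation branch can be sketched with the same kind of toy model (names invented for this article; in HotSpot the "biasable" flag is the bias pattern bit in the class's prototype header):

```java
import java.util.List;

// Toy model of a class whose prototype header controls biasability (illustrative only).
class BiasKlassModel {
    boolean biasable = true;  // models prototype_header()->has_bias_pattern()

    // Bulk revoke: turn off the class's bias pattern (code 3) and revoke the
    // bias of every instance currently in use (code 4).
    void bulkRevoke(List<BiasObjModel> currentlyLocked) {
        biasable = false;
        for (BiasObjModel o : currentlyLocked) {
            o.biased = false;  // in HotSpot these are upgraded to lightweight locks
        }
    }
}

// Toy model of an instance's mark word bias state.
class BiasObjModel {
    boolean biased;

    // New instances copy the class prototype: after a bulk revoke they are
    // allocated directly in the lock-free (neutral) state.
    BiasObjModel(BiasKlassModel k) { biased = k.biasable; }
}
```

After the bulk revoke, both the in-use instance and any freshly allocated instance end up unbiased, so biased locking is effectively disabled for the class.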