This article is divided into several parts:

1. Biased lock entry

2. Biased lock acquisition process

3. Biased lock revocation process

4. Biased lock release process

5. Batch rebias and batch revocation of biased locks

The JVM analyzed in this article is JDK 8; the exact version number and code can be found here.

Biased lock entry

Many articles on the Internet point to the wrong place for the biased-lock entry in the source code, which left me unable to follow much of the biased-locking logic and cost me a lot of detours.

Synchronized comes in two forms: synchronized code blocks and synchronized methods. The underlying lock-acquisition logic is the same; this article explains the implementation of synchronized code blocks. As mentioned in the previous article, synchronized blocks are implemented by the monitorenter and monitorexit instructions.

Regarding the lock-acquisition entry in the HotSpot virtual machine, many articles on the web point either to InterpreterRuntime::monitorenter or to bytecodeInterpreter.cpp#1816. The article by Zhan Xiaolang also gets the position of the lock entry slightly wrong (the article is still very good, and it helped me a lot when I started studying synchronized).

To find the lock entry, you must find the places in the source code where the monitorenter instruction is parsed. The monitorenter instruction is parsed in two places in HotSpot: bytecodeInterpreter.cpp#1816 and templateTable_x86_64.cpp#3667.

The former is the bytecodeInterpreter in the JVM, which implements every JVM instruction (e.g., monitorenter, invokevirtual, etc.) in C++. Its advantage is that it is relatively simple to implement and easy to understand, but it is slow. The latter is the templateInterpreter, which writes a corresponding piece of assembly code for each instruction and, at startup, binds each instruction to its assembly entry; this is extremely efficient. For the implementation of the template interpreter, see this article; during my research I also asked its author, Mr. Wang, some questions, for which I am grateful.

In HotSpot, only the template interpreter is actually used; the bytecode interpreter is not used at all, as the reading notes show.

So monitorenter’s parsing entry is in the template interpreter, and the code is at templateTable_x86_64.cpp#3667. Following the call path templateTable_x86_64#monitorenter -> interp_masm_x86_64#lock_object -> macroAssembler_x86#biased_locking_enter, you can see the corresponding assembly code being generated. Note that biased_locking_enter is not called every time a monitorenter instruction is parsed; the method is called only at JVM startup to generate the assembly code, after which instructions are handled by executing that assembly code directly.

The logic of the bytecodeInterpreter is very close to that of the templateInterpreter, but since the templateInterpreter is assembly code and a bit more obscure, reading the bytecodeInterpreter implementation is easier. The catch is that the bytecodeInterpreter did not implement the biased-locking logic before JDK8u: the jdk8-87ee5ee27509 version I read earlier lacks it, which confused me for a long time. In this commit, the bytecodeInterpreter added support for biased locking. Comparing it with the templateInterpreter, apart from the stack structure the logic is basically the same, so the biased-lock logic below is explained using the bytecodeInterpreter. The templateInterpreter source is annotated in English, so with a basic knowledge of assembly it is not difficult to follow along with the annotations.

Biased lock acquisition process

Here we begin the analysis of the biased lock acquisition process in bytecodeInterpreter.cpp#1816. Note that the code has been trimmed.

```cpp
CASE(_monitorenter): {
  oop lockee = STACK_OBJECT(-1);
  // derefing's lockee ought to provoke implicit null check
  CHECK_NULL(lockee);
  // code 1: find a free Lock Record
  BasicObjectLock* limit = istate->monitor_base();
  BasicObjectLock* most_recent = (BasicObjectLock*) istate->stack_base();
  BasicObjectLock* entry = NULL;
  while (most_recent != limit) {
    if (most_recent->obj() == NULL) entry = most_recent;
    else if (most_recent->obj() == lockee) break;
    most_recent++;
  }
  // entry != NULL means there is still a free Lock Record
  if (entry != NULL) {
    // code 2: point the Lock Record's obj field at the lock object
    entry->set_obj(lockee);
    int success = false;
    uintptr_t epoch_mask_in_place = (uintptr_t)markOopDesc::epoch_mask_in_place;

    markOop mark = lockee->mark();
    intptr_t hash = (intptr_t) markOopDesc::no_hash;
    // code 3: is the lock object's mark word in biased mode?
    if (mark->has_bias_pattern()) {
      uintptr_t thread_ident;
      uintptr_t anticipated_bias_locking_value;
      thread_ident = (uintptr_t)istate->thread();
      // code 4: there are several steps here, analyzed below
      anticipated_bias_locking_value =
        (((uintptr_t)lockee->klass()->prototype_header() | thread_ident) ^ (uintptr_t)mark) &
        ~((uintptr_t) markOopDesc::age_mask_in_place);

      // code 5: already biased toward this thread, nothing to do
      if (anticipated_bias_locking_value == 0) {
        if (PrintBiasedLockingStatistics) {
          (* BiasedLocking::biased_lock_entry_count_addr())++;
        }
        success = true;
      }
      // code 6: bias mode is turned off for the class; try to revoke the bias
      else if ((anticipated_bias_locking_value & markOopDesc::biased_lock_mask_in_place) != 0) {
        markOop header = lockee->klass()->prototype_header();
        if (hash != markOopDesc::no_hash) {
          header = header->copy_set_hash(hash);
        }
        if (Atomic::cmpxchg_ptr(header, lockee->mark_addr(), mark) == mark) {
          if (PrintBiasedLockingStatistics)
            (*BiasedLocking::revoked_lock_entry_count_addr())++;
        }
      }
      // code 7: the epoch has expired; try to rebias
      else if ((anticipated_bias_locking_value & epoch_mask_in_place) != 0) {
        // construct a mark word biased toward the current thread
        markOop new_header = (markOop) ((intptr_t) lockee->klass()->prototype_header() | thread_ident);
        if (hash != markOopDesc::no_hash) {
          new_header = new_header->copy_set_hash(hash);
        }
        if (Atomic::cmpxchg_ptr((void*)new_header, lockee->mark_addr(), mark) == mark) {
          if (PrintBiasedLockingStatistics)
            (* BiasedLocking::rebiased_lock_entry_count_addr())++;
        } else {
          // rebias failed: multiple threads are competing, call monitorenter for lock upgrade
          CALL_VM(InterpreterRuntime::monitorenter(THREAD, entry), handle_exception);
        }
        success = true;
      }
      else {
        // code 8: biased toward another thread, or anonymously biased (i.e., biased toward no thread);
        // build an anonymously biased mark word, then try to CAS in a mark word biased toward us
        markOop header = (markOop) ((uintptr_t) mark &
                                    ((uintptr_t)markOopDesc::biased_lock_mask_in_place |
                                     (uintptr_t)markOopDesc::age_mask_in_place |
                                     epoch_mask_in_place));
        if (hash != markOopDesc::no_hash) {
          header = header->copy_set_hash(hash);
        }
        markOop new_header = (markOop) ((uintptr_t) header | thread_ident);
        // debugging hint
        DEBUG_ONLY(entry->lock()->set_displaced_header((markOop) (uintptr_t) 0xdeaddead);)
        if (Atomic::cmpxchg_ptr((void*)new_header, lockee->mark_addr(), header) == header) {
          // CAS succeeded
          if (PrintBiasedLockingStatistics)
            (* BiasedLocking::anonymously_biased_lock_entry_count_addr())++;
        } else {
          // CAS failed: the object was not anonymously biased, there is multi-thread
          // contention, so enter the monitorenter method
          CALL_VM(InterpreterRuntime::monitorenter(THREAD, entry), handle_exception);
        }
        success = true;
      }
    }

    // if biased mode is not enabled or the bias was just revoked, success == false
    if (!success) {
      // lightweight lock logic
      // code 9: construct an unlocked mark word and store it in the Lock Record
      markOop displaced = lockee->mark()->set_unlocked();
      entry->lock()->set_displaced_header(displaced);
      // if -XX:+UseHeavyMonitors is specified, call_vm == true, disabling biased and lightweight locks
      bool call_vm = UseHeavyMonitors;
      // CAS the object's mark word to a pointer to the Lock Record
      if (call_vm || Atomic::cmpxchg_ptr(entry, lockee->mark_addr(), displaced) != displaced) {
        // determine whether this is a lock reentry
        if (!call_vm && THREAD->is_lock_owned((address) displaced->clear_lock_bits())) {
          // code 10: reentry; set the Lock Record's displaced header to NULL
          entry->lock()->set_displaced_header(NULL);
        } else {
          CALL_VM(InterpreterRuntime::monitorenter(THREAD, entry), handle_exception);
        }
      }
    }
    UPDATE_PC_AND_TOS_AND_CONTINUE(1, -1);
  } else {
    // no free Lock Record: allocate more monitors and re-execute
    istate->set_msg(more_monitors);
    UPDATE_PC_AND_RETURN(0); // Re-execute
  }
}
```

Review the format of the mark word in the object header:

Each class in the JVM also has a mark-word-like prototype_header that records information such as the class’s epoch and bias switch; lockee->klass()->prototype_header() retrieves it. markOopDesc provides the various operations on the object header, and intptr_t is the integer type used for the bit operations.

Code 1: find a free Lock Record (i.e., a BasicObjectLock) in the current thread’s stack; whether a Lock Record is free is determined by whether its obj field is NULL. Note that the iteration runs in ascending order of memory address and keeps the last free Lock Record found, in other words the highest available one.
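The scan in code 1 can be modeled in a few lines of Java. This is an illustrative sketch, not HotSpot code: the monitor area is reduced to an array of obj fields, with null marking a free slot:

```java
public final class LockRecordScan {
    // A BasicObjectLock reduced to its obj field; null means the slot is free.
    // Mirrors code 1: remember the last free slot, stop early on a reentrant match.
    static Integer findSlot(Object[] monitorArea, Object lockee) {
        Integer entry = null;
        for (int i = 0; i < monitorArea.length; i++) {
            if (monitorArea[i] == null) entry = i;      // remember the last free slot seen
            else if (monitorArea[i] == lockee) break;   // reentry: stop at the existing record
        }
        return entry;
    }

    public static void main(String[] args) {
        Object lockee = new Object();
        Object other  = new Object();
        Object[] area = { null, other, null, null };    // stack_base .. monitor_base
        System.out.println(findSlot(area, lockee));     // 3: the last (highest) free slot wins
        Object[] reentry = { null, lockee, null };
        System.out.println(findSlot(reentry, lockee));  // 0: only the slot before the match counts
    }
}
```

A reentrant monitorenter thus still gets a fresh Lock Record, but only from the slots scanned before the existing record for the same object.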

Code 2: once the Lock Record is obtained, the first thing to do is assign its obj field.

Code 3, determine whether the mark word of the lock object is in biased mode, that is, whether the lower 3 bits are 101.

Code 4 performs a multi-step bit operation: anticipated_bias_locking_value = (((uintptr_t)lockee->klass()->prototype_header() | thread_ident) ^ (uintptr_t)mark) & ~((uintptr_t)markOopDesc::age_mask_in_place); This bit operation can be divided into three parts.

The first part, ((uintptr_t)lockee->klass()->prototype_header() | thread_ident), ORs the current thread id with the class’s prototype_header; the result is (current thread id + epoch + bias flag + lock flag bits taken from the prototype_header). Note that the four age bits of the prototype_header are 0.

The second part, ^ (uintptr_t)mark, XORs that result with the lock object’s mark word: every bit that matches becomes 0, leaving only the bits that differ.

The third part, & ~((uintptr_t)markOopDesc::age_mask_in_place): age_mask_in_place looks like …0001111000; negated it becomes …1110000111, i.e. every bit is 1 except the four age bits. ANDing this with the XOR result above masks out the generational age, so differences in the age bits are ignored.
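To make the three parts concrete, here is a minimal Java sketch of the same computation. The bit positions (lock bits 0-2, age bits 3-6, epoch bits 8-9, thread pointer from bit 10) follow the 64-bit mark word layout; the thread value is a made-up stand-in for a real thread pointer:

```java
public final class BiasCheck {
    // Constants mirroring the 64-bit mark word layout:
    // [thread:54 | epoch:2 | unused:1 | age:4 | biased:1 | lock:2]
    static final long LOCK_MASK_IN_PLACE  = 0b111;         // biased_lock_mask_in_place
    static final long AGE_MASK_IN_PLACE   = 0b1111L << 3;  // age_mask_in_place
    static final long EPOCH_MASK_IN_PLACE = 0b11L << 8;    // epoch_mask_in_place
    static final long BIASED_PATTERN      = 0b101;         // bias bit set, lock bits 01

    // The XOR trick: compare (prototype_header | thread) against the mark word,
    // ignoring the age bits.
    static long anticipatedBiasLockingValue(long prototypeHeader, long threadIdent, long mark) {
        return ((prototypeHeader | threadIdent) ^ mark) & ~AGE_MASK_IN_PLACE;
    }

    public static void main(String[] args) {
        long thread = 0x1234L << 10;                 // hypothetical thread pointer, bits 10..63
        long epoch  = 0b01L << 8;
        long proto  = BIASED_PATTERN | epoch;        // class prototype_header: biased, epoch = 1
        long mark   = thread | epoch | BIASED_PATTERN; // object already biased to this thread

        long v = anticipatedBiasLockingValue(proto, thread, mark);
        System.out.println(v == 0);  // code 5: biased to us, epoch current -> nothing to do

        long staleMark = thread | BIASED_PATTERN;    // epoch 0: expired
        long v2 = anticipatedBiasLockingValue(proto, thread, staleMark);
        // only the epoch bits differ -> the code 7 rebias path
        System.out.println((v2 & LOCK_MASK_IN_PLACE) == 0 && (v2 & EPOCH_MASK_IN_PLACE) != 0);
    }
}
```

A nonzero result pinpoints, by which bits survive, whether the lock flags, the epoch, or the thread field is the mismatch, which is exactly how codes 5 through 8 branch.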

Code 5, anticipated_bias_locking_value==0 means that the biased thread is the current thread and the Mark Word epoch equals the class epoch, in which case nothing needs to be done.

Code 6: (anticipated_bias_locking_value & markOopDesc::biased_lock_mask_in_place) != 0 means the lock flag bits differ. Since we already know mark->has_bias_pattern() is true, the difference must come from the class’s prototype_header; that is, bias mode has been turned off for the class, so the bias needs to be revoked.

The CAS instruction Atomic::cmpxchg_ptr(header, lockee->mark_addr(), mark) == mark is then used to revoke the biased lock. Unlike Java’s CAS APIs, cmpxchg_ptr takes the new value as its first argument, the address to modify as its second, and the expected value as its third. The method returns the value actually found at the address; if that equals the expected value, the modification succeeded.
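The witness-value semantics of cmpxchg_ptr can be mimicked in Java (9+) with AtomicLong.compareAndExchange, which likewise returns the value actually found rather than a boolean. A small analogy (the argument order here follows cmpxchg_ptr, not Java convention):

```java
import java.util.concurrent.atomic.AtomicLong;

public final class CmpxchgDemo {
    // Analogue of Atomic::cmpxchg_ptr(new_value, addr, expected):
    // returns the value actually found; success iff it equals `expected`.
    static long cmpxchg(AtomicLong addr, long newValue, long expected) {
        return addr.compareAndExchange(expected, newValue);
    }

    public static void main(String[] args) {
        AtomicLong markWord = new AtomicLong(0b101);   // biased pattern
        long found = cmpxchg(markWord, 0b001, 0b101);  // revoke: swap in unlocked 001
        System.out.println(found == 0b101);            // witness equals expected -> success
        System.out.println(markWord.get() == 0b001);

        long found2 = cmpxchg(markWord, 0b101, 0b111); // wrong expected value
        System.out.println(found2 == 0b001);           // returns the actual value, no change made
    }
}
```

Returning the witness value instead of a boolean lets the caller see what the mark word actually was when the CAS failed, which several of the paths above rely on.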

Code 7: if the epoch has expired, a rebias is required. CAS replaces the lock object’s mark word with a new mark word that is biased toward the current thread and whose epoch equals the class’s epoch.

Code 8: CAS attempts to change the biased thread to the current thread. If the object is currently anonymously biased, the CAS can succeed; otherwise the lock-upgrade logic is entered.

Code 9: this step is already lightweight-lock logic. As the mark word format shows, a lightweight-locked mark word stores a pointer to a Lock Record. Here an unlocked mark word (the displaced header) is constructed and stored into the Lock Record (see the first article for the Lock Record format). The displaced mark word is built in the unlocked state because it then serves as the expected value of the CAS that swaps the object’s mark word for a pointer to the Lock Record: the swap should only succeed if the object is currently unlocked.

Code 10: if this is a lock reentry, the Lock Record’s displaced header is set to NULL; the records with NULL displaced headers serve as the reentry count.

Above is the biased-lock acquisition process (including part of the lightweight-lock path). If the lock is already biased toward another thread, or the epoch is out of date, or bias mode is turned off, or a concurrency conflict occurs during biased locking, execution enters the InterpreterRuntime::monitorenter method, in which the biased lock is revoked and upgraded.

Biased lock revocation

Revocation here means changing the lock object to a non-biased state (unlocked, lightweight-locked, etc.) because some condition was not met during biased-lock acquisition. Release is what happens when a synchronized block is exited; the logic for releasing locks is described in the next section. Please keep the distinction between revocation and release in mind throughout this article.

When acquiring the biased lock fails, execution enters the InterpreterRuntime::monitorenter method:

```cpp
IRT_ENTRY_NO_ASYNC(void, InterpreterRuntime::monitorenter(JavaThread* thread, BasicObjectLock* elem))
  ...
  Handle h_obj(thread, elem->obj());
  assert(Universe::heap()->is_in_reserved_or_null(h_obj()),
         "must be NULL or an object");
  if (UseBiasedLocking) {
    // Retry fast entry if bias is revoked to avoid unnecessary inflation
    ObjectSynchronizer::fast_enter(h_obj, elem->lock(), true, CHECK);
  } else {
    ObjectSynchronizer::slow_enter(h_obj, elem->lock(), CHECK);
  }
  ...
IRT_END
```

You can see that if JVM bias locking is enabled, it goes into ObjectSynchronizer::fast_enter.

```cpp
void ObjectSynchronizer::fast_enter(Handle obj, BasicLock* lock, bool attempt_rebias, TRAPS) {
  if (UseBiasedLocking) {
    if (!SafepointSynchronize::is_at_safepoint()) {
      BiasedLocking::Condition cond = BiasedLocking::revoke_and_rebias(obj, attempt_rebias, THREAD);
      if (cond == BiasedLocking::BIAS_REVOKED_AND_REBIASED) {
        return;
      }
    } else {
      assert(!attempt_rebias, "can not rebias toward VM thread");
      BiasedLocking::revoke_at_safepoint(obj);
    }
    assert(!obj->mark()->has_bias_pattern(), "biases should be revoked by now");
  }
  // At this point any bias has been revoked. If the biased thread was still inside the
  // synchronized block, the lock has been upgraded to a lightweight lock held by it;
  // the current thread falls through to slow_enter, spins trying to acquire the lock,
  // and after a certain number of attempts inflates to a heavyweight lock.
  slow_enter(obj, lock, THREAD);
}
```

A normal Java thread follows the upper path into the BiasedLocking::revoke_and_rebias method; a VM thread follows the lower path into BiasedLocking::revoke_at_safepoint. We mainly look at BiasedLocking::revoke_and_rebias. Its purpose is just what the name says: revoke or rebias. The first argument wraps the lock object and the current thread, and the second indicates whether rebias is allowed, here true.

```cpp
BiasedLocking::Condition BiasedLocking::revoke_and_rebias(Handle obj, bool attempt_rebias, TRAPS) {
  assert(!SafepointSynchronize::is_at_safepoint(), "must not be called while at safepoint");

  markOop mark = obj->mark();
  if (mark->is_biased_anonymously() && !attempt_rebias) {
    // Anonymously biased and attempt_rebias == false; we get here e.g. when the lock
    // object's hashCode() method is called and the bias must be revoked.
    markOop biased_value       = mark;
    markOop unbiased_prototype = markOopDesc::prototype()->set_age(mark->age());
    markOop res_mark = (markOop) Atomic::cmpxchg_ptr(unbiased_prototype, obj->mark_addr(), mark);
    if (res_mark == biased_value) {
      return BIAS_REVOKED;
    }
  } else if (mark->has_bias_pattern()) {
    Klass* k = obj->klass();
    markOop prototype_header = k->prototype_header();
    // code 1: bias mode has been turned off for the class
    if (!prototype_header->has_bias_pattern()) {
      markOop biased_value = mark;
      markOop res_mark = (markOop) Atomic::cmpxchg_ptr(prototype_header, obj->mark_addr(), mark);
      assert(!(*(obj->mark_addr()))->has_bias_pattern(), "even if we raced, should still be revoked");
      return BIAS_REVOKED;
    // code 2: the epoch has expired
    } else if (prototype_header->bias_epoch() != mark->bias_epoch()) {
      if (attempt_rebias) {
        assert(THREAD->is_Java_thread(), "");
        markOop biased_value       = mark;
        markOop rebiased_prototype = markOopDesc::encode((JavaThread*) THREAD, mark->age(), prototype_header->bias_epoch());
        markOop res_mark = (markOop) Atomic::cmpxchg_ptr(rebiased_prototype, obj->mark_addr(), mark);
        if (res_mark == biased_value) {
          return BIAS_REVOKED_AND_REBIASED;
        }
      } else {
        markOop biased_value       = mark;
        markOop unbiased_prototype = markOopDesc::prototype()->set_age(mark->age());
        markOop res_mark = (markOop) Atomic::cmpxchg_ptr(unbiased_prototype, obj->mark_addr(), mark);
        if (res_mark == biased_value) {
          return BIAS_REVOKED;
        }
      }
    }
  }

  // code 3: heuristics deciding between single revocation, batch rebias and batch revocation
  HeuristicsResult heuristics = update_heuristics(obj(), attempt_rebias);
  if (heuristics == HR_NOT_BIASED) {
    return NOT_BIASED;
  } else if (heuristics == HR_SINGLE_REVOKE) {
    // code 4: revoke the bias of a single object
    Klass *k = obj->klass();
    markOop prototype_header = k->prototype_header();
    if (mark->biased_locker() == THREAD &&
        prototype_header->bias_epoch() == mark->bias_epoch()) {
      // The object is biased toward the calling thread itself (calling Object#hashCode
      // on it can lead here). No safepoint is needed, because revoking only requires
      // walking the current thread's own stack.
      ResourceMark rm;
      if (TraceBiasedLocking) {
        tty->print_cr("Revoking bias by walking my own stack:");
      }
      BiasedLocking::Condition cond = revoke_bias(obj(), false, false, (JavaThread*) THREAD);
      ((JavaThread*) THREAD)->set_cached_monitor_info(NULL);
      assert(cond == BIAS_REVOKED, "why not?");
      return cond;
    } else {
      // Push a VM operation; the revocation happens at a safepoint.
      VM_RevokeBias revoke(&obj, (JavaThread*) THREAD);
      VMThread::execute(&revoke);
      return revoke.status_code();
    }
  }

  assert((heuristics == HR_BULK_REVOKE) ||
         (heuristics == HR_BULK_REBIAS), "?");
  // code 5: batch revocation and batch rebias logic
  VM_BulkRevokeBias bulk_revoke(&obj, (JavaThread*) THREAD,
                                (heuristics == HR_BULK_REBIAS),
                                attempt_rebias);
  VMThread::execute(&bulk_revoke);
  return bulk_revoke.status_code();
}
```

There is a lot of logic leading into this method, but we’ll analyze only the most common case: suppose the lock is already biased toward thread A, and thread B tries to acquire it.

If the current thread itself is the biased thread, revoke_bias is called directly to revoke its own bias. Otherwise the operation is pushed to the VM thread, which executes it once a safepoint is reached.

A few words about the VM thread: there is a dedicated VM thread in the JVM that continuously pulls requests from the VMOperationQueue, such as GC requests. Operations that require a safepoint (VM_Operation::evaluate_at_safepoint() returns true) must wait until all Java threads have entered the safepoint before executing. See this article for more on safepoints.

Let’s focus on the revoke_bias method. The first parameter is the lock object; the second and third indicate whether rebias is allowed and whether this is a batch operation, respectively; here both are false.

```cpp
static BiasedLocking::Condition revoke_bias(oop obj, bool allow_rebias, bool is_bulk, JavaThread* requesting_thread) {
  markOop mark = obj->mark();
  // if the object is not in biased mode, return NOT_BIASED directly
  if (!mark->has_bias_pattern()) {
    ...
    return BiasedLocking::NOT_BIASED;
  }

  uint age = mark->age();
  // build two mark words: one anonymously biased (101), one unlocked (001)
  markOop biased_prototype   = markOopDesc::biased_locking_prototype()->set_age(age);
  markOop unbiased_prototype = markOopDesc::prototype()->set_age(age);
  ...
  JavaThread* biased_thread = mark->biased_locker();
  if (biased_thread == NULL) {
    // Anonymously biased; calling the lock object's hashCode() method can lead here.
    // If rebias is not allowed, set the object's mark word to the unlocked state.
    if (!allow_rebias) {
      obj->set_mark(unbiased_prototype);
    }
    ...
    return BiasedLocking::BIAS_REVOKED;
  }

  // code 1: check whether the biased thread is still alive
  bool thread_is_alive = false;
  // the requesting thread is itself the biased thread
  if (requesting_thread == biased_thread) {
    thread_is_alive = true;
  } else {
    // walk all threads in the JVM; if found, the biased thread is still alive
    for (JavaThread* cur_thread = Threads::first(); cur_thread != NULL; cur_thread = cur_thread->next()) {
      if (cur_thread == biased_thread) {
        thread_is_alive = true;
        break;
      }
    }
  }

  // if the biased thread is no longer alive
  if (!thread_is_alive) {
    // set the mark word to anonymously biased if rebias is allowed, otherwise to unlocked
    if (allow_rebias) {
      obj->set_mark(biased_prototype);
    } else {
      obj->set_mark(unbiased_prototype);
    }
    ...
    return BiasedLocking::BIAS_REVOKED;
  }

  // the biased thread is alive: walk the Lock Records in its stack
  GrowableArray<MonitorInfo*>* cached_monitor_info = get_or_compute_monitor_info(biased_thread);
  BasicLock* highest_lock = NULL;
  for (int i = 0; i < cached_monitor_info->length(); i++) {
    MonitorInfo* mon_info = cached_monitor_info->at(i);
    // a matching Lock Record means the biased thread is still inside the synchronized block
    if (mon_info->owner() == obj) {
      ...
      // Upgrade to a lightweight lock by rewriting the biased thread's Lock Records directly.
      // To handle the reentrant case, the displaced header of each Lock Record is set to
      // NULL here; the first (highest) Lock Record is fixed up in the code below.
      markOop mark = markOopDesc::encode((BasicLock*) NULL);
      highest_lock = mon_info->lock();
      highest_lock->set_displaced_header(mark);
    } else {
      ...
    }
  }

  if (highest_lock != NULL) {
    // Set the first Lock Record's displaced header to the unlocked mark word and point
    // the object header at this Lock Record: the lock is now a lightweight lock held
    // by the originally biased thread.
    highest_lock->set_displaced_header(unbiased_prototype);
    obj->release_set_mark(markOopDesc::encode(highest_lock));
    ...
  } else {
    // the biased thread is no longer inside the synchronized block
    if (allow_rebias) {
      // set to the anonymously biased state
      obj->set_mark(biased_prototype);
    } else {
      // set to the unlocked state
      obj->set_mark(unbiased_prototype);
    }
  }

  return BiasedLocking::BIAS_REVOKED;
}
```

It is important to note that calling a lock object’s Object#hashCode or System.identityHashCode() method causes the biased lock (or lightweight lock) on that object to be revoked or upgraded. This is because in Java an object’s identity hash is generated when one of these methods is first called; it is stored in the mark word if the object is unlocked, and in the monitor if the lock is heavyweight. A biased mark word has no room to store it, so the lock must be upgraded. For details, see the section on hashCode()’s effect on biased locking in this article (note: that article’s description of biased locking contains some errors). I also asked its author some questions, which he kindly answered. Thank you!
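The storage conflict is easy to see by sketching the two 64-bit layouts. The shift values below are assumptions drawn from the markOop.hpp layout (hash at bit 8 in the unlocked state, thread pointer at bit 10 in the biased state); the sketch only illustrates that the two fields overlap:

```java
public final class MarkWordLayout {
    // Simplified 64-bit layouts:
    // unlocked: [unused:25 | hash:31 | unused:1 | age:4 | 0 | 01]
    // biased:   [thread:54 | epoch:2 | unused:1 | age:4 | 1 | 01]
    static final int  HASH_SHIFT   = 8;
    static final long HASH_MASK    = (1L << 31) - 1;
    static final int  THREAD_SHIFT = 10;

    static long unlockedWithHash(long hash) {
        return ((hash & HASH_MASK) << HASH_SHIFT) | 0b001;
    }

    static long biasedToThread(long threadId) {
        return (threadId << THREAD_SHIFT) | 0b101;
    }

    static long extractHash(long markWord) {
        return (markWord >>> HASH_SHIFT) & HASH_MASK;
    }

    public static void main(String[] args) {
        long m = unlockedWithHash(0x7ABCDEFL);
        System.out.println(extractHash(m) == 0x7ABCDEFL); // hash round-trips in the unlocked state

        long b = biasedToThread(0x42L);
        // The bits that would hold the hash are occupied by the thread pointer:
        // storing a hash would destroy the bias, hence the revocation on hashCode().
        System.out.println(extractHash(b) != 0);
    }
}
```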

The revoke_bias method’s logic, summarized:

  1. Check whether the biased thread is still alive. If not, revoke the bias directly. The JVM maintains a collection of all living threads and determines liveness by iterating over it.
  2. Check whether the biased thread is still inside the synchronized block; if not, revoke the bias. Recall the biased locking process: each time the synchronized block is entered (i.e., on monitorenter), a free Lock Record in the stack is taken and its obj field is pointed at the lock object; each exit (i.e., monitorexit) releases the most recent matching Lock Record. So whether a thread is still inside the synchronized block can be determined by walking the Lock Records in its stack.
  3. If the biased thread is still inside the synchronized block, upgrade to a lightweight lock: set the displaced mark word of every related Lock Record in the biased thread’s stack to NULL, then set the displaced mark word of the highest Lock Record (the one acquired first) to the unlocked state, and point the object header at that Lock Record. No CAS instruction is needed because all this happens at a safepoint. Once that is done, the lock is a lightweight lock held by the originally biased thread. If this doesn’t make sense, review the lightweight locking process from the previous article.
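The steps above can be condensed into a small decision table. This is a deliberately simplified model of revoke_bias (liveness plus a stack walk deciding the outcome), not the real implementation:

```java
import java.util.List;

public final class RevokeSketch {
    enum Outcome { REBIASABLE_ANON, UNLOCKED, LIGHTWEIGHT }

    // Hypothetical condensation of revoke_bias: the biased thread's liveness and its
    // Lock Records decide the resulting state of the lock object.
    static Outcome revoke(boolean biasedThreadAlive, List<Object> lockRecordsOfBiasedThread,
                          Object lockee, boolean allowRebias) {
        if (!biasedThreadAlive) {
            return allowRebias ? Outcome.REBIASABLE_ANON : Outcome.UNLOCKED;
        }
        // walk the biased thread's Lock Records: still inside the synchronized block?
        if (lockRecordsOfBiasedThread.contains(lockee)) {
            // rewrite its Lock Records, point the header at the highest one
            return Outcome.LIGHTWEIGHT;
        }
        return allowRebias ? Outcome.REBIASABLE_ANON : Outcome.UNLOCKED;
    }

    public static void main(String[] args) {
        Object lockee = new Object();
        System.out.println(revoke(false, List.of(), lockee, false));      // UNLOCKED
        System.out.println(revoke(true, List.of(lockee), lockee, false)); // LIGHTWEIGHT
        System.out.println(revoke(true, List.of(), lockee, true));        // REBIASABLE_ANON
    }
}
```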

Biased lock release

The biased lock release entry is in bytecodeInterpreter.cpp#1923:

```cpp
CASE(_monitorexit): {
  oop lockee = STACK_OBJECT(-1);
  CHECK_NULL(lockee);
  // derefing's lockee ought to provoke implicit null check
  // find our monitor slot
  BasicObjectLock* limit = istate->monitor_base();
  BasicObjectLock* most_recent = (BasicObjectLock*) istate->stack_base();
  // walk the Lock Records in the stack from low to high address
  while (most_recent != limit) {
    if ((most_recent)->obj() == lockee) {
      BasicLock* lock = most_recent->lock();
      markOop header = lock->displaced_header();
      // release the Lock Record
      most_recent->set_obj(NULL);
      // In biased mode, releasing the Lock Record is all that is needed.
      // Otherwise go through the lightweight/heavyweight release path.
      if (!lockee->mark()->has_bias_pattern()) {
        bool call_vm = UseHeavyMonitors;
        // header != NULL means this is not a reentrant exit
        if (header != NULL || call_vm) {
          // CAS the displaced mark word back into the object header
          if (call_vm ||
              Atomic::cmpxchg_ptr(header, lockee->mark_addr(), lock) != lock) {
            // CAS failed: restore the Lock Record and take the slow path
            most_recent->set_obj(lockee);
            CALL_VM(InterpreterRuntime::monitorexit(THREAD, most_recent), handle_exception);
          }
        }
      }
      // execute the next instruction
      UPDATE_PC_AND_TOS_AND_CONTINUE(1, -1);
    }
    // check the next Lock Record
    most_recent++;
  }
  // no matching Lock Record: throw IllegalMonitorStateException
  CALL_VM(InterpreterRuntime::throw_illegal_monitor_state_exception(THREAD), handle_exception);
  ShouldNotReachHere();
}
```

With the comments, the code above should be easy to follow. Releasing a biased lock is very simple: just release the corresponding Lock Record. A lightweight lock additionally needs to CAS the displaced mark word back into the object header; if that CAS fails, or the lock is a heavyweight lock, execution enters the InterpreterRuntime::monitorexit method. That method is covered in the article on lightweight and heavyweight locks.

Batch rebias and batch revocation

The background of batch rebias and batch revocation is covered in the previous article. The relevant implementation is in BiasedLocking::revoke_and_rebias:

```cpp
BiasedLocking::Condition BiasedLocking::revoke_and_rebias(Handle obj, bool attempt_rebias, TRAPS) {
  ...
  // code 1: update the revocation heuristics
  HeuristicsResult heuristics = update_heuristics(obj(), attempt_rebias);
  // (single-revocation logic elided)
  ...
  assert((heuristics == HR_BULK_REVOKE) ||
         (heuristics == HR_BULK_REBIAS), "?");
  // code 2: batch revocation and batch rebias logic
  VM_BulkRevokeBias bulk_revoke(&obj, (JavaThread*) THREAD,
                                (heuristics == HR_BULK_REBIAS),
                                attempt_rebias);
  VMThread::execute(&bulk_revoke);
  return bulk_revoke.status_code();
}
```

Each time a biased lock is revoked, the update_heuristics method records it. When the number of revocations for objects of a class reaches a threshold, the JVM decides that the class is either unsuitable for biased mode or needs its instances rebiased toward another thread; update_heuristics then returns HR_BULK_REVOKE or HR_BULK_REBIAS, triggering batch revocation or batch rebias. Let’s start with the update_heuristics method.

```cpp
static HeuristicsResult update_heuristics(oop o, bool allow_rebias) {
  markOop mark = o->mark();
  // return directly if the object is not biased
  if (!mark->has_bias_pattern()) {
    return HR_NOT_BIASED;
  }

  // the lock object's class
  Klass* k = o->klass();
  jlong cur_time = os::javaTimeMillis();
  jlong last_bulk_revocation_time = k->last_biased_lock_bulk_revocation_time();
  int revocation_count = k->biased_lock_revocation_count();
  // BiasedLockingBulkRebiasThreshold is the batch rebias threshold (default 20),
  // BiasedLockingBulkRevokeThreshold is the batch revocation threshold (default 40),
  // BiasedLockingDecayTime is the delay before a new round of batch rebias can start
  // after the last one (default 25000 ms). If, after a batch rebias, a long time
  // (>= BiasedLockingDecayTime) passes before the revocation counter exceeds the
  // threshold again, the counter is reset.
  if ((revocation_count >= BiasedLockingBulkRebiasThreshold) &&
      (revocation_count <  BiasedLockingBulkRevokeThreshold) &&
      (last_bulk_revocation_time != 0) &&
      (cur_time - last_bulk_revocation_time >= BiasedLockingDecayTime)) {
    // This is the first revocation we've seen in a while of an
    // object of this type since the last time we performed a bulk
    // rebiasing operation. The application is allocating objects in
    // bulk which are biased toward a thread and then handing them
    // off to another thread. We can cope with this allocation
    // pattern via the bulk rebiasing mechanism so we reset the
    // klass's revocation count rather than allow it to increase
    // monotonically. If we see the need to perform another bulk
    // rebias operation later, we will, and if subsequently we see
    // many more revocation operations in a short period of time we
    // will completely disable biasing for this type.
    k->set_biased_lock_revocation_count(0);
    revocation_count = 0;
  }

  // increment the revocation counter
  if (revocation_count <= BiasedLockingBulkRevokeThreshold) {
    revocation_count = k->atomic_incr_biased_lock_revocation_count();
  }

  // return HR_BULK_REVOKE if the batch revocation threshold is reached
  if (revocation_count == BiasedLockingBulkRevokeThreshold) {
    return HR_BULK_REVOKE;
  }
  // return HR_BULK_REBIAS if the batch rebias threshold is reached
  if (revocation_count == BiasedLockingBulkRebiasThreshold) {
    return HR_BULK_REBIAS;
  }
  // otherwise revoke the bias of this single object
  return HR_SINGLE_REVOKE;
}
```
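The counter-and-threshold dance is easier to see in a small simulation. The thresholds mirror the HotSpot defaults; note one simplification: in HotSpot the bulk-revocation timestamp is recorded by the bulk operation itself, while this sketch records it inline:

```java
public final class BulkHeuristics {
    // Default HotSpot thresholds (BiasedLockingBulkRebiasThreshold, etc.)
    static final int  REBIAS_THRESHOLD = 20;
    static final int  REVOKE_THRESHOLD = 40;
    static final long DECAY_MS = 25_000;

    enum Result { SINGLE_REVOKE, BULK_REBIAS, BULK_REVOKE }

    // Per-class counters, as kept on the Klass
    int revocationCount = 0;
    long lastBulkRevocationTime = 0;

    Result updateHeuristics(long now) {
        // decay: a quiet period after a batch rebias resets the counter
        if (revocationCount >= REBIAS_THRESHOLD && revocationCount < REVOKE_THRESHOLD
                && lastBulkRevocationTime != 0 && now - lastBulkRevocationTime >= DECAY_MS) {
            revocationCount = 0;
        }
        if (revocationCount <= REVOKE_THRESHOLD) {
            revocationCount++;
        }
        if (revocationCount == REVOKE_THRESHOLD) return Result.BULK_REVOKE;
        if (revocationCount == REBIAS_THRESHOLD) {
            lastBulkRevocationTime = now; // in HotSpot the bulk operation records this
            return Result.BULK_REBIAS;
        }
        return Result.SINGLE_REVOKE;
    }

    public static void main(String[] args) {
        BulkHeuristics klass = new BulkHeuristics();
        Result r = null;
        for (int i = 0; i < 20; i++) r = klass.updateHeuristics(1_000);
        System.out.println(r);  // 20th revocation for the class -> BULK_REBIAS
        for (int i = 0; i < 20; i++) r = klass.updateHeuristics(2_000);
        System.out.println(r);  // 40th -> BULK_REVOKE, bias disabled for the class
    }
}
```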

bulk_revoke_or_rebias_at_safepoint is then invoked by the VM thread at a safepoint. If bulk_rebias is true it performs a batch rebias; otherwise it performs a batch revocation. attempt_rebias_of_object indicates whether a rebias should be attempted on the lock object o itself, and is true here.

```cpp
static BiasedLocking::Condition bulk_revoke_or_rebias_at_safepoint(oop o,
                                                                   bool bulk_rebias,
                                                                   bool attempt_rebias_of_object,
                                                                   JavaThread* requesting_thread) {
  ...
  jlong cur_time = os::javaTimeMillis();
  o->klass()->set_last_biased_lock_bulk_revocation_time(cur_time);

  Klass* k_o = o->klass();
  Klass* klass = k_o;

  if (bulk_rebias) {
    // batch rebias logic
    if (klass->prototype_header()->has_bias_pattern()) {
      // code 1: increment the epoch in the class's prototype_header
      int prev_epoch = klass->prototype_header()->bias_epoch();
      klass->set_prototype_header(klass->prototype_header()->incr_bias_epoch());
      int cur_epoch = klass->prototype_header()->bias_epoch();

      // code 2: walk the stacks of all threads and update the epoch of every
      // currently locked instance of klass
      for (JavaThread* thr = Threads::first(); thr != NULL; thr = thr->next()) {
        GrowableArray<MonitorInfo*>* cached_monitor_info = get_or_compute_monitor_info(thr);
        for (int i = 0; i < cached_monitor_info->length(); i++) {
          MonitorInfo* mon_info = cached_monitor_info->at(i);
          oop owner = mon_info->owner();
          markOop mark = owner->mark();
          if ((owner->klass() == k_o) && mark->has_bias_pattern()) {
            // We might have encountered this object already in the case of recursive locking
            assert(mark->bias_epoch() == prev_epoch || mark->bias_epoch() == cur_epoch, "error in bias epoch adjustment");
            owner->set_mark(mark->set_bias_epoch(cur_epoch));
          }
        }
      }
    }
    // revoke the bias of the lock object o itself
    revoke_bias(o, attempt_rebias_of_object && klass->prototype_header()->has_bias_pattern(),
                true, requesting_thread);
  } else {
    ...
    // code 3: turn bias mode off in the class's prototype_header
    klass->set_prototype_header(markOopDesc::prototype());

    // code 4: walk the stacks of all threads and revoke the bias of every
    // currently locked instance of klass
    for (JavaThread* thr = Threads::first(); thr != NULL; thr = thr->next()) {
      GrowableArray<MonitorInfo*>* cached_monitor_info = get_or_compute_monitor_info(thr);
      for (int i = 0; i < cached_monitor_info->length(); i++) {
        MonitorInfo* mon_info = cached_monitor_info->at(i);
        oop owner = mon_info->owner();
        markOop mark = owner->mark();
        if ((owner->klass() == k_o) && mark->has_bias_pattern()) {
          revoke_bias(owner, false, true, requesting_thread);
        }
      }
    }
    revoke_bias(o, false, true, requesting_thread);
  }
  ...
  BiasedLocking::Condition status_code = BiasedLocking::BIAS_REVOKED;
  if (attempt_rebias_of_object &&
      o->mark()->has_bias_pattern() &&
      klass->prototype_header()->has_bias_pattern()) {
    // construct a mark word biased toward the requesting thread, with the new epoch
    markOop new_mark = markOopDesc::encode(requesting_thread, o->mark()->age(),
                                           klass->prototype_header()->bias_epoch());
    // update the lock object's mark word
    o->set_mark(new_mark);
    status_code = BiasedLocking::BIAS_REVOKED_AND_REBIASED;
    ...
  }
  ...
  return status_code;
}
```

The method splits into two paths: batch rebias and batch revocation.

First, the batch rebias, which has two steps:

Code 1 increments the epoch in the class’s prototype_header by one. Afterwards, when an existing instance of the class is locked, its epoch is seen as expired and a rebias is attempted, as described in the biased lock acquisition section.

Code 2 handles the lock objects currently in use: it walks the stacks of all living threads, finds every biased lock object that is currently locked, and updates its epoch to the new value. This means that locks currently in use will not be rebiased, which would otherwise compromise the lock’s thread safety.

After a batch rebias, every lock object that was in the locked state carries the updated epoch. For the remaining instances, a later biased-lock acquisition that finds the object’s epoch different from the class’s can simply rebias with a CAS, without waiting for a safepoint to revoke the bias first, which improves performance.

The batch revocation logic is as follows:

Code 3 turns off the bias flag in the class’s prototype_header; existing instances of the class are then upgraded to lightweight locks when they next acquire the lock, and newly allocated instances of the class get a mark word in the unlocked mode.

Code 4 handles the lock objects currently in use by walking through the stack of all living threads, finding all biased lock objects in use, and then undoing biased locks.