I'm sure you're familiar with synchronized, but since Java 6 the JDK has drastically changed how heavy its locks are, turning a purely heavyweight lock into one that scales up from much lighter forms. Today we'll take a closer look at synchronized.

First we need to look at the essence of synchronized lock acquisition. Consider the following code and its bytecode.

public class Demo {
    private static int count;

    public static void main(String[] args) {
        Object object = new Object();
        synchronized (object) {
            count++;
        }
    }
}

The bytecode is as follows

 0 new #2 <java/lang/Object>
 3 dup
 4 invokespecial #1 <java/lang/Object.<init> : ()V>
 7 astore_1
 8 aload_1
 9 dup
10 astore_2
   // Get monitor, enter synchronized
11 monitorenter
12 getstatic #7 <Demo.count : I>
15 iconst_1
16 iadd
17 putstatic #7 <Demo.count : I>
20 aload_2
21 monitorexit
   // Release the monitor on the normal path: the lock is released here

   // The normal path jumps over the exception handler below
22 goto 30 (+8)
   // Exception handler: if anything is thrown inside the block, the monitor
   // is released again before rethrowing, so the lock can never be leaked (no deadlock)
25 astore_3
26 aload_2
27 monitorexit
28 aload_3
29 athrow
30 return
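If you want to reproduce this output yourself, compile the class and dump the bytecode with javap (assuming a standard JDK on the PATH; a GUI tool such as jclasslib or the IDEA bytecode viewer shows essentially the same instructions):

javac Demo.java
javap -c Demo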

The essence of synchronized lock acquisition is acquiring a monitor. The monitor is not stored inside the lock object itself; it is provided by the JVM, and its address is written into the object's mark word. In other words, every object used as a lock gets its own Monitor.

The mark word of obj is shown below.

Let’s look at the picture below

Thread1 uses synchronized to acquire obj's lock, at which point obj's mark word points to the Monitor's address. If Thread2 and Thread3 then arrive and find that obj's mark word already holds such an address, they follow that address to the Monitor.

There they find that owner points to Thread1, which proves that Thread1 currently holds the lock.

When Thread1 releases the lock, the waiting threads are notified that they can compete for it again, and whichever thread wins changes owner to point to itself.
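To make the owner idea concrete, here is a minimal sketch of my own (not from the original tutorial): two threads synchronize on the same object, so while one thread owns the monitor the other is reported as BLOCKED, and it only gets in once the owner lets go.

public class MonitorOwnerDemo {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();

        Thread t1 = new Thread(() -> {
            synchronized (lock) {               // t1 becomes the monitor's owner
                sleepQuietly(2000);             // hold the lock for a while
            }
        }, "t1");

        Thread t2 = new Thread(() -> {
            synchronized (lock) {               // t2 must wait until t1 releases the monitor
                System.out.println("t2 finally owns the lock");
            }
        }, "t2");

        t1.start();
        Thread.sleep(100);                      // make sure t1 grabs the lock first
        t2.start();
        Thread.sleep(100);
        System.out.println("t2 state while t1 owns the lock: " + t2.getState()); // BLOCKED

        t1.join();
        t2.join();
    }

    private static void sleepQuietly(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }
}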

Lock upgrading

How a thread acquires the lock

The description above is the simplified version, and it leaves one question unanswered: how exactly does a thread grab the lock? To grab a lock, the thread has to leave its own trace in the object.

In the figure above, the thread and obj do not appear to be directly related, but in fact Thread1 also keeps information about the object.

See the image below (from the Dark Horse tutorial)

We say that Thread0 acquires the lock of an object. What does that statement actually mean?

Thread0's current stack frame contains a structure called a Lock Record, which has a slot for the object's mark word and a field recording the object's address.

It then tries, with CAS, to swap the contents of the object's Mark Word with the address of its lock record (whose low bits mark it as a lightweight lock).

If the CAS succeeds, Thread0's lock record now stores the object's original mark word value,

and the object's mark word now stores the address of the lock record inside Thread0, with the lock flag bits set to 00 (lightweight lock).

If the same thread reenters the lock, it simply pushes another lock record onto Thread0's stack; each time it exits one reentrant level, one lock record is popped off.
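The lock records themselves are invisible from Java code, but reentrancy is easy to observe. A minimal sketch (nothing JVM-specific, just nested synchronized blocks on the same object):

public class ReentrantDemo {
    private static final Object lock = new Object();

    public static void main(String[] args) {
        outer();
    }

    static void outer() {
        synchronized (lock) {          // first acquisition: one lock record is pushed
            System.out.println("outer holds the lock");
            inner();                   // re-enter the same lock without blocking
        }                              // last exit: the lock is actually released here
    }

    static void inner() {
        synchronized (lock) {          // reentry: another lock record, no contention
            System.out.println("inner re-entered the same lock");
        }                              // exit of the reentrant level: pop one record
    }
}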

Now, if another thread comes along and attempts the same CAS, it finds that the lock bits in the mark word are already 00 and the stored address is not its own. It knows it has been beaten to the punch, so it has to wait.

Having to wait means there is real competition, and that calls for a lock upgrade. So synchronized starts out with the lightweight lock described above by default, and only upgrades the lock in this contended case.

After the lock has been upgraded, when Thread0 comes back to unlock, it finds that the lightweight-lock value it put in the mark word has been changed. That proves the lock has been upgraded, so Thread0 switches to the heavyweight unlock procedure.

OK, now let's see how the lock upgrade actually happens.

Upgrade from lightweight lock to heavyweight lock

When a thread tries to add the lightweight lock, the CAS may fail, typically because another thread has already lightweight-locked the object. The whole point of the lightweight lock is to handle multiple threads with few real conflicts, so once a conflict actually occurs, the lightweight lock is no longer up to the level of concurrency.

This calls for a lock upgrade, also known as lock inflation, into a heavyweight lock.

Steps:

  1. Thread-A first adds a lightweight lock to the object. Thread-B then arrives and finds the object is already lightweight-locked, so it asks the JVM for a Monitor, changes the mark word to the Monitor's address so that the object now points at the heavyweight lock, and then enters the Monitor's blocking queue.
  2. When Thread-A finishes its task, it tries to restore the original value into the mark word with CAS. Since the mark word has been modified, the CAS fails, so Thread-A takes the heavyweight path instead: it follows the address to the Monitor, sets owner to null, and wakes up the threads blocked there (a small sketch of watching this inflation follows these steps).
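If you want to watch the inflation happen, a rough sketch like the one below can help. It assumes the jol-core library (org.openjdk.jol) is on the classpath and a JDK whose header layout matches the diagrams above (e.g. JDK 8); the exact bits you see will vary with the JVM version and flags.

import org.openjdk.jol.info.ClassLayout;

public class InflationDemo {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();
        System.out.println("before locking:\n" + ClassLayout.parseInstance(lock).toPrintable());

        synchronized (lock) {
            // no contention yet: expect a biased (101) or lightweight (00) header
            System.out.println("held without contention:\n" + ClassLayout.parseInstance(lock).toPrintable());

            Thread t = new Thread(() -> {
                synchronized (lock) { }          // this thread has to contend for the lock
            }, "contender");
            t.start();
            Thread.sleep(200);                   // give the contender time to give up and block

            // contention forces inflation: expect a heavyweight header (lock bits 10)
            System.out.println("held with contention:\n" + ClassLayout.parseInstance(lock).toPrintable());
        }
    }
}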

Blocking is a time-consuming operation, and it is quite common for one thread to block only for another thread to release the lock a moment later.

For that reason Java has spinning, a lightweight form of "blocking": the thread simply loops in place for a while.

However, spinning still executes instructions, so it consumes CPU; on a multi-core machine a bit of spinning has little overall impact on the CPU.

The number of spins is adaptive: if a few spins succeeded last time, a similar number of spins will be tried this time; if a lot of spinning failed last time, the thread may not spin at all this time.

Spinning has a threshold; beyond that number of attempts the thread is put into the blocking queue instead.
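HotSpot's adaptive spinning happens inside the JVM and is not something you can tune per lock from Java code, but the idea of "spin a bounded number of times, then go to the blocking queue" is easy to sketch. The class below is only a conceptual illustration built on AtomicBoolean and LockSupport; it is not how synchronized is implemented.

import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

// Conceptual spin-then-block lock: spin a bounded number of times, then park the thread.
public class SpinThenBlockLock {
    private static final int SPIN_LIMIT = 1000;            // spin threshold before giving up
    private final AtomicBoolean locked = new AtomicBoolean(false);
    private final ConcurrentLinkedQueue<Thread> waiters = new ConcurrentLinkedQueue<>();

    public void lock() {
        // 1. Spin for a while, hoping the owner releases the lock very soon.
        for (int i = 0; i < SPIN_LIMIT; i++) {
            if (locked.compareAndSet(false, true)) {
                return;                                      // got it while spinning
            }
        }
        // 2. Spinning failed: enqueue ourselves and block until woken up.
        Thread current = Thread.currentThread();
        waiters.add(current);
        while (!locked.compareAndSet(false, true)) {
            LockSupport.park(this);                          // "enter the blocking queue"
        }
        waiters.remove(current);
    }

    public void unlock() {
        locked.set(false);
        Thread next = waiters.peek();
        if (next != null) {
            LockSupport.unpark(next);                        // let a blocked thread retry
        }
    }
}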

Biased locking

If thread A has already added a lightweight lock and there is no competition from any other thread, it still has to perform a CAS every time it re-acquires the lock, which is wasteful.

It is a bit like unlocking your phone: the heavyweight lock is like a password, very secure but more hassle, while the lightweight lock is like a fingerprint, a little less secure but fast.

But if the screen going dark means you have to press your fingerprint again every single time, even that gets annoying.

With biased locking, the thread ID is written into the object's MarkWord. If that same thread acquires the lock every time, no CAS is needed at all; only when some other thread shows up does a CAS have to be applied again.

So the mark word is very important: almost everything related to locking lives in it, along with the GC generational age. Take another look at the diagram to reinforce the memory.

The core of the biased lock is the thread ID field in the figure above, a 23-bit slot. Whichever thread's ID sits there is the thread the lock is biased toward, provided the biased-mode bit is set to 1, with the final bits being the lock flag. So when the last three bits of the mark word are 101 the object is in the biased state, and 001 means the non-biased state.
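A practical note of my own, not from the original article: whether you can even see the biased state depends on your JVM. In HotSpot, biased locking is controlled by -XX:+UseBiasedLocking (on by default from JDK 6 up to JDK 14, deprecated and disabled by default since JDK 15 under JEP 374), and by default there is a startup delay of a few seconds before new objects become biasable. For experiments you can remove that delay with something like:

java -XX:BiasedLockingStartupDelay=0 Demo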

Strategy

After the lock has been biased to thread A, if thread B tries to acquire it while thread A is still using it, that proves there is at least mild contention, so the lock is upgraded to a lightweight lock.

If A has already released the lock when thread B acquires it, there is no contention, and the JVM simply changes the biased thread ID to thread B to avoid an unnecessary upgrade.

However,

If thread A and thread B keep taking turns with the lock, the thread ID has to change every time and efficiency drops; if a whole bunch of threads take turns on the lock, the waste is even greater, and that is a serious problem.

When the bias has to be revoked more than 20 times, the JVM decides the bias was misplaced; if multiple threads are involved, it performs a batch rebias. This part is hard to grasp in the abstract, so let me use an example.

Example

Say we have two threads. Thread 1 creates 30 objects, locks each of them, and prints its current mark word.

Then thread 2 takes those same 30 objects, locks each of them in turn, and prints the mark word as it goes.

To keep things serialized, the two threads are coordinated with synchronized (plus wait/notify) so that thread 1 finishes completely before thread 2 starts.
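A rough reconstruction of this experiment could look like the sketch below. It assumes jol-core for printing the object header, a JDK that still has biased locking (8 through 14), and the -XX:BiasedLockingStartupDelay=0 flag mentioned earlier; for simplicity I order the threads with join where the original coordinates them with synchronized/wait-notify. The printed header format will differ from the dump that follows, but the lock flag bits can be read off it in the same way.

import org.openjdk.jol.info.ClassLayout;
import java.util.ArrayList;
import java.util.List;

// Run with a JDK that still has biased locking (8-14) and -XX:BiasedLockingStartupDelay=0
public class BatchRebiasDemo {
    // A dedicated lock class: the rebias/revocation bookkeeping is kept per class,
    // so using our own type keeps the experiment isolated from other java.lang.Object locks
    static class Dog { }

    public static void main(String[] args) throws InterruptedException {
        List<Dog> locks = new ArrayList<>();

        Thread t1 = new Thread(() -> {
            for (int i = 0; i < 30; i++) {
                Dog d = new Dog();
                locks.add(d);
                synchronized (d) {                       // d becomes biased to t1
                    System.out.println("[t1] - " + i + "\n" + header(d));
                }
            }
        }, "t1");
        t1.start();
        t1.join();                                       // t1 must finish completely before t2 starts

        Thread t2 = new Thread(() -> {
            for (int i = 0; i < 30; i++) {
                Dog d = locks.get(i);
                System.out.println("[t2] - " + i + " before\n" + header(d));
                synchronized (d) {
                    System.out.println("[t2] - " + i + " holding\n" + header(d));
                }
                System.out.println("[t2] - " + i + " after\n" + header(d));
            }
        }, "t2");
        t2.start();
        t2.join();
    }

    private static String header(Object o) {
        return ClassLayout.parseInstance(o).toPrintable();
    }
}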

Let’s first look at the printing of thread 1

[t1] - 0  00000000 00011111 11110011 11100000 00000101
[t1] - 1  00000000 00011111 11110011 11100000 00000101
...
[t1] - 28 00000000 00011111 11110011 11100000 00000101
[t1] - 29 00000000 00011111 11110011 11100000 00000101

So let’s read each of these lines, as shown here

Of the last 32 bits, the first 23 are the thread ID, and the last three are the bias bit and the lock flag bits.

Now let's look at what thread 2 prints. For each object we print three lines: before the lock is taken, while the lock is held, and after the lock is released. Obviously, before the lock is taken the bits must end in 101, which is the biased lock that thread 1 added.

// Note the last three digits
[t2] - 0  00000000 00011111 11110011 11100000 00000101
[t2] - 0  00000000 00100000 01011000 11110111 00000000
[t2] - 0  00000000 00000000 00000000 00000000 00000001
...
[t2] - 18 00000000 00011111 11110011 11100000 00000101
[t2] - 18 00000000 00100000 01011000 11110111 00000000
[t2] - 18 00000000 00000000 00000000 00000000 00000001
// For objects 0 to 18 the last three digits are 101, 000 and 001 respectively
// From object 19 onward the last three digits stay 101 and the thread ID changes to thread 2's
[t2] - 19 00000000 00011111 11110011 11100000 00000101
[t2] - 19 00000000 00011111 11110011 11110001 00000101
[t2] - 19 00000000 00011111 11110011 11110001 00000101

[t2] - 20 00000000 00011111 11110011 11100000 00000101
[t2] - 20 00000000 00011111 11110011 11110001 00000101
[t2] - 20 00000000 00011111 11110011 11110001 00000101
...

As you can see, for objects 0 to 18 the three printed lines show:

  • Before the lock is taken: 101, with thread ID 00011111 11110011 11100000, meaning the object is still biased to thread 1
  • While the lock is held: 000, meaning it has been upgraded to a lightweight lock
  • After the lock is released: 001, meaning the bias has been revoked and the object is back to the normal, non-biased state

From the 19th object onward,

  • The thread ID becomes 00011111 11110011 11110001, which is thread 2's ID
  • The last three bits stay 101, both while the lock is held and after it is released

In other words, the lock is now biased to thread 2, and all subsequent objects are rebiased to thread 2 as well. This is called batch rebias.

Once the bias has been revoked, what comes after that is the upgrade to the heavyweight lock, where execution is simply serial.
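A note of my own: as far as I know, in HotSpot these behaviours are governed by two tunables, -XX:BiasedLockingBulkRebiasThreshold (default 20), which controls when the objects of a class are batch-rebiased, and -XX:BiasedLockingBulkRevokeThreshold (default 40), which controls when biasing is given up for the class entirely; treat the exact names and defaults as version-dependent.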

Finally

In this article, by digging into the structure of the thread's stack frame and the mark word, we have walked through the whole process of how a synchronized lock is upgraded as thread contention grows from weak to strong. The JDK team found that in most cases thread contention is minimal, and making synchronized always pay the heavyweight price would hurt performance, so defaulting to the lighter locks and upgrading only when needed was the best option.

References

Dark Horse Programmers: Learn Java concurrent programming in depth