preface

In an interview I was once asked how synchronized is implemented under the hood. My answer was shallow, and the interviewer was not satisfied with it, so I decided it was time to sum this up and gnaw through this hard bone. I have also organized my notes into a complete PDF of Java interview material.

Synchronized usage scenarios

We all know that synchronized can be used in methods and blocks of code, so what’s the difference?

Synchronized is used in methods

On a static method, synchronized locks the class object.

```java
public class SynchronizedTest {
    /**
     * synchronized static method
     */
    public synchronized static void test1() {
        System.out.println("I am test1 method");
    }
}
```

On an instance method, synchronized locks the instance object.

```java
public class SynchronizedTest {
    /**
     * synchronized instance method
     * @return a name
     */
    public synchronized String syncOnMethod() {
        return "a developer named Jimoer";
    }
}
```

Synchronized is used in code blocks

When a synchronized block is used on the current instance (this), it locks that instance. That is, while buildName is executing, the whole object is locked, and the lock is released only when buildName finishes.

```java
public class SynchronizedTest {

    private String name;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public void buildName(String firstName) {
        synchronized (this) {
            this.setName(firstName + this.getName());
        }
    }
}
```

When a synchronized block is used on the class object, it locks the class itself.

```java
public class SynchronizedTest {

    private static String myName = "Jimoer";

    public static void buildName(String firstName) {
        synchronized (SynchronizedTest.class) {
            System.out.println(firstName + myName);
        }
    }
}
```

A synchronized block can also be used on any instance object, in which case it locks that specified instance.

```java
public class SynchronizedTest {

    private String lastName;

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public void buildName(String firstName) {
        synchronized (lastName) {
            System.out.println(firstName + lastName);
        }
    }
}
```

That covers how synchronized is used. In everyday work it is enough to use it correctly and to know the scope of the lock in each scenario. But interviews tend to ask about the underlying principle, and when something goes wrong in a program, knowing the principle is also the basis for locating the problem quickly.

The principle of synchronized

Let’s take a look at synchronized’s low-level implementation.

For example, the following code contains a block of synchronized code and a synchronized method.

public class SynchronizedTest { private static String myName = "Jimoer"; public static void main(String[] args) { synchronized (myName){ System.out.println(myName); }} /** * synchronized public static void test1(){system.out.println ("I am test1 method"); }}Copy the code

After compiling it, I decompiled the generated class file and looked at its contents.

```
javap -p -v -c SynchronizedTest.class
```

The decompiled bytecode is fairly long, so I have kept only the key parts for analysis.
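For the main method, which contains the synchronized block, the relevant part of the javap output looks roughly like this (a reconstructed sketch, not the exact output; constant-pool indices and offsets will vary):

```
public static void main(java.lang.String[]);
  Code:
     0: getstatic     #2   // Field myName:Ljava/lang/String;
     3: dup
     4: astore_1
     5: monitorenter                     // enter the monitor of the lock object
     6: getstatic     #3   // Field java/lang/System.out:Ljava/io/PrintStream;
     9: getstatic     #2   // Field myName:Ljava/lang/String;
    12: invokevirtual #4   // Method java/io/PrintStream.println:(Ljava/lang/String;)V
    15: aload_1
    16: monitorexit                      // normal exit path
    17: goto          25
    20: astore_2
    21: aload_1
    22: monitorexit                      // exceptional exit path, so the lock is always released
    23: aload_2
    24: athrow
    25: return
```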

As the bytecode above shows, when compiled by javac the synchronized keyword produces a monitorenter instruction at the start of the synchronized block and monitorexit instructions at its exits. According to the Java Virtual Machine Specification:

  • When monitorenter executes, it first attempts to acquire the lock on an object (acquiring the lock is actually acquiring ownership of a Monitor object).

  • If the object is not locked, or the current thread already holds the lock on that object, the lock counter is incremented by one (a small reentrancy example follows this list).

  • When monitorexit executes, the lock counter is decremented by one. Once the counter reaches zero, the lock is released.

  • If acquiring the object lock fails, the current thread blocks and waits until the lock is released by the thread holding it.
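A minimal sketch of the reentrancy these counter rules allow (the class and method names here are my own, not from the article): the same thread can acquire a monitor it already owns, and the lock is released only when the count falls back to zero.

```java
public class ReentrantDemo {

    private final Object lock = new Object();

    public void outer() {
        synchronized (lock) {   // monitorenter: counter 0 -> 1
            inner();            // same thread re-enters the same monitor
        }                       // monitorexit: counter 1 -> 0, lock released
    }

    public void inner() {
        synchronized (lock) {   // counter 1 -> 2, no blocking
            System.out.println("re-entered the lock I already hold");
        }                       // monitorexit: counter 2 -> 1
    }

    public static void main(String[] args) {
        new ReentrantDemo().outer();
    }
}
```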

Synchronized methods

The relevant portion of the decompiled bytecode for the synchronized method test1 looks like this:
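Approximately (again a reconstructed sketch, not the exact javap output), the method section shows the ACC_SYNCHRONIZED flag:

```
public static synchronized void test1();
  descriptor: ()V
  flags: ACC_PUBLIC, ACC_STATIC, ACC_SYNCHRONIZED
  Code:
     0: getstatic     #3   // Field java/lang/System.out:Ljava/io/PrintStream;
     3: ldc           #5   // String I am test1 method
     5: invokevirtual #4   // Method java/io/PrintStream.println:(Ljava/lang/String;)V
     8: return
```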

Notice the ACC_SYNCHRONIZED flag. There are no explicit monitorenter and monitorexit instructions here; instead, when a thread invokes the method it checks for this flag, and if it is set the JVM implicitly acquires the monitor before executing the method body and releases it afterwards.

The monitor object

As mentioned above, acquiring an object lock is really acquiring ownership of a monitor object. The same thread can acquire the monitor repeatedly, and each time it does, the lock count increases by one; the lock is released only when, on unlocking, the count falls back to zero. The monitor object itself is not visible in the decompiled Java file; it is associated with the object through the object header.

Object header

In the HotSpot virtual machine, an object stored in the heap is laid out in three parts: the object header (Header), instance data (Instance Data), and alignment padding (Padding).

  • The instance data stores the actual valid data of the object, which contains various types of field content, whether its own or inherited from its parent class.

  • The alignment padding part is not necessarily present; it exists only to pad out the size. The VM’s automatic memory management requires that an object’s size be a multiple of 8 bytes, so if the header plus instance data does not add up to a multiple of 8 bytes, padding fills the remainder.

  • The object header contains two kinds of information. The first is the object’s own runtime data, including its hash code, GC generational age, lock status flags, the lock held by a thread, the biased thread ID, and so on; this part is officially called the “Mark Word”. The second is a type pointer, a pointer to the object’s class metadata, which the virtual machine uses to determine which class the object is an instance of. (A small sketch after this list shows how to inspect this layout.)
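One way to see the header, instance data, and padding of a real object is OpenJDK’s JOL (Java Object Layout) tool; this sketch assumes the jol-core dependency is on the classpath.

```java
import org.openjdk.jol.info.ClassLayout;

public class LayoutDemo {

    // a single int field: 4 bytes of instance data
    private int value;

    public static void main(String[] args) {
        // prints the mark word, class pointer, field offsets and any alignment padding
        System.out.println(ClassLayout.parseInstance(new LayoutDemo()).toPrintable());
    }
}
```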

Back to our monitor object: it is implemented in C++ in the virtual machine’s objectMonitor.hpp file, and its data structure looks like this.

```cpp
ObjectMonitor() {
    _header       = NULL;
    _count        = 0;
    _waiters      = 0,
    _recursions   = 0;     // thread reentrancy count
    _object       = NULL;  // the object this monitor is associated with
    _owner        = NULL;  // the thread that currently owns the lock
    _WaitSet      = NULL;  // list of threads in the wait state
    _WaitSetLock  = 0;
    _Responsible  = NULL;
    _succ         = NULL;
    _cxq          = NULL;
    FreeNext      = NULL;
    _EntryList    = NULL;  // list of threads blocked waiting for the lock
    _SpinFreq     = 0;
    _SpinClock    = 0;
    OwnerIsThread = 0;
    _previous_owner_tid = 0;
}
```
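To connect a couple of those fields to Java code, here is a minimal wait/notify sketch (my own example, not from the article): a thread that calls wait() releases the monitor and sits in _WaitSet; notify() moves it towards _EntryList, where it competes to re-acquire the lock.

```java
public class WaitNotifyDemo {

    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {            // owns the monitor
                try {
                    lock.wait();             // releases the monitor and joins _WaitSet
                } catch (InterruptedException ignored) {
                }
                System.out.println("woken up and re-acquired the monitor");
            }
        });
        waiter.start();

        Thread.sleep(100);                   // crude way to let the waiter reach wait() first
        synchronized (lock) {
            lock.notify();                   // moves the waiter from _WaitSet towards _EntryList
        }
        waiter.join();
    }
}
```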

Those who want to learn more about the Monitor object can go to the Java Virtual Machine source code.

Heavyweight lock

In mainstream Java virtual machine implementations, Java threads are mapped to the operating system’s native kernel threads. Blocking or waking a thread therefore requires help from the operating system, which means switching from user mode to kernel mode, and that transition is expensive. So the call path through ObjectMonitor and the operating system machinery behind it make blocking and waking threads costly. This is why, before JDK 1.6, synchronized was referred to as a heavyweight lock.

Slimming down the heavyweight lock

Efficient concurrency was an important upgrade from JDK 5 to JDK 6: the virtual machine development team spent a great deal of effort implementing various lock optimization techniques to reduce the weight of the heavyweight lock. After these optimizations, the overall locking process of synchronized proceeds roughly through the stages described below.

It should be noted that the lock upgrade process is not reversible.

Biased locking

When introducing the object header above, I mentioned that one of the things it can contain is a biased thread ID. The idea is this: when a thread acquires the lock, the lock count in the mark is incremented by one and the thread’s ID is stored in the lock object’s header, a step performed with a CAS operation. From then on, every time that same thread enters the lock, no further CAS is needed: after the first successful CAS, the thread holding the biased lock simply increments the count and does nothing else. When a different thread comes along, its CAS fails and it does not get the lock.

Biased locking is enabled by default since JDK 1.6 and can be disabled with -XX:-UseBiasedLocking.

Biased locking can improve the performance of programs with synchronization but no contention, but if most locks are always accessed by multiple different threads, biased locking is unnecessary.

Lightweight lock

Lightweight locks also work through the first part of the object header, the Mark Word.

  • When the code is about to enter the synchronized block, if the synchronization object is not locked, the virtual machine first creates a space called a Lock Record in the current thread’s stack frame, used to store a copy of the lock object’s current Mark Word.

  • The JVM then uses the CAS operation to try to update the object’s Mark Word to a pointer to the Lock Record. If the update succeeds, the thread acquires the lock and performs subsequent synchronization operations.

  • If the update fails, the lock object has already been taken by another thread; the lightweight lock is no longer valid and must be inflated into a heavyweight lock, and the lock flag in the object’s Mark Word is changed to that of a heavyweight lock (a small contention sketch follows this list).
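A minimal sketch (my own example) of the situation described above: two threads repeatedly competing for the same lock, which is exactly the case where a lightweight lock’s CAS fails and the lock has to be inflated.

```java
public class ContentionDemo {

    private static final Object lock = new Object();
    private static long counter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                synchronized (lock) {   // contended by both threads
                    counter++;
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter);    // always 2000000, whichever lock state the JVM ends up in
    }
}
```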

Spin locks

When a lightweight lock cannot be acquired, the lock should be upgraded to a heavyweight lock. But heavyweight locks, as described above, are expensive, and the JVM development team observed that in many applications the locked state of shared data lasts only a very short time, too short to be worth suspending and resuming threads. So they adopted a strategy: when a thread requests a lock that is already held, instead of giving up its processor time, the thread that failed to get the lock is told to “wait a moment” and simply executes a busy loop (it spins); this is the spin lock. Spin locking was introduced in JDK 1.4.2 and is disabled by default; it can be enabled with the -XX:+UseSpinning parameter. The default number of spins is 10, and it can be changed with -XX:PreBlockSpin.

The number of spins, whether user-specified or the default, is the same for every lock in the JVM. JDK 6 introduced adaptive spinning, which bases the decision on the previous spin time for the same lock and the state of the lock’s owner. If spinning recently succeeded in acquiring the lock on the same object, the next spin is considered likely to succeed and the allowed number of spins is increased. Conversely, if spinning rarely succeeds for a given lock, the spin step may be skipped entirely in the future.
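The JVM does this internally, but the idea of spinning can be sketched in plain Java (my own illustration, not the JVM’s implementation): instead of blocking immediately, the waiter retries a CAS in a short busy loop.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLockSketch {

    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // busy-wait (spin) until the CAS succeeds instead of suspending the thread
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();   // JDK 9+ hint that we are in a spin loop
        }
    }

    public void unlock() {
        locked.set(false);
    }
}
```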

If the lightweight lock fails, the lock is upgraded to spinning, and if spinning also fails, it is upgraded to a heavyweight lock.

conclusion

This slimming down of synchronized is often referred to as lock inflation or lock escalation. The main steps are:

  • At first a biased lock is used, which removes the locking overhead in scenarios where there is synchronization but no contention.

  • When contention appears, the Mark Word in the object header is used to implement a lightweight lock; if contention continues under the lightweight lock, the lock keeps upgrading.

  • Next comes spinning; if the maximum spin count is reached, the lock is upgraded to a heavyweight lock, and all threads that fail to acquire it block and wait.