Preface
In Java interviews, the most commonly tested topic is multithreading, and no discussion of multithreading can avoid a single Java keyword: synchronized. We all know that this keyword ensures that only one thread at a time can execute a method or block of code, guaranteeing thread safety. In this article, we lift the veil on synchronized.
Thread-safety implementations
Before getting into the details of synchronized, let’s look at the different approaches to thread safety and the theoretical basis for how synchronized achieves it. There are three main approaches: mutually exclusive synchronization (blocking synchronization), non-blocking synchronization, and thread-safe solutions that do not require synchronization at all.
- Mutual exclusion & synchronization
Mutually exclusive synchronization ensures that shared data is used by only one thread (or a limited number, when using a semaphore) at a time while multiple threads access it concurrently. Mutual exclusion is a means of achieving synchronization; critical sections, mutexes, and semaphores are the main ways to implement mutual exclusion. In the phrase “mutual exclusion and synchronization,” mutual exclusion is the cause and synchronization is the effect: mutual exclusion is the method, synchronization is the goal.
In Java, the most basic mutually exclusive synchronization method is synchronized. For details about how to implement mutually exclusive synchronization, see below.
BTW, in addition to synchronized, there is ReentrantLock in the java.util.concurrent.locks package, which is used in almost the same way. The difference is that synchronized is native syntax, whereas ReentrantLock is an API-level mutex provided by the JDK.
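As a minimal sketch of the API-level alternative (the class and method names here are illustrative, not from the original article):

```java
import java.util.concurrent.locks.ReentrantLock;

// A counter guarded by ReentrantLock. Compare with a synchronized method,
// where the JVM would acquire and release the lock implicitly.
public class ReentrantLockCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();       // explicit acquire
        try {
            count++;
        } finally {
            lock.unlock(); // explicit release, always in finally
        }
    }

    public int getCount() {
        return count;
    }
}
```

The try/finally is the price of the explicit API: unlike synchronized, forgetting to unlock leaks the lock.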
- Non-blocking synchronization
Mutually exclusive synchronization achieves thread safety mainly by blocking threads, so it is also called blocking synchronization. It takes a pessimistic view: unless you perform the right kind of synchronization (locking, for example), you are bound to have a problem, regardless of whether there is actual contention for the shared data (hence “pessimistic locking”).
With the development of hardware instruction sets, we have another option: perform the operation first, and if no other thread is competing, the operation succeeds. If another thread is contending and a conflict occurs, take a compensating measure (the most common is to retry until success). Many implementations of this optimistic concurrency strategy do not require threads to be suspended, so this type of synchronization is called non-blocking synchronization.
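A minimal sketch of this optimistic, retry-on-conflict strategy using the CAS support the JDK exposes via AtomicInteger.compareAndSet (the class below is illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Non-blocking increment: do the operation optimistically, retry on conflict.
public class CasCounter {
    private final AtomicInteger count = new AtomicInteger(0);

    public void increment() {
        int current;
        do {
            current = count.get();
            // compareAndSet fails if another thread changed the value in the
            // meantime; looping back is the "compensation measure".
        } while (!count.compareAndSet(current, current + 1));
    }

    public int get() {
        return count.get();
    }
}
```

No thread is ever suspended here; a losing thread simply spins and retries.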
- Thread-safe solutions that require no synchronization
Synchronization is not a prerequisite for thread safety; there is no inherent cause-and-effect relationship between the two. If a method does not involve shared data, it naturally needs no synchronization to be correct, so some code is inherently thread-safe. Examples include reentrant code and thread-local storage.
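As a sketch of the second example, ThreadLocal gives each thread its own private copy of a variable, so no synchronization is needed (the class and names below are illustrative):

```java
// Each thread sees its own copy of the mutable state; here a StringBuilder
// stands in for any non-thread-safe object.
public class ThreadLocalDemo {
    private static final ThreadLocal<StringBuilder> buffer =
            ThreadLocal.withInitial(StringBuilder::new);

    public static String tag(String name) {
        StringBuilder sb = buffer.get(); // this thread's private instance
        sb.setLength(0);                 // safe: no other thread shares it
        return sb.append("thread-").append(name).toString();
    }
}
```

Because no data is shared, no lock is taken at any point; the code is thread-safe by construction.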
Synchronized improvements in the JDK
Prior to JDK 1.5, Java relied on the synchronized keyword for thread safety. synchronized is a built-in lock implemented by the JVM; its acquisition and release are handled implicitly by the JVM.
In JDK 1.5, the Lock interface was added to the java.util.concurrent.locks package. It provides synchronization similar to the synchronized keyword, except that the Lock must be explicitly acquired and released when used. As mentioned above, Lock is implemented in Java, while synchronized relies on the mutex lock of the underlying operating system. Every acquisition and release of such an OS-level lock causes a switch between user mode and kernel mode, which increases system overhead. As a result, synchronized performed poorly under lock contention and is often referred to as a heavyweight lock. In JDK 1.5, synchronized performed much worse than Lock, especially when a single thread repeatedly requested the same lock. For example, in Dubbo’s Netty-based communication, after the consumer sends a request to the server, a thread must poll for the return message because the response is received asynchronously, and a lock is used to ensure the atomicity of the request session. If synchronized were used here, a user-to-kernel-mode switch would occur every time the same thread requested the lock resource.
As of JDK 1.6, Java has heavily optimized synchronized, making its performance comparable to Lock and even surpassing it in some scenarios.
Synchronized usage
Everything in Java is an object, and every object can serve as a lock; this is the basis of synchronized’s thread safety.
- For synchronized instance methods, the lock is the current instance object, i.e. this, and other instances of the class are unaffected.
- For static synchronized methods, the lock is the Class object of the current class, which affects all instances of that class.
- For synchronized blocks, the lock is the object configured in the parentheses after synchronized.
That is, synchronized can modify an instance method, a static method, or a block of code. The following code illustrates each of the three cases.
public class SynchronizedTest implements Runnable {

    static SynchronizedTest instance = new SynchronizedTest();

    // Synchronized block: the lock is the object in the parentheses
    @Override
    public void run() {
        synchronized (instance) {
            // ...
        }
    }

    // Synchronized instance method: the lock is this
    public synchronized void method1() {
    }

    // Static synchronized method: the lock is SynchronizedTest.class
    public static synchronized void method2() {
    }
}
Synchronized blocks
When a thread attempts to enter a synchronized block, it must first acquire the lock, and it releases the lock when it exits the block or throws an exception. So where does the lock live? What information is stored inside it? Let’s start with a piece of code and its bytecode (I’m using IDEA’s jclasslib plug-in here).
package techgo.blog;

public class SynchronizedTest {

    private int i = 0;

    public void fun() {
        synchronized (this) {
            i++;
        }
    }
}
In the bytecode we see the monitorenter and monitorexit instructions. Checking the JVM bytecode instruction table, we learn that these two instructions acquire and release the lock on an object, respectively. After the monitorenter instruction executes, the thread holds the object’s Monitor; when the monitorexit instruction executes, the thread releases it. This is how synchronized blocks are implemented.
Synchronized methods
For synchronized methods, we do not see these instructions in the bytecode, because synchronized methods rely instead on the ACC_SYNCHRONIZED flag in the method’s access flags:
public synchronized void fun1() {
}
When a method is invoked, the invocation instruction checks whether the method’s ACC_SYNCHRONIZED access flag is set. If so, the executing thread acquires the Monitor object before executing the method body. The Monitor is unavailable to other threads while the method runs, and it is released when the method completes.
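The flag is even visible from Java via reflection: ACC_SYNCHRONIZED surfaces as the SYNCHRONIZED modifier bit (the demo class below is illustrative):

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class AccFlagCheck {
    public synchronized void fun1() { }
    public void fun2() { }

    // Returns true when the named method carries the ACC_SYNCHRONIZED flag.
    public static boolean isSync(String name) throws NoSuchMethodException {
        Method m = AccFlagCheck.class.getMethod(name);
        return Modifier.isSynchronized(m.getModifiers());
    }
}
```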
Implementation of synchronized locks
A synchronized object lock holds a pointer to the starting address of a monitor object (implemented in C++). Each object instance has a Monitor, which can be created and destroyed along with the object, or generated on demand when a thread first tries to acquire the object’s lock. It’s important to note that the monitor is not a Java-specific concept; for a more detailed introduction to monitors, see this article.
In the HotSpot virtual machine, Monitor is ultimately implemented by the ObjectMonitor class.
The source in openjdk/hotspot/src/share/vm/runtime/objectMonitor.hpp is as follows:
ObjectMonitor() {
    _header       = NULL;
    _count        = 0;
    _waiters      = 0;       // number of waiting threads
    _recursions   = 0;       // reentry (recursion) count
    _object       = NULL;    // the object this monitor is parasitic on; the lock
                             // is not visible directly but lives with the object
    _owner        = NULL;    // points to the thread (or underlying lock) that
                             // acquired this ObjectMonitor
    _WaitSet      = NULL;    // threads in the wait state are added to the wait set
    _WaitSetLock  = 0;
    _Responsible  = NULL;
    _succ         = NULL;
    _cxq          = NULL;
    FreeNext      = NULL;
    _EntryList    = NULL;    // threads in the blocked state are added to the entry set
    _SpinFreq     = 0;
    _SpinClock    = 0;
    OwnerIsThread = 0;       // _owner is (Thread *) vs SP/BasicLock
    _previous_owner_tid = 0; // ID of the thread that previously owned the monitor
}
When multiple threads access the same piece of synchronized code at the same time, the threads are stored in the ContentionList and _EntryList collections; all threads in the blocked state are added to these lists. When a thread then acquires the object’s Monitor, the Monitor relies on the mutex lock of the underlying operating system to achieve mutual exclusion. If the thread’s mutex acquisition succeeds, it holds the mutex and no other thread can obtain it. A thread that loses the race re-enters the ContentionList and is suspended.
If a thread calls wait(), it releases the mutex it currently holds and enters the _WaitSet collection, waiting to be woken up later. The mutex is also released when the thread completes the synchronized method normally.
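A small sketch of this WaitSet round trip (the class and the timing below are illustrative): wait() releases the monitor, which is exactly what lets the second thread acquire it and call notify().

```java
public class WaitNotifyDemo {
    private final Object lock = new Object();
    private boolean ready = false;
    private volatile String result = null;

    public String run() throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                while (!ready) {
                    try {
                        lock.wait(); // releases the monitor, enters the WaitSet
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                result = "woken up";
            }
        });
        waiter.start();
        Thread.sleep(100);    // give the waiter time to call wait()
        synchronized (lock) { // acquirable only because wait() released the monitor
            ready = true;
            lock.notify();    // move the waiter out of the WaitSet
        }
        waiter.join();
        return result;
    }
}
```

If wait() did not release the mutex, the second synchronized (lock) block would deadlock.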
Go further (lock optimization)
As we all know, objects are created on the heap, and an object’s storage layout in memory can be divided into three areas: the object header, instance data, and alignment padding.
The object header mainly holds two parts of information: the Mark Word and the Klass Pointer.
- The Mark Word stores the object’s own runtime data, such as its hash code, GC generational age, lock status flags, locks held by threads, biased thread ID, biased timestamp, and so on. A Java object header typically occupies two machine words (in a 32-bit virtual machine, one word is four bytes, or 32 bits), but an array object requires three words: the JVM can determine the size of an ordinary Java object from its metadata, but it has no way to determine an array’s size that way, so an extra word is used to record the array’s length.
- The other part is the type pointer, the Klass Pointer: the JVM uses this pointer to determine which class the object is an instance of.
Lock upgrading relies mainly on the lock flag bits and the biased-lock bit in the Mark Word. A synchronized lock starts out as a biased lock; as contention grows fiercer, it is upgraded to a lightweight lock and finally to a heavyweight lock. Well, that’s all for today; the details of lock optimization are still being written up.
References:
Understanding the Java Virtual Machine in Depth, 2nd Edition
https://blog.csdn.net/wangyadong317/article/details/84065828
https://blog.csdn.net/zjy15203167987/article/details/82531772
https://www.cnblogs.com/JsonShare/p/11433302.html
https://baijiahao.baidu.com/s?id=1612142459503895416&wfr=spider&for=pc
http://cmsblogs.com/?p=2071
https://www.php.cn/java-article-410323.html
This article is published by OpenWrite!
Article: https://zhuanlan.zhihu.com/lovebell
Personal public account: Technology Go
Your likes and support are the author’s biggest motivation to keep updating!