Synchronized is implemented by the JVM: it synchronizes methods and code blocks by entering and exiting monitor objects. The details differ between the two forms. Code-block synchronization uses the monitorenter and monitorexit bytecode instructions, while method synchronization uses the ACC_SYNCHRONIZED flag in the method's access flags. When a synchronized method is called, the JVM first checks whether ACC_SYNCHRONIZED is set; if so, the executing thread must acquire the monitor before running the method body.
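Both forms can be seen with a small class like the one below (the class name is just for illustration); compiling it and dumping the bytecode with `javap -c -v` shows the ACC_SYNCHRONIZED flag on syncMethod and a monitorenter/monitorexit pair around the block in syncBlock:

```java
public class SyncBytecodeDemo {
    private final Object lock = new Object();

    // javap -v lists ACC_SYNCHRONIZED among this method's flags;
    // there is no monitorenter/monitorexit in its bytecode
    public synchronized void syncMethod() {
        System.out.println("synchronized method");
    }

    // javap -c shows monitorenter before the block body, monitorexit on the
    // normal exit path, and an extra monitorexit in the exception handler
    public void syncBlock() {
        synchronized (lock) {
            System.out.println("synchronized block");
        }
    }
}
```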
A synchronized lock is essentially a lock on an object, so before digging into the principle, let's take a look at Java objects.
Java object
Objects are stored in the heap. Each object consists of an object header, instance data, and padding (for array objects the header also records the array length). Why padding? The JVM requires that an object's size be a multiple of 8 bytes, so when the header plus instance data doesn't meet that requirement, padding bytes are added to make up the difference. Let's take a closer look at a Java object in code.
```xml
<dependency>
    <groupId>org.openjdk.jol</groupId>
    <artifactId>jol-core</artifactId>
    <version>0.9</version>
</dependency>
```

```java
// HeadObject.java
public class HeadObject {
    private boolean flag;
}
```

```java
// TestDemo.java
import org.openjdk.jol.info.ClassLayout;

public class TestDemo {
    private static HeadObject headObject = new HeadObject();

    public static void main(String[] args) {
        // Print the object's memory layout: header, instance data, padding
        System.out.println(ClassLayout.parseInstance(headObject).toPrintable());
    }
}
```
Printing the internal structure of an object requires the JOL library, which is why the Maven dependency above is added.
As you can see from the output, the object header takes up 12 bytes and the boolean field takes 1 byte, 13 bytes in total. That is not a multiple of 8, so 3 bytes of padding are added to round the object up to 16 bytes.
If we change the flag field in HeadObject from boolean to int, let's see what happens.
You can see that the padding is gone, because the object header (12 bytes) plus the int field (4 bytes) is exactly 16 bytes, already a multiple of 8, so no padding is needed.
Object header
The logic behind synchronized locking is in the object header
The object header mainly consists of two parts: the Mark Word and the Class Metadata Address (a pointer to the object's class metadata).
By default, the Mark Word stores the object's hash code, GC generational age, and lock flag bits. Here is the default Mark Word storage structure for a 32-bit JVM.
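Roughly, on a 32-bit HotSpot JVM the unlocked Mark Word breaks down like this:

| Bits | Content |
| --- | --- |
| 25 bits | object hash code |
| 4 bits | GC generational age |
| 1 bit | biased-lock flag (0) |
| 2 bits | lock flag bits (01) |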
The contents of the Mark Word memory will change depending on the lock state.
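The lock flag bits are what tell the states apart, roughly as follows:

| Lock state | Biased flag | Lock flag bits | Mark Word mainly stores |
| --- | --- | --- | --- |
| No lock | 0 | 01 | hash code, GC age |
| Biased lock | 1 | 01 | owning thread ID, epoch, GC age |
| Lightweight lock | - | 00 | pointer to the Lock Record in the owner's stack |
| Heavyweight lock | - | 10 | pointer to the monitor (ObjectMonitor) |
| GC mark | - | 11 | empty (marked by GC) |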
Implementation principle of heavyweight lock
Before JDK 1.6, every synchronized lock was a heavyweight lock, with lock flag bits 10; in that state the Mark Word holds a pointer to the start of a Monitor object (the monitor lock). Every object has a Monitor associated with it, and when the Monitor is held by a thread, the object is locked. In the Java virtual machine, the Monitor is implemented by ObjectMonitor, whose main fields work as follows:
ObjectMonitor has two queues, _WaitSet and _EntryList, which hold lists of ObjectWaiter objects (every thread waiting for the lock is wrapped in an ObjectWaiter). _owner points to the thread currently holding the ObjectMonitor. When multiple threads hit the same piece of synchronized code at the same time, they first enter _EntryList. When a thread acquires the object's monitor, it enters the _owner area, the monitor's owner is set to that thread, and count is incremented by 1. If the thread calls wait(), it releases the monitor it holds: owner is reset to null, count is decremented by 1, and the thread moves to _WaitSet to wait to be woken up. When the thread finishes the synchronized code, it likewise releases the monitor and resets those fields so that other threads can compete for the lock.
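A small Java sketch of these monitor semantics; the _EntryList/_WaitSet names are HotSpot internals, and the class and method names below are just illustrative:

```java
public class MonitorDemo {
    private final Object lock = new Object();
    private boolean ready = false;

    // A thread blocked trying to enter this block conceptually sits in the monitor's _EntryList
    public void consume() throws InterruptedException {
        synchronized (lock) {          // acquire the monitor: become _owner
            while (!ready) {
                lock.wait();           // release the monitor and move into _WaitSet
            }
            System.out.println("consumed");
        }                              // leaving the block releases the monitor
    }

    public void produce() {
        synchronized (lock) {
            ready = true;
            lock.notifyAll();          // waiters leave _WaitSet and compete for the monitor again
        }
    }

    public static void main(String[] args) throws Exception {
        MonitorDemo demo = new MonitorDemo();
        Thread consumer = new Thread(() -> {
            try {
                demo.consume();
            } catch (InterruptedException ignored) {
            }
        });
        consumer.start();
        Thread.sleep(100);             // give the consumer time to enter wait()
        demo.produce();
        consumer.join();
    }
}
```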
The disadvantages of heavyweight locks are also obvious:

- They depend on the underlying operating system's mutex primitives, so locking and unlocking require switching between user mode and kernel mode, which causes a significant performance loss.
- Researchers found that for most objects, locking and unlocking are done by one specific thread; in other words, the probability of threads actually competing for a lock is low. They measured, on some typical programs, how often the same thread repeatedly locked and unlocked the same object, and that repeat rate turned out to be very high. Early JVMs wasted 19% of their execution time on locks.
Starting with JDK 1.6, synchronized improved its overall performance by adding biased locks and lightweight locks.
From no lock to biased lock
In most cases a lock is not contended and is acquired repeatedly by the same thread, so biased locks were introduced to reduce the cost of lock acquisition for that thread.
- First, check whether the object is in the biased state. If it is, test whether the thread ID in the Mark Word points to the current thread; if so, there is no need to acquire the lock again and the synchronized code runs directly.
- If the thread ID is not the current thread's, try to replace it with a CAS. If the CAS succeeds, there is no contention on the biased lock; if it fails, the biased lock is contended, and biased-lock revocation is started first.
- Lock revocation process: when contention occurs, the JVM waits for a global safepoint, at which the thread holding the lock is suspended. Its stack is walked to check whether there is a Lock Record for the locked object. If there is, the Lock Record and the Mark Word are fixed up to make the object lock-free and the biased flag is set to 0; locking then continues via the lightweight-lock path.
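A quick way to see a biased lock in the Mark Word, assuming JDK 8 with JOL on the classpath (biased locking is deprecated and disabled in recent JDKs); the -XX:BiasedLockingStartupDelay=0 flag removes the default delay before biased locking takes effect:

```java
import org.openjdk.jol.info.ClassLayout;

// Run on JDK 8 with: -XX:BiasedLockingStartupDelay=0
public class BiasedLockDemo {
    public static void main(String[] args) {
        Object lock = new Object();
        // Before any locking the object is already "biasable": biased flag 1, lock bits 01, no thread ID yet
        System.out.println(ClassLayout.parseInstance(lock).toPrintable());

        synchronized (lock) {
            // Inside the block the Mark Word records the current thread's ID: a biased lock
            System.out.println(ClassLayout.parseInstance(lock).toPrintable());
        }
    }
}
```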
Biased locks are upgraded to lightweight locks
How to upgrade to a lightweight lock
- First, the thread creates a Lock Record in its own stack frame.
- The thread copies the lock object's Mark Word into the Lock Record in its stack.
- The Owner pointer in the Lock Record is pointed at the lock object.
- The Mark Word in the lock object's header is replaced, via CAS, with a pointer to the Lock Record.
- The lock flag bits become 00, indicating a lightweight lock.
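The lightweight-lock state can also be observed with JOL. This sketch assumes JDK 8 with default settings, where biased locking only kicks in a few seconds after startup, so a lock taken right away goes straight to a lightweight lock:

```java
import org.openjdk.jol.info.ClassLayout;

public class LightweightLockDemo {
    public static void main(String[] args) {
        Object lock = new Object();
        synchronized (lock) {
            // With no contention (and biased locking not yet active), the Mark Word holds a
            // pointer to the Lock Record in this thread's stack and the lock flag bits are 00
            System.out.println(ClassLayout.parseInstance(lock).toPrintable());
        }
        // After the block exits, the displaced Mark Word is written back (unlocked, flag bits 01)
        System.out.println(ClassLayout.parseInstance(lock).toPrintable());
    }
}
```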
Upgrade from lightweight lock to heavyweight lock
After the lock has been upgraded to a lightweight lock, if yet another thread competes for it, the new thread spins, trying to acquire the lock a certain number of times (10 by default). If it still cannot get the lock, the lock is upgraded to a heavyweight lock.
Here’s why it spins for a while and then becomes a heavyweight lock.
In general, the code inside a synchronized block finishes very quickly, so a new thread that spins for a short while can usually pick up the lock cheaply. But spinning is a busy loop that burns CPU, so a spin limit is set; once it is exceeded, the lock switches to a heavyweight lock and the competing threads block instead of spinning forever.
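A toy illustration of the "spin a bounded number of times, then block" idea; this is not HotSpot's actual implementation, and the class name and MAX_SPINS constant are made up for the sketch:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicReference;
import java.util.concurrent.locks.LockSupport;

// Toy lock: spin briefly, then park the thread, mimicking the
// lightweight-lock -> heavyweight-lock escalation in spirit only.
public class SpinThenBlockLock {
    private static final int MAX_SPINS = 10;   // like the "10 spins" default mentioned above
    private final AtomicReference<Thread> owner = new AtomicReference<>();
    private final ConcurrentLinkedQueue<Thread> waiters = new ConcurrentLinkedQueue<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Phase 1: optimistic spinning, cheap if the lock is released quickly
        for (int i = 0; i < MAX_SPINS; i++) {
            if (owner.compareAndSet(null, current)) {
                return;
            }
        }
        // Phase 2: give up spinning and block, like inflating to a heavyweight lock
        waiters.add(current);
        while (!owner.compareAndSet(null, current)) {
            LockSupport.park(this);
        }
        waiters.remove(current);
    }

    public void unlock() {
        owner.set(null);
        Thread next = waiters.peek();
        if (next != null) {
            LockSupport.unpark(next);          // wake one blocked waiter
        }
    }
}
```

A caller would use it like any lock: `lock.lock(); try { ... } finally { lock.unlock(); }`.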
Synchronized code blocks and synchronized methods at the bytecode level
Looking at this bytecode together with the principles above gives a clearer picture of how synchronized works.
For synchronized methods, it is the ACC_SYNCHRONIZED access flag that marks a method as synchronized. When the method is called, the JVM first checks whether ACC_SYNCHRONIZED is set, and if so, the executing thread must obtain the monitor before the method body runs.
Synchronized blocks are implemented with monitorenter and monitorexit. To ensure that monitorenter and monitorexit still pair up when the method completes abruptly with an exception, the compiler automatically generates an exception handler that executes the monitorexit instruction.
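Conceptually, the bytecode emitted for a synchronized block behaves like the hand-written try/finally sketch below; this shows the semantics only, not the literal bytecode, and the class name is made up:

```java
public class MonitorExitSketch {
    private final Object lock = new Object();

    private void doWork() {
        System.out.println("work");
    }

    // What `synchronized (lock) { doWork(); }` compiles to, semantically:
    public void syncBlockSemantics() {
        // monitorenter on `lock` happens here in the real bytecode
        try {
            doWork();
        } finally {
            // monitorexit on `lock` -- the compiler-generated exception handler
            // guarantees this runs even when doWork() throws
        }
    }
}
```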
If you think Xiaotiao's writing is good, give it a like and a follow, and let's keep showing our persistence together.