Starting with interview questions from Alibaba and other big companies
- Talk about your understanding of Synchronized
- Synchronized
- Synchronized is not necessarily inferior to Lock
Background of synchronized lock optimization: using locks guarantees data safety but degrades performance; lock-free programs exploit thread parallelism for better performance but sacrifice safety. How do we balance the two?
The lock upgrade process in a nutshell: the synchronized lock reuses the Mark Word in the object header according to the lock flag bits and the lock upgrade strategy.
Synchronized performance changes
Before Java 5 there was only synchronized, a heavyweight operation at the operating-system level. Under fierce lock contention, performance degraded badly, because before Java 5 every block and wake-up required switching between user mode and kernel mode. (To put it bluntly, a heavyweight lock must drop into the operating system's kernel mode, which costs performance.) Java threads are mapped to native operating-system threads, so blocking or waking a thread requires the OS to intervene and switch between user mode and kernel mode, which consumes a lot of system resources: user mode and kernel mode each have their own dedicated memory space and registers, many variables and parameters must be passed into the kernel when switching from user mode to kernel mode, and the kernel must save some register values and variables during the switch so that, after the kernel-mode call finishes, it can switch back to user mode and continue working.
In early versions of Java, synchronized was a heavyweight lock and therefore inefficient: the monitor lock depends on the underlying operating system's Mutex Lock, and suspending or resuming a thread must be done in kernel mode. Blocking or waking a Java thread requires the operating system to switch CPU state, which costs processor time. If the code inside the synchronized block is trivial, the state switch can take longer than the user code itself. This is why early synchronized was inefficient.
After Java 6, lightweight and biased locks were introduced to reduce the performance cost of acquiring and releasing locks
Why can every object be a lock? A monitor can be understood as a synchronization tool or a synchronization mechanism, and in Java it is usually described in terms of objects: every Java object is a natural monitor. In Java's design, every object is born with an invisible lock, called the internal lock or monitor lock.
The essence of the monitor is the Mutex Lock of the underlying operating system; the monitor itself is implemented in C++ inside the JVM. Switching between operating-system threads requires a very expensive transition from user mode to kernel mode, which takes a lot of processor time. That is why synchronized is a heavyweight operation in the Java language.
How does the monitor relate to Java objects and threads? 1. If a Java object is locked by a thread, the LockWord in the object's Mark Word points to the monitor's starting address. 2. The monitor's Owner field holds the ID of the thread that holds the object's lock.
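Because every object carries a monitor, any plain Object can serve as a lock. The minimal, runnable sketch below (class and field names are my own, not from the original text) shows two threads incrementing a shared counter under `synchronized` on an ordinary Object:

```java
// A plain Object used as a monitor: two threads increment a shared counter
// under synchronized(lock); the monitor serializes the updates.
public class AnyObjectAsLock {
    private static final Object lock = new Object(); // any object carries a monitor
    private static int counter = 0;

    static int run() {
        counter = 0;
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                synchronized (lock) {   // enter the monitor of 'lock'
                    counter++;
                }                       // exit the monitor
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        try {
            t1.join(); t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(run()); // always 2000: the monitor prevents lost updates
    }
}
```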
Falling back to the Mutex Lock requires a transition from user mode to kernel mode, so the state transition consumes a lot of processor time.
Java 6 began optimizing synchronized: lightweight locks and biased locks were introduced to reduce the performance cost of acquiring and releasing locks.
Synchronized lock types and upgrade steps
There are three multi-threaded access scenarios, as follows:
- Only one thread ever accesses the lock
- Two threads A and B access it alternately
- Competitive: multiple threads access it concurrently
The upgrade process: the lock used by synchronized is the Mark Word stored in the Java object header. Lock upgrading relies mainly on the lock flag bits and the biased-lock flag bit in the Mark Word.
Lock-free
The lock-free demonstration code is shown below:
package com.atguigu.juc.senior.inner.object;

import org.openjdk.jol.info.ClassLayout;

/**
 * @author zzyy
 * @create 2020-06-13
 */
public class MyObject
{
    public static void main(String[] args)
    {
        Object o = new Object();
        System.out.println("Base 10 hash code: " + o.hashCode());
        System.out.println("Hexadecimal hash code: " + Integer.toHexString(o.hashCode()));
        System.out.println("Binary hash code: " + Integer.toBinaryString(o.hashCode()));
        System.out.println(ClassLayout.parseInstance(o).toPrintable());
    }
}
Initially no thread competes for the lock, so the object is in the lock-free state.
Biased lock
When one synchronized block is accessed many times by the same thread, and there is only that one thread, the thread automatically acquires the lock on subsequent accesses. It is like a returning regular customer being served directly, without checking in again: the same thread gets the streamlined experience.
The HotSpot authors found that in most cases locks not only see no multi-thread contention, but are also acquired multiple times by the same thread.
Biased locking exists to improve performance in exactly this case, when only one thread executes the synchronized block.
Biased locking records the thread ID into the Mark Word via CAS.
Theory in practice: real-world experience shows that "the lock is almost always held by the same thread, and contention rarely occurs". In other words, a lock tends to be owned by the first thread that acquires it; that thread is the lock's biased thread. It is then enough to record the biased thread's ID when the lock is first acquired, and that thread holds the lock from then on. When the thread enters and exits the synchronized block, it does not need to acquire and release the lock again; it simply checks whether the object header stores a biased lock pointing to the current thread. If they match, the lock is biased toward the current thread, and there is no need to try to acquire the lock until contention occurs. On each subsequent entry, the thread just compares the biased thread ID in the object header with its own; if they are equal, it enters the synchronized code directly, with no CAS update of the object header on every lock and unlock. If only one thread ever uses the lock, biased locking clearly adds minimal overhead and performs well.
If the IDs differ, contention has occurred: the lock no longer always favors the same thread and may need to be upgraded to a lightweight lock to ensure fair competition between threads. A thread holding a biased lock never releases it proactively; it gives the lock up only when another thread tries to compete for it.
Technical implementation: when a thread first enters a synchronized method, the lock object's Mark Word records the biased state in its lock flag bits and uses the first 54 bits to store the thread pointer as an identifier. If the same thread enters the synchronized method again, it only needs to check whether the biased lock in the object header's Mark Word points to its own ID; it never enters the monitor to compete for the object.
Detailed case, using an Account object as an example: when a thread executes a synchronized block, the JVM uses CAS to record the thread pointer in the Mark Word and sets the bias flag, marking that the current thread has acquired the lock. The lock object becomes a biased lock (by modifying the lock flag bits in the object header via CAS); literally, it is "biased in favor of the first thread that acquires it". The thread does not proactively release the biased lock after finishing the synchronized block.
At this point the thread has the lock and can execute the synchronized block. When the thread reaches the block a second time, the JVM checks the Mark Word of the Account object to see whether the thread holding the lock is still itself (its thread ID is still in the object header); if so, it can continue working in the critical section. Since the lock was never released, there is no need to re-acquire it. If only one thread ever uses the lock, biased locking clearly adds little overhead and performs well.
Conclusion: the JVM does not need to negotiate with the operating system to set up a mutex (in the kernel); it only needs to record the thread ID to mark that the thread has acquired the lock, with no OS involvement.
This is the biased lock: it stays biased toward the current thread, which keeps executing without re-acquiring the lock, until another thread competes for it.
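To make the "record the thread ID once, then just compare" idea concrete, here is a toy model of the biased-lock fast path in plain Java. This is not the JVM's real Mark Word logic, only a sketch of the idea; the class and method names are invented for illustration.

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy model of the biased-lock fast path: install the owner thread ID once via CAS,
// after which re-entry by the same thread is just an ID comparison (no atomics).
// Illustration of the idea only, NOT the JVM's actual implementation.
public class BiasedLockSketch {
    private final AtomicLong biasedOwner = new AtomicLong(0); // 0 = not yet biased

    /** Returns true if the current thread may enter without contention handling. */
    public boolean fastPathEnter() {
        long self = Thread.currentThread().getId();
        long owner = biasedOwner.get();
        if (owner == self) {
            return true;                               // already biased toward us: cheapest path
        }
        if (owner == 0) {
            return biasedOwner.compareAndSet(0, self); // first acquisition: a single CAS
        }
        return false;                                  // biased toward another thread:
                                                       // the real JVM would revoke/upgrade here
    }

    public static void main(String[] args) {
        BiasedLockSketch lock = new BiasedLockSketch();
        System.out.println(lock.fastPathEnter()); // true: CAS installs the bias
        System.out.println(lock.fastPathEnter()); // true: re-entry is only an ID compare
    }
}
```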
Biased-lock JVM parameters are as follows:
Biased locking is actually enabled by default since JDK 1.6, but it is activated only after a startup delay, so you need to add the parameter -XX:BiasedLockingStartupDelay=0 to make it take effect as soon as the program starts.
Enable biased locking: -XX:+UseBiasedLocking -XX:BiasedLockingStartupDelay=0
Disable biased locking (the lock then goes straight to the lightweight lock state): -XX:-UseBiasedLocking
The demonstration code with default settings is shown below:
package com.atguigu.juc.senior.inner.object;

import org.openjdk.jol.info.ClassLayout;

/**
 * @author zzyy
 * @create 2020-06-13
 */
public class MyObject
{
    public static void main(String[] args)
    {
        Object o = new Object();

        new Thread(() -> {
            synchronized (o)
            {
                System.out.println(ClassLayout.parseInstance(o).toPrintable());
            }
        }, "t1").start();
    }
}
-XX:+UseBiasedLocking            enable biased locking (the default)
-XX:-UseBiasedLocking            disable biased locking
-XX:BiasedLockingStartupDelay=0  turn off the startup delay (needed when demonstrating biased locks)
Parameter notes: biased locking is enabled by default in JDK 1.6 and above, but is activated only a few seconds after program start; you can remove the delay with the JVM parameter -XX:BiasedLockingStartupDelay=0.
If you know the lock is usually contended, you can turn biased locking off with the JVM parameter -XX:-UseBiasedLocking, in which case the lock goes straight to the lightweight state by default.
Parameter to turn off the delay and enable the feature immediately: -XX:BiasedLockingStartupDelay=0
Good times never last… o(╥﹏╥)o — a second thread arrives to snatch the lock.
Biased lock revocation: once another thread competes for the lock, the biased lock can no longer be used and must be upgraded to a lightweight lock. The competing thread fails its CAS attempt to update the object header, and the biased lock is revoked at the global safepoint (a point where no bytecode is executing).
Biased locking uses a release-on-contention mechanism: the original thread holding the biased lock gives it up only when another thread competes for it. Revocation must wait for the global safepoint (where no bytecode is executing) and then checks whether the thread holding the biased lock is still executing:
① If the first thread is still executing the synchronized method (inside the synchronized block) when another thread competes, the biased lock is revoked and upgraded. The lightweight lock is then held by the thread that originally held the biased lock, which continues executing its synchronized code, while the competing thread spins, waiting to acquire the lightweight lock.
② If the first thread has finished the synchronized method (exited the synchronized block), the object header is set back to the lock-free state, the biased lock is revoked, and the lock can be biased again.
The general process is shown as follows:
Lightweight lock
Threads compete for the lock, but the contention window is very short; this is essentially a spin lock. Lightweight locks exist to improve performance when threads execute synchronized blocks almost alternately.
Main purpose: when there is no real multi-thread contention, use CAS to avoid the performance cost of the heavyweight lock's OS mutex, i.e. spin first, then block.
Upgrade timing: when biased locking is disabled, or multiple threads compete for the biased lock, the biased lock is upgraded to a lightweight lock.
Suppose thread A has already acquired the lock, and thread B comes to grab it. Since the object's lock was acquired by thread A, it is currently a biased lock. Thread B finds that the thread ID in the object header's Mark Word is not its own (it is thread A's), so thread B performs a CAS in the hope of acquiring the lock. Thread B's attempt has two outcomes:
If the CAS succeeds, the thread ID in the Mark Word is replaced with B's own ID (A → B), and the lock re-biases to the new thread (the biased lock is handed to another thread, meaning the original thread has effectively "released" it). The lock stays in the biased state: thread A is done, thread B takes over. If the CAS fails, the biased lock is upgraded to a lightweight lock. The lightweight lock is then held by the thread that originally held the biased lock, which continues executing its synchronized code, while the competing thread B spins, waiting to acquire the lightweight lock. If biased locking is turned off, the lock goes straight to the lightweight state: -XX:-UseBiasedLocking
When spinning reaches a certain number of attempts and degree of contention, the lock is upgraded to a heavyweight lock, depending on the situation.
Before Java 6: a fixed limit, -XX:PreBlockSpin=10, or when the number of spinning threads exceeds half the number of CPU cores.
After Java 6: adaptive spinning, meaning the number of spins is not fixed but is determined by:
- the result of the last spin on the same lock
- the state of the thread holding the lock
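The "spin first, then block" behavior described above can be sketched with a hand-rolled lock: a bounded CAS spin stands in for the lightweight-lock phase, and parking stands in for inflation to a blocking heavyweight lock. The class name and the fixed spin limit of 10 are illustrative choices, not JVM internals.

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.LockSupport;

// Sketch of "spin first, block later": a bounded CAS spin models the lightweight
// lock, and parking models falling back to a heavyweight (blocking) lock.
// Illustrative only; the spin limit of 10 mirrors the old fixed default.
public class SpinThenBlockLock {
    private final AtomicBoolean held = new AtomicBoolean(false);
    private int counter = 0;

    public void lock() {
        for (int i = 0; i < 10; i++) {                 // spin phase (lightweight)
            if (held.compareAndSet(false, true)) return;
        }
        while (!held.compareAndSet(false, true)) {     // blocking phase (heavyweight)
            LockSupport.parkNanos(1_000);
        }
    }

    public void unlock() {
        held.set(false);
    }

    static int demo() {
        SpinThenBlockLock lock = new SpinThenBlockLock();
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                lock.lock();
                try { lock.counter++; } finally { lock.unlock(); }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        try { t1.join(); t2.join(); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return lock.counter;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 2000: mutual exclusion holds in both phases
    }
}
```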
Differences between lightweight locks and biased locks
- When a thread fails to acquire a lightweight lock, it spins and keeps trying to grab it
- Lightweight locks need to be released every time they exit a synchronized block, whereas biased locks release locks only when a race occurs
Heavyweight lock
A large number of threads participate in the lock competition and conflicts are intense. The lock flag bits are as follows:
A small summary
Summary of the synchronized lock upgrade process: in a word, spin first, block later. In effect, the old pessimistic (heavyweight) locking is replaced, under suitable conditions, by biased locks and lightweight locks (CAS-based spin locks).
Synchronized methods and synchronized code blocks are implemented differently in bytecode, but both are ultimately based on the Mark Word in the object header.
Synchronized used heavyweight locks before JDK 1.6; since then it has been optimized into the lock-free → biased → lightweight → heavyweight progression instead of using a heavyweight lock in every case.
Biased locking: Applies to single-threaded situations where a biased lock is used to enter synchronized methods/blocks when there is no lock contention.
Lightweight lock: suitable for low contention (similar to the scenarios where optimistic locking applies). When contention appears, the biased lock is upgraded to a lightweight lock, which uses spinning. If the synchronized method/block executes very quickly, the lightweight lock burns some CPU on spinning but is still more efficient than a heavyweight lock.
Heavyweight lock: suitable for fierce contention, where the synchronized method/block takes a long time to execute and the spinning cost of a lightweight lock would exceed the cost of a heavyweight lock, so the lock is upgraded to a heavyweight lock.
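The upgrade path in this summary can be written down as a tiny one-way state machine (locks upgrade under contention but do not downgrade). This is an illustrative sketch with invented names, not JVM code:

```java
// One-way lock upgrade state machine matching the summary above:
// lock-free -> biased -> lightweight -> heavyweight. Illustrative only.
enum LockState { LOCK_FREE, BIASED, LIGHTWEIGHT, HEAVYWEIGHT }

public class LockUpgradeSketch {
    static LockState next(LockState s, boolean contention) {
        switch (s) {
            case LOCK_FREE:   return LockState.BIASED;                       // first thread biases the lock
            case BIASED:      return contention ? LockState.LIGHTWEIGHT : s; // CAS on Mark Word fails
            case LIGHTWEIGHT: return contention ? LockState.HEAVYWEIGHT : s; // spin budget exhausted
            default:          return s;                                      // heavyweight is terminal
        }
    }

    public static void main(String[] args) {
        LockState s = LockState.LOCK_FREE;
        s = next(s, false); System.out.println(s); // BIASED
        s = next(s, true);  System.out.println(s); // LIGHTWEIGHT
        s = next(s, true);  System.out.println(s); // HEAVYWEIGHT
    }
}
```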
JIT compiler optimizations for locks
The lock elimination code is shown below:
package com.atguigu.itdachang;

/**
 * Lock elimination: the JIT compiler sees that synchronized (o) locks an object that
 * never escapes the method and is not shared with any other thread. In this extreme
 * case no locking machine code is emitted for the lock object at all; the lock is
 * eliminated.
 */
public class LockClearUPDemo
{
    static Object objectLock = new Object(); // normal lock object

    public void m1()
    {
        // Lock elimination: the JIT ignores 'o', so the synchronized block effectively
        // no longer exists. Locking on a method-local object like this is abnormal usage.
        Object o = new Object();
        synchronized (o)
        {
            System.out.println("-----hello LockClearUPDemo" + "\t" + o.hashCode() + "\t" + objectLock.hashCode());
        }
    }

    public static void main(String[] args)
    {
        LockClearUPDemo demo = new LockClearUPDemo();

        for (int i = 1; i <= 10; i++) {
            new Thread(() -> {
                demo.m1();
            }, String.valueOf(i)).start();
        }
    }
}
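Conceptually, once escape analysis proves that `o` never leaves `m1()`, the JIT compiles the method as if the lock were not there. Below is a hand-written equivalent of the eliminated version (my own sketch of the effect, not actual JIT output):

```java
// Hand-written equivalent of what the JIT effectively runs after eliminating the
// lock on the non-escaping object 'o' in m1(). Illustrative sketch only.
public class LockClearUPEquivalent {
    static String m1Eliminated() {
        Object o = new Object();   // 'o' never escapes this method
        // no synchronized block remains: the monitorenter/monitorexit are gone
        return "-----hello LockClearUPDemo\t" + o.hashCode();
    }

    public static void main(String[] args) {
        System.out.println(m1Eliminated());
    }
}
```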
The lock coarsening code is shown below:
package com.atguigu.itdachang;

/**
 * Lock coarsening: if a method contains several consecutive synchronized blocks on the
 * same lock object, the JIT compiler merges them into one larger synchronized block,
 * enlarging the synchronized region. The lock then only needs to be acquired and
 * released once, avoiding repeated acquire/release cycles.
 */
public class LockBigDemo
{
    static Object objectLock = new Object();

    public static void main(String[] args)
    {
        new Thread(() -> {
            synchronized (objectLock) {
                System.out.println("11111");
            }
            synchronized (objectLock) {
                System.out.println("22222");
            }
            synchronized (objectLock) {
                System.out.println("33333");
            }
        }, "a").start();

        new Thread(() -> {
            synchronized (objectLock) {
                System.out.println("44444");
            }
            synchronized (objectLock) {
                System.out.println("55555");
            }
            synchronized (objectLock) {
                System.out.println("66666");
            }
        }, "b").start();
    }
}
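For thread "a" above, the coarsened form the JIT conceptually produces looks like this hand-written equivalent: one acquire/release instead of three. (My own sketch; the JIT performs this merge in compiled code, not in your source.)

```java
// Hand-written equivalent of lock coarsening for thread "a": the three adjacent
// synchronized blocks on objectLock are merged into a single block, so the lock
// is acquired and released only once. Illustrative sketch only.
public class LockCoarsenedEquivalent {
    static final Object objectLock = new Object();

    static String runCoarsened() {
        StringBuilder sb = new StringBuilder();
        synchronized (objectLock) {   // one monitorenter ...
            sb.append("11111");
            sb.append("22222");
            sb.append("33333");
        }                             // ... and one monitorexit
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(runCoarsened());
    }
}
```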