Getting to know synchronized
Synchronized should be familiar to anyone who has done concurrent programming in Java. As we all know, when multiple threads run in parallel, a program is not automatically thread-safe, and the problem shows up mainly around shared variables.
```java
package com.zwx.concurrent;

public class TestSynchronized {
    private static int count;

    public static void increment() {
        try {
            Thread.sleep(1);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        count++;
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 1000; i++) {
            new Thread(() -> TestSynchronized.increment()).start();
        }
        Thread.sleep(3000);
        System.out.println("result: " + count);
    }
}
```
The expected output is 1000, but the actual result is not always 1000, because count++ is not atomic: it is a read-modify-write sequence of three steps. Two threads can both read count == 0, both compute 1, and both write 1 back, so one increment is lost. Whenever this interleaving happens the final result is less than 1000. How do we solve this? The simplest way is to declare increment() as synchronized.
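The fix above can be sketched as follows. This is a minimal sketch (the class and method names here are ours, not from the original): synchronizing increment() makes the read-modify-write of count atomic, and a CountDownLatch replaces the fixed sleep so the result is read only after every thread has finished.

```java
import java.util.concurrent.CountDownLatch;

// A thread-safe version of the counter: synchronized makes count++ atomic.
class SafeCounter {
    private static int count;

    public static synchronized void increment() { count++; }
    public static synchronized int get() { return count; }

    // Starts n incrementing threads and waits for all of them to finish.
    static int runThreads(int n) {
        CountDownLatch done = new CountDownLatch(n);
        for (int i = 0; i < n; i++) {
            new Thread(() -> { increment(); done.countDown(); }).start();
        }
        try {
            done.await(); // deterministic, unlike sleeping a fixed 3 seconds
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return get();
    }
}
```

With the latch in place, runThreads(1000) reliably returns 1000.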
Synchronized usage
Synchronized can be applied in three ways:
- Modifying an instance method locks the current instance; a thread must acquire the instance's monitor before entering the synchronized code.

```java
public synchronized void test() {
    System.out.println("modify instance method");
}
```
- Modifying a static method locks the Class object of the current class; a thread must acquire that lock before entering the synchronized code.

```java
public static synchronized void test2() {
    System.out.println("modify static method");
}
```
- Modifying a code block locks the specified object; a thread must acquire that object's monitor before entering the synchronized block.

```java
public void test3() {
    synchronized (this) {
        System.out.println("modify code block");
    }
}
```
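The monitors behind these three forms can be checked at runtime with the standard `Thread.holdsLock` API. A small sketch of our own (the class name is hypothetical): an instance method locks `this`, a static method locks the Class object.

```java
// Verifies which monitor each synchronized form actually holds.
class LockTargets {
    synchronized boolean instanceMethod() {
        // Inside a synchronized instance method, the monitor is 'this'.
        return Thread.holdsLock(this);
    }

    static synchronized boolean staticMethod() {
        // Inside a synchronized static method, the monitor is the Class object.
        return Thread.holdsLock(LockTargets.class);
    }
}
```

This also explains why a static synchronized method and a `synchronized (SomeClass.class)` block exclude each other: they use the same monitor.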
How are locks stored
One of the first phrases everyone encounters when learning Java is "everything is an object". A lock is also an object, so what is the structure of an object, and where is the lock information stored in it?
In the HotSpot virtual machine, an object is laid out in memory in three areas: the object header (Header), instance data (Instance Data), and alignment padding (Padding). Synchronized locks are stored in the Java object header, which contains two parts of information:
The first part is officially called the "Mark Word"; it stores the object's own runtime data, such as the hash code, GC generational age, lock flag bits, and the thread ID of a biased lock. The second part is the klass pointer, which points to the object's class metadata; the virtual machine uses it to determine which class the object is an instance of. (If the object is an array, the header also contains a field recording the array's length.)
So the lock is recorded in the "Mark Word" of the object header. How exactly does the "Mark Word" store the lock information?
In a 32-bit VM, the "Mark Word" is laid out as shown below. (figure omitted)

In a 64-bit VM, the "Mark Word" is laid out as shown below. (figure omitted)
Synchronized lock escalation
Synchronized has long been an elder statesman of multithreaded concurrent programming in Java, and many people call it a heavyweight lock. However, Java SE 1.6 made a number of optimizations to synchronized, introducing biased locks and lightweight locks to reduce the cost of acquiring and releasing locks, so synchronized is not always that heavy. Since Java SE 1.6 a lock has four states, from lowest to highest: unlocked, biased, lightweight, and heavyweight, and the lock is upgraded step by step as contention increases. Locks can technically be downgraded under fairly strict conditions, but there is no common standard for it, so in practice we can simply assume that locks only escalate; we won't discuss downgrading further.
Biased locking
The HotSpot authors found through research that in most cases a lock is not only uncontended but is repeatedly acquired by the same thread, so biased locks were introduced to make reacquisition cheaper. When a thread first enters a synchronized block, the thread's ID is stored in the object header; afterwards, the thread entering and exiting the block does not need to lock and unlock again, it simply checks whether the object header still stores its own thread ID. If it matches, the lock is biased toward the current thread and no further work is needed to acquire it.
Biased lock acquisition
1. First read the Mark Word in the lock object's header and check whether the object is in the biasable state (that is, no thread has biased it yet).
2. If it is biasable, write the current thread's ID into the Mark Word with a CAS atomic operation. If the CAS succeeds, the biased lock has been acquired: the biased flag is set to 1 and the Mark Word holds the current thread's ID. If the CAS fails, another thread already holds the bias and there is lock contention; the existing bias must be revoked (see "Biased lock revocation" below) and the lock upgraded to a lightweight lock.
3. If the object is already biased, check whether the thread ID in the Mark Word equals the current thread's ID. If it does, the thread can execute the synchronized block directly without acquiring the lock again.
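The three steps above can be sketched as a toy model. To be clear, this is not the JVM's actual implementation: here the "Mark Word" is just an AtomicLong holding the bias owner's thread ID (0 meaning anonymously biasable), and the revoke/upgrade path is reduced to returning false.

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy model of biased-lock acquisition (NOT how HotSpot really stores it).
class ToyBiasedLock {
    // 0 = biasable (no owner yet); otherwise the owning thread's id.
    private final AtomicLong biasOwner = new AtomicLong(0);

    /** Returns true if the current thread now owns the bias. */
    boolean tryBias() {
        long me = Thread.currentThread().getId();
        long owner = biasOwner.get();
        if (owner == me) return true;          // step 3: already biased to us, no CAS needed
        if (owner == 0) {                      // step 1: biasable state
            return biasOwner.compareAndSet(0, me); // step 2: one CAS installs our id
        }
        return false;                          // biased to another thread: real JVM revokes/upgrades here
    }

    static boolean demo() {
        ToyBiasedLock lock = new ToyBiasedLock();
        boolean first = lock.tryBias();        // CAS installs our thread id
        boolean again = lock.tryBias();        // re-entry by the bias owner is free
        final boolean[] other = new boolean[1];
        Thread t = new Thread(() -> other[0] = lock.tryBias());
        t.start();
        try { t.join(); } catch (InterruptedException e) { throw new RuntimeException(e); }
        // Another thread must NOT get the bias once it is installed.
        return first && again && !other[0];
    }
}
```

The point of the model is the cost profile: after the one-time CAS, re-entry by the owning thread is a plain comparison with no atomic operation at all.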
Biased lock revocation
Revoking a biased lock requires waiting for a global safepoint (a point at which no bytecode is being executed). The thread holding the biased lock is then suspended, and the JVM checks whether that thread is still alive. There are two main cases:
- If the thread is not alive, the object header is reset to the unlocked state.
- If the thread is still alive, the stack of the thread holding the biased lock is walked and the lock records for the biased object are traversed; the lock records in the stack and the Mark Word of the object header are then either rebiased to another thread (which requires the bulk-rebias conditions to be met), or reverted to the unlocked state, or marked to show that the object is no longer suitable for biased locking.
Finally, the suspended thread is woken up.
Bulk rebiasing of biased locks
Bulk rebiasing happens when one thread creates a large number of objects and uses them as lock objects, and later another thread synchronizes on those same objects, reaching a threshold (in other cases, the biased state ends only once threads actually compete for the lock). The JVM threshold parameter is -XX:BiasedLockingBulkRebiasThreshold, which defaults to 20; you can print the default values with -XX:+PrintFlagsFinal. Let's demonstrate bulk rebiasing with an example:
```xml
<dependency>
    <groupId>org.openjdk.jol</groupId>
    <artifactId>jol-core</artifactId>
    <version>0.10</version>
</dependency>
```

```java
package com.zwx.concurrent;

import com.zwx.model.User;
import org.openjdk.jol.info.ClassLayout;

import java.util.ArrayList;
import java.util.List;

public class BiasedLockDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread.sleep(5000); // wait out the biased-locking startup delay
        List<User> list = new ArrayList<>();
        new Thread(() -> {
            for (int i = 0; i < 20; i++) {
                User user = new User();
                synchronized (user) {
                    list.add(user);
                    System.out.println("t1 thread, object " + (i + 1) + ": "
                            + ClassLayout.parseInstance(user).toPrintable());
                }
            }
        }, "t1").start();

        try {
            Thread.sleep(10000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println("------------------------------------------------------");
        new Thread(() -> {
            for (int j = 0; j < 20; j++) {
                User user = list.get(j);
                synchronized (user) {
                    System.out.println("t2 thread, object " + (j + 1) + ": "
                            + ClassLayout.parseInstance(user).toPrintable());
                }
            }
        }, "t2").start();
    }
}
```
In the output, the lock bits of the first 19 objects printed by t2 are 000 (lightweight), while the 20th object reaches the threshold and triggers a bulk rebias, after which the objects show 101 (biased) again.
The three bits of 101 break down as follows:

- The first bit is the biased flag: 0 means non-biased, 1 means biased.
- The last two bits are the lock flag: 01 means unlocked/biased (depending on the biased flag), 00 means lightweight lock, and 10 means heavyweight lock.
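The bit patterns above can be captured in a small decoder of our own (this helper is for illustration only; it is not a JDK API, just a translation of the table the JOL output follows):

```java
// Decodes the low three mark-word bits (biased flag + 2 lock bits)
// as they appear in JOL's toPrintable() output.
class MarkWordBits {
    static String state(int lowThreeBits) {
        switch (lowThreeBits & 0b111) {
            case 0b101: return "biased (biased flag 1, lock bits 01)";
            case 0b001: return "unlocked, non-biased (biased flag 0, lock bits 01)";
            case 0b000: return "lightweight locked (lock bits 00)";
            case 0b010: return "heavyweight locked (lock bits 10)";
            default:    return "other encoding";
        }
    }
}
```

Run against the demo's output, the first 19 objects printed by t2 decode as "lightweight locked" and the rebias-triggering ones as "biased".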
Besides bulk rebiasing there is also bulk revocation; we won't go into it here. I may write a separate, more in-depth article on it later, so stay tuned if you're interested!
Flowchart of biased locking and revocation (figure omitted)
Biased locking caveats
Biased locking is enabled by default in Java SE 1.6 and 1.7, but it only activates a few seconds after application startup. If necessary, you can remove the delay with the JVM parameter -XX:BiasedLockingStartupDelay=0. If you are sure that all the locks in your application are normally contended, you can disable biased locking altogether with the JVM parameter -XX:-UseBiasedLocking, and the application will go straight to the lightweight lock state. When thread contention is the common case in an application, disabling biased locking is recommended, because biased-lock revocation would otherwise consume extra resources.
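For reference, the biased-locking flags mentioned in this article can be collected in one place (a config sketch; the values shown are examples, not recommendations):

```shell
# JVM flags for tuning biased locking:
-XX:BiasedLockingStartupDelay=0        # activate biased locking immediately at startup
-XX:-UseBiasedLocking                  # disable biased locking entirely
-XX:BiasedLockingBulkRebiasThreshold=20  # bulk-rebias threshold (20 is the default)
-XX:+PrintFlagsFinal                   # print all flag defaults, including the ones above
```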
Lightweight lock
A lightweight lock is generally used when two threads acquire the lock alternately. Because they never hold the lock at the same time, there is no real contention; this relatively harmonious state is what lightweight locks are for.
Lightweight lock acquisition
Before a thread executes a synchronized block, the JVM first creates space for a lock record in the current thread's stack frame and copies the Mark Word of the object header into it; this copy is officially called the Displaced Mark Word. The thread then tries to use CAS to replace the Mark Word in the object header with a pointer to the lock record. If the CAS succeeds, the current thread has acquired the lock; if it fails, another thread is competing for the lock and the current thread tries to acquire it by spinning.
Lightweight lock release
When a lightweight lock is released, a CAS atomic operation is used to copy the Displaced Mark Word back into the object header. If it succeeds, there was no contention. If it fails, the lock is contended, and it inflates into a heavyweight lock.
Flowchart of lightweight locking and inflation (figure omitted)
Spin locks
During lightweight locking, spinning is used to acquire the lock. Spinning means that when another thread holds the lock, the current thread waits in a loop instead of blocking, so that it can grab the lock immediately after the holder releases it. Why spin? Because most synchronized blocks run for a very short time, a thread is very likely to get the lock after a few spins, and busy-waiting briefly is cheaper than blocking and being rescheduled. Lightweight locks therefore suit scenarios where the synchronized block executes quickly, so a thread can acquire the lock by waiting in place for a short time. Note that a spinning thread consumes CPU while it waits, so spinning must be bounded; otherwise, if one thread holds the lock for a long time, the waiting threads would loop forever and burn CPU. The default spin count is 10, and it can be changed with the -XX:PreBlockSpin parameter.
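The idea of a bounded spin can be sketched in plain Java. This is our own minimal model, not the JVM's lightweight lock: the "lock" is an AtomicReference to the owning thread, and giving up after maxSpins stands in for the point where the JVM would inflate to a heavyweight lock.

```java
import java.util.concurrent.atomic.AtomicReference;

// Minimal bounded spin lock: CAS in a loop, give up after maxSpins tries.
class BoundedSpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    /** Spins up to maxSpins times; returns true if the lock was acquired. */
    boolean tryLock(int maxSpins) {
        Thread me = Thread.currentThread();
        for (int i = 0; i < maxSpins; i++) {
            if (owner.compareAndSet(null, me)) {
                return true;
            }
        }
        return false; // in the JVM, this is roughly where inflation would happen
    }

    void unlock() {
        // Only the owner may release; CAS fails harmlessly for anyone else.
        owner.compareAndSet(Thread.currentThread(), null);
    }
}
```

Note this sketch is deliberately non-reentrant: even the owner's second tryLock fails, which is one more way it differs from real synchronized.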
Adaptive spin
JDK 1.7 introduced adaptive spinning: the JVM parameter for the spin count was removed, and the VM no longer lets users configure it; instead, the VM tunes the count automatically. Adaptive means the number of spins is not fixed but is decided by the outcome of previous spins on the same lock and the state of the lock's owner. If a spin wait recently succeeded on the same lock and the holding thread is running, the VM assumes spinning is likely to succeed again and allows a relatively long spin. If spinning rarely succeeds on a given lock, the VM may skip the spin entirely and block the thread directly on future acquisitions, avoiding wasted processor time.
Heavyweight lock
When a lightweight lock inflates into a heavyweight lock, threads must be suspended, block, and wait to be woken up. Every object has a monitor, which is implemented on top of the operating system's mutex lock (MutexLock). When a thread blocks, it enters kernel scheduling, and the system switches back and forth between user mode and kernel mode, which seriously hurts lock performance. After compilation, a monitorenter instruction is inserted at the start of the synchronized block, and monitorexit instructions are inserted at the normal exit and at every exception exit; the JVM guarantees that each monitorenter has a matching monitorexit. A monitor is locked while it is held: when a thread executes monitorenter it tries to acquire ownership of the object's monitor, that is, the object's lock, and monitorexit releases it.
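You can see these instructions for yourself by disassembling a compiled class with the JDK's standard javap tool (a sketch; Demo.java here stands for any class of yours that contains a synchronized block):

```shell
javac Demo.java
javap -c Demo    # look for monitorenter / monitorexit in the bytecode listing
```

Note that synchronized methods (as opposed to blocks) usually show an ACC_SYNCHRONIZED flag instead of explicit monitorenter/monitorexit instructions.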
Compare the advantages and disadvantages of locks
| Lock | Advantages | Disadvantages | Applicable scenario |
|---|---|---|---|
| Biased lock | Locking and unlocking need no extra cost; only nanoseconds slower than executing an unsynchronized block | If threads contend for the lock, there is the extra cost of bias revocation | Only one thread ever accesses the synchronized block |
| Lightweight lock | Competing threads do not block, improving response time | A thread that never gets the lock burns CPU spinning | Response time matters; the synchronized block executes very quickly |
| Heavyweight lock | Contending threads do not spin and do not consume CPU | Threads block, so response time is slow | Throughput matters; the synchronized block executes for a long time |
Conclusion
Synchronized solves the three major problems of concurrent programming: atomicity, visibility, and ordering. It has been heavily optimized in the JDK and is not always the heavyweight it is reputed to be. Still, in some scenarios we can replace synchronized with the volatile keyword: when used properly, and when visibility is all we need, volatile is cheaper to use and execute because it does not cause thread context switching and scheduling. Keep in mind, however, that volatile does not provide atomicity.
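A minimal sketch of where volatile is the right tool, assuming the one-writer "status flag" pattern (the class name is ours): the volatile write is guaranteed to become visible to the reading thread, so the spinner terminates, but nothing here would make a compound action like count++ safe.

```java
// A shutdown flag: the classic scenario where volatile can replace synchronized.
class ShutdownFlag {
    private volatile boolean stopped = false;   // visibility guaranteed, no lock needed

    void stop() { stopped = true; }
    boolean isStopped() { return stopped; }

    // Starts a spinner thread, stops it, and reports whether it terminated.
    static boolean demo() {
        ShutdownFlag flag = new ShutdownFlag();
        Thread t = new Thread(() -> {
            while (!flag.isStopped()) {
                Thread.onSpinWait();            // the volatile read makes the write visible here
            }
        });
        t.start();
        flag.stop();                            // one writer flips the flag
        try {
            t.join(2000);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return !t.isAlive();                    // true: the spinner saw the flag and exited
    }
}
```

Without volatile, the spinner could in principle cache stopped == false forever; with it, no monitor is ever acquired, which is exactly the cost advantage the conclusion describes.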