`monitorenter` and `monitorexit` are the JVM's monitor-based synchronization primitives. This article uses the abstract memory semantics of the JMM to explain what happens when a lock is acquired and released.
1. Working memory and main memory
Definitions
- Main memory: generally the physical RAM of the machine; in short, what we usually mean by computer memory
- Working memory: per the JMM (Java Memory Model) specification, each thread copies the variables it uses from main memory into a working area on its own thread stack and operates on those copies
Schematic of reads and writes between thread working memory and main memory
The CPU cache was introduced in an earlier article on CPU cache fundamentals. [Diagram: simple CPU architecture showing the L1 to L3 caches and the read/write flow between working memory and main memory.] As the diagram shows, every read and write by a thread passes through the CPU cache (L1 to L3), which can leave the caches and main memory temporarily inconsistent. CPU vendors address this with cache-coherence protocols, which achieve eventual consistency of the data. But sometimes the requirement is strong consistency: as soon as a value is written, every reader should see the result immediately, with no window of staleness, however short. To meet this requirement, the JMM specifies that, under proper synchronization, a thread in a Java program must bypass its cached copy and read the variable directly from main memory. This is the mechanism for solving the memory-visibility problem.
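A minimal sketch of that guarantee (the class, field, and method names here are illustrative, not from the article): guarding both the write and the read with the same monitor forces the reader to observe the latest value in main memory rather than a possibly stale cached copy.

```java
// VisibilitySketch.java -- illustrative names, not from the article.
public class VisibilitySketch {
    private int value = 10;

    // Both accessors synchronize on the same monitor (this),
    // so a read that follows a completed write is guaranteed to see it.
    public synchronized void setValue(int v) { value = v; }
    public synchronized int getValue() { return value; }

    public static void main(String[] args) throws InterruptedException {
        VisibilitySketch s = new VisibilitySketch();
        Thread writer = new Thread(() -> s.setValue(20));
        writer.start();
        writer.join(); // the writer has finished and released the monitor
        // The JMM guarantees this read observes 20, not a stale 10.
        System.out.println("read after synchronized write: " + s.getValue());
    }
}
```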
2. Synchronized code demonstration
- Scenario: there is a shared variable, sharedVar. Thread-1 writes it after 500 ms; thread-2's read is delayed 600 ms by simulated network latency; thread-3 reads immediately
- The desired behavior: after the write completes, the other threads see that the data has changed and read the latest value
```java
// Sync2memory.java
import java.util.concurrent.TimeUnit;

public class Sync2memory {

    private static Integer sharedVar = 10;
    // Dedicated lock object: synchronizing on sharedVar itself would be a bug,
    // since reassigning it would let threads lock different Integer objects.
    private static final Object lock = new Object();

    public static void main(String[] args) throws Exception {
        testForReadWrite();
        // testForReadWriteWithSync();
        TimeUnit.SECONDS.sleep(2L);
        System.out.printf("finish the thread task,the final sharedVar %s .... \n", sharedVar);
    }

    private static void testForReadWriteWithSync() throws Exception {
        Thread thread1 = new Thread(() -> {
            try {
                // modify sharedVar after 500 ms
                TimeUnit.MILLISECONDS.sleep(500L);
                synchronized (lock) {
                    System.out.printf("%s modify the shared var ... \n", "thread-1");
                    sharedVar = 20;
                }
            } catch (Exception e) {
                System.out.println(e);
            }
        });
        Thread thread2 = new Thread(() -> {
            try {
                // simulated network delay of 600 ms
                TimeUnit.MILLISECONDS.sleep(600L);
                synchronized (lock) {
                    System.out.printf("%s read the shared var %s \n", "thread-2", sharedVar);
                }
            } catch (Exception e) {
                System.out.println(e);
            }
        });
        Thread thread3 = new Thread(() -> {
            try {
                synchronized (lock) {
                    System.out.printf("%s read the shared var %s \n", "thread-3", sharedVar);
                }
            } catch (Exception e) {
                System.out.println(e);
            }
        });
        thread2.start();
        thread3.start();
        thread1.start();
        thread1.join();
        thread2.join();
        thread3.join();
    }

    private static void testForReadWrite() throws Exception {
        Thread thread1 = new Thread(() -> {
            try {
                // modify sharedVar after 500 ms
                TimeUnit.MILLISECONDS.sleep(500L);
                System.out.printf("%s modify the shared var ... \n", "thread-1");
                sharedVar = 20;
            } catch (Exception e) {
                System.out.println(e);
            }
        });
        Thread thread2 = new Thread(() -> {
            try {
                // simulated network delay of 600 ms
                TimeUnit.MILLISECONDS.sleep(600L);
                System.out.printf("%s read the shared var %s \n", "thread-2", sharedVar);
            } catch (Exception e) {
                System.out.println(e);
            }
        });
        Thread thread3 = new Thread(() -> {
            try {
                System.out.printf("%s read the shared var %s \n", "thread-3", sharedVar);
            } catch (Exception e) {
                System.out.println(e);
            }
        });
        thread1.start();
        thread2.start();
        thread3.start();
        thread1.join();
        thread2.join();
        thread3.join();
    }
}
```
- One run's output without synchronized (results vary across runs)
```
thread-3 read the shared var 10
thread-1 modify the shared var ...
thread-2 read the shared var 10
finish the thread task,the final sharedVar 20 ....

Process finished with exit code 0
```

Analysis: thread-3 executed first and read 10 before any write had happened, which is normal output. Thread-1 performed its write after 500 ms. Thread-2, delayed 600 ms by the simulated network latency, read after the write had already completed, yet it still printed 10: it read the stale, not-yet-refreshed copy in its own working memory rather than the value in main memory. The final line shows 20 because cache coherence eventually makes the data consistent. Note that this staleness is only one possible outcome: on some runs thread-2 will read the correct value. The point is that with this code we cannot guarantee that thread-2 reads the up-to-date data.
- Output with synchronized added (consistent across multiple runs)
```
thread-3 read the shared var 10
thread-1 modify the shared var ...
thread-2 read the shared var 20
finish the thread task,the final sharedVar 20 ....
```

This time thread-2, reading after thread-1's write, sees the updated value 20. With synchronized, the program guarantees that thread-2 observes the latest data.
3. Understanding of synchronized memory semantics
Summary of memory semantics
- From the execution results above we can infer that, on entering a synchronized block, the thread's working-memory copies of the shared variables are cleared or invalidated; the program then does not read them from working memory but directly from main memory, which guarantees strong consistency of the data
- It follows that synchronized solves, at the level of its memory semantics, the visibility problem of shared variables
- At the JVM level, entering a synchronized block executes the monitorenter instruction, which invalidates the thread's cached copies of the shared variables so that they are reloaded from main memory inside the locked region; on monitorexit, the shared-variable writes made in the locked region are flushed back to main memory
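The instruction pair can be observed directly in the bytecode: compiling a class with a synchronized block and disassembling it with `javap -c` shows monitorenter and monitorexit (the class and method names below are illustrative, not from the article).

```java
// MonitorDemo.java -- illustrative class, not from the article.
public class MonitorDemo {
    private static final Object lock = new Object();
    private static int counter = 0;

    static void increment() {
        synchronized (lock) {   // compiles to monitorenter
            counter++;
        }                       // compiles to monitorexit
    }

    public static void main(String[] args) {
        increment();
        System.out.println("counter = " + counter);
        // Disassembling with `javap -c MonitorDemo` shows, inside increment(),
        // a monitorenter/monitorexit pair plus a second monitorexit on the
        // exception path, so the monitor is released even if the body throws.
    }
}
```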
Drawbacks of synchronized
- Monitor is implemented as a mutex (a heavyweight lock), which can degrade application performance: response times can grow, effectively trading performance for data consistency
- In addition, threads blocked on the monitor must be rescheduled by the operating system, and switching threads back and forth incurs extra context-switch overhead
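That trade-off can be seen in a small sketch (names illustrative, not from the article): many threads bumping a shared counter serialize on the monitor, which costs throughput but is exactly what makes the final total correct.

```java
// ContendedCounter.java -- illustrative sketch, not from the article.
import java.util.ArrayList;
import java.util.List;

public class ContendedCounter {
    private static long counter = 0;
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        List<Thread> threads = new ArrayList<>();
        for (int t = 0; t < 8; t++) {
            Thread th = new Thread(() -> {
                for (int i = 0; i < 100_000; i++) {
                    // Each increment acquires the monitor: threads queue up
                    // here, which costs throughput but keeps the count exact.
                    synchronized (lock) {
                        counter++;
                    }
                }
            });
            threads.add(th);
            th.start();
        }
        for (Thread th : threads) th.join();
        // Without the synchronized block, the lost-update race would make
        // this total come out below 800000 on most runs.
        System.out.println("final counter = " + counter);
    }
}
```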
Thank you for reading. If this was helpful, please share or like it!