The memory model

The previous article (www.jianshu.com/p/623cf38cc…) explained the memory model and introduced the three problems of threading: atomicity, visibility, and ordering.

Because the CPU is so fast and main memory cannot keep up with it, the CPU would spend a long time waiting for data read from main memory. To avoid this, a CPU cache is added to hold data: when the CPU needs data, it fetches a copy from the cache. The cache is fast, but it is also expensive. The cache improves CPU execution efficiency, but it also leads to the atomicity and visibility problems of data in multithreading.

Let's simulate a dual-core CPU running multiple threads that read the same data. CPU1 and CPU2 each fetch a copy of a = 0 from main memory into their own cache. Thread A, running on CPU1, assigns 3 to a and stores a = 3 in CPU1's cache, but the result has not yet been written back to main memory. Thread B, running on CPU2, then performs a + 1: it adds 1 to the stale a = 0 copy in CPU2's cache instead of computing 3 + 1. This is how the memory model causes data to be out of sync across threads.
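To make this scenario concrete, here is a minimal sketch (the class name and thread structure are illustrative, not from the original article): two threads each increment a shared variable 100,000 times with no synchronization, and lost updates usually leave the final value below 200,000.

```java
public class LostUpdateDemo {
    private static int a = 0; // shared, not volatile, not synchronized

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                a = a + 1; // read-modify-write on a possibly stale copy
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("a = " + a); // usually less than 200000
    }
}
```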

Thread atomicity: in a multithreaded program, the same data may be operated on by several threads at the same time, producing inconsistent results. Atomicity means that while one thread is operating on the data, other threads cannot operate on it at the same time. In Java, synchronized can be used to solve the atomicity problem.
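As a minimal sketch of that fix (again with illustrative names), moving the increment into a synchronized method lets only one thread at a time perform the read-modify-write, so the final count is reliably 200,000:

```java
public class AtomicIncrementDemo {
    private static int a = 0;

    // Only one thread at a time can execute this method (it locks the Class object),
    // so the read-modify-write of a becomes atomic.
    private static synchronized void increment() {
        a = a + 1;
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("a = " + a); // always 200000
    }
}
```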

Thread visibility: atomicity solves the problem of multiple threads operating on data at the same time, but the CPU cache does not immediately write the result back to main memory after execution, so other threads may not see the result of another thread's calculation. In Java, volatile can be used to address the visibility problem.

Thread ordering: this is the instruction reordering problem. To make full use of its internal execution units, the CPU may execute code instructions out of order, as long as the result within a single thread remains correct.
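A commonly cited illustration of the risk (a sketch with made-up names; it is not guaranteed to misbehave on any particular run): inside the writer thread the two assignments may be reordered, so the reader can observe flag == true while a is still 0.

```java
public class ReorderingDemo {
    private static int a = 0;
    private static boolean flag = false; // deliberately not volatile

    public static void main(String[] args) {
        Thread writer = new Thread(() -> {
            a = 1;        // (1)
            flag = true;  // (2) may be reordered before (1)
        });
        Thread reader = new Thread(() -> {
            if (flag) {
                // Without ordering guarantees, this can print 0.
                System.out.println("a = " + a);
            }
        });
        writer.start();
        reader.start();
    }
}
```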

Atomicity – Synchronized

Synchronized: every object has a lock, and a thread acquires that lock when it enters synchronized code. Other threads cannot access the synchronized content without acquiring the object's lock, which is how atomicity across multiple threads is guaranteed.

Learn the use of Synchronized with a Bean data class:

```java
public class StudentBean { // Each object has a lock
    private String name;
    private String age;
    private String sex;

    // A synchronized instance method locks the current object (this)
    public synchronized void setNameAndAge(String name, String age) {
        this.name = name;
        this.age = age;
    }

    // A synchronized block, here also locking the current object
    public void setName(String name) {
        synchronized (this) {
            this.name = name;
        }
    }

    // A synchronized static method locks the Class object
    public static synchronized void start() {
        System.out.println("start");
    }

    // A non-synchronized method does not need the object lock, so a thread
    // that has not acquired the lock can still call it
    public void setAge(String age) {
        this.age = age;
    }
}
```

Synchronized can be used on instance methods and code blocks, which lock a Java object, and on static methods, which lock the Class object. When the locked method finishes executing, the thread that owns the lock releases the object lock; other threads that have not acquired the lock either spin on the CPU for a while or are blocked.

Synchronized Principles

Take a look at what the bytecode of a synchronized code block looks like after decompilation:

```java
public class SynchronizedTest {
    public static void main(String[] args) {
        synchronized (SynchronizedTest.class) {
            System.out.println("this is in synchronized");
        }
    }
}
```
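As an illustrative sketch (exact bytecode offsets and constant-pool indexes depend on the compiler and JDK version), `javap -c SynchronizedTest` prints roughly the following for the main method:

```
public static void main(java.lang.String[]);
  Code:
     0: ldc           #2    // class SynchronizedTest
     2: dup
     3: astore_1
     4: monitorenter                      // enter the monitor of SynchronizedTest.class
     5: getstatic     #3    // Field java/lang/System.out:Ljava/io/PrintStream;
     8: ldc           #4    // String this is in synchronized
    10: invokevirtual #5    // Method java/io/PrintStream.println:(Ljava/lang/String;)V
    13: aload_1
    14: monitorexit                       // normal exit releases the monitor
    15: goto          23
    18: astore_2
    19: aload_1
    20: monitorexit                       // exception path also releases the monitor
    21: aload_2
    22: athrow
    23: return
```

Note that there are two monitorexit instructions: one on the normal path and one on the exception path, so the monitor is released even if the block throws.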

The synchronized block starts at monitorenter and ends at monitorexit. Monitor: this can be understood as an object lock, implemented by the Java Virtual Machine (JVM); the underlying implementation relies on the operating system's mutex lock. Every Java object has a monitor lock.

Monitorenter: when monitorenter is executed, the thread attempts to acquire the lock. If the monitor is not locked, or the thread already owns the monitor lock, the lock counter is incremented by 1 and the synchronized code is executed.

Monitorexit: when monitorexit is executed, the lock counter is decremented by 1; if the counter reaches 0, the thread releases the monitor lock and another thread can acquire it.

  1. Static methods:
```java
public class SynchronizedTest {
    public static void main(String[] args) {
        doSynchronizedTest();
    }

    public static synchronized void doSynchronizedTest() {
        System.out.println("this is in synchronized");
    }
}
```

For the static method, monitorenter and monitorexit do not appear in the bytecode; instead the method is marked with the ACC_SYNCHRONIZED flag. With ACC_SYNCHRONIZED, the thread acquires the monitor lock before executing the method body and releases it after the synchronized method finishes.
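As an illustrative sketch (the exact formatting differs across JDK versions), `javap -v SynchronizedTest` shows the flag on the method itself rather than explicit monitor instructions:

```
public static synchronized void doSynchronizedTest();
  descriptor: ()V
  flags: (0x0029) ACC_PUBLIC, ACC_STATIC, ACC_SYNCHRONIZED
  Code:
    ...
```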

  1. Lock optimization after JDK1.6

Before 1.6, a thread that could not acquire the lock would enter the blocked state. Blocking involves a context switch, which is expensive, so after 1.6 the JVM optimized the context-switch cost caused by frequent locking.

When the first thread acquires the lock, it gets a biased lock, and the biased lock records that thread's ID. If another thread then competes for the biased lock, the lock is upgraded to a lightweight lock; under a lightweight lock the waiting thread spins on the CPU, so if the thread holding the lock finishes quickly, the waiting thread can acquire the lightweight lock and continue. If the wait is long, the lock is inflated to a heavyweight lock and the waiting thread is blocked. This optimizes the problem of threads blocking whenever they fail to acquire a lock.

Visibility – Volatile

The value of a volatile variable is visible to other threads in a timely manner. How is this implemented? After a volatile-modified variable is modified, it is immediately flushed from the CPU cache back to main memory, and the other threads are notified that the variable has been updated: their cached copies are now invalid, and if they want to use the variable, they must read the latest value from main memory.
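A minimal sketch of the visibility effect (class and field names are hypothetical): without volatile on the flag, the reader thread may keep using a stale cached copy and spin forever; with volatile, it sees the update promptly and stops.

```java
public class VolatileVisibilityDemo {
    // Remove volatile and the reader thread may loop forever,
    // because it keeps reading a stale cached copy of the flag.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) {
                // busy-wait until another thread clears the flag
            }
            System.out.println("reader sees running = false, exiting");
        });
        reader.start();

        Thread.sleep(1000);
        running = false; // flushed to main memory, visible to the reader
    }
}
```

Note that volatile only guarantees visibility (and ordering), not atomicity; compound operations such as a++ still need synchronized.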