I. What is the memory model and why we need it
In the absence of synchronization, many factors can prevent a thread from seeing the results of another thread's operations promptly, or even at all:
- The compiler may keep variables in registers instead of writing them to memory
- The compiler may generate instructions in a different order than they appear in the source code
- The processor may execute instructions out of order or in parallel
- Values cached in a processor's local cache may not yet be visible to other processors
In a single-threaded program, all of these optimizations are permitted as long as the program produces the same result as it would under strictly serial execution. In a multithreaded program, however, the JVM relies on synchronization in the code to identify when such cross-thread coordination must occur. The JMM specifies a set of minimum guarantees the JVM must honor about when a write to a variable becomes visible to other threads
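A minimal sketch of such a guarantee (class and method names here are illustrative, not from the original): a write to a volatile flag makes an earlier plain write visible to the thread that observes the flag.

```java
public class VolatileVisibility {
    private static volatile boolean ready = false;
    private static int number = 0; // plain, non-volatile field

    public static int readWhenReady() {
        Thread writer = new Thread(() -> {
            number = 42;   // ordinary write...
            ready = true;  // ...published by the volatile write that follows it
        });
        writer.start();
        while (!ready) {
            Thread.onSpinWait(); // spin until the volatile flag becomes visible
        }
        // The volatile write to ready happens-before the read that saw true,
        // so the earlier write to number is guaranteed to be visible here
        return number;
    }
}
```

Without the `volatile` modifier on `ready`, the reader could spin forever or return a stale `number`.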
1. The platform’s memory model
Each processor has its own cache, which is periodically reconciled with main memory. Different processor architectures provide different levels of cache coherence, so at any given moment two processors may see different values for the same memory location. The JVM shields programs from the differences between the JMM and the underlying platform's memory model by inserting memory fences where needed. Java programs never need to specify where fences go; they need only identify, through correct use of synchronization, when shared state is being accessed
2. Reordering
The various reasons an operation may be delayed or appear to execute out of order can all be grouped under the general term reordering, and memory-level reordering can make a program behave unpredictably. In the classic example, each of two threads writes one variable and then reads the variable written by the other; with no synchronization, even (0, 0) is a possible output (the second thread here mirrors the first, which was the only one shown in the original snippet):

```java
public class PossibleReordering {
    static int x = 0, y = 0;
    static int a = 0, b = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread one = new Thread(new Runnable() {
            public void run() { a = 1; x = b; }
        });
        Thread other = new Thread(new Runnable() {
            public void run() { b = 1; y = a; }
        });
        one.start();
        other.start();
        one.join();
        other.join();
        // (1, 0), (0, 1), (1, 1), or even (0, 0) are all legal outcomes
        System.out.println("(" + x + ", " + y + ")");
    }
}
```
3. Introduction to the Java memory model
The Java memory model is defined in terms of operations, including reads and writes of variables, locks and unlocks of monitors, and starting and joining of threads
The JMM defines a partial ordering over all operations in a program, called happens-before, such that a correctly synchronized program has no data races (operations not ordered by happens-before may be reordered arbitrarily by the JVM)
- Program order rule. If operation A comes before operation B in program order, then A happens-before B within the same thread
- Monitor lock rule. An unlock of a monitor lock happens-before every subsequent lock of that same monitor lock. (Explicit and intrinsic locks have the same memory semantics for lock and unlock operations)
- Volatile variable rule. A write to a volatile variable happens-before every subsequent read of that variable. (Atomic variables have the same read/write semantics as volatile variables)
- Thread start rule. A call to Thread.start on a thread happens-before every action in the started thread
- Thread termination rule. Every action in a thread happens-before any other thread detects that it has terminated, whether by Thread.join returning successfully or by Thread.isAlive returning false
- Interruption rule. A thread calling interrupt on another thread happens-before the interrupted thread detects the interrupt (by having InterruptedException thrown, or by calling isInterrupted or interrupted)
- Finalizer rule. The end of an object's constructor happens-before the start of its finalizer
- Transitivity. If A happens-before B, and B happens-before C, then A happens-before C
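The start, termination, and transitivity rules above can be sketched together in a small example (class name is my own, for illustration): the write before `start()` is visible inside the new thread, and the thread's writes are visible after `join()` returns.

```java
public class HappensBeforeRules {
    static int before; // written before Thread.start
    static int inside; // written inside the started thread

    public static int[] demonstrate() {
        before = 1; // happens-before t.start() by the thread start rule
        Thread t = new Thread(() -> inside = before + 1); // guaranteed to see before == 1
        t.start();
        try {
            t.join(); // every action in t happens-before join() returning
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return new int[]{before, inside}; // {1, 2}, guaranteed by the rules above
    }
}
```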
4. Piggybacking on synchronization
Piggybacking means exploiting an existing happens-before ordering, created by a synchronization mechanism already in place, to guarantee the visibility of other, otherwise-unsynchronized state. The happens-before orderings guaranteed by the class libraries include:
- Putting an element into a thread-safe container happens-before another thread retrieves that element from the container
- The countDown operation on a CountDownLatch happens-before a thread returns from await on that latch
- Releasing a permit to a Semaphore happens-before acquiring a permit from that same Semaphore
- All actions of the task represented by a Future happen-before another thread returns from Future.get
- Submitting a Runnable or Callable to an Executor happens-before the task begins execution
- A thread arriving at a CyclicBarrier or Exchanger happens-before the other threads are released from that barrier or exchange point. If the CyclicBarrier has a barrier action, arriving at the barrier happens-before the barrier action, which in turn happens-before threads are released from the barrier
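The Future rule lends itself to piggybacking: everything the task does happens-before `Future.get` returns, so even a plain field written inside the task is visible to the caller afterwards. A sketch (class and field names are illustrative):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FuturePiggyback {
    static int sideEffect; // plain field with no synchronization of its own

    public static int compute() {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        try {
            Future<Integer> f = pool.submit(() -> {
                sideEffect = 7; // piggybacks on the Future's happens-before edge
                return 35;
            });
            // All of the task's actions happen-before get() returns,
            // so the write to sideEffect is guaranteed visible here
            return f.get() + sideEffect;
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```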
II. Publication
The real cause of unsafe publication: the lack of a happens-before relationship between publishing a shared object and another thread accessing that object
1. Unsafe publication
With the exception of immutable objects, it is generally not safe to use an object that was initialized by another thread unless the object's publication happens-before the consuming thread uses it
```java
public class UnsafeLazyInitialization {
    private static Object resource;

    public static Object getInstance() {
        if (resource == null) {
            resource = new Object(); // unsafe publication: no synchronization
        }
        return resource;
    }
}
```
Cause 1: thread B may see a partially constructed object that thread A has published
Cause 2: even if thread A fully initializes the resource instance and then sets resource to reference it, thread B may observe the write to resource before the writes to the resource's fields, because thread B is not guaranteed to see thread A's operations in the order thread A performed them
2. Safe publication
For example, the internal synchronization of a BlockingQueue guarantees that a put happens-before the corresponding take, so if thread A puts an object on the queue, thread B can safely retrieve and use it
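A sketch of such a hand-off (class names are my own, for illustration): the payload field is never synchronized itself, yet the queue's put/take edge publishes it safely.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueHandoff {
    static class Message {
        int payload; // plain field: safe publication comes from the queue
    }

    public static int handOff() {
        BlockingQueue<Message> queue = new ArrayBlockingQueue<>(1);
        Thread producer = new Thread(() -> {
            Message m = new Message();
            m.payload = 99; // written before put()...
            try {
                queue.put(m);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        try {
            // ...put() happens-before take(), so payload is fully visible here
            return queue.take().payload;
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }
}
```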
Using a thread-safe container from the class library, locking on a shared variable, or using a shared volatile variable each establishes a happens-before relationship between writes and subsequent reads of that variable
In fact, happens-before promises stronger visibility and ordering guarantees than safe publication does
3. Safe initialization idioms
Method 1: locking guarantees visibility and ordering, but every call must acquire the lock, which can hurt performance

```java
private static Object resource;

public static synchronized Object getInstance() {
    if (resource == null) {
        resource = new Object();
    }
    return resource;
}
```
Method 2: eager initialization, which may waste resources if the instance is never used

```java
private static Object resource = new Object();

public static Object getInstance() {
    return resource;
}
```
Method 3: the lazy initialization holder class idiom; the JVM defers initializing ResourceHolder until it is first used, and class initialization itself guarantees safe publication

```java
public class ResourceFactory {
    private static class ResourceHolder {
        public static Object resource = new Object();
    }

    public static Object getInstance() {
        return ResourceHolder.resource;
    }
}
```
Method 4: double-checked locking; the field must be declared volatile, otherwise a thread may observe a partially constructed object

```java
public class DoubleCheckedLocking {
    private static volatile Object resource;

    public static Object getInstance() {
        if (resource == null) {                       // first check, without the lock
            synchronized (DoubleCheckedLocking.class) {
                if (resource == null) {               // second check, with the lock held
                    resource = new Object();
                }
            }
        }
        return resource;
    }
}
```
III. Initialization safety
- If initialization safety is guaranteed, properly constructed immutable objects can be safely shared across threads without synchronization
- If initialization safety is not guaranteed, the values of supposedly immutable objects may appear to change
Initialization safety only guarantees that values reachable through final fields are visible as of the completion of construction. For values reachable through non-final fields, or values that may change after construction completes, synchronization must be used to ensure visibility
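A minimal sketch of a class that benefits from this guarantee (the class name is illustrative): because `value` is final, any thread that obtains a reference to the holder sees it fully initialized, even without synchronization.

```java
public class ImmutableHolder {
    private final int value;

    public ImmutableHolder(int value) {
        this.value = value; // final field write completes before the constructor finishes
    }

    public int getValue() {
        // Initialization safety: never observed in a partially constructed state
        return value;
    }
}
```

Had `value` been non-final, a thread receiving an unsafely published reference could observe the default value 0 instead.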