Follow the WeChat public account [Ccww technology blog]; original technical articles appear there before the blog

Understanding the Java Memory Model (JMM) is best approached through the use of volatile and synchronized


Java Memory Model (JMM)

The Java Memory Model (JMM) is a high-level abstraction over the hardware memory model. It shields Java programs from the differences in memory access across hardware and operating systems, so that the same concurrency behavior is achieved on every platform.

The internal workings of the JMM

  • Main memory: stores the values of shared variables (instance variables and class variables, but not local variables, since local variables are thread-private and therefore have no contention issues)

  • Working memory: each thread keeps its own copy of the shared variables it uses. A thread operates on this copy, synchronizing a variable back to main memory after modifying it and refreshing the copy from main memory before reading it.

  • Inter-memory interaction: a thread cannot directly access variables in another thread's working memory; variable values are passed between threads through main memory, via eight atomic operations (lock, unlock, read, load, use, assign, store, write)

The JMM permits instruction reordering; the as-if-serial rule and the happens-before principle ensure that reordering does not break program correctness:

  • To improve performance, compilers and processors often reorder instructions relative to the order given in the code
  • As-if-serial: no matter how instructions are reordered, the result of execution in a single thread must not change
  • The happens-before principle: a set of ordering rules, among them the program-order rule: within a thread, an operation written earlier happens-before an operation written later. Strictly speaking this refers to control-flow order rather than source-code order

The Java memory model provides three properties to solve the consistency problems of shared variables in multi-threaded environments.

  • Atomicity: the atomic operations on memory are read, load, use, assign, store, and write. When atomicity over a wider scope is needed, synchronized can provide it: the operations within a synchronized block execute atomically with respect to other threads.
  • Visibility: when one thread changes the value of a shared variable, other threads perceive the change immediately, because the new value is synchronized back to main memory right after the change and re-read from main memory before each access. Visibility can be guaranteed with volatile, and also with the keywords synchronized and final.
  • Ordering: within a thread, all operations appear ordered; observed from another thread, operations may appear out of order. volatile, which forbids instruction reordering, can be used to maintain ordering.

With the JMM in mind, let's look at how volatile and synchronized are used. What do volatile and synchronized actually do?


Volatile

Characteristics of Volatile:

  • It guarantees visibility when different threads operate on the variable: when one thread changes the value, the new value is immediately visible to other threads. (Visibility)
  • It forbids instruction reordering around the variable. (Ordering)
  • volatile guarantees atomicity only for a single read or write; a compound operation such as i++ is not atomic
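A minimal sketch of the last point (the class and field names are mine, not from the original): four threads increment a volatile int and an AtomicInteger the same number of times. The volatile counter may lose updates because i++ is a three-step read-modify-write; the atomic counter never does.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class VolatileNotAtomic {
    // volatile gives visibility, but i++ is read-modify-write: three separate steps
    static volatile int volatileCount = 0;
    // AtomicInteger increments with a CAS loop, so no update is ever lost
    static final AtomicInteger atomicCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 10_000; i++) {
                    volatileCount++;               // lost updates possible
                    atomicCount.incrementAndGet(); // always counted
                }
            });
            threads[t].start();
        }
        for (Thread th : threads) th.join();
        System.out.println("volatile: " + volatileCount);     // often less than 40000
        System.out.println("atomic:   " + atomicCount.get()); // always 40000
    }
}
```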

Volatile visibility

When a volatile variable is written, the JMM immediately flushes the value from the thread's working memory to main memory.

When a volatile variable is read, the JMM invalidates the thread's working-memory copy, forcing the thread to re-read the shared variable from main memory.
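These two rules combine into the classic flag pattern; a minimal sketch (the names are illustrative, not from the original): the ordinary write to payload is ordered before the volatile write to ready, so once the reader sees ready == true it is guaranteed to see payload == 42.

```java
public class VolatileVisibility {
    static volatile boolean ready = false; // volatile flag
    static int payload = 0;                // ordinary variable

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) { /* each iteration re-reads ready from main memory */ }
            System.out.println("payload = " + payload); // prints payload = 42
        });
        reader.start();
        payload = 42;  // ordinary write, ordered before the volatile write below
        ready = true;  // volatile write: flushed to main memory immediately
        reader.join();
    }
}
```

Without volatile on ready, the reader thread could cache the flag in its working memory and spin forever.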


Volatile disallows instruction reordering

To forbid instruction reordering around volatile accesses, the JMM uses a conservative memory-barrier insertion strategy:

  • Insert a StoreStore barrier before each volatile write; insert a StoreLoad barrier after each volatile write
  • Insert a LoadLoad barrier after each volatile read; insert a LoadStore barrier after each volatile read
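The placement can be visualized with comments marking where the conservative strategy puts the barriers (a sketch; the variable names are mine, and the comments describe barrier positions, not actual emitted code):

```java
public class BarrierPlacement {
    static int x;            // ordinary variable
    static volatile int v;   // volatile variable

    static void writer() {
        x = 1;
        // StoreStore barrier: the store to x cannot be moved below the volatile store
        v = 2;   // volatile write
        // StoreLoad barrier: the volatile store completes before any later load
    }

    static int reader() {
        int r1 = v;   // volatile read
        // LoadLoad barrier: later ordinary loads cannot move above this read
        // LoadStore barrier: later stores cannot move above this read
        int r2 = x;   // if r1 == 2, r2 is guaranteed to be 1
        return r1 == 2 ? r2 : -1;
    }

    public static void main(String[] args) {
        writer();
        System.out.println(reader()); // prints 1 in this single-threaded run
    }
}
```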

Synchronized

synchronized is the most common and simplest way to solve concurrency problems in Java. It provides three main guarantees:

  • Atomicity: threads access the synchronized code mutually exclusively;
  • Visibility: before an unlock operation is performed, all changes to shared variables must be written back to main memory. When a variable is locked, its working-memory copy is cleared, so the execution engine must re-initialize the value from main memory (via load or assign) before it can use the variable;
  • Ordering: effectively solves the reordering problem between threads, via the rule "an unlock operation happens-before a subsequent lock operation on the same lock";
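The atomicity and visibility guarantees can be seen in a minimal counter sketch (the class name is mine): two threads increment under the same monitor, so no update is lost, and the final read is guaranteed to see the latest value.

```java
public class SyncCounter {
    private int count = 0;

    public synchronized void increment() { count++; } // mutual exclusion on `this`
    public synchronized int get() { return count; }   // sees the latest value

    public static void main(String[] args) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Runnable task = () -> { for (int i = 0; i < 10_000; i++) c.increment(); };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // always prints 20000
    }
}
```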

Synchronized has three uses:

  1. When synchronized is applied to an instance method, the monitor lock is the object instance (this);
  2. When synchronized is applied to a static method, the monitor lock is the Class object of the class. Since there is only one Class object per class, a static-method lock is effectively a global lock for that class;
  3. When synchronized is applied to a code block, the monitor lock is the object instance enclosed in parentheses;
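The three forms side by side (a sketch; the class and method names are illustrative):

```java
public class LockScopes {
    private final Object lock = new Object();

    // 1. Instance method: the monitor is `this`
    public synchronized void instanceMethod() {
        System.out.println("locked on this");
    }

    // 2. Static method: the monitor is LockScopes.class, one lock for the whole class
    public static synchronized void staticMethod() {
        System.out.println("locked on LockScopes.class");
    }

    // 3. Synchronized block: the monitor is the object in parentheses
    public void blockMethod() {
        synchronized (lock) {
            System.out.println("locked on lock");
        }
    }

    public static void main(String[] args) {
        LockScopes s = new LockScopes();
        s.instanceMethod();
        staticMethod();
        s.blockMethod();
    }
}
```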

See Java concurrency Synchronized for a more detailed analysis

Now that we understand volatile and synchronized, how can we use them to optimize the singleton pattern?


Singleton pattern optimization – Double Check Lock (DCL)

Let’s start with the generic singleton pattern:

class Singleton {
    private static Singleton singleton;

    private Singleton() {}

    public static Singleton getInstance() {
        if (singleton == null) {
            singleton = new Singleton();   // Create the instance
        }
        return singleton;
    }
}

Possible problem: suppose there are two threads A and B:

  • Thread A evaluates if (singleton == null), finds it true, and is suspended just before creating the instance;
  • Thread B also finds singleton null, creates the instance object, and returns it;
  • When thread A resumes, it has already passed the null check, so it creates another instance, resulting in multiple "singletons"

The first fix that comes to mind is to make the static method synchronized:

public class Singleton {
    private static Singleton singleton;

    private Singleton() {}

    public static synchronized Singleton getInstance() {
        if (singleton == null) {
            singleton = new Singleton();
        }
        return singleton;
    }
}

This simple, brute-force solution works, but serializing every call to getInstance() makes the method inefficient and seriously degrades performance. Is there a better way?

The idea: once the instance has been created, a thread that sees singleton is not null can return the reference directly without acquiring the lock, instead of synchronizing and re-checking on every call.

However, if the null check were done only before entering synchronized, multiple threads could pass the check at the same time, reproducing the first scenario and creating multiple instances.

Therefore a second null check is needed inside the synchronized block, so that only one instance is ever created.

class Singleton {
    private static Singleton singleton;

    private Singleton() {}

    public static Singleton getInstance() {
        if (singleton == null) {
            synchronized (Singleton.class) {
                if (singleton == null) {
                    singleton = new Singleton();
                }
            }
        }
        return singleton;
    }
}

This optimization avoids both duplicate instances and needless locking, but it is still unsafe under multithreading because of instruction reordering: a later thread may see a non-null singleton and use an object that has not yet been fully initialized. Conceptually, singleton = new Singleton() consists of three steps:

  • 1. Allocate memory
  • 2. Initialize the object
  • 3. Assign the memory address to the reference

Because steps 2 and 3 have no data dependence on each other, the compiler or processor may reorder them as follows:

  • 1. Allocate memory space
  • 2. Assign the address of the memory space to the reference
  • 3. Initialize the object

Now that the problem is identified, how do we fix it? We must forbid the reordering of initialization steps 2 and 3, and that is exactly what volatile's prohibition of instruction reordering provides; this is what makes double-checked locking work.

public class Singleton {
    // The volatile keyword forbids the reordering above and guarantees visibility
    private volatile static Singleton singleton;

    private Singleton() {}

    public static Singleton getInstance() {
        if (singleton == null) {
            synchronized (Singleton.class) {
                if (singleton == null) {
                    singleton = new Singleton();
                }
            }
        }
        return singleton;
    }
}

With this we finally arrive at a correct double-checked locking singleton.
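A quick sketch to exercise the final version (the executor setup is mine, not from the original): eight threads call getInstance() concurrently, and every one must observe the same instance.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class DclDemo {
    // Same shape as the DCL Singleton above
    static class Singleton {
        private static volatile Singleton instance;
        private Singleton() {}
        static Singleton getInstance() {
            if (instance == null) {
                synchronized (Singleton.class) {
                    if (instance == null) {
                        instance = new Singleton();
                    }
                }
            }
            return instance;
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        List<Future<Singleton>> futures = new ArrayList<>();
        for (int i = 0; i < 8; i++) {
            futures.add(pool.submit(Singleton::getInstance));
        }
        Singleton first = futures.get(0).get();
        for (Future<Singleton> f : futures) {
            System.out.println(f.get() == first); // prints true eight times
        }
        pool.shutdown();
    }
}
```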


Conclusion

  • volatile essentially tells the JVM that the copy of the variable in working memory (registers/caches) may be stale and must be re-read from main memory; synchronized locks the variable so that only the current thread can access it while other threads are blocked.
  • volatile can only be applied to variables; synchronized can be applied to variables, methods, and code blocks
  • volatile guarantees only the visibility of changes, not atomicity; synchronized guarantees both visibility and atomicity
  • volatile never blocks threads; synchronized may cause threads to block.
  • volatile variables are not cached or otherwise optimized away by the compiler; variables accessed under synchronized can be optimized by the compiler
  • volatile can safely replace synchronized only in limited situations, such as a class with a single mutable status field whose writes do not depend on its current value

