Art is long, life is short

Visibility

Visibility definition: Changes made by one thread to a shared variable can be immediately seen by another thread.

1. Why shared variables are not visible between threads

1.1. Thread cross execution

Thread cross execution is mostly caused by thread switching. For example, thread A switches to thread B partway through its execution, and after B finishes, control switches back to A to perform the remaining operations. In this case, thread B's changes to the shared variable are not immediately visible to thread A, which results in a discrepancy between the computed result and the expected result, as the sketch below illustrates.
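A minimal sketch of this situation (the class and field names are made up for the example, standing in for the missing figure): both threads perform a read-modify-write on the same shared counter, and a switch between the read and the write lets one thread overwrite the other's change.

public class CrossExecutionDemo {

    // Shared counter updated by both threads
    private static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                // Read-modify-write: if a thread switch happens between the read
                // and the write, the other thread's update is not seen and gets overwritten
                count = count + 1;
            }
        };
        Thread threadA = new Thread(task, "thread-A");
        Thread threadB = new Thread(task, "thread-B");
        threadA.start();
        threadB.start();
        threadA.join();
        threadB.join();
        // Usually prints a value smaller than the expected 200000
        System.out.println("count = " + count);
    }
}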

1.2. Reordering combined with thread cross execution

Take the following code as an example:

int a = 0;    // line 1
int b = 0;    // line 2
a = b + 10;   // line 3
b = a + 9;    // line 4

If lines 1 and 2 are reordered at compile time, the execution result is not affected.

If lines 3 and 4 were reordered, the execution result would be affected: b would end up as 9 instead of the expected 19.

When compile-time reordering changes the execution order and the two threads also cross execute, the result of a thread's changes deviates from the expected value even further, which makes the problem worse.

1.3. The updated value of a shared variable is not synchronized between working memory and main memory in time

Because the main thread's modification of the shared variable is not flushed to main memory in time, the child thread cannot obtain the latest value immediately, so the program does not execute as expected.

For example, the following code:

import java.time.LocalDateTime;

public class VisibilityDemo {

    // Status flag
    private static boolean flag = true;

    public static void main(String[] args) throws InterruptedException {
        System.out.println(LocalDateTime.now() + " Main thread starts the counting child thread");
        new CountThread().start();

        Thread.sleep(1000);
        // Set flag to false to break out of the while loop
        VisibilityDemo.flag = false;
        System.out.println(LocalDateTime.now() + " The main thread has set the status flag to false");
    }

    static class CountThread extends Thread {
        @Override
        public void run() {
            System.out.println(LocalDateTime.now() + " Counting child thread starts counting");
            int i = 0;
            while (VisibilityDemo.flag) {
                i++;
            }
            System.out.println(LocalDateTime.now() + " Counting child thread ends, the value of i is " + i);
        }
    }
}

The results are as follows:

As can be seen from the console output, the main thread's modification of flag is not immediately visible to the counting child thread, so the counting child thread cannot break out of the while loop and end for a long time.

2. How to solve the problem of invisibility between threads

2.1. Volatile: Only visibility is guaranteed

The volatile keyword guarantees visibility, and only visibility: with it, changes to flag are immediately picked up by the counting child thread.

To correct the above example, simply add the volatile keyword to the global variable definition:

// Status flag
private static volatile boolean flag = true;

2.2. Atomic classes: ensure visibility and atomicity

If the flag status is defined using an Atomic class, both the visibility and the atomicity of the flag are well guaranteed.

To correct the above example, simply define the global variable with an Atomic class:

// Status flag
private static AtomicBoolean flag = new AtomicBoolean(true); 

It is worth noting, however, that with an atomic class, setting a new value and getting the current value are done slightly differently, through methods, as follows:

// Set the value of flag
VisibilityDemo.flag.set(false);
// Get the value of flag
VisibilityDemo.flag.get();

2.3. Lock: ensures visibility and atomicity

Here we use the common Java synchronized keyword.

To correct the problem in the example above, simply wrap the count operation i++ in a synchronized block:

synchronized (this) {    
    i++;    
} 

With any of the above three methods, the program behaves as expected: the counting child thread sees the flag change shortly after the main thread sets it to false and breaks out of the while loop.

Atomicity

Atomicity definition: the property that one or more operations execute on the CPU without being interrupted partway through.

1. The cause of the atomicity problem

The cause of atomicity problems with shared variables between threads is context switching.

So let’s reproduce the atomicity problem with an example.

package td;

import java.util.ArrayList;
import java.util.List;

/**
 * Demonstrates the atomicity problem: while one thread is operating on a shared variable,
 * another thread may also operate on it and interfere with the first thread.
 */
public class Test02Atomicity {

    // Define a shared variable
    private static int number = 0;

    public static void addNumber() {
        number++;
    }

    public static void main(String[] args) throws InterruptedException {
        // Each task increments number 1000 times
        Runnable runnable = () -> {
            for (int i = 0; i < 1000; i++) {
                addNumber();
            }
        };

        List<Thread> list = new ArrayList<>();
        // Use 10 threads for the operation
        for (int i = 0; i < 10; i++) {
            Thread t = new Thread(runnable);
            t.start();
            list.add(t);
        }

        for (Thread t : list) {
            // t.join() only makes the main thread wait until thread t finishes;
            // it does not affect the other threads running at the same time
            t.join();
        }

        System.out.println("number = " + number);
    }
}

If we run the above program several times, we will sometimes get a result other than the expected number = 10000, which is not what we want.

The reason is the following code:

public static void addNumber() {
    number++;
}

This is not an atomic operation: number is a shared variable, and number++ can be interrupted partway through in a multithreaded environment, so the atomicity problem shows up clearly. A breakdown of the operation follows.
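As a rough breakdown (the temp variable is only illustrative; the JVM actually works on the operand stack, but the read-modify-write shape is the same), number++ amounts to three separate steps:

int temp = number;   // 1. read the current value of the shared variable
temp = temp + 1;     // 2. add 1 to the value that was read
number = temp;       // 3. write the new value back
// A context switch between step 1 and step 3 lets another thread's increment
// be overwritten, which is why the final count falls short of 10000.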

2. How to solve the atomicity problem caused by context switching

2.1. synchronized

Synchronized can guarantee both the visibility of operations and the atomicity of operation results.

So here we simply declare the addNumber() method as synchronized to ensure atomicity:

public synchronized static void addNumber() {
    number++;
}

2.2. Lock

// requires: import java.util.concurrent.locks.Lock;
//           import java.util.concurrent.locks.ReentrantLock;
static Lock lock = new ReentrantLock();

public static void addNumber() {
    lock.lock();   // acquire the lock
    try {
        number++;
    } finally {
        lock.unlock();   // release the lock
    }
}

Lock ensures atomicity in a way similar to synchronized.

2.3. Atomic operation classes

The JDK provides a number of atomic operation classes (in java.util.concurrent.atomic) to ensure the atomicity of operations. The most common ones for basic types are:

AtomicBoolean
AtomicInteger
AtomicLong

(AtomicDouble is provided by Guava rather than the JDK.)

Under the hood, these atomic operation classes rely on the CAS (compare-and-swap) mechanism, which ensures that the entire assignment operation is atomic and cannot be interrupted, thus ensuring that the final result is correct.

In contrast to synchronized, atomic operation types are equivalent to guaranteeing atomicity from the micro level, whereas synchronized guarantees atomicity from the macro level.
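As an illustration only (the real AtomicInteger.incrementAndGet() is implemented on top of lower-level JDK intrinsics), a CAS-style increment can be sketched with the public compareAndSet method: read the current value, try to swap in current + 1, and retry if another thread changed the value in between. The class name CasIncrementSketch is made up for the example.

import java.util.concurrent.atomic.AtomicInteger;

public class CasIncrementSketch {

    private static final AtomicInteger number = new AtomicInteger();

    // Loops until the compare-and-swap succeeds, i.e. until no other thread
    // has changed the value between our read and our write
    public static int casIncrement() {
        int current;
        do {
            current = number.get();
        } while (!number.compareAndSet(current, current + 1));
        return current + 1;
    }
}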

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class Test02Atomicity {

    // Define a shared variable
    private static AtomicInteger number = new AtomicInteger();

    public static void add() {
        number.incrementAndGet();
    }

    public static int get() {
        return number.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // Each task increments number 1000 times
        Runnable runnable = () -> {
            for (int i = 0; i < 1000; i++) {
                add();
            }
        };

        List<Thread> list = new ArrayList<>();
        // Use 10 threads for the operation
        for (int i = 0; i < 10; i++) {
            Thread t = new Thread(runnable);
            t.start();
            list.add(t);
        }

        for (Thread t : list) {
            // t.join() only makes the main thread wait until thread t finishes;
            // it does not affect the other threads running at the same time
            t.join();
        }

        System.out.println("number = " + get());
    }
}

Ordering

Ordering definition: code has a sequential structure, and we intuitively expect it to execute in that order, from top to bottom. However, the compiler and the CPU may reorder instruction execution at their discretion to improve the program's performance and execution speed. The order in which statements execute therefore changes; this is reordering, but the end result still looks the same (on a single core, in a single thread).

The ordering problem refers to results that are not what we expect in a multithreaded (multi-core) environment, because reordered statements are not executed together and execution switches to another thread in between. This is the ordering problem that compiler optimization brings to concurrent programming.


1. Causes of the ordering problem

If a thread first writes a value to field a and then writes a value to field b, and b's value does not depend on a's value, the processor is free to adjust their execution order, and the store buffer may flush b's value to main memory before a's. This is where the ordering problem can arise.

Example:

import java.time.LocalDateTime;

public class OrderlyDemo {

    static int value = 1;

    private static boolean flag = false;

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 500; i++) {
            value = 1;
            flag = false;
            Thread thread1 = new DisplayThread();
            Thread thread2 = new CountThread();
            thread1.start();
            thread2.start();
            System.out.println("=================================================");
            Thread.sleep(4000);
        }
    }

    static class DisplayThread extends Thread {
        @Override
        public void run() {
            System.out.println(Thread.currentThread().getName() + " DisplayThread begin, time:" + LocalDateTime.now());
            value = 1024;
            System.out.println(Thread.currentThread().getName() + " change flag, time:" + LocalDateTime.now());
            flag = true;
            System.out.println(Thread.currentThread().getName() + " DisplayThread end, time:" + LocalDateTime.now());
        }
    }

    static class CountThread extends Thread {
        @Override
        public void run() {
            if (flag) {
                System.out.println(Thread.currentThread().getName() + " the value of value is: " + value + ", time:" + LocalDateTime.now());
                System.out.println(Thread.currentThread().getName() + " CountThread flag is true, time:" + LocalDateTime.now());
            } else {
                System.out.println(Thread.currentThread().getName() + " the value of value is: " + value + ", time:" + LocalDateTime.now());
                System.out.println(Thread.currentThread().getName() + " CountThread flag is false, time:" + LocalDateTime.now());
            }
        }
    }
}

Running results:

It can be seen from the printed output that when DisplayThread executed, reordering must have occurred: flag was assigned first, then execution switched to the CountThread thread, which is why the output shows value as 1 while flag is already true, and only after that was value assigned 1024. The reason this can happen is that there is no dependency between the two assignments, so the compiler may reorder the instructions when compiling the code.


2. How to solve the ordering problem

2.1. volatile

Under the hood, volatile uses memory barriers to ensure ordering (a memory barrier is also the technique that makes state (variables) in one CPU's cache visible to other CPUs' caches).

One of the rules for volatile variables is that a write to a volatile variable happens-before subsequent reads of that volatile variable, and this rule is transitive. In other words:

Using volatile to modify flag avoids both the reordering and the memory visibility problems. When writing a volatile variable, operations before the volatile write cannot be reordered by the compiler to after it; when reading a volatile variable, operations after the volatile read cannot be reordered by the compiler to before it.
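A minimal sketch of how these two rules combine in the demo above (the thread labels in the comments are just for orientation):

// DisplayThread (writer)
value = 1024;          // (1) ordinary write
flag = true;           // (2) volatile write: (1) cannot be reordered after (2)

// CountThread (reader)
if (flag) {            // (3) volatile read: happens-after (2) when it sees true
    int v = value;     // (4) guaranteed to see 1024, because (1) happens-before (4) by transitivity
}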

We define flag with the volatile keyword, as in:

    private static volatile boolean flag = false; 

At this point, the semantics of the variable are as follows:

That is, whenever flag is read as true, value must already be 1024; and when flag is read as false, value should still be its unmodified initial value of 1.

However, there may still be a problem caused by thread switching, namely reading flag=false while value is already 1024. Readers of the previous section on atomicity will immediately recognize that this is due to thread switching rather than reordering; a possible interleaving is sketched below.
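A possible interleaving, shown as comments (no reordering involved; CountThread simply checks flag between DisplayThread's two writes):

// DisplayThread:  value = 1024;      // (1) the ordinary write has completed
// CountThread:    if (flag) { ... }  // (2) reads flag == false, then prints value = 1024
// DisplayThread:  flag = true;       // (3) the volatile write happens after (2) already read flag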

2.2. Lock

Here we directly use the Java language's built-in synchronized keyword to lock the part that might be reordered, so that, at the macro level and in the execution result, no reordering is observable.

The code modification is also simple: just mark the run method with the synchronized keyword. The code is as follows:

public synchronized void run() {
    value = 1024;
    flag = true;
}

Summary

Finally, a brief summary of the differences between several solutions:

Feature     | Atomic classes   | volatile keyword        | Lock interface | synchronized keyword
Atomicity   | Guaranteed       | Not guaranteed          | Guaranteed     | Guaranteed
Visibility  | Guaranteed       | Guaranteed              | Guaranteed     | Guaranteed
Ordering    | Not guaranteed   | Guaranteed to a degree  | Guaranteed     | Guaranteed