What is a thread

In operating-system terms, a thread is the smallest unit of CPU scheduling. Intuitively, a thread executes code sequentially, finishing one line before moving on to the next.

For example 🌰: an assembly workshop at Foxconn is like a CPU, and threads are its assembly lines. To improve productivity, a workshop generally runs multiple assembly lines at the same time. Likewise, multithreading is everywhere in Android development: time-consuming operations such as network requests, file reads and writes, and database reads and writes are all executed on separate child threads.

So are your threads safe? And what principles underlie thread safety? (This article is a summary of my personal learning; if there are mistakes, please point them out.)

Thread safety

Before we look at thread safety, let’s look at Java’s memory model and understand how threads work.

Java Memory Model – JMM

What is the JMM

The Java Memory Model (JMM) is a mechanism and specification built on the computer memory model (which defines how multithreaded programs read and write in a shared-memory system). It shields the differences between various hardware and operating systems so that Java programs access memory with consistent results on every platform, and it guarantees atomicity, visibility, and ordering for shared memory.

A picture says more than words, so let’s look at a diagram:

The figure above depicts a multithreaded execution scenario. Threads A and B each read and write variables in main memory. A variable in main memory is a shared variable: there is only one copy, shared among all threads. Each thread also has its own working memory. To read or write a shared variable, a thread must first copy it into its own working memory, perform all operations on that local copy, and then synchronize the result back to main memory.

Thread working memory is thread private memory. Threads cannot access each other’s working memory.

To make this easier to understand, here is a diagram of the process by which a thread assigns a value to a variable.

So how does the thread working memory know when and how to synchronize data to main memory? This is where JMM comes in. The JMM specifies when and how to synchronize data between thread working memory and main memory.

Now that we have a preliminary understanding of the JMM, here is a brief summary of atomicity, visibility, and ordering.

Atomicity:

An operation on shared memory must either run to completion without interruption by any external factor, or not run at all.

Visibility:

When multiple threads operate on shared memory, their results must be synchronized back to shared memory promptly, so that other threads can see them in time.

Ordering:

In a single-threaded environment, a program executes in order. In a multi-threaded environment, however, the compiler and processor may reorder instructions for performance optimization, so execution can become out of order.
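As a sketch of why reordering matters (ReorderDemo is a hypothetical name), consider a writer that publishes a value through a flag. If the two stores were reordered by the compiler or processor, a concurrent reader could see the flag set while the value is still stale. Called from a single thread, as below, the result is of course correct:

```java
public class ReorderDemo {
    static int value = 0;
    static boolean ready = false;

    static void writer() {
        value = 42;    // store 1
        ready = true;  // store 2 -- without synchronization, this may be
                       // reordered before store 1
    }

    static int reader() {
        if (ready) {
            return value; // a concurrent reader could see a stale 0 here
        }
        return -1;        // not ready yet
    }

    public static void main(String[] args) {
        writer();
        System.out.println(reader()); // single-threaded call order: prints 42
    }
}
```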

This brings us to the topic of this article: Thread safety.

The nature of thread safety

The problem in the earlier diagram is that variables in main memory are shared and accessible to all threads, while each thread’s working memory is private and inaccessible to the others. In a multi-threaded scenario, if thread A and thread B operate on the same shared variable at the same time, the data in main memory can be corrupted. In other words, that variable is not thread-safe. A small code example will help:

public class ThreadDemo {
    private int x = 0;

    private void count() {
        x++;
    }

    public void runTest() {
        new Thread() {
            @Override
            public void run() {
                for (int i = 0; i < 1_000_000; i++) {
                    count();
                }
                System.out.println("final x from 1: " + x);
            }
        }.start();
        new Thread() {
            @Override
            public void run() {
                for (int i = 0; i < 1_000_000; i++) {
                    count();
                }
                System.out.println("final x from 2: " + x);
            }
        }.start();
    }

    public static void main(String[] args) {
        new ThreadDemo().runTest();
    }
}

In the example, runTest starts two threads, each calling count() 1_000_000 times, and count() simply performs x++. In theory, after both threads finish, x should be 2_000_000. But the actual result is not what we expected:

final x from 1: 989840
final x from 2: 1872479

I ran it 10 times, and a thread printed x as 2_000_000 in only 2 of them:

final x from 1: 1000000
final x from 2: 2000000

The reason is what we described above: in a multi-threaded environment, the x variable in main memory gets corrupted. Completing one x++ is equivalent to executing:

int tmp = x + 1;
x = tmp;

If a thread switch happens after int tmp = x + 1; executes but before x = tmp; does, then when the first thread is switched back in, it overwrites x with a stale value. Increments are lost, which is why neither thread prints 2_000_000 in the run above.

The following diagram depicts the execution sequence of the sample code:
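In case the diagram does not render, the same lost-update interleaving can be written out as a single-threaded simulation (a sketch that mimics the thread switch in straight-line code; LostUpdateTrace is a made-up name):

```java
public class LostUpdateTrace {
    static int simulate() {
        int x = 0;
        int tmpA = x + 1;  // Thread A reads x = 0 and computes 1
        // -- imagined switch to Thread B --
        int tmpB = x + 1;  // Thread B also reads the stale x = 0
        x = tmpB;          // Thread B writes back: x = 1
        // -- switch back to Thread A --
        x = tmpA;          // Thread A writes its stale 1: B's increment is lost
        return x;
    }

    public static void main(String[] args) {
        System.out.println(simulate()); // prints 1, not 2
    }
}
```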

So how does Java solve these problems and guarantee atomicity, visibility, and ordering for shared memory, and thus thread safety?

Thread synchronization

Java provides a series of keywords and classes to ensure thread safety.

The Synchronized keyword

What Synchronized does

1. Ensure atomicity of method or code block operations

Synchronized guarantees mutually exclusive access to resources (data) within a method or code block. That is, code monitored by the same Monitor can be accessed by at most one thread at a time.

To learn more about the implementation principles of Monitor and Synchronized, read these two articles: the implementation principles of Synchronized, and Monitor.

Without further ado, here’s a GIF describing how Monitor works:

In a multi-threaded environment, a method or code block described by the Synchronized keyword can be accessed by only one thread at a time. Other threads that want to call it must queue up until the thread currently holding the Monitor finishes and releases it; only then can the next thread obtain the Monitor and execute.

If there are multiple monitors, they are not mutually exclusive.

Synchronized can specify a custom Monitor when describing a code block. The default is this, that is, the current object instance.
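As a sketch (SyncCounter is a hypothetical class name), the broken counter from earlier can be made thread-safe with a synchronized block on a custom monitor object; writing synchronized (this) would use the current instance instead:

```java
public class SyncCounter {
    private final Object lock = new Object(); // custom monitor object
    private int x = 0;

    void count() {
        synchronized (lock) { // at most one thread inside at a time
            x++;
        }
    }

    int get() {
        synchronized (lock) { // also guarantees we read the latest value
            return x;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Runnable task = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                c.count();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(c.get()); // always 2000000, unlike the unsynchronized version
    }
}
```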

2. Ensure visibility of monitoring resources

It ensures data synchronization of the monitored resource in a multi-threaded environment. When a thread acquires the Monitor, it refreshes its working-memory copy from shared memory; when it releases the Monitor, it flushes its working-memory copy back to shared memory.

3. Ensure the order of operations between threads

The mutual exclusion provided by Synchronized also guarantees ordering between threads: the method or code block it describes is executed by at most one thread at a time, so the effects of any instruction reordering inside it are never observed by other threads.

The Volatile keyword

What Volatile does

It ensures visibility and ordering for operations on variables described by the Volatile keyword (by disallowing instruction reordering around them).

Note:
1. Volatile only applies to assignments of primitive types (byte, char, short, int, long, float, double, boolean) and to reference assignments of objects.
2. For compound operations such as i++, Volatile does not guarantee atomicity.
3. Volatile is lighter-weight than Synchronized.
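A minimal sketch of the typical use case (VolatileFlag is a made-up name): a volatile stop flag. One thread spins on the flag while another clears it; volatile guarantees the worker sees the update promptly, whereas without it the loop could spin forever on a stale cached copy:

```java
public class VolatileFlag {
    // volatile guarantees that when one thread updates this flag,
    // other threads see the new value promptly
    private volatile boolean running = true;

    void stop() {
        running = false;
    }

    void workUntilStopped() {
        while (running) {
            // busy-wait until another thread clears the flag
        }
        System.out.println("worker observed the updated flag and stopped");
    }

    public static void main(String[] args) throws InterruptedException {
        VolatileFlag demo = new VolatileFlag();
        Thread worker = new Thread(demo::workUntilStopped);
        worker.start();
        Thread.sleep(100); // let the worker spin briefly
        demo.stop();       // the write is visible to the worker thanks to volatile
        worker.join();     // returns promptly because the worker saw the change
    }
}
```

Note that volatile alone would not fix the earlier x++ counter: the flag works because each write is a single assignment, not a compound read-modify-write.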

The java.util.concurrent.atomic package

The java.util.concurrent.atomic package provides a series of classes such as AtomicBoolean, AtomicInteger, and AtomicLong. Declaring variables with these classes ensures that operations on them are atomic and thread-safe.

Unlike Synchronized, which uses a Monitor to guarantee mutually exclusive access to shared resources in a multi-threaded environment, the atomic classes under java.util.concurrent.atomic are based on CAS (Compare-And-Swap) operations.

CAS is also known as a lock-free operation and is an optimistic locking strategy. Threads accessing a shared variable are not locked, blocked, or queued, and are never suspended. In layman’s terms, each thread keeps comparing and looping: if there is a conflict, it retries until there is none.
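A minimal sketch (AtomicCounter is a made-up name) of the earlier counter rewritten with AtomicInteger, whose incrementAndGet performs the CAS retry loop internally:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    private final AtomicInteger x = new AtomicInteger(0);

    void count() {
        // incrementAndGet loops on CAS internally: read the current value,
        // compute value + 1, then compareAndSet; on conflict it retries.
        // No lock is taken and no thread is blocked.
        x.incrementAndGet();
    }

    int get() {
        return x.get();
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicCounter c = new AtomicCounter();
        Runnable task = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                c.count();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(c.get()); // always 2000000, unlike the plain x++ version
    }
}
```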

The Lock interface

Lock is an interface under the java.util.concurrent.locks package that defines a series of lock operations. Its main implementation classes are ReentrantLock, ReentrantReadWriteLock.ReadLock, and ReentrantReadWriteLock.WriteLock. Unlike Synchronized, Lock exposes explicit methods for acquiring and releasing a lock, which makes it more flexible and capable of more complex operations, such as:

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockDemo {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
    private final Lock readLock = lock.readLock();
    private final Lock writeLock = lock.writeLock();
    private int x = 0;

    private void count() {
        writeLock.lock();
        try {
            x++;
        } finally {
            writeLock.unlock();
        }
    }

    private void print(int time) {
        readLock.lock();
        try {
            for (int i = 0; i < time; i++) {
                System.out.print(x + " ");
            }
            System.out.println();
        } finally {
            readLock.unlock();
        }
    }
}

For the implementation principle of Lock and more detailed usage, the following two articles are recommended: the use of Lock, and Lock source code analysis.
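As a sketch of that flexibility (TryLockDemo is a hypothetical name), tryLock with a timeout lets a thread give up instead of blocking indefinitely, something Synchronized cannot express:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock(); // main thread holds the lock for the whole demo
        try {
            Thread t = new Thread(() -> {
                try {
                    // wait at most 50 ms instead of blocking forever
                    boolean acquired = lock.tryLock(50, TimeUnit.MILLISECONDS);
                    System.out.println("acquired: " + acquired); // false: main still holds the lock
                    if (acquired) {
                        lock.unlock();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            t.start();
            t.join();
        } finally {
            lock.unlock();
        }
    }
}
```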

Conclusion

  1. Causes of thread-safety issues: in a multi-threaded concurrent environment, multiple threads access the same shared memory resource. One thread is midway through a write (the write has started but not yet finished) when another thread reads, or writes to, the half-written resource, leaving the resource’s data in an inconsistent state.

  2. How can I avoid thread safety issues?

  • Ensure that shared resources can only be operated on by one thread at a time (atomicity, orderliness).
  • Refresh the results of thread operations in a timely manner to ensure that other threads can immediately obtain the latest modified data (visibility).

References:

HenCoder Plus

Send this article to the next person who asks you what the Java memory model is