preface
Thread concurrency series:

- Java Thread State
- Unsafe/CAS/LockSupport: application and principle
- The nature of the "lock" in Java concurrency (implementing a lock step by step)
- Java Synchronized mutual exclusion: application and source-code exploration
- Java object header analysis and use (Synchronized related)
- The evolution of Java Synchronized: biased lock / lightweight lock / heavyweight lock
- Java Synchronized heavyweight lock in depth (mutual exclusion)
- Java Synchronized heavyweight lock in depth (synchronization)
- Java concurrency: in-depth analysis of AQS (part 1 and part 2)
- Thread.sleep / Thread.join / Thread.yield / Object.wait / Condition.await explained
- Java concurrency: ReentrantLock in depth (and its differences from Synchronized)
- Java Semaphore / CountDownLatch / CyclicBarrier in depth (principle)
- Java Semaphore / CountDownLatch / CyclicBarrier in depth (application)
- ReentrantReadWriteLock in-depth analysis
- The most detailed graphic analysis of Java's various locks (ultimate)
- The thread pool series
Mastering thread principles and usage is an essential step in a programmer's growth. There is plenty of material about Java threads online: variable visibility between threads, atomicity of operations, and extensions such as volatile, locks (CAS/Synchronized/Lock), semaphores, and so on. But some articles cover only general concepts, some get lost in low-level source code, and some discuss a single point without the connections between them. For these reasons, this series tries to analyze and summarize Java thread knowledge systematically, to deepen understanding, lay a solid foundation, and hopefully spark further discussion. If these articles are helpful to you, so much the better. Through this article, you will learn:
1. The difference between processes and threads 2. Starting/stopping threads 3. Thread interaction 4. Mutual exclusion and synchronization
1. The difference between processes and threads
Programs and processes
When we write a program/software, say an APK that can be sent directly to another device and installed, the thing we send is a program/software: a static collection of one or more files. Once the APK is installed and the program is executed by the CPU, we say a process is running. A process is therefore the dynamic manifestation of a program, a description of a slice of the CPU's execution time.
Of course, programs and processes are not one-to-one: a program can fork() multiple processes to perform its tasks.
Processes and threads
Before the CPU schedules a program for execution, some data must be prepared: the memory region the program occupies, the peripheral resources it needs to access, and the intermediate values produced while it runs, which are held temporarily in registers. These things associated with the process are collectively called the process context.
The problem this causes: switching processes inevitably involves a context switch, which consumes CPU time.
Process 1 is scheduled by the CPU, and after a certain period process 2 is scheduled in its place. Consider another scenario: a program implements two related functions, A and B, in different processes. Process A needs to interact with process B; this is IPC (inter-process communication). As we know, IPC requires shared memory or kernel calls, both of which are costly. For more, see: Android inter-process communication (IPC).
As computer hardware grew more powerful, CPU frequencies rose and multi-core CPUs appeared. To make full use of the CPU, threads came into being: a process is divided into smaller units of execution. Where a single process used to perform tasks A, B, and C by itself, the three tasks can now run in three separate threads.
Process and thread relationships
1. Both processes and threads describe periods of CPU execution.
2. A process is the basic unit of resource allocation; a thread is the basic unit of CPU scheduling.
3. A process contains at least one thread.
4. Threads in the same process can share variables; communication between them is called inter-thread communication.
5. A thread can be thought of as a smaller-grained process.
Advantages of threads
1. Starting a new thread is much cheaper and faster than starting a new process.
2. Inter-thread communication is simpler, faster, and easier to understand than IPC.
3. POSIX-compliant threads are portable across platforms.
2. Start/stop threads
Since threads are so important, let’s take a look at how threads are started and stopped in Java.
Starting a thread
Thread implements the Runnable interface, so it overrides Runnable's only method: run().
#Thread.java

```java
@Override
public void run() {
    if (target != null) {
        target.run();
    }
}
```
Once a thread is started, the method that executes its task is run(). This method checks whether target is non-null and, if so, calls target.run().
#Thread.java

```java
/* What will be run. */
private Runnable target;
```
target is of type Runnable, and the reference can be assigned through a Thread constructor. It follows that for a thread to execute a task, we must either override the run() method directly or pass in a Runnable reference.
Inheriting the Thread
Declare MyThread, which inherits from Thread and overrides the run() method:
```java
static class MyThread extends Thread {
    @Override
    public void run() {
        System.out.println("thread running by extends...");
    }
}

private static void startThreadByExtends() {
    MyThread t2 = new MyThread();
    t2.start();
}
```
Once the Thread object is created, calling start() starts the thread.
Implement Runnable
Construct Runnable and pass the Runnable reference to Thread.
```java
private static void startThreadByImplements() {
    Runnable runnable = new Runnable() {
        @Override
        public void run() {
            System.out.println("thread running by implements...");
        }
    };
    Thread t1 = new Thread(runnable);
    t1.start();
}
```
Once the Thread object is created, calling start() starts the thread.
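Since Java 8, Runnable is a functional interface, so a lambda can replace the anonymous class above. A minimal sketch (the class name and message text are my own):

```java
public class LambdaStart {
    public static void main(String[] args) throws InterruptedException {
        // Runnable has a single abstract method, so the lambda body becomes run()
        Thread t = new Thread(() -> System.out.println("thread running by lambda..."));
        t.start();
        t.join(); // wait for the thread to finish before main exits
    }
}
```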
Stop the thread
After the thread is started and scheduled by the CPU, its run() method executes; when the method returns, the thread exits normally. You can also make run() return early (for example, by checking a flag bit) and the thread stops. If run() is blocked in Thread.sleep(xx), Object.wait(), etc., you can interrupt the thread with the interrupt() method.
```java
private static void stopThread() {
    MyThread t2 = new MyThread();
    t2.start();
    // Interrupt the thread
    t2.interrupt();
    // Deprecated, do not use:
    // t2.stop();
}
```
For more complete examples and explanation, see: Java gracefully interrupts threads.
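A sketch of the interrupt-while-blocked case (class name and messages are my own): interrupt() makes a blocking sleep() throw InterruptedException, which the thread can treat as its signal to exit.

```java
public class InterruptDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(60_000); // simulate long blocking work
            } catch (InterruptedException e) {
                // sleep() clears the interrupt flag before throwing,
                // so restore it for any code further up the stack
                Thread.currentThread().interrupt();
                System.out.println("worker interrupted, run() returns");
            }
        });
        worker.start();
        worker.interrupt(); // the sleeping worker wakes up immediately
        worker.join();      // run() has returned; the thread exits normally
    }
}
```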
3. Thread interaction
Hardware level
Let's first look at the interaction between the CPU and main memory.
The CPU can perform operations much faster than it can access main memory, that is, when the CPU needs to compute the following expression:
```java
a = a + 1;
```
First, the CPU fetches the value of a from main memory, and it has to wait while the access is in flight — clearly a waste of CPU time. A cache was therefore added between the CPU and main memory: once a value has been fetched, it is kept in the cache; the next time a is accessed, the cache is checked first, and on a hit the value goes straight into a register; finally, under certain rules, the modified value of a is flushed back to main memory. Access speed: register > cache > main memory. The CPU looks for a value first in registers, then in the cache, and last in main memory. You may have noticed the problem with the following code:
```java
int a = 1;
a++;
```
Threads A and B each execute the code above. Suppose thread A runs on CPU1 and thread B on CPU2. When thread A executes a++, it finds a in its cache and computes a = 2. When thread B executes, it likewise reads the value from its own cache and computes a = 2. Both caches eventually write the modified value back to main memory, so the final result is a = 2 — not what we wanted. CPUs address this with a mechanism for keeping caches and main memory in sync: MESI, a cache coherence protocol agreed between each CPU cache and main memory that keeps cached data consistent as far as possible. But because of store buffers and invalidate queues, it still needs to be used together with volatile. For more details on volatile, see: Really understand the Java uses of Volatile.
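A small sketch of the visibility problem that volatile solves (the class name is my own): without volatile, the spinning reader may keep using a stale cached value of stop and never terminate; with volatile, the write is guaranteed to become visible.

```java
public class VisibilityDemo {
    // Remove 'volatile' and the reader may spin forever on some JVMs,
    // because it can keep reading a stale cached value of stop.
    static volatile boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!stop) {
                // spin until the writer's update becomes visible
            }
            System.out.println("reader saw stop == true");
        });
        reader.start();
        Thread.sleep(100); // let the reader start spinning
        stop = true;       // volatile write: guaranteed to reach the reader
        reader.join();     // terminates promptly thanks to visibility
    }
}
```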
Software level
Thanks to registers and caches, each thread appears, in a sense, to have its own local memory. The JVM formalizes this in the JMM (Java Memory Model):
Local memory is a virtual concept, as follows:
```java
static Integer integer = new Integer(0);

public static void main(String[] args) {
    Thread t1 = new Thread(new Runnable() {
        @Override
        public void run() {
            integer = 5;
        }
    });
    Thread t2 = new Thread(new Runnable() {
        @Override
        public void run() {
            integer = 6;
        }
    });
    t1.start();
    t2.start();
}
```
integer has only one copy, in main memory; it may also live temporarily in registers, caches, and other places that correspond to local memory. Each thread does not make its own separate copy of the data.
Take another look at this code:
```java
static boolean flag = false;
static int a = 0;

public static void main(String[] args) {
    Thread t1 = new Thread(new Runnable() {
        @Override
        public void run() {
            a = 1;       // 1
            flag = true; // 2
        }
    });
    Thread t2 = new Thread(new Runnable() {
        @Override
        public void run() {
            if (flag) {  // 3
                a = 2;   // 4
            }
        }
    });
    t1.start();
    t2.start();
}
```
If thread 1 finishes before thread 2 starts, the result is fine. But if the two threads run concurrently, the compiler/processor may swap //1 and //2, since there is no dependency between them. This is instruction reordering. After reordering, the execution order may be 2->3->4->1, or some other order, and the final result becomes uncontrollable.
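A sketch of how volatile forbids this particular reordering (class name is my own): declaring flag volatile establishes a happens-before edge from the write at //2 to the read at //3, so whenever thread 2 sees flag == true, the write at //1 is guaranteed to be visible as well.

```java
public class OrderingDemo {
    static int a = 0;
    static volatile boolean flag = false; // volatile: //1 may not be moved after //2

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            a = 1;       // 1
            flag = true; // 2  volatile write
        });
        Thread t2 = new Thread(() -> {
            if (flag) {  // 3  volatile read
                // happens-before guarantees a == 1 is visible here
                a = 2;   // 4
            }
        });
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}
```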
The heart of thread interaction
From the analysis above at the hardware and software levels: the local memory of thread 1, thread 2, and thread 3 is invisible to the other threads; multiple threads writing to main memory may produce dirty data; and instruction reordering makes results uncontrollable. Multithreaded interaction must address these three problems, which are the heart of thread concurrency:
1. Visibility 2. Atomicity 3. Ordering
These three are not only the core of concurrency but also its foundation: only when all three are satisfied are the results of concurrent access to shared variables controllable. The familiar locks, volatile, and so on are solutions proposed for one or all of the three.
Mutual exclusion and synchronization
The origin of mutual exclusion
How do we satisfy the three conditions for concurrency? Start with atomicity. Since simultaneous multithreaded access to a shared variable easily goes wrong, the natural idea is to make threads queue for access: while one thread is accessing the variable, other threads cannot; they wait in line, and after the current thread finishes, a waiting thread tries again to access the shared variable. The region of code that handles shared variables is called the critical section, and the shared variables themselves are called critical resources.
```java
{
    a = 5;
    b = 6;
    c = a;
}
```
In the code above, multiple threads cannot enter the critical section at the same time. This access pattern is called mutual exclusion. That is, mutually exclusive access to the critical section gives the operations atomicity.
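A minimal sketch of mutual exclusion using Java's built-in synchronized keyword (the class name and counts are my own): without synchronized, the two threads' interleaved count++ operations lose updates; with it, each increment is atomic.

```java
public class SafeCounter {
    private int count = 0;

    // Only one thread at a time may be inside these methods:
    // they form the critical section protecting the shared variable 'count'
    public synchronized void increment() {
        count++; // read-modify-write, now atomic with respect to other threads
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        SafeCounter c = new SafeCounter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) c.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // always 20000 under mutual exclusion
    }
}
```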
Origin of synchronization
Different threads may handle the shared variable in a critical section differently, as the following code shows:
```java
int a = 0;

// Executed by thread 1
private void add() {
    while (true) {
        if (a < 10) a++;
    }
}

// Executed by thread 2
private void sub() {
    while (true) {
        if (a > 0) a--;
    }
}
```
Thread 1 and thread 2 both operate on the variable a, and both rely on its value to decide what to do next. Thread 1 increments a when a < 10; thread 2 decrements a when a > 0. Each thread constantly polls the value of a to see whether its condition holds before acting. This works, but it is inefficient. It would be better if a thread that finds its condition unmet simply stopped and waited, and the other thread notified it when the condition became true. Then neither thread has to keep asking what a is, which greatly improves efficiency. The interaction then looks like this:
```java
int a = 0;

// Executed by thread 1
private void add() {
    while (true) {
        if (a < 10) a++;
        else ...  // wait, and notify thread 2
    }
}

// Executed by thread 2
private void sub() {
    while (true) {
        if (a > 0) a--;
        else ...  // wait, and notify thread 1
    }
}
```
Since this process is a little dry, let's use a small analogy. Let Xiaoming represent thread 1 and Xiaogang represent thread 2. Xiaoming wants to ship a batch of boxed goods: he first carries the boxes to the open yard outside the warehouse — the yard is limited and holds only 10 boxes — and then waits for Xiaogang to come and pick them up.
1. At first, Xiaogang finds the yard empty, so he waits for Xiaoming's notice; Xiaoming finds there are no goods out, and starts placing boxes.
2. Xiaoming finds there is still room in the yard, so he keeps placing boxes.
3. When Xiaoming finds there are 10 boxes and the yard is full, he takes a rest and tells Xiaogang: "The goods are ready — come and fetch them."
4. Xiaogang receives the notice and takes the boxes one after another; when they run out, he stops and calls Xiaoming: "The goods are gone — put out some more."
To summarize the whole process: Xiaoming places 10 boxes and waits for Xiaogang to take them; after Xiaogang has taken them, he notifies Xiaoming to continue placing. Note that here a whole batch of boxes is placed and then taken in a batch — it is not one-in, one-out; we will come back to that later. Since both Xiaoming and Xiaogang depend on the number of boxes, we know from the analysis of mutual exclusion above that this part of the operation must be wrapped in a critical section and accessed mutually exclusively.
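The Xiaoming/Xiaogang story is the classic producer-consumer pattern, and Object.wait()/notifyAll() implement exactly this wait-notify mechanism. A sketch under my own naming (Warehouse, CAPACITY), with each condition checked in a while loop to guard against spurious wakeups:

```java
public class Warehouse {
    private static final int CAPACITY = 10; // the yard holds at most 10 boxes
    private int boxes = 0;

    // Xiaoming: place a box, waiting while the yard is full
    public synchronized void put() throws InterruptedException {
        while (boxes == CAPACITY) {
            wait(); // release the lock and sleep until notified
        }
        boxes++;
        notifyAll(); // tell Xiaogang there are goods to take
    }

    // Xiaogang: take a box, waiting while the yard is empty
    public synchronized void take() throws InterruptedException {
        while (boxes == 0) {
            wait();
        }
        boxes--;
        notifyAll(); // tell Xiaoming there is room again
    }

    public synchronized int count() {
        return boxes;
    }

    public static void main(String[] args) throws InterruptedException {
        Warehouse w = new Warehouse();
        Thread xiaoming = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) w.put();
            } catch (InterruptedException ignored) { }
        });
        Thread xiaogang = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) w.take();
            } catch (InterruptedException ignored) { }
        });
        xiaoming.start(); xiaogang.start();
        xiaoming.join(); xiaogang.join();
        System.out.println(w.count()); // everything produced was consumed: 0
    }
}
```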
We call this interaction process synchronization.
Synchronization and mutual exclusion
As can be seen, synchronization adds a wait-notify mechanism on top of mutual exclusion to achieve orderly access to mutually exclusive resources; synchronization therefore already implies mutual exclusion.
Synchronization is a complex mutual exclusion and mutual exclusion is a special kind of synchronization
Now that the concepts of mutual exclusion and synchronization have been explained, how are they implemented? The following articles in this series will focus on how mechanisms provided by the system achieve visibility, atomicity, ordering, mutual exclusion, and synchronization.
As an extension, the next article digs into Unsafe — an important building block for what follows.