First, concepts
Process
A process is the smallest unit to which the operating system allocates resources for a running program.
A process is one execution of a program on a computer: when you run a program, you start a process. A program is static and lifeless; a process is dynamic and alive. Processes can be divided into system processes and user processes. System processes carry out the functions of the operating system itself and belong to the running operating system, while user processes are all the processes you start.
A process is the smallest unit of resource allocation in an operating system; its resources include CPU time, memory space, and disk I/O. All threads in a process share the process's system resources, while processes are independent of one another. A process is one running activity of a program, with some independent function, over a data set; it is the independent unit the system uses for resource allocation and scheduling.
Thread
Threads are the smallest unit of CPU scheduling and must exist within a process.
A thread is an entity within a process and the basic unit of CPU scheduling and dispatch; it is a smaller independent unit than a process. A thread owns almost no system resources of its own, only the few that are essential for running (a program counter, a set of registers, and a stack), but it shares all of the resources owned by its process with the other threads of that process.
The CPU time-slice round-robin mechanism (also called RR scheduling) is what allows a single core to run threads concurrently.
Number of CPU cores and number of threads
Multi-core processors are also called Chip Multiprocessors (CMP). The CMP idea, proposed at Stanford University, integrates the SMP (symmetric multiprocessing) structure of massively parallel processors onto a single chip, with each processor core executing a different process in parallel.
Simultaneous Multithreading (SMT) lets multiple threads on the same processor core execute simultaneously and share the core's execution resources.
Core count vs. thread count: mainstream CPUs today are multi-core. Cores are added to increase the number of hardware threads, because the operating system performs tasks through threads. In general the relationship is 1:1, so a quad-core CPU provides four hardware threads; with Intel's Hyper-Threading technology, however, the ratio becomes 1:2, so a quad-core CPU provides eight.
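As a quick check of the 1:2 relationship described above, the JVM reports the number of logical processors (hardware threads, not physical cores) available to it. A minimal sketch (class name is my own):

```java
public class CpuInfo {
    public static void main(String[] args) {
        // availableProcessors() returns logical processors: on a
        // hyper-threaded quad-core CPU this is typically 8, not 4.
        int logical = Runtime.getRuntime().availableProcessors();
        System.out.println("logical processors: " + logical);
    }
}
```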
CPU time slice rotation mechanism
Time-slice round-robin scheduling, also known as RR scheduling, is the oldest, simplest, fairest and most widely used scheduling algorithm. Each process is assigned a period of time, called its time slice, which is the amount of time the process is allowed to run. Baidu Baike explains the mechanism as follows:
If the process is still running when its time slice ends, the CPU is taken away and given to another process. If the process blocks or finishes before the time slice ends, the CPU switches immediately. All the scheduler has to do is maintain a list of ready processes; when a process uses up its time slice, it is moved to the end of the queue.
The one interesting question in round-robin scheduling is the length of the time slice. Switching from one process to another takes time: saving and loading register values and memory maps, updating various tables and queues, and so on. If a process switch (sometimes called a context switch) takes 5 ms and the time slice is set to 20 ms, then after every 20 ms of useful work the CPU spends 5 ms switching processes: 20% of CPU time is wasted on administrative overhead.
To improve CPU efficiency we could set the time slice to 5000 ms, so only 0.1% of the time is wasted. But consider what happens in a time-sharing system if 10 interactive users press Enter almost simultaneously: assuming every other process uses its full time slice, the last unlucky process has to wait 5 seconds for a chance to run. Most users cannot tolerate a 5-second response to a short command, and the same problem occurs on a PC running multiple programs.
The conclusion: a time slice that is too short causes too many process switches and lowers CPU efficiency; one that is too long hurts responsiveness to short interactive requests. A time slice of around 100 ms is usually a reasonable compromise.
Second, using threads
1. Start the thread
There are two methods: extend Thread or implement Runnable.

```java
// Method 1: extend Thread
class UserThread extends Thread {
    @Override
    public void run() {
        super.run();
        // ...
    }
}
UserThread thread = new UserThread();
thread.start();

// Method 2: implement Runnable
new Thread(new Runnable() {
    @Override
    public void run() {
        // ...
    }
}).start();
```
Understand run() and start() in depth
The Thread class is Java's abstraction of the thread concept. Simply creating an instance with new Thread() does not yet hook it up to an actual operating-system thread.
Only after start() is executed is a thread truly started. start() moves the thread into the ready queue to wait for CPU allocation, and the thread then calls the run() method you implemented. start() cannot be called twice; a repeated call throws an exception. run() is where the business logic lives; it is essentially no different from any other member method and can be called directly or repeatedly, in which case it simply executes on the calling thread.
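A minimal sketch of the difference (class name is my own): run() executes on the calling thread like an ordinary method call, start() spawns a real thread, and a second start() throws IllegalThreadStateException:

```java
public class StartVsRun {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() ->
                System.out.println("executing on: " + Thread.currentThread().getName()));

        t.run();   // ordinary method call: runs on the main thread
        t.start(); // actually starts a new thread (e.g. Thread-0)
        t.join();

        try {
            t.start(); // repeated call is illegal
        } catch (IllegalThreadStateException e) {
            System.out.println("start() cannot be called twice");
        }
    }
}
```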
2. End the thread
2.1 stop(), suspend(), resume()
Suspending, resuming and stopping a thread correspond to the Thread APIs suspend(), resume() and stop(). These methods are deprecated: when suspend() is called the thread goes to sleep without releasing the resources it holds, such as locks, which easily leads to deadlock; and stop() does not guarantee that a thread's resources are released on termination, since it usually gives the thread no chance to do its cleanup, so the program may be left in an indeterminate state.
2.2 interrupt(), isInterrupted(), Thread.interrupted()
A safe interrupt means interrupting thread A by calling A.interrupt(), which is like saying "A, you have been asked to stop": it merely sets A's interrupt flag to true. Thread A then checks the flag itself via isInterrupted(); when the flag is true it can release its resources, finish up, and terminate. This gives the thread a chance to clean up rather than being killed.
You can also check the flag with the static method Thread.interrupted(), which reports the interrupt status of the current thread and, unlike isInterrupted(), resets the flag to false after reading it.
If a thread is blocked, for example in sleep(), join() or wait(), and its interrupt flag is set to true, the blocking call throws InterruptedException, and the flag is cleared (reset to false) as the exception is thrown.
A thread in a deadlock state cannot be interrupted.
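A small sketch of the cooperative pattern described above (names are illustrative): the worker checks its interrupt flag, and because sleep() clears the flag when it throws InterruptedException, the catch block restores it so the loop can exit cleanly:

```java
public class SafeInterrupt {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Thread.sleep(100); // a blocking call
                } catch (InterruptedException e) {
                    // sleep() cleared the flag; restore it so the loop condition sees it
                    Thread.currentThread().interrupt();
                }
            }
            System.out.println("worker cleaned up and exiting");
        });
        worker.start();

        Thread.sleep(300);
        worker.interrupt(); // cooperative: only sets the flag, does not kill the thread
        worker.join();
    }
}
```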
3. Common thread methods and thread states
3.1 yield()
yield() causes the current thread to give up the CPU for an indeterminate amount of time, without releasing any locks it holds. After the yield, the scheduler re-selects among all ready threads, so it is entirely possible that the thread that just yielded is immediately chosen to run again as soon as it re-enters the ready state.
3.2 join()
If thread A calls B.join() while it is executing, thread A is suspended and gives up the CPU until thread B finishes, after which thread A resumes.
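A minimal sketch (class name mine): the main thread plays the role of A and blocks in b.join() until B has finished, so the execution order is deterministic:

```java
public class JoinDemo {
    public static void main(String[] args) throws InterruptedException {
        StringBuilder order = new StringBuilder();

        Thread b = new Thread(() -> order.append("B"));
        b.start();
        b.join();          // A (main) waits here until B terminates

        order.append("A"); // guaranteed to run after B's append
        System.out.println(order); // prints "BA"
    }
}
```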
3.3 Thread Status
3.3.1 Initial Status
A new thread is created, but start() has not yet been called
3.3.2 Running Status
In Java, the ready and running states are grouped together as "runnable". After the thread object is created and another thread (such as the main thread) calls its start() method, the thread enters the pool of runnable threads, waiting for the scheduler to grant it the CPU; at that point it is ready, and a ready thread becomes running once it obtains a CPU time slice.
3.3.3 Blocking Status
The thread is blocked waiting to acquire a lock.
3.3.4 Wait Status
A thread in this state is waiting for another thread to perform some specific action (a notification or an interrupt).
3.3.5 Timeout Wait
Unlike the waiting state, this state returns automatically after a specified time.
3.3.6 Termination Status
The thread completes execution.
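The states above map onto the java.lang.Thread.State enum, which getState() exposes. A quick sketch of the two endpoints of the lifecycle (class name mine):

```java
public class StateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {});
        System.out.println(t.getState()); // NEW: created, start() not yet called

        t.start();
        t.join();                          // wait for the thread to finish
        System.out.println(t.getState()); // TERMINATED: run() has completed
    }
}
```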
3.4 Priority of threads
In Java, a thread's priority is controlled by the integer member variable priority, which ranges from 1 to 10; it can be changed with the setPriority(int) method, and the default is 5. Higher-priority threads are allocated more time slices than lower-priority ones. When setting priorities, threads that block frequently (on sleep or I/O operations) should be given higher priority, while computation-heavy threads (which need more CPU time) should be given lower priority to keep them from monopolizing the processor. Thread scheduling varies across JVMs and operating systems, and some operating systems even ignore thread priority settings.
3.5 Daemon Thread
A Daemon thread is a support thread, used mainly for background scheduling and support work within a program: the Java virtual machine exits when no non-Daemon threads remain. A thread is marked as a daemon by calling Thread.setDaemon(true). We rarely write daemon threads ourselves; the garbage-collection threads are a typical example. Note that when the Java virtual machine exits, finally blocks in daemon threads are not guaranteed to execute, so when building daemon threads you must never rely on a finally block to close or clean up resources.
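A minimal sketch (class name mine): the daemon loops forever, yet the JVM still exits when main, the last non-daemon thread, returns. Note that setDaemon(true) must be called before start():

```java
public class DaemonDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread d = new Thread(() -> {
            while (true) {
                // background support work; an infinite loop on purpose
                Thread.yield();
            }
        });
        d.setDaemon(true); // must precede start()
        d.start();

        Thread.sleep(100);
        System.out.println("main exiting; JVM terminates despite d's loop");
        // no join(): the JVM does not wait for daemon threads
    }
}
```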
4. Thread lifecycle
After a thread is created and start() is called, it enters the ready state and waits for a CPU time slice. Once scheduled it runs; it returns to the ready state when its allotted time slice expires or when it calls yield(), and waits again. When run() completes, or stop() is called, the thread terminates. While running, the thread blocks if it calls sleep() or wait(); when the sleep ends, or notify()/notifyAll() is called, it returns to the ready state and waits to run.
Third, thread safety
1. What is sharing between threads?
A running thread has its own stack space and, like a script, executes its code step by step until it terminates. But threads running in isolation are of limited value; the real value comes when multiple threads can cooperate, sharing data and working together to get things done.
2. Synchronized keyword
2.1 Functions of synchronized
Java supports multiple threads accessing an object, or an object's member variables, at the same time. The synchronized keyword can modify a method or be used in the form of a synchronized block; it guarantees that at any moment only one thread can be inside the method or block, which ensures the threads' visibility of, and exclusive access to, the shared variables. This is also known as the built-in locking mechanism.
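A minimal sketch of both forms (class is my own illustration): the synchronized method and the equivalent synchronized block both lock the same monitor (this), so concurrent increments never lose updates:

```java
public class Counter {
    private int count = 0;

    // method form: locks `this` for the whole method
    public synchronized void increment() {
        count++;
    }

    // equivalent block form: same monitor, narrower scope possible
    public void incrementBlock() {
        synchronized (this) {
            count++;
        }
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) c.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // always 20000; without synchronized it could be less
    }
}
```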
2.2
// TODO is not finished yet
3. Volatile keyword
Volatile guarantees visibility of a variable across threads: when one thread changes the variable's value, the new value is immediately visible to all other threads. It does not, however, make compound operations such as i++ atomic.
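A common minimal use (names mine): a volatile stop flag. Without volatile, the worker loop might never observe the write from the main thread; volatile guarantees the visibility described above:

```java
public class StopFlag {
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // do work; each volatile read sees the latest value
            }
            System.out.println("worker observed running == false, exiting");
        });
        worker.start();

        Thread.sleep(100);
        running = false; // volatile write: immediately visible to the worker
        worker.join();
    }
}
```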
4. ThreadLocal
4.1 What is ThreadLocal?
A ThreadLocal is known as a thread-local variable: values stored in a ThreadLocal belong to the current thread and are isolated from other threads. ThreadLocal effectively keeps a separate copy of the variable in each thread, so each thread can only read and write its own copy.
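A minimal sketch of the isolation (names mine): two threads write different values through the same ThreadLocal, and each reads back only its own:

```java
public class ThreadLocalDemo {
    private static final ThreadLocal<String> LOCAL =
            ThreadLocal.withInitial(() -> "unset");

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            String name = Thread.currentThread().getName();
            LOCAL.set("value of " + name); // stored in this thread's own map
            System.out.println(name + " sees: " + LOCAL.get());
        };
        Thread t1 = new Thread(task, "t1");
        Thread t2 = new Thread(task, "t2");
        t1.start(); t2.start();
        t1.join(); t2.join();

        // main never called set(), so it still sees the initial value
        System.out.println("main sees: " + LOCAL.get());
    }
}
```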
4.2 ThreadLocal principle
The principles of ThreadLocal are covered in detail in my blog LCODER's multithreading series, "Analyzing thread communication in Android from a source-code perspective"; I have reproduced that material here.
ThreadLocal’s most important function is data isolation between threads, which is why it exists. So how does ThreadLocal isolate data between threads? Conclusion first:
- Putting data into a ThreadLocal: each set() first obtains the current thread, then obtains the ThreadLocalMap from that thread, and stores the data in the ThreadLocalMap with key = the ThreadLocal and value = the value being stored.
- Fetching data from a ThreadLocal: get() likewise obtains the current thread, then its ThreadLocalMap, and looks the value up in the map using the ThreadLocal as the key.
- The ThreadLocalMap is a member variable of the Thread class, one per thread. So even though the ThreadLocal key is the same, the data fetched is always the current thread's own, with no conflicts; this is how data isolation is achieved.
This conclusion comes from my analysis of the ThreadLocal source code. Let's trace through the source to see how ThreadLocal stores and fetches data.
1.1 Fetching data from a ThreadLocal: get()

```java
public T get() {
    Thread t = Thread.currentThread();
    ThreadLocalMap map = getMap(t);
    // As the next two steps show, the map is null on first access.
    if (map != null) {
        ThreadLocalMap.Entry e = map.getEntry(this);
        if (e != null) {
            @SuppressWarnings("unchecked")
            T result = (T)e.value;
            return result;
        }
    }
    return setInitialValue();
}
```
Looking at the source, this method first retrieves the current thread and then retrieves the ThreadLocalMap from it. Tracing into getMap(t):
```java
ThreadLocalMap getMap(Thread t) {
    return t.threadLocals;
}
```
It returns threadLocals, which is a member variable of the Thread class. In Thread.java:

```java
ThreadLocal.ThreadLocalMap threadLocals = null;
```
In Thread.java this field is initialized to null. So on first access the map in get() is null, and execution falls through to setInitialValue(). Tracing into setInitialValue():

```java
private T setInitialValue() {
    T value = initialValue();
    Thread t = Thread.currentThread();
    ThreadLocalMap map = getMap(t);
    if (map != null)
        map.set(this, value);
    else
        createMap(t, value);
    return value;
}
```
initialValue() is called to obtain the value; if the user has overridden initialValue(), it returns the user-defined value. Continuing down, the map is still null, so execution goes to createMap(t, value). Let's see how createMap() is implemented:
```java
void createMap(Thread t, T firstValue) {
    t.threadLocals = new ThreadLocalMap(this, firstValue);
}
```

This method assigns the thread's threadLocals field: a new ThreadLocalMap is created holding the initial entry.
1.2 Putting data into a ThreadLocal: set()

```java
public void set(T value) {
    Thread t = Thread.currentThread();
    ThreadLocalMap map = getMap(t);
    if (map != null)
        map.set(this, value);
    else
        createMap(t, value);
}
```
As in get(), the current thread and its ThreadLocalMap are fetched via getMap(t). If the map is null, createMap(t, value) creates a ThreadLocalMap and puts the value into it; otherwise the value is stored with map.set(this, value).
So ThreadLocal stores and retrieves data through a class called ThreadLocalMap. What is this class? ThreadLocalMap is a static inner class of ThreadLocal:
```java
static class ThreadLocalMap {

    /**
     * The entries in this hash map extend WeakReference, using
     * its main ref field as the key (which is always a
     * ThreadLocal object). Note that null keys (i.e. entry.get()
     * == null) mean that the key is no longer referenced, so the
     * entry can be expunged from table. Such entries are referred to
     * as "stale entries" in the code that follows.
     */
    static class Entry extends WeakReference<ThreadLocal<?>> {
        /** The value associated with this ThreadLocal. */
        Object value;

        Entry(ThreadLocal<?> k, Object v) {
            super(k);
            value = v;
        }
    }

    /** The initial capacity -- MUST be a power of two. */
    private static final int INITIAL_CAPACITY = 16;

    /**
     * The table, resized as necessary.
     * table.length MUST always be a power of two.
     */
    private Entry[] table;

    // ...
}
```

Inside this static inner class is another static inner class, Entry, which is simpler: it holds a key of type ThreadLocal (via a weak reference) and a value of type Object. Each ThreadLocalMap stores its data in an Entry[] array named table.
Now let's go back to the call inside ThreadLocal's set():

```java
if (map != null)
    map.set(this, value);
```

Here map is the ThreadLocalMap, so this traces into ThreadLocalMap's own set():
```java
/**
 * Set the value associated with key.
 *
 * @param key the thread local object
 * @param value the value to be set
 */
private void set(ThreadLocal<?> key, Object value) {

    // We don't use a fast path as with get() because it is at
    // least as common to use set() to create new entries as
    // it is to replace existing ones, in which case, a fast
    // path would fail more often than not.

    Entry[] tab = table;
    int len = tab.length;
    int i = key.threadLocalHashCode & (len-1);

    for (Entry e = tab[i];
         e != null;
         e = tab[i = nextIndex(i, len)]) {
        ThreadLocal<?> k = e.get();

        if (k == key) {
            e.value = value;
            return;
        }

        if (k == null) {
            replaceStaleEntry(key, value, i);
            return;
        }
    }

    tab[i] = new Entry(key, value);
    int sz = ++size;
    if (!cleanSomeSlots(i, sz) && sz >= threshold)
        rehash();
}
```

As the code shows, putting data into a ThreadLocalMap means putting it into an Entry, with the current ThreadLocal as the key and the user's value as the value.
So how does ThreadLocal achieve data isolation? Each Thread has its own ThreadLocalMap, declared as a member of the Thread class:

```java
/* ThreadLocal values pertaining to this thread. This map is maintained
 * by the ThreadLocal class. */
ThreadLocal.ThreadLocalMap threadLocals = null;
```
In Thread, threadLocals starts out null. Where is it assigned? In ThreadLocal: when set() is called, or when get() finds no value, the code checks whether the thread's ThreadLocalMap is null, and if it is, execution goes to createMap():

```java
/**
 * Create the map associated with a ThreadLocal. Overridden in
 * InheritableThreadLocal.
 *
 * @param t the current thread
 * @param firstValue value for the initial entry of the map
 */
void createMap(Thread t, T firstValue) {
    t.threadLocals = new ThreadLocalMap(this, firstValue);
}
```

A new ThreadLocalMap is thus created for each thread. For the same ThreadLocal variable the key is the same in every map, yet there is no conflict: the data lives in the Entry array of each thread's own ThreadLocalMap, and the maps of different threads are completely unrelated objects with different Entry[] arrays. In this way ThreadLocal provides data isolation between threads.
In a nutshell, ThreadLocal achieves data isolation between threads because each thread has its own ThreadLocalMap in which to store its data.
4.3 Analysis of Memory leaks caused by ThreadLocal
4.3.1 Why does this Cause memory leaks?
Strictly speaking, ThreadLocal itself does not store values; the values live in each thread's ThreadLocalMap, with the ThreadLocal serving as the map key that the thread uses to look its value up. ThreadLocalMap holds the ThreadLocal key through a weak reference, and weakly referenced objects are reclaimed during GC (weak references will be covered later in LCODER's JVM series). So when a GC occurs, a ThreadLocal that is no longer strongly referenced elsewhere is reclaimed, but the Entry that used it as a key remains strongly referenced: the Entry is held by the ThreadLocalMap, which is held by the Thread (Thread -> ThreadLocalMap -> Entry). As long as the thread does not end, the Entry stays alive with a null key, and since the key is gone that entry's memory can never be accessed again: a memory leak. To avoid this, call the ThreadLocal's remove() when you are done with it; remove() explicitly calls expungeStaleEntry(), which removes entries whose key is null.
```java
/**
 * Expunge a stale entry by rehashing any possibly colliding entries
 * lying between staleSlot and the next null slot. This also expunges
 * any other stale entries encountered before the trailing null. See
 * Knuth, Section 6.4
 *
 * @param staleSlot index of slot known to have null key
 * @return the index of the next null slot after staleSlot
 * (all between staleSlot and this slot will have been checked
 * for expunging).
 */
private int expungeStaleEntry(int staleSlot) {
    Entry[] tab = table;
    int len = tab.length;

    // expunge entry at staleSlot
    tab[staleSlot].value = null;
    tab[staleSlot] = null;
    size--;

    // Rehash until we encounter null
    Entry e;
    int i;
    for (i = nextIndex(staleSlot, len);
         (e = tab[i]) != null;
         i = nextIndex(i, len)) {
        ThreadLocal<?> k = e.get();
        if (k == null) {
            e.value = null;
            tab[i] = null;
            size--;
        } else {
            int h = k.threadLocalHashCode & (len - 1);
            if (h != i) {
                tab[i] = null;

                // Unlike Knuth 6.4 Algorithm R, we must scan until
                // null because multiple entries could have been stale.
                while (tab[h] != null)
                    h = nextIndex(h, len);
                tab[h] = e;
            }
        }
    }
    return i;
}
```
So if ThreadLocal can leak memory, why use weak references at all? Because with weak references the stale entry can at least be detected and reclaimed the next time remove(), set() or get() is called, since its key has become null. With strong references, even after every other reference to the ThreadLocal is gone, the ThreadLocalMap would still hold it strongly, so the entry could never be reclaimed until the thread ends. Weak references can still leak, but they remain the safer choice.
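The remedy described above, as a sketch (names mine): always pair set() with remove() in a finally block, which matters especially in thread pools, where threads and their ThreadLocalMaps live on after the task ends:

```java
public class ThreadLocalCleanup {
    private static final ThreadLocal<byte[]> BUFFER = new ThreadLocal<>();

    static void handleRequest() {
        BUFFER.set(new byte[1024]); // per-thread scratch space
        try {
            // ... use BUFFER.get() while handling the request ...
        } finally {
            BUFFER.remove(); // expunges the entry so pooled threads don't leak it
        }
    }

    public static void main(String[] args) {
        handleRequest();
        System.out.println("after remove(): " + BUFFER.get()); // null again
    }
}
```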
Comparison of ThreadLocal and Synchronized
Fourth, thread collaboration
Threads cooperate to accomplish work: for example, one thread changes the value of an object, and another thread perceives the change and acts accordingly. The work starts in one thread and finishes in another.
4.1 Wait and notification mechanism
Thread A calls wait() on object O and enters the waiting state; later, thread B calls notify() or notifyAll() on object O. On receiving the notification, thread A returns from wait() and carries out its subsequent operations. The two threads interact through object O; wait() and notify()/notifyAll() on the object act like a switch signal that completes the interaction between the waiting side and the notifying side.
notify()
Notifies one thread waiting on the object; that thread returns from wait() only after it reacquires the object's lock. Threads that do not acquire the lock remain WAITING.
notifyAll()
Notifies all threads waiting on the object.
wait()
The thread calling this method enters the WAITING state and returns only after being notified by another thread or interrupted. Note that wait() releases the object's lock while waiting.
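The whole mechanism in one minimal sketch (names mine): the waiter must hold the object's lock to call wait(), wait() releases that lock while waiting, and the notifier must hold the same lock to call notify(). The while loop guards the condition against spurious wakeups:

```java
public class WaitNotify {
    private static final Object lock = new Object();
    private static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                while (!ready) {             // recheck: guards against spurious wakeups
                    try {
                        lock.wait();         // releases the lock while waiting
                    } catch (InterruptedException e) {
                        return;
                    }
                }
                System.out.println("condition met, waiter proceeds");
            }
        });
        waiter.start();

        Thread.sleep(100);
        synchronized (lock) {                // must hold the lock to notify
            ready = true;
            lock.notify();                   // waiter reacquires the lock, rechecks, proceeds
        }
        waiter.join();
    }
}
```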
The operating system limits the number of threads per process: roughly 1000 on Linux and 2000 on Windows.