The state of a thread in Java

NEW, RUNNABLE (covering both RUNNING and READY), BLOCKED, WAITING, TIMED_WAITING, and TERMINATED

Java folds the operating system's running and ready states into the single RUNNABLE state. BLOCKED is the state of a thread that is waiting to enter a method or code block guarded by the synchronized keyword (i.e. waiting to acquire the monitor lock). A thread blocked on a Lock from the java.util.concurrent (JUC) package, however, is in the WAITING state, because the JUC Lock implementations block threads through the methods of the LockSupport class.

Priority of the thread

Threads in Java have priorities ranging from 1 to 10; setting a priority below 1 or above 10 makes the JDK throw an IllegalArgumentException. The default priority is 5. Thread priorities are inherited in Java: if thread A starts thread B, thread B gets the same priority as thread A. Note that program correctness must not depend on thread priority, because the operating system is free to ignore the priorities assigned to Java threads.
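
For illustration (the class name is mine), a minimal sketch of setting a priority explicitly; remember that the scheduler may ignore it:

public class PriorityDemo {
    public static void main(String[] args) {
        // The child thread inherits the parent's priority unless it is overridden.
        Thread t = new Thread(() ->
                System.out.println("child priority = " + Thread.currentThread().getPriority()));
        System.out.println("parent priority = " + Thread.currentThread().getPriority()); // 5 by default
        t.setPriority(Thread.MAX_PRIORITY); // 10; values outside 1..10 throw IllegalArgumentException
        t.start();
    }
}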

Daemon thread

There are two types of threads in Java: user threads and daemon threads. When no non-daemon (user) thread remains in the process, the JVM exits and daemon threads are destroyed automatically. A thread is made a daemon with setDaemon(true), which must be called before thread.start(); calling it afterwards throws an IllegalThreadStateException. New threads created by a daemon thread are daemon threads themselves. With multithreaded frameworks such as ExecutorService, daemon threads are typically replaced by user threads with priority Thread.NORM_PRIORITY. When writing daemon threads, do not rely on finally blocks to run shutdown or cleanup logic, because the JVM may exit before they execute.
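
A minimal sketch (class name is mine); note that setDaemon(true) comes before start():

public class DaemonDemo {
    public static void main(String[] args) {
        Thread daemon = new Thread(() -> {
            while (true) {
                // Background work; the JVM exits when only daemon threads remain,
                // so any finally/cleanup logic here may never run.
            }
        });
        daemon.setDaemon(true);   // must be called before start(), otherwise IllegalThreadStateException
        daemon.start();
        System.out.println("main exits; the daemon is discarded with the JVM");
    }
}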

Constructing a thread

A newly constructed thread object has its space allocated by its parent thread (the thread that creates it). The child thread inherits the parent's daemon status, priority, ThreadGroup, contextClassLoader for loading resources, and inheritable thread-local variables (InheritableThreadLocal), and is assigned a unique ID to identify it.

Synchronization does not have inheritance

When the code executed by a thread throws an exception, the lock it holds is released automatically. Synchronization is not inherited: if a parent-class method declared synchronized is overridden in a subclass, the overriding method does not carry the synchronized property.
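
A small sketch of this point (class names are mine): the overriding method is synchronized only if it declares the keyword itself or delegates to the synchronized parent method:

class Parent {
    public synchronized void doWork() { /* guarded by the lock on 'this' */ }
}

class Child extends Parent {
    @Override
    public void doWork() {            // NOT synchronized: the modifier is not inherited
        // either re-declare it as synchronized, or call the synchronized parent version:
        super.doWork();
    }
}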

Using multithreading

  • extends Thread
  • implements Runnable
  • Use Future and Callable

The Executor framework uses Runnable as its basic task representation. Runnable is a rather limited abstraction: while run can write to a log file or put results into a shared data structure, it cannot return a value or throw a checked exception. Many tasks are actually deferred computations — performing database queries, fetching resources from the network, or computing some complex function. A Callable is a better abstraction for these tasks: it expects that the main entry point, call(), will return a value and may throw an exception. Runnable and Callable describe abstract computing tasks. These tasks are usually finite: they have a clear starting point and eventually end.
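
For illustration, a minimal sketch (class name is mine) of submitting a Callable to an ExecutorService and retrieving its result through a Future:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableDemo {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // call() can return a value and throw a checked exception, unlike Runnable.run()
        Callable<Integer> task = () -> 6 * 7;
        Future<Integer> future = pool.submit(task);
        System.out.println("result = " + future.get()); // blocks until the task completes
        pool.shutdown();
    }
}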

The Thread.yield() method

The yield() method gives up the CPU so that some other task can use the execution time, but when the yield actually takes effect is uncertain: the yielding thread may be given a CPU time slice again immediately. Note that yield(), like sleep(), does not release any lock the thread holds, unlike wait(); this is also why sleep() is defined in the Thread class rather than in Object.
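
A rough, timing-dependent sketch (names are mine) showing that sleep() keeps the lock: the second acquisition stays blocked while the first thread sleeps inside the synchronized block:

public class SleepHoldsLockDemo {
    private static final Object LOCK = new Object();

    public static void main(String[] args) throws InterruptedException {
        new Thread(() -> {
            synchronized (LOCK) {
                try {
                    Thread.sleep(2000);          // sleeps for 2 s while still holding LOCK
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }).start();

        Thread.sleep(100);                       // let the other thread enter the block first
        long start = System.currentTimeMillis();
        synchronized (LOCK) {                    // blocked here: sleep() did not release the lock
            System.out.println("waited ~" + (System.currentTimeMillis() - start) + " ms for LOCK");
        }
    }
}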

Thread.sleep(0)

In a thread, calling sleep(0) sends the current thread straight back to the ready queue (not the wait queue) and gives up whatever remains of its time slice, allowing the operating system to switch to another thread and improving overall efficiency.

The semantics of Thread.yield and Thread.sleep(0) are undefined [JLS 17.9]; the JVM is free to implement them as no-ops or to treat them as scheduling hints. In particular, they are not required to have the semantics of sleep(0) on Unix systems (put the current thread at the end of the run queue for its priority, yielding to other threads of the same priority), although some JVMs do implement yield this way.

Thread.join()

If thread A executes thread.join(), it means that thread A waits for the thread referenced by thread to terminate before returning from thread.join(). join differs from synchronized in that join uses the wait() method internally, whereas the synchronized keyword synchronizes on an object monitor. join also provides two timed overloads, join(long millis) and join(long millis, int nanos), which wait at most the given time before leaving the wait (and releasing the lock) and continuing to run. Internally, they too are built on the wait method.
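
A minimal sketch (names are mine) of waiting for another thread with join():

public class JoinDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> System.out.println("worker finished"));
        worker.start();
        worker.join();          // main waits here until 'worker' terminates
        // worker.join(1000);   // or wait at most 1000 ms
        System.out.println("main continues after worker terminated");
    }
}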

Wait, notify, and notifyAll

wait() can only be called inside a synchronized method or synchronized block. After calling wait(), the current thread releases the lock (unlike sleep and yield). A thread that calls wait stays waiting until another thread calls notify or notifyAll on the same object. Note that being awakened does not mean the lock is acquired immediately: the notifying thread releases the lock only when it exits its synchronized block, and only then can a waiting thread acquire it. If wait() is called without holding the appropriate lock, an IllegalMonitorStateException is thrown; it is a subclass of RuntimeException, so no try-catch statement is required to handle it. notify() wakes up only one (arbitrarily chosen) waiting thread, while notifyAll() wakes up all waiting threads. If no thread has previously called wait on the object, calling notify has no effect. The timed variants wait(long timeout) and wait(long timeout, int nanos) wait at most the given time to be notified; if no notification arrives in time, the thread wakes up anyway and competes for the lock.
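
A small, hedged sketch of the standard wait/notify pattern (names are mine); note the while loop around wait() and that the waiter reacquires the lock only after the notifier leaves its synchronized block:

public class WaitNotifyDemo {
    private static final Object LOCK = new Object();
    private static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (LOCK) {
                while (!ready) {            // always wait in a loop to guard against spurious wakeups
                    try {
                        LOCK.wait();        // releases LOCK while waiting
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                System.out.println("condition met, lock reacquired");
            }
        });
        waiter.start();

        Thread.sleep(100);                  // give the waiter a chance to start waiting
        synchronized (LOCK) {
            ready = true;
            LOCK.notify();                  // the waiter only gets the lock after this block exits
        }
    }
}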

setUncaughtExceptionHandler

When an uncaught exception occurs in a single-threaded program, it can be handled with a try-catch block. In a multithreaded environment, however, exceptions thrown inside other threads cannot be caught by a try-catch in the main thread, which can lead to problems such as system resources not being reclaimed or the current connection not being closed when an exception occurs. Thread's run method cannot throw checked exceptions, but it can be terminated by an unchecked exception, causing the thread to die. The Thread API provides UncaughtExceptionHandler, which lets you detect that a thread has died because of an uncaught exception.

thread.setUncaughtExceptionHandler((t, e) ->
        System.err.println(t.getName() + " terminated with uncaught exception " + e));

We can also set a default UncaughtExceptionHandler for all threads by calling Thread.setDefaultUncaughtExceptionHandler(Thread.UncaughtExceptionHandler eh), a static method of Thread. In a thread pool, only tasks submitted through execute() hand their exceptions to the UncaughtExceptionHandler; for tasks submitted through submit(), any thrown exception, checked or unchecked, is considered part of the task's return status and is delivered through the Future. If both a per-thread handler (setUncaughtExceptionHandler) and a default handler (setDefaultUncaughtExceptionHandler) are configured, the per-thread handler takes precedence and the default handler is ignored.
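
A hedged sketch of these behaviors (class and variable names are mine): the execute() task reaches the default handler, while the submit() task's exception only shows up when Future.get() is called:

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class UncaughtDemo {
    public static void main(String[] args) throws InterruptedException {
        // Default handler for every thread that has no handler of its own.
        Thread.setDefaultUncaughtExceptionHandler((t, e) ->
                System.err.println("default handler: " + t.getName() + " threw " + e));

        ExecutorService pool = Executors.newFixedThreadPool(1);

        // execute(): the exception escapes run() and reaches the uncaught-exception handler.
        pool.execute(() -> { throw new RuntimeException("from execute"); });

        // submit(): the exception becomes part of the task's outcome and is rethrown by Future.get().
        Callable<Void> failing = () -> { throw new RuntimeException("from submit"); };
        Future<Void> future = pool.submit(failing);
        try {
            future.get();
        } catch (ExecutionException e) {
            System.err.println("retrieved via Future.get(): " + e.getCause());
        }
        pool.shutdown();
    }
}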

Shutdown hooks

The JVM can be shut down normally, forcibly, or abnormally. Shutdown hooks can perform specific operations when the JVM shuts down, such as cleaning up a service or application. Shutdown hooks run in the following scenarios: 1. the program (that is, the JVM instance) exits normally; 2. System.exit() is called; 3. the terminal interrupts the program with Ctrl+C; 4. the system shuts down; 5. the JVM goes down with an OutOfMemory error; 6. the process is killed with kill PID (note: the hook is not invoked for kill -9 PID). Usage: Runtime.getRuntime().addShutdownHook(Thread hook).
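
A minimal sketch (class name is mine) of registering a shutdown hook:

public class ShutdownHookDemo {
    public static void main(String[] args) {
        Runtime.getRuntime().addShutdownHook(new Thread(() ->
                System.out.println("shutdown hook: closing resources")));
        System.out.println("main done");   // the hook runs when the JVM exits normally
        // System.exit(0) would also trigger it; 'kill -9' would not.
    }
}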

Finalizers (finalize)

Finalizers: after the collector determines that such objects can be reclaimed, their finalize method is called, which is meant to ensure that persistent resources are released. In most cases you can manage resources better with finally blocks and explicit close methods than with finalizers. The only exception is when you need to manage an object that holds resources obtained through native methods. Even so, because of issues such as object resurrection, you should avoid writing or using classes that have finalizers.

Pipes

Java provides a variety of input/output streams that make it easy to manipulate data. A pipe stream is a special kind of stream used to transfer data directly between threads: one thread writes data to the output pipe and another thread reads it from the input pipe, enabling communication between threads without resorting to things like temporary files. The JDK has four classes for this purpose: PipedInputStream, PipedOutputStream, PipedReader, and PipedWriter. The two ends are connected with code like inputStream.connect(outputStream) or outputStream.connect(inputStream).
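
A small sketch (class name is mine) using the character-stream pair PipedWriter/PipedReader; PipedOutputStream/PipedInputStream work the same way for bytes:

import java.io.IOException;
import java.io.PipedReader;
import java.io.PipedWriter;

public class PipeDemo {
    public static void main(String[] args) throws IOException {
        PipedWriter out = new PipedWriter();
        PipedReader in = new PipedReader();
        out.connect(in);                        // or in.connect(out); either direction works

        Thread writer = new Thread(() -> {
            try (PipedWriter w = out) {
                w.write("hello through the pipe");
            } catch (IOException e) {
                e.printStackTrace();
            }
        });
        writer.start();

        int c;
        while ((c = in.read()) != -1) {         // blocks until the writer produces data
            System.out.print((char) c);
        }
        in.close();
    }
}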

Several modes of communication between processes

  • Pipe: A pipe is a half-duplex communication mode in which data flows only in one direction and can only be used between related processes. Process kinship usually refers to the parent-child process relationship.

  • Named pipe: Named pipe is also a half-duplex communication mode, but it allows communication between unrelated processes.

  • Semaphore: A semaphore is a counter that can be used to control access to a shared resource by multiple processes. It is often used as a locking mechanism to prevent other processes from accessing a shared resource while one process is using it. Therefore, it mainly serves as a means of synchronization between processes and between threads within the same process.

  • Message queue: Message queues are linked lists of messages stored in the kernel and identified by message queue identifiers. Message queues overcome the drawbacks that signals carry little information, that pipes carry only plain byte streams, and that pipe buffer sizes are limited.

  • Signal: A signal is a relatively sophisticated communication mechanism used to notify the receiving process that an event has occurred.

  • Shared memory: Shared memory maps a segment of memory so that it can be accessed by other processes. The shared segment is created by one process but can be accessed by several. Shared memory is the fastest IPC method and was designed specifically to make up for the low efficiency of the other inter-process communication methods. It is often used together with other mechanisms, such as semaphores, to achieve synchronization and communication between processes.

  • Socket: A socket is also an inter-process communication mechanism; unlike the other mechanisms, it can be used between processes on different machines.

Synchronized class and object locks

Class lock: obtained by static synchronized methods or synchronized(Xxx.class) blocks; see method1 and method2 in the code below. Object lock: see method4, method5, and method6.

public class LockStrategy
{
    public Object object1 = new Object();

    public static synchronized void method1(){}
    public void method2(){
        synchronized(LockStrategy.class){}
    }

    public synchronized void method4(){}
    public void method5()
    {
        synchronized(this){}
    }
    public void method6()
    {
        synchronized(object1){}
    }
}

Note that the synchronized blocks in method4 and method5 are also mutually exclusive with each other. Let's do an exercise to deepen our understanding of object locks and class locks. A class is defined as follows:

public class SynchronizedTest
{
    public synchronized void method1(){}
    public synchronized void method2(){}
    public static synchronized void method3(){}
    public static synchronized void method4(){}
}

Now, given two SynchronizedTest instances a and b, which of the following pairs can be executed by two threads at the same time?
A. a.method1() and a.method2()
B. a.method1() and b.method1()
C. a.method3() and b.method4()
D. a.method3() and a.method3()
E. a.method1() and a.method3()
The answer is B and E: B uses the object locks of two different instances, and E uses an object lock and the class lock, which do not exclude each other.

ReentrantLock

ReentrantLock provides the tryLock method. When tryLock is called, if the lock is held by another thread, tryLock returns false immediately; if the lock is not held by another thread, the calling thread acquires the lock and tryLock returns true.

boolean tryLock()
boolean tryLock(long timeout, TimeUnit unit) throws InterruptedException

A ReentrantLock can be constructed as a fair lock, which means that threads waiting for the same lock acquire it in the order in which they requested it. The lock used by synchronized is non-fair, and ReentrantLock is non-fair by default, but you can request a fair lock through the constructor.

ReentrantLock()
ReentrantLock(boolean fair)

Another useful feature of ReentrantLock is the ability to bind multiple Condition objects at the same time for finer-grained synchronization control. ReentrantLock is typically used as follows:

    Lock lock = new ReentrantLock();
    lock.lock();
    try{
    }finally{
        lock.unlock();
    }

Releasing the lock in a finally block ensures that, once acquired, it is eventually released. Do not put the lock acquisition itself inside the try block: if an exception were thrown while acquiring the lock, the finally block would try to release a lock that was never acquired, and unlock() would throw an IllegalMonitorStateException.

There are only two differences between fair and unfair locks

When an unfair (non-fair) lock is requested, the thread first tries to grab the lock with CAS; if the lock happens to be free, the CAS succeeds and the lock is acquired directly. If the CAS fails, the unfair lock enters the tryAcquire method just like a fair lock does. Inside tryAcquire, if the lock has been released (state == 0), the unfair lock again tries to seize it directly with CAS, whereas the fair lock first checks whether any thread is already waiting in the queue; if so, it does not grab the lock but queues up behind them. These are the only two differences between a fair lock and an unfair lock: if both CAS attempts fail, the unfair lock behaves exactly like the fair lock. Unfair locks generally perform better because of their higher throughput, but they make the timing of lock acquisition less predictable and can leave threads in the blocking queue starved for a long time.

synchronized

In Java, every object has two pools, the lock pool and the wait pool:

  • Lock pool (SynchronizedQueue): suppose thread A already owns the lock on an object (not a class) and other threads want to call a synchronized method (or block) of that object. Since these threads must acquire ownership of the lock before they can enter the synchronized method, but the lock is currently owned by thread A, they enter the object's lock pool.

  • Wait pool (WaitQueue): if thread A calls the wait() method of an object, thread A releases the lock on that object (a wait() must occur inside synchronized code, so the thread owns the lock before calling wait()) and enters the object's wait pool. If another thread calls notifyAll() on the same object, all threads in the object's wait pool move to its lock pool, ready to compete for ownership of the lock. If another thread calls notify() on the same object, only one (arbitrarily chosen) thread in the wait pool moves to the lock pool.

A synchronized block compiles to monitorenter and monitorexit instructions, while a synchronized method relies on the ACC_SYNCHRONIZED flag in the method's access flags. Either way, the essence is acquiring an object's monitor, and that acquisition is exclusive: only one thread at a time can hold the monitor of the object protected by synchronized. Every object has its own monitor; when the object's synchronized block or synchronized method is invoked, the executing thread must first acquire the object's monitor before it can enter. Threads that fail to acquire the monitor are blocked at the entrance of the synchronized block or method, and the virtual machine marks them as BLOCKED.

Any thread that wants to access an Object protected by synchronized must first obtain the Object's monitor. If the acquisition fails, the thread enters the synchronization queue and its state becomes BLOCKED. When the thread that holds the lock releases it, the release wakes up the threads blocked in the synchronization queue so they can retry acquiring the monitor.

After the wait method is called, the thread's state changes from RUNNABLE to WAITING or TIMED_WAITING, and the current thread is placed in the object's wait queue. The notify() method moves one thread from the wait queue to the synchronization queue, while notifyAll() moves all of them; the state of a moved thread changes from WAITING to BLOCKED.

The object header of a lock object contains a threadId field. When the first thread accesses the lock and the threadId field is empty, the JVM lets it hold a biased lock and records the thread's ID in the field. On subsequent acquisitions, the JVM checks whether the current thread's ID matches the recorded threadId; if it does, the thread re-enters without performing a full lock acquisition, which improves performance. If lock contention occurs, the biased lock is revoked and upgraded to a lightweight lock; if contention is intense, it is further upgraded to a heavyweight lock.

Condition

A Condition is associated with a Lock just as a condition queue is associated with a built-in lock; to create a Condition, call newCondition() on the associated Lock. Just as Lock offers more functionality than built-in locking, Condition offers more than the built-in condition queue: multiple wait sets per lock, interruptible or non-interruptible condition waits, deadline-based waits, and a choice of fair or unfair queueing. Any number of Condition objects can be created for a single Lock, and a Condition inherits the fairness of its Lock: for a fair lock, threads are released from Condition.await in FIFO order. Note: the Condition counterparts of wait, notify, and notifyAll are await, signal, and signalAll. However, since a Condition instance is itself an Object, it also inherits wait and notify; be sure to use the correct versions, await and signal.

The Condition interface definition:

public interface Condition{
    void await() throws InterruptedException;
    boolean await(long time, TimeUnit unit) throws InterruptedException;
    long awaitNanos(long nanosTimeout) throws InterruptedException;
    void awaitUninterruptibly();
    boolean awaitUntil(Date deadline) throws InterruptedException;
    void signal();
    void signalAll();
}

AQS has a synchronization queue (a CLH queue) that holds the threads waiting to acquire the lock. Here we introduce another concept, the condition queue.

Each Condition has an associated condition queue. We know that a single ReentrantLock instance can produce multiple Condition instances by calling newCondition() several times, for example condition1 and condition2. Note that ConditionObject has only two attributes, firstWaiter and lastWaiter, and its queue is a singly linked list. When thread 1 calls condition1.await(), it is wrapped as a Node, added to condition1's queue, and blocked so it does not execute further. A call to condition1.signal() moves condition1's firstWaiter to the end of the synchronization queue of threads waiting for the lock; once it reacquires the lock, it returns from the await method and continues.
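
A minimal sketch (class and field names are mine) of the await/signal pattern on a Condition tied to a ReentrantLock:

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionDemo {
    private final Lock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();   // one of possibly many conditions
    private boolean available = false;

    public void consume() throws InterruptedException {
        lock.lock();
        try {
            while (!available) {
                notEmpty.await();      // releases the lock, joins this condition's queue
            }
            available = false;
        } finally {
            lock.unlock();
        }
    }

    public void produce() {
        lock.lock();
        try {
            available = true;
            notEmpty.signal();         // moves one waiter to the lock's synchronization queue
        } finally {
            lock.unlock();
        }
    }
}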

ReentrantLock versus synchronized

  • ReentrantLock can obtain the lock interruptively (void lockInterruptibly() throws InterruptedException)

  • ReentrantLock can try to acquire the lock without blocking (boolean tryLock())

  • ReentrantLock supports acquiring the lock with a timeout: tryLock(timeout, unit) tries to acquire the lock and gives up after waiting at most the specified time.

  • ReentrantLock enables fair locking. Implemented with new ReentrantLock(true).

  • ReentrantLock can bind multiple Condition objects at the same time. With synchronized, the wait(), notify(), and notifyAll() methods work with a single implicit condition; if you need to associate more than one condition with the lock, you have to add an extra lock object, whereas ReentrantLock just needs newCondition() to be called multiple times (see the sketch after this list).
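
The sketch below (class and method names are mine) illustrates the timed and interruptible acquisition listed above:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public boolean doWithTimeout() throws InterruptedException {
        // Wait at most 500 ms for the lock instead of blocking indefinitely.
        if (lock.tryLock(500, TimeUnit.MILLISECONDS)) {
            try {
                return true;           // got the lock; do the guarded work here
            } finally {
                lock.unlock();
            }
        }
        return false;                  // gave up without blocking forever
    }

    public void doInterruptibly() throws InterruptedException {
        lock.lockInterruptibly();      // another thread may interrupt us while we wait
        try {
            // guarded work
        } finally {
            lock.unlock();
        }
    }
}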

Lock interface methods:

void lock();
void lockInterruptibly() throws InterruptedException;
boolean tryLock();
boolean tryLock(long time, TimeUnit unit) throws InterruptedException;
void unlock();
Condition newCondition();

Reentrant lock implementation principle

Each lock is associated with an acquisition count and an owner thread. When the count is zero the lock is free; when a thread acquires it, the owner is recorded and the count becomes 1. If the same thread acquires the lock again, the count is incremented instead of the thread blocking, and the lock is truly released only when the count drops back to zero.
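
A small sketch (names are mine) showing reentrancy with ReentrantLock; getHoldCount() exposes the per-thread acquisition count:

import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public void outer() {
        lock.lock();                                  // hold count: 0 -> 1
        try {
            inner();                                  // the same thread re-enters without blocking
            System.out.println("back in outer, hold count = " + lock.getHoldCount()); // 1
        } finally {
            lock.unlock();                            // hold count: 1 -> 0, lock actually released
        }
    }

    private void inner() {
        lock.lock();                                  // hold count: 1 -> 2
        try {
            System.out.println("in inner, hold count = " + lock.getHoldCount());      // 2
        } finally {
            lock.unlock();                            // hold count: 2 -> 1
        }
    }

    public static void main(String[] args) {
        new ReentrancyDemo().outer();
    }
}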

Choose between synchronized and ReentrantLock

Compared with synchronized, ReentrantLock offers a number of extra features: timed lock waits, interruptible lock waits, fair locking, non-blocking lock acquisition, and so on. In terms of performance, ReentrantLock is only slightly better than synchronized as of JDK 6, though it was far better in JDK 5. So why not abandon synchronized? Because ReentrantLock is more dangerous: if you forget to call unlock() in a finally block, the code may appear to work while you have actually planted a ticking time bomb that can hurt other code. ReentrantLock should only be considered when the built-in lock cannot meet your requirements.

Read-write lock ReentrantReadWriteLock

A read-write lock consists of two locks: one for read operations, also called the shared lock, and one for write operations, also called the exclusive lock. Read locks are not mutually exclusive with each other, but read locks exclude write locks and write locks exclude each other. If no thread is writing, multiple reader threads can hold the read lock at the same time, while a writer thread can write only after obtaining the write lock: many threads may read simultaneously, but only one thread may write at a time. (The calls are lock.readLock().lock(), lock.readLock().unlock(), lock.writeLock().lock(), and lock.writeLock().unlock().)

Lock downgrading means turning a write lock into a read lock. If the current thread owns the write lock, releases it, and only then acquires the read lock, that segmented sequence is not lock downgrading. Lock downgrading means acquiring the read lock while still holding the (currently owned) write lock, and only then releasing the (previously owned) write lock. Is acquiring the read lock really necessary during downgrading? Yes, to guarantee visibility of the data: if the current thread released the write lock without first taking the read lock, another thread T could acquire the write lock and change the data, and the current thread would not see T's update. If the current thread acquires the read lock first, following the downgrading steps, thread T is blocked until the current thread finishes using the data and releases the read lock; only then can T take the write lock and update the data. The pattern is sketched below.
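
A hedged sketch of the downgrading sequence described above (class and field names are mine):

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DowngradeDemo {
    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    private int data;

    public int updateAndUse(int newValue) {
        rwl.writeLock().lock();            // 1. acquire the write lock and update
        try {
            data = newValue;
            rwl.readLock().lock();         // 2. acquire the read lock while still holding the write lock
        } finally {
            rwl.writeLock().unlock();      // 3. release the write lock -> downgraded to a read lock
        }
        try {
            return data;                   // 4. use the data under the read lock
        } finally {
            rwl.readLock().unlock();
        }
    }
}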

Happens-before rules

  • Program order rule: If operation A precedes operation B in the program, operation A precedes operation B in the thread.
  • Monitor lock rule: an unlock on a monitor lock happens-before every subsequent lock on that same lock.
  • Volatile variable rule: a write to a volatile variable happens-before every subsequent read of that variable.
  • Thread start rule: a call to Thread.start() happens-before every action in the started thread.
  • Thread termination rule: all actions in a thread happen-before any other thread detects that it has terminated, for example by returning from Thread.join() or by Thread.isAlive() returning false.
  • Thread interrupt rule: a call to Thread.interrupt() happens-before the interrupted thread detects the interrupt (by having InterruptedException thrown or by calling interrupted()/isInterrupted()).
  • Finalizer rule: the end of an object's constructor happens-before the start of its finalizer.
  • Transitivity: If operation A precedes operation B and operation B precedes operation C, it follows that operation A precedes operation C.

Note: a happens-before relationship between two operations does not mean that the Java platform must actually execute them in exactly that order. A reordering is legal as long as its result is the same as the result of executing in the happens-before order. A small example follows.
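
A minimal sketch (names are mine) of the volatile rule combined with program order and transitivity: the write to payload is guaranteed to be visible to the reader once it sees ready == true:

public class HappensBeforeDemo {
    private static int payload;            // plain field, no synchronization of its own
    private static volatile boolean ready; // the volatile write/read creates the happens-before edge

    public static void main(String[] args) {
        new Thread(() -> {
            while (!ready) { /* spin until the volatile read observes true */ }
            // Volatile rule plus transitivity guarantees we see payload == 42 here.
            System.out.println(payload);
        }).start();

        payload = 42;      // happens-before the volatile write (program order rule)
        ready = true;      // volatile write happens-before the subsequent volatile read
    }
}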

Reordering

A means by which compilers and processors reorder instruction sequences to optimize program performance.

as-if-serial

No matter how instructions are reordered, the execution result of a single-threaded program must not change.