Thread

The difference between the start and run methods in Thread

  • Calling the start() method creates a new child thread and starts it
  • Calling run() directly is just an ordinary method call on the Thread object; no new thread is created
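A minimal sketch of the difference (class and method names here are hypothetical): run() executes the body on the caller's thread, while start() hands it to a new child thread.

```java
public class StartVsRun {
    // Records the name of the thread that actually executes the task body.
    static String runOn(boolean useStart) throws InterruptedException {
        final String[] executor = new String[1];
        Thread t = new Thread(() -> executor[0] = Thread.currentThread().getName());
        if (useStart) {
            t.start(); // creates and starts a new child thread
            t.join();  // wait for it to finish
        } else {
            t.run();   // ordinary method call: runs on the caller's thread
        }
        return executor[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("run():   executed on " + runOn(false)); // the calling thread
        System.out.println("start(): executed on " + runOn(true));  // a new child thread
    }
}
```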

What is the relationship between Thread and Runnable

public class Test implements Runnable {

    @Override
    public void run() {
        // task logic goes here; a Thread executing this Runnable calls run()
    }
}

public class Test extends Thread {

    @Override
    public void run() {
        // task logic goes here; overrides Thread's own run()
    }
}
  • Thread is a class that implements the Runnable interface; calling start() is what makes run() execute in a new thread
  • Implementing the Runnable interface is recommended, because a Java class can only extend one superclass (single inheritance)

How to implement the return value of processing thread

  • Use Thread.join() to block the current thread until the child thread finishes, then read the result from shared state
  • Implement the Callable interface and fetch the result through a FutureTask, or by submitting the task to a thread pool and calling Future.get()
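The Callable/FutureTask route can be sketched as follows (class name hypothetical):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

public class CallableResult {
    static int compute() throws Exception {
        Callable<Integer> task = () -> 1 + 2;         // Callable can return a value
        FutureTask<Integer> future = new FutureTask<>(task);
        new Thread(future).start();                   // FutureTask is also a Runnable
        return future.get();                          // blocks until the result is ready
    }

    public static void main(String[] args) throws Exception {
        System.out.println(compute()); // 3
    }
}
```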

Six states of a thread

  • NEW: the thread has been created but start() has not been called
  • RUNNABLE: running, or ready to run and waiting for CPU time
  • BLOCKED: waiting to acquire a monitor lock
  • WAITING: waiting indefinitely, e.g. after wait(), join(), or LockSupport.park()
  • TIMED_WAITING: waiting with a timeout, e.g. sleep(n), wait(n), join(n)
  • TERMINATED: the run() method has finished
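The six states correspond to the java.lang.Thread.State enum (NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, TERMINATED). A minimal sketch observing two of them (class name hypothetical):

```java
public class ThreadStates {
    // Observe two of the six states defined by java.lang.Thread.State.
    static Thread.State[] observe() throws InterruptedException {
        Thread t = new Thread(() -> {});
        Thread.State before = t.getState(); // NEW: created, start() not called
        t.start();
        t.join();                           // wait until the thread finishes
        Thread.State after = t.getState();  // TERMINATED: run() completed
        return new Thread.State[]{before, after};
    }

    public static void main(String[] args) throws InterruptedException {
        for (Thread.State s : Thread.State.values()) {
            System.out.println(s); // prints all six enum constants
        }
        Thread.State[] seen = observe();
        System.out.println("before start: " + seen[0] + ", after join: " + seen[1]);
    }
}
```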

The difference between “sleep” and “wait”

  • sleep is a method of the Thread class; wait is a method of the Object class
  • sleep() can be called anywhere; wait() can only be called inside a synchronized method or synchronized block
  • Thread.sleep only gives up the CPU and does not release any locks the thread holds; Object.wait gives up the CPU and also releases the synchronized monitor lock it already holds
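A sketch of both points (class name and timings hypothetical): wait() outside synchronized fails with IllegalMonitorStateException, and a waiting thread releases its monitor so another thread can take it.

```java
public class SleepVsWait {
    // wait() is only legal while holding the object's monitor;
    // outside synchronized it throws IllegalMonitorStateException.
    static boolean waitOutsideSync() {
        Object lock = new Object();
        try {
            lock.wait(10);
            return true;
        } catch (InterruptedException | IllegalMonitorStateException e) {
            return false;
        }
    }

    // wait() releases the monitor it holds: while the waiter sits in the
    // wait set, another thread can enter a synchronized block on the same lock.
    static boolean waitReleasesLock() throws InterruptedException {
        final Object lock = new Object();
        final boolean[] entered = new boolean[1];
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                try {
                    lock.wait(2000); // timeout is a safety net for this sketch
                } catch (InterruptedException ignored) {
                }
            }
        });
        waiter.start();
        Thread.sleep(100); // best effort: let the waiter reach wait()
        synchronized (lock) { // acquirable because wait() released the lock
            entered[0] = true;
            lock.notify();
        }
        waiter.join();
        return entered[0];
    }

    public static void main(String[] args) throws InterruptedException {
        Thread.sleep(10); // sleep() is legal anywhere and releases no locks
        System.out.println(waitOutsideSync());  // false
        System.out.println(waitReleasesLock()); // true
    }
}
```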

The difference between notify and notifyAll

  • Two concepts: EntryList lock pool, WaitSet wait pool

    • Lock pool (EntryList): suppose thread A already holds the lock of an object, and other threads B and C want to call a synchronized method of that object. Since B and C must acquire ownership of the lock before entering the synchronized method, and the lock is currently held by A, threads B and C block in a place called the lock pool, waiting for the lock to be released

    • Wait pool (WaitSet): if thread A calls wait() on an object, thread A releases the lock on that object and enters the object's wait pool. Threads in the wait pool do not compete for the object's lock

  • notifyAll() moves all threads in the wait pool into the lock pool to compete for the lock

  • notify() moves only one arbitrarily chosen thread from the wait pool into the lock pool to compete for the lock
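A small sketch of notifyAll() draining the wait set (class name and timings hypothetical):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class NotifyAllDemo {
    // Start n threads that wait on the same monitor, wake them with notifyAll(),
    // and count how many resumed. With notify() only one thread would leave the
    // wait set per call; the wait(2000) timeout below is just a safety net.
    static int wakeAll(int n) throws InterruptedException {
        final Object lock = new Object();
        AtomicInteger resumed = new AtomicInteger();
        Thread[] waiters = new Thread[n];
        for (int i = 0; i < n; i++) {
            waiters[i] = new Thread(() -> {
                synchronized (lock) {
                    try {
                        lock.wait(2000);
                    } catch (InterruptedException ignored) {
                    }
                    resumed.incrementAndGet();
                }
            });
            waiters[i].start();
        }
        Thread.sleep(100);    // best effort: let every waiter enter the wait set
        synchronized (lock) {
            lock.notifyAll(); // moves ALL waiters into the lock pool
        }
        for (Thread t : waiters) t.join();
        return resumed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(wakeAll(3)); // 3
    }
}
```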


synchronized

Mutex features

  • Mutual exclusivity:

    • Only one thread is allowed to hold a given object lock at a time; this property implements coordination among multiple threads, so that only one thread at a time can access the synchronized code block (a compound operation). Mutual exclusivity is also called atomicity of the operation
  • Visibility:

    • Changes made to a shared variable before the lock is released must be visible to any thread that subsequently acquires the same lock (that is, the latest value of the shared variable is read when the lock is acquired); otherwise another thread might keep working on a locally cached copy, causing inconsistency

Two uses for obtaining an object lock

  • 1. Synchronized code block (synchronized(this), synchronized(someInstance)): the lock is the object inside the parentheses
public class Test {

    public void doWhat1() {
        synchronized (this) {
            // lock: the current instance
        }
    }

    public void doWhat2() {
        synchronized (Test.class) {
            // lock: the Test Class object (a class lock, see below)
        }
    }
}
  • 2. Synchronized non-static method (synchronized method): the lock is the current instance (this)
public class Test {

    public synchronized void doWhat3() {

    }
}

Two ways to get a class lock

  • 1. Synchronized code block (synchronized(ClassName.class)): the lock is the Class object inside the parentheses
  • 2. Static synchronized method (synchronized static method): the lock is the Class object of the class
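Both forms take the same class lock; a minimal sketch (class name and counter are hypothetical):

```java
public class ClassLockDemo {
    static int calls = 0;

    // Class lock via a synchronized block on the Class object.
    public void viaBlock() {
        synchronized (ClassLockDemo.class) {
            calls++;
        }
    }

    // Class lock via a static synchronized method: locks the same Class
    // object, so it mutually excludes viaBlock() above.
    public static synchronized void viaStaticMethod() {
        calls++;
    }

    public static void main(String[] args) {
        new ClassLockDemo().viaBlock();
        viaStaticMethod();
        System.out.println(calls);
    }
}
```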

Summary of object locking and class locking

  • 1. When a thread accesses a synchronized code block of an object, another thread can access an unsynchronized code block of the object
  • 2. If the same object is locked, when one thread accesses the synchronized code block of the object, another thread accessing the synchronized code block of the object will be blocked
  • 3. If the same object is locked, one thread accessing the synchronization method of the object will block another thread accessing the synchronization method of the object
  • 4. If the same object is locked, one thread accessing the synchronized code block of the object will block another thread accessing the synchronized method of the object, and vice versa
  • 5. Object locks of different objects of the same class do not interfere with each other
  • 6. The class lock is also a special kind of object lock, so it behaves as in points 1, 2, 3, and 4 above. Since a class has only one Class object, different instances of the same class share the same class lock, so class-lock synchronization applies across instances
  • 7. Class locks and object locks do not interfere with each other

Underlying implementation principles of synchronized

The basis for implementing synchronized

    1. The Java object header
    2. Monitor

Spin locks and adaptive spin locks

  • Spin lock: in many cases the locked state of shared data lasts only a short time, so it is not worth suspending and switching threads; instead, the thread executes a busy loop (spins), waiting for the lock to be released without giving up the CPU
  • Disadvantages: There is a lot of performance overhead if the lock is held by other threads for a long time

Adaptive spin lock

The number of spins is no longer fixed, but is determined by the previous spin time on the same lock and the state of the lock owner

Biased locking: Reduces the cost of acquiring locks for the same thread

Core idea: if a thread acquires the lock, the lock enters biased mode and the Mark Word changes to the biased-lock layout. When the same thread requests the lock again, no synchronization operation is needed: acquiring the lock only requires checking that the Mark Word's lock flag indicates a biased lock and that the thread ID stored in the Mark Word equals the current thread's ID, saving a large amount of lock-acquisition work. Biased locking is not suitable for multithreaded scenarios with fierce lock contention

Lightweight lock:

Lightweight locks are upgraded from biased locks. Biased locks work when one thread enters a synchronized block, but when the second thread joins the lock contention, biased locks are upgraded to lightweight locks

If multiple threads contend for the same lock at the same time and spinning fails, the lightweight lock inflates into a heavyweight lock

Comparison of locks

Lock | Advantages | Disadvantages | Usage scenarios
---- | ---------- | ------------- | ---------------
Biased lock | Locking and unlocking need no extra CAS operations; only a nanosecond-level gap compared with executing a non-synchronized method | If threads contend for the lock, there is the extra cost of lock revocation | Only one thread ever accesses the synchronized block or method
Lightweight lock | Competing threads do not block, improving response time | A thread that cannot acquire the lock keeps spinning, consuming CPU | Threads execute the synchronized block alternately
Heavyweight lock | Thread contention does not use spinning and does not consume CPU | Threads block and response time is slow; frequent lock acquisition and release under heavy multithreading brings huge performance overhead | Throughput-oriented scenarios, or synchronized blocks/methods that take a long time to execute

The difference between synchronized and ReentrantLock

  • ReentrantLock is implemented based on AQS (AbstractQueuedSynchronizer)

  • Capable of achieving finer grained control than synchronized

  • Not necessarily superior to synchronized; like synchronized, it is reentrant

  • When ReentrantLock is set to fair, it tends to grant the lock to the thread that has waited the longest

  • Synchronized is an unfair lock

  • Synchronized is the keyword and ReentrantLock is the class

  • ReentrantLock allows you to set the wait time for acquiring locks to avoid deadlocks

  • ReentrantLock can obtain information about various locks

  • ReentrantLock provides flexibility for multiple notifications

  • Synchronized operates on the Mark Word in the object header; ReentrantLock ultimately calls the park() method of the Unsafe class (through LockSupport)
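A sketch of the finer-grained control ReentrantLock offers, using a fair lock and tryLock with a timeout (class name hypothetical):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockDemo {
    private final ReentrantLock lock = new ReentrantLock(true); // fair lock
    private int counter = 0;

    // tryLock with a timeout: back off instead of blocking forever,
    // something synchronized cannot express.
    public boolean incrementWithTimeout() throws InterruptedException {
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                counter++;
                return true;
            } finally {
                lock.unlock(); // always release in finally
            }
        }
        return false; // could not acquire within the timeout
    }

    public int get() {
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        ReentrantLockDemo demo = new ReentrantLockDemo();
        System.out.println(demo.incrementWithTimeout()); // true
        System.out.println(demo.get());                  // 1
    }
}
```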

Java Memory Model (JMM)

The Java Memory Model, or JMM, is an abstract concept that doesn’t really exist. It describes a set of rules or specifications that define how variables in a program (including instance fields, static fields, and the elements that make up array objects) can be accessed

Main memory in the JMM

  • Store Java instance objects
  • This includes member variables, class information, constants, static variables, etc
  • It is an area of data sharing that causes thread safety issues when multiple threads operate concurrently

Working memory in the JMM

  • Stores information about all local variables of the current method, which are not visible to other threads
  • Bytecode line number indicator, Native method information
  • It is a thread-private data area and does not have thread-safety issues

Different conceptual levels of JMM and Java memory region partitioning

  • The JMM describes a set of rules around atomicity, orderliness, and visibility
  • Similarities: There are shared areas and private areas

The types of data stores for main and working memory and how they operate

  • Local variables of the basic data types in the method are stored directly in the stack frame structure of working memory
  • A local variable of a reference type. The reference is stored in working memory and the instance is stored in main memory
  • Member variables, static variables, and class information are stored in main memory
  • The main-memory sharing model is copy-based: each thread copies the data it needs into its working memory, operates on the copy, and then flushes the result back to main memory

Conditions for instruction reordering

  • You cannot change the results of a program run in a single-threaded environment
  • Data dependencies cannot be reordered

Instructions can only be reordered if they cannot be deduced by the happens-before principle

The eight principles of happens-before

  • 1. Program order rule: within a thread, operations written earlier in the code happen before operations written later;
  • 2. Lock rule: an unlock operation happens before a subsequent lock operation on the same lock;
  • 3. Volatile variable rule: a write to a volatile variable happens before subsequent reads of that variable;
  • 4. Transitivity rule: if operation A happens before operation B, and B happens before C, then A happens before C;
  • 5. Thread start rule: the start() method of a Thread object happens before every action of that thread;
  • 6. Thread interrupt rule: a call to Thread.interrupt() happens before the interrupted thread's code detects the interrupt;
  • 7. Thread termination rule: all operations in a thread happen before the detection of that thread's termination; we can wait for a thread to end with Thread.join() and detect that it has terminated with the return value of Thread.isAlive();
  • 8. Object finalization rule: the completion of an object's initialization happens before the start of its finalize() method;

Volatile: Lightweight synchronization mechanism provided by the JVM

  • Ensure that shared variables that are volatile are always visible to all threads
  • Disallows reordering optimization of instructions

Why are volatile variables immediately visible

  • When a volatile variable is written, the JMM flushes the value of the shared variable from the thread's working memory to main memory
  • When a volatile variable is read, the JMM invalidates the thread's working-memory copy, forcing the value to be re-read from main memory

How does Volatile prohibit rearrangement optimizations

  • Memory barriers

  • 1. Ensure the sequence of specific operations

    • Disallow reordering optimization of instructions before and after the memory barrier by inserting a memory barrier instruction
  • 2. Make certain variables visible in memory

    • The cached data of each CPU is forced to be flushed and synchronized, so that any thread on any CPU can read the latest version of the data
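A classic place where this matters is double-checked locking: a minimal sketch (class name hypothetical) where volatile forbids reordering of the allocate/initialize/publish steps inside `new Singleton()`, so no thread can observe a half-constructed object.

```java
public class Singleton {
    // volatile: forbids reordering inside the constructor-publish sequence
    // and makes the published reference visible to all threads.
    private static volatile Singleton instance;

    private Singleton() {
    }

    public static Singleton getInstance() {
        if (instance == null) {                 // first check, no lock
            synchronized (Singleton.class) {
                if (instance == null) {         // second check, with lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }

    public static void main(String[] args) {
        System.out.println(getInstance() == getInstance()); // true
    }
}
```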

CAS (Compare and Swap) is an efficient method to implement thread security

  • Supports atomic update operation, suitable for counters, sequence generators and other scenarios
  • Optimistic locking mechanism, known as lock-free
  • If the CAS operation fails, it is up to the developer to decide whether to continue trying or do something else
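A sketch using AtomicInteger, whose atomic updates are built on CAS (class name hypothetical):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    static int demo() {
        AtomicInteger counter = new AtomicInteger(0);
        // compareAndSet(expected, update) succeeds only if the current
        // value still equals `expected` -- the CAS primitive.
        boolean first = counter.compareAndSet(0, 1);  // true: value becomes 1
        boolean second = counter.compareAndSet(0, 2); // false: value already changed
        // incrementAndGet() retries the CAS in a loop until it wins
        counter.incrementAndGet();
        System.out.println(first + " " + second + " " + counter.get()); // true false 2
        return counter.get();
    }

    public static void main(String[] args) {
        demo();
    }
}
```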

The difference between volatile and synchronized

  • 1. volatile essentially tells the JVM that the value of the variable in working memory is uncertain and must be read from main memory; synchronized locks the variable so that only the current thread can access it, and other threads block until that thread finishes operating on it;
  • 2. volatile can only be used at the variable level; synchronized can be used at the variable, method, and class levels;
  • 3. volatile only guarantees visibility of modifications, not atomicity; synchronized guarantees both visibility and atomicity;
  • 4. volatile does not cause threads to block; synchronized may cause threads to block;
  • 5. variables marked volatile are not reordered by compiler optimizations; variables marked synchronized can be optimized by the compiler;

Java thread pool

Use Executors to create different thread pools to meet the requirements of different scenarios

    1. newFixedThreadPool(int nThreads): a thread pool with a fixed number of worker threads
    2. newCachedThreadPool(): a thread pool for processing a large number of short-lived tasks
    • It tries to cache threads and reuse them; when no cached thread is available, it creates a new worker thread
    • If a thread stays idle longer than a threshold, it is terminated and removed from the cache
    • When the pool stays idle for a long time, it consumes no resources
    3. newSingleThreadExecutor(): creates a single worker thread to execute tasks; if that thread dies abnormally, a new one takes its place
    4. newSingleThreadScheduledExecutor() and newScheduledThreadPool(int corePoolSize): scheduled or periodic task execution; the difference between the two is a single worker thread versus multiple threads
    5. newWorkStealingPool(): internally builds a ForkJoinPool that processes tasks in parallel using a work-stealing algorithm, with no guarantee of processing order
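A sketch of the most common factory, newFixedThreadPool, combined with Callable tasks (class name hypothetical):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolDemo {
    static int sumOnPool() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2); // 2 worker threads
        Future<Integer> a = pool.submit(() -> 1 + 2); // Callable tasks return values
        Future<Integer> b = pool.submit(() -> 3 + 4);
        int result = a.get() + b.get(); // blocks until both tasks finish
        pool.shutdown();                // no new tasks; existing ones complete
        return result;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumOnPool()); // 10
    }
}
```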

Work-stealing algorithm: A thread Stealing tasks from another queue to execute

Fork/Join framework: a framework that divides large tasks into several small tasks for parallel execution and finally summarizes the results of each small task to obtain the results of large tasks

Three Executor interfaces for J.U.C

  • Executor: A simple interface to run new tasks that decouples task submission from task execution details
  • ExecutorService: Provides methods to manage the executor and task life cycle, and a more complete task submission mechanism
  • ScheduledExecutorService: supports Futures and scheduled or periodic task execution

ThreadPoolExecutor constructor

  • corePoolSize: the number of core threads
  • maximumPoolSize: the maximum number of threads that can be created when the core threads and the queue are not enough
  • keepAliveTime: how long the pool keeps idle (non-core) threads alive
  • workQueue: the queue that holds waiting tasks
  • threadFactory: creates new worker threads
  • handler: the saturation (rejection) policy of the thread pool
    • AbortPolicy: throws a RejectedExecutionException directly; the default policy
    • CallerRunsPolicy: runs the task in the caller's own thread
    • DiscardPolicy: silently discards the rejected task
    • DiscardOldestPolicy: discards the oldest task at the head of the queue, then tries to submit the current task again
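The parameters above map directly onto the ThreadPoolExecutor constructor; a minimal sketch (class name and the chosen values are hypothetical):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ManualPool {
    static ThreadPoolExecutor build() {
        return new ThreadPoolExecutor(
                2,                                  // corePoolSize
                4,                                  // maximumPoolSize
                60, TimeUnit.SECONDS,               // keepAliveTime for idle non-core threads
                new ArrayBlockingQueue<>(10),       // workQueue: bounded task queue
                Executors.defaultThreadFactory(),   // threadFactory
                new ThreadPoolExecutor.CallerRunsPolicy()); // saturation policy
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = build();
        System.out.println(pool.getCorePoolSize());    // 2
        System.out.println(pool.getMaximumPoolSize()); // 4
        pool.shutdown();
    }
}
```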

Status of the thread pool

  • RUNNING: Can accept newly submitted tasks and also process tasks in a blocking queue
  • SHUTDOWN: New submitted tasks are no longer accepted, but existing tasks can be processed
  • STOP: No longer accepts newly submitted tasks or processes existing tasks
  • TIDYING: all tasks have terminated and the worker count is zero
  • TERMINATED: entered after the terminated() hook method has finished executing