How Java threads are created
- Thread class inheritance
- Implement the Runnable interface
- Implement threads with return values through ExecutorService and Callable
- Based on thread pool
Thread class inheritance
A thread can be created by inheriting the Thread class: create a subclass of Thread, override the run method with the thread's logic, then instantiate the subclass and call its start method. The start method is a native method that creates a thread through the operating system's interface and eventually executes the run method, starting the thread. The code inside the run method is the implementation logic of the thread class.
```java
// Step 1: define the thread class
public class NewThread extends Thread {
    public void run() {
        System.out.println("This is a new thread");
    }
}
// Step 2: instantiate the thread object
NewThread newThread = new NewThread();
// Step 3: call the start method
newThread.start();
```
Implement the Runnable interface
If a class already extends a parent class, it cannot also extend Thread. In this case, a thread can be created by implementing the Runnable interface: create a class (here, ChildrenClassThread) that implements Runnable, instantiate it, pass the instance to a new Thread object, and start the thread with the Thread's start method.
```java
// Step 1: define the class implementing Runnable
public class ChildrenClassThread extends SuperClass implements Runnable {
    public void run() {
        System.out.println("this is a new thread");
    }
}
// Step 2: instantiate ChildrenClassThread
ChildrenClassThread childrenClassThread = new ChildrenClassThread();
// Step 3: instantiate a Thread, passing in the Runnable
Thread thread = new Thread(childrenClassThread);
// Step 4: start the thread
thread.start();
```
Implement threads with return values through ExecutorService and Callable<Class>
The Callable interface is used when you need to start multiple threads to perform a task and then collect and aggregate the results they return. Create a class that implements the Callable interface and its call method, which contains the concrete computation logic and returns a result. Then create a thread pool and a list of Future objects to receive the results, submit each Callable task to the pool, and store the Future returned for each task. When the threads finish, traverse each Future in the list and call its get method to retrieve the data returned by the Callable task and aggregate the results.
```java
// Step 1: define a Callable implementation
public class MyCallable implements Callable<String> {
    private String name;
    public MyCallable(String name) {
        this.name = name;
    }
    @Override
    public String call() throws Exception {
        return name;
    }
}
// Step 2: create a thread pool
ExecutorService pool = Executors.newFixedThreadPool(5);
// Step 3: create a list to hold the Future results
List<Future> list = new ArrayList<>();
for (int i = 0; i < 5; i++) {
    // Step 4: create a Callable task
    Callable c = new MyCallable(i + "");
    // Step 5: submit the task and keep its Future
    Future future = pool.submit(c);
    list.add(future);
}
// Step 6: close the thread pool and collect the results
pool.shutdown();
for (Future future : list) {
    System.out.println(future.get().toString());
}
```
Based on thread pool
Threads are a valuable resource, and creating a thread every time one is needed and destroying it after it runs wastes resources. Instead, threads can be reused through a caching strategy by creating a thread pool.
```java
ExecutorService pool = Executors.newFixedThreadPool(10);
for (int i = 0; i < 10; i++) {
    pool.execute(new Runnable() {
        @Override
        public void run() {
            System.out.println(Thread.currentThread().getName() + " is running");
        }
    });
}
```
How thread pools work
The main functions of the thread pool are thread reuse, thread resource management, and control of the maximum concurrency of the operating system to ensure that the system runs efficiently (through thread resource reuse) and safely (by controlling the maximum number of concurrent threads).
Core components and core classes for thread pools
A Java thread pool consists of the following four core components
- Thread pool manager: Used to create and manage threads
- Worker thread: a thread in a thread pool that performs a specific task
- Task interface: Used to define scheduling and execution policies for worker threads. Only threads that implement this interface can be scheduled by the thread pool
- Task queue: Stores pending tasks. New tasks are continuously added to the queue, and completed tasks are removed from it
Java thread pool workflow
When a thread pool is first created, it only requests the thread resources needed to manage the task queue and the thread pool itself. When the execute method is called to add a task:
- If the number of running threads is less than corePoolSize (the user-defined number of core threads), the thread pool immediately creates a thread and executes the task
- If the number of running threads is greater than or equal to corePoolSize, the task is placed in the blocking queue
- If the blocking queue is full and the number of running threads is less than maximumPoolSize, the thread pool creates a non-core thread to execute the task immediately
- If the blocking queue is full and the number of running threads is greater than or equal to maximumPoolSize, the thread pool rejects the task and throws a RejectedExecutionException
- When a thread task completes, the task is removed from the thread pool queue, and the thread pool picks up the next task from the queue to continue execution
- When a thread has been idle for longer than keepAliveTime and the number of running threads exceeds corePoolSize, the thread is considered idle and is stopped
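The workflow above maps directly onto the parameters of the ThreadPoolExecutor constructor. A minimal sketch (the pool sizes, queue capacity, and task count below are arbitrary example values chosen to trigger each step):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolWorkflowDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                              // corePoolSize
                4,                              // maximumPoolSize
                60, TimeUnit.SECONDS,           // keepAliveTime for non-core threads
                new ArrayBlockingQueue<>(2));   // bounded blocking queue
        // Submitting 6 tasks: 2 run on core threads, 2 wait in the queue,
        // and the last 2 trigger creation of non-core threads (up to maximumPoolSize)
        for (int i = 0; i < 6; i++) {
            pool.execute(() -> {
                try { Thread.sleep(100); } catch (InterruptedException ignored) { }
            });
        }
        System.out.println(pool.getPoolSize()); // 4: core threads + non-core threads
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```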
Thread pool rejection policy
If the maximum number of threads is in use and the blocking queue is full, the thread pool's resources are exhausted and it has no threads left to execute new tasks. To protect the system, the thread pool handles newly submitted tasks through a rejection policy. The rejection policies built into the JDK are:
- AbortPolicy: Directly throws a RejectedExecutionException, preventing the system from continuing normally
- CallerRunsPolicy: Runs the rejected task in the caller's thread, provided the thread pool has not been shut down
- DiscardOldestPolicy: Removes the oldest task from the queue and attempts to submit the current task again
- DiscardPolicy: Discards the current task without doing any processing
- Custom rejection policy: Implement the RejectedExecutionHandler interface to define your own rejection policy
```java
// AbortPolicy
public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    throw new RejectedExecutionException("Task " + r.toString() + " rejected from " + e.toString());
}
// CallerRunsPolicy
public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    if (!e.isShutdown()) {
        r.run();
    }
}
// DiscardOldestPolicy
public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    if (!e.isShutdown()) {
        e.getQueue().poll();  // discard the oldest task
        e.execute(r);         // try to execute the current task
    }
}
// DiscardPolicy
public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
    // do nothing
}
// Custom policy: discard the oldest N tasks
public class DiscardOldestNPolicy implements RejectedExecutionHandler {
    private int discardNumber = 5;
    private List<Runnable> discardList = new ArrayList<>();
    public DiscardOldestNPolicy(int discardNumber) {
        this.discardNumber = discardNumber;
    }
    public void rejectedExecution(Runnable r, ThreadPoolExecutor e) {
        if (e.getQueue().size() > this.discardNumber) {
            e.getQueue().drainTo(discardList, discardNumber);
            discardList.clear();
            if (!e.isShutdown()) {
                e.execute(r);
            }
        }
    }
}
```
Five common thread pools
The ExecutorService interface has multiple implementation classes that can be used to create different thread pools
Name | Description |
---|---|
newCachedThreadPool | Cacheable thread pools that reuse threads if they are available when new threads are created. For tasks with short execution time, threads can be reused to a large extent, thus improving system performance |
newFixedThreadPool | Fixed size thread pool |
newScheduledThreadPool | A pool of thread scheduling tasks that can be set to execute a task after a given delay or on a regular basis |
newSingleThreadExecutor | A single thread pool that guarantees that there is always one and only one thread available |
newWorkStealingPool | A work-stealing pool that maintains enough threads for parallel computation; added in JDK 1.8 |
```java
ScheduledExecutorService scheduledExecutionPool = Executors.newScheduledThreadPool(5);
// 1: execute a task after a 5-second delay
scheduledExecutionPool.schedule(new Runnable() {
    @Override
    public void run() {
        System.out.println("executed after a 5-second delay");
    }
}, 5, TimeUnit.SECONDS);
// 2: execute a task after a 1-second delay, then once every 3 seconds
scheduledExecutionPool.scheduleAtFixedRate(new Runnable() {
    @Override
    public void run() {
        // do something
    }
}, 1, 3, TimeUnit.SECONDS);
```
The life cycle of a thread
new
Creating a thread with the new keyword puts it in the new state
ready
Calling the start method moves the thread into the ready state
run
A thread in the ready state waits to acquire CPU resources; once it obtains a CPU time slice, it enters the running state
A running thread re-enters the ready state when it calls the yield method or loses its processor resource
blocking
A running thread enters the blocked state when it calls the sleep method, blocks on I/O, waits for a synchronization lock, waits for a notification, calls the suspend method, and so on
A blocked thread re-enters the ready state when its sleep time expires, the I/O method returns, it obtains the synchronization lock, it receives a notification, resume is called, and so on
death
A thread in the running state enters the dead state when its run or call method completes normally, when stop is called on it, or when it exits abnormally because of an execution error
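These transitions can be observed through Thread.getState (note that the JVM's Thread.State enum names differ slightly from the classic model above: the ready and running states are both reported as RUNNABLE). A small sketch:

```java
public class StateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) { }
        });
        System.out.println(t.getState()); // NEW: created but not started
        t.start();
        Thread.sleep(50);                 // give the thread time to reach sleep
        System.out.println(t.getState()); // TIMED_WAITING: sleeping
        t.join();                         // wait for the thread to die
        System.out.println(t.getState()); // TERMINATED
    }
}
```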
The basic method of threading
Basic thread-related methods include wait, notify, notifyAll, sleep, join, yield, interrupt, etc.
Thread wait: Wait method
A thread that calls the wait method enters the WAITING state and does not return until another thread notifies or interrupts it. Calling wait releases the object's lock, so wait is typically used in synchronized methods or synchronized code blocks
Thread sleep: The sleep method
The thread that calls sleep goes to sleep. Unlike wait, sleep does not release the currently held lock
Thread concession: yield method
Invoking the yield method causes the current thread to give up CPU resources and recompete with other threads for CPU slices
Thread interrupts: Interrupt method
The interrupt method is used to issue a termination signal to a thread that affects an interrupt flag bit inside the thread. The thread itself does not change state when the interrupt method is invoked.
- Calling the interrupt method does not interrupt a running thread; it only sets the internally maintained interrupt flag
- If a thread is in the TIMED_WAITING state because it called the sleep method, calling interrupt throws an InterruptedException, ending the timed wait early
- Many methods that declare InterruptedException, such as Thread.sleep(long millis), clear the interrupt flag before throwing the exception, so calling isInterrupted after the exception returns false
- The interrupt status is a flag built into every thread and provides a safe way to terminate a thread
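A minimal sketch of the flag-clearing behavior described above (the 10-second sleep is just a stand-in for blocking work):

```java
public class InterruptDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(10_000);   // thread is TIMED_WAITING here
            } catch (InterruptedException e) {
                // sleep cleared the flag before throwing the exception
                System.out.println("interrupted flag: " + Thread.currentThread().isInterrupted());
            }
        });
        t.start();
        Thread.sleep(100);   // let the thread enter sleep
        t.interrupt();       // sets the flag; sleep throws InterruptedException
        t.join();
    }
}
```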
Thread join: join method
The join method is used to wait for other threads to terminate. If the join method of one thread is called in the current thread, the current thread becomes blocked. When the other thread finishes, the current thread changes from blocked state to ready state and waits for the CPU to use it
```java
System.out.println("ChildThread start...");
ChildThread childThread = new ChildThread();
childThread.start();
childThread.join();  // the main thread blocks until childThread finishes
System.out.println("ChildThread end, main thread continues...");
```
Thread wake up: notify method
We typically call the wait method on an object to wait on that object's monitor; a call to notify wakes one of the waiting threads. The awakened thread cannot resume execution until the current thread gives up the object's lock, after which it competes with the other threads synchronizing on the object in the usual way
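A minimal wait/notify sketch (the lock object and ready flag are illustrative names):

```java
public class WaitNotifyDemo {
    private static final Object lock = new Object();
    private static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                while (!ready) {        // guard against spurious wakeups
                    try {
                        lock.wait();    // releases the lock while waiting
                    } catch (InterruptedException ignored) { }
                }
                System.out.println("waiter woke up");
            }
        });
        waiter.start();
        Thread.sleep(100);              // let the waiter block first
        synchronized (lock) {
            ready = true;
            lock.notify();              // the waiter resumes after we release the lock
        }
        waiter.join();
    }
}
```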
The difference between sleep and Wait methods
- The sleep method belongs to the Thread class and the wait method belongs to the Object class
- The sleep thread does not release the object lock, while the wait thread releases the object lock and needs to wake up with notify
The difference between the start method and the run method
- The start method is used to start a thread that is ready and not running
- The run method, also called the method body, contains the logic of the thread to be executed. When the run method is called, the thread enters the running state
Four ways to terminate a thread
1. The run method completes normally
2. Exit via an exit flag, such as a shared variable that controls the loop condition
3. Terminate with the interrupt method:
3.1 When a thread is blocked, calling its interrupt method causes an InterruptedException to be thrown; the code catches the exception and then breaks out of the state-checking loop
3.2 When a thread is not blocked, use the isInterrupted method to check the thread's interrupt flag and exit the loop
4. Terminate with the stop method: unsafe
When Thread.stop is called, the target thread raises a ThreadDeath error and releases all the locks it holds. This may leave the data those locks protected in an inconsistent state, and other threads using that data may then cause program errors
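A sketch of the exit-flag approach (method 2), which avoids stop entirely; the class and field names are illustrative:

```java
public class StopByFlagDemo implements Runnable {
    private volatile boolean exit = false;   // exit flag, visible across threads

    @Override
    public void run() {
        long count = 0;
        while (!exit && !Thread.currentThread().isInterrupted()) {
            count++;                         // the working loop
        }
        System.out.println("worker exited cleanly");
    }

    public static void main(String[] args) throws InterruptedException {
        StopByFlagDemo task = new StopByFlagDemo();
        Thread worker = new Thread(task);
        worker.start();
        Thread.sleep(100);
        task.exit = true;                    // or: worker.interrupt();
        worker.join();
    }
}
```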
In Java lock
Java locks are used to ensure data consistency in multiple concurrent threads
Before a thread uses an object or method that requires a lock, it checks whether the lock is held by another thread. If so, it must wait until the holder finishes and releases the lock before it can acquire the lock itself and operate on the object. This guarantees that at any moment only one thread holds the object's lock and modifies the object, which ensures data safety
From the perspective of optimism versus pessimism, locks can be divided into optimistic locks and pessimistic locks; from the fairness of resource acquisition, into fair locks and unfair locks; from whether they can be shared, into shared locks and exclusive locks; and from the lock state, into biased locks, lightweight locks, and heavyweight locks. The JVM also cleverly designed spin locks to make faster use of CPU resources
Optimistic locking
Optimistic locking handles data with an optimistic attitude: each time data is read, it assumes no one else will modify it, so it does not lock. On update it checks whether anyone modified the data in the meantime. The concrete process is: compare the current version number with the version number read last time; if they are the same, perform the update; if not, re-read, compare again, and retry the write operation
Optimistic locking in Java is mostly implemented using CAS (compare-and-swap) operations. CAS is an atomic update operation that compares the current value with the expected value before writing the data; if they are the same, the update is performed
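The CAS semantics can be demonstrated with AtomicInteger.compareAndSet, one of the JDK's CAS-based classes:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(10);
        // CAS succeeds only when the current value equals the expected value
        boolean updated = value.compareAndSet(10, 11);
        System.out.println(updated + " " + value.get());   // true 11
        // A second CAS with a stale expected value fails and changes nothing
        updated = value.compareAndSet(10, 12);
        System.out.println(updated + " " + value.get());   // false 11
    }
}
```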
Pessimistic locking
Pessimistic locking handles data with a pessimistic attitude: it assumes the data will be modified concurrently every time it is read or written, so it locks on every access. Other threads that want to read or write the data then block and wait until they acquire the lock
Pessimistic locks in Java are mostly implemented on top of the AQS (AbstractQueuedSynchronizer) framework. AQS defines a synchronization framework for multithreaded access to shared resources, and many lock implementations rely on it, such as ReentrantLock, Semaphore, and CountDownLatch. In this framework, a thread first tries to acquire the lock with an optimistic CAS operation; if that fails, it falls back to pessimistic blocking
spinlocks
The idea behind spin locks: if the thread holding the lock will release it in a very short time, a thread waiting for the lock does not need to switch between kernel and user mode to enter a blocked, waiting state. It only needs to wait a moment (to spin), and it can acquire the lock immediately after the holder releases it, avoiding the time cost of thread state switches
If a thread spins for a long time without acquiring the lock, CPU is wasted; in some cases it may never acquire the lock, wasting CPU permanently. Therefore a maximum spin waiting time must be set; beyond that time, the thread exits spin mode and suspends, yielding the CPU
- Advantages: Reduces CPU context switching; for code blocks that hold locks briefly or have little contention, performance improves significantly
- Disadvantages: When the thread holding the lock holds it for too long or contention is fierce, threads spin for a long time and waste CPU; spin locks are not suitable when lock dependencies are complex
Time threshold
In JDK 1.5 the spin time is a fixed value; JDK 1.6 introduced adaptive spin locks, where the spin time is no longer fixed but is determined by the previous spin time on the same lock and the state of the lock owner. The duration of one thread context switch can be considered the most appropriate spin time
synchronized
The synchronized keyword provides thread-safe operations for Java objects, methods, and code blocks. Synchronized is an exclusive pessimistic lock, as well as a reentrant lock. With synchronized modifiers, only one thread can manipulate the object at a time; When this keyword is used to modify a method or code block, only one thread can execute the method or code block at a time. Other threads can access the object or execute the method or code block only after the current thread completes execution and releases the lock resource.
Every object in Java has an associated monitor object, and locking competes for that monitor. Locking a code block is implemented with the monitorenter and monitorexit bytecode instructions inserted before and after it; locking a method is indicated by a flag bit (ACC_SYNCHRONIZED) on the method
Scope of synchronized
- On a member variable or non-static method, synchronized locks the current instance (this)
- On a static method, synchronized locks the Class object
- On a code block, synchronized locks the object named in the parentheses of the block
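The three scopes can be sketched as follows (class and method names are illustrative):

```java
public class SyncScopeDemo {
    private int counter = 0;
    private static int staticCounter = 0;

    // Locks the current instance (this)
    public synchronized void increment() {
        counter++;
    }

    // Locks the Class object (SyncScopeDemo.class)
    public static synchronized void incrementStatic() {
        staticCounter++;
    }

    // Locks exactly the object named in the parentheses
    public void incrementWithBlock() {
        synchronized (this) {
            counter++;
        }
    }

    public static void main(String[] args) {
        SyncScopeDemo demo = new SyncScopeDemo();
        demo.increment();
        demo.incrementWithBlock();
        SyncScopeDemo.incrementStatic();
        System.out.println(demo.counter + " " + staticCounter);  // 2 1
    }
}
```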
The realization principle of synchronized
Synchronized includes:
- ContentionList: lock contention queue; all threads requesting the lock are placed in this queue
- EntryList: candidate list; threads in the ContentionList that qualify as candidates to compete for the lock move to the EntryList
- WaitSet: wait set; a blocked thread enters the WaitSet after calling the wait method
- OnDeck: the contending candidate; at most one thread is contending for the lock at any one time, and that thread's state is called OnDeck
- Owner: the thread that has won the lock is in the Owner state
- !Owner: after an Owner thread releases the lock, its state becomes !Owner
Synchronized spins when it receives a new lock request, and if it does not acquire a lock by spinning, it is placed in the ContentionList
To prevent a large number of concurrent CAS operations on the ContentionList from hurting performance, when the Owner thread releases the lock it moves part of the ContentionList into the EntryList. The Owner does not pass the lock directly to the OnDeck thread; instead it gives OnDeck the right to compete for the lock again. This operation is called "competitive switching"; it sacrifices some fairness but improves performance
The OnDeck thread that has acquired the lock resource is converted to the Owner thread, while the thread that has not acquired the lock resource stays in the EntryList
An Owner thread that calls wait is blocked and enters the WaitSet until it is woken by notify, after which it re-enters the EntryList
The ContentionList, EntryList, and WaitSet threads are all blocked, and this blocking is done by the operating system
When the Owner thread releases the lock, its state changes to !Owner
Why is synchronized unfair
- Before a thread enters the ContentionList, it first tries to acquire the lock by spinning; only if that fails does it join the ContentionList, which is unfair to the threads already waiting in the queue
- A thread that acquires the lock by spinning can preempt the lock from the OnDeck thread
In JDK 1.6, synchronized introduced adaptive spinning, lock elimination, biased locking, lightweight locks, and heavyweight locks to improve locking efficiency. A lock can be upgraded from a biased lock to a lightweight lock and then to a heavyweight lock, a process called lock inflation. Biased locking and lightweight locking are enabled by default in JDK 1.6
ReentrantLock
ReentrantLock implements the Lock interface and the methods defined in it; it is a reentrant exclusive lock. ReentrantLock acquires and releases the lock through a custom synchronizer queue (AQS)
An exclusive lock is a lock that can only be held by one thread at a time, and other threads can only wait in the queue. Reentrant locking refers to the operation that allows the same thread to repeatedly lock the same object
ReentrantLock supports both fair and unfair locks. Fairness refers to the mechanism by which competing threads acquire the lock
ReentrantLock not only provides the same lock operations as synchronized, but also provides features that help avoid multithreaded deadlocks, such as interruptible lock waits, pollable lock requests, and timed locks
Using ReentrantLock
ReentrantLock is used explicitly: when to lock and when to release is under the programmer's control. Define a ReentrantLock, call the lock method where the lock is needed, and when the resource is no longer in use, call the unlock method to release the lock
```java
public class ReentrantLockDemo implements Runnable {
    public static ReentrantLock lock = new ReentrantLock();
    public static int i = 0;

    @Override
    public void run() {
        for (int j = 0; j < 10; j++) {
            lock.lock();
            // lock.lock(); // a reentrant lock may be acquired again by the same thread
            try {
                i++;
            } finally {
                lock.unlock(); // release in finally so the lock is always freed
                // lock.unlock(); // each lock() needs a matching unlock()
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ReentrantLockDemo reentrantLockDemo = new ReentrantLockDemo();
        Thread thread = new Thread(reentrantLockDemo);
        thread.start();
        thread.join();
    }
}
```
Lock and lock release times should be equal
If the lock is locked more times than the lock is released, the thread holds the lock forever and no other thread can acquire it
If the lock is released more times than it is acquired, a java.lang.IllegalMonitorStateException is thrown
How does ReentrantLock avoid deadlocks: response interrupts, pollable locks, timed locks
1. Response to interruption: with synchronized, a thread that tries to acquire a lock either gets it and continues, or keeps waiting; ReentrantLock provides an interruptible lock, so a thread waiting for the lock can cancel its request as needed
2. Pollable lock: boolean tryLock() acquires the lock and returns true if one is available, otherwise returns false immediately
3. Timed lock: boolean tryLock(long time, TimeUnit unit) throws InterruptedException acquires the lock with a time limit. If the lock becomes available within the given time and the thread is not interrupted, the lock is acquired and true is returned; if no lock is available, the current thread is disabled for scheduling and lies dormant until one of three things happens:
3.1 The current thread acquires an available lock and returns true
3.2 The current thread had its interrupt flag set before entering the method, or is interrupted while acquiring the lock; an InterruptedException is thrown and the thread's interrupted status is cleared
3.3 The given time elapses and false is returned; if the given time is less than or equal to 0, the method does not wait at all
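A sketch of the timed tryLock in action; here the main thread deliberately holds the lock so the poll times out:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock();  // the main thread holds the lock
        Thread t = new Thread(() -> {
            try {
                // Poll: give up after 200 ms instead of blocking forever
                if (lock.tryLock(200, TimeUnit.MILLISECONDS)) {
                    try {
                        System.out.println("lock acquired");
                    } finally {
                        lock.unlock();
                    }
                } else {
                    System.out.println("gave up waiting");
                }
            } catch (InterruptedException e) {
                System.out.println("interrupted while waiting");
            }
        });
        t.start();
        t.join();       // main never releases in time, so tryLock times out
        lock.unlock();
    }
}
```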
Fair locks and unfair locks
A fair lock means the mechanism for distributing and competing for the lock is fair: first come, first served. An unfair lock follows a random, "nearest-first" allocation mechanism
ReentrantLock selects the lock type through a parameter passed to its constructor. By default it implements an unfair lock, because an unfair lock performs significantly better than a fair lock
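The constructor parameter mentioned above can be checked with isFair:

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    public static void main(String[] args) {
        ReentrantLock unfair = new ReentrantLock();     // default: unfair lock
        ReentrantLock fair = new ReentrantLock(true);   // fairness parameter
        System.out.println(unfair.isFair() + " " + fair.isFair());  // false true
    }
}
```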
The difference between tryLock, Lock and lockInterruptibly
- The tryLock method acquires the lock immediately and returns true if one is available, otherwise it returns false immediately. A time limit can also be given to tryLock; it returns false if the lock has not been acquired within that time
- The lock method acquires the lock immediately if one is available; otherwise it blocks and waits until the lock becomes available
- lockInterruptibly throws an InterruptedException if the thread is interrupted while waiting for the lock; lock does not respond to interruption
Comparison of Synchronized and ReentrantLock
Common points:
- Both are used to control access by multiple threads to shared objects
- Both are reentrant locks
- Both guarantee visibility and mutual exclusion
Differences:
- ReentrantLock acquires and releases the lock explicitly; synchronized does so implicitly. To avoid a situation where the lock is never released because of an exception, ReentrantLock must be released in a finally block
- ReentrantLock is more flexible; for example, it can respond to interrupts
- ReentrantLock is implemented at the API level; synchronized at the JVM level
- ReentrantLock can be configured as a fair lock
- ReentrantLock can bind multiple conditions through Condition objects
- Synchronized is blocking synchronization and adopts a pessimistic concurrency strategy; ReentrantLock is non-blocking synchronization and adopts an optimistic concurrency strategy
- Lock is an interface; synchronized is a keyword
- With Lock you can check whether the lock was acquired successfully; with synchronized you cannot
- Lock can improve the efficiency of concurrent reads by defining separate read and write locks
Semaphore
Semaphore is a counting-based semaphore. A threshold (number of permits) can be set when a Semaphore object is created. Multiple threads compete for permits; after acquiring one, a thread executes its business logic and releases the permit when done. When the number of competing threads exceeds the threshold, newly arriving threads requesting a permit are blocked until another permit is released
```java
Semaphore semp = new Semaphore(5);
try {
    semp.acquire();   // acquire a permit; blocks if none are available
    // ... business logic ...
} catch (Exception e) {
    // handle the exception
} finally {
    semp.release();   // release the permit
}
```
AtomicInteger
AtomicInteger is an Integer class that provides atomic operations. Similar atomic classes include AtomicBoolean, AtomicLong, and so on. For simple operations its performance can be several times that of synchronized and ReentrantLock
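A minimal sketch showing why AtomicInteger needs no lock: two threads increment concurrently and no update is lost:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                counter.incrementAndGet();  // atomic increment, no lock needed
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(counter.get());  // always 20000, never a lost update
    }
}
```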
Read-write lock
Read/write locks are classified into read locks and write locks. Multiple read locks are not mutually exclusive; a read lock and a write lock are mutually exclusive, and write locks are mutually exclusive with each other. This allows concurrent reads while keeping writes exclusive
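A minimal ReentrantReadWriteLock sketch, the JDK's standard read-write lock implementation (field and method names are illustrative):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwLockDemo {
    private static final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private static int data = 0;

    public static int read() {
        rwLock.readLock().lock();    // multiple readers may hold this at once
        try {
            return data;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public static void write(int value) {
        rwLock.writeLock().lock();   // exclusive: blocks both readers and writers
        try {
            data = value;
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        write(42);
        System.out.println(read());  // 42
    }
}
```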
Heavyweight and lightweight locks
Heavyweight locks are based on the mutex implemented by the operating system. They cause the process to switch between user mode and kernel mode, which is relatively expensive
Lightweight lock is relative to heavyweight lock. The core design of lightweight lock is to reduce the use of heavyweight lock to improve the efficiency of the system under the premise of no multi-thread competition. Lightweight locks are suitable for cases where threads alternate executing blocks of code. If multiple threads are competing for the same lock at the same time, the lightweight lock will swell into a heavyweight lock
Biased locking
The purpose of biased lock is to minimize the execution path of lightweight lock when the same thread obtains a lock for many times, because the acquisition and release of lightweight lock requires multiple CAS atomic operations, while the switch of biased lock requires only one CAS operation
As lock contention intensifies, a lock may be upgraded from a biased lock to a lightweight lock and then to a heavyweight lock; in Java, locks are only upgraded in this one direction and are not downgraded
Segmented lock
Segmented locking is not an actual lock, but rather the idea of segmenting data and placing a separate lock on each segment, further refining granularity to improve concurrency. ConcurrentHashMap is implemented internally with segmented locking (in JDK 1.7 and earlier)
How to optimize locks
- Reduce the duration for which locks are held
- Reduce lock granularity
- Lock separation (for example, separate read and write locks)
- Lock coarsening: merge strongly related lock operations into a single lock to improve overall efficiency
- Lock elimination: remove unnecessary lock operations to avoid the performance loss they cause