Hello everyone, I am Xiao Cai, a rookie in the Internet industry who aspires to be more than a rookie. Remember to give this post a like, comment, and share!
This article focuses on the concurrency utilities in the JDK concurrency package.
Refer to it as needed.
If it helps, don't forget to leave a like!
To better support concurrent programs, the JDK provides a number of useful APIs and frameworks internally.
Synchronization control
When it comes to synchronization control, the synchronized keyword comes to mind first: it is the simplest control method, determining whether a thread may access a critical-section resource. Together with wait() and notify(), it provides thread wait and notification. Another way to control synchronization is the reentrant lock (ReentrantLock), which is a complete functional replacement for the synchronized keyword.
Example Of ReentrantLock:
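The original example image is not reproduced here; a minimal sketch along the same lines (class and field names are our own) has two threads incrementing a shared counter i under a ReentrantLock, so no increments are lost:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockCounter implements Runnable {
    static final ReentrantLock lock = new ReentrantLock();
    static int i = 0;

    @Override
    public void run() {
        for (int j = 0; j < 10000; j++) {
            lock.lock();          // enter the critical section
            try {
                i++;
            } finally {
                lock.unlock();    // always release the lock in finally
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(new LockCounter());
        Thread t2 = new Thread(new LockCounter());
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(i);    // with the lock, this is always 20000
    }
}
```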
Using a reentrant lock protects the critical-section resource i and ensures that multithreaded operations on i are safe. In contrast to synchronized, a reentrant lock requires explicit operations: the developer must manually specify when to lock and when to release the lock. Because of this, reentrant locks provide much more flexibility for logical control than synchronized. Note, however, that when exiting the critical section you must remember to release the lock, otherwise no other thread will ever be able to enter it again.
Interrupt response
For the synchronized keyword, a thread waiting on a lock has only two outcomes: it either acquires the lock and continues to execute, or it keeps waiting. With a reentrant lock there is a third possibility: the waiting thread can be interrupted and stop waiting. This interrupt mechanism is useful for breaking deadlocks.
The above figure simulates a deadlock: thread t1 first acquires lock1, thread t2 first acquires lock2, then after one second t1 tries to acquire lock2 while t2 tries to acquire lock1, causing a deadlock. On line 58 we flag the interrupt with t2.interrupt(), and on lines 35 and 38 we use isHeldByCurrentThread() to respond, allowing t1 to acquire lock2 and continue. In the end only t1 actually completes its work, while t2 gives up its task, exits, and frees its resources.
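Since the figure is not reproduced here, the scenario can be sketched as follows (class and variable names are our own, so line numbers will not match the original): each thread acquires its first lock with lockInterruptibly(), so interrupting t2 breaks the deadlock and lets t1 finish.

```java
import java.util.concurrent.locks.ReentrantLock;

public class DeadlockDemo implements Runnable {
    static ReentrantLock lock1 = new ReentrantLock();
    static ReentrantLock lock2 = new ReentrantLock();
    static volatile boolean t1Completed = false;
    final int order; // 1: take lock1 first; 2: take lock2 first

    DeadlockDemo(int order) { this.order = order; }

    @Override
    public void run() {
        ReentrantLock first  = (order == 1) ? lock1 : lock2;
        ReentrantLock second = (order == 1) ? lock2 : lock1;
        try {
            first.lockInterruptibly();      // can be interrupted while waiting
            Thread.sleep(500);              // give the other thread time to lock
            second.lockInterruptibly();
            if (order == 1) t1Completed = true;
            System.out.println(Thread.currentThread().getName() + " completed");
        } catch (InterruptedException e) {
            System.out.println(Thread.currentThread().getName() + " interrupted, giving up");
        } finally {
            // only unlock the locks this thread actually holds
            if (first.isHeldByCurrentThread())  first.unlock();
            if (second.isHeldByCurrentThread()) second.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(new DeadlockDemo(1), "t1");
        Thread t2 = new Thread(new DeadlockDemo(2), "t2");
        t1.start(); t2.start();
        Thread.sleep(1000);
        t2.interrupt();                     // break the deadlock: t2 stops waiting
        t1.join(); t2.join();
    }
}
```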
In addition to waiting for an external interrupt, there is another way to avoid deadlock: a timed wait.
In the figure above, the tryLock() method takes two arguments: the wait time and its time unit. If the lock is not acquired within the specified wait time, false is returned; if it is acquired, true is returned. Here the wait time is set to 3 seconds, but the thread holding the lock holds it for 5 seconds, so the second thread's request will fail.
The tryLock() method can also be called without arguments, in which case the current thread attempts to acquire the lock and returns true if it is not held by another thread. If the lock is held by another thread, the current thread does not wait and false is returned.
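The timed-wait scenario can be sketched as follows (class and field names are our own): the first thread holds the lock for 5 seconds, the second waits at most 3 seconds, so its tryLock() times out and fails.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo implements Runnable {
    static final ReentrantLock lock = new ReentrantLock();
    static volatile boolean secondFailed = false;

    @Override
    public void run() {
        try {
            if (lock.tryLock(3, TimeUnit.SECONDS)) {   // wait at most 3 seconds
                try {
                    Thread.sleep(5000);                // hold the lock for 5 seconds
                } finally {
                    lock.unlock();
                }
            } else {
                secondFailed = true;                   // timed out waiting
                System.out.println(Thread.currentThread().getName() + " failed to get the lock");
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        TryLockDemo task = new TryLockDemo();
        Thread t1 = new Thread(task, "t1");
        Thread t2 = new Thread(task, "t2");
        t1.start();
        Thread.sleep(100);   // make sure t1 takes the lock first
        t2.start();
        t1.join(); t2.join();
    }
}
```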
Fair lock
In most cases, lock claims are unfair.
Unfair lock: thread A requests lock 1, thread B also requests lock 1, and then another thread C requests lock 1. Although thread B requested the lock a step earlier than thread C, it is not certain which thread will finally obtain it; you can think of it as being picked at random from the waiting queue.
Fair lock: it guarantees first come, first served, in chronological order. It does not cause starvation: as long as you wait in the queue, you will eventually get the resource.
If we use the synchronized keyword for lock control, the resulting lock is unfair. Reentrant locks, however, allow us to set their fairness.
public ReentrantLock(boolean fair){}
When fair is passed in as true, the lock is fair. However, a fair lock requires maintaining an ordered queue, so its implementation cost is high and its performance is low. By default, locks are unfair.
Fair lock usage example:
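The original example image is not reproduced here; a minimal sketch (class and field names are our own) constructs the lock with fair = true, so waiting threads obtain it in roughly FIFO order and the two threads tend to alternate:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo implements Runnable {
    static final ReentrantLock fairLock = new ReentrantLock(true); // fair lock
    static final AtomicInteger acquisitions = new AtomicInteger();

    @Override
    public void run() {
        for (int i = 0; i < 3; i++) {
            fairLock.lock();
            try {
                acquisitions.incrementAndGet();
                System.out.println(Thread.currentThread().getName() + " obtained the lock");
            } finally {
                fairLock.unlock();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        FairLockDemo task = new FairLockDemo();
        Thread t1 = new Thread(task, "thread-1");
        Thread t2 = new Thread(task, "thread-2");
        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}
```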
With a fair lock, the output shows the threads obtaining the lock in order, taking turns.
Example of unfair lock usage:
It can be seen that, under the system's scheduling, a thread tends to reacquire the lock it already holds. This allocation is efficient, but not fair.
There are several important methods for ReentrantLock:
- lock(): acquires the lock, waiting if it is already held
- lockInterruptibly(): acquires the lock, but responds to interrupts while waiting
- tryLock(): attempts to acquire the lock, returning true on success and false otherwise; the method does not wait and returns immediately
- tryLock(long time, TimeUnit unit): attempts to acquire the lock within the given time
- unlock(): releases the lock
Three important elements of a reentrant lock:

- Atomic state: a CAS operation is used to store the current lock state and determine whether the lock is already held by another thread
- Waiting queue: all threads that failed to get the lock wait in a queue; after a thread releases the lock, the system can wake one thread from the queue to continue working
- Blocking primitives park() and unpark(): used to suspend and resume threads; threads that fail to acquire the lock are suspended
The reentrant lock's partner: Condition
Condition acts in much the same way as wait() and notify(), but wait() and notify() are used in conjunction with the synchronized keyword, and Condition is associated with reentrant locking.
The basic methods provided by the Condition interface:
void await() throws InterruptedException; // Makes the current thread wait and releases the lock; it can respond to interrupts while waiting
boolean await(long time, TimeUnit unit) throws InterruptedException; // Wait at most the specified time
void awaitUninterruptibly(); // Same as await(), but does not respond to interrupts while waiting
long awaitNanos(long nanosTimeout) throws InterruptedException;
boolean awaitUntil(Date deadline) throws InterruptedException;
void signal(); // Wake up one waiting thread
void signalAll(); // Wake up all waiting threads
Condition Usage Example:
Note: on line 30 we must release the lock so that thread A can reacquire it; otherwise, even though thread A has been woken up, it cannot actually resume execution without the lock.
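The behavior described above can be sketched as follows (class and field names are our own): thread A awaits on the Condition, and the main thread signals it and then releases the lock so A can resume.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionDemo implements Runnable {
    static final ReentrantLock lock = new ReentrantLock();
    static final Condition condition = lock.newCondition();
    static volatile boolean resumed = false;

    @Override
    public void run() {
        lock.lock();
        try {
            condition.await();      // releases the lock and waits for a signal
            resumed = true;
            System.out.println("thread A resumed");
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(new ConditionDemo(), "A");
        a.start();
        Thread.sleep(500);          // let A start waiting first
        lock.lock();
        try {
            condition.signal();     // wake A up...
        } finally {
            lock.unlock();          // ...and release the lock so A can reacquire it
        }
        a.join();
    }
}
```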
Part of the ArrayBlockingQueue implementation code:
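The original snippet is not reproduced here; below is a simplified sketch modeled on ArrayBlockingQueue's put/take logic (the class name and details are our own simplification, not the JDK source): one lock plus two Conditions, notFull and notEmpty, coordinate producers and consumers.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class SimpleBlockingQueue<E> {
    private final Object[] items;
    private int putIndex, takeIndex, count;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private final Condition notFull = lock.newCondition();

    public SimpleBlockingQueue(int capacity) { items = new Object[capacity]; }

    public void put(E e) throws InterruptedException {
        lock.lockInterruptibly();
        try {
            while (count == items.length)   // full: wait until a take() signals notFull
                notFull.await();
            items[putIndex] = e;
            putIndex = (putIndex + 1) % items.length;
            count++;
            notEmpty.signal();              // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    public E take() throws InterruptedException {
        lock.lockInterruptibly();
        try {
            while (count == 0)              // empty: wait until a put() signals notEmpty
                notEmpty.await();
            E e = (E) items[takeIndex];
            items[takeIndex] = null;
            takeIndex = (takeIndex + 1) % items.length;
            count--;
            notFull.signal();               // wake one waiting producer
            return e;
        } finally {
            lock.unlock();
        }
    }
}
```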
Semaphore
Semaphores provide a more powerful control method for multithreading. Broadly speaking, a semaphore is an extension of a lock: synchronized and ReentrantLock allow only one thread to access a resource at a time, whereas a semaphore allows a specified number of threads to access a resource at the same time.
Semaphore constructor:
public Semaphore(int permits) {}
public Semaphore(int permits, boolean fair) {} // The second argument controls whether the semaphore is fair
Semaphore main methods:
// Attempts to obtain a permit, waiting until a thread releases one or the current thread is interrupted. This method responds to interrupts
public void acquire() throws InterruptedException {}
public void acquireUninterruptibly() {} // Same as acquire(), but does not respond to interrupts
public boolean tryAcquire() {} // Tries to obtain a permit; returns true on success, false on failure, without waiting
public boolean tryAcquire(long timeout, TimeUnit unit) throws InterruptedException {} // Waits at most the specified time
public void release() {} // Releases a permit
Semaphore example:
Apply for a permit using the acquire() method, and when leaving, be sure to release it using the release() method.
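The original example image is not reproduced here; a minimal sketch (class and field names are our own) uses a semaphore with 2 permits, so at most two of the five worker threads run the protected section at the same time:

```java
import java.util.concurrent.Semaphore;

public class SemaphoreDemo implements Runnable {
    static final Semaphore semaphore = new Semaphore(2);

    @Override
    public void run() {
        try {
            semaphore.acquire();            // take a permit (waits if none left)
            System.out.println(Thread.currentThread().getName() + " working");
            Thread.sleep(200);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            semaphore.release();            // always give the permit back
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[5];
        for (int i = 0; i < 5; i++) {
            threads[i] = new Thread(new SemaphoreDemo(), "worker-" + i);
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println("permits left: " + semaphore.availablePermits());
    }
}
```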
ReadWriteLock
Read/write separation locks can effectively reduce lock contention and improve system performance. For example, suppose threads A1, A2, and A3 perform write operations and threads B1, B2, and B3 perform read operations. If a reentrant lock or the internal synchronized lock is used, then read-read, read-write, and write-write operations are all serial. Since read operations do not compromise data integrity, making reads wait for each other is unreasonable.
Access constraints on read/write locks:
| | read | write |
| --- | --- | --- |
| read | non-blocking | blocking |
| write | blocking | blocking |
Read/write lock example:
Reader threads run completely in parallel, while writes block reads.
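The original example image is not reproduced here; a minimal sketch (class and field names are our own) shows the shared read lock and the exclusive write lock:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteDemo {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int value;

    public int read() {
        rwLock.readLock().lock();           // shared: readers do not block each other
        try {
            return value;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void write(int newValue) {
        rwLock.writeLock().lock();          // exclusive: blocks both readers and writers
        try {
            value = newValue;
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        ReadWriteDemo demo = new ReadWriteDemo();
        demo.write(42);
        System.out.println(demo.read());
    }
}
```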
CountDownLatch
CountDownLatch is a very useful multithreaded control utility class. It is commonly used to make a thread wait: the waiting thread does not resume execution until the countdown reaches zero.
CountDownLatch constructor:
public CountDownLatch(int count) {} //count indicates the number of counts of this counter
Examples of CountDownLatch usage:
The count here is 6; every time a thread completes its task, the counter is reduced by 1. The main thread waits for all tasks to complete using the await() method.
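The original example image is not reproduced here; a minimal sketch (class and field names are our own) has six worker threads each counting down once, with the main thread's await() returning only after all six finish:

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    static final CountDownLatch latch = new CountDownLatch(6);

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 6; i++) {
            new Thread(() -> {
                System.out.println(Thread.currentThread().getName() + " done");
                latch.countDown();       // one task finished: counter decreases by 1
            }, "task-" + i).start();
        }
        latch.await();                   // block until the counter reaches 0
        System.out.println("all tasks completed, main thread continues");
    }
}
```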
CyclicBarrier
CyclicBarrier is another multithreaded concurrency control tool. "Cyclic" means the counter can be used over and over again, making it slightly more powerful than CountDownLatch. It can also receive a Runnable parameter, barrierAction, which is the action the system performs each time the counter completes a count.
CyclicBarrier constructor:
public CyclicBarrier(int parties, Runnable barrierAction) {} //parties represent the total number of counts
Example CyclicBarrier usage:
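The original example image is not reproduced here; a minimal sketch (class and field names are our own) has five threads meeting at the barrier twice: each time all five arrive, the barrierAction runs, and the barrier resets for reuse.

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicInteger;

public class BarrierDemo {
    static final AtomicInteger rounds = new AtomicInteger();
    static final CyclicBarrier barrier =
            new CyclicBarrier(5, () -> rounds.incrementAndGet()); // barrierAction

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[5];
        for (int i = 0; i < 5; i++) {
            threads[i] = new Thread(() -> {
                try {
                    barrier.await();     // first rendezvous
                    barrier.await();     // the counter is reusable: second rendezvous
                } catch (InterruptedException | BrokenBarrierException e) {
                    e.printStackTrace();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        System.out.println("barrier tripped " + rounds.get() + " times");
    }
}
```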
Thread reuse: Thread pools
Threads are a lightweight tool compared to processes. However light they are, they still take time to create and destroy. If you create a thread for every small task, the time spent creating and destroying threads may well exceed the time spent on actual work, and when the number of threads grows too large, CPU and memory resources are exhausted. In addition, threads themselves occupy memory space, and a large number of threads seizing this precious resource can lead to an Out Of Memory error if not handled properly.
To avoid frequently creating and destroying threads, we can let created threads be reused. This is where the thread pool comes in, somewhat similar to a database connection pool: when the system needs a database connection, it does not create a new one but obtains one from the pool. A connection pool maintains some connections and keeps them active for a long time; thread pools follow a similar idea.
The JDK provides an Executor framework to help developers with effective thread control, essentially a thread pool. ThreadPoolExecutor represents a thread pool. The ThreadPoolExecutor class implements the Executor interface, so that any Runnable object can be scheduled by the ThreadPoolExecutor thread pool.
The various types of thread pools for the Executor framework:
public static ExecutorService newFixedThreadPool(int nThreads) {}
public static ExecutorService newCachedThreadPool() {}
public static ExecutorService newSingleThreadExecutor() {}
public static ScheduledExecutorService newSingleThreadScheduledExecutor() {}
public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize) {}
- newFixedThreadPool: Returns a thread pool with a fixed number of threads; the number of threads in the pool is always the same. When a new task is submitted, it is executed immediately if there is an idle thread; otherwise the task is temporarily stored in a task queue and processed once a thread becomes idle.
- newCachedThreadPool: Returns a thread pool that adjusts the number of threads as required; the number of threads is not fixed. If there are idle reusable threads, they are used first. If all threads are busy and a new task is submitted, a new thread is created to process it; when threads finish their current task, they return to the pool for reuse.
- newSingleThreadExecutor: Returns a thread pool with only one thread. If extra tasks are submitted, they are stored in the task queue and executed in first-in, first-out order when the thread is idle.
newFixedThreadPool example:
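The original example image is not reproduced here; a minimal sketch (class and field names are our own) submits six tasks to a pool of two threads, so at most two tasks execute concurrently:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class FixedPoolDemo {
    static final AtomicInteger completed = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int i = 0; i < 6; i++) {
            pool.submit(() -> {
                System.out.println(Thread.currentThread().getName() + " running a task");
                completed.incrementAndGet();
            });
        }
        pool.shutdown();                            // no new tasks; finish queued ones
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("completed: " + completed.get());
    }
}
```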
The other types of thread pools are used similarly.
Internal implementation of thread pools
ThreadPoolExecutor is used internally by newFixedThreadPool, newCachedThreadPool, and newSingleThreadExecutor.
- newFixedThreadPool:
public static ExecutorService newFixedThreadPool(int nThreads) {
return new ThreadPoolExecutor(nThreads, nThreads,
0L, TimeUnit.MILLISECONDS,
new LinkedBlockingQueue<Runnable>());
}
- newCachedThreadPool:
public static ExecutorService newCachedThreadPool() {
return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
60L, TimeUnit.SECONDS,
new SynchronousQueue<Runnable>());
}
- newSingleThreadExecutor:
public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>()));
}
Constructor of ThreadPoolExecutor class:
public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler) {}
- corePoolSize: the number of core threads in the thread pool
- maximumPoolSize: the maximum number of threads in the thread pool
- keepAliveTime: the keep-alive time of redundant (non-core) threads
- unit: the unit of the keep-alive time
- workQueue: a task queue storing submitted but unexecuted tasks
  - SynchronousQueue: a direct handoff queue. The queue has no capacity; each insert waits for a remove, and vice versa. If there is no idle thread and the maximum number of threads has been reached, the rejection policy is executed.
  - ArrayBlockingQueue: a bounded queue. It takes a capacity parameter giving the maximum size of the queue. A bounded queue only raises the number of threads toward maximumPoolSize once the task queue is full; if tasks keep being submitted after that, the rejection policy is executed.
  - LinkedBlockingQueue: an unbounded queue. Enqueueing into an unbounded task queue never fails unless system resources are exhausted.
  - PriorityBlockingQueue: a priority queue. A special unbounded queue that executes tasks according to their own priority order.
- threadFactory: a thread factory used to create threads; the default is usually used
- handler: the rejection policy applied when there are too many tasks to handle
  - AbortPolicy: directly throws an exception, preventing the system from continuing normally.
  - CallerRunsPolicy: does not actually discard the task; as long as the thread pool is not shut down, the rejected task runs in the submitting thread.
  - DiscardOldestPolicy: discards the oldest task (the one about to be executed) and tries to submit the current task again.
  - DiscardPolicy: silently discards tasks that cannot be handled, doing nothing.
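The parameters above can be put together in a custom pool; a minimal sketch follows (the pool sizes, queue capacity, and class name are our own choices). It uses a bounded queue with CallerRunsPolicy, so overflow tasks run on the submitting thread instead of being dropped:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CustomPoolDemo {
    static final AtomicInteger executed = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                  // corePoolSize
                4,                                  // maximumPoolSize
                60L, TimeUnit.SECONDS,              // keepAliveTime for extra threads
                new ArrayBlockingQueue<>(10),       // bounded task queue
                Executors.defaultThreadFactory(),   // default thread factory
                new ThreadPoolExecutor.CallerRunsPolicy()); // rejection policy
        for (int i = 0; i < 20; i++) {
            pool.execute(() -> {
                executed.incrementAndGet();
                System.out.println(Thread.currentThread().getName() + " executing");
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
    }
}
```

With CallerRunsPolicy, none of the 20 tasks are lost even when the queue and the pool are both full; the rejected ones simply run synchronously in the caller.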
Work a little harder today, and tomorrow you will have fewer favors to ask!
I am Xiao Cai, someone who studies alongside you. 💋