1. Four ways to create a thread
- Implement the Runnable interface and override the run method
- Extend Thread and override the run method
- Use a thread pool, e.g. Executors.newCachedThreadPool()
- Implement the Callable interface (returns a result through a Future)
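A minimal sketch of the four approaches (class and message strings are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CreateThreadDemo {
    static int callableResult() throws Exception {
        // 3. Hand tasks to a thread pool instead of creating threads directly
        ExecutorService pool = Executors.newCachedThreadPool();
        // 4. A Callable returns a value, retrieved through a Future
        Future<Integer> future = pool.submit(() -> 42);
        int result = future.get();
        pool.shutdown();
        return result;
    }

    public static void main(String[] args) throws Exception {
        // 1. Implement Runnable and override run
        Thread t1 = new Thread(() -> System.out.println("from Runnable"));
        // 2. Extend Thread and override run
        Thread t2 = new Thread() {
            @Override public void run() { System.out.println("from Thread subclass"); }
        };
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("from Callable: " + callableResult());
    }
}
```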
2. Common Thread operations
- Thread.sleep(millis): puts the current thread to sleep for the given number of milliseconds
- Thread.yield(): hints to the scheduler that the current thread is willing to cede execution
- join(): for threads t1 and t2, calling t1.join() in t2's run method makes t2 wait until t1 completes
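The join behaviour above can be sketched as follows (thread names and the order string are illustrative); t2 always finishes after t1 because it joins on t1:

```java
public class ThreadOpsDemo {
    static final StringBuilder order = new StringBuilder();

    public static String run() throws InterruptedException {
        Thread t1 = new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException e) { }
            order.append("t1 ");
        });
        Thread t2 = new Thread(() -> {
            Thread.yield();                 // hint the scheduler to let others run
            try { t1.join(); } catch (InterruptedException e) { }  // wait for t1
            order.append("t2");
        });
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return order.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());  // "t1 t2": join guarantees the ordering
    }
}
```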
3. Thread states
State | Description |
---|---|
NEW | The thread has been created but start() has not been called yet |
RUNNABLE | The only way a thread enters this state is by calling start(). It covers both "ready" and "running"; a thread returns to it after Thread.yield() or when the scheduler suspends a running thread |
WAITING | Entered when a thread calls Object.wait() and must wait for another thread to call Object.notify() or Object.notifyAll(); when a thread calls thread.join() and waits for that thread to finish; or when a thread calls LockSupport.park() and waits for LockSupport.unpark(thread). Note that there is no time limit on the wait: if a thread is found stuck in this state for a long time, check the program for logic errors |
TIMED_WAITING | Differs from WAITING only in having a timeout; entered via Thread.sleep(long), Object.wait(long), thread.join(long), LockSupport.parkNanos(), or LockSupport.parkUntil() |
BLOCKED | Waiting to acquire a monitor lock in order to enter a method or block guarded by the synchronized keyword |
TERMINATED | The state of a thread after it has finished executing. A terminated thread cannot be restarted: calling start() on it throws java.lang.IllegalThreadStateException |
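A small sketch that observes three of these states directly via getState() (the spin-wait for WAITING is just to make the observation deterministic):

```java
import java.util.concurrent.locks.LockSupport;

public class ThreadStateDemo {
    public static Thread.State[] observe() throws InterruptedException {
        Thread parked = new Thread(LockSupport::park);
        Thread.State beforeStart = parked.getState();       // NEW
        parked.start();
        // Wait until the thread has actually parked
        while (parked.getState() != Thread.State.WAITING) {
            Thread.sleep(10);
        }
        Thread.State whileParked = parked.getState();       // WAITING
        LockSupport.unpark(parked);
        parked.join();
        Thread.State afterJoin = parked.getState();         // TERMINATED
        return new Thread.State[] { beforeStart, whileParked, afterJoin };
    }

    public static void main(String[] args) throws InterruptedException {
        for (Thread.State s : observe()) System.out.println(s);
    }
}
```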
4. synchronized
- Objects are locked, not code
- synchronized on a static method locks the class's Class object (equivalent to synchronized (CurrentClass.class))
- A synchronized method and a non-synchronized method can execute at the same time
- The lock is released automatically if an exception is thrown while holding it
- A lock acquired by synchronized is reentrant
- Lock upgrading: biased lock → lightweight (spin) lock → heavyweight lock
- Avoid synchronizing on String constants or boxed types such as Integer and Long, because the JVM may share these instances
- When locking on an object, make sure the reference cannot be reassigned; declare it final
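A sketch of two of the points above (class name and counts are illustrative): increment() locks a dedicated final object, while outer()/inner() show reentrancy on `this`:

```java
public class SyncDemo {
    private final Object lock = new Object();  // final so the lock object cannot be swapped
    private int count = 0;

    public void increment() {
        synchronized (lock) {                  // locks the object, not the code
            count++;
        }
    }

    public synchronized void outer() {         // locks `this`
        count++;
        inner();  // reentrant: the same thread re-acquires the lock it already holds
    }

    public synchronized void inner() { count++; }

    public static int run() throws InterruptedException {
        SyncDemo demo = new SyncDemo();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> { for (int j = 0; j < 1000; j++) demo.increment(); });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        demo.outer();        // +2 more via reentrant calls
        return demo.count;   // 4 * 1000 + 2
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```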
4. volatile
- Ensures visibility of writes across threads
- Disallows instruction reordering around the variable
- volatile does not make compound operations (such as i++) atomic; the synchronized keyword is still required for atomicity
- A volatile reference (including an array) only guarantees visibility of the reference itself, not of the fields inside the referenced object. The volatile keyword can only be applied to variables, not to methods or code blocks
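A classic visibility sketch (timings are illustrative): the worker loops on a volatile flag, so the main thread's write is guaranteed to become visible and the loop terminates:

```java
public class VolatileFlagDemo {
    // Without volatile, the worker's cached read of the flag might never see the update
    private static volatile boolean stop = false;

    public static boolean run() throws InterruptedException {
        Thread worker = new Thread(() -> {
            long i = 0;
            while (!stop) i++;   // re-reads the flag on every iteration
        });
        worker.start();
        Thread.sleep(100);
        stop = true;             // visible to the worker because the field is volatile
        worker.join(2000);       // would time out if the worker never saw the write
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run() ? "worker stopped" : "worker still running");
    }
}
```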
5. Comparison of efficiency between Synchronized, AtomicLong and LongAdder
synchronized requires acquiring a lock and is relatively slow under contention. AtomicLong needs no lock and uses the CAS mechanism instead. LongAdder splits the counter into striped cells (a form of segmented locking), so it performs best when concurrency is especially high.
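A sketch contrasting the two lock-free counters (thread and iteration counts are illustrative); both end at the same total, but LongAdder spreads contention across per-cell updates:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;

public class CounterDemo {
    public static long[] run() throws InterruptedException {
        AtomicLong atomic = new AtomicLong();  // single CAS target: contended when busy
        LongAdder adder = new LongAdder();     // striped cells: cheaper under contention
        Thread[] ts = new Thread[8];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) {
                    atomic.incrementAndGet();  // CAS retry loop, no lock
                    adder.increment();         // updates one of several cells
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return new long[] { atomic.get(), adder.sum() };  // both 80_000
    }

    public static void main(String[] args) throws InterruptedException {
        long[] totals = run();
        System.out.println(totals[0] + " " + totals[1]);
    }
}
```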
6. Principle of segmental locking of ConcurrentHashMap
Segmented locking splits the data into segments and locks each segment separately, making the locking more fine-grained and improving concurrency. Hashtable is inefficient under heavy contention because every thread accessing it must compete for the same single lock. If the container instead holds multiple locks, each guarding a portion of its data, then threads accessing different segments do not contend with one another. This is the lock-striping technique ConcurrentHashMap uses to improve concurrent access: data is stored in segments, each segment is assigned its own lock, and while one thread holds the lock on one segment, the other segments can still be accessed by other threads.
7. ReentrantLock
ReentrantLock can replace synchronized, but it must be unlocked manually: synchronized releases its lock automatically when an exception is thrown, while a ReentrantLock must be released explicitly. Typical usage: Lock lock = new ReentrantLock(); lock.lock(); ... lock.unlock(). With ReentrantLock you can also call tryLock(), which attempts to acquire the lock without blocking, or for a bounded time, letting the thread decide whether to keep waiting. tryLock(time, unit) can throw InterruptedException, which is one more reason unlock() must sit in a finally block; if tryLock() fails to acquire the lock, do not call unlock(). ReentrantLock additionally offers lockInterruptibly(), which lets a thread respond to Thread.interrupt() while waiting for the lock. Finally, new ReentrantLock(true) creates a fair lock; the no-argument constructor defaults to unfair.
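A sketch of the lock()/tryLock() patterns described above (method names on the demo class are illustrative); note unlock() in finally, and unlock only after a successful tryLock:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockDemo {
    private final Lock lock = new ReentrantLock();  // new ReentrantLock(true) would be fair
    private int value = 0;

    public void update() {
        lock.lock();
        try {
            value++;
        } finally {
            lock.unlock();  // always unlock in finally: no automatic release here
        }
    }

    public boolean tryUpdate() throws InterruptedException {
        // Wait up to 100 ms for the lock; give up instead of blocking forever
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                value++;
                return true;
            } finally {
                lock.unlock();  // only unlock when tryLock succeeded
            }
        }
        return false;
    }

    public static int run() throws InterruptedException {
        ReentrantLockDemo demo = new ReentrantLockDemo();
        demo.update();
        if (demo.tryUpdate()) demo.update();  // uncontended, so tryLock succeeds
        return demo.value;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```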
8. CountDownLatch
The CountDownLatch class lets one thread wait for other threads to finish executing. It works through a counter initialized to the number of threads: each time a thread completes, it calls countDown() and the counter is decremented by 1. A value of 0 means all threads have completed, and the threads waiting on the latch can resume work.
countDown() is called from the worker threads to decrement the count. A thread calling await() continues only once the counter reaches 0; otherwise it waits. You can also use latch.await(timeout, unit) to let the waiting thread continue after a timeout even if the counter has not reached zero. Code after countDown() is not controlled by the counter. Unlike join(), which blocks the calling thread on each target thread, threads calling countDown() are unaffected; only the threads calling await() block.
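A minimal sketch of the pattern (worker count is illustrative): three workers each countDown() once, and await() returns only after the counter hits zero:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchDemo {
    public static int run() throws InterruptedException {
        int workers = 3;
        CountDownLatch latch = new CountDownLatch(workers);  // counter starts at 3
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                done.incrementAndGet();  // simulate some work
                latch.countDown();       // decrement the counter by 1
            }).start();
        }
        latch.await();       // blocks until the counter reaches 0
        return done.get();   // all 3 workers are guaranteed to have finished
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run() + " workers finished");
    }
}
```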
8. CyclicBarrier
Its effect is to make a specified number of threads (set by the constructor) wait for one another before any of them proceeds to the next step. Constructors: public CyclicBarrier(int parties) and public CyclicBarrier(int parties, Runnable barrierAction), where parties is the number of threads and barrierAction is a task executed by the last thread to arrive. All threads wait until every thread has reached the barrier, and the last arrival runs the Runnable before the group proceeds. How it works: a CyclicBarrier holds an internal Lock. Each time a thread calls await(), the count of threads still expected is decremented and compared with zero. If threads are still missing, the caller waits on the lock's condition queue. If the count has reached zero, the barrierAction's Runnable is executed and all threads parked on the condition queue are moved to the lock's wait queue, after which they acquire and release the lock in turn.
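A sketch showing both the barrier action and the "cyclic" part (party and round counts are illustrative); the same barrier is reused for a second round after the first one trips:

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicInteger;

public class BarrierDemo {
    public static int run() throws InterruptedException {
        AtomicInteger roundsDone = new AtomicInteger();
        // The last of the 3 threads to arrive runs the barrier action
        CyclicBarrier barrier = new CyclicBarrier(3, roundsDone::incrementAndGet);
        Thread[] ts = new Thread[3];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                try {
                    barrier.await();  // round 1: wait for all 3 parties
                    barrier.await();  // round 2: the barrier resets automatically
                } catch (InterruptedException | BrokenBarrierException e) {
                    Thread.currentThread().interrupt();
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return roundsDone.get();  // the barrier action ran once per round
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("rounds completed: " + run());
    }
}
```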
9. Phaser
A reusable synchronization barrier that functions like CyclicBarrier and CountDownLatch, but supports more flexible use.
Phaser lets us set up barriers at the points where a group of threads must synchronize before performing the next step.
We can coordinate multiple execution phases and reuse the same Phaser instance for each phase of the program, and each phase can have a different number of threads waiting to advance to the next one.
To participate in the coordination, a thread registers itself with the Phaser instance via register(). Note that this only increments the count of registered parties; Phaser does not track whether the current thread itself is registered, so supporting that check would require subclassing.
A thread arrives at the barrier and blocks by calling arriveAndAwaitAdvance(). When the number of arrivals equals the number of registered parties, execution continues and the phase number advances. The current phase number can be obtained by calling getPhase().
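A sketch of two coordinated phases (party and phase counts are illustrative); the main thread registers itself in the constructor and two workers register afterwards:

```java
import java.util.concurrent.Phaser;
import java.util.concurrent.atomic.AtomicInteger;

public class PhaserDemo {
    public static int run() throws InterruptedException {
        Phaser phaser = new Phaser(1);   // register the main thread as one party
        AtomicInteger steps = new AtomicInteger();
        Thread[] ts = new Thread[2];
        for (int i = 0; i < ts.length; i++) {
            phaser.register();           // each worker registers as a party
            ts[i] = new Thread(() -> {
                steps.incrementAndGet();
                phaser.arriveAndAwaitAdvance();  // phase 0 -> 1
                steps.incrementAndGet();
                phaser.arriveAndAwaitAdvance();  // phase 1 -> 2
                phaser.arriveAndDeregister();    // leave the phaser
            });
            ts[i].start();
        }
        phaser.arriveAndAwaitAdvance();  // main participates in phase 0
        phaser.arriveAndAwaitAdvance();  // ... and in phase 1
        phaser.arriveAndDeregister();
        for (Thread t : ts) t.join();
        return steps.get();  // 2 workers x 2 phases
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("steps: " + run());
    }
}
```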
10. ReadWriteLock
ReadWriteLock allows you to create a read lock and a write lock
ReadWriteLock readWriteLock = new ReentrantReadWriteLock();
Lock readLock = readWriteLock.readLock();
Lock writeLock = readWriteLock.writeLock();
ReadWriteLock fits workloads where the same data is read by many threads but modified by only a few. It guarantees:
At most one thread may write at a time (while it writes, other threads can neither write nor read);
When no thread is writing, multiple threads may read simultaneously (improving performance).
Splitting read and write locks effectively reduces lock contention and improves system performance: read locks are not mutually exclusive with each other, while write locks are mutually exclusive with both reads and other writes.
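A minimal read-mostly cache sketch (class name, key, and value are illustrative): writes take the exclusive write lock, reads share the read lock:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReadWriteLock rw = new ReentrantReadWriteLock();

    public void put(String k, String v) {
        rw.writeLock().lock();   // exclusive: no readers or other writers
        try { map.put(k, v); } finally { rw.writeLock().unlock(); }
    }

    public String get(String k) {
        rw.readLock().lock();    // shared: many readers may hold this at once
        try { return map.get(k); } finally { rw.readLock().unlock(); }
    }

    public static String run() throws InterruptedException {
        RwCache cache = new RwCache();
        cache.put("key", "value");
        Thread[] readers = new Thread[4];
        String[] seen = new String[readers.length];
        for (int i = 0; i < readers.length; i++) {
            final int idx = i;
            readers[i] = new Thread(() -> seen[idx] = cache.get("key"));
            readers[i].start();
        }
        for (Thread t : readers) t.join();
        return seen[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```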
11. Semaphore
Semaphore is a counting semaphore; a permit should be released by the thread that acquired it. It is often used to limit the number of threads that may access a certain resource at once, for example for flow limiting. Note that a Semaphore provides bounded mutual exclusion, not ordering of execution: it only limits how many threads can access the resource concurrently, it does not synchronize them.
1. acquire(int permits)
Acquires the given number of permits from this semaphore, blocking until they are available or the thread is interrupted. Think of a student occupying two service windows; this pairs with the corresponding release call.
2. release(int permits)
Releases the given number of permits, returning them to the semaphore. This mirrors acquire: however many windows a student occupied, that many are released when they are done.
3. availablePermits()
Returns the number of permits currently available in this semaphore, i.e. how many windows are currently free.
4. reducePermits(int reduction)
Shrinks the number of available permits by the indicated reduction.
5. hasQueuedThreads()
Queries whether any threads are waiting to acquire permits.
6. getQueueLength()
Returns an estimate of the number of threads waiting to acquire; the value is only an estimate.
7. tryAcquire(int permits, long timeout, TimeUnit unit)
Acquires the given number of permits from this semaphore only if they all become available within the given waiting time and the current thread is not interrupted.
8. acquireUninterruptibly(int permits)
Acquires the given number of permits from this semaphore, blocking until they are available; the wait cannot be interrupted.
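A flow-limiting sketch matching the window analogy (counts and sleep time are illustrative): six "students" compete for two permits, so at most two are ever served at once:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.atomic.AtomicInteger;

public class SemaphoreDemo {
    public static int run() throws InterruptedException {
        Semaphore windows = new Semaphore(2);        // only 2 service windows
        AtomicInteger inService = new AtomicInteger();
        AtomicInteger maxInService = new AtomicInteger();
        Thread[] students = new Thread[6];
        for (int i = 0; i < students.length; i++) {
            students[i] = new Thread(() -> {
                try {
                    windows.acquire();               // blocks until a permit is free
                    int now = inService.incrementAndGet();
                    maxInService.accumulateAndGet(now, Math::max);
                    Thread.sleep(20);                // simulate being served
                    inService.decrementAndGet();
                    windows.release();               // hand the permit back
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
            students[i].start();
        }
        for (Thread t : students) t.join();
        return maxInService.get();                   // never exceeds 2
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("max concurrent: " + run());
    }
}
```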
12. Exchanger
A utility class used to exchange data between two worker threads. Simply put, when a thread wants to swap data with another thread after finishing some work, the first thread to arrive with its data waits for the second thread to arrive, and then the two exchange their data. Exchanger is generic, Exchanger&lt;V&gt;, where V is the type of the data exchanged. Its public interface is simple:
Exchanger(): no-argument constructor.
V exchange(V x): waits for another thread to reach this exchange point (unless the current thread is interrupted), then hands the given object to that thread and receives that thread's object in return.
V exchange(V x, long timeout, TimeUnit unit): waits for another thread to reach this exchange point (unless the current thread is interrupted or the specified waiting time elapses), then hands the given object to that thread and receives that thread's object in return.
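A minimal pairwise exchange sketch (thread roles and payload strings are illustrative); each thread ends up holding the other's object:

```java
import java.util.concurrent.Exchanger;

public class ExchangerDemo {
    public static String[] run() throws InterruptedException {
        Exchanger<String> exchanger = new Exchanger<>();
        String[] received = new String[2];
        Thread producer = new Thread(() -> {
            try { received[0] = exchanger.exchange("data-from-producer"); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        Thread consumer = new Thread(() -> {
            try { received[1] = exchanger.exchange("data-from-consumer"); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
        return received;  // each slot holds the object sent by the other thread
    }

    public static void main(String[] args) throws InterruptedException {
        String[] r = run();
        System.out.println(r[0] + " / " + r[1]);
    }
}
```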
13. LockSupport
LockSupport is a very handy thread blocking tool that can block threads anywhere.
LockSupport's static method park() blocks the current thread; parkNanos(), parkUntil(), etc. do the same with a timed wait.
Method | Description |
---|---|
void park() | Blocks the current thread; park() returns once unpark is called for this thread or the thread is interrupted |
void park(Object blocker) | Same as park(), plus a blocker object recorded as the cause of the blocking, which helps troubleshooting |
void parkNanos(long nanos) | Blocks the current thread for at most nanos nanoseconds, adding a timeout |
void parkNanos(Object blocker, long nanos) | Same as parkNanos(long), plus a blocker object recorded for troubleshooting |
void parkUntil(long deadline) | Blocks the current thread until the given deadline (an absolute timestamp in milliseconds) |
void parkUntil(Object blocker, long deadline) | Same as parkUntil(long), plus a blocker object recorded for troubleshooting |
Alongside the blocking methods there is of course a wake-up method: unpark(Thread thread), which wakes the specified thread.
Note that park and unpark do not have to be called in a strict order. With the Thread class's suspend and resume methods, calling them in the wrong order means the thread may never wake up; park and unpark do not have this problem, because LockSupport uses a permit mechanism similar to a semaphore. It keeps one permit per thread (unavailable by default). If the permit is available, park() returns immediately and consumes it (making it unavailable again); if not, the thread blocks. unpark() makes the permit available.
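A sketch of the "wrong order is fine" property (the sleep only delays the park): unpark is called before the worker ever parks, the permit is stored, and the later park() returns immediately:

```java
import java.util.concurrent.locks.LockSupport;

public class LockSupportDemo {
    public static boolean run() throws InterruptedException {
        Thread worker = new Thread(() -> {
            try { Thread.sleep(100); } catch (InterruptedException e) { }
            // The permit was already granted below, so park() returns at once
            LockSupport.park();
        });
        worker.start();
        LockSupport.unpark(worker);  // unpark issued BEFORE the worker calls park
        worker.join(2000);           // would time out if the worker stayed parked
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run() ? "worker finished" : "worker stuck");
    }
}
```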
14. AQS
AQS is short for AbstractQueuedSynchronizer.
AQS is a synchronization framework in the JDK for implementing blocking locks and related synchronizers on top of a FIFO wait queue. The abstract class is designed as a base class for synchronizers that use an atomic int value to represent state. AQS manages a single integer of state information that can represent anything: Semaphore uses it for the number of remaining permits, ReentrantLock for how many times the owning thread has acquired the lock, and FutureTask for the task's status (not yet started, running, completed, cancelled).
- Usage
To use this class as the basis of a synchronizer, redefine the following methods, as applicable, by inspecting and/or modifying the synchronization state using {@link #getState}, {@link #setState} and/or {@link #compareAndSetState}:
- {@link #tryAcquire}
- {@link #tryRelease}
- {@link #tryAcquireShared}
- {@link #tryReleaseShared}
- {@link #isHeldExclusively}
Not all of these methods need to be implemented; implement the ones matching the kind of acquisition you support. Synchronizers supporting exclusive acquisition should implement tryAcquire, tryRelease, and isHeldExclusively; synchronizers supporting shared acquisition should implement tryAcquireShared and tryReleaseShared.
- AQS analyses
The main implementation of AQS is to maintain a “volatile int state” (representing shared resources) and a FIFO thread wait queue (which is entered when multithreaded contention for resources is blocked). Each node in the queue is an encapsulation of the thread, including basic thread information, state, waiting resource type, and so on.
The state is read and updated through getState(), setState(), and compareAndSetState().
Different custom synchronizers contend for the shared resource in different ways (compare ReentrantLock with Semaphore or CountDownLatch). A custom synchronizer only needs to implement acquiring and releasing the shared state; maintaining the thread wait queue itself (enqueueing on a failed acquire, waking up waiters, and so on) is already handled by AQS at the top level. A custom synchronizer is implemented mainly through the following methods:
isHeldExclusively(): whether the current thread holds the resource exclusively. You only need to implement it if you use conditions.
tryAcquire(int): exclusive mode. Attempts to acquire the resource; returns true on success and false on failure.
tryRelease(int): exclusive mode. Attempts to release the resource; returns true on success and false on failure.
tryAcquireShared(int): shared mode. Attempts to acquire the resource; a negative number means failure, 0 means success with no resources remaining, and a positive number means success with resources remaining.
tryReleaseShared(int): shared mode. Attempts to release the resource; returns true if waiting nodes may be woken up after the release, false otherwise.
Take ReentrantLock for example
State is initialized to 0, meaning unlocked. When thread A calls lock(), tryAcquire() is invoked to take exclusive ownership and state becomes 1. From then on, other threads fail in tryAcquire() until thread A calls unlock() enough times to bring state back to 0. Of course, thread A itself can reacquire the lock repeatedly before releasing it (state accumulates), which is what reentrancy means; just be careful to release as many times as you acquired, so that state returns to zero.
Take CountDownLatch as an example
A task is divided among N child threads and state is initialized to N (note that N must match the number of threads). The N children run in parallel, each calling countDown() once when it finishes, decrementing state by 1 via CAS. Once all children are done (state = 0), the calling thread is unpark()ed and returns from await() to continue its remaining work. In general, a custom synchronizer is either exclusive or shared and implements only tryAcquire/tryRelease or tryAcquireShared/tryReleaseShared, but AQS also supports synchronizers that combine both modes, such as ReentrantReadWriteLock.
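A minimal custom synchronizer in the exclusive style described above (the class is a hypothetical non-reentrant mutex, not a JDK class): only tryAcquire, tryRelease, and isHeldExclusively are implemented, and AQS handles all queuing and waking:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class SimpleMutex {
    // Non-reentrant exclusive lock: state 0 = unlocked, 1 = locked
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override protected boolean tryAcquire(int arg) {
            return compareAndSetState(0, 1);  // succeed only if currently unlocked
        }
        @Override protected boolean tryRelease(int arg) {
            setState(0);                      // only the holder should call this
            return true;
        }
        @Override protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();
    public void lock()   { sync.acquire(1); }  // AQS parks the thread on failure
    public void unlock() { sync.release(1); }  // AQS wakes the next waiter

    private static int counter = 0;

    public static int run() throws InterruptedException {
        SimpleMutex mutex = new SimpleMutex();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    mutex.lock();
                    try { counter++; } finally { mutex.unlock(); }
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return counter;  // the mutex makes counter++ atomic
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```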
15. Basic concepts of locking
- Fair lock/unfair lock
- Reentrant lock
- Exclusive lock/shared lock
- Mutex/read-write lock
- Optimistic lock/pessimistic lock
- Segmented lock
- Bias lock/lightweight lock/heavyweight lock
- Spin lock
- Fair lock/unfair lock
A fair lock is one in which multiple threads acquire the lock in the order in which they requested it. With an unfair lock, acquisition order need not match request order, which can lead to priority inversion or starvation. For Java's ReentrantLock, the constructor specifies whether the lock is fair; the default is unfair. The advantage of an unfair lock is higher throughput than a fair lock. synchronized is also an unfair lock, and since it does not implement thread scheduling via AQS the way ReentrantLock does, there is no way to make it fair.
- Reentrant lock
A reentrant lock, also known as a recursive lock, is one that a thread already holding it in an outer method automatically acquires again in an inner method. ReentrantLock and synchronized are both reentrant locks. One advantage of reentrancy is that it avoids a class of self-deadlocks.
- Exclusive locks/shared locks
An exclusive lock can be held by only one thread at a time; a shared lock can be held by multiple threads. ReentrantLock is an exclusive lock. ReadWriteLock, another Lock implementation, has a shared read lock and an exclusive write lock: the shared read lock makes concurrent reads very efficient, while read-write and write-write access remain mutually exclusive. Exclusive and shared locks are both built on AQS, achieved by implementing different template methods. synchronized, of course, is exclusive.
- Mutex/read-write lock
The exclusive/shared locks above are the broad concepts; mutexes and read-write locks are concrete implementations: ReentrantLock and ReentrantReadWriteLock respectively.
- Optimistic lock/pessimistic lock
Optimistic and pessimistic locks do not refer to specific types of locks, but rather to the perspective of concurrent synchronization.
A pessimistic lock (synchronized, ReentrantLock) assumes that concurrent operations on the same data will conflict, so it always locks before operating, even when no modification would actually occur; the pessimistic view is that unlocked concurrent access is bound to cause problems. An optimistic lock (the java.util.concurrent.atomic package) assumes that concurrent operations on the same data usually do not conflict: it updates by attempting the change and retrying until it succeeds; the optimistic view is that lock-free concurrency is fine. Pessimistic locking suits write-heavy scenarios, while optimistic locking suits read-heavy ones. In Java, pessimistic locking means using the various lock types; optimistic locking means lock-free programming, typically with the CAS algorithm. A typical example is the atomic classes, which update via a CAS spin loop.
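A sketch of the optimistic read-compute-CAS-retry loop (the addTen operation is illustrative); a failed compareAndSet simply means another thread won the race, so the update is retried:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    // Optimistic update: read, compute, then CAS; retry if another thread got there first
    public static int addTen(AtomicInteger value) {
        while (true) {
            int current = value.get();            // optimistic read, no lock held
            int next = current + 10;
            if (value.compareAndSet(current, next)) {
                return next;                      // nobody changed it in between
            }
            // CAS failed: another thread updated the value; loop and retry
        }
    }

    public static int run() throws InterruptedException {
        AtomicInteger value = new AtomicInteger(0);
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> { for (int j = 0; j < 100; j++) addTen(value); });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return value.get();  // 4 threads * 100 iterations * 10
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```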
- Segmented lock
Segmented locking is a lock design rather than a specific lock. ConcurrentHashMap (in the JDK 7 implementation) achieves efficient concurrent operation through segmented locking. Its segment lock is called Segment; a Segment is structured like a HashMap (an array of entries, each entry heading a linked list) and is itself a ReentrantLock (Segment extends ReentrantLock). When putting an element, instead of locking the whole map, the hashCode first determines which segment the element belongs to, and only that segment is locked; so when multiple threads put elements into different segments, the inserts are truly parallel. However, obtaining global information about the map, such as its size, requires acquiring all the segment locks. Segmented locking refines the granularity of the lock: when an operation does not need to update the entire array, it locks only one part of it.
- Bias lock/lightweight lock/heavyweight lock
These three refer to lock states and are specific to synchronized. Efficient synchronized is implemented through a lock-escalation mechanism (introduced in Java 6); the state of the lock is recorded in a field in the object header. A biased lock means that when a synchronized block is only ever accessed by one thread, that thread acquires the lock automatically, reducing the cost of acquisition. When a biased lock is accessed by a second thread, it is upgraded to a lightweight lock: the other thread tries to acquire it by spinning rather than blocking, which improves performance. If the spinning goes on too long without acquiring the lock (a certain number of spins is exceeded), the lock inflates into a heavyweight lock, which blocks the threads applying for it and degrades performance.
- Spin lock
In Java, a spin lock means that a thread attempting to acquire the lock does not block immediately but instead attempts the acquisition in a loop. The advantage is avoiding the cost of a thread context switch; the drawback is that the loop burns CPU while it spins.
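A typical CAS-based spin lock sketch (the class is a hypothetical minimal implementation, not a JDK class): lock() busy-waits on an AtomicBoolean instead of parking the thread:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Busy-wait instead of blocking: burns CPU but avoids a context switch
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();  // JDK 9+ scheduling hint; safe to remove on older JDKs
        }
    }

    public void unlock() {
        locked.set(false);
    }

    private static int counter = 0;

    public static int run() throws InterruptedException {
        SpinLock lock = new SpinLock();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    lock.lock();
                    try { counter++; } finally { lock.unlock(); }
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return counter;  // the spin lock makes counter++ atomic
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run());
    }
}
```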