Threads and thread pools

Threads

The difference between threads and processes

A process is a running program and the basic unit by which the system allocates resources. A thread is a relatively independent unit of execution within a process and the basic unit of task scheduling.

Advantages and disadvantages of multi-threading

Advantages: multi-threading makes full use of multi-core CPUs, improving CPU utilization and program throughput.

Disadvantages:

  • Too many threads hurt performance: context switching costs CPU time and every thread adds memory overhead.
  • Thread synchronization and thread-safety problems arise.
  • Deadlocks can occur.
  • Development becomes technically harder.

How many states does a thread have

Five states: New, Ready, Running, Blocked, and Dead.

  • New: the thread has been created with new but start() has not yet been called.
  • Ready: after creation, start() has been called and the thread is waiting to be scheduled onto a CPU.
  • Running: the thread has been given CPU time and is executing the body of run().
  • Blocked: a running thread can become blocked at any time, for example by calling sleep(), by waiting to acquire a lock, or by waiting for some other trigger condition; it temporarily gives up the CPU.
  • Dead: a thread dies for one of two reasons: run() returns normally, or an uncaught exception terminates run().

If one thread throws an OOM error, can other threads still run

Yes, they can, even though the heap is shared. If one thread hits OOM, it often means other threads are close to OOM too; however, when a thread dies from OOM, the memory it occupied is released, so the other threads are usually unaffected. Likewise, child threads keep running if the main thread dies: threads are not like processes, and there is no parent-child relationship between the threads of a process.

Several ways to create threads

  • Inherit the Thread class, override the run() method, and start the thread with thread.start().
  • Implement the Runnable interface, override run(), create the thread with new Thread(Runnable r), and call start().
  • Use Callable and FutureTask: implement the Callable interface, override call(), wrap the Callable in a FutureTask, and pass the FutureTask to new Thread(...) (see the sketch after this list).
  • Obtain a thread from a thread pool.
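
A minimal sketch of the first three approaches (class names and printed messages are illustrative):

import java.util.concurrent.FutureTask;

public class CreateThreadDemo {
    public static void main(String[] args) throws Exception {
        // 1. Subclass Thread and override run()
        Thread t1 = new Thread() {
            @Override
            public void run() {
                System.out.println("from Thread subclass");
            }
        };
        t1.start();

        // 2. Implement Runnable and pass it to new Thread(...)
        Thread t2 = new Thread(() -> System.out.println("from Runnable"));
        t2.start();

        // 3. Callable + FutureTask: call() can return a result
        FutureTask<Integer> task = new FutureTask<>(() -> 42);
        new Thread(task).start();
        System.out.println("Callable result: " + task.get());
    }
}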

Sleep and wait

  • wait() can only be called inside a synchronized block and is a method of Object; sleep() has no such requirement and is a static method of Thread.
  • wait() releases the lock; sleep() does not.
  • When the wait times out, the thread moves to the ready state and waits for the CPU to continue it.

yield() and join()

  • yield() gives up CPU time but does not release any lock; it moves the current thread to the ready state and lets only threads of the same or higher priority run.
  • join() gives up CPU time and releases the lock on the Thread object (it is implemented with wait() underneath): the current thread waits for the thread whose join() was called to finish before continuing, as in the sketch below.
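
A small sketch of join(), assuming main should wait for a worker thread:

public class JoinDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> System.out.println("worker running"));
        worker.start();
        // main blocks here until worker's run() finishes
        worker.join();
        System.out.println("worker done, main continues");
    }
}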

Deadlock

Deadlock definition

A deadlock is a situation in which two or more threads block permanently while competing for resources, each waiting for the other (starvation); without external intervention none of them can proceed.

Causes of deadlock

  • Contention for system resources: the system contains non-preemptible resources whose quantity cannot satisfy every thread, so threads get stuck fighting over them.
  • Improper ordering of lock acquisition: a thread requests another lock (L2) while still holding lock L1, i.e. it applies for L2 without releasing L1. This is the most fundamental cause of deadlock.

Necessary conditions for deadlocks

  • Mutual exclusion: a resource can be held by only one process at a time.
  • No preemption: a resource cannot be taken away from a process before the process has finished with it, unless released voluntarily.
  • Hold and wait: a process that already holds a resource requests another resource that is held by a different process.
  • Circular wait: process A holds resource 1 and requests resource 2, while process B holds resource 2 and requests resource 1.

How to avoid deadlocks

  • Acquire locks in a consistent order.
  • Bound the time spent waiting for a lock: a thread trying to acquire a lock should give up after a timeout and release the locks it already holds (see the sketch after this list).
  • Use deadlock detection.
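
As an illustration of the timeout idea, a sketch using ReentrantLock.tryLock() (the lock names and method are illustrative):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private final ReentrantLock lockA = new ReentrantLock();
    private final ReentrantLock lockB = new ReentrantLock();

    public boolean transfer() throws InterruptedException {
        // Give up after a timeout instead of blocking forever
        if (lockA.tryLock(1, TimeUnit.SECONDS)) {
            try {
                if (lockB.tryLock(1, TimeUnit.SECONDS)) {
                    try {
                        return true; // both locks held: do the work
                    } finally {
                        lockB.unlock();
                    }
                }
            } finally {
                lockA.unlock();
            }
        }
        return false; // could not get both locks: caller may retry later
    }
}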

Thread pools

Concept

A thread pool is exactly what it sounds like: a pool that manages threads for us, reducing the cost of creating and destroying threads. The benefits are many: faster response times (taking a thread from the pool is quicker than creating one) and thread reuse.

ThreadPoolExecutor

A thread pool can be created with ThreadPoolExecutor. Its constructor takes seven parameters (a usage sketch follows the parameter list):

public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler)
  • corePoolSize: the number of core threads.
  • maximumPoolSize: the maximum number of threads.
  • keepAliveTime: how long idle non-core threads stay alive.
  • unit: the time unit of keepAliveTime.
  • workQueue: the blocking queue that holds waiting tasks.
  • threadFactory: the factory used to create threads; it can name them.
  • handler: the saturation (rejection) policy of the pool. There are four kinds.
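
A hand-built pool using these seven parameters might look like this (the sizes are illustrative, not recommendations):

import java.util.concurrent.*;

public class PoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                    // corePoolSize
                4,                                    // maximumPoolSize
                60L, TimeUnit.SECONDS,                // keepAliveTime + unit
                new ArrayBlockingQueue<>(100),        // workQueue
                Executors.defaultThreadFactory(),     // threadFactory
                new ThreadPoolExecutor.AbortPolicy()  // rejection policy
        );
        pool.execute(() -> System.out.println("task running"));
        pool.shutdown();
    }
}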

Thread pool execution flow
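
When a task is submitted, the pool works as follows: if fewer than corePoolSize threads are running, a new core thread is created to run the task; otherwise the task is placed in the workQueue; if the queue is full, a non-core thread is created, up to maximumPoolSize; and if the pool is already at its maximum, the rejection policy (handler) is applied.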

Four rejection strategies

  • AbortPolicy: reject the task and throw an exception; this is the default policy.
  • DiscardPolicy: silently discard the task.
  • DiscardOldestPolicy: discard the oldest task in the queue and submit the current task to the pool.
  • CallerRunsPolicy: run the task on the thread that submitted it.

Blocking queue

  • ArrayBlockingQueue: a bounded blocking queue backed by an array, ordered FIFO.
  • LinkedBlockingQueue: a blocking queue backed by a linked list, ordered FIFO. Its capacity can be set; otherwise it is effectively unbounded (maximum length Integer.MAX_VALUE). It has higher throughput than ArrayBlockingQueue and is the queue used by newFixedThreadPool.
  • DelayQueue: a queue for delayed and periodic tasks, ordered by the specified execution time from earliest to latest (otherwise by insertion order); a variant of it is used by newScheduledThreadPool.
  • PriorityBlockingQueue: an unbounded blocking queue with priority ordering.
  • SynchronousQueue: a blocking queue that stores no elements; every insert must wait for another thread's remove, otherwise the insert blocks. It is the queue used by newCachedThreadPool.

Common thread pools

newFixedThreadPool

A thread pool with a fixed number of threads, internally using LinkedBlockingQueue

public static ExecutorService newFixedThreadPool(int nThreads, ThreadFactory threadFactory) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>(),
                                      threadFactory);
    }
  • Core thread count = maximum thread count.
  • keepAliveTime = 0, so there is no idle timeout.
  • The blocking queue is the unbounded LinkedBlockingQueue.
  • Usage scenario: suitable for CPU-intensive tasks, keeping long-running worker threads busy while allocating as few threads as possible.

newCachedThreadPool

A thread pool of cacheable threads; internally it uses a SynchronousQueue

public static ExecutorService newCachedThreadPool(ThreadFactory threadFactory) {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>(),
                                      threadFactory);
    }
  • Core thread count = 0, maximum thread count = Integer.MAX_VALUE.
  • Non-core threads live for 60 seconds when idle.
  • This pool has a problem: when tasks are submitted faster than they are processed, every submission creates a new non-core thread, and in extreme cases so many threads are created that CPU and memory are exhausted.
  • Usage scenario: executing large numbers of short-lived tasks concurrently.

newSingleThreadExecutor

A single-threaded thread pool; internally it uses a LinkedBlockingQueue

public static ExecutorService newSingleThreadExecutor(ThreadFactory threadFactory) {
        return new FinalizableDelegatedExecutorService
            (new ThreadPoolExecutor(1, 1,
                                    0L, TimeUnit.MILLISECONDS,
                                    new LinkedBlockingQueue<Runnable>(),
                                    threadFactory));
    }
  • Core thread count = maximum thread count = 1, i.e. exactly one thread is kept alive in the pool.
  • keepAliveTime = 0, so the parameter has no effect.
  • The blocking queue is the unbounded LinkedBlockingQueue.
  • Usage scenario: tasks that must be executed serially, one at a time.

newScheduledThreadPool

A thread pool for delayed and periodic execution; internally it uses a DelayedWorkQueue (a delay queue)

public ScheduledThreadPoolExecutor(int corePoolSize) {
        super(corePoolSize, Integer.MAX_VALUE, 0, NANOSECONDS,
              new DelayedWorkQueue());
    }
  • The maximum thread count is Integer.MAX_VALUE; the core thread count is what you pass in.
  • keepAliveTime = 0.
  • Usage scenario: periodically scheduled tasks.

scheduleAtFixedRate() and scheduleWithFixedDelay() differ as follows (see the sketch after the list):

  • scheduleAtFixedRate: runs periodically at a fixed rate, regardless of whether the previous run has finished.
  • scheduleWithFixedDelay: runs after a fixed delay, measured from the completion of the previous run.
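
A sketch of the two scheduling styles (the delays are illustrative):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduleDemo {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        // Runs every 2 seconds, measured from the start of the previous run
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("fixed rate"), 0, 2, TimeUnit.SECONDS);
        // Runs 2 seconds after the previous run completes
        scheduler.scheduleWithFixedDelay(
                () -> System.out.println("fixed delay"), 0, 2, TimeUnit.SECONDS);
        // Note: the scheduler keeps running until shutdown() is called
    }
}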

Thread pool state

There are five states: RUNNING, SHUTDOWN, STOP, TIDYING, and TERMINATED

private static final int RUNNING    = -1 << COUNT_BITS;
private static final int SHUTDOWN   =  0 << COUNT_BITS;
private static final int STOP       =  1 << COUNT_BITS;
private static final int TIDYING    =  2 << COUNT_BITS;
private static final int TERMINATED =  3 << COUNT_BITS;

  • RUNNING: the pool accepts new tasks and processes tasks in the blocking queue. Calling shutdown() moves it to SHUTDOWN; calling shutdownNow() moves it to STOP.
  • SHUTDOWN: the pool accepts no new tasks but still processes tasks in the blocking queue. When the queue is empty and no tasks are executing, it moves to TIDYING.
  • STOP: the pool accepts no new tasks, does not process queued tasks, and interrupts tasks in progress.
  • TIDYING: all tasks have terminated and the recorded task count is 0; terminated() is then invoked, after which the pool becomes TERMINATED.
  • TERMINATED: the pool has fully terminated.

ThreadLocal

What is a ThreadLocal

ThreadLocal looks like a utility class for multi-threading and synchronization, but it actually has nothing to do with thread synchronization. Where synchronization lets multiple threads share one variable, ThreadLocal creates a separate copy of the variable for each thread, and each thread can change its own copy without affecting the copies of other threads. From the official API: this class provides thread-local variables. These variables differ from their normal counterparts in that each thread that accesses one has its own independently initialized copy of the variable. ThreadLocal instances are typically private static fields in classes that wish to associate state with a thread (such as a user ID or transaction ID).

API

The four methods

  • get(): returns the value of the current thread's copy of this thread-local variable.
  • set(T value): sets the current thread's copy to the specified value.
  • initialValue(): returns the initial value of the current thread's copy.
  • remove(): removes the value of the current thread's copy.

ThreadLocal has an important static inner class, ThreadLocalMap, which is the key to the thread-isolation mechanism: get(), set(), and remove() are all based on this map. The key of a ThreadLocalMap entry is the ThreadLocal object and the value is the corresponding thread's copy of the variable. Each thread holds its own ThreadLocalMap, so operations on the maps do not affect each other and there are no thread-safety issues.

Usage example

Assuming that every thread needs to count and change the value of the counter at run time, ThreadLocal is a good choice.

public class SeqCount {
    private static ThreadLocal<Integer> threadLocal = new ThreadLocal<Integer>() {
        @Override
        protected Integer initialValue() {
            return 0;
        }
    };

    public int nextSeq() {
        threadLocal.set(threadLocal.get() + 1);
        return threadLocal.get();
    }

    public static void main(String[] args) {
        SeqCount seqCount = new SeqCount();
        SeqThread seqThread1 = new SeqThread(seqCount);
        SeqThread seqThread2 = new SeqThread(seqCount);
        SeqThread seqThread3 = new SeqThread(seqCount);
        Thread thread1 = new Thread(seqThread1);
        Thread thread2 = new Thread(seqThread2);
        Thread thread3 = new Thread(seqThread3);
        thread1.start();
        thread2.start();
        thread3.start();
    }

    public static class SeqThread implements Runnable {
        private SeqCount seqCount;

        public SeqThread(SeqCount seqCount) {
            this.seqCount = seqCount;
        }

        @Override
        public void run() {
            for (int i = 0; i < 5; i++) {
                System.out.println(Thread.currentThread().getName() + ",seqCount=" + seqCount.nextSeq());
            }
        }
    }
}

/* One possible output:
Thread-0,seqCount=1
Thread-0,seqCount=2
Thread-0,seqCount=3
Thread-0,seqCount=4
Thread-0,seqCount=5
Thread-1,seqCount=1
Thread-1,seqCount=2
Thread-1,seqCount=3
Thread-1,seqCount=4
Thread-1,seqCount=5
Thread-2,seqCount=1
Thread-2,seqCount=2
Thread-2,seqCount=3
Thread-2,seqCount=4
Thread-2,seqCount=5
*/

Memory leak problem

The keys of a ThreadLocalMap are weak references to ThreadLocal instances, and weak references are easy for the GC to collect. When a key becomes null, the GC reclaims the key, but the value may not be reclaimed: it is still strongly reachable through the current Thread -> ThreadLocalMap -> Entry chain, so that strong reference keeps it alive. If the thread never removes the entry (for example, a long-lived pooled thread), this can eventually lead to OOM.
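
To avoid the leak, the usual pattern is to call remove() in a finally block; a sketch with a hypothetical CONTEXT variable:

public class RequestContext {
    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    public static void handle(String user) {
        CONTEXT.set(user);
        try {
            System.out.println("handling request for " + CONTEXT.get());
        } finally {
            // Always remove, especially with pooled threads, to avoid leaks
            CONTEXT.remove();
        }
    }
}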

Java Memory Model (JMM)

Hardware memory structure

The processing speeds of the processor and of the computer's storage devices differ by several orders of magnitude, so a cache is placed between the processor and main memory to bridge the gap.

The data needed for a computation is copied into the cache so the computation can proceed quickly; when it finishes, the result is written from the cache back to main memory, so the processor does not have to wait on main-memory reads and writes. Each CPU has its own cache but they all operate on the same main memory, so when multiple CPUs work on main memory simultaneously the data may become inconsistent; a cache coherence protocol is therefore required to keep the data consistent.

Java memory model

The Java Memory Model (JMM) hides the memory-access differences of the various kinds of hardware and operating systems so that Java programs get consistent memory behavior on every platform. The JMM defines an abstract relationship between threads and main memory: shared variables live in main memory, and each thread has a private local memory holding copies of the variables it reads and writes. Local memory is an abstraction of the JMM, not a physical thing.

Main memory: stores the Java instance objects created by all threads, whether an object is referenced by a member variable or by a local variable in a method, along with shared class information, constants, and static variables. It is a shared data area, so multiple threads accessing the same variable in it can cause thread-safety problems.

Working memory: stores each thread's local variables and its copies of shared data. A thread can only access its own working memory, whose contents are invisible to other threads; because working memory is private per thread and threads cannot touch one another's working memory, data stored there has no thread-safety problem. ThreadLocal is an example built on this idea.

The JMM’s eight memory interactions

There are eight kinds of memory interaction, and the virtual machine must guarantee that each operation is atomic and indivisible:

  • lock: acts on a main-memory variable, marking it as exclusively owned by one thread.
  • read: acts on a main-memory variable, transferring its value from main memory to the thread's working memory for the subsequent load.
  • load: acts on a working-memory variable, putting the value obtained by read into the working-memory copy of the variable.
  • use: acts on a working-memory variable, handing its value to the execution engine; performed whenever the VM reaches a bytecode instruction that needs the variable's value.
  • assign: acts on a working-memory variable, assigning a value received from the execution engine to the working-memory copy.
  • store: acts on a working-memory variable, passing its value to main memory for the later write.
  • write: acts on a main-memory variable, putting the value obtained by store into the main-memory variable.
  • unlock: acts on a main-memory variable, releasing a locked variable so another thread can lock it.

Copying a variable from main memory to working memory performs read -> load in order (use then hands the value to the execution engine); synchronizing a variable from working memory back to main memory performs assign -> store -> write. The JMM requires these sequences to execute in order, but not consecutively: other instructions may be interleaved between them.

Three features of the JMM

Atomicity

The JMM provides the atomic operations read, load, use, assign, store, and write. Accesses to Java's primitive variables can be considered atomic (except for double and long, since some virtual machines may split a 64-bit access into two 32-bit operations). For lock and unlock, the VM does not expose the operations directly; instead it offers the higher-level bytecode instructions monitorenter and monitorexit, which use them implicitly and correspond to Java's synchronized.

Visibility

Threads are isolated from one another: a thread works on a copy of a main-memory variable, its changes must be flushed back to main memory, and other threads must re-read the changed value from main memory. All variables go through this process, but a volatile variable is forcibly flushed back to main memory immediately after modification and re-read from main memory before use. synchronized guarantees that before unlock, shared variables are synchronized back to main memory (store and write). For a final field, as long as it is initialized once in the constructor and the object does not escape, other threads can then see the final field's value.

Ordering

Ordering: observed from within a single thread, results appear ordered; across threads, operations appear out of order. Java provides volatile and synchronized to preserve ordering between threads. volatile relies on memory barriers, while synchronized relies on the rule that a lock must be unlocked before another thread can lock it again, making synchronized blocks execute serially.

Happens-Before

Happens-before is an ordering between two operations defined in the Java memory model. If operation A happens before operation B, then the effects of A are observable by B, including writes to shared variables in memory, messages sent, methods called, and so on. The JMM defines some natural happens-before relationships:

  • Program order rule: within a thread, operations happen in program-text order, from top to bottom.
  • Monitor lock rule: an unlock operation happens before every subsequent lock operation on the same lock.
  • volatile variable rule: a write to a volatile variable happens before every subsequent read of it.
  • Thread start rule: the start() call on a Thread object happens before every action of the started thread.
  • Thread termination rule: every operation in a thread happens before the detection of that thread's termination.
  • Finalizer rule: the completion of an object's initialization happens before the start of its finalize() method.
  • Transitivity: if A happens before B and B happens before C, then A happens before C.

CAS

CAS definition

CAS stands for Compare And Swap. Doug Lea used CAS extensively to implement Java's concurrent operations: the whole AQS framework and the Atomic classes are built on CAS, and in JDK 1.8 even ConcurrentHashMap replaced ReentrantLock with CAS + synchronized.

How CAS works

CAS takes three operands: the memory value V, the expected old value A, and the new value B. The memory value V is changed to B if and only if V equals A; otherwise nothing is done. Why is that enough? When a thread goes to update the variable, if no other thread has modified it since the thread read it, then A == V and the update cannot go wrong. The pseudocode is as follows:

if (this.value==A){
    this.value=B;
    return true;
} else{
    return false;
}

Taking AtomicInteger as an example of how CAS is implemented:

private static final Unsafe unsafe = Unsafe.getUnsafe();
private static final long valueOffset;

static {
    try {
        valueOffset = unsafe.objectFieldOffset
            (AtomicInteger.class.getDeclaredField("value"));
    } catch (Exception ex) { throw new Error(ex); }
}

private volatile int value;
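
For context, this is roughly how AtomicInteger's getAndIncrement() spins on CAS in the JDK 8 sources (getAndAddInt lives in Unsafe):

// AtomicInteger (JDK 8)
public final int getAndIncrement() {
    return unsafe.getAndAddInt(this, valueOffset, 1);
}

// Unsafe (JDK 8): spin until the CAS succeeds
public final int getAndAddInt(Object o, long offset, int delta) {
    int v;
    do {
        v = this.getIntVolatile(o, offset);
    } while (!this.compareAndSwapInt(o, offset, v, v + delta));
    return v;
}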

CAS guarantees that a read-modify-write is atomic. That is easy on a single processor but more involved on multiprocessors, where the CPU offers two mechanisms for atomic operations:

  • Bus locking: the processor asserts a LOCK# signal on the bus; while one processor outputs this signal, requests from the other processors are blocked, giving it exclusive use of shared memory. This is a blunt instrument: during the lock no other processor can access memory at any address, so it is expensive.
  • Cache locking: usually we only need the operation on one memory address to be atomic at a given time. With cache locking, if the memory region is held in the processor's cache line and locked for the duration of the operation, the processor does not assert LOCK# when writing back; it modifies the address internally and relies on the cache coherence protocol for atomicity. Cache coherence guarantees that data in a memory region can be modified by only one processor at a time: if CPU1 modifies i in its cache line under a cache lock, CPU2 cannot cache that line at the same time.

Drawbacks of CAS

Long spin times

If a spinning CAS keeps failing for a long time, it imposes a heavy cost on the CPU; some places in JUC therefore cap the number of CAS spins.

Atomicity is guaranteed for only one shared variable

CAS works on a single shared variable; operations spanning multiple shared variables must fall back to locking.

ABA

CAS checks whether the value has changed and updates it if not. But there is a case where a variable's value goes from A to B and back to A: the CAS check sees no change even though the value did change. This is the ABA problem. The solution is to add a version number: each variable carries a version that is incremented on every change, so A->B->A becomes 1A->2B->3A. The JDK ships AtomicStampedReference for exactly this, as sketched below.
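
A minimal sketch of the versioned CAS (the values are illustrative):

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        // Value 100 with initial stamp (version) 0
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);

        int stamp = ref.getStamp();
        // Succeeds only if both the value and the stamp still match
        boolean ok = ref.compareAndSet(100, 101, stamp, stamp + 1);
        System.out.println("updated: " + ok + ", value: " + ref.getReference());
    }
}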

AQS

AQS introduction

The core idea of AQS (AbstractQueuedSynchronizer) is: if the requested shared resource is free, the requesting thread is made the effective worker thread and the shared resource is marked as locked. If the requested shared resource is occupied, a mechanism is needed to block the waiting thread and grant it the lock when it is awakened. AQS implements this with a CLH queue lock: a thread that cannot acquire the lock immediately is appended to the queue. CLH is a virtual doubly linked queue (there is no queue instance, only the links between nodes). AQS wraps each thread requesting the shared resource into a node of the CLH queue to manage lock allocation.

The queue synchronizer AQS (hereafter "the synchronizer") is the basic framework for building locks and other synchronization components. It uses an int member variable, state, to represent the synchronization state and a built-in FIFO queue to manage the threads queuing for the resource. The synchronizer is used mainly by inheritance: subclasses extend it and implement its abstract methods to manage the synchronization state, which they must change using the three methods it provides: getState(), setState(int newState), and compareAndSetState(int expect, int update).

The synchronizer supports both exclusive and shared modes, enabling different kinds of synchronization components (ReentrantLock, ReentrantReadWriteLock, CountDownLatch, and so on). The relationship between synchronizer and lock: the synchronizer is the key to implementing a lock; the lock aggregates a synchronizer and uses it to realize the lock's semantics. The lock faces the consumer: it defines the interface and hides the implementation details. The synchronizer faces the implementer of the lock: it simplifies the lock's implementation. The synchronizer is designed around the template method pattern: the implementer extends the synchronizer, overrides the specified methods, embeds the synchronizer in the custom synchronization component, and calls the template methods the synchronizer provides, which in turn call the overridden methods. The overridable methods are:

Method | Description
protected boolean tryAcquire(int arg) | Acquires the synchronization state exclusively: query the current state, check whether it matches expectations, then CAS-set the new state.
protected boolean tryRelease(int arg) | Releases the synchronization state exclusively; threads waiting for the state then get a chance to acquire it.
protected int tryAcquireShared(int arg) | Acquires the synchronization state in shared mode; a return value >= 0 means success, otherwise failure.
protected boolean tryReleaseShared(int arg) | Releases the synchronization state in shared mode.
protected boolean isHeldExclusively() | Whether the synchronizer is held exclusively by the current thread.
The template methods provided by the synchronizer fall into three categories: exclusive acquisition and release of synchronization state, shared acquisition and release of synchronization state, and querying the waiting threads in the synchronization queue. Custom synchronization components implement their synchronization semantics using these template methods.

Implementation of Mutex exclusive lock

An exclusive lock means only one thread can hold the lock at a time; other threads that fail to acquire it wait in the synchronization queue, and only after the holder releases the lock can subsequent threads acquire it.

public class Mutex implements Lock {
    private static class Sync extends AbstractQueuedSynchronizer {
        /** Is the lock held? */
        @Override
        protected boolean isHeldExclusively() {
            return getState() == 1;
        }

        /** Acquire the lock when the state is 0 */
        @Override
        protected boolean tryAcquire(int acquires) {
            assert acquires == 1;
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false;
        }

        /** Release the lock by setting the state back to 0 */
        @Override
        protected boolean tryRelease(int acquires) {
            assert acquires == 1;
            if (getState() == 0) {
                throw new IllegalMonitorStateException();
            }
            setExclusiveOwnerThread(null);
            setState(0);
            return true;
        }

        /** Return a Condition for this lock */
        Condition newCondition() {
            return new ConditionObject();
        }
    }

    private final Sync sync = new Sync();

    @Override
    public void lock() {
        sync.acquire(1);
    }

    @Override
    public boolean tryLock() {
        return sync.tryAcquire(1);
    }

    @Override
    public void unlock() {
        sync.release(1);
    }

    @Override
    public Condition newCondition() {
        return sync.newCondition();
    }

    public boolean isLocked() {
        return sync.isHeldExclusively();
    }

    public boolean hasQueuedThreads() {
        return sync.hasQueuedThreads();
    }

    @Override
    public void lockInterruptibly() throws InterruptedException {
        sync.acquireInterruptibly(1);
    }

    @Override
    public boolean tryLock(long time, TimeUnit unit) throws InterruptedException {
        return sync.tryAcquireNanos(1, unit.toNanos(time));
    }
}

In the code above, the exclusive lock Mutex is a custom synchronization component. It defines a static inner class Sync that extends the synchronizer and implements exclusive acquisition and release of the synchronization state: in tryAcquire(int acquires), the state is acquired if the CAS succeeds; in tryRelease(int releases), the synchronization state is simply set back to 0.

Source code walkthrough

The synchronization queue

The synchronizer relies on an internal synchronization queue to manage the synchronization state. When a thread fails to acquire the synchronization state, the synchronizer wraps the thread and its wait status into a Node, appends it to the queue, and blocks the thread; when the synchronization state is released, the thread in the head node is woken up to retry the acquisition. A node in the synchronization queue holds a thread reference, a wait status, and predecessor and successor pointers. The wait-status values are as follows:

// The thread waited with a timeout or was interrupted; a cancelled node never changes state again
static final int CANCELLED = 1;
// The successor node's thread is (or will soon be) blocked; when the current node releases the
// synchronization state or is cancelled, it must wake up its successor
static final int SIGNAL = -1;
// The node is in a condition wait queue; when another thread calls signal() on the Condition,
// the node is moved from the wait queue to the synchronization queue
static final int CONDITION = -2;
// The next shared acquisition of the synchronization state should propagate unconditionally
static final int PROPAGATE = -3;

Node also defines the following fields:

// Successor node
volatile Node next;
// Predecessor node, set when the node joins the synchronization queue
volatile Node prev;
// The thread waiting to acquire the synchronization state
volatile Thread thread;
// The successor node in the condition wait queue; if the current node is SHARED, this field
// is the SHARED constant — i.e. the node type (exclusive vs. shared) and the condition-queue
// successor share the same field
Node nextWaiter;

Nodes form the basis of the synchronization queue. The synchronizer holds a head node and a tail node, and threads that fail to acquire the synchronization state are appended at the tail.

The synchronizer follows FIFO. The head node is the node that successfully acquired the synchronization state; when its thread releases the state it wakes the successor node, and the successor sets itself as head once it acquires the synchronization state.

Exclusive acquisition and release of synchronization state

Acquire method

The synchronization state is acquired by calling the synchronizer's acquire(int arg) method, which is insensitive to interruption:

public final void acquire(int arg) {
    // Try to acquire; if tryAcquire() fails, wrap the thread into a node, enqueue it,
    // and spin/block in acquireQueued(). acquireQueued() returning true means the
    // thread was interrupted, so re-assert the interrupt with selfInterrupt().
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}

The custom synchronizer's tryAcquire() is called first; if it fails, a Node is constructed and appended to the tail of the synchronization queue by addWaiter(), and finally acquireQueued(Node node, int arg) is called, making the node spin for the synchronization state; if it cannot get it, the thread in the node is blocked via LockSupport.park(this).

private Node addWaiter(Node mode) {
    Node node = new Node(Thread.currentThread(), mode);
    // tail is the tail of the AQS queue; try the fast path first
    Node pred = tail;
    if (pred != null) {
        node.prev = pred;
        // CAS the node in as the new tail
        if (compareAndSetTail(pred, node)) {
            pred.next = node;
            return node;
        }
    }
    // First enqueue (tail == null) or CAS failure: fall back to enq()
    enq(node);
    return node;
}
private Node enq(final Node node) {
    // Infinite loop + CAS
    for (;;) {
        Node t = tail;
        // First enqueue: initialize head and tail with a sentinel node
        if (t == null) {
            if (compareAndSetHead(new Node()))
                tail = head;
        } else {
            // Otherwise append, the same logic as addWaiter's fast path
            node.prev = t;
            if (compareAndSetTail(t, node)) {
                t.next = node;
                return t;
            }
        }
    }
}

In enq(final Node node), the synchronizer guarantees correct insertion with an infinite loop: the current thread returns only once its node has been CAS-set as the tail. After entering the queue, the node spins: each node (thread) checks whether its own condition is satisfied and leaves the spin once it has acquired the synchronization state.

final boolean acquireQueued(final Node node, int arg) {
    boolean failed = true;
    try {
        boolean interrupted = false;
        for (;;) {
            final Node p = node.predecessor();
            // Only the node whose predecessor is the head may try to acquire
            if (p == head && tryAcquire(arg)) {
                setHead(node);
                p.next = null; // help GC
                failed = false;
                // Lock acquired; return the interrupt status
                return interrupted;
            }
            // Failed to get the lock.
            // shouldParkAfterFailedAcquire: should the current thread block?
            // parkAndCheckInterrupt: block the thread and, when it wakes up again,
            // return whether it was interrupted
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                // Record the interrupt flag
                interrupted = true;
        }
    } finally {
        if (failed)
            cancelAcquire(node);
    }
}

/**
 * Check and update the node's waitStatus; return true if the thread should block.
 */
private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) {
    int ws = pred.waitStatus;
    if (ws == Node.SIGNAL)
        // The predecessor is already SIGNAL, so it will wake us up; safe to park
        return true;
    if (ws > 0) {
        /*
         * The predecessor's thread was CANCELLED (CANCELLED = 1):
         * skip over cancelled predecessors.
         */
        do {
            node.prev = pred = pred.prev;
        } while (pred.waitStatus > 0);
        pred.next = node;
    } else {
        /*
         * Set the predecessor's status to SIGNAL.
         */
        compareAndSetWaitStatus(pred, ws, Node.SIGNAL);
    }
    return false;
}

/**
 * Called when shouldParkAfterFailedAcquire returns true: block the current thread.
 */
private final boolean parkAndCheckInterrupt() {
    // Block the current thread
    LockSupport.park(this);
    // Return (and clear) the thread's interrupt status
    return Thread.interrupted();
}

The exclusive acquisition of synchronization state can be summarized as follows: every thread that fails to get the lock is wrapped into a Node and appended to the tail of the synchronization queue. It then spins, checking whether its predecessor is the head node; if so, it tries to acquire the lock and, on success, sets itself as the new head. On failure it inspects the predecessor's status: if the status is SIGNAL (-1), the current node blocks (the predecessor will wake it up, which reduces spin overhead); a CANCELLED (1) predecessor is skipped over; any other status is changed to SIGNAL (-1).

The release methods, and the shared acquisition and release paths, follow the same pattern and are left for you to explore.

Synchronized

Overview

Synchronized guarantees that only one thread at a time can enter a critical section (a method or block of code) and also guarantees memory visibility of shared variables. Every object in Java can serve as a lock, which is the basis of synchronized (see the sketch after this list):

  • Normal synchronized method: the lock is the current instance object.
  • Static synchronized method: the lock is the Class object of the current class.
  • Synchronized block: the lock is the object in the parentheses.
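
A sketch of the three forms and what each locks on (names are illustrative):

public class SyncForms {
    // Normal synchronized method: locks on this
    public synchronized void instanceMethod() { }

    // Static synchronized method: locks on SyncForms.class
    public static synchronized void staticMethod() { }

    private final Object mutex = new Object();

    public void block() {
        // Synchronized block: locks on the object in parentheses
        synchronized (mutex) {
            // critical section
        }
    }
}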

Principle

After synchronized code is compiled, monitorenter and monitorexit instructions appear in the bytecode, which can be understood as locking before the block executes and unlocking when it exits. When monitorenter executes, the thread associates the lock object with an ObjectMonitor object. Each thread keeps two lists of ObjectMonitor objects, free and used; if its free list is empty, it requests an ObjectMonitor from the global list. ObjectMonitor's owner, WaitSet, Cxq, and EntryList fields are the key ones: the queues (WaitSet, Cxq, EntryList) hold wrapped thread objects (ObjectWaiter), and the thread that sets owner is the thread holding the lock. In short:

  • When a thread reaches a synchronized block, it first enters the EntryList queue, then tries to set the owner field to itself and increments the monitor's count to take the object lock. If the spinning attempt fails, it blocks in the Cxq queue and waits.

  • When the lock is released, owner is reset to null and count is decremented, so another thread can enter and acquire the lock.

A synchronized method works the same way, except that instead of monitor instructions, the ACC_SYNCHRONIZED flag marks the method as synchronized.

Lock optimizations

Adaptive spin

When a lock is contended, instead of immediately giving up the CPU, the waiting thread can execute a spin (usually an empty loop): this is a spin lock. Spin waiting avoids the overhead of thread switching but burns CPU time, so it works well if the lock is held only briefly and wastes resources if the lock is held for long. Spinning must therefore be bounded: if the spin exceeds a certain number of attempts without getting the lock, the thread should be suspended. Adaptive spinning adjusts this bound based on the previous spin time on the same lock and the state of the lock's owner.

Lock elimination

While running, the virtual machine removes locks on code that requests synchronization but where it detects no possible contention on shared data; the judgment is generally based on escape analysis.

Lock coarsening

In principle, we write the code to narrow the scope of the synchronization block to reduce the lock wait time. However, if a series of operations frequently lock and unlock the same object, or lock and unlock within the body of the loop, the lock range can be extended.

Lightweight lock

Lightweight locks were introduced in JDK 1.6 to reduce the performance cost of traditional heavyweight locks (which use OS mutexes) in the absence of multithreaded contention. The object header in the HotSpot virtual machine holds two parts of information: one part stores the object's own runtime data and is called the Mark Word; the other stores a pointer to the object's type data in the method area.

Locking

When code enters a synchronized block, if the synchronization object is unlocked (lock flag 01), the virtual machine first creates a space in the current thread's stack frame called the Lock Record, which stores a copy of the object's current Mark Word. The VM then uses CAS to update the object's Mark Word to a pointer to the Lock Record. If the update succeeds, the thread owns the object's lock and the object's lock flag changes to 00, indicating the lightweight-locked state. If it fails, the VM checks whether the object's Mark Word already points into the current thread's stack frame: if so, the current thread already holds the lock; otherwise the lock has been taken by another thread. Once two or more threads contend for the same lock, the lightweight lock is no longer effective and inflates to a heavyweight lock, with the lock flag changing to 10.

Unlocking

Unlocking is also a CAS operation: if the object's Mark Word still points to the thread's lock record, CAS swaps the object's current Mark Word back to the Displaced Mark Word copied into the thread. If the swap succeeds, the synchronization is complete; if it fails, another thread is trying to acquire the lock, and the suspended thread must be woken when the lock is released.

Biased locking

Biased locking is another optimization introduced in JDK 1.6, eliminating synchronization primitives entirely for uncontended data. Where a lightweight lock uses CAS to avoid the mutex in the absence of contention, a biased lock avoids the whole synchronization in the absence of contention. When a lock object is first acquired by a thread, the VM sets the flag in the object header to 01 (biased mode) and uses CAS to record the ID of the acquiring thread in the object's Mark Word. If the CAS succeeds, the thread holding the biased lock can thereafter enter the synchronized block without any synchronization work. Biased mode ends as soon as another thread attempts to acquire the lock. (The original article shows a diagram of the 64-bit HotSpot Mark Word layout here.)

Lock escalation

Locks are upgraded in one direction: no lock -> biased lock -> lightweight lock (spin lock) -> heavyweight lock.

Volatile

Principle

  • Guaranteed visibility, not guaranteed atomicity
  • Disallow instruction reordering
  • The underlying implementation uses a memory barrier

Memory semantics

  • When a volatile variable is written, the JMM immediately flushes the value of the shared variable from the thread's local memory to main memory.
  • When a volatile variable is read, the JMM invalidates the thread's local copy and reads the value from main memory.

Reordering rules

  • If the first operation is a volatile read, reordering is not allowed regardless of what the second operation is. This ensures that operations after a volatile read are not reordered by the compiler to before it.
  • If the second operation is a volatile write, reordering is not allowed regardless of what the first operation is. This ensures that operations before a volatile write are not reordered by the compiler to after it.

Volatile is implemented underneath by inserting memory barriers (a classic example follows the list):

  • Insert a StoreStore barrier before each volatile write.
  • Insert a StoreLoad barrier after each volatile write.
  • Insert a LoadLoad barrier after each volatile read.
  • Insert a LoadStore barrier after each volatile read.
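
A classic place where these volatile semantics matter is double-checked locking for a lazily initialized singleton:

public class Singleton {
    // volatile forbids reordering of the reference publication, so other
    // threads never see a half-constructed instance
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                // first check, no lock
            synchronized (Singleton.class) {
                if (instance == null) {        // second check, with lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}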

Lock

ReentrantLock

ReentrantLock is a class implementing the Lock interface. It supports reentrancy, meaning a thread can repeatedly lock a shared resource it already holds: re-acquiring the lock does not block the current thread. It also supports both fair and unfair modes.

How reentrancy works

To support reentrancy, two problems need to be solved:

  • On acquisition, if the thread that already holds the lock is the current thread, the acquisition succeeds immediately.
  • Since the lock is acquired n times, it is fully released only after it has been released the same n times.

For the first point, look at the ReentrantLock implementation:

final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        // Lock is free: CAS the state and record the owner
        if (compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    // Reentrant case: the holder is the current thread
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0) // overflow
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}

The logic is simple: if the lock is already held, check whether the holder is the current thread; if so, add 1 to the synchronization state, meaning it has been acquired again. Now for release:

protected final boolean tryRelease(int releases) {
    // Decrement the synchronization state
    int c = getState() - releases;
    if (Thread.currentThread() != getExclusiveOwnerThread())
        throw new IllegalMonitorStateException();
    boolean free = false;
    // Only when the state reaches 0 is the lock fully released
    if (c == 0) {
        free = true;
        setExclusiveOwnerThread(null);
    }
    // Otherwise the lock is still held; return false
    setState(c);
    return free;
}

So a reentrant lock is only fully released when the synchronization state drops to 0; until then the lock remains held.

Fair and unfair locks

ReentrantLock supports both unfair and fair locks. Fairness concerns lock acquisition: with a fair lock, the order in which the lock is granted matches the absolute time order of the requests, i.e. FIFO. The default is the unfair lock.

public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}

Let’s look at fair lock handling:

protected final boolean tryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        if (!hasQueuedPredecessors() &&
            compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}

The only difference from the unfair lock is the added call to hasQueuedPredecessors(), which checks whether the current node has a predecessor in the synchronization queue. If it does, some other thread requested the resource earlier, so the current thread must wait its turn.

ReentrantReadWriteLock

In read-heavy, write-light business scenarios, reading data does not affect correctness, so using an exclusive lock creates a needless performance bottleneck. Java provides the read-write lock ReentrantReadWriteLock: multiple reader threads may access the data simultaneously, but while a writer thread is accessing it, all reader threads and any other writer threads are blocked.

  • Fairness: supports both fair and unfair modes.
  • Reentrancy: both the read lock and the write lock can be re-acquired by their holder, and a thread holding the write lock may also acquire the read lock.
  • Lock degradation: a write lock can be downgraded to a read lock by acquiring the write lock, then the read lock, then releasing the write lock.

Lock degradation

Lock degradation is the process of a write lock being downgraded to a read lock: acquire the write lock, then acquire the read lock, then release the write lock, which preserves data visibility. Suppose threads A and B, with thread A holding the write lock. If A released the write lock directly instead of first taking the read lock, B could grab the write lock and modify the data, and A would not see B's update. If instead A acquires the read lock (following the degradation steps), B is blocked until A has finished using the data and releases the read lock, after which B can take the write lock and update the data. A sketch follows the list below.

  • While any thread holds the read lock, no thread may acquire the write lock.
  • While a thread holds the write lock, no other thread may acquire the read lock or the write lock.
  • A write lock can be downgraded to a read lock, but a read lock cannot be upgraded to a write lock.
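
These degradation steps mirror the CachedData example in the ReentrantReadWriteLock javadoc; a condensed sketch:

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CachedData {
    private Object data;
    private volatile boolean valid;
    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

    public void use() {
        rwl.readLock().lock();
        if (!valid) {
            // Must release the read lock before acquiring the write lock
            // (upgrading is not allowed)
            rwl.readLock().unlock();
            rwl.writeLock().lock();
            try {
                if (!valid) {
                    data = new Object(); // reload the data
                    valid = true;
                }
                // Downgrade: acquire the read lock before releasing the write lock
                rwl.readLock().lock();
            } finally {
                rwl.writeLock().unlock();
            }
        }
        try {
            System.out.println("using " + data);
        } finally {
            rwl.readLock().unlock();
        }
    }
}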

Condition

Like Object's monitor methods, the Condition interface, used together with Lock, implements the wait/notify pattern. A comparison:

Comparison item | Object monitor methods | Condition
Precondition | Hold the object's lock | Call Lock.lock() to get the lock, then Lock.newCondition() to get a Condition
How it is called | Call object.wait() directly | Call condition.await() directly
Number of wait queues | One | Multiple
Release the lock and enter the wait state | Supported | Supported
Release the lock and wait without responding to interruption | Not supported | Supported
Release the lock and enter a timed wait | Supported | Supported
Release the lock and wait until some future point in time | Not supported | Supported
Wake up one thread in the wait queue | Supported | Supported
Wake up all threads in the wait queue | Supported | Supported

Concurrency utility classes

CountDownLatch

CountDownLatch is implemented with a counter. When we create a CountDownLatch we pass in the counter's value, which represents the number of threads (tasks) to wait for. Each time a thread finishes its task the counter decreases by one; when it reaches 0, all the tasks are done and the waiting threads can resume. Differences from CyclicBarrier:

  • CountDownLatch lets one or more threads wait for a set of other threads to finish, whereas CyclicBarrier makes a group of threads wait for one another.
  • CountDownLatch's counter cannot be reset, while CyclicBarrier's counter can be reset and reused.

Implementation principle

CountDownLatch provides only one constructor:

public CountDownLatch(int count) {
        if (count < 0) throw new IllegalArgumentException("count < 0");
        this.sync = new Sync(count);
    }

Take a look at Sync, an inner class of CountDownLatch:

private static final class Sync extends AbstractQueuedSynchronizer {
    private static final long serialVersionUID = 4982264981922014374L;

    Sync(int count) {
        setState(count);
    }

    // Get the current count (the synchronization state)
    int getCount() {
        return getState();
    }

    // Shared acquire succeeds only when the count has reached 0
    protected int tryAcquireShared(int acquires) {
        return (getState() == 0) ? 1 : -1;
    }

    // Shared release: decrement the count with CAS; return true when it reaches 0
    protected boolean tryReleaseShared(int releases) {
        for (;;) {
            int c = getState();
            if (c == 0)
                return false;
            int nextc = c - 1;
            if (compareAndSetState(c, nextc))
                return nextc == 0;
        }
    }
}

CountDownLatch is implemented internally with a shared lock (a usage sketch follows the list):

  • When creating a CountDownLatch instance, an int parameter count is passed in: the initial counter value, i.e. the total number of times the shared lock can be released.
  • When a thread calls await(), it checks whether count is 0 and, if not, waits until it becomes 0.
  • When another thread calls countDown(), the shared lock state is released: count - 1.
  • Note that a CountDownLatch cannot be rewound or reset.
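
A minimal usage sketch (the counts are illustrative):

import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(3);
        for (int i = 0; i < 3; i++) {
            new Thread(() -> {
                System.out.println(Thread.currentThread().getName() + " finished");
                latch.countDown(); // count - 1
            }).start();
        }
        latch.await(); // main blocks until the count reaches 0
        System.out.println("all workers done");
    }
}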

CyclicBarrier

CyclicBarrier is a synchronization helper that lets a group of threads wait for one another until a common barrier point is reached. It is useful in programs with a fixed-size set of threads that must periodically wait for each other. Because the barrier can be reused after the waiting threads are released, it is called a cyclic barrier.

Implementation principle

Take a look at the fields of CyclicBarrier:

// Lock guarding the barrier state
private final ReentrantLock lock = new ReentrantLock();
// Condition on which threads wait until the barrier is tripped
private final Condition trip = lock.newCondition();
// Number of participating threads
private final int parties;
// The Runnable command received by CyclicBarrier, executed with priority when the
// last thread reaches the barrier; useful for more complex business scenarios
private final Runnable barrierCommand;
// The current generation; replaced each time the barrier is tripped or broken
private Generation generation = new Generation();

You can see that CyclicBarrier uses a ReentrantLock and a Condition internally. It has two constructors:

/**
 * Creates a new CyclicBarrier that trips when the given number of parties (threads)
 * are waiting on it, and runs the given barrier action when the barrier is tripped,
 * performed by the last thread entering the barrier.
 */
public CyclicBarrier(int parties, Runnable barrierAction) {
    if (parties <= 0) throw new IllegalArgumentException();
    this.parties = parties;
    this.count = parties;
    this.barrierCommand = barrierAction;
}

/**
 * Creates a new CyclicBarrier that trips when the given number of parties (threads)
 * are waiting on it, without a predefined barrier action.
 */
public CyclicBarrier(int parties) {
    this(parties, null);
}

How does a CyclicBarrier make threads wait once they reach the barrier? Through await(): each thread calls await() to tell the CyclicBarrier it has reached the barrier and then blocks. When every thread has reached the barrier, the blocking ends and all threads continue with their subsequent logic.

public int await(long timeout, TimeUnit unit)
        throws InterruptedException, BrokenBarrierException, TimeoutException {
    return dowait(true, unit.toNanos(timeout));
}

private int dowait(boolean timed, long nanos)
        throws InterruptedException, BrokenBarrierException, TimeoutException {
    // Lock the barrier state
    final ReentrantLock lock = this.lock;
    lock.lock();
    try {
        // The current generation
        final Generation g = generation;
        // The barrier is broken
        if (g.broken)
            throw new BrokenBarrierException();
        // The thread was interrupted: break the barrier
        if (Thread.interrupted()) {
            breakBarrier();
            throw new InterruptedException();
        }
        // One more thread has arrived: count - 1
        int index = --count;
        // All threads have arrived: trip the barrier and run barrierCommand first
        if (index == 0) {
            boolean ranAction = false;
            try {
                final Runnable command = barrierCommand;
                if (command != null)
                    command.run();
                ranAction = true;
                // Wake up all waiting threads and start the next generation
                nextGeneration();
                return 0;
            } finally {
                // If the command threw, break the barrier
                if (!ranAction)
                    breakBarrier();
            }
        }

        for (;;) {
            try {
                // If not a timed wait, wait on the condition with await()
                if (!timed)
                    trip.await();
                // Otherwise wait with awaitNanos()
                else if (nanos > 0L)
                    nanos = trip.awaitNanos(nanos);
            } catch (InterruptedException ie) {
                if (g == generation && !g.broken) {
                    breakBarrier();
                    throw ie;
                } else {
                    Thread.currentThread().interrupt();
                }
            }

            if (g.broken)
                throw new BrokenBarrierException();
            // The generation changed: the barrier tripped; return this thread's index
            if (g != generation)
                return index;
            // Timed out: break the barrier
            if (timed && nanos <= 0L) {
                breakBarrier();
                throw new TimeoutException();
            }
        }
    } finally {
        // Unlock
        lock.unlock();
    }
}

Application

CyclicBarrier suits scenarios where multiple threads each compute part of the data and the results are merged at the end:

public class CyclicBarrierTest {
    private static CyclicBarrier cyclicBarrier;
    private static final Integer THREAD_COUNT = 10;

    static class CyclicBarrierThread implements Runnable {
        @Override
        public void run() {
            System.out.println(Thread.currentThread().getName() + " has entered the classroom");
            try {
                cyclicBarrier.await();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) {
        cyclicBarrier = new CyclicBarrier(THREAD_COUNT, new Runnable() {
            @Override
            public void run() {
                System.out.println("Everyone has arrived, class begins...");
            }
        });
        for (int i = 0; i < THREAD_COUNT; i++) {
            Thread thread = new Thread(new CyclicBarrierThread());
            thread.start();
        }
    }
}

Semaphore

Semaphore is a counter that controls access to shared resources by multiple threads. Like CountDownLatch, it is essentially a shared lock maintaining a set of permits. Take a parking lot as an example: suppose it has five spaces, all empty at first. Three cars arrive in turn and there are enough spaces. Then three more cars arrive; since only two spaces remain, only two can park, and the last one must wait outside until a space frees up. In programming terms, the parking lot is the Semaphore, the permit set is 5, and the cars are threads: each arriving car takes a permit (count - 1), and the count must stay >= 0 or the thread blocks (the car waits); each departing car returns a permit (count + 1) and wakes up one waiting thread (lets one car in).

public class SemaphoreTest {
    static class Parking {
        private Semaphore semaphore;

        Parking(int count) {
            semaphore = new Semaphore(count);
        }

        public void park() {
            try {
                // Take a permit (a parking space)
                semaphore.acquire();
                long time = (long) (Math.random() * 10 + 1);
                System.out.println(Thread.currentThread().getName()
                        + " parked, staying for " + time + " ms");
                Thread.sleep(time);
                System.out.println(Thread.currentThread().getName() + " leaves the parking lot");
            } catch (InterruptedException e) {
                e.printStackTrace();
            } finally {
                // Return the permit (free the space)
                semaphore.release();
            }
        }
    }

    static class Car implements Runnable {
        private Parking parking;

        Car(Parking parking) {
            this.parking = parking;
        }

        @Override
        public void run() {
            parking.park();
        }
    }

    public static void main(String[] args) {
        Parking parking = new Parking(3);
        // Five cars arrive at the same time; only three can park,
        // the other two must wait for a free space.
        for (int i = 0; i < 5; i++) {
            Thread thread = new Thread(new Car(parking));
            thread.start();
        }
    }
}

Exchanger

Exchanger is a synchronizer whose main purpose is to exchange data between two threads. It can be seen as a two-way barrier: where a CyclicBarrier is a one-way barrier, an Exchanger is bidirectional. When thread 1 reaches the barrier, it checks whether the other thread has arrived and, if not, waits; when the other thread (thread 2) arrives, the two exchange their data in a pair. It is suitable for pairwise data exchange between two threads, as in the sketch below.
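
A minimal sketch of two threads swapping data through an Exchanger (the strings are illustrative):

import java.util.concurrent.Exchanger;

public class ExchangerTest {
    public static void main(String[] args) {
        Exchanger<String> exchanger = new Exchanger<>();

        new Thread(() -> {
            try {
                // Blocks until the partner thread also calls exchange()
                String received = exchanger.exchange("data from A");
                System.out.println("A received: " + received);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();

        new Thread(() -> {
            try {
                String received = exchanger.exchange("data from B");
                System.out.println("B received: " + received);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();
    }
}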

Concurrent containers

CopyOnWriteArrayList

Design idea of COW

With the read-write lock ReentrantReadWriteLock, reader threads block while the write lock is held and can only acquire the lock and read the latest data after the write lock is released; the upside is that readers always see the latest data. COW (copy-on-write) instead achieves eventual consistency through delayed update, guaranteeing that reader threads never block. The idea: when we add an element, we do not modify the current container directly; we copy it to create a new container, add the element to the new container, and then point the original reference at the new container. CopyOnWrite is thus a form of read-write separation: reads and writes target different containers, trading real-time consistency for an eventually consistent, non-blocking strategy.

Source code

CopyOnWriteArrayList internally maintains an array

/** The array, accessed only via getArray/setArray. */
private transient volatile Object[] array;

The array is declared volatile to guarantee visibility. Look at the add() method:

public boolean add(E e) {
    final ReentrantLock lock = this.lock;
    // 1. Use the lock to ensure only one writer thread at a time
    lock.lock();
    try {
        // 2. Get the current array
        Object[] elements = getArray();
        int len = elements.length;
        // 3. Copy it into a new array one element longer
        Object[] newElements = Arrays.copyOf(elements, len + 1);
        // 4. Add the new element to the new array
        newElements[len] = e;
        // 5. Point the old array reference at the new array
        setArray(newElements);
        return true;
    } finally {
        lock.unlock();
    }
}

Note the following about the add() method:

  • ReentrantLock ensures only one writer thread copies the array at a time; otherwise multiple copies would exist in memory at once.
  • The array reference is volatile, so when the old reference is pointed at the new array, the happens-before rule for volatile makes the writer's change immediately visible to reader threads.
  • Because writes insert data into a new array, reads and writes are guaranteed to operate on two different containers.

ConcurrentSkipListMap

ConcurrentSkipListMap is implemented with a skip list: a multi-layer linked list with an ordinary list at the bottom and progressively sparser layers above, typically built so each layer holds about half the elements of the layer below. A search starts at the top layer, making the structure effectively a linked-list form of binary search: search, insert, and delete all take O(log N) expected time. A usage sketch follows.
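
A minimal usage sketch (the keys are illustrative):

import java.util.concurrent.ConcurrentNavigableMap;
import java.util.concurrent.ConcurrentSkipListMap;

public class SkipListDemo {
    public static void main(String[] args) {
        ConcurrentNavigableMap<Integer, String> map = new ConcurrentSkipListMap<>();
        map.put(3, "c");
        map.put(1, "a");
        map.put(15, "o");
        // Keys stay sorted, and navigation methods are available
        System.out.println(map.firstKey());      // 1
        System.out.println(map.ceilingKey(10));  // 15
    }
}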

(The original article closes with diagrams showing a skip list before and after inserting the element 15.)