Implementing concurrent containers for multithreading

We know that the existing java.util collection classes, such as ArrayList and HashMap, are not thread-safe; that is, they are not suitable for use across multiple threads without external synchronization.

1. Using the wrapper methods in the Collections class:

The following code

Map<String, String> map = Collections.synchronizedMap(new HashMap<String, String>());

actually wraps the original, non-thread-safe collection in a synchronized wrapper provided by Collections. A quick look at the implementation of Collections.synchronizedMap shows that the wrapper holds the ordinary backing map m and a mutex object, and every method of the Map interface is routed through a synchronized block on that mutex, which makes each individual operation on the Map thread-safe.


private static class SynchronizedMap<K,V>
    implements Map<K,V>, Serializable {
    private static final long serialVersionUID = 1978198479659022715L;

    private final Map<K,V> m;     // Backing Map
    final Object      mutex;        // Object on which to synchronize

    SynchronizedMap(Map<K,V> m) {
        this.m = Objects.requireNonNull(m);
        mutex = this;
    }

    SynchronizedMap(Map<K,V> m, Object mutex) {
        this.m = m;
        this.mutex = mutex;
    }
    public int size() {
        synchronized (mutex) {return m.size();}
    }
    // ... the other Map methods are wrapped in the same way ...

    private transient Set<K> keySet;
    private transient Set<Map.Entry<K,V>> entrySet;
    private transient Collection<V> values;

    public Set<K> keySet() {
        synchronized (mutex) {
            if (keySet==null)
                keySet = new SynchronizedSet<>(m.keySet(), mutex);
            return keySet;
        }
    }

    public Set<Map.Entry<K,V>> entrySet() {
        synchronized (mutex) {
            if (entrySet==null)
                entrySet = new SynchronizedSet<>(m.entrySet(), mutex);
            return entrySet;
        }
    }
    // ... values() and the remaining methods follow the same pattern ...
}


All of the Collections.synchronizedXxx wrappers are implemented in this way, so we won't expand on the rest here. Before the JVM's synchronized optimizations their efficiency was poor, but with modern synchronized optimizations the performance is acceptable.

Advantages: easy to use and debug; the implementation is very simple and straightforward.
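As a quick illustration, here is a minimal sketch (class and method names are our own, not from the JDK) of using Collections.synchronizedMap, including the manual client-side locking that compound operations such as iteration still require:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class SyncMapDemo {
    // Every Map method of the wrapper synchronizes on the wrapper itself
    // (the mutex field shown above).
    static Map<String, Integer> makeSyncMap() {
        return Collections.synchronizedMap(new HashMap<String, Integer>());
    }

    // Iteration is a compound operation and is NOT atomic: the caller must
    // hold the wrapper's monitor for the whole traversal, or risk a
    // ConcurrentModificationException under concurrent writes.
    static int sumUnderLock(Map<String, Integer> map) {
        int sum = 0;
        synchronized (map) {
            for (int v : map.values())
                sum += v;
        }
        return sum;
    }

    static int demoSum() {
        Map<String, Integer> map = makeSyncMap();
        map.put("a", 1);
        map.put("b", 2);
        return sumUnderLock(map);
    }

    public static void main(String[] args) {
        System.out.println(demoSum()); // prints 3
    }
}
```

Note that per-method locking makes each call safe, but check-then-act sequences across calls still need the explicit synchronized block shown above.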

2. implementation in java.util.concurrent package:

2.1 ConcurrentLinkedQueue

Let's start with its declaration:

public class ConcurrentLinkedQueue<E> extends AbstractQueue<E>
        implements Queue<E>, java.io.Serializable {

ConcurrentLinkedQueue is, first of all, an unbounded FIFO queue backed by a linked list of nodes, so it shares the traversal characteristics of a linked structure. The difference is that storing data in a node and appending a node are both done with CAS operations, entirely without locks; correctness under concurrency is guaranteed by the queue algorithm itself (a variant of the Michael-Scott non-blocking queue).

private static class Node<E> {
    volatile E item;
    volatile Node<E> next;

    Node(E item) {
        UNSAFE.putObject(this, itemOffset, item);
    }

    boolean casItem(E cmp, E val) {
        return UNSAFE.compareAndSwapObject(this, itemOffset, cmp, val);
    }

    // Unsafe mechanics

    private static final sun.misc.Unsafe UNSAFE;
    private static final long itemOffset;
    private static final long nextOffset;

    static {
        try {
            UNSAFE = sun.misc.Unsafe.getUnsafe();
            Class<?> k = Node.class;
            itemOffset = UNSAFE.objectFieldOffset
                (k.getDeclaredField("item"));
            nextOffset = UNSAFE.objectFieldOffset
                (k.getDeclaredField("next"));
        } catch (Exception e) {
            throw new Error(e);
        }
    }
}

Take UNSAFE.compareAndSwapObject as an example, with a brief word on the Unsafe class. Java does not access the underlying operating system directly, but rather through native methods. Even so, the JVM leaves a backdoor: the Unsafe class in the JDK provides hardware-level atomic operations. UNSAFE.compareAndSwapObject is the compare-and-swap (CAS) primitive for object references; we will not go into CAS theory here. All we need to know is that the lock-free classes in the java.util.concurrent package rely on the CAS methods in Unsafe. Let's briefly walk through the offer method:

public boolean offer(E e) {
    checkNotNull(e);
    final Node<E> newNode = new Node<E>(e);

    for (Node<E> t = tail, p = t;;) {
        Node<E> q = p.next;
        if (q == null) {
            // p is the last node
            if (p.casNext(null, newNode)) {
                // A successful CAS is the linearization point:
                // e becomes an element of the queue,
                // and newNode becomes a live node.
                if (p != t) // hop two nodes at a time
                    casTail(t, newNode); // casTail updates the tail; failure is OK
                return true;
            }
            // Lost the CAS race to another thread; re-read next
        }
        else if (p == q)
            // We have fallen off the list; jump back to head or the new tail
            p = (t != (t = tail)) ? t : head;
        else
            // Check for tail updates after two hops
            p = (p != t && t != (t = tail)) ? t : q;
    }
}

The casNext method is where the offer method actually invokes the CAS operation; it is used to link in the new node:

boolean casNext(Node<E> cmp, Node<E> val) {
    return UNSAFE.compareAndSwapObject(this, nextOffset, cmp, val);
}
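Since Unsafe is internal API, the same CAS semantics can be sketched with the public AtomicReference class. In this illustrative example (our own code, not JDK source), the swap succeeds only when the current value equals the expected one:

```java
import java.util.concurrent.atomic.AtomicReference;

public class CasSketch {
    // Demonstrate compare-and-swap: each compareAndSet succeeds only if
    // the reference still holds the expected value at that instant.
    static String run() {
        AtomicReference<String> ref = new AtomicReference<>("old");
        boolean first = ref.compareAndSet("old", "new");    // true: value matched
        boolean second = ref.compareAndSet("old", "other"); // false: value is now "new"
        return first + "," + second + "," + ref.get();
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints true,false,new
    }
}
```

casNext and casTail behave the same way on the node's next field and the queue's tail field, respectively.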

casTail is used to update the tail pointer; the methods in ConcurrentLinkedQueue are essentially all implemented with such CAS operations:

private boolean casTail(Node<E> cmp, Node<E> val) {
    return UNSAFE.compareAndSwapObject(this, tailOffset, cmp, val);
}

The other methods, such as add(), remove(), and peek(), are implemented in the same style and will not be discussed here.
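A small usage sketch (illustrative names, not from the source above): several threads call offer() concurrently with no locking at all, and the CAS-based algorithm loses no elements:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class ClqDemo {
    // Have `threads` threads each offer `perThread` elements concurrently,
    // then return the resulting queue size.
    static int fillConcurrently(int threads, int perThread) {
        final ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<>();
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++)
                    queue.offer(i); // lock-free CAS append
            });
            workers[t].start();
        }
        try {
            for (Thread w : workers)
                w.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return queue.size();
    }

    public static void main(String[] args) {
        System.out.println(fillConcurrently(4, 1000)); // prints 4000
    }
}
```

Note that size() itself is O(n) and only weakly consistent during concurrent updates; here it is called after all writers have joined, so the count is exact.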

2.2 BlockingQueue (blocking queue)

BlockingQueue is an interface; its two most common implementations are LinkedBlockingQueue and ArrayBlockingQueue.

public interface BlockingQueue<E> extends Queue<E> {

Let’s start with ArrayBlockingQueue

final Object[] items;

final ReentrantLock lock;

private final Condition notEmpty;

private final Condition notFull;

The blocking behavior here relies on two conditions, notFull and notEmpty, together with a single ReentrantLock.

public boolean offer(E e) {
    checkNotNull(e);
    final ReentrantLock lock = this.lock;
    lock.lock();
    try {
        if (count == items.length)
            return false;          // the queue is full; give up immediately
        else {
            enqueue(e);
            return true;
        }
    } finally {
        lock.unlock();
    }
}

This method first takes the reentrant lock around the whole insertion, returns false if the array is full, and otherwise calls enqueue():

private void enqueue(E x) {
    final Object[] items = this.items;
    items[putIndex] = x;
    if (++putIndex == items.length)
        putIndex = 0;
    count++;
    notEmpty.signal();
}
/** The index at which the next put, offer, or add will store an element */
int putIndex;

The enqueue() method places the element at putIndex, increments count, and signals the not-empty condition via notEmpty.signal(). The blocking behavior of the entire queue is implemented through the signaling of these two conditions. add() and offer() acquire the lock with lock.lock().

public void put(E e) throws InterruptedException {
    checkNotNull(e);
    final ReentrantLock lock = this.lock;
    lock.lockInterruptibly();

The put method uses lock.lockInterruptibly() instead. From the API documentation:

  • lock(): attempts to acquire the lock and returns only once the lock is held. If another thread currently holds the lock, the calling thread sleeps until the lock becomes available.

  • lockInterruptibly(): attempts to acquire the lock; the thread sleeps if another thread currently holds it. When this call returns, there are two possibilities:

    A. The lock has been acquired.

    B. The lock was not acquired because another thread interrupted the waiter; the call throws an InterruptedException and the thread's interrupt flag is cleared.

    Unlike offer(), which simply gives up when it cannot proceed, put() waits; and because it uses lockInterruptibly(), a thread blocked in put() can be interrupted, propagating an InterruptedException for an outer caller to handle. The other difference from offer() is what happens when the queue is full:

 while (count == items.length)
   notFull.await();
  • add(): inserts the Object into the BlockingQueue and returns true if there is room; otherwise it throws an IllegalStateException.

  • offer(): inserts the Object into the BlockingQueue if possible, returning true if there is room and false otherwise.

  • put(): inserts the Object into the BlockingQueue; if there is no room, the calling thread blocks until space becomes available in the BlockingQueue.
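These three behaviors can be observed directly on an ArrayBlockingQueue. The sketch below (illustrative class and method names) fills a queue of capacity 2 and records what each insert variant does:

```java
import java.util.concurrent.ArrayBlockingQueue;

public class AbqDemo {
    // Exercise offer()/add()/put() against a full bounded queue and
    // report the outcomes as a comma-separated string.
    static String tryInserts() {
        ArrayBlockingQueue<String> q = new ArrayBlockingQueue<>(2);
        boolean a = q.offer("a");        // true: room available
        boolean b = q.offer("b");        // true: queue now full
        boolean c = q.offer("c");        // false: full, returns immediately
        String addResult;
        try {
            q.add("c");                  // add() throws when the queue is full
            addResult = "added";
        } catch (IllegalStateException ex) {
            addResult = "threw";
        }
        try {
            q.take();                    // free one slot
            q.put("c");                  // put() blocks only while full; here it succeeds
        } catch (InterruptedException ex) {
            Thread.currentThread().interrupt();
        }
        return a + "," + b + "," + c + "," + addResult + "," + q.size();
    }

    public static void main(String[] args) {
        System.out.println(tryInserts()); // prints true,true,false,threw,2
    }
}
```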

To better illustrate what happens inside a BlockingQueue, here is sample code implementing a simple blocking queue of our own:

public class CustomBlockingQueue<T> {

    /** Queue capacity */
    private final int size;

    /** Backing list */
    private final LinkedList<T> list = new LinkedList<>();

    /** ReentrantLock guarding the list */
    private final Lock lock = new ReentrantLock();

    /** Condition: the queue is not full */
    private final Condition notFull = lock.newCondition();

    /** Condition: the queue is not empty */
    private final Condition notEmpty = lock.newCondition();

    public CustomBlockingQueue(final int size) {
        this.size = size;
    }

    /** Add an element; block while the queue is full */
    public void add(T value) throws InterruptedException {
        lock.lock();
        try {
            while (list.size() == size) {
                System.out.println("Queue is full -- producer waiting");
                notFull.await();
            }
            /* Append to the end of the list */
            list.add(value);
            /* Wake up a thread waiting on the notEmpty condition */
            notEmpty.signal();
        } finally {
            lock.unlock();
        }
    }

    /** Take an element; block while the queue is empty */
    public T take() throws InterruptedException {
        T value;
        lock.lock();
        try {
            while (list.size() == 0) {
                System.out.println("Queue is empty -- consumer waiting");
                notEmpty.await();
            }
            /* Remove and return the head element */
            value = list.removeFirst();
            /* Wake up a thread waiting on the notFull condition */
            notFull.signal();
        } finally {
            lock.unlock();
        }
        return value;
    }
}

Here is the calling code:

public class QueueTest {

    static final CustomBlockingQueue<Integer> queue =
            new CustomBlockingQueue<>(3);

    public static void main(String[] args) {
        /* Producer thread */
        new Thread(() -> {
            for (int i = 0; i < 100; i++) {
                try {
                    Thread.sleep(10);
                    System.out.println("Enqueuing: " + i);
                    queue.add(i);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }).start();

        /* Consumer thread */
        new Thread(() -> {
            for (int i = 0; i < 100; i++) {
                try {
                    Thread.sleep(10);
                    System.out.println("Dequeuing: " + i);
                    queue.take();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }).start();
    }
}

That completes a basic implementation of a simple blocking queue.

2.3 CopyOnWriteArrayList

In most common service scenarios, reads far outnumber writes. A copy-on-write list never modifies the underlying array in place: a write operation copies the array, applies the modification to the copy, and then swaps the copy in. Take CopyOnWriteArrayList as the example:

public class CopyOnWriteArrayList<E>
    implements List<E>, RandomAccess, Cloneable, java.io.Serializable {

As you can see, CopyOnWriteArrayList implements RandomAccess and Cloneable. First let's look at the get method:

private E get(Object[] a, int index) {
    return (E) a[index];
}

The get method takes no lock at all; it reads from whatever array snapshot is current, so a reader may briefly see stale data until a writer swaps in the new array (the list is weakly consistent rather than unsafe). Now let's look at the set method:

public E set(int index, E element) {
    final ReentrantLock lock = this.lock;
    lock.lock();
    try {
        Object[] elements = getArray();
        E oldValue = get(elements, index);

        if (oldValue != element) {
            int len = elements.length;
            Object[] newElements = Arrays.copyOf(elements, len);
            newElements[index] = element;
            setArray(newElements);
        } else {
            // Not quite a no-op; ensures volatile write semantics
            setArray(elements);
        }
        return oldValue;
    } finally {
        lock.unlock();
    }
}

The copy of the whole array happens after the lock is taken, and the old value is returned, which makes write operations thread-safe. This approach only suits workloads with many reads and few writes: microblogs, news feeds, and similar cases where reads vastly outnumber writes.
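The snapshot nature of copy-on-write reads can be seen through an iterator. In this sketch (illustrative names, not JDK source), an iterator created before a write keeps seeing the old array:

```java
import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowDemo {
    // An iterator captures the array snapshot at creation time, so writes
    // after that point are invisible to it and never throw
    // ConcurrentModificationException.
    static int snapshotCount() {
        CopyOnWriteArrayList<Integer> list = new CopyOnWriteArrayList<>();
        list.add(1);
        list.add(2);
        Iterator<Integer> it = list.iterator(); // snapshot of [1, 2]
        list.add(3);                            // copies the array; snapshot unchanged
        int seen = 0;
        while (it.hasNext()) {
            it.next();
            seen++;
        }
        return seen;                            // 2, even though list.size() is 3
    }

    public static void main(String[] args) {
        System.out.println(snapshotCount()); // prints 2
    }
}
```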

2.4 ConcurrentHashMap

HashMap is not thread-safe, while Hashtable is thread-safe; however, Hashtable achieves this by wrapping nearly every method in synchronized, which serializes all access. To do better, the JDK introduced a thread-safe map structure called ConcurrentHashMap.

Introduction to the JDK: However, even though all operations are thread-safe, retrieval operations do not entail locking, and there is not any support for locking the entire table in a way that prevents all access. This class is fully interoperable with Hashtable in programs that rely on its thread safety but not on its synchronization details.

In short, ConcurrentHashMap provides all the functionality of Hashtable, and every operation is thread-safe, yet retrievals are served without locking and the table is never locked as a whole.

ConcurrentHashMap changed significantly between JDK 1.7 and JDK 1.8, mirroring the changes in HashMap. The 1.7 HashMap is an array + linked-list structure, while the 1.8 HashMap is array + linked list + red-black tree: once a bucket's list reaches a depth of 8 (the default threshold), the list is automatically converted to a red-black tree, changing lookup within the bucket from O(n) to O(log n) and improving efficiency. The same idea is reflected in ConcurrentHashMap: the Java 8 ConcurrentHashMap structure is essentially the same as the Java 8 HashMap, but thread-safe.

How JDK 1.7 implements ConcurrentHashMap

In JDK 1.7, the data structure of ConcurrentHashMap consists of an array of Segments, each holding multiple buckets of HashEntry objects that store the map's key/value pairs; each bucket is a linked list of HashEntry objects. A Segment is itself a lock (it inherits from ReentrantLock), so any operation that needs locking locks only its own Segment. As long as each Segment is thread-safe, the whole map is thread-safe.

ConcurrentHashMap has 16 Segments by default, so in theory up to 16 threads can write concurrently, as long as their operations land in different Segments. The Segment count can be set to another value at construction time, but it cannot change once initialized. Internally, each Segment looks much like a small HashMap.

Differences between the JDK 1.7 and JDK 1.8 ConcurrentHashMap

  • Data structure: the Segment layer is removed and replaced with an array plus linked lists and red-black trees.
  • Thread-safety mechanism: JDK 1.7 uses segment locking (Segments inherit from ReentrantLock); JDK 1.8 uses CAS + synchronized to ensure thread safety.
  • Lock granularity: from the Segment containing the data down to the individual array element (the head Node of a bucket).
  • A possible downside of the simplified node-locating hash: more hash collisions per bucket. Hence, when a bucket's list grows past 8 nodes, it is converted to a red-black tree for storage.
  • Query time complexity within a bucket: from O(n) for a list traversal to O(log n) for a red-black tree.

The JDK 1.8 declaration:

public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
    implements ConcurrentMap<K,V>, Serializable {

Code analysis

First, the constructor derives sizeCtl, the initial table capacity, from the initialCapacity argument: if initialCapacity is 10, sizeCtl becomes 16; if initialCapacity is 11, sizeCtl becomes 32.

public ConcurrentHashMap(int initialCapacity) {
    if (initialCapacity < 0)
        throw new IllegalArgumentException();
    int cap = ((initialCapacity >= (MAXIMUM_CAPACITY >>> 1)) ?
               MAXIMUM_CAPACITY :
               tableSizeFor(initialCapacity + (initialCapacity >>> 1) + 1));
    this.sizeCtl = cap;
}
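To see where those numbers come from, here is a sketch that reimplements the capacity computation for illustration (tableSizeFor rounds up to the next power of two; the real JDK version also clamps to MAXIMUM_CAPACITY, omitted here):

```java
public class CapacitySketch {
    // Round c up to the next power of two by smearing the highest set bit
    // down through all lower positions, then adding one.
    static int tableSizeFor(int c) {
        int n = c - 1;
        n |= n >>> 1; n |= n >>> 2; n |= n >>> 4;
        n |= n >>> 8; n |= n >>> 16;
        return (n < 0) ? 1 : n + 1;
    }

    // The single-argument constructor computes
    // tableSizeFor(1.5 * initialCapacity + 1).
    static int sizeCtlFor(int initialCapacity) {
        return tableSizeFor(initialCapacity + (initialCapacity >>> 1) + 1);
    }

    public static void main(String[] args) {
        System.out.println(sizeCtlFor(10)); // 16: 10 + 5 + 1 = 16
        System.out.println(sizeCtlFor(11)); // 32: 11 + 5 + 1 = 17, rounds up to 32
    }
}
```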
put

Now let's briefly examine the put operation, whose work is done in putVal:

final V putVal(K key, V value, boolean onlyIfAbsent) {
    if (key == null || value == null) throw new NullPointerException();
    int hash = spread(key.hashCode());
    int binCount = 0;
    for (Node<K,V>[] tab = table;;) {
        Node<K,V> f; int n, i, fh;
        // If the array is empty, initialize it
        if (tab == null || (n = tab.length) == 0)
            tab = initTable();
        // If this bucket is empty, try a lock-free CAS insert; a failed
        // CAS means another thread raced us, so loop and retry
        else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
            if (casTabAt(tab, i, null,
                         new Node<K,V>(hash, key, value, null)))
                break;                 // no lock when adding to empty bin
        }
        // A resize is in progress: help with the data migration
        else if ((fh = f.hash) == MOVED)
            tab = helpTransfer(tab, f);
        else {
            // f is the non-null head node of the bucket; lock it
            V oldVal = null;
            synchronized (f) {
                if (tabAt(tab, i) == f) {
                    // A non-negative head hash means a linked list
                    if (fh >= 0) {
                        binCount = 1;  // track the list length
                        for (Node<K,V> e = f;; ++binCount) {
                            K ek;
                            // Found an equal key: maybe overwrite, then stop
                            if (e.hash == hash &&
                                ((ek = e.key) == key ||
                                 (ek != null && key.equals(ek)))) {
                                oldVal = e.val;
                                if (!onlyIfAbsent)
                                    e.val = value;
                                break;
                            }
                            // Reached the tail: append the new node
                            Node<K,V> pred = e;
                            if ((e = e.next) == null) {
                                pred.next = new Node<K,V>(hash, key,
                                                          value, null);
                                break;
                            }
                        }
                    }
                    // The bucket is a red-black tree
                    else if (f instanceof TreeBin) {
                        Node<K,V> p;
                        binCount = 2;
                        if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
                                                              value)) != null) {
                            oldVal = p.val;
                            if (!onlyIfAbsent)
                                p.val = value;
                        }
                    }
                }
            }
            if (binCount != 0) {
                // Convert the list to a tree once it reaches the threshold
                if (binCount >= TREEIFY_THRESHOLD)
                    treeifyBin(tab, i);
                if (oldVal != null)
                    return oldVal;
                break;
            }
        }
    }
    addCount(1L, binCount);
    return null;
}

From the put operation, we can summarize the main points of ConcurrentHashMap:

  1. If the value is inserted into a linked list, the new node goes at the tail of the list.
  2. ConcurrentHashMap switches a bucket's storage structure (linked list or red-black tree) based on the node count binCount. The threshold for converting a list to a red-black tree is the same as in the 1.8 HashMap: 8.
  3. If the length of the current array tab is below MIN_TREEIFY_CAPACITY (64 by default), the table is resized instead of treeified.

The code that converts a bucket to a red-black tree:

private final void treeifyBin(Node<K,V>[] tab, int index) {
    Node<K,V> b; int n, sc;
    if (tab != null) {
        // If the table length is below 64, double the capacity instead
        if ((n = tab.length) < MIN_TREEIFY_CAPACITY)
            tryPresize(n << 1);
        else if ((b = tabAt(tab, index)) != null && b.hash >= 0) {
            synchronized (b) {
                if (tabAt(tab, index) == b) {
                    TreeNode<K,V> hd = null, tl = null;
                    // Wrap each Node in a TreeNode
                    for (Node<K,V> e = b; e != null; e = e.next) {
                        TreeNode<K,V> p =
                            new TreeNode<K,V>(e.hash, e.key, e.val,
                                              null, null);
                        if ((p.prev = tl) == null)
                            hd = p;
                        else
                            tl.next = p;
                        tl = p;
                    }
                    // Install a TreeBin built from the TreeNode list
                    setTabAt(tab, index, new TreeBin<K,V>(hd));
                }
            }
        }
    }
}
  4. For a put during a resize: if the bucket has already been migrated, its head has been replaced by a forwarding (FWD) node and the writer thread joins in to help with the expansion; if the bucket has not been migrated yet, the writer locks the bucket's head node, so writes to it block until that bucket's migration completes.
Initialization: initTable
private final Node<K,V>[] initTable() {
    Node<K,V>[] tab; int sc;
    // Loop until the empty table has been initialized
    while ((tab = table) == null || tab.length == 0) {
        // sizeCtl < 0 means another thread is already initializing;
        // yield and let it finish
        if ((sc = sizeCtl) < 0)
            Thread.yield(); // lost initialization race; just spin
        // CAS sizeCtl to -1 to claim the right to initialize
        else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
            try {
                if ((tab = table) == null || tab.length == 0) {
                    // Array length: 16, or the capacity given at construction
                    int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
                    @SuppressWarnings("unchecked")
                    Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
                    // Publish the array via the volatile field table
                    table = tab = nt;
                    // sc = 0.75 * n, the next resize threshold
                    sc = n - (n >>> 2);
                }
            } finally {
                // Store the threshold into sizeCtl, completing initialization
                sizeCtl = sc;
            }
            break;
        }
    }
    return tab;
}
Capacity: tryPresize
private final void tryPresize(int size) {
    // c: 1.5 * size + 1, rounded up to the nearest power of two
    int c = (size >= (MAXIMUM_CAPACITY >>> 1)) ? MAXIMUM_CAPACITY :
        tableSizeFor(size + (size >>> 1) + 1);
    int sc;
    while ((sc = sizeCtl) >= 0) {
        Node<K,V>[] tab = table; int n;
        // This branch mirrors the array-initialization code shown above
        if (tab == null || (n = tab.length) == 0) {
            n = (sc > c) ? sc : c;
            if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
                try {
                    if (table == tab) {
                        @SuppressWarnings("unchecked")
                        Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
                        table = nt;
                        sc = n - (n >>> 2); // 0.75 * n
                    }
                } finally {
                    sizeCtl = sc;
                }
            }
        }
        else if (c <= sc || n >= MAXIMUM_CAPACITY)
            break;
        else if (tab == table) {
            int rs = resizeStamp(n);
            if (sc < 0) {
                Node<K,V>[] nt;
                if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
                    sc == rs + MAX_RESIZERS || (nt = nextTable) == null ||
                    transferIndex <= 0)
                    break;
                // CAS sizeCtl + 1, then help with the ongoing migration
                if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1))
                    transfer(tab, nt);
            }
            // Set sizeCtl to (rs << RESIZE_STAMP_SHIFT) + 2 and start
            // the migration by calling transfer(tab, null)
            else if (U.compareAndSwapInt(this, SIZECTL, sc,
                                         (rs << RESIZE_STAMP_SHIFT) + 2))
                transfer(tab, null);
        }
    }
}

The core of this method is the manipulation of sizeCtl: it is first set to a negative stamp and transfer(tab, null) starts the migration; subsequent iterations increment sizeCtl by one via CAS and call transfer(tab, nt) to help.

Data migration: Transfer
private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) {
    int n = tab.length, stride;
    if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE)
        stride = MIN_TRANSFER_STRIDE; // subdivide range
    if (nextTab == null) {            // initiating
        try {
            @SuppressWarnings("unchecked")
            Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1];
            nextTab = nt;
        } catch (Throwable ex) {      // try to cope with OOME
            sizeCtl = Integer.MAX_VALUE;
            return;
        }
        nextTable = nextTab;
        transferIndex = n;
    }
    int nextn = nextTab.length;
    ForwardingNode<K,V> fwd = new ForwardingNode<K,V>(nextTab);
    boolean advance = true;
    boolean finishing = false; // to ensure sweep before committing nextTab
    for (int i = 0, bound = 0;;) {
        Node<K,V> f; int fh;
        while (advance) {
            int nextIndex, nextBound;
            if (--i >= bound || finishing)
                advance = false;
            else if ((nextIndex = transferIndex) <= 0) {
                i = -1;
                advance = false;
            }
            else if (U.compareAndSwapInt
                     (this, TRANSFERINDEX, nextIndex,
                      nextBound = (nextIndex > stride ?
                                   nextIndex - stride : 0))) {
                bound = nextBound;
                i = nextIndex - 1;
                advance = false;
            }
        }
        if (i < 0 || i >= n || i + n >= nextn) {
            int sc;
            if (finishing) {
                nextTable = null;
                table = nextTab;
                sizeCtl = (n << 1) - (n >>> 1);
                return;
            }
            if (U.compareAndSwapInt(this, SIZECTL, sc = sizeCtl, sc - 1)) {
                if ((sc - 2) != resizeStamp(n) << RESIZE_STAMP_SHIFT)
                    return;
                finishing = advance = true;
                i = n; // recheck before commit
            }
        }
        else if ((f = tabAt(tab, i)) == null)
            advance = casTabAt(tab, i, null, fwd);
        else if ((fh = f.hash) == MOVED)
            advance = true; // already processed
        else {
            synchronized (f) {
                if (tabAt(tab, i) == f) {
                    Node<K,V> ln, hn;
                    if (fh >= 0) {
                        int runBit = fh & n;
                        Node<K,V> lastRun = f;
                        for (Node<K,V> p = f.next; p != null; p = p.next) {
                            int b = p.hash & n;
                            if (b != runBit) {
                                runBit = b;
                                lastRun = p;
                            }
                        }
                        if (runBit == 0) {
                            ln = lastRun;
                            hn = null;
                        }
                        else {
                            hn = lastRun;
                            ln = null;
                        }
                        for (Node<K,V> p = f; p != lastRun; p = p.next) {
                            int ph = p.hash; K pk = p.key; V pv = p.val;
                            if ((ph & n) == 0)
                                ln = new Node<K,V>(ph, pk, pv, ln);
                            else
                                hn = new Node<K,V>(ph, pk, pv, hn);
                        }
                        setTabAt(nextTab, i, ln);
                        setTabAt(nextTab, i + n, hn);
                        setTabAt(tab, i, fwd);
                        advance = true;
                    }
                    else if (f instanceof TreeBin) {
                        TreeBin<K,V> t = (TreeBin<K,V>)f;
                        TreeNode<K,V> lo = null, loTail = null;
                        TreeNode<K,V> hi = null, hiTail = null;
                        int lc = 0, hc = 0;
                        for (Node<K,V> e = t.first; e != null; e = e.next) {
                            int h = e.hash;
                            TreeNode<K,V> p = new TreeNode<K,V>
                                (h, e.key, e.val, null, null);
                            if ((h & n) == 0) {
                                if ((p.prev = loTail) == null)
                                    lo = p;
                                else
                                    loTail.next = p;
                                loTail = p;
                                ++lc;
                            }
                            else {
                                if ((p.prev = hiTail) == null)
                                    hi = p;
                                else
                                    hiTail.next = p;
                                hiTail = p;
                                ++hc;
                            }
                        }
                        ln = (lc <= UNTREEIFY_THRESHOLD) ? untreeify(lo) :
                            (hc != 0) ? new TreeBin<K,V>(lo) : t;
                        hn = (hc <= UNTREEIFY_THRESHOLD) ? untreeify(hi) :
                            (lc != 0) ? new TreeBin<K,V>(hi) : t;
                        setTabAt(nextTab, i, ln);
                        setTabAt(nextTab, i + n, hn);
                        setTabAt(tab, i, fwd);
                        advance = true;
                    }
                }
            }
        }
    }
}
Data acquisition: get
public V get(Object key) {
    Node<K,V>[] tab; Node<K,V> e, p; int n, eh; K ek;
    // Compute the spread hash
    int h = spread(key.hashCode());
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (e = tabAt(tab, (n - 1) & h)) != null) {
        // If the head node of the bucket matches, return it directly
        if ((eh = e.hash) == h) {
            if ((ek = e.key) == key || (ek != null && key.equals(ek)))
                return e.val;
        }
        // A negative hash marks a ForwardingNode (resize in progress) or
        // a TreeBin; delegate to that node's own find method
        else if (eh < 0)
            return (p = e.find(h, key)) != null ? p.val : null;
        // Neither the head node nor a special node: walk the linked list
        while ((e = e.next) != null) {
            if (e.hash == h &&
                ((ek = e.key) == key || (ek != null && key.equals(ek))))
                return e.val;
        }
    }
    return null;
}

The flow of ConcurrentHashMap's get operation is simple and clear, and can be described in three steps. No locking is required:

  1. Compute the spread hash and locate the bucket in the array: (n - 1) & h. If that slot is null, return null directly; if the node there is exactly the one we need, return its value.

  2. If the node's hash is negative, either a resize is in progress (the node is a ForwardingNode) or the bucket is a TreeBin; in both cases the node's own find method is called to search for the key, and the match is returned if found.

  3. Otherwise the bucket is a linked list: traverse it downward and return the matching node's value, or null if nothing matches.
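A usage sketch (illustrative names, not JDK source) showing that readers need no locking even while a writer updates the same key; merge() performs an atomic per-key update:

```java
import java.util.concurrent.ConcurrentHashMap;

public class ChmGetDemo {
    // One writer increments a counter 1000 times; the reader polls get()
    // with no locking until it observes the final value.
    static int readWhileWriting() {
        final ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<>();
        map.put("hits", 0);
        Thread writer = new Thread(() -> {
            for (int i = 0; i < 1000; i++)
                map.merge("hits", 1, Integer::sum); // atomic read-modify-write
        });
        writer.start();
        // get() never blocks; spin until the writer's last update is visible
        while (map.get("hits") < 1000) {
            Thread.yield();
        }
        try {
            writer.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return map.get("hits");
    }

    public static void main(String[] args) {
        System.out.println(readWhileWriting()); // prints 1000
    }
}
```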

Summary and Reflection

The ConcurrentHashMap data structure is similar to that of HashMap; ConcurrentHashMap only adds synchronization to control concurrency. From JDK 1.7's ReentrantLock + Segment + HashEntry to JDK 1.8's synchronized + CAS + Node + red-black tree, we can summarize our thinking as follows:

  1. The JDK 1.8 implementation reduces lock granularity. The JDK 1.7 lock is per Segment, each of which contains multiple buckets of entries, whereas JDK 1.8 locks a single bucket's head node.

  2. The JDK 1.8 data structure is simpler, making the code clearer and smoother: since synchronized is used for synchronization, the Segment locking concept and its supporting data structures are no longer needed. The finer granularity does, however, increase the implementation's complexity.

  3. JDK 1.8 uses red-black trees to optimize long linked lists. Traversing a long list is slow, while red-black tree traversal is fast, so converting lists past a certain threshold forms an effective combination.

  4. Why does JDK 1.8 use synchronized instead of ReentrantLock?

    1. Because of the reduced granularity, synchronized is in no way inferior to ReentrantLock for low-granularity locking. With coarse-grained locking, ReentrantLock's Condition gives finer control over the boundaries of each low-granularity wait, which is more flexible; at fine granularity, that advantage of Condition disappears.
    2. The JVM development team never gave up on synchronized, and JVM-level optimization of synchronized has more headroom; using a built-in keyword is also more natural than an API.
    3. Under heavy data operations, the API-based ReentrantLock consumes more of the JVM's memory. Memory pressure is not usually the bottleneck, but it tips the choice.

2.5 ConcurrentSkipListMap