There are two main reasons for choosing ConcurrentHashMap to open this Java series. First, a series should start from the most basic JavaSE collections, and as the thread-safe counterpart of HashMap, ConcurrentHashMap covers essentially all of HashMap's knowledge points. Second, I happened to have just spent some time studying ConcurrentHashMap, so I wrote it up while it was still fresh.

Introduction to ConcurrentHashMap

ConcurrentHashMap class diagram

I'll start with a class diagram of ConcurrentHashMap (I no longer remember where I borrowed it from; my apologies to the original author). I haven't formed the habit of reading class diagrams myself, but including one always makes an article look more professional.

Implementation principle of ConcurrentHashMap

ConcurrentHashMap is essentially a thread-safe HashMap, so its implementation largely mirrors that of HashMap in Java 1.8.

ConcurrentHashMap structure

ConcurrentHashMap algorithm

What makes HashMap stand out is its use of a hash algorithm. The hash algorithm maps a key to an int hashcode, perturbs that hashcode with a high-bits operation (new in 1.8), and then takes it modulo the array size, producing an index smaller than the Map's array length; the key-value pair is stored at that array slot.

Why do it this way? When a HashMap holds few entries, this calculation spreads key-value pairs evenly across the array and avoids generating linked lists where possible. Lookup in a linked list is O(n) while lookup in an array slot is O(1), so to keep HashMap efficient, lists should not form while the data volume is small. The same reasoning explains why a bucket is converted from a linked list to a red-black tree once the list grows too long: a red-black tree's lookup is O(log n), far better than a linked list when the data volume is large.

Now to today's protagonist. Compared with HashMap, ConcurrentHashMap's main job is maintaining thread safety. Multithreading is one of the areas of Java worth digging into deeply, and studying ConcurrentHashMap helps with that (how much, I can't say). Among other techniques, ConcurrentHashMap relies on CAS to ensure thread safety; only a brief description is given here. CAS stands for compare and swap. The JDK's CAS primitive takes four parameters: the object to modify, the memory offset of the field, the expected value, and the new value. The current value in memory is compared with the expected value: if they are equal, the new value is written; if not, nothing changes. Performing this compare-then-write as a single atomic step is what guarantees thread safety. A thorough treatment requires the multithreading and JVM topics, so I won't go deeper here (truthfully, I haven't studied it exhaustively myself).

The red-black tree algorithm itself will not be expanded on here either (I've forgotten most of it), only the structures built on top of it.
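To make the compare-and-swap idea concrete, here is a minimal sketch using the JDK's AtomicInteger, whose compareAndSet compiles down to the same CAS instruction that ConcurrentHashMap reaches through Unsafe. The class name and starting value are arbitrary choices for illustration.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(10);

        // expected value matches the memory value: the swap succeeds
        boolean swapped = counter.compareAndSet(10, 11);
        if (!(swapped && counter.get() == 11)) throw new AssertionError();

        // expected value no longer matches: the swap fails, nothing changes
        boolean swappedAgain = counter.compareAndSet(10, 12);
        if (swappedAgain || counter.get() != 11) throw new AssertionError();

        System.out.println("counter = " + counter.get()); // counter = 11
    }
}
```

A thread that loses the race simply sees the CAS fail and retries with a freshly read value, which is exactly the loop pattern you will see in the source below.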

ConcurrentHashMap source code parsing

ConcurrentHashMap Important parameters and inner classes

Here are the important constants for ConcurrentHashMap.

    // the maximum table capacity
    private static final int MAXIMUM_CAPACITY = 1 << 30;
    // the default capacity is 16
    private static final int DEFAULT_CAPACITY = 16;
    // the default concurrency level is 16 (retained for compatibility with older versions)
    private static final int DEFAULT_CONCURRENCY_LEVEL = 16;
    // the load factor is 0.75
    private static final float LOAD_FACTOR = 0.75f;
    // a list of this length is converted to a red-black tree
    static final int TREEIFY_THRESHOLD = 8;
    // a tree of this size is converted back to a list
    static final int UNTREEIFY_THRESHOLD = 6;
    // the minimum table capacity before a list may be treeified
    static final int MIN_TREEIFY_CAPACITY = 64;
    // the minimum number of hash buckets each resizing thread processes
    private static final int MIN_TRANSFER_STRIDE = 16;
    // the number of bits used for the generation stamp in sizeCtl
    private static int RESIZE_STAMP_BITS = 16;
    // the maximum number of threads that can help with a resize
    private static final int MAX_RESIZERS = (1 << (32 - RESIZE_STAMP_BITS)) - 1;
    // the bit shift for recording the generated stamp in sizeCtl
    private static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;

Here are the important variables for ConcurrentHashMap.

    // the hash bucket array that backs the Map
    transient volatile Node<K,V>[] table;
    // the hash bucket array created during expansion; note the transient keyword,
    // which keeps these fields out of serialization
    private transient volatile Node<K,V>[] nextTable;
    // the base counter value, updated via CAS
    private transient volatile long baseCount;
    // sizeCtl < 0: -1 while the table is being initialized,
    //              -(1 + nThreads) while nThreads threads are resizing it
    // sizeCtl > 0: the capacity to use for a pending initialization, or, after
    //              initialization/expansion, the next size threshold for resizing
    // sizeCtl == 0: the default state
    private transient volatile int sizeCtl;
    // the index of the next hash bucket to claim during a transfer
    private transient volatile int transferIndex;

Here are the important inner classes for ConcurrentHashMap.

    static class Node<K,V> implements Map.Entry<K,V> {
        final int hash;
        final K key;
        // both val and the next node are volatile to ensure visibility across threads
        volatile V val;
        volatile Node<K,V> next;

        Node(int hash, K key, V val, Node<K,V> next) {
            this.hash = hash;
            this.key = key;
            this.val = val;
            this.next = next;
        }

        public final K getKey()       { return key; }
        public final V getValue()     { return val; }
        public final int hashCode()   { return key.hashCode() ^ val.hashCode(); }
        public final String toString(){ return key + "=" + val; }
        // for thread safety, setValue may not be called and always throws
        public final V setValue(V value) {
            throw new UnsupportedOperationException();
        }

        public final boolean equals(Object o) {
            Object k, v, u; Map.Entry<?,?> e;
            return ((o instanceof Map.Entry) &&
                    (k = (e = (Map.Entry<?,?>)o).getKey()) != null &&
                    (v = e.getValue()) != null &&
                    (k == key || k.equals(key)) &&
                    (v == (u = val) || v.equals(u)));
        }

        // supports map.get(); overridden in subclasses
        Node<K,V> find(int h, Object k) {
            Node<K,V> e = this;
            if (k != null) {
                do {
                    K ek;
                    if (e.hash == h &&
                        ((ek = e.key) == k || (ek != null && k.equals(ek))))
                        return e;
                } while ((e = e.next) != null);
            }
            return null;
        }
    }
    // an inner class used by the expansion methods to mark hash buckets
    // that have already been processed
    static final class ForwardingNode<K,V> extends Node<K,V> {
        final Node<K,V>[] nextTable;

        // the constructor: a ForwardingNode's hash is MOVED and its
        // nextTable points to the new table
        ForwardingNode(Node<K,V>[] tab) {
            super(MOVED, null, null, null);
            this.nextTable = tab;
        }

        // find is redirected to the new table
        Node<K,V> find(int h, Object k) {
            // the labeled loop avoids arbitrarily deep recursion on forwarding nodes
            outer: for (Node<K,V>[] tab = nextTable;;) {
                Node<K,V> e; int n;
                if (k == null || tab == null || (n = tab.length) == 0 ||
                    (e = tabAt(tab, (n - 1) & h)) == null)
                    return null;
                for (;;) {
                    int eh; K ek;
                    if ((eh = e.hash) == h &&
                        ((ek = e.key) == k || (ek != null && k.equals(ek))))
                        return e;
                    if (eh < 0) {
                        // another ForwardingNode is encountered; restart on its
                        // table, which is equivalent to a recursive operation
                        if (e instanceof ForwardingNode) {
                            tab = ((ForwardingNode<K,V>)e).nextTable;
                            continue outer;
                        }
                        else
                            return e.find(h, k);
                    }
                    if ((e = e.next) == null)
                        return null;
                }
            }
        }
    }
    // the static inner class for tree nodes, used inside TreeBins
    static final class TreeNode<K,V> extends Node<K,V> {
        TreeNode<K,V> parent;  // red-black tree links
        TreeNode<K,V> left;
        TreeNode<K,V> right;
        TreeNode<K,V> prev;    // needed to unlink next upon deletion
        boolean red;

        TreeNode(int hash, K key, V val, Node<K,V> next,
                 TreeNode<K,V> parent) {
            super(hash, key, val, next);
            this.parent = parent;
        }

        Node<K,V> find(int h, Object k) {
            return findTreeNode(h, k, null);
        }

        // returns the TreeNode for the given key (or null if not found),
        // searching from this node downward
        final TreeNode<K,V> findTreeNode(int h, Object k, Class<?> kc) {
            if (k != null) {
                TreeNode<K,V> p = this;
                do {
                    int ph, dir; K pk; TreeNode<K,V> q;
                    TreeNode<K,V> pl = p.left, pr = p.right;
                    if ((ph = p.hash) > h)
                        p = pl;
                    else if (ph < h)
                        p = pr;
                    else if ((pk = p.key) == k || (pk != null && k.equals(pk)))
                        return p;
                    else if (pl == null)
                        p = pr;
                    else if (pr == null)
                        p = pl;
                    else if ((kc != null ||
                              (kc = comparableClassFor(k)) != null) &&
                             (dir = compareComparables(kc, k, pk)) != 0)
                        p = (dir < 0) ? pl : pr;
                    // otherwise recursively search the right subtree
                    else if ((q = pr.findTreeNode(h, k, kc)) != null)
                        return q;
                    else
                        p = pl;
                } while (p != null);
            }
            return null;
        }
    }
    // holds the root node of the red-black tree and maintains a
    // read/write lock over the tree
    static final class TreeBin<K,V> extends Node<K,V> {
        TreeNode<K,V> root;
        volatile TreeNode<K,V> first;
        volatile Thread waiter;
        volatile int lockState;
        // values for lockState
        static final int WRITER = 1; // set while holding the write lock
        static final int WAITER = 2; // set while waiting for the write lock
        static final int READER = 4; // increment value for taking a read lock

        // tie-breaking rule for insertion order when hash codes are equal
        // and the keys are not mutually comparable
        static int tieBreakOrder(Object a, Object b) {
            int d;
            if (a == null || b == null ||
                (d = a.getClass().getName().
                 compareTo(b.getClass().getName())) == 0)
                d = (System.identityHashCode(a) <= System.identityHashCode(b) ?
                     -1 : 1);
            return d;
        }

        // builds a red-black tree from the linked list headed by b
        TreeBin(TreeNode<K,V> b) {
            super(TREEBIN, null, null, null);
            this.first = b;
            TreeNode<K,V> r = null;
            for (TreeNode<K,V> x = b, next; x != null; x = next) {
                next = (TreeNode<K,V>)x.next;
                x.left = x.right = null;
                if (r == null) {
                    x.parent = null;
                    x.red = false;
                    r = x;
                }
                else {
                    K k = x.key;
                    int h = x.hash;
                    Class<?> kc = null;
                    for (TreeNode<K,V> p = r;;) {
                        int dir, ph;
                        K pk = p.key;
                        if ((ph = p.hash) > h)
                            dir = -1;
                        else if (ph < h)
                            dir = 1;
                        else if ((kc == null &&
                                  (kc = comparableClassFor(k)) == null) ||
                                 (dir = compareComparables(kc, k, pk)) == 0)
                            dir = tieBreakOrder(k, pk);
                        TreeNode<K,V> xp = p;
                        if ((p = (dir <= 0) ? p.left : p.right) == null) {
                            x.parent = xp;
                            if (dir <= 0)
                                xp.left = x;
                            else
                                xp.right = x;
                            r = balanceInsertion(r, x);
                            break;
                        }
                    }
                }
            }
            this.root = r;
            assert checkInvariants(root);
        }

        // acquires the write lock on the root
        private final void lockRoot() {
            if (!U.compareAndSwapInt(this, LOCKSTATE, 0, WRITER))
                contendedLock(); // offload to separate method
        }

        // releases the write lock on the root
        private final void unlockRoot() {
            lockState = 0;
        }

        // since ConcurrentHashMap's methods lock the head node before writing,
        // this lock never sees write/write contention, only read/write contention
        private final void contendedLock() {
            boolean waiting = false;
            for (int s;;) {
                // no thread holds a read lock: try to acquire the write lock
                if (((s = lockState) & ~WAITER) == 0) {
                    if (U.compareAndSwapInt(this, LOCKSTATE, s, WRITER)) {
                        // clear the waiter (the waiting thread is this thread)
                        if (waiting)
                            waiter = null;
                        return;
                    }
                }
                // a reader holds the lock and no waiter is registered yet:
                // try to register this thread as the waiter
                else if ((s & WAITER) == 0) {
                    if (U.compareAndSwapInt(this, LOCKSTATE, s, s | WAITER)) {
                        waiting = true;
                        waiter = Thread.currentThread();
                    }
                }
                // a reader holds the lock and this thread is the registered
                // waiter: block until woken
                else if (waiting)
                    LockSupport.park(this);
            }
        }

        // lookup: readers never block
        final Node<K,V> find(int h, Object k) {
            if (k != null) {
                for (Node<K,V> e = first; e != null; ) {
                    int s; K ek;
                    // while the write or waiter bits are set,
                    // fall back to a linked-list traversal
                    if (((s = lockState) & (WAITER|WRITER)) != 0) {
                        if (e.hash == h &&
                            ((ek = e.key) == k || (ek != null && k.equals(ek))))
                            return e;
                        e = e.next;
                    }
                    // otherwise take a read lock and search the red-black tree
                    else if (U.compareAndSwapInt(this, LOCKSTATE, s,
                                                 s + READER)) {
                        TreeNode<K,V> r, p;
                        try {
                            p = ((r = root) == null ? null :
                                 r.findTreeNode(h, k, null));
                        } finally {
                            Thread w;
                            // when releasing the last read lock, wake the
                            // thread waiting for the write lock
                            if (U.getAndAddInt(this, LOCKSTATE, -READER) ==
                                (READER|WAITER) && (w = waiter) != null)
                                LockSupport.unpark(w);
                        }
                        return p;
                    }
                }
            }
            return null;
        }

        // finds or adds a node
        final TreeNode<K,V> putTreeVal(int h, K k, V v) {
            Class<?> kc = null;
            boolean searched = false;
            for (TreeNode<K,V> p = root;;) {
                int dir, ph; K pk;
                if (p == null) {
                    first = root = new TreeNode<K,V>(h, k, v, null, null);
                    break;
                }
                else if ((ph = p.hash) > h)
                    dir = -1;
                else if (ph < h)
                    dir = 1;
                else if ((pk = p.key) == k || (pk != null && k.equals(pk)))
                    return p;
                else if ((kc == null &&
                          (kc = comparableClassFor(k)) == null) ||
                         (dir = compareComparables(kc, k, pk)) == 0) {
                    if (!searched) {
                        TreeNode<K,V> q, ch;
                        searched = true;
                        if (((ch = p.left) != null &&
                             (q = ch.findTreeNode(h, k, kc)) != null) ||
                            ((ch = p.right) != null &&
                             (q = ch.findTreeNode(h, k, kc)) != null))
                            return q;
                    }
                    dir = tieBreakOrder(k, pk);
                }

                TreeNode<K,V> xp = p;
                if ((p = (dir <= 0) ? p.left : p.right) == null) {
                    TreeNode<K,V> x, f = first;
                    first = x = new TreeNode<K,V>(h, k, v, f, xp);
                    if (f != null)
                        f.prev = x;
                    if (dir <= 0)
                        xp.left = x;
                    else
                        xp.right = x;
                    // if the parent node is black, a red node can be
                    // attached without locking
                    if (!xp.red)
                        x.red = true;
                    // otherwise the insertion may require rebalancing
                    // rotations, so take the write lock
                    else {
                        lockRoot();
                        try {
                            root = balanceInsertion(root, x);
                        } finally {
                            unlockRoot();
                        }
                    }
                    break;
                }
            }
            assert checkInvariants(root);
            return null;
        }

        // removes the given node; returns true if the bin is now too small
        // and should be converted back to a linked list
        final boolean removeTreeNode(TreeNode<K,V> p) {
            TreeNode<K,V> next = (TreeNode<K,V>)p.next;
            TreeNode<K,V> pred = p.prev;  // unlink traversal pointers
            TreeNode<K,V> r, rl;
            if (pred == null)
                first = next;
            else
                pred.next = next;
            if (next != null)
                next.prev = pred;
            if (first == null) {
                root = null;
                return true;
            }
            // if the red-black tree is too small, return true so the caller
            // converts it back to a linked list
            if ((r = root) == null || r.right == null ||
                (rl = r.left) == null || rl.left == null)
                return true;
            // otherwise take the write lock and remove the node from the tree
            lockRoot();
            try {
                TreeNode<K,V> replacement;
                TreeNode<K,V> pl = p.left;
                TreeNode<K,V> pr = p.right;
                if (pl != null && pr != null) {
                    TreeNode<K,V> s = pr, sl;
                    while ((sl = s.left) != null) // find successor
                        s = sl;
                    boolean c = s.red; s.red = p.red; p.red = c; // swap colors
                    TreeNode<K,V> sr = s.right;
                    TreeNode<K,V> pp = p.parent;
                    if (s == pr) { // p was s's direct parent
                        p.parent = s;
                        s.right = p;
                    }
                    else {
                        TreeNode<K,V> sp = s.parent;
                        if ((p.parent = sp) != null) {
                            if (s == sp.left)
                                sp.left = p;
                            else
                                sp.right = p;
                        }
                        if ((s.right = pr) != null)
                            pr.parent = s;
                    }
                    p.left = null;
                    if ((p.right = sr) != null)
                        sr.parent = p;
                    if ((s.left = pl) != null)
                        pl.parent = s;
                    if ((s.parent = pp) == null)
                        r = s;
                    else if (p == pp.left)
                        pp.left = s;
                    else
                        pp.right = s;
                    if (sr != null)
                        replacement = sr;
                    else
                        replacement = p;
                }
                else if (pl != null)
                    replacement = pl;
                else if (pr != null)
                    replacement = pr;
                else
                    replacement = p;
                if (replacement != p) {
                    TreeNode<K,V> pp = replacement.parent = p.parent;
                    if (pp == null)
                        r = replacement;
                    else if (p == pp.left)
                        pp.left = replacement;
                    else
                        pp.right = replacement;
                    p.left = p.right = p.parent = null;
                }
                root = (p.red) ? r : balanceDeletion(r, replacement);
                if (p == replacement) {  // detach pointers
                    TreeNode<K,V> pp;
                    if ((pp = p.parent) != null) {
                        if (p == pp.left)
                            pp.left = null;
                        else if (p == pp.right)
                            pp.right = null;
                        p.parent = null;
                    }
                }
            } finally {
                unlockRoot();
            }
            assert checkInvariants(root);
            return false;
        }

        // ... tree rotation and balancing methods omitted ...
    }
    // a placeholder node used in computeIfAbsent and compute
    static final class ReservationNode<K,V> extends Node<K,V> {
        ReservationNode() {
            super(RESERVED, null, null, null);
        }

        Node<K,V> find(int h, Object k) {
            return null;
        }
    }

That concludes the basic parameters and inner classes. As you can see, the nodes in the Map are built from static inner classes, which keeps them easy to use. ConcurrentHashMap has a great many methods; the following covers the ones that differ most from the HashMap implementation in order to guarantee safety under multithreading. First, the initialization method.

    private final Node<K,V>[] initTable() {
        Node<K,V>[] tab; int sc;
        while ((tab = table) == null || tab.length == 0) {
            // a negative sizeCtl means another thread is already initializing
            // the table; yield the CPU to it
            if ((sc = sizeCtl) < 0)
                Thread.yield();
            // otherwise CAS sizeCtl to -1 and initialize the table
            else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
                try {
                    if ((tab = table) == null || tab.length == 0) {
                        int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
                        @SuppressWarnings("unchecked")
                        Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
                        table = tab = nt;
                        sc = n - (n >>> 2); // the next resize threshold, 0.75 * n
                    }
                } finally {
                    sizeCtl = sc;
                }
                break;
            }
        }
        return tab;
    }
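The `sc = n - (n >>> 2)` line above is just a way of writing 0.75 * n without floating-point arithmetic: n minus a quarter of n. A quick check with a few hypothetical capacities:

```java
public class ThresholdDemo {
    public static void main(String[] args) {
        for (int n : new int[] {16, 32, 64, 128}) {
            // n minus a quarter of n == three quarters of n
            int sc = n - (n >>> 2);
            if (sc != (n * 3) / 4) throw new AssertionError();
            System.out.println("capacity " + n + " -> threshold " + sc);
        }
    }
}
```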

Next, the expansion method. Expansion is triggered in two situations. One is during list-to-tree conversion: if the Map's capacity is below 64, an expansion is performed first, and a linked list is only converted to a red-black tree once the capacity has reached 64. The other is the addCount method, called after each new node is inserted, which checks whether the size threshold has been reached and expands if so. The basic single-threaded procedure is: create a new table of twice the capacity and copy over the nodes of the original table. For the data in hash bucket i of the original table of capacity n, one part stays at index i of the new table and the other part moves to bucket i + n. Expansion thus finishes with fewer entries per hash bucket, improving the Map's performance.
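The i / i + n split follows from the table size being a power of two: doubling the capacity exposes exactly one more bit of the hash, so each key either keeps its old index or moves up by exactly n. A small sketch (the capacity and sample hash values are arbitrary):

```java
public class SplitDemo {
    public static void main(String[] args) {
        int n = 16;  // old table capacity (always a power of two)
        for (int h : new int[] {5, 21, 100, 37, 53}) {
            int oldIndex = h & (n - 1);      // bucket in the old table
            int newIndex = h & (2 * n - 1);  // bucket after doubling
            // the newly exposed bit (h & n) decides whether the entry
            // stays at i or moves to i + n
            int expected = ((h & n) == 0) ? oldIndex : oldIndex + n;
            if (newIndex != expected) throw new AssertionError();
            System.out.println("hash " + h + ": " + oldIndex + " -> " + newIndex);
        }
    }
}
```

This is exactly the `ph & n` test you will see in the transfer source below.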

    private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) {
        int n = tab.length, stride;
        // stride is the number of hash buckets each thread processes;
        // if the per-CPU share falls below the preset minimum, use the minimum
        if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE)
            stride = MIN_TRANSFER_STRIDE;
        // if no new table exists yet, create one with double the capacity;
        // thread safety of creating the new table is guaranteed by CAS
        // before this method is called
        if (nextTab == null) {
            try {
                @SuppressWarnings("unchecked")
                Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1];
                nextTab = nt;
            } catch (Throwable ex) { // try to cope with OOME
                sizeCtl = Integer.MAX_VALUE;
                return;
            }
            nextTable = nextTab;
            transferIndex = n;
        }
        int nextn = nextTab.length;
        ForwardingNode<K,V> fwd = new ForwardingNode<K,V>(nextTab);
        // advance == true means the current hash bucket has been processed
        boolean advance = true;
        // finishing marks completion of the whole expansion
        boolean finishing = false;
        // use the stride to claim the range of hash buckets this thread processes
        for (int i = 0, bound = 0;;) {
            Node<K,V> f; int fh;
            while (advance) {
                int nextIndex, nextBound;
                if (--i >= bound || finishing)
                    advance = false;
                else if ((nextIndex = transferIndex) <= 0) {
                    i = -1;
                    advance = false;
                }
                else if (U.compareAndSwapInt
                         (this, TRANSFERINDEX, nextIndex,
                          nextBound = (nextIndex > stride ?
                                       nextIndex - stride : 0))) {
                    bound = nextBound;
                    i = nextIndex - 1;
                    advance = false;
                }
            }
            if (i < 0 || i >= n || i + n >= nextn) {
                int sc;
                // if the finishing flag is set, the transfer is complete
                if (finishing) {
                    nextTable = null;
                    table = nextTab;
                    sizeCtl = (n << 1) - (n >>> 1); // new threshold, 0.75 * 2n
                    return;
                }
                // double check before committing
                if (U.compareAndSwapInt(this, SIZECTL, sc = sizeCtl, sc - 1)) {
                    if ((sc - 2) != resizeStamp(n) << RESIZE_STAMP_SHIFT)
                        return;
                    finishing = advance = true;
                    i = n; // recheck before commit
                }
            }
            // if the hash bucket is empty, CAS in the forwarding node
            // to mark the bucket as no longer needing processing
            else if ((f = tabAt(tab, i)) == null)
                advance = casTabAt(tab, i, null, fwd);
            // if the bucket already holds a forwarding node, it has
            // been processed; skip it
            else if ((fh = f.hash) == MOVED)
                advance = true;
            else {
                // lock the head node and split the bucket's data
                synchronized (f) {
                    if (tabAt(tab, i) == f) {
                        // ln and hn will head the two new buckets the data
                        // is split into
                        Node<K,V> ln, hn;
                        // a head hash >= 0 means an ordinary linked list;
                        // negative hashes mark tree bins, placeholder nodes, etc.
                        if (fh >= 0) {
                            int runBit = fh & n;
                            Node<K,V> lastRun = f;
                            // lastRun is the first node of the last run in the
                            // list whose (hash & n) bit is the same; that whole
                            // tail can be reused as-is
                            for (Node<K,V> p = f.next; p != null; p = p.next) {
                                int b = p.hash & n;
                                if (b != runBit) {
                                    runBit = b;
                                    lastRun = p;
                                }
                            }
                            // runBit decides which new list gets the tail run
                            if (runBit == 0) {
                                ln = lastRun;
                                hn = null;
                            }
                            else {
                                hn = lastRun;
                                ln = null;
                            }
                            // prepend the remaining nodes onto the two new
                            // lists; note the new lists are reversed relative
                            // to the old one
                            for (Node<K,V> p = f; p != lastRun; p = p.next) {
                                int ph = p.hash; K pk = p.key; V pv = p.val;
                                if ((ph & n) == 0)
                                    ln = new Node<K,V>(ph, pk, pv, ln);
                                else
                                    hn = new Node<K,V>(ph, pk, pv, hn);
                            }
                            // install the new lists in the new table and the
                            // forwarding node in the old table; the list copy
                            // is complete
                            setTabAt(nextTab, i, ln);
                            setTabAt(nextTab, i + n, hn);
                            setTabAt(tab, i, fwd);
                            advance = true;
                        }
                        // the data in the hash bucket is a tree bin
                        else if (f instanceof TreeBin) {
                            TreeBin<K,V> t = (TreeBin<K,V>)f;
                            TreeNode<K,V> lo = null, loTail = null;
                            TreeNode<K,V> hi = null, hiTail = null;
                            int lc = 0, hc = 0;
                            for (Node<K,V> e = t.first; e != null; e = e.next) {
                                int h = e.hash;
                                TreeNode<K,V> p = new TreeNode<K,V>
                                    (h, e.key, e.val, null, null);
                                // split nodes into two classes by h & n,
                                // maintaining both the tree and the list links
                                if ((h & n) == 0) {
                                    if ((p.prev = loTail) == null)
                                        lo = p;
                                    else
                                        loTail.next = p;
                                    loTail = p;
                                    ++lc;
                                }
                                else {
                                    if ((p.prev = hiTail) == null)
                                        hi = p;
                                    else
                                        hiTail.next = p;
                                    hiTail = p;
                                    ++hc;
                                }
                            }
                            // if a split tree falls below the threshold,
                            // convert it back to a linked list
                            ln = (lc <= UNTREEIFY_THRESHOLD) ? untreeify(lo) :
                                (hc != 0) ? new TreeBin<K,V>(lo) : t;
                            hn = (hc <= UNTREEIFY_THRESHOLD) ? untreeify(hi) :
                                (lc != 0) ? new TreeBin<K,V>(hi) : t;
                            // install the new bins in the new table and the
                            // forwarding node in the old table; the red-black
                            // tree copy is complete
                            setTabAt(nextTab, i, ln);
                            setTabAt(nextTab, i + n, hn);
                            setTabAt(tab, i, fwd);
                            advance = true;
                        }
                    }
                }
            }
        }
    }

Finally, let's look at the put operation under multithreading.
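One detail worth seeing in isolation first: before indexing, putVal spreads the hash by XOR-ing the high 16 bits into the low bits. A small sketch of why this matters (the bucket count and hash values are arbitrary choices for illustration):

```java
public class SpreadDemo {
    public static void main(String[] args) {
        int n = 16;
        // two hash codes that differ only in their high 16 bits
        int h1 = 0x10000, h2 = 0x20000;
        // without spreading, both land in bucket 0: the index mask
        // (n - 1) only looks at the low bits
        if ((h1 & (n - 1)) != (h2 & (n - 1))) throw new AssertionError();
        // spread as ConcurrentHashMap does: XOR the high bits into the low
        // bits, then mask off the sign bit (reserved for special nodes)
        int s1 = (h1 ^ (h1 >>> 16)) & 0x7fffffff;
        int s2 = (h2 ^ (h2 >>> 16)) & 0x7fffffff;
        // after spreading they fall into different buckets
        if ((s1 & (n - 1)) == (s2 & (n - 1))) throw new AssertionError();
        System.out.println((s1 & (n - 1)) + " vs " + (s2 & (n - 1)));
    }
}
```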

    final V putVal(K key, V value, boolean onlyIfAbsent) {
        // ConcurrentHashMap allows neither null keys nor null values
        if (key == null || value == null) throw new NullPointerException();
        // spread the hash so that both high and low bits take part in
        // indexing, reducing the hash collision probability
        int hash = spread(key.hashCode());
        int binCount = 0;
        for (Node<K,V>[] tab = table;;) {
            Node<K,V> f; int n, i, fh;
            // if there is no table yet, initialize one
            if (tab == null || (n = tab.length) == 0)
                tab = initTable();
            // if the hash bucket is empty, add the node with a single CAS;
            // no locking needed
            else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
                if (casTabAt(tab, i, null,
                             new Node<K,V>(hash, key, value, null)))
                    break;
            }
            // a forwarding node in the bucket means an expansion is in
            // progress; help with the transfer
            else if ((fh = f.hash) == MOVED)
                tab = helpTransfer(tab, f);
            else {
                V oldVal = null;
                // lock the head node of the bucket
                synchronized (f) {
                    if (tabAt(tab, i) == f) {
                        if (fh >= 0) {
                            binCount = 1;
                            for (Node<K,V> e = f;; ++binCount) {
                                K ek;
                                if (e.hash == hash &&
                                    ((ek = e.key) == key ||
                                     (ek != null && key.equals(ek)))) {
                                    oldVal = e.val;
                                    if (!onlyIfAbsent)
                                        e.val = value;
                                    break;
                                }
                                Node<K,V> pred = e;
                                if ((e = e.next) == null) {
                                    pred.next = new Node<K,V>(hash, key,
                                                              value, null);
                                    break;
                                }
                            }
                        }
                        // otherwise insert the node into the red-black tree
                        else if (f instanceof TreeBin) {
                            Node<K,V> p;
                            binCount = 2;
                            if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
                                                                  value)) != null) {
                                oldVal = p.val;
                                if (!onlyIfAbsent)
                                    p.val = value;
                            }
                        }
                    }
                }
                // if the list length has reached the threshold,
                // convert the bucket to a red-black tree
                if (binCount != 0) {
                    if (binCount >= TREEIFY_THRESHOLD)
                        treeifyBin(tab, i);
                    if (oldVal != null)
                        return oldVal;
                    break;
                }
            }
        }
        addCount(1L, binCount);
        return null;
    }
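From the caller's side, all of this machinery is invisible. A minimal usage sketch with several threads inserting disjoint keys concurrently (the class name, thread count, and key count are arbitrary):

```java
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentPutDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<Integer, Integer> map = new ConcurrentHashMap<>();
        int threads = 4, perThread = 1000;
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            final int base = t * perThread;
            workers[t] = new Thread(() -> {
                // each put goes through putVal; since the keys are disjoint,
                // every insert must survive
                for (int i = 0; i < perThread; i++)
                    map.put(base + i, base + i);
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            try { w.join(); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        // no puts are lost despite the concurrent writers
        if (map.size() != threads * perThread) throw new AssertionError();
        System.out.println("size = " + map.size()); // size = 4000
    }
}
```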

That's it for this introduction to ConcurrentHashMap. If I make any new discoveries, or if you spot anything wrong, please share them in the comments.