Java collection source code analysis series:

  • Decoding Java collection source code: Overview
  • Decoding Java collection source code: The three Collection hierarchies
  • Decoding Java collection source code: Iterator
  • Decoding Java collection source code: ArrayList
  • Decoding Java collection source code: LinkedList
  • Decoding Java collection source code: HashMap
  • Decoding Java collection source code: Hashtable
  • Decoding Java collection source code: LinkedHashMap
  • Decoding Java collection source code: PriorityQueue
  • Decoding Java collection source code: ArrayDeque

Features

  • Both key and value may be null.

  • Keys should be effectively immutable; that is, their hashCode must not change while they are in the map.

  • Not synchronized, so not thread-safe. For a synchronized view, use the wrapper: Collections.synchronizedMap.

  • This implementation provides constant-time performance for the basic operations (get and put), assuming the hash function distributes elements evenly among the buckets.

  • The time required to iterate over the collection views is proportional to the "capacity" of the HashMap instance (table.length) plus its size (the number of key-value pairs).

    If iteration performance matters, don't set the initial capacity too high (or the load factor too low).

  • Important parameters that affect performance: initial capacity and load factor.

    • Capacity is the number of buckets in the hash table; the initial capacity is simply the capacity at the time the table is created.

    • The load factor is the maximum ratio of stored entries to total capacity that is allowed before the capacity is automatically increased.

    • When the number of key-value pairs in the hash table exceeds the product of the load factor and the current capacity, the hash table is rehashed (that is, the internal data structure is rebuilt) to twice its original capacity.

      The default load factor (0.75) provides a good trade-off between time and space costs.

      A larger load factor saves space but degrades lookups; a smaller one does the opposite, and its frequent rehashes hurt performance.

    • Set the capacity and load factor appropriately at initialization, especially when the number of elements is predictable, to avoid later rehashes.

  • Upper and lower thresholds control the conversion between plain Node chains and TreeNode trees.

    On retrieval, different lookup logic runs depending on the type of the bucket's head node: linked list or red-black tree.

    A tree bin still supports traversal, and even with many nodes its lookup stays fast. But bins are usually not converted to trees, and checking the node type adds a small extra cost to every list lookup.

  • Supporting both plain and tree bins, and the conversion between them, is complicated by the LinkedHashMap subclass.

    Hook methods are defined and called on insert, delete, and access, which lets HashMap stay internally independent of LinkedHashMap's mechanisms. Through this template-method pattern, subclasses may extend the node (storing its predecessor and successor), and such extended nodes are carried over correctly when a bin is converted.
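To illustrate the sizing advice above, here is a minimal sketch (the class name and the entry count of 1000 are hypothetical, for demonstration only) that presizes a HashMap so that filling it should never trigger a rehash:

```java
import java.util.HashMap;
import java.util.Map;

public class PresizeDemo {
    public static void main(String[] args) {
        int expected = 1000; // hypothetical element count, known up front

        // Sizing rule from the analysis: initialCapacity = expected / loadFactor + 1,
        // so the resize threshold is not crossed while filling the map.
        int initialCapacity = (int) (expected / 0.75f) + 1; // 1334, rounded up to 2048 internally
        Map<Integer, Integer> map = new HashMap<>(initialCapacity);
        for (int i = 0; i < expected; i++) {
            map.put(i, i);
        }
        System.out.println(map.size()); // 1000
    }
}
```

With the default constructor instead, the same 1000 puts would resize the table repeatedly (16, 32, ... up to 2048).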

Node data structures

The linked-list Node

static class Node<K,V> implements Map.Entry<K,V> {
        final int hash;    // Used to locate the array index position
        final K key;
        V value;
        Node<K,V> next;   // Next node in the list

        Node(int hash, K key, V value, Node<K,V> next) { ... }
        public final K getKey() { ... }
        public final V getValue() { ... }
        public final String toString() { ... }
        public final int hashCode() { ... }
        public final V setValue(V newValue) { ... }
        public final boolean equals(Object o) { ... }
}
The red-black tree TreeNode

LinkedHashMap:

static class Entry<K,V> extends HashMap.Node<K,V> {
        Entry<K,V> before, after;
        Entry(int hash, K key, V value, Node<K,V> next) {
            super(hash, key, value, next);
        }
}

// Left <= parent <= right
static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> {
    TreeNode<K,V> parent;  // red-black tree links
    TreeNode<K,V> left;
    TreeNode<K,V> right;
    // Together with next, prev forms a doubly linked list over the tree nodes for easy traversal.
    // When the tree is first built, prev and next keep the order of the original linked list.
    // Put maintains them the same way as a linked list: the new node is attached to the left or
    // right of the bottom node (a leaf, or a node with a single subtree); that parent becomes
    // the prev of the new node, and the parent's old next becomes the next of the new node.
    // So the sequence is no longer the insertion order.
    TreeNode<K,V> prev;    // needed to unlink next upon deletion
    boolean red; // marks the node color
    TreeNode(int hash, K key, V val, Node<K,V> next) {
        super(hash, key, val, next);
    }
    final TreeNode<K,V> root() { // return the root node
        for (TreeNode<K,V> r = this, p;;) {
            if ((p = r.parent) == null)
                return r;
            r = p;
        }
    }
}

The source code

Preset parameters

  • Initial capacity: 16

  • Maximum capacity: a power of two, <= 1 << 30

  • Default load factor: 0.75

  • Treeify threshold: 8 (must be greater than 2 and should be at least 8)

  • Untreeify threshold: 6

  • Minimum table[] capacity for treeification: >= 64 (4 times the treeify threshold).

    If the table has not reached 64 buckets when the treeify threshold is hit, only a resize is performed; the bin is not treeified.

    This avoids conflicts between treeification and resizing.

    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4;
    /** Must be a power of two, less than or equal to 1 << 30. */
    static final int MAXIMUM_CAPACITY = 1 << 30;
    static final float DEFAULT_LOAD_FACTOR = 0.75f;
    static final int TREEIFY_THRESHOLD = 8;
    static final int UNTREEIFY_THRESHOLD = 6;
    /**
     * Minimum table[] size for treeification: >= 64 (4 times the treeify threshold).
     * If the table has not reached 64 buckets when the treeify threshold is hit,
     * only a resize is performed; the bin is not treeified.
     * This avoids conflicts between treeification and resizing.
     */
    static final int MIN_TREEIFY_CAPACITY = 64;
Related Parameters
	/** The table; its capacity is always a power of two. */
    transient Node<K,V>[] table;

    transient Set<Map.Entry<K,V>> entrySet;

    transient int size;

    transient int modCount;

    /**
     * The next size value at which to resize (capacity * load factor).
     * If the table array has not been allocated, this field holds the initial
     * array capacity, or zero, signifying DEFAULT_INITIAL_CAPACITY.
     */
    int threshold;

    /** The load factor for the hash table. */
    final float loadFactor;
The constructor

To guarantee a power-of-two capacity, only the constructor input needs to be bounded (resizing is internal: as long as the initial capacity is a power of two, later doublings keep it one).

    public HashMap(int initialCapacity, float loadFactor) {
        if (initialCapacity < 0)
            throw new IllegalArgumentException("Illegal initial capacity: " +
                                               initialCapacity);
        if (initialCapacity > MAXIMUM_CAPACITY)
            // Control the maximum capacity
            initialCapacity = MAXIMUM_CAPACITY;
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new IllegalArgumentException("Illegal load factor: " +
                                               loadFactor);
        this.loadFactor = loadFactor;
        // Round up to the smallest power of two >= initialCapacity
        // table[] is lazily allocated; until then, threshold temporarily stores the capacity.
        this.threshold = tableSizeFor(initialCapacity);
    }

    public HashMap(int initialCapacity) {
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }

    public HashMap() {
        this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
    }

    public HashMap(Map<? extends K, ? extends V> m) {
        this.loadFactor = DEFAULT_LOAD_FACTOR;
        // Transfer key-value pairs
        putMapEntries(m, false);
    }

    final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {
        int s = m.size();
        if (s > 0) {
            // Called not only from the constructor but also from putAll, hence two cases
            if (table == null) { // Compute an estimated capacity to initialize with
                float ft = ((float)s / loadFactor) + 1.0F; // "expected size / load factor + 1" is the recommended sizing in any scenario, to avoid frequent rehashes
                int t = ((ft < (float)MAXIMUM_CAPACITY) ?
                         (int)ft : MAXIMUM_CAPACITY);
                if (t > threshold) // Here threshold still holds the preset capacity
                    // Round up to the smallest power of two >= t
                    threshold = tableSizeFor(t);
            }
            else if (s > threshold)
                // More entries to add than the threshold: pre-expand, then put the elements.
                // Why not "current size + new size > threshold"? Keys may overlap, so the counts can't simply be added.
                resize();
            for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {
                K key = e.getKey();
                V value = e.getValue();
                // Resize is still possible
                putVal(hash(key), key, value, false, evict);
            }
        }
    }

    /*
     * public static int numberOfLeadingZeros(int i) {
     *     // HD, Count leading 0's
     *     if (i <= 0)
     *         return i == 0 ? 32 : 0;
     *     int n = 31;
     *     // The process below is effectively a binary search.
     *     // Check the high 16 bits; if any is set, shift them into the low 16 bits: -16 leading zeros.
     *     if (i >= 1 << 16) { n -= 16; i >>>= 16; }
     *     // Check the high 8 of the remaining 16 bits; if any is set, shift down: -8.
     *     if (i >= 1 << 8) { n -= 8; i >>>= 8; }
     *     // Check the high 4 of the remaining 8 bits; if any is set, shift down: -4.
     *     if (i >= 1 << 4) { n -= 4; i >>>= 4; }
     *     // Check the high 2 of the remaining 4 bits; if any is set, shift down: -2.
     *     if (i >= 1 << 2) { n -= 2; i >>>= 2; }
     *     // Account for the last bit.
     *     return n - (i >>> 1);
     * }
     */
    static final int tableSizeFor(int cap) {
        // numberOfLeadingZeros returns the number of leading zero bits of the unsigned value.
        // -1 >>> k yields a run of ones covering every bit from the highest set bit of (cap - 1) downwards.
        // E.g. cap = 32: cap - 1 = 31 (11111), which has 27 leading zeros;
        // -1 >>> 27 = 11111 = 31, and 31 + 1 = 32.
        int n = -1 >>> Integer.numberOfLeadingZeros(cap - 1);
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }
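To see the rounding in action, here is a standalone sketch (the wrapper class is ours for illustration; the method body mirrors the JDK 9+ code above):

```java
public class TableSizeForDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // Re-implementation of HashMap.tableSizeFor for demonstration:
    // returns the smallest power of two >= cap.
    static int tableSizeFor(int cap) {
        int n = -1 >>> Integer.numberOfLeadingZeros(cap - 1);
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(1));   // 1
        System.out.println(tableSizeFor(10));  // 16
        System.out.println(tableSizeFor(16));  // 16: exact powers of two stay unchanged
        System.out.println(tableSizeFor(17));  // 32
    }
}
```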
The Hash function

Each time a key-value pair is added, HashMap computes the pair's index in table[] from the hash code of the key.

The process takes two steps:

  • hashCode is passed through the hash function to get a hash value (a perturbation function, to reduce collisions as much as possible);

JDK 1.8 and later: 1 shift and 1 XOR.

static final int hash(Object key) {
    int h;
    // XOR the high 16 bits into the low 16 bits, so the key's full hash code
    // contributes to the bucket index, reducing hash collisions.
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}

JDK 1.7 and before: 4 shifts and 4 XOR operations.

h = k.hashCode();
h ^= (h >>> 20) ^ (h >>> 12);
return h ^ (h >>> 7) ^ (h >>> 4);

In practice, the single mix in 1.8 is enough to reduce the chance of a collision. Extra mixing rounds are not worth it: the first round captures most of the benefit, and further rounds yield sharply diminishing returns.

  • The hash value is taken modulo the length of table[] to obtain the index.

static int indexFor(int h, int length) {  // JDK 1.7 source; JDK 1.8 has no such method, but inlines the same computation
     return h & (length - 1);  // bitwise modulo, equivalent to h % length when length is a power of two
}

This bit operation produces the index quickly, and it is why the array length must be a power of two.
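A small sketch (class name is ours) verifying both steps: that h & (length - 1) equals h % length for power-of-two lengths, and that the perturbation separates hash codes differing only in their high bits:

```java
public class HashIndexDemo {
    // JDK 8 perturbation: XOR the high 16 bits into the low 16 bits.
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int length = 16; // table length, always a power of two

        // For power-of-two lengths, (h & (length - 1)) == (h % length) for non-negative h.
        for (int h = 0; h < 100; h++) {
            if ((h & (length - 1)) != (h % length)) throw new AssertionError();
        }

        // Integer hash codes that differ only in the high bits would collide at index 0
        // without the perturbation; with it, they land in different buckets.
        System.out.println((0x10000 & (length - 1)) + " " + (0x20000 & (length - 1))); // 0 0
        System.out.println(hash(0x10000) & (length - 1)); // 1
        System.out.println(hash(0x20000) & (length - 1)); // 2
    }
}
```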

(Image: tech.meituan.com/2016/06/24/…)

Put

Elements are added via putAll (which delegates to putMapEntries), put, and putIfAbsent; all of them add key-value pairs by calling putVal, directly or in a loop.

So the logic of put centers on putVal. The other methods that add key-value pairs are similar, involving all or some of the checks below.

Let's think for a moment: what does putting a key-value pair involve?

  • Compute the hash of the key;
  • Compute an array index;
  • Whether there are nodes in the position;
  • Is the node a linked list or a tree?
  • Whether the same key exists: add or replace;
  • Whether expansion is required;
  • Whether to treeify;
  • Whether to untreeify;

    final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
                   boolean evict) {
        Node<K,V>[] tab; Node<K,V> p; int n, i;
        if ((tab = table) == null || (n = tab.length) == 0)
            // Initialize the array
            n = (tab = resize()).length;
        if ((p = tab[i = (n - 1) & hash]) == null)
            // The current position of the array is empty, insert directly into the linked list node
            tab[i] = newNode(hash, key, value, null);
        else {
            Node<K,V> e; K k;
            if (p.hash == hash && ((k = p.key) == key || (key != null && key.equals(k))))
                // The key already exists
                e = p;
            else if (p instanceof TreeNode)
                // Is a tree node, go tree node insertion: involves comparison, balance
                e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
            else {
                // Iterate over the current list
                for (int binCount = 0; ; ++binCount) {
                    if ((e = p.next) == null) {
                        // Tail insertion: p is the current tail, append after it
                        p.next = newNode(hash, key, value, null);
                        // binCount counts the nodes already traversed: binCount + 1 >= TREEIFY_THRESHOLD
                        if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                            // Treeify
                            treeifyBin(tab, hash);
                        break;
                    }
                    // The key was found during the traversal
                    if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k))))
                        break;
                    p = e; // p = p.next
                }
            }
            if (e != null) { // existing mapping for key
                V oldValue = e.value;
                // Overwrite unless restricted, or when the old value is null
                // (this handles putIfAbsent)
                if (!onlyIfAbsent || oldValue == null)
                    e.value = value;
                // Leave the hook method to the subclass after changing the node
                afterNodeAccess(e);
                return oldValue;
            }
        }
        ++modCount;
        // If the added number is greater than the threshold, the system expands the capacity
        if (++size > threshold)
            resize();
        // Hook for subclasses, called after a new node is inserted
        afterNodeInsertion(evict);
        return null;
    }

    final void treeifyBin(Node<K,V>[] tab, int hash) {
        int n, index; Node<K,V> e;
        // Treeify only when the capacity is at least 64; otherwise just resize
        if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
            resize();
        else if ((e = tab[index = (n - 1) & hash]) != null) {
            TreeNode<K,V> hd = null, tl = null;
            do {
                // Iterate through the list and replace it with a tree node
                TreeNode<K,V> p = replacementTreeNode(e, null);
                if (tl == null)
                    // Reserve the header
                    hd = p;
                else {
                    // Keep the list properties
                    p.prev = tl;
                    tl.next = p;
                }
                tl = p;
            } while ((e = e.next) != null);
            // Set the head node into the array
            if ((tab[index] = hd) != null)
                // Treeify
                hd.treeify(tab);
        }
    }
class TreeNode {
    final TreeNode<K,V> putTreeVal(HashMap<K,V> map, Node<K,V>[] tab,
                                   int h, K k, V v) {
            Class<?> kc = null;
            boolean searched = false;
            // Iterators may remove tree nodes, so this node is not necessarily
            // the root stored in the array; fetch the root again.
            TreeNode<K,V> root = (parent != null) ? root() : this;
            for (TreeNode<K,V> p = root;;) {
                int dir, ph; K pk;
                // Determine the size based on the hash first
                if ((ph = p.hash) > h)
                    dir = -1;
                else if (ph < h)
                    dir = 1;
                // Key already exists
                else if ((pk = p.key) == k || (k != null && k.equals(pk)))
                    return p;
                // Hashes are equal: if the key is Comparable, compare by compareTo
                else if ((kc == null &&
                          (kc = comparableClassFor(k)) == null) ||
                         (dir = compareComparables(kc, k, pk)) == 0) {
                    // The class is not comparable (possibly mixed types with no common generic), or the keys compare equal
                    if (!searched) {
                        TreeNode<K,V> q, ch;
                        // Tied comparisons may have gone down either subtree,
                        // so search both subtrees exhaustively once.
                        searched = true;
                        if (((ch = p.left) != null &&
                             (q = ch.find(h, k, kc)) != null) ||
                            ((ch = p.right) != null &&
                             (q = ch.find(h, k, kc)) != null))
                            return q;
                    }
                    // Tie-break on class name and identity hash code
                    dir = tieBreakOrder(k, pk);
                }

                TreeNode<K,V> xp = p;
                // Descend; stop at the bottom node (no child on the chosen side)
                if ((p = (dir <= 0) ? p.left : p.right) == null) {
                    Node<K,V> xpn = xp.next;
                    TreeNode<K,V> x = map.newTreeNode(h, k, v, xpn);
                    if (dir <= 0)
                        // Smaller (or tied) goes on the left
                        xp.left = x;
                    else
                        // Larger goes on the right
                        xp.right = x;
                    // parent.next = new node
                    xp.next = x;
                    // new node's prev = parent
                    x.parent = x.prev = xp;
                    // new node's next = parent's old next
                    if (xpn != null)
                        ((TreeNode<K,V>)xpn).prev = x;
                    // Rebalance, then put the root node in table[i]
                    moveRootToFront(tab, balanceInsertion(root, x));
                    return null;
                }
            }
        }

        // Insertion order: compare hash first, then Comparable, then class/identity tie-break
        final void treeify(Node<K,V>[] tab) {
            TreeNode<K,V> root = null;
            for (TreeNode<K,V> x = this, next; x != null; x = next) {
                next = (TreeNode<K,V>)x.next;
                x.left = x.right = null;
                if (root == null) {
                    x.parent = null;
                    x.red = false;
                    root = x;
                }
                else {
                    K k = x.key;
                    int h = x.hash;
                    Class<?> kc = null;
                    for (TreeNode<K,V> p = root;;) {
                        int dir, ph;
                        K pk = p.key;
                        if ((ph = p.hash) > h)
                            dir = -1;
                        else if (ph < h)
                            dir = 1;
                        else if ((kc == null &&
                                  (kc = comparableClassFor(k)) == null) ||
                                 (dir = compareComparables(kc, k, pk)) == 0)
                            dir = tieBreakOrder(k, pk);

                        TreeNode<K,V> xp = p;
                        if ((p = (dir <= 0) ? p.left : p.right) == null) {
                            x.parent = xp;
                            if (dir <= 0)
                                xp.left = x;
                            else
                                xp.right = x;
                            // After each insertion, rebalance and update the root
                            root = balanceInsertion(root, x);
                            break;
                        }
                    }
                }
            }
            // Place the root node in table[i]
            moveRootToFront(tab, root);
        }
}
The resize method

Note that the two steps below both live in the resize method; they are just analyzed separately.

Computing the capacity and threshold

The computation of the new capacity and resize threshold has a few branches. In short, it keeps the capacity within the range [default (16), maximum (1 << 30)].

		Node<K,V>[] oldTab = table;
        int oldCap = (oldTab == null) ? 0 : oldTab.length;
        int oldThr = threshold;
        int newCap, newThr = 0;
        
        // Already initialized (oldCap > 0)
        if (oldCap > 0) {
            if (oldCap >= MAXIMUM_CAPACITY) {
                // Already at maximum: set the threshold to Integer.MAX_VALUE and return the old table
                threshold = Integer.MAX_VALUE;
                return oldTab;
            }
            // Double the capacity, as long as it stays below the maximum
            else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                    // and the old capacity is at least the default (16):
                     oldCap >= DEFAULT_INITIAL_CAPACITY)
                // Double the capacity and double the threshold
                newThr = oldThr << 1; // double threshold
        }
        // Not initialized yet; the initial capacity was stored in threshold
        else if (oldThr > 0) // initial capacity was placed in threshold
            newCap = oldThr;
        else {               // zero initial threshold signifies using defaults
            // No arguments
            newCap = DEFAULT_INITIAL_CAPACITY;
            newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
        }
        if (newThr == 0) {
            // Two ways to reach this if:
            // 1. Through if (oldCap > 0) without satisfying its inner conditions: the capacity
            //    was just doubled to the maximum (doubling the threshold could overflow), or the
            //    old capacity was less than 16 (a tiny table, so the threshold was never doubled).
            // 2. Through else if (oldThr > 0): the table is not yet allocated and threshold held
            //    the initial capacity, so the real threshold must be computed now.
            float ft = (float)newCap * loadFactor;
            // At maximum capacity a large load factor can push the computed
            // threshold past the maximum as well; clamp to Integer.MAX_VALUE.
            newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                      (int)ft : Integer.MAX_VALUE);
        }
        
        threshold = newThr;
        Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
        table = newTab;
Reindexing the nodes

The precondition is oldTab != null, so array initialization never reaches this code.

Each slot of oldTab is then visited in turn: a lone node under a slot is recomputed directly; otherwise the linked list or tree is traversed to recompute each node.

When the array doubles, hash & (newCap - 1) for each node yields one of two results:

  • either an index equal to the original hash & (oldCap - 1),
  • or the original index plus the old capacity: index + oldCap.

(Image source: tech.meituan.com/2016/06/24/…)

So, as the figure above shows, the single extra bit that newCap - 1 exposes (the oldCap bit of the node's hash) alone determines whether the index stays the same or moves by + oldCap.

Instead of recomputing hash & (newCap - 1), the code tests hash & oldCap == 0 and obtains the new index immediately.

That bit is 0 or 1 with roughly even odds, which tends to split the linked list evenly.

The final result looks like this (for example, expanding from 16 to 32):

Photo source from (tech.meituan.com/2016/06/24/…

JDK 7 and earlier used head insertion: a node with the same index was placed directly at the head of the bucket, so the node inserted first ends up at the tail of the list.

After expansion, the linked list is therefore reversed.

JDK 8 and later use tail insertion, appending nodes to the end of the list. After expansion, the migrated nodes keep their original order.

This also avoids the cyclic linked list that head insertion could create during concurrent resizing.
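The split rule can be checked directly. This small sketch (class name is ours) predicts the new index of every hash from the single bit hash & oldCap and compares it against a full recomputation:

```java
public class ResizeIndexDemo {
    public static void main(String[] args) {
        int oldCap = 16, newCap = 32; // doubling, as resize always does

        // For any hash, the new index is either the old index or old index + oldCap,
        // decided entirely by the single bit (hash & oldCap).
        for (int hash = 0; hash < 1000; hash++) {
            int oldIdx = hash & (oldCap - 1);
            int newIdx = hash & (newCap - 1);
            int predicted = (hash & oldCap) == 0 ? oldIdx : oldIdx + oldCap;
            if (newIdx != predicted) throw new AssertionError();
        }
        System.out.println("ok");
    }
}
```

This works because newCap - 1 has exactly one more set bit than oldCap - 1, namely the oldCap bit itself.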

if (oldTab != null) {
    // Iterate over the header on the array
    for (int j = 0; j < oldCap; ++j) {
        Node<K,V> e;
        // Skip slots with no node (neither tree nor list)
        if ((e = oldTab[j]) != null) {
            oldTab[j] = null;
            if (e.next == null)
                // A single node: recompute its index directly (the hash & oldCap trick
                // and the lo/hi branches below are unnecessary)
                newTab[e.hash & (newCap - 1)] = e;
            else if (e instanceof TreeNode)
                // Split the tree
                ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
            else { // The order of the list nodes is preserved
                // Low list: stays at newTab[j]
                Node<K,V> loHead = null, loTail = null;
                // High list: moves to newTab[j + oldCap]
                Node<K,V> hiHead = null, hiTail = null;
                Node<K,V> next;
                do {
                    next = e.next;
                    if ((e.hash & oldCap) == 0) {
                        // On the original list
                        if (loTail == null)
                            loHead = e;
                        else
                            loTail.next = e;
                        loTail = e;
                    }
                    else {
                        // On the new list
                        if (hiTail == null)
                            hiHead = e;
                        else
                            hiTail.next = e;
                        hiTail = e;
                    }
                } while ((e = next) != null);
                // The low list is not empty: set its head into the old slot
                if (loTail != null) {
                    loTail.next = null;
                    newTab[j] = loHead;
                }
                // The high list is not empty: set its head into the new slot
                if (hiTail != null) {
                    hiTail.next = null;
                    newTab[j + oldCap] = hiHead;
                }
            }
        }
    }
}

class TreeNode {
    	/**
         * @param tab   newTab
         * @param index j
         * @param bit   oldCap
         * Splits the nodes in a tree bin into lower and higher tree bins,
         * untreeifying if the resulting bin is now too small. Called only from resize.
         */
    final void split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit) {
        TreeNode<K,V> b = this;
        // Relink into lo and hi lists, preserving order
        TreeNode<K,V> loHead = null, loTail = null;
        TreeNode<K,V> hiHead = null, hiTail = null;
        int lc = 0, hc = 0;
        // Same logic as the list split; additionally counts the nodes
        // (to decide whether each half stays a tree)
        for (TreeNode<K,V> e = b, next; e != null; e = next) {
            next = (TreeNode<K,V>)e.next;
            e.next = null;
            // The old position
            if ((e.hash & bit) == 0) {
                if ((e.prev = loTail) == null)
                    loHead = e;
                else
                    loTail.next = e;
                loTail = e;
                ++lc;
            }
            else {
                // The new position
                if ((e.prev = hiTail) == null)
                    hiHead = e;
                else
                    hiTail.next = e;
                hiTail = e;
                ++hc;
            }
        }

        if (loHead != null) {
            if (lc <= UNTREEIFY_THRESHOLD)
                // Count <= 6: untreeify.
                // The linked list is already maintained, so only the node type
                // needs replacing; the tree structure is simply dropped.
                tab[index] = loHead.untreeify(map);
            else {
                tab[index] = loHead;
                // If hiHead is null, no node left this bin, so the existing tree is intact
                if (hiHead != null) // (else is already treeified)
                    loHead.treeify(tab);
            }
        }
        if (hiHead != null) {
            // Same as above
            if (hc <= UNTREEIFY_THRESHOLD)
                tab[index + bit] = hiHead.untreeify(map);
            else {
                tab[index + bit] = hiHead;
                if (loHead != null)
                    hiHead.treeify(tab);
            }
        }
    }
}
Get & Remove
Remove
public V remove(Object key) {
    Node<K,V> e;
    return (e = removeNode(hash(key), key, null, false, true)) == null ?
        null : e.value;
}

	/**
     * @param matchValue if true, remove only when the value also matches
     * @param movable    if false, do not move other nodes while removing
     */
final Node<K,V> removeNode(int hash, Object key, Object value,
                           boolean matchValue, boolean movable) {
    Node<K,V>[] tab; Node<K,V> p; int n, index;
    // Locating the node is exactly the same as in get:
    // check the head first, then traverse the tree or the linked list
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (p = tab[index = (n - 1) & hash]) != null) {
        Node<K,V> node = null, e; K k; V v;
        if (p.hash == hash && ((k = p.key) == key || (key != null && key.equals(k))))
            // The header is
            node = p;
        else if ((e = p.next) != null) {
            if (p instanceof TreeNode)
                // Search the tree
                node = ((TreeNode<K,V>)p).getTreeNode(hash, key);
            else {
                do {
                    // Traverse the list
                    if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k)))) {
                        node = e;
                        break;
                    }
                    p = e;
                } while ((e = e.next) != null);
            }
        }
        if (node != null && (!matchValue || (v = node.value) == value ||
                             (value != null && value.equals(v)))) {
            if (node instanceof TreeNode)
                ((TreeNode<K,V>)node).removeTreeNode(this, tab, movable);
            else if (node == p)
                tab[index] = node.next;
            else
                p.next = node.next;
            ++modCount;
            --size;
            // Hook method
            afterNodeRemoval(node);
            return node;
        }
    }
    return null;
}
Hook method

The hook methods let subclasses extend HashMap to maintain their own node structure.

// Called after a node is accessed: used by put, replace, compute, and merge
void afterNodeAccess(Node<K,V> p) { }
// Called after a node is inserted: used by put, computeIfAbsent, compute, and merge
void afterNodeInsertion(boolean evict) { }
// Called after a node is removed: used by removeNode
void afterNodeRemoval(Node<K,V> p) { }
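These hooks are what make LinkedHashMap's bookkeeping possible. For example, its public removeEldestEntry extension point is driven by afterNodeInsertion. A minimal LRU cache sketch (the cache size of 3 is arbitrary, for demonstration):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {
    public static void main(String[] args) {
        final int maxEntries = 3;
        // accessOrder = true makes LinkedHashMap iterate in least-recently-used-first
        // order; afterNodeAccess moves touched nodes to the tail of its linked list.
        Map<Integer, String> lru = new LinkedHashMap<>(16, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<Integer, String> eldest) {
                return size() > maxEntries; // checked from afterNodeInsertion after each put
            }
        };
        lru.put(1, "a"); lru.put(2, "b"); lru.put(3, "c");
        lru.get(1);        // touch 1: it becomes most recently used
        lru.put(4, "d");   // evicts 2, the least recently used
        System.out.println(lru.keySet()); // [3, 1, 4]
    }
}
```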