An overview of HashMap

In a nutshell, a HashMap is an associative array (a hash table). It is not thread-safe, allows null keys and null values, and its iteration order is not guaranteed. The underlying data structure is an array of hash buckets; each bucket holds a linked list, and each node of a list is an element of the hash table. In JDK 8, when a list's length reaches 8 it is converted into a red-black tree to improve query and insertion efficiency. HashMap implements the Map<K,V>, Cloneable, and Serializable interfaces.

Because the underlying data structure of the hash buckets is an array, capacity expansion (resizing) also comes into play.

When the number of elements in the HashMap reaches the threshold, capacity expansion is triggered. Before and after expansion, the length of the hash bucket array is always a power of 2. This allows a bitwise AND to replace the modulo operation when locating the bucket for a key's hash value, which is more efficient.
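A minimal sketch (using an arbitrary example hash value) of why the bitwise trick works when the length is a power of 2:

```java
public class IndexDemo {
    public static void main(String[] args) {
        int n = 16;                // bucket length, a power of 2
        int hash = 123456789;      // an arbitrary example hash value
        // (n - 1) is a mask of all 1s in the low bits, e.g. 16 - 1 = 0b1111,
        // so hash & (n - 1) keeps exactly the low log2(n) bits — the same
        // result as a non-negative hash % n, without a division.
        System.out.println(hash % n);        // 5
        System.out.println(hash & (n - 1));  // 5
    }
}
```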

The hash value of a key is not simply what the key's hashCode() method returns; it is also perturbed by a perturbation function to distribute hash values more evenly. hashCode() returns an int, with a range of over 4 billion values, so as long as the hash function maps keys fairly uniformly and loosely, collisions are rare. But even if every key has a distinct, well-distributed hashCode(), the hash bucket array of a HashMap is much smaller than the hash value range (the default length is 16). When the hash value is reduced modulo the bucket length to find the bucket index, the modulo is implemented with a bitwise AND, so the high bits of the hash are discarded. Only the low bits of hashCode() participate in the operation, which greatly increases the probability that different hash values map to the same index. This is called a hash collision; in other words, the collision rate rises.

The perturbation function is designed to mitigate such collisions. It mixes the features of the high and low halves of the hash value into the low half, so that the high bits effectively participate in the index calculation too, reducing the probability of hash collisions. (Before JDK 8, the perturbation function perturbed four times; JDK 8 simplified this operation.)
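A rough sketch of the JDK 8 style perturbation (the real hash method appears later in this article, in the "Insert and update" section); the high 16 bits are folded into the low 16 bits so both influence the bucket index. The two hash values below are made up for illustration:

```java
public class PerturbDemo {
    // Same shape as HashMap.hash(Object): XOR the high half into the low half.
    static int perturb(int h) {
        return h ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16; // default bucket length
        // Two hash values that differ only in their high 16 bits...
        int h1 = 0x10005;
        int h2 = 0x20005;
        // ...collide without perturbation (only the low bits are used)...
        System.out.println((h1 & (n - 1)) == (h2 & (n - 1))); // true
        // ...but land in different buckets once the high bits are mixed in.
        System.out.println((perturb(h1) & (n - 1)) == (perturb(h2) & (n - 1))); // false
    }
}
```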

During capacity expansion, a new Node array is created as the hash bucket array, and all data (Node entries) in the original hash table is moved into it, which is roughly equivalent to re-putting every entry. The performance cost is therefore significant, and, as you would expect, the larger the hash table, the larger the cost.

If a bucket contains a linked list (a prior hash collision) with fewer than 8 nodes, then during expansion each node of the list is placed, in order, at its corresponding index in the new bucket array based on its hash value. Because expansion doubles the capacity, each node of the original list ends up either at the original index (the low position) or at the original index plus the old capacity (the high position): high index = low index + original bucket capacity. If appending a node brings a list's length to 8, it is converted to a red-black tree.
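A small sketch (with a made-up hash value) of the low/high split rule: the bit tested by `hash & oldCap` is exactly the new bit of the index mask after doubling:

```java
public class SplitDemo {
    public static void main(String[] args) {
        int oldCap = 16, newCap = 32;
        int hash = 0b10101;                 // example hash value 21: bit 4 (value 16) is set
        int oldIdx = hash & (oldCap - 1);   // index before expansion
        int newIdx = hash & (newCap - 1);   // index after expansion
        // (hash & oldCap) != 0 means the new mask bit is set,
        // so the node moves to the "high" position: oldIdx + oldCap.
        System.out.println(oldIdx);               // 5
        System.out.println(newIdx);               // 21
        System.out.println(oldIdx + oldCap);      // 21, same as newIdx
        System.out.println((hash & oldCap) != 0); // true
    }
}
```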

As you can see from the implementation of the iterator, a HashMap is traversed in order of hash buckets from lowest to highest and, within each bucket, lists from front to back. It is an unordered collection.
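A small illustration of the unordered traversal. Integer keys are used because Integer.hashCode() is the value itself, so the bucket order is predictable here (the keys are arbitrary examples):

```java
import java.util.HashMap;

public class OrderDemo {
    public static void main(String[] args) {
        HashMap<Integer, String> map = new HashMap<>();
        map.put(20, "x");
        map.put(3, "y");
        map.put(9, "z");
        // Traversal follows the bucket index (hash & 15 for the default table),
        // not insertion order: buckets 3, 4, 9 hold keys 3, 20, 9.
        System.out.println(map.keySet()); // [3, 20, 9]
    }
}
```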

The HashMap source code is full of bit operations that replace regular arithmetic to improve efficiency:

  • Bitwise AND replaces the modulo operation: hash & (table.length - 1) instead of hash % table.length
  • if ((e.hash & oldCap) == 0) determines whether node e lands in the low or the high position after expansion.

The linked-list node: Node

Let’s first look at the structure of the linked-list nodes mounted on the hash buckets:

    static class Node<K,V> implements Map.Entry<K,V> {
        final int hash;     // the hash value
        final K key;        // the key
        V value;            // the value
        Node<K,V> next;     // the next node in the list

        Node(int hash, K key, V value, Node<K,V> next) {
            this.hash = hash;
            this.key = key;
            this.value = value;
            this.next = next;
        }

        public final K getKey()        { return key; }
        public final V getValue()      { return value; }
        public final String toString() { return key + "=" + value; }

        // The hashCode of each node combines the hashCode of the key and the hashCode of the value.
        public final int hashCode() {
            return Objects.hashCode(key) ^ Objects.hashCode(value);
        }
        // Set the new value and return the old value
        public final V setValue(V newValue) {
            V oldValue = value;
            value = newValue;
            return oldValue;
        }

        public final boolean equals(Object o) {
            if (o == this)
                return true;
            if (o instanceof Map.Entry) {
                Map.Entry<?,?> e = (Map.Entry<?,?>) o;
                if (Objects.equals(key, e.getKey()) &&
                    Objects.equals(value, e.getValue()))
                    return true;
            }
            return false;
        }
    }

Conclusion:

  • This is a singly linked list
  • The hash value of each node is obtained by combining the hashCode of the key and the hashCode of the value.

The constructors

    // The default initial capacity, 16. Must be a power of 2.
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4;
    // The maximum capacity is 2 to the 30th
    static final int MAXIMUM_CAPACITY = 1 << 30;
    // The default load factor
    static final float DEFAULT_LOAD_FACTOR = 0.75f;

    // The hash buckets that store the linked lists. The length is always a power of 2; the field is null until first use.
    transient Node<K,V>[] table;

    // The load factor, used to calculate the element-count threshold: threshold = bucket length * loadFactor
    final float loadFactor;
    // The element-count threshold. When the number of elements in the hash table exceeds it, resize() occurs.
    int threshold;

    public HashMap() {
        // The default constructor assigns the default load factor, 0.75f
        this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
    }
    public HashMap(int initialCapacity) {
        // Constructor specifying the initial capacity
        this(initialCapacity, DEFAULT_LOAD_FACTOR);
    }
    // Specify both the initial capacity and the load factor (rarely needed)
    public HashMap(int initialCapacity, float loadFactor) {
        // Boundary handling
        if (initialCapacity < 0)
            throw new IllegalArgumentException("Illegal initial capacity: " +
                                               initialCapacity);
        // The initial capacity cannot exceed 2^30
        if (initialCapacity > MAXIMUM_CAPACITY)
            initialCapacity = MAXIMUM_CAPACITY;
        // The load factor must be positive and not NaN
        if (loadFactor <= 0 || Float.isNaN(loadFactor))
            throw new IllegalArgumentException("Illegal load factor: " +
                                               loadFactor);
        this.loadFactor = loadFactor;
        // Store the initial capacity, rounded up to a power of 2, in threshold for now
        this.threshold = tableSizeFor(initialCapacity);
    }
    // Create a hash table and add all the elements from another map m
    public HashMap(Map<? extends K, ? extends V> m) {
        this.loadFactor = DEFAULT_LOAD_FACTOR;
        putMapEntries(m, false);
    }
    // Return the actual bucket capacity, a power of 2, based on the expected capacity cap. The return value is generally >= cap.
    static final int tableSizeFor(int cap) {
        int n = cap - 1;
        // After the OR/shift operations below, every bit below n's highest set bit is 1
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        // Guard against out-of-range values, then n + 1 is the desired power of 2
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }
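A quick sketch of the rounding behaviour, using a local copy of the same bit-twiddling logic for illustration:

```java
public class TableSizeDemo {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // Local copy of HashMap.tableSizeFor, reproduced here so the demo is self-contained.
    static int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(10)); // 16
        System.out.println(tableSizeFor(16)); // 16 (the "cap - 1" keeps exact powers of 2 unchanged)
        System.out.println(tableSizeFor(17)); // 32
    }
}
```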
    // Add all the elements of another Map to the table. evict is false when called during construction, true otherwise.
    final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {
        // Get the number of elements in m
        int s = m.size();
        // If that number is greater than 0
        if (s > 0) {
            // If the current table is uninitialized
            if (table == null) { // pre-size
                // Calculate the required capacity from the number of elements in m and the load factor
                float ft = ((float)s / loadFactor) + 1.0F;
                // Clamp to MAXIMUM_CAPACITY
                int t = ((ft < (float)MAXIMUM_CAPACITY) ?
                         (int)ft : MAXIMUM_CAPACITY);
                // If the new value is greater than the current threshold
                if (t > threshold)
                    // Store the capacity, rounded up to a power of 2, in threshold
                    threshold = tableSizeFor(t);
            }
            // If the table is already initialized but m has more elements than the threshold, expand first.
            else if (s > threshold)
                resize();
            // Iterate over m, adding its elements to the current table.
            for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {
                K key = e.getKey();
                V value = e.getValue();
                putVal(hash(key), key, value, false, evict);
            }
        }
    }

Take a look at the expansion function first: this is the big one! It either initializes the table or doubles its size. If the current hash bucket array is null, an initial capacity consistent with the current threshold is allocated; otherwise the size is doubled. Because the capacity doubles, it is important during expansion to work out, for the nodes that used to share a bucket index, whether each now belongs at the old index or at index + oldCap.

    final Node<K,V>[] resize() {
        // oldTab is the current hash bucket array
        Node<K,V>[] oldTab = table;
        // The current capacity (bucket array length)
        int oldCap = (oldTab == null) ? 0 : oldTab.length;
        // The current threshold
        int oldThr = threshold;
        // The new capacity and threshold, initialized to 0
        int newCap, newThr = 0;
        // If the current capacity is greater than 0
        if (oldCap > 0) {
            // If the current capacity has already reached the upper limit
            if (oldCap >= MAXIMUM_CAPACITY) {
                // Set the threshold to 2^31 - 1
                threshold = Integer.MAX_VALUE;
                // Return the current bucket array unchanged
                return oldTab;
            }
            // Otherwise the new capacity is twice the old capacity...
            else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                     oldCap >= DEFAULT_INITIAL_CAPACITY) // ...and if the old capacity is at least the default 16,
                // the new threshold is twice the old threshold
                newThr = oldThr << 1; // double threshold
        }
        // If the current table is empty but there is a threshold, the capacity/threshold was specified at construction
        else if (oldThr > 0) // initial capacity was placed in threshold
            newCap = oldThr; // the new capacity equals the old threshold
        else { // The table is empty and there is no threshold: no capacity/threshold was specified at construction
            newCap = DEFAULT_INITIAL_CAPACITY; // the new capacity is the default 16
            newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY); // new threshold = default 16 * default 0.75f = 12
        }
        if (newThr == 0) { // The new threshold is still 0 (e.g. the capacity came from a constructor-supplied threshold)
            float ft = (float)newCap * loadFactor; // compute it from the new capacity and load factor
            // Clamp out-of-range values
            newThr = (newCap < MAXIMUM_CAPACITY && ft < (float)MAXIMUM_CAPACITY ?
                      (int)ft : Integer.MAX_VALUE);
        }
        // Update the threshold
        threshold = newThr;
        // Build a new bucket array with the new capacity
        @SuppressWarnings({"rawtypes","unchecked"})
            Node<K,V>[] newTab = (Node<K,V>[])new Node[newCap];
        // Update the bucket array reference
        table = newTab;
        // If the old bucket array held elements,
        // move every node from it into the new bucket array
        if (oldTab != null) {
            // Iterate over the old bucket array
            for (int j = 0; j < oldCap; ++j) {
                // e is the current node
                Node<K,V> e;
                // If the bucket is non-empty, assign its head to e
                if ((e = oldTab[j]) != null) {
                    // Clear the old bucket to help GC
                    oldTab[j] = null;
                    // If the bucket holds a single node (no hash collision occurred)
                    if (e.next == null)
                        // Place the node directly in the new bucket array.
                        // The index is hash & (length - 1); since the length is a power of 2,
                        // this is essentially a modulo operation, but more efficient.
                        newTab[e.hash & (newCap - 1)] = e;
                    // If a collision occurred and the bucket has already become a red-black tree
                    else if (e instanceof TreeNode)
                        ((TreeNode<K,V>)e).split(this, newTab, j, oldCap);
                    // If a collision occurred but the bucket is still a linked list (fewer than 8 nodes),
                    // place each node at its corresponding index in the new bucket array in turn.
                    else { // preserve order
                        // Since expansion doubles the capacity, each node of the original list goes either
                        // to the original index (low) or to index + oldCap (high).
                        // High index = low index + original bucket capacity.
                        // Head and tail of the low list
                        Node<K,V> loHead = null, loTail = null;
                        // Head and tail of the high list
                        Node<K,V> hiHead = null, hiTail = null;
                        Node<K,V> next; // temporarily stores the node after e
                        do {
                            next = e.next;
                            // Another efficient bit operation replacing regular arithmetic: ANDing the hash
                            // with the old capacity tells whether (hash mod newCap) is below oldCap or not.
                            // 0 means below oldCap, so the node stays in the low position; otherwise high.
                            if ((e.hash & oldCap) == 0) {
                                // Maintain the head and tail pointers
                                if (loTail == null)
                                    loHead = e;
                                else
                                    loTail.next = e;
                                loTail = e;
                            }
                            // The same logic applies to the high list
                            else {
                                if (hiTail == null)
                                    hiHead = e;
                                else
                                    hiTail.next = e;
                                hiTail = e;
                            }
                        // Loop until the list ends
                        } while ((e = next) != null);
                        // Store the low list at the original index
                        if (loTail != null) {
                            loTail.next = null;
                            newTab[j] = loHead;
                        }
                        // Store the high list at the new index
                        if (hiTail != null) {
                            hiTail.next = null;
                            newTab[j + oldCap] = hiHead;
                        }
                    }
                }
            }
        }
        return newTab;
    }

Now consider putVal, the function that inserts a node into the hash table. If onlyIfAbsent is true, the value of an existing key will not be overwritten. evict is false when called during initialization (e.g. from the map-copying constructor) and true otherwise.

    final V putVal(int hash, K key, V value, boolean onlyIfAbsent, boolean evict) {
        // tab holds the current bucket array, p is used as a temporary list node
        Node<K,V>[] tab; Node<K,V> p; int n, i;
        // If the current hash table is empty, initialize it
        if ((tab = table) == null || (n = tab.length) == 0)
            // resize() both initializes and expands; assign the new bucket length to n
            n = (tab = resize()).length;
        // If the bucket at the index is empty, no hash collision occurred: create a new Node and mount it there.
        // The index is hash & (length - 1), a bitwise substitute for modulo
        if ((p = tab[i = (n - 1) & hash]) == null)
            tab[i] = newNode(hash, key, value, null);
        else { // a hash collision occurred
            // e is the existing node to overwrite, if any
            Node<K,V> e; K k;
            // If the hash values are equal and the keys are equal, this is an overwrite
            if (p.hash == hash && ((k = p.key) == key || (key != null && key.equals(k))))
                e = p; // remember the node to overwrite
            else if (p instanceof TreeNode) // red-black tree
                e = ((TreeNode<K,V>)p).putTreeVal(this, tab, hash, key, value);
            else { // insert an ordinary list node (unless an equal key is found)
                // Iterate over the list
                for (int binCount = 0; ; ++binCount) {
                    if ((e = p.next) == null) { // reached the tail: append a new node
                        p.next = newNode(hash, key, value, null);
                        // If appending the node brings the list length to 8, convert it to a red-black tree
                        if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                            treeifyBin(tab, hash);
                        break;
                    }
                    // If the node to overwrite is found, stop
                    if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k))))
                        break;
                    p = e;
                }
            }
            // If e is not null, there is a node to overwrite
            if (e != null) { // existing mapping for key
                // Overwrite the node's value and return the old value
                V oldValue = e.value;
                if (!onlyIfAbsent || oldValue == null)
                    e.value = value;
                // An empty hook method, overridden by LinkedHashMap
                afterNodeAccess(e);
                return oldValue;
            }
        }
        // Reaching this point means a new node was inserted, so modCount changes and null is returned

        // Modify modCount
        ++modCount;
        // Update the size and determine whether expansion is needed
        if (++size > threshold)
            resize();
        // An empty hook method, overridden by LinkedHashMap
        afterNodeInsertion(evict);
        return null;
    }

newNode looks like this — it simply builds a linked-list node:

    // Create a regular (non-tree) node
    Node<K,V> newNode(int hash, K key, V value, Node<K,V> next) {
        return new Node<>(hash, key, value, next);
    }
    // Callbacks to allow LinkedHashMap post-actions
    void afterNodeAccess(Node<K,V> p) { }
    void afterNodeInsertion(boolean evict) { }

Summary:

  • Wherever possible, regular arithmetic is replaced by bit operations for efficiency.
  • When creating a new array to hold more elements, besides migrating the elements from the old array, remember to null out the references in the old array to help GC.
  • i = (n - 1) & hash, where n is the bucket length: since the bucket length is a power of 2, this is essentially a modulo operation, but more efficient.
  • During expansion, a linked list (a prior collision) with fewer than 8 nodes has each of its nodes placed, in order, at its corresponding index in the new bucket array based on its hash value.
  • Because expansion doubles the capacity, each node of the original list lands either at the original index (the low position) or at the expanded index (the high position): high index = low index + original bucket capacity.
  • if ((e.hash & oldCap) == 0) tells whether the hash value modulo the new capacity is below oldCap (0 means below, so the low position) or not (the high position) — another efficient bit operation replacing regular arithmetic.
  • If appending a node brings a list's length to 8, it is converted to a red-black tree.
  • Around node insertion there are empty hook methods that exist for LinkedHashMap to override.

Insert and update

Insert or overwrite a key-value pair in the table:

    public V put(K key, V value) {
        // Compute the hash from the key, then call putVal (covered in the previous section) to insert the node
        return putVal(hash(key), key, value, false, true);
    }

The function that computes the hash value from the key is also worth attention. It is the "perturbation function" whose role was summarized at the beginning:

    static final int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

Batch-add data to the table:

    public void putAll(Map<? extends K, ? extends V> m) {
        // putMapEntries was examined in the previous section: it adds all of m's elements to the table.
        // evict is false during construction and true otherwise — true here.
        putMapEntries(m, true);
    }

Insert a key-value pair only if the key is absent; if a value already exists for the key, it is not overwritten. (A method added in JDK 8.)

    @Override
    public V putIfAbsent(K key, V value) {
        return putVal(hash(key), key, value, true, true);
    }
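A quick usage sketch contrasting put and putIfAbsent (the keys and values are arbitrary examples):

```java
import java.util.HashMap;

public class PutIfAbsentDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("a", 2);          // put overwrites: value is now 2
        map.putIfAbsent("a", 3);  // key exists, not overwritten
        map.putIfAbsent("b", 4);  // key absent, inserted
        System.out.println(map.get("a")); // 2
        System.out.println(map.get("b")); // 4
    }
}
```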

Delete

Delete by key. If a value exists for the key, delete the key/value pair and return the value; otherwise return null.

    public V remove(Object key) {
        Node<K,V> e;
        return (e = removeNode(hash(key), key, null, false, true)) == null ?
            null : e.value;
    }

removeNode deletes a node from the hash table. If matchValue is true, the node is removed only when both the key and the value match. If the movable parameter is false, other nodes are not moved during deletion.

    final Node<K,V> removeNode(int hash, Object key, Object value,
                               boolean matchValue, boolean movable) {
        // p will be the predecessor of the node to delete
        Node<K,V>[] tab; Node<K,V> p; int n, index;
        // Proceed only if the table is non-empty and the bucket at the hash's index holds a node
        if ((tab = table) != null && (n = tab.length) > 0 &&
            (p = tab[index = (n - 1) & hash]) != null) {
            // node will be the node to delete
            Node<K,V> node = null, e; K k; V v;
            // If the list head is the node to delete
            if (p.hash == hash && ((k = p.key) == key || (key != null && key.equals(k))))
                node = p; // remember the node to delete
            else if ((e = p.next) != null) { // otherwise loop to find the node to delete
                if (p instanceof TreeNode)
                    node = ((TreeNode<K,V>)p).getTreeNode(hash, key);
                else {
                    do {
                        if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k)))) {
                            node = e;
                            break;
                        }
                        p = e;
                    } while ((e = e.next) != null);
                }
            }
            // Delete if a node was found and either matchValue is false or the values are equal
            if (node != null && (!matchValue || (v = node.value) == value ||
                                 (value != null && value.equals(v)))) {
                if (node instanceof TreeNode)
                    ((TreeNode<K,V>)node).removeTreeNode(this, tab, movable);
                else if (node == p) // node == p means the list head is the node to delete
                    tab[index] = node.next;
                else // otherwise the node is in the middle of the list
                    p.next = node.next;
                ++modCount; // modify modCount
                --size;     // modify size
                afterNodeRemoval(node); // LinkedHashMap hook
                return node;
            }
        }
        return null;
    }

    void afterNodeRemoval(Node<K,V> p) { }

Delete only if both the key and the value match:

    @Override
    public boolean remove(Object key, Object value) {
        // value is passed in and matchValue is true
        return removeNode(hash(key), key, value, true, true) != null;
    }
Copy the code

Lookup

Look up the value for a key; returns null if the key is not found.

    public V get(Object key) {
        Node<K,V> e;
        // Pass in the disturbed hash and key to find the target Node Node
        return (e = getNode(hash(key), key)) == null ? null : e.value;
    }
    // Pass in the disturbed hash and key to find the target Node Node
    final Node<K,V> getNode(int hash, Object key) {
        Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
        // The search process is basically the same as in delete: return the node if found, otherwise null
        if ((tab = table) != null && (n = tab.length) > 0 &&
            (first = tab[(n - 1) & hash]) != null) {
            if (first.hash == hash && // always check first node
                ((k = first.key) == key || (key != null && key.equals(k))))
                return first;
            if ((e = first.next) != null) {
                if (first instanceof TreeNode)
                    return ((TreeNode<K,V>)first).getTreeNode(hash, key);
                do {
                    if (e.hash == hash && ((k = e.key) == key || (key != null && key.equals(k))))
                        return e;
                } while ((e = e.next) != null);
            }
        }
        return null;
    }

Check whether the key is included

    public boolean containsKey(Object key) {
        return getNode(hash(key), key) != null;
    }

Check whether value is included

    public boolean containsValue(Object value) {
        Node<K,V>[] tab; V v;
        // Iterate over every list in every bucket
        if ((tab = table) != null && size > 0) {
            for (int i = 0; i < tab.length; ++i) {
                for (Node<K,V> e = tab[i]; e != null; e = e.next) {
                    // Return true as soon as a matching value is found
                    if ((v = e.value) == value || (value != null && value.equals(v)))
                        return true;
                }
            }
        }
        return false;
    }

A get method with a default value, new in Java 8: look up the value for the key; if the key is absent, return defaultValue instead.

    @Override
    public V getOrDefault(Object key, V defaultValue) {
        Node<K,V> e;
        return (e = getNode(hash(key), key)) == null ? defaultValue : e.value;
    }

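A short usage sketch (example keys and values):

```java
import java.util.HashMap;

public class GetOrDefaultDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        System.out.println(map.getOrDefault("a", 0)); // 1: key present, its value is returned
        System.out.println(map.getOrDefault("b", 0)); // 0: key absent, the default is returned
    }
}
```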

Traversal

    // The cached entrySet view
    transient Set<Map.Entry<K,V>> entrySet;

    public Set<Map.Entry<K,V>> entrySet() {
        Set<Map.Entry<K,V>> es;
        return (es = entrySet) == null ? (entrySet = new EntrySet()) : es;
    }
    final class EntrySet extends AbstractSet<Map.Entry<K,V>> {
        public final int size()                 { return size; }
        public final void clear()               { HashMap.this.clear(); }
        // EntrySet is the usual way to obtain an iterator
        public final Iterator<Map.Entry<K,V>> iterator() {
            return new EntryIterator();
        }
        // Ultimately calls the getNode method
        public final boolean contains(Object o) {
            if (!(o instanceof Map.Entry))
                return false;
            Map.Entry<?,?> e = (Map.Entry<?,?>) o;
            Object key = e.getKey();
            Node<K,V> candidate = getNode(hash(key), key);
            return candidate != null && candidate.equals(e);
        }
        // Ultimately calls the removeNode method
        public final boolean remove(Object o) {
            if (o instanceof Map.Entry) {
                Map.Entry<?,?> e = (Map.Entry<?,?>) o;
                Object key = e.getKey();
                Object value = e.getValue();
                return removeNode(hash(key), key, value, true, true) != null;
            }
            return false;
        }
        // ...
    }

The implementation of EntryIterator:

    final class EntryIterator extends HashIterator
        implements Iterator<Map.Entry<K,V>> {
        public final Map.Entry<K,V> next() { return nextNode(); }
    }
    abstract class HashIterator {
        Node<K,V> next;        // next entry to return
        Node<K,V> current;     // current entry
        int expectedModCount;  // for fast-fail
        int index;             // current slot

        HashIterator() {
            // Since HashMap is not thread-safe, modCount is recorded here for the fail-fast policy
            expectedModCount = modCount;
            Node<K,V>[] t = table;
            current = next = null;
            index = 0;
            // next initially points to the first non-null list head in the bucket array
            if (t != null && size > 0) { // advance to first entry
                do {} while (index < t.length && (next = t[index++]) == null);
            }
        }

        public final boolean hasNext() {
            return next != null;
        }

        // This method shows that a HashMap is traversed in order of buckets from low index to high,
        // and within each bucket from the front of the list to the back. It is an unordered collection.
        final Node<K,V> nextNode() {
            Node<K,V>[] t;
            Node<K,V> e = next;
            // Fail-fast check
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            if (e == null)
                throw new NoSuchElementException();
            // Advance to the next node in the current list
            if ((next = (current = e).next) == null && (t = table) != null) {
                // If the current list is exhausted, move on to the next non-null list head in the bucket array
                do {} while (index < t.length && (next = t[index++]) == null);
            }
            return e;
        }

        public final void remove() {
            Node<K,V> p = current;
            if (p == null)
                throw new IllegalStateException();
            // Fail-fast check
            if (modCount != expectedModCount)
                throw new ConcurrentModificationException();
            current = null;
            K key = p.key;
            // Remove the node via removeNode, then re-sync expectedModCount
            removeNode(hash(key), key, null, false, false);
            expectedModCount = modCount;
        }
    }
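A small sketch of the fail-fast policy from the caller's point of view: modifying the map directly while iterating trips the modCount check, while removing through the iterator re-syncs expectedModCount and is safe (example keys and values):

```java
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class FailFastDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();
        map.put("a", 1);
        map.put("b", 2);
        // A structural modification behind the iterator's back changes modCount,
        // so the next call to nextNode() throws ConcurrentModificationException.
        try {
            for (Map.Entry<String, Integer> e : map.entrySet())
                map.put("c", 3); // modCount changes while iterating
        } catch (ConcurrentModificationException ex) {
            System.out.println("ConcurrentModificationException");
        }
        map.remove("c");
        // Removing through the iterator re-syncs expectedModCount, so it is safe.
        Iterator<Map.Entry<String, Integer>> it = map.entrySet().iterator();
        while (it.hasNext())
            if (it.next().getKey().equals("a"))
                it.remove();
        System.out.println(map.containsKey("a")); // false
    }
}
```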

Conclusion

Differences from Hashtable

  • By contrast, Hashtable is thread-safe and allows neither null keys nor null values.
  • The default capacity of Hashtable is 11.
  • Hashtable uses the key's hashCode (key.hashCode()) directly as the hash value, without the perturbation that HashMap's static final int hash(Object key) function applies.
  • Hashtable computes bucket indexes with the modulo operator %, because its capacity is generally not a power of 2, so it cannot replace the modulo with a bit operation.
  • During expansion, the new capacity is twice the old capacity plus 1: int newCapacity = (oldCapacity << 1) + 1;
  • Hashtable is a subclass of Dictionary; both classes implement the Map interface.
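The null-key difference in the list above can be seen directly (example values):

```java
import java.util.HashMap;
import java.util.Hashtable;

public class NullKeyDemo {
    public static void main(String[] args) {
        HashMap<String, String> hm = new HashMap<>();
        hm.put(null, "ok");               // HashMap allows a null key: hash(null) is 0
        System.out.println(hm.get(null)); // ok
        try {
            new Hashtable<String, String>().put(null, "boom");
        } catch (NullPointerException e) {
            // Hashtable calls key.hashCode() directly, so a null key throws
            System.out.println("NullPointerException");
        }
    }
}
```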