Personal notes — if there are any mistakes or omissions, kindly advise.
Reference sources:
- blog.csdn.net/hancoder/ar…
- JDK 1.8 API documentation (Chinese version)
- snailclimb.gitee.io/javaguide/#…
- www.runoob.com/
Collections
Collections are data containers — a way of organizing data (data structures).
Collection classes hold references to objects, not the objects themselves.
Design requirements
- The framework must be high-performance. The implementation of basic collections (dynamic arrays, linked lists, trees, hashes) must also be efficient.
- The framework allows different types of collections to work in a similar way with a high degree of interoperability.
- The extension and adaptation of a collection must be simple.
The Java Collections Framework
The Java Collections framework consists of two main types of containers: collections, which store a Collection of elements, and maps, which store key/value pair mappings.
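A minimal sketch of the two container families (the `java.util` classes and methods are standard API; the demo class itself is ours):

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

public class ContainerFamilies {
    // A Collection stores individual elements.
    public static int collectionDemo() {
        Collection<String> names = new ArrayList<>();
        names.add("alice");
        names.add("bob");
        return names.size();
    }

    // A Map stores key/value pairs.
    public static Integer mapDemo() {
        Map<String, Integer> ages = new HashMap<>();
        ages.put("alice", 30);
        return ages.get("alice");
    }

    public static void main(String[] args) {
        System.out.println(collectionDemo()); // 2
        System.out.println(mapDemo());        // 30
    }
}
```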
Interfaces
Abstract data types representing collections; they describe what characteristics a collection of a given type should have.
Implementations
Classes that implement the collection interfaces (classes written to conform to a collection's specification); they are reusable data structures.
Algorithms
Methods on objects that implement collection interfaces perform specific operations, such as searching and sorting. These algorithms are called polymorphic because the same method can have different implementations across similar interfaces.
The former (interfaces) are the logical structure of the data; the latter (implementations) are the physical structures that realize the logical structure.
Common collections
- List
- Set
- Map
The Map interface corresponds to a set of key-value pairs (associative)
Collection
The Collection interface represents a sequence of individual elements.
```java
// add
boolean add(Object obj);
boolean addAll(Collection c);
// delete
void clear();
boolean remove(Object obj);
boolean removeAll(Collection c);
boolean retainAll(Collection c); // keep only the elements also contained in c
// check
boolean contains(Object obj);
boolean containsAll(Collection c);
boolean isEmpty();
// iterator
Iterator<E> iterator();
// length
int size();
```
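The methods above can be exercised in a short sketch (the demo class name is ours):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Iterator;

public class CollectionOps {
    public static int demo() {
        Collection<Integer> c = new ArrayList<>(Arrays.asList(1, 2, 3, 4));
        c.remove(Integer.valueOf(4));        // remove a single element
        c.retainAll(Arrays.asList(2, 3, 9)); // keep only the intersection -> [2, 3]
        int sum = 0;
        Iterator<Integer> it = c.iterator(); // traverse via the Iterator
        while (it.hasNext())
            sum += it.next();
        return sum;                          // 2 + 3 = 5
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```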
List
Ordered (insertion order is preserved) and indexed — elements can be retrieved by index — and duplicate elements are allowed.
ArrayList:
- The underlying implementation is an array
- The default initial capacity of an ArrayList is 10, which increases by half, or 1.5 times, each time it is expanded
- Native methods (such as System.arraycopy, used during expansion) are implemented in C/C++.
LinkedList:
- The underlying implementation is a doubly linked list (which makes traversal in both directions easy)
Vector:
- The underlying implementation is also an array. Vector is rarely used now, largely replaced by ArrayList, for two reasons:
  - All Vector methods are synchronized, which carries a performance penalty.
  - Vector starts with a length of 10 and grows by 100% when full, so it consumes more memory than ArrayList.
- Reference data: www.zhihu.com/question/31…
In summary: prefer ArrayList when queries dominate, LinkedList when insertions and deletions dominate.
This rule is not absolute (tested with large element counts):
- If elements are always appended with add() at the end, ArrayList is faster.
- Deleting from the end, ArrayList is also faster.
- Even deleting from the middle, ArrayList can still be faster!
Generally speaking, though, use LinkedList when insertions and deletions dominate, since the cases above are extremes.
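A rough timing sketch for the tail-append case (a toy, not a rigorous benchmark — results vary by JVM, warm-up, and n; the class and method names are ours):

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class AppendTiming {
    // Appends n elements at the tail and returns elapsed nanoseconds.
    public static long timeAppend(List<Integer> list, int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++)
            list.add(i);
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 100_000;
        System.out.println("ArrayList:  " + timeAppend(new ArrayList<>(), n) + " ns");
        System.out.println("LinkedList: " + timeAppend(new LinkedList<>(), n) + " ns");
    }
}
```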
LinkedList
LinkedList implements both the List and Deque interfaces, meaning that it can be viewed as an ordered container, a queue, or a stack. In this way, LinkedList is an all-around player. Consider using it when you need a stack or queue, partly because the Stack class is officially discouraged and, unfortunately, there is no class in Java called Queue (that is an interface name). For stacks or queues, the current preferred choice is ArrayDeque, which has better performance than LinkedList when used as a stack or queue.
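A small sketch of ArrayDeque in both roles (push/pop/offer/poll are standard Deque API; the demo class is ours):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class DequeDemo {
    // Used as a stack: push/pop at the head (replaces java.util.Stack).
    public static int stackTop() {
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(1);
        stack.push(2);
        return stack.pop(); // last-in, first-out
    }

    // Used as a queue: offer at the tail, poll from the head.
    public static int queueHead() {
        Deque<Integer> queue = new ArrayDeque<>();
        queue.offer(1);
        queue.offer(2);
        return queue.poll(); // first-in, first-out
    }

    public static void main(String[] args) {
        System.out.println(stackTop() + " " + queueHead()); // 2 1
    }
}
```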
LinkedList is based on a linked-list structure, so random access is slow but insertion and deletion are fast; it provides dedicated methods for operating on the head and tail elements.
Implementation
Doubly linked list node:

```java
private static class Node<E> {
    E item;
    Node<E> next;
    Node<E> prev;

    Node(Node<E> prev, E element, Node<E> next) {
        this.item = element;
        this.next = next;
        this.prev = prev;
    }
}
```
add()
```java
public boolean add(E e) {
    linkLast(e);
    return true;
}

void linkLast(E e) {
    final Node<E> l = last;
    final Node<E> newNode = new Node<>(l, e, null);
    last = newNode;
    if (l == null) {
        first = newNode; // list was empty: the new node is also the head
    } else {
        l.next = newNode;
    }
    size++;
    modCount++;
}
```
remove()
```java
public boolean remove(Object obj) {
    if (obj == null) {
        for (Node<E> x = first; x != null; x = x.next) {
            if (x.item == null) {
                unlink(x);
                return true;
            }
        }
    } else {
        for (Node<E> x = first; x != null; x = x.next) {
            if (obj.equals(x.item)) {
                unlink(x);
                return true;
            }
        }
    }
    return false;
}

E unlink(Node<E> x) {
    final E element = x.item;
    final Node<E> next = x.next;
    final Node<E> prev = x.prev;
    if (prev == null) {
        first = next;    // x was the head
    } else {
        prev.next = next;
        x.prev = null;
    }
    if (next == null) {
        last = prev;     // x was the tail
    } else {
        next.prev = prev;
        x.next = null;
    }
    x.item = null;       // clear the reference for GC
    size--;
    modCount++;
    return element;
}
```
set()
```java
public E set(int index, E element) {
    checkElementIndex(index);
    Node<E> x = node(index);
    E oldVal = x.item;
    x.item = element;
    return oldVal;
}
```
get()
```java
public E get(int index) {
    checkElementIndex(index);
    return node(index).item;
}

// Walks from whichever end is closer to index.
Node<E> node(int index) {
    if (index < (size >> 1)) {
        Node<E> x = first;
        for (int i = 0; i < index; i++)
            x = x.next;
        return x;
    } else {
        Node<E> x = last;
        for (int i = size - 1; i > index; i--)
            x = x.prev;
        return x;
    }
}
```
ArrayList
Implemented on an Object[] array: queries are fast, insertions and deletions are slow.
ArrayList inherits AbstractList and implements List, RandomAccess, Cloneable, java.io.Serializable interfaces.
RandomAccess
Is a flag interface that indicates that the List collection that implements this interface is supportedFast random access. inArrayList
In, we can get the element object quickly by the ordinal of the element, which is fast random access.ArrayList
To achieve theCloneable
interface, that is, the function is overriddenclone()
Can be cloned.ArrayList
To achieve thejava.io.Serializable
Interface, that meansArrayList
Serialization is supported and can be transmitted by serialization.
Characteristics
- ArrayList is implemented based on dynamic arrays, which need to be copied when adding and deleting arrays.
- The default initial capacity of an ArrayList is 10, which increases by half, or 1.5 times, each time it is expanded
- Deleting an element does not reduce capacity; if you want to reduce capacity, call trimToSize()
- It is not thread-safe. It can store null values.
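The null-storage and trimToSize() points above can be checked with a short sketch (the demo class name is ours):

```java
import java.util.ArrayList;

public class ArrayListTraits {
    public static boolean holdsNull() {
        ArrayList<String> list = new ArrayList<>();
        list.add(null);              // null values are allowed
        return list.contains(null);
    }

    public static int sizeAfterTrim() {
        ArrayList<Integer> list = new ArrayList<>(100); // initial capacity 100
        list.add(42);
        list.trimToSize();           // shrink the capacity down to size (1)
        return list.size();
    }

    public static void main(String[] args) {
        System.out.println(holdsNull());     // true
        System.out.println(sizeAfterTrim()); // 1
    }
}
```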
```java
private static final long serialVersionUID = 8683452581122892189L;
// Default initial capacity
private static final int DEFAULT_CAPACITY = 10;
// Shared empty array instance
private static final Object[] EMPTY_ELEMENTDATA = {};
// Buffer array holding the elements
transient Object[] elementData;
// Number of stored elements (not the array length)
private int size;
```
Source code
Constructors
```java
/** Default initial capacity */
private static final int DEFAULT_CAPACITY = 10;
private static final Object[] DEFAULTCAPACITY_EMPTY_ELEMENTDATA = {};

/** Default constructor: constructs an empty list with an initial capacity of 10 (no arguments) */
public ArrayList() {
    this.elementData = DEFAULTCAPACITY_EMPTY_ELEMENTDATA;
}

/** Constructor taking an initial-capacity argument (user-specified capacity) */
public ArrayList(int initialCapacity) {
    if (initialCapacity > 0) {         // initial capacity greater than 0:
        // create an array of initialCapacity size
        this.elementData = new Object[initialCapacity];
    } else if (initialCapacity == 0) { // initial capacity equal to 0:
        // use the shared empty array
        this.elementData = EMPTY_ELEMENTDATA;
    } else {                           // initial capacity less than 0: throw
        throw new IllegalArgumentException("Illegal Capacity: " + initialCapacity);
    }
}

/**
 * Constructs a list containing the elements of the specified collection,
 * in the order returned by its iterator.
 * Throws NullPointerException if the specified collection is null.
 */
public ArrayList(Collection<? extends E> c) {
    elementData = c.toArray();
    if ((size = elementData.length) != 0) {
        // c.toArray might (incorrectly) not return Object[] (see 6260652)
        if (elementData.getClass() != Object[].class)
            elementData = Arrays.copyOf(elementData, size, Object[].class);
    } else {
        // replace with empty array
        this.elementData = EMPTY_ELEMENTDATA;
    }
}
```
set()
Assigns directly at the specified index:

```java
public E set(int index, E element) {
    rangeCheck(index); // index bounds check
    E oldValue = elementData(index);
    elementData[index] = element; // only the reference is stored at the slot
    return oldValue;
}
```
get()
Reads directly by index:

```java
public E get(int index) {
    rangeCheck(index);
    return (E) elementData[index]; // note the unchecked cast
}
```
add()
grow()
Expansion grows the array to 1.5x the old length; if the required capacity exceeds MAX_ARRAY_SIZE, hugeCapacity() may return Integer.MAX_VALUE.

```java
private void grow(int minCapacity) {
    int oldCapacity = elementData.length;
    int newCapacity = oldCapacity + (oldCapacity >> 1); // 1.5x via right shift
    if (newCapacity - minCapacity < 0)
        newCapacity = minCapacity;
    if (newCapacity - MAX_ARRAY_SIZE > 0)
        newCapacity = hugeCapacity(minCapacity);
    elementData = Arrays.copyOf(elementData, newCapacity); // expand and copy
}

private static int hugeCapacity(int minCapacity) {
    if (minCapacity < 0) // overflow
        throw new OutOfMemoryError();
    return (minCapacity > MAX_ARRAY_SIZE) ?
        Integer.MAX_VALUE :
        MAX_ARRAY_SIZE;
}
```
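The 1.5x rule can be traced by replaying the newCapacity formula from the snippet above (the helper name below is ours):

```java
public class GrowthSequence {
    // Mirrors ArrayList.grow(): newCapacity = oldCapacity + (oldCapacity >> 1)
    public static int nextCapacity(int oldCapacity) {
        return oldCapacity + (oldCapacity >> 1);
    }

    public static void main(String[] args) {
        int cap = 10; // default initial capacity
        for (int i = 0; i < 4; i++) {
            System.out.print(cap + " ");
            cap = nextCapacity(cap);
        }
        // prints: 10 15 22 33
    }
}
```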
addAll()
remove()
```java
/**
 * Checks the index, computes how many elements must shift left,
 * and nulls out the vacated slot so the GC can collect it.
 */
public E remove(int index) {
    rangeCheck(index);
    modCount++;
    E oldValue = elementData(index);
    int numMoved = size - index - 1;
    if (numMoved > 0)
        System.arraycopy(elementData, index + 1, elementData, index, numMoved);
    elementData[--size] = null; // clear the slot and let GC work
    return oldValue;
}
```
copyOf

```java
public static <T,U> T[] copyOf(U[] original, int newLength, Class<? extends T[]> newType) {
    @SuppressWarnings("unchecked")
    T[] copy = ((Object) newType == (Object) Object[].class)
        ? (T[]) new Object[newLength]
        : (T[]) Array.newInstance(newType.getComponentType(), newLength);
    System.arraycopy(original, 0, copy, 0,
                     Math.min(original.length, newLength));
    return copy;
}
```
rangeCheck()

```java
// Index bounds check
private void rangeCheck(int index) {
    if (index >= size)
        throw new IndexOutOfBoundsException(outOfBoundsMsg(index));
}

// Returns the element at index
E elementData(int index) {
    return (E) elementData[index];
}
```
ensureCapacityInternal()
```java
// Computes the minimum required capacity
private void ensureCapacityInternal(int minCapacity) {
    if (elementData == DEFAULTCAPACITY_EMPTY_ELEMENTDATA) {
        // Take the larger of the default capacity and the requested value
        minCapacity = Math.max(DEFAULT_CAPACITY, minCapacity);
    }
    ensureExplicitCapacity(minCapacity);
}
```
ensureExplicitCapacity()
```java
// Decides whether an expansion is required
private void ensureExplicitCapacity(int minCapacity) {
    modCount++;
    // overflow-conscious code
    if (minCapacity - elementData.length > 0)
        // Call grow to expand the capacity
        grow(minCapacity);
}
```
hugeCapacity()
```java
private static int hugeCapacity(int minCapacity) {
    if (minCapacity < 0) // overflow
        throw new OutOfMemoryError();
    // Compare minCapacity with MAX_ARRAY_SIZE:
    // if minCapacity is larger, use Integer.MAX_VALUE as the new array size,
    // otherwise use MAX_ARRAY_SIZE (MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8)
    return (minCapacity > MAX_ARRAY_SIZE) ?
        Integer.MAX_VALUE :
        MAX_ARRAY_SIZE;
}
```
System.arraycopy

```java
/**
 * Inserts the specified element at the specified position in this list.
 * rangeCheckForAdd checks the index bounds; ensureCapacityInternal ensures
 * the capacity is large enough; all members from index on are shifted back
 * one slot; the element is written at index; finally size is incremented.
 */
public void add(int index, E element) {
    rangeCheckForAdd(index);
    ensureCapacityInternal(size + 1); // increments modCount!
    // arraycopy makes the array copy within itself:
    // source = elementData starting at index; destination = elementData
    // starting at index + 1; size - index elements are copied
    System.arraycopy(elementData, index, elementData, index + 1, size - index);
    elementData[index] = element;
    size++;
}
```
Arrays.copyOf

```java
/**
 * Returns an array containing all elements of this list in proper order
 * (first to last); the runtime type of the returned array is Object[].
 */
public Object[] toArray() {
    // elementData: the array to copy; size: the number of elements to copy
    return Arrays.copyOf(elementData, size);
}
```
Vector
All Vector methods are synchronized: thread-safe, but less efficient than ArrayList.
When the underlying array runs out of space, ArrayList grows by 0.5x; Vector doubles.
When synchronization is not required, we normally use ArrayList instead of Vector.
If you want a synchronized ArrayList, use the Collections helper: List list = Collections.synchronizedList(new ArrayList<>(…));
Difference between synchronizedList and Vector
synchronizedList adds synchronized to methods such as add(), but listIterator() and iterator() are not synchronized — iteration must be wrapped in a synchronized block by the caller.
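A sketch of the wrapping plus the manually synchronized iteration that the Collections javadoc requires (the demo class name is ours):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SyncListDemo {
    public static int sumSynchronized() {
        List<Integer> list = Collections.synchronizedList(new ArrayList<>());
        list.add(1);
        list.add(2);
        int sum = 0;
        // iterator() itself is NOT synchronized: the caller must hold the
        // list's lock for the whole traversal (per the Collections javadoc).
        synchronized (list) {
            for (int v : list)
                sum += v;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumSynchronized()); // 3
    }
}
```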
Stack
Set
Unordered; duplicate elements are not allowed.
HashSet
TreeSet
LinkedHashSet
Map
Hashtable
HashMap
Note:
- Load factor
- Expansion mechanism
- Binary bit operation optimization
Characteristics
HashMap mainly stores key-value pairs; it is based on the Map interface.
Before JDK 1.8, a HashMap was an array (the body of the HashMap) plus linked lists (used to resolve hash collisions).
Since JDK 1.8, red-black trees were added: when two conditions are met (list length reaches 8 and table length reaches 64), a linked list is converted to a red-black tree to speed up lookups.
- Unordered, allows null keys and values, not synchronized
- The underlying implementation is a hash table
- The initial capacity and load factor have a large impact on HashMap performance
Supplement
When adding an element to a HashMap, its position in the array is determined from the hash of the key. For efficient access, collisions should be minimized — that is, the data should be spread as evenly as possible, with every bucket's list roughly the same length.
Expansion mechanism
HashMap uses a very clever rehash when it expands: since the capacity doubles each time, (n - 1) & hash gains exactly one more significant bit than with the old length n, so each node either stays at its original index or moves to (original index + oldCap). How do you tell whether that new bit is 0 or 1? Compute e.hash & oldCap (the old capacity): 0 means the new bit is 0 and the node stays; non-zero means the new bit is 1 and the node moves.
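A sketch of that rule (the helper newIndex is ours; it mirrors the e.hash & oldCap check):

```java
public class RehashDemo {
    // After doubling from oldCap to 2*oldCap, a node's new slot is either its
    // old slot or old slot + oldCap, decided by the single new hash bit.
    public static int newIndex(int hash, int oldCap) {
        int oldIndex = hash & (oldCap - 1);
        return (hash & oldCap) == 0 ? oldIndex : oldIndex + oldCap;
    }

    public static void main(String[] args) {
        int oldCap = 16;
        for (int hash : new int[]{5, 21})
            System.out.println(hash + " -> " + newIndex(hash, oldCap));
        // hash 5 (bit 16 clear) stays at 5; hash 21 (bit 16 set) moves to 5 + 16 = 21
    }
}
```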
Tree and linked list
TreeNodes take up twice as much space as regular Nodes, so a bin is converted to TreeNodes only once it contains enough nodes, as governed by TREEIFY_THRESHOLD; when the node count drops again, the bin reverts to an ordinary linked list. Reading the source, a list is converted to a red-black tree when its length reaches 8, and converted back to a plain bin when it falls to 6.
Ideally, under a random hashCode, the node counts of the bins follow a Poisson distribution; the probability of a bin's list reaching 8 elements is about 0.00000006 — almost an impossible event.
Load factor
loadFactor (default 0.75) measures how full the HashMap table is allowed to get: it reflects the storage density of the HashMap array and affects the probability that a hash lands in an occupied slot. The real-time load of a HashMap is size / capacity, where capacity is the number of buckets, i.e. the table length — not the number of buckets actually in use.
Too large a loadFactor makes element lookup inefficient; too small a loadFactor wastes array space and scatters the stored data. The default of 0.75f is officially a good trade-off.
When the number of elements reaches 75% of the array length, the HashMap is considered too crowded and is expanded. Expansion involves rehashing and copying data, which is expensive, so minimize the number of expansions during development — for example by specifying the initial capacity when creating the HashMap.
Binary operation
When computing a bucket index from the hash, the operation is logically a modulo: hash % length (table length). But in a computer a direct remainder is slower than a bitwise operation, so the source is optimized to hash & (length - 1). Since 2^n is a 1 followed by n zero bits, 2^n - 1 is n one bits, so hash % length == hash & (length - 1) whenever length is a power of two.
This design has two consequences:
- If length is a power of two, the data can be spread evenly. If it is not, some array slots can never be hit, wasting space and increasing hash collisions.
- When the table is initialized with a length that is not a power of two, HashMap rounds it up to the smallest power of two greater than or equal to it, guaranteeing that the array length is a power of two.
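The equivalence can be spot-checked for non-negative hash values (the demo class is ours; plain % differs for negative values, which is one reason the mask form is preferred):

```java
public class ModMaskDemo {
    // For non-negative hash and power-of-two length, masking equals modulo.
    public static boolean equivalent(int hash, int length) {
        return (hash & (length - 1)) == hash % length;
    }

    public static void main(String[] args) {
        int length = 16; // a power of two: binary 1 followed by four 0s
        for (int hash = 0; hash < 100; hash += 7)
            if (!equivalent(hash, length))
                throw new AssertionError("mismatch at " + hash);
        System.out.println("hash & (length-1) == hash % length held for all tested hashes");
    }
}
```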
Source code
Attributes
```java
public class HashMap<K,V> extends AbstractMap<K,V>
        implements Map<K,V>, Cloneable, Serializable {
    // Serial version number
    private static final long serialVersionUID = 362498820763181265L;
    // Default initial capacity: 16
    static final int DEFAULT_INITIAL_CAPACITY = 1 << 4;
    // Maximum capacity
    static final int MAXIMUM_CAPACITY = 1 << 30;
    // Default load factor
    static final float DEFAULT_LOAD_FACTOR = 0.75f;
    // When the number of nodes in a bucket exceeds this, the list becomes a red-black tree
    static final int TREEIFY_THRESHOLD = 8;
    // When the number of nodes in a bucket drops below this, the tree reverts to a list
    static final int UNTREEIFY_THRESHOLD = 6;
    // Minimum table size before a bucket may be treeified
    static final int MIN_TREEIFY_CAPACITY = 64;
    // The array that stores the elements; its length is always a power of two
    transient Node<K,V>[] table;
    // A cached set view of the entries
    transient Set<Map.Entry<K,V>> entrySet;
    // Number of stored elements; note this is NOT the array length
    transient int size;
    // Incremented on every structural modification of the map
    transient int modCount;
    // Threshold: when size exceeds it (capacity * load factor), the table is resized
    int threshold;
    // Load factor
    final float loadFactor;
}
```
The Node class

```java
// Implements Map.Entry<K,V>
static class Node<K,V> implements Map.Entry<K,V> {
    final int hash; // hash value, used to compare this entry with others
    final K key;    // key
    V value;        // value
    Node<K,V> next; // next node in the bucket

    Node(int hash, K key, V value, Node<K,V> next) {
        this.hash = hash;
        this.key = key;
        this.value = value;
        this.next = next;
    }

    public final K getKey()        { return key; }
    public final V getValue()      { return value; }
    public final String toString() { return key + "=" + value; }

    // Override hashCode()
    public final int hashCode() {
        return Objects.hashCode(key) ^ Objects.hashCode(value);
    }

    public final V setValue(V newValue) {
        V oldValue = value;
        value = newValue;
        return oldValue;
    }

    // Override equals()
    public final boolean equals(Object o) {
        if (o == this)
            return true;
        if (o instanceof Map.Entry) {
            Map.Entry<?,?> e = (Map.Entry<?,?>) o;
            if (Objects.equals(key, e.getKey()) &&
                Objects.equals(value, e.getValue()))
                return true;
        }
        return false;
    }
}
```
Tree node class source code

```java
static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> {
    TreeNode<K,V> parent; // parent
    TreeNode<K,V> left;   // left child
    TreeNode<K,V> right;  // right child
    TreeNode<K,V> prev;   // needed to unlink next upon deletion
    boolean red;          // node color

    TreeNode(int hash, K key, V val, Node<K,V> next) {
        super(hash, key, val, next);
    }

    // Returns the root of the tree containing this node
    final TreeNode<K,V> root() {
        for (TreeNode<K,V> r = this, p;;) {
            if ((p = r.parent) == null)
                return r;
            r = p;
        }
    }
}
```
Constructors

```java
// Default constructor
public HashMap() {
    this.loadFactor = DEFAULT_LOAD_FACTOR; // all other fields defaulted
}

// Creates a HashMap from another Map
public HashMap(Map<? extends K, ? extends V> m) {
    this.loadFactor = DEFAULT_LOAD_FACTOR;
    putMapEntries(m, false);
}

// Constructor specifying the capacity
public HashMap(int initialCapacity) {
    this(initialCapacity, DEFAULT_LOAD_FACTOR);
}

// Constructor specifying both capacity and load factor
public HashMap(int initialCapacity, float loadFactor) {
    if (initialCapacity < 0)
        throw new IllegalArgumentException("Illegal initial capacity: " + initialCapacity);
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    if (loadFactor <= 0 || Float.isNaN(loadFactor))
        throw new IllegalArgumentException("Illegal load factor: " + loadFactor);
    this.loadFactor = loadFactor;
    this.threshold = tableSizeFor(initialCapacity);
}
```
```java
final void putMapEntries(Map<? extends K, ? extends V> m, boolean evict) {
    int s = m.size();
    if (s > 0) {
        // Check whether the table has been initialized
        if (table == null) { // pre-size
            // Not initialized; s is the actual number of elements in m
            float ft = ((float) s / loadFactor) + 1.0F;
            int t = ((ft < (float) MAXIMUM_CAPACITY) ?
                     (int) ft : MAXIMUM_CAPACITY);
            // If t is greater than the current threshold, recompute it
            if (t > threshold)
                threshold = tableSizeFor(t);
        }
        // Already initialized and m has more elements than the threshold: resize
        else if (s > threshold)
            resize();
        // Add all of m's entries to this HashMap
        for (Map.Entry<? extends K, ? extends V> e : m.entrySet()) {
            K key = e.getKey();
            V value = e.getValue();
            putVal(hash(key), key, value, false, evict);
        }
    }
}
```
tableSizeFor
Returns the smallest power of two greater than or equal to cap: the OR/shift cascade copies the highest set bit into every lower bit position, and adding 1 then yields a power of two.

```java
static final int tableSizeFor(int cap) {
    int n = cap - 1;
    n |= n >>> 1;
    n |= n >>> 2;
    n |= n >>> 4;
    n |= n >>> 8;
    n |= n >>> 16;
    return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
}
```
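Replaying the same bit-smearing on a few inputs (the constant and method are copied from the snippet above into a standalone sketch):

```java
public class TableSize {
    static final int MAXIMUM_CAPACITY = 1 << 30;

    // Same bit-smearing as HashMap.tableSizeFor: copies the highest set bit
    // into every lower position, then adds 1 to reach the next power of two.
    public static int tableSizeFor(int cap) {
        int n = cap - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : (n >= MAXIMUM_CAPACITY) ? MAXIMUM_CAPACITY : n + 1;
    }

    public static void main(String[] args) {
        System.out.println(tableSizeFor(10)); // 16
        System.out.println(tableSizeFor(16)); // 16 (already a power of two)
        System.out.println(tableSizeFor(17)); // 32
    }
}
```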
get(), getNode()
Assume the table has n buckets and the element's key hashes to hash. Computing hash % n (implemented as (n - 1) & hash) gives an index t, and table[t] is the target bucket — the hash and the bucket count together locate the list that stores the entry.
After reaching the target bucket there are two cases, a linked list or a red-black tree, but either way the nodes are traversed to find one whose key is equal (by == or equals). On get, that node's value is returned; on put, an existing key has its value replaced, otherwise a new node is appended to the end of the list (or inserted into the tree).

```java
public V get(Object key) {
    Node<K,V> e;
    return (e = getNode(hash(key), key)) == null ? null : e.value;
}

final Node<K,V> getNode(int hash, Object key) {
    Node<K,V>[] tab; Node<K,V> first, e; int n; K k;
    if ((tab = table) != null && (n = tab.length) > 0 &&
        (first = tab[(n - 1) & hash]) != null) {
        if (first.hash == hash && // always check first node
            ((k = first.key) == key || (key != null && key.equals(k))))
            return first;
        if ((e = first.next) != null) {
            if (first instanceof TreeNode)
                return ((TreeNode<K,V>) first).getTreeNode(hash, key);
            do {
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    return e;
            } while ((e = e.next) != null);
        }
    }
    return null;
}
```
put(), putVal()

```java
public V put(K key, V value) {
    return putVal(hash(key), key, value, false, true);
}

final V putVal(int hash, K key, V value, boolean onlyIfAbsent,
               boolean evict) {
    Node<K,V>[] tab; Node<K,V> p; int n, i;
    // Table not yet initialized, or length 0: create it via resize()
    if ((tab = table) == null || (n = tab.length) == 0)
        n = (tab = resize()).length;
    // (n - 1) & hash locates the bucket; if it is empty, place a new node there
    if ((p = tab[i = (n - 1) & hash]) == null)
        tab[i] = newNode(hash, key, value, null);
    // The bucket already contains elements
    else {
        Node<K,V> e; K k;
        // Compare the first node's hash and key with the inserted element's
        if (p.hash == hash &&
            ((k = p.key) == key || (key != null && key.equals(k))))
            // Same key: remember the node as e
            e = p;
        // Keys differ and the bucket holds a red-black tree
        else if (p instanceof TreeNode)
            // Insert into the tree
            e = ((TreeNode<K,V>) p).putTreeVal(this, tab, hash, key, value);
        // The bucket holds a linked list
        else {
            // Insert the node at the end of the list
            for (int binCount = 0; ; ++binCount) {
                // Reached the end of the list
                if ((e = p.next) == null) {
                    // Append a new node at the tail
                    p.next = newNode(hash, key, value, null);
                    // When the node count reaches the threshold (default 8), call
                    // treeifyBin, which decides based on the array whether to treeify:
                    // only when the array length is at least 64 is the list converted
                    // to a red-black tree (to cut search time); otherwise the array
                    // is simply resized.
                    if (binCount >= TREEIFY_THRESHOLD - 1) // -1 for 1st
                        treeifyBin(tab, hash);
                    break;
                }
                // Check whether a node in the list has a key equal to the inserted one
                if (e.hash == hash &&
                    ((k = e.key) == key || (key != null && key.equals(k))))
                    // Equal: leave the loop
                    break;
                // Advance along the bucket's list
                p = e;
            }
        }
        // A node with an equal key and hash was found in the bucket
        if (e != null) {
            // Record e's old value
            V oldValue = e.value;
            // Replace unless onlyIfAbsent is set and the old value is non-null
            if (!onlyIfAbsent || oldValue == null)
                e.value = value;
            // Post-access callback
            afterNodeAccess(e);
            // Return the old value
            return oldValue;
        }
    }
    // Structural change
    ++modCount;
    // Resize if the new size exceeds the threshold
    if (++size > threshold)
        resize();
    // Post-insertion callback
    afterNodeInsertion(evict);
    return null;
}
```
resize
```java
final Node<K,V>[] resize() {
    Node<K,V>[] oldTab = table;
    int oldCap = (oldTab == null) ? 0 : oldTab.length;
    int oldThr = threshold;
    int newCap, newThr = 0;
    if (oldCap > 0) {
        // Already at the maximum: stop growing
        if (oldCap >= MAXIMUM_CAPACITY) {
            threshold = Integer.MAX_VALUE;
            return oldTab;
        }
        // Otherwise double the capacity
        else if ((newCap = oldCap << 1) < MAXIMUM_CAPACITY &&
                 oldCap >= DEFAULT_INITIAL_CAPACITY)
            newThr = oldThr << 1; // double threshold
    }
    else if (oldThr > 0) // initial capacity was placed in threshold
        newCap = oldThr;
    else {
        // Zero threshold signifies using defaults
        newCap = DEFAULT_INITIAL_CAPACITY;
        newThr = (int)(DEFAULT_LOAD_FACTOR * DEFAULT_INITIAL_CAPACITY);
    }
    // Compute the new resize threshold
    if (newThr == 0) {
        float ft = (float) newCap * loadFactor;
        newThr = (newCap < MAXIMUM_CAPACITY && ft < (float) MAXIMUM_CAPACITY ?
                  (int) ft : Integer.MAX_VALUE);
    }
    threshold = newThr;
    @SuppressWarnings({"rawtypes","unchecked"})
    Node<K,V>[] newTab = (Node<K,V>[]) new Node[newCap];
    table = newTab;
    if (oldTab != null) {
        // Move every bucket into the new table
        for (int j = 0; j < oldCap; ++j) {
            Node<K,V> e;
            if ((e = oldTab[j]) != null) {
                oldTab[j] = null;
                if (e.next == null)
                    newTab[e.hash & (newCap - 1)] = e;
                else if (e instanceof TreeNode)
                    ((TreeNode<K,V>) e).split(this, newTab, j, oldCap);
                else {
                    Node<K,V> loHead = null, loTail = null;
                    Node<K,V> hiHead = null, hiTail = null;
                    Node<K,V> next;
                    do {
                        next = e.next;
                        // Stays at the original index
                        if ((e.hash & oldCap) == 0) {
                            if (loTail == null)
                                loHead = e;
                            else
                                loTail.next = e;
                            loTail = e;
                        }
                        // Moves to original index + oldCap
                        else {
                            if (hiTail == null)
                                hiHead = e;
                            else
                                hiTail.next = e;
                            hiTail = e;
                        }
                    } while ((e = next) != null);
                    // Put the low list at the original index
                    if (loTail != null) {
                        loTail.next = null;
                        newTab[j] = loHead;
                    }
                    // Put the high list at index + oldCap
                    if (hiTail != null) {
                        hiTail.next = null;
                        newTab[j + oldCap] = hiHead;
                    }
                }
            }
        }
    }
    return newTab;
}
```
treeifyBin
```java
final void treeifyBin(Node<K,V>[] tab, int hash) {
    int n, index; Node<K,V> e;
    // If the table is still small, just resize instead of treeifying
    if (tab == null || (n = tab.length) < MIN_TREEIFY_CAPACITY)
        resize();
    else if ((e = tab[index = (n - 1) & hash]) != null) {
        TreeNode<K,V> hd = null, tl = null;
        // Replace each list node with a tree node, preserving order
        do {
            TreeNode<K,V> p = replacementTreeNode(e, null);
            if (tl == null)
                hd = p;
            else {
                p.prev = tl;
                tl.next = p;
            }
            tl = p;
        } while ((e = e.next) != null);
        if ((tab[index] = hd) != null)
            hd.treeify(tab);
    }
}
```
RBT red-black tree
/** * Entry for Tree bins. Extends LinkedHashMap.Entry (which in turn * extends Node) so can be used as extension of either regular or * linked node. */
static final class TreeNode<K,V> extends LinkedHashMap.Entry<K,V> {
TreeNode<K,V> parent; // red-black tree links
TreeNode<K,V> left;
TreeNode<K,V> right;
TreeNode<K,V> prev; // needed to unlink next upon deletion
boolean red;
TreeNode(int hash, K key, V val, Node<K,V> next) {
super(hash, key, val, next);
}
/** * Returns root of tree containing this node. */
final TreeNode<K,V> root() {
for (TreeNode<K,V> r = this, p;;) {
if ((p = r.parent) == null)
return r;
r = p;
}
}
/** * Ensures that the given root is the first node of its bin. */
static <K,V> void moveRootToFront(Node<K,V>[] tab, TreeNode<K,V> root) {
int n;
if (root != null && tab != null && (n = tab.length) > 0) {
int index = (n - 1) & root.hash;
TreeNode<K,V> first = (TreeNode<K,V>)tab[index];
if (root != first) {
Node<K,V> rn;
tab[index] = root;
TreeNode<K,V> rp = root.prev;
if ((rn = root.next) != null)
((TreeNode<K,V>)rn).prev = rp;
if (rp != null)
rp.next = rn;
if (first != null)
first.prev = root;
root.next = first;
root.prev = null;
}
assert checkInvariants(root);
}
}
/** * Finds the node starting at root p with the given hash and key. * The kc argument caches comparableClassFor(key) upon first use * comparing keys. */
final TreeNode<K,V> find(int h, Object k, Class<?> kc) {
TreeNode<K,V> p = this;
do {
int ph, dir; K pk;
TreeNode<K,V> pl = p.left, pr = p.right, q;
if ((ph = p.hash) > h)
p = pl;
else if (ph < h)
p = pr;
else if ((pk = p.key) == k || (k != null && k.equals(pk)))
return p;
else if (pl == null)
p = pr;
else if (pr == null)
p = pl;
else if ((kc != null || (kc = comparableClassFor(k)) != null) &&
(dir = compareComparables(kc, k, pk)) != 0)
p = (dir < 0) ? pl : pr;
else if ((q = pr.find(h, k, kc)) != null)
return q;
else
p = pl;
} while (p != null);
return null;
}
/** * Calls find for root node. */
final TreeNode<K,V> getTreeNode(int h, Object k) {
return ((parent != null) ? root() : this).find(h, k, null);
}
/** * Tie-breaking utility for ordering insertions when equal * hashCodes and non-comparable. We don't require a total * order, just a consistent insertion rule to maintain * equivalence across rebalancings. Tie-breaking further than * necessary simplifies testing a bit. */
static int tieBreakOrder(Object a, Object b) {
int d;
if (a == null || b == null ||
(d = a.getClass().getName().
compareTo(b.getClass().getName())) == 0)
d = (System.identityHashCode(a) <= System.identityHashCode(b) ?
-1 : 1);
return d;
}
/** * Forms tree of the nodes linked from this node. */
final void treeify(Node<K,V>[] tab) {
TreeNode<K,V> root = null;
for (TreeNode<K,V> x = this, next; x != null; x = next) {
next = (TreeNode<K,V>)x.next;
x.left = x.right = null;
if (root == null) {
x.parent = null;
x.red = false;
root = x;
}
else {
K k = x.key;
int h = x.hash;
Class<?> kc = null;
for (TreeNode<K,V> p = root;;) {
int dir, ph;
K pk = p.key;
if ((ph = p.hash) > h)
dir = -1;
else if (ph < h)
dir = 1;
else if ((kc == null &&
(kc = comparableClassFor(k)) == null) ||
(dir = compareComparables(kc, k, pk)) == 0)
dir = tieBreakOrder(k, pk);
TreeNode<K,V> xp = p;
if ((p = (dir <= 0) ? p.left : p.right) == null) {
x.parent = xp;
if (dir <= 0)
xp.left = x;
else
xp.right = x;
root = balanceInsertion(root, x);
break;
}
}
}
}
moveRootToFront(tab, root);
}
/** Returns a list of non-TreeNodes replacing those linked from this node. */
final Node<K,V> untreeify(HashMap<K,V> map) {
Node<K,V> hd = null, tl = null;
for (Node<K,V> q = this; q != null; q = q.next) {
Node<K,V> p = map.replacementNode(q, null);
if (tl == null)
hd = p;
else
tl.next = p;
tl = p;
}
return hd;
}
/** Tree version of putVal. */
final TreeNode<K,V> putTreeVal(HashMap<K,V> map, Node<K,V>[] tab,
int h, K k, V v) {
Class<?> kc = null;
boolean searched = false;
TreeNode<K,V> root = (parent != null) ? root() : this;
for (TreeNode<K,V> p = root;;) {
int dir, ph; K pk;
if ((ph = p.hash) > h)
dir = -1;
else if (ph < h)
dir = 1;
else if ((pk = p.key) == k || (k != null && k.equals(pk)))
return p;
else if ((kc == null &&
(kc = comparableClassFor(k)) == null) ||
(dir = compareComparables(kc, k, pk)) == 0) {
if (!searched) {
TreeNode<K,V> q, ch;
searched = true;
if (((ch = p.left) != null &&
(q = ch.find(h, k, kc)) != null) ||
((ch = p.right) != null &&
(q = ch.find(h, k, kc)) != null))
return q;
}
dir = tieBreakOrder(k, pk);
}
TreeNode<K,V> xp = p;
if ((p = (dir <= 0) ? p.left : p.right) == null) {
Node<K,V> xpn = xp.next;
TreeNode<K,V> x = map.newTreeNode(h, k, v, xpn);
if (dir <= 0)
xp.left = x;
else
xp.right = x;
xp.next = x;
x.parent = x.prev = xp;
if (xpn != null)
((TreeNode<K,V>)xpn).prev = x;
moveRootToFront(tab, balanceInsertion(root, x));
return null;
}
}
}

/**
 * Removes the given node, that must be present before this call.
 * This is messier than typical red-black deletion code because we
 * cannot swap the contents of an interior node with a leaf
 * successor that is pinned by "next" pointers that are accessible
 * independently during traversal. So instead we swap the tree
 * linkages. If the current tree appears to have too few nodes,
 * the bin is converted back to a plain bin. (The test triggers
 * somewhere between 2 and 6 nodes, depending on tree structure.)
 */
final void removeTreeNode(HashMap<K,V> map, Node<K,V>[] tab,
boolean movable) {
int n;
if (tab == null || (n = tab.length) == 0)
return;
int index = (n - 1) & hash;
TreeNode<K,V> first = (TreeNode<K,V>)tab[index], root = first, rl;
TreeNode<K,V> succ = (TreeNode<K,V>)next, pred = prev;
if (pred == null)
tab[index] = first = succ;
else
pred.next = succ;
if (succ != null)
succ.prev = pred;
if (first == null)
return;
if (root.parent != null)
root = root.root();
if (root == null
|| (movable
&& (root.right == null
|| (rl = root.left) == null
|| rl.left == null))) {
tab[index] = first.untreeify(map); // too small
return;
}
TreeNode<K,V> p = this, pl = left, pr = right, replacement;
if (pl != null && pr != null) {
TreeNode<K,V> s = pr, sl;
while ((sl = s.left) != null) // find successor
s = sl;
boolean c = s.red; s.red = p.red; p.red = c; // swap colors
TreeNode<K,V> sr = s.right;
TreeNode<K,V> pp = p.parent;
if (s == pr) { // p was s's direct parent
p.parent = s;
s.right = p;
}
else {
TreeNode<K,V> sp = s.parent;
if ((p.parent = sp) != null) {
if (s == sp.left)
sp.left = p;
else
sp.right = p;
}
if ((s.right = pr) != null)
pr.parent = s;
}
p.left = null;
if ((p.right = sr) != null)
sr.parent = p;
if ((s.left = pl) != null)
pl.parent = s;
if ((s.parent = pp) == null)
root = s;
else if (p == pp.left)
pp.left = s;
else
pp.right = s;
if (sr != null)
replacement = sr;
else
replacement = p;
}
else if (pl != null)
replacement = pl;
else if (pr != null)
replacement = pr;
else
replacement = p;
if (replacement != p) {
TreeNode<K,V> pp = replacement.parent = p.parent;
if (pp == null)
root = replacement;
else if (p == pp.left)
pp.left = replacement;
else
pp.right = replacement;
p.left = p.right = p.parent = null;
}
TreeNode<K,V> r = p.red ? root : balanceDeletion(root, replacement);
if (replacement == p) { // detach
TreeNode<K,V> pp = p.parent;
p.parent = null;
if (pp != null) {
if (p == pp.left)
pp.left = null;
else if (p == pp.right)
pp.right = null;
}
}
if (movable)
moveRootToFront(tab, r);
}
/**
* Splits nodes in a tree bin into lower and upper tree bins,
* or untreeifies if now too small. Called only from resize;
* see above discussion about split bits and indices.
*
* @param map the map
* @param tab the table for recording bin heads
* @param index the index of the table being split
* @param bit the bit of hash to split on
*/
final void split(HashMap<K,V> map, Node<K,V>[] tab, int index, int bit) {
TreeNode<K,V> b = this;
// Relink into lo and hi lists, preserving order
TreeNode<K,V> loHead = null, loTail = null;
TreeNode<K,V> hiHead = null, hiTail = null;
int lc = 0, hc = 0;
for (TreeNode<K,V> e = b, next; e != null; e = next) {
next = (TreeNode<K,V>)e.next;
e.next = null;
if ((e.hash & bit) == 0) {
if ((e.prev = loTail) == null)
loHead = e;
else
loTail.next = e;
loTail = e;
++lc;
}
else {
if ((e.prev = hiTail) == null)
hiHead = e;
else
hiTail.next = e;
hiTail = e;
++hc;
}
}
if (loHead != null) {
if (lc <= UNTREEIFY_THRESHOLD)
tab[index] = loHead.untreeify(map);
else {
tab[index] = loHead;
if (hiHead != null) // (else is already treeified)
loHead.treeify(tab);
}
}
if (hiHead != null) {
if (hc <= UNTREEIFY_THRESHOLD)
tab[index + bit] = hiHead.untreeify(map);
else {
tab[index + bit] = hiHead;
if (loHead != null)
hiHead.treeify(tab);
}
}
}

/* ------------------------------------------------------------ */
// Red-black tree methods, all adapted from CLR
static <K,V> TreeNode<K,V> rotateLeft(TreeNode<K,V> root,
TreeNode<K,V> p) {
TreeNode<K,V> r, pp, rl;
if (p != null && (r = p.right) != null) {
if ((rl = p.right = r.left) != null)
rl.parent = p;
if ((pp = r.parent = p.parent) == null)
(root = r).red = false;
else if (pp.left == p)
pp.left = r;
else
pp.right = r;
r.left = p;
p.parent = r;
}
return root;
}
static <K,V> TreeNode<K,V> rotateRight(TreeNode<K,V> root,
TreeNode<K,V> p) {
TreeNode<K,V> l, pp, lr;
if (p != null && (l = p.left) != null) {
if ((lr = p.left = l.right) != null)
lr.parent = p;
if ((pp = l.parent = p.parent) == null)
(root = l).red = false;
else if (pp.right == p)
pp.right = l;
else
pp.left = l;
l.right = p;
p.parent = l;
}
return root;
}
static <K,V> TreeNode<K,V> balanceInsertion(TreeNode<K,V> root,
TreeNode<K,V> x) {
x.red = true;
for (TreeNode<K,V> xp, xpp, xppl, xppr;;) {
if ((xp = x.parent) == null) {
x.red = false;
return x;
}
else if (!xp.red || (xpp = xp.parent) == null)
return root;
if (xp == (xppl = xpp.left)) {
if ((xppr = xpp.right) != null && xppr.red) {
xppr.red = false;
xp.red = false;
xpp.red = true;
x = xpp;
}
else {
if (x == xp.right) {
root = rotateLeft(root, x = xp);
xpp = (xp = x.parent) == null ? null : xp.parent;
}
if (xp != null) {
xp.red = false;
if (xpp != null) {
xpp.red = true;
root = rotateRight(root, xpp);
}
}
}
}
else {
if (xppl != null && xppl.red) {
xppl.red = false;
xp.red = false;
xpp.red = true;
x = xpp;
}
else {
if (x == xp.left) {
root = rotateRight(root, x = xp);
xpp = (xp = x.parent) == null ? null : xp.parent;
}
if (xp != null) {
xp.red = false;
if (xpp != null) {
xpp.red = true;
root = rotateLeft(root, xpp);
}
}
}
}
}
}
static <K,V> TreeNode<K,V> balanceDeletion(TreeNode<K,V> root,
TreeNode<K,V> x) {
for (TreeNode<K,V> xp, xpl, xpr;;) {
if (x == null || x == root)
return root;
else if ((xp = x.parent) == null) {
x.red = false;
return x;
}
else if (x.red) {
x.red = false;
return root;
}
else if ((xpl = xp.left) == x) {
if ((xpr = xp.right) != null && xpr.red) {
xpr.red = false;
xp.red = true;
root = rotateLeft(root, xp);
xpr = (xp = x.parent) == null ? null : xp.right;
}
if (xpr == null)
x = xp;
else {
TreeNode<K,V> sl = xpr.left, sr = xpr.right;
if ((sr == null || !sr.red) &&
(sl == null || !sl.red)) {
xpr.red = true;
x = xp;
}
else {
if (sr == null || !sr.red) {
if (sl != null)
sl.red = false;
xpr.red = true;
root = rotateRight(root, xpr);
xpr = (xp = x.parent) == null ?
null : xp.right;
}
if (xpr != null) {
xpr.red = (xp == null) ? false : xp.red;
if ((sr = xpr.right) != null)
sr.red = false;
}
if (xp != null) {
xp.red = false;
root = rotateLeft(root, xp);
}
x = root;
}
}
}
else { // symmetric
if (xpl != null && xpl.red) {
xpl.red = false;
xp.red = true;
root = rotateRight(root, xp);
xpl = (xp = x.parent) == null ? null : xp.left;
}
if (xpl == null)
x = xp;
else {
TreeNode<K,V> sl = xpl.left, sr = xpl.right;
if ((sl == null || !sl.red) &&
(sr == null || !sr.red)) {
xpl.red = true;
x = xp;
}
else {
if (sl == null || !sl.red) {
if (sr != null)
sr.red = false;
xpl.red = true;
root = rotateLeft(root, xpl);
xpl = (xp = x.parent) == null ?
null : xp.left;
}
if (xpl != null) {
xpl.red = (xp == null) ? false : xp.red;
if ((sl = xpl.left) != null)
sl.red = false;
}
if (xp != null) {
xp.red = false;
root = rotateRight(root, xp);
}
x = root;
}
}
}
}
}
/** Recursive invariant check. */
static <K,V> boolean checkInvariants(TreeNode<K,V> t) {
TreeNode<K,V> tp = t.parent, tl = t.left, tr = t.right,
tb = t.prev, tn = (TreeNode<K,V>)t.next;
if (tb != null && tb.next != t)
return false;
if (tn != null && tn.prev != t)
return false;
if (tp != null && t != tp.left && t != tp.right)
return false;
if (tl != null && (tl.parent != t || tl.hash > t.hash))
return false;
if (tr != null && (tr.parent != t || tr.hash < t.hash))
return false;
if (t.red && tl != null && tl.red && tr != null && tr.red)
return false;
if (tl != null && !checkInvariants(tl))
return false;
if (tr != null && !checkInvariants(tr))
return false;
return true;
}
}
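The tree bins above exist so that lookups stay fast even when many keys land in the same bucket. A minimal sketch of that effect, using a deliberately bad `hashCode()` so every entry collides into one bin (well past the treeify threshold of 8); `CollidingKey` is a made-up demo class, not part of the JDK:

```java
import java.util.HashMap;

public class TreeBinDemo {
    // Every instance hashes to the same value, forcing all entries into one bin.
    static final class CollidingKey {
        final int id;
        CollidingKey(int id) { this.id = id; }
        @Override public int hashCode() { return 42; }
        @Override public boolean equals(Object o) {
            return o instanceof CollidingKey && ((CollidingKey) o).id == id;
        }
    }

    static String lookup(int n, int probe) {
        HashMap<CollidingKey, String> map = new HashMap<>();
        for (int i = 0; i < n; i++)
            map.put(new CollidingKey(i), "v" + i);
        // With 100 colliding, non-Comparable keys the bin is treeified and
        // tieBreakOrder() supplies the insertion order; get() still works.
        return map.get(new CollidingKey(probe));
    }

    public static void main(String[] args) {
        System.out.println(lookup(100, 57)); // prints "v57"
    }
}
```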
TreeMap
- TreeMap implements the NavigableMap interface, which in turn extends SortedMap — that is what makes a TreeMap ordered.
- TreeMap is backed by a red-black tree, so its basic operations run in O(log n) time.
- It is not synchronized (not thread-safe).
- Keys are compared for equality and sorted using either a Comparator supplied at construction time or the keys' natural Comparable ordering.
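The two ordering modes can be sketched with a short example: one map relies on String's natural `compareTo` order, the other takes a `Comparator` in its constructor:

```java
import java.util.Comparator;
import java.util.TreeMap;

public class TreeMapOrderDemo {
    static String naturalOrder() {
        // No Comparator supplied: keys sorted by String's Comparable ordering.
        TreeMap<String, Integer> m = new TreeMap<>();
        m.put("banana", 2);
        m.put("apple", 1);
        m.put("cherry", 3);
        return m.keySet().toString();
    }

    static String reverseOrder() {
        // Comparator supplied at construction: keys sorted descending.
        TreeMap<String, Integer> m = new TreeMap<>(Comparator.reverseOrder());
        m.put("banana", 2);
        m.put("apple", 1);
        m.put("cherry", 3);
        return m.keySet().toString();
    }

    public static void main(String[] args) {
        System.out.println(naturalOrder()); // [apple, banana, cherry]
        System.out.println(reverseOrder()); // [cherry, banana, apple]
    }
}
```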
// Comparator used to order the keys, or null for natural ordering
private final Comparator<? super K> comparator;
// Root of the red-black tree
private transient Entry<K,V> root;
// Number of entries in the tree
private transient int size = 0;
// Number of structural modifications
private transient int modCount = 0;
public TreeMap() {
comparator = null;
}
public TreeMap(Comparator<? super K> comparator) {
this.comparator = comparator;
}
public TreeMap(Map<? extends K, ? extends V> m ){
this.comparator = null;
putAll(m);
}
public TreeMap(SortedMap<K, ? extends V> m) {
comparator = m.comparator();
try{
buildFromSorted(m.size(), m.entrySet().iterator(), null, null);
} catch (java.io.IOException cannotHappen) {
} catch (ClassNotFoundException cannotHappen) {
}
}
put()
get()
remove()
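A minimal usage sketch of the three operations named above; each walks one root-to-leaf path of the red-black tree, hence O(log n):

```java
import java.util.TreeMap;

public class TreeMapOpsDemo {
    static String demo() {
        TreeMap<Integer, String> m = new TreeMap<>();
        m.put(3, "three");
        m.put(1, "one");
        m.put(2, "two");
        String got = m.get(2);        // lookup by key
        Integer first = m.firstKey(); // smallest key, from the NavigableMap API
        m.remove(1);                  // delete; 2 becomes the smallest key
        return got + "," + first + "," + m.firstKey();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // two,1,2
    }
}
```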
WeakHashMap
ConcurrentHashMap
Properties
The Properties class, which extends Hashtable, represents a persistent set of properties in which every key and its corresponding value is a String.
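A short sketch of the String-only key/value API; the "persistent" part comes from `load()`/`store()`, which read and write the familiar `key=value` text format (the `db.host`/`db.port` names are made up for the demo):

```java
import java.io.StringReader;
import java.util.Properties;

public class PropertiesDemo {
    static String hostAndPort() {
        Properties props = new Properties();
        try {
            // load() parses the key=value format; here from an in-memory string.
            props.load(new StringReader("db.host=localhost\ndb.port=5432\n"));
        } catch (java.io.IOException cannotHappen) {
            // StringReader never throws
        }
        // getProperty takes and returns Strings; a default can be supplied.
        return props.getProperty("db.host") + ":"
             + props.getProperty("db.port", "5432");
    }

    public static void main(String[] args) {
        System.out.println(hostAndPort()); // localhost:5432
    }
}
```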
Iterator
Provides a uniform way to traverse the elements of a container. Only the container itself knows how its elements are organized, so an iterator can only be obtained from the container; each container implements its own iterator as an inner class.
public interface Iterator<E>{
boolean hasNext();
E next();
void remove();
}
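A sketch of the interface in use: the iterator comes from the container itself, and `Iterator.remove()` is the only safe way to delete elements mid-traversal:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class IteratorDemo {
    static List<Integer> dropEvens(List<Integer> nums) {
        Iterator<Integer> it = nums.iterator(); // obtained from the container
        while (it.hasNext()) {
            if (it.next() % 2 == 0)
                // Removing through the list directly here would throw
                // ConcurrentModificationException; it.remove() is safe.
                it.remove();
        }
        return nums;
    }

    public static void main(String[] args) {
        List<Integer> nums = new ArrayList<>(List.of(1, 2, 3, 4, 5));
        System.out.println(dropEvens(nums)); // [1, 3, 5]
    }
}
```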