Welcome to follow my official account "Tong Ge Reads Source Code" for more articles in this source-code series, and tour the ocean of source code with Tong Ge.
This article continues directly from the previous one; see the link for Part 1.
Initialize the bucket array
The bucket array is initialized lazily, the first time an element is put into the map.
private final Node<K,V>[] initTable() {
    Node<K,V>[] tab; int sc;
    while ((tab = table) == null || tab.length == 0) {
        if ((sc = sizeCtl) < 0)
            // If sizeCtl < 0, another thread is already initializing or resizing the table
            Thread.yield(); // lost initialization race; just spin
        else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
            // If sizeCtl is atomically updated to -1, the current thread performs the initialization
            // If the CAS fails, another thread got there first and we go around the loop again
            // If the initialization is still not finished on the next loop, sizeCtl < 0 and we hit the yield branch above
            // If it is finished on the next loop, table.length != 0 and we exit the loop
            try {
                // Check again whether the table is still empty, in case another thread initialized it in the meantime
                if ((tab = table) == null || tab.length == 0) {
                    // If sc is 0, use the default capacity of 16
                    int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
                    // Create the array
                    @SuppressWarnings("unchecked")
                    Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
                    // Assign it to the table bucket array
                    table = tab = nt;
                    // Set sc to 0.75 times the array length
                    // n - (n >>> 2) = n - n/4 = 0.75n
                    // The load factor and the resize threshold are both hard-coded here
                    // This is why ConcurrentHashMap has no threshold or loadFactor fields
                    sc = n - (n >>> 2);
                }
            } finally {
                // Write sc back to sizeCtl, which now stores the resize threshold
                sizeCtl = sc;
            }
            break;
        }
    }
    return tab;
}
(1) A CAS on sizeCtl is used as a lock so that only one thread initializes the bucket array;
(2) After initialization, sizeCtl stores the resize threshold;
(3) The resize threshold is 0.75 times the bucket array size, which is the effective capacity of the map, i.e. the maximum number of elements it can hold before resizing.
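If you want to play with this pattern on its own, here is a minimal sketch of the same CAS-guarded lazy initialization, using AtomicReference/AtomicInteger in place of the Unsafe calls in the JDK source; the class and field names are made up for illustration and this is not the actual JDK code:

import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;

// Minimal sketch of the initTable() pattern: many threads may race to
// initialize a shared array, but only the thread that wins the CAS builds it.
class LazyTable {
    private static final int DEFAULT_CAPACITY = 16;
    private final AtomicReference<Object[]> table = new AtomicReference<>();
    // plays the role of sizeCtl: -1 means "some thread is initializing right now"
    private final AtomicInteger ctl = new AtomicInteger(0);

    Object[] initTable() {
        Object[] tab;
        while ((tab = table.get()) == null) {
            int sc = ctl.get();
            if (sc < 0)
                Thread.yield();                      // lost the race, give up the CPU and retry
            else if (ctl.compareAndSet(sc, -1)) {
                try {
                    if ((tab = table.get()) == null) {   // double-check after winning the CAS
                        int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
                        tab = new Object[n];
                        table.set(tab);
                        sc = n - (n >>> 2);          // 0.75n, the next resize threshold
                    }
                } finally {
                    ctl.set(sc);                     // publish the threshold (or restore the old value)
                }
                break;
            }
        }
        return tab;
    }
}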
Determine whether capacity expansion is required
After an element is added, the element count is incremented by 1 and checked against the resize threshold. If the threshold is reached, the thread either starts a resize or helps with one already in progress.
private final void addCount(long x, int check) {
    CounterCell[] as; long b, s;
    // The counting scheme here is exactly the same idea as the LongAdder class (more on that later)
    // The count is spread across different segments for different threads (the segmented-locking idea again)
    // There is also a baseCount, which is tried first; if that fails, the segment for the current thread is updated
    // This keeps contention to a minimum
    // First try to add the delta to baseCount; if that fails, add it to a segmented CounterCell
    if ((as = counterCells) != null ||
        !U.compareAndSwapLong(this, BASECOUNT, b = baseCount, s = b + x)) {
        CounterCell a; long v; int m;
        boolean uncontended = true;
        // If as is null
        // or its length is 0
        // or the segment for the current thread is null
        // or adding the delta to the current thread's segment fails
        if (as == null || (m = as.length - 1) < 0 ||
            (a = as[ThreadLocalRandom.getProbe() & m]) == null ||
            !(uncontended = U.compareAndSwapLong(a, CELLVALUE, v = a.value, v + x))) {
            // Force the increment through (the count must be added one way or another, not just spun on)
            // Different threads failed to update different segments
            // If a conflict occurred, counterCells is expanded
            // to reduce the probability of multiple threads hashing to the same segment
            fullAddCount(x, uncontended);
            return;
        }
        if (check <= 1)
            return;
        // Compute the element count
        s = sumCount();
    }
    if (check >= 0) {
        Node<K,V>[] tab, nt; int n, sc;
        // If the element count has reached the threshold, resize
        // Note that sizeCtl normally stores the resize threshold, i.e. 0.75 times the capacity
        while (s >= (long)(sc = sizeCtl) && (tab = table) != null &&
               (n = tab.length) < MAXIMUM_CAPACITY) {
            // rs is the resize stamp for the current capacity
            int rs = resizeStamp(n);
            if (sc < 0) {
                // If sc < 0, a resize is already in progress
                if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
                    sc == rs + MAX_RESIZERS || (nt = nextTable) == null ||
                    transferIndex <= 0)
                    // The resize has finished, exit the loop
                    // In practice only the nextTable == null condition is actually triggered here
                    break;
                // If the resize is not finished, the current thread joins in to migrate elements
                // and the resizing thread count is increased by 1
                if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1))
                    transfer(tab, nt);
            }
            else if (U.compareAndSwapInt(this, SIZECTL, sc,
                                         (rs << RESIZE_STAMP_SHIFT) + 2))
                // The thread that triggers the resize enters here
                // The high 16 bits of sizeCtl store the resize stamp rs
                // The low 16 bits of sizeCtl store the number of resizing threads plus 1, i.e. (1 + nThreads)
                // So the official comment saying sizeCtl is -(1 + nThreads) is inaccurate
                transfer(tab, null);
            // Recompute the element count
            s = sumCount();
        }
    }
}
(1) The element count is stored in the same way as in the LongAdder class: spread across different segments to reduce contention when multiple threads update the size at the same time;
(2) To compute the element count, the values of these segments and baseCount are added together;
(3) Normally, sizeCtl stores the resize threshold, which is 0.75 times the capacity;
(4) During a resize, the high 16 bits of sizeCtl store the resize stamp and the low 16 bits store the number of resizing threads plus 1, i.e. (1 + nThreads);
(5) If other threads find a resize in progress after adding an element, they also join in to help with it;
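The baseCount + CounterCell[] scheme is essentially the same striping idea as LongAdder. Below is a rough sketch of that idea with made-up names; a fixed-size AtomicLongArray stands in for the resizable CounterCell[], whereas the real code CASes cell values and grows the array on contention:

import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.AtomicLongArray;

// Sketch of the striped counting idea behind baseCount + CounterCell[]:
// try the shared base first; on contention fall back to a per-thread cell.
class StripedCounter {
    private final AtomicLong base = new AtomicLong();
    private final AtomicLongArray cells = new AtomicLongArray(16); // fixed size for simplicity

    void add(long x) {
        long b = base.get();
        // fast path: uncontended update of the base counter
        if (!base.compareAndSet(b, b + x)) {
            // slow path: spread threads across different cells to reduce contention
            int idx = ThreadLocalRandom.current().nextInt(cells.length());
            cells.addAndGet(idx, x);
        }
    }

    long sum() {
        // like sumCount(): total = baseCount + all CounterCell values
        long s = base.get();
        for (int i = 0; i < cells.length(); i++)
            s += cells.get(i);
        return s;
    }
}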
Assist in capacity expansion (migrating elements)
When a thread adding an element finds that a resize is in progress and that the bucket its element hashes to has already been migrated, it helps migrate the elements of the other buckets.
final Node<K,V>[] helpTransfer(Node<K,V>[] tab, Node<K,V> f) {
    Node<K,V>[] nextTab; int sc;
    // If the bucket array is not empty, the first element of the current bucket is a ForwardingNode, and its nextTable is not empty
    // the current bucket has already been migrated, so help migrate the elements of other buckets
    // During a resize, the first element of each migrated old bucket is set to a ForwardingNode whose nextTable points to the new bucket array
    if (tab != null && (f instanceof ForwardingNode) &&
        (nextTab = ((ForwardingNode<K,V>)f).nextTable) != null) {
        int rs = resizeStamp(tab.length);
        // sizeCtl < 0 means a resize is in progress
        while (nextTab == nextTable && table == tab &&
               (sc = sizeCtl) < 0) {
            if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
                sc == rs + MAX_RESIZERS || transferIndex <= 0)
                break;
            // Increase the number of resizing threads by 1
            if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1)) {
                // The current thread helps migrate elements
                transfer(tab, nextTab);
                break;
            }
        }
        return nextTab;
    }
    return table;
}
A thread only helps migrate other buckets once the bucket its own element belongs to has already been migrated.
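To see concretely what "high 16 bits store the stamp, low 16 bits store the thread count plus 1" means, here is a small demo; the resizeStamp() logic mirrors the JDK 8 source (RESIZE_STAMP_BITS = 16), while the demo class itself is just for illustration:

// Small demonstration of how sizeCtl encodes the resize state during expansion.
public class ResizeStampDemo {
    static final int RESIZE_STAMP_BITS = 16;
    static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;

    static int resizeStamp(int n) {
        return Integer.numberOfLeadingZeros(n) | (1 << (RESIZE_STAMP_BITS - 1));
    }

    public static void main(String[] args) {
        int n = 16;                                   // old table length
        int rs = resizeStamp(n);
        int sc = (rs << RESIZE_STAMP_SHIFT) + 2;      // what the first resizing thread CASes into sizeCtl
        System.out.println(Integer.toBinaryString(sc));
        // high 16 bits: the stamp for this capacity; low 16 bits: 2 = 1 + one resizing thread
        // each helper thread does CAS(sc, sc + 1); each thread that finishes does CAS(sc, sc - 1)
    }
}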
Migrate elements
The capacity is doubled and some elements are moved to other buckets.
private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) {
int n = tab.length, stride;
if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE)
stride = MIN_TRANSFER_STRIDE; // subdivide range
if (nextTab == null) { // initiating
// If nextTab is empty, migration has not started
// Create a new bucket array
try {
// The new bucket array is twice as large as the old one
@SuppressWarnings("unchecked")
Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1];
nextTab = nt;
} catch (Throwable ex) { // try to cope with OOME
sizeCtl = Integer.MAX_VALUE;
return;
}
nextTable = nextTab;
transferIndex = n;
}
// New bucket array size
int nextn = nextTab.length;
// Create a new ForwardingNode node and store the new bucket array in it
ForwardingNode<K,V> fwd = new ForwardingNode<K,V>(nextTab);
boolean advance = true;
boolean finishing = false; // to ensure sweep before committing nextTab
for (int i = 0, bound = 0;;) {
Node<K,V> f; int fh;
// The while loop below is all about computing i
// i starts from n - 1 and decreases step by step; set a breakpoint here if you want to watch it
// n is the size of the old bucket array; for example, with n = 16, i starts at 15 and counts down as buckets are migrated
while (advance) {
int nextIndex, nextBound;
if (--i >= bound || finishing)
advance = false;
else if ((nextIndex = transferIndex) <= 0) {
i = -1;
advance = false;
}
else if (U.compareAndSwapInt
(this, TRANSFERINDEX, nextIndex,
nextBound = (nextIndex > stride ?
nextIndex - stride : 0))) {
bound = nextBound;
i = nextIndex - 1;
advance = false;
}
}
if (i < 0 || i >= n || i + n >= nextn) {
// If a traversal is complete
// All buckets in the map have been migrated
int sc;
if (finishing) {
// If the migration is complete, replace the old bucket array
// Set the threshold for next capacity expansion to 0.75 times the capacity of the new bucket array
nextTable = null;
table = nextTab;
sizeCtl = (n << 1) - (n >>> 1);
return;
}
if (U.compareAndSwapInt(this, SIZECTL, sc = sizeCtl, sc - 1)) {
// The current thread has finished its share of the migration, so decrease the resizing thread count by 1
if ((sc - 2) != resizeStamp(n) << RESIZE_STAMP_SHIFT)
// If they are not equal, other threads are still migrating, so just return
// Only when the last thread finishes does (sc - 2) equal the stamp shifted left, and the whole resize can be committed
return;
// Finishing set to true
// Finishing if true will go to the above if condition
finishing = advance = true;
// i is reset to n
// so the bucket array is traversed once more to check that all the migration is really complete
// On that second pass every bucket hits the (fh = f.hash) == MOVED condition
i = n; // recheck before commit
}
}
else if ((f = tabAt(tab, i)) == null)
// If there is no data in the bucket, add ForwardingNode to mark that the bucket has been migrated
advance = casTabAt(tab, i, null, fwd);
else if ((fh = f.hash) == MOVED)
// If the hash value of the first element in the bucket is MOVED
// It is a ForwardingNode node
// The bucket has been migrated
advance = true; // already processed
else {
// Lock the bucket and migrate the elements
synchronized (f) {
// Check again if the first element of the current bucket has been modified
// another thread may have migrated this bucket's elements first
if (tabAt(tab, i) == f) {
// Split one linked list into two linked lists
// The rule: AND each element's hash with the old bucket array size n (hash & n)
// Elements where the result is 0 go into the low list, the rest into the high list
// The low list is migrated to the same index in the new bucket array as in the old one
// The high list is migrated to the old index plus n in the new bucket array
// This is also why the capacity is always doubled
Node<K,V> ln, hn;
if (fh >= 0) {
// The hash value of the first element is greater than or equal to 0
// Indicates that elements in the bucket are stored in a linked list
// This is basically similar to the HashMap migration algorithm
// The only difference is the extra step to find lastRun
// lastRun and everything after it form a sub-list that needs no special handling
// For example, suppose hash & n for the elements in the bucket is 0, 0, 4, 4, 0, 0, 0 in order
// The last three elements with 0 are guaranteed to end up in the same bucket
// so lastRun points to the third node from the end
// The benefit is that the nodes from lastRun onwards can be reused as-is instead of being copied
int runBit = fh & n;
Node<K,V> lastRun = f;
for (Node<K,V> p = f.next; p != null; p = p.next) {
int b = p.hash & n;
if (b != runBit) {
runBit = b;
lastRun = p;
}
}
// see if the last few elements belong to the low or high list
if (runBit == 0) {
ln = lastRun;
hn = null;
}
else {
hn = lastRun;
ln = null;
}
// Walk through the list and place hash&n 0 in the low list
// Non-0 is placed in the higher order list
for (Node<K,V> p = f; p != lastRun; p = p.next) {
int ph = p.hash; K pk = p.key; V pv = p.val;
if ((ph & n) == 0)
ln = new Node<K,V>(ph, pk, pv, ln);
else
hn = new Node<K,V>(ph, pk, pv, hn);
}
// The position of the low list remains the same
setTabAt(nextTab, i, ln);
// The position of the high list is the original position plus n
setTabAt(nextTab, i + n, hn);
// Mark that the bucket has been migrated
setTabAt(tab, i, fwd);
// advance is true
advance = true;
}
else if (f instanceof TreeBin) {
// If the first element is a tree node
// The same goes for splitting into two trees
// Also according to hash&n is 0 in the low level tree
// not 0 in the high tree
TreeBin<K,V> t = (TreeBin<K,V>)f;
TreeNode<K,V> lo = null, loTail = null;
TreeNode<K,V> hi = null, hiTail = null;
int lc = 0, hc = 0;
// Walk through the entire tree and split into two trees based on whether hash&n is zero
for (Node<K,V> e = t.first; e != null; e = e.next) {
int h = e.hash;
TreeNode<K,V> p = new TreeNode<K,V>
(h, e.key, e.val, null, null);
if ((h & n) == 0) {
if ((p.prev = loTail) == null)
lo = p;
else
loTail.next = p;
loTail = p;
++lc;
}
else {
if ((p.prev = hiTail) == null)
hi = p;
else
hiTail.next = p;
hiTail = p;
++hc;
}
}
// If one of the two trees ends up with 6 or fewer elements, it is converted back into a linked list
ln = (lc <= UNTREEIFY_THRESHOLD) ? untreeify(lo) :
(hc != 0) ? new TreeBin<K,V>(lo) : t;
hn = (hc <= UNTREEIFY_THRESHOLD) ? untreeify(hi) :
(lc != 0) ? new TreeBin<K,V>(hi) : t;
// The position of the low tree remains the same
setTabAt(nextTab, i, ln);
// The position of the high-order tree is the original position plus n
setTabAt(nextTab, i + n, hn);
// Mark the bucket migrated
setTabAt(tab, i, fwd);
// advance is true
advance = true;
}
}
}
}
}
}
(1) The new bucket array is twice the size of the old bucket array;
(2) Element migration starts from the buckets at the end of the array and works towards the front;
(3) A ForwardingNode is placed in a bucket to mark that the bucket's migration is complete;
(4) During migration, the elements of a bucket are split into two linked lists or trees according to whether hash & n is 0;
(5) The low list (or tree) stays at the same index as in the old array;
(6) The high list (or tree) is stored at the original index plus n;
(7) The bucket being migrated is locked while its elements are moved, which is again the segmented-locking idea;
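The hash & n split rule in points (4) to (6) can be tried out on its own. The toy snippet below (made-up class, plain int hashes instead of Nodes, no lastRun optimization) shows why an element either stays at its old index or moves to the old index plus n when the capacity doubles from n to 2n:

import java.util.ArrayList;
import java.util.List;

// Toy illustration of the split rule used in transfer():
// when the table doubles from n to 2n, an element either stays at index i
// (hash & n == 0) or moves to index i + n (hash & n != 0).
public class SplitDemo {
    public static void main(String[] args) {
        int n = 16;                        // old capacity
        int i = 5;                         // bucket index in the old table
        int[] hashes = {5, 21, 37, 53};    // all land in bucket 5 when the capacity is 16

        List<Integer> low = new ArrayList<>();   // stays at index i in the new table
        List<Integer> high = new ArrayList<>();  // moves to index i + n in the new table
        for (int h : hashes) {
            if ((h & n) == 0)
                low.add(h);
            else
                high.add(h);
        }
        System.out.println("stay at " + i + ": " + low);        // [5, 37]
        System.out.println("move to " + (i + n) + ": " + high); // [21, 53]
    }
}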
To be continued ~~
Comments cannot be left on the article for now; if you have any suggestions or opinions, please leave me a message in the background of the official account. Thank you~
Welcome to follow my official account "Tong Ge Reads Source Code" for more articles in this source-code series, and tour the ocean of source code with Tong Ge.