Understand the ConcurrentHashMap principle in depth
First of all, a reminder: this one is difficult, so take your motion-sickness medicine 💊. If you have any questions after reading, leave a message in the comments and I will reply one by one 😁
What is a ConcurrentHashMap
ConcurrentHashMap is the thread-safe hash table implementation in the java.util.concurrent package.
ConcurrentHashMap Underlying data structure
The JDK 8 ConcurrentHashMap is an improved, thread-safe take on the JDK 8 HashMap, with finer-grained locking.
Like HashMap, it adopts an array + linked list + red-black tree structure, and it solves the concurrency problem with CAS + synchronized. Next, let's analyze the author's ideas from the source code.
ConcurrentHashMap source code analysis
So let’s think about where do we start with ConcurrentHashMap?
Start with the ConcurrentHashMap class header
public class ConcurrentHashMap<K,V> extends AbstractMap<K,V>
implements ConcurrentMap<K,V>, Serializable {
private static final long serialVersionUID = 7249069246763182397L;
// Maximum array size
private static final int MAXIMUM_CAPACITY = 1 << 30;
// The default size of the array
private static final int DEFAULT_CAPACITY = 16;
// The maximum length that can convert a Map to an array
static final int MAX_ARRAY_SIZE = Integer.MAX_VALUE - 8;
// Concurrency level compatible with pre-1.8
private static final int DEFAULT_CONCURRENCY_LEVEL = 16;
// Load factor
private static final float LOAD_FACTOR = 0.75f;
// Lists are converted to red-black tree thresholds
static final int TREEIFY_THRESHOLD = 8;
// The red-black tree degenerates into a linked list threshold
static final int UNTREEIFY_THRESHOLD = 6;
// The minimum array capacity of the list to be converted into a red-black tree
static final int MIN_TREEIFY_CAPACITY = 64;
// Don't worry if the expansion-related constants below are unfamiliar — just get a first impression
// Minimum number of buckets each thread migrates during expansion
private static final int MIN_TRANSFER_STRIDE = 16;
// Number of bits used to generate the resize stamp
private static int RESIZE_STAMP_BITS = 16;
// Maximum number of threads that can help with expansion
private static final int MAX_RESIZERS = (1 << (32 - RESIZE_STAMP_BITS)) - 1;
// Shift used to place/recover the resize stamp in the upper bits of sizeCtl
private static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;
// Status flag bit
// Expanding mobile data
static final int MOVED = -1; // hash for forwarding nodes
// The nodes are red-black tree nodes
static final int TREEBIN = -2; // hash for roots of trees
// Reserved node, used only by computeIfAbsent and compute
static final int RESERVED = -3; // hash for transient reservations
// Keep the hash value of the key below the maximum value of the integer
static final int HASH_BITS = 0x7fffffff; // usable bits of normal node hash
// Get the number of CPU cores in the current running system
static final int NCPU = Runtime.getRuntime().availableProcessors();
// volatile is used so writes are visible across threads
// If you're not familiar with volatile, leave me a comment — a later article will cover volatile's usage and internals
// The array
transient volatile Node<K,V>[] table;
// The next array to be used is only used when the array is expanded
private transient volatile Node<K,V>[] nextTable;
// This is used to record basic key-value pairs. The size logic will be expanded later
private transient volatile long baseCount;
// Indicates the flag bit for initialization or expansion. If it is negative, initialization or expansion is in progress
private transient volatile int sizeCtl;
// Index bit for expansion
private transient volatile int transferIndex;
// Spinlock flag (set via CAS) used when creating or resizing counterCells
private transient volatile int cellsBusy;
// The counting cell used to calculate size
private transient volatile CounterCell[] counterCells;
// views
private transient KeySetView<K,V> keySet;
private transient ValuesView<K,V> values;
private transient EntrySetView<K,V> entrySet;
The expansion-related constants and variables may look dizzying at first glance; their meaning will become clear as we walk through the source code.
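Before diving in, here is a small sketch of how the resize stamp works, mirroring `resizeStamp(n)` from the JDK 8 source (the class name here is illustrative; the constants are copied from above):

```java
public class ResizeStampDemo {
    static final int RESIZE_STAMP_BITS = 16;
    static final int RESIZE_STAMP_SHIFT = 32 - RESIZE_STAMP_BITS;

    // Mirrors ConcurrentHashMap.resizeStamp(n): the leading-zero count of n
    // (unique per power-of-two table size) with the top bit of the 16-bit
    // stamp set, which makes sizeCtl negative once the stamp is shifted
    // into its upper 16 bits
    static int resizeStamp(int n) {
        return Integer.numberOfLeadingZeros(n) | (1 << (RESIZE_STAMP_BITS - 1));
    }

    public static void main(String[] args) {
        int n = 16;
        int rs = resizeStamp(n);
        // The first expanding thread sets sizeCtl to (rs << RESIZE_STAMP_SHIFT) + 2
        int sizeCtl = (rs << RESIZE_STAMP_SHIFT) + 2;
        // sizeCtl is negative while expanding...
        System.out.println(sizeCtl < 0);
        // ...and its upper 16 bits recover the stamp, which is what
        // helpTransfer checks with (sc >>> RESIZE_STAMP_SHIFT) != rs
        System.out.println((sizeCtl >>> RESIZE_STAMP_SHIFT) == rs);
    }
}
```

This is why a negative sizeCtl means "expanding": the stamp's high bit guarantees the sign bit is set after the shift.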
Next, let's examine the constructor of ConcurrentHashMap
// ConcurrentHashMap has many overloaded constructors, but the most complete and basic is this one
public ConcurrentHashMap(int initialCapacity,
float loadFactor, int concurrencyLevel) {
if (!(loadFactor > 0.0f) || initialCapacity < 0 || concurrencyLevel <= 0)
throw new IllegalArgumentException();
// Since 1.8 the concurrency level is effectively the array size, so if the
// initial capacity is less than the concurrency level, use the concurrency level
if (initialCapacity < concurrencyLevel) // Use at least as many bins
initialCapacity = concurrencyLevel; // as estimated threads
// Compute the capacity
long size = (long)(1.0 + (long)initialCapacity / loadFactor);
// Round the capacity up to a power of two
int cap = (size >= (long)MAXIMUM_CAPACITY) ?
MAXIMUM_CAPACITY : tableSizeFor((int)size);
this.sizeCtl = cap;
}
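To see what capacity the constructor actually produces, here is a sketch of the same formula (`tableSizeFor` mirrors the JDK 8 helper; the class and method names are illustrative and bounds checks are omitted for brevity):

```java
public class CapacityDemo {
    // Mirrors the JDK 8 tableSizeFor(c): the smallest power of two >= c
    static int tableSizeFor(int c) {
        int n = c - 1;
        n |= n >>> 1;
        n |= n >>> 2;
        n |= n >>> 4;
        n |= n >>> 8;
        n |= n >>> 16;
        return (n < 0) ? 1 : n + 1;
    }

    // The constructor's formula: size = 1 + initialCapacity / loadFactor,
    // then rounded up to a power of two
    static int initialTableSize(int initialCapacity, float loadFactor) {
        long size = (long)(1.0 + (long)initialCapacity / loadFactor);
        return tableSizeFor((int)size);
    }

    public static void main(String[] args) {
        // new ConcurrentHashMap<>(16, 0.75f, 16): size = 1 + 16/0.75 ≈ 22 -> 32
        System.out.println(initialTableSize(16, 0.75f)); // 32
    }
}
```

Note that, unlike HashMap, the requested capacity is first divided by the load factor, so asking for 16 elements at 0.75 actually allocates a 32-slot table.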
The put() method
The put() method is now unmasked!
// The common put method
public V put(K key, V value) {
return putVal(key, value, false);
}
// This is how to implement the put operation
final V putVal(K key, V value, boolean onlyIfAbsent) {
// A hashMap can have null values, but ConcurrentHashMap does not allow null values, otherwise NPE will be thrown
if (key == null || value == null) throw new NullPointerException();
// Get the hash value based on key
int hash = spread(key.hashCode());
int binCount = 0;
// spin
for (Node<K,V>[] tab = table;;) {
Node<K,V> f; int n, i, fh;
// If TAB is null, initialize the array
if (tab == null || (n = tab.length) == 0)
// Initialize the array
tab = initTable();
// Use the hash value of the key to determine whether the position in the array is null
else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
// If the location is null, write data through CAS
if (casTabAt(tab, i, null, new Node<K,V>(hash, key, value, null)))
break; // no lock when adding to empty bin
}
// If the node at position is not null and the hash of the node is -1, then the array is expanding
else if ((fh = f.hash) == MOVED)
// Help array expansion
tab = helpTransfer(tab, f);
// Add data
else {
V oldVal = null;
// synchronized has been optimized since JDK 6 and already performs well
synchronized (f) {
// Check again to see if the head node has changed
if (tabAt(tab, i) == f) {
// fh >= 0 means the head is a linked-list node
// (the special hash constants declared above are all negative)
if (fh >= 0) {
binCount = 1;
// Iterate backwards from the head node
for (Node<K,V> e = f;; ++binCount) {
K ek;
// If the node's key equals the key being inserted, overwrite the value
if (e.hash == hash &&
((ek = e.key) == key ||
(ek != null && key.equals(ek)))) {
oldVal = e.val;
if (!onlyIfAbsent)
e.val = value;
break;
}
Node<K,V> pred = e;
// Insert new data at the tail of the list (JDK 8 uses tail insertion)
if ((e = e.next) == null) {
pred.next = new Node<K,V>(hash, key,
value, null);
break;
}
}
}
// Otherwise check whether the head node is a red-black tree node
else if (f instanceof TreeBin) {
Node<K,V> p;
binCount = 2;
// Returns an existing node with the same key, or adds a new tree node
if ((p = ((TreeBin<K,V>)f).putTreeVal(hash, key,
value)) != null) {
oldVal = p.val;
if (!onlyIfAbsent)
p.val = value;
}
}
}
}
// binCount defaults to 0 and changes when a node is added
if (binCount != 0) {
// Check whether the list length has reached the treeify threshold (8)
if (binCount >= TREEIFY_THRESHOLD)
// Convert the list to a red-black tree if the array size condition is met
treeifyBin(tab, i);
if (oldVal != null)
return oldVal;
break;
}
}
// Increment the key-value pair count
addCount(1L, binCount);
return null;
}
Calculate the hash value
// Same as HashMap's hash(), except the final & HASH_BITS keeps the result non-negative
static final int spread(int h) {
return (h ^ (h >>> 16)) & HASH_BITS;
}
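A quick illustration of spread() and how the bucket index is derived from it, mirroring the expressions above (the class name is illustrative):

```java
public class SpreadDemo {
    static final int HASH_BITS = 0x7fffffff;

    // Mirrors ConcurrentHashMap.spread(h): XOR the high 16 bits into the low
    // 16 so small tables still see the high bits, then clear the sign bit so
    // the result never collides with the negative sentinel hashes
    // (MOVED, TREEBIN, RESERVED)
    static int spread(int h) {
        return (h ^ (h >>> 16)) & HASH_BITS;
    }

    public static void main(String[] args) {
        int hash = spread("key".hashCode());
        int n = 16;                 // table length, always a power of two
        int index = (n - 1) & hash; // bucket index, as computed in putVal
        System.out.println(hash >= 0);              // always true
        System.out.println(index >= 0 && index < n); // always true
    }
}
```

Because every stored node's hash is non-negative, a negative hash on a bin head can be used unambiguously as a state marker during expansion and treeification.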
Initializing an array
// Initialize the array
private final Node<K,V>[] initTable() {
Node<K,V>[] tab; int sc;
// Because of concurrency, loop while the array is still null
while ((tab = table) == null || tab.length == 0) {
// sc < 0 means the array is already being initialized or expanded,
// so the current thread yields the CPU
if ((sc = sizeCtl) < 0)
Thread.yield(); // lost initialization race; just spin
// CAS sizeCtl to -1 to claim the initialization
else if (U.compareAndSwapInt(this, SIZECTL, sc, -1)) {
try {
// Recheck that the array is still null
if ((tab = table) == null || tab.length == 0) {
// Use the capacity passed in, otherwise the default
int n = (sc > 0) ? sc : DEFAULT_CAPACITY;
@SuppressWarnings("unchecked")
// Initialize the array
Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n];
table = tab = nt;
// Equivalent to n - n/4, i.e. 0.75n
sc = n - (n >>> 2);
}
} finally {
// After initialization, sizeCtl holds the next resize threshold
sizeCtl = sc;
}
break;
}
}
// The array has been initialized
return tab;
}
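The claim-by-CAS pattern of initTable() can be sketched with an AtomicInteger standing in for the Unsafe CAS on sizeCtl (the class, field types, and `int[]` table here are illustrative simplifications):

```java
import java.util.concurrent.atomic.AtomicInteger;

// A minimal sketch of initTable(): sizeCtl is CASed to -1 so exactly one
// thread initializes the table, while the losing threads spin/yield
public class InitOnceDemo {
    private final AtomicInteger sizeCtl = new AtomicInteger(16); // desired capacity
    private volatile int[] table;

    int[] initTable() {
        int sc;
        while (table == null) {
            if ((sc = sizeCtl.get()) < 0)
                Thread.yield();              // another thread is initializing
            else if (sizeCtl.compareAndSet(sc, -1)) {
                try {
                    if (table == null) {     // recheck under the "lock"
                        int n = (sc > 0) ? sc : 16;
                        table = new int[n];
                        sc = n - (n >>> 2);  // next resize threshold: 0.75n
                    }
                } finally {
                    sizeCtl.set(sc);         // publish the threshold
                }
                break;
            }
        }
        return table;
    }

    public static void main(String[] args) {
        InitOnceDemo d = new InitOnceDemo();
        System.out.println(d.initTable().length); // 16
        System.out.println(d.sizeCtl.get());      // 12 == 16 - 16/4
    }
}
```

The finally block is what makes the pattern robust: even if allocation throws, sizeCtl is restored so other threads are not blocked forever.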
Let’s look at the process of helping arrays expand
final Node<K,V>[] helpTransfer(Node<K,V>[] tab, Node<K,V> f) {
Node<K,V>[] nextTab; int sc;
// Check whether the current array is NULL, whether the current node is a node for expansion, and whether the target array required for migration is NULL
if (tab != null && (f instanceof ForwardingNode) &&
(nextTab = ((ForwardingNode<K,V>)f).nextTable) != null) {
// According to the current array length, generate a expansion stamp, the high part of the expansion stamp represents the expansion version number, the low part of the expansion stamp is 0
int rs = resizeStamp(tab.length);
// Spins to determine whether the current array is still expanding
while (nextTab == nextTable && table == tab &&
(sc = sizeCtl) < 0) {
// (sc >>> RESIZE_STAMP_SHIFT) != rs: the stamp we generated differs from the
// stamp of the expansion in progress, so don't help
// sc == rs + 1: the expansion has finished
// sc == rs + MAX_RESIZERS: the number of helping threads has reached the maximum
// transferIndex <= 0: all migration tasks have been claimed
if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
sc == rs + MAX_RESIZERS || transferIndex <= 0)
break;
// CAS sizeCtl + 1: each thread that helps with expansion increments the count by one
if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1)) {
// Do the actual migration
transfer(tab, nextTab);
break;
}
}
return nextTab;
}
return table;
}
The list is converted to a red-black tree if the size of the array is met
// If the array is smaller than 64, expand the array; otherwise, the list is converted to a red-black tree
private final void treeifyBin(Node<K,V>[] tab, int index) {
Node<K,V> b; int n, sc;
if (tab != null) {
// Check whether the array capacity is smaller than 64
if ((n = tab.length) < MIN_TREEIFY_CAPACITY)
// Try array expansion
tryPresize(n << 1);
// If the head node at this index is not null and its hash >= 0, it is a linked-list node
else if ((b = tabAt(tab, index)) != null && b.hash >= 0) {
// Lock the head node to turn the list into a red-black tree
synchronized (b) {
if (tabAt(tab, index) == b) {
// The real logic of the concrete transformation
TreeNode<K,V> hd = null, tl = null;
for (Node<K,V> e = b; e != null; e = e.next) {
TreeNode<K,V> p =
new TreeNode<K,V>(e.hash, e.key, e.val,
null, null);
if ((p.prev = tl) == null)
hd = p;
else
tl.next = p;
tl = p;
}
setTabAt(tab, index, new TreeBin<K,V>(hd));
}
}
}
}
}
Incrementing the key-value pair count
If there were only a single size variable updated by CAS on every put, concurrency would still be safe, but under heavy contention many threads would spin on the same CAS. The author therefore applies a sharding idea: a CounterCell array lets different threads CAS different slots, and the size() method simply sums baseCount plus the value of every slot.
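The sharding idea can be sketched with plain atomics. This is a simplified stand-in for baseCount + counterCells (the real code uses Unsafe CAS, a lazily grown CounterCell[], and the thread's probe value rather than a random index):

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLong;

// A minimal sketch of the striped-counter design behind addCount/sumCount
public class StripedCounter {
    private final AtomicLong base = new AtomicLong();
    private final AtomicLong[] cells;

    StripedCounter(int stripes) {
        cells = new AtomicLong[stripes];
        for (int i = 0; i < stripes; i++) cells[i] = new AtomicLong();
    }

    void increment() {
        // Fast path: try the shared base first, like the baseCount CAS
        long b = base.get();
        if (!base.compareAndSet(b, b + 1)) {
            // Contended: fall back to one of the cells, like addCount does
            int idx = ThreadLocalRandom.current().nextInt(cells.length);
            cells[idx].incrementAndGet();
        }
    }

    // Mirrors sumCount(): base plus every cell
    long sum() {
        long s = base.get();
        for (AtomicLong c : cells) s += c.get();
        return s;
    }

    public static void main(String[] args) throws InterruptedException {
        StripedCounter counter = new StripedCounter(4);
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) counter.increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(counter.sum()); // 40000
    }
}
```

This is the same design the JDK later extracted into LongAdder: the sum is not a single atomic snapshot, but every increment lands somewhere, so the total is exact once writers quiesce.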
// Increment the key-value pair count
private final void addCount(long x, int check) {
CounterCell[] as; long b, s;
// 1. If counterCells is null, try to CAS baseCount directly
// 2. If the baseCount CAS fails, there is contention, so count via CounterCell instead
if ((as = counterCells) != null ||
!U.compareAndSwapLong(this, BASECOUNT, b = baseCount, s = b + x)) {
CounterCell a; long v; int m;
boolean uncontended = true;
// 1. If the CounterCell table is null, call fullAddCount directly
// 2. If the randomly chosen cell in the table is null, call fullAddCount
// 3. Try to CAS the value of the random cell; a failure means a conflict occurred
if (as == null || (m = as.length - 1) < 0 ||
(a = as[ThreadLocalRandom.getProbe() & m]) == null ||
!(uncontended =
U.compareAndSwapLong(a, CELLVALUE, v = a.value, v + x))) {
fullAddCount(x, uncontended);
return;
}
// The length of the linked list is less than or equal to 1
if (check <= 1)
return;
// Count the number of key pairs
s = sumCount();
}
// If the length of the list is greater than 0, check the capacity expansion
if (check >= 0) {
Node<K,V>[] tab, nt; int n, sc;
// The collection size is greater than or equal to the expansion threshold and the array length is smaller than the maximum capacity
while (s >= (long)(sc = sizeCtl) && (tab = table) != null &&
(n = tab.length) < MAXIMUM_CAPACITY) {
// Generate a unique expansion stamp
int rs = resizeStamp(n);
// if sc < 0, another thread is being expanded
if (sc < 0) {
// 1. Check whether the version number of the current capacity expansion stamp is consistent with that of the capacity expansion thread
// 2. Sc == RS + 1 Indicates that capacity expansion is complete
// 3. sc == RS + MAX_RESIZERS Indicates that the number of threads helping capacity expansion has reached its maximum
// 4. (nt = nextTable) == NULL Indicates that the array used for expansion is NULL, indicating that expansion is complete
// 5. TransferIndex <= 0 indicates that all capacity expansion tasks are completed
if ((sc >>> RESIZE_STAMP_SHIFT) != rs || sc == rs + 1 ||
sc == rs + MAX_RESIZERS || (nt = nextTable) == null ||
transferIndex <= 0)
break;
// CAS sizeCtl + 1 to join the expansion task
if (U.compareAndSwapInt(this, SIZECTL, sc, sc + 1))
// Perform capacity expansion
transfer(tab, nt);
}
// No other thread is expanding capacity. Create a capacity expansion task through the CAS
else if (U.compareAndSwapInt(this, SIZECTL, sc,
(rs << RESIZE_STAMP_SHIFT) + 2))
// Capacity expansion starts
transfer(tab, null);
// Recount the number of key-value pairs
s = sumCount();
}
}
}
I’m going to show you what fullAddCount does
Initialize or expand the counter table
// Initialize or expand the count table
private final void fullAddCount(long x, boolean wasUncontended) {
int h;
// If the current thread generated a random number of 0, then generate a new random number
if ((h = ThreadLocalRandom.getProbe()) == 0) {
// Force initialization
ThreadLocalRandom.localInit(); // force initialization
// Get a random number
h = ThreadLocalRandom.getProbe();
// Due to regenerating random numbers, the unconflicted bit is set to true
wasUncontended = true;
}
boolean collide = false; // True if last slot nonempty
/ / spin
for (;;) {
CounterCell[] as; CounterCell a; int n; long v;
// as != null means the table has already been initialized
if ((as = counterCells) != null && (n = as.length) > 0) {
// Get the node subscript by ampersand the random number generated by the current thread
if ((a = as[(n - 1) & h]) == null) {
// Indicates that the count table is not being initialized or expanded
if (cellsBusy == 0) { // Try to attach new Cell
// Construct a CounterCell, passing in the number of elements
CounterCell r = new CounterCell(x); // Optimistic create
// Change cellsBusy to prevent concurrent operations on the count table by other threads
if (cellsBusy == 0 &&
U.compareAndSwapInt(this, CELLSBUSY, 0, 1)) {
boolean created = false;
try { // Recheck under lock
CounterCell[] rs; int m, j;
// Place the initialized r object in the specified location
if ((rs = counterCells) != null &&
(m = rs.length) > 0 &&
rs[j = (m - 1) & h] == null) {
rs[j] = r;
created = true;
}
} finally {
// Restore the flag bit
cellsBusy = 0;
}
// Exit the loop if creation succeeded
if (created)
break;
// The slot is no longer empty; try the next iteration
continue; // Slot is now non-empty
}
}
collide = false;
}
// CAS failed in the addCount method and the probe value is not stale
else if (!wasUncontended) // CAS already known to fail
// Mark as uncontended and rehash on the next spin
wasUncontended = true; // Continue after rehash
// The value of the specified position is not null, so it is accumulated through CAS
else if (U.compareAndSwapLong(a, CELLVALUE, v = a.value, v + x))
break;
// 1. If other threads have already created counterCells
// 2. The capacity of counterCells is greater than the number of CPU cores, which is actually quite clever because the number of concurrent threads does not exceed the number of CPU cores
else if (counterCells != as || n >= NCPU)
// Stale table or already at max size; don't expand further
collide = false; // At max size or stale
// Mark that the next conflict should trigger an expansion
else if (!collide)
collide = true;
// If the thread can execute to this point, it indicates that the current thread is in serious contention, so perform the expansion of counterCells
else if (cellsBusy == 0 &&
U.compareAndSwapInt(this, CELLSBUSY, 0, 1)) {
try {
if (counterCells == as) { // Expand table unless stale
// Shift left by 1 bit == capacity * 2
CounterCell[] rs = new CounterCell[n << 1];
for (int i = 0; i < n; ++i)
rs[i] = as[i];
counterCells = rs;
}
} finally {
// Restore the flag bit
cellsBusy = 0;
}
collide = false;
// Continue with the next loop
continue; // Retry with expanded table
}
h = ThreadLocalRandom.advanceProbe(h);
}
// counterCells are null
// 1.cellsBusy == 0 indicates that no thread operates cellsBusy
// 2. Use the CAS to change the identifier bit to prepare for the initialization of counterCells
else if (cellsBusy == 0 && counterCells == as &&
U.compareAndSwapInt(this, CELLSBUSY, 0, 1)) {
boolean init = false;
try { // Initialize table
if (counterCells == as) {
// The initial capacity is 2
CounterCell[] rs = new CounterCell[2];
rs[h & 1] = new CounterCell(x);
// Publish the new table
counterCells = rs;
// Set the initialization-complete flag
init = true;
}
} finally {
// Restore the flag bit
cellsBusy = 0;
}
// Initialization succeeds, and exits the loop
if (init)
break;
}
// If the competition is still fierce, use the last guarantee method, directly add on the baseCount
else if (U.compareAndSwapLong(this, BASECOUNT, v = baseCount, v + x))
break; // Fall back on using base
}
}
Dizzy yet? Next I'll take you through the concrete expansion process — watch out for carsickness!
Expansion is one of the essences of ConcurrentHashMap, and its core is data migration. In a single-threaded scenario, transfer is simple: move the data from the old array into the new one. In a multithreaded scenario, however, other threads may be adding elements while the expansion runs. A mutex could prevent the concurrency issues, but it would cripple ConcurrentHashMap's throughput, so instead of locking, ConcurrentHashMap adopts a lock-free CAS strategy. The most essential part is that multiple threads can expand together: each thread migrates a portion of the data, achieving multithreaded expansion.
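The way transfer() hands out work can be sketched as follows: threads repeatedly CAS transferIndex down by `stride`, each successful CAS claiming the bucket range [transferIndex - stride, transferIndex). The class and method names here are illustrative, single-threaded stand-ins for that CAS loop:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class TransferRangeDemo {
    // Claim every stride-sized range of an old table of length n,
    // the same way transfer() CASes transferIndex downward
    static List<int[]> claimAll(int n, int stride) {
        AtomicInteger transferIndex = new AtomicInteger(n);
        List<int[]> claims = new ArrayList<>();
        int nextIndex;
        while ((nextIndex = transferIndex.get()) > 0) {
            int nextBound = (nextIndex > stride) ? nextIndex - stride : 0;
            if (transferIndex.compareAndSet(nextIndex, nextBound))
                claims.add(new int[]{nextBound, nextIndex}); // [bound, index)
        }
        return claims;
    }

    public static void main(String[] args) {
        // A 64-bucket table with stride 16 is handed out in four chunks,
        // claimed from the back of the array toward the front
        for (int[] c : claimAll(64, 16))
            System.out.println("claimed buckets [" + c[0] + ", " + c[1] + ")");
    }
}
```

Because a claim is a single CAS, any number of threads can safely grab disjoint ranges, which is exactly how helping threads join an ongoing expansion.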
// Here is the actual implementation of the expansion
private final void transfer(Node<K,V>[] tab, Node<K,V>[] nextTab) {
int n = tab.length, stride;
// Compute the per-task bucket count: (n >>> 3) = n/8, divided by the CPU count,
// so each CPU handles a similar number of buckets. If the stride comes out too
// small, fall back to the minimum, so a small table is handled by a single thread
if ((stride = (NCPU > 1) ? (n >>> 3) / NCPU : n) < MIN_TRANSFER_STRIDE)
stride = MIN_TRANSFER_STRIDE; // subdivide range
// Indicates that the array used for expansion is null and initializes the array
if (nextTab == null) { // initiating
try {
@SuppressWarnings("unchecked")
// Create an array with twice the current capacity
Node<K,V>[] nt = (Node<K,V>[])new Node<?,?>[n << 1];
nextTab = nt;
} catch (Throwable ex) { // try to cope with OOME
// If capacity expansion fails, sizeCtl uses int maximum value
sizeCtl = Integer.MAX_VALUE;
return;
}
// Update the member scalar
nextTable = nextTab;
transferIndex = n;
}
// The capacity of the new array
int nextn = nextTab.length;
// Create a ForwardingNode, whose hash is MOVED (-1), to mark a migrated bucket. If this feels like deja vu, it is exactly the condition the put method checks to detect an expansion in progress
ForwardingNode<K,V> fwd = new ForwardingNode<K,V>(nextTab);
// Can we move forward
boolean advance = true;
// Whether capacity expansion is complete
boolean finishing = false; // to ensure sweep before committing nextTab
// If you have questions about the details, leave a comment and I will reply
// Migration walks the array from back to front, bucket by bucket
for (int i = 0, bound = 0;;) {
Node<K,V> f; int fh;
while (advance) {
int nextIndex, nextBound;
if (--i >= bound || finishing)
advance = false;
else if ((nextIndex = transferIndex) <= 0) {
i = -1;
advance = false;
}
else if (U.compareAndSwapInt
(this, TRANSFERINDEX, nextIndex,
nextBound = (nextIndex > stride ?
nextIndex - stride : 0))) {
bound = nextBound;
i = nextIndex - 1;
advance = false;
}
}
// i < 0 means the current thread has finished traversing the old array
if (i < 0 || i >= n || i + n >= nextn) {
int sc;
// If capacity expansion is complete
if (finishing) {
// Delete the member variables that appear during capacity expansion
nextTable = null;
table = nextTab;
// Update the next expansion threshold
sizeCtl = (n << 1) - (n >>> 1);
return;
}
// Remember that each thread that helped expansion incremented sizeCtl by 1,
// so decrement it by 1 when this thread finishes its share
if (U.compareAndSwapInt(this, SIZECTL, sc = sizeCtl, sc - 1)) {
// Only the first expanding thread added 2; every helper added 1.
// If sc - 2 does not equal the original stamp shifted into the high bits,
// other threads are still migrating, so just return
if ((sc - 2) != resizeStamp(n) << RESIZE_STAMP_SHIFT)
return;
// Otherwise this is the last thread: the expansion is complete
finishing = advance = true;
i = n; // recheck before commit
}
}
// If there is no node at this position, plant the fwd node
else if ((f = tabAt(tab, i)) == null)
advance = casTabAt(tab, i, null, fwd);
// Indicates that the location has been migrated
else if ((fh = f.hash) == MOVED)
advance = true; // already processed
// Here are the details of the migration
else {
// Lock the head node
synchronized (f) {
if (tabAt(tab, i) == f) {
Node<K,V> ln, hn;
// The hash value is positive, as shown above, where the current node is a linked list node
if (fh >= 0) {
// The following logic is similar to that of a hashMap, which builds a high-low chain. Since the array length has been expanded, some keys may be rehashed at the current location
int runBit = fh & n;
Node<K,V> lastRun = f;
for (Node<K,V> p = f.next; p != null; p = p.next) {
int b = p.hash & n;
if (b != runBit) {
runBit = b;
lastRun = p;
}
}
if (runBit == 0) {
ln = lastRun;
hn = null;
}
else {
hn = lastRun;
ln = null;
}
// loop lists to build high and low chains
for (Node<K,V> p = f; p != lastRun; p = p.next) {
int ph = p.hash; K pk = p.key; V pv = p.val;
if ((ph & n) == 0)
ln = new Node<K,V>(ph, pk, pv, ln);
else
hn = new Node<K,V>(ph, pk, pv, hn);
}
// Write the low and high chains into the new array, and plant the fwd node in the old one
setTabAt(nextTab, i, ln);
setTabAt(nextTab, i + n, hn);
setTabAt(tab, i, fwd);
advance = true;
}
// If the current node is a red-black tree node
else if (f instanceof TreeBin) {
TreeBin<K,V> t = (TreeBin<K,V>)f;
// Build the upper and lower trees
TreeNode<K,V> lo = null, loTail = null;
TreeNode<K,V> hi = null, hiTail = null;
int lc = 0, hc = 0;
for (Node<K,V> e = t.first; e != null; e = e.next) {
int h = e.hash;
TreeNode<K,V> p = new TreeNode<K,V>
(h, e.key, e.val, null, null);
if ((h & n) == 0) {
if ((p.prev = loTail) == null)
lo = p;
else
loTail.next = p;
loTail = p;
++lc;
}
else {
if ((p.prev = hiTail) == null)
hi = p;
else
hiTail.next = p;
hiTail = p;
++hc;
}
}
// Place them at different positions in the target array,
// degrading back to a linked list if they are small enough
ln = (lc <= UNTREEIFY_THRESHOLD) ? untreeify(lo) :
(hc != 0) ? new TreeBin<K,V>(lo) : t;
hn = (hc <= UNTREEIFY_THRESHOLD) ? untreeify(hi) :
(lc != 0) ? new TreeBin<K,V>(hi) : t;
setTabAt(nextTab, i, ln);
setTabAt(nextTab, i + n, hn);
setTabAt(tab, i, fwd);
advance = true;
}
}
}
}
}
}
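As a sanity check on the high/low split used above: since the table length n is a power of two, a node either stays at its old index i or moves to i + n, depending only on bit (hash & n) — no rehash of the key is needed. A tiny illustrative sketch (class and method names are mine):

```java
public class SplitDemo {
    // Where a node with the given hash lands after doubling a table of length n
    static int newIndex(int hash, int n) {
        int i = (n - 1) & hash;              // old bucket index
        return ((hash & n) == 0) ? i : i + n; // low chain stays, high chain moves
    }

    public static void main(String[] args) {
        int n = 16; // old table length
        // hash 5 -> old bucket 5; bit 16 is 0, so it stays at 5
        System.out.println(newIndex(5, n));  // 5
        // hash 21 (= 16 + 5) -> old bucket 5; bit 16 is 1, so it moves to 5 + 16
        System.out.println(newIndex(21, n)); // 21
    }
}
```

This is why transfer() only needs to build two chains per bucket (ln and hn) and write them to slots i and i + n of the new array.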
conclusion
This is the end of ConcurrentHashMap, and I think we should give ourselves a round of applause for making it this far. The thread-safe ConcurrentHashMap really is far more complicated under the hood than HashMap, with a great deal of extra work done to solve concurrency problems.
We have seen many clever designs by the author: the counter table used to track the number of key-value pairs, the sharding idea used to relieve contention, and multithreaded expansion to shorten resize time. If we were the designers of ConcurrentHashMap, could we have come up with these ways to reduce thread contention and improve throughput? Can we apply these ideas in our daily development? Thinking about the source code after reading it is, I believe, the key to growing as an engineer.
I know reading source code can be boring, even tedious, but I feel we should stick with it. Knowing not just the what but the why, and absorbing the author's ideas, will improve your own code — you'll find yourself reaching for them in your daily development. I hope every developer keeps at it; let's keep going together!
Finally, let me end this article with one sentence: keep working hard, and years from now you will thank yourself for the effort!
I'm the guy who loves writing code. See you next time!