Background
1. Thread-unsafe HashMap
Calling put on a HashMap from multiple threads can trigger an infinite loop (during resizing), driving CPU utilization toward 100%, so HashMap cannot be used in a concurrent environment.
2. Inefficient Hashtables
The HashTable container uses synchronized to guarantee thread safety, but under contention it is very inefficient: while one thread is executing one of HashTable's synchronized methods, any other thread calling a synchronized method is blocked or left polling. For example, while thread A is adding an element with put, thread B can neither add an element with put nor fetch one with get. The fiercer the contention, the lower the throughput. In other words, HashTable's synchronized covers the whole table: every operation locks the entire table and lets a single thread monopolize it, so all reads and writes compete for one lock, which is very inefficient.
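To make the whole-table lock concrete, here is a minimal sketch (illustrative only, not the JDK Hashtable source) of a table whose methods all synchronize on the same monitor, so a reader must wait for any writer even when they touch unrelated keys:

import java.util.HashMap;

// Minimal sketch of a fully synchronized table (not the real Hashtable implementation).
// Every method locks the same monitor ("this"), so put and get from different threads
// always serialize, no matter which keys they touch.
class NaiveSynchronizedTable<K, V> {
    private final HashMap<K, V> map = new HashMap<>();

    public synchronized V put(K key, V value) { // locks the whole table
        return map.put(key, value);
    }

    public synchronized V get(Object key) {     // blocked while any put is in progress
        return map.get(key);
    }
}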
Lock segmentation technique for ConcurrentHashMap
The reason the HashTable container is inefficient under concurrent contention is that every thread accessing it must compete for the same lock. If the container instead holds multiple locks, each guarding one portion of its data, then threads accessing different data segments never contend for a lock, which greatly improves concurrent access. This is the lock segmentation technique used by ConcurrentHashMap: the data is stored in segments, and each segment is assigned its own lock. While one thread holds a lock to access one segment, the other segments remain accessible to other threads. In addition, ConcurrentHashMap can read data without locking, and its internal structure keeps the lock granularity of write operations as small as possible, never locking the entire ConcurrentHashMap.
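The idea can be sketched independently of ConcurrentHashMap's actual implementation: keep an array of locks, route each key to one shard by its hash, and lock only that shard. This is a simplified lock-striping sketch; the class and method names are made up for illustration:

import java.util.HashMap;
import java.util.concurrent.locks.ReentrantLock;

// Simplified lock-striping sketch (illustrative only): each shard owns a map and a lock,
// so threads working on different shards never block each other.
class StripedMap<K, V> {
    private static final int SHARDS = 16;              // power of two
    private final HashMap<K, V>[] tables;
    private final ReentrantLock[] locks = new ReentrantLock[SHARDS];

    @SuppressWarnings("unchecked")
    StripedMap() {
        tables = (HashMap<K, V>[]) new HashMap[SHARDS];
        for (int i = 0; i < SHARDS; i++) {
            tables[i] = new HashMap<>();
            locks[i] = new ReentrantLock();
        }
    }

    private int shardFor(Object key) {
        return (key.hashCode() & 0x7fffffff) % SHARDS; // choose a shard by hash
    }

    public V put(K key, V value) {
        int i = shardFor(key);
        locks[i].lock();                               // lock only this shard
        try {
            return tables[i].put(key, value);
        } finally {
            locks[i].unlock();
        }
    }

    public V get(Object key) {
        int i = shardFor(key);
        locks[i].lock();                               // real ConcurrentHashMap avoids locking reads
        try {
            return tables[i].get(key);
        } finally {
            locks[i].unlock();
        }
    }
}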
ConcurrentHashMap is built from two data structures: Segment and HashEntry. A Segment is a ReentrantLock subclass that acts as a lock inside ConcurrentHashMap, and HashEntry stores the key-value pairs. A ConcurrentHashMap contains an array of Segments, and each Segment is structured like a HashMap: it contains an array of HashEntry chains and guards the elements of that array. To modify a Segment's HashEntry array, a thread must first acquire that Segment's lock.
The internal structure of ConcurrentHashMap
ConcurrentHashMap improves its concurrency by using an internal structure called a Segment. A Segment is structured much like a HashTable: it maintains an array of linked lists. The following diagram shows the internal structure of ConcurrentHashMap:
From this structure we can see that ConcurrentHashMap hashes an element twice: the first hash locates the Segment, and the second hash locates the head of the linked list in which the element resides. The downside is that hashing takes longer than in a regular HashMap, but the benefit is that a write only needs to lock the Segment the element belongs to and does not affect the other Segments. In the best case, ConcurrentHashMap can therefore support as many simultaneous writes as it has Segments, which greatly improves its concurrency.
1. Segment
Let’s look at the Segment data structure:
static final class Segment<K,V> extends ReentrantLock implements Serializable {
    transient volatile int count;
    transient int modCount;
    transient int threshold;
    transient volatile HashEntry<K,V>[] table;
    final float loadFactor;
}
The member variables in Segment have the following meanings:
count: the number of elements in this Segment
modCount: the number of operations that change the structure of the table (such as put or remove)
threshold: the threshold at which the table is rehashed
table: an array of linked lists; each element of the array is the head of a hash chain
loadFactor: the load factor, used to compute the threshold
count records the number of elements in the Segment. It is a volatile variable used to coordinate modifications and reads so that a read observes (almost) the most recent modification. The coordination works like this: every structural change, such as adding or removing a node (changing a node's value is not a structural change), finishes by writing count, and every read operation begins by reading count. This exploits the enhanced volatile semantics introduced in Java 5, where a write and a subsequent read of the same volatile variable form a happens-before relationship. modCount counts the structural changes of the Segment, mainly so that operations traversing multiple Segments can detect whether any Segment changed during the traversal. threshold is the value that triggers a rehash. The table array stores the nodes of the Segment; each slot is a hash chain built from HashEntry nodes. table is also volatile, so the latest table reference can be read without synchronization. loadFactor is the load factor.
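The "write the volatile last, read it first" idiom can be sketched on its own (a generic illustration of the pattern, not the JDK Segment code):

// Generic sketch of piggybacking on a volatile field for visibility (not the JDK code).
// The writer performs its plain writes first and publishes by writing the volatile count;
// the reader reads count first, so the happens-before edge makes the earlier writes visible.
class VolatileCoordinationSketch {
    private final Object[] data = new Object[16]; // plain (non-volatile) state
    private volatile int count = 0;               // volatile guard, written last / read first

    void add(Object x) {                          // writer; assume writers are externally locked
        data[count] = x;                          // 1. structural change (plain write)
        count = count + 1;                        // 2. volatile write published last
    }

    Object last() {                               // lock-free reader
        int c = count;                            // 1. volatile read establishes the edge
        return (c == 0) ? null : data[c - 1];     // 2. sees the writer's earlier plain writes
    }
}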
2. HashEntry
The elements in a Segment are stored as HashEntry nodes in an array of linked lists.
static final class HashEntry<K,V> {
    final K key;
    final int hash;
    volatile V value;
    final HashEntry<K,V> next;
}
A notable feature of HashEntry is that every field except value is final. This means nodes cannot be inserted into or removed from the middle or tail of a hash chain, because that would require changing a next reference; all modifications happen at the head. A put therefore always adds the new node at the head of the chain. A remove, however, may need to delete a node from the middle of the chain; in that case every node in front of the node being removed is copied, and the last copied node is made to point to the node after the removed one. To guarantee that reads see the latest value, value is declared volatile, which avoids taking a lock.
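Head insertion under these constraints can be sketched as follows (a hedged sketch, not the full JDK put, which also updates count and handles rehashing; it assumes a constructor of the form HashEntry(key, hash, next, value)):

// Sketch of head insertion given the immutable 'next' field. The new node is built
// pointing at the current head, then the bucket slot is switched to the new node;
// existing nodes are never modified. The constructor order is assumed, not taken from the JDK.
static <K, V> void insertAtHead(HashEntry<K, V>[] tab, int index, K key, int hash, V value) {
    HashEntry<K, V> oldHead = tab[index];                        // current head, may be null
    tab[index] = new HashEntry<K, V>(key, hash, oldHead, value); // new node links to old head
}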
Initialization of ConcurrentHashMap
ConcurrentHashMap is initialized through its constructor:
public ConcurrentHashMap(int initialCapacity, float loadFactor, int concurrencyLevel) {
    if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
        throw new IllegalArgumentException();
    if (concurrencyLevel > MAX_SEGMENTS)
        concurrencyLevel = MAX_SEGMENTS;
    // Find power-of-two sizes best matching arguments
    int sshift = 0;
    int ssize = 1;
    while (ssize < concurrencyLevel) {
        ++sshift;
        ssize <<= 1;
    }
    segmentShift = 32 - sshift;
    segmentMask = ssize - 1;
    this.segments = Segment.newArray(ssize);
    if (initialCapacity > MAXIMUM_CAPACITY)
        initialCapacity = MAXIMUM_CAPACITY;
    int c = initialCapacity / ssize;
    if (c * ssize < initialCapacity)
        ++c;
    int cap = 1;
    while (cap < c)
        cap <<= 1;
    for (int i = 0; i < this.segments.length; ++i)
        this.segments[i] = new Segment<K,V>(cap, loadFactor);
}
The ConcurrentHashMap constructor takes three parameters: initialCapacity, the initial capacity; loadFactor, the load factor; and concurrencyLevel, the number of internal Segments. Once the concurrency level is set it cannot be changed: if the number of elements grows and ConcurrentHashMap needs to expand, it does not add Segments but instead enlarges the linked-list array inside a Segment, so only the elements of that Segment need to be rehashed.
The capacity of each Segment's table must be a power of 2 (at least 1); by default both the initial capacity and the concurrency level are 16.
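Here is a worked example of the sizing math from the constructor above (the class name is just for the example):

// Reproduces the constructor's sizing math for new ConcurrentHashMap(64, 0.75f, 16).
public class SizingExample {
    public static void main(String[] args) {
        int concurrencyLevel = 16, initialCapacity = 64;
        int sshift = 0, ssize = 1;
        while (ssize < concurrencyLevel) { ++sshift; ssize <<= 1; } // ssize = 16, sshift = 4
        int segmentShift = 32 - sshift;   // 28
        int segmentMask = ssize - 1;      // 15 (binary 1111)
        int c = initialCapacity / ssize;  // 64 / 16 = 4
        if (c * ssize < initialCapacity) ++c;
        int cap = 1;
        while (cap < c) cap <<= 1;        // each Segment's table capacity = 4
        System.out.println(segmentShift + " " + segmentMask + " " + cap); // prints: 28 15 4
    }
}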
ConcurrentHashMap get operation
The ConcurrentHashMap get operation does not take a lock.
public V get(Object key) {
    int hash = hash(key.hashCode());
    return segmentFor(hash).get(key, hash);
}
The second line re-hashes the key's hashCode. The purpose of this second hash is to reduce hash collisions and distribute elements evenly across the Segments, which improves the container's access efficiency.
Line 3 uses the segmentFor function to determine which Segment the operation should run on; almost every operation on ConcurrentHashMap goes through this function.
final Segment<K,V> segmentFor(int hash) {
    return segments[(hash >>> segmentShift) & segmentMask];
}
This function locates the Segment with bitwise operations: the hash value is shifted right (unsigned) by segmentShift bits and then ANDed with segmentMask. Assuming the number of Segments is 2 to the power of n, this means an element's Segment is determined by the top n bits of its hash value.
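For example, with the default 16 Segments, segmentShift is 28 and segmentMask is 15, so the top 4 bits of the re-hashed value pick the Segment (the hash value below is arbitrary):

// Segment selection via the top bits of the hash (values assume 16 Segments).
public class SegmentIndexExample {
    public static void main(String[] args) {
        int segmentShift = 28;            // 32 - log2(16)
        int segmentMask = 15;             // 16 - 1
        int hash = 0xA4C11B73;            // some re-hashed value
        int segmentIndex = (hash >>> segmentShift) & segmentMask;
        System.out.println(segmentIndex); // top 4 bits are 0xA, so prints 10
    }
}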
Once you’ve determined which segment to operate on, the next thing to do is call the corresponding segment’s get method:
V get(Object key, int hash) {
    if (count != 0) {            // read-volatile  ①
        HashEntry<K,V> e = getFirst(hash);
        while (e != null) {
            if (e.hash == hash && key.equals(e.key)) {
                V v = e.value;
                if (v != null)   // ②
                    return v;
                return readValueUnderLock(e); // recheck
            }
            e = e.next;
        }
    }
    return null;
}
Let's start with the second line, the check at ①: count is the number of elements in the Segment.
transient volatile int count;
As you can see, count is volatile, and the code relies on the volatile semantics:
A write to a volatile field happens-before every subsequent read of that same field.
Because operations such as put and remove update count, these semantics guarantee that when a modification races with a get, the write to count is visible to the subsequent read, so the get can see the complete element contents.
Then, on the third line, getFirst() is called to get the list header:
HashEntry<K,V> getFirst(int hash) {
    HashEntry<K,V>[] tab = table;
    return tab[hash & (tab.length - 1)];
}
Again, a bitwise operation locates the head of the list: the hash is ANDed with the table length minus one, which keeps the lowest n bits of the hash value, where n is the base-2 logarithm of the table length.
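A tiny example of that index computation (the table length and hash value are arbitrary):

// Bucket index via the low bits of the hash.
public class BucketIndexExample {
    public static void main(String[] args) {
        int tableLength = 16;                       // always a power of two
        int hash = 0xA4C11B73;
        int bucketIndex = hash & (tableLength - 1); // keeps the lowest 4 bits
        System.out.println(bucketIndex);            // lowest 4 bits are 0x3, prints 3
    }
}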
If the value read is null, the key-value pair may be in the middle of being put by another thread; in that case the Segment lock is taken and the value is re-read to make sure a complete value is returned. If it is not null, the value is returned directly.
So the get method does not synchronize with a lock; it only checks whether the entry's value is null, and only if it is null does it take the lock and read the entry again.
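For reference, the re-read under the lock is a tiny helper; in the JDK 6 source it looks roughly like this (quoted from memory, so treat it as a sketch):

// Re-read e.value while holding the Segment's lock (Segment extends ReentrantLock),
// guaranteeing that a fully constructed entry is visible.
V readValueUnderLock(HashEntry<K,V> e) {
    lock();
    try {
        return e.value;
    } finally {
        unlock();
    }
}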
This implementation is very subtle: without lock-based synchronization, how is correctness guaranteed? Let's walk through it step by step.
The first step is the check count != 0. The count variable is the number of entries in the segment; if it is 0, there is nothing to look up. But if another thread is putting or removing an entry in this segment, could the two threads see inconsistent values of count? Look again at the definition of count:
transient volatile int count;
It is declared volatile. Since Java 5, the JMM guarantees that a write to a volatile field happens-before every subsequent read of the same field. So every time count is checked, the latest value is observed, even if another thread has just modified the segment.
The second step is to compute the bucket index within the segment for the key and, if several entries hash to that bucket, to walk down the chain comparing entries until the match is found. Once the entry is found, the check if (v != null) at ② is performed. Why? What happens if another thread adds or deletes an entry, or changes an entry's value, in the meantime?
As mentioned above, every field of HashEntry except value is final. In other words, value can change, but nothing else can, including the next reference that points to the following HashEntry.
(1) Another thread adds an entry between ① and ② of the get code. If the entry being added is the very one we want to get, things become more subtle. The following diagram outlines the process of putting a new entry.
Since every next in a HashEntry is final, there is no way to append behind the last element of the list, so the only way to insert an entry is at the head.
The new entry is created with HashEntry's constructor, which takes the key, hash, next reference, and value. If another thread happens to be constructing this object at the moment the current thread reads it, the current thread may obtain a reference to an object that has not been fully constructed: without lock-based synchronization there is no guarantee of safe publication, so another thread can see the object reference before its constructor has finished running. That is why the check if (v != null) is needed: if the object really is incomplete, the value is fetched again under the lock.
Could an entry with a null value have been put in the first place? No: put checks for that and throws an exception, so the check at ② exists purely to handle a partially constructed object being seen by another thread.
(2) Between ① and ② of the get code, another thread modifies the value of an entry. value is declared volatile, which guarantees that the modified value is seen when it is read.
(3) After ① of the get code, another thread deletes an entry.
Suppose the linked list is e1 -> e2 -> e3 -> e4 and we want to delete e3. Because next in HashEntry is immutable, we cannot simply point e2's next at e4. Instead, the nodes in front of the node being deleted are copied into a new list. A sketch of the implementation follows:
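This is a hedged sketch modeled on the copy loop in JDK 6's Segment.remove (the constructor order is assumed, and bookkeeping such as count and modCount updates and locking is omitted):

// Copy the predecessors of the node being removed, re-linking the copies in front of
// e.next; the nodes after the removed node are reused unchanged. Returns the new head.
HashEntry<K,V> removeFromChain(HashEntry<K,V> first, HashEntry<K,V> e) {
    HashEntry<K,V> newFirst = e.next;                   // keep everything after e
    for (HashEntry<K,V> p = first; p != e; p = p.next)  // walk e's predecessors
        newFirst = new HashEntry<K,V>(p.key, p.hash, newFirst, p.value); // clone in front
    return newFirst;  // e1 -> e2 -> e3 -> e4 becomes e2' -> e1' -> e4
}

Note that the copied predecessors end up in reverse order in front of e.next; this is harmless because the order of a hash chain carries no meaning.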
If our get is also looking for e3, it may happen that just as we reach e1, another thread deletes e3; our thread then keeps walking the old list, reaches e3, and returns it. There is no way to guarantee real-time visibility here.
Reading count at ① guarantees that we see the changes other threads made before ①. If another thread deletes the entry between ① and ②, we are not guaranteed to see the newest list. But that is acceptable: even if e3 has just been deleted by another thread when we return it, the stale e3 does not affect the newly built list.
This is, in effect, an optimistic design. The designers assumed that additions, deletions, and modifications by other threads between ① and ② are unlikely, so instead of synchronizing they compensate after the fact: other threads may well operate during that window and an unsafe situation may arise, but since another thread's "modify" and "delete" do not harm our result, only the "add" case needs a check, namely the non-null test at ②. If an unsafe situation is confirmed, the value is fetched again with the lock. This reduces the performance cost that a mutex would impose on concurrency. Some may wonder whether copying the list in remove is too expensive. I have not dug into this deeply, but since Java 5 chose this implementation, I suspect the cost of allocating new objects is not as severe as it was once thought to be.