This is a list of Java collections interview questions, covering ArrayList, LinkedList, HashMap, Hashtable, ConcurrentHashMap, and more.
1. What are the common collections?
Java collection classes derive mainly from two root interfaces: Collection and Map. Collection has three subinterfaces: List, Set, and Queue (Queue was added in Java 5). Java collections can therefore be roughly divided into the List, Set, Queue, and Map interface systems.
Note: Collection is an interface, Collections is a utility class, and Map is not a subinterface of Collection.
The Java collection framework diagram is as follows:
In the figure, List represents an ordered collection that allows duplicates and supports direct access by element index; Set represents an unordered collection that does not allow duplicates and can only be accessed through the elements themselves; Queue represents a queue.
Map represents a collection of key-value pairs whose elements are accessed by key.
The light green background in the figure marks the common implementation classes in the collection system, namely ArrayList, LinkedList, ArrayDeque, HashSet, TreeSet, HashMap, TreeMap, etc.
2. Which collections are thread-safe? Which are not?
Thread-safe:
- Hashtable: a thread-safe counterpart of HashMap.
- ConcurrentHashMap: an efficient, thread-safe map.
- Vector: like ArrayList, but with synchronized methods.
- Stack: also thread-safe; it extends Vector.
Not thread-safe:
- HashMap
- ArrayList
- LinkedList
- HashSet
- TreeSet
- TreeMap
3. Similarities and differences between ArrayList and LinkedList?
- Thread safety: neither ArrayList nor LinkedList is synchronized, i.e., neither is thread-safe.
- Underlying data structures: ArrayList uses an Object array; LinkedList uses a doubly linked list (before JDK 1.7 it was a circular doubly linked list).
- Whether insert and delete are affected by element position: ArrayList is array-backed, so the time complexity of inserting and deleting depends on position. For example, add(E e) appends the element to the end of the list in O(1) time, but inserting or deleting at position i with add(int index, E element) takes O(n-i), because the (n-i) elements after position i all have to shift by one. LinkedList is list-backed, so inserting or deleting at the head or tail is roughly O(1), while the same operation on an ArrayList is roughly O(n).
- Whether fast random access is supported: LinkedList does not support efficient random access, whereas ArrayList implements the RandomAccess marker interface, so it does (see the check after this list). Fast random access means retrieving an element by its index, via the get(int index) method.
- Memory footprint: ArrayList wastes space mainly in the capacity reserved at the end of its array, whereas LinkedList pays a per-element cost, since each node stores a predecessor and a successor pointer in addition to the data.
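As a quick check of the RandomAccess point, a minimal sketch (the class name is just for illustration):
import java.util.*;

public class RandomAccessCheck {
    public static void main(String[] args) {
        List<Integer> arrayList = new ArrayList<>();
        List<Integer> linkedList = new LinkedList<>();
        // ArrayList implements the RandomAccess marker interface; LinkedList does not
        System.out.println(arrayList instanceof RandomAccess);  // true
        System.out.println(linkedList instanceof RandomAccess); // false
    }
}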
4. Is ArrayList different from Vector?
- Vector is thread-safe; ArrayList is not. Vector puts the synchronized keyword on its key methods to guarantee thread safety. If multiple threads access the collection, Vector spares us from writing thread-safe code ourselves.
- When the underlying array runs out of room, ArrayList grows by half (to 1.5 times its capacity), while Vector doubles (to 2 times), so ArrayList is more memory-friendly.
5. How does ArrayList scale up?
The essence of ArrayList expansion is to calculate the size of the new array, instantiate it, and copy the contents of the original array into the new array. By default, the new capacity is 1.5 times the original capacity.
Take JDK1.8 as an example:
public boolean add(E e) {
// Check whether there is room for e; if so, append it directly to the end; if not, grow the array first and then append e
ensureCapacityInternal(size + 1); // Increments modCount!!
// Add e to the end of the array
elementData[size++] = e;
return true;
}
// Each time add() is called, the ArrayList first checks its capacity. ensureCapacityInternal() guarantees that the backing array can hold the new element, which is then stored at the end of elementData
private void ensureCapacityInternal(int minCapacity) {
ensureExplicitCapacity(calculateCapacity(elementData, minCapacity));
}
private static int calculateCapacity(Object[] elementData, int minCapacity) {
// If an empty array is passed in, the minimum capacity is the maximum between the default capacity and minCapacity
if (elementData == DEFAULTCAPACITY_EMPTY_ELEMENTDATA) {
return Math.max(DEFAULT_CAPACITY, minCapacity);
}
return minCapacity;
}
private void ensureExplicitCapacity(int minCapacity) {
modCount++;
// If the existing capacity already satisfies the minimum required capacity, the element is added directly;
// if the minimum required capacity exceeds the existing capacity, the array is too small and grow() is called to expand it
if (minCapacity - elementData.length > 0)
grow(minCapacity);
}
private void grow(int minCapacity) {
    // The current length (capacity) of the elementData array
    int oldCapacity = elementData.length;
    // Grow by 1.5x: new capacity = old capacity + old capacity / 2
    int newCapacity = oldCapacity + (oldCapacity >> 1);
    // If 1.5x is still not enough, fall back to the requested minimum capacity
    if (newCapacity - minCapacity < 0)
        newCapacity = minCapacity;
    // If the new capacity exceeds the maximum array size, clamp it (hugeCapacity also checks for overflow)
    if (newCapacity - MAX_ARRAY_SIZE > 0)
        newCapacity = hugeCapacity(minCapacity);
    // Arrays.copyOf allocates the new array and copies elementData's contents into it,
    // then elementData is pointed at the new memory
    elementData = Arrays.copyOf(elementData, newCapacity);
}
6. What’s the difference between Array and ArrayList? When should an Array be used instead of an ArrayList?
- An Array can contain both primitives and objects, while an ArrayList can contain only objects.
- An Array has a fixed size, whereas an ArrayList grows dynamically.
- ArrayList provides more methods and features, such as addAll(), removeAll(), iterator(), and so on.
Prefer an Array when the size is fixed and known in advance, or when storing primitives to avoid boxing overhead.
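A minimal sketch of the first two points, using only standard library types:
import java.util.*;

public class ArrayVsArrayList {
    public static void main(String[] args) {
        int[] primitives = {1, 2, 3};             // an array can hold primitives directly, no boxing
        String[] fixed = new String[3];           // size fixed at creation; fixed.length is always 3
        List<Integer> dynamic = new ArrayList<>();
        dynamic.add(1);                           // an ArrayList grows as needed, but boxes int to Integer
        System.out.println(fixed.length + " " + dynamic.size());
    }
}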
7. What is the underlying data structure of HashMap?
There are differences between JDK1.7 and JDK1.8:
In JDK1.7, the structure is array + linked list: the array is the body of the HashMap, and linked lists exist mainly to resolve hash collisions.
In JDK1.8, the structure is array + linked list + red-black tree. When a list grows too long, HashMap performance degrades badly: searching a red-black tree is O(log n), while searching a list is O(n). For this reason, JDK1.8 further optimizes the data structure by introducing red-black trees. Lists and trees convert when certain conditions are met:
- When a list's length exceeds 8 and the array's capacity is at least 64, the list is converted to a red-black tree (the corresponding JDK constants appear after this list).
- If the list exceeds 8 but the current array length is below 64, the array is resized instead of converting to a tree, to reduce search time.
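These conversion conditions correspond to constants in the JDK 1.8 HashMap source:
// From java.util.HashMap (JDK 1.8)
static final int TREEIFY_THRESHOLD = 8;     // list length at which treeification is considered
static final int UNTREEIFY_THRESHOLD = 6;   // a tree shrinks back to a list below this size
static final int MIN_TREEIFY_CAPACITY = 64; // minimum table capacity before treeifying; below it, resize instead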
8. What are the methods to resolve hash conflicts? What kind of HashMap is used?
Methods for resolving hash collisions include open addressing, rehashing, chaining (the zipper method), and establishing a public overflow area. HashMap uses chaining.
- Open addressing, also known as rehashing within the table: the basic idea is that if p = H(key) conflicts, hash p again, p1 = H(p); if p1 conflicts again, use p1 as the basis and repeat, until a non-conflicting hash address pi is found. The length of the hash table must therefore be at least the number of elements to store, and because lookups depend on the probe sequence, deleted nodes can only be marked as deleted, not physically removed.
- Rehashing (double hashing, multiple hashing) provides several different hash functions: when R1 = H1(key1) conflicts, compute R2 = H2(key1), and so on until there is no conflict. This avoids clustering but increases computation time.
- Chaining (the zipper method): elements with the same hash value form a singly linked list of synonyms, and the head pointer of that list is stored in the i-th cell of the hash table. Search, insert, and delete happen mainly within the synonym list, so chaining suits frequent inserts and deletes. A minimal sketch follows this list.
- Public overflow area: split the hash table into a primary table and an overflow table; when a collision (overflow) occurs, put the colliding data into the overflow area.
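The sketch promised above: a toy version of the zipper method, not the JDK implementation; the class name ChainedTable and the fixed table size are illustrative assumptions.
// A minimal sketch of separate chaining, assuming String keys and a fixed table size.
class ChainedTable {
    static class Node {
        final String key;
        Object value;
        Node next;
        Node(String key, Object value, Node next) {
            this.key = key; this.value = value; this.next = next;
        }
    }

    private final Node[] table = new Node[16]; // each cell holds the head of a synonym list

    void put(String key, Object value) {
        int i = (table.length - 1) & key.hashCode(); // bucket index
        for (Node e = table[i]; e != null; e = e.next) {
            if (e.key.equals(key)) { e.value = value; return; } // key exists: overwrite
        }
        table[i] = new Node(key, value, table[i]); // head insertion into the chain
    }

    Object get(String key) {
        int i = (table.length - 1) & key.hashCode();
        for (Node e = table[i]; e != null; e = e.next) {
            if (e.key.equals(key)) return e.value;
        }
        return null;
    }
}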
9. Why not use a red-black tree from the start when resolving hash collisions, instead of starting with a linked list and converting to a tree?
Because a red-black tree has to perform left rotations, right rotations, and recoloring to stay balanced, while a singly linked list does not. With fewer than 8 elements, a linked list already provides adequate query performance. Beyond 8 elements, a red-black tree's O(log n) search beats the list's O(n), so the tree speeds up queries, at the cost of slower insertion of new nodes.
Starting with a red-black tree would therefore waste performance when elements are few: no query benefit, and slower additions.
10. What is the default HashMap load factor? Why 0.75, not 0.6 or 0.8?
Before answering this question, let's look at the relevant HashMap fields:
int threshold; // Hold the maximum number of key-value pairs
final float loadFactor; // Load factor
int modCount;
int size;
The initial length of the Node[] table is 16 by default, loadFactor defaults to 0.75, and threshold is the maximum number of key-value pairs the HashMap can hold at the current capacity: threshold = length * loadFactor. So for a given array length, the larger the load factor, the more key-value pairs the map can hold before resizing.
The default loadFactor is 0.75, a balanced trade-off between space and time efficiency, and it should not be changed except in special cases:
- If memory is plentiful and time efficiency matters most, you can lower the loadFactor.
- Conversely, if memory is tight and time efficiency is less critical, you can raise the loadFactor, even above 1.
The author's comment in the source code (JDK1.7) says the same thing:
As a general rule, the default load factor (.75) offers a good tradeoff between time and space costs. Higher values decrease the space overhead but increase the lookup cost (reflected in most of the operations of the HashMap class, including get and put). The expected number of entries in the map and its load factor should be taken into account when setting its initial capacity, so as to minimize the number of rehash operations. If the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash operations will ever occur.
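The arithmetic in concrete terms; a minimal sketch using the default values mentioned above:
public class ThresholdDemo {
    public static void main(String[] args) {
        int capacity = 16;        // default table length
        float loadFactor = 0.75f; // default load factor
        int threshold = (int) (capacity * loadFactor);
        System.out.println(threshold); // 12: the map resizes once it holds more than 12 entries
    }
}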
11. How to calculate the storage index of the key in the HashMap?
First, compute the key's hashCode; next, derive the hash from that hashCode; finally, compute the storage index with hash & (length - 1). Here is the source implementation:
// JDK 1.7
// Method 1:
final int hash(Object k) {
    int h = hashSeed;
    if (0 != h && k instanceof String) {
        return sun.misc.Hashing.stringHash32((String) k);
    }
    h ^= k.hashCode();                // Step 1: take the hashCode value
    h ^= (h >>> 20) ^ (h >>> 12);     // Step 2: compute the hash from the hashCode
    return h ^ (h >>> 7) ^ (h >>> 4);
}
// Method 2 (JDK 1.7 source; JDK 1.8 drops this method, but the principle is the same):
static int indexFor(int h, int length) {
    return h & (length - 1);          // Step 3: modulo via bitmask
}
// JDK 1.8
static final int hash(Object key) {
    int h;
    // Step 1: h = key.hashCode() takes the hashCode value
    // Step 2: h ^ (h >>> 16) XORs the high 16 bits into the low 16 bits
    return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
}
The hash algorithm here essentially has three steps: take the key's hashCode, compute the hash from the hashCode, and compute the index with a modulo (bitmask) operation. JDK1.7 and 1.8 differ in the second step. Taking JDK1.8 as an example, n is the length of the table.
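Putting the three steps together; a minimal sketch mirroring the JDK 1.8 formula (the key string is arbitrary):
public class HashIndexDemo {
    // Same spreading function as JDK 1.8's HashMap.hash()
    static int hash(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }

    public static void main(String[] args) {
        int n = 16;                          // table length (always a power of two)
        int index = (n - 1) & hash("demo");  // equivalent to hash % n, but faster
        System.out.println(index);
    }
}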
12. Put method flow of HashMap?
Taking JDK1.8 as an example, the process is as follows (a small usage example follows the list):
- First, compute the hash from the key and locate the array index where the element will be stored.
- If the array is empty, call resize() to initialize it.
- If there is no hash collision, place the node directly at the corresponding array index.
- If there is a collision and the key already exists, overwrite the value.
- If, after the collision, the node is a red-black tree node, attach the new node to the tree.
- If, after the collision, there is a linked list, check whether its length exceeds 8. If it exceeds 8 and the array capacity is below 64, resize the array; if the list exceeds 8 and the array capacity is at least 64, convert the list to a red-black tree. Otherwise insert the key-value pair into the list, overwriting the value if the key exists.
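The usage example promised above, showing the overwrite branch of put:
import java.util.HashMap;

public class PutDemo {
    public static void main(String[] args) {
        HashMap<String, Integer> map = new HashMap<>();
        System.out.println(map.put("a", 1)); // null: no previous mapping
        System.out.println(map.put("a", 2)); // 1: the old value is returned and overwritten
        System.out.println(map.get("a"));    // 2
    }
}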
13. How to expand HashMap?
A HashMap resizes when the number of entries exceeds the threshold defined by the load factor. Java arrays cannot grow automatically, so resizing means replacing the existing array with a new one of twice the size and moving the original entries into it.
So what are the steps of expansion? Let’s look at the source code.
Take a look at the JDK1.7 code first
void resize(int newCapacity) {  // Pass in the new capacity
    Entry[] oldTable = table;   // Reference the Entry array before resizing
    int oldCapacity = oldTable.length;
    if (oldCapacity == MAXIMUM_CAPACITY) {  // The array is already at maximum size (2^30)
        threshold = Integer.MAX_VALUE;      // Set the threshold to Integer.MAX_VALUE (2^31-1) so it never resizes again
        return;
    }
    Entry[] newTable = new Entry[newCapacity];   // Initialize a new Entry array
    transfer(newTable);                          // !! Move the data into the new Entry array
    table = newTable;                            // Point the HashMap's table at the new Entry array
    threshold = (int)(newCapacity * loadFactor); // Update the threshold
}
When the array is replaced with a larger one, the transfer() method copies the entries of the original Entry array into the new one.
void transfer(Entry[] newTable) {
    Entry[] src = table;               // src references the old Entry array
    int newCapacity = newTable.length;
    for (int j = 0; j < src.length; j++) {  // Iterate over the old Entry array
        Entry<K,V> e = src[j];              // Take each element of the old Entry array
        if (e != null) {
            src[j] = null;  // Release the old array's reference (after the loop, the old array references nothing)
            do {
                Entry<K,V> next = e.next;
                int i = indexFor(e.hash, newCapacity);  // !! Recompute each element's position in the new array
                e.next = newTable[i];                   // mark [1]
                newTable[i] = e;                        // Put the element into the array
                e = next;                               // Move to the next element on the Entry chain
            } while (e != null);
        }
    }
}
At mark [1], e.next is assigned the reference to newTable[i]: this is head insertion on a singly linked list, so a new element always lands at the head of the chain, and elements placed at an index earlier end up toward the tail of the Entry chain when collisions occur.
JDK1.8 makes two optimizations:
- After a resize, an element is either at its original index or at original index + oldCap (the length of the old hash table). There is no need to recompute the hash as in JDK1.7; it is enough to look at the newly significant bit of the hash value: if it is 0 the index is unchanged, if it is 1 the new index is the old index + oldCap. This clever design saves the time of recomputing hash values. (The original figure illustrates this: n is the table length, panel (a) shows how keys key1 and key2 map to indexes before resizing, panel (b) after, where hash1 is the hash and high-bit result for key1; since n doubles, the n-1 mask gains one high bit, which yields the rule above. A sketch of the rule follows this list.)
- In JDK1.7, when an old list migrates to the same index in the new array, its elements end up reversed (head insertion). JDK1.8 uses tail insertion, so the original order is preserved.
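The sketch promised above: the new-index rule expressed directly in code; oldCap and the sample key are arbitrary.
public class ResizeIndexDemo {
    public static void main(String[] args) {
        int oldCap = 16;
        int hash = "demo".hashCode();
        int oldIndex = hash & (oldCap - 1);
        // Only the bit at position oldCap decides where the entry lands after doubling
        int newIndex = (hash & oldCap) == 0 ? oldIndex : oldIndex + oldCap;
        System.out.println(oldIndex + " -> " + newIndex);
    }
}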
14. What is commonly used as a HashMap key?
Immutable classes such as Integer and String are the most common HashMap keys, and String is used most of all.
- Because String is immutable, its hashCode is computed once, cached, and never needs recomputing. This is why HashMap keys so often are strings.
- Because retrieval uses the equals() and hashCode() methods, it matters that the key class overrides both correctly; these classes already override hashCode() and equals() properly (a counterexample with a mutable key follows this list).
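Conversely, a mutable key shows why immutability matters; MutableKey here is a deliberately bad, hypothetical class:
import java.util.*;

class MutableKey {
    int id;
    MutableKey(int id) { this.id = id; }
    @Override public int hashCode() { return id; }
    @Override public boolean equals(Object o) {
        return o instanceof MutableKey && ((MutableKey) o).id == id;
    }
}

public class MutableKeyDemo {
    public static void main(String[] args) {
        Map<MutableKey, String> map = new HashMap<>();
        MutableKey k = new MutableKey(1);
        map.put(k, "value");
        k.id = 2;                       // hashCode changes after insertion
        System.out.println(map.get(k)); // null: the entry was filed under the old hash's bucket
    }
}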
15. Why is HashMap thread unsafe?
- Infinite loops under multithreading. JDK1.7's HashMap inserts with head insertion; in a multi-threaded environment, resizing can produce a circular linked list and thus an infinite loop. For this reason, JDK1.8 switched to tail insertion, which keeps list elements in their original order during a resize and avoids circular lists.
- Concurrent put can lose elements. If multiple threads put at the same time and compute the same index position, one thread's key can be overwritten by another's, losing an element. This problem exists in both JDK 1.7 and JDK 1.8.
- Concurrent put and get can make get return null. If thread 1 puts and triggers a rehash because the element count exceeds the threshold while thread 2 gets, the get may observe an inconsistent table and return null. This also exists in both JDK 1.7 and JDK 1.8.
16. What is the implementation principle of ConcurrentHashMap?
ConcurrentHashMap is implemented differently in JDK1.7 and JDK1.8.
Take a look at JDK1.7
In JDK1.7, ConcurrentHashMap consists of a Segment array plus HashEntry arrays: the hash buckets are split across small arrays, each made up of n HashEntry buckets.
Segment extends ReentrantLock, so it plays the role of a reentrant lock; HashEntry stores the key-value data.
Data is partitioned into segments, each guarded by its own lock. When one thread holds a segment's lock, the data of the other segments remains accessible to other threads, which enables real concurrent access.
Take a look at JDK1.8
In terms of data structure, JDK1.8's ConcurrentHashMap adopts the same array + linked list + red-black tree structure as HashMap. For locking, it abandons the old Segment locks in favor of CAS + synchronized, achieving finer-grained locking.
The lock granularity drops to the level of individual hash bucket heads: only the head node of a list (or the root of a red-black tree) needs to be locked, so reads and writes to other hash buckets are unaffected, which greatly improves concurrency.
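A small usage sketch of what this buys callers; merge is a standard Map method that ConcurrentHashMap implements atomically:
import java.util.concurrent.ConcurrentHashMap;

public class ChmDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        // Atomic per-bucket update: no external synchronization needed
        counts.merge("hits", 1, Integer::sum);
        counts.merge("hits", 1, Integer::sum);
        System.out.println(counts.get("hits")); // 2
    }
}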
17. What is the execution logic of the put method of ConcurrentHashMap?
Take the JDK1.7
First it tries to acquire the segment lock. If that fails, it spins to acquire it; if the spin retries exceed 64, it falls back to blocking on the lock.
After obtaining the lock:
- Use the key's hashCode to locate a HashEntry in the current Segment's table.
- Traverse that HashEntry chain; for each non-null node, check whether its key equals the incoming key, and if so, overwrite the old value.
- If no matching key is found, create a new HashEntry and add it to the Segment, deciding at the same time whether the Segment needs to resize.
- Release the Segment's lock.
Look at JDK1.8
It can be roughly divided into the following steps:
- Compute the hash from the key.
- Check whether the table needs initialization.
- Locate the bucket by the hash, fetch its first node f, and branch on f:
  - If f is null, add the new node with CAS.
  - If f.hash == MOVED (-1), another thread is resizing, so help with the resize.
  - Otherwise, lock node f with synchronized, determine whether the bucket holds a linked list or a red-black tree, then traverse and insert accordingly.
- When a list's length reaches 8, resize the array or convert the list to a red-black tree.
18. Should the get method of ConcurrentHashMap be locked? Why?
The get method does not need a lock. Because a node's val field and next pointer are volatile, when thread A changes val or appends a node in a multi-threaded environment, the change is immediately visible to thread B.
This is one reason ConcurrentHashMap outperforms other concurrent collections such as Hashtable or a HashMap wrapped with Collections.synchronizedMap().
static class Node<K,V> implements Map.Entry<K,V> {
    final int hash;
    final K key;
    // Note that val and next are volatile
    volatile V val;
    volatile Node<K,V> next;
}
19. Is the lock-free get related to the hash bucket array being volatile?
No, it is unrelated. The hash bucket array table is declared volatile to guarantee visibility of the table reference when the array is resized, not to support lock-free reads (those come from the volatile val and next fields shown above).
static final class Segment<K,V> extends ReentrantLock implements Serializable {
    // The buckets that store the data
    transient volatile HashEntry<K,V>[] table;
20. Why does ConcurrentHashMap not support key or value null?
ConcurrentHashMap is used by multiple threads at once. If map.get(key) could return null for a stored value, we could not tell whether the mapped value is null or the key is simply absent; that is the ambiguity.
A HashMap used in a single thread can resolve the ambiguity with containsKey, so it may allow null.
We can reason by contradiction:
Suppose ConcurrentHashMap allowed null values. When concurrentHashMap.get(key) returns null, we do not know whether the key is unmapped or the stored value is null.
Assume the true situation is that the key is absent. We would then verify the assumption with concurrentHashMap.containsKey(key), expecting it to return false.
But between the get(key) call and the containsKey call, thread B may execute concurrentHashMap.put(key, null). Then containsKey returns true, contradicting the expectation and leaving the ambiguity unresolved.
Keys in ConcurrentHashMap cannot be null either. If the interviewer is not satisfied, you can add that the author, Doug Lea, dislikes null and simply disallowed null keys in the design; it is hard to say what more an interviewer could want here.
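A minimal confirmation of both restrictions:
import java.util.concurrent.ConcurrentHashMap;

public class NullKeyValueDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, String> map = new ConcurrentHashMap<>();
        try {
            map.put("key", null);   // throws NullPointerException
        } catch (NullPointerException e) {
            System.out.println("null value rejected");
        }
        try {
            map.put(null, "value"); // throws NullPointerException
        } catch (NullPointerException e) {
            System.out.println("null key rejected");
        }
    }
}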
21. What is the concurrency of ConcurrentHashMap?
In JDK1.7, the default concurrency level is 16, configurable in the constructor. If you set it yourself, ConcurrentHashMap uses the smallest power of 2 greater than or equal to that value as the actual concurrency level: if you pass 17, the actual level is 32.
22. Is the ConcurrentHashMap iterator strongly or weakly consistent?
The ConcurrentHashMap iterator is weakly consistent, unlike the HashMap iterator, which is fail-fast.
When a ConcurrentHashMap iterator is created, it walks the elements of the hash table; during the walk, internal elements may change. If a change happens in an already-traversed part, the iterator does not reflect it; if it happens in a not-yet-traversed part, the iterator will observe and reflect it. That is weak consistency.
This lets the iterating thread keep using the old data while writer threads modify the map concurrently, and, more importantly, it preserves continuity and scalability under multi-threaded execution, which is key to the performance improvement. For more information, see "why ConcurrentHashMap is weakly consistent" (ifeve.com/ConcurrentH… -weakly-consistent/).
23. What is the difference between JDK1.7 and JDK1.8 ConcurrentHashMap?
- Data structure: the Segment locking structure is removed and replaced with array + linked list + red-black tree.
- Thread-safety mechanism: JDK1.7 achieves thread safety with Segment locks, where Segment extends ReentrantLock; JDK1.8 uses CAS + synchronized.
- Lock granularity: from locking the Segment that contains the data down to locking each array element (a Node).
- List-to-tree conversion: simplifying the hash algorithm used to locate nodes has the downside of aggravating hash collisions, so when a list exceeds 8 nodes it is converted to a red-black tree for storage.
- Query time complexity: from traversing a list in O(n) to traversing a red-black tree in O(log n).
24. Which is more efficient, ConcurrentHashMap or Hashtable? Why is that?
ConcurrentHashMap is more efficient than Hashtable because Hashtable adds a large lock to the entire Hashtable to make it thread-safe. However, ConcurrentHashMap has a lower lock granularity. In JDK1.7, piecewise locking is used to achieve thread safety, and in JDK1.8, CAS+Synchronized is used to achieve thread safety.
25. What is the locking mechanism of Hashtable?
Hashtable uses synchronized to achieve thread safety: one big lock on the whole table. While any thread is accessing or modifying the object, all other threads can only block and wait for the lock to be released. Under multi-threaded contention, performance is very poor!
26. Is there any other way to safely manipulate map in multi-threading?
You can also use the Collections.synchronizedMap method, which wraps a map in synchronized methods:
private static class SynchronizedMap<K,V>
        implements Map<K,V>, Serializable {
    private static final long serialVersionUID = 1978198479659022715L;

    private final Map<K,V> m;  // Backing Map
    final Object mutex;        // Object on which to synchronize

    SynchronizedMap(Map<K,V> m) {
        this.m = Objects.requireNonNull(m);
        mutex = this;
    }

    SynchronizedMap(Map<K,V> m, Object mutex) {
        this.m = m;
        this.mutex = mutex;
    }
    // Some code omitted
}
If a HashMap is passed in, the wrapper simply delegates to the HashMap's methods inside synchronized blocks on the object lock, which guarantees thread safety in multi-threaded scenarios. Under contention, performance is still very poor; it is not recommended.
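A usage sketch; note that iteration still requires manual synchronization on the wrapper, as its javadoc requires:
import java.util.*;

public class SyncMapDemo {
    public static void main(String[] args) {
        Map<String, Integer> safeMap = Collections.synchronizedMap(new HashMap<>());
        safeMap.put("a", 1);
        // Every call synchronizes on the wrapper, but iteration needs an explicit block:
        synchronized (safeMap) {
            for (Map.Entry<String, Integer> e : safeMap.entrySet()) {
                System.out.println(e.getKey() + "=" + e.getValue());
            }
        }
    }
}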
27. What is the difference between HashSet and HashMap?
A HashSet is actually implemented with a HashMap: it implements the Set interface, stores each element as the key (K), while the value (V) is always the same shared dummy object. As the source shows:
public boolean add(E e) {
return map.put(e, PRESENT)==null;// Call the put method of the HashMap. PRESENT is a virtual value that is the same throughout
}
Since keys in a HashMap are inherently unique, and put on an existing key overwrites the old value and returns it (a non-null PRESENT), the expression in add returns false whenever the element already exists; the insert fails, which guarantees that the data is never duplicated.
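Observed from the outside:
import java.util.*;

public class HashSetAddDemo {
    public static void main(String[] args) {
        Set<String> set = new HashSet<>();
        System.out.println(set.add("a")); // true: map.put returned null
        System.out.println(set.add("a")); // false: duplicate, map.put returned the old PRESENT
    }
}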
28. How to implement comparison in Collection framework?
First, the entity class can implement the Comparable interface and its compareTo(T t) method; this is called the internal comparator.
Second, you can create an external comparator: a class that implements the Comparator interface's compare(T t1, T t2) method.
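A minimal sketch combining both approaches; the Person class is hypothetical:
import java.util.*;

class Person implements Comparable<Person> {
    final String name;
    final int age;
    Person(String name, int age) { this.name = name; this.age = age; }
    // Internal comparator: natural ordering by age
    @Override public int compareTo(Person other) {
        return Integer.compare(this.age, other.age);
    }
}

public class CompareDemo {
    public static void main(String[] args) {
        List<Person> people = new ArrayList<>(Arrays.asList(
                new Person("Alice", 30), new Person("Bob", 25)));
        Collections.sort(people);                       // uses compareTo (by age)
        people.sort(Comparator.comparing(p -> p.name)); // external comparator (by name)
    }
}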
29. What is the difference between Iterator and ListIterator?
- Traversal. An Iterator can traverse any collection: Map (via its views), List, Set, and so on, but only forward. A ListIterator works only on List implementations, but it can traverse the elements both forward and backward.
- Adding elements. An Iterator cannot add elements to the collection; a ListIterator can, via add().
- Modifying elements. An Iterator cannot modify elements in the collection; a ListIterator can, via set().
- Indexes. An Iterator cannot report the index of an element; a ListIterator can, via nextIndex() and previousIndex() (see the sketch after this list).
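A minimal sketch exercising all four differences; the list contents are arbitrary:
import java.util.*;

public class ListIteratorDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>(Arrays.asList("a", "b"));
        ListIterator<String> it = list.listIterator();
        while (it.hasNext()) {
            int index = it.nextIndex();     // index access
            String s = it.next();
            if ("a".equals(s)) it.set("A"); // modify through the iterator
            System.out.println(index + ": " + s);
        }
        it.add("c");                        // add through the iterator
        while (it.hasPrevious()) {          // traverse backward
            System.out.println(it.previous());
        }
    }
}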
30. Talk about fail-fast and fail-safe.
Fail-fast
- When traversing a collection with an iterator, if the collection's contents are modified (elements added, removed, or updated) during traversal, a ConcurrentModificationException is thrown.
- How it works: the iterator accesses the collection's contents directly during traversal and uses a modCount variable. Whenever the collection changes, modCount changes. Each time the iterator moves to the next element via hasNext()/next(), it checks whether modCount still equals expectedModCount; if so, traversal continues; otherwise it throws the exception and aborts the traversal.
- Note: the exception is triggered by the condition modCount != expectedModCount. If a concurrent modification happens to leave modCount equal to expectedModCount, the exception is not thrown. So you cannot rely on this exception for correct concurrent programming; it is only meant to help detect concurrent-modification bugs.
- Scenario: the collection classes under java.util are fail-fast and must not be modified concurrently while iterating in multiple threads, e.g., HashMap and ArrayList (see the sketch after this list).
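The sketch promised above: structurally modifying an ArrayList mid-iteration makes the next iterator call throw.
import java.util.*;

public class FailFastDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>(Arrays.asList("a", "b", "c"));
        for (String s : list) {  // the for-each loop uses the fail-fast iterator
            if ("a".equals(s)) {
                list.remove(s);  // structural modification: modCount changes
            }
        }
        // next() then detects modCount != expectedModCount
        // and throws java.util.ConcurrentModificationException
    }
}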
Fail-safe
- A container with the fail-safe mechanism is not traversed directly: the original contents are copied first, and traversal runs over the copy.
- Principle: since iteration runs over a copy of the collection, modifications made to the original during iteration are invisible to the iterator, so no ConcurrentModificationException is triggered.
- Drawback: copying avoids the ConcurrentModificationException, but by the same token the iterator cannot see the modified content: it traverses the copy taken when iteration began and is unaware of changes made to the original collection during the iteration.
- Scenario: the containers under java.util.concurrent are fail-safe and can be used and modified concurrently from multiple threads, e.g., ConcurrentHashMap (see the sketch after this list).
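The sketch promised above, using CopyOnWriteArrayList, a copy-based fail-safe container from java.util.concurrent, to show modification during iteration:
import java.util.*;
import java.util.concurrent.CopyOnWriteArrayList;

public class FailSafeDemo {
    public static void main(String[] args) {
        List<String> list = new CopyOnWriteArrayList<>(Arrays.asList("a", "b", "c"));
        for (String s : list) {   // iterates over a snapshot of the underlying array
            list.remove(s);       // no ConcurrentModificationException
        }
        System.out.println(list); // []: but the iterator never saw the removals
    }
}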
End