List

1. Why is ArrayList not thread-safe?

Looking at the source code, the CRUD operations of ArrayList involve no locking of any kind. The underlying structure is an array with a default initial capacity of 10. On each insertion, ArrayList checks whether the array has enough capacity; if not, it expands. Expansion means creating a new, larger array and copying the elements from the old array into it (which is why appending can occasionally be slow).

2. What are the features of CopyOnWriteArrayList?

It is an implementation of the List interface in java.util.concurrent (JUC).

It holds a ReentrantLock: add, remove, and set operations acquire the lock first and release it when done, making writes thread-safe. There is only this one lock, and read operations do not need to acquire it, so reads are fully concurrent.

It uses read/write separation: a write copies the underlying array, applies the insertion, modification, or removal to the copy, and then assigns the new array back to the internal array field.
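The write path can be sketched roughly like this. This is a simplified sketch of the copy-on-write idea, not the JDK source; the class and field names here are illustrative:

```java
import java.util.Arrays;
import java.util.concurrent.locks.ReentrantLock;

// Simplified sketch of copy-on-write; class and field names are illustrative.
class MiniCowList {
    private final ReentrantLock lock = new ReentrantLock();
    private volatile Object[] array = new Object[0];

    // Writes lock, copy the array, mutate the copy, then publish it.
    public void add(Object e) {
        lock.lock();
        try {
            Object[] newArray = Arrays.copyOf(array, array.length + 1);
            newArray[newArray.length - 1] = e;
            array = newArray; // readers now see the new array
        } finally {
            lock.unlock();
        }
    }

    // Reads never lock; they just see the current array snapshot.
    public Object get(int index) {
        return array[index];
    }

    public int size() {
        return array.length;
    }
}
```

Because readers only ever see a fully built array, they need no synchronization at all; the cost is one full array copy per write.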

3. CopyOnWriteArrayList vs. Vector?

Vector's methods are all marked synchronized, so every method must acquire the lock when it executes, which greatly reduces performance. CopyOnWriteArrayList locks only the add, remove, and set methods, while reads take no lock at all. Its read performance is therefore better than Vector's, and CopyOnWriteArrayList is well suited to read-heavy, write-light concurrency.

Vector and CopyOnWriteArrayList are both implementations of the List interface.

4. When does CopyOnWriteArrayList apply?

Looking at the source code, it is not hard to see that every time an element is added the whole array is copied, which seriously hurts the performance of add, remove, and set; it can be hundreds of times slower than ArrayList.

So CopyOnWriteArrayList is thread-safe and best suited to workloads with many reads and few writes.

For example, java.sql.DriverManager uses a CopyOnWriteArrayList to hold registered drivers.

5. LinkedList vs. ArrayList?

```java
LinkedList<Integer> lists = new LinkedList<>();
lists.addFirst(1); // [1]
lists.push(2);     // push == addFirst -> [2, 1]
lists.addLast(3);  // [2, 1, 3]
lists.add(4);      // add == addLast -> [2, 1, 3, 4]
lists.addFirst(5); // [5, 2, 1, 3, 4]
lists.forEach(System.out::println); // 5, 2, 1, 3, 4
```

The addFirst and addLast methods are self-explanatory. The push method delegates to addFirst, and the add method delegates to addLast. In short: add pairs with last, push pairs with first.

In fact, compared with an array, a LinkedList is very fast at adding and removing at the ends: it keeps references to both the first and the last node, so addFirst and addLast run in O(1). But if you call add(int index, E element) at an arbitrary position, it must first traverse to find the node at that position and insert the new node before it, which is O(n) in the worst case.

And the array? We know that the underlying implementation of ArrayList is an array. The advantage of an array is that its memory is one contiguous block, so traversal is very fast. But inserting or removing in the middle requires copying part of the array, and when the array is very large that copy takes a long time.
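The growth behaviour described above can be sketched like this. This is a simplified model of ArrayList-style expansion, not the JDK source; the 1.5x growth step mirrors the JDK's `oldCapacity + (oldCapacity >> 1)`:

```java
import java.util.Arrays;

// Simplified sketch of ArrayList-style growth; not the JDK source.
class MiniArrayList {
    private Object[] elements = new Object[10]; // default initial capacity 10
    private int size = 0;

    public void add(Object e) {
        if (size == elements.length) {
            // grow by ~1.5x: allocate a new array and copy the old elements over
            int newCapacity = elements.length + (elements.length >> 1);
            elements = Arrays.copyOf(elements, newCapacity);
        }
        elements[size++] = e;
    }

    public int size() { return size; }
    public int capacity() { return elements.length; }
}
```

Adding an 11th element to a fresh list triggers one copy, growing the capacity from 10 to 15.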

6. Is the list returned by Arrays.asList() immutable?

```java
List<Integer> integers = Arrays.asList(1, 2, 3, 4, 5);
integers.set(2, 5);
// integers.add(6); // throws java.lang.UnsupportedOperationException
integers.forEach(System.out::println); // 1, 2, 5, 4, 5
```

1. The set method works, so the list is not fully immutable.
2. But calling add() throws java.lang.UnsupportedOperationException; the operation is not supported because the list is fixed-size.
3. On Java 9+ you can use List.of(), which returns a completely immutable list.

7. How do I replace a thread-unsafe list with a thread-safe one?

```java
// 1. Wrap with the Collections utility class
List<Integer> integers1 = Collections.synchronizedList(integers);
// 2. Copy into a CopyOnWriteArrayList (a plain ArrayList cannot simply be cast)
List<Integer> integers2 = new CopyOnWriteArrayList<>(integers);
// 3. On Java 9+, List.of() produces a read-only list
List<Integer> integers3 = List.of(1, 2, 3, 4, 5);
```

8. The Collections utility class?

```java
// 1. An immutable empty list
List<String> empty = Collections.<String>emptyList();
// 2. Bulk-add elements (the target list must be mutable)
List<Integer> list = new ArrayList<>();
Collections.addAll(list, 2, 3, 4, 5, 6);
// 3. Build a thread-safe list
List<Integer> safeList = Collections.synchronizedList(list);
// 4. Binary search (the list must be sorted first)
Collections.binarySearch(list, 2);
// 5. Reverse the list
Collections.reverse(list);
```

Set

1. When are the HashSet, TreeSet, and LinkedHashSet types used?

If you need a Set that can be accessed quickly, use a HashSet. The underlying implementation of a HashSet is a HashMap, where the elements are not ordered.

If you want a sortable Set, you should use TreeSet. The underlying implementation of TreeSet is TreeMap.

If you want to keep track of the insertion order, you should use LinkedHashSet.

A Set cannot contain duplicate elements; each element must be unique. When you add a duplicate element to a Set, it is simply not stored again. So a Set can be used for deduplication, and each implementation detects duplicates in its own way.

LinkedHashSet sits between HashSet and TreeSet: it is also backed by a HashMap, but it additionally maintains a doubly linked list to record insertion order, and its basic operations are still O(1).
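The ordering differences show up immediately when iterating. A small comparison, assuming natural ordering for the TreeSet:

```java
import java.util.*;

class SetOrderDemo {
    // Insert 3, 1, 2 into the given set and return its iteration order.
    static List<Integer> iterate(Set<Integer> set) {
        for (int x : new int[]{3, 1, 2}) set.add(x);
        return new ArrayList<>(set);
    }

    public static void main(String[] args) {
        System.out.println(iterate(new HashSet<>()));       // no guaranteed order
        System.out.println(iterate(new TreeSet<>()));       // [1, 2, 3] sorted
        System.out.println(iterate(new LinkedHashSet<>())); // [3, 1, 2] insertion order
    }
}
```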

None of these Sets is thread-safe; if you need thread safety, wrap them with Collections.synchronizedSet(new HashSet<>(...)).

2. How do HashSet and LinkedHashSet determine whether an element is a duplicate?

hashCode() is checked first. If two elements' hashCode() values are equal, equals() is then called; if equals() also returns true, the element is considered a duplicate.
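A quick illustration with a hypothetical key class: once hashCode() and equals() agree, the second add is rejected.

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Hypothetical key class; name and field are illustrative.
class User {
    final String name;
    User(String name) { this.name = name; }

    @Override public int hashCode() { return Objects.hash(name); }
    @Override public boolean equals(Object o) {
        return o instanceof User && ((User) o).name.equals(name);
    }

    public static void main(String[] args) {
        Set<User> set = new HashSet<>();
        set.add(new User("tom"));
        set.add(new User("tom")); // same hashCode and equals -> treated as duplicate
        System.out.println(set.size()); // 1
    }
}
```

Without the hashCode()/equals() overrides, both User objects would be stored, since Object's defaults compare identity.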

3. How does TreeSet determine whether an element is a duplicate?

TreeSet elements must implement the java.lang.Comparable interface (or the set must be given a Comparator). TreeSet decides duplication based on the compareTo method: if compareTo returns 0 for two elements, they are considered duplicates.
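This also means a TreeSet can drop elements that equals() would consider distinct, whenever the comparison returns 0. A small demonstration with a length-only Comparator:

```java
import java.util.Comparator;
import java.util.TreeSet;

class TreeSetDedupDemo {
    public static void main(String[] args) {
        // Compare strings by length only: "ab" and "cd" compare as equal.
        TreeSet<String> set = new TreeSet<>(Comparator.comparingInt(String::length));
        set.add("ab");
        set.add("cd");  // comparison result is 0 -> considered a duplicate, not added
        set.add("xyz");
        System.out.println(set.size()); // 2
    }
}
```

So a TreeSet's notion of equality is entirely the comparator's; keep compareTo consistent with equals() to avoid surprises.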

4. How do I implement a thread-safe HashSet?

If we look at the source code, we can see that HashSet is backed by a HashMap (the field is marked transient, so it does not participate in default serialization; HashSet serializes its elements itself, even though HashMap already implements Serializable).

5. CopyOnWriteArraySet implementation?

```java
public CopyOnWriteArraySet() {
    al = new CopyOnWriteArrayList<E>();
}
```

Looking at the source code, it is obviously backed by a CopyOnWriteArrayList.

Map

1. Hashtable features?

Hashtable, ConcurrentHashMap, and ConcurrentSkipListMap do not allow null keys or values, and TreeMap rejects null keys (under natural ordering), but a HashMap can have null keys and values.
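A quick check of the null-handling difference:

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, String> hashMap = new HashMap<>();
        hashMap.put(null, "ok"); // HashMap allows one null key and null values
        System.out.println(hashMap.get(null)); // ok

        Map<String, String> table = new Hashtable<>();
        try {
            table.put(null, "boom"); // Hashtable rejects null keys
        } catch (NullPointerException e) {
            System.out.println("Hashtable rejects null keys");
        }
    }
}
```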

Hashtable's methods are made thread-safe with the synchronized keyword.

It is an implementation of array + linked list.

2. What changed in ConcurrentHashMap (in JDK 1.8)?

The segments field was removed, and data is now saved in a transient volatile HashEntry<K,V>[] table.

Each element of the table array serves as a lock, so each bucket of data can be locked independently, further reducing the probability of concurrent conflicts.

The data structure changed from table array + singly linked list to table array + singly linked list + red-black tree.

When a bucket's list grows longer than 8, the singly linked list is converted into a red-black tree. When the hash table is resized and the number of nodes drops below 6, the red-black tree degenerates back into a linked list.

I won't go through the rest of the details here; it is considerably more complex than HashMap.

Prefer ConcurrentHashMap over Hashtable when you need thread safety.

3. Why use ConcurrentHashMap instead of Hashtable?

Hashtable uses synchronized to ensure thread safety, but it is very inefficient under thread contention: when one thread is inside a synchronized method of the Hashtable, any other thread calling a synchronized method blocks or spins. For example, while thread 1 uses put to add an element, thread 2 can neither put another element nor even get one. The fiercer the contention, the lower the efficiency.
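By contrast, ConcurrentHashMap only locks the bucket being written and offers atomic per-key operations, so threads touching different keys do not block each other. A minimal sketch using the atomic merge method:

```java
import java.util.concurrent.ConcurrentHashMap;

class ConcurrentCounterDemo {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();

        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                // merge is atomic per key; no external locking needed
                counts.merge("hits", 1, Integer::sum);
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println(counts.get("hits")); // 2000
    }
}
```

With a plain HashMap the same two threads would race and likely lose increments; with Hashtable they would serialize on a single lock.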

4. ConcurrentSkipListMap vs. TreeMap?

ConcurrentSkipListMap provides a sorted map with thread-safe concurrent access. It is implemented internally as a skip list, using low-level CAS atomic operations for insertion and deletion, retrying in a loop to fetch the latest node pointers so that no race condition occurs. Search, insert, and delete all complete in O(log n) expected time. When size() is called, however, since multiple threads may be mutating the map at the same time, it must traverse the whole list to count the elements, which is an O(n) operation.

In JDK 1.8, ConcurrentHashMap beats ConcurrentSkipListMap in both performance and memory use, but ConcurrentSkipListMap has one distinguishing feature: it keeps keys sorted in their natural order.

If you need sorted keys, you can use TreeMap, or ConcurrentSkipListMap in concurrent scenarios.

So the choice between ConcurrentSkipListMap and ConcurrentHashMap comes down to whether you need the keys sorted.

5. How to use LinkedHashMap?

Its main purpose is to preserve iteration order (insertion order, or access order if configured). It is implemented on top of HashMap.
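With access order enabled, LinkedHashMap doubles as a simple LRU cache; this is a standard use of its removeEldestEntry hook (the capacity of 3 here is arbitrary):

```java
import java.util.LinkedHashMap;
import java.util.Map;

class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        // accessOrder = true: iteration order runs from least- to most-recently accessed
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry
    }

    public static void main(String[] args) {
        LruCache<Integer, String> cache = new LruCache<>(3);
        cache.put(1, "a"); cache.put(2, "b"); cache.put(3, "c");
        cache.get(1);      // touch 1 so it becomes most recently used
        cache.put(4, "d"); // evicts 2, the least recently used
        System.out.println(cache.keySet()); // [3, 1, 4]
    }
}
```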

Queue

1. What is a queue?

We all know that a queue is a first-in, first-out (FIFO) data structure. Java defines the java.util.Queue interface to represent queues. Queue, List, and Set sit at the same level of the interface hierarchy: they all extend the Collection interface.

Note: HashMap does not implement the Collection interface.

2. What is a Deque?

It is a double-ended queue. The LinkedList we use implements the Deque interface, so it supports inserting and removing elements at both ends.

3. What are the common Queue implementations?

LinkedList is a linked-list structure, and it implements the Queue interface, so it can be used as a queue: to enqueue use offer(obj), to dequeue use poll(), and to look at the head without removing it use peek().

```java
public static void main(String[] args) {
    Queue<Integer> queue = new LinkedList<>();

    queue.offer(1);
    queue.offer(2);
    queue.offer(3);

    System.out.println(queue.poll());
    System.out.println(queue.poll());
    System.out.println(queue.poll());
}
// 1, 2, 3
```

PriorityQueue maintains an ordered collection of objects that is re-heapified on every insertion and removal; by default it is a small top heap (min-heap). Of course, you can specify a different element order by supplying a class that implements the java.util.Comparator interface. PriorityQueue is an unbounded queue, so it does not matter whether you set an initial size or not.
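For example, passing Comparator.reverseOrder() flips the default min-heap behaviour into max-heap behaviour:

```java
import java.util.Comparator;
import java.util.PriorityQueue;

class PriorityQueueDemo {
    public static void main(String[] args) {
        // Default: min-heap -> poll() returns the smallest element first
        PriorityQueue<Integer> minHeap = new PriorityQueue<>();
        // With a reversed comparator: max-heap -> poll() returns the largest first
        PriorityQueue<Integer> maxHeap = new PriorityQueue<>(Comparator.reverseOrder());

        for (int x : new int[]{3, 1, 2}) { minHeap.offer(x); maxHeap.offer(x); }

        System.out.println(minHeap.poll()); // 1
        System.out.println(maxHeap.poll()); // 3
    }
}
```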

ConcurrentLinkedQueue is a thread-safe queue based on linked nodes. Because it adds elements at the tail and removes them from the head, shared access to the queue works fine as long as you don't need to know its size: gathering the queue size is slow because it requires traversing the whole queue.

4. ArrayBlockingQueue vs. LinkedBlockingQueue, which is better?

ArrayBlockingQueue is a bounded queue. LinkedBlockingQueue's bound depends on the constructor; with the default constructor the maximum capacity is Integer.MAX_VALUE (2^31 - 1). For take and put operations, ArrayBlockingQueue is the faster of the two.

The lock in ArrayBlockingQueue is not split: production and consumption share the same lock. The locks in LinkedBlockingQueue are separate: production uses putLock and consumption uses takeLock. ArrayBlockingQueue is array-based and inserts or removes elements directly during production and consumption, without creating or destroying any extra object instances. LinkedBlockingQueue is based on a linked list, so each element must be wrapped in a Node for insertion or removal, generating an extra Node object per element; in systems that process large volumes of data concurrently over long periods, this difference has a measurable impact on GC.

In one test, LinkedBlockingQueue took roughly ten times as long as ArrayBlockingQueue: around 1500 milliseconds versus about 150 milliseconds.

ArrayBlockingQueue could, in principle, also use separate locks to run producer and consumer operations in parallel. Doug Lea probably did not do this because ArrayBlockingQueue's read and write operations are already light enough that a separate locking mechanism would add code complexity without any performance benefit.

When using LinkedBlockingQueue with the default size, it is possible to run out of memory if the production rate exceeds the consumption rate.

When enqueueing 1 million short strings with each queue, our tests showed ArrayBlockingQueue outperforming LinkedBlockingQueue by roughly 50%.
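A minimal bounded producer-consumer sketch with ArrayBlockingQueue: put blocks when the queue is full and take blocks when it is empty, which is exactly what prevents the unbounded-growth problem above.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class BoundedPipelineDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2); // bounded capacity

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    queue.put(i); // blocks while the queue already holds 2 elements
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        int sum = 0;
        for (int i = 0; i < 5; i++) {
            sum += queue.take(); // blocks while the queue is empty
        }
        producer.join();
        System.out.println(sum); // 15
    }
}
```

The non-blocking variants offer and poll return false/null instead of blocking, which is how you detect a full or empty queue without waiting.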

5. BlockingQueue and ConcurrentLinkedQueue?

A BlockingQueue can be bounded (capacity-limited).

The BlockingQueue implementation is primarily for producer-consumer queues, but it also supports the Collection interface.

The BlockingQueue implementation is thread-safe.

BlockingQueue is a blocking queue (depending on which methods you use), while ConcurrentLinkedQueue is a non-blocking queue.

LinkedBlockingQueue is a thread-safe blocking queue based on a linked list, commonly used in producer-consumer designs. It uses locks for multithreaded synchronization and provides a constructor to specify the queue size; if no size is specified, it takes the default (Integer.MAX_VALUE).

ConcurrentLinkedQueue is a thread-safe, non-blocking queue based on a linked list. It provides no constructor to specify a size, so it is unbounded. To increase concurrency it avoids coarse locking, relying on CAS-based non-blocking operations instead, which improves throughput under multiple threads. Note that it has no blocking methods: there is no take or put, only offer, poll, and peek.

6. A brief overview of the seven commonly used BlockingQueue implementation classes?

The ArrayBlockingQueue constructor must pass in the specified size, so it is a bounded queue.

LinkedBlockingQueue has two cases. If you specify a size, it is a bounded queue; if you don't, it is effectively unbounded, with a maximum capacity of Integer.MAX_VALUE.

PriorityBlockingQueue (there is also the double-ended LinkedBlockingDeque) is an unbounded queue no matter which constructor you use, backed internally by a priority heap.

SynchronousQueue is similar to Golang's channel (chan), specifically an unbuffered chan: take and put behave exactly like an unbuffered chan's receive and send. The difference is that its poll and offer variants can be given a wait timeout.
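A small handoff sketch: offer with no waiting consumer fails immediately, while a timed offer paired with a take succeeds, much like an unbuffered channel.

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.TimeUnit;

class HandoffDemo {
    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> queue = new SynchronousQueue<>();

        // No consumer is waiting, so a non-blocking offer fails immediately.
        System.out.println(queue.offer("x")); // false

        // With a consumer waiting, the handoff succeeds.
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("got " + queue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();
        // a timed offer waits for the consumer to arrive
        System.out.println(queue.offer("x", 1, TimeUnit.SECONDS)); // true
        consumer.join();
    }
}
```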

DelayQueue provides the ability to retrieve elements only after a specified delay, with the head being the element whose delay expires soonest. poll() returns null if no element has expired; an element counts as expired when its getDelay(TimeUnit.NANOSECONDS) returns a value less than or equal to zero. A DelayQueue cannot hold null elements.

Elements added to a DelayQueue must implement the java.util.concurrent.Delayed interface:

```java
@Test
public void testLinkedList() throws InterruptedException {

    DelayQueue<Person> queue = new DelayQueue<>();

    queue.add(new Person());

    System.out.println("queue.poll() = " + queue.poll(200, TimeUnit.MILLISECONDS));
}

static class Person implements Delayed {

    @Override
    public long getDelay(TimeUnit unit) {
        // this object's remaining delay; a positive value means it has not expired
        return 100L;
    }

    @Override
    public int compareTo(Delayed o) {
        return o.hashCode() - this.hashCode();
    }
}

// Output:
// queue.poll() = null
```

LinkedTransferQueue is an unbounded queue added in JDK 1.7. Doug Lea has called it the most useful BlockingQueue and the best performing. Functionally, LinkedTransferQueue is effectively a superset of ConcurrentLinkedQueue, SynchronousQueue (in fair mode), and LinkedBlockingQueue. Its transfer method makes the producer block until a consumer has actually consumed the element, not merely until it has been added to the queue. We also know that the other blocking queues rely heavily on conditions and locks, which is relatively inefficient, whereas LinkedTransferQueue is lock-free. Its core method is xfer(); essentially all the public methods are built around it, distinguished by the state constants NOW, ASYNC, SYNC, and TIMED. Methods like put, offer, and add use ASYNC, so they do not block. The constants are:

```java
private static final int NOW   = 0; // for untimed poll, tryTransfer (no waiting)
private static final int ASYNC = 1; // for offer, put, add (no blocking)
private static final int SYNC  = 2; // for transfer, take (blocks)
private static final int TIMED = 3; // for timed poll, tryTransfer (waits with timeout)
```
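A small sketch of the difference between add (ASYNC, returns immediately) and transfer (SYNC, blocks until a consumer takes the element):

```java
import java.util.concurrent.LinkedTransferQueue;

class TransferDemo {
    public static void main(String[] args) throws InterruptedException {
        LinkedTransferQueue<String> queue = new LinkedTransferQueue<>();

        // add/offer/put are async: the element just goes on the queue
        queue.add("async");
        System.out.println(queue.poll()); // async

        // transfer blocks the producer until a consumer receives the element
        Thread producer = new Thread(() -> {
            try {
                queue.transfer("handoff"); // returns only after take() below
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        System.out.println(queue.take()); // handoff
        producer.join();
    }
}
```

tryTransfer (NOW) is the non-waiting variant: it returns false immediately, without enqueueing, if no consumer is waiting.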

7. Implementation of PriorityQueue (small top heap)?

What the small top heap is: The weight of any non-leaf node is no greater than the weight of its left and right children.

PriorityQueue is not thread-safe, PriorityBlockingQueue is thread-safe.

Both use a heap, and the algorithm is the same.

The logical structure of a PriorityQueue is a complete binary tree, and because a complete binary tree can be stored compactly in an array, its storage structure is actually an array.

Note that PriorityQueue in Java uses a small top heap, so iterating over it does not necessarily yield the elements in fully ascending order.
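Iterating a PriorityQueue walks the backing array in heap layout, not sorted order; only repeated poll() drains it in sorted order:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

class HeapOrderDemo {
    // Drain the queue via poll(), which always removes the current minimum.
    static List<Integer> drain(PriorityQueue<Integer> pq) {
        List<Integer> out = new ArrayList<>();
        while (!pq.isEmpty()) out.add(pq.poll());
        return out;
    }

    public static void main(String[] args) {
        PriorityQueue<Integer> pq = new PriorityQueue<>();
        for (int x : new int[]{5, 1, 4, 2, 3}) pq.offer(x);

        // toString shows the heap's array layout, which need not be sorted
        System.out.println(pq);
        // draining via poll() is sorted: [1, 2, 3, 4, 5]
        System.out.println(drain(pq));
    }
}
```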

8. Implement a big top heap (max-heap) yourself?

```java
/**
 * Build a max-heap in place.
 * @param tree the array representing a complete binary tree
 * @param n    the number of nodes
 */
static void build_heap(int[] tree, int n) {
    int last_node = n - 1;
    int parent = (last_node - 1) / 2;
    // heapify every non-leaf node, decrementing from the last parent up to the root
    for (int i = parent; i >= 0; i--) {
        heapify(tree, n, i);
    }
}

/**
 * Sift node i down so the subtree rooted at i satisfies the max-heap property.
 * @param tree the tree array
 * @param n    the number of nodes
 * @param i    which node to heapify
 */
static void heapify(int[] tree, int n, int i) {
    // if the index is out of range, return
    if (i >= n) {
        return;
    }
    int c1 = 2 * i + 1; // left child
    int c2 = 2 * i + 2; // right child
    int max = i;

    // if the left child is greater than max
    if (c1 < n && tree[c1] > tree[max]) {
        max = c1;
    }
    // if the right child is greater than max
    if (c2 < n && tree[c2] > tree[max]) {
        max = c2;
    }
    // if i is already the maximum, no swap is needed
    if (max != i) {
        swap(tree, max, i);
        heapify(tree, n, max);
    }
}

static void swap(int[] tree, int max, int i) {
    int temp = tree[max];
    tree[max] = tree[i];
    tree[i] = temp;
}
```
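In practice you rarely hand-roll this: a PriorityQueue built with Collections.reverseOrder() behaves as a big top heap, and repeatedly polling it yields a descending sort, the same effect as extracting the root of the heap above over and over.

```java
import java.util.Collections;
import java.util.PriorityQueue;

class MaxHeapDemo {
    public static void main(String[] args) {
        // reverseOrder() flips the default min-heap into a max-heap
        PriorityQueue<Integer> maxHeap = new PriorityQueue<>(Collections.reverseOrder());
        for (int x : new int[]{4, 10, 3, 5, 1}) maxHeap.offer(x);

        StringBuilder out = new StringBuilder();
        while (!maxHeap.isEmpty()) out.append(maxHeap.poll()).append(' ');
        System.out.println(out.toString().trim()); // 10 5 4 3 1
    }
}
```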

Stack

A stack is a last-in, first-out (LIFO) structure, similar to a bottle: elements pushed first end up at the bottom (the push operation), and there is only one exit, the top of the stack. Removing and returning the top element is called pop.

The Stack class inherits from Vector, and all of its methods are synchronized, making it thread-safe but inefficient.

```java
@Test
public void testStack() {

    Stack<Integer> stack = new Stack<>();

    // push
    stack.push(1);
    stack.push(2);

    // pop returns the top element of the stack and removes it
    System.out.println("stack.pop() = " + stack.pop());
    System.out.println("stack.pop() = " + stack.pop());
}

// Output:
// stack.pop() = 2
// stack.pop() = 1
```