Synchronization containers associated with HashMap
- History: HashMap evolved out of Hashtable. Hashtable was designed on the assumption that every method of a container must be synchronized, so all of its methods are marked with the synchronized keyword. That is clearly unreasonable, because in most cases only one thread manipulates the container, so HashMap was introduced after Hashtable.
- Hashtable -> every method locked
- HashMap -> no locking
- ConcurrentHashMap -> newer synchronized container (CAS-based)
- HashMap still needs a synchronized form at times, so the Collections utility class was created. It can turn a HashMap, and many other containers, into a synchronized container, as shown below.
- Collections.synchronizedMap(): how does this synchronized wrapper differ from Hashtable? Very little. Hashtable locks entire methods with synchronized, while Collections.synchronizedMap() uses synchronized code blocks. Both are fully synchronized, but the wrapper's lock granularity is slightly smaller than Hashtable's, so it is marginally more efficient. ConcurrentHashMap instead uses lock-free CAS operations: if a put() encounters no conflict, the update is done with CAS alone; only when a hash conflict forces appending a node to a bucket's list, or updating a node, does it take a lock.
- Read and write efficiency:
- Hashtable writes quickly: in one measurement, writing 1,000,000 entries took about 700 ms, while reading 1,000,000 entries took about 37 seconds
- Collections.synchronizedMap() also writes quickly: about 600 ms to write 1,000,000 entries, while reading them took about 38 seconds
- ConcurrentHashMap writes more slowly: about 1.8 seconds for 1,000,000 entries, but reads are fast, about 1.7 seconds for 1,000,000 entries
- Conclusion: which synchronized container to use depends on the project's scenario, whether reads or writes dominate, and should be decided from actual stress-test data
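A minimal sketch of the three options discussed above (the map contents are illustrative):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SyncMapDemo {
    public static void main(String[] args) {
        // Plain HashMap: no locking, not safe for concurrent writes
        Map<String, Integer> plain = new HashMap<>();

        // Wrapper that guards every method with a synchronized block
        // on an internal mutex; like Hashtable, it still locks the
        // whole map on each call
        Map<String, Integer> synced = Collections.synchronizedMap(plain);
        synced.put("a", 1);

        // CAS-based alternative: put() takes a lock only when a
        // hash conflict forces it to touch a bucket's node list
        Map<String, Integer> concurrent = new ConcurrentHashMap<>();
        concurrent.put("a", 1);

        System.out.println(synced.get("a") + " " + concurrent.get("a"));
    }
}
```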
List, Vector, Queue
- Synchronizing a List requires explicit locking, or, as above, obtaining a synchronized List via the synchronizedXXX methods of Collections, which are likewise based on synchronized blocks
- Vector is a synchronized container; it locks whole methods, a coarser grip than synchronizedList, and is slower than synchronizedList
- Queue has multiple synchronized implementation classes, some of which even implement the producer-consumer model. For single-element operations under heavy concurrency, prefer a Queue over a List whenever possible
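A small sketch of the points above: a synchronized List wrapper (note that iteration still needs an explicit lock on the wrapper, a detail the javadoc requires) versus a lock-free queue for single-element handoffs. Element values are illustrative.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class SyncListDemo {
    public static void main(String[] args) {
        // synchronizedList guards each method with a synchronized block
        List<String> list = Collections.synchronizedList(new ArrayList<>());
        list.add("task-1");

        // Iteration is NOT atomic: the caller must lock the wrapper itself
        synchronized (list) {
            for (String s : list) {
                System.out.println(s);
            }
        }

        // For single-element add/remove under high concurrency,
        // a lock-free queue usually scales better than a locked list
        Queue<String> queue = new ConcurrentLinkedQueue<>();
        queue.offer("task-2");
        System.out.println(queue.poll());
    }
}
```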
Containers commonly used in multithreading
Map
- ConcurrentHashMap, CAS based implementation, not much explanation
- ConcurrentSkipListMap, based on a skip list
- Presumably the thinking was: since ConcurrentHashMap is implemented with CAS, a ConcurrentTreeMap should be too. Unfortunately, implementing a "ConcurrentTreeMap" with CAS is extremely difficult and complex, so the JDK took a step back and implemented ConcurrentSkipListMap on a skip list instead
- HashTable
- Collections.synchronizedMap( )
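To illustrate the skip-list map mentioned above: it plays the role a "ConcurrentTreeMap" would, keeping keys sorted while staying thread-safe. Keys and values here are illustrative.

```java
import java.util.concurrent.ConcurrentSkipListMap;

public class SkipListMapDemo {
    public static void main(String[] args) {
        // Skip-list based concurrent map: keys stay in sorted order
        ConcurrentSkipListMap<Integer, String> map = new ConcurrentSkipListMap<>();
        map.put(3, "c");
        map.put(1, "a");
        map.put(2, "b");

        System.out.println(map.firstKey()); // smallest key: 1
        System.out.println(map.keySet());   // sorted: [1, 2, 3]
    }
}
```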
List
- CopyOnWriteArrayList: copy-on-write, as the name implies, copies the backing array on every write, as shown in the figure below
- Similar to ReadWriteLock among the Lock types: reads take no lock, writes take a lock
- Collections.synchronizedList( )
- Like Collections.synchronizedMap() above, it is implemented with synchronized code blocks
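A short sketch of the copy-on-write behavior described above: because each write copies the array, iterators see a snapshot and are never disturbed by concurrent writes. Element values are illustrative.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowListDemo {
    public static void main(String[] args) {
        // Every write copies the backing array; reads are lock-free
        List<String> list = new CopyOnWriteArrayList<>();
        list.add("a");

        // The iterator walks a snapshot taken when iteration began,
        // so writing during iteration is safe (no
        // ConcurrentModificationException)
        for (String s : list) {
            list.add("b");
        }
        System.out.println(list); // [a, b]
    }
}
```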
Queue
- ConcurrentLinkedQueue
- Method introduction:
- add: adds an element; throws an exception if the queue is full
- offer: adds an element; returns true on success, false if the queue is full
- peek: reads the head element without removing it
- poll: reads and removes the head element
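The four methods above in action (element values are illustrative; ConcurrentLinkedQueue itself is unbounded, so the "full" cases apply only to capacity-restricted queues):

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class QueueMethodsDemo {
    public static void main(String[] args) {
        Queue<String> q = new ConcurrentLinkedQueue<>();
        q.add("a");    // add: would throw on a full bounded queue
        q.offer("b");  // offer: returns false instead of throwing when full

        System.out.println(q.peek()); // "a" - reads the head, keeps it
        System.out.println(q.poll()); // "a" - reads and removes the head
        System.out.println(q.poll()); // "b"
        System.out.println(q.poll()); // null - queue is empty
    }
}
```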
BlockingQueue: The natural producer-consumer model
LinkedBlockingQueue Unbounded queue
- An unbounded queue based on a linked list; elements can be added until memory runs out
- Method introduction: in addition to the add, offer, peek, and poll methods above
- put: adds an element; if the queue is full, it blocks until there is room to add more
- take: removes an element; if no element is available, it blocks until one can be fetched
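The blocking put/take pair above is what makes BlockingQueue a natural producer-consumer channel. A minimal sketch (thread names and the value 42 are illustrative):

```java
import java.util.concurrent.LinkedBlockingQueue;

public class BlockingDemo {
    public static void main(String[] args) throws InterruptedException {
        // Unbounded (Integer.MAX_VALUE capacity) linked-list queue
        LinkedBlockingQueue<Integer> queue = new LinkedBlockingQueue<>();

        // Consumer: take() blocks until an element becomes available
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("took " + queue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        Thread.sleep(100); // let the consumer block on take() first
        queue.put(42);     // put() would block only if the queue were full
        consumer.join();
    }
}
```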
ArrayBlockingQueue A bounded queue
- Same as LinkedBlockingQueue, except that it has a fixed size and is bounded
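The bounded behavior can be seen directly with offer, which reports a full queue instead of throwing (capacity 2 is illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;

public class BoundedDemo {
    public static void main(String[] args) {
        // Fixed capacity of 2, backed by an array
        ArrayBlockingQueue<String> q = new ArrayBlockingQueue<>(2);
        System.out.println(q.offer("a")); // true
        System.out.println(q.offer("b")); // true
        System.out.println(q.offer("c")); // false - queue is full
        // q.add("c") here would throw IllegalStateException instead
    }
}
```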
PriorityQueue
- Introduction: added objects are compared and sorted, and are then removed in that order
- For example, add "A", "z", "f", and "C" in sequence, then call poll in a loop: they come out in natural (char-code) order as "A", "C", "f", "z"
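The example above as code; Strings compare by char code, so the uppercase letters drain first:

```java
import java.util.PriorityQueue;

public class PriorityDemo {
    public static void main(String[] args) {
        PriorityQueue<String> pq = new PriorityQueue<>();
        pq.add("A");
        pq.add("z");
        pq.add("f");
        pq.add("C");

        // poll() removes elements in natural ordering, not insertion order
        while (!pq.isEmpty()) {
            System.out.print(pq.poll() + " "); // A C f z
        }
    }
}
```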
DelayQueue Time sequence queue
- Introduction: ordered by time; a queued object must implement the Delayed interface, overriding the getDelay and compareTo methods
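A sketch of the Delayed contract described above (the Task class, its field names, and the delays are illustrative): getDelay reports the time remaining, compareTo orders elements by deadline, and take() blocks until the earliest deadline expires.

```java
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayDemo {
    static class Task implements Delayed {
        final String name;
        final long deadline; // absolute time in millis

        Task(String name, long delayMillis) {
            this.name = name;
            this.deadline = System.currentTimeMillis() + delayMillis;
        }

        // How long until this element may be taken
        @Override
        public long getDelay(TimeUnit unit) {
            return unit.convert(deadline - System.currentTimeMillis(),
                                TimeUnit.MILLISECONDS);
        }

        // Order the queue by remaining delay (earliest deadline first)
        @Override
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS),
                                other.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    public static void main(String[] args) throws InterruptedException {
        DelayQueue<Task> queue = new DelayQueue<>();
        queue.put(new Task("later", 200));
        queue.put(new Task("sooner", 50));

        // take() blocks until the earliest deadline has expired
        System.out.println(queue.take().name); // sooner
        System.out.println(queue.take().name); // later
    }
}
```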
SynchronousQueue
- Introduction: a queue with a capacity of zero. It is not actually used to store things; it hands a task from one thread to another: one thread puts data in and waits for another thread to take it out
- Because the capacity is zero, calling add throws an exception. The correct usage is for one thread to call put to hand over data and another thread to call take to receive it
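A minimal sketch of the handoff described above (thread structure and the "hello" payload are illustrative):

```java
import java.util.concurrent.SynchronousQueue;

public class HandoffDemo {
    public static void main(String[] args) throws InterruptedException {
        // Zero capacity: every put() must pair with a take()
        SynchronousQueue<String> queue = new SynchronousQueue<>();

        Thread receiver = new Thread(() -> {
            try {
                System.out.println("received " + queue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        receiver.start();

        // add() would throw IllegalStateException because there is
        // no room to store anything; put() instead blocks until the
        // receiver thread takes the element
        queue.put("hello");
        receiver.join();
    }
}
```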
TransferQueue
- Introduction: has a unique transfer method; calling it writes data and then blocks until another thread takes that data away
- Methods:
- transfer: writes data and blocks until the data is fetched
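A sketch of transfer's blocking handoff, using LinkedTransferQueue, the JDK's TransferQueue implementation (thread structure and payload are illustrative):

```java
import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.TransferQueue;

public class TransferDemo {
    public static void main(String[] args) throws InterruptedException {
        TransferQueue<String> queue = new LinkedTransferQueue<>();

        Thread consumer = new Thread(() -> {
            try {
                System.out.println("got " + queue.take());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        // transfer() blocks until some consumer has taken the element,
        // unlike put(), which returns as soon as the element is enqueued
        queue.transfer("payload");
        System.out.println("handed off");
        consumer.join();
    }
}
```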