This article is mainly collated from “Java Concurrency in Practice”
1. Thread safety
1.1 Liveness problems
Liveness problems occur when an operation can never make progress, such as deadlock, starvation, and livelock
1.2 Frameworks that start threads on your behalf
Timer, Servlets/JSP, RMI, Swing/AWT
1.3 Main Synchronization Mechanisms
- Synchronized
- Volatile: Provides visibility only
- Explicit locks (Lock)
- Atomic variables
It is best to keep related synchronization inside a single thread-safe class that serves as the entry point: synchronization is handled inside the class, so callers do not need to think about it, as with the classes in the java.util.concurrent package
1.4 Dealing with mutable state variables
- Do not share the variable between threads
- Change the state variable into an immutable variable
- Use synchronization whenever the state variable is accessed
1.5 Race conditions
A race condition occurs when the correctness of a computation depends on how multiple threads happen to interleave. The most common race condition is check-then-act, where another thread can change the state between the check and the action. To avoid such errors, let existing thread-safe classes manage the state wherever possible, such as the atomic classes (implemented with CAS; note that the CAS algorithm has the ABA problem). When there are several related state variables, simple atomic classes are no longer sufficient; manage them together with synchronized code, and do not mix atomic variables in at that point
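As a minimal sketch of check-then-act and one way to fix it (the class names LazyInitRace, SafeLazyInit, and ExpensiveObject are illustrative, following the lazy-initialization example from the book):

class ExpensiveObject { }

// Broken: two threads can both see instance == null and create two different objects
class LazyInitRace {
    private ExpensiveObject instance = null;

    public ExpensiveObject getInstance() {
        if (instance == null)                 // check
            instance = new ExpensiveObject(); // then act: another thread may have run in between
        return instance;
    }
}

// Fixed: the whole compound action is made atomic with the intrinsic lock
class SafeLazyInit {
    private ExpensiveObject instance = null;

    public synchronized ExpensiveObject getInstance() {
        if (instance == null)
            instance = new ExpensiveObject();
        return instance;
    }
}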
1.6 Reentrancy of intrinsic locks
Every Java object can serve as a lock for synchronization; these are called intrinsic (built-in) locks. Intrinsic locks are reentrant: if a thread tries to acquire a lock it already holds, the request succeeds. Reentrancy means locks are acquired per thread rather than per invocation
Note that a synchronized instance method locks only the instance it is invoked on
class Widget {
    public synchronized void doSomething() { }
}

class LoggingWidget extends Widget {
    // synchronized here locks the LoggingWidget instance
    public synchronized void doSomething() {
        // The parent's synchronized method is invoked on the same LoggingWidget instance,
        // so the calling thread already holds that instance's lock (reentrancy)
        super.doSomething();
    }
}
Each lock is associated with an acquisition count and an owning thread. When the count is 0 the lock is unheld; when a thread acquires an unheld lock, the JVM records the owner and sets the count to 1. If the same thread acquires the lock again the count is incremented, and when the thread exits the synchronized block the count is decremented; when the count reaches 0 the lock is released. This is what makes the lock reentrant and prevents a thread from deadlocking against itself.
1.7 Synchronized Implements visibility
The process by which a thread executes mutex code
- Get the mutex
- Clear working memory
- Copies the latest copy of a variable from main memory to working memory
- Execute the code
- Flush the value of the updated shared variable to main memory
- Release the mutex
Lock -> Main Memory -> Working Memory -> Main memory -> UNLOCK
1.8 Synchronization is required for reading and writing shared variables
A mutable state variable that may be accessed concurrently by multiple threads must be accessed while holding the same lock every time; the variable is then said to be guarded by that lock
1.9 Atomic variables and synchronized blocks
If every operation on an atomic variable already takes place inside synchronized blocks, the atomic variable can be dropped in favor of an ordinary variable. Mixing different synchronization mechanisms is confusing; when one mechanism is enough, do not add another
1.10 Object locks and class locks
Once a thread acquires an object's lock, it only prevents other threads from acquiring that same lock. Each object has a built-in lock simply so that you do not have to explicitly create a lock object.
A synchronized instance method acquires the instance's lock (the object lock); a synchronized static method acquires the class lock. Synchronized blocks are likewise divided into object locks and class locks
- Object locks are used on instance methods, or when locking on an object instance; the object locks of different instances do not interfere with each other
- Class locks are used on static methods of a class or on the class object of a class, but there is only one class lock per class. In fact, class locks are just a concept, not a real thing. They are used to help us understand the difference between locking instance methods and static methods
- Object locks and class locks do not interfere with each other. One thread can acquire the object lock, and another thread can acquire the class lock at the same time
- There is no correlation between an object lock of one class and an object lock of another class. When A thread acquires an object lock of class A, it can also acquire an object lock of class B.
2. Visibility and publication of objects
2.1 Out-of-thin-air safety (minimal safety)
The Java memory model requires reads and writes of variables to be atomic: even when threads read a variable without synchronization, the value they see must be one that some thread actually wrote, not a random value. Note the scope of this guarantee: the JVM is allowed to split reads and writes of nonvolatile 64-bit values (long and double) into two 32-bit operations, so the high and low 32 bits a reader sees may come from different writes. The fix is to declare such variables volatile or to guard them with a lock
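A minimal sketch of the 64-bit case (the class name Counter64 is illustrative):

class Counter64 {
    // private long hits;        // without volatile, an unsynchronized reader could see a torn value
    private volatile long hits;   // volatile makes each read and write of the long atomic

    void record() { hits++; }     // note: ++ is still a read-modify-write and is not made atomic by volatile
    long getHits() { return hits; }
}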
2.2 Volatile Visibility
Operations on volatile variables are not reordered with other memory operations, and volatile variables are not cached in registers or in places where other processors cannot see them (reads and writes go to main memory rather than to a private copy), so a read of a volatile variable always returns the most recent write
- Usage scenarios
- Writes to the variable do not depend on its current value, or only a single thread ever updates the value
- The variable does not participate in invariants together with other state variables
- Locking is not required for any other reason while the variable is being accessed
Example
volatile boolean asleep;
...
while (!asleep)
    countSomeSheep();
If asleep is not volatile, the thread running the loop may never notice that another thread has changed it. Here volatile is simpler and cheaper than locking, but volatile only guarantees visibility, not atomicity: it cannot make count++ atomic (that is a separate read and write), whereas locking can
Volatile can also be used to publish immutable objects (p. 40)
2.3 Jvm optimization and multithreading debugging issues – server
For server applications, always specify the -server command-line option when starting the JVM, in development and testing as well as in production. The server JVM performs more optimization than the client JVM, such as hoisting variables that are not modified in a loop out of the loop. In the volatile example above, if asleep were not declared volatile, the server JVM could hoist the test out of the loop (turning it into an infinite loop), while the client JVM would not
2.4 Release and Escape
- Publication: making an object usable by code outside its current scope (for example, other objects hold a reference to it)
- Escape: an object that should not have been published is published
Publishing an object before its construction is complete breaks thread safety: when the this reference escapes from a constructor, only an incompletely constructed object is published, even on the last line of the constructor. Creating an inner-class instance in the constructor implicitly captures this, so handing that instance out (as below) lets this escape. Do not let the this reference escape during construction; construct fully first and publish afterwards
Example
// Wrong: this escapes via the inner EventListener during construction
public class ThisEscape {
    public ThisEscape(EventSource source) {
        source.registerListener(new EventListener() {
            public void onEvent(Event e) {
                doSomething(e);
            }
        });
    }
}

// Correct: finish construction first, then publish from a factory method
public class SafeListener {
    private final EventListener listener;

    private SafeListener() {
        listener = new EventListener() {
            public void onEvent(Event e) {
                doSomething(e);
            }
        };
    }

    public static SafeListener newInstance(EventSource source) {
        SafeListener safe = new SafeListener(); // create the object first
        source.registerListener(safe.listener); // then publish the listener
        return safe;
    }
}
2.5 Ad-hoc thread confinement
Ad-hoc thread confinement means the responsibility for confinement rests entirely with the implementation: the data is not shared and is accessed only from a single thread, so the object stays confined to that thread. Because it is fragile, use it sparingly; prefer stronger techniques such as stack confinement or the ThreadLocal class
2.6 Stack confinement
The variable exists only on the executing thread's stack and is used only within that thread. If an object that is not thread-safe is reachable only through local variables of a single thread, using it is still thread-safe
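A minimal sketch of stack confinement (the method and collection are illustrative, and the java.util imports are assumed): even though TreeSet is not thread-safe, the instance is reachable only through the local variable sorted, so no other thread can ever touch it.

public int countDistinct(Collection<String> words) {
    Set<String> sorted = new TreeSet<String>();  // confined to this stack frame
    sorted.addAll(words);
    // as long as the reference to 'sorted' never escapes this method, the use is thread-safe
    return sorted.size();
}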
2.7 ThreadLocal
Conceptually a ThreadLocal<T> can be thought of as a Map<Thread, T> that holds a separate value for each thread (the actual implementation stores the values with the Thread object itself)
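A sketch of the typical usage, based on the per-thread JDBC connection holder example from the book (DB_URL is an illustrative constant; java.sql imports are assumed):

private static final ThreadLocal<Connection> connectionHolder =
        new ThreadLocal<Connection>() {
            @Override
            protected Connection initialValue() {
                try {
                    return DriverManager.getConnection(DB_URL);
                } catch (SQLException e) {
                    throw new RuntimeException(e);
                }
            }
        };

public static Connection getConnection() {
    return connectionHolder.get(); // each thread gets, and later reuses, its own Connection
}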
2.8 Immutable Objects
An object is immutable if:
- Its state cannot be modified after construction
- All its fields are final (final fields also guarantee initialization safety)
- It is properly constructed (the this reference does not escape during construction)
Any thread can safely access an immutable object without synchronization, even if the object itself was published without synchronization. If a final field refers to a mutable object, the reference is immutable but the referenced object's state is not, so accessing that state still requires synchronization
public class Holder {
    // private int n;       // without final, an unsafely published Holder may expose a stale value
    private final int n;

    public Holder(int n) {
        this.n = n;
    }

    public void assertSanity() {
        if (n != n)
            throw new AssertionError("This statement is false.");
    }
}
Declaring the field final guarantees that the object is properly constructed and safely initialized, so other threads see the correct value of n even if the Holder is published without synchronization
3. Object sharing
3.1 Safe publication idioms for properly constructed objects
1. Initialize the object reference from a static initializer (for example, the eagerly initialized singleton)
2. Store a reference to the object in a volatile field or an AtomicReference
3. Store a reference to the object in a final field of a properly constructed object
4. Store a reference to the object in a field guarded by a lock, which includes placing it in a thread-safe container such as:
Hashtable, Collections.synchronizedMap, ConcurrentMap, Vector, CopyOnWriteArrayList, CopyOnWriteArraySet, Collections.synchronizedList, Collections.synchronizedSet, BlockingQueue, ConcurrentLinkedQueue
3.2 Statically Initializing an Object Reference
public static Holder holder = new Holder(42);
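A minimal sketch of the eagerly initialized ("hungry") singleton, published safely because class initialization is performed under the JVM's internal locking (the class name is illustrative):

public class EagerSingleton {
    private static final EagerSingleton INSTANCE = new EagerSingleton();

    private EagerSingleton() { }

    public static EagerSingleton getInstance() {
        return INSTANCE; // safely published by the static initializer
    }
}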
3.3 Object mutability and publication
- Immutable objects can be published by any mechanism
- Effectively immutable objects must be published safely
- Mutable objects must be published safely, and must additionally be thread-safe or guarded by a lock
3.4 Using and Sharing Objects
- Thread-confined: the object is owned exclusively by, and confined to, one thread
- Read-only sharing: the shared object is immutable or effectively immutable, so it can be read by any thread without synchronization
- Thread-safe sharing: the object performs synchronization internally, so multiple threads can access it through its public interface without further synchronization
- Guarded objects: a guarded object can be accessed only while a specific lock is held; this includes objects encapsulated inside other thread-safe objects, as well as published objects that are known to be guarded by a specific lock
4. Composing objects
4.1 Designing thread-safe classes
- Find all the variables that make up the state of the object
- Find the invariance conditions that constrain the state variables
- Establish concurrent access policies for object state
4.2 Synchronizing the basic Java container classes
Some of the basic Java container classes are not thread-safe, but they can be wrapped via factory methods such as Collections.synchronizedList(), which encapsulate the container inside a synchronized wrapper object
4.3 Java Monitor mode
Encapsulate all of an object's mutable state and guard it with the object's own intrinsic lock, as Vector and Hashtable do. Using a private lock object instead of the intrinsic lock can be advantageous. When handing out guarded state, return a copy of it; modifications should go through the guarding object's public methods rather than by mutating a returned copy directly.
Copy-on-write containers work the other way around: they modify a fresh copy of the underlying collection and then swap in the new reference
4.4 Delegating thread safety
If a class is composed of multiple independent, thread-safe state variables, and none of its operations can produce an invalid state transition across them, then its thread safety can be delegated to the underlying state variables. The delegated-to variables can be managed by thread-safe classes such as thread-safe containers, immutable containers, or atomic classes. When a reference to such a variable is returned, declare the field final where possible and pay special attention to whether returning it causes escape: you can return a copy, or an immutable object (the object itself being immutable even though the reference could be reassigned), an immutable container, or a synchronized container
4.5 Locking the Client
Vector, for example, is a synchronized class, so its individual operations are synchronized; but when several operations must be executed as a unit, the client has to lock around them, and the client-side lock must be the very lock the Vector uses internally: synchronized (vector) { ... }
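A sketch of client-side locking for a put-if-absent compound action on a Vector (modeled on the book's put-if-absent example):

public static <E> boolean putIfAbsent(Vector<E> list, E x) {
    synchronized (list) {            // the same lock that Vector's own methods use
        boolean absent = !list.contains(x);
        if (absent)
            list.add(x);
        return absent;
    }
}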
5. Basic building blocks
5.1 Synchronized containers vs. concurrent containers
- Synchronized containers: Vector, Hashtable, and the synchronized wrapper classes created by the Collections.synchronizedXxx factory methods. Although these containers are thread-safe, compound actions on them still need client-side locking. Synchronized containers serialize all access to the container state, which severely reduces concurrency, and throughput degrades badly when many threads contend for the lock.
- Concurrent containers: since Java 5.0 a variety of concurrent containers have been provided to improve on synchronized containers, for example ConcurrentHashMap, CopyOnWriteArrayList, CopyOnWriteArraySet, and ConcurrentSkipListMap. ConcurrentHashMap, for instance, uses lock striping: the hash table's bucket array is divided into segments, each guarded by its own lock, allowing efficient concurrent access. Its iterators are weakly consistent and do not throw ConcurrentModificationException during iteration. Methods such as size() and isEmpty() return approximations; size(), for example, remembers the total from the previous pass and returns it if two consecutive counts agree. It also adds compound atomic operations such as putIfAbsent (insert only if the key is not already present)
Note that client-side locking cannot create new atomic operations on a concurrent container: the client can only lock the container object itself, and the concurrent container does not synchronize on that lock
Copy-on-write containers: on every modification they take a lock, create a new copy of the backing array, and republish it by swapping the array reference, which provides visibility. Reads take no lock and read the current array directly, so a reader may still see the old contents while a write is in progress. All reads and writes ultimately go through the corresponding final methods, getArray() and setArray(). Use copy-on-write containers when reads vastly outnumber writes
- Use a synchronized container only when the application needs to lock the container for exclusive access; otherwise prefer concurrent containers for better performance
5.2 Blocking method and interrupt method
When your code calls a method that throws InterruptedException, your method becomes a blocking method too and must have a plan for responding to interruption
- Propagate the InterruptedException: throw it on to the caller of your method
- Restore the interrupt: if you cannot throw InterruptedException, for example because your code is part of a Runnable, you must catch it and restore the interrupted status by calling interrupt on the current thread, so that code higher up the call stack can see the interruption
public void run() {
    try {
        processTask(queue.take());
    } catch (InterruptedException e) {
        // Restore the interrupted status
        Thread.currentThread().interrupt();
    }
}
5.3 Synchronizing Tool Classes
The synchronization utility class coordinates the control flow of threads based on their state
- Blocking queues (BlockingQueue)
Not only does a blocking queue act as a container for objects, it also coordinates control flow between threads such as producers and consumers, because the take and put methods block
How the blocking works (using put() as the example):
- When the blocking queue is constructed, it creates a Lock and its associated Conditions

lock = new ReentrantLock(fair);
Condition notEmpty = lock.newCondition();
Condition notFull = lock.newCondition();
- put acquires the lock; while the queue is full it waits on the notFull Condition until an operation such as take wakes it up
lock.lockInterruptibly();
try {
while (count == items.length)
notFull.await();
insert(e);
} finally {
lock.unlock();
}
Note that the wait must be in a loop: after waking up, the thread must loop back and re-check whether the condition predicate actually holds
- CountDownLatch ensures that certain activities do not proceed until other activities have completed; once the latch reaches its terminal state (count zero) it never changes. CountDownLatch lets one or more threads wait for a set of events to occur; the await method blocks the calling thread until the count reaches zero
// Provide a common entrance and exit gate for all worker threads
public class TestHarness {
    public long timeTasks(int nThreads, final Runnable task)
            throws InterruptedException {
        final CountDownLatch startGate = new CountDownLatch(1);
        final CountDownLatch endGate = new CountDownLatch(nThreads);

        for (int i = 0; i < nThreads; i++) {
            Thread t = new Thread() {
                public void run() {
                    try {
                        // every worker waits here until startGate.countDown() is called
                        startGate.await();
                        try {
                            task.run();
                        } finally {
                            endGate.countDown();
                        }
                    } catch (InterruptedException ignored) {
                    }
                }
            };
            t.start();
        }

        long start = System.nanoTime();
        startGate.countDown();
        // the main thread waits here until endGate.countDown() has been called nThreads times
        endGate.await();
        long end = System.nanoTime();
        return end - start;
    }
}
FutureTask: a Runnable that computes a result, with three states: waiting to run, running, and completed. If the task has completed, Future.get() returns the result immediately; otherwise it blocks until completion. Once completed, a FutureTask stays in that state forever. A typical use is a cache built on a ConcurrentMap whose values are Futures (p. 89)
// Use FutureTask to preload data that will be needed later
public class Preloader {
    ProductInfo loadProductInfo() {
        return null; // stands in for an expensive computation or a long wait
    }

    private final FutureTask<ProductInfo> future = new FutureTask<ProductInfo>(
            new Callable<ProductInfo>() {
                public ProductInfo call() throws InterruptedException {
                    return loadProductInfo();
                }
            });

    private final Thread thread = new Thread(future);

    public void start() {
        thread.start();
    }

    public ProductInfo get() throws InterruptedException, ExecutionException {
        return future.get(); // blocks until the result is available
    }

    interface ProductInfo { }
}
- Semaphore is used to control the number of operations that can access a particular resource at the same time, or the number of operations that can be performed at the same time. Semaphore manages a set of virtual permissions
public class BoundedHashSet <T> {
private final Set<T> set;
private final Semaphore sem;
public BoundedHashSet(int bound) {
this.set = Collections.synchronizedSet(new HashSet<T>());
sem = new Semaphore(bound);
}
public boolean add(T o) throws InterruptedException {
sem.acquire();
boolean wasAdded = false;
try {
wasAdded = set.add(o);
return wasAdded;
} finally {
if (!wasAdded)
sem.release();
}
}
public boolean remove(Object o) {
    boolean wasRemoved = set.remove(o);
    if (wasRemoved)
        sem.release();
    return wasRemoved;
}
}
5.4 Barriers
CyclicBarrier repeatedly gathers a fixed number of participants at a barrier point, which is useful in parallel iterative algorithms. Exchanger is a two-party barrier for asymmetric work, in which the parties exchange data when they reach the barrier
private final CyclicBarrier barrier;

// When barrier.await() has been called 'count' times, the barrier action
// mainBoard.commitNewValues() is executed before the workers are released
this.barrier = new CyclicBarrier(count,
        new Runnable() {
            public void run() {
                mainBoard.commitNewValues();
            }
        });

public void run() {
    while (!board.hasConverged()) { // loop until the computation has converged
        for (int x = 0; x < board.getMaxX(); x++)
            for (int y = 0; y < board.getMaxY(); y++)
                board.setNewValue(x, y, computeValue(x, y));
        try {
            barrier.await();
        } catch (InterruptedException ex) {
            return;
        } catch (BrokenBarrierException ex) {
            return;
        }
    }
}
(p. 84) Threads, CPUs, and throughput: when tasks involve no I/O and no shared-data access, throughput is best when the number of threads equals the number of CPUs (or CPUs + 1). This applies to the threads of a single process; how CPU time is divided among multiple processes is a separate scheduling question
6. Task execution
6.1 Executor
Executor provides a standard way to decouple task submission from task execution, along with support for lifecycle management, statistics gathering, application management, and performance monitoring
| Factory method | Thread count | On uncaught exception | Characteristics | Work queue |
|---|---|---|---|---|
| newFixedThreadPool | fixed | a replacement thread is created when a thread dies from an exception | grows gradually up to the maximum size and stays there | LinkedBlockingQueue |
| newCachedThreadPool | up to Integer.MAX_VALUE | | cacheable pool: idle threads are reclaimed when the pool exceeds current demand, and the pool can grow without bound | SynchronousQueue |
| newSingleThreadExecutor | 1 | a new thread is started if the single thread dies from an exception | tasks execute serially, in queue order (for example, by priority) | LinkedBlockingQueue |
| newScheduledThreadPool | fixed | | executes tasks after a delay or periodically | RunnableScheduledFuture[] array |
ExecutorService exec = Executors.newSingleThreadExecutor();
6.2 Executor Life Cycle
Three states: running, shutting down, and terminated. Four task lifecycle phases: created, submitted, started, and completed. Tasks that have been submitted but not yet started can always be cancelled; tasks that have already started can be cancelled only if they respond to interruption. The JVM exits only after all non-daemon threads have terminated, so if an Executor is not shut down properly the JVM cannot exit. There are two ways to shut down: 1. shutdown: stop accepting new tasks but finish all tasks that are running or waiting in the queue. 2. Forced shutdown with shutdownNow: attempt to cancel all running tasks, do not start tasks still waiting in the queue, and return the tasks that were submitted but never started so they can be logged or retried
6.3 Delayed Tasks and Periodic Tasks
The java.util.Timer class uses only a single thread to execute all of its timer tasks, so if one task runs too long the timing accuracy of the other TimerTasks suffers; prefer a scheduled thread pool (newScheduledThreadPool) instead
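A small sketch of the scheduled-thread-pool alternative (pollStatus, the pool size, and the one-second period are placeholders):

ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(2);
scheduler.scheduleAtFixedRate(new Runnable() {
    public void run() {
        pollStatus(); // one slow task no longer delays every other scheduled task, as it would with Timer
    }
}, 0, 1, TimeUnit.SECONDS);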
6.4 the Future
All of the submit methods in ExecutorService return a Future, which can be used to obtain the task's result; Future also has a cancel method for cancelling the task
6.5 CompletionService
CompletionService combines the functions of an Executor and a BlockingQueue: you submit Callable tasks to it for execution and then use queue-like methods such as take and poll to retrieve completed Futures, calling Future.get() for the result. This is useful when you have a batch of computations and want to consume each result as soon as it is ready. ExecutorCompletionService is implemented by placing completed tasks on an internal BlockingQueue
void renderPage(CharSequence source) {
    final List<ImageInfo> info = scanForImageInfo(source);
    CompletionService<ImageData> completionService =
            new ExecutorCompletionService<ImageData>(executor);

    for (final ImageInfo imageInfo : info)
        completionService.submit(new Callable<ImageData>() {
            public ImageData call() {
                return imageInfo.downloadImage();
            }
        });

    renderText(source);

    try {
        for (int t = 0, n = info.size(); t < n; t++) {
            Future<ImageData> f = completionService.take();
            ImageData imageData = f.get();
            renderImage(imageData);
        }
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    } catch (ExecutionException e) {
        throw launderThrowable(e.getCause());
    }
}
6.6 Design time limit for tasks
Future.get(long, TimeUnit) puts a time limit on retrieving a single task's result. invokeAll submits a collection of tasks to an ExecutorService and returns their Futures, added to the returned list in the iteration order of the task collection. invokeAll returns when all tasks complete, the calling thread is interrupted, or the timeout expires; you can then use get or isCancelled to determine what happened to each task
List<QuoteTask> tasks = new ArrayList<QuoteTask>();
for (TravelCompany company : companies)
tasks.add(new QuoteTask(company, travelInfo));
List<Future<TravelQuote>> futures = exec.invokeAll(tasks, time, unit);
List<TravelQuote> quotes = new ArrayList<TravelQuote>(tasks.size());
for (Future<TravelQuote> f : futures) {
    try {
        quotes.add(f.get());
    } catch (ExecutionException e) {
        quotes.add(...); // add a failure placeholder so results stay in the original order
    } catch (CancellationException e) {
        quotes.add(...); // add a timeout placeholder so results stay in the original order
    }
}
class QuoteTask implements Callable<TravelQuote> {
    public TravelQuote call() throws Exception {
        return company.solicitQuote(travelInfo);
    }
}
7. Cancellation and shutdown
Java does not provide any mechanism for safely forcing a thread to stop. Instead it provides interruption, a cooperative mechanism that lets one thread ask another to stop what it is currently doing
7.1 Callable
Callable expects its main entry point, call, to return a value and possibly throw an exception; for tasks with no return value, Callable<Void> can be used
7.2 Interrupt
Calling interrupt does not immediately stop what the target thread is doing; it merely delivers a request to interrupt, which the thread acts on at the next convenient stopping point. Prefer interruption to a hand-rolled boolean cancellation flag: with a flag, a thread blocked in a queue or I/O operation never rechecks the flag and stays stuck. (For blocking I/O, an interrupt request likewise only sets the thread's interrupted status while it stays blocked; in that case the underlying I/O channel has to be closed.)
7.3 Interrupted
Thread.interrupted() returns the interrupted status of the current thread and clears it. If interrupted() returns true, the status has been cleared, so unless you act on it the interruption will not be seen again. You must therefore handle it: either throw InterruptedException or restore the status by calling Thread.currentThread().interrupt()
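A minimal illustration of acting on the result of interrupted() (the surrounding task code is assumed):

if (Thread.interrupted()) {               // returns the current thread's status and clears it
    // Option 1: propagate
    //   throw new InterruptedException();
    // Option 2: restore the status so callers higher up the stack can see it
    Thread.currentThread().interrupt();
}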
7.4 Interrupt Policy
The most sensible interruption policy is some form of thread-level or service-level cancellation: exit as promptly as practical, clean up if necessary, and notify an owning entity that the thread is exiting. A thread should be interrupted only by its owner; the owner can encapsulate knowledge of the thread's interruption policy in an appropriate cancellation mechanism, such as a shutdown method
- Basic interrupt policy
- Propagate the InterruptedException: rethrow it to the caller of the method
- Restore the interrupt: when the code is part of a Runnable it cannot throw the exception; it must catch InterruptedException and restore the interrupted status by calling interrupt on the current thread, so that code higher up the call stack can see that an interrupt occurred

public void run() {
    try {
        processTask(queue.take());
    } catch (InterruptedException e) {
        // restore interrupted status
        Thread.currentThread().interrupt();
    }
}
When attempting to cancel a task, do not interrupt the pool's threads directly; cancel only through the task's Future

Future<?> task = taskExec.submit(r);
try {
    task.get(timeout, unit);
} catch (TimeoutException e) {
    // the task will be cancelled below
} catch (ExecutionException e) {
    throw launderThrowable(e.getCause());
} finally {
    // harmless if the task has already completed
    task.cancel(true);
}
7.5 Non-interruptible Blocking
An interrupt request only sets the thread's interrupted status; it does nothing more while the thread is blocked in synchronous socket I/O or waiting for an intrinsic lock. To make such a thread respond to interruption, the underlying blocking operation itself must be broken, for example by closing the socket so that the blocked read throws an exception
public void interrupt() {
    try {
        socket.close();
    } catch (IOException ignored) {
    } finally {
        super.interrupt();
    }
}
7.6 newTaskFor Custom interrupt
With newTaskFor you can customize cancellation so that it both interrupts the thread and cancels the underlying blocking operation (for example, by closing a socket). Note that cancelling a producer-consumer operation requires cancelling both the producer and the consumer
public abstract class SocketUsingTask<T> implements CancellableTask<T> {
    @GuardedBy("this") private Socket socket;

    protected synchronized void setSocket(Socket s) {
        socket = s;
    }

    // Custom cancel method: closes the socket to break out of blocking I/O
    public synchronized void cancel() {
        try {
            if (socket != null)
                socket.close();
        } catch (IOException ignored) {
        }
    }

    public RunnableFuture<T> newTask() {
        return new FutureTask<T>(this) {
            public boolean cancel(boolean mayInterruptIfRunning) {
                try {
                    // First invoke the task's own cancel method
                    SocketUsingTask.this.cancel();
                } finally {
                    return super.cancel(mayInterruptIfRunning);
                }
            }
        };
    }
}

// CancellableTask adds two methods to Callable
interface CancellableTask<T> extends Callable<T> {
    void cancel();
    RunnableFuture<T> newTask();
}

@ThreadSafe
class CancellingExecutor extends ThreadPoolExecutor {
    protected <T> RunnableFuture<T> newTaskFor(Callable<T> callable) {
        if (callable instanceof CancellableTask)
            return ((CancellableTask<T>) callable).newTask(); // use the task's extended factory
        else
            return super.newTaskFor(callable);
    }
}
7.7 Flag-based shutdown
You can use a boolean flag to indicate shutdown, together with a counter of tasks still pending in the queue. On shutdown the flag is set and the worker thread is interrupted. The producer method checks the flag and throws an exception if the service has been shut down; the consumer exits only when the flag is set and the counter is 0, and otherwise keeps draining the queue until every submitted task has been handled
public class LogService {
private final BlockingQueue<String> queue;
private final LoggerThread loggerThread;
private final PrintWriter writer;
@GuardedBy("this") private boolean isShutdown;
@GuardedBy("this") private int reservations;
public LogService(Writer writer) {
this.queue = new LinkedBlockingQueue<String>();
this.loggerThread = new LoggerThread();
this.writer = new PrintWriter(writer);
}
public void start() {
loggerThread.start();
}
public void stop() {
synchronized (this) {
isShutdown = true;
}
loggerThread.interrupt();
}
public void log(String msg) throws InterruptedException {
synchronized (this) {
if (isShutdown)
throw new IllegalStateException(/* ... */);
++reservations;
}
queue.put(msg);
}
private class LoggerThread extends Thread {
    public void run() {
        try {
            while (true) {
                try {
                    synchronized (LogService.this) {
                        // exit only once shut down and every queued message has been written
                        if (isShutdown && reservations == 0)
                            break;
                    }
                    String msg = queue.take();
                    synchronized (LogService.this) {
                        --reservations;
                    }
                    writer.println(msg);
                } catch (InterruptedException e) { /* retry */ }
            }
        } finally {
            writer.close();
        }
    }
}
}
7.8 Poison pill objects
A poison pill is an agreed-upon sentinel object placed on the work queue; the consumer checks each object it takes and stops when it sees the pill. With several producers and consumers you can also count pills and exit once the expected number has been seen, which makes cancelling multiple threads reliable; this works only when the number of producers and consumers is known. Note that poison pills work reliably only with unbounded queues
public class IndexingService {
private static final int CAPACITY = 1000;
private static final File POISON = new File("");
private final IndexerThread consumer = new IndexerThread();
private final CrawlerThread producer = new CrawlerThread();
private final BlockingQueue<File> queue;
private final FileFilter fileFilter;
private final File root;
public IndexingService(File root, final FileFilter fileFilter) {
this.root = root;
this.queue = new LinkedBlockingQueue<File>(CAPACITY);
this.fileFilter = new FileFilter() {
    public boolean accept(File f) {
        return f.isDirectory() || fileFilter.accept(f);
    }
};
}

private boolean alreadyIndexed(File f) {
    return false;
}
public void start() {
producer.start();
consumer.start();
}
public void stop() { // interrupt the producer; it enqueues the poison pill in its finally block
producer.interrupt();
}
public void awaitTermination() throws InterruptedException {
    consumer.join();
}
}
The consumer:
class IndexerThread extends Thread {
    public void run() {
        try {
            while (true) {
                File file = queue.take();
                if (file == POISON)
                    break;
                else
                    indexFile(file);
            }
        } catch (InterruptedException consumed) {
        }
    }

    public void indexFile(File file) {
        /* ... */
    }
}
The producer:
class CrawlerThread extends Thread {
    public void run() {
        try {
            crawl(root);
        } catch (InterruptedException e) {
            /* if interrupted, fall through: the poison pill is enqueued in the finally block */
        } finally {
            while (true) {
                try {
                    queue.put(POISON);
                    break;
                } catch (InterruptedException e1) { /* retry */ }
            }
        }
    }

    private void crawl(File root) throws InterruptedException {
        File[] entries = root.listFiles(fileFilter);
        if (entries != null) {
            for (File entry : entries) {
                if (entry.isDirectory())
                    crawl(entry);
                else if (!alreadyIndexed(entry))
                    queue.put(entry);
            }
        }
    }
}
7.9 Shutting down the ExecutorService
Shutdown: Stops accepting new tasks and finishes executing all tasks in the executing or waiting queue
public void stop() {
    try {
        exec.shutdown();
        // wait for the already-submitted tasks to finish
        exec.awaitTermination(3000, TimeUnit.MILLISECONDS);
    } catch (InterruptedException e) {
        e.printStackTrace();
    } finally {
        // ...
    }
}
7.10 Exception Handling
An uncaught RuntimeException is the main cause of threads dying prematurely. Inside task code you can catch and handle exceptions with a try-catch block. For exceptions that escape, the Thread API provides UncaughtExceptionHandler, which can detect that a thread terminated due to an uncaught exception and at least write the exception to a log. To install one on pool threads, supply a ThreadFactory to the ThreadPoolExecutor constructor
public class MyAppThread extends Thread {
public MyAppThread(Runnable runnable, String name) {
super(runnable, name);
setUncaughtExceptionHandler(new Thread.UncaughtExceptionHandler() {
    public void uncaughtException(Thread t, Throwable e) {
        log.log(Level.SEVERE, "UNCAUGHT in thread " + t.getName(), e);
    }
});
}
}
Only tasks submitted with execute deliver their thrown exceptions to the uncaught exception handler. For tasks submitted with submit, any thrown exception, checked or unchecked, is considered part of the task's return status: if such a task terminates with an exception, Future.get rethrows it wrapped in an ExecutionException. To be notified when a task fails and to perform task-specific recovery, wrap the task in a Runnable or Callable that catches the exception, or override ThreadPoolExecutor's afterExecute method
7.11 JVM shutdown hooks
When the JVM shuts down it does not stop or interrupt application threads that are still running; when the JVM finally halts, daemon threads are simply abandoned. Shutdown hooks are registered with Runtime.getRuntime().addShutdownHook(new Thread() { ... });
7.12 Daemon Threads
Normal threads: all threads created by the main thread are normal (non-daemon) threads. A new thread inherits the daemon status of the thread that created it. When a thread exits, the JVM checks the remaining running threads; if only daemon threads are left, the JVM exits, and when it halts all daemon threads are abandoned
8. Applying thread pools
8.1 ThreadLocal does not apply to thread pools
Using ThreadLocal in pool threads makes sense only when the lifetime of the thread-local value is limited to the lifetime of a single task; ThreadLocal should not be used in a thread pool to pass values between tasks
8.2 Starvation deadlock
Unless the thread pool is large enough, thread starvation deadlock occurs whenever a task in the pool waits indefinitely for a resource or condition that can only be provided by another task in the same pool. Whenever you submit tasks that depend on other tasks to an Executor, be aware that starvation deadlock is possible, and record the pool sizing or configuration constraints in the code or in the configuration file that configures the Executor. Bounding the pool's work queue only makes sense if the tasks are independent; if tasks depend on one another, a bounded pool or queue can cause starvation deadlock, so an unbounded pool such as newCachedThreadPool should be used
8.3 Limit the running time
Most blocking methods come in timed and untimed versions, such as Thread.join, BlockingQueue.put, CountDownLatch.await, and Selector.select. If the timed wait expires, you can mark the task as failed and abandon it, or put it back on the queue for later execution
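A sketch of a timed blocking call (the queue, the task, and the two-second budget are illustrative, and the surrounding code is assumed to handle InterruptedException):

boolean queued = queue.offer(task, 2, TimeUnit.SECONDS);
if (!queued) {
    // the time budget expired: mark the task as failed, or requeue it for later
    handleSubmissionTimeout(task);
}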
8.4 Thread pool size
Thread pool sizes should not be hard-coded; provide them through a configuration mechanism or compute them dynamically from Runtime.getRuntime().availableProcessors(). If you need to run different categories of tasks whose behavior differs widely, consider separate pools so each can be sized for its workload. For compute-intensive tasks the optimal pool size is the number of CPUs + 1. For tasks that also perform I/O or other blocking operations the pool must be larger, because the threads are not runnable all the time, and you need to estimate the ratio of task wait time to compute time
N_cpu = number of CPUs
U_cpu = target CPU utilization, 0 <= U_cpu <= 1
W/C   = ratio of wait time to compute time

To reach the target processor utilization, the optimal pool size is:

N_threads = N_cpu * U_cpu * (1 + W/C)
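Plugging the formula into code (a sketch; the utilization and wait-to-compute values are estimates you would supply):

int nCpu = Runtime.getRuntime().availableProcessors();
double targetUtilization = 0.8;   // U_cpu
double waitToComputeRatio = 5.0;  // W/C: tasks spend 5x as long waiting as computing
int nThreads = (int) (nCpu * targetUtilization * (1 + waitToComputeRatio));
ExecutorService pool = Executors.newFixedThreadPool(nThreads);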
int cpu = Runtime.getRuntime().availableProcessors(); ThreadPoolExecutor lets you supply a BlockingQueue to hold tasks awaiting execution. There are three basic approaches to task queuing: unbounded queue, bounded queue, and synchronous handoff
8.5 Unbounded queue
The factory methods newFixedThreadPool and newSingleThreadExecutor use an unbounded LinkedBlockingQueue by default
8.6 Bounded Queues
ArrayBlockingQueue, bounded LinkedBlockingQueue, and PriorityBlockingQueue; bounded queues help prevent resource exhaustion
8.7 Synchronous Handover
For very large or unbounded pools, a SynchronousQueue can be used to avoid queuing tasks altogether and hand them directly from producers to worker threads. If no worker is waiting to take a task and the pool has not reached its maximum size, a new thread is created to accept it; otherwise the saturation policy rejects the task. The newCachedThreadPool factory method uses a SynchronousQueue; synchronous handoff is practical only when the pool is unbounded or when rejecting tasks is acceptable
8.8 Saturation Strategy
Saturation policies come into play when a bounded queue fills up or the Executor has been shut down. The saturation policy of a ThreadPoolExecutor can be changed by calling setRejectedExecutionHandler with a different RejectedExecutionHandler implementation
- Abort: the default policy; throws the unchecked RejectedExecutionException
- Discard: if the newly submitted task cannot be queued for execution, it is silently discarded
- Discard-oldest: discards the task that would otherwise be executed next and retries submitting the new task; if the work queue is a priority queue, this discards the highest-priority task, so the two do not combine well
- Caller-runs: pushes the task back to the caller and runs it there, occupying the caller's thread and thereby throttling the flow of new tasks. In a web server, while the main thread is busy running such a rolled-back task, newly arriving requests accumulate in the TCP layer's queue rather than in the application's; if overload persists, the TCP queue fills and requests are dropped there and backed off to the clients, giving graceful degradation
threadPoolExecutor.setRejectedExecutionHandler(new ThreadPoolExecutor.DiscardPolicy());
8.9 PrivilegedThreadFactory Set security policies
PrivilegedThreadFactory can be used to control access to particular codebases: the threads it creates have the same permissions, AccessControlContext, and contextClassLoader as the thread that created the PrivilegedThreadFactory. Without it, new pool threads inherit the access permissions of whatever client happens to call execute or submit
8.10 Extending ThreadPoolExecutor
ThreadPoolExecutor provides hooks for subclasses: beforeExecute, afterExecute, and terminated. afterExecute is called whether the task's run method returns normally or throws an exception (it is not called if the task completes with an Error). If beforeExecute throws a RuntimeException, the task is not executed and afterExecute is not invoked
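A sketch of using the afterExecute hook to log tasks that failed with an exception (the constructor parameters and logger name are illustrative):

class LoggingThreadPool extends ThreadPoolExecutor {
    private static final Logger logger = Logger.getLogger("LoggingThreadPool");

    LoggingThreadPool(int core, int max, long keepAlive, TimeUnit unit,
                      BlockingQueue<Runnable> workQueue) {
        super(core, max, keepAlive, unit, workQueue);
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        if (t != null)
            logger.log(Level.SEVERE, "task failed", t); // t is non-null when the task terminated abruptly
    }
}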
9. Avoiding liveness hazards
9.1 Lock-ordering deadlock
Deadlock can be avoided if all threads acquire locks in a fixed global order. Note that fixing the order of the parameters is not enough: if the two lock objects can be passed in either order (as when transferring between two accounts), the actual acquisition order is not fixed and deadlock is still easy to cause. Base the ordering on some immutable value of the locked objects, such as an account id. If no such ordering exists, System.identityHashCode() can be used to order the locks by hash code; when two objects happen to have the same hash code, acquire an additional tie-breaking lock first. Also, if an alien method is called while holding a lock, check whether that method acquires locks of its own, to avoid deadlock
9.2 Open Invocation
An open call is a method call made without holding a lock. Use open calls wherever possible; programs that rely on them are much easier to analyze for deadlock
// Class A (Taxi): calls an alien method while holding its own lock
public synchronized void setLocation(Point location) {
    ...
    if (location.equals(destination))
        dispatcher.notifyAvailable(this); // the method being called is also synchronized
}

// Class B (Dispatcher)
public synchronized void notifyAvailable(Taxi taxi) {
    availableTaxis.add(taxi);
}
Open call
public void setLocation(Point location) {
    boolean reachedDestination; // intermediate variable captured while holding the lock
    synchronized (this) {
        ...
        reachedDestination = location.equals(destination);
    }
    if (reachedDestination)
        dispatcher.notifyAvailable(this); // the alien call is now made without holding the lock
}
9.3 tryLock: polled and timed lock acquisition
tryLock lets you specify a timeout; if the lock cannot be acquired within that time it returns failure instead of blocking forever
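A sketch of a timed acquisition (the lock, the one-second budget, and doWork are illustrative; the surrounding method is assumed to handle InterruptedException):

if (lock.tryLock(1, TimeUnit.SECONDS)) {
    try {
        doWork();
    } finally {
        lock.unlock();
    }
} else {
    // could not get the lock in time: back off, report failure, or retry later
}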
9.4 Using thread dumps to identify deadlocks
The JVM helps identify deadlocks through thread dumps. A thread dump includes a stack trace for each running thread plus locking information: which locks each thread holds, in which stack frames they were acquired, and which lock a blocked thread is waiting for. Before producing the dump, the JVM searches the is-waiting-for graph for cycles; if it finds one, it includes the corresponding deadlock information
9.5 Starvation
Starvation occurs when a thread is perpetually denied access to resources it needs and therefore cannot make progress; the most commonly starved resource is CPU cycles. Causes include misuse of thread priorities and executing endless constructs (infinite loops, unbounded waits) while holding a lock. Avoid using thread priorities: they increase platform dependence and can create liveness hazards
9.6 Livelock
In livelock a thread keeps retrying an operation that always fails, so no progress is made. For example, in transactional message processing, a message that cannot be processed rolls back the whole transaction and is put back at the head of the queue, only to fail again. Livelock also occurs when multiple cooperating threads change state in response to one another in a way that prevents any of them from proceeding, like two pedestrians who keep stepping aside for each other at the same moment; introducing some randomness into the retry avoids the problem
10. Performance and scalability
Avoid premature optimizations. Get your program right first, and then speed it up — if not fast enough
10.1 Amdahl’s law
The maximum acceleration ratio that a program can theoretically achieve with increased computing resources depends on the proportion of parallel and serial components in the program
F = fraction of the computation that must execute serially
N = number of processors

Speedup <= 1 / (F + (1 - F) / N)
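For example, if half of the program must run serially (F = 0.5), then with N = 10 processors the speedup is at most 1 / (0.5 + 0.5/10) ≈ 1.8, and even with unlimited processors it can never exceed 1 / F = 2.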
10.2 Context Switch
Thread scheduling requires access to data structures shared by the operating system and the JVM, so its cost includes executing OS and JVM code. In addition, the data needed by a newly switched-in thread is unlikely to be in the processor's local cache, so a thread runs more slowly just after a context switch. Blocking causes a thread to be suspended and switched out; when threads block frequently, even CPU-intensive programs incur many context switches, increasing scheduling overhead and reducing throughput. On most general-purpose processors a context switch costs the equivalent of 5,000 to 10,000 clock cycles, i.e. several microseconds. The vmstat command on Unix and the perfmon tool on Windows report the number of context switches and the percentage of time spent in the kernel; kernel usage above roughly 10% usually indicates heavy scheduling activity, possibly caused by blocking on I/O or on contended locks.
10.3 Memory Synchronization
- Memory barriers: the visibility guarantees of synchronized and volatile may be implemented with memory barriers, which flush or invalidate caches, flush hardware write buffers, and stall execution pipelines. Memory barriers also inhibit some compiler optimizations, since most operations cannot be reordered across them
- JVM lock optimization: modern JVMs can remove synchronization that can never be contended, reducing unnecessary overhead. If a lock is reachable only by the current thread, for example a lock on an object created and confined inside the thread, the JVM can elide the acquisition; escape analysis identifies local object references that are never published to the heap, allowing their lock acquisitions to be removed; the JVM can also coarsen lock granularity by merging adjacent synchronized blocks
10.4 Blocking
The JVM can implement blocking behavior in either of two ways
- Spin waiting (spin-waiting loop constantly trying to acquire the lock until it succeeds)
- Suspends a blocked thread through the operating system
Which is more efficient depends on the context-switch overhead relative to the time until the lock becomes available: spin-waiting is preferable for short waits, suspension for long waits. Suspending and later resuming a blocked thread involves two extra context switches, plus the associated system calls and cache disruption:
- Blocked threads are swapped out before their execution slice runs out
- The replacement thread is switched back
10.5 Reducing lock contention
- Reduce lock holding time
- Reduce the frequency of lock requests
- Replace exclusive locks with coordination mechanisms that allow greater concurrency
10.6 Reducing the Lock Granularity
- Lock splitting: if one lock guards several independent state variables, split it into several locks, each guarding one variable, which lowers the frequency with which each individual lock is requested
- Lock striping: when contention on the lock is higher than contention on the data it guards, lock splitting can be extended further by partitioning locking over a set of independent objects. For example, ConcurrentHashMap uses an array of 16 locks, each guarding 1/16 of the hash buckets; bucket N is guarded by lock N mod 16. Drawbacks: acquiring the locks is more complex and costly, and some operations must lock the whole container; for example, when ConcurrentHashMap needs to expand the table and rehash values into a larger set of buckets, it must acquire all of the locks in the stripe set
10.7 Avoiding hotspots
The size method of ConcurrentHashMap does not return a single global count stored in the map, because such a counter would become a hot field (every add and remove, from any thread, would have to update it, causing contention). Instead each segment maintains its own count independently, and the map's size is computed by enumerating the segments and summing their counts
10.8 Replacing an exclusive lock
Use concurrent containers, read and write locks, immutable objects, atomic variables
10.9 Checking the CPU usage
The Unix vmstat/mpstat commands and the Windows perfmon tool show CPU utilization; if the CPUs are not fully utilized, possible causes include:
- Unbalanced load: increase the test load
- I/O-bound: use iostat/perfmon, or monitor network traffic levels, to determine whether the application is I/O-bound or needs more bandwidth
- External limits: use a profiler or database management tools to check for limits imposed by external services
- Lock contention: blocked threads show stack frames in a thread dump containing "waiting to lock monitor ..."; the more contended a lock is, the more often it appears in thread dumps
10.10 Do not use object pools for concurrency
If a thread requesting an object from the pool blocks, the cost of that blocking is hundreds of times the cost of a memory allocation (creating a new object); you also have to make sure the object is reset to a correct state every time it is reused
11. Testing concurrent programs
11.1 Blocking Test
Start a test thread that performs the blocking operation from the main thread; once the test thread appears to be blocked, interrupt it from the main thread, expect InterruptedException to be thrown, and then join the test thread to make sure it completes. isAlive() == false after the join means the blocking test passed. Thread.getState can also be used to verify that a thread is blocked in a condition wait, but it is not reliable enough to build the test on
11.2 Safety tests
Test for errors caused by data races by having multiple threads perform put and take operations concurrently. Find properties that are easy to check and very likely to fail if something goes wrong, without letting the error-checking code artificially limit concurrency; for example, compute checksums over the enqueued and dequeued elements and verify they are equal. For order-sensitive tests, use a CyclicBarrier or CountDownLatch so that all test threads start at the same moment, avoiding skew from thread creation. Because a bug may surface as an exception or an infinite loop, the test might otherwise never end, so set a maximum wait time and fail the test when it expires. Use more test threads than CPUs so that at any moment some threads are running and some are being switched out, which increases the amount of interleaving
11.3 Resource Management Tests
Verify that references to objects are released once they are no longer needed, so the objects can be garbage-collected
11.4 Generating more interleavings
- Insert Thread.yield() or Thread.sleep() in the middle of compound actions to encourage context switches (sleep is more reliable); AOP can make inserting and removing these calls more convenient
- Run with more active threads than processors, so that some threads are always being swapped in and out
11.5 Performance test traps
11.5.1 Garbage Collection
- Either ensure that garbage collection never runs during the whole test run (check with -verbose:gc)
- Or ensure that garbage collection runs many times during the test, so the cost of allocation and collection is reflected in the measured results
11.5.2 Dynamic Compilation
When a class is first loaded the JVM executes it by interpreting the bytecode; hot code paths may later be compiled to machine code by the dynamic (JIT) compiler, after which they execute directly. Code can also be deoptimized back to interpretation and recompiled
- Run the test program long enough that interpretation and compilation time become a negligible fraction of the total
- Or warm up so that interpretation and compilation happen before measurement; use -XX:+PrintCompilation to verify that dynamic compilation has completed before the measured run starts
11.5.3 Unrealistic code-path sampling
A dynamic compiler may apply optimizations to a single-threaded test program that it could not apply once real parallelism is present; combine single-threaded performance tests with multithreaded ones
11.5.4 Dead code elimination
In HotSpot, -server mode is better than -client mode, -server mode compilers produce more efficient code, and it is easier to eliminate useless code through optimization
- Trick for defeating dead-code elimination without adding measurable overhead: compute the hash code of a field of the result object and compare it with an arbitrary value such as the current System.nanoTime(); in the unlikely case they are equal, print a harmless, ignorable message

if (f.x.hashCode() == System.nanoTime()) {
    System.out.println("");
}
11.6 Common Concurrency Errors
11.6.1 Starting a thread in a constructor
If you start a thread in the constructor, you may have subclassing problems and cause the this reference to escape from the constructor
11.6.2 Errors in conditional waiting
When waiting on a condition queue, Object.wait or Condition.await should be called inside a loop that checks the condition predicate, and the caller must hold the lock associated with the condition queue. Calling Object.wait or Condition.await without holding the lock, not inside a loop, or without checking a condition predicate is almost always an error
11.6.3 Holding a lock while hibernating or waiting
If a lock is held while Thread.sleep is called, other threads cannot make progress for a long time, which is a serious liveness hazard. Holding two locks while calling Object.wait or Condition.await causes the same problem, because the wait releases only the lock associated with the queue being waited on
12. Explicit locks
With intrinsic (synchronized) locks, the only way to recover from a deadlock once it happens is to restart the program, and the only way to prevent deadlock is to avoid inconsistent lock ordering when writing the program
12.1 ReentrantLock
Features: timed, polled, and interruptible lock-acquisition operations, fair queuing, and non-block-structured locking. ReentrantLock.lockInterruptibly() is the interruptible acquisition operation
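The canonical usage pattern (a minimal sketch; the guarded update is a placeholder): the lock must always be released in a finally block.

Lock lock = new ReentrantLock();
...
lock.lock();
try {
    // update object state guarded by this lock
} finally {
    lock.unlock();
}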
12.2 Polled and timed locks
Polled acquisition offers another way to avoid deadlock: if a lock cannot be acquired, release the locks already held and retry acquiring all of them. Timed acquisition puts a budget on the whole operation based on the time remaining; if the operation cannot complete within the limit, it fails early
12.3 fairness
One reason non-fair locks perform better than fair locks under contention is the significant delay between resuming a suspended thread and that thread actually running; a newly arriving thread can take the lock and finish in that gap. Use fair locks when locks are held for a long time or when the average interval between lock requests is long
12.4 synchronized vs. ReentrantLock
Use ReentrantLock only when intrinsic locks cannot meet the requirements: when you need timed, polled, or interruptible lock acquisition, fair queuing, or non-block-structured locking
12.5 Read-write locks
Read-write locks can improve performance for frequently read data structures on multiprocessors; in other situations they perform slightly worse than exclusive locks. While the lock is held by readers and a writer has requested the write lock, newly arriving readers are held off until the writer has acquired and released the write lock, so writers effectively have priority. A thread holding the write lock can downgrade to the read lock, but a reader cannot upgrade to the write lock: if two readers tried to upgrade at the same time, neither would release its read lock, and they would deadlock
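A sketch of wrapping a Map with a read-write lock, along the lines of the ReadWriteMap example in the book (java.util and java.util.concurrent.locks imports are assumed):

public class ReadWriteMap<K, V> {
    private final Map<K, V> map;
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final Lock r = lock.readLock();
    private final Lock w = lock.writeLock();

    public ReadWriteMap(Map<K, V> map) {
        this.map = map;
    }

    public V put(K key, V value) {
        w.lock();             // exclusive: only one writer, no readers
        try {
            return map.put(key, value);
        } finally {
            w.unlock();
        }
    }

    public V get(Object key) {
        r.lock();             // shared: many readers may hold this at once
        try {
            return map.get(key);
        } finally {
            r.unlock();
        }
    }
}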
13. Building custom synchronizers
13.1 State dependency issues
Some operations are state-dependent: you cannot take an element from an empty queue or retrieve the result of a task that has not finished; you have to wait until the queue becomes non-empty or the task completes. A state-dependent operation can block until it is able to proceed. A naive implementation polls and sleeps in a loop, retrying until it succeeds, but that is a poor approach; use condition queues instead, or build on existing classes such as LinkedBlockingQueue, latches, Semaphore, and FutureTask
13.2 Conditional Predicates
Why the conditional predicate must be tested in a loop:
- The conditional predicate may have been true when another thread called notifyAll, but false again by the time the awakened thread reacquires the lock: between being woken up and reacquiring the lock, some other thread may have acquired the lock and changed the object's state
- The conditional predicate may never have become true since wait was called: another thread may have called notifyAll because a different conditional predicate became true
public synchronized void put(V v) throws InterruptedException {
    while (isFull())      // test the predicate in a loop, never in an if
        wait();           // releases the lock while waiting
    doPut(v);
    notifyAll();          // the "not empty" predicate may now be true
}
Copy the code
13.3 Notification
Whenever you wait on a condition, make sure the condition queue is notified in some way when the conditional predicate becomes true. For example, when a put makes the "not empty" predicate true, execute notifyAll before leaving the synchronized block, and leave as quickly as possible. notify and notifyAll do not release the lock; they only move a thread from the wait set to the point where it can compete to reacquire the lock
- notifyAll: use notifyAll rather than notify to avoid missed signals. When threads waiting on the same condition queue are waiting for different conditional predicates, notify may wake a thread of the wrong type, and the signal is effectively lost; notifyAll wakes all waiters and avoids this
- notify: a single notify may be used only when both of the following conditions hold
- Uniform waiters: only one conditional predicate is associated with the condition queue, and every thread performs the same work after returning from wait
- Single-in, single-out: each notification on the condition variable allows at most one thread to proceed. A bounded buffer, where take waits unless the buffer is empty and put waits unless it is full, satisfies only the second condition: threads waiting on its single condition queue are associated with two different conditional predicates, so notifyAll must be used
13.4 Subclass Security Issues
For state-dependent classes, either fully expose their waiting and notification protocols to subclasses and document them formally, or prevent subclasses from participating in waiting and notification altogether
13.5 Entry Protocol and Exit Protocol
For each state-dependent operation, and for each operation that modifies state on which another operation depends, define an entry protocol and an exit protocol. The entry protocol is the operation's conditional predicate; the exit protocol involves checking every state variable modified by the operation to see whether the change may have made some other conditional predicate true and, if so, notifying the associated condition queue
13.6 Explicit Condition Objects
- Conditions can be created by associated Locks, and each lock can have any number of Condition objects
private final Condition notEmpty = lock.newCondition();
Copy the code
- Fairness: a Condition inherits the fairness of its associated Lock; with a fair lock, threads are released from Condition.await in FIFO order
- In a Condition, the counterparts of wait, notify, and notifyAll are await, signal, and signalAll; be careful to call the Condition versions
- Usage scenarios: when you need fair queuing of waiting threads, or more than one wait set (condition queue) per lock; a sketch of a bounded buffer with two Conditions follows
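A sketch of a bounded buffer with one Condition per predicate, in the spirit of the book's ConditionBoundedBuffer (the buffer size and details are simplified here):
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionBoundedBuffer<T> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();   // predicate: count < items.length
    private final Condition notEmpty = lock.newCondition();  // predicate: count > 0

    private final Object[] items = new Object[16];
    private int head, tail, count;

    public void put(T x) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length)
                notFull.await();                 // wait in a loop on "not full"
            items[tail] = x;
            tail = (tail + 1) % items.length;
            count++;
            notEmpty.signal();                   // only the "not empty" waiters care
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0)
                notEmpty.await();                // wait in a loop on "not empty"
            T x = (T) items[head];
            items[head] = null;
            head = (head + 1) % items.length;
            count--;
            notFull.signal();
            return x;
        } finally {
            lock.unlock();
        }
    }
}
Copy the code
Because each Condition is associated with exactly one predicate and each put or take makes at most one waiter runnable, signal can be used instead of signalAll, avoiding unnecessary wake-ups.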
13.7 AQS
AbstractQueuedSynchronizer (AQS) is a framework for building locks and synchronizers; many synchronizers can be built easily and efficiently on top of AQS, such as ReentrantLock, Semaphore, FutureTask, and CountDownLatch
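A sketch of a binary one-shot latch built on AQS, along the lines of the book's OneShotLatch example (state 0 means closed, 1 means open):
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class OneShotLatch {
    private final Sync sync = new Sync();

    public void await() throws InterruptedException {
        sync.acquireSharedInterruptibly(0);      // the argument is ignored by this Sync
    }

    public void signal() {
        sync.releaseShared(0);
    }

    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected int tryAcquireShared(int ignored) {
            return getState() == 1 ? 1 : -1;     // succeed only if the latch is open
        }

        @Override
        protected boolean tryReleaseShared(int ignored) {
            setState(1);                         // open the latch
            return true;                         // let waiting threads proceed
        }
    }
}
Copy the code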
Atomic variables and non-blocking synchronization mechanism
14.1 concurrent CAS
CAS has three operands: the memory location V to be read and written, the expected value A to compare against, and the new value B to write. CAS atomically updates V to the new value B if and only if the current value of V equals A; otherwise it does nothing, and in either case it reports whether the update succeeded. CAS is an optimistic technique: it attempts the update expecting to succeed, and if another thread has changed the variable since it was last read, CAS detects the interference and the attempt simply fails. Because CAS can detect interference from other threads, it can implement atomic read-modify-write sequences without locking. On most processors, the overhead of the fast path for uncontended lock acquisition and release is roughly twice the cost of a CAS; a retry-loop sketch follows
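A minimal CAS retry loop using AtomicInteger.compareAndSet, as a sketch of the read-modify-write pattern (in practice incrementAndGet already does this internally):
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounter {
    private final AtomicInteger value = new AtomicInteger();

    public int increment() {
        int oldValue, newValue;
        do {
            oldValue = value.get();              // read the current value
            newValue = oldValue + 1;             // compute the new value
        } while (!value.compareAndSet(oldValue, newValue));  // retry if another thread interfered
        return newValue;
    }
}
Copy the code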
14.2 Atomic variable classes
12 atomic variable classes, divided into 4 groups
- Scalar classes: AtomicInteger, AtomicBoolean, AtomicLong, and AtomicReference, all supporting CAS
- Array classes: AtomicIntegerArray, AtomicLongArray, and AtomicReferenceArray (only int, long, and reference element types), which support atomic updates of individual array elements
- Field updater classes: AtomicIntegerFieldUpdater, AtomicLongFieldUpdater, and AtomicReferenceFieldUpdater, which perform CAS directly on an existing volatile field (a sketch follows)
- Composite variable classes: AtomicStampedReference and AtomicMarkableReference, which update a reference together with a stamp or mark
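A sketch of the field updater group: an AtomicReferenceFieldUpdater performs CAS directly on a volatile field, avoiding a separate AtomicReference object per node (the Node class here is illustrative):
import java.util.concurrent.atomic.AtomicReferenceFieldUpdater;

public class Node {
    volatile Node next;                          // must be volatile for the updater to work

    private static final AtomicReferenceFieldUpdater<Node, Node> NEXT_UPDATER =
            AtomicReferenceFieldUpdater.newUpdater(Node.class, Node.class, "next");

    boolean casNext(Node expect, Node update) {
        return NEXT_UPDATER.compareAndSet(this, expect, update);
    }
}
Copy the code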
14.3 Non-blocking algorithm
Non-blocking algorithms: the failure or suspension of one thread cannot cause the failure or suspension of another thread. Lock-free algorithms: at every step of the algorithm, some thread is able to make progress. The trick to building a non-blocking algorithm is to narrow the scope of atomic modification down to a single variable; a sketch of a non-blocking stack follows
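A sketch of a non-blocking stack in the spirit of the book's Treiber-stack example: the only shared mutable state is the single top-of-stack reference, which is updated with CAS:
import java.util.concurrent.atomic.AtomicReference;

public class ConcurrentStack<E> {
    private final AtomicReference<Node<E>> top = new AtomicReference<>();

    public void push(E item) {
        Node<E> newHead = new Node<>(item);
        Node<E> oldHead;
        do {
            oldHead = top.get();
            newHead.next = oldHead;
        } while (!top.compareAndSet(oldHead, newHead));   // retry on interference
    }

    public E pop() {
        Node<E> oldHead, newHead;
        do {
            oldHead = top.get();
            if (oldHead == null)
                return null;                              // stack is empty
            newHead = oldHead.next;
        } while (!top.compareAndSet(oldHead, newHead));
        return oldHead.item;
    }

    private static class Node<E> {
        final E item;
        Node<E> next;
        Node(E item) { this.item = item; }
    }
}
Copy the code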
14.4 ABA problem
During execution of an algorithm, a value may change from A to B and then back to A; a plain CAS cannot tell that anything happened, which can lead to incorrect decisions. The solution is to update two values together, a reference and a version number, rather than the reference alone; a sketch follows
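A minimal sketch of using AtomicStampedReference, which pairs the reference with an integer version stamp so a change from A to B and back to A is still detected (the class and method are illustrative):
import java.util.concurrent.atomic.AtomicStampedReference;

public class StampedHolder {
    private final AtomicStampedReference<String> ref =
            new AtomicStampedReference<>("A", 0);        // initial value with stamp 0

    public boolean replace(String expected, String newValue) {
        int[] stampHolder = new int[1];
        String current = ref.get(stampHolder);           // reads value and stamp together
        if (!current.equals(expected))
            return false;
        // Succeeds only if both the reference and the stamp are unchanged,
        // so an intervening A -> B -> A sequence is detected because the stamp advanced
        return ref.compareAndSet(current, newValue, stampHolder[0], stampHolder[0] + 1);
    }
}
Copy the code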
Java memory model
15.1 Happens-before
- If there is no happens-before relationship between two operations, the JVM may reorder them arbitrarily
- Volatile variable rule: a write to a volatile variable happens-before every subsequent read of that variable
- Thread start rule: a call to Thread.start() on a thread happens-before every action in the started thread
- Transitivity: if A happens-before B and B happens-before C, then A happens-before C
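A minimal sketch of the volatile rule combined with program order and transitivity (the field names are illustrative): the write to data happens-before the volatile write to ready, which happens-before the volatile read that sees it, which happens-before the read of data, so the reader is guaranteed to see 42:
public class HappensBefore {
    private int data;
    private volatile boolean ready;

    public void writer() {                 // thread A
        data = 42;                         // ordinary write
        ready = true;                      // volatile write also publishes data
    }

    public void reader() {                 // thread B
        if (ready) {                       // volatile read
            System.out.println(data);      // guaranteed to print 42
        }
    }
}
Copy the code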
15.2 Unsafe Publication
Reordering problems can occur when a happens-before relationship is missing, which is why publishing an object without adequate synchronization can allow another thread to see a partially constructed object
XVI. Q&A
16.1 Differences between Java synchronized and concurrent collections
The synchronized collection classes, Hashtable and Vector, and the synchronized wrapper classes created by Collections.synchronizedMap() and Collections.synchronizedList(), provide basic, conditionally thread-safe Map and List implementations. Concurrent collections such as ConcurrentHashMap not only provide thread safety but also improve scalability through techniques such as lock striping and internal partitioning
16.2 How can I Avoid deadlocks
The following four conditions must be met for a deadlock to occur:
- Mutually exclusive condition: a resource can only be used by one process at a time.
- Request and hold conditions: when a process is blocked by requesting resources, it holds on to acquired resources.
- No-preemption condition: a resource cannot be forcibly taken away from a process; it is released only when the process has finished using it.
- Circular wait condition: a circular chain forms in which each process waits for a resource held by the next.
The simplest approach is to break the circular wait condition: give every resource (or lock) in the system an ordering and require all processes to acquire resources in that fixed order (ascending or descending), which prevents deadlock; a sketch follows
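A minimal sketch of breaking the circular wait condition by inducing a global lock order, here using System.identityHashCode as the ordering key (a real implementation would add a tie-breaking lock for the rare case of equal hash codes):
public class OrderedLocking {
    public void doBoth(Object lockA, Object lockB, Runnable guardedWork) {
        Object first = lockA, second = lockB;
        if (System.identityHashCode(lockA) > System.identityHashCode(lockB)) {
            first = lockB;                 // swap so every caller uses the same order
            second = lockA;
        }
        synchronized (first) {
            synchronized (second) {
                guardedWork.run();         // no cycle can form among callers of this method
            }
        }
    }
}
Copy the code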
16.3 How to Create an Immutable Object in Java
- Initialize all members by constructor
- Do not provide setter methods for variables
- Declare all members private so that direct access to them is not allowed
- In getter methods, return a copy (clone) of any mutable member instead of the member itself
- Declare the fields final (a sketch follows)
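A minimal sketch following these rules (the Employee class is purely illustrative); note the defensive copies of the mutable Date field on the way in and on the way out:
import java.util.Date;

public final class Employee {                    // final: no subclass can add mutable state
    private final String name;                   // private final fields, assigned once
    private final Date hireDate;                 // Date is mutable, so it must be copied

    public Employee(String name, Date hireDate) {
        this.name = name;
        this.hireDate = new Date(hireDate.getTime());   // defensive copy in
    }

    public String getName() { return name; }

    public Date getHireDate() {
        return new Date(hireDate.getTime());     // return a copy, never the internal object
    }
    // no setters
}
Copy the code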
16.4 What is the difference between volatile and atomic variables
Volatile variables guarantee visibility and establish a happens-before edge, that is, a write to the variable happens-before subsequent reads of it, but they do not guarantee atomicity. For example, even if count is declared volatile, count++ is not atomic. The AtomicInteger class provides atomic methods such as incrementAndGet to make such read-modify-write operations atomic; a sketch follows
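A minimal sketch of the difference (the field names are illustrative): the volatile counter can lose updates under contention, while the AtomicInteger cannot:
import java.util.concurrent.atomic.AtomicInteger;

public class Counters {
    private volatile int volatileCount;                  // visible, but count++ is not atomic
    private final AtomicInteger atomicCount = new AtomicInteger();

    public void unsafeIncrement() {
        volatileCount++;                 // read-modify-write: two threads can lose updates
    }

    public void safeIncrement() {
        atomicCount.incrementAndGet();   // atomic read-modify-write implemented with CAS
    }
}
Copy the code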
16.5 Why Java constructors cannot be synchronized
The language does not allow the synchronized modifier on constructors. A commonly cited reason is that the object under construction should be visible only to the thread constructing it, so there is nothing for a constructor to synchronize against; and if the this reference does escape during construction, marking the constructor synchronized would not make that safe anyway
16.6 The Collections.synchronizedXxx() methods
They wrap every operation of the underlying collection in a synchronized block on a shared mutex, for example:
public int size() {
    synchronized (mutex) { return m.size(); }
}
Copy the code