Note: Concurrency covers a huge amount of ground, and this column is focused on organizing and reviewing, so this article does not attempt a detailed introduction or demos. The author may later publish a JDK source-reading column that will focus on the concurrency-related class libraries.

If you have a multi-processor machine, you can distribute multiple tasks across those processors and greatly improve throughput. Concurrency, however, usually refers to improving the performance of a program running on a single processor (which is arguably the conceptual difference between parallelism and concurrency).

If a program is written using concurrency, then when one task blocks, the other tasks in the program can continue to execute, so the program as a whole keeps making progress. A common example of improving performance on a single-processor system is event-driven programming: by creating a separate thread of execution for event listeners, the program avoids having every task periodically poll for input events, and even if that thread is blocked most of the time, the program can still guarantee a certain level of responsiveness.

Thread scheduling

Java provides thread support on top of a sequential language. Instead of forking external processes as in a multitasking operating system, the thread mechanism creates tasks within the single process represented by the executing program. One benefit of this approach is operating-system transparency, which was an important design goal for Java.

Java's threading mechanism is preemptive: the scheduling mechanism periodically interrupts a thread and context-switches to another, providing a time slice for each thread so that every thread gets a reasonable amount of time to drive its task. (Note that this describes switching between threads, not switching between the tasks queued inside a thread; tasks that have to queue still queue.)

The priority of a thread communicates the importance of that thread to the scheduler. Although the order in which the CPU processes the existing thread set is uncertain, the scheduler will prefer the higher-priority thread to execute. However, this does not mean that the lower-priority thread will not execute, it simply executes less frequently.

If you know that you have completed everything needed for one pass through the loop in your run() method, you can give the thread scheduler a hint by calling yield(): "I've done enough for now, another thread may use the CPU." But this is only a hint; there is no guarantee it will be honored, and yield() cannot be relied on for any significant control.

A daemon (background) thread provides a service in the background while the program runs. When all the non-daemon threads finish, the program terminates and kills all of its daemon threads at the same time (finally clauses are not guaranteed to run at that point; the remaining instructions may simply never execute). Conversely, as long as any non-daemon thread is still running, the program will not terminate.

Thread state

A thread can be in one of four states:

  1. New: A thread remains in this state only briefly, while it is being created. It allocates the necessary system resources and performs initialization. At this point the thread becomes eligible for CPU time, and the scheduler will then transition it to the runnable or blocked state.
  2. Runnable: In this state a thread can run whenever the scheduler allocates time to it. That is, at any given moment the thread may or may not be running, which distinguishes it from the blocked and dead states.
  3. Blocked: The thread could run, but some condition prevents it. While a thread is blocked, the scheduler ignores it and allocates no CPU time until it becomes ready to perform operations again.
  4. Dead: A thread in the dead or terminated state is no longer schedulable and will never receive CPU time again; its task has ended, or it is no longer runnable. The usual way for a task to die is to return from the run() method, but the task's thread can also be interrupted.
stateDiagram-v2
[*] --> New
New --> Runnable
Runnable --> Blocked
Blocked --> Runnable
Blocked --> Dead
Runnable --> Dead
Dead --> [*]

A task can become blocked for one of the following reasons:

  • The task is put to sleep by calling sleep(milliseconds).
  • A thread is suspended by calling wait() until it receives a notify() or notifyAll() message.
  • The task is waiting for some input/output to complete.
  • The task tried to invoke its synchronization control method on an object, but the object lock was unavailable because another task had already acquired it.

The Thread class contains the interrupt() method, which lets you terminate a blocked task. This method sets the thread's interrupted status. If the thread is blocked, or attempts to perform a blocking operation, setting the interrupted status causes an InterruptedException to be thrown. The interrupted status is reset when the exception is thrown or when the task calls Thread.interrupted(); this reset is what limits each interrupt to a single notification. Thread.isInterrupted(), by contrast, only checks the status and does not reset it.

Not all blocking states respond to interrupts, however. You can interrupt a call to sleep(), but you cannot interrupt a thread that is trying to acquire a synchronized lock or is performing an I/O operation. The former can be made interruptible by using a Lock (one of the differences between synchronized and Lock), while the latter can be forced to throw an IOException by closing the underlying resource, or, with NIO, cancelled via Future.cancel(true).

The correct pattern for interrupt checking combines exception catching with an active check of the interrupted status, with try-finally resource cleanup in between:

import java.util.concurrent.TimeUnit;

class NeedsCleanup {
    private final int id;

    NeedsCleanup(int id) {
        this.id = id;
        System.out.println("NeedsCleanup " + id);
    }

    void cleanup() {
        System.out.println("Cleaning up " + id);
    }
}

class Blocked3 implements Runnable {
    private volatile double d = 0.0;

    @Override
    public void run() {
        try {
            while (!Thread.interrupted()) {          // active check of the interrupted status
                NeedsCleanup n1 = new NeedsCleanup(1);
                try {
                    System.out.println("Sleeping");
                    TimeUnit.SECONDS.sleep(1);       // may throw InterruptedException
                } finally {
                    n1.cleanup();                    // cleanup runs on either exit path
                }
            }
            System.out.println("Exiting via while() test");
        } catch (InterruptedException e) {           // exception catching
            System.out.println("Exiting via InterruptedException");
        }
    }
}

Task calls

Tasks need to be attached to a Thread to perform the desired logic of the task. The Thread class does nothing by itself; it just drives the task assigned to it.

Java's Executor provides a layer of indirection between the client and the execution of a task. It takes over the life cycle of threads, handling thread reuse and task scheduling. You can write a ThreadFactory to customize the attributes of the threads an Executor creates: daemon status, priority, name, and so on.
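A minimal sketch of such a factory (the class name DaemonThreadFactory is illustrative): every thread it hands back is a named, low-priority daemon thread.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.TimeUnit;

class DaemonThreadFactory implements ThreadFactory {
    @Override
    public Thread newThread(Runnable r) {
        Thread t = new Thread(r);
        t.setDaemon(true);                      // background (daemon) status
        t.setPriority(Thread.MIN_PRIORITY);     // priority
        t.setName("worker-" + t.getId());       // name
        return t;
    }
}

class FactoryDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService exec = Executors.newCachedThreadPool(new DaemonThreadFactory());
        exec.execute(() -> System.out.println(Thread.currentThread().getName()));
        exec.shutdown();
        exec.awaitTermination(1, TimeUnit.SECONDS);  // give the daemon thread a chance to run
    }
}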

If you want a task to return a value upon completion, implement the Callable interface instead of Runnable and use the ExecutorService.submit() method, which returns a Future object that will hold the return value. (For more flexible use cases you can choose CompletableFuture.)
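A small example of the Callable/Future pattern described above (the computed value and class name are illustrative):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

class CallableDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService exec = Executors.newCachedThreadPool();
        // submit() accepts a Callable and returns a Future that will hold the result
        Future<Integer> result = exec.submit(new Callable<Integer>() {
            @Override
            public Integer call() {
                return 6 * 7;
            }
        });
        System.out.println(result.get());   // blocks until the value is available: 42
        exec.shutdown();
    }
}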

Exception handling

Exceptions cannot propagate across threads back to main(), so you must handle all exceptions raised within the task locally. Once an exception escapes the task's run() method, it propagates outward to the console. (This is an exception-catching problem; an exception in one thread does not affect the execution of other threads.)

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class NaiveExceptionHandling {
    public static void main(String[] args) {
        try {
            ExecutorService exec = Executors.newCachedThreadPool();
            exec.execute(new ExceptionThread());
        } catch (RuntimeException e) {
            // This statement will NOT execute!
            System.out.println("Exception has been handled");
        }
    }

    static class ExceptionThread implements Runnable {
        @Override
        public void run() {
            throw new RuntimeException();
        }
    }
}

Thread.UncaughtExceptionHandler lets you attach an exception handler to each Thread object; a ThreadFactory can install it as well.
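A sketch of that idea, assuming we simply want to log uncaught exceptions; the handler and pool setup are illustrative, not a fixed recipe:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class HandlerDemo {
    public static void main(String[] args) {
        // The factory attaches an uncaught-exception handler to every thread it creates
        ExecutorService exec = Executors.newCachedThreadPool(r -> {
            Thread t = new Thread(r);
            t.setUncaughtExceptionHandler(
                (thread, e) -> System.out.println("caught " + e + " in " + thread.getName()));
            return t;
        });
        exec.execute(() -> { throw new RuntimeException("boom"); });
        exec.shutdown();
    }
}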

Competition for resources

Almost all concurrency schemes resolve thread conflicts by serializing access to shared resources. The mutual-exclusion effect produced by such locking is called a mutex.

synchronized

Java provides built-in support for preventing resource conflicts in the form of the synchronized keyword. Every object automatically contains a single lock, also known as a monitor, that is acquired when any of the object's synchronized methods is called; while it is held, calls to any other synchronized method on the same object block (the locking granularity is the object, not the method). Class objects are objects too, and their lock supports synchronization of static methods.

If more than one method in your class handles the critical data, you must synchronize all of the related methods. If only one method is synchronized, the others are free to ignore the object lock and can be called with impunity.

synchronized locks are reentrant, and users normally do not need to account for this in their code.
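A minimal sketch of a class whose related methods are all synchronized on the same object lock; it also shows reentrancy, since one synchronized method calls another on the same object (the class name is illustrative):

class SynchronizedCounter {
    private int count = 0;

    public synchronized void increment() {
        count++;
    }

    public synchronized void incrementTwice() {
        // Calling another synchronized method on the same object is fine:
        // the intrinsic lock is reentrant.
        increment();
        increment();
    }

    public synchronized int get() {
        return count;
    }
}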

Lock

The Java libraries also provide an explicit mutex mechanism in java.util.concurrent.locks: the Lock object. The code is a little less elegant than the built-in lock, but it is more flexible for certain kinds of problems; for example, the synchronized keyword cannot express tryLock logic or acquisition with a timeout.
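For illustration, a sketch of timed acquisition with ReentrantLock.tryLock(), which has no synchronized equivalent (the names and the 100 ms timeout are arbitrary choices):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class LockDemo {
    private final Lock lock = new ReentrantLock();
    private int value = 0;

    public boolean tryIncrement() throws InterruptedException {
        // Attempt to acquire the lock, giving up after 100 ms
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                value++;
                return true;
            } finally {
                lock.unlock();   // always release in finally
            }
        }
        return false;            // could not acquire the lock in time
    }
}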

Semaphore

Whereas a normal lock allows only one task to access a resource at any one time, a counting semaphore allows n tasks to access the resource at the same time. You can think of a semaphore as handing out permits to use the resource.
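A brief sketch, assuming a resource that at most three tasks may use concurrently (the limit of 3 is arbitrary):

import java.util.concurrent.Semaphore;

class Pool {
    // At most 3 tasks may use the resource at the same time
    private final Semaphore available = new Semaphore(3);

    public void use() throws InterruptedException {
        available.acquire();          // take a permit (blocks if none are left)
        try {
            // ... work with the shared resource ...
        } finally {
            available.release();      // hand the permit back
        }
    }
}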

volatile

Atomicity applies to simple operations (get and set only) on primitive types other than long and double. The JVM is allowed to perform reads and writes of 64-bit long and double variables as two separate 32-bit operations, which raises the possibility of a context switch in the middle of a read or write, so different tasks could see incorrect values; declaring such a field volatile restores atomicity for those reads and writes. Atomic operations cannot be interrupted by the threading mechanism, and code that relies only on them does not need to be synchronized.

The volatile keyword also ensures visibility across the application. If you declare a field volatile, then as soon as a write is made to that field, the change is visible to all subsequent reads: the volatile field is written straight through to main memory, and reads come from main memory. Synchronization also flushes to main memory, so if a field is completely guarded by synchronized methods or blocks, it does not need to be volatile.

When the new value of a field depends on its previous value, as when incrementing a counter (that operation is not atomic!), volatile will not help. The only safe case for using volatile instead of synchronized is when the class has just one mutable field (more precisely, just one field that needs synchronization, and only simple reads and writes are performed on it).
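A sketch of that one safe case: a single volatile boolean flag, written by one thread and read by another, with nothing but simple assignments (class and field names are illustrative):

class Worker implements Runnable {
    // The only mutable field, and only simple reads and writes touch it
    private volatile boolean cancelled = false;

    public void cancel() {
        cancelled = true;             // visible to the running thread immediately
    }

    @Override
    public void run() {
        while (!cancelled) {
            // ... do a unit of work ...
        }
    }
}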

Atomic

Java introduced special atomic variable classes such as AtomicInteger, AtomicLong, and AtomicReference, which provide an atomic conditional-update operation of the following form:

boolean compareAndSet(expectedValue, updateValue);

These classes are tuned to use atomic machine-level instructions available on modern processors, so they solve the increment problem above without locking.

They are rarely needed in everyday programming, but they come in handy for performance tuning. CAS is an optimistic-locking technique: it involves no mutex, but it requires the user to decide what to do when the update fails.
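For illustration, a counter based on AtomicInteger; the second method spells out the CAS retry loop, where the "failure policy" is simply to try again:

import java.util.concurrent.atomic.AtomicInteger;

class AtomicCounter {
    private final AtomicInteger count = new AtomicInteger(0);

    public void increment() {
        count.incrementAndGet();      // atomic read-modify-write, no lock needed
    }

    public void incrementWithCas() {
        // Explicit CAS retry loop: on failure, read again and retry
        int prev;
        do {
            prev = count.get();
        } while (!count.compareAndSet(prev, prev + 1));
    }

    public int get() {
        return count.get();
    }
}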

Container

The Collections class provides a variety of static synchronized-decorator methods to wrap the different container types. This is an improvement in that it lets you choose whether a container is synchronized, but the overhead is still that of synchronized locking. Java therefore added concurrent containers that improve thread-safe performance by using smarter techniques to reduce or eliminate locking, including CopyOnWriteArrayList, CopyOnWriteArraySet, ConcurrentHashMap, and ConcurrentLinkedQueue, along with ReadWriteLock.
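As a small example of how such containers remove the need for external locking, ConcurrentHashMap can perform a read-modify-write atomically via merge() (the word-count use case is illustrative):

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class WordCount {
    private final ConcurrentMap<String, Integer> counts = new ConcurrentHashMap<>();

    public void record(String word) {
        // merge() performs the read-modify-write atomically, so no external lock is needed
        counts.merge(word, 1, Integer::sum);
    }
}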

ThreadLocal

A second way to prevent conflicts between tasks over shared resources is to eliminate the sharing of variables. Thread-local storage is an automatic mechanism that creates a separate piece of storage for each thread that uses the same variable. (In practice the motivation is often the reverse: each thread needs to maintain its own private value and uses ThreadLocal for that, rather than ThreadLocal being introduced purely to avoid resource contention.)
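A typical sketch: giving each thread its own SimpleDateFormat, which is itself not thread-safe (the class name RequestContext and the date pattern are illustrative):

class RequestContext {
    // Each thread gets its own SimpleDateFormat instance, so threads never
    // share -- and never contend for -- the same formatter.
    private static final ThreadLocal<java.text.SimpleDateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new java.text.SimpleDateFormat("yyyy-MM-dd"));

    public static String today() {
        return FORMAT.get().format(new java.util.Date());
    }
}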

DeadLock

Conditions under which a deadlock occurs:

  • Mutual exclusion and non-preemption: at least one resource used by the tasks cannot be shared, and a resource that is held can be taken by another task only after the holder releases it.
  • At least one task holds a resource while waiting to acquire a resource currently held by another task.
  • There must be a circular wait.

The easiest way to prevent deadlocks is to break the circular wait.

Thread collaboration

A task can suspend itself until some external condition changes indicating that it is time for the task to move forward. When tasks collaborate, the key issue is the handshake between those tasks. To achieve this handshake, we use the same basic feature: mutual exclusion, which ensures that only one task can respond to a signal, thereby eliminating any possible race conditions.

wait() and notify()

When a task calls wait() inside a method, the thread's execution is suspended and the lock on the object is released. This is the same lock used by the object's synchronized methods, so the object's lock must be held before wait() or notify() can be called. sleep() and yield(), on the other hand, are not tied to any object and therefore involve no lock.

Note that the wait condition must be checked while holding the lock, never before acquiring it; otherwise the signal may be missed:

T1:
synchronized(shareMonitor) {
    // <set up the condition for T2>
    shareMonitor.notify();
}

T2:
synchronized(shareMonitor) {
    while (someCondition)
        shareMonitor.wait();
}

Lock and Condition

The basic class that works with a mutex and allows a task to suspend itself is Condition. You suspend a task by calling await() on a Condition; when an external condition changes in a way that means the task should resume, you notify it by calling signal() (or signalAll()).

The mutex usage mirrors wait(): you must hold the Object's lock before calling wait(), and you must hold the Lock before calling Condition.await(). The code is a bit more involved, but the concurrency control is more flexible, and it forms a clear symmetry with synchronized.
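A sketch of the Lock/Condition handshake, mirroring the wait()/notify() pattern above (the Mailbox class is illustrative):

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class Mailbox {
    private final Lock lock = new ReentrantLock();
    private final Condition nonEmpty = lock.newCondition();
    private String message;           // the "condition" the consumer waits on

    public void put(String m) {
        lock.lock();                  // must hold the Lock before signal()
        try {
            message = m;
            nonEmpty.signal();        // wake one waiting task
        } finally {
            lock.unlock();
        }
    }

    public String take() throws InterruptedException {
        lock.lock();                  // must hold the Lock before await()
        try {
            while (message == null)   // check the condition in a loop, as with wait()
                nonEmpty.await();
            String m = message;
            message = null;
            return m;
        } finally {
            lock.unlock();
        }
    }
}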

BlockingQueue

In many cases you can aim at a higher level of abstraction and use blocking queues to solve task-collaboration problems. A blocking queue handles mutual exclusion and collaboration internally, which makes the programming friendlier and more elegant because you never use keywords such as synchronized explicitly. Blocking queues are the natural choice when collaboration between threads is expressed by passing message objects.
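A minimal producer/consumer sketch built on a BlockingQueue, with no explicit synchronization (class and method names are illustrative):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class Pipeline {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    public void producer() throws InterruptedException {
        queue.put("work item");           // blocks if the queue is bounded and full
    }

    public void consumer() throws InterruptedException {
        String item = queue.take();       // blocks until an item is available
        System.out.println("processing " + item);
    }
}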

CountDownLatch and CyclicBarrier

CountDownLatch is used to synchronize one or more tasks, forcing them to wait until a set of operations performed by other tasks completes. It is typically used as a countdown counter: each task decrements the count as it finishes, and when the count reaches zero the waiting tasks are unblocked.

CyclicBarrier is for a group of tasks that perform work in parallel and then wait at a barrier until all of them are done before moving on to the next step (an upgraded join(), in a sense). It lines up all the concurrent tasks at the barrier so they can advance in lockstep, and it also supports a common barrier action that runs each time the barrier trips.

CountDownLatch triggers only one event, while CyclicBarrier can be reused many times.
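A short CountDownLatch sketch, assuming three worker tasks whose completion the main thread waits for (the count of 3 is arbitrary):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch done = new CountDownLatch(workers);
        ExecutorService exec = Executors.newCachedThreadPool();
        for (int i = 0; i < workers; i++) {
            exec.execute(() -> {
                // ... do a piece of the work ...
                done.countDown();         // report completion
            });
        }
        done.await();                     // blocks until the count reaches zero
        System.out.println("all workers finished");
        exec.shutdown();
    }
}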

Active objects

There is an alternative approach called active objects. Each active object maintains its own worker thread and message queue (message queues, yet again), and every request to the object is placed in that queue; only one request runs at any given time. With active objects we can therefore serialize method calls as messages, which means there is no longer any need to guard against a task being interrupted in the middle of its work.

Column summary

This is the last article in this column. It took me three weeks of spare time to finish these 15 articles.

When I first read Thinking in Java I was a sophomore with only a superficial understanding of it. Now, after some personal and enterprise projects, with a few Golang projects interspersed among them, this second pass let me revisit the design of the Java language with a much more familiar eye for its structure, and between the lines I grew more and more impressed by the author's programming ideas and attention to detail. If it were not for this book, I probably would not have been so determined to choose Java as my main language.

On the Java side, the JDK source code and the JVM are topics that could still be picked up, but the next column will most likely be something different, perhaps an exploration of distributed consensus protocol theory.