
I used to tell my little brother that if you really don’t know when to use HashMap and when to use ConcurrentHashMap, use the latter and your code will be less buggy.

He asked me: what is ConcurrentHashMap?

Programming is not showmanship. In most cases, the real skill is writing simple code.

Multithreading is complex and error prone by nature, and some of its concepts are hard to understand and best avoided. This article doesn’t cover the basics; you already have the JDK source code for that.

Thread

The first topic is the Thread class. You probably know the two classic ways to create a thread: extend Thread and override its run method, or implement the Runnable interface and provide a run method there. A third way is to go through a thread pool.

Either way, our actual logic lives in the run method.

We will focus on two scenarios: how a thread exits, and how exceptions are handled.

Exiting a thread

Some run methods finish executing and the thread exits, but others never end on their own. Terminating a thread is definitely not a job for Thread.stop(), which has been deprecated since Java 1.2. That leaves basically two ways to control a thread.

Define an exit flag and check it in the while loop

The code looks like this.

private volatile boolean flag = true;
public void run() {
    while (flag) {
    }
}

The flag is typically declared volatile so that writes from other threads are visible, and the thread is stopped by flipping its value; this has become a convention.
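
Putting the pieces together, the pattern usually looks like the following minimal sketch; the class name FlagWorker and the shutdown() method are illustrative, not from the original text.

public class FlagWorker implements Runnable {
    private volatile boolean running = true;
    @Override
    public void run() {
        while (running) {
            // do one unit of work per iteration
        }
    }
    // Called from another thread; the volatile write is immediately visible to the loop.
    public void shutdown() {
        running = false;
    }
}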

Terminate a thread with the interrupt method

Something like this.

while (!Thread.interrupted()) {
    // ......
}

We typically catch InterruptedException, such as the one thrown by Thread.sleep, and quietly swallow it. But interruption is how a cancelable task gets a chance to clean up work in progress, notify other tasks that it is being cancelled, and then terminate, so this exception needs to be handled carefully.

The interrupt method doesn’t necessarily “interrupt” a thread; it is only a cooperative mechanism. In general it cannot interrupt blocking I/O operations such as file writes or socket transfers. In those cases you need to close the resource the thread is blocking on for it to exit cleanly.
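
Here is a hedged sketch of cooperative cancellation that combines the points above: the loop checks the interrupt flag, and when a blocking call such as Thread.sleep throws InterruptedException, the handler restores the flag so the loop (and any caller) can see that cancellation was requested. The class name is illustrative.

public class InterruptibleWorker implements Runnable {
    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                // ... do one unit of work ...
                Thread.sleep(100); // blocking call; throws InterruptedException when interrupted
            } catch (InterruptedException e) {
                // clean up if necessary, then re-assert the interrupt status so the loop exits
                Thread.currentThread().interrupt();
            }
        }
    }
}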

Be careful with the interrupt-related methods; used carelessly they can introduce bugs and even deadlocks.

Exception handling

Java throws two kinds of exceptions. Checked exceptions, such as InterruptedException, must be caught or declared, otherwise the code won’t compile. Unchecked exceptions, such as NullPointerException, may or may not be handled.

Both kinds are very likely to be thrown while our task runs. The first kind you are forced to wrap in try/catch. The second kind, if unhandled, will break the normal operation of the task.

Many people forget to catch these implicit exceptions in looping tasks, so a single exception stops the task from ever executing again. If you can’t predict the exception type, simply catch Exception, or the even more general Throwable.

while (!isInterrupted()) {
    try {
        // ......
    } catch (Exception ex) {
        // ......
    }
}

Synchronization

There are many ways to implement synchronization in Java, which can be summarized as follows.

  • The synchronized keyword
  • wait, notify, and friends
  • ReentrantLock in the concurrent package
  • The volatile keyword
  • ThreadLocal thread-local variables

Producer/consumer is the classic application scenario for wait and notify. These methods must be called inside synchronized blocks to work properly. Like semaphores, they are mostly showy, hurt code readability, and are not recommended. Under the hood they are built on the JVM’s ObjectMonitor.
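
For reference, a minimal wait/notify bounded buffer looks something like the sketch below. It is for illustration only; in real code a BlockingQueue from the concurrent package is usually the better choice.

import java.util.ArrayDeque;
import java.util.Queue;

public class BoundedBuffer<T> {
    private final Queue<T> queue = new ArrayDeque<>();
    private final int capacity;
    public BoundedBuffer(int capacity) {
        this.capacity = capacity;
    }
    public synchronized void put(T item) throws InterruptedException {
        while (queue.size() == capacity) {
            wait();                // must hold the monitor; releases it while waiting
        }
        queue.add(item);
        notifyAll();               // wake consumers waiting for data
    }
    public synchronized T take() throws InterruptedException {
        while (queue.isEmpty()) {
            wait();
        }
        T item = queue.poll();
        notifyAll();               // wake producers waiting for space
        return item;
    }
}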

One of the most common mistakes with ReentrantLock is forgetting to release the lock in a finally block. For most synchronization scenarios Lock is sufficient, and it also offers read/write locks for finer-grained control. We generally use non-fair locks and let tasks compete: a non-fair lock performs better than a fair one because it makes fuller use of CPU time slices and keeps the CPU from idling. The downside is starvation: some tasks may never get the lock.
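
The lock/try/finally idiom looks like this minimal sketch (the Counter class is illustrative):

import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private final ReentrantLock lock = new ReentrantLock(); // non-fair by default
    private long count;
    public void increment() {
        lock.lock();
        try {
            count++;
        } finally {
            lock.unlock();         // always release in finally, even if the body throws
        }
    }
    public long get() {
        lock.lock();
        try {
            return count;
        } finally {
            lock.unlock();
        }
    }
}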

Thanks to lock upgrading, synchronized is not necessarily slower than Lock. Its monitors are also easy to spot in a jstack dump, and it is still widely used.

volatile guarantees read visibility, but only for the variable itself: primitives and object references. If it modifies a collection such as a Map, the visibility guarantee applies to the reference to the Map, not to the Map’s contents.

synchronized and volatile show up in the bytecode (monitorenter/monitorexit) and work mainly by inserting memory barriers. Lock, on the other hand, is a pure Java API.

ThreadLocal is convenient (one copy of the data per thread) and safe, but beware of memory leaks. If threads live for a long time, make sure remove() is called every time you finish using a ThreadLocal, so that stale entries (cleaned internally via expungeStaleEntry) don’t accumulate.
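
A common way to guarantee the cleanup is to pair set() with remove() in a try/finally, as in this illustrative sketch (the holder class and method names are made up for the example):

public class ContextHolder {
    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();
    public static void runWithContext(String value, Runnable action) {
        CONTEXT.set(value);
        try {
            action.run();
        } finally {
            CONTEXT.remove();      // prevents stale entries from piling up on long-lived pool threads
        }
    }
    public static String current() {
        return CONTEXT.get();
    }
}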

About the concurrent package

The concurrent package is built on AQS (AbstractQueuedSynchronizer), a framework for implementing blocking locks and a series of synchronizers on top of a FIFO wait queue.

The thread pool

The most complete thread pool constructor takes seven parameters, and you need to understand them to use thread pools properly.

Thread pool parameters

The most commonly used part of the concurrent package is the thread pool, and for everyday work it is recommended to use thread pools directly and de-prioritize the raw Thread class. Common flavors include newSingleThreadExecutor, newFixedThreadPool, newCachedThreadPool, and the scheduled variants, all created through the Executors factory class.

newSingleThreadExecutor is a handy way to quickly create an asynchronous worker. newCachedThreadPool should never be used in a high-concurrency online environment: it puts no upper bound on the number of threads it creates, which can overwhelm your machine.

I’m used to constructing ThreadPoolExecutor myself, using the constructor with the most arguments.

public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler)

If my workload is predictable, I usually set corePoolSize and maximumPoolSize to the same value and make keepAliveTime exceptionally long, which avoids the overhead of repeatedly creating and destroying threads. I/O-intensive and CPU-intensive applications need different pool sizes; I/O-intensive workloads can generally afford more threads.

I usually define a ThreadFactory as well, mainly to give threads meaningful names. That way I can immediately recognize the threads I created in tools like jstack.
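
A customized pool along those lines might look like the sketch below. The sizes, queue capacity, and thread-name prefix are illustrative values, not recommendations.

import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class Pools {
    public static ThreadPoolExecutor newWorkerPool(String namePrefix) {
        AtomicInteger seq = new AtomicInteger();
        ThreadFactory factory = r -> {
            Thread t = new Thread(r, namePrefix + "-" + seq.incrementAndGet()); // shows up in jstack
            return t;
        };
        return new ThreadPoolExecutor(
                8, 8,                            // core == max for a predictable workload
                1, TimeUnit.HOURS,               // long keep-alive to avoid thread churn
                new ArrayBlockingQueue<>(1000),  // bounded work queue
                factory,
                new ThreadPoolExecutor.AbortPolicy());
    }
}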

Monitoring

A heavily used thread pool is best monitored. You can log its metrics and persist them, which helps with troubleshooting later.

In general, you can control and monitor the behavior of a thread pool by extending ThreadPoolExecutor and overriding its beforeExecute, afterExecute, and terminated methods.
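
A minimal monitoring sketch might count completed tasks and log failures; here plain System.err stands in for whatever logger or metrics system you actually use.

import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

public class MonitoredThreadPool extends ThreadPoolExecutor {
    private final AtomicLong completed = new AtomicLong();
    public MonitoredThreadPool(int core, int max, long keepAlive, TimeUnit unit,
                               BlockingQueue<Runnable> queue) {
        super(core, max, keepAlive, unit, queue);
    }
    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        super.afterExecute(r, t);
        completed.incrementAndGet();
        if (t != null) {
            System.err.println("task failed: " + t); // uncaught exception from the task
        }
    }
    @Override
    protected void terminated() {
        super.terminated();
        System.out.println("pool terminated, completed tasks: " + completed.get());
    }
}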

Thread pool saturation policy

Perhaps the most often forgotten parameter is the saturation (rejection) policy: what happens to newly submitted tasks once the threads and the queue are both full. The JDK ships with four policies; the default, AbortPolicy, simply throws an exception. Here are the others.

DiscardPolicy is more aggressive than AbortPolicy: it silently drops the task without even an exception.

CallerRunsPolicy hands the task to the calling thread. In a web application, for example, when the pool is full new tasks run on the Tomcat thread. This slows some tasks down, and more often than not it ends up blocking the calling thread.

DiscardOldestPolicy discards the task at the head of the queue and then retries submitting the new task (repeating if necessary).

In many cases these built-in policies won’t meet your needs. You can write your own, for example one that persists rejected tasks to some store.
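
For instance, a custom handler along those lines could look like this sketch; persistTask() is a hypothetical hook, not an API from the original article.

import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;

public class PersistToStorePolicy implements RejectedExecutionHandler {
    @Override
    public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
        if (!executor.isShutdown()) {
            persistTask(r);        // hand the rejected task to durable storage
        }
    }
    private void persistTask(Runnable r) {
        // hypothetical: serialize the task (or its parameters) so it can be replayed later
    }
}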

Blocking queue

A blocking queue blocks the current thread and automatically wakes it up when elements become available, which makes code much more flexible and convenient. In concurrent programming it is generally recommended to use blocking queues so the implementation avoids surprises as much as possible. The classic scenario is reading and parsing socket data: the reading thread keeps putting data into the queue, and the parsing thread keeps taking data out to process it.
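
A hedged sketch of that read/parse pattern, with illustrative names:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ReadParsePipeline {
    private final BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(1024);
    // Called by the reading thread; blocks when the queue is full, giving natural back-pressure.
    public void onDataRead(byte[] chunk) throws InterruptedException {
        queue.put(chunk);
    }
    // Starts the parsing thread, which blocks on take() until data arrives.
    public void startParser() {
        Thread parser = new Thread(() -> {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    byte[] chunk = queue.take();
                    parse(chunk);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }, "parser");
        parser.start();
    }
    private void parse(byte[] chunk) {
        // hypothetical parsing logic
    }
}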

ArrayBlockingQueue treats waiting threads unfairly by default; a constructor argument turns it into a fair blocking queue.

LinkedBlockingQueue defaults to a maximum length of Integer.MAX_VALUE, which can be dangerous when it is used as a thread pool’s work queue.

SynchronousQueue is a blocking queue that stores no elements: each put must wait for a take before another element can be added. Because the queue itself holds nothing, throughput is very high. For submitted tasks, an idle thread handles the work if one exists; otherwise a new thread is created. It behaves more like a pipe and is widely used in communication frameworks (such as RPC) to process requests quickly.

DelayQueue is an unbounded blocking queue that supports delayed retrieval of elements. Objects placed in a DelayQueue must implement the Delayed interface, primarily to report the remaining delay and to allow ordering within the queue. This is generally far easier on the CPU than a busy while loop.
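
A Delayed element typically looks like the sketch below (the DelayedTask class is illustrative). A consumer thread then calls take() on the DelayQueue, which blocks until the head element’s delay has expired.

import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayedTask implements Delayed {
    private final long expireAtMillis;
    private final Runnable action;
    public DelayedTask(long delayMillis, Runnable action) {
        this.expireAtMillis = System.currentTimeMillis() + delayMillis;
        this.action = action;
    }
    @Override
    public long getDelay(TimeUnit unit) {
        // remaining delay, converted to the unit the queue asks for
        return unit.convert(expireAtMillis - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
    }
    @Override
    public int compareTo(Delayed other) {
        // order elements by remaining delay so the soonest expiry sits at the head
        return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
    }
    public void run() {
        action.run();
    }
}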

There are also PriorityBlockingQueue and LinkedTransferQueue, whose behavior you can guess from their names. When choosing a thread pool’s queue, pay close attention to the queue’s characteristics and bounds. For example, even the simple newFixedThreadPool is unsafe in some scenarios because it uses an unbounded queue.

CountDownLatch

Suppose there is a bunch of interfaces A through Y, each taking at most 200 ms and at least 100 ms.

One of my services needs to expose an interface Z that calls A through Y and aggregates the results. How can interface Z return the data within 300 ms?

The classic horse-racing puzzle is in the same family; only parallel execution can solve such problems within the time budget. They boil down to two requirements:

  • Implement parallelism of tasks
  • Wait for n threads to complete the task before starting execution

Before the concurrent package existed, this kind of synchronization had to be written by hand and was very fiddly. Now CountDownLatch and CyclicBarrier make it easy.

CountDownLatch works with a counter whose initial value is the number of threads. Each time a thread finishes its piece of work it decrements the counter; when the counter reaches 0, all the work is done and the threads waiting on the latch can continue. CyclicBarrier is similar and can achieve the same effect, but CountDownLatch is the one used more often in daily work.
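
A hedged sketch of the interface-Z scenario with CountDownLatch; the pool size, the 300 ms budget, and callInterface() are all illustrative.

import java.util.List;
import java.util.Map;
import java.util.concurrent.*;

public class Aggregator {
    private final ExecutorService pool = Executors.newFixedThreadPool(25);
    public Map<String, String> aggregate(List<String> interfaces) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(interfaces.size());
        Map<String, String> results = new ConcurrentHashMap<>();
        for (String name : interfaces) {
            pool.execute(() -> {
                try {
                    results.put(name, callInterface(name)); // hypothetical remote call
                } finally {
                    latch.countDown();                      // count down even if the call fails
                }
            });
        }
        latch.await(300, TimeUnit.MILLISECONDS);            // wait, but never longer than the budget
        return results;                                     // whatever has completed so far
    }
    private String callInterface(String name) {
        return "result-of-" + name;
    }
}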

Semaphore

Semaphore has its uses, but most of them are showy; use it sparingly.

Semaphore can be used for rate limiting, but it is only one of the common approaches; the other two are the leaky bucket algorithm and the token bucket algorithm.
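
As a minimal sketch, a semaphore can cap how many callers touch a resource at once (the permit count of 10 is arbitrary):

import java.util.concurrent.Semaphore;

public class LimitedResource {
    private final Semaphore permits = new Semaphore(10);
    public void access() throws InterruptedException {
        permits.acquire();
        try {
            // call the protected resource here
        } finally {
            permits.release();     // always give the permit back
        }
    }
}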

Hystrix’s circuit-breaking feature also uses semaphores for resource control.

Lock && Condition

In Java, Lock and Condition can be seen as replacements for the traditional synchronized and wait/notify mechanisms. Many of the blocking queues in the concurrent package are implemented with Condition.
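
As an illustration of how that works, here is a sketch of a bounded buffer built on Lock and two Conditions, so producers and consumers can be signalled separately, much as the JDK’s own blocking queues do. The class name and capacity are made up for the example.

import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionBuffer<T> {
    private final Queue<T> queue = new ArrayDeque<>();
    private final int capacity = 100;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();
    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (queue.size() == capacity) {
                notFull.await();   // releases the lock while waiting
            }
            queue.add(item);
            notEmpty.signal();     // wake one waiting consumer
        } finally {
            lock.unlock();
        }
    }
    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (queue.isEmpty()) {
                notEmpty.await();
            }
            T item = queue.poll();
            notFull.signal();      // wake one waiting producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}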

That said, these classes are hard to understand and bug-prone for beginner and intermediate programmers, and should be kept strictly out of business code. In network programming or framework projects, however, they are indispensable; don’t casually delegate that part of the work to a junior.

End

Whether it’s wait/notify, the synchronized keyword, or explicit locks, try not to use them directly; they all introduce complexity. The best way to avoid these coding pitfalls is to use the mechanisms the concurrent package already provides.

The CAS operations in the concurrent package are, to some extent, a lock-free approach. There are more specialized lock-free queue frameworks such as Disruptor, but they are still built on the CAS programming model. In recent years event-driven models like Akka have been gaining popularity, but a simple programming model does not mean a simple implementation; the machinery behind it still needs multiple threads to coordinate.

With goroutines, Golang offers a lighter-weight complement to threads. In Java, the Quasar library can be loaded via a Java agent to get similar functionality, but I doubt you’d want to sacrifice code readability for a bit of efficiency.
