We know that multithreading can process multiple tasks concurrently and effectively improve the performance of complex applications, so it plays a very important role in practical development

However, multithreading also brings many risks: the problems threads cause are often hard to catch in testing, and they can lead to major failures and losses in production

Below, I will walk through several practical cases to help you avoid these problems in your own work

Multithreading problems

First, let's look at some of the problems that come with using multiple threads

Multithreading problems largely stem from multiple threads operating on the same variable and from the uncertain execution order between different threads

The book "Java Concurrency in Practice" describes three kinds of multithreading problems: safety problems, liveness problems, and performance problems

Safety problems

For example, consider a very simple inventory-decrement operation:

public int decrement() {
	return --count; // count's initial inventory is 10
}

In a single-threaded environment, this method works correctly, but in a multithreaded environment it can produce incorrect results

--count looks like a single operation, but it actually consists of three steps (read, modify, write):

  • Read the value of count
  • Subtract one from the value
  • Assign the result back to count

The following figure shows an incorrect execution: when two threads (1 and 2) execute the method simultaneously, both read count as 10 and both return 9. That means two customers may have purchased the product while the inventory decreased by only 1, which is unacceptable in a real production environment

A situation like the one above, where improper execution timing leads to incorrect results, is a very common concurrency safety problem known as a race condition

The section of code that can give rise to a race condition, such as the decrement() method here, is called a critical section

To avoid this problem, you need to ensure the atomicity of the read-modify-write compound operation

In Java, there are many ways to achieve this, including the synchronized built-in lock, the ReentrantLock explicit lock, thread-safe atomic classes, and the CAS approach
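As a sketch of the atomic-class approach, the decrement above can be rewritten with AtomicInteger; the class and field names here are illustrative, not from the original code:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDecrement {
    // count starts at the initial inventory of 10, as in the text
    static final AtomicInteger count = new AtomicInteger(10);

    // decrementAndGet() performs the read-modify-write as a single atomic CAS,
    // so two threads can never both read 10 and both return 9
    static int decrement() {
        return count.decrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(AtomicDecrement::decrement);
        Thread t2 = new Thread(AtomicDecrement::decrement);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(count.get()); // two atomic decrements from 10 leave 8
    }
}
```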

Liveness problems

A liveness problem occurs when an operation cannot make progress because it is blocked or stuck in a loop

The three most typical liveness problems are deadlock, livelock, and starvation

Deadlock

The most common liveness problem is deadlock

A deadlock occurs when multiple threads each wait for a lock held by another while refusing to release their own, leaving all of them blocked and unable to run. It is usually caused by incorrect use of the locking mechanism and the unpredictable execution order between threads

How can deadlocks be prevented?

1. Acquire locks in a consistent order

For example, suppose there are locks A, B, and C.

  • Thread 1 acquires locks in the order A, B, C.

  • Thread 2 acquires locks in the order A, then C, which is consistent with Thread 1's order, so deadlock is avoided.

If Thread 2 instead acquired the locks in an inconsistent order, such as B then A, or C then A, a deadlock could occur.
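The ordering rule can be sketched as follows; the class and lock names are hypothetical. Because both threads acquire lockA before lockB, neither can ever hold lockB while waiting for lockA, so no wait cycle can form:

```java
public class LockOrdering {
    static final Object lockA = new Object();
    static final Object lockB = new Object();
    static int runs = 0; // counts completed critical sections, for illustration

    // Every thread takes the locks in the same global order: A, then B
    static void task() {
        synchronized (lockA) {
            synchronized (lockB) {
                runs++; // critical section
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(LockOrdering::task);
        Thread t2 = new Thread(LockOrdering::task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("finished without deadlock");
    }
}
```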

2. Use a timeout-and-give-up mechanism whenever possible

The Lock interface provides the tryLock(long time, TimeUnit unit) method, which waits for a lock for at most a fixed period. If the lock is not acquired before the timeout, the thread can voluntarily release all the locks it already holds, avoiding deadlock
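A minimal sketch of the timeout approach, assuming a shared ReentrantLock and an illustrative 500 ms wait:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    static final ReentrantLock lock = new ReentrantLock();

    // Wait at most 500 ms for the lock instead of blocking forever
    static boolean doWork() throws InterruptedException {
        if (lock.tryLock(500, TimeUnit.MILLISECONDS)) {
            try {
                return true;   // got the lock: run the business logic
            } finally {
                lock.unlock(); // always release in finally
            }
        }
        return false;          // timed out: back off, log, or retry later
    }
}
```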

Livelock

A livelock is very similar to a deadlock in that the program never gets a result, but in contrast to a deadlock, a livelocked thread is "alive": it is not blocked, it keeps running, yet it never makes progress

Starvation

Starvation occurs when a thread cannot run because it is never granted the resources it needs, especially CPU time.

In Java, threads have a priority, ranging from 1 (lowest) to 10 (highest).

If we set a thread's priority to 1, the lowest, the thread may never be allocated CPU time, causing it to starve for a long period.
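A small illustration of the priority API; note that priority is only a hint to the OS scheduler, so starvation of low-priority threads is possible but not guaranteed:

```java
public class PriorityDemo {
    public static void main(String[] args) {
        Thread low = new Thread(() -> System.out.println("low-priority work"));
        low.setPriority(Thread.MIN_PRIORITY);  // 1: scheduled last, may starve

        Thread high = new Thread(() -> System.out.println("high-priority work"));
        high.setPriority(Thread.MAX_PRIORITY); // 10: preferred by the scheduler

        low.start();
        high.start();
    }
}
```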

Performance problems

Thread creation and context switching both consume resources. If threads are created frequently, or CPU scheduling takes longer than the threads' actual run time, using threads is not worth the cost and may even lead to high CPU load or an OOM

Practical cases

Thread-unsafe classes

Case 1

When you need synchronized access to thread-unsafe collections (ArrayList, HashMap, and so on), it is best to use thread-safe concurrent collections instead

In a multithreaded environment, traversing a thread-unsafe collection while it is being modified may throw a ConcurrentModificationException; this is the so-called fail-fast mechanism

The following example simulates multiple threads operating on an ArrayList at the same time: thread t1 iterates through the list and prints each element, while thread t2 adds elements to the list

List<Integer> list = new ArrayList<>();
list.add(0);
list.add(1);
list.add(2);            // list: [0, 1, 2]
System.out.println(list);

// Thread t1 iterates through the list and prints each element
Thread t1 = new Thread(() -> {
    for (int i : list) {
        System.out.println(i);
    }
});

// Thread t2 adds elements to the list
Thread t2 = new Thread(() -> {
    for (int i = 3; i < 6; i++) {
        list.add(i);
    }
});

t1.start();
t2.start();

Looking into the ArrayList source code where the exception is thrown, you can see that traversal is done through an iterator implemented inside ArrayList

When the iterator's next() method is called to get the next element, checkForComodification() checks whether modCount and expectedModCount are equal; if they are not, a ConcurrentModificationException is thrown

modCount is a field of ArrayList that records how many times the collection's structure has been modified (how many times the list length has changed); it is incremented each time a method such as add or remove is called

expectedModCount is a field of the iterator; it is set to the current modCount when the iterator instance is created (expectedModCount = modCount)

So when another thread adds or removes elements, modCount increases, expectedModCount no longer equals modCount during traversal, and the exception is thrown
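The same fail-fast check can be triggered deterministically even in a single thread, which makes it easy to see modCount at work; this demo class is illustrative:

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class FailFastDemo {
    public static void main(String[] args) {
        List<Integer> list = new ArrayList<>(List.of(0, 1, 2));
        try {
            for (int i : list) {
                list.add(i); // structural change bumps modCount mid-iteration
            }
        } catch (ConcurrentModificationException e) {
            // the iterator's next() saw modCount != expectedModCount
            System.out.println("fail-fast triggered");
        }
    }
}
```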

Use locking to manipulate thread-unsafe collection classes

List<Integer> list = new ArrayList<>();
list.add(0);
list.add(1);
list.add(2);
System.out.println(list);

// Thread t1 iterates through the list and prints each element
Thread t1 = new Thread(() -> {
    synchronized (list) {        // use the synchronized keyword
        for (int i : list) {
            System.out.println(i);
        }
    }
});

// Thread t2 adds elements to the list
Thread t2 = new Thread(() -> {
    synchronized (list) {
        for (int i = 3; i < 6; i++) {
            list.add(i);
            System.out.println(list);
        }
    }
});

t1.start();
t2.start();

As in the code above, locking the operations on the list with the synchronized keyword prevents the exception. However, synchronized serializes the locked blocks of code, which is not ideal for performance

Thread-safe concurrency utility classes are recommended

JDK 1.5 added many thread-safe utility classes, such as the concurrent containers CopyOnWriteArrayList and ConcurrentHashMap

These utility classes are recommended for daily development for multithreaded programming
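As a sketch, replacing ArrayList with CopyOnWriteArrayList makes the earlier traversal example safe without explicit locking; writes copy the backing array, so iterators work on a stable snapshot and never throw ConcurrentModificationException:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowListDemo {
    public static void main(String[] args) throws InterruptedException {
        // Same initial contents as the ArrayList example
        List<Integer> list = new CopyOnWriteArrayList<>(new Integer[]{0, 1, 2});

        Thread t1 = new Thread(() -> { for (int i : list) System.out.println(i); });
        Thread t2 = new Thread(() -> { for (int i = 3; i < 6; i++) list.add(i); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(list); // all six elements are present after both finish
    }
}
```

The snapshot semantics mean t1 may print only the elements that existed when its iterator was created, which is the trade-off for lock-free reads.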

Case 2

Do not use SimpleDateFormat as a global variable

SimpleDateFormat is in fact a thread-unsafe class; the root cause is that its internal implementation does not synchronize access to some shared state (notably its internal Calendar field)

public static final SimpleDateFormat SDF_FORMAT = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

public static void main(String[] args) {
    // Both threads call SimpleDateFormat.parse simultaneously
    Thread t1 = new Thread(() -> {
        try {
            Date date1 = SDF_FORMAT.parse("2019-12-09 17:04:32");
        } catch (ParseException e) {
            e.printStackTrace();
        }
    });

    Thread t2 = new Thread(() -> {
        try {
            Date date2 = SDF_FORMAT.parse("2019-12-09 17:43:32");
        } catch (ParseException e) {
            e.printStackTrace();
        }
    });

    t1.start();
    t2.start();
}

It is recommended to use SimpleDateFormat as a local variable or in conjunction with ThreadLocal

The simplest way is to use SimpleDateFormat as a local variable

However, if it is used inside a for loop, many instances are created; this can be optimized by combining it with ThreadLocal

// Initialization
public static final ThreadLocal<SimpleDateFormat> SDF_FORMAT = new ThreadLocal<SimpleDateFormat>() {
    @Override
    protected SimpleDateFormat initialValue() {
        return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
    }
};

// Call
Date date = SDF_FORMAT.get().parse(wedDate);

Java 8's LocalDateTime and DateTimeFormatter are recommended

LocalDateTime and DateTimeFormatter were introduced in Java 8; they are not only thread-safe but also easier to use

It is recommended to replace Calendar and SimpleDateFormat with LocalDateTime and DateTimeFormatter in practical development

DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
LocalDateTime time = LocalDateTime.now();
System.out.println(formatter.format(time));

Correct lock release

Suppose you have pseudocode like this:

Lock lock = new ReentrantLock();
...
try {
    lock.tryLock(timeout, TimeUnit.MILLISECONDS);
    // Business logic
} catch (Exception e) {
    // Error log
    // Throw an exception or return directly
} finally {
    // Business logic
    lock.unlock();
}
...

In this code, a piece of business logic is executed before the finally block releases the lock

If a service that logic depends on becomes unavailable, the thread holding the lock cannot reach the release, other threads block waiting for the lock, and eventually the thread pool fills up

So do not put business logic before releasing the lock: the finally clause should contain only the release of resources held by the current thread (locks, IO streams, and so on)

Another is to set a reasonable timeout when acquiring the lock

To prevent a thread from blocking indefinitely waiting for a lock, set a timeout; when it expires, the thread can throw an exception or return an error status code. The timeout should be reasonable: longer than the execution time of the locked business logic, but not excessively long.
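Putting the two rules together, a corrected version of the pseudocode might look like this; the class name and timeout handling are illustrative:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class SafeUnlock {
    static final ReentrantLock lock = new ReentrantLock();

    static void handleRequest(long timeoutMillis) throws InterruptedException {
        // Fail fast if the lock cannot be acquired within the timeout
        if (!lock.tryLock(timeoutMillis, TimeUnit.MILLISECONDS)) {
            throw new IllegalStateException("could not acquire lock in time");
        }
        try {
            // Business logic goes here, inside try, after the lock is held
        } finally {
            lock.unlock(); // finally contains nothing but resource release
        }
    }
}
```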

Use thread pools correctly

Case 1

Do not use thread pools as local variables

public void request(List<Id> ids) {
    for (int i = 0; i < ids.size(); i++) {
        ExecutorService threadPool = Executors.newSingleThreadExecutor();
    }
}

Creating a thread pool inside the for loop means that each method call creates as many thread pools as the input list has elements, and shutdown() is never called to destroy them

In this case, as more requests come in, the thread pools consume more and more memory, leading to frequent full GC and even OOM. Creating a thread pool per method call is pointless: it is no different from frequently creating and destroying threads yourself, gaining none of the benefits of pooling while consuming the extra resources a pool requires

So try to use thread pools as global variables
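A minimal sketch of the global-variable approach; the class name and pool size are illustrative:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class RequestService {
    // One pool, created once and shared by every call to request()
    static final ExecutorService POOL = Executors.newFixedThreadPool(4);

    public void request(List<Integer> ids) {
        for (Integer id : ids) {
            // Tasks reuse the pool's threads instead of spawning new pools
            POOL.submit(() -> System.out.println("processing " + id));
        }
    }
}
```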

Case 2

Use the default Executors factory methods with caution

Executors.newFixedThreadPool(int);     // Create a thread pool with a fixed capacity
Executors.newSingleThreadExecutor();   // Create a thread pool of capacity 1
Executors.newCachedThreadPool();       // Create a thread pool with a capacity of Integer.MAX_VALUE

Risk points for the three default thread pools above:

newFixedThreadPool creates a pool whose corePoolSize and maximumPoolSize are equal, backed by a LinkedBlockingQueue.

newSingleThreadExecutor sets both corePoolSize and maximumPoolSize to 1 and also uses a LinkedBlockingQueue

LinkedBlockingQueue has a default capacity of Integer.MAX_VALUE = 2147483647, which on a real machine amounts to an unbounded queue

  • If the number of tasks submitted to newFixedThreadPool or newSingleThreadExecutor exceeds corePoolSize, subsequent requests are placed in the blocking queue. Because the queue is effectively unbounded, those requests neither fail fast nor run; they block for a long time, which can exhaust the calling side's thread pool and drag down the entire service.

newCachedThreadPool sets corePoolSize to 0 and maximumPoolSize to Integer.MAX_VALUE, and uses a SynchronousQueue as its work queue; a SynchronousQueue does not hold tasks waiting to execute

  • So newCachedThreadPool creates a new thread for each incoming task when none is idle, and with maximumPoolSize effectively unlimited, the number of threads created may fill up the machine's memory.

Therefore, you need to create custom thread pools based on your services and hardware configurations
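A sketch of a custom pool built directly with ThreadPoolExecutor; all the sizes here are illustrative and should be tuned to your service:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CustomPool {
    // A bounded queue plus an explicit rejection policy makes overload fail
    // fast instead of queueing requests without limit
    static final ThreadPoolExecutor POOL = new ThreadPoolExecutor(
            4,                                     // corePoolSize
            8,                                     // maximumPoolSize
            60L, TimeUnit.SECONDS,                 // idle non-core threads die after 60 s
            new ArrayBlockingQueue<>(100),         // bounded queue: at most 100 waiting tasks
            new ThreadPoolExecutor.AbortPolicy()); // throw when pool and queue are full
}
```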

Number of threads

How to set the thread pool's corePoolSize:

1. CPU-intensive applications

CPU-intensive means the tasks perform a lot of complex computation with little blocking, keeping the CPU busy for long stretches.

General formula: corePoolSize = number of CPU cores + 1. The number of CPU cores available to the JVM can be obtained via Runtime.getRuntime().availableProcessors().

2. IO-intensive applications

IO-intensive tasks involve a lot of disk reads/writes or network transfers; threads spend more time blocked on I/O than using the CPU. Most business applications are IO-intensive.

Reference formula: optimal number of threads = number of CPUs / (1 - blocking factor), where blocking factor = thread wait time / (thread wait time + CPU processing time).

For IO-intensive tasks, CPU processing time is usually much shorter than waiting time, so the blocking factor is generally taken to be 0.8 to 0.9. For a 4-core single-socket CPU, corePoolSize can therefore be set to 4 / (1 - 0.9) = 40. The exact setting should, of course, be based on how the machine actually behaves in operation
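The two formulas above can be sketched in code; the blocking factor of 0.9 is the assumed value from the text:

```java
public class PoolSizing {
    public static void main(String[] args) {
        int cpus = Runtime.getRuntime().availableProcessors();

        // CPU-bound: keep every core busy, plus one spare thread
        int cpuBound = cpus + 1;

        // IO-bound: threads = cpus / (1 - blockingFactor); with a blocking
        // factor of 0.9, a 4-core machine gets 4 / (1 - 0.9) = 40 threads
        double blockingFactor = 0.9;
        int ioBound = (int) (cpus / (1 - blockingFactor));

        System.out.println("cpuBound=" + cpuBound + " ioBound=" + ioBound);
    }
}
```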

Original text: mp.weixin.qq.com/s/KnvlSh6G_…

From the official account: Flying fish on the moon