Multithreading lets an application handle multiple tasks concurrently and can significantly improve the performance of complex applications, so it plays a very important role in real-world development

However, multithreading also brings many risks. Thread-related problems are often hard to catch in testing, and they can cause serious failures and losses once they reach production

Below I will walk through a few practical cases to help you avoid these problems in your own work

This article was first published on the public account (moon with flying fish), then synced to Juejin (Nuggets) and my personal website: Xiaoflyfish.cn

If you like it, I will share more articles like this later!

If you found it helpful, a like and a share would be appreciated, thank you!

WeChat search "moon with flying fish" to make a friend and join the interview discussion group

Reply 666 in the background of the public account to get free e-books

Problems with multithreading

Let’s start with the problems that come with using multithreading

Most multithreading problems stem from the fact that multiple threads can operate on the same variable, and that the order of execution between threads is unpredictable

The book Java Concurrency in Practice groups multithreading problems into three categories: safety, liveness, and performance problems

Safety problems

For example, consider a very simple inventory-deduction operation:

public int decrement() {
    return --count;    // count is the inventory, initially 10
}

In a single-threaded environment this method works correctly, but in a multithreaded environment it produces wrong results

--count looks like a single operation, but it actually consists of three steps (read, modify, write):

  • Read the value of count
  • Subtract one from it
  • Write the result back to count

Consider one erroneous execution: when two threads execute the method at the same time, they may both read a count of 10 and both return 9. That means two people may have bought the product while the inventory only went down by 1, which is unacceptable in a real production environment

Situations like this, where inappropriate execution timing leads to incorrect results, are a very common class of concurrency safety problem known as a race condition

The region of code in the decrement() method that leads to the race condition is called the critical section

To avoid this problem, you need to ensure the atomicity of the read-modify-write compound operation

In Java you can do this in several ways, for example with explicit locking mechanisms such as synchronized or ReentrantLock, with thread-safe atomic classes, or with CAS
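For instance, the inventory counter above can be made atomic with AtomicInteger; here is a minimal sketch (the class name Inventory is mine, the initial value of 10 follows the article's example):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class Inventory {
    // AtomicInteger turns the read-modify-write into one atomic operation
    private final AtomicInteger count = new AtomicInteger(10);

    public int decrement() {
        return count.decrementAndGet(); // atomic equivalent of --count
    }
}
```

decrementAndGet() loops on a CAS internally, so two concurrent callers can never both observe 9.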

Liveness problems

A liveness problem occurs when an operation can never proceed because it is blocked or stuck in a loop

The three most typical liveness problems are deadlock, livelock, and starvation

Deadlock

The most common liveness problem is deadlock

A deadlock occurs when multiple threads each wait to acquire locks held by the others without releasing the locks they already own, so all of them block and none can make progress. It is usually caused by incorrect use of the locking mechanism combined with the unpredictable execution order between threads

How can deadlocks be prevented?

1. Ensure a consistent locking order

For example, suppose there are three locks: A, B, and C.

  • Thread 1 locks in the order A, B, C.

  • Thread 2 also locks in the order A, B, C, so no deadlock occurs.

If Thread 2 instead locked in the order B, A or C, A, the inconsistent ordering could lead to a deadlock.

2. Acquire locks with a timeout whenever possible

The Lock interface provides a tryLock(long time, TimeUnit unit) method that waits for a lock for a bounded time; if the lock is not acquired before the timeout, the thread can actively release all the locks it has already acquired, avoiding deadlock
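As a sketch of this timed-acquisition approach (the class and method names here are illustrative, not from the article): acquire each lock with a timeout and release everything on failure, so a thread never holds one lock while waiting forever for another.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TimeoutLocking {
    private final Lock lockA = new ReentrantLock();
    private final Lock lockB = new ReentrantLock();

    // Returns true if both locks were acquired and the work was done
    public boolean doWork() {
        try {
            if (lockA.tryLock(50, TimeUnit.MILLISECONDS)) {
                try {
                    if (lockB.tryLock(50, TimeUnit.MILLISECONDS)) {
                        try {
                            return true; // both locks held: run the critical section here
                        } finally {
                            lockB.unlock();
                        }
                    }
                } finally {
                    lockA.unlock(); // timed out on lockB: release lockA so others can proceed
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return false; // the caller may retry; no thread is left waiting forever
    }
}
```

On timeout the method fails fast instead of blocking, which is exactly what breaks the hold-and-wait condition of deadlock.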

Livelock

Livelock is very similar to deadlock in that the program never gets a result, but in contrast to a deadlocked thread, a livelocked thread is "alive": it is not blocked, it keeps running, yet it never makes progress

Starvation

Starvation occurs when a thread cannot run because it cannot obtain the resources it needs, especially CPU time.

Java has the concept of thread priorities, rated from 1 to 10, with 1 the lowest and 10 the highest.

If we set the priority of a thread to 1, which is the lowest priority, it is likely that the thread will never be allocated CPU resources and will not run for a long time.
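A tiny illustration of the priority API (note that a priority is only a hint; actual scheduling behavior depends on the OS):

```java
public class PriorityDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> System.out.println("low-priority task"));
        worker.setPriority(Thread.MIN_PRIORITY); // MIN_PRIORITY == 1, MAX_PRIORITY == 10
        worker.start();
        worker.join();
    }
}
```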

Performance issues

Creating threads and switching between them consumes resources. If threads are created frequently, or the CPU spends more time scheduling threads than running them, using threads is not worth the cost and may even overload the CPU or cause an OOM

Let’s look at a few examples

Thread-unsafe classes

Case 1

When synchronized access to thread-unsafe collections (ArrayList, HashMap, and so on) is needed, it is best to use thread-safe concurrent collections instead

In a multithreaded environment, modifying a thread-unsafe collection while traversing it may throw a ConcurrentModificationException; this is the often-mentioned fail-fast mechanism

The following example simulates multiple threads working on an ArrayList simultaneously, with thread T1 traversing the list and printing, and thread T2 adding elements to the list

List<Integer> list = new ArrayList<>();
list.add(0);
list.add(1);
list.add(2);        // list: [0, 1, 2]
System.out.println(list);

// Thread t1 traverses the list and prints its elements
Thread t1 = new Thread(() -> {
    for (int i : list) {
        System.out.println(i);
    }
});

// Thread t2 adds elements to the list
Thread t2 = new Thread(() -> {
    for (int i = 3; i < 6; i++) {
        list.add(i);
    }
});

t1.start();
t2.start();

Looking into the ArrayList source to find where the exception is thrown, you can see that traversal is done through an internally implemented iterator

When we call the iterator’s next() method to get the next element, the checkForComodification() method checks whether modCount and expectedModCount are equal; if they are not, a ConcurrentModificationException is thrown

modCount is an ArrayList field that records the number of times the structure of the collection has been modified (the number of times the list’s length has changed). Every call to add or remove increases modCount by one

expectedModCount is an iterator field that is assigned the value of modCount at the moment the iterator instance is created (expectedModCount = modCount)

So when another thread adds or removes an element, modCount increases, expectedModCount no longer equals modCount during traversal, and the exception is thrown
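The fail-fast behavior is easy to reproduce even in a single thread, since any structural modification during iteration bumps modCount past the iterator's expectedModCount (the helper method triggersCme below is mine, for illustration):

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class FailFastDemo {
    public static boolean triggersCme() {
        List<Integer> list = new ArrayList<>(List.of(0, 1, 2));
        try {
            for (Integer i : list) {
                list.add(3); // structural change: modCount++ while iterating
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true; // next() found expectedModCount != modCount
        }
    }
}
```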

Use a locking mechanism to operate on collection classes that are not thread-safe

List<Integer> list = new ArrayList<>();
list.add(0);
list.add(1);
list.add(2);
System.out.println(list);

// Thread t1 traverses the list and prints its elements
Thread t1 = new Thread(() -> {
    synchronized (list) {            // lock the list with the synchronized keyword
        for (int i : list) {
            System.out.println(i);
        }
    }
});

// Thread t2 adds elements to the list
Thread t2 = new Thread(() -> {
    synchronized (list) {
        for (int i = 3; i < 6; i++) {
            list.add(i);
            System.out.println(list);
        }
    }
});

t1.start();
t2.start();

As in the code above, locking the list operations with the synchronized keyword prevents the exception. However, synchronized serializes the locked blocks of code, which hurts performance

A thread-safe concurrency utility class is recommended

JDK 1.5 added a number of thread-safe utility classes, such as the concurrent containers CopyOnWriteArrayList and ConcurrentHashMap

It is recommended to use these classes in daily development to realize multithreaded programming
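For example, replacing the ArrayList from the earlier snippet with a CopyOnWriteArrayList removes both the exception and the explicit lock; a sketch (the run() helper is mine, for illustration):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowDemo {
    public static List<Integer> run() {
        List<Integer> list = new CopyOnWriteArrayList<>(List.of(0, 1, 2));

        // Iterators see a snapshot of the backing array, so no ConcurrentModificationException
        Thread t1 = new Thread(() -> { for (int i : list) System.out.println(i); });
        // Each write copies the backing array instead of mutating it in place
        Thread t2 = new Thread(() -> { for (int i = 3; i < 6; i++) list.add(i); });

        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return list;
    }
}
```

The trade-off is that every write copies the whole array, so CopyOnWriteArrayList suits read-heavy workloads.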

Case 2

Do not use SimpleDateFormat as a global variable

SimpleDateFormat is a thread-unsafe class; the root cause is that its internal implementation does not synchronize access to some shared state (such as its internal Calendar)

public static final SimpleDateFormat SDF_FORMAT = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

public static void main(String[] args) {
    // Both threads call SDF_FORMAT.parse at the same time
    Thread t1 = new Thread(() -> {
        try {
            Date date1 = SDF_FORMAT.parse("2019-12-09 17:04:32");
        } catch (ParseException e) {
            e.printStackTrace();
        }
    });

    Thread t2 = new Thread(() -> {
        try {
            Date date2 = SDF_FORMAT.parse("2019-12-09 17:43:32");
        } catch (ParseException e) {
            e.printStackTrace();
        }
    });

    t1.start();
    t2.start();
}

Using SimpleDateFormat as a local variable or in conjunction with ThreadLocal is recommended

The simplest way to do this is to use SimpleDateFormat as a local variable

However, if it is used inside a for loop, many instances are created; this can be optimized by combining it with a ThreadLocal

// Initialization
public static final ThreadLocal<SimpleDateFormat> SDF_FORMAT = new ThreadLocal<SimpleDateFormat>() {
    @Override
    protected SimpleDateFormat initialValue() {
        return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
    }
};

// Usage
Date date = SDF_FORMAT.get().parse(wedDate);

Java8 LocalDateTime and DateTimeFormatter are recommended

LocalDateTime and DateTimeFormatter are new features introduced in Java 8 that are not only thread-safe, but also easier to use

It is recommended to replace Calendar and SimpleDateFormat with LocalDateTime and DateTimeFormatter in practical development

DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
LocalDateTime time = LocalDateTime.now();
System.out.println(formatter.format(time));

Correct release of lock

Suppose we have this pseudo code:

Lock lock = new ReentrantLock();
...
try {
    lock.tryLock(timeout, TimeUnit.MILLISECONDS);
    // Business logic
}
catch (Exception e) {
    // Log the error
    // Throw the exception or return
}
finally {
    // Business logic
    lock.unlock();
}
...

In this code, a piece of business logic is executed before the finally block releases the lock

If the thread holding the lock cannot finish that logic because a dependent service is unavailable, it never releases the lock; other threads then block waiting for it, and the thread pool eventually fills up

So do not run business logic before releasing the lock; the finally block should only release resources held by the current thread (such as locks and I/O streams)

Also, set a reasonable timeout period when acquiring a lock

To prevent a thread from blocking forever while waiting for a lock, set a timeout for acquisition; when it expires, the thread can throw an exception or return an error status code. The timeout should not be too long, but it must be longer than the execution time of the locked business logic.
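Combining the two rules (a release-only finally block and a bounded tryLock), a sketch of the safer pattern (class and method names are illustrative):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class SafeLocking {
    private final Lock lock = new ReentrantLock();

    public boolean doWork() {
        boolean acquired = false;
        try {
            // Bound the wait; the timeout should exceed the critical section's normal run time
            acquired = lock.tryLock(200, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        if (!acquired) {
            return false; // fail fast instead of blocking forever
        }
        try {
            // Business logic goes here, inside the locked region
            return true;
        } finally {
            lock.unlock(); // finally only releases the resource, nothing else
        }
    }
}
```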

Use thread pools correctly

Case 1

Do not use thread pools as local variables

public void request(List<Id> ids) {
    for (int i = 0; i < ids.size(); i++) {
        ExecutorService threadPool = Executors.newSingleThreadExecutor();
    }
}

Here a thread pool is created inside the for loop, so each time the method executes, as many thread pools are created as the input list has elements, and shutdown() is never called to destroy them after the method finishes

As more and more requests come in, the pools use more and more memory, resulting in frequent full GCs or even an OOM. Creating a thread pool on every method call is pointless: it is no different from frequently creating and destroying threads yourself, gains none of the pool’s benefits, and consumes more resources than a shared pool would

So try to use the thread pool as a global variable

Case 2

Be careful with the default thread pool factory methods

Executors.newFixedThreadPool(int);     // Create a fixed-size thread pool
Executors.newSingleThreadExecutor();   // Create a thread pool of capacity 1
Executors.newCachedThreadPool();       // Create a thread pool bounded only by Integer.MAX_VALUE

All three of the default thread pools above carry risks:

newFixedThreadPool creates a pool whose corePoolSize and maximumPoolSize are equal, using a LinkedBlockingQueue as the work queue.

newSingleThreadExecutor sets both corePoolSize and maximumPoolSize to 1 and also uses a LinkedBlockingQueue

The default capacity of LinkedBlockingQueue is Integer.MAX_VALUE = 2147483647, which on a real machine is effectively an unbounded queue

  • When newFixedThreadPool or newSingleThreadExecutor is already running corePoolSize threads, subsequent requests are placed in the blocking queue. Because the queue is so large, those requests cannot fail fast and may block for a long time, which can fill up the requester’s thread pool and drag down the whole service.

newCachedThreadPool sets corePoolSize to 0 and maximumPoolSize to Integer.MAX_VALUE, and uses a SynchronousQueue as the work queue. A SynchronousQueue does not hold tasks waiting to be executed

  • So newCachedThreadPool creates a new thread whenever a task arrives, and with maximumPoolSize effectively infinite, the number of threads created may fill up the machine’s memory.

Therefore, you need to create a custom thread pool based on your service and hardware configuration
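A sketch of such a custom pool (the queue capacity, sizing, and rejection policy here are placeholder values; adapt them to your service):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolFactory {
    // Bounded queue + explicit rejection policy: overload produces back-pressure
    // instead of hoarding tasks in an unbounded queue
    public static ThreadPoolExecutor create() {
        int cores = Runtime.getRuntime().availableProcessors();
        return new ThreadPoolExecutor(
                cores,                       // corePoolSize
                cores * 2,                   // maximumPoolSize
                60L, TimeUnit.SECONDS,       // idle non-core threads are reclaimed
                new ArrayBlockingQueue<>(100),             // bounded work queue
                new ThreadPoolExecutor.CallerRunsPolicy()  // overload slows the submitter
        );
    }
}
```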

Recommended number of threads

Suggestions for setting the thread pool’s corePoolSize:

1. CPU-intensive applications

CPU-intensive means that tasks perform a large number of complex calculations with almost no blocking, and need the CPU to run at high speed for long stretches.

General formula: corePoolSize = number of CPU cores + 1. The number of cores available to the JVM can be obtained via Runtime.getRuntime().availableProcessors().

2. IO-intensive applications

IO-intensive tasks involve a lot of disk reads/writes or network traffic, and threads spend more time blocked on I/O than using the CPU. Typical business applications are IO-intensive.

Reference formula: optimal number of threads = number of CPUs / (1 - blocking coefficient), where blocking coefficient = thread wait time / (thread wait time + CPU processing time).

For IO-intensive tasks, CPU processing time is often much shorter than thread wait time, so the blocking coefficient is generally taken to be between 0.8 and 0.9. For a 4-core CPU, corePoolSize can then be set to 4 / (1 - 0.9) = 40. Of course, the final settings should be tuned based on the machine’s actual runtime metrics
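The two formulas can be written directly in code (a helper sketch of my own; input validation is omitted and the blocking coefficient is assumed to be in [0, 1)):

```java
public class ThreadSizing {
    // CPU-intensive: cores + 1, the extra thread covers brief stalls such as page faults
    public static int cpuBoundThreads(int cores) {
        return cores + 1;
    }

    // IO-intensive: cores / (1 - blockingCoefficient),
    // where blockingCoefficient = waitTime / (waitTime + cpuTime)
    public static int ioBoundThreads(int cores, double blockingCoefficient) {
        return (int) (cores / (1 - blockingCoefficient));
    }
}
```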

Finally

If you found this article helpful, a like and a share would be appreciated, thank you!
