Preface

1. Basic introduction

While studying Android multithreading optimization methods, I found that my understanding of the topic was too one-sided.

The purpose of writing this article is to improve my own understanding of Android multithreading optimization, and the purpose of sharing it is the hope that you can draw some inspiration from it too.

This article is divided into the following three parts.

  • The first part

    The first part is about the basic knowledge of multithreading optimization, including the introduction of threads and the basic principle of thread scheduling.

  • The second part

    The second part is about problems that multithreading optimization needs to guard against, including an introduction to thread safety problems and ways to achieve thread safety.

  • The third part

    The third part looks at some of the methods that can be used for multithreading optimization, including the way threads collaborate and the way Android typically performs asynchronous tasks.

2. Reading tips

As you read this article, drawing and thinking will help you remember and understand the content better.

  • drawing

    Drawing here means putting the key points of each section on the nodes of a mind map.

    Mind mapping lets you build visual connections between pieces of random information.

    Random information is information with no logical relationship between its items, such as the name of a thread and the states of a thread.

    Because there is no logical connection between such items, they are difficult to remember.

    By making associations, our brains are better able to remember random information.

  • thinking

    Learning is not about being constrained by existing knowledge, but about building on existing knowledge and developing new ideas.

    Keep these questions in mind as you read this article.

    • What is the basis of this statement?
    • How do you explain this concept in your own way?
    • How can you apply this technique in your own projects?
    • What is the actual code implementation of this concept?
    • What are the problems with this implementation?

3. Acronyms

  • AS

    Android Studio (the Android application development IDE)

  • GC

    • Garbage Collector
    • Garbage Collection
  • ART

    Android Runtime

  • JVM

    Java Virtual Machine

  • JUC

    java.util.concurrent (the Java concurrency package)

1. Can we not use multithreading?

Whether you understand multithreading or not, you have to use multithreading.

  • The GC thread

    Even if we run an empty demo project created with AS, that doesn’t mean we are running a single-threaded application.

    Because the application runs on ART, and ART has its own GC thread in addition to the main thread, it is still a multi-threaded application.

  • Third party thread

    As we develop our application, we create threads indirectly, even if we don’t create them directly.

    Because the third-party libraries we use every day, and the Android system itself, all use multithreading.

    Glide, for example, uses a worker thread to load an image from the network, waits for the image to load, and then switches back to the main thread to set the image into the ImageView.

  • Rigid requirements

    If there were only one thread in our application, loading animations could not play and the interface would freeze, and users would lose patience.

    Android also mandates that developers make network requests on a worker thread rather than the main thread, which means we have to use multiple threads to develop Android applications.

2. Why do multithreading optimizations?

Now that multithreading is inevitable, what are the problems with it?

The purpose of multithreading optimization is to solve the safety and liveness problems of multithreading.

These two kinds of problems can cause a multithreaded program to produce incorrect results or leave tasks unable to run. Let’s take a look at each of them.

  • Safety problems

    If two chefs, Xiao Zhang and Lao Wang, each cook their own dish, and each knows exactly how much salt and sugar goes into his dish, the probability of problems is relatively low.

    But what if two people cook the same dish?

    While Xiao Zhang is cooking, Lao Wang snatches the pot away. Lao Wang doesn’t know whether Xiao Zhang has already added salt, so he adds salt again, and the resulting dish is too salty to eat. The two then decide to go outside and fight it out.

    The “dish” here corresponds to the data in our program.

    This phenomenon is one of the causes of thread safety: Race conditions.

    Races occur because of Java’s memory model and thread scheduling mechanism, which are covered in more detail later.

  • Liveness problems

    After the fight, manager Lao Li issued a rule: anyone who fights is docked 100 yuan. With this rule in place, Xiao Zhang and Lao Wang no longer dare to fight, but a few days later they find a new way to get at each other.

    One day Xiao Zhang is cooking and wants to add salt and then sugar, but Lao Wang is holding the salt; Lao Wang wants to add sugar and then salt, but Xiao Zhang is holding the sugar. Two hours later neither of them has finished his dish, and manager Lao Li is dumbfounded once again.

    This is one manifestation of thread liveness problems: deadlock.

The three aspects of thread safety and the four kinds of liveness problems are covered in more detail later in this article.

3. What are threads?

Earlier we said that multithreaded programming can lead to all sorts of problems, but what exactly is a thread?

This section covers the following topics.

  • Thread overview
  • Four attributes of a thread
  • Six methods for threads
  • Six states of a thread

3.1 Introduction to Threads

A thread is the smallest unit of independent execution in a process and the basic unit of CPU resource allocation.

A process is the basic condition for a program to apply for resources from the operating system. A process can contain multiple threads, and threads in the same process can share resources in the process, such as memory space and file handles.

The operating system allocates resources to processes, but CPU resources are special in that they are allocated to threads. In this case, CPU resources are CPU time slices.

The relationship between process and thread is just like the relationship between hotel and employees. Hotel provides service to customers, and the specific way of providing service is realized by individual employees.

Threads are used to perform specific tasks, such as downloading files, loading images, drawing interfaces, and so on.

3.2 Four attributes of threads

A thread has four attributes: ID, name, category, and priority. In addition, some thread attributes are inherited. Let’s look at what each of the four attributes does and how thread inheritance works.

3.2.1 ID

  • role

    Thread ids are used to identify different threads, and each thread has a different number.

  • Matters needing attention

    • Cannot be used as a unique identifier

      After a thread with a given ID finishes running, that ID may be reused by a later thread, so an ID is not suitable as a unique identifier.

    • read-only

      The number is a read-only property and cannot be modified

3.2.2 name

Each thread has its own name, which defaults to Thread- followed by a number, such as Thread-0.

In addition to the default values, we can also give the threads names to differentiate each thread in our own way.

  • role

    Giving a thread a name allows us to use the name of a thread to quickly locate the problem

3.2.3 category

Threads are divided into daemon threads and user threads; a thread can be set as a daemon thread with setDaemon(true).

When the JVM exits, it considers whether all user threads have completed execution and exits if so.

In the case of a daemon thread, the JVM exits without considering whether it has completed execution.

  • role

    Daemon threads are usually used to perform unimportant tasks, such as monitoring the health of other threads, and the GC thread is a daemon thread.

  • Matters needing attention

    setDaemon() must be called before the thread is started; otherwise the JVM throws an IllegalThreadStateException.
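The rule can be verified with a short runnable sketch (the class and helper names are made up for illustration): a daemon flag set before start() works, while setting it on an already-running thread throws IllegalThreadStateException.

```java
public class DaemonDemo {

    // Returns true if calling setDaemon() on an already-started (alive)
    // thread throws IllegalThreadStateException, as described above.
    static boolean setDaemonAfterStartFails() throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(1000); // keep the thread alive for a moment
            } catch (InterruptedException ignored) {
                // woken up early by interrupt(): just exit
            }
        });
        t.start();
        boolean threw;
        try {
            t.setDaemon(true); // too late: the thread is already alive
            threw = false;
        } catch (IllegalThreadStateException expected) {
            threw = true;
        }
        t.interrupt(); // let the sleeping thread finish early
        t.join();
        return threw;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> { });
        worker.setDaemon(true); // correct: set before start()
        worker.start();
        System.out.println(worker.isDaemon());          // true
        System.out.println(setDaemonAfterStartFails()); // true
        worker.join();
    }
}
```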

3.2.4 priority

  • role

    The thread Priority is used to indicate which thread the application wants to run first, and the thread scheduler uses this value to determine which thread to run first.

  • Value range

    Java Thread priorities range from 1 to 10. The default value is 5. Thread defines the following three priority constants.

    • Lowest priority: MIN_PRIORITY = 1
    • Default priority: NORM_PRIORITY = 5
    • Maximum priority: MAX_PRIORITY = 10
  • Matters needing attention

    • Does not guarantee that

      The thread scheduler uses thread priorities as a reference value and does not necessarily execute threads in the order we set them

    • Thread starvation

      Improper use of priorities can cause some threads never to get a chance to run, a condition known as thread starvation, which we’ll learn more about in Section 7
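As a minimal sketch of the priority API (the class name is hypothetical), the constants and the setter can be exercised like this; remember that the value we set is only a hint to the scheduler, not a guarantee.

```java
public class PriorityDemo {

    // Raises a thread's priority to the maximum and returns what the
    // thread reports back. The value is a hint, not a guarantee.
    static int raiseToMax(Thread t) {
        t.setPriority(Thread.MAX_PRIORITY);
        return t.getPriority();
    }

    public static void main(String[] args) {
        Thread t = new Thread(() -> { });
        // A new thread inherits its creator's priority (see section 3.2.5),
        // so this is usually NORM_PRIORITY = 5.
        System.out.println(t.getPriority());
        System.out.println(raiseToMax(t)); // 10
        System.out.println(Thread.MIN_PRIORITY + "/"
                + Thread.NORM_PRIORITY + "/"
                + Thread.MAX_PRIORITY);    // 1/5/10
    }
}
```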

3.2.5 inheritance

Thread inheritance means that the category and priority attributes of a thread are inherited: their initial values are determined by the thread that creates it.

If daemon thread A with priority 5 starts thread B, then thread B is also a daemon thread with priority 5.

We call thread A the parent thread of thread B, and thread B the child thread of thread A.

3.3 Six methods of threading

Threads have six commonly used methods: three instance methods, start(), run(), and join(), and three static methods, Thread.currentThread(), Thread.yield(), and Thread.sleep().

Let’s take a look at the functions and precautions of each of these six methods.

3.3.1 start()

  • role

    The start() method starts the thread.

  • Matters needing attention

    This method can only be called once; calling it again will not make the thread run again but will instead throw an IllegalThreadStateException.

3.3.2 run()

  • role

    The run() method contains the specific logic of the task; it is called by the JVM and generally does not need to be called directly by the developer.

  • Matters needing attention

    If we call run() directly ourselves and the JVM also calls it once through start(), the method will be executed twice

3.3.3 join()

  • role

    The join() method is used to wait for other threads to finish executing.

    If thread A calls thread B’s join() method, thread A will wait until thread B finishes running.

  • Matters needing attention

    The wait caused by join() can be interrupted, so calling this method requires catching InterruptedException
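A minimal sketch of join() (class and helper names are made up): the calling thread starts a worker, waits for it with join(), and can then safely read the worker's result; note that join() forces the caller to handle InterruptedException.

```java
public class JoinDemo {

    // The calling thread starts worker b, then blocks in join() until b
    // terminates, so it reliably sees the result b produced.
    static int computeWithJoin() throws InterruptedException {
        int[] result = new int[1];
        Thread b = new Thread(() -> result[0] = 21 * 2);
        b.start();
        b.join(); // may throw InterruptedException, which the caller must handle
        return result[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(computeWithJoin()); // 42
    }
}
```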

3.3.4 Thread.currentThread()

  • role

    The currentThread() method is a static method that gets the thread executing the current method.

    We can call Thread.currentThread() in any method to get the current thread and set attributes such as its name and priority.
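A small sketch (class and thread names are made up) showing that Thread.currentThread() returns whichever thread is executing the current method:

```java
public class CurrentThreadDemo {

    // Returns the name of whichever thread happens to execute this method.
    static String nameOfCurrentThread() {
        return Thread.currentThread().getName();
    }

    public static void main(String[] args) throws InterruptedException {
        // Called from the main thread, currentThread() is the main thread.
        System.out.println(nameOfCurrentThread());

        // Called inside run(), currentThread() is the worker thread.
        Thread t = new Thread(
                () -> System.out.println(nameOfCurrentThread()),
                "worker-1"); // a custom name, as described in section 3.2.2
        t.start();
        t.join();
    }
}
```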

3.3.5 Thread.yield()

  • role

    The yield() method is a static method that makes the current thread give up its hold on the processor, which is roughly equivalent to temporarily lowering the thread’s priority.

    Calling this method is like saying to the thread scheduler, “If other threads want processor resources, give them, otherwise I’ll keep using them.”

  • Matters needing attention

    This method does not necessarily put the thread into a suspended state.

3.3.6 Thread.sleep(ms)

  • role

    The sleep(ms) method is a static method used to make the current thread sleep (pause) for a specified amount of time.

The Thread class provides more methods than the six above; the use of other methods is covered in more detail later in this article.

3.4 Six states of threads

3.4.1 Thread life cycle

Like activities, threads have their own lifecycle, and lifecycle events are triggered by the user (developer).

From an Activity perspective, when the user clicks a button to open an Activity, it triggers the Activity’s onCreate() method.

From the Thread’s point of view, when the developer calls the start() method, it triggers the Thread’s run() method.

If we performed a time-consuming operation in the onPause() method of the previous Activity, the next Activity’s display will be slower because of the time-consuming operation. This is equivalent to Thread’s wait state.

The life cycle of a thread can be triggered not only by the developer, but also by other threads.

Thread.getState() returns a value of the Thread.State enum.

A thread has six states: new, runnable, blocked, waiting, timed waiting, and terminated. Let’s take a look at the transitions between these six states.

3.4.2 New state

When a thread is created and not started, it is in the NEW state.

3.4.3 Runnable state

When we call the thread’s start() method, the thread enters the RUNNABLE state.

The runnable state is divided into READY and RUNNING states.

  • Ready state

    A thread in the ready state can be chosen by the thread scheduler; once it is scheduled, its state changes from ready to running. A thread in the ready state is also called an active thread.

  • Running state

    The run state indicates that the thread is running, that is, the processor is executing the thread’s run() method.

    When a thread’s yield() method is called, its state may change from a running state to a ready state.

3.4.4 Blocked state

A thread is BLOCKED when one of the following occurs.

  • Initiate blocking I/O operations
  • Apply for a lock held by another thread
  • Failed to enter a synchronized method or block

3.4.5 Waiting state

A thread enters the WAITING state after calling certain methods; it is waiting for another thread to perform a corresponding action.

  • Wait state

    There are several ways to put a thread into the wait state.

    • Object.wait()

    • LockSupport.park()

    • Thread.join()

  • Waking up

    There are several ways to move a thread from the waiting state back to the runnable state; this is called waking the thread up.

    • Object.notify()

    • Object.notifyAll()

    • LockSupport.unpark()

3.4.6 Timed wait state

The difference between TIMED_WAITING and WAITING is that a thread in TIMED_WAITING waits only for a specified period of time, after which it returns to the runnable state.

The following methods allow a thread to enter a timed wait state, with ms, ns, and time parameters representing milliseconds, nanoseconds, and absolute time, respectively.

  • Thread.sleep(ms)
  • Thread.join(ms)
  • Object.wait(ms)
  • LockSupport.parkNanos(ns)
  • LockSupport.parkUntil(time)

3.4.7 Terminated state

A thread is in the TERMINATED state when it has finished running its task or has been terminated early.
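The state transitions above can be observed directly with Thread.getState(); in this sketch (class name hypothetical) a thread is seen in the NEW, TIMED_WAITING (inside sleep()), and TERMINATED states.

```java
public class StateDemo {

    // Observes one thread in three of the six states.
    static Thread.State[] observeStates() throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(200); // the thread spends most of its life here
            } catch (InterruptedException ignored) { }
        });

        Thread.State created = t.getState(); // NEW: created but not started

        t.start();
        // Poll until t reaches sleep(), i.e. the timed waiting state.
        Thread.State sleeping;
        while ((sleeping = t.getState()) != Thread.State.TIMED_WAITING) {
            Thread.sleep(1);
        }

        t.join();
        Thread.State done = t.getState(); // TERMINATED: run() has finished

        return new Thread.State[] { created, sleeping, done };
    }

    public static void main(String[] args) throws InterruptedException {
        for (Thread.State s : observeStates()) {
            System.out.println(s);
        }
    }
}
```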

4. What is the principle of thread scheduling?

After reading the previous section, we have a basic understanding of threads, what they are, and how their life cycle flows.

In this section we will look at how threads are scheduled, including the following.

  • Introduction to the Java Memory model
  • The cache
  • Java thread scheduling mechanism

4.1 Introduction to the Java Memory Model

Understanding Java’s memory model can help us better understand thread safety issues, so let’s look at what Java’s memory model is.

The Java Memory Model (JMM) specifies that all variables are stored in main memory and that each thread has its own working memory.

The JVM divides memory into chunks, with the method area and heap memory area shared by threads.

If three threads simultaneously increment variable A, whose value is 5, the final result should be 8.

But the real implementation of increment is a three-step operation, not an indivisible (atomic) operation.

  1. Assigns the value of variable A to the temporary variable temp
  2. Increase the value of temp by 1
  3. Reassign the value of temp to variable A.

If the other two threads read variable A while thread 1 is still at the second step, the final result will be 7, not 8 as expected.

This phenomenon is one of the problems of thread safety: atomicity.
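The lost-update effect of the three-step increment can be reproduced with a sketch like the following (class name hypothetical); for contrast it also increments a java.util.concurrent AtomicInteger, whose incrementAndGet() is a single indivisible operation.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class IncrementDemo {

    static int plain;
    static final AtomicInteger atomic = new AtomicInteger();

    // Runs `threads` threads that each increment both counters `perThread`
    // times, then returns { plainResult, atomicResult }.
    static int[] race(int threads, int perThread) throws InterruptedException {
        plain = 0;
        atomic.set(0);
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    plain++;                  // read, add 1, write back: three steps
                    atomic.incrementAndGet(); // one indivisible operation
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join(); // after join() we see the final values
        return new int[] { plain, atomic.get() };
    }

    public static void main(String[] args) throws InterruptedException {
        int[] r = race(3, 100_000);
        System.out.println("plain  = " + r[0]); // often less than 300000: lost updates
        System.out.println("atomic = " + r[1]); // always 300000
    }
}
```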

4.2 Cache

4.2.1 Overview of Caching

A modern processor’s processing power far exceeds the speed of main memory (DRAM): in the time it takes to perform a single memory read or write, the processor could execute hundreds of instructions.

To bridge the gap between the processor and the main memory, hardware designers put caches between the main memory and the processor.

Instead of dealing directly with main memory, the processor performs reads and writes to memory through the cache.

A cache is a small hardware-implemented hash table whose keys are the memory addresses of objects and whose values can be copies of memory data or data to be written to memory.

4.2.2 Cache internal structure

Internally, a Cache is similar to a Chained Hash Table consisting of buckets, each containing Cache entries.

4.2.3 Cache entry structure

Cache entries can be further divided into Tag, Data Block, and Flag sections.

  • Tag

    The Tag contains partial information about the memory address corresponding to the data in the cache line (the high part of the memory address).

  • Data Block

    A Data Block, also known as a Cache Line, is the smallest unit of Data exchange between the Cache and the main memory. It can store Data read from memory or Data to be written into memory.

  • Flag

    Flag indicates the status information of the corresponding cache line

4.3 Principles of Java Thread Scheduling

The CPU can execute only one machine instruction at any given moment, and each thread can execute instructions only after it has acquired the CPU.

That is, only one thread occupies the CPU and is running at any one time.

Multi-threaded concurrent running actually refers to multiple threads taking turns to acquire CPU usage and perform their respective tasks.

The JVM is responsible for thread scheduling, which allocates CPU usage to multiple threads according to a specific mechanism.

Thread scheduling models can be divided into two types: time-sharing scheduling model and preemptive scheduling model.

  • Time-sharing scheduling model

    The time-sharing scheduling model allows all threads to acquire CPU usage in turn and equally allocates the CPU time slices to each thread.

  • Preemptive scheduling model

    The JVM uses a preemptive scheduling model, in which the threads with the highest priority occupy the CPU first, and if they are all of the same priority, a random thread is selected and allowed to occupy the CPU.

    That is, if we start multiple threads at the same time, there is no guarantee that they will take turns getting equal slices of time.

    If our program wants to interfere with thread scheduling, the simplest way is to assign a priority to each thread.

5. What are thread safety issues?

After reading the previous section, we have a basic understanding of Java’s thread scheduling mechanism.

In this section we will take a look at thread safety issues caused by thread scheduling. It covers the following topics.

  • Races
  • Atomicity
  • Visibility
  • Orderliness

5.1 Races

Thread safety doesn’t mean that threads themselves are dangerous, nor that an unsafe thread will blow up your phone.

Thread safety refers to the possibility that interleaving operations on one or more shared mutable objects between threads may result in data exceptions.

A common problem in multithreaded programming is that the same input produces different outputs at different times. The phenomenon where the correctness of a result depends on timing is called a race: the correctness of the computation depends on the relative order or interleaving of the threads’ operations.

A race does not necessarily produce an incorrect result, but it leaves open the possibility that results are sometimes correct and sometimes wrong.

Races are often accompanied by dirty reads, where a thread reads stale data, and lost updates, where an update made by one thread is not reflected in subsequent reads of the data by other threads.

For shared variables, a race can occur when operations by multiple threads accessing (reading/writing) the same set of variables interleave. For example, one thread reads a shared variable and performs a calculation based on it while another thread updates the variable’s value, leading to dirty reads or lost updates.

In the case of local variables, the use of local variables does not cause a race because different threads each access their own local variables.

5.2 Atomicity

The literal meaning of atomic is indivisible. An operation that accesses shared variables is an Atomic operation if it appears indivisible to every thread other than the one executing it, and such an operation is accordingly said to have Atomicity.

From the perspective of any thread other than the executing one, an indivisible operation has only two observable states, not started and finished; no intermediate state of the operation can be seen.

Take stir-fry for example, stir-fry can be divided into several steps: oil, vegetables, salt, sugar and so on.

But from the guest’s point of view, there are only two states of a dish: it’s not ready and it’s ready.

Atomic operations on the same set of shared variables cannot be interleaved. This rules out the possibility that, while one thread performs an operation, another thread reads or updates the shared variables accessed by that operation, causing dirty reads or lost updates.

5.3 Visibility

In a multithreaded environment, when a thread updates a shared variable, subsequent threads accessing that variable may not be able to read the result of the update immediately, or even forever. This is another manifestation of thread-safety issues: visibility.

Visibility refers to whether a thread's update to a shared variable is visible to other threads reading it.

Visibility problems are related to the computer’s storage system: a variable in a program may be allocated to a register rather than main memory, each processor has its own registers, and one processor cannot read the contents of another processor’s registers.

Even if shared variables are allocated for storage in main memory, visibility is not guaranteed because the processor does not access main memory directly, but through caching.

Changes that a thread running on one processor makes to a variable may end up only in that processor’s Store Buffer, not yet in the cache, let alone in main memory, so other processors may not see them.

Ensuring visibility means that a thread can read the latest value of the corresponding shared variable. From the perspective of thread safety, ensuring atomicity alone is not enough; visibility must also be guaranteed. Only by ensuring both visibility and atomicity can we guarantee that a thread correctly sees the updates other threads make to a shared variable.
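A common minimal demonstration of visibility (class name hypothetical) uses a volatile flag: the reader thread spins until the writer's update becomes visible, and volatile guarantees that it eventually does.

```java
public class VisibilityDemo {

    // volatile guarantees that the writer's update becomes visible to the
    // reader; without it, the reader's loop might spin forever.
    static volatile boolean ready = false;

    static boolean readerSeesUpdate() throws InterruptedException {
        ready = false;
        Thread reader = new Thread(() -> {
            while (!ready) {
                // spin until the volatile write becomes visible
            }
        });
        reader.start();
        Thread.sleep(100); // let the reader start spinning
        ready = true;      // volatile write: published to all threads
        reader.join(2000); // should return almost immediately
        return !reader.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(readerSeesUpdate()); // true
    }
}
```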

5.4 Orderliness

Orderliness refers to whether memory access operations performed by a thread on one processor appear in order to threads running on another processor.

A sequential structure is a basic structure in structured programming that indicates that we want one operation to be executed before another.

However, in a multi-core processor environment, this order of execution is not guaranteed: the compiler may swap the order of two operations, and the processor may not execute instructions in program order.

Reordering is an optimization of the processor and compiler. It can improve the performance of a single-threaded program without affecting the correctness of the program. However, it can affect the correctness of multithreaded programs, resulting in thread safety problems.

To improve execution efficiency, modern processors often execute instructions not in program order but in order of readiness, running whichever instruction is ready first; this is the processor’s out-of-order execution.
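Reordering can be probed with the classic two-thread litmus test sketched below (class name hypothetical). With plain, non-volatile fields, the Java Memory Model permits every combination of 0 and 1 for r1 and r2, and the outcome r1 == 0 && r2 == 0 is only possible when the writes and reads were reordered; any single trial may or may not show it.

```java
public class ReorderDemo {

    static int x, y, r1, r2;

    // One trial of the litmus test: thread a does x = 1; r1 = y and
    // thread b does y = 1; r2 = x. If both r1 and r2 end up 0, the
    // writes were reordered relative to the reads.
    static int[] trial() throws InterruptedException {
        x = 0; y = 0;
        Thread a = new Thread(() -> { x = 1; r1 = y; });
        Thread b = new Thread(() -> { y = 1; r2 = x; });
        a.start(); b.start();
        a.join(); b.join(); // join() makes r1 and r2 safe to read here
        return new int[] { r1, r2 };
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 10_000; i++) {
            int[] r = trial();
            if (r[0] == 0 && r[1] == 0) {
                System.out.println("reordering observed at trial " + i);
                return;
            }
        }
        System.out.println("no reordering observed in 10000 trials (it remains possible)");
    }
}
```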

6. How to achieve thread safety?

Thread safety requires the atomicity, visibility, and orderliness described above.

Common approaches to achieving thread safety are locks and atomic types; locks can be divided into internal locks, explicit locks, read/write locks, and the volatile keyword.

Let’s take a look at the usage and characteristics of these four kinds of locks and of the atomic types.

6.1 Locks

The “100-yuan fine for fighting” mentioned earlier is a real-life lock: it makes Xiao Zhang and Lao Wang do their jobs properly and stop cooking inedible dishes.

This is what a Lock does. It lets multiple threads work together in an orderly way and avoids data anomalies caused by interleaved operations.

6.1.1 Five features of locks

  • A critical region

    The code executed by the thread holding the lock after it is acquired and before it is released is called a Critical Section.

  • exclusive

    The exclusivity of locks ensures that a shared variable can only be accessed by one thread at a time. This ensures that critical section code can only be executed by one thread at a time. Critical section operations are indivisible, thus ensuring atomicity.

  • serial

    Locking is equivalent to changing the operation of multiple threads on shared variables from concurrent to serial.

  • Three kinds of security

    Locks can protect shared variables to achieve thread safety, and their functions include ensuring atomicity, visibility and order.

  • Scheduling policy

    Lock scheduling policies are divided into fair policies and unfair policies. The corresponding locks are called fair locks and unfair locks.

    A fair lock checks whether there are queued threads before granting the lock; if there are, the thread at the head of the queue is served first.

    Fair locking guarantees fairness of lock scheduling at the cost of more context switches, because it increases the chance that threads will be suspended and woken up.

6.1.2 Two problems with locking

  • Lock leak

    A lock leak is a situation in which a thread acquires a lock but, due to a program error, the lock can never be released.

  • Activity problem

    Lock leaks can cause liveness problems, including deadlock.

6.2 Internal locks

6.2.1 Introduction to Internal Locking

Java provides the synchronized keyword for internal locking, and methods and code blocks that are modified by the synchronized keyword are called synchronized methods and code blocks.

Let’s take a look at seven features of internal locking.

  • Monitor lock

    Because thread synchronization using synchronized is done through a monitor, internal locks are also called monitor locks.

  • Automatic get/release

    Locks on synchronized code blocks are applied and released by threads within the JVM. Threads automatically acquire the lock before entering the synchronized code block and release the lock when exiting the synchronized code block, which is why synchronized code blocks are called internal locks.

  • Lock methods/classes/objects

    The synchronized keyword can modify methods and code blocks, locking a specific class or a specific object.

  • A critical region

    A synchronized code block is a critical section of an internal lock that must be held by a thread before it can execute the critical section code.

  • Lock handle

    The lock object of an internal lock is called the lock handle, and the lock handle variable is usually declared with the private and final keywords.

    If the lock handle variable is changed, multiple threads executing the same synchronized code block will actually use different locks.

  • No lock leaks

    “Leak” here refers to a lock leak. Internal locks do not cause lock leaks, because when the javac compiler compiles a synchronized block into bytecode it handles any exceptions that may be thrown in the critical section, so an exception in the critical section does not prevent the lock from being released.

  • Unfair lock

    Internal locks use an unfair scheduling policy; that is, they are unfair locks, which avoids the extra context-switch overhead of fair scheduling.

6.2.2 Basic Usage of internal Locks

// Lock handle
private final String hello = "hello";

private void getLock1() {
  synchronized (hello) {
    System.out.println("ThreadA has acquired the internal lock");
    ThreadUtils.sleep(2 * 1000);
  }
  System.out.println("ThreadA releases the internal lock");
}
private void getLock2() {
  System.out.println("ThreadB attempts to acquire the internal lock");
  synchronized (hello) {
    System.out.println("ThreadB has acquired the internal lock");
  }
  System.out.println("ThreadB continues execution");
}

When we run the above two functions separately in two threads, we get the following output.

ThreadA has acquired the internal lock
ThreadB attempts to acquire the internal lock
ThreadA releases the internal lock
ThreadB has acquired the internal lock
ThreadB continues execution

6.3 Explicit locks

6.3.1 Introduction to Explicit Locks

An Explicit Lock is an instance of the Lock interface, which abstracts explicit locks; ReentrantLock is its implementation class.

Here are four features of an explicit lock.

  • reentrant

    An explicit lock is a reentrant lock, which means that once a thread holds the lock, it can successfully apply for it again.

  • Manually obtain/release

    The difference between an explicit lock and an internal lock is that with an explicit lock we acquire and release the lock ourselves. To avoid lock leaks, we release the lock in a finally block

  • A critical region

    The code between lock() and unlock() methods is the critical section of an explicit lock

  • Fair/unfair lock

    Explicit locking allows us to choose our own lock scheduling policy.

    ReentrantLock has a constructor that takes a fair parameter; when it is true, the lock created is a fair lock.

    Because fair locks cost more than unfair locks, the default ReentrantLock scheduling policy is unfair.

6.3.2 Explicit Lock Usage

private final Lock lock = new ReentrantLock();

private void lock1() {
  lock.lock();
  System.out.println("Thread 1 has acquired the explicit lock");
  try {
    System.out.println("Thread 1 starts its operation");
    ThreadUtils.sleep(2 * 1000);
  } finally {
    lock.unlock();
    System.out.println("Thread 1 releases the explicit lock");
  }
}
private void lock2() {
  lock.lock();
  System.out.println("Thread 2 has acquired the explicit lock");
  try {
    System.out.println("Thread 2 starts its operation");
  } finally {
    System.out.println("Thread 2 releases the explicit lock");
    lock.unlock();
  }
}

When we execute the above two functions in two separate threads, we get the following output.

Thread 1 has acquired the explicit lock
Thread 1 starts its operation
Thread 1 releases the explicit lock
Thread 2 has acquired the explicit lock
Thread 2 starts its operation
Thread 2 releases the explicit lock

6.3.3 Four methods for acquiring an explicit lock

  • lock()

    The thread is blocked if the lock fails to be acquired

  • tryLock()

    Attempts to acquire the lock; returns true on success and false on failure, without blocking

  • tryLock(long time, TimeUnit unit)

    Attempts to acquire the lock within a given time; returns true if the lock is acquired, or false if it is not acquired within the specified time.

    Blocks for at most the specified period of time; the wait can be interrupted.

  • lockInterruptibly()

    Acquires the lock; the wait can be interrupted.
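The non-blocking behavior of tryLock() can be sketched as follows (class and helper names are made up): while one thread holds a ReentrantLock, another thread's tryLock() returns false immediately instead of blocking.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {

    static final Lock lock = new ReentrantLock();

    // Returns what tryLock() reports in another thread while the calling
    // thread holds the lock: false, and without any blocking.
    static boolean tryWhileHeld() throws InterruptedException {
        lock.lock(); // held by the calling thread
        try {
            final boolean[] acquired = new boolean[1];
            Thread t = new Thread(() -> acquired[0] = lock.tryLock());
            t.start();
            t.join();
            return acquired[0]; // false: the lock was busy
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(tryWhileHeld());  // false
        System.out.println(lock.tryLock());  // true: the lock is free now
        lock.unlock();
    }
}
```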

6.4 Differences between internal locks and explicit locks

After reading the introduction of internal and explicit locks, let’s look at the five differences between internal and explicit locks.

  • flexibility

    Internal locks are block-structured locks: the lock can only be acquired and released within a single method, which limits flexibility.

    Explicit locks are object-based locks: the lock can be acquired and released in different methods, making full use of the flexibility of object-oriented programming.

  • Lock scheduling strategy

    Internal locks can only be unfair locks.

    Explicit locks can choose their own lock scheduling policies.

  • convenience

    Internal locks are simple and easy to use, and lock leakage cannot occur.

    Explicit locks require manual lock acquisition/release and can leak if improperly used.

  • blocking

    If the thread holding the internal lock does not release the lock, the other thread claiming the lock must wait.

    The explicit Lock interface has a tryLock() method that returns false when another thread holds the Lock.

    This will not cause the thread to block and we can do something else if we fail to acquire the lock.

  • Applicable scenario

    We can use internal locks when the average lock-holding time of the threads involved is short

    We can use explicit locks (fair locks) when the average lock-holding time is long

6.5 Read/Write Locks

6.5.1 Read/Write Lock Overview

The exclusivity of ordinary locks prevents multiple threads from reading a shared variable at the same time in a thread-safe manner, which hurts concurrency; this is why read/write locks exist.

The ReadWriteLock interface implementation class is ReentrantReadWriteLock.

A thread that only reads shared variables is called a reader thread, and a thread that only updates shared variables is called a writer thread.

Read/write locks are an improved type of Exclusive lock, also known as Shared/Exclusive locks.

Read/write locks have the following six features.

  • Read lock Shared

    Read/write locks allow multiple threads to read shared variables at the same time. When a reader thread accesses a shared variable, it must hold the corresponding read lock, which can be held by multiple threads.

  • Write lock exclusive

    Read/write locks allow only one thread to update a shared variable at a time. When a writer thread accesses a shared variable, it must hold the corresponding write lock, which can only be held by one thread at any time.

  • Downgradable

    Read/write locks are reentrant locks that support downgrading; that is, a thread holding the write lock can acquire the corresponding read lock before releasing the write lock.

    This allows us to modify a variable, then read it and perform other operations without blocking other readers.

  • Not upgradable

    Read/write locks cannot be upgraded: a reader thread must release its read lock before it can apply for the write lock

  • Three safety guarantees

    Read/write locks allow multiple threads to read shared variables concurrently, yet thanks to the exclusivity of the write lock they still guarantee atomicity, visibility, and order.

  • Applicable scenario

    Read/write locks incur additional overhead and are appropriate only if the following two conditions are met

    • Read operations are much more frequent than write operations
    • The thread reading the shared variable holds the lock longer
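The downgrade feature described earlier can be sketched as follows (the class and field names are illustrative): a thread holding the write lock acquires the read lock before releasing the write lock, so it can keep reading the value it wrote while other readers become free to proceed:

```java
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DowngradeDemo {
  private static final ReadWriteLock rwLock = new ReentrantReadWriteLock();
  private static int sharedValue = 0;

  public static void main(String[] args) {
    rwLock.writeLock().lock();
    try {
      sharedValue = 42;            // update under the write lock
      rwLock.readLock().lock();    // downgrade: take the read lock while still holding the write lock
    } finally {
      rwLock.writeLock().unlock(); // release the write lock, keep the read lock
    }
    try {
      // still safely reading the value we wrote; other readers may now proceed too
      System.out.println("Read after downgrade: " + sharedValue);
    } finally {
      rwLock.readLock().unlock();
    }
  }
}
```

Attempting the reverse (taking the write lock while holding the read lock) would block forever, which is the "not upgradable" property above.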

6.5.2 Basic Usage of Read/Write Locks

private final ReadWriteLock readWriteLock = new ReentrantReadWriteLock();
private final Lock readLock = readWriteLock.readLock();
private final Lock writeLock = readWriteLock.writeLock();

private void write1() {
  writeLock.lock();
  System.out.println("Writer thread 1 has acquired the write lock.");
  try {
    System.out.println("Writer thread 1 starts operation");
    ThreadUtils.sleep(3 * 1000);
  } finally {
    writeLock.unlock();
    System.out.println("Writer thread 1 released the write lock.");
  }
}

private void write2() {
  writeLock.lock();
  System.out.println("Writer thread 2 has acquired the write lock.");
  try {
    System.out.println("Writer thread 2 starts operation");
  } finally {
    writeLock.unlock();
    System.out.println("Writer thread 2 released the write lock.");
  }
}

private void read1() {
  readLock.lock();
  System.out.println("Reader thread 1 has acquired the read lock.");
  try {
    System.out.println("Reader thread 1 starts operation");
    ThreadUtils.sleep(3 * 1000);
  } finally {
    readLock.unlock();
    System.out.println("Reader thread 1 released the read lock.");
  }
}

private void read2() {
  readLock.lock();
  System.out.println("Reader thread 2 has acquired the read lock.");
  try {
    System.out.println("Reader thread 2 starts operation");
    ThreadUtils.sleep(3 * 1000);
  } finally {
    readLock.unlock();
    System.out.println("Reader thread 2 released the read lock.");
  }
}

When executing the above four functions separately on four threads, we get the following output.

Writer thread 1 has acquired the write lock.
Writer thread 1 starts operation
Writer thread 1 released the write lock.
Writer thread 2 has acquired the write lock.
Writer thread 2 starts operation
Writer thread 2 released the write lock.
Reader thread 1 has acquired the read lock.
Reader thread 1 starts operation
Reader thread 2 has acquired the read lock.
Reader thread 2 starts operation
Reader thread 1 released the read lock.
Reader thread 2 released the read lock.

6.6 Volatile Keyword

The volatile keyword can be used to modify shared variables, which are called volatile variables. Volatile variables have the following characteristics.

  • Subject to change

    Volatile literally means "unstable": it marks a variable whose value may change at any time. Accordingly, reads and writes of a volatile variable always go to the cache or main memory instead of the value being kept in a register.

  • overhead

    • Is lower than the lock

      Volatile is less expensive than locking, and reads and writes to volatile variables do not cause context switches, so the volatile keyword is also called lightweight locking.

    • Higher than ordinary variables

      Volatile variable reads are more expensive than regular variables because the values of volatile variables are read from cache or main memory each time and cannot be temporarily stored in registers.

  • Release/store barriers

    For writes to volatile variables, the JVM inserts a release barrier before the operation and a storage barrier after the operation.

    Storage barriers flush the processor cache, so the insertion of a barrier after volatile writes keeps the results of all operations up to that barrier synchronized with other processors.

  • Load/get barriers

    For reads of volatile variables, the JVM inserts a load barrier before the operation and a fetch barrier after the operation.

    Load barriers flush the processor cache so that the thread’s processor synchronizes updates made by other processors to the shared variable into that processor’s cache.

  • Ensure order

    Volatile can prohibit instruction reordering, which means that using volatile ensures that operations are ordered.

  • Guaranteed visibility

    The loading barrier performed by the reader thread, in conjunction with the storage barrier performed by the writer thread, ensures visibility by making writes to volatile variables visible to the reader thread.

  • Atomicity

    On the atomicity side, volatile guarantees that reads and writes of long/double variables are atomic.

    For other variable types, reads and writes are already atomic, and volatile guarantees atomicity only for the write operation itself.

    If the value written to a volatile variable depends on shared state (as in count++, which reads before it writes), races can still occur, because another thread may update the shared variable before the assignment happens.
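The limits of volatile's atomicity can be demonstrated with a minimal sketch (thread and iteration counts are arbitrary): several threads incrementing a volatile int can lose updates, because ++ is a read-modify-write, while AtomicInteger counts exactly:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class VolatileCounterDemo {
  private static volatile int volatileCount = 0;                  // visible, but ++ is not atomic
  private static final AtomicInteger atomicCount = new AtomicInteger();

  public static void main(String[] args) throws InterruptedException {
    Thread[] threads = new Thread[4];
    for (int i = 0; i < threads.length; i++) {
      threads[i] = new Thread(() -> {
        for (int j = 0; j < 50_000; j++) {
          volatileCount++;               // racy read-modify-write: updates can be lost
          atomicCount.incrementAndGet(); // CAS-based increment: never loses updates
        }
      });
      threads[i].start();
    }
    for (Thread t : threads) {
      t.join();
    }
    System.out.println("volatile count: " + volatileCount + " (expected 200000, may be less)");
    System.out.println("atomic count: " + atomicCount.get());
  }
}
```

The volatile counter may print anything up to 200000 depending on interleaving; the atomic counter always prints exactly 200000.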

6.7 Atomic Types

6.7.1 Introduction to atomic Types

Under JUC, there is an atomic package, which contains a set of atomic classes whose methods are thread-safe without locking. Atomic classes are thread-safe at the hardware level through CAS directives in the Unsafe class.

The package includes AtomicInteger, AtomicBoolean, AtomicReference, AtomicReferenceFieldUpdater, and so on.

Let’s start with an example that uses the AtomicInteger increment.

// The initial value is 1
AtomicInteger integer = new AtomicInteger(1);

// Increment
int result = integer.incrementAndGet();

// The result is 2
System.out.println(result);

AtomicReference and AtomicReferenceFieldUpdater let us perform atomic operations on our own classes; both are implemented with CAS operations from Unsafe.

Let’s look at their usage and differences.

6.7.2 Basic Usage of AtomicReference

class AtomicReferenceValueHolder {
  AtomicReference<String> atomicValue = new AtomicReference<>("HelloAtomic");
}

public void getAndUpdateFromReference() {
  AtomicReferenceValueHolder holder = new AtomicReferenceValueHolder();

  // Compare and set the value:
  // if the value is HelloAtomic, change it to World
  holder.atomicValue.compareAndSet("HelloAtomic", "World");

  // World
  System.out.println(holder.atomicValue.get());

  // Update the value and get the updated value
  String value = holder.atomicValue.updateAndGet(new UnaryOperator<String>() {
    @Override
    public String apply(String s) {
      return "HelloWorld";
    }
  });

  // HelloWorld
  System.out.println(value);
}

6.7.3 Basic Usage of AtomicReferenceFieldUpdater

AtomicReferenceFieldUpdater is used somewhat differently from AtomicReference: we expose a plain String field directly and mark it volatile.

We then obtain an updater through the static newUpdater() method, passing the holder class, the field class, and the field name, a usage reminiscent of reflection. An AtomicReferenceFieldUpdater is normally kept as a static member of the class.

public class SimpleValueHolder {
  public static AtomicReferenceFieldUpdater<SimpleValueHolder, String> valueUpdater
    = AtomicReferenceFieldUpdater.newUpdater(
      SimpleValueHolder.class, String.class, "value");

  volatile String value = "HelloAtomic";
}

public void getAndUpdateFromUpdater() {
  SimpleValueHolder holder = new SimpleValueHolder();
  SimpleValueHolder.valueUpdater.compareAndSet(holder, "HelloAtomic", "World");

  // World
  System.out.println(SimpleValueHolder.valueUpdater.get(holder));

  String value = SimpleValueHolder.valueUpdater.updateAndGet(holder, new UnaryOperator<String>() {
    @Override
    public String apply(String s) {
      return "HelloWorld";
    }
  });

  // HelloWorld
  System.out.println(value);
}

6.7.4 Differences between AtomicReference and AtomicReferenceFieldUpdater

AtomicReference and AtomicReferenceFieldUpdater are nearly equivalent in capability; AtomicReference is simply easier to use.

Internally, AtomicReference also holds its value in a volatile variable.

Compared with AtomicReferenceFieldUpdater, each AtomicReference requires creating an extra object.

On 32-bit machines, that object's header is 12 bytes and its field 4 bytes, 16 extra bytes in total.

On 64-bit machines with pointer compression enabled, the object takes up 16 bytes.

On 64-bit machines without pointer compression, the object takes 24 bytes: 16 bytes of object header and 8 bytes for the field.

This overhead becomes significant when thousands of such objects are created.

That is why BufferedInputStream, Kotlin coroutines, and Kotlin's lazy implementation choose AtomicReferenceFieldUpdater as their atomic type.

Because of this overhead, AtomicReference is generally chosen only when few instances are created, such as for a singleton; otherwise AtomicReferenceFieldUpdater is the better choice.

6.8 Lock Usage Techniques

There is some overhead associated with using locks, but mastering the skills of using locks can reduce the overhead and potential problems caused by locks to a certain extent. The following are some techniques for using locks.

  • A short lock is better than a long lock

    Try to lock only the necessary parts

  • A small lock is better than a big one

    Where possible, split one big lock into several smaller locks that guard independent state

  • A private lock is better than a public lock

    Keep the locking logic inside private code where possible; letting external callers take the lock can lead to improper use and even deadlock

  • Nested locks are inferior to flat locks

    Avoid lock nesting when writing code

  • Split read/write lock

    Separate read and write locks whenever possible

  • Coarsen high-frequency locks

    Merge locks that are acquired very frequently and held very briefly, because each acquisition incurs some overhead

  • Eliminate useless locks

    Avoid locking where it is not needed, or use volatile instead
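The "short lock" and "small lock" tips can be illustrated with a hypothetical holder class (all names here are illustrative): expensive preparation happens outside the critical section, and independent fields are guarded by separate locks so updates to one do not contend with updates to the other:

```java
public class LockScopeDemo {
  private final Object nameLock = new Object(); // one lock per independent field
  private final Object ageLock = new Object();
  private String name = "";
  private int age = 0;

  public void updateName(String rawInput) {
    // expensive preparation happens outside the lock (short lock)
    String prepared = rawInput.trim().toLowerCase();
    synchronized (nameLock) { // only the shared write is locked
      name = prepared;
    }
  }

  public void updateAge(int newAge) {
    synchronized (ageLock) { // does not contend with updateName() (small lock)
      age = newAge;
    }
  }

  public String getName() {
    synchronized (nameLock) { return name; }
  }

  public int getAge() {
    synchronized (ageLock) { return age; }
  }

  public static void main(String[] args) {
    LockScopeDemo demo = new LockScopeDemo();
    demo.updateName("  Alice ");
    demo.updateAge(30);
    System.out.println(demo.getName() + "/" + demo.getAge());
  }
}
```

Note that the two locks are private final objects, following the "private lock" tip as well.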

7. What are thread activity problems?

The previous section described locks and their basic usage. Locks can cause threads to block, and this blocking can prevent tasks from making progress; in other words, threads lose their liveness, which is what we will cover in this section.

The problem with liveness is not that threads are too active, but that threads are not active enough for a task to make progress.

In this section we will look at four common thread activity problems: deadlock, lockout, livelock, and starvation.

7.1 Deadlock

Deadlock is a common thread activity problem: if two or more threads are suspended forever because each is waiting for the other, we have a deadlock.

Let's take a look at the four conditions that produce deadlocks and the four ways to avoid them.

7.1.1 Four conditions for deadlock occurrence

When multiple threads are deadlocked, these threads and their associated shared variables meet the following four conditions.

  1. Resource mutual exclusion

    The resources involved must be exclusive: each can be used by only one thread at a time

  2. Resources cannot be preempted

    The resources involved can only be released voluntarily by the thread holding them; other threads cannot take them away (force a release)

  3. Hold and wait for resources

    The thread in question holds at least one resource and has requested other resources, which happen to be held by other threads, and the thread does not release the held resources

  4. Circular wait for resources

    The threads involved must wait for resources held by other threads, which in turn wait for resources held by that thread

As long as a deadlock occurs, the above condition is always true, but if the above condition is true, it does not necessarily result in a deadlock.

7.1.2 Four ways to avoid deadlocks

To eliminate deadlocks, simply break one of the conditions above.

Since locks are exclusive and cannot be released passively, we break conditions 3 and 4.

  1. Coarse lock method

    Using a single coarse-grained lock in place of multiple locks enlarges the lock's scope: each thread accessing the shared resources needs to apply for only that one lock to perform its task, so the "hold and wait for resources" and "circular wait for resources" conditions no longer hold.

    The disadvantage of lock coarsening is reduced concurrency and possible waste of resources, because only one thread can access the resources at a time and other threads must put their tasks on hold.

  2. Lock sort

    The lock sort method means that related threads apply for locks in a globally uniform order.

    If there are multiple threads that need to apply for locks, all we need to do is make them apply for locks in a globally uniform order, thus breaking the “loop waiting for resources” condition.

  3. tryLock

    The explicit lock method ReentrantLock.tryLock(long time, TimeUnit unit) lets us set a timeout on the lock request, which breaks the "hold and wait for resources" condition.

  4. Open call

    An Open Call is a method that does not hold a lock when calling an external method. An Open Call breaks the “hold and wait for resources” condition.
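The lock-ordering method can be sketched with a hypothetical account transfer (the Account class and its id field are illustrative): both transfer directions acquire the two monitors in the same global order, so circular waiting cannot occur even though each thread transfers in the opposite direction:

```java
public class TransferDemo {
  static final class Account {
    final int id; // unique id defines the global lock order
    int balance;
    Account(int id, int balance) { this.id = id; this.balance = balance; }
  }

  // Both directions lock the two accounts in ascending id order,
  // so the "circular wait for resources" condition can never hold
  static void transfer(Account from, Account to, int amount) {
    Account first = from.id < to.id ? from : to;
    Account second = (first == from) ? to : from;
    synchronized (first) {
      synchronized (second) {
        from.balance -= amount;
        to.balance += amount;
      }
    }
  }

  public static void main(String[] args) throws InterruptedException {
    Account a = new Account(1, 100);
    Account b = new Account(2, 100);
    // Opposite transfer directions would risk deadlock with a naive "from first" lock order
    Thread t1 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(a, b, 1); });
    Thread t2 = new Thread(() -> { for (int i = 0; i < 1000; i++) transfer(b, a, 1); });
    t1.start();
    t2.start();
    t1.join();
    t2.join();
    System.out.println("Total: " + (a.balance + b.balance));
  }
}
```

With a naive ordering (always lock `from` first), the two threads could each hold one account and wait for the other forever; with the id-based ordering the program always terminates with the total preserved.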

7.2 Lockout

A lockout occurs when a waiting thread can never be woken up because its wakeup condition can never be established, so its task can never continue to execute.

The difference between lockout and deadlock is that lockout can occur even when none of the deadlock conditions hold.

Lockout can be divided into signal-loss lockout and nested-monitor lockout.

7.2.1 Signal-Loss Lockout

Signal-loss lockout is an activity problem in which the waiting thread remains in the waiting state because no notifying thread ever wakes it up.

A typical example of signal-loss lockout is a waiting thread that does not check the protection condition before executing Object.wait()/Condition.await(): the protection condition is already established, but no other thread will update the shared state again and notify the waiting thread, so it waits forever. This is why it is emphasized that Object.wait()/Condition.await() should be executed in a loop.

7.2.2 Nested-Monitor Lockout

Nested-monitor lockout refers to nested lock use that leaves a thread unable to ever be woken up; in code it appears as two nested synchronized blocks.

The way to avoid nested-monitor lockout is simply to avoid nesting internal locks.

7.3 Livelock

Livelock is when a thread keeps running but its task never makes progress, for example because it endlessly retries an operation that keeps failing.

7.4 Starvation

Starvation is when a task can never proceed because its thread is unable to obtain the resources it needs.

8. How do threads collaborate?

There are two common ways to collaborate between threads: wait and interrupt.

This section focuses on wait-based collaboration; interrupt-based collaboration is covered separately.

Waiting thread collaboration is involved when an operation in one thread needs to wait for an operation in another thread to finish.

Join, wait/notify, await/signal, await/countDown and CyclicBarrier are five common wait thread cooperation modes. Let’s take a look at the usage and differences of these five thread cooperation modes.

8.1 the join

Using the Thread.join() method, we can make one thread wait for another thread to finish executing before continuing.

join() is implemented on top of wait(): inside join(), a loop checks whether the thread being joined is still alive and keeps waiting if so.

Here’s a simple use of the join() method.

public void tryJoin() {
  Thread threadA = new ThreadA();
  Thread threadB = new ThreadB(threadA);
  threadA.start();
  threadB.start();
}

public class ThreadA extends Thread {
  @Override
  public void run() {
    System.out.println("Thread A starts executing.");
    ThreadUtils.sleep(1000);
    System.out.println("Thread A completes execution");
  }
}

public class ThreadB extends Thread {
  private final Thread threadA;

  public ThreadB(Thread thread) {
    threadA = thread;
  }

  @Override
  public void run() {
    try {
      System.out.println("Thread B starts waiting for thread A to finish.");
      threadA.join();
      System.out.println("Thread B finishes waiting and does whatever it wants to do.");
    } catch (InterruptedException e) {
      e.printStackTrace();
    }
  }
}

When we finish executing the code above, we get the following output.

Thread A starts executing.
Thread B starts waiting for thread A to finish.
Thread A completes execution
Thread B finishes waiting and does whatever it wants to do.

8.2 wait/notify

8.2.1 wait/notify profile

In Java, Object.wait()/Object.wait(long) and Object.notify()/Object.notifyAll() can be used to implement waiting and notification.

A process in which a thread is suspended because the protection conditions required to perform an action (the target action) are not met is called a wait.

When a thread updates a shared variable to make the protection condition required by another thread hold, the process of waking up the suspended thread is called notify.

The thread that executes wait() is called the waiting thread, and the thread that executes notify() is called the notifying thread.

Wait/Notify collaboration has the following characteristics.

  • Pause/wake up

    Object.wait() suspends the thread (state changed to WAITING), while Object.notify() awakens a suspended thread.

  • All the objects

    Because Object is the parent class of all objects, all objects can implement wait and notification.

  • Get monitor lock

    To use the wait()/notify() methods, a thread must first acquire the monitor lock of the shared object; that is, wait()/notify() must be executed inside a synchronized block or a synchronized method on that object.

    Without first obtaining the monitor lock, the thread gets an IllegalMonitorStateException.

  • Catch interrupt exception

    To use wait(), you must catch InterruptedException because the wait state entered through wait() can be interrupted.

  • Wake up any thread

    The notify() method awakens only an arbitrary waiting thread on the corresponding object, not necessarily the one we want to wake up.

  • Wake up specific threads

    If we want to wake a particular thread waiting on an object, we can use notifyAll() to wake all of the object's waiting threads, so that the target thread is guaranteed to be among them.

  • The final modification

    The lock object uses the final modifier because without final the reference could be reassigned, causing the waiting thread and the notifying thread to synchronize on different internal locks; that creates a race and defeats the purpose of the lock.

  • Cycle judgment

    The determination of the guard condition and the call to the wait() method are placed in the loop to ensure that the target action is executed only if the guard condition is true.

  • Only the corresponding internal lock is released

    When a wait() method is used to suspend the current thread, the lock released is the internal lock of the object on which the wait() method belongs. Other internal and explicit locks held by the current thread are not released

8.2.2 Basic Usage of Wait /notify

Here is sample code for wait/notify.

final Object lock = new Object();
private volatile boolean conditionSatisfied;

public void startWait() throws InterruptedException {
  synchronized (lock) {
    System.out.println("Waiting thread acquires lock");
    while (!conditionSatisfied) {
      System.out.println("Protection condition not valid, waiting thread enters wait state");
      lock.wait();
    }
    System.out.println("Waiting thread wakes up and starts executing the target action.");
  }
}

public void startNotify() {
  synchronized (lock) {
    System.out.println("Notification thread acquired lock");
    System.out.println("Notification thread about to wake up waiting thread");
    conditionSatisfied = true;
    lock.notify();
  }
}

When we execute the above two functions separately in two threads, we get the following output.

Waiting thread acquires lock
Protection condition not valid, waiting thread enters wait state
Notification thread acquired lock
Notification thread about to wake up waiting thread
Waiting thread wakes up and starts executing the target action.

8.2.3 wait/notify principle

The JVM maintains an Entry Set and a Wait Set for each object.

The entry set is used to store the threads applying for the internal lock of the object, and the wait set is used to store the waiting threads on the object.

The wait() method suspends the current thread, and when the internal lock is released, the current thread is placed in the object wait set to which the method belongs.

Calling notify() wakes an arbitrary thread in the object's wait set. The awakened thread stays in the wait set until it reacquires the corresponding internal lock, at which point it is removed from the wait set.

Adding the current thread to the wait set, suspending the current thread, releasing the lock, and removing the awakened thread from the object’s wait set are all implemented in the wait() method.

Native code for the wait() method determines whether the thread holds an internal lock on the current object, and if not, it reports an illegal monitor state exception, which is why the wait() method is executed in a synchronized block of code.

8.2.4 Problems of WAIT/Notify

  • Premature wakeup

    The phenomenon of a waiting thread being awakened while its protection condition does not hold is called premature wakeup.

    Premature wakeup wakes waiting threads that did not need to be woken, resulting in wasted resources.

  • Signal loss

    There are two causes of signal loss: checking the protection condition outside the loop, and improper use of the notify() method.

    • Protection condition not checked before waiting

      If the waiting thread does not check whether the protection condition holds before executing wait(), the notifying thread may have already updated the shared variable and sent its notification before the waiting thread entered the critical section; since the waiting thread was not yet suspended, the notification has no effect, and the thread then suspends with no one left to wake it.

      This phenomenon is equivalent to the waiting thread missing a “signal” sent to it, so it is called signal loss.

      As long as the judgment of the guard condition and the call of the wait() method are placed in the loop statement, signal loss resulting from this situation can be avoided.

    • Improper use of notify()

      Another manifestation of signal loss is degrading to notify() in situations where notifyAll() should be called. In this case, the way to avoid signal loss is to use notifyAll()

  • Spurious wakeup

    A waiting thread may be awakened without any other thread calling notify()/notifyAll(). This phenomenon is called spurious wakeup.

    Although the probability of a spurious wakeup is low, Java allows it, as a result of compromises between the Java platform and the operating system.

    • Avoiding spurious wakeups

      The way to avoid acting on a spurious wakeup is to recheck the condition in a loop and, if it does not hold, wait again by calling wait() once more.

  • Context switch

    Execution of wait() by a waiting thread results in at least two requests and releases of the internal lock by that thread.

    When executing notify()/notifyAll(), the notifying thread must hold the internal lock of the corresponding object, which means one more lock application; each application and release of a lock may cause a context switch.

    Second, the process of waiting for a thread to wake up from being suspended itself causes a context switch.

    Thirdly, when the waked waiting thread continues to run, it needs to apply for the internal lock again. At this time, the waiting thread may need to compete with other threads in the entry set of the corresponding object and other newly active threads for the internal lock, which may lead to a context switch.

    Finally, premature wakeup also causes additional context switches, because a prematurely awakened thread must continue to wait, going through suspension and wakeup again.

There are two common ways to reduce wait/notify context switches.

  • Use notify() instead of notifyAll()

    Using notify() instead of notifyAll() can reduce the context-switching overhead without causing premature wake up as long as the program is correct

  • Release the internal lock as soon as possible

    The notifying thread releases the internal lock as soon as possible after the notify()/notifyAll() is executed. This prevents the thread from being suspended when it applies for the internal lock again before wait() is called

8.2.5 Selection of notify()/notifyAll()

Notify () may cause signal loss, while notifyAll() awakens waiting threads that do not need to wake up, but notifyAll() is guaranteed to be correct.

Therefore, notifyAll() is generally preferred to ensure correctness.

In general, use notify() to implement notifications only if both of the following conditions are true.

  1. Only one thread is needed to wake up

    We can consider using notify() for notifications when we need to wake up at most one thread at a time, but this is not enough.

    When different waiting threads use different protection conditions, an arbitrary thread that notify() wakes up may not be the one we need to wake up, so condition 2 is needed to rule it out.

  2. The wait set of the object contains only homogeneous wait threads

    Homogeneous waiting threads are threads that use the same protection condition and execute the same logic after returning from wait().

    The most typical homogeneous threads are threads created from the same Runnable, or multiple instances of the same Thread subclass.

8.3 await/signal

8.3.1 await/signal profile

wait()/notify() is fairly low-level and suffers from two problems: premature wakeup, and the inability to tell whether Object.wait(ms) returned because of a timeout or because the notifying thread woke it up.

There are several points to using await/signal collaboration.

  • Condition interfaces

    The Condition (Condition variable) interface was introduced in JDK 5, and using Condition can also implement wait/notification without the two problems mentioned above.

    The await()/signal()/signalAll() provided by the Condition interface is equivalent to the wait()/notify()/notifyAll() provided by Object.

    A Condition instance can be obtained by calling lock.newCondition().

  • Hold locks

    Just as wait/notify requires the thread to hold the internal lock of the owning object, await/signal requires the thread to hold the explicit lock associated with the Condition instance.

  • Waiting queue

    Condition instances are also called Condition variables or conditional queues. Each Condition instance internally maintains a wait queue for waiting threads, which is equivalent to the wait set in Object.

  • Looping statements

    The judgment of the protection condition and the call to the await() method are placed in the loop statement

  • Guarded critical section

    The loop statement and the target action are placed in the same critical section guarded by the explicit lock, to avoid spurious-wakeup and signal-loss problems

8.3.2 Basic usage of await/signal

private Lock lock = new ReentrantLock();
private Condition condition = lock.newCondition();
private volatile boolean conditionSatisfied = false;

private void startWait() {
  lock.lock();
  System.out.println("Waiting thread acquires lock");
  try {
    while (!conditionSatisfied) {
      System.out.println("Protection condition not valid, waiting thread enters wait state");
      condition.await();
    }
    System.out.println("Waiting thread wakes up and starts executing the target action.");
  } catch (InterruptedException e) {
    e.printStackTrace();
  } finally {
    lock.unlock();
    System.out.println("Waiting thread releases lock");
  }
}

public void startNotify() {
  lock.lock();
  System.out.println("Notification thread acquired lock");
  try {
    conditionSatisfied = true;
    System.out.println("Notification thread about to wake up waiting thread");
    condition.signal();
  } finally {
    System.out.println("Notification thread releases lock");
    lock.unlock();
  }
}

When we execute the above two functions in two separate threads, we get the following output.

Waiting thread acquires lock
Protection condition not valid, waiting thread enters wait state
Notification thread acquired lock
Notification thread about to wake up waiting thread
Notification thread releases lock
Waiting thread wakes up and starts executing the target action.
Waiting thread releases lock

8.3.3 Basic Usage of awaitUntil()

The Condition interface solves the problem that Object.wait(ms) cannot tell whether the wait ended because of a timeout or a wakeup; the solution is the awaitUntil(Date deadline) method.

awaitUntil() returns false if the wait ended because the deadline passed, or true if the thread was woken up. Let's see how this is used.

private void startTimedWait() throws InterruptedException {
  lock.lock();
  System.out.println("Waiting thread acquires lock");
  // Time out after 3 seconds
  Date date = new Date(System.currentTimeMillis() + 3 * 1000);
  boolean isWakenUp = true;
  try {
    while (!conditionSatisfied) {
      if (!isWakenUp) {
        System.out.println("Timed out. Waiting task terminated.");
        return;
      } else {
        System.out.println("The protection condition is not met and the waiting time is not up. Wait to enter the waiting state.");
        isWakenUp = condition.awaitUntil(date);
      }
    }
    System.out.println("Waiting thread wakes up and starts executing the target action.");
  } finally {
    lock.unlock();
  }
}

public void startDelayedNotify() {
  threadSleep(4 * 1000);
  startNotify();
}
Waiting thread acquires lock
Protection condition not met and deadline not reached, entering wait state
Timed out, waiting task terminated
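A minimal, self-contained sketch of the timeout path (the names are mine; no thread ever signals the condition, so the wait can only end when the deadline elapses):

```java
import java.util.Date;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class AwaitUntilDemo {
    // Waits on a condition that nobody signals, so awaitUntil() must
    // eventually return false (deadline elapsed).
    public static boolean timedWait(long millis) throws InterruptedException {
        Lock lock = new ReentrantLock();
        Condition condition = lock.newCondition();
        Date deadline = new Date(System.currentTimeMillis() + millis);
        lock.lock();
        try {
            boolean beforeDeadline = true;
            while (beforeDeadline) {
                // true  = returned (possibly spuriously) before the deadline; re-wait
                // false = the deadline elapsed
                beforeDeadline = condition.awaitUntil(deadline);
            }
            return beforeDeadline;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("wokenBeforeDeadline = " + timedWait(100));
    }
}
```

The re-wait loop matters: awaitUntil() may also return true on a spurious wakeup, so the return value alone only becomes trustworthy once the protection condition is re-checked.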

8.4 await/countDown

8.4.1 A brief introduction to await/countDown

join() allows one thread to wait for another thread to finish, but when we only want to wait for a specific operation to finish rather than for the whole thread, we can use CountDownLatch.

The await/countDown collaboration has the following characteristics.

  • Prerequisite operations

    CountDownLatch enables one or more threads to wait for other threads to complete a specific set of operations before continuing. This set of operations is called the prerequisite operations.

  • Prerequisite operation count

    CountDownLatch internally maintains a count value representing the number of unfinished prerequisite operations; each call to CountDownLatch.countDown() decreases it by 1.

    The unfinished prerequisite operand count is set in the CountDownLatch constructor.

    Note that this value cannot be less than 0; otherwise an IllegalArgumentException is thrown.

  • One-time use

    Once the count reaches 0, subsequent calls to await() no longer cause the calling thread to wait, so a CountDownLatch instance can only be used for one round of collaboration.

  • No lock required

    CountDownLatch internally encapsulates the wait and notification logic around the count value, so implementing wait/notify with CountDownLatch requires no explicit lock.

  • await()

    CountDownLatch.await() puts the calling thread into a waiting state; when the count in the CountDownLatch reaches 0, the prerequisite operations being waited for have all completed.

  • countDown()

    Each call to CountDownLatch.countDown() decreases the count by one, and when the count reaches zero the waiting threads are woken up.

8.4.2 await/countDown Basic usage

public void tryAwaitCountDown() {
  // startWaitThread()/startCountDownThread() start threads that run
  // startWait()/startCountDown() below
  startWaitThread();
  startCountDownThread();
  startCountDownThread();
}

final int prerequisiteOperationCount = 2;
final CountDownLatch latch = new CountDownLatch(prerequisiteOperationCount);

private void startWait() throws InterruptedException {
  System.out.println("Waiting thread enters wait state");
  latch.await();
  System.out.println("Waiting thread ends wait");
}

private void startCountDown() {
  try {
    System.out.println("Performing prerequisite operation");
  } finally {
    System.out.println("Count minus 1");
    latch.countDown();
  }
}

When we execute startWait() in one thread and startCountDown() in two other threads, we get the following output.

Waiting thread enters wait state
Performing prerequisite operation
Count minus 1
Performing prerequisite operation
Count minus 1
Waiting thread ends wait
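The fragments above can be condensed into one self-contained, runnable sketch (the class and method names are my own):

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static boolean runDemo() throws InterruptedException {
        final int prerequisiteOperationCount = 2;
        CountDownLatch latch = new CountDownLatch(prerequisiteOperationCount);
        Runnable prerequisite = () -> {
            // perform the prerequisite operation here, then decrement the count
            latch.countDown();
        };
        new Thread(prerequisite).start();
        new Thread(prerequisite).start();
        latch.await();                 // blocks until the count reaches 0
        return latch.getCount() == 0;  // true: both prerequisites completed
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("allPrerequisitesDone = " + runDemo());
    }
}
```

Note that no synchronized block or explicit lock appears anywhere: the latch encapsulates all waiting and notification itself.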

8.5 CyclicBarrier

8.5.1 CyclicBarrier profile

CyclicBarrier can be used when multiple threads need to wait for each other to reach a certain point in the code (the rendezvous point) before any of them continues.

CyclicBarrier is a class introduced in JDK 5. CyclicBarrier collaboration has several features.

A thread that waits with CyclicBarrier.await() is called a party. Every thread that calls CyclicBarrier.await() is suspended except the last one to arrive, whose call releases them all.

Unlike CountDownLatch, CyclicBarrier is reusable: once one round of waiting completes, another round can begin.

8.5.2 Basic usage of CyclicBarrier

One day, Lao Li came up with an idea and organized a few friends to go mountain climbing. Let’s take a look at what they did before climbing.

final int parties = 3;
final Runnable barrierAction = new Runnable() {
  @Override
  public void run() {
    System.out.println("The men are gathered, and the climb begins.");
  }
};

final CyclicBarrier barrier = new CyclicBarrier(parties, barrierAction);

public void tryCyclicBarrier() {
  firstDayClimb();
  secondDayClimb();
}

private void firstDayClimb() {
  new PartyThread("First day of climbing, Lao Li comes first.").start();
  new PartyThread("Lao Wang has arrived, but Xiao Zhang has not.").start();
  new PartyThread("Here comes Xiao Zhang.").start();
}

private void secondDayClimb() {
  new PartyThread("Climb the mountain the next day, Lao Wang first.").start();
  new PartyThread("Xiao Zhang is here, Lao Li is not here yet.").start();
  new PartyThread("Here comes Lao Li.").start();
}
public class PartyThread extends Thread {
  private final String content;

  public PartyThread(String content) {
    this.content = content;
  }

  @Override
  public void run() {
    System.out.println(content);
    try {
      barrier.await();
    } catch (BrokenBarrierException e) {
      e.printStackTrace();
    } catch (InterruptedException e) {
      e.printStackTrace();
    }
  }
}

After running the above code, you get the following output.

First day of climbing, Lao Li comes first.
Lao Wang has arrived, but Xiao Zhang has not.
Here comes Xiao Zhang.
The men are gathered, and the climb begins.
Climb the mountain the next day, Lao Wang first.
Xiao Zhang is here, Lao Li is not here yet.
Here comes Lao Li.
The men are gathered, and the climb begins.

8.5.3 CyclicBarrier principle

CyclicBarrier internally contains a Condition field named trip, which implements the waiting and notification.

A CyclicBarrier also holds a Generation object internally; a new generation is created for each round of use, which is what makes a CyclicBarrier instance reusable.

The count for the current generation starts at parties (the total number of participants), and each call to CyclicBarrier.await() decreases it by one.

The participants that call CyclicBarrier.await() are the waiting threads, and the last participant to arrive acts as the notification thread.

When the last participant calls CyclicBarrier.await(), barrierAction.run() is executed, trip.signalAll() wakes up all waiting threads, and the next generation begins, with the count restored to its initial value.

Generation has a boolean field broken, which becomes true when a thread waiting in CyclicBarrier.await() is interrupted.

A BrokenBarrierException is then thrown in the other waiting threads, indicating that the current generation is broken and cannot complete its intended task.

This means no thread participating in a CyclicBarrier may be interrupted (have its interrupt() method called) while waiting, or the current generation is broken.
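A small sketch (my own naming) of how interrupting one party breaks the whole generation:

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class BrokenBarrierDemo {
    public static boolean demonstrateBrokenBarrier() throws Exception {
        CyclicBarrier barrier = new CyclicBarrier(2);
        Thread waiter = new Thread(() -> {
            try {
                barrier.await();   // blocks: only 1 of 2 parties has arrived
            } catch (InterruptedException | BrokenBarrierException e) {
                // interrupted while waiting -> the generation becomes broken
            }
        });
        waiter.start();
        // busy-wait until the thread is actually parked on the barrier
        while (barrier.getNumberWaiting() == 0) {
            Thread.sleep(10);
        }
        waiter.interrupt();
        waiter.join();
        return barrier.isBroken(); // true: this generation can no longer complete
    }

    public static void main(String[] args) throws Exception {
        System.out.println("broken = " + demonstrateBrokenBarrier());
    }
}
```

After this, any other party calling await() on the same barrier would receive a BrokenBarrierException until reset() is called.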

9. How do I stop a thread?

9.1 the stop () method

The stop() method in the JDK was deprecated a long time ago, and we can look at two possible scenarios for why it was deprecated.

In the first case, suppose we have thread A and thread B, and thread A holds a lock that thread B needs. If thread A is killed by stop(), the lock may never be released, so thread B can never acquire it and never executes its task.

In the second case, if thread A is stopped by stop() while it is halfway through modifying shared data, thread B may then read inconsistent data, which can cause thread B to misbehave or throw an exception.

Because of these two resource-cleanup problems, many languages have deprecated the thread stop() method.

Although threads should not be forcibly terminated, they can be stopped cooperatively. Here’s how.

9.2 interrupt () method

When we call the sleep() method, the compiler asks us to catch InterruptedException because the thread’s sleep state may be interrupted.

During sleep, if the thread’s interrupt() method is called elsewhere, the sleep state is interrupted and an interrupt exception is received.

We can release resources when we catch the interrupt exception, such as closing a stream or a file.

But calling a thread’s interrupt() method does not always interrupt it. If a thread’s run() method is a while loop doing work without ever sleeping or waiting, calling interrupt() from elsewhere will not, by itself, interrupt the task.

In this case, we can use the interrupted() or isInterrupted() methods inside the loop to determine whether the task has been asked to stop.
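A minimal sketch of that pattern (class name is my own): the flag check in the loop condition is what makes a busy loop interruptible.

```java
public class InterruptibleLoop extends Thread {
    private long units = 0;

    @Override
    public void run() {
        // checking the interrupt flag each iteration makes the loop stoppable
        while (!Thread.currentThread().isInterrupted()) {
            units++;   // one unit of work per iteration
        }
    }

    public static boolean runDemo() throws InterruptedException {
        InterruptibleLoop t = new InterruptibleLoop();
        t.start();
        t.interrupt();     // sets the flag; the loop observes it and exits
        t.join(1000);
        return !t.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("stopped = " + runDemo());
    }
}
```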

The interrupted() and isInterrupted() methods can both retrieve the interrupted status of a thread, but they have some differences.

  • static

    interrupted() is a static method; isInterrupted() is an instance method

  • reset

    interrupted() clears the interrupt status: after it is called, the status is reset to false, so an immediately following call returns false

    isInterrupted() does not clear the interrupt status: after the thread's interrupt() method has been called, isInterrupted() keeps returning true until the status is cleared

Both interrupted() and isInterrupted() are ultimately backed by a boolean flag maintained in the native layer.
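The reset behavior can be observed directly (a sketch with my own naming):

```java
public class InterruptStatusDemo {
    public static boolean[] observe() {
        Thread.currentThread().interrupt();       // set the interrupt flag
        boolean first = Thread.interrupted();     // true, and clears the flag
        boolean second = Thread.interrupted();    // false: the flag was reset
        Thread.currentThread().interrupt();       // set it again
        boolean third = Thread.currentThread().isInterrupted();   // true
        boolean fourth = Thread.currentThread().isInterrupted();  // still true
        Thread.interrupted();                     // clean up: clear the flag
        return new boolean[]{first, second, third, fourth};
    }

    public static void main(String[] args) {
        boolean[] r = observe();
        System.out.println(r[0] + " " + r[1] + " " + r[2] + " " + r[3]);
        // prints: true false true true
    }
}
```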

9.3 Boolean flag bit

Since interrupt() essentially just sets a boolean flag, we can instead define our own boolean flag in the Java layer and share it across threads.

Externally we change this flag bit to true when we want to cancel a task.

  • Matters needing attention

    Reading and writing a plain boolean flag from multiple threads has visibility problems, so declare it with the volatile keyword.

  • Usage scenarios

    When the task uses the sleep() method, we can cancel it with interrupt(); in other cases we can use the boolean flag.
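A minimal sketch of the volatile-flag approach (the names are my own):

```java
public class StoppableWorker extends Thread {
    // volatile gives the flag cross-thread visibility without any lock
    private volatile boolean cancelled = false;

    public void cancel() {
        cancelled = true;
    }

    @Override
    public void run() {
        while (!cancelled) {
            // one unit of work per iteration, re-checking the flag in between
        }
    }

    public static boolean runDemo() throws InterruptedException {
        StoppableWorker worker = new StoppableWorker();
        worker.start();
        worker.cancel();        // flip the flag from another thread
        worker.join(1000);
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("stopped = " + runDemo());
    }
}
```

Without volatile, the worker thread could cache cancelled == false indefinitely and never observe the cancellation.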

10. What is ConcurrentHashMap?

10.1 Introduction to ConcurrentHashMap

ConcurrentHashMap is a concurrent container, which is a concept relative to synchronous containers.

Commonly used data containers such as HashMap and ArrayList are not thread-safe. When we need thread safety, we can either lock around a HashMap ourselves or use a thread-safe data container, and thread-safe containers come in two kinds: synchronized containers and concurrent containers.

Synchronized containers are thread-safe containers such as Hashtable, whose approach to thread safety causes performance problems.

Hashtable, as a synchronized container, has the following problems.

  • Coarse lock

    Every operation locks the entire Hashtable object

  • Long lock

    Whole methods are declared synchronized, so the lock is held for the entire call

  • Shared read/write lock

    There is only one lock, shared by reads and writes alike
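Since Hashtable's methods are synchronized instance methods, they all contend for the table's own monitor. A sketch (my own naming) that makes the coarse lock visible: while one thread holds the table's monitor, even a pure read is blocked.

```java
import java.util.Hashtable;
import java.util.Map;

public class CoarseLockDemo {
    public static boolean readerBlocked() throws InterruptedException {
        Map<String, String> table = new Hashtable<>();
        table.put("k", "v");
        Thread reader = new Thread(() -> table.get("k"));
        synchronized (table) {   // Hashtable's synchronized methods lock this same monitor
            reader.start();
            reader.join(200);    // the reader cannot finish while we hold the lock
            return reader.isAlive();  // true: the read is stuck behind the table-wide lock
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("readerBlocked = " + readerBlocked());
    }
}
```

The reader completes as soon as the synchronized block is exited; the point is that a single shared monitor serializes readers and writers alike.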

Concurrent containers such as ConcurrentHashMap and CopyOnWriteArrayList do not have these problems. Let’s see how they are implemented.

10.2 The evolution of ConcurrentHashMap

ConcurrentHashMap has been optimized for each version of the JDK from JDK 5 to 8.

  1. JDK 5

    In JDK 5, ConcurrentHashMap is implemented using segmented locks, which are locked when necessary.

    The ConcurrentHashMap version introduced in JDK 5 uses segments to store key-value pairs. Segments are locked when necessary, and access between segments is not affected.

    The hash algorithm in JDK 5’s ConcurrentHashMap does not spread certain keys evenly across segments — for example, small integers such as those below 30,000 — which can cause it to degenerate into what is effectively a single-lock Hashtable.

  2. JDK 6

    In JDK 6, ConcurrentHashMap improved its secondary hash by adopting the single-word Wang/Jenkins hash algorithm, which distributes elements evenly across segments.

  3. JDK 7

    In JDK 7, ConcurrentHashMap changed how segments are initialized: rather than instantiating all 16 segments right after construction as earlier versions did, it creates segments lazily as they are needed.

    Lazily instantiating segments introduces visibility issues, so JDK 7’s ConcurrentHashMap uses volatile and Unsafe.getObjectVolatile() to guarantee visibility.

  4. JDK 8

    In JDK 8, ConcurrentHashMap deprecated the concept of segments and implemented concurrency based on the HashMap principle.

    It uses volatile where locking is unnecessary, and for operations that must be locked it locks only a narrow scope (the head node of a bucket).

10.3 ConcurrentHashMap characteristics

  • Small lock

    Segmented locking (JDK 5~7)

    Bucket head-node locking (JDK 8)

  • Short lock

    Try first without blocking; lock only when the attempt fails

  • Read/write separation

    Separate read and write handling (JDK 5~7)

    Volatile reads and CAS writes (JDK 7~8)

  • Weak consistency

    • An element that was just added may not be read back immediately
    • Elements may still be read right after a clear
    • Changes in segments not yet reached by a traversal can be read
    • Changes in segments already passed by a traversal cannot be read
    • Element changes during traversal do not throw an exception
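As a sketch of the practical upshot (the names are mine): per-key updates such as merge() are atomic in ConcurrentHashMap, so many threads can update the map concurrently without any external lock and still lose no updates.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ChmMergeDemo {
    public static long countWords() throws InterruptedException {
        ConcurrentHashMap<String, Long> counts = new ConcurrentHashMap<>();
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 1000; i++) {
            // merge() performs the read-modify-write atomically for this key
            pool.execute(() -> counts.merge("hits", 1L, Long::sum));
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return counts.get("hits");
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countWords());
    }
}
```

Doing the same with a plain HashMap and `counts.put("hits", counts.get("hits") + 1)` would lose increments under contention.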

11. What are the guidelines for using threads?

When using threads to perform asynchronous tasks, there are a few guidelines to follow to avoid some of the problems associated with using threads.

The five common guidelines for thread use are: avoid creating threads directly, provide a base thread pool, choose the appropriate asynchronous mechanism, always name threads, and pay attention to priority settings.

  1. Do not create a thread directly

    Creating threads directly has no advantage other than simplicity and convenience, so it is very important that you do not create threads directly to perform asynchronous tasks during actual project development.

  2. Provide a base thread pool for use by various lines of business

    The purpose of this guideline is to avoid excessive threads by maintaining a separate thread pool for each line of business.

    If we have 10 lines of business and each maintains its own thread pool with, say, 8 core threads, we end up with 80 threads, which is clearly unreasonable.

  3. Choose the appropriate asynchronous mode

    HandlerThread, IntentService, and RxJava can all perform asynchronous tasks, but which to choose depends on the task type.

    If we have a low-priority task that may take a long time to execute, we can choose to use HandlerThread.

    There are also cases where we need to perform a scheduled task, where thread pools are more appropriate.

  4. Threads must be named

    On a large team, whether we use threads or thread pools, if a thread is created without a name and that thread throws an exception, its default name gives us no clue which developer’s code created it.

    If we give each thread a name, we can quickly locate the thread’s creator and delegate problems to him.

    We can change the name of the Thread at runtime with Thread.currentThread().setName(name).

    If a thread is borrowed by our line of business for a period of time, we can change its name to our business tag and change it back when the task completes.

  5. Pay attention to priority Settings

    Java uses a preemptive scheduling model in which higher-priority threads tend to get the CPU first, so if we want a task to finish earlier we can give its thread a higher priority.

    The priority is set via android.os.Process.setThreadPriority(priority); the smaller the value, the higher the priority, and the value ranges from -20 to 19.

12. How do I execute asynchronous tasks on Android?

In this section, we’ll cover seven asynchronous approaches commonly used in Android: Thread, HandlerThread, IntentService, AsyncTask, Thread pool, RxJava, and Kotlin coroutines.

12.1 Introduction to Asynchrony

Asynchronous means the code does not run in the order in which we wrote it. Besides multithreading, code such as an OnClickListener callback also executes asynchronously.

When writing asynchronous code, it is easy to produce callback hell that even its author cannot follow two days later, such as the following.

btn.setOnClickListener(new View.OnClickListener() {
  @Override
  public void onClick(View v) {
    sendRequest(request, new Callback() {
      public void onSuccess(Response response) {
        handler.post(new Runnable() {
          @Override
          public void run() {
            updateUI(response);
          }
        });
      }
    });
  }
});

12.2 the Thread

Creating threads directly is the easiest way to do it asynchronously, but using this approach has no advantage other than convenience and simplicity.

This approach has many disadvantages, such as being difficult to reuse, resulting in the overhead of frequently creating and destroying threads.

If we wanted to execute a scheduled task, we could do it with a raw Thread, but it would be cumbersome.

12.3 HandlerThread

HandlerThread is also essentially a Thread, but it comes with a message loop.

HandlerThread internally executes tasks in a serial manner, which is suitable for scenarios where tasks are continuously removed from the queue and executed for a long time.

12.4 IntentService

IntentService is a subclass of the Service component. It has an internal HandlerThread, so it has the characteristics of HandlerThread.

It has two advantages. First, unlike a plain Service, the work in an IntentService runs on a worker thread rather than the main thread.

Second, because it is a Service, a process hosting a running Service has a higher priority, making the application less likely to be killed by the system.

12.5 AsyncTask

AsyncTask is an asynchronous utility class provided by Android. Its internal implementation uses a thread pool. The advantage of using AsyncTask is that we do not have to handle thread switching ourselves.

When using AsyncTask, note that its implementation differs across versions, but the inconsistencies are below API 14; since most applications today target API 15 or above as their minimum, this problem has largely disappeared.

12.6 the thread pool

12.6.1 Introduction to Thread Pools

Using thread pools to perform asynchronous tasks has two advantages.

  • Easy to reuse

    Threads created through thread pools are easy to reuse, thus avoiding the overhead of frequent thread creation and destruction.

  • powerful

    Thread pools provide several powerful features, such as timing, task queuing, concurrency control, and so on.

If the Executors don’t satisfy our needs, we can customize ThreadPoolExecutor to create a thread pool that meets our needs.
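A hedged sketch of such a customization (the parameter values here are illustrative, not recommendations):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CustomPool {
    // A bounded pool: 2 core threads, at most 4 threads, idle extras die
    // after 30s, a queue of 8 tasks, and when even the queue is full the
    // submitting thread runs the task itself instead of dropping it.
    public static ThreadPoolExecutor create() {
        return new ThreadPoolExecutor(
                2, 4,
                30, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(8),
                new ThreadPoolExecutor.CallerRunsPolicy());
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = create();
        pool.execute(() -> System.out.println("task on " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}
```

Bounding both the pool size and the queue, and choosing an explicit rejection policy, is exactly what the Executors factory methods do not let you control.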

12.6.2 Basic Usage of thread Pools

The following ThreadPoolUtils class provides a global thread pool that the various lines of business can use directly when they need threads.

By fixing the number of threads in the thread pool to five, you can avoid creating too many threads directly.

With a ThreadFactory, you can set the thread name at creation time, making it easy to trace a problem back to the pool that owns the thread.

private static ExecutorService sService = Executors.newFixedThreadPool(5, new ThreadFactory() {
  @Override
  public Thread newThread(Runnable r) {
    Thread thread = new Thread(r);
    thread.setName("ThreadPoolUtils");
    return thread;
  }
});

The following code changes the name of the thread before executing the task and changes it back after executing the task to achieve a reuse effect.

public void executeTask() {
  ThreadPoolUtils.getService().execute(new Runnable() {
    @Override
    public void run() {
      String oldName = Thread.currentThread().getName();
      Thread.currentThread().setName("newName");
      System.out.println("Performing task");
      System.out.println("Task finished");
      Thread.currentThread().setName(oldName);
    }
  });
}

12.7 RxJava

12.7.1 RxJava profile

RxJava is an asynchronous framework, and here we will focus on its basic usage, exception and cancellation handling.

RxJava provides different thread pools depending on the type of task, and for I/O intensive tasks such as network requests, it provides I/O thread pools.

For CPU-intensive tasks, it provides a dedicated computation thread pool, Schedulers.computation().

If our project integrates RxJava, we can use RxJava thread pools.

12.7.2 Basic Usage of RxJava

For the code in Section 12.1, writing in RxJava would look something like this.

btn.setOnClickListener(new View.OnClickListener() {
  @Override
  public void onClick(View v) {
    sendRequest(request)
      .subscribeOn(Schedulers.io())
      .observeOn(AndroidSchedulers.mainThread())
      .subscribe(new Consumer<Response>() {
        @Override
        public void accept(Response response) throws Exception {
          updateUI(response);
        }
      });
  }
});

With Lambda expressions, the above code looks like this.

btn.setOnClickListener(v -> sendRequest(request)
  .subscribeOn(Schedulers.io())
  .observeOn(AndroidSchedulers.mainThread())
  .subscribe(response -> updateUI(response)));

However, both snippets are potentially dangerous, because they subscribe with a Consumer rather than a full Observer and therefore do not handle the error case.

12.7.3 RxJava Exception Handling

In the code above, we can append an onErrorReturn() call after observeOn(), as below, to map any error into a fallback Response.

btn.setOnClickListener(v -> sendRequest(request)
  .subscribeOn(Schedulers.io())
  .observeOn(AndroidSchedulers.mainThread())
  .onErrorReturn(t -> mapThrowableToResponse(t))
  .subscribe(response -> updateUI(response)));

Another approach is to use a global catch exception that is reported when caught.

Note that if the exception we catch is an OnErrorNotImplementedException, we should report its cause, because the cause holds the real exception, as in the following.

RxJavaPlugins.setErrorHandler(e -> {
  report(e instanceof OnErrorNotImplementedException ? e.getCause() : e);
  Exceptions.throwIfFatal(e);
});

12.7.4 RxJava Task Cancellation

If an asynchronous task started with RxJava keeps running after the Activity is closed, the Activity is still held by the Observer, resulting in a memory leak.

When we call subscribe(), we get a Disposable object that we can use to cancel the task when the page is destroyed.

That is, we can maintain a list of Disposables in the Activity and dispose of them one by one in onDestroy().

Better yet, use Uber’s open-source framework AutoDispose, which only requires adding an as() call to the chain.

btn.setOnClickListener(v -> sendRequest(request)
  .subscribeOn(Schedulers.io())
  .observeOn(AndroidSchedulers.mainThread())
  .onErrorReturn(t -> mapThrowableToResponse(t))
  .as(AutoDispose.autoDisposable(ViewScopeProvider.from(btn)))
  .subscribe(response -> updateUI(response)));

AutoDispose works by monitoring the lifecycle of the view passed in; when the view is destroyed, which usually means the page was closed, the task is cancelled.

12.8 Kotlin coroutines

12.8.1 Introduction to Kotlin coroutines

In addition to RxJava, we can also implement asynchronous tasks on Android using Kotlin coroutines.

Asynchronous code written using Kotlin coroutines looks very similar to synchronous code. Here is an example of a network request.

First we define an onClick extension method that passes the context, launch mode, and coroutine body into the Launch method.

fun View.onClick(
  context: CoroutineContext = Dispatchers.Main,
  handler: suspend CoroutineScope.(v: View?) -> Unit
) {
  setOnClickListener { v ->
    GlobalScope.launch(context, CoroutineStart.DEFAULT) {
      handler(v)
    }
  }
}

You then have a button call this method and make a network request.

btn.onClick {
  val request = Request()
  val response = async { sendRequest(request) }.await()
  updateUI(response)
}

The above code appears to execute synchronously, but the code in async {} is executed asynchronously, and the updateUI() method is not executed until a Response is returned.

12.8.2 Cancellation of Kotlin coroutines

Using Kotlin coroutines is the same as using RxJava to execute asynchronous tasks. It is also necessary to cancel tasks to avoid memory leaks. Let’s look at how to cancel asynchronous tasks performed by Kotlin coroutines.

For the example above, we can use the idea of AutoDispose to listen to the View life cycle and cancel the asynchronous task when the View is destroyed.

When performing a task using a Kotlin coroutine, we get a Job object that we can use to cancel the corresponding task.

First, we define an AutoDisposableJob class that listens to the View lifecycle, and a Job extension function, autoDispose().

class AutoDisposableJob(
  private val view: View,
  private val wrapped: Job
) : Job by wrapped, View.OnAttachStateChangeListener {

  init {
    if (ViewCompat.isAttachedToWindow(view)) {
      view.addOnAttachStateChangeListener(this)
    } else {
      cancel()
    }
    invokeOnCompletion {
      view.removeOnAttachStateChangeListener(this)
    }
  }

  override fun onViewDetachedFromWindow(v: View?) {
    cancel()
    view.removeOnAttachStateChangeListener(this)
  }

  override fun onViewAttachedToWindow(v: View?) = Unit
}

fun Job.autoDispose(view: View) = AutoDisposableJob(view, this)

The autoDispose() extension method is then called in the onClick() method.

fun View.onClick(
  context: CoroutineContext = Dispatchers.Main,
  handler: suspend CoroutineScope.(v: View?) -> Unit
) {
  setOnClickListener { v ->
    GlobalScope.launch(context, CoroutineStart.DEFAULT) {
      handler(v)
    }.autoDispose(v)
  }
}

