Processes and Threads

Official definition: a thread is the basic unit of CPU scheduling, while a process is the minimum unit of resource allocation (memory, disk I/O, open files, and so on) for the operating system. Note that one concerns the CPU and the other the system's resources, where "resources" means everything other than the CPU, also called the context.

A process cannot share the CPU within itself, but what if parts of a program need to share the CPU? This is where the concept of threads comes in. Threads are contained within a process, and the threads of a process share the CPU and the program context — that is, the resources allocated to the process.

When a single CPU schedules processes, it must read the context, execute the program, and save the context: a process switch. If the CPU is single-core, the threads of a process switch in turn to use the core, but this costs much less than a process switch because the threads share an execution environment. This is still concurrency — only one thread executes on a core at any instant. If the CPU is multi-core, different threads of the same process can use different cores, and true parallelism occurs.

Threads were designed to reduce the cost of context switching, increase the concurrency of the system, and overcome the limitation that a process can only do one thing at a time, making in-process concurrency possible. For example, suppose a program contains many functions but is single-threaded: the functions execute serially, one after another, so CPU utilization is low — any blocking time inside the program's allotted time is simply wasted. If instead each part of the program runs in its own thread, execution of the same program can be concurrent (single core) or even parallel (multi-core). The process's running time shrinks, and process context switches are reduced.

At the process level, context switches are triggered when a user process blocks on a network I/O request for Redis or MySQL data, or when the process's time slice expires. An application can have multiple processes; if the process were the smallest unit of CPU scheduling, every read or write of a shared resource would require a process switch. What is wanted is a mode of execution that shares those resources and minimizes context-switch overhead — and that is the thread.

Situations that cause context switches: 1. switches between threads of different processes; 2. switches between different processes.

All threads in the same process can share the resources the process owns. First, all threads share the same address space (the process's address space), which means any thread can access every virtual address in that space. They can also access the open files, timers, semaphore mechanisms, and so on owned by the process. Because threads within the same process share memory and files, they do not need to call into the kernel to communicate with each other.

Processes and programs

Process

A process is a single execution of a concurrent program on a data set.

Generally, a process consists of a text segment, user data, and a process control block (PCB). The text segment holds the machine instructions; the user data is the data the process can operate on directly; and the PCB is a data structure that describes and controls the process's various states while it runs.

  1. Concurrency. Multiple process entities can run concurrently during the same period of time. Concurrency is an important characteristic of processes and of modern operating systems.

  2. Dynamics. A process is the execution of a process entity. Its dynamism shows in its life cycle: it is created by executing a program, executes instructions when it obtains the CPU, and is destroyed when it terminates. Its state also changes continually after creation.

  3. Independence. In operating systems that have not introduced the concept of threads, the process is the basic unit of independent execution and resource scheduling.

  4. Asynchrony. Processes execute intermittently; when a process runs and when it pauses is unpredictable, giving execution a random character.

A process is a one-time execution of a program, and a process always corresponds to at least one specific program. A program can correspond to multiple processes, the same program can run on different sets of data, and thus constitute several different processes. Several processes can execute the same program code concurrently, while the same process can execute several programs sequentially.

PCB

  • Process identifier: uniquely identifies the process.
  • Processor state: records the contents of the processor registers while the process runs, so execution can resume after a switch.
  • Process scheduling information: process state, priority, waiting events, and other properties mainly related to the process's state changes.
  • Process control information: addresses of programs and data, synchronization and communication mechanisms, and so on.

Process context

The process context is a static description of the process's entire execution activity. It includes the values of the registers related to the process's execution, the machine-code and data sets produced by compiling the program segments, the various stack values, and the PCB structure. It can be divided into execution levels, such as user-level context and system-level context.

  • Above (upper context): the contents of the registers and stacks for the process instructions and data that have already been executed.
  • Body: the contents of the registers and stacks for the process instructions and data currently being executed.
  • Below (lower context): the contents of the registers and stacks for the process instructions and data yet to be executed.

The process context of Unix System V consists of user-level context, register context, and system-level context.

  • User-level context: the user text segment, user data, and user stack produced by compiling the process's user program.

  • Register context: consists of the program counter (PC), the processor status word (PS), the stack pointer, and the general-purpose registers. The PC gives the virtual address of the next instruction the CPU will execute; the PS gives the hardware state of the machine associated with the process; the stack pointer holds the current address of the top stack entry; the general-purpose registers are used to pass parameters between different execution modes.

  • The system-level context is divided into a static part and a dynamic part. The dynamic part refers to the records the system saves and restores — the register values associated with each context level — when entering and exiting different context levels. The static part includes the PCB structure, the tables that map the process's virtual address space to physical space, the kernel stack, and so on. The kernel stack mainly holds the call chains of system calls made by the process.

  • The dynamic part of the system-level context is associated with the register context. The layered nature of the process context shows mainly in this dynamic part: it can be viewed as a stack of variable depth whose changes follow a last-in, first-out discipline.

Process context switch

Process context switches occur between different processes rather than within the same process. A context switch has three steps:

1. Save the information of the process being switched out to its storage area, such as the process's PCB.
2. Run the operating system's scheduler and resource-allocation code and select the new process.
3. Retrieve the previously saved context of the selected process from its storage area and load it into the registers and stack, activating the selected process's execution.

Atomicity, visibility, orderliness

CPU thread switching and locking

Note that whenever a thread's time slice runs out or the CPU is otherwise taken from it, the thread is switched off the CPU after executing only some (but not all) of its instructions — this is a thread context switch. But the lock is not released! Even though the thread cannot get the CPU for a while, the code that can only be executed while holding that lock is still off-limits to other threads, because the lock was never released. This is what preserves atomicity.
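A minimal sketch of this behavior (class and field names are illustrative): even if the thread holding the lock is descheduled in the middle of the block, no other thread can enter the synchronized block until the holder finishes and releases the lock.

public class AtomicTransfer {
    private int balance = 100;

    // Even if the executing thread loses the CPU between the two statements
    // below, the lock on `this` is still held, so no other thread can enter
    // this block and observe or corrupt the intermediate state.
    public synchronized void transfer(int amount) {
        int newBalance = balance - amount; // the thread may be switched out here...
        balance = newBalance;              // ...but the lock was never released
    }
}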

atomic

  1. Atomic operation: the smallest indivisible operation or instruction — for example, an assignment to a basic type. Often, however, we need a group of several instructions to behave atomically.
  2. Atomicity of multiple instructions: a group of operations either does not execute at all or executes completely without interruption (where "interruption" means another thread running the same statements; at any given time only one thread may be executing them). This is where the concept of synchronization comes in.
  3. Interruption: another thread entering the same block of code at the same time.

order

Ordering means the program executes in the order the code was written. Ordering can be guaranteed by:

  1. volatile
  2. Lock or the synchronized keyword

Instruction reordering

The as-if-serial semantics guarantee that the result of a single-threaded program is the same whether or not instruction reordering occurs. To ensure this, instructions with data dependencies between them are generally not reordered.

Happens-before rules

  1. Program order rule: within a thread, operations written earlier in the code happen-before operations written later.
  2. Lock rule: an unlock operation happens-before every subsequent lock operation on the same lock.
  3. The volatile variable rule: a write to a volatile variable happens-before every subsequent read of that variable (see the sketch after this list).
  4. Transitivity: if A happens-before B and B happens-before C, then A happens-before C.
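A minimal sketch of the volatile rule (names are illustrative): the volatile write to ready happens-before the volatile read that observes it, so the reader is guaranteed to also see data = 42.

public class HappensBefore {
    private int data = 0;
    private volatile boolean ready = false;

    public void writer() {                // thread A
        data = 42;                        // 1. ordinary write
        ready = true;                     // 2. volatile write
    }

    public void reader() {                // thread B
        if (ready) {                      // 3. volatile read
            System.out.println(data);     // guaranteed to print 42:
        }                                 // 1 happens-before 2, and 2 happens-before 3
    }
}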

Program order rule: the result of a piece of code executed in a single thread is as if it ran in order. Note the emphasis on the result: the virtual machine and processor may reorder instructions (more on reordering later), but the reordering must not affect the outcome, so the final result matches what sequential execution would produce. This rule therefore holds only within a single thread and offers no guarantee of correctness in a multi-threaded environment. In other words, as-if-serial is only safe for single threads.

Lock rule: this rule is easy to understand. In both single-threaded and multi-threaded environments, a lock can only be acquired again after it has been unlocked: the unlock happens-before the next lock.

The volatile variable rule: this is an important rule, indicating that volatile guarantees visibility across threads. If one thread writes to a volatile variable and another thread then reads it, the write must happen-before the read.

CPU instruction reordering

Because the CPU is pipelined, a single CPU instruction is actually executed in several steps, such as instruction fetch, decode, execute, memory access, and register write-back.

visibility

When multiple threads access the same resource concurrently, a change made by one thread is immediately visible to the other threads. Two mechanisms are involved:

  1. A thread's modification and write-back of a volatile variable is an atomic operation.
  2. The write invalidates the cached copies of that value in other threads (forcing them to re-read from main memory). But any value already computed from the old variable does not change — it is history. For example, if c = a + b has already been computed, then even though a has since been updated, c will not be recomputed.

Visibility alone does not guarantee thread safety. Take i = 0 with a volatile i: because i++ is not an atomic operation, one thread's update can still overwrite another's. i++ actually breaks down into:

  1. Read the value of i (if another thread has updated it, the cached copy is refreshed, so the read always sees the latest value — this is why volatile is often used for flag bits that must be observed in real time).
  2. Compute i + 1 (no refresh happens at this step).
  3. Assign i + 1 back to i (the assignment is immediately flushed to main memory and other caches are notified that their copies are invalid).

With visibility but no atomicity, interleaving can still occur: before step 3 executes, another thread may already have changed i. This thread then writes back a value computed from stale data, overwriting the other thread's update, so the result of the modification is wrong.
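A minimal sketch of the standard fix, using java.util.concurrent.atomic: AtomicInteger turns the read-compute-write of i++ into a single atomic CAS, so no update can be lost.

import java.util.concurrent.atomic.AtomicInteger;

public class Counter {
    // a volatile int would give visibility only; lost updates would remain possible
    private final AtomicInteger i = new AtomicInteger(0);

    public void increment() {
        i.incrementAndGet(); // read, add 1, and write back as one atomic CAS
    }

    public int get() {
        return i.get();
    }
}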

The main uses of volatile are actually:

  1. Marking a flag bit so that a thread can observe state changes in real time.
  2. Preventing instruction reordering for certain operations, for example instance = new Object(). i++ is a special case: volatile does prevent reordering and keeps the read fresh, but i + 1 is computed first and assigned later, and the computed value is not refreshed in between — so thread-safety problems remain.

Multithreaded implementation and differences

Thread class inheritance

You can get a Thread instance directly by defining your own class that extends Thread and overrides the run() method.
    public class ThreadDemo01 extends Thread {
        public ThreadDemo01() {
            // a constructor for the subclass; the default constructor would also do
        }
        @Override
        public void run() {
            System.out.println(Thread.currentThread().getName());
        }
        public static void main(String[] args) {
            ThreadDemo01 threadDemo01 = new ThreadDemo01();
            threadDemo01.setName("I'm a custom thread 1");
            threadDemo01.start();
            System.out.println(Thread.currentThread().toString());
        }
    }
Note that you must start the thread with the start() method; calling run() directly does not achieve the effect of a new thread. start() calls start0(), which is a native method, and start() itself is synchronized so that thread creation is safe. Calling run() directly is just an ordinary method call. Drawbacks of the inheritance approach:
    1. Since Java is single-inheritance, there is a limitation: if you extend Thread, you cannot extend any other class.
    2. Many people online say a Thread subclass cannot share resources, but Thread itself implements the Runnable interface, so resource sharing is still possible.

Implement the Runnable interface

The Thread(Runnable target) constructor creates a Thread instance from an object that implements the Runnable interface. Passing the same Runnable instance to several threads lets them share that instance's resources. However, shared resources bring synchronization problems, and you must handle the synchronization yourself.
public class ThreadDemo02 {

    public static void main(String[] args){ 
        System.out.println(Thread.currentThread().getName());
        Thread t1 = new Thread(new MyThread());
        t1.start(); 
    }
}

class MyThread implements Runnable{
    @Override
    public void run() {
        System.out.println(Thread.currentThread().getName() + "--> I am a thread implemented through the Runnable interface!");
    }
}

With the same Runnable instance, multiple threads can share it concurrently: the Runnable instance passed to each Thread is the same object, and all of that instance's resources are shared.
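A minimal sketch of such sharing (the ticket counter is illustrative): two Thread objects wrap the same Runnable, so they operate on the same field — which is also exactly where unsynchronized races appear.

public class SharedRunnableDemo {
    public static void main(String[] args) {
        Runnable ticketSeller = new Runnable() {
            private int tickets = 10; // shared by every thread started below

            @Override
            public void run() {
                while (tickets > 0) {
                    // not synchronized: two threads may sell the same ticket
                    System.out.println(Thread.currentThread().getName()
                            + " sold ticket " + tickets--);
                }
            }
        };
        new Thread(ticketSeller, "t1").start();
        new Thread(ticketSeller, "t2").start();
    }
}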

Thread pool mode

The Executors class provides static factory methods for creating the corresponding thread pools:

public static ExecutorService newSingleThreadExecutor()
public static ExecutorService newFixedThreadPool(int nThreads)
public static ExecutorService newCachedThreadPool()
public static ScheduledExecutorService newScheduledThreadPool(int corePoolSize)
  • newSingleThreadExecutor returns an Executor containing a single thread. When multiple tasks are submitted to the Executor, the thread finishes one task and then moves on to the next. If the thread dies, a new thread takes its place.
  • newFixedThreadPool returns a thread pool with a fixed number of threads. If the number of tasks exceeds the number of threads, the surplus tasks must wait until a running task completes.
  • newCachedThreadPool creates threads on demand according to the number of tasks. The pool places no limit on the number of threads; it is bounded only by how many threads the JVM can create, which can exhaust memory.
  • newScheduledThreadPool creates a thread pool with at least corePoolSize threads. This pool supports tasks that must run on a delay or periodically.

We simply hand an instance of a class that implements Runnable to the thread pool, and the pool automatically manages the thread's start, running, and destruction; we never call start() ourselves. A thread placed in the pool waits until the pool has capacity to wake it. All we need to do is give the pool instances that implement the Runnable interface.

private ExecutorService threadPool = Executors.newFixedThreadPool(5);
threadPool.execute(socketThread);


Detail thread pools

public ThreadPoolExecutor(int corePoolSize,                    // core pool size
                          int maximumPoolSize,                 // maximum pool size
                          long keepAliveTime,                  // how long threads beyond corePoolSize may stay idle before being reclaimed
                          TimeUnit unit,                       // unit of keepAliveTime
                          BlockingQueue<Runnable> workQueue,   // queue holding tasks waiting to execute
                          ThreadFactory threadFactory,         // factory used to create new threads
                          RejectedExecutionHandler handler)    // policy applied when tasks are rejected

The default rejection policy of a thread pool is AbortPolicy: it throws a RejectedExecutionException, which is unchecked and easy to forget to catch. If you do not care about tasks being rejected, you can set the rejection policy to DiscardPolicy, and surplus tasks will be quietly dropped.
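A minimal sketch of constructing a pool with an explicit rejection policy (the sizes and queue choice are illustrative):

import java.util.concurrent.*;

public class PoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                        // corePoolSize
                4,                                        // maximumPoolSize
                60L, TimeUnit.SECONDS,                    // idle threads beyond the core die after 60s
                new ArrayBlockingQueue<Runnable>(8),      // bounded work queue
                Executors.defaultThreadFactory(),
                new ThreadPoolExecutor.DiscardPolicy());  // silently drop rejected tasks

        pool.execute(() -> System.out.println(Thread.currentThread().getName()));
        pool.shutdown();
    }
}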

The execution process of a new task by a thread pool

The execute method's logic falls into several cases:

1. If the number of currently running threads is less than corePoolSize, a new thread is created to execute the new task.
2. If the number of running threads is equal to or greater than corePoolSize, the submitted task is placed in the blocking queue workQueue.
3. If the workQueue is full, a new thread is created to execute the task — unless the number of threads already exceeds maximumPoolSize, in which case the task is handed to the RejectedExecutionHandler.

Note that the thread pool handles tasks in stages — the core pool (corePoolSize), then the blocking queue (workQueue), then the full pool (maximumPoolSize) — as its caching strategy.

The life cycle of the thread

After a thread is created and started, it neither enters the execution state as soon as it is started nor remains in the execution state. During the life of a thread, it passes through five states: New, Runnable, Running, Blocked, and Dead. In particular, when a thread is started, it cannot “hog” the CPU all the time, so the CPU needs to switch between multiple threads, and the thread state will switch between running and blocking many times.

New

When the program creates a thread using the new keyword, the thread is in the new state.

  • At this point, the JVM allocates memory for it and initializes the value of its member variables.
  • At this time, the thread object does not show any dynamic characteristics of the thread, and the program will not execute the thread execution body of the thread;

Runnable (ready)

After the thread object's start() method has been called, the thread is in the ready state:

  1. At this point, the JVM creates a method call stack and a program counter for it;
  2. A thread in this state sits in the ready queue (despite the name it is really a runnable pool rather than a strict queue, because the CPU does not necessarily schedule in first-in, first-out order); it is not yet running;
  3. The thread is now waiting for the system to allocate it a CPU time slice; it does not begin running the instant start() is called;

Note:

  1. The start() method starts the thread, and the run() method is treated as the thread's execution body. But if you call the thread object's run() method directly, run() executes immediately in the calling thread, and no other thread can execute concurrently until it returns — the system treats the thread object as an ordinary object and run() as an ordinary method, not a thread execution body (see the sketch after this list).
  2. Never call start() on a thread object again once it has left the new state (for example after its run() method has completed). start() may only be called on a thread in the new state; otherwise an IllegalThreadStateException is thrown.
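A minimal sketch of the start()-versus-run() difference (the printed thread names depend on the platform's default naming):

public class StartVsRun {
    public static void main(String[] args) {
        Thread t = new Thread(() ->
                System.out.println("running in: " + Thread.currentThread().getName()));

        t.run();   // prints "running in: main" - just an ordinary method call
        t.start(); // prints "running in: Thread-0" - a real new thread

        // t.start(); // calling start() a second time would throw IllegalThreadStateException
    }
}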

Running status

When the CPU starts scheduling a thread in the ready state, the thread is in the running state after it has acquired a slice of CPU time to actually start executing the thread body of the run() method.

The running state is the most complex. A thread cannot stay running forever (unless its execution body is short enough to finish within a single slice); it must be interrupted during execution so that other threads get a chance to run. The details of the scheduling strategy depend on the underlying platform. From running, a thread's state can become blocked, ready, or dead. For example, in a preemptive system each runnable thread is assigned a time slice for its work; when the slice runs out, the system takes the processor away and gives other threads the chance to execute, and the thread moves from running back to ready, waiting for the system to allocate resources again.

Blocked

In some cases, a running thread gives up the CPU and temporarily stops itself, entering a blocked state. The thread will be blocked if:

  1. The thread calls sleep(), voluntarily giving up the processor and temporarily entering the blocked state (without releasing any object locks it holds); when the sleep time is up, it waits for the system to allocate the CPU again;
  2. The thread calls a blocking IO method and is blocked until that method returns;
  3. The thread attempts to acquire a synchronization monitor that is held by another thread;
  4. The program suspends the thread by calling its suspend method;
  5. The thread calls wait() and waits for a notify/notifyAll to wake it (releasing the object lock it holds).

Basic thread operations

join

join can be seen as a form of collaboration between threads: very often one thread's input depends on another thread's output. It is like two good friends walking together — the one in front suddenly notices the other has fallen behind, so he waits in place until his friend catches up, and only then do they continue side by side. The same kind of collaboration exists in real life: in software development, after requirements are gathered from the customer, a requirements analyst decomposes them, and only then does product development follow up.

If thread A executes threadB.join(), the current thread A waits until threadB terminates before A continues. In other words, the joined thread runs to completion first and the current thread resumes afterwards: if b.join() is called inside A, then B executes first and A continues after it.
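A minimal sketch of join (names are illustrative): main waits for the worker to terminate before reading its result, and join also establishes a happens-before edge, so the read is safe.

public class JoinDemo {
    public static void main(String[] args) throws InterruptedException {
        int[] result = new int[1];
        Thread worker = new Thread(() -> result[0] = 1 + 2 + 3);

        worker.start();
        worker.join(); // main blocks here until worker terminates

        System.out.println("worker's result: " + result[0]);
    }
}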

sleep

public static native void sleep(long millis) puts the current thread to sleep for the given number of milliseconds. Note that the sleep method does not release any lock the current thread has acquired. sleep() is often compared with Object.wait() and is often asked about in interviews. sleep() vs wait():

  • sleep() is a static method of Thread, while wait() is an instance method of Object.
  • wait() must be called inside a synchronized method or block — that is, the caller must already hold the object's lock. sleep() has no such restriction and can be used anywhere. Moreover, wait() releases the lock on the object it is called on, and the thread enters the wait pool to wait for the resource again; sleep() only gives up the CPU and does not release any lock.
  • After sleep()'s time is up, the thread continues as soon as it gets a CPU time slice again; a thread in wait() must be notified via Object.notify()/Object.notifyAll() before it leaves the wait pool, and then continues once it gets a CPU time slice.
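A minimal sketch of the lock difference (class and field names are illustrative): sleep() keeps the monitor held, while wait() releases it inside the synchronized block.

public class SleepVsWait {
    private final Object lock = new Object();

    public void holdWithSleep() throws InterruptedException {
        synchronized (lock) {
            Thread.sleep(1000); // still holds `lock`; other threads stay blocked on it
        }
    }

    public void releaseWithWait() throws InterruptedException {
        synchronized (lock) {
            lock.wait(1000);    // releases `lock`, so another thread may enter;
        }                       // the lock must be reacquired before wait() returns
    }
}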

yield

public static native void yield(); is a static method. Once executed, it makes the current thread give up the CPU. Note, however, that giving up the CPU does not mean the current thread stops running — it may well be scheduled again right away. Also, the released time slice will only be allocated to threads with the same priority as the current thread. So what is thread priority? Modern operating systems generally schedule threads by time division: the OS divides time into slices, and each thread is allocated a number of them; when the current slice is used up, thread scheduling occurs and the thread waits for its next allocation. How many slices a thread is allocated determines how much processor time it consumes, and thread priority is the thread attribute that decides whether a thread receives more or fewer processor resources.

It is also important to note that both sleep() and yield() make the current thread give up the processor. The difference: after sleep(), any other thread may compete for the time slice the current thread released, while yield() only allows threads with the same priority as the current thread to obtain the freed slice of CPU time.

The Monitor mechanism

In-depth analysis of synchronized

CAS: in concurrent programming, locking is a performance-consuming operation — only one thread at a time can enter the synchronized block to modify a variable's value. Without synchronization, multiple threads modifying the value of a would produce an incorrect result, i.e. a thread-safety problem. But locking costs performance at every step: acquiring a lock, releasing it, waiting for it, and blocking on it are all expensive. So: can we go without locks?

CAS — essentially if (expect == old) { old = new; } — executes as a single atomic operation. Even so, it does not solve the ABA problem: after the old value is read, it may be changed A→B→A by other threads, and the CAS will still succeed even though the value has in fact been modified in between.

CAS keeps the value correct under high concurrency, but the cost is repeated attempts — known as spinning — which wastes CPU cycles.

  1. CAS is a non-blocking, lightweight form of optimistic locking (a sketch of guarding against the ABA problem follows this list).
  2. CAS is a CPU instruction.
  3. CAS is an atomic operation that guarantees concurrent safety without blocking synchronization.
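A minimal sketch of guarding against ABA, using java.util.concurrent.atomic.AtomicStampedReference: a version stamp travels with the value, so an A→B→A change is detected. (The values are kept small so the Integer autobox cache makes the reference comparison behave as expected.)

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        AtomicStampedReference<Integer> ref =
                new AtomicStampedReference<>(100, 0); // value 100, stamp 0

        int stamp = ref.getStamp(); // remember the version we read

        // another thread performs A -> B -> A, bumping the stamp each time
        ref.compareAndSet(100, 101, ref.getStamp(), ref.getStamp() + 1);
        ref.compareAndSet(101, 100, ref.getStamp(), ref.getStamp() + 1);

        // a plain CAS on the value alone would succeed here;
        // the stamped CAS fails, as it should
        boolean ok = ref.compareAndSet(100, 102, stamp, stamp + 1);
        System.out.println(ok); // false: the stamp reveals the A-B-A change
    }
}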

CAS underlying

UNSAFE_ENTRY(jboolean, Unsafe_CompareAndSwapInt(JNIEnv *env, jobject unsafe, jobject obj, jlong offset, jint e, jint x))
  UnsafeWrapper("Unsafe_CompareAndSwapInt");
  oop p = JNIHandles::resolve(obj);
  jint* addr = (jint *) index_oop_from_field_offset_long(p, offset);
  return (jint)(Atomic::cmpxchg(x, addr, e)) == e;
UNSAFE_END


At the bottom, CAS is implemented by Atomic::cmpxchg(x, addr, e). On a multi-core processor this compiles to a lock-prefixed cmpxchg instruction, which ensures only one core can perform the compare-and-exchange at a time, achieving atomicity.

Locks

The locking mechanism has the following two features:

Mutual exclusion: The ability to allow only one thread to hold a lock on an object at a time enables coordination among multiple threads so that only one thread can access the synchronized block of code (compound operations) at a time. Mutual exclusion is also often referred to as atomicity of the operation.

Visibility: You must ensure that changes made to a shared variable before the lock is released are visible to another thread that subsequently acquired the lock (that is, it should acquire the value of the latest shared variable when it acquired the lock), otherwise the other thread may continue to work on a copy of the local cache causing inconsistency.

Object lock

In Java, every object has a Monitor object, which is effectively the Java object's lock, often called the "built-in lock" or "object lock." A class can have many objects, so each object has its own object lock, and they do not interfere with each other.

Class lock

In Java, there is also a lock for each Class, which can be called a “Class lock.” Class locks are actually implemented through object locks, which are Class object locks for classes. There is only one Class object per Class, so there is only one Class lock per Class.

Types of lock acquired with synchronized

  1. synchronized acquires an object lock when it is applied to an ordinary code block or a non-static method.

  2. synchronized acquires a class lock when it is applied to a static method or a block locked on the class object (Foo.class).

  3. The lock-related fields that synchronized uses live in the object header (the Mark Word), as discussed below. A sketch of object locks versus class locks follows this list.
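A minimal sketch of the distinction (the class name is illustrative): the two instance members contend for the same object lock, while the static members use the class lock, so the two kinds do not block each other.

public class LockKinds {
    // object lock: one per instance
    public synchronized void instanceMethod() { /* guarded by `this` */ }

    public void blockOnThis() {
        synchronized (this) { /* same object lock as instanceMethod() */ }
    }

    // class lock: one per class, independent of any object lock
    public static synchronized void staticMethod() { /* guarded by LockKinds.class */ }

    public void blockOnClass() {
        synchronized (LockKinds.class) { /* same class lock as staticMethod() */ }
    }
}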

Biased locking

If an object is in the biased-lock state, the bias may belong to the current thread or to some other thread.

  1. First compare the Thread_ID in the object header to determine whether the bias belongs to our own thread. Note that a mismatched thread ID does not immediately escalate the lock, because a biased lock is never revoked automatically: the Thread_ID in the object header is not reset to zero when the owning thread finishes, so we cannot tell whether that thread is still alive. A biased lock is released only when contention occurs (the release is included in the lock-revocation process). Release means the owning thread has exited the synchronized region, so the Mark Word can be restored to the unlocked state; a thread never releases a biased lock proactively but waits for another thread to contend. Revoking a biased lock requires waiting for the global safepoint (a point at which no code is executing). The steps are as follows:
    1. Pause the thread that holds the biased lock;
    2. Determine whether the lock object is still locked. If not, revert it to the unlocked state (01) and let the remaining threads compete. If yes, suspend the thread currently holding the lock, store a pointer to that thread's lock record in the object header's Mark Word, upgrade the state to a lightweight lock (00), then resume the thread holding the lock and enter the lightweight lock's contention mode.
  2. If the bias does belong to the current thread, simply continue executing the block; otherwise go to step 3.
  3. If the lock is not our own at this point, try CAS to acquire it; if that succeeds, set Thread_ID to our own ID and start executing the synchronized block. The CAS for a biased lock is CAS(word_mark_addr, NULL, current_thread_id) — it succeeds only if the expected value is null/0. If the lock is already biased to someone, the CAS cannot succeed, and the revocation described above follows.

In summary, once biased locking is in place, the lock can be re-entered repeatedly without being released as long as no new thread appears; CAS is used only the first time, and nothing needs to be done afterwards.

However, if a new thread performs the CAS, it will find that Thread_ID is no longer null and fail, triggering the revocation mechanism (remember, a biased lock is not released until contention occurs). The revocation process determines whether the owning thread is alive and, if so, whether it will keep contending, then updates the corresponding transition flag bits.

If contention is found during the revocation phase, lock inflation is performed: the biased lock becomes a lightweight lock. (The global safepoint is a moment when no instructions are executing.) When processing completes, the suspended thread resumes execution. Before upgrading to a lightweight lock, the thread holding the biased lock (thread A) is suspended. The JVM first creates a space called a Lock Record in the stack of the thread that held the biased lock (thread A); this space holds a copy of the lock object's current Mark Word. The Mark Word in the object header is then copied into thread A's lock record (officially called the Displaced Mark Word). Thread A thereby obtains the lightweight lock, the Mark Word's lock flag bits become 00, and the lock field in the Mark Word points to the address of thread A's lock record.

Lightweight lock

Adaptive spin lock
Adaptive means the spin time is no longer fixed but is determined by the previous spin time on the same lock and the state of the lock's owner. If, on the same lock object, a spin wait has just succeeded in obtaining the lock, and the thread holding the lock is running, the virtual machine assumes the spin is likely to succeed again, and allows it to last relatively long — say, 100 cycles. Conversely, if spinning rarely succeeds for a given lock, future acquisitions may shorten or even skip the spin to avoid wasting processor resources. Adaptive spinning solves the problem that the lock-contention time is uncertain: the JVM can hardly sense the exact contention time, and handing the analysis to the user would violate what the JVM was designed for. Adaptive spinning assumes that different threads hold the same lock object for roughly the same duration and that the level of contention tends to be stable, so the next spin's duration can be adjusted according to the time and result of the last one.

The goal of spin locking is to reduce the cost of thread switching. If lock contention is intense, we must rely on heavyweight locks and block the threads that lose the race. If there is no actual contention, applying for a heavyweight lock is a waste. The goal of lightweight locking is to reduce the performance cost of using heavyweight locks when there is no actual contention — including the kernel/user mode switches caused by system calls and the thread switches caused by blocking. As the name suggests, lightweight locks are relative to heavyweight locks. With a lightweight lock there is no need to apply for a mutex; the JVM only CAS-updates part of the Mark Word to point to a Lock Record in the thread's stack. If the update succeeds, the lightweight lock is acquired and the recorded lock state is lightweight. Otherwise, some thread has already acquired the lightweight lock and a lock race has occurred (so the lightweight lock is no longer suitable), and the lock inflates to a heavyweight lock. Of course, since lightweight locks naturally target scenarios without contention, mild contention can still be optimized with spinning, inflating to a heavyweight lock only when spinning fails.

In detail: if the lock is still unlocked, the lightweight-lock CAS updates the Mark Word to a pointer to the thread's Lock Record. If the update succeeds, the lock is acquired and the lock flag is set to lightweight. If the CAS fails, another thread has the lock; the thread then checks whether it already holds it — if the Mark Word points to its own Lock Record, this is a reentry. If not, it spins a few times. If spinning fails repeatedly, the Mark Word's flag is changed to heavyweight (the Mark Word now points to a Monitor object — note that each object is associated with a monitor for synchronization management), lock management is handed over to the Monitor, and the thread places itself in the blocking queue. When the thread holding the lock finishes, it releases the lock by CASing the Mark Word back to the unlocked state; but if that CAS finds the lock is already heavyweight, the expected value no longer matches, which means some thread has blocked in the meantime. In that case the heavyweight release path runs: release the lock and wake up the blocked threads.

Simple summary

Biased and lightweight locks are optimistic locks; the heavyweight lock is a pessimistic lock. When an object is first instantiated and no thread has yet accessed it, it is biasable: it assumes for now that only one thread will ever access it. So when the first thread accesses it, the object becomes biased toward that thread, and in that state the object holds a biased lock. To change the object header to the biased state, the thread uses CAS to write its own ID into the object header's ThreadID; on later accesses it only needs to compare the ID, with no further CAS.

Once a second thread accesses the object — since a biased lock is never released proactively, the second thread sees the object's biased state — there is now contention on the object. The JVM checks whether the thread that originally held the bias is still alive. If it has exited, the object can be reverted to the unlocked state and then re-biased to the new thread. If the original thread is still alive, its operation stack is examined to check how the object is being used: if the biased lock is still needed, it is upgraded to a lightweight lock (this is exactly when the biased-to-lightweight upgrade happens); if no one is using it, the object can be returned to the unlocked state and then re-biased.

A lightweight lock assumes contention exists but is very light: typically two threads interleave, or one waits a little (spins) for the other to release the lock. But when spinning exceeds a certain number of attempts, or when one thread holds the lock, a second is spinning, and a third arrives, the lightweight lock inflates into a heavyweight lock, which blocks every thread except the owner and keeps the CPU from spinning idly.

The two CAS expected values are known. For the biased lock the expectation is: bias flag = 1 || ThreadID = null || lock flag = 01; revocation is achieved by resetting the Mark Word. For the lightweight lock the CAS expectation is: bias flag = 0 || lock flag = 00 || lock-record pointer = null, and the purpose of the CAS is to point the lock-record pointer at the thread's own lock record. A lightweight lock is released automatically by CASing the expected value back into the Mark Word.

Lock lifecycle — biased lock, lightweight lock, heavyweight lock: 1. locking; 2. biased-lock revocation and unlocking; 3. inflation to the heavyweight lock.

Synchronized low-level optimization (bias locking, lightweight locking)

JVMS – Lock elimination + lock coarsening spin lock, bias lock, lightweight lock escape analysis -30

CAS operations, Java object headers, bias lock acquisition and undo, lightweight lock acquisition and undo, lock coarsening, lock elimination

How did you upgrade from bias to heavyweight

Gnathing Concurrency (7) : An in-depth analysis of Synchronized principles

Heavyweight lock

Note that synchronized is an object lock (it locks the object, not the reference variable) with object granularity; it can be used to achieve synchronized, mutually exclusive access to critical resources, and it is reentrant. The greatest benefit of reentrancy is avoiding deadlocks — for example, when a synchronized method calls another synchronized method on the same object.

synchronized on a code block works a little differently from synchronized on a method. A synchronized method has only the ACC_SYNCHRONIZED flag, which indicates that the lock object's monitor must be acquired; once the monitor is obtained, executing the synchronized method proceeds the same way as executing a decorated code block.

The Monitor

Every object has a Monitor associated with it, and when a Monitor is held, the object is locked. The JVM implements synchronized — both method synchronization and code-block synchronization — by entering and exiting Monitor objects. Although implementation details vary, both can be implemented with paired MonitorEnter and MonitorExit instructions:

1. MonitorEnter: inserted at the beginning of the synchronized code block. When execution reaches this instruction, it attempts to acquire ownership of the object's Monitor, that is, to acquire the object's lock.
2. MonitorExit: inserted at the end of the method and at exception points; the JVM guarantees that every MonitorEnter has a matching MonitorExit.

So what is a Monitor? It can be understood as a synchronization tool, or described as a synchronization mechanism, and it is usually described as an object. Like all objects, every Java object is born a Monitor: in Java's design, every object comes with an invisible lock, called the internal lock or Monitor lock. This is the object lock commonly referred to with synchronized — the Mark Word's lock flag bits are 10 and its pointer points to the start address of the Monitor object. In the HotSpot JVM, Monitor is implemented by ObjectMonitor, whose main data structure is as follows (from the HotSpot source file objectmonitor.hpp, implemented in C++):
ObjectMonitor() {
    _header       = NULL;
    _count        = 0;
    _waiters      = 0,
    _recursions   = 0;       // reentry count
    _object       = NULL;
    _owner        = NULL;    // the thread that holds the lock
    _WaitSet      = NULL;    // threads in the wait state are added to _WaitSet
    _WaitSetLock  = 0;
    _Responsible  = NULL;
    _succ         = NULL;
    _cxq          = NULL;
    FreeNext      = NULL;
    _EntryList    = NULL;    // threads blocked waiting for the lock are added to this list
    _SpinFreq     = 0;
    _SpinClock    = 0;
    OwnerIsThread = 0;
}

The difference between CXQ,EntryList, and WaitSet

A Monitor maintains three queues: CXQ (the ContentionList), EntryList, and WaitSet.

  1. Owner: Indicates the thread that holds the lock
  2. ContentionList (CXQ) : When a thread attempts to acquire a lock, if the lock is already occupied, the thread is inserted to the top of the CXQ queue
  3. EntryList: When the thread holding the lock releases the lock, it moves all the elements in CXQ to the EntryList and wakes up the leading thread of the EntryList
  4. WaitSet: If a thread calls the wait method in a synchronized block, the thread is removed from the EntryList and added to the WaitSet, then the lock is released. When a wait thread is notified, the corresponding thread is moved from the WaitSet to the EntryList

Note that the new contended threads are added to CXQ, and the EntryList is the threads transported from CXQ each time they wake up.

The whole waking up and waiting process

Contending for the Monitor

Contention occurs when multiple threads execute a synchronized block: synchronized executes monitorenter, which ultimately calls the C++ method ObjectMonitor::enter.

1. Use CAS to try to set the Monitor's _owner field to the current thread.
2. If _owner already points to the current thread, the current thread is entering the monitor again — a lock reentry — so run recursions++ to record the reentry count.
3. If this is the first time the current thread enters the monitor, set recursions to 1 and _owner to the current thread.
4. If acquiring the lock fails, enter the wait queue and wait for the lock to be released.

The waiting process

1. The current thread is wrapped as an ObjectWaiter node with status ObjectWaiter::TS_CXQ.
2. In a for loop, CAS pushes the node onto the _cxq list; multiple threads may be pushing their nodes onto _cxq at the same time.
3. After its node is pushed onto _cxq, the thread tries to acquire the lock by spinning; if that fails, it suspends itself with park and waits to be woken.
4. When the thread is woken, it continues from the suspension point and tries to acquire the lock again with ObjectMonitor::TryLock.

The release of the lock

When _recursions drops to 0, the thread has completely exited the synchronized block and the lock is released. The monitor can then wake a blocked waiting thread according to the chosen policy (specified by QMode): a head node is taken from the waiting lists _cxq and _EntryList, and the thread wrapped in that node is woken by the ExitEpilog method, which ultimately performs the wake-up with unpark.

The difference between Synchronized and ReentrantLock

Synchronized and ReentrantLock:

  • Synchronized is a JVM-level lock implementation, while ReentrantLock is a JDK (library)-level lock implementation.
  • The lock state of synchronized cannot be inspected directly in code, while ReentrantLock's can, via ReentrantLock#isLocked.
  • synchronized is only an unfair lock, while ReentrantLock can be either fair or unfair.
  • synchronized is not interruptible while waiting, while the ReentrantLock#lockInterruptibly method is interruptible.
  • synchronized releases its lock automatically when an exception occurs (implemented automatically at javac compile time), while ReentrantLock requires the developer to release it explicitly in a finally block.
  • ReentrantLock can acquire the lock in several ways: tryLock() returns immediately, or waits a specified amount of time — more flexible.
  • Under contention, synchronized gives no guarantee that the longest-waiting thread acquires the lock first, while a fair ReentrantLock grants the lock to the thread that has waited longest.

Fair and unfair

ReentrantLock

First, non-fair locking performs better, but can cause thread starvation. Fair locking guarantees the order in which locks are acquired, but requires additional maintenance overhead.

ReentrantLock is built on AQS (AbstractQueuedSynchronizer) underneath; AQS implements the common queuing and state-management methods.

Both the fair and the non-fair lock extend the inner Sync class. Sync has two main methods: lock(), which acquires the lock, and tryLock().

nonfairTryAcquire() for the non-fair lock is already implemented in Sync. When a ReentrantLock is constructed, an unfair lock is created by default.

  1. The first step is a CAS attempt to acquire the lock; on success, the lock's owner is set to the current thread. If the CAS fails, acquire() is entered.
  2. nonfairTryLock() in the unfair lock directly calls the corresponding function in Sync.

Note that this method is called by the acquire() function.

The fair lock's lock() does not first try to grab the lock with a CAS; it calls acquire() directly. Let's look at how acquire() obtains the lock — acquire() is a method in AQS.

Fair and unfair Locks –ReentrantLock

Optimistic lock and pessimistic lock

Optimistic locking

Optimistic locking was introduced so that synchronized resources can be accessed without blocking. The idea is simple: a table has a version field; the first read fetches that field, and when the business logic finishes and the update begins, it checks whether the field's value still matches the first read. If another writer has updated it in the meantime, the attempt loops and retries. One implementation of optimistic locking is CAS plus a version number.
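A minimal sketch of the version-check-then-retry idea in Java (all names are illustrative; against a database table the same idea is usually an UPDATE guarded by WHERE version = ?):

import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticAccount {
    private volatile int balance = 100;
    private final AtomicInteger version = new AtomicInteger(0);

    public void withdraw(int amount) {
        while (true) {
            int v = version.get();                 // 1. read the current version
            int newBalance = balance - amount;     // 2. compute on that snapshot
            synchronized (this) {                  // short critical section for check-and-write
                if (version.get() == v) {          // 3. version unchanged since the read?
                    balance = newBalance;
                    version.incrementAndGet();
                    return;                        // success
                }
            }                                      // another writer won the race: retry
        }
    }
}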

Pessimistic locking

Always assume the worst case: every time data is fetched, assume another thread will modify it, so a (pessimistic) lock is taken. Once the lock is in place, only one of the concurrently executing threads can proceed; the other threads wait at the entrance until the lock is released. Frequent blocking and waking is costly. An optimistic lock, by contrast, does not lock at all.

Three classic implementations of producer and consumer

PV operation

PV operations: first, make sure the P and V operations themselves are atomic. Second, be clear that PV operations exist to control access to the resource: threads are blocked and woken according to the state of the resource semaphore S.

The producer-consumer pattern — concurrency over a shared buffer — typically uses two semaphores to indicate how much of the buffer is occupied and how much remains free. The occupied count is the producer-side semaphore, showing how many items have been stored; the "buffer" semaphore is the remaining space in the buffer. The mutex is S = 1, meaning only one thread may use the buffer at a time. Since the buffer starts with 12 slots, we set S = 12 for the free space, so a thread blocks when the space is used up. The specific P and V operations are as follows:

The V operation releases one unit of the resource. Note that the check happens after the release: if S <= 0, some thread is blocked waiting for the resource and must be woken.

Specific implementation:

Note that although the P and V operations on a resource must come in pairs, they can appear in different functions.

Notice what S means

First of all, S represents how much of the resource is available. S = 0 means the resource is exactly used up, so at that point some thread must be occupying the resource.

Resource application

First S is decremented. If S >= 0 afterwards, the request for the resource succeeds; if S < 0, the resource is exhausted and the requesting thread blocks.

Release of resources

After V increments S, if S <= 0 there are still threads blocked waiting for the resource (they requested it while it was exhausted). Only one thread at a time holds access to this synchronized block of code (the others are blocked), and since exactly one unit of the resource is being released, exactly one blocked thread is woken.

Conclusion: S==0 is the critical point.
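A minimal sketch of the PV scheme using java.util.concurrent.Semaphore (the buffer size of 12 matches the text; other names are illustrative):

import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.Semaphore;

public class PvBuffer {
    private final Semaphore free  = new Semaphore(12); // remaining space, S = 12
    private final Semaphore items = new Semaphore(0);  // stored products
    private final Semaphore mutex = new Semaphore(1);  // S = 1: one user at a time
    private final Queue<Integer> buffer = new LinkedList<>();

    public void produce(int item) throws InterruptedException {
        free.acquire();   // P(free): blocks when the buffer is full
        mutex.acquire();  // P(mutex)
        buffer.add(item);
        mutex.release();  // V(mutex)
        items.release();  // V(items): may wake one blocked consumer
    }

    public int consume() throws InterruptedException {
        items.acquire();  // P(items): blocks when the buffer is empty
        mutex.acquire();
        int item = buffer.poll();
        mutex.release();
        free.release();   // V(free): may wake one blocked producer
        return item;
    }
}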

ReentrantLock and Condition

This is, in effect, the official version of the PV operations: you no longer implement PV yourself, nor track resource usage and perform the corresponding checks and bookkeeping.

ReentrantLock provides lock() and unlock() to acquire and release the lock.

The meaning of Condition

For Condition, the JDK API explains it like this:

Condition breaks down the Object monitor methods (wait, notify, and notifyAll) into distinct objects in order to provide multiple wait-sets for each Object by combining these objects with any Lock implementation. Lock replaces the use of synchronized methods and statements, and Condition replaces the use of Object monitor methods.

A condition (also known as a condition queue or condition variable) provides a means for a thread to suspend itself (to "wait") until another thread notifies it that some state condition may now be true. Because access to this shared state information occurs in different threads, it must be protected, so some form of lock is associated with the condition. The key property of waiting on a condition is that it atomically releases the associated lock and suspends the current thread, just as Object.wait does.

The Condition instance is essentially bound to a lock. To get a Condition instance for a particular Lock instance, use its newCondition() method.

After creating a lock, we can bind it to multiple wait queues. Here, for example, we can use two queues: one to block producers and one to block consumers. When the buffer is full, the producing thread parks in the producer queue; when it is empty, the consuming thread parks in the consumer queue. When an item is produced, one thread in the consumer queue is woken; likewise, after an item is consumed, one producing thread in the producer queue is woken. Note that this differs from synchronized, which by default wakes all blocked threads each time (producers and consumers alike — chiefly because there is only one wait queue).

So, in that sense, it’s kind of the same thing as the PV operation.
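A minimal sketch of the two-queue scheme just described (capacity and names are illustrative):

import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class CondBuffer {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull  = lock.newCondition(); // producers park here
    private final Condition notEmpty = lock.newCondition(); // consumers park here
    private final Queue<Integer> buffer = new LinkedList<>();
    private final int capacity = 12;

    public void put(int item) throws InterruptedException {
        lock.lock();
        try {
            while (buffer.size() == capacity) notFull.await(); // full: park the producer
            buffer.add(item);
            notEmpty.signal(); // wake exactly one consumer
        } finally {
            lock.unlock();
        }
    }

    public int take() throws InterruptedException {
        lock.lock();
        try {
            while (buffer.isEmpty()) notEmpty.await(); // empty: park the consumer
            int item = buffer.poll();
            notFull.signal(); // wake exactly one producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}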

The synchronized keyword

Using synchronized with wait()/notifyAll() on the shared buffer is the basic approach.

Concurrent collection utility classes and thread-safe collections

equals, ==, and hashCode

First, the equals() function is generally used to compare whether two variables are equal — equality here meaning logical equality. equals() typically tries == first to check whether the two references point to the same address; if not, it compares contents. So equals() is an all-around comparison method. The simplest equals(), in Object, just returns the result of ==, comparing the address values of the two variables.

== is used to compare the values of variables. If it is a basic data type, the value is compared directly; if it is a reference type, the address value of the reference is compared. Objects go on the heap, and references go on the stack.

hashCode is a mapping function that produces a code used to identify an object. The rule: objects that are equal (logically) must have equal hash codes, while objects that are not equal may also have equal hash codes. So hashCode can only be used to rule out equality — like a Bloom filter, it can only tell you that objects with unequal hash codes are definitely not equal.

The most common approach:

  1. Compare hash codes first; if they differ, the objects are certainly not equal; otherwise go to step 2.
  2. Equal hash codes do not guarantee equal objects, so call equals() to compare.

This is efficient because equals() does not have to run for every comparison.

However, we usually write our own equals() logic, so we must also ensure that two objects that are equals() have the same hashCode. Therefore, whenever you override equals(), you must override hashCode(), as sketched below.
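A minimal sketch of overriding the two together (the Point class is illustrative):

import java.util.Objects;

public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) { this.x = x; this.y = y; }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;               // fast path: same address
        if (!(o instanceof Point)) return false;
        Point p = (Point) o;
        return x == p.x && y == p.y;              // logical equality
    }

    @Override
    public int hashCode() {
        return Objects.hash(x, y); // equal points must produce equal hash codes
    }
}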

The seemingly simple hashCode and equals interview question has so many pitfalls.

HashMap HashTable ConcurrentHashMap

The hash code here is only used to locate the slot in the array. table is an array of Entry objects, where Entry is the node's data format and carries a next pointer.

Common parts:

  1. Initialize the buckets, i.e. the size of the Entry array table; the default bucket count of 16 is used.
  2. The load factor, 0.75 by default. The load factor determines the resize threshold: 16 × 0.75 = 12.

The PUT method in 1.7

public V put(K key, V value) {
    if (table == EMPTY_TABLE) {
        inflateTable(threshold);
    }
    if (key == null)
        return putForNullKey(value);
    int hash = hash(key);
    int i = indexFor(hash, table.length);
    for (Entry<K,V> e = table[i]; e != null; e = e.next) {
        Object k;
        if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
            V oldValue = e.value;
            e.value = value;
            e.recordAccess(this);
            return oldValue;
        }
    }
    // No existing entry matched, or the bucket is empty.
    modCount++;
    // addEntry handles both the empty-bucket and non-empty-bucket cases;
    // resizing is also done inside addEntry.
    addEntry(hash, key, value, i);
    return null;
}
  • Determine whether the current array needs to be initialized.
  • If the key is null, store the value via the null-key path (putForNullKey).
  • Compute the hash code from the key.
  • Locate the bucket from the computed hash code.
  • If the bucket already holds a linked list, walk it and compare each node's hash and key with the incoming key; if they match, overwrite the value and return the old one.
  • If the bucket is empty, no data is stored at this position yet; add an Entry object written at the current position.

The PUT method in 1.8

In 1.7, when hash collisions are severe, the list hanging off a bucket grows longer and longer and query efficiency drops — the time complexity is O(n). The red-black tree introduced in 1.8 is mainly aimed at optimizing this query cost.

Detailed explanation of PUT in 1.8

  1. Determine whether the current bucket array is empty; if so, initialize it (resize decides whether initialization is needed).
  2. Locate the specific bucket from the key's hash code and check whether it is empty; if it is empty, there is no hash conflict, so create a new node directly at this position.
  3. If the current bucket has a value (a hash conflict), compare the incoming key (and its hash code) with the key in the current bucket; if they are equal, assign the existing node to e — the value is assigned and returned uniformly in step 8.
  4. If the current bucket is a red-black tree, write the data the red-black-tree way.
  5. If it is a linked list, wrap the current key and value in a new node and append it to the end of the bucket's list.
  6. Then check whether the list's length exceeds the preset threshold; if so, convert the list to a red-black tree.
  7. If a node with the same key is found during the traversal, exit the traversal directly.
  8. If e != null, an entry with the same key already exists, so its value is overwritten. Finally, decide whether the map needs to be resized.

www.jianshu.com/p/c0642afe0…

www.javazhiyin.com/37729.html

Blog.csdn.net/u010842515/…

Juejin. Cn/post / 684490…

Commonly confused concepts

If an object is locked by someone else, can I still access the object to perform other operations?

Yes. Someone else holds the lock, but I can still access the object and execute all of its functions that are not locked. Only when I try to execute the locked block will I be blocked.

By locking resources, does that mean locking a variable?

No. What is locked is access to a shared resource — that is, the code paths that modify it. We often speak of read and write operations on a shared resource; we lock only those modifying instructions. In fact you may lock only some of them — a read lock, a write lock, or both reads and writes, and so on. ConcurrentHashMap locks only put, which is effectively a write lock, and what it locks is an entire list or tree root. Yet you can still access all the object's properties and any unlocked code blocks; even while synchronized code runs, the object's variables remain accessible.

The volatile keyword

When a variable is volatile, any write to it by a thread is immediately flushed to main memory, and the caches of other threads holding that variable are invalidated, forcing them to read the latest data from main memory. These two steps ensure every thread's view of the data is up to date in real time.

The four singleton implementations in detail

First, take a look at the class loading time:

  1. When a class is loaded, its inner classes are not loaded at the same time.
  2. A class is loaded if and only if one of its static members (a static field, a static block, a static method, etc.) is used or its constructor is invoked.

That is, even when the outer class is used, its inner classes are not loaded at the same time, whether they are static or not — unless some static member of the inner class is explicitly referenced.

Eager initialization (the "hungry" style)

public class Singleton {
    // the instance is created eagerly when the class is loaded
    private static Singleton instance = new Singleton();

    private Singleton() {
    }

    public static Singleton getInstance() {
        return instance;
    }
}

Lazy initialization (the "lazy" style)

public class Singleton {
    private static Singleton instance;

    private Singleton() {
    }

    public static Singleton getInstance() {
        if (instance == null) { // not thread-safe: two threads may both see null here
            instance = new Singleton();
        }
        return instance;
    }
}
A thread-safe variant simply synchronizes getInstance():

public class Singleton {
    private static Singleton instance;

    private Singleton() {
    }

    public static synchronized Singleton getInstance() {
        if (instance == null) {
            instance = new Singleton();
        }
        return instance;
    }
}

Double-checked locking

public class Singleton {
    // volatile prevents instruction reordering
    private volatile static Singleton instance;

    private Singleton() {
    }

    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    // The assignment is not an atomic operation: it involves
                    // allocating memory, initializing the object, and assigning
                    // the address to the reference. Without volatile that order
                    // is not guaranteed - the assignment may precede the
                    // initialization, so a newly arriving thread could see a
                    // non-null but not-yet-initialized singleton. volatile
                    // guarantees the assignment happens last.
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}

Static inner class

public class Singleton {
    private Singleton() {
    }

    static class SingletonHolder {
        private static final Singleton instance = new Singleton();
    }

    public static Singleton getInstance() {
        // Initialization is triggered here: a static property of the static
        // inner class is referenced. A static inner class is used because a
        // non-static inner class cannot have static members.
        return SingletonHolder.instance;
    }
}

Note: this series of blog posts is mainly an exercise in consolidating information. Many thanks to the blogs it draws from; in many places the source is not indicated, mainly because I was adding my own thoughts while reading. For any post referenced without attribution, I will add the reference later, and I sincerely apologize for this!!