Section 1: The Java memory model
- The JVM specification defines a Java Memory Model (JMM) to mask the differences in memory access across hardware and operating systems, so that Java programs achieve consistent memory-access behavior on every platform.
One, Main memory and working memory
Summary:
- The main goal of the Java memory model is to define access rules for variables in a program
- That is, the low-level details of storing variables into and reading variables from memory in the JVM
- The variables here exclude local variables and method parameters, which are thread-private (they live on the VM stack and are never shared)
1. Main memory:
- Memory shared by all threads
- All variables must be kept in main memory.
2. Working memory
- Thread-private, analogous to a processor cache
- The working memory holds copies of variables from main memory used by the thread
- All of a thread's operations on variables must be performed in its own working memory; a thread cannot operate on main memory directly
- Different threads cannot directly access each other’s working memory, and variable values can only be transferred through the main memory
- All communication between threads must be relayed through main memory.
- **Note:** This division into main and working memory is not the same level of memory partition as the run-time data areas described earlier (stack, heap, method area, etc.)
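To make the stale-copy risk concrete, here is a minimal sketch (my own illustration, not from the original notes) of a reader thread that may never observe another thread's write because it keeps using its working-memory copy:

```java
// Illustrative sketch: without synchronization, the worker thread may keep
// using its working-memory copy of `running` and never see the update.
public class StaleReadDemo {
    // Not volatile: the JMM does not force the reader to refresh this value.
    static boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy loop; may never terminate, because `running` can stay
                // cached in this thread's working memory / CPU registers
            }
            System.out.println("worker observed running == false");
        });
        worker.start();
        Thread.sleep(100);
        running = false; // written by main thread; visibility is not guaranteed
    }
}
```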
Two, Memory interaction operations
- Overview: an interaction protocol between main memory and working memory that defines the details of how data moves between the two (each of the eight operations below is atomic).
1. Eight operations of the interactive protocol
- Lock: acts on a main-memory variable; marks the variable as exclusively held by one thread
- Unlock: acts on a main-memory variable; releases a locked variable so that other threads may lock it.
- Read: acts on a main-memory variable; transfers the variable's value from main memory into the thread's working memory for the subsequent load. (Read and load occur in pairs: the operation after read must be load, otherwise the read has no effect.)
- Load: acts on a working-memory variable; places the value obtained by read into the working-memory copy of the variable
- Use: acts on a working-memory variable; passes the variable's value to the execution engine. The JVM performs this operation whenever it executes a bytecode instruction that needs the variable's value (for example, pushing the value onto the operand stack for a calculation).
- Assign: acts on a working-memory variable; assigns a value received from the execution engine to the working-memory variable. The JVM performs this operation whenever it executes a bytecode instruction that assigns to a variable (for example, storing a computed result taken from the operand stack).
- Store: acts on a working-memory variable; transfers the variable's value from working memory to main memory for the subsequent write. (Store and write occur in pairs: the operation after store must be write.)
- Write: acts on a main-memory variable; puts the value obtained by store into the main-memory variable.
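As a rough illustration (the mapping below is my own annotation under the eight-operation model, not authoritative bytecode semantics), a simple read-modify-write of a shared field decomposes roughly like this:

```java
public class InteractionSketch {
    static int shared = 0; // lives in main memory

    static void increment() {
        // shared = shared + 1 decomposes roughly as:
        // read   : transfer shared's value from main memory toward working memory
        // load   : put that value into this thread's working-memory copy
        // use    : hand the copy's value to the execution engine (for the +1)
        // assign : write the computed result into the working-memory copy
        // store  : transfer the copy's value toward main memory
        // write  : put the transferred value into the main-memory variable
        shared = shared + 1;
    }
}
```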
Three, Special rules for volatile variables (relevant on multi-core CPUs)
Summary:
- volatile is the lightest synchronization mechanism the JVM offers, but volatile alone is not thread-safe
- A volatile-modified variable is visible to all threads: as soon as one thread modifies it, other threads can learn of the change
- Ordinary variable: thread A writes the new value back to main memory at some later point, and thread B reads it from main memory at some later point
- volatile variable: thread A writes back to main memory immediately after the modification, and every other thread must re-read the value from main memory before each use. (There is still no locking: once a value has been read onto the operand stack for an operation, it can go stale, so compound operations remain unsafe under concurrency.)
1. Guarantees visibility:
- Because volatile operations are still not atomic, relying on volatile alone is safe only when both of the following hold (see the sketch after this list):
- The result of the operation does not depend on the variable's current value, or only a single thread ever changes the value
- The variable does not participate in invariants together with other state variables
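A sketch of both sides of these rules, assuming a simple shutdown flag (safe) and a shared counter (unsafe); the AtomicInteger alternative is one possible fix, not the only one:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class VolatileUsage {
    // Safe: the new value (true) does not depend on the current value,
    // and the flag participates in no invariant with other variables.
    static volatile boolean shutdownRequested = false;

    // Unsafe: count++ is read-modify-write; volatile keeps each read fresh,
    // but two threads can still interleave and lose updates.
    static volatile int count = 0;

    // One safe alternative for counters.
    static AtomicInteger safeCount = new AtomicInteger();

    static void worker() {
        while (!shutdownRequested) {
            count++;                     // racy despite volatile
            safeCount.incrementAndGet(); // atomic read-modify-write
        }
    }

    static void requestShutdown() {
        shutdownRequested = true; // safe flag usage
    }
}
```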
2. Disable reordering
- Example: the last line of study() in thread A sets flag = true; thread B uses flag to decide whether to proceed. Without volatile, reordering could set flag to true before the rest of study()'s code has finished, and thread B would start prematurely. (A code sketch follows below.)
- Principle: a write to a volatile-modified variable inserts memory barriers. When the current CPU writes its cache back to main memory, the corresponding cache lines of other observing CPUs are invalidated, forcing them to re-read from main memory.
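A reconstruction of the scenario just described (the names study and flag come from the text above; the surrounding code is my own sketch):

```java
public class ReorderingSketch {
    static boolean configLoaded = false;  // ordinary variable
    static volatile boolean flag = false; // volatile forbids reordering across it

    // Thread A
    static void study() {
        configLoaded = true; // ... the real work of study() ...
        flag = true;         // last line: publish "done"
        // Without volatile, `flag = true` could be reordered before the work
        // above; with volatile, everything before it stays before it.
    }

    // Thread B
    static void consumer() {
        while (!flag) {
            // spin until thread A signals completion
        }
        assert configLoaded; // guaranteed only because flag is volatile
    }
}
```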
3. When to choose volatile:
- volatile generally performs better than locking, although with lock elimination and other optimizations it is hard to quantify how much cheaper than synchronized it really is (in most cases it is still less expensive)
- volatile reads cost about the same as reads of ordinary variables; writes are slightly slower because they insert memory barriers.
4. volatile's special constraints on the eight interaction operations (for a thread T and two volatile-modified variables V and W):
- T's read, load, and use actions on V must appear consecutively, in that order. (This ensures that before every use of V the value is refreshed from main memory, so T sees updates made by other threads.)
- T's assign, store, and write actions on V must appear consecutively, in that order. (This ensures that every change to V in working memory is synchronized back to main memory at once, so other threads can see the change.)
- If T's actions on V come before its actions on W, the corresponding main-memory accesses to V must also come before those to W. (This means instruction reordering does not happen across volatile-modified variables: the code executes in program order.)
Four, Atomicity, visibility, and ordering
Overview: the JMM is built around how atomicity, visibility, and ordering are handled in concurrent code
1. Atomicity:
- Six of the memory-interaction operations (read, load, use, assign, store, and write) provide atomic access to primitive data types. (The exception is the non-atomic treatment of long and double, which the specification allows to be split into two 32-bit accesses.)
- If a wider scope of atomicity is needed, the JMM also provides the lock and unlock operations, which surface in Java code as the synchronized keyword (a sketch follows below).
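A minimal sketch of widening atomicity with synchronized; the monitorenter/monitorexit behavior behind it corresponds to the lock and unlock operations:

```java
public class AtomicCounter {
    private int value = 0;

    // synchronized maps to lock/unlock on this object's monitor, making the
    // whole read-modify-write sequence atomic with respect to other
    // synchronized methods/blocks on the same lock.
    public synchronized void increment() {
        value++;
    }

    public synchronized int get() {
        return value;
    }
}
```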
2. Visibility:
- When one thread changes the value of a variable, other threads can immediately learn of the change
- The volatile keyword guarantees that a changed value is synchronized back to main memory immediately and that the value is re-read from main memory before each use; ordinary variables make no such guarantee
- The final keyword also provides visibility: once a final field has been initialized in the constructor, and the constructor has not let the this reference escape, the field is visible to other threads (see the sketch below)
- Locking with synchronized also guarantees visibility (unlock forces the variable's value back to main memory)
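A sketch of the final-field guarantee and of the this-escape anti-pattern (class names are mine, for illustration):

```java
public class FinalFieldDemo {
    final int x;
    int y;

    // Safe: `this` does not escape during construction, so any thread that
    // later sees this object is guaranteed to see x == 42.
    public FinalFieldDemo() {
        x = 42;
        y = 42; // non-final: other threads may still observe the default 0
    }
}

// Anti-pattern: letting `this` escape before construction completes
// defeats the final-field guarantee.
class Leaky {
    static Leaky instance;
    final int x;

    Leaky() {
        instance = this; // `this` escapes: another thread may read x == 0
        x = 42;
    }
}
```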
3. Ordering:
- Observed from within a thread, all operations appear ordered; observed from one thread looking at another, all operations appear unordered
Five, The happens-before principle
Summary:
- If operation A happens-before operation B, then the effects of operation A are observable by operation B.
Rules:
- Program order rule: within a single thread, code executes in program (control-flow) order.
- Monitor lock rule: an unlock on a lock happens-before every subsequent lock on the same lock
- Volatile variable rule: a write to a volatile variable happens-before every subsequent read of that variable
- Thread start rule: a call to Thread.start() happens-before every action in the started thread
- Thread termination rule: all operations in a thread happen-before another thread detects that it has terminated (e.g., Thread.join() returning)
- Thread interruption rule: a call to Thread.interrupt() happens-before the point at which code in the interrupted thread detects the interrupt
- Object finalization rule: the completion of an object's initialization (the end of its constructor) happens-before the start of its finalize() method
- Transitivity: if A happens-before B and B happens-before C, then A happens-before C. (A combined sketch follows below.)
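Several of these rules compose via transitivity. A sketch (my own, using the standard writer/reader pattern) that chains the program order rule, the volatile variable rule, and transitivity:

```java
public class HappensBeforeSketch {
    static int data = 0;
    static volatile boolean ready = false;

    // Writer thread
    static void writer() {
        data = 42;    // (1) program order: happens-before (2)
        ready = true; // (2) volatile write: happens-before any later read of ready
    }

    // Reader thread
    static void reader() {
        if (ready) {  // (3) volatile read of `ready`
            // transitivity: (1) -> (2) -> (3), so data is guaranteed to be 42
            System.out.println(data);
        }
    }
}
```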
Section 2: Java and threads
One, Thread implementation
Summary:
- Threads separate a process's resource allocation from its execution scheduling.
- Threads within a process share its resources (memory address space, file I/O, etc.) yet are scheduled independently; the thread is the CPU's smallest unit of scheduling
- Threads can be implemented in three ways: with kernel threads, with user threads, or with a hybrid of user threads and lightweight processes
1. Kernel thread implementation
- Kernel threads are implemented directly by the operating system kernel; the kernel switches between them via its scheduler and is responsible for mapping their work onto individual processors.
- Each kernel thread can be seen as a clone of the kernel, which lets the operating system handle several things at once. A kernel that supports multithreading is called a multithreaded kernel.
- Programs normally do not use kernel threads directly but use a high-level interface over them: lightweight processes.
- A lightweight process is what we usually call a thread
- Each lightweight process is backed by one kernel thread, so lightweight processes require kernel-thread support, and the two have a one-to-one relationship
- Disadvantage:
- Because the implementation is based on kernel threads, thread operations (creation, destruction, synchronization) require system calls, which switch back and forth between user mode and kernel mode; the cost is relatively high
- Each lightweight process is backed by a kernel thread and therefore consumes kernel resources (such as kernel thread stack space), so the system can support only a limited number of lightweight processes
2. User thread implementation
- User threads are built entirely on a thread library in user space; the kernel is unaware that they exist.
- User threads are created, synchronized, scheduled, and destroyed entirely in user mode, without kernel threads (fast and cheap)
- No switching between user mode and kernel mode is needed, so performance is higher.
- A much larger number of threads can be supported; some high-performance database systems have used this one-to-many model
- Disadvantage:
- Thread operations (creation, switching, scheduling) must be implemented by the user program itself, which is extremely complex
- Since the operating system allocates processor resources only to processes, problems such as "how should a blocking call be handled?" and "how should threads be mapped onto multiple processors?" become extremely difficult or even impossible to solve
3. Hybrid implementation: user threads + lightweight processes
- User threads and lightweight processes coexist, in a many-to-many relationship
- User threads are still all built in user space
- Lightweight processes act as a bridge between kernel threads and user threads
Java threads
- The implementation varies by operating system
- On Windows and Linux, Java threads are implemented with lightweight processes, i.e., Java threads map one-to-one to kernel threads.
Two, Java thread scheduling
Overview: thread scheduling is the process by which the system assigns processor time to threads
- Collaborative thread scheduling
- Preemptive thread scheduling
1. Collaborative scheduling
- A thread's execution time is controlled by the thread itself
- When a thread finishes its work, it actively notifies the system to switch to another thread
- The implementation is simple, and because switches are initiated by the thread itself there are generally no synchronization problems
- Disadvantage:
- Execution time is out of the system's control; if a thread never yields the processor, the whole program blocks
2. Preemptive scheduling
- Each thread's execution time is allocated by the system, and thread switching is not decided by the thread itself
- In Java, Thread.yield() lets a thread give up its current time slice, but a thread has no way to actively acquire more time
- Java uses preemptive scheduling
- The whole program will not freeze because a single thread blocks
- In Java:
- You can set a thread's priority to suggest that the system allocate more time to it (a sketch follows below)
- Java threads map to the platform's native thread implementation, so this suggestion does not necessarily take effect
- The system may also adjust priorities on its own
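A small sketch of these hints (illustrative only; neither yield() nor priorities give any scheduling guarantee):

```java
public class SchedulingHints {
    public static void main(String[] args) {
        Thread background = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                // ... some low-priority work ...
                Thread.yield(); // hint: give up the current time slice
            }
        });
        // Priorities are only suggestions; the mapping to native priorities
        // is platform-dependent, and the OS may adjust them itself.
        background.setPriority(Thread.MIN_PRIORITY);
        background.start();
        background.interrupt(); // stop the sketch's worker promptly
    }
}
```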
Three, Thread state transitions
1. New:
- The thread has been created but not yet started
2. Runnable:
- Runnable covers both the Running and Ready states of the operating system: the thread is either executing or waiting for the CPU to allocate time to it
3. Waiting (indefinitely):
- Threads in this state are not allocated CPU time and must be woken up explicitly by another thread. The following calls put a thread into indefinite waiting:
- Object.wait() with no timeout
- Thread.join() with no timeout
- LockSupport.park()
4. Timed waiting:
- A thread in this state is not allocated CPU time, but it does not need to be woken by another thread; the system wakes it automatically after a set period. The following calls put a thread into timed waiting:
- Thread.sleep()
- Object.wait() with a timeout
- Thread.join() with a timeout
- LockSupport.parkNanos()
- LockSupport.parkUntil()
5. Blocked:
- The thread is blocked
- Blocked state: the thread is waiting to acquire an exclusive lock; it leaves this state when another thread releases that lock
- Difference from the waiting states: a waiting thread waits for a time period to elapse or for a wake-up from another thread, whereas a blocked thread is waiting to enter a synchronized region
6. Terminated:
- The state of a thread that has finished executing
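A sketch that observes some of these states directly through Thread.getState() (timings are illustrative; Thread.sleep is only a crude way to let the worker reach wait()):

```java
public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();
        Thread t = new Thread(() -> {
            synchronized (lock) {
                try {
                    lock.wait(); // releases the lock and waits indefinitely
                } catch (InterruptedException ignored) { }
            }
        });

        System.out.println(t.getState()); // NEW
        t.start();
        Thread.sleep(100);                // give t time to reach wait()
        System.out.println(t.getState()); // WAITING (no timeout was given)

        synchronized (lock) {
            lock.notify();                // wake t; it must re-acquire the lock
        }
        t.join();
        System.out.println(t.getState()); // TERMINATED
    }
}
```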