Preface
This JVM series is my summary of knowledge points from my own learning. The goal is to help readers quickly grasp the key points of JVM-related knowledge, so the emphasis is inevitably selective; if you want to learn about the JVM in a more systematic and detailed way, you should still read professional books and documentation.
The main contents of this article:
- Overview of JVM memory areas
- How is heap space allocated by default? A heap overflow demo
- How is memory allocated when creating a new object?
- From the method area (PermGen) to Metaspace
- What is a stack frame? What’s in the stack frame? How to understand?
- Local method stack
- Program counter
- What is Code Cache?
Note: Please distinguish between the JVM memory structure (memory layout) and the JMM (Java Memory Model); they are different concepts!
Overview
Memory is a very important system resource. It is the intermediate warehouse and bridge between the hard disk and the CPU, carrying the operating system and the applications running in real time. The JVM memory layout defines the strategies by which the JVM requests, allocates, and manages memory while a Java program runs, ensuring that the JVM runs efficiently and stably.
The figure above depicts the classic Java memory layout. (The heap area is drawn a bit small here; it should actually be the largest area.)
If the areas are classified according to whether or not they are shared between threads, they can be grouped as in the following figure:
PS: Whether an area is shared between threads becomes natural to remember once you understand how each area is actually used. There is no need to memorize it.
Let’s take a look at the regions.
1. Heap
1.1 Introduction to heap area
Let's start with the heap. The heap is the area where OOM failures occur most often. It is the largest memory area, is shared by all threads, and holds almost all object instances and arrays. "All object instances and arrays are allocated on the heap" used to be an absolute statement, but as JIT compilation and escape analysis techniques have matured, stack allocation and scalar replacement optimizations mean this is no longer strictly true.
Escape analysis is part of JIT compilation optimization.
Recommended reading: An in-depth understanding of Escape analysis in Java
The Java heap is the primary area managed by the garbage collector and is often referred to as the "GC heap". From a memory-reclamation point of view, since most collectors today are generational, the Java heap can be subdivided into the young generation and the old generation, and more finely into Eden space, From Survivor space, To Survivor space, and so on. From a memory-allocation point of view, the thread-shared Java heap may contain multiple Thread Local Allocation Buffers (TLABs), which are thread-private. However, no matter how the heap is partitioned, what is stored there is always object instances; the purpose of the finer partitioning is to reclaim memory better or to allocate it faster.
1.2 Adjusting the heap size
According to the Java Virtual Machine specification, the Java heap can be in a physically discontinuous memory space, as long as it is logically contiguous, like our disk space. This can be either fixed size at implementation time or dynamically adjusted at run time.
How do you adjust?
You can set the initial and maximum heap sizes with JVM parameters, for example -Xms256m -Xmx1024m, where -X marks a JVM runtime parameter, ms stands for memory start (the initial heap size), and mx stands for memory max (the maximum heap size).
It is important to note that, under normal circumstances, the heap keeps expanding and shrinking while the server is running, which creates unnecessary pressure on the system. In production environments, -Xms and -Xmx are therefore usually set to the same value to avoid the extra overhead of resizing the heap after GC.
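For example, a minimal sketch of such a startup command (app.jar is just a placeholder for your application):

java -Xms1024m -Xmx1024m -jar app.jar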
1.3 Default space allocation for the heap
Let me also emphasize how heap space is typically divided by default: the young generation versus the old generation, and Eden versus the two Survivor spaces within the young generation.
Somebody might ask: where do these ratios come from? And if I want to change them, how do I configure that?
Let me show you how to look at the virtual machine's default configuration. To view all default JVM parameters for the current JDK version, run the following command:
java -XX:+PrintFlagsFinal -version
The output
The output runs to several hundred lines; let's look at two parameters related to heap memory allocation.
> java -XX:+PrintFlagsFinal -version
[Global flags]
    ...
    uintx InitialSurvivorRatio                     = 8
    uintx NewRatio                                 = 2
    ...
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
Parameter interpretation
| Parameter | Role |
|---|---|
| -XX:InitialSurvivorRatio | Initial Eden/Survivor space ratio of the young generation |
| -XX:NewRatio | Memory ratio of the old generation to the young generation |
Since the young generation consists of Eden + S0 + S1, with the default ratios above, if Eden is 40 MB then each of the two Survivor spaces is 5 MB and the whole young generation is 50 MB. From that, the old generation works out to 100 MB and the total heap size to 150 MB.
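If you would rather set these ratios explicitly than rely on the defaults, a startup command might look roughly like the following sketch (MyApp is a placeholder class name; -XX:SurvivorRatio=8 sets Eden : S0 : S1 to 8 : 1 : 1):

java -Xms150m -Xmx150m -XX:NewRatio=2 -XX:SurvivorRatio=8 -XX:+PrintGCDetails MyApp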
1.4 Heap overflow demo
import java.util.ArrayList;
import java.util.List;

/**
 * VM Args: -Xms10m -Xmx10m -XX:+HeapDumpOnOutOfMemoryError
 * @author Richard_Yi
 */
public class HeapOOMTest {
    public static final int _1MB = 1024 * 1024;

    public static void main(String[] args) {
        List<byte[]> byteList = new ArrayList<>(10);
        for (int i = 0; i < 10; i++) {
            byte[] bytes = new byte[2 * _1MB];
            byteList.add(bytes);
        }
    }
}
The output
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid32372.hprof ...
Heap dump file created [7774077 bytes in 0.009 secs]
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
at jvm.HeapOOMTest.main(HeapOOMTest.java:18)
-XX:+HeapDumpOnOutOfMemoryError tells the JVM to dump the heap when an OOM error occurs. This is especially important for OOM errors that only show up once every few months.
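As an illustration (my own sketch, not from the original text), the flag is often combined with -XX:HeapDumpPath so the resulting .hprof file lands in a known directory and can later be opened in a tool such as Eclipse MAT or jvisualvm; the path below is just a placeholder:

java -Xms10m -Xmx10m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/dumps HeapOOMTest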
Memory allocation when creating a new object
After the introduction to the heap, let's strike while the iron is hot and look at how the JVM allocates memory when a new object is created.
Most objects are created in Eden. When Eden fills up, a Young Garbage Collection (YGC) is triggered. During this collection, Eden is swept: objects that are no longer referenced are reclaimed directly, while objects that are still alive are moved to a Survivor space. The two Survivor spaces are called S0 and S1; at each YGC, surviving objects are copied into the currently unused one, the space currently in use is then cleared completely, and the roles of the two spaces are swapped. If an object that YGC wants to move is larger than the Survivor space can hold, it is promoted directly to the old generation. The -XX:MaxTenuringThreshold parameter configures the age at which an object is promoted from the young generation to the old generation; the default value is 15, i.e. after an object has survived enough copies between the Survivor spaces to reach that age, it is moved to the old generation.
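To observe this promotion behavior yourself, a rough sketch of a startup command might look like this (MyApp is a placeholder; -XX:+PrintTenuringDistribution prints the age distribution of Survivor-space objects at each YGC):

java -Xms200m -Xmx200m -Xmn50m -XX:MaxTenuringThreshold=10 -XX:+PrintTenuringDistribution -XX:+PrintGCDetails MyApp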
For those unfamiliar with some of the garbage collection terms mentioned above, check out the resources or the garbage collection section of this series.
Metaspace
In the HotSpot JVM, the permanent generation (≈ the method area) was used to hold metadata for classes and methods, as well as the constant pool, e.g. Class and Method information. Whenever a class was loaded for the first time, its metadata was put into the permanent generation.
The permanent generation has a size limit, so if too many classes are loaded it is likely to fill up, producing the notorious java.lang.OutOfMemoryError: PermGen space and forcing us to tune the virtual machine.
So why was PermGen removed from the HotSpot JVM in Java 8? (See JEP 122: Remove the Permanent Generation.)
- Because PermGen memory often overflows, causing the annoying java.lang.OutOfMemoryError: PermGen, the JVM developers wanted this memory to be managed more flexibly and to stop producing OOM errors so often.
- Removing PermGen also makes it easier to merge the HotSpot JVM with the JRockit VM, since JRockit has no permanent generation.
For the reasons above, PermGen was eventually removed; the method area moved to Metaspace, and the string constant pool moved to the heap.
To be precise, the string constant pool was moved from the PermGen area to heap memory as early as Java 7. In Java 8, PermGen was replaced by Metaspace, and the remaining contents, such as class metadata, fields, static properties, methods, and constants, were moved to the Metaspace area; for example, the class metadata of java/lang/Object, the static property System.out, and the integer constant 100000.
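A small illustration of the string constant pool now living on the heap (my own sketch, not from the original article): since Java 7, String.intern() can record a reference to a string object that is already on the heap instead of copying it into PermGen, so the comparison below prints true on JDK 7+ but false on JDK 6. The string "java2" is used instead of "java" because the literal "java" is already interned at VM startup.

public class InternTest {
    public static void main(String[] args) {
        String s = new StringBuilder("ja").append("va2").toString();
        // JDK 7+: the pool stores a reference to the heap object, so this prints true.
        // JDK 6: intern() copies the string into PermGen, so this prints false.
        System.out.println(s.intern() == s);
    }
}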
Metaspace is similar in nature to the permanent generation in that both are implementations of the method area from the JVM specification. However, the biggest difference is that Metaspace is not part of the virtual machine's memory: it uses native memory instead. Therefore, by default, the size of Metaspace is limited only by the available native memory. (Like direct memory, it lives in native memory.)
In JDK 8, class metadata is now stored in the native heap, and this space is called Metaspace.
The corresponding JVM parameters:
| Parameter | Role |
|---|---|
| -XX:MetaspaceSize | Initial size allocated to Metaspace (in bytes) |
| -XX:MaxMetaspaceSize | Maximum size of Metaspace; exceeding it triggers a Full GC. Unlimited by default, but it should be set according to the amount of system memory. The JVM adjusts the current Metaspace size dynamically. |
| -XX:MinMetaspaceFreeRatio | Minimum percentage of Metaspace capacity that should be free after a GC, to reduce garbage collections caused by having to allocate more space |
| -XX:MaxMetaspaceFreeRatio | Maximum percentage of Metaspace capacity that should be free after a GC, to reduce garbage collections caused by freeing space |
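A minimal sketch of setting these on startup (MyApp is a placeholder; the values are examples and should be chosen based on how many classes your application loads):

java -XX:MetaspaceSize=128m -XX:MaxMetaspaceSize=256m MyApp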
Read on: Two good articles about Metaspace.
Metaspace in Java 8
lovestblog.cn/blog/2016/1…
Java virtual machine stack
The JVM creates a separate stack for each thread when the thread is created, so the lifecycle of the virtual machine stack is the same as that of its thread, and the stack is thread-private. Except for Native methods, every Java method call and execution is carried out through the Java virtual machine stack (in cooperation with the program counter, the heap, and the Metaspace), which makes the Java virtual machine stack one of the cores of the virtual machine's execution engine. The element that is pushed onto and popped off the Java virtual machine stack is called a "stack frame."
A stack frame is the data structure used by the virtual machine to support method invocation and execution. It stores the method's local variable table, operand stack, dynamic linking information, and method return address. The life of each method, from invocation to completion, corresponds to one stack frame being pushed onto and popped off the virtual machine stack.
The stack corresponds to a thread; a stack frame corresponds to a method.
In an active thread, only the frame at the top of the stack is valid; it is called the current stack frame, and the method it belongs to is the current method. While the execution engine is running, all instructions operate only on the current stack frame. A StackOverflowError, on the other hand, is thrown when a thread requests more stack depth than the virtual machine allows, and is typically seen with deeply recursive methods.
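A minimal sketch (my own illustration, not from the original article) that provokes a StackOverflowError with unbounded recursion; the -Xss value is just an example stack size:

/**
 * VM Args: -Xss256k
 */
public class StackSOFTest {

    private int stackDepth = 0;

    public void recurse() {
        stackDepth++;
        recurse(); // no termination condition, so the stack eventually overflows
    }

    public static void main(String[] args) {
        StackSOFTest test = new StackSOFTest();
        try {
            test.recurse();
        } catch (StackOverflowError e) {
            System.out.println("stack depth reached: " + test.stackDepth);
            throw e;
        }
    }
}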
The virtual machine stack operates on the active stack frame of each method by pushing and popping. After a method finishes normally, control jumps back to another stack frame; if an exception occurs during execution, exception backtracking is performed and the return address is determined by the exception handler table.
As you can see, stack frames play an important role in the overall JVM architecture. Storage information in stack frames is also described in detail below.
1. Local variable table
A local variable table is an area where method parameters and local variables defined within a method are stored.
The memory space required for the local variable table is allocated at compile time. When entering a method, how much local variable space the method needs to allocate in the frame is completely determined, and the size of the local variable table does not change during the method run.
I’m going to go straight to the code, just to make it easier to understand.
public int test(int a, int b) {
Object obj = new Object();
return a + b;
}
If a local variable is one of Java's eight primitive types, its value is stored directly in the local variable table. If it is a reference type, as with the new Object() above or a new String, the local variable table stores only the reference, while the instance itself is stored on the heap.
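To see the local variable table yourself, you can compile with debug information and disassemble the class. The sketch below assumes a hypothetical class LocalVarTest containing the test method above; the exact offsets will vary, but the LocalVariableTable section looks roughly like this (slot 0 is this, slots 1 and 2 hold the parameters a and b, and a later slot holds obj):

> javac -g LocalVarTest.java
> javap -v LocalVarTest.class

LocalVariableTable:
    Start  Length  Slot  Name   Signature
        0      12     0  this   LLocalVarTest;
        0      12     1     a   I
        0      12     2     b   I
        8       4     3   obj   Ljava/lang/Object;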
2. Operand stack
The operand stack is a last-in, first-out stack structure. The interpreting execution engine of the Java virtual machine is called a "stack-based execution engine," and the "stack" it refers to is the operand stack. When the JVM creates a stack frame for a method, it also creates the method's operand stack inside that frame, so that the method's instructions have a place to do their work.
Let's look at it in practice.
/**
 * @author Richard_yyf
 */
public class OperandStackTest {

    public int sum(int a, int b) {
        return a + b;
    }
}
After compiling to a .class file, disassemble it to see the bytecode instructions:
> javac OperandStackTest.java
> javap -v OperandStackTest.class > 1.txt
public int sum(int, int);
descriptor: (II)I
flags: ACC_PUBLIC
Code:
stack=2, locals=3, args_size=3 // Maximum stack depth is 2 and number of local variables is 3
0: iload_1 // local variable 1 is pushed
1: iload_2 // local variable 2 is pushed
2: iadd // Add the two elements at the top of the stack
3: ireturn
LineNumberTable:
line 10: 0
3. Dynamic linking
Each stack frame contains a reference to the current method in the run-time constant pool, which is used to support dynamic linking during method invocation.
4. Method return address
There are two exits when a method executes:
- Normal exit: the method runs to completion and executes one of the return bytecode instructions, such as return, ireturn, areturn, etc.
- Abnormal exit: an exception is thrown and is not handled inside the method.
In either case, the method returns to wherever it was called from, and exiting a method is equivalent to popping the current stack frame off the stack. This can be done in one of three ways:
- The return value is pushed onto the operand stack of the caller's stack frame
- The exception message is thrown to a stack frame that can handle it
- The PC counter points to the next instruction after the method call
Read on: JVM machine instruction set diagrams
Native method stack
The native method stack is very similar to the virtual machine stack. The difference is that the virtual machine stack serves the execution of Java methods (i.e. bytecode), while the native method stack serves the Native methods used by the virtual machine. The virtual machine specification does not mandate the language, implementation, or data structures of the native method stack, so specific virtual machines are free to implement it as they like. Some virtual machines (such as the Sun HotSpot VM) simply merge the native method stack with the virtual machine stack. Like the virtual machine stack, the native method stack can throw StackOverflowError and OutOfMemoryError exceptions.
Program counter
The program counter register is a small, thread-private memory area. It can be thought of as the line-number indicator of the bytecode currently being executed by the thread. What does that mean?
Plain-language version: code runs inside threads, and threads may be suspended. That is, the CPU executes thread A, suspends it, switches to thread B, and later comes back to thread A. The CPU then needs to know where in thread A it left off, and the program counter tells it.
Because Java virtual machine multithreading is implemented by switching between threads and allocating processor time slices, the CPU can only run after loading data into its registers, which hold the context of the instruction being executed. Due to time-slice rotation, at any given moment one processor (or one core of a multi-core processor) executes only one instruction of one thread, even though many threads are running concurrently.
Therefore, to restore the correct execution position after a thread switch, each thread needs its own independent program counter; the counters of different threads do not affect each other and are stored independently. After each thread is created, it gets its own program counter and stack frames. The program counter stores the offset and line-number indicator of the instruction being executed, and resuming a thread depends on it. This is also the only area for which no out-of-memory condition occurs.
Direct memory
Direct Memory is not part of the run-time data region of the virtual machine, nor is it defined in the Java Virtual Machine specification. But this part of memory is also frequently used and can cause OutofMemoryErrors, so we’ll cover it here.
The NIO (New Input/Output) classes introduced in JDK 1.4 brought a Channel- and Buffer-based I/O model that can allocate off-heap memory directly using native libraries and then operate on it through a DirectByteBuffer object stored on the Java heap, which acts as a reference to this memory. This can significantly improve performance in some scenarios because it avoids copying data back and forth between the Java heap and the native heap.
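A minimal sketch (my own illustration, not from the original article) of allocating direct memory through NIO; the -XX:MaxDirectMemorySize value is just an example cap:

import java.nio.ByteBuffer;

/**
 * VM Args: -XX:MaxDirectMemorySize=64m
 */
public class DirectMemoryTest {
    public static void main(String[] args) {
        // The 32 MB buffer lives in native memory; only the small
        // DirectByteBuffer object that references it lives on the Java heap.
        ByteBuffer buffer = ByteBuffer.allocateDirect(32 * 1024 * 1024);
        buffer.putInt(42);
        buffer.flip();
        System.out.println(buffer.getInt()); // prints 42
    }
}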
Obviously, allocation of direct memory is not limited by the size of the Java heap, but since it is still memory, it is limited by the total amount of native memory (including RAM and swap or paging files) and by the processor's address space. If the total allocated exceeds the physical memory limits, an OOM error will still occur.
Code Cache
In short, the JVM code cache is the area where the JVM stores bytecode that has been compiled into native code. Each block of executable native code is called an nmethod; an nmethod may correspond to a complete Java method or an inlined one.
The just-in-time (JIT) compiler is the biggest consumer of the code cache area, which is why some developers refer to this memory as the JIT code cache.
The memory occupied by this compiled code is called the CodeCache area. Normally we don't need to care about this area, and most developers are not familiar with it. If it overflows, you will see java.lang.OutOfMemoryError: code cache in the log.
Diagnostic options
| Option | Default | Description |
|---|---|---|
| PrintCodeCache | false | Whether to print CodeCache usage before the JVM exits |
| PrintCodeCacheOnCompilation | false | Whether to print CodeCache usage after each method is JIT-compiled |
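For example, a minimal sketch of enabling the first option (adding -XX:ReservedCodeCacheSize here as an assumption about how you might also cap the code cache size):

java -XX:ReservedCodeCacheSize=128m -XX:+PrintCodeCache -version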
Further reading: Introduction to JVM Code Cache
References
- Understanding the Java Virtual Machine in Depth – Zhou Zhiming
- Code Efficient
- Metaspace in Java 8
- JVM machine instruction set diagram
- Introduction to JVM Code Cache
If this article is helpful to you, I hope you can give a thumbs up. This is the biggest motivation for me.