I. Preface

This second article on the JVM is, again, mostly a summary. It may feel a bit dry, but the material is essential for interviews and for virtual machine tuning. Without further ado, let’s get to the point.

II. A rough division of the JVM memory model

Let’s get straight to this:

I believe this diagram is familiar to many of you, though perhaps some of you have only ever heard of the heap and the stack. Here’s a quick description of what each area is used for:

  1. Stack: usually holds our local variable table. A local variable of a primitive type lives directly on the stack; an object-type variable actually lives in the heap, and the stack stores only the reference address of the object.
  1. Heap: stores objects.
  1. Native method stack: the stack used by native methods. If you have ever looked at the start() method of the Thread class, you will see that, traced to the end, it calls a start0() method marked with the native modifier, which means it makes a cross-language call, usually into a C or C++ library.
  1. Method area: our constants, static variables, and class meta-information live in the method area, also known as the metaspace. What kind of meta-information? If you read my last article, you might have a clue: it is all the information about a class that is generated after the class is loaded into JVM memory. Note that this area uses direct memory, not memory allocated to the virtual machine.
  1. Program counter: records where the Java bytecode execution currently is; you can roughly think of it as the line number you see while debugging. This is also crucial: with multithreading, so many threads are suspended and context-switched, so how else would each thread know which line of code to execute next when it is woken up?
  1. Class loading subsystem: used to load classes into the JVM.
  1. Bytecode execution engine: as the name suggests, it executes the bytecode. (A short code sketch just below maps these areas to concrete code.)
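To tie these areas to concrete code, here is a minimal sketch (the class and variable names are made up for illustration), with comments noting which region each piece ends up in according to the descriptions above:

public class MemoryAreasDemo {            // class meta-information -> method area (metaspace)

    private static int visits = 0;        // static variable -> method area, per the description above

    public void run() {                   // each call to run() pushes a new stack frame
        int counter = 42;                 // primitive local -> this frame's local variable table (stack)
        Object data = new Object();       // the Object instance -> heap; "data" on the stack holds only its reference
        visits++;                         // the program counter tracks which bytecode instruction runs next
        System.out.println(counter + " " + data);
    }

    public static void main(String[] args) {
        new MemoryAreasDemo().run();
    }
}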

I’m sure you noticed the two color blocks in the diagram: thread-private and thread-shared. What does that mean? Thread-shared means all threads share that one piece of memory, while each thread gets its own copy of the thread-private areas. If this is not entirely clear yet, please keep reading; a lot of these pieces only make sense once you put them together.

III. The JVM stack in detail

Let’s start with a simple piece of code:

/**
 * A simple Java program, used to illustrate the stack relationship.
 *
 * @Author: deadline
 * @Date: 2021-02-27
 */
public class JvmTestForStack {

    public int count() {
        int a = 1;
        int b = 2;
        int c = a + b;
        return c;
    }

    public static void main(String[] args) {
        JvmTestForStack jvmTestForStack = new JvmTestForStack();
        jvmTestForStack.count();
    }
}

Here’s another picture:

The code and the diagram go together; read them side by side and you should get the idea, but let me spell it out a little more:

You may have heard of a stack: a data structure that is first in, last out. I said earlier that the stack is private to each thread, meaning each thread is allocated its own small chunk of stack memory, as shown in the figure above. At this point we have to talk about stack frames. What is a stack frame? Every time a method is called, a small piece of memory is pushed onto that thread’s stack; this piece of memory is called a stack frame. The action is like loading a bullet into a magazine: the first one in is the last one to fire. In the code above, the main() method enters the stack first, so its frame sits at the bottom, followed by the frame for count(). When count() finishes executing, its frame is popped and destroyed immediately.
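If you want to see these frames with your own eyes, one quick way (my own throwaway sketch, not part of the original example) is to dump the current thread’s stack trace from inside count(): the innermost frame is printed first, so count() shows up above main(), matching the last-in-first-out order just described.

public class StackFrameDemo {

    public int count() {
        // Each element is one frame of the current thread's stack,
        // printed from the innermost frame outwards
        // (the very first element is getStackTrace() itself).
        for (StackTraceElement frame : Thread.currentThread().getStackTrace()) {
            System.out.println(frame);
        }
        return 1 + 2;
    }

    public static void main(String[] args) {
        new StackFrameDemo().count();
    }
}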

Combined with all of the above, I’m sure you already understand most of this, but stopping here would obviously not be enough, so let me also cover the operand stack and dynamic linking.

First, decompile the class’s bytecode with javap -v xxx.class:

Constant pool:
   #1 = Methodref          #5.#27         // java/lang/Object."<init>":()V
   #2 = Class              #28            // jvm/JvmTestForStack
   #3 = Methodref          #2.#27         // jvm/JvmTestForStack."<init>":()V
   #4 = Methodref          #2.#29         // jvm/JvmTestForStack.count:()I
   #5 = Class              #30            // java/lang/Object
   #6 = Utf8               <init>
   #7 = Utf8               ()V
   #8 = Utf8               Code
   #9 = Utf8               LineNumberTable
  #10 = Utf8               LocalVariableTable
  #11 = Utf8               this
  #12 = Utf8               Ljvm/JvmTestForStack;
  #13 = Utf8               count
  #14 = Utf8               ()I
  #15 = Utf8               a
  #16 = Utf8               I
  #17 = Utf8               b
  #18 = Utf8               c
  #19 = Utf8               main
  #20 = Utf8               ([Ljava/lang/String;)V
  #21 = Utf8               args
  #22 = Utf8               [Ljava/lang/String;
  #23 = Utf8               jvmTestForStack
  #24 = Utf8               MethodParameters
  #25 = Utf8               SourceFile
  #26 = Utf8               JvmTestForStack.java
  #27 = NameAndType        #6:#7           // "<init>":()V
  #28 = Utf8               jvm/JvmTestForStack
  #29 = NameAndType        #13:#14         // count:()I
  #30 = Utf8               java/lang/Object

This is the constant pool. Looking at it, you can see that our Java code has been decomposed into symbols, for example #19 is the symbol main and #7 is the symbol ()V. These symbols are stored in the method area. Now look at the decompiled code below:

public int count();
    descriptor: ()I
    flags: ACC_PUBLIC
    Code:
      stack=2, locals=4, args_size=1
         0: iconst_1
         1: istore_1
         2: iconst_2
         3: istore_2
         4: iload_1
         5: iload_2
         6: iadd
         7: istore_3
         8: iload_3
         9: ireturn

This is the decompiled bytecode of the count() method. Starting from 0: iconst_1, and reading it together with the stack memory diagram above, we can see what count() is really doing under the hood.
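To make the listing easier to follow, here is the same bytecode again with a comment on each instruction describing what happens on the operand stack and in the local variable table (slot 0 holds this, slots 1 to 3 hold a, b, c); the comments are my own annotations, not javap output:

0: iconst_1   // push the constant 1 onto the operand stack
1: istore_1   // pop it into local variable slot 1 (a)
2: iconst_2   // push the constant 2
3: istore_2   // pop it into local variable slot 2 (b)
4: iload_1    // push a back onto the operand stack
5: iload_2    // push b (stack depth is now 2, matching stack=2 above)
6: iadd       // pop both, push their sum 3
7: istore_3   // pop the sum into local variable slot 3 (c)
8: iload_3    // push c
9: ireturn    // pop c and return it to the caller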

With the above foreshadowing, we can now talk about these two concepts in detail:

Dynamic linking: take a closer look at the line #4 = Methodref #2.#29 in the constant pool above. #2 points to #28 = jvm/JvmTestForStack, and #29 points to #13 = count and #14 = ()I, so chained together this entry stands for jvm/JvmTestForStack.count():I. As I said before, when the JVM reaches the line of code that calls count(), it does not yet know where the actual bytecode of count() lives; these are only symbolic references. So at execution time a conversion step is needed: the symbolic references are resolved into direct references, which tell the JVM exactly where in memory to find the executable bytecode when this call is reached.
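To see where this symbolic reference gets used, here is a rough sketch of what javap -v prints for the main() method of the same class; I am reconstructing it from a similar class of my own, so the exact offsets and constant-pool indices on your machine may differ:

public static void main(java.lang.String[]);
    descriptor: ([Ljava/lang/String;)V
    flags: ACC_PUBLIC, ACC_STATIC
    Code:
      stack=2, locals=2, args_size=1
         0: new           #2   // class jvm/JvmTestForStack
         3: dup
         4: invokespecial #3   // Method "<init>":()V
         7: astore_1
         8: aload_1
         9: invokevirtual #4   // Method count:()I
        12: pop
        13: return

The instruction invokevirtual #4 is exactly the symbolic reference discussed above; at run time it is resolved into a direct reference pointing at the executable bytecode of count().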

Operand stack: after the dynamic linking described above, the bytecode execution engine finds the actual bytecode of the count() method. The JVM can read this bytecode and, following its rules, translate it into machine instructions the hardware can execute; after all, the computer itself only understands zeros and ones. That is where the stack memory diagram above comes in: executing the bytecode is a sequence of operations that push operands onto, and pop them off, the operand stack.

To repeat: for primitive types such as int, double, and boolean, the values are stored directly on the stack. For objects, in most cases only a reference lives on the stack, while the real object lives in the heap.

IV. The JVM heap

As usual, the picture first:

Our heap is divided into two parts: the young generation and the old generation, with the young generation taking roughly 1/3 of the entire heap and the old generation 2/3. The young generation is further divided into an Eden area and two Survivor areas: Eden takes 8/10 of the young generation, and each of the two Survivor areas takes 1/10.
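These ratios are just the HotSpot defaults: the 1:2 split between young and old generation corresponds to -XX:NewRatio=2, and the 8:1:1 split between Eden and the two Survivor areas corresponds to -XX:SurvivorRatio=8. A minimal sketch of how you might set them explicitly (the heap sizes here are arbitrary example values, and test.jar is just a placeholder):

java -Xms512M -Xmx512M -XX:NewRatio=2 -XX:SurvivorRatio=8 -jar test.jar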

In general, new objects are allocated in Eden. When Eden fills up, a minor GC is triggered. What kind of object counts as garbage? One that has no references left and can no longer be reached from any variable. The whole process is drawn clearly in the diagram, so I won’t repeat every step here. It is worth noting that, besides the dynamic age mechanism mentioned in the figure, there are several other ways for an object to be promoted to the old generation, which will be covered in the next article. When the old generation fills up, a full GC is triggered.

Why is Full GC so slow?

During a full GC, the JVM performs STW, which stands for “stop the world”: all user threads are paused while the full GC runs. Why STW? There are many different GC algorithms, but they all need to mark objects as garbage or non-garbage, so my guess is that if collection were happening while new garbage objects kept being generated, it would be like iterating over a collection while other threads keep adding and modifying its elements: serious consequences follow, such as the collection never finishing, thread-safety issues, and so on.

V. JVM memory parameter configuration

As usual, the picture first:

The figure above is an example of the memory-size parameters for each JVM region. What values are appropriate depends on the actual workload of your system; for how to configure the sizes of the various JVM memory regions properly, look forward to the next article.

Example of the complete parameters: java -XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=256M -jar test.jar. One more thing worth noting: I have mentioned repeatedly that the method area uses direct memory. Its default size is 21M, and when it fills up, a full GC is performed; based on the result of that GC, the JVM then dynamically grows or shrinks this area. If the GC frees a lot of space, the area is shrunk; if it frees little, the area is enlarged (but never beyond the value set by MaxMetaspaceSize). If MaxMetaspaceSize is not set, there is no upper limit, and the program may run several full GCs at startup while the metaspace is being resized.

As a little exercise: to simulate a stack overflow, call a method recursively; to simulate a heap overflow, keep stuffing new objects into a collection, heh heh… ^ ^
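If you want to try that suggestion, a minimal sketch might look like this (run one experiment at a time, ideally with a small -Xss or -Xmx so it fails quickly; the class name is made up):

import java.util.ArrayList;
import java.util.List;

public class OverflowDemo {

    // Unbounded recursion keeps pushing stack frames until StackOverflowError.
    static void recurse() {
        recurse();
    }

    public static void main(String[] args) {
        // Experiment 1: stack overflow -- uncomment to try.
        // recurse();

        // Experiment 2: heap overflow -- keep strong references so nothing
        // can be collected; eventually OutOfMemoryError: Java heap space.
        List<byte[]> hoard = new ArrayList<>();
        while (true) {
            hoard.add(new byte[1024 * 1024]); // roughly 1 MB per iteration
        }
    }
}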

That’s all for today’s summary and sharing. If anything here is wrong, please don’t hesitate to point it out in the comments.