Preface

If you have any problems with the wording or find anything hard to understand, please feel free to point it out. The purpose of this article is not to go into every detail, but to get the facts across as efficiently as possible.

1. A basic introduction to the JVM

JVM is short for Java Virtual Machine. It is an abstract computer defined by a specification, implemented by simulating various computer functions on a real machine.

Jargon aside, the JVM is basically a small computer running on Windows, Linux, or whatever. It interacts with the operating system rather than directly with the hardware; the operating system does the job of talking to the hardware for us.

1.1 How Java files are run

For example, suppose we write a HelloWorld.java. Strictly speaking, HelloWorld.java is just a text file; it simply happens to be written in Java syntax.

The JVM cannot read text files, so the file first needs to be compiled into a HelloWorld.class, a binary bytecode file that the JVM can read.
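For example, the compile-and-run steps look like this (assuming the file is named HelloWorld.java):

javac HelloWorld.java

java HelloWorld

javac produces HelloWorld.class, and the java command starts a JVM that loads and runs it.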

① Class loaders

For the JVM to execute the .class file, it needs a class loader, which acts like a porter and moves the .class files into the JVM.

② Method area

The method area is used to store metadata such as class information, constants, static variables, and compiled code.

The class loader drops the .class file into this area first.

③ Heap

The heap mainly stores data such as object instances and arrays. Like the method area, it is a thread-shared area, which means it is not thread-safe.

④ Stack

The stack is where our code runs. Every method we write is run on the stack.

We may also have heard of the native method stack or the native method interface, but we won't discuss them much here, because they mostly deal with C code and have little to do with Java itself.

⑤ Program counter

It essentially acts as a pointer to the next line of code we need to execute. Like the stack, it is thread-private, meaning each thread has its own program counter, so there are no concurrency or multithreading problems here.

A small summary

  1. Java files are compiled into .class bytecode files
  2. The bytecode files are moved into the JVM by the class loader
  3. The virtual machine has five main areas: the method area and the heap are thread-shared areas (so thread-safety issues exist there), while the stack, the native method stack, and the program counter are thread-private areas (no thread-safety issues). JVM tuning revolves mainly around the heap and the stack

1.2 A simple code example

A simple Student class

A main method
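The original article shows these as screenshots; a minimal sketch consistent with the steps below might look like this (the field name and method bodies are my assumptions):

// Student.java
public class Student {
    private String name;

    public Student(String name) {
        this.name = name;
    }

    public void sayName() {
        System.out.println("name: " + name);
    }
}

// App.java
public class App {
    public static void main(String[] args) {
        Student student = new Student("tellUrDream");
        student.sayName();
    }
}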

Execute the main method as follows:

  1. The system starts a JVM process, finds the binary file named App.class on the classpath, and loads the App class information into the method area of the runtime data area; this process is called loading the App class
  2. The JVM finds App's main entry point and starts executing the main method
  3. The first line is Student student = new Student("tellUrDream");, so the JVM immediately loads the Student class and puts the Student class information into the method area
  4. After the Student class is loaded, the JVM allocates memory in the heap for a new Student instance and then calls the constructor to initialize it; the instance holds a reference to the type information of the Student class in the method area
  5. When student.sayName(); executes, the JVM finds the student object through its reference, then uses the type reference held by the object to locate the method table of the Student class in the method area and obtain the bytecode address of sayName()
  6. Execute sayName()

You don't have to dwell on this too much. The key points are: when an object instance is initialized, the class information is looked up in the method area, and when a method runs, it runs on the stack; to find a method, the JVM looks in the method table.

2. Introduction to class loaders

As mentioned earlier, the class loader is responsible for loading .class files (which begin with a specific magic-number file identifier). It loads the bytecode contents of the class file into memory and converts them into runtime data structures in the method area. The class loader is only responsible for loading the class file; whether it can actually run is decided by the execution engine.

2.1 The class loading process

A class goes through seven phases from being loaded into the virtual machine's memory to being released from it: loading, verification, preparation, resolution, initialization, use, and unloading. Verification, preparation, and resolution are collectively called linking.

2.1.1 Loading

  1. Load the class file into memory
  2. Convert its static storage structures into runtime data structures in the method area
  3. Generate a java.lang.Class object in the heap that represents this class and serves as the access entry for its data
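As a small illustration (my own sketch, not from the original article), the java.lang.Class object created during loading is the single access entry for a class's metadata:

public class ClassObjectDemo {
    public static void main(String[] args) throws ClassNotFoundException {
        Class<?> c1 = Class.forName("java.util.ArrayList");          // triggers loading if not loaded yet
        Class<?> c2 = new java.util.ArrayList<String>().getClass();
        // Both refer to the same Class object: one per loaded class (per class loader)
        System.out.println(c1 == c2);   // true
    }
}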

2.1.2 Linking

  1. Verification: ensures that the loaded class complies with the JVM specification and is safe, i.e. that the methods of the class being verified will not endanger the virtual machine at run time; it is a security check
  2. Preparation: allocates memory in the method area for static variables and sets them to their default values. For static int a = 3, a is set to 0 in this phase and only becomes 3 during initialization. (Note: only static variables, which live in the method area, are handled in the preparation phase, not instance variables, which live in the heap and are assigned when the object is instantiated)
  3. Resolution: the process by which the virtual machine replaces symbolic references in the constant pool with direct references (for example, the symbolic reference to java.util.ArrayList is replaced with a direct reference to the actual class)

2.1.3 Initialization

Initialization essentially executes the class constructor <clinit>() method, which performs the real assignments: this is where static int a = 3; actually gives a the value 3.
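A tiny sketch (my own illustration) of the difference between the preparation and initialization phases:

public class InitDemo {
    // Preparation: memory is allocated in the method area and a gets its default value 0.
    // Initialization: <clinit>() runs and a is actually assigned 3.
    static int a = 3;

    // Exception: a static final compile-time constant is stored via the ConstantValue
    // attribute and already equals 10 after the preparation phase.
    static final int b = 10;

    public static void main(String[] args) {
        System.out.println(a + ", " + b);   // prints: 3, 10
    }
}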

2.1.4 Unloading

The GC removes what is no longer needed from memory.

2.2 Loading sequence of class loaders

Class loaders form a hierarchy with the following priority order (a load request starts at the bottom and is delegated upward, as described in 2.3):

  1. Bootstrap ClassLoader: loads rt.jar
  2. Extension ClassLoader: loads the JAR packages in the extension directory
  3. App ClassLoader: loads the JAR packages and classes on the specified classpath
  4. Custom ClassLoader: a user-defined class loader
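You can observe this hierarchy by printing a few class loaders (a small sketch; on Java 9+ the Extension ClassLoader is replaced by the Platform ClassLoader):

public class LoaderDemo {
    public static void main(String[] args) {
        // our own class: loaded by the App ClassLoader
        System.out.println(LoaderDemo.class.getClassLoader());
        // its parent: the Extension ClassLoader (Platform ClassLoader on Java 9+)
        System.out.println(LoaderDemo.class.getClassLoader().getParent());
        // the parent of that is the Bootstrap ClassLoader, which is shown as null
        System.out.println(LoaderDemo.class.getClassLoader().getParent().getParent());
        // core classes such as String come from rt.jar and are loaded by Bootstrap, so this is also null
        System.out.println(String.class.getClassLoader());
    }
}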

2.3 Parent delegation mechanism

When a class loader receives a load request, it does not try to load the class itself first; it delegates the request to its parent. For example, if I want to new a Person, where Person is our own custom class, the request first reaches the App ClassLoader, which delegates upward. Only when the parent class loaders report that they cannot complete the request (that is, none of them can find the class to load) will the child class loader try to load it itself.

The benefit of this approach is that no matter which loader initially receives the request, the load of a core class is ultimately delegated up to the Bootstrap ClassLoader, which loads it from rt.jar. This ensures that different class loaders all end up with the same result for the same class.
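The delegation lives in java.lang.ClassLoader#loadClass; simplified (synchronization and bookkeeping omitted, so treat this as a sketch of the JDK source rather than compilable user code), it looks roughly like this:

protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
    // 1. check whether this loader has already loaded the class
    Class<?> c = findLoadedClass(name);
    if (c == null) {
        try {
            if (parent != null) {
                c = parent.loadClass(name, false);      // 2. delegate to the parent first
            } else {
                c = findBootstrapClassOrNull(name);     // 3. top of the chain: Bootstrap
            }
        } catch (ClassNotFoundException e) {
            // the parent could not find the class
        }
        if (c == null) {
            c = findClass(name);                        // 4. only now does this loader try itself
        }
    }
    if (resolve) {
        resolveClass(c);
    }
    return c;
}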

It is also a form of isolation that prevents our code from affecting the JDK's code. For example, suppose I write the following:

package java.lang;   // pretend to define our own java.lang.String

public class String {
    public static void main(String[] args) {
        System.out.println("hi");
    }
}


In this case our code is bound to report an error, because during loading the String class that is actually found is the String.class in rt.jar, and that class has no such main method.

3. The runtime data area

3.1 Native method stack and program counter

For example, if we open the source code of Thread, we will see that its start0 method is marked with the native keyword and has no method body. Such a method is a native method, implemented in C, and these methods run in an area called the native method stack.
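For reference, this is roughly what it looks like in the JDK source (simplified):

// Simplified from java.lang.Thread
public synchronized void start() {
    // ... thread-state checks and group bookkeeping omitted ...
    start0();                      // hands off to native code
}

private native void start0();      // no Java body: the implementation is C/C++ inside the JVM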

The program counter is essentially a pointer to the next instruction our program needs to execute. It is also the only memory area where an OutOfMemoryError cannot occur, and the memory it occupies is negligible. It simply records the line number of the bytecode instruction being executed by the current thread; the bytecode interpreter changes the value of this counter to select the next bytecode instruction to execute.

If a native method is being executed, this counter is empty (its value is undefined).

3.2 Method area

The main function of the method area is to store class metadata, constants, static variables, and so on. When it holds too much and a memory allocation request can no longer be satisfied, it reports an error.

3.3 VM Stack and VM heap

In a word: the stack manages running, the heap manages storage. The virtual machine stack is responsible for running code, and the heap is responsible for storing data.

3.3.1 Concept of VM Stack

It is the in-memory model for Java method execution. It stores local variables, dynamic links, and method exit information, and performs the push and pop operations of method calls; it is private to its thread. When we hear about the local variable table, we are also talking about the virtual machine stack.

public class Person {
    int a = 1;              // instance variable: stored with the Person object in the heap

    public void doSomething() {
        int b = 2;          // local variable: stored in the local variable table of the stack frame
    }
}


3.3.2 VM stack exceptions

If a thread requests a stack depth greater than the maximum depth allowed by the virtual machine stack, a StackOverflowError is thrown (this error often occurs with recursion). The Java virtual machine stack can also be dynamically expanded, but expansion keeps allocating memory, and an OutOfMemoryError is thrown when enough memory can no longer be allocated.
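A minimal sketch (my own example) that reproduces the recursion case; the depth reached depends on -Xss and the size of each frame:

public class StackDepthDemo {
    private static int depth = 0;

    private static void recurse() {
        depth++;          // each call pushes one more stack frame
        recurse();
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            System.out.println("StackOverflowError at depth " + depth);
        }
    }
}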

3.3.3 Life cycle of the VM stack

For stacks, there is no garbage collection. Once the program finishes running, the stack will free up space. The life cycle of the stack is consistent with the thread it is in.

Local variables of the eight primitive types, object reference variables, and instance method invocations are all allocated on the stack.

3.3.4 VM stack execution

What we often call stack frames in the JVM correspond to methods in Java; they too are stored on the stack.

The data in the stack is organized as stack frames, where a stack frame is the data set of a method and its runtime data. For example, when we execute a method A, a stack frame A1 is created and pushed onto the stack; likewise method B gets B1 and method C gets C1. When the thread finishes, the stack pops C1 first, then B1, then A1: last in, first out.
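In code form, a trivial sketch of that A → B → C call chain:

public class FrameOrderDemo {
    public static void main(String[] args) { a(); }   // main's frame is pushed first
    static void a() { b(); }                          // frame A1 pushed
    static void b() { c(); }                          // frame B1 pushed
    static void c() { }                               // frame C1 pushed last, popped first
}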

3.3.5 Local variable reuse

The local variable table is used to store method parameters and the local variables defined inside a method. Its capacity is measured in slots, the smallest unit; a slot can hold a data type of no more than 32 bits.

The VM accesses the local variable table by index, in the range [0, number of slots in the local variable table). A method's parameters are arranged in this table in a fixed order (we don't need to care exactly how). To save stack frame space, slots can be reused: once execution has moved past the scope of a variable, that variable's slot can be reused by other variables. Of course, as long as a slot still holds a reference, the garbage collector will not reclaim the memory it points to.

3.3.6 Concept of the VM heap

JVM memory is divided into heap memory and non-heap memory. Heap memory is further divided into the young generation and the old generation, while non-heap memory is the permanent generation. The young generation is divided into an Eden area and a Survivor area; the Survivor area is in turn divided into FromPlace and ToPlace, one of which is always empty. The default ratio of Eden : FromPlace : ToPlace is 8:1:1. This ratio can also be adjusted dynamically according to the rate at which objects are generated, via the -XX:+UsePSAdaptiveSurvivorSizePolicy parameter.

Heap memory holds objects; garbage collection is about collecting these objects and handing them to the GC algorithm. Non-heap memory, as we already mentioned, is the method area. In Java 1.8 the permanent generation has been removed and replaced by Metaspace. The main difference is that Metaspace does not live inside the JVM heap; it uses native memory. It comes with two parameters:

-XX:MetaspaceSize: the initial size of the metaspace, which controls when a GC is triggered for it

-XX:MaxMetaspaceSize: the maximum size of the metaspace, to prevent it from using too much physical memory


The general reason for the removal: it was part of merging the HotSpot JVM with the JRockit VM, since JRockit has no permanent generation; as a side benefit this also solves the permanent generation OOM problem.

3.3.7 The Eden area of the young generation

When we create an object, it is first placed in the Eden area. But the heap is shared among threads, so two threads could otherwise end up allocating at the same memory location. The JVM handles this by having each thread pre-apply for a contiguous chunk of memory in which its new objects are placed; when that chunk is not big enough, the thread applies for another one. This mechanism is called TLAB (Thread Local Allocation Buffer); look it up if you are interested.

When the Eden area is full, an operation called Minor GC (a GC that occurs in the young generation) is triggered, and the surviving objects are moved to the Survivor0 area. When Survivor0 is full, a Minor GC is triggered again and its surviving objects are moved to Survivor1; at this point the from and to pointers are exchanged, which guarantees that there is always an empty survivor area that to points to. Objects that survive enough Minor GCs (15 by default, corresponding to the VM parameter -XX:MaxTenuringThreshold) are promoted to the old generation. Why 15? Because HotSpot records the age in the mark word of the object header, which has only 4 bits allocated for it, so it can count only up to 15. When the old generation fills up, the Full GC we hear about most is triggered, during which all application threads stop and wait for the GC to complete. Therefore, for applications with high response-time requirements, Full GC should be minimized to avoid response timeouts.
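To watch this behavior yourself, you can run a small allocation loop with flags like the following and read the GC log (an illustrative combination; GcDemo is a placeholder class name):

java -Xms20m -Xmx20m -Xmn10m -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=15 -XX:+PrintGCDetails GcDemo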

If an error occurs here, it means the heap memory of the virtual machine is insufficient. The reason may be that the heap size is set too small, which can be adjusted with the -Xms and -Xmx parameters. Or it may be that the objects created in the code are large and numerous and keep being referenced, so they cannot be garbage collected for a long time.

3.3.8 How to determine whether an object can be reclaimed

The program counter, the virtual machine stack, and the native method stack live and die with their thread: memory allocation and reclamation in these areas are deterministic, and the memory is reclaimed automatically when the thread ends, so there is no need to worry about garbage collection there. In contrast, the Java heap and the method area are shared by all threads, and memory allocation and reclamation there are dynamic. So the garbage collector only cares about the heap and the method area.

We must determine which objects are alive and which are dead before we can reclaim them. Two basic approaches are described below.

1. Reference counting: attach a reference counter to each object; increment it every time the object is referenced and decrement it when a reference expires. When the counter reaches 0, the object can no longer be used. However, objects that reference each other in a cycle can never be collected by the GC this way (see the sketch after these two approaches).

2. Reachability analysis: this is essentially a graph traversal. It takes a set of objects called GC Roots as the starting set of live objects and searches downward from them; the path searched is called a reference chain, and every object reachable from the set is added to it. When an object has no reference chain connecting it to the GC Roots, the object is unreachable and thus unusable. Mainstream commercial languages such as Java and C# rely on this to determine whether an object is alive.
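A small sketch (my own illustration) of the circular-reference case that defeats reference counting but not reachability analysis:

public class CycleDemo {
    static class Node {
        Node ref;
    }

    public static void main(String[] args) {
        Node a = new Node();
        Node b = new Node();
        a.ref = b;    // a and b reference each other
        b.ref = a;

        a = null;     // with reference counting, each object would still have a count of 1 and leak;
        b = null;     // with reachability analysis, neither is reachable from GC Roots, so both can be collected

        System.gc();  // only a suggestion to the JVM, not a guaranteed collection
    }
}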

Objects that can serve as GC Roots in the Java language fall into the following categories:

  1. Objects referenced in the virtual machine stack (in the local variable table of the stack frame), i.e. local variables
  2. Objects referenced by static variables in the method area
  3. Objects referenced by constants in the method area
  4. Objects referenced by JNI (i.e. native methods) in the native method stack (JNI is how the Java virtual machine calls into corresponding C functions; new Java objects can also be created through JNI functions, and JNI's local or global references mark the objects they point to as not collectible)
  5. Java threads that have been started and not terminated

The advantage of this approach is that it solves the circular reference problem, but it costs resources and time. It also requires that references do not change during the analysis, so all application threads need to be paused while it runs.

3.3.9 How an object is declared dead

The first thing that must be mentioned is a method called finalize().

finalize() is a method of the Object class, and an object's finalize() method is automatically called by the system only once. An object that escapes death in finalize() will not have it called a second time.

One more thing: calling finalize() in your program to "save" an object is not recommended; it is better to forget that this method exists in Java. It executes at an indeterminate time, or possibly not at all (for example, on an abnormal exit of the Java program), it is expensive to run, and there is no guarantee of the order in which the finalize() methods of individual objects are called (they may even be called from different threads). In Java 9 it has been marked as deprecated, and java.lang.ref.Cleaner (together with the strong, soft, weak, and phantom reference mechanisms) is gradually replacing it, being more lightweight and reliable than finalize().

It takes at least two markings to determine that an object is dead:

  1. If, after reachability analysis, the object has no reference chain connecting it to GC Roots, it is marked for the first time and filtered. The filter condition is whether it is necessary to execute the object's finalize() method; if it is, the object is placed in a queue called the F-Queue
  2. The GC marks the objects in the F-Queue a second time. If an object re-links itself to any object on a reference chain inside its finalize() method, it is moved out of the "to be reclaimed" set during this second marking. If the object still has not escaped by then, it can only be reclaimed
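The classic "escape once" demonstration looks roughly like this (a sketch; the outcome depends on GC and finalizer-thread timing, and remember that finalize() is deprecated since Java 9):

public class FinalizeEscapeDemo {
    private static FinalizeEscapeDemo SAVE_HOOK = null;

    @Override
    protected void finalize() throws Throwable {
        super.finalize();
        System.out.println("finalize() executed");
        SAVE_HOOK = this;   // re-attach ourselves to a reachable static field and escape this time
    }

    public static void main(String[] args) throws Exception {
        SAVE_HOOK = new FinalizeEscapeDemo();

        SAVE_HOOK = null;
        System.gc();
        Thread.sleep(500);  // the finalizer thread has low priority, give it time to run
        System.out.println(SAVE_HOOK != null ? "still alive" : "dead");   // usually "still alive"

        SAVE_HOOK = null;
        System.gc();
        Thread.sleep(500);
        System.out.println(SAVE_HOOK != null ? "still alive" : "dead");   // "dead": finalize() runs only once
    }
}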

Once an object is determined to be dead, how do we actually reclaim the garbage?

3.4 Garbage collection algorithms

Without expanding in great detail, the commonly used ones are the mark-sweep, copying, mark-compact, and generational collection algorithms.

3.4.1 Mark-sweep algorithm

The mark-sweep algorithm has two phases, "mark" and "sweep": all objects to be reclaimed are marked first, and after marking finishes they are reclaimed uniformly. The idea is very simple, but it has shortcomings, and the later algorithms improve on it.

In practice, it marks the space of dead objects as free memory and records it in a free list. When we need memory for a new object, the memory management module finds free memory in the free list and allocates it to the new object.

The disadvantage is that marking and sweeping are relatively inefficient, and this approach produces a lot of memory fragmentation. As a result, when we need a large block of contiguous memory, we may not be able to allocate it. For example:

The available memory blocks are scattered, leading to the large-object allocation problem just mentioned.

3.4.2 Copying algorithm

To address the efficiency problem, the copying algorithm was born. It divides the available memory into two equal halves and uses only one half at a time. Just like the Survivor areas mentioned earlier, it uses from and to pointers: when the from half is full, the surviving objects are copied to the to half and the two pointers are swapped. This solves the fragmentation problem.

The cost of this algorithm is that it halves the usable memory, which makes heap memory usage very inefficient.

In practice, though, the two areas are not allocated 1:1, just as Eden and the Survivor areas are not split evenly.

3.4.3 Mark-compact algorithm

The copying algorithm has efficiency problems when the object survival rate is high. The marking phase is the same as in the mark-sweep algorithm, but the next step is not to clean up the reclaimable objects directly; instead, all surviving objects are moved toward one end, and the memory outside the boundary is then cleaned up directly.

3.4.4 Generational collection algorithm

There is nothing really new in this algorithm; it simply divides memory into several blocks according to object lifetime. The Java heap is generally divided into the young generation and the old generation, so that the most appropriate collection algorithm can be used for each. In the young generation, where a large number of objects die and only a few survive each collection, the copying algorithm can be used, paying only the cost of copying the small number of survivors. In the old generation, where objects have a high survival rate and there is no extra space to guarantee allocation, collection must use the mark-sweep or mark-compact algorithm.

To put it bluntly: each generation plays to its own strengths, and each specific problem gets its own specific analysis.

3.5 (Understanding) the various garbage collectors

The garbage collectors in the HotSpot VM and their applicable scenarios:

Up to JDK 8, the default garbage collectors are Parallel Scavenge and Parallel Old.

Starting with JDK 9, the G1 collector becomes the default garbage collector. So far, the G1 collector has the shortest pause times and no obvious drawbacks, making it well suited to web applications. In a test of a JDK 8 web application with a 6 GB heap and a 4.5 GB old generation, the Parallel Scavenge/Parallel Old combination paused for up to 1.5 seconds when collecting the old generation, while the G1 collector paused only 0.2 seconds to collect a generation of the same size.

3.6 (Understand) common JVM parameters

The JVM has a large number of parameters. Here are just a few of the important ones, which are also available through various search engines.

The main parameters (name, meaning, default value, and notes) are:

-Xms: initial heap size. Default: 1/64 of physical memory (<1GB). By default (adjustable via MinHeapFreeRatio), when free heap memory drops below 40%, the JVM grows the heap up to the -Xmx maximum limit.

-Xmx: maximum heap size. Default: 1/4 of physical memory (<1GB). By default (adjustable via MaxHeapFreeRatio), when free heap memory exceeds 70%, the JVM shrinks the heap down to the -Xms minimum limit.

-Xmn: young generation size (1.4 or later). Note: this size is Eden plus the two Survivor spaces, which differs from the "new gen" shown by jmap -heap. Total heap size = young generation + old generation + permanent generation. Increasing the young generation shrinks the old generation. This value has a significant impact on system performance; Sun officially recommends 3/8 of the whole heap.

-XX:NewSize: sets the young generation size (for 1.3/1.4).

-XX:MaxNewSize: maximum young generation size (for 1.3/1.4).

-XX:PermSize: sets the initial size of the permanent generation. Default: 1/64 of physical memory.

-XX:MaxPermSize: sets the maximum size of the permanent generation. Default: 1/4 of physical memory.

-Xss: stack size per thread. Since JDK 5.0 each thread's stack defaults to 1M (previously 256K); adjust it according to how much memory the application's threads need. With the same physical memory, a smaller value allows more threads, but the operating system limits the number of threads per process; the empirical value is around 3000~5000. For small applications 128K is usually enough if the call stack is not deep; 256K is recommended for large applications. This option has a significant performance impact and requires rigorous testing. (-Xss is translated into the VM flag named ThreadStackSize; generally setting -Xss is sufficient.)

-XX:NewRatio: ratio of the old generation to the young generation (the young generation includes Eden and the two Survivor regions; the permanent generation is excluded). -XX:NewRatio=4 means young : old = 1:4, i.e. the young generation occupies 1/5 of the heap. If Xms=Xmx and Xmn is set, this parameter does not need to be set.

-XX:SurvivorRatio: size ratio of Eden to the Survivor areas. Set to 8, the two Survivor areas to one Eden area are 2:8, so one Survivor area accounts for 1/10 of the whole young generation.

-XX:+DisableExplicitGC: disables System.gc(). This parameter requires rigorous testing.

-XX:PretenureSizeThreshold: objects larger than this are allocated directly in the old generation. Default: 0 (unit: bytes). Ineffective when the young generation uses the Parallel Scavenge collector. Another case of direct old-generation allocation is a large array whose elements hold no references to outside objects.

-XX:ParallelGCThreads: number of parallel collector threads. Best configured equal to the number of processors.

-XX:MaxGCPauseMillis: maximum time per young generation garbage collection (maximum pause time). If this time cannot be met, the JVM automatically adjusts the young generation size to meet it.

There are also some printing and CMS parameters, which will not be listed here

4. Some aspects of JVM tuning

Based on the JVM knowledge we just covered, we can try to tune the JVM, mainly in the heap memory area

The size of the data area shared by all threads = young generation size + old generation size + permanent generation size. The permanent generation is generally a fixed 64 MB. Therefore, increasing the young generation in the Java heap shrinks the old generation; and since the old generation is cleaned by Full GC, an old generation that is too small increases the number of Full GCs. This value has a significant impact on system performance; Sun officially recommends setting the young generation to 3/8 of the Java heap.

4.1 Adjusting the maximum heap memory and minimum heap memory

-Xmx, -Xms: specify the maximum Java heap value (default 1/4 of physical memory, <1GB) and the minimum/initial Java heap value (default 1/64 of physical memory, <1GB).

By default (adjustable via MinHeapFreeRatio), when free heap memory is less than 40%, the JVM increases the heap up to the -Xmx maximum limit; by default (adjustable via MaxHeapFreeRatio), when free heap memory is greater than 70%, the JVM reduces the heap down to the -Xms minimum limit. Simply put, you keep putting data into the heap: when free space drops below 40%, the JVM dynamically grows the heap but not beyond -Xmx, and when free space exceeds 70%, it dynamically shrinks the heap but not below -Xms. It's that simple.

During development, it is common to set -Xms and -Xmx to the same value, so that after the garbage collector has cleaned up the heap, the heap size does not have to be re-sized, which would waste resources.

Let’s execute the following code

System.out.println("Xmx=" + Runtime.getRuntime().maxMemory() / 1024.0 / 1024 + "M");    // The maximum space of the system

System.out.println("free mem=" + Runtime.getRuntime().freeMemory() / 1024.0 / 1024 + "M");  // Free space of the system

System.out.println("total mem=" + Runtime.getRuntime().totalMemory() / 1024.0 / 1024 + "M");  // The total space currently available


Note: This is the Java heap size, which is the new generation size + old generation size

Set a VM options parameter

-Xmx20m -Xms5m -XX:+PrintGCDetails


Start the main method again


Here the GC log shows an Allocation Failure, which happens in PSYoungGen, i.e. the young generation.

The obtained memory is 18 MB, and the free memory is 4.214195251464844 MB

Let’s create a byte array at this point and see. Execute the following code

byte[] b = new byte[1 * 1024 * 1024];

System.out.println("1 MB allocated to the array.");

System.out.println("Xmx=" + Runtime.getRuntime().maxMemory() / 1024.0 / 1024 + "M");  // The maximum space of the system

System.out.println("free mem=" + Runtime.getRuntime().freeMemory() / 1024.0 / 1024 + "M");  // Free space of the system

System.out.println("total mem=" + Runtime.getRuntime().totalMemory() / 1024.0 / 1024 + "M");


Free memory shrinks again, but total memory remains the same. Java tries to keep the total mem value at the minimum heap size.

byte[] b = new byte[10 * 1024 * 1024];

System.out.println("10 MB allocated to the array.");

System.out.println("Xmx=" + Runtime.getRuntime().maxMemory() / 1024.0 / 1024 + "M");  // The maximum space of the system

System.out.println("free mem=" + Runtime.getRuntime().freeMemory() / 1024.0 / 1024 + "M");  // Free space of the system

System.out.println("total mem=" + Runtime.getRuntime().totalMemory() / 1024.0 / 1024 + "M");  // The total space currently available


At this point we have created a 10 MB byte array, which the minimum heap cannot hold. We find that the total memory is now 15M, which is the result of the heap having requested more memory once.

Now let’s run through this code again

System.gc();

System.out.println("Xmx=" + Runtime.getRuntime().maxMemory() / 1024.0 / 1024 + "M");    // The maximum space of the system

System.out.println("free mem=" + Runtime.getRuntime().freeMemory() / 1024.0 / 1024 + "M");  // Free space of the system

System.out.println("total mem=" + Runtime.getRuntime().totalMemory() / 1024.0 / 1024 + "M");  // The total space currently available


At this point we manually trigger a Full GC, and the total memory space drops back to 5.5M, which is the result of the previously requested memory being freed.

4.2 Adjusting the ratio of the young generation to the old generation

-XX:NewRatio — the ratio of the young generation (Eden + 2*Survivor) to the old generation (excluding the permanent generation)

For example, -XX:NewRatio=4 means young generation : old generation = 1:4, i.e. the young generation occupies 1/5 of the whole heap. If Xms=Xmx and Xmn is set, this parameter does not need to be set.

4.3 Adjusting the ratio of the Survivor areas to Eden

-XX:SurvivorRatio — sets the ratio of the two Survivor areas to Eden

For example, a value of 8 means two Survivor areas : Eden = 2:8, i.e. one Survivor area accounts for 1/10 of the young generation

4.4 Setting the size of the young generation

-XX:NewSize — sets the size of the young generation

-XX:MaxNewSize — sets the maximum size of the young generation

You can test different scenarios by setting different parameters; the optimal default, of course, is the official Eden-to-Survivor ratio of 8:1:1, and the parameter descriptions above contain further notes if you are interested. Note, however, that if the maximum and minimum heap sizes differ, extra GCs may result.

4.5 A small summary

Adjust the sizes of the young generation and the Survivor areas according to the actual situation. The official recommendation is that the young generation occupies 3/8 of the Java heap and a Survivor area occupies 1/10 of the young generation.

When an OOM occurs, remember to dump the heap so that the problem can be troubleshooted. You can examine the resulting dump file with VisualVM, the Java VisualVM tool that comes with the JDK.

-Xmx20m -Xms5m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=<the path where you want the dump written>


Generally, we can write a script that will let us know when OOM appears. This can be resolved by sending an email or restarting the program.

4.6 Setting the permanent generation

-XX:PermSize -XX:MaxPermSize


These set the initial space (default 1/64 of physical memory) and the maximum space (default 1/4 of physical memory). In other words, when the JVM starts, the permanent generation takes up PermSize of space; if that is not enough it can keep expanding, but it cannot exceed MaxPermSize, otherwise an OOM occurs.

Tip: if the heap space is barely used but an OOM is still thrown, it is probably caused by the permanent generation; a permanent generation overflow throws an OOM as well, even when actual heap usage is very small.

4.7 Tuning stack parameters for the JVM

4.7.1 Resize the stack space of each thread

You can adjust the stack space of each thread with -Xss:

Since JDK 5.0, the default stack size of each thread is 1M (previously it was 256K). With the same physical memory, reducing this value allows more threads to be created. However, the operating system limits the number of threads in a process, so they cannot be created indefinitely; the empirical value is about 3000~5000.
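For example, reusing the StackDepthDemo sketch from section 3.3.2, you could compare recursion depths under different stack sizes (illustrative command lines):

java -Xss256k StackDepthDemo

java -Xss2m StackDepthDemo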

4.7.2 Setting the thread stack size

-XX:ThreadStackSize:

Set the thread stack size (0 means use the default stack size)


You can write small programs of your own to test these parameters; due to space constraints no demo is provided here.

4.8 (you can directly skip) Introduction to other JVM parameters

There are many other parameters; I won't dig into all of them here, since there is no need to get to the bottom of every single one.

4.8.1 Setting the Memory Page Size

-XX:LargePageSizeInBytes:

Sets the size of the memory page. It should not be set too large, or it will affect the size of the Perm area.


4.8.2 Setting fast optimization for primitive types

-XX:+UseFastAccessorMethods:

Set up quick optimizations for primitive types


4.8.3 Disabling manual GC

-XX:+DisableExplicitGC:

Disables System.gc(). This parameter needs to be rigorously tested.


4.8.4 Setting the Maximum Age of garbage

-XX:MaxTenuringThreshold

Sets the maximum object age before promotion. If set to 0, young objects do not pass through the Survivor areas and go directly into the old generation.
For applications with many long-lived objects, this can improve efficiency. If the value is set larger,
objects will be copied between the Survivor areas more times, increasing their lifetime in the young generation and
increasing the probability that they are reclaimed in the young generation. This parameter is effective only with the serial GC.


4.8.5 Speed up compilation

-XX:+AggressiveOpts


Speed up compilation

4.8.6 Improving lock Performance

-XX:+UseBiasedLocking


4.8.7 Disabling class garbage collection

-Xnoclassgc


4.8.8 Setting the lifetime of soft references in free heap space

-XX:SoftRefLRUPolicyMSPerMB

Sets the survival time of a SoftReference per megabyte of free heap space; the default value is 1s (per MB).


4.8.9 Setting objects to be allocated directly in the old generation

-XX:PretenureSizeThreshold

Sets how large an object must be to be allocated directly in the old generation. The default value is 0.


4.8.10 Setting the Proportion of TLAB in the Eden Zone

-XX:TLABWasteTargetPercent

Sets the percentage of the Eden area occupied by a TLAB; the default is 1%.


4.8.11 Setting YGC Priority

-XX:+CollectGen0First

Specifies whether to perform a YGC before a Full GC. The default value is false.


Finally

I’ve been talking about this for a long time. I’ve been referring to many sources, including Geek Time’s “Deep Virtual Machine Disassembly” and “Java Core Technology Interview”, as well as Baidu and some online courses I’ve been studying. I hope it helps. Thank you.

Some people now run their own free knowledge-sharing communities, and free does not mean there is nothing to gain from them. Those interested in the big data direction can follow along.