This article covers the JVM memory structure, garbage collection algorithms, garbage collectors, class loading, the parental delegation model, and common questions about JVM tuning.
This article is included in the Interview Cheat Sheet series at github.com/cosen1024/J…
Domestic Gitee: gitee.com/cosen1024/J…
This topic covers Java basics, Java concurrency, JVM, MySQL, Redis, Spring, MyBatis, Kafka, operating systems, and computer networking.
This is the list of JVM interview questions for this issue
1. What is the JVM memory structure?
The JVM divides runtime memory into five areas: the program counter, the virtual machine stack, the native method stack, the Java heap, and the method area.
- Program counter: thread-private; a small memory area that serves as the line-number indicator of the bytecode the current thread is executing, recording the address of the instruction the virtual machine is currently executing;
- Virtual machine stack: thread-private; a stack frame is created for each method execution to store the local variable table, operand stack, dynamic link, and method exit information. A StackOverflowError is raised when the stack depth a thread requests exceeds the maximum depth allowed by the virtual machine;
- Native method stack: thread-private; it stores information for native method calls. When a thread calls a native method, the JVM does not create a stack frame for it on the virtual machine stack; it simply links dynamically and invokes the method directly;
- Heap: The Java heap is a piece of memory shared by all threads. Almost all object instances and arrays are allocated on the heap, so garbage collection often occurs in this area.
- Method area: stores loaded class information, constants, static variables, and code compiled by the just-in-time (JIT) compiler. In JDK 1.8 the method area was replaced by Metaspace; its original contents were split in two: 1. loaded class information, 2. the runtime constant pool. The loaded class information is stored in Metaspace, and the runtime constant pool is stored in the heap;
2. What is the JVM memory model?
The Java Memory Model (hereinafter referred to as JMM) defines its own multithreaded semantics on top of the underlying processor memory models. It explicitly specifies a set of ordering rules to guarantee visibility between threads.
This set of rules is called happens-before. The JMM states that for action B to be guaranteed to see the result of action A (whether or not they are on the same thread), A and B must satisfy a happens-before relationship:
- Single-thread rule: Every action in a thread happens-before every subsequent action in that thread
- Monitor lock rule: an unlock on a monitor lock happens-before every subsequent lock on that same monitor
- Volatile variable rule: a write to a volatile field happens-before every subsequent read of that field
- Thread start rule: a call to Thread.start() happens-before every action in the started thread
- Thread join rule: all actions in a thread happen-before another thread successfully returns from a join() on that thread
- Transitivity: If A happens-before B, and B happens-before C, then A happens-before C
How should happens-before be understood? Taken literally, the second rule seems strange: how can an unlock happen before a lock, whether or not the same thread is involved? The point is that happens-before also guarantees visibility, not just ordering. Take unlocking and locking: thread 1 releases the lock as it exits a synchronized block, thread 2 then locks the same block, and thread 2 sees the result of thread 1's changes to the shared objects.
Java provides several language constructs, including volatile, final, and synchronized, designed to help programmers describe the concurrency requirements of a program to the compiler:
- volatile – guarantees visibility and ordering
- synchronized – guarantees visibility and ordering; through the monitor it also guarantees that a group of actions is atomic
- final – guarantees visibility by forbidding reordering between the initialization of final fields in a constructor and the assignment of the object reference (visibility is not guaranteed if the this reference escapes the constructor)
When the compiler encounters these keywords, it inserts corresponding memory barriers to ensure semantic correctness.
synchronized does not forbid reordering of the code inside a synchronized block, because the lock ensures that only one thread at a time executes the synchronized block (critical section). The code inside the block therefore only needs to satisfy as-if-serial semantics: it can be reordered as long as the single-threaded execution result does not change.
So, the Java memory model describes the visibility of shared memory modifications made by multiple threads to each other, and also ensures that properly synchronized Java code runs correctly on processors with different architectures.
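To make the volatile rule above concrete, here is a minimal sketch (class and field names are made up for illustration): the write to the volatile field ready happens-before the subsequent read, so the reader thread is also guaranteed to see the earlier plain write to payload.

```java
public class VolatileVisibilityDemo {
    private static int payload = 0;
    private static volatile boolean ready = false;

    public static void main(String[] args) {
        Thread reader = new Thread(() -> {
            while (!ready) {            // without volatile this loop might never observe the write
                Thread.onSpinWait();    // JDK 9+ hint; a plain empty loop also works
            }
            // Guaranteed to print 42: payload was written before the volatile write to 'ready'.
            System.out.println("payload = " + payload);
        });
        reader.start();

        payload = 42;    // ordinary write
        ready = true;    // volatile write publishes it
    }
}
```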
3. What's the difference between heap and stack?
(1) How memory is requested
Stack: allocated automatically by the system. For example, declaring a local variable int b in a function makes the system automatically reserve space for b on the stack.
Heap: the programmer must request it and specify the size, e.g. with the malloc function in C, or explicitly with new Object() in Java.
(2) How the system responds to the request
Stack: If the remaining space of the stack is larger than the requested space, the system provides memory for the program. Otherwise, an exception is reported indicating stack overflow.
Heap: The first thing you should know is that the operating system has a linked list of free memory addresses. When the system receives a program’s request, it iterates through the list, looks for the first heap node that has more space than the requested space, and then removes that node from the free list and allocates that space to the program. In addition, since the size of the found heap may not be exactly the size of the requested heap, the system will automatically put the extra heap back into the free list.
(3) Size limits
Stack: the stack grows toward lower addresses and is a contiguous area of memory. On Windows the default stack size is 2 MB (the actual default also depends on virtual memory settings). Requesting more space than the stack has left causes an overflow, so comparatively little space can be obtained from the stack.
Heap: a data structure that expands to a higher address. It is a discontinuous area of memory. This is because the system uses a linked list to store the free memory address, which is naturally discontinuous, and the traversal direction of the list is from low address to high address. The size of the heap is limited by the amount of virtual memory available in the computer system. Thus, the heap is more flexible and larger.
(4) Allocation efficiency
Stack: allocated automatically by the system, so it is fast, but the programmer has no control over it.
Heap: memory allocated by new, usually slow and prone to memory fragmentation, but most convenient to use.
(5) The contents stored in the heap and stack
Stack: When a function is called, the address of the next instruction in the main function (the next executable statement of the function calling statement) is first pushed, followed by the arguments of the function. In most C compilers, arguments are pushed from right to left, followed by local variables in the function. Note that static variables are not pushed.
When the call ends, the local variables are popped first, then the parameters; finally the stack pointer returns to the saved address, i.e. the next instruction in the calling function, and the program continues from there.
Heap: the size of the allocated block is typically recorded at the head of the block; what goes into the rest of it is up to the programmer.
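Relating the comparison above back to Java, a small sketch (names are illustrative): local variables and references live in the current stack frame, while the objects they point to live on the heap.

```java
public class HeapVsStackDemo {
    public static void main(String[] args) {
        int b = 10;                       // primitive local variable: stored in the stack frame
        Object obj = new Object();        // 'obj' (the reference) is on the stack,
                                          // the Object instance it refers to is on the heap
        int[] array = new int[1024];      // arrays are objects too, allocated on the heap
        System.out.println(b + " " + obj + " " + array.length);
    }   // when main returns, its stack frame is popped; the heap objects become garbage
}
```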
4. When can stack memory overflow occur?
1. The stack is thread-private, and its life cycle is the same as the thread's. Each method execution creates a stack frame containing the local variable table, operand stack, dynamic link, method exit and other information. 2. A StackOverflowError is raised when the stack depth a thread requests exceeds the maximum depth allowed by the virtual machine. 3. The JVM stack size can be adjusted with the -Xss parameter.
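A minimal sketch that provokes a StackOverflowError through unbounded recursion; the depth reached depends on the configured stack size, e.g. when run with -Xss256k.

```java
public class StackDepthDemo {
    private static int depth = 0;

    private static void recurse() {
        depth++;          // each call adds one stack frame
        recurse();
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            System.out.println("stack overflowed at depth " + depth);
        }
    }
}
```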
5. What do you know about OOM? How to troubleshoot OOM faults?
All memory areas except program counters are subject to OOM risk.
- Virtual machine stack and native method stack: a StackOverflowError occurs when the requested stack depth is too great, and an OOM occurs when threads are created without limit, e.g. on a 32-bit Windows system where a single process is limited to 2 GB of memory;
- Since Java 8 the string constant pool lives in the heap, so it overflows with java.lang.OutOfMemoryError: Java heap space, and the old permanent-generation size parameters no longer take effect;
- Heap OOM: the heap fills up with objects that cannot be reclaimed, throwing java.lang.OutOfMemoryError: Java heap space;
- Method area OOM: commonly caused by dynamically generating large numbers of classes (JSPs, etc.);
- Direct memory OOM: direct memory is limited by the -XX:MaxDirectMemorySize parameter and is requested through the Unsafe object; exceeding the limit causes an OOM.
How to check OOM
- Add the two parameters -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath=/tmp/heapdump.hprof, so that when an OOM happens the heap memory is automatically dumped to the specified path.
- At the same time, jstat looks at the memory and GC performance of the JVM to see where the problem might be.
- Use the MAT tool to load the dump file and analyze which large objects occupy memory. For example, a HashMap used as a cache that is never cleaned will eventually overflow memory; changing it to use weak references can fix the problem.
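As a hedged illustration of the flags above, the following sketch (class name and heap size are made up) fills the heap until java.lang.OutOfMemoryError: Java heap space is thrown; running it with the dump flags produces an .hprof file that MAT can open.

```java
import java.util.ArrayList;
import java.util.List;

// Run with something like:
//   java -Xmx16m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/heapdump.hprof HeapOomDemo
public class HeapOomDemo {
    public static void main(String[] args) {
        List<byte[]> hoard = new ArrayList<>();   // strong references keep everything alive
        while (true) {
            hoard.add(new byte[1024 * 1024]);     // 1 MB per iteration until the heap is exhausted
        }
    }
}
```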
6. What about constant pools in the JVM?
JVM constant pools are divided into Class file constant pools, runtime constant pools, global string constant pools, and primitive wrapper object constant pools.
- Class file constant pool: a class file is a binary byte stream. When Java code is compiled, it is stored on disk in the .class file format, which includes the class file constant pool.
- Runtime constant pool: the runtime constant pool is dynamic compared with the class file constant pool. The Java specification does not require constants to be produced only at compile time; that is, the contents of the runtime constant pool do not all come from the class file constant pool, and code can generate constants at run time and put them into the runtime constant pool. The most commonly used example of this is String.intern().
- Global String constant pool: The string constant pool is a reference table of a string instance maintained by the JVM, which in HotSpot VM is a global table called StringTable. A reference to a string instance is maintained in the string constant pool, and the underlying C++ implementation is a Hashtable. The string instances to which these maintained references refer are referred to as “hosted strings” or “interned Strings” or commonly referred to as “strings entered into the string constant pool.”
- Primitive wrapper constant pools: most of Java's primitive wrapper classes implement constant pool (caching) technology: Byte, Short, Integer, Long, Character and Boolean; the two floating-point wrapper classes do not. In addition, the five integral wrappers above only use the object pool when the value is in the range -128 to 127; objects outside that range are not created or managed by the pool.
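A small sketch of the pooling behavior described above: the Integer cache only covers -128 to 127, and String.intern() returns the reference held in the string constant pool.

```java
public class ConstantPoolDemo {
    public static void main(String[] args) {
        Integer a = 127, b = 127;      // within -128..127: both come from the Integer cache
        Integer c = 128, d = 128;      // outside the cache range: two distinct objects
        System.out.println(a == b);    // true
        System.out.println(c == d);    // false

        String s1 = new String("ConstXiong");     // new object on the heap
        String s2 = s1.intern();                  // reference from the string constant pool
        String s3 = "ConstXiong";                 // literal, also resolved via the pool
        System.out.println(s1 == s2);             // false
        System.out.println(s2 == s3);             // true
    }
}
```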
7. How to determine whether an object is alive or not?
There are two algorithms for judging whether an object is alive: 1. reference counting; 2. reachability analysis.
Reference counting: each object has a reference counter; when a reference to the object is created the counter is incremented by 1, and when a reference becomes invalid it is decremented by 1. When the counter reaches 0 the object is no longer referenced, i.e. it is garbage waiting to be collected. Disadvantage: it cannot handle circular references. If A references B and B also references A, neither counter ever reaches 0 and the objects can never be collected, so mainstream VMs do not use this method.
Reachability analysis searches downward from a set of root objects called GC Roots. If an object cannot be reached from GC Roots through any reference chain, the object is unreachable. In Java, the following can serve as GC Roots:
- Objects referenced in the virtual machine stack (the local variable tables of stack frames)
- Objects referenced by static properties of classes in the method area
- Objects referenced by constants in the method area constant pool
- Objects referenced by JNI references in the native method stack
However, even when an object meets the conditions above it is not reclaimed immediately; it must be marked twice. First mark: determine whether the object has a finalize() method that has not yet been executed. If not, the object is marked as garbage and awaits collection; if it has, it receives a second mark. The second mark puts the object into the F-Queue, and a low-priority Finalizer thread executes its finalize() method; the virtual machine does not guarantee to wait for the method to finish, because a slow or deadlocked finalize() could bring the whole collection system to a halt. If, after finalize() runs, the object still has no direct or indirect reference from GC Roots, it is collected.
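The two-mark behavior can be observed with the classic self-rescue sketch below (a sketch only: finalize() is deprecated in recent JDKs, and System.gc() is merely a hint, so the output is typical rather than guaranteed).

```java
public class FinalizeEscapeGC {
    public static FinalizeEscapeGC SAVE_HOOK = null;

    @Override
    protected void finalize() throws Throwable {
        super.finalize();
        System.out.println("finalize() executed");
        SAVE_HOOK = this;   // resurrect: re-establish a reference reachable from a GC root
    }

    public static void main(String[] args) throws Exception {
        SAVE_HOOK = new FinalizeEscapeGC();

        // First collection: finalize() runs and the object rescues itself.
        SAVE_HOOK = null;
        System.gc();
        Thread.sleep(500);              // finalize() runs on a low-priority Finalizer thread
        System.out.println(SAVE_HOOK != null ? "still alive" : "dead");

        // Second collection: finalize() is only ever called once, so the object dies.
        SAVE_HOOK = null;
        System.gc();
        Thread.sleep(500);
        System.out.println(SAVE_HOOK != null ? "still alive" : "dead");
    }
}
```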
8. What are strong references, soft references, weak references, and virtual references? What are the differences?
- Strong references are the ordinary kind, e.g. String s = new String("ConstXiong"); an object that is strongly reachable is never collected.
- Soft references hold objects that are useful but not essential. The system reclaims softly referenced objects only when memory is insufficient; if memory is still insufficient after reclaiming them, an out-of-memory exception is thrown. Implemented by SoftReference.
- Weak references are weaker than soft references and live shorter: whenever the JVM performs garbage collection, objects associated only with weak references are reclaimed regardless of whether memory is sufficient. Implemented by WeakReference.
- Phantom (virtual) references are the weakest kind and rarely used in practice; they are mainly used to track the moment an object is garbage collected. Implemented by PhantomReference.
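A small sketch contrasting soft and weak references (timing-dependent, so treat the printed result as typical rather than guaranteed): after a requested GC the weakly referenced object is usually gone, while the softly referenced one survives until memory actually runs short.

```java
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static void main(String[] args) throws InterruptedException {
        SoftReference<byte[]> soft = new SoftReference<>(new byte[1024 * 1024]);
        WeakReference<byte[]> weak = new WeakReference<>(new byte[1024 * 1024]);

        System.gc();                    // request a collection
        Thread.sleep(200);

        // Weakly reachable objects are typically reclaimed at the next GC;
        // softly reachable ones survive until memory actually runs short.
        System.out.println("soft still alive: " + (soft.get() != null));
        System.out.println("weak still alive: " + (weak.get() != null));
    }
}
```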
9. Do referenced objects always survive?
Not necessarily: weakly referenced objects are reclaimed during GC, softly referenced objects are reclaimed when memory runs short (before an OOM would occur), and objects not on any reference chain from GC Roots will certainly be reclaimed.
10. What are the garbage collection algorithms in Java?
There are four garbage collection algorithms in Java: mark-sweep, mark-compact, copying, and generational collection. Mark-sweep: step one, traverse memory using reachability analysis and mark the live objects and the garbage objects; step two, traverse again and reclaim all the marked garbage objects. Characteristics: efficiency is not great, since neither marking nor sweeping is fast; after sweeping, a large amount of discontiguous memory fragmentation remains, which may mean the program cannot find a contiguous block when allocating a large object and has to trigger another GC.
Mark-compact: step one, traverse memory using reachability analysis and mark the live objects and the garbage objects; step two, move all surviving objects toward one end and reclaim everything beyond the end boundary. Characteristics: suited to cases with many surviving objects and little garbage; no memory fragmentation is produced during cleanup.
Copying: memory is divided into two halves of equal size and only one half is used at a time; when it is used up, the surviving objects are moved to the other half and the used half is cleared in one pass. Characteristics: no memory fragmentation, but memory utilization is very low (only half the memory is usable at a time).
Generational collection: memory is divided into several areas according to how long objects live; the Java virtual machine generally divides the heap into a young generation and an old generation. In the young generation, large numbers of objects die and only a few survive, so the copying algorithm is used and collection only costs copying a small number of live objects. In the old generation the survival rate is high and there is no extra space to provide an allocation guarantee, so mark-sweep or mark-compact is used for collection.
Comparison:
11. What kinds of garbage collectors are there, and what are their advantages and disadvantages?
Garbage collectors come in the following varieties: Serial, ParNew, Parallel Scavenge, Serial Old, Parallel Old, CMS, and G1.
- Serial: a single-threaded collector that uses the copying algorithm and must stop the world while collecting garbage. Its defining characteristic is that it pauses all executing threads, which is unacceptable for some applications; but if an application's real-time requirements are not strict and pauses can be kept within some number of milliseconds, most applications can accept it. It is the default GC in client mode.
- ParNew: a multithreaded version of the Serial collector; it also needs to stop the world and uses the copying algorithm.
- Parallel Scavenge: a young-generation collector designed to achieve a controllable throughput. Its biggest difference from ParNew is its adaptive GC tuning strategy: the VM gathers performance monitoring information about the running system and dynamically adjusts the parameters to provide the optimal pause time or maximum throughput.
- Serial Old: the old-generation version of the Serial collector; a single-threaded collector that uses the mark-compact algorithm.
- Parallel Old: the old-generation version of the Parallel Scavenge collector; multithreaded, using the mark-compact algorithm.
- CMS: a collector whose goal is the shortest collection pause time, based on the mark-sweep algorithm. Its phases are: initial mark, concurrent mark, remark, and concurrent sweep. A large amount of memory fragmentation is produced at the end of a collection.
- G1: based on the mark-compact algorithm; its phases are mainly: initial mark, concurrent mark, final mark, and screening/evacuation. It produces no memory fragmentation and pauses can be controlled precisely. G1 divides the whole heap into multiple Regions of equal size, tracks the amount of garbage in each Region, maintains a priority list in the background and, within the allowed collection time, preferentially reclaims the Regions with the highest collection value, achieving the highest collection efficiency within the limited time.
Collaboration between garbage collectors:
Comparison of garbage collectors:
12. Explain the CMS recycling process in detail? What’s the problem with CMS?
The CMS(Concurrent Mark Sweep) collector is a collector whose goal is to achieve the shortest collection pause time (pursuit of low pauses). It allows the user thread and the GC thread to execute concurrently during garbage collection, so that the user does not feel a significant lag during garbage collection.
As the name suggests, CMS is based on a mark-and-sweep algorithm. The CMS recycling process is divided into the following four steps:
- CMS initial mark: marks the objects directly reachable from GC Roots. This phase is STW, but there are not many objects directly associated with GC Roots, so it is actually very fast.
- CMS concurrent mark: starting from the results of the previous step, it traces all associated objects down to the ends of the reference chains. This phase is multithreaded; although it theoretically takes longer, the other worker threads are not blocked and there is no STW.
- CMS remark: as the name implies, marking again. Why mark again? Because step 2 does not block the other worker threads, it is very likely that new garbage and changed references were produced while the concurrent marking ran.
- CMS concurrent sweep: cleans up and removes the objects judged dead in the marking phase. Since live objects do not need to be moved, this phase can also run concurrently with user threads.
CMS problems:
1. CPU resources are strained due to concurrent reclamation:
During the concurrent phases, user threads are not paused, but the collector occupies a share of the threads and therefore slows the application, reducing its overall throughput. By default CMS starts (number of CPU cores + 3) / 4 reclaim threads; when there are fewer than four cores, CMS can have a large impact on user programs (with two cores, for example, it uses (2 + 3) / 4 = 1 thread, i.e. half the CPU).
2. Unable to clean floating garbage:
During CMS's concurrent marking and concurrent sweeping phases, new garbage objects keep being produced because user threads keep running. These objects appear after the marking pass has finished, so CMS cannot dispose of them in the current collection and has to leave them for the next one. This garbage is called "floating garbage".
3. Concurrent Mode Failure
Because user threads keep running during garbage collection, enough memory must be reserved for them; CMS therefore cannot wait until the old generation is almost completely full before collecting, but must leave space for the program to run while the concurrent collection proceeds. By default CMS triggers a collection once 92% of the old generation is in use; this threshold can be set with the -XX:CMSInitiatingOccupancyFraction parameter.
There is a risk that a Concurrent Mode Failure occurs if the memory reserved while CMS is running cannot satisfy the program's need to allocate new objects. The virtual machine then has to launch its backup plan: Stop The World and temporarily use Serial Old to redo the old-generation collection, which results in a long pause.
4. Memory fragmentation problem:
CMS is a collector based on the mark-sweep algorithm, which means memory fragmentation remains at the end of a collection. Too much fragmentation causes trouble when allocating large objects: there may be plenty of free space in the old generation, yet no contiguous block large enough for the current object, forcing an early Full GC.
To solve this, CMS provides the -XX:+UseCMSCompactAtFullCollection switch (on by default), which turns on a memory defragmentation (compaction) pass when a Full GC is needed; because live objects must be moved, this cannot run concurrently and makes the pause longer. Another parameter, -XX:CMSFullGCsBeforeCompaction, sets how many uncompacted Full GCs CMS performs before the next Full GC is preceded by defragmentation (the default is 0, meaning every Full GC compacts).
13. What is the recycling process of G1?
The G1 (Garbage First) collector adopts a design of local collection and a Region-based memory layout. It is mainly aimed at server applications and was designed to replace CMS as a full-featured collector. G1 became the default garbage collector in server mode from JDK 9, replacing the previous default combination of Parallel Scavenge plus Parallel Old, and CMS was marked as deprecated. Taken as a whole, G1 is based on the mark-compact algorithm, but locally (between two Regions) it is based on the mark-copy algorithm.
The G1 collection process can be roughly divided into four steps:
- Initial mark (STW): simply marks the objects that GC Roots can reach directly and modifies the TAMS pointer so that, in the next phase, concurrently running user threads can correctly allocate new objects in available Regions. This phase pauses threads, but only very briefly, and it is done piggy-backed on a Minor GC, so in practice G1 incurs no additional pause here.
- Concurrent marking: starting from GC Roots, analyze the reachability of objects in the heap and recursively scan the whole heap's object graph to find objects to reclaim. This phase is time-consuming but can run concurrently with the user program. When the object graph scan is complete, objects whose references changed during the concurrent phase are reprocessed.
- Final mark (STW): pause user threads briefly to process the objects whose references changed after the concurrent phase ended.
- Cleanup/evacuation phase (STW): update Region statistics, sort Regions by collection value and cost, and make a collection plan according to the pause time the user expects. Several Regions can be chosen to form a collection set; the surviving objects of those Regions are copied into empty Regions, and the old Regions are then cleared entirely. Because live objects are moved, user threads must be paused; the work is done in parallel by multiple collector threads.
14. What does a full GC look like in a JVM?
First describe the Java heap partition.
In Java, the heap is divided into two distinct regions: Young and Old, with the new generation taking up 1/3 of the total space by default and the Old two-thirds by default. The new generation has three partitions: Eden, To Survivor, and From Survivor, and their default ratio is 8:1:1.
In the new generation of garbage collection (also known as Minor GC), only a small number of objects survive, so the replication algorithm is used to complete the collection with only a small amount of replication cost.
Garbage collection in the old generation (also known as Major GC) usually uses the mark-sweep or mark-compact algorithm.
Then describe the transformation process between them:
- Objects are allocated in Eden first. When Eden does not have enough space for an allocation, the virtual machine initiates a Minor GC.
- After the first GC in Eden, the surviving objects are moved to one of the Survivor partitions;
- Next, the copying algorithm cleans Eden and the From Survivor space together, copying surviving objects into the To space;
- Each time an object survives a copy its age increases by 1; once its age exceeds a threshold it moves directly into the old generation. The threshold can be set with -XX:MaxTenuringThreshold and defaults to 15.
- Dynamic age determination: when the total size of all objects of the same age exceeds Survivor space size * target usage ratio, objects of that age or older go directly into the old generation. The ratio is set with -XX:TargetSurvivorRatio and defaults to 50%.
- Allocation guarantee: when Survivor runs out of space, objects exceeding a certain size can go straight into the old generation.
- Large objects go directly into the old generation. Large objects are those requiring a large amount of contiguous memory (such as long strings and arrays). This avoids the copying overhead that the allocation-guarantee mechanism would otherwise incur for them.
- When the old generation is full and can hold no more objects, a Full GC is usually triggered after the Minor GC; a Full GC cleans the entire heap, both the young and the old generations.
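A hedged sketch of promotion in action (sizes and flags are example values, not from the original article): with a 10 MB young generation and an 8:1:1 Eden/Survivor split, allocating a fourth large array forces a Minor GC, and the still-referenced arrays are too big for Survivor, so the allocation guarantee moves them into the old generation.

```java
// Run with something like (JDK 8 shown; on JDK 9+ use -Xlog:gc* instead of -XX:+PrintGCDetails):
//   java -Xms20m -Xmx20m -Xmn10m -XX:SurvivorRatio=8 -XX:+PrintGCDetails AllocationDemo
public class AllocationDemo {
    private static final int _1MB = 1024 * 1024;

    public static void main(String[] args) {
        byte[] a = new byte[2 * _1MB];   // allocated in Eden
        byte[] b = new byte[2 * _1MB];
        byte[] c = new byte[2 * _1MB];
        // Eden (8 MB) can no longer fit another 4 MB, so a Minor GC is triggered;
        // a, b and c are still referenced and too big for the 1 MB Survivor space,
        // so the allocation guarantee moves them into the old generation.
        byte[] d = new byte[4 * _1MB];
        System.out.println("allocated " + (d.length / _1MB) + " MB, check the GC log");
    }
}
```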
15. What's the difference between Minor GC and Full GC?
Minor GC: collects only the young generation.
Full GC: collects the whole heap, including the young generation, the old generation, and the permanent generation (in JDK 1.8 and later the permanent generation was removed and replaced by Metaspace).
Minor GC trigger condition: a Minor GC is triggered when the Eden space is full.
Full GC trigger condition:
- The average size of objects promoted to the old generation by past Minor GCs is greater than the old generation's remaining space: if statistics show that the average promotion size of previous Minor GCs exceeds the space currently left in the old generation, the Minor GC is not performed and a Full GC is triggered instead.
- There is not enough space in the old generation to allocate new objects (or not enough space in the permanent generation, which applies only up to JDK 1.7; this is one reason the permanent generation was replaced by Metaspace, reducing Full GC frequency, lightening the GC burden and improving efficiency).
- During a Minor GC, surviving objects larger than the available space in the To Survivor space are transferred to the old generation; if the old generation's available memory is smaller than those objects, a Full GC is triggered.
- When System.gc() is called, a Full GC is suggested to the JVM, but it is not guaranteed to be performed.
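A tiny sketch of that last point: System.gc() only suggests a Full GC, and with the standard -XX:+DisableExplicitGC flag the call becomes a no-op.

```java
public class ExplicitGcDemo {
    public static void main(String[] args) {
        byte[] data = new byte[8 * 1024 * 1024];
        data = null;                 // make the array unreachable
        System.gc();                 // request (not force) a Full GC
        System.runFinalization();    // likewise only a suggestion for finalization
        System.out.println("GC requested; run with -verbose:gc to see whether it actually happened");
    }
}
```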
16. Introduce the guarantee principle of space allocation?
If a large number of objects survive a YoungGC but the Survivor space cannot hold them, they have to be moved into the old generation; what happens if the old generation cannot accommodate them either? In fact, the JVM has an old-generation space allocation guarantee mechanism to ensure objects can still be promoted.
Before each YoungGC, the JVM checks whether the maximum contiguous available space in the old generation is larger than the total size of all objects in the young generation, because in the extreme case every object survives the YoungGC, the Survivor space cannot hold them, and they all enter the old generation. If the old generation's contiguous free space is larger than the young generation's total object size, the YoungGC can proceed safely. If it is smaller, not all potentially surviving objects would fit, so the JVM checks whether the -XX:HandlePromotionFailure parameter allows the guarantee to fail. If it does, the JVM checks whether the old generation's maximum contiguous free space is larger than the average size of objects promoted to the old generation in the past; if so, a YoungGC is attempted, although this attempt carries some risk. If it is smaller, or -XX:HandlePromotionFailure does not allow guarantee failure, a Full GC is performed instead.
After allowing the guarantee to fail and attempting YoungGC, three scenarios can occur:
- (1) After the YoungGC, the surviving objects fit into Survivor; they enter the Survivor space.
- (2) After the YoungGC, the surviving objects are larger than Survivor but smaller than the old generation's available contiguous space; they move directly into the old generation.
- (3) After the YoungGC, the surviving objects are larger than Survivor and also larger than the old generation's available space; a Handle Promotion Failure occurs and a Full GC is triggered. If the old generation still does not have enough space after the Full GC, an OOM is thrown.
To understand the space allocation guarantee, the decision logic is summarized in the sketch below:
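This is a pseudocode-style sketch of the checks described above, not HotSpot source; the parameter names and example sizes are illustrative.

```java
public class PromotionGuaranteeSketch {

    static boolean shouldAttemptYoungGC(long oldGenMaxContiguousFree,
                                        long youngGenTotalObjectSize,
                                        long avgPromotedSize,
                                        boolean handlePromotionFailureAllowed) {
        if (oldGenMaxContiguousFree >= youngGenTotalObjectSize) {
            return true;    // even if every young object survived, the old gen could absorb it
        }
        if (handlePromotionFailureAllowed && oldGenMaxContiguousFree >= avgPromotedSize) {
            return true;    // risky YoungGC: based on the average promotion size of past GCs
        }
        return false;       // otherwise a Full GC is performed instead
    }

    public static void main(String[] args) {
        // Example: 20 MB of young objects, 30 MB contiguous old-gen space -> safe YoungGC.
        System.out.println(shouldAttemptYoungGC(30L << 20, 20L << 20, 5L << 20, true));
        // Example: only 4 MB free but past promotions averaged 5 MB -> Full GC instead.
        System.out.println(shouldAttemptYoungGC(4L << 20, 20L << 20, 5L << 20, true));
    }
}
```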
17. What is class loading? Class loading process?
Class loading means the virtual machine loads the data describing a class into memory, verifies, resolves and initializes that data, finally producing a Class object that the virtual machine can use directly.
The whole life cycle of a class includes Loading, Verification, Preparation, Resolution, Initialization, Using, and Unloading. Verification, Preparation and Resolution are collectively referred to as Linking. As shown in the figure:
The order of the five phases loading, verification, preparation, initialization and unloading is fixed, and the class loading process must start them step by step in that order; the resolution phase, however, is not necessarily fixed: in some cases it can begin after initialization, in order to support runtime binding in Java (also called dynamic binding or late binding).
The class loading process is as follows:
- Loading is divided into three steps: 1. obtain the binary byte stream of the class via its fully qualified name; 2. convert the static storage structure of that byte stream into the runtime data structures of the method area; 3. generate a Class object for the class in the heap.
- Verification: verify that the byte-stream information in the class file meets the requirements of the virtual machine and does not threaten the safety of the JVM;
- Preparation: allocate memory for the class's static variables and set them to their default (zero) values;
- Resolution: this phase mainly converts symbolic references into direct references;
- Initialization: in the initialization phase, the Java code defined in the class is executed; the initialization phase is the process of executing the class constructor <clinit>() method.
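A small sketch (class name is illustrative) separating the preparation phase from the initialization phase: the static field first gets its default value, then <clinit>() runs the assignment and the static block when the class is first actively used.

```java
public class InitOrderDemo {
    // Preparation: 'value' is first set to the default 0 when memory is allocated.
    // Initialization: <clinit>() then runs the assignment and the static block,
    // in source order, the first time the class is actively used.
    static int value = 42;

    static {
        System.out.println("static initializer runs during class initialization, value=" + value);
    }

    public static void main(String[] args) {
        // Running main (active use) triggers initialization of the class.
        System.out.println("main sees value=" + value);
    }
}
```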
18. What is a class loader and what are the common class loaders?
A class loader is the component that obtains the binary byte stream of a class from the class's fully qualified name. Class loaders fall into the following four categories:
- Bootstrap class loader (BootstrapClassLoader): loads the Java core class libraries; it cannot be referenced directly by Java programs;
- Extension class loader (ExtensionClassLoader): loads Java extension libraries. The Java virtual machine implementation provides an extension library directory, and this class loader finds and loads Java classes in that directory;
- System class loader (AppClassLoader): loads classes from the Java classpath; generally speaking, an application's own classes are loaded by it;
- Custom class loaders: implemented in the Java language by extending ClassLoader;
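A small sketch that prints the loader hierarchy; the exact loader class names differ between JDK versions (e.g. ExtClassLoader on JDK 8 vs PlatformClassLoader on JDK 9+).

```java
public class ClassLoaderDemo {
    public static void main(String[] args) {
        ClassLoader appLoader = ClassLoaderDemo.class.getClassLoader();
        System.out.println("application class loader: " + appLoader);
        System.out.println("its parent:               " + appLoader.getParent());
        // The bootstrap loader is implemented in native code, so getParent() returns null here.
        System.out.println("grandparent (bootstrap):  " + appLoader.getParent().getParent());
        // Core classes such as String are loaded by the bootstrap loader, hence null as well.
        System.out.println("String's loader:          " + String.class.getClassLoader());
    }
}
```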
19. What is the parental delegation model? Why do we need the parental delegation model?
When a class loader receives a class-loading request, it does not try to load the class itself first; instead it delegates the request to its parent loader. Only when the parent loader reports that it cannot find the class within its own search scope does the child loader try to load the class itself.
Why is it needed? To prevent multiple copies of the same bytecode from existing in memory and to keep core classes unique: without parental delegation a user could define a java.lang.String class of their own, and there would be no guarantee that the class is unique.
Ps: How do you break the parental delegation model?
Write a custom class loader: extend the ClassLoader class and override the loadClass and findClass methods; it is overriding loadClass that actually bypasses the delegation logic (overriding only findClass keeps the model intact).
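A hedged sketch of such a loader: for classes under an illustrative package prefix it defines the class from its own bytes before consulting the parent, which is exactly the delegation order the model forbids. The package name and resource lookup are assumptions, not from the original article.

```java
import java.io.ByteArrayOutputStream;
import java.io.InputStream;

public class BreakingDelegationLoader extends ClassLoader {

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null && name.startsWith("com.example.plugin.")) {
                byte[] bytes = readClassBytes(name);          // load the bytes ourselves first
                if (bytes != null) {
                    c = defineClass(name, bytes, 0, bytes.length);
                }
            }
            if (c == null) {
                c = super.loadClass(name, false);             // fall back to normal delegation
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }
    }

    // Reads the .class bytes from this loader's own classpath; returns null if absent.
    private byte[] readClassBytes(String name) {
        String path = name.replace('.', '/') + ".class";
        try (InputStream in = getResourceAsStream(path);
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            if (in == null) return null;
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
            return out.toByteArray();
        } catch (Exception e) {
            return null;
        }
    }
}
```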
20. Name some examples you know of that break parental delegation. Why?
- JNDI breaks it by introducing the thread context class loader, which can be set with Thread.setContextClassLoader() and defaults to the application class loader, and using it to load SPI code. Through the thread context class loader, a parent class loader can ask a child class loader to complete class loading. The reason for breaking the model: the JNDI service is loaded by the bootstrap class loader, yet it needs to load SPI implementation classes that only a child loader (the thread context loader above) can see.
- Tomcat: the web application class loader loads classes from the application's own directories first, and only delegates to the parent loader when it fails to find them.
Tomcat has built its own class loaders for three main purposes:
- The class and lib directories of each webapp need to be isolated from each other, so that libraries loaded in one application do not affect another, while libs shared by many applications can still be shared to avoid wasting resources.
- The same security concern as the JVM itself: a separate class loader loads Tomcat's own class libraries so that other malicious or careless code cannot damage them.
- Hot deployment.
The Tomcat class loader hierarchy is shown below:
- OSGi implements modular hot deployment: a class loader is customized for each module (bundle), and when a module needs to be replaced, the module and its class loader are replaced together. Class loading in OSGi includes sideways delegation between loaders rather than only upward delegation. The reason for breaking the model: to achieve hot replacement of modules.
- In JDK 9, the extension class loader was replaced by the platform class loader. When the platform or application class loader receives a class-loading request, it first checks whether the class belongs to one of the named system modules before delegating to its parent; if an owning module is found, loading is delegated first to the loader responsible for that module. The reason for breaking the model: to support the module system.
21. What about JVM tuning commands?
- jps: JVM Process Status Tool; displays the HotSpot virtual machine processes on the system.
- jstat: JVM Statistics Monitoring tool; monitors a running virtual machine and displays its runtime data, such as class loading, memory, garbage collection and JIT compilation statistics.
- jmap: JVM Memory Map; generates heap dump files. Instead of this command you can also use the -XX:+HeapDumpOnOutOfMemoryError parameter so that the virtual machine automatically produces a dump file when an OOM occurs. jmap can also query the finalize queue and details of the Java heap and permanent generation, such as current usage and which collector is in use.
- jhat: JVM Heap Analysis Tool; used together with jmap to analyze the dump files jmap generates. jhat has a built-in miniature HTTP/HTML server; after analyzing a dump it lets you browse the results in a browser. Note that dump files generated on a server are usually copied to a local or other machine for analysis, because jhat is time- and resource-consuming.
- jstack: generates a snapshot of the Java VM's threads at the current moment. jstack shows each thread's call stack, so you can see what an unresponsive thread is doing in the background or which resource it is waiting for. If a Java program crashes and produces a core file, jstack can extract the Java stack and native stack information from it, making it easy to learn how the program crashed and where the problem occurred.