What are "spoken-language" answers?
They are what you should actually say when the interviewer asks you a question.
Have you ever been asked a question you know the answer to, one you have seen before and maybe even written a blog post about yourself, and yet you still falter, or just say whatever comes to mind?
Spoken-language answers differ from typical blog writing: the wording is plain but concise, it highlights the key point, and it explains the problem as a conversation rather than a lecture.
More spoken-language notes: github.com/Omooo/Andro…
With this in hand, you can add one more personal skill to your resume:
Familiar with the JVM, including memory regions, the memory model, GC, the class loading mechanism, and compiler optimization
The body follows. Discussion welcome!
Directory

- Memory regions
- Memory model
- Memory allocation and reclamation policies
- Creation, memory layout, and access location of Java objects
- GC
  - Reference counting and reachability analysis
  - Garbage collection algorithms
  - G1 and ZGC
- Class loading mechanism
  - Parent delegation model
- Compiler optimization
  - Method inlining
  - Escape analysis
- VM internals
  - HotSpot and JIT
  - Dalvik
  - ART and AOT
- How does the JVM perform method calls?
- How does the JVM implement reflection?
- How does the JVM implement generics?
- How does the JVM implement exceptions?
- How does the JVM implement annotations?
Memory regions
Runtime data areas in Java can be divided into two parts: those that are thread-private, namely the virtual machine stack, the native method stack, and the program counter, and those that are shared between threads, namely the method area and the heap.
The program counter is a small memory space that can be viewed as the line-number indicator of the bytecode being executed by the current thread.

The virtual machine stack describes the memory model of Java method execution: each method invocation creates a stack frame that stores the local variable table, operand stack, dynamic link, method return address, and so on. Each method's journey from invocation to completion corresponds to one stack frame being pushed onto and popped off the virtual machine stack.

The native method stack plays a role very similar to the virtual machine stack, except that it describes the execution of native methods rather than Java methods.
The Java heap is a data area shared by all threads, used primarily to hold object instances, and it is the main area managed by the garbage collector. From a memory-reclamation point of view, the Java heap can be subdivided into a young generation and an old generation, because modern collectors are mostly generational; the young generation can be further divided into an Eden space and two Survivor spaces (From and To). From a memory-allocation perspective, the thread-shared Java heap may also carve out thread-private allocation buffers (TLABs). However the heap is divided, the division has nothing to do with what is stored: every area stores object instances, and the purpose of subdividing is to reclaim memory better or allocate it faster.

The method area stores class information loaded by the virtual machine, constants, static variables, just-in-time compiled code, and so on. The JVM's constraints on the method area are relatively loose: like the Java heap, it does not require contiguous memory and can be either fixed-size or expandable, and an implementation may even choose not to implement garbage collection there. Garbage collection is indeed relatively rare in this area.

The runtime constant pool is part of the method area and stores the literals and symbolic references generated at compile time. An important feature of the runtime constant pool, compared with the Class-file constant pool, is its dynamic nature: new constants can be added to the pool at run time, for example via String's intern method.
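The dynamic nature of the runtime constant pool can be seen directly with String.intern(). A minimal sketch (class name is mine):

```java
// Sketch: String.intern() returns the pooled instance for a string's contents,
// adding the string to the runtime constant pool first if it is not there yet.
public class InternDemo {
    public static boolean demo() {
        String literal = "jvm";           // the literal lives in the string pool
        String built = new String("jvm"); // a distinct object on the heap
        // built is a different object, but intern() hands back the pooled one
        return built != literal && built.intern() == literal;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // true
    }
}
```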
In Java 6, the permanent generation lived in non-heap memory; with Java 7, the permanent generation's static variables and string constant pool were moved into the heap; in Java 8, the permanent generation was replaced by the metaspace. Many developers tend to refer to the method area as the "permanent generation", but the two are not equivalent: the HotSpot virtual machine merely used the permanent generation to implement the method area, and Java 8 removed that implementation in favor of the metaspace, which lives in native memory. So why did Java 8 replace the permanent generation with the metaspace? What is gained?
The official explanation is that removing the permanent generation was part of the effort to merge the HotSpot JVM with the JRockit VM: JRockit has no permanent generation, so no permanent generation needs configuring. Second, permanent-generation memory frequently ran out and was prone to OOM, for example with a PermGen sized at only 8M in Java 7. The recovery rate of class metadata in PermGen at each Full GC is low, and it is hard to determine how much space PermGen should get, since the appropriate PermSize depends on many factors: the total number of classes the JVM loads, the size of the constant pools, the size of methods, and so on.
Memory model
The Java Memory Model (JMM) is used to shield the differences in memory access between hardware and operating systems, so that Java programs achieve consistent memory-access behavior across platforms.
The Java memory model specifies that all shared variables are stored in main memory, and that each thread has its own working memory, which holds a copy of the shared variables that thread uses. A thread must perform all operations on a variable in its working memory; it cannot read or write main memory directly. Different threads cannot access each other's working memory, and variable values are passed between threads through main memory.
So why design it this way?
This comes down to hardware. The CPU executes far faster than memory can be accessed, so to bridge this speed gap there are multiple levels of cache between the CPU and memory, such as L1, L2, and L3. When processing data, the CPU first reads it from memory into its own cache, then manipulates it in the cache, and finally synchronizes it back to memory. You can loosely think of the CPU cache as a thread's working memory and of main memory as the JMM's main memory: imprecise, but a helpful analogy.
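The working-memory/main-memory split is visible in code through volatile, which forces a thread to re-read the shared variable from main memory. A minimal sketch (names are mine; the timing values are arbitrary):

```java
// Sketch: a volatile flag guarantees the worker thread sees the writer's update
// instead of spinning forever on a stale working-memory copy.
public class VisibilityDemo {
    private static volatile boolean stop; // without volatile, the loop might never observe the write

    // Returns true if the worker observed the flag write and exited promptly.
    public static boolean runsToCompletion() {
        stop = false;
        Thread worker = new Thread(() -> {
            while (!stop) { /* spin, re-reading `stop` each iteration */ }
        });
        worker.start();
        try {
            Thread.sleep(50);   // let the worker spin for a while
            stop = true;        // write is published to main memory...
            worker.join(1000);  // ...and the worker must re-read and see it
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return !worker.isAlive();
    }

    public static void main(String[] args) {
        System.out.println(runsToCompletion()); // true
    }
}
```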
Memory allocation and reclamation policies
The memory allocation and reclamation policy consists of three aspects:

- Objects are allocated in Eden first. More precisely, allocation happens first on the TLAB within Eden; if Eden lacks the space, a Minor GC is triggered.
- Large objects go straight into the old generation. Large objects are Java objects that require a large amount of contiguous memory, such as long arrays. As a rough rule, objects larger than 3M are allocated directly in the old generation.
- Long-lived objects are promoted to the old generation. Since virtual machines manage memory with generational collection, the collector must decide which objects belong in the young generation and which in the old. To do this, the virtual machine gives every object an age counter. If an object born in Eden survives its first Minor GC and fits in a Survivor space, it is moved to Survivor and its age is set to 1. Each further Minor GC it survives in a Survivor space increments its age by 1, and once the age reaches a threshold (15 by default) it is promoted to the old generation.
Finally, Minor GC versus Full GC.

A Minor GC is a collection of the young generation; because most Java objects die young, Minor GCs are frequent and fast.

A Full/Major GC is a collection involving the old generation, usually accompanied by at least one Minor GC, and is typically about ten times slower than a Minor GC.
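The three policies above can be sketched in a few lines. The JVM flags in the comment are illustrative only and collector-dependent (for example, PretenureSizeThreshold applies to some collectors but not others); the class and method names are mine:

```java
// Run with something like:
//   -Xlog:gc* -Xms20m -Xmx20m -Xmn10m -XX:MaxTenuringThreshold=15 -XX:PretenureSizeThreshold=3145728
// to watch where each allocation lands (flags are illustrative, not universal).
public class AllocationDemo {
    static final int _1MB = 1024 * 1024;

    static int allocate() {
        byte[] small = new byte[256 * 1024]; // goes into Eden (on the TLAB if it fits)
        byte[] big   = new byte[4 * _1MB];   // above the big-object threshold: straight to the old generation
        // Objects that keep surviving Minor GCs have their age bumped until
        // MaxTenuringThreshold (15 by default) promotes them to the old generation.
        return small.length + big.length;
    }

    public static void main(String[] args) {
        System.out.println(allocate()); // 4456448 bytes requested in total
    }
}
```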
Creation, memory layout, and access location of Java objects
When the virtual machine encounters a new instruction, it first checks whether the instruction's operand can locate a class's symbolic reference in the constant pool, and whether the class that reference represents has been loaded; if not, the class loading process runs first. Once the class-loading check passes, the virtual machine allocates memory for the new object; the memory an object requires is fully determined once its class is loaded. Allocating memory for an object means carving a chunk of a given size out of the Java heap, and there are two allocation methods: bump-the-pointer and free list. Which one is used depends on whether the heap is compacted, which in turn depends on whether the garbage collector in use compacts memory. Object creation is extremely frequent in a virtual machine, and even just bumping a pointer is not thread-safe under concurrency. There are two solutions: one is CAS with retry on failure, and the other is TLAB, the thread-private allocation buffer.
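"CAS with retry on failure" for bumping an allocation pointer can be illustrated with a toy allocator (entirely my own sketch, not the JVM's implementation):

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy bump-the-pointer allocator: many threads bump a shared pointer;
// a failed compareAndSet simply loops and retries with the fresh value.
public class BumpAllocator {
    private final AtomicLong top = new AtomicLong(0); // next free offset
    private final long end;                           // end of the region

    public BumpAllocator(long capacity) { this.end = capacity; }

    /** Returns the start offset of the carved-out block, or -1 if the region is full. */
    public long allocate(long size) {
        while (true) {
            long current = top.get();
            if (current + size > end) return -1;              // would overflow the region
            if (top.compareAndSet(current, current + size)) {
                return current;                               // CAS won: the block is ours
            }
            // CAS lost to another thread: retry with the updated pointer
        }
    }

    public static void main(String[] args) {
        BumpAllocator eden = new BumpAllocator(100);
        System.out.println(eden.allocate(40)); // 0
        System.out.println(eden.allocate(40)); // 40
        System.out.println(eden.allocate(40)); // -1: region full
    }
}
```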
Next comes the object's memory layout. In the HotSpot virtual machine, an object is stored in memory in three parts: the object header, the instance data, and alignment padding. You can use OpenJDK's open-source JOL tool to inspect an object's memory layout; a bare new Object() is 16 bytes, that is, a 12-byte object header plus 4 bytes of padding. JOL is handy for analyzing collection resizing, hash collisions in HashMap, and so on.
A Java program operates on concrete heap objects through reference data on the stack. Because the Java Virtual Machine Specification only stipulates that a reference points to an object, not how the reference should locate and access it, object access depends on the virtual machine implementation. There are two mainstream approaches: handles and direct pointers. Using a handle adds a level of indirection: when an object is moved, only the instance-data pointer inside the handle changes, and the reference itself stays untouched. HotSpot uses the second approach; the biggest benefit of direct pointers is speed.
GC
When a garbage collector reclaims an object, it first determines whether the object is truly no longer in use. There are two methods: reference counting and reachability analysis.
Reference counting and reachability analysis
Reference counting simply attaches a counter to each object, incremented whenever a reference to it is created and decremented whenever a reference is dropped. It is easy to implement and efficient to evaluate, but the JVM does not use reference counting to manage memory, mainly because it struggles with circular references between objects. Reachability analysis instead takes a set of objects called GC Roots as starting points and searches downward from them; when no reference chain connects an object to GC Roots, that is, the object is unreachable, the object is no longer usable. In Java, objects that can serve as GC Roots include objects referenced from the virtual machine stack and the native method stack, objects referenced by class static fields in the method area, objects referenced by constants in the method area, and so on.
That is not to say reference counting is useless: the smart pointers used in the Android Framework's native layer are exactly that. A smart pointer is a technique that automatically maintains an object's reference count; it is an object itself rather than a raw pointer, but it behaves like a pointer to the object in use. In short, constructing a smart pointer increments the reference count of the object it points to, and destructing it decrements that count. But how does it solve circular references? Through strong and weak references: an object's reference count is split into a strong count and a weak count, and only the strong count controls the object's lifetime. For example, when objects A and B reference each other, treat A as the parent and B as the child: A references B through the strong count, while B references A through the weak count. When A is no longer used, its lifetime is unaffected by B, which only holds it weakly, so A can be released safely; releasing A drops its strong reference to B, so B can then be reclaimed too. In Android, sp is the strong smart pointer and wp the weak one.
References in Java fall into four categories: strong, soft, weak, and phantom. Strong references are the ordinary kind produced by operations such as new; as long as a strong reference exists, the JVM will not reclaim the object, even at the cost of an OOM. A soft reference is reclaimed only when memory runs short; the JDK provides the SoftReference class to implement it. A weak reference is reclaimed at the next GC regardless of memory pressure, via the WeakReference class. A phantom reference has no effect on an object's lifetime at all; it exists only so you can receive a notification when the object is reclaimed, and is implemented with the PhantomReference class.
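The four reference types can be exercised in a few lines (a sketch; note that whether a weak referent is actually cleared by System.gc() is a strong tendency on HotSpot, not a guarantee):

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static void main(String[] args) {
        SoftReference<Object> soft = new SoftReference<>(new Object()); // cleared only under memory pressure
        WeakReference<Object> weak = new WeakReference<>(new Object()); // cleared at the next GC
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        PhantomReference<Object> phantom =
                new PhantomReference<>(new Object(), queue);            // get() always returns null

        System.out.println(soft.get() != null); // true: memory is plentiful
        System.out.println(phantom.get());      // null: phantoms only signal via their queue
        System.gc();                            // a hint; weak referents are typically cleared now
        System.out.println(weak.get());         // usually null after GC (not strictly guaranteed)
    }
}
```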
Next comes the garbage collection algorithm.
Garbage collection algorithms
The main garbage collection algorithms are mark-sweep, copying, and mark-compact.

Mark-sweep first marks live objects starting from GC Roots and then sweeps away everything unmarked; its main problem is memory fragmentation. The CMS collector used for the old generation is based on mark-sweep.

The copying algorithm divides memory into two halves and allocates objects in only one of them; when that half fills up, the surviving objects are copied into the other half and the first half is cleared in one go. The cost is that usable memory shrinks dramatically. Modern virtual machines use the copying algorithm for the young generation, but divide the memory into one Eden space and two Survivor spaces at a ratio of 8:1:1, using Eden plus one Survivor at a time, so only 10% of the memory is wasted. If Survivor space is insufficient, allocation falls back on other memory, typically the old generation, as a guarantee.

Because copying becomes inefficient when the survival rate is high, it is generally not used for the old generation. Mark-compact is used instead: after marking, all surviving objects are moved toward one end, and the memory beyond the boundary is then cleared.
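Mark-sweep (and why reachability handles cycles that reference counting cannot) can be shown with a toy object graph; everything here is an invented model, not the JVM's actual heap:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy mark-sweep: mark everything reachable from the roots, sweep the rest.
public class MarkSweepDemo {
    static Map<String, List<String>> heap = new HashMap<>(); // object -> objects it references

    static Set<String> collect(List<String> roots) {
        Set<String> marked = new HashSet<>();
        Deque<String> stack = new ArrayDeque<>(roots);
        while (!stack.isEmpty()) {                     // mark phase: traverse from GC Roots
            String obj = stack.pop();
            if (marked.add(obj)) {
                stack.addAll(heap.getOrDefault(obj, List.of()));
            }
        }
        Set<String> garbage = new HashSet<>(heap.keySet());
        garbage.removeAll(marked);                     // sweep phase: unmarked objects are garbage
        heap.keySet().removeAll(garbage);
        return garbage;
    }

    public static void main(String[] args) {
        heap.put("A", List.of("B"));
        heap.put("B", List.of());
        heap.put("C", List.of("D")); // C and D reference each other in a cycle...
        heap.put("D", List.of("C")); // ...but no chain from the roots reaches them
        System.out.println(collect(List.of("A"))); // the cycle {C, D} is collected anyway
    }
}
```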
Finally, the specific application of garbage collection algorithm, that is, garbage collector.
G1 and ZGC
The Garbage First (G1) collector is a milestone in the history of garbage collector technology: it pioneered the design of collectors oriented toward local collection and a region-based memory layout. It targets server-side applications; CMS was deprecated in JDK 9, when G1 became the default collector, and CMS was removed entirely in JDK 14. Every collector before G1, CMS included, had a collection scope of either the whole young generation (Minor GC), the whole old generation (Major GC), or the whole Java heap (Full GC). G1 instead works on a Region-based heap layout. Although G1 still follows generational collection theory, its heap layout differs sharply from other collectors': it no longer insists on fixed-size, fixed-count generational areas, but divides the contiguous Java heap into many independent Regions of equal size, and each Region can serve as Eden space, Survivor space, or old-generation space as needed, with the collector applying different policies depending on a Region's role. The name Garbage First comes from the fact that, within the collection pause time the user allows, G1 prioritizes the Regions whose collection yields the most value.
The operation of the G1 collector can be roughly divided into four steps:
1. Initial marking
This only marks the objects that GC Roots can directly reach; it requires pausing the user threads, but only briefly.
2. Concurrent marking
Reachability analysis of heap objects is performed starting from GC Roots, recursively scanning the entire object graph to find objects to reclaim. This phase is time-consuming but runs concurrently with the user program.
3. Final marking
Another short pause is made on the user thread to handle new object reference chain changes during the concurrent marking phase.
4. Screening and evacuation
Update the statistics of each Region, rank the Regions by collection value and cost, and form a collection plan based on the pause time the user expects.
G1's goal is to complete garbage collection within a controllable pause time, hence the Region design, but it still has problems. Pause times can remain too long: G1 pauses typically range from tens to hundreds of milliseconds, and small as that already is, it still falls short where user experience demands more. G1's supported heap size is also limited: it is not suited to systems with very large memory, especially beyond 100GB, where pause times grow.
ZGC was introduced in JDK 11 as a new-generation garbage collector. Its design goals were threefold: support terabyte-scale heaps, keep pause times under 10ms, and affect program throughput by less than 15%.
Class loading mechanism
The virtual machine loads the data describing a class from a Class file into memory, then verifies, prepares, resolves, and initializes that data, finally producing a Java type the virtual machine can use directly. This is the virtual machine's class loading mechanism.
The class loading process has five phases: loading, verification, preparation, resolution, and initialization.
In the loading phase, the binary byte stream that defines a class is obtained via its fully qualified name, and the static storage structure the byte stream represents is converted into the method area's runtime data structures. Loading is the phase developers control most, because they can supply custom class loaders. Arrays are different: an array class is not created by a class loader but directly by the Java virtual machine.
Verification is the first step of the linking phase. It ensures that the byte stream in the Class file meets the requirements of the current virtual machine and will not compromise the virtual machine's safety. It includes file-format checks, metadata checks, bytecode checks, and so on.
The preparation phase formally allocates memory for class variables and sets their initial values; this memory is allocated in the method area. Note two things: only class variables are involved, not instance variables, which are allocated on the Java heap along with the object when it is instantiated; and the initial value set here is the zero value of each variable's type.
The resolution phase is where the virtual machine replaces symbolic references in the constant pool with direct references. A symbolic reference describes its target with a group of symbols; a direct reference can be a pointer straight to the target.
The initialization phase executes the class constructor, the <clinit>() method. <clinit>() is generated by the compiler automatically collecting the assignments to all class variables together with the statements in static blocks, in the order they appear in the source file. The virtual machine guarantees that a class's <clinit>() is properly locked and synchronized in a multithreaded environment: if multiple threads initialize a class at the same time, only one of them executes <clinit>() while the others block and wait. This is one of the main reasons the static-inner-class idiom can implement a singleton.
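That <clinit>() guarantee is exactly what the holder-idiom singleton leans on. A minimal sketch:

```java
// The holder idiom: the JVM's locked, one-time execution of Holder's <clinit>()
// makes this singleton both lazy and thread-safe with no explicit synchronization.
public class Singleton {
    private Singleton() { }

    private static class Holder {
        // Runs exactly once, on first access, under the JVM's class-initialization lock.
        static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton getInstance() {
        return Holder.INSTANCE; // first call triggers Holder's initialization
    }

    public static void main(String[] args) {
        System.out.println(getInstance() == getInstance()); // true
    }
}
```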
Parent delegation model
The parent delegation model works as follows: when a class loader receives a class loading request, it does not try to load the class itself first, but delegates the request to its parent loader. This happens at every level, so every loading request should ultimately reach the top-level bootstrap class loader; only when the parent reports that it cannot complete the request does the child loader attempt to load the class itself.
An obvious benefit of organizing class loaders with the parent delegation model is that a Java class carries a priority hierarchy along with its loader. The Object class, for example, is the same class in every loader environment of the application no matter which loader loads it, and duplicate loading is avoided. The model also protects the stability of Java programs: for instance, you cannot directly use your own Unsafe class in an application.
The implementation of the parent delegation model is fairly simple; the code is concentrated in ClassLoader's loadClass method: first check whether the class has already been loaded; if not, call the parent's loadClass method first; only when the parent fails and throws a ClassNotFoundException does the loader call its own findClass method to load the class.
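That logic can be sketched in a custom loader (a simplified rendering of the JDK's loadClass flow; the class name is mine, and a real subclass would normally override findClass rather than loadClass):

```java
// Sketch of the delegation logic in ClassLoader.loadClass, simplified.
public class DelegatingLoader extends ClassLoader {
    public DelegatingLoader(ClassLoader parent) { super(parent); }

    @Override
    protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
        Class<?> c = findLoadedClass(name);      // 1. already loaded by this loader?
        if (c == null) {
            try {
                c = getParent().loadClass(name); // 2. delegate to the parent first
            } catch (ClassNotFoundException e) {
                c = findClass(name);             // 3. only then try to load it ourselves
            }
        }
        if (resolve) resolveClass(c);
        return c;
    }

    public static void main(String[] args) throws Exception {
        DelegatingLoader loader = new DelegatingLoader(ClassLoader.getSystemClassLoader());
        // Delegation sends java.lang.String all the way up to the bootstrap loader,
        // so every loader environment sees the exact same Class object:
        System.out.println(loader.loadClass("java.lang.String") == String.class); // true
    }
}
```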
Compiler optimization
Internally, I once gave a talk on compiler optimization titled "From whether final can improve performance: on compilation optimization".
The compilation of Java code divides into front-end and back-end compilation. Front-end compilation is converting Java source into bytecode with the javac tool. Since javac is responsible for bytecode generation, it performs some general-purpose optimizations, such as constant folding, auto-(un)boxing, and conditional compilation, plus, since JDK 9, routing String "+" concatenation through the unified StringConcatFactory entry point. Back-end compilation refers to the interpreter and the just-in-time compilers (C1, C2) built into the JVM. The JVM's optimization of code execution splits into runtime optimizations and just-in-time (JIT) compiler optimizations. Runtime optimizations are mechanisms shared by interpretation and dynamic compilation, such as the locking machinery (for example biased locking) and the memory allocation machinery (for example TLAB); a few target interpretation efficiency specifically, such as the template interpreter and inline caches (which optimize the dynamic binding of virtual method calls). JIT optimization means compiling hot code into machine code, a method at a time, to run directly on the underlying hardware. It employs many optimizations: techniques also available to static compilers, such as method inlining and escape analysis, as well as speculative optimizations based on the program's runtime profile.
I’ll focus on method inlining and escape analysis.
Method inlining is the optimization that, when a method call is encountered during compilation, includes the body of the target method in the compilation scope and replaces the original call with it. Inlining not only removes the cost of the call itself but also unlocks further optimizations, which makes it one of the most important compiler optimizations. Take getters/setters as an example: without inlining, calling a getter/setter requires saving the current method's execution position, creating and pushing a frame for the getter/setter, accessing the field, popping the frame, and resuming the current method. With the call inlined, only the field access remains. However, the just-in-time compiler does not inline without limit: it decides based on the method's invocation count, the size of the method body, the size of the code cache, and so on. Even hot code will not be inlined if its body is too large, since that would consume more memory. So in everyday coding, keep method bodies small. For virtual calls that need dynamic binding, the JIT must first devirtualize the call, that is, turn it into one or more direct calls, before inlining can happen. Now you can see the benefit of final/static: use the final, private, and static keywords where you reasonably can, because virtual methods, owing to inheritance, require extra type checks to determine which method is actually being called.
Escape analysis is a technique for determining whether an object is referenced by an outside method or accessed by an outside thread. The just-in-time compiler can use its results to perform optimizations such as lock elision, stack allocation, and scalar replacement. If the JIT can prove that a lock object never escapes, then locking and unlocking it is meaningless, because no other thread can ever acquire it, so the compiler can eliminate the locking entirely; synchronized (new Object()) { ... }, for example, is optimized away completely. In practice, the results of escape analysis are more often used to turn new-object allocations into stack allocation or scalar replacement. Objects in the Java virtual machine normally live on the heap, whose contents are visible to all threads, and the JVM must manage that heap memory and reclaim objects once they are no longer referenced. If escape analysis proves that a newly created object does not escape, the JVM could simply allocate it on the stack and reclaim the space automatically when the method's frame is popped. However, HotSpot does not actually implement stack allocation, because too much existing code assumes objects are heap-allocated; it uses scalar replacement instead. A scalar is a variable that can hold only a single value, such as a local variable in Java code; scalar replacement is the optimization of replacing accesses to an object's fields with accesses to local variables.
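A conceptual sketch of both optimizations in one method (names are mine; whether the JIT actually applies them depends on flags and warm-up, so the comments describe candidates, not guarantees):

```java
// With escape analysis on (the HotSpot default), `lock` and `p` never escape sum(),
// so the JIT may elide the synchronized block and scalar-replace the Point.
public class EscapeDemo {
    static class Point {
        int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static int sum(int x, int y) {
        Object lock = new Object();    // never visible to another thread
        synchronized (lock) {          // candidate for lock elision
            Point p = new Point(x, y); // candidate for scalar replacement: p.x, p.y become locals
            return p.x + p.y;
        }
    }

    public static void main(String[] args) {
        System.out.println(sum(3, 4)); // 7
    }
}
```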
VM internals
Let's start with the HotSpot virtual machine.
From the hardware's perspective, Java bytecode cannot run directly, so the JVM must translate it into machine code. In HotSpot there are two forms of execution: interpreted execution, which translates bytecode into machine code instruction by instruction as it runs, and just-in-time compiled execution, which compiles a whole method into machine code before running it. The former needs no compilation wait; the latter runs faster once compiled. HotSpot defaults to mixed mode, combining the benefits of both: it interprets bytecode first, then compiles, method by method, the hot code that is executed repeatedly.
HotSpot ships multiple JIT compilers, C1 and C2, introduced to trade off compilation time against the quality of the generated code. Java 7 introduced tiered compilation, which divides the JVM's execution state into five levels. Level 0 is interpreted execution, with profiling enabled by default; levels 1 through 3 are C1 compilation, compiling bytecode into native code with simple, reliable optimizations; level 4 is C2 compilation, also producing native code but enabling optimizations that take longer to compile, and even some aggressive, unreliable optimizations based on profiling information.
So much for HotSpot and JIT.
Dalvik and ART.
HotSpot is stack-based, while Dalvik is register-based; there is little official documentation left about Dalvik, and since Android 5 ART has completely replaced it. Dalvik used a JIT, while ART uses AOT. The difference is that a JIT compiles dynamically at run time, recompiling each time the program runs, whereas AOT compiles statically: during installation the dex2oat process precompiles dex into an OAT file, so the app need not be recompiled on every run. ART also improved the GC process over Dalvik: a collection needs only one pause instead of Dalvik's two, and processing runs in parallel during the remaining pause.

AOT solved app startup and execution speed but brought two new problems: installation (and post-system-update optimization) takes a long time, and the optimized files occupy extra storage. So after Android 7, JIT returned, forming a hybrid AOT/JIT compilation mode. Its characteristics: dex is not compiled at install time; at run time the dex file is first executed by the interpreter; hot code is identified, JIT-compiled into the code cache, and recorded in a profile file; and when the phone is idle or charging, the system scans the profile files in the app directory and runs the AOT process to compile the recorded hot code. As you can see, this is actually quite similar in flavor to HotSpot.
How does the JVM perform method calls?
This comes down to understanding how the Java compiler and the JVM identify methods. Method overloading is resolved at compile time, whereas method overriding is resolved at run time.
The Java compiler selects among overloaded methods based on the declared types of the arguments, while the JVM identifies methods by their descriptors, which consist of the parameter types and the return type. The JVM has five bytecode instructions for method calls: invokestatic for static methods, invokespecial for private instance methods (and constructors), invokevirtual for non-private instance methods, invokeinterface for interface methods, and invokedynamic for dynamically computed calls. For invokestatic and invokespecial, the JVM can identify the target method directly; for invokevirtual and invokeinterface, in most cases the JVM must determine the target at execution time based on the caller's dynamic type. The exception is when the virtual machine can prove there is one and only one possible target, for example when the method is final, in which case it can bind the target directly without consulting the dynamic type.
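The compile-time/run-time split shows up in a few lines (class and method names are mine):

```java
// Overload resolution uses the declared (static) type at compile time;
// override dispatch uses the actual (dynamic) type at run time.
public class DispatchDemo {
    static class Animal { String speak() { return "..."; } }
    static class Dog extends Animal { @Override String speak() { return "woof"; } }

    static String describe(Animal a) { return "animal"; } // overload chosen by declared type
    static String describe(Dog d)    { return "dog"; }

    static String[] demo() {
        Animal pet = new Dog();   // declared type Animal, dynamic type Dog
        return new String[] {
            describe(pet),        // "animal": overload resolved at compile time
            pet.speak()           // "woof": override dispatched at run time (invokevirtual)
        };
    }

    public static void main(String[] args) {
        for (String s : demo()) System.out.println(s);
    }
}
```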
The invokevirtual and invokeinterface calls above are called virtual method calls, or dynamic binding, and they cost more than statically bound calls that locate the method directly. The JVM implements dynamic binding with a space-for-time strategy: it generates a method table for each class so the target can be located quickly, and this happens during the preparation step of class loading. The method table is essentially an array with two properties: a subclass's table contains all the methods of its parent's table, and a method that overrides a parent method keeps the same index in the subclass's table as in the parent's. We know that symbolic references in call instructions are resolved into actual references before execution: for a statically bound call, the actual reference is the concrete method, while for a dynamically bound call, it is an index into the method table.
The JVM also provides inline caching to speed up dynamic binding: at a virtual call site it caches the caller's dynamic type along with the target method resolved for that type.
How does the JVM implement reflection?
Reflection is an important feature of the Java language: it lets a running Java program observe and even modify its own dynamic behavior. There are two sides to it. For any class, it can discover all of the class's fields and methods; and for any object, it can access those fields and invoke those methods.
Using reflection is relatively straightforward. The APIs involved fall into a few groups: Class, members (Field, Method, Constructor), and arrays and enums. I read the official Oracle documentation directly at the time, and it is very detailed.
My curiosity about reflection comes from often hearing that it hurts performance. So where is the overhead, and how can it be optimized?
Before getting to that, I'll talk about how the JVM implements reflection.
We can throw a new Exception inside a reflectively invoked method to inspect the call stack: Method.invoke() delegates to DelegatingMethodAccessorImpl.invoke(), which in turn calls NativeMethodAccessorImpl.invoke(). The former is called the delegating implementation and the latter the native implementation. Why insert a delegating middle layer when the concrete implementation is native? Because the reflection machinery also has a second implementation that dynamically generates bytecode and calls the target method directly with an invoke instruction, called the dynamic implementation; the delegating implementation exists to switch between the native and dynamic implementations. According to the comments in the source, the dynamic implementation can run up to 20 times faster than the native one, since it avoids switching from Java into C++ and back again. But generating the bytecode is itself time-consuming, so for a single call the native implementation is three to four times faster. Given that many reflective calls execute only once, the JVM sets a threshold of 15: below it, the native implementation is used; beyond it, bytecode is generated for the dynamic implementation. This is known as the Inflation mechanism.
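A small sketch of the mechanism in action. The helper below invokes a method reflectively in a loop; on a stock HotSpot JDK the early calls go through the native accessor and later ones through a generated bytecode accessor, though the threshold and accessor class names are JDK internals and may differ between versions.

```java
import java.lang.reflect.Method;

public class ReflectDemo {
    public static String greet(String name) { return "hi, " + name; }

    // Invoke greet() reflectively. The first ~15 calls use the native
    // accessor; after that the JVM inflates to a generated bytecode accessor.
    public static String callReflectively(String name) {
        try {
            Method m = ReflectDemo.class.getMethod("greet", String.class);
            return (String) m.invoke(null, name);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 20; i++) {
            System.out.println(callReflectively("jvm"));
        }
    }
}
```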
So where is the performance overhead of reflection? A typical reflective call chains Class.forName, Class.getMethod, and Method.invoke. Class.forName calls into a native method, and Class.getMethod iterates over the class's public methods and, if there is no match, over the parent class's public methods; both are predictably expensive. As for Method.invoke itself: first, it takes a variable-length argument list, which means building an Object array and boxing any primitive arguments. Second, a runtime permission check is performed inside invoke, which is another cost. Finally, ordinary method calls benefit from a range of optimizations, such as method inlining and escape analysis, that reflective calls cannot, which magnifies the performance gap further.
To optimize reflective calls: avoid reflectively invoking virtual methods where possible, turn off the runtime permission check, consider enlarging the wrapper-class cache for primitive types, turn off Inflation when the call count is known in advance, and raise the number of types an inline cache can record.
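Two of those tips can be shown in a short sketch: cache the Method object once instead of re-looking it up per call, and disable the per-invoke access check with setAccessible(true). (The class and method names here are invented for the example.)

```java
import java.lang.reflect.Method;

public class ReflectCacheDemo {
    private static int square(int x) { return x * x; }

    // Look the Method up once and cache it; setAccessible(true) skips
    // the runtime permission check on every subsequent invoke.
    private static final Method SQUARE;
    static {
        try {
            SQUARE = ReflectCacheDemo.class.getDeclaredMethod("square", int.class);
            SQUARE.setAccessible(true);
        } catch (NoSuchMethodException e) {
            throw new ExceptionInInitializerError(e);
        }
    }

    public static int callSquare(int x) {
        try {
            // Note: the int argument is still boxed to Integer by the varargs call.
            return (int) SQUARE.invoke(null, x);
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(callSquare(7)); // 49
    }
}
```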
How does the JVM implement generics?
Generics in Java are essentially syntactic sugar: the compiler checks type usage at compile time and then erases the actual type, inserting a checkcast instruction where values are read back, so a mismatched type throws a ClassCastException at run time.
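A quick sketch of both halves of that claim: after erasure two differently parameterized lists share one runtime class, and a wrong element smuggled in through a raw reference only fails at the checkcast when it is read back.

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    // Both parameterizations erase to the same runtime class.
    public static boolean sameErasedClass() {
        return new ArrayList<String>().getClass() == new ArrayList<Integer>().getClass();
    }

    // The compiler inserts a checkcast where the element is read back;
    // a wrong type added through a raw reference fails only there.
    @SuppressWarnings({"rawtypes", "unchecked"})
    public static boolean rawInsertFailsOnRead() {
        List<String> strings = new ArrayList<>();
        List raw = strings;
        raw.add(42);                   // compiles with an unchecked warning
        try {
            String s = strings.get(0); // checkcast String -> ClassCastException
            return false;
        } catch (ClassCastException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(sameErasedClass());      // true
        System.out.println(rawInsertFailsOnRead()); // true
    }
}
```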
However, the bytecode still retains generic parameter information. For example, a method declared as T foo(T) keeps the signature (TT;)TT; in its Signature attribute, and this information is available through the getGenericXxx reflection APIs.
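This can be verified with a short sketch: the method below erases to Object foo(Object), yet reflection still recovers the type variable T from the Signature attribute.

```java
import java.lang.reflect.Method;
import java.lang.reflect.TypeVariable;

public class SignatureDemo {
    // Erased to Object foo(Object); the Signature attribute keeps "(TT;)TT;".
    public static <T> T foo(T t) { return t; }

    public static String typeParamName() {
        try {
            // Look the method up by its erased parameter type, Object.
            Method m = SignatureDemo.class.getMethod("foo", Object.class);
            TypeVariable<?>[] vars = m.getTypeParameters();
            return vars[0].getName(); // recovered from the Signature attribute
        } catch (NoSuchMethodException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(typeParamName()); // T
    }
}
```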
In addition, note that generics combined with arrays have some easily overlooked pitfalls. Arrays are covariant and reified: their element type constraint is known and checked at run time. Code can compile cleanly and still fail with an ArrayStoreException, so use lists wherever possible, which is why Effective Java recommends lists over arrays. You can see this in the collections source code too: ArrayList, for instance, stores its data as an Object[] rather than an E[], casting on the way out. There is also the use of wildcards to make APIs more flexible, summed up by the PECS principle (producer-extends, consumer-super). A classic example is the Collections.copy method:
public static <T> void copy(List<? super T> dest, List<? extends T> src);
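The two points above can be sketched together: array covariance compiles but fails at the runtime element check, while the wildcard bounds on Collections.copy let a List&lt;Integer&gt; producer fill a List&lt;Number&gt; consumer.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class PecsDemo {
    // Arrays are covariant: this compiles but fails at run time.
    public static boolean arrayStoreFails() {
        Object[] objs = new String[1];   // legal: a String[] is an Object[]
        try {
            objs[0] = 42;                // runtime element check -> ArrayStoreException
            return false;
        } catch (ArrayStoreException expected) {
            return true;
        }
    }

    // PECS: src is a producer (? extends T), dest is a consumer (? super T).
    public static List<Number> copyIntsIntoNumbers() {
        List<Integer> src = Arrays.asList(1, 2, 3);
        List<Number> dest = new ArrayList<>(Arrays.asList(0, 0, 0));
        Collections.copy(dest, src);     // dest must be at least as long as src
        return dest;
    }

    public static void main(String[] args) {
        System.out.println(arrayStoreFails());     // true
        System.out.println(copyIntsIntoNumbers()); // [1, 2, 3]
    }
}
```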
How does the JVM implement exceptions?
In Java, every exception is a Throwable or one of its subclasses, and Throwable has two main subclasses, Error and Exception. When a program triggers an Error, such as a memory or stack overflow, its execution state cannot be recovered and the thread or virtual machine has to be terminated. Exceptions fall into two categories: checked exceptions, such as IOException, and unchecked exceptions (RuntimeException and its subclasses), such as null pointer dereferences and array index out of bounds.
I’m going to talk about this in three ways.
First, constructing an exception instance is expensive. This is because when an exception is constructed, the JVM captures its stack trace, visiting the current thread's Java stack frames one by one and recording debugging information for each: the name of the method the frame points to, the class that declares it, and its location in the source file.
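One known way around that cost, sketched below: Throwable's protected four-argument constructor (available since Java 7) lets a subclass opt out of stack-trace capture entirely, which is a common trick for exceptions used as control flow in hot paths.

```java
public class CheapException extends RuntimeException {
    // writableStackTrace = false skips the expensive stack walk;
    // enableSuppression = false also skips suppression bookkeeping.
    public CheapException(String message) {
        super(message, null, false, false);
    }

    public static int stackDepthOf(Throwable t) {
        return t.getStackTrace().length;
    }

    public static void main(String[] args) {
        System.out.println(stackDepthOf(new CheapException("fast")));        // 0
        System.out.println(stackDepthOf(new RuntimeException("slow")) > 0);  // true
    }
}
```

The trade-off is obvious: such an exception carries no trace, so it is only appropriate when the throw site is unambiguous.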
Second, the JVM relies on an exception table to catch exceptions. Each method has one, and each entry represents an exception handler, consisting of from, to, and target pointers plus the exception type it handles. The from-to range is essentially the try block, and target is the start of the catch handler. When the program raises an exception, the JVM checks whether the index of the bytecode that triggered it falls within an entry's from-to range; if so, and the exception type matches, it executes the bytecode at target to handle the exception.
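A small sketch of how that table drives handler selection: the two catch clauses below compile to two exception-table entries covering the same from-to range, tried in order, so the narrower type wins when it matches and the broader one catches the rest.

```java
public class CatchDemo {
    // Each catch clause becomes one exception-table entry:
    // same from-to range (the try block), different target and type.
    public static String handle(RuntimeException e) {
        try {
            throw e;
        } catch (IllegalStateException ise) { // entry 1: matched first
            return "state";
        } catch (RuntimeException re) {       // entry 2: broader type, tried next
            return "runtime";
        }
    }

    public static void main(String[] args) {
        System.out.println(handle(new IllegalStateException())); // state
        System.out.println(handle(new NumberFormatException())); // runtime
    }
}
```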
Finally, how finally blocks are compiled. We know a finally block always runs (unless the virtual machine exits). So how is that achieved? The current approach is rather blunt: the compiler copies the contents of the finally block and places a copy at every possible exit path of the try statement.
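A minimal sketch showing the effect of that duplication: both the normal fall-through path and the exceptional path run their own copy of the finally body.

```java
public class FinallyDemo {
    static final StringBuilder LOG = new StringBuilder();

    // The compiler duplicates the finally body at every exit, so both
    // the normal and the exceptional path append "finally;" at the end.
    public static String run(boolean explode) {
        LOG.setLength(0);
        try {
            LOG.append("try;");
            if (explode) throw new RuntimeException("boom");
        } catch (RuntimeException e) {
            LOG.append("catch;");
        } finally {
            LOG.append("finally;");
        }
        return LOG.toString();
    }

    public static void main(String[] args) {
        System.out.println(run(false)); // try;finally;
        System.out.println(run(true));  // try;catch;finally;
    }
}
```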
How does the JVM implement annotations?
There is no black magic here, but where is annotation information stored? In the class file it is kept in annotation attributes, RuntimeVisibleAnnotations for runtime-retained annotations (and RuntimeInvisibleAnnotations for class-retained ones). Each attribute holds an array of annotations, since a class, method, or field can carry multiple annotations, and each element of that array in turn holds an array of element-value pairs, which store the annotation's parameters.
Runtime annotations can be retrieved via reflection, and compile-time annotations can be processed with APT; that is basically all there is to it.
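To round this off, a minimal sketch (the annotation and class names are invented): an annotation with RUNTIME retention lands in the class file's RuntimeVisibleAnnotations attribute and is readable through reflection.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

public class AnnotationDemo {
    // RUNTIME retention -> stored in RuntimeVisibleAnnotations,
    // so it survives into the running JVM and reflection can see it.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Tag {
        String value();
    }

    @Tag("hello")
    static class Target {}

    public static String readTag() {
        // getAnnotation reads the element-value pairs back from the attribute.
        return Target.class.getAnnotation(Tag.class).value();
    }

    public static void main(String[] args) {
        System.out.println(readTag()); // hello
    }
}
```

Had Tag used RetentionPolicy.CLASS or SOURCE instead, getAnnotation would return null here.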