1. Basic concepts
The JVM is an abstract (virtual) computer that runs Java bytecode. Its specification defines a bytecode instruction set, a set of registers, a stack, a garbage-collected heap, and a method area that stores fields and methods. The JVM runs on top of the operating system and has no direct interaction with the hardware.
2. Running status
We all know that Java source files are turned by the compiler into corresponding .class files (bytecode files), which in turn are translated into machine code for a particular machine by the interpreter inside the Java virtual machine.
Namely, as follows:
- ① Java source files -> compiler -> bytecode files
- ② Bytecode files -> JVM -> machine code
The interpreter for each platform is different, but the virtual machine implemented on each platform is the same, which is why Java can be cross-platform. When a program is started, the virtual machine is instantiated, and multiple programs are started with multiple virtual machine instances. If the program exits or is stopped, the VM instance disappears and data cannot be shared between multiple VM instances.
2.1. Threads
A thread here refers to a thread entity during the execution of a program. The JVM allows an application to execute multiple threads concurrently. Java threads in the Hotspot JVM map directly to native operating system threads. When thread local storage, buffer allocation, synchronous objects, stacks, program counters, and so on are ready, an operating system native thread is created. The Java thread terminates and the native thread is reclaimed. The operating system is responsible for scheduling all threads and allocating them to any available CPU. When the native thread completes initialization, the Java thread’s run() method is called. When the thread terminates, all resources of the native thread and Java thread are released.
In addition to application threads, the Hotspot JVM also runs a number of internal background threads (for example the VM thread, GC threads, and JIT compiler threads).
2.2.JVM memory areas
The JVM memory area is mainly divided into thread-private areas (program counter, virtual machine stack, native method stack), thread-shared areas (Java heap, method area), and direct memory.
Thread-private data areas have the same lifecycle as their thread and are created/destroyed when the user thread starts/ends (in Hotspot VM, each thread maps directly to an operating-system native thread). Thread-shared areas are created/destroyed when the virtual machine starts up/shuts down.
Direct memory is not part of the JVM runtime data area, but it is used frequently: NIO, introduced in JDK 1.4, provides channel- and buffer-based I/O. It can use native function libraries to allocate off-heap memory directly and then use a DirectByteBuffer object as a reference to that memory (see: Java I/O extensions). This avoids copying data back and forth between the Java heap and the native heap, and can therefore significantly improve performance in some scenarios.
2.2.1. Program counter (thread private)
A small area of memory that serves as the line-number indicator of the bytecode being executed by the current thread. Each thread has its own program counter, which is why this memory is called "thread-private". If a Java method is executing, the counter records the address of the current virtual machine bytecode instruction; if a native method is executing, the counter value is undefined (empty). This is the only region for which the virtual machine specification defines no OutOfMemoryError condition.
2.2.2. Virtual Stack (Thread private)
The virtual machine stack is the memory model describing the execution of Java methods. Each method invocation creates a Stack Frame that stores the local variable table, operand stack, dynamic link, method exit, and so on; each method's journey from invocation to completion corresponds to a stack frame being pushed onto and popped off the virtual machine stack. Stack frames are the data structures used to store data and partial results; they also handle dynamic linking, method return values, and exception dispatch. A frame is created when its method is called and destroyed when the method terminates, whether the method completes normally or abnormally (by throwing an exception not caught within the method).
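The push-one-frame-per-call behavior can be observed directly: unbounded recursion keeps pushing frames until the thread stack is exhausted and the JVM raises StackOverflowError. This is a minimal sketch; the class name and the counter are illustrative, and the exact depth reached depends on the stack size (-Xss) and frame layout.

```java
public class StackDepthDemo {
    private static int depth = 0;

    private static void recurse() {
        depth++;      // each call pushes one more frame onto the VM stack
        recurse();
    }

    /** Recurses until the thread stack overflows; returns the frame count reached. */
    public static int overflowDepth() {
        depth = 0;
        try {
            recurse();
        } catch (StackOverflowError e) {
            // every frame held a local variable table, operand stack, etc.
        }
        return depth;
    }

    public static void main(String[] args) {
        System.out.println("overflow at depth " + overflowDepth());
    }
}
```

Running it with a smaller -Xss prints a proportionally smaller depth, which makes the frame-per-call model tangible.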
2.2.3. Native method stack (thread private)
The native method stack is similar to the virtual machine stack, except that the VM stack serves the execution of Java methods while the native method stack serves the execution of native methods. If a VM implementation uses the C-linkage model to support native calls, this stack will be a C stack. HotSpot VM simply merges the native method stack with the virtual machine stack.
2.2.4. Heap (Heap- thread shared) – run-time data area
The heap is an area of memory shared by all threads: objects and arrays are created in and stored in Java heap memory, which makes it the most important area for the garbage collector. Since modern VMs use generational collection algorithms, the Java heap can be further subdivided, from a GC perspective, into the new generation (Eden, From Survivor, and To Survivor spaces) and the old generation.
2.2.5. Method area/persistent generation (thread sharing)
The Permanent Generation, as it is often called, is used to store classes loaded by the JVM, constants, static variables, code compiled by the just-in-time compiler, and more. HotSpot VM extends generational GC to the method area, implementing it as the permanent generation of the Java heap so that the HotSpot garbage collector can manage this memory along with the rest of the heap, without a special memory manager for the method area (the main targets of reclamation in the permanent generation are the constant pool and unloading of types, so the benefits are generally small). The Runtime Constant Pool is part of the method area. The Constant Pool Table of a class file stores the various literals and symbolic references generated at compile time; this table is placed into the runtime constant pool of the method area after the class is loaded. The Java virtual machine has strict rules on the format of every part of a class file (including, of course, the constant pool): what data each byte stores must conform to the specification before the class is accepted, loaded, and executed by the virtual machine.
2.3.JVM runtime memory
The Java heap can also be subdivided, from a GC perspective, into the new generation (Eden, From Survivor, and To Survivor) and the old generation.
2.3.1. The new generation
It stores newly created objects and takes up about one third of the heap. Because objects are created frequently, the new generation frequently triggers MinorGC for garbage collection. The new generation is divided into three regions: Eden, Survivor From, and Survivor To.
2.3.1.1. Eden area
The birthplace of new Java objects (if a newly created object is very large, it is allocated directly in the old generation). When Eden runs out of memory, a MinorGC is triggered to collect garbage in the new generation.
2.3.1.2. Survivor From
Holds the survivors of the previous GC; it is one of the regions scanned during the current GC.
2.3.1.3. Survivor To
Holds the survivors of the current MinorGC.
2.3.1.4. The MinorGC process (copy -> clear -> swap)
MinorGC uses a copy algorithm.
1: Copy. First, the surviving objects in Eden and Survivor From are copied to Survivor To (objects that have already reached the promotion age threshold go to the old generation instead), and the age of each copied object is incremented by 1 (if Survivor To does not have enough space, objects are put into the old generation);
2: Clear. Then, the objects in Eden and Survivor From are cleared;
3: Swap. Finally, Survivor To and Survivor From swap roles: the former Survivor To becomes the Survivor From of the next GC.
2.3.2. The old generation
It mainly stores the application's long-lived objects. Objects in the old generation are relatively stable, so MajorGC does not run very often. A MinorGC is generally performed before a MajorGC, so objects newly promoted from the new generation can fill the old generation and trigger a MajorGC when space runs out. A MajorGC is also triggered early, to free up space, when no contiguous region large enough for a newly created large object can be found.
MajorGC uses a mark-sweep algorithm: it first scans the whole old generation, marks the surviving objects, and then reclaims the unmarked ones. MajorGC takes a long time because of the full scan and reclamation. Mark-sweep also produces memory fragmentation; to reduce the waste, the fragments usually have to be merged, or marked so that they can be allocated from directly next time. An OOM (Out of Memory) error is raised when the old generation becomes too full.
2.3.3. Permanent generation
A permanent area of memory where classes and meta-information are stored; classes are placed there when they are loaded. Unlike the areas where instances are stored, the GC does not clean the permanent area while the main program is running. As more and more classes are loaded, the permanent generation can therefore swell until it raises an OOM error.
2.3.3.1. JAVA 8 and the metaspace
In Java 8, the permanent generation has been removed and replaced by an area called the metaspace. The metaspace is similar in nature to the permanent generation; the biggest difference is that the metaspace is not inside the virtual machine's heap but uses native memory, so by default its size is limited only by native memory. Class metadata is put into native memory, while the string pool and class static variables are moved into the Java heap. The amount of class metadata that can be loaded is therefore no longer controlled by MaxPermSize but by the actual memory available to the system.
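The metaspace can be observed from inside a running program via the standard memory-pool MXBeans. This is a sketch that assumes a HotSpot JVM (JDK 8+), where the native class-metadata pool is named "Metaspace"; the class name MetaspaceProbe is illustrative.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MetaspaceProbe {
    /** Returns current Metaspace usage in bytes, or -1 if no such pool exists. */
    public static long metaspaceUsed() {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // On HotSpot (JDK 8+) the class-metadata pool is named "Metaspace";
            // other JVMs may name their pools differently.
            if ("Metaspace".equals(pool.getName())) {
                return pool.getUsage().getUsed();
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println("Metaspace used: " + metaspaceUsed() + " bytes");
    }
}
```

An upper bound can still be imposed with the -XX:MaxMetaspaceSize flag; without it, the pool's max reported here is effectively unlimited.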
2.4. Garbage collection and algorithms
2.4.1. How to identify garbage
2.4.1.1. Reference counting method
In Java, references and objects are associated: to manipulate an object you must do so through a reference. An obvious and simple way to determine whether an object is collectible is therefore reference counting. Simply put, an object with no references associated with it, that is, with a reference count of zero, cannot be used again and is therefore a collectible object.
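The classic weakness of reference counting is a cycle, which the next section's reachability analysis solves. A minimal sketch (class and method names are illustrative): two nodes reference each other, so a naive counter would never drop either count to zero, yet once the last external reference is gone the whole cycle is unreachable from any GC Root.

```java
public class CycleDemo {
    static class Node {
        Node next;
    }

    /** Builds a two-node reference cycle and returns its head. */
    static Node makeCycle() {
        Node a = new Node();
        Node b = new Node();
        a.next = b;   // a -> b
        b.next = a;   // b -> a: a reference cycle
        return a;
    }

    public static void main(String[] args) {
        Node a = makeCycle();
        // Each node is referenced by the other, so a naive reference counter
        // would keep both counts at 1 forever, even after 'a' is cleared.
        System.out.println(a.next.next == a); // prints "true"
        a = null;
        // Reachability analysis has no such problem: with 'a' cleared, neither
        // node is reachable from a GC Root, so both are eligible for collection.
        System.gc(); // only a hint; collection is not guaranteed at this point
    }
}
```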
2.4.1.2. Accessibility analysis
To solve the circular-reference problem of reference counting, Java uses reachability analysis instead. The search starts from a set of "GC Roots" objects; if there is no reachable path between the GC Roots and an object, the object is said to be unreachable. Note that unreachable objects are not immediately recyclable: an unreachable object must go through at least two marking passes before it becomes recyclable, and only if it is still deemed recyclable after both marks will it face collection.
2.4.2. Mark-sweep algorithm
The most basic garbage collection algorithm, divided into two phases: mark and sweep. The mark phase marks all objects that need to be reclaimed, and the sweep phase reclaims the space occupied by the marked objects. The biggest problem with this algorithm is severe memory fragmentation: a large object may later be unable to find a contiguous free space.
2.4.3. Copying Algorithms
The copying algorithm was proposed to fix the fragmentation defect of mark-sweep. Memory is divided into two equal-sized halves and only one half is used at a time; when that half fills up, the surviving objects are copied to the other half and the used half is cleared in one pass. Although this algorithm is simple to implement, efficient, and free of fragmentation, its biggest problem is that usable memory is cut to half of the original. Moreover, as the number of surviving objects grows, the efficiency of the copying algorithm drops dramatically.
2.4.4. Mark-compact algorithm
This algorithm combines the two above to avoid their defects. The marking phase is the same as in mark-sweep, but instead of sweeping, the surviving objects are moved (compacted) to one end of memory, and everything outside the end boundary is then cleared.
2.4.5. Generational collection algorithm
The generational collection method is currently used by most JVMs. The core idea is to divide memory into different regions according to object lifetime. The GC heap is typically divided into the Tenured/Old Generation and the Young Generation. The old generation is characterized by only a small number of objects needing reclamation per collection, while the new generation is characterized by a large amount of garbage needing reclamation per collection, so a different algorithm can be chosen for each region.
2.4.5.1. New generation and replication algorithms
Most JVM GCs currently use the copying algorithm for the new generation, because each collection reclaims most of its objects, meaning few objects actually need to be copied. The new generation is usually not split 1:1, however: it is generally divided into a larger Eden Space and two smaller Survivor Spaces (From Space, To Space). Each round, Eden and one Survivor space are in use; at collection time, the surviving objects of those two spaces are copied into the other Survivor space.
2.4.5.2. The old generation and the mark-compact algorithm
In the old generation, the mark-compact algorithm is used because only a few objects are reclaimed in each collection.
1. The Permanent Generation is used to store classes, constants, method descriptions, and so on. Collection in the permanent generation mainly targets discarded constants and useless classes.
2. The memory allocation of objects is mainly in the Eden Space of the new generation and the From Space of Survivor Space(Survivor currently storing objects). In a few cases, it is directly allocated to the old generation.
3. When the Eden Space and From Space of the new generation are insufficient, a GC will occur. After GC, the surviving objects in Eden Space and From Space will be moved To To Space. Then clean up Eden Space and From Space.
4. If To Space is insufficient To store an object, the object is stored in the old generation.
5. After GC, Eden Space and To Space are used, and the cycle repeats.
6. Each time an object survives a GC in Survivor, its age increases by 1. By default, objects that reach age 15 are moved to the old generation.
2.5. The four reference types in JAVA
2.5.1. Strong reference
Strong references are the most common in Java: assigning an object to a reference variable creates a strong reference to it. While an object is referenced by a strong-reference variable, it is in a reachable state and cannot be collected by the garbage collector, even if it will never be used again. Strong references are therefore one of the main causes of Java memory leaks.
2.5.2. Soft references
A soft reference is implemented with the SoftReference class. An object with only soft references is not reclaimed while system memory is sufficient, but is reclaimed when memory runs low. Soft references are commonly used in memory-sensitive programs, for example caches.
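A minimal cache-style sketch of this behavior (the class name and payload size are illustrative): the strong reference is dropped so only the soft reference remains, and under normal heap conditions get() still returns the data.

```java
import java.lang.ref.SoftReference;

public class SoftCacheDemo {
    public static void main(String[] args) {
        byte[] payload = new byte[1024 * 1024];       // 1 MB of cacheable data
        SoftReference<byte[]> cache = new SoftReference<>(payload);
        payload = null;  // drop the strong reference; only the soft one remains

        byte[] cached = cache.get();
        // While memory is plentiful the referent is still there; the collector
        // clears soft references only when it is about to run out of heap.
        System.out.println(cached != null ? "cache hit" : "cache cleared");
    }
}
```

Run with a tiny heap (e.g. -Xmx16m) and allocate aggressively, and the soft reference will be cleared before an OutOfMemoryError is thrown; that ordering is what makes soft references cache-friendly.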
2.5.3. Weak references
A weak reference is implemented with the WeakReference class and has a shorter lifetime than a soft reference. For objects with only weak references, whenever the garbage collector runs, regardless of whether JVM memory is sufficient, the memory occupied by the object is reclaimed.
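A sketch of the difference from soft references (class name illustrative): a weak referent survives only as long as a strong reference exists, and is typically cleared by the very next GC cycle regardless of free heap. Note that System.gc() is only a hint, so the post-GC null is expected but not strictly guaranteed.

```java
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) {
        Object referent = new Object();
        WeakReference<Object> weak = new WeakReference<>(referent);
        System.out.println(weak.get() != null);  // true: a strong ref still exists

        referent = null;  // now only the weak reference remains
        System.gc();      // a hint; HotSpot usually clears weak refs on the next GC
        // After a GC cycle the referent is typically gone, however much free
        // heap there is - this is the key difference from a soft reference.
        System.out.println(weak.get());
    }
}
```

WeakHashMap uses exactly this mechanism for its keys, which is why entries vanish once their keys are no longer strongly referenced elsewhere.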
2.5.4. Phantom references
Phantom references are implemented with the PhantomReference class. A phantom reference cannot be used on its own; it must be used together with a reference queue. The main purpose of a phantom reference is to track the process of an object being garbage collected.
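A sketch of the queue pairing (class name illustrative): get() on a phantom reference always returns null, so the only signal it provides is being enqueued after its referent is collected, which is how post-mortem cleanup such as freeing native memory is triggered. Whether the GC actually runs inside the 1-second window below is JVM-dependent.

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;

public class PhantomDemo {
    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        Object referent = new Object();
        PhantomReference<Object> phantom = new PhantomReference<>(referent, queue);

        // get() on a phantom reference is always null: it cannot be used to
        // resurrect the referent, only to learn that it has been collected.
        System.out.println(phantom.get() == null);  // prints "true"

        referent = null;
        System.gc();
        // Once the referent is collected, the reference is placed on the queue.
        // remove(timeout) waits up to 1s for that to happen.
        System.out.println("enqueued: " + (queue.remove(1000) != null));
    }
}
```

DirectByteBuffer cleanup in the JDK is built on this pattern: a reference object enqueued after collection triggers the release of the off-heap memory.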
2.6.GC generational collection algorithm VS partition collection algorithm
2.6.1. Generational collection algorithm
Mainstream VM garbage collection currently uses "Generational Collection", which divides memory into blocks based on object lifetime, such as the JVM's new generation, old generation, and permanent generation. In this way, the most appropriate GC algorithm can be used for each generation.
2.6.1.1. In the New Generation – Replication algorithm
Each garbage collection finds a large number of objects dead and only a few alive. Therefore, the selection of replication algorithm, only need to pay a small amount of replication cost of living objects can complete the collection.
2.6.1.2. In the old generation - the mark-compact algorithm
Because objects in the old generation have a high survival rate and there is no extra space to guarantee allocation for them, a "mark-sweep" or "mark-compact" algorithm must be used: memory is reclaimed in place, without copying, and freed directly.
2.6.2. Partition collection algorithm
The partition algorithm divides the whole heap space into many contiguous cells, each of which is used and collected independently. The advantage is that the number of cells collected in one pass can be controlled: several cells (rather than the whole heap) can reasonably be collected per pass, according to the target pause time, thereby reducing the pause produced by a single GC.
2.7.GC garbage collector
Java heap memory is divided into the new generation and the old generation. The new generation mainly uses the copying and mark-sweep garbage collection algorithms, while the old generation mainly uses the mark-compact algorithm, so the Java virtual machine provides several different garbage collectors for the new and old generations respectively. In JDK 1.6 the Sun HotSpot virtual machine has the following garbage collectors:
2.7.1. Serial garbage collector (single thread, replication algorithm)
Serial is the most basic garbage collector; it uses the copying algorithm and was the only new-generation collector before JDK 1.3.1. Serial is a single-threaded collector: not only does it use a single CPU and a single thread to do the collection, it must also suspend all other worker threads until collection is complete. Although it pauses all worker threads during collection, it is simple and efficient: in a single-CPU environment, with no thread-interaction overhead, it achieves the highest single-threaded collection efficiency. The Serial collector therefore remains the default new-generation collector for Java virtual machines running in Client mode.
2.7.2. ParNew Garbage Collector (Serial+ Multithreading)
The ParNew garbage collector is the multithreaded version of the Serial collector. It also uses the copying algorithm and behaves exactly like Serial except that it collects with multiple threads; it too suspends all other worker threads during collection. By default, the ParNew collector opens the same number of threads as there are CPUs; the number of collector threads can be limited with the -XX:ParallelGCThreads parameter. Although ParNew is almost identical to Serial apart from multithreading, it is the default new-generation collector for Java virtual machines running in Server mode.
2.7.3. Parallel Scavenge collector (multi-threaded copying algorithm, efficient)
The Parallel Scavenge collector is a new-generation collector that also uses the copying algorithm with multiple threads, but it focuses on achieving a controllable throughput (Throughput: the fraction of CPU time spent running user code, i.e. throughput = user-code time / (user-code time + garbage collection time)). High throughput makes the most efficient use of CPU time and completes the program's computation as quickly as possible, which mainly suits background jobs that need little interaction. The adaptive sizing policy is another important difference between the Parallel Scavenge collector and the ParNew collector.
2.7.4. Serial Old collector (single-threaded mark-compact algorithm)
Serial Old is the old-generation version of the Serial collector; it is likewise single-threaded and uses the mark-compact algorithm. It is primarily the default old-generation collector for Java virtual machines running in Client mode.
In Server mode, there are two main uses:
- 1. Paired with the new-generation Parallel Scavenge collector in versions prior to JDK 1.5.
- 2. As a backup garbage collection solution for the older generation using the CMS collector.
Garbage collection process diagram of the new-generation Serial and old-generation Serial Old collectors.
The new-generation Parallel Scavenge collector is similar to the ParNew collector in that it is a multi-threaded collector, uses the copying algorithm, and suspends all worker threads during garbage collection. Garbage collection process diagram of the Parallel Scavenge/ParNew collectors.
2.7.5. Parallel Old Collector (Multi-thread tag collation Algorithm)
The Parallel Old collector is the old-generation version of the Parallel Scavenge collector; it uses a multithreaded mark-compact algorithm and has only been available since JDK 1.6.
Before JDK 1.6, a new generation using the Parallel Scavenge collector could only be paired with the old-generation Serial Old collector, so throughput-first collection could be guaranteed only for the new generation, not for the heap as a whole. The Parallel Old collector was designed to provide a throughput-first garbage collector for the old generation as well. If a system demands high throughput, the new-generation Parallel Scavenge plus the old-generation Parallel Old collector is a good matching strategy.
Garbage collection process diagram of the new-generation Parallel Scavenge and old-generation Parallel Old collectors.
2.7.6. CMS collector (Multi-threaded tag clearing algorithm)
The Concurrent Mark Sweep (CMS) collector is an old-generation garbage collector whose primary goal is to minimize garbage collection pause time. Unlike the other old-generation collectors, which use the mark-compact algorithm, it uses a multithreaded mark-sweep algorithm.
Minimum garbage collection pauses can improve the user experience for highly interactive applications.
The working mechanism of CMS is more complex than that of other garbage collectors. The whole process is divided into the following four stages:
2.7.6.1. Initial mark
Only marks objects directly reachable from GC Roots. This is fast, but all worker threads must still be suspended.
2.7.6.2. Concurrent mark
The GC Roots tracing process; it runs together with the user threads, with no need to suspend worker threads.
2.7.6.3. Remark
All worker threads must again be suspended, in order to correct the mark records of objects whose marks changed while the user program kept running during concurrent marking.
2.7.6.4. Concurrent sweep
Sweeps objects unreachable from GC Roots, running together with the user threads without suspending them. Since the collector thread works alongside user threads during the longest phases, concurrent marking and concurrent sweeping, CMS memory reclamation is, on the whole, performed concurrently with the user threads.
2.7.7. G1 collector
The Garbage First (G1) collector is the latest development in garbage collector theory. Compared with the CMS collector, G1 has two most prominent improvements: 1. It is based on the mark-compact algorithm and produces no memory fragmentation. 2. Pause times can be controlled very precisely, achieving low-pause collection without sacrificing throughput. The G1 collector avoids whole-heap collection by dividing heap memory into fixed-size independent regions and tracking garbage accumulation in each region; it maintains a priority list in the background and, within the allowed collection time, collects the regions with the most garbage first (hence "Garbage First"). Region partitioning and priority-based region collection ensure that G1 achieves the highest possible collection efficiency in limited time.
2.8. JAVA IO/NIO
2.8.1. Blocking IO model
In the most traditional IO model, reads and writes block. When a user thread issues an IO request, the kernel checks whether the data is ready; if not, the kernel waits for it, while the user thread blocks and yields the CPU. When the data is ready, the kernel copies it to the user thread and returns the result, and only then is the user thread unblocked. A typical example of the blocking IO model is data = socket.read(); if the data is not ready, the call blocks inside read().
2.8.2. Non-blocking IO model
When the user thread initiates a read operation, it does not wait, but gets a result immediately. If the result is an error, it knows that the data is not ready, and it can send the read operation again. Once the data in the kernel is ready and the user thread requests it again, it copies the data to the user thread and returns. So, in fact, in the non-blocking IO model, the user thread needs to constantly ask if the kernel data is ready, which means that non-blocking IO does not surrender CPU, but always occupies CPU. Typical non-blocking IO models are as follows:
while (true) {
    data = socket.read();    // returns immediately, whether or not data is ready
    if (data != error) {     // the data was ready: process it and stop polling
        processData(data);
        break;
    }
}
A serious problem with non-blocking IO is that the while loop constantly asks the kernel if the data is ready, which can lead to very high CPU usage, so the while loop is rarely used to read data.
2.8.3. Multiplexing IO model
The multiplexing IO model is the most widely used today; Java NIO is essentially multiplexing IO. In this model, a single thread polls the state of multiple sockets, and an actual IO call is made only when a socket really has a read/write event. Because one thread can manage many sockets, the system does not need to create new processes or threads, nor maintain them, and IO resources are used only when read/write events actually occur, which greatly reduces resource consumption. In Java NIO, selector.select() queries whether each channel has an event to deliver; if no event has arrived, the call blocks, so the polling thread blocks there. Since one thread manages many sockets, and resources are consumed only for real read/write work, multiplexing IO suits workloads with a large number of connections.
In addition, multiplexing IO is more efficient than the non-blocking IO model because in non-blocking IO, the constant query of socket state is done through the user thread, whereas in multiplexing IO, polling for each socket state is done by the kernel, which is much more efficient than user threads.
Note, however, that the multiplexing IO model detects event arrival by polling and responds to events one by one. Therefore, if handling one event takes a long time, subsequent events are delayed and new event polling is affected.
2.8.4. Signal-driven IO model
In the signal-driven IO model, when the user thread initiates an IO request operation, a signal function will be registered to the corresponding socket, and then the user thread will continue to execute. When the kernel data is ready, a signal will be sent to the user thread. After the user thread receives the signal, IO read and write operations are called in the signal function to perform the actual IO request operation.
2.8.5. Asynchronous I/O model
The asynchronous IO model is the ideal one. In the asynchronous IO model, when a user thread initiates a read operation, it can immediately start doing other things. On the other hand, from the kernel’s point of view, when it receives an asynchronous read, it will immediately return indicating that the read request has been successfully initiated and therefore no blocks will be generated for the user thread. The kernel then waits for the data to be ready, then copies the data to the user thread, and when all is done, the kernel sends a signal to the user thread that the read operation is complete. In other words, the user thread does not need to know how the entire IO operation is actually going on. It only needs to make a request first. When receiving the success signal from the kernel, it indicates that the IO operation has been completed and the data can be used directly.
In the asynchronous IO model, neither phase of the IO operation blocks the user thread. Both phases are completed automatically by the kernel, which then signals the user thread that the operation has finished; the user thread never has to call an IO function to do the actual read or write. This differs from the signal-driven model: there, the signal tells the user thread that the data is ready, and the user thread must then call an IO function to perform the actual read or write, whereas in the asynchronous model the signal means the IO operation has already completed and no further IO call is needed. Note that asynchronous IO requires underlying support from the operating system; Java 7 provides asynchronous IO.
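Java 7's NIO.2 exposes this model through classes such as AsynchronousFileChannel. A sketch (class and method names of the demo are illustrative): read() returns a Future immediately and the IO completes in the background; the demo only calls get() at the end to collect the result.

```java
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;

public class AsyncReadDemo {
    /** Reads a small file's contents through an asynchronous channel. */
    static String readAsync(Path file) throws Exception {
        try (AsynchronousFileChannel ch =
                 AsynchronousFileChannel.open(file, StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate(64);
            // read() returns at once with a Future; the IO completes in the
            // background while the calling thread is free to do other work.
            Future<Integer> pending = ch.read(buf, 0);

            int n = pending.get();   // block here only to collect the result
            buf.flip();
            byte[] out = new byte[n];
            buf.get(out);
            return new String(out, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("aio", ".txt");
        Files.write(tmp, "hello".getBytes(StandardCharsets.UTF_8));
        System.out.println(readAsync(tmp));  // prints "hello"
        Files.deleteIfExists(tmp);
    }
}
```

A CompletionHandler can be passed instead of using the Future, in which case the completion signal is delivered as a callback, matching the model described above even more closely.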
More reference: www.importnew.com/19816.html
2.8.1. JAVA IO package
2.8.2. JAVA NIO
NIO has three core parts: channels, buffers, and selectors. Whereas traditional IO operates on byte streams and character streams, NIO operates on channels and buffers: data is always read from a channel into a buffer or written from a buffer into a channel. A Selector listens for events on multiple channels (such as a connection opening or data arriving), so a single thread can listen on multiple data channels. The first big difference between NIO and traditional IO is that IO is stream-oriented while NIO is buffer-oriented.
2.8.2.1. NIO buffer
Java IO being stream-oriented means that one or more bytes are read from the stream at a time until all bytes are read; they are not cached anywhere, and data cannot be moved back and forth in the stream. If you need to move data read from a stream back and forth, you must first cache it in a buffer. NIO's buffer-oriented approach is different: data is read into a buffer that is processed later, and you can move back and forth in the buffer as needed, which adds flexibility. However, you also need to check that the buffer contains all the data you need to process, and make sure that when more data is read into the buffer, you do not overwrite data that has not been processed yet.
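The back-and-forth movement is governed by the buffer's position/limit bookkeeping; flip() switches from writing to reading and rewind() re-reads the same data. A minimal sketch (class name illustrative):

```java
import java.nio.ByteBuffer;

public class BufferDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(8);

        buf.put((byte) 'N').put((byte) 'I').put((byte) 'O'); // write phase
        buf.flip();  // limit = position, position = 0: switch to read phase

        StringBuilder sb = new StringBuilder();
        while (buf.hasRemaining()) {
            sb.append((char) buf.get());      // read what was just written
        }
        System.out.println(sb);               // prints "NIO"

        buf.rewind();  // position back to 0: the same data can be re-read
        System.out.println((char) buf.get()); // prints "N"
    }
}
```

Forgetting flip() is the classic NIO mistake: without it, reads would start at the current write position and see nothing.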
2.8.2.2. Non-blocking of NIO
The various streams of IO are blocked. This means that when a thread calls read() or write(), the thread blocks until some data is read, or data is written entirely. The thread can’t do anything else in the meantime. NIO’s non-blocking mode allows a thread to send a request from a channel to read data, but it only gets what is currently available, and if no data is currently available, it gets nothing. Instead of keeping the thread blocked, it can continue doing other things until the data becomes readable. The same is true for non-blocking writes. A thread requests to write some data to a channel, but without waiting for it to write completely, the thread can do something else in the meantime. Threads typically spend the idle time of non-blocking IO performing IO operations on other channels, so a single thread can now manage multiple input and output channels.
2.8.3. Channel
First, let's talk about Channel. A Channel sits at roughly the same level as a Stream in IO, but a Stream is one-way (e.g. InputStream, OutputStream) while a Channel is two-way and can be used for both read and write operations. The main implementations of channels in NIO are:
- FileChannel
- DatagramChannel
- SocketChannel
- ServerSocketChannel
These correspond to file IO, UDP, and TCP (server and client), respectively. The examples that follow mostly involve these four types of channels.
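As a small example of the Channel-plus-Buffer pattern, the sketch below reads a temporary file through a FileChannel; every byte travels Channel -> Buffer before the program can touch it (the file name and contents are arbitrary):

```java
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FileChannelDemo {
    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("nio-demo", ".txt");
        Files.write(tmp, "hello channel".getBytes(StandardCharsets.UTF_8));
        try (FileChannel ch = FileChannel.open(tmp, StandardOpenOption.READ)) {
            ByteBuffer buf = ByteBuffer.allocate(64);
            ch.read(buf);                        // data moves from the Channel into the Buffer
            buf.flip();                          // switch the buffer from writing to reading
            byte[] bytes = new byte[buf.remaining()];
            buf.get(bytes);
            System.out.println(new String(bytes, StandardCharsets.UTF_8)); // hello channel
        } finally {
            Files.deleteIfExists(tmp);
        }
    }
}
```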
2.8.4. Buffer
A Buffer, as its name suggests, is a container: in essence a contiguous array. A Channel provides a conduit for reading data from a file or the network, but the data read or written must pass through a Buffer. The figure above depicts a client sending data to a server: the client first stores the data in a Buffer and then writes the Buffer's contents to the channel; the server reads the data from the channel into a Buffer and then reads it out of the Buffer for processing.
In NIO, Buffer is the top-level superclass, an abstract class. Common subclasses include ByteBuffer, IntBuffer, CharBuffer, LongBuffer, DoubleBuffer, FloatBuffer, and ShortBuffer.
2.8.5. The Selector
The Selector is a core class of NIO. A Selector can detect whether events have occurred on any of multiple registered channels; if so, it obtains the events and handles each one accordingly. In this way a single thread can manage multiple channels, and thus multiple connections. Handler code is invoked only when a connection actually has a read or write event, which greatly reduces system overhead: there is no need to create and maintain a thread per connection, and the cost of context switching between many threads is avoided.
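A minimal sketch of the select loop follows. A real server would register SocketChannels accepted from a ServerSocketChannel; here a Pipe stands in so the example is self-contained:

```java
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class SelectorDemo {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();
        pipe.source().configureBlocking(false);            // channels must be non-blocking to register
        pipe.source().register(selector, SelectionKey.OP_READ);

        pipe.sink().write(ByteBuffer.wrap("ping".getBytes()));

        int ready = selector.select();                     // blocks until at least one channel is ready
        System.out.println(ready);                         // 1: the source channel has data to read

        for (SelectionKey key : selector.selectedKeys()) {
            if (key.isReadable()) {                        // dispatch only channels with actual events
                ByteBuffer buf = ByteBuffer.allocate(16);
                ((Pipe.SourceChannel) key.channel()).read(buf);
            }
        }
        selector.close();
    }
}
```

The single selector thread wakes up only when a registered channel actually has an event, which is exactly the saving described above.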
2.9.JVM class loading mechanism
The JVM class loading mechanism is divided into five parts: loading, validation, preparation, parsing, and initialization. Let’s take a look at each of these five processes.
2.9.1.1. Loading
Loading is the phase of the class-loading process that generates a java.lang.Class object in memory representing the class, serving as the entry point to the class's various data in the method area. Note that the bytes do not have to come from a .class file: they can be read from a ZIP package (such as a JAR or WAR), computed at runtime (dynamic proxies), or generated from other files (such as JSP files converted into the corresponding classes).
2.9.1.2. Validation
The main purpose of this phase is to ensure that the byte stream in the Class file meets the requirements of the current virtual machine and does not compromise the security of the virtual machine.
2.9.1.3. Prepare
The preparation phase formally allocates memory for class variables and sets their initial values; this memory is allocated in the method area. Note the concept of an initial value here. For example, for a class variable defined as:
public static int v = 8080;
During preparation, v is set to the type's default value, 0. The putstatic instruction that assigns 8080 to v is generated at compile time and stored in the class constructor <clinit>() method, so the actual assignment happens during initialization.
But note that if the statement is:
public static final int v = 8080;
Then a ConstantValue attribute is generated for v during compilation, and the virtual machine assigns v the value 8080 based on that attribute already in the preparation phase.
2.9.1.4. Parse
The parsing (resolution) phase is the process by which the virtual machine replaces symbolic references in the constant pool with direct references. In the class file, symbolic references appear as:
- CONSTANT_Class_info
- CONSTANT_Field_info
- CONSTANT_Method_info
and other constants of similar types.
2.9.1.5. Symbolic Reference
- Symbolic references are independent of the layout implemented by the virtual machine, and the target of the reference does not have to have been loaded into memory. The memory layout of various virtual machine implementations can vary, but the symbolic references they accept must be consistent, because the literal form of symbolic references is explicitly defined in the Class file format of the Java Virtual Machine specification.
2.9.1.6. Direct Reference
- A direct reference can be a pointer to a target, a relative offset, or a handle that can be indirectly located to the target. If there is a direct reference, the target of the reference must already exist in memory.
2.9.1.7. Initialization
The initialization phase is the last phase of class loading. In all the previous phases the JVM dominates the process (except that the class loader can be customized during the loading phase); only in the initialization phase does the Java program code defined in the class actually begin to execute.
2.9.1.8. Class constructor
The initialization phase is the process of executing the class constructor <clinit>() method. The <clinit>() method is generated by the compiler by collecting, in source order, the assignments to class variables and the statements in static blocks. The virtual machine guarantees that the parent class's <clinit>() method executes before the subclass's. For a class that has no static variable assignments and no static blocks, the compiler may not generate a <clinit>() method at all.
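The collection order and the parent-first guarantee can be observed directly. In this illustrative sketch, triggering the subclass's initialization first runs the parent's <clinit>() (field assignment, then static block) and only then the child's:

```java
import java.util.ArrayList;
import java.util.List;

public class ClinitOrderDemo {
    static final List<String> LOG = new ArrayList<>();

    static class Parent {
        static int a = record("Parent.a assigned");    // collected into Parent's <clinit>()
        static { LOG.add("Parent static block"); }     // runs after the assignment above
    }

    static class Child extends Parent {
        static int b = record("Child.b assigned");     // collected into Child's <clinit>()
    }

    static int record(String msg) {
        LOG.add(msg);
        return 0;
    }

    public static void main(String[] args) {
        int x = Child.b;          // triggers Parent's <clinit>() first, then Child's
        System.out.println(LOG);  // [Parent.a assigned, Parent static block, Child.b assigned]
    }
}
```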
Note that class initialization is not performed in the following cases:
- A reference to a static field of a parent class through a subclass triggers initialization of the parent class, not the subclass.
- Defining an array of a class's objects does not trigger initialization of that class.
- Compile-time constants are copied into the constant pool of the class that uses them, so referencing a constant does not, in essence, reference the class that defined it and does not trigger that class's initialization.
- Obtaining a Class object through a class literal does not trigger class initialization.
- Class.forName with the initialize parameter set to false does not trigger class initialization; this parameter tells the VM whether to initialize the class.
- The default loadClass method of ClassLoader does not trigger initialization.
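Several of these cases can be demonstrated in one small sketch (the class names are illustrative): reading a compile-time constant and creating an array trigger no initialization at all, and reading a static field that a subclass inherits from its parent initializes only the parent:

```java
import java.util.ArrayList;
import java.util.List;

public class InitTriggerDemo {
    static final List<String> EVENTS = new ArrayList<>();

    static class Parent {
        static int value = 1;
        static { EVENTS.add("Parent initialized"); }
    }

    static class Child extends Parent {
        static final int CONST = 42;               // compile-time constant
        static { EVENTS.add("Child initialized"); }
    }

    public static void main(String[] args) {
        int c = Child.CONST;           // inlined at compile time: triggers no initialization
        Child[] array = new Child[10]; // creating an array does not initialize Child either
        System.out.println(EVENTS);    // []  -- nothing has been initialized yet

        int v = Child.value;           // the field is actually defined in Parent
        System.out.println(EVENTS);    // [Parent initialized]  -- Child is still untouched
    }
}
```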
2.9.2. Class loaders
The virtual machine design team placed the loading action outside the JVM implementation so that applications can decide how to obtain the classes they need. The JVM provides three kinds of class loaders:
2.9.2.1. Bootstrap ClassLoader
Responsible for loading the classes in the JAVA_HOME\lib directory, or in the path specified by the -Xbootclasspath parameter, that the virtual machine recognizes (by file name, such as rt.jar).
2.9.2.2. Extension ClassLoader
Responsible for loading class libraries in the JAVA_HOME\lib\ext directory, or in the paths specified by the java.ext.dirs system variable.
2.9.2.3. Application ClassLoader
Responsible for loading class libraries on the user’s classpath.
The JVM loads classes through the parent delegation model, but we can also implement custom class loaders by extending java.lang.ClassLoader.
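The three loaders are easy to inspect at runtime. Note that since Java 9 the extension loader has been replaced by the platform class loader, so the exact names printed depend on the JDK version:

```java
public class LoaderHierarchyDemo {
    public static void main(String[] args) {
        // Core classes come from the bootstrap loader, which is represented as null in Java code.
        System.out.println(String.class.getClassLoader());   // null

        // Classes on the classpath are loaded by the application class loader.
        ClassLoader app = LoaderHierarchyDemo.class.getClassLoader();
        System.out.println(app != null);                     // true

        // Walk the parent chain: application -> platform/extension -> bootstrap (null).
        for (ClassLoader cl = app; cl != null; cl = cl.getParent()) {
            System.out.println(cl.getClass().getName());
        }
    }
}
```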
2.9.3. Parent delegation
When a class loader receives a class-loading request, it does not first attempt to load the class itself; instead, it delegates the request to its parent loader. This holds at every level of the class-loader hierarchy, so all load requests are ultimately passed up to the bootstrap class loader. A child loader attempts the load itself only if the parent loader reports that it cannot complete the request (i.e., the desired class is not found in its search path).
One advantage of parent delegation is consistency: for example, when loading the java.lang.Object class from rt.jar, whichever loader receives the request ultimately delegates to the bootstrap class loader at the top, so different class loaders all end up with the same Object class.
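A sketch of this behavior with a custom loader: loadClass (inherited from java.lang.ClassLoader) delegates upward first, so for a core class like java.lang.String our findClass override is never reached, and every loader ends up with the same Class object:

```java
public class DelegationDemo {
    // A minimal custom loader: the inherited loadClass delegates to the parent chain first,
    // so findClass is only reached for classes the parent loaders cannot resolve.
    static class MyLoader extends ClassLoader {
        @Override
        protected Class<?> findClass(String name) throws ClassNotFoundException {
            System.out.println("findClass asked for: " + name);
            throw new ClassNotFoundException(name);
        }
    }

    public static void main(String[] args) throws Exception {
        MyLoader loader = new MyLoader();
        Class<?> c = loader.loadClass("java.lang.String");
        // Delegation walked up to the bootstrap loader, so we get the one true String class:
        System.out.println(c == String.class);   // true -- findClass was never called
    }
}
```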
2.9.4. OSGI (Dynamic Model System)
OSGi (Open Service Gateway Initiative) is a dynamic module system for Java: a series of specifications for a Java dynamic modular system.
2.9.4.1. Dynamically changing constructs
The OSGi service platform provides the ability to change a deployment dynamically on a variety of network devices without a restart. To minimize coupling and keep that coupling manageable, OSGi technology provides a service-oriented architecture that enables components to discover each other dynamically.
2.9.4.2. Modular Programming and Hot Swap
OSGi is designed to provide the foundation for modular programming of Java applications. OSGi-based programs can be hot-swapped at the module level: part of an application can be stopped, reinstalled, and restarted while the rest keeps running, which is a very attractive capability for enterprise development.
OSGi describes a beautiful modular development goal, defines the services and architecture needed to achieve it, and has mature frameworks to support it. But not all applications suit OSGi as an infrastructure: while powerful, it introduces additional complexity because it does not follow the parent delegation model of class loading.