I have wanted to write this series for a long time; it is also a summary and a way to improve my own understanding. When you are learning Java, the introductory books tell you a set of rules, but they are often hard to remember because you rarely apply them and are never told why they exist. Only when you understand the why can the rules leave an impression and be put into practice.
This article is an in-depth analysis of off-heap memory and DirectBuffer, explaining how Java handles off-heap memory, and lays the groundwork for the next article on file IO.
Java heap and stack memory versus off-heap memory
First we throw out a formula:
Max process memory = max heap size specified by -Xmx
                   + max number of active threads × per-thread stack size specified by -Xss
                   + max direct memory specified by -XX:MaxDirectMemorySize
                   + MetaSpace size
1. Heap and stack memory
By "in-heap" memory we usually mean the JVM-managed heap and stack memory: heap memory is managed by the GC, while stack memory belongs to each thread.
Heap memory structure:
There is also a more detailed structure diagram (including MetaSpace and code cache) :
Note that since Java 8, PermGen has been replaced by MetaSpace, which grows automatically at runtime and is, by default, limited only by available native memory
Let’s look at the following code to understand the stack relationship briefly:
public static void main(String[] args) {
Object o = new Object();
}
new Object() is allocated on the heap, while the reference o is stored on the main thread's stack.
- Heap memory is shared by all parts of the application, while each stack belongs to a single running thread.
- Whenever an object is created, it is stored in heap memory, and stack memory holds a reference to it. Stack memory contains only primitive values and references to objects in the heap.
- Objects stored in the heap are globally accessible, whereas a thread's stack memory cannot be accessed by other threads.
- With the JVM parameter -Xmx we can specify the maximum heap size, and with -Xss we can specify how much memory each thread's stack occupies.
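A minimal sketch illustrating the two limits: recursion consumes stack frames (bounded by -Xss) until a StackOverflowError, while the byte array lands on the heap (bounded by -Xmx). The class name and the frame-counting approach are just for illustration:

```java
public class StackVsHeap {
    // Each recursive call adds a frame to the thread stack (-Xss sized);
    // the byte[] below is allocated on the heap (-Xmx sized).
    static int depth(int n) {
        try {
            return depth(n + 1);      // grows the stack until it overflows
        } catch (StackOverflowError e) {
            return n;                 // stack limit reached
        }
    }

    public static void main(String[] args) {
        System.out.println("stack frames before overflow: " + depth(0));
        byte[] onHeap = new byte[1024 * 1024]; // lives on the heap, reference on the stack
        System.out.println("heap allocation ok: " + onHeap.length);
    }
}
```

Running with a smaller -Xss makes the overflow happen sooner, while -Xmx has no effect on it.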
2. Off-heap memory
2.1. Off-heap memory in general
Beyond heap and stack memory, the rest is off-heap memory, which includes memory the JVM itself allocates at runtime: the code cache, JNI-allocated memory, DirectByteBuffer memory, and so on
2.2. DirectByteBuffer
The "off-heap memory" we Java developers usually talk about is actually a specific kind of off-heap memory: the memory allocated when a java.nio.DirectByteBuffer is created. That is the focus of this article, because it is the kind we most often run into in practice.
Why use off-heap memory? Usually because:
- It can be shared between processes to reduce replication between VMS
- It reduces garbage collection pauses: if your application has many long-lived objects that frequently trigger young GC or full GC, consider moving them off the heap. A large heap can hurt the performance of a Java application; off-heap memory is managed directly by the operating system rather than the JVM, so keeping the in-heap footprint small reduces the impact of garbage collection on the application.
- It can improve the performance of I/O operations in certain scenarios, by eliminating the copy of data from in-heap memory to out-of-heap memory.
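As a quick taste before we dig in, here is a minimal sketch of allocating off-heap memory through DirectByteBuffer (the class and method names are mine, just for illustration):

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    // Allocates an off-heap buffer, writes a long, and reads it back.
    public static long roundTrip(long value) {
        ByteBuffer direct = ByteBuffer.allocateDirect(64); // backed by native memory, not the Java heap
        direct.putLong(value);
        direct.flip();                                     // switch from writing to reading
        return direct.getLong();
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(42L)); // prints 42
    }
}
```

The buffer's contents never live in the GC-managed heap; only the small DirectByteBuffer object itself does.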
3. JNI calls, kernel mode and user mode
- Kernel mode: the CPU can access all data in memory, including peripheral devices such as hard disks and network cards, and can switch itself from one program to another.
- User mode: only limited memory access is allowed and peripheral devices cannot be accessed; the CPU can be taken away, so CPU time may be given to other programs.
- System calls: to let upper-layer applications access these resources, the kernel provides interfaces through which they can do so.
When Java invokes a native method through JNI, that native code is where system calls are made.
Take file reading as an example: Java itself cannot read files directly, because user mode has no access to peripherals; it must switch to kernel mode through a system call to perform the read.
Java IO currently comes in two flavors: traditional stream-oriented IO and buffer-oriented NIO (although file reading is, strictly speaking, not non-blocking). Stream-oriented means you read one or more bytes at a time from the stream; what you do with them is up to you, and the stream itself keeps no cache (the data sent or received is cached by the operating system; the stream is just a conduit for reading from the OS cache). Data can only be read sequentially: if you need to skip bytes or re-read bytes you have already consumed, you must cache them yourself. Buffer-oriented processing is a bit different: data is first read into (or written from) a buffer, and you can move around in the buffer as needed. This gives you more flexibility, but you also take on extra work: checking whether all the data you need is already in the buffer, and making sure that unprocessed data in the buffer is not overwritten when more data arrives.
We will only examine the block-based NIO approach here, which in JAVA is known as ByteBuffer.
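The flexibility described above can be sketched in a few lines: a ByteBuffer lets you move the read position backwards, something a raw stream cannot do without caching on your side (the class name is hypothetical):

```java
import java.nio.ByteBuffer;

public class BufferRewind {
    // With a buffer you can re-read data by moving the position;
    // a plain InputStream would force you to cache the bytes yourself.
    public static byte readTwice() {
        ByteBuffer buf = ByteBuffer.wrap(new byte[] {10, 20, 30});
        byte first = buf.get();   // position moves from 0 to 1
        buf.position(0);          // jump back; impossible on a raw stream
        byte again = buf.get();
        return first == again ? again : -1;
    }

    public static void main(String[] args) {
        System.out.println(readTwice()); // prints 10
    }
}
```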
4. Zero-copy principle in Linux
Most Web servers handle a lot of static content, and most of it is reading data from disk files and writing it to sockets. Let’s use this process as an example to look at Linux workflows in different modes
4.1. Normal Read/Write mode
Code abstractions involved:
// Read from the file into tmp_buf
read(file, tmp_buf, len);
// Write tmp_buf to the socket
write(socket, tmp_buf, len);
These seemingly simple two steps involve a lot of copying:
- When the READ system call is called, the data is copied into kernel mode via DMA (Direct Memory Access)
- The kernel mode data is then copied to the user mode buffer under CPU control
- After the read call, the write call first copies data from the user-mode buffer to the kernel-mode socket buffer
- Finally, the data in the kernel-mode socket buffer is copied to the NIC device via another DMA copy.
As the steps above show, the data travels back and forth between kernel mode and user mode, wasting two copies (the first from kernel mode to user mode, the second from user mode back to kernel mode: steps 2 and 3 of the four above), and both are CPU copies, which consume CPU resources.
4.2. Sendfile mode
Sending a file via sendfile requires only one system call. When sendfile is called:
- The data is first read from the disk into the kernel buffer via DMA copy
- Then the data is copied from the kernel buffer to the socket buffer using a CPU copy
- Finally, the data is copied from the socket buffer to the NIC via DMA. Compared with read/write mode, sendfile saves one mode switch and one CPU copy. However, the remaining copy from the kernel buffer to the socket buffer is still not strictly necessary.
4.3. Improvement of sendFile mode
The sendfile mode was further improved in the Linux 2.4 kernel:
The improved processing process is as follows:
- DMA copy: the disk data is copied to the kernel buffer, and the location and offset of that data are appended to the socket buffer
- DMA gather copy: based on the location and offset recorded in the socket buffer, the data is copied from the kernel buffer directly to the NIC.
After this improved process, the data is copied only twice on its way from disk to NIC. (Strictly speaking, "zero copy" is from the kernel's perspective: there are zero CPU copies of the data in kernel mode.)
Currently, many high-performance HTTP servers have adopted the sendfile mechanism, such as Nginx and Lighttpd.
5. Java zero copy implementation changes
Zero copy eliminates copying from the operating system's read buffer to the program's buffer, and from the program's buffer to the socket buffer. In Java NIO this is exposed through the FileChannel.transferTo() method, which can move data from the read buffer directly to the socket buffer:
public void transferTo(long position,long count,WritableByteChannel target);
The transferTo() method transfers data from one channel to another writable channel; its internal implementation depends on the operating system's support for zero copy. On Unix operating systems and the various versions of Linux, this functionality is ultimately implemented through the sendfile() system call, whose definition is:
#include <sys/socket.h>
ssize_t sendfile(int out_fd, int in_fd, off_t *offset, size_t count);
5.1. Underlying implementation prior to Linux 2.4
As mentioned earlier, the following two diagrams show the copies and the kernel/user mode switches that occur more clearly:
There are only two switches between kernel mode and user mode, and only three data copies (only one of which uses CPU resources). Since Linux 2.4, even that single remaining CPU copy can be removed.
5.2. Low-level implementation after Linux 2.4
On Linux systems with a kernel of 2.4 or above, the socket buffer descriptor is used to meet this requirement. This approach not only reduces switching between kernel and user mode but also eliminates the CPU copy. From the user's point of view the transferTo() method is still called, but its essence has changed:
- After calling the transferTo method, the data is copied by DMA from the file to a buffer in the kernel.
- The data is no longer copied into the socket-associated buffer; only a descriptor (containing the location and length of the data) is appended to it. DMA then transfers the data directly from the kernel buffer to the protocol engine, eliminating the only remaining data copy that required CPU cycles.
5.3. Zero-copy performance: traditional Java byte-stream IO vs. NIO FileChannel
The benchmark source code:
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.channels.FileChannel;

public class FileCopyTest {

    /**
     * Copy a file through byte streams
     * @param fromFile source file
     * @param toFile target file
     */
    public static void fileCopyNormal(File fromFile, File toFile) throws FileNotFoundException {
        InputStream inputStream = null;
        OutputStream outputStream = null;
        try {
            inputStream = new BufferedInputStream(new FileInputStream(fromFile));
            outputStream = new BufferedOutputStream(new FileOutputStream(toFile));
            byte[] bytes = new byte[1024];
            int i;
            // read from the input stream and write to the output stream to copy
            while ((i = inputStream.read(bytes)) != -1) {
                outputStream.write(bytes, 0, i);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                if (inputStream != null) {
                    inputStream.close();
                }
                if (outputStream != null) {
                    outputStream.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    /**
     * Copy a file through FileChannel
     * @param fromFile source file
     * @param toFile target file
     */
    public static void fileCopyWithFileChannel(File fromFile, File toFile) {
        FileInputStream fileInputStream = null;
        FileOutputStream fileOutputStream = null;
        FileChannel fileChannelInput = null;
        FileChannel fileChannelOutput = null;
        try {
            fileInputStream = new FileInputStream(fromFile);
            fileOutputStream = new FileOutputStream(toFile);
            // get the file channel of fileInputStream
            fileChannelInput = fileInputStream.getChannel();
            // get the file channel of fileOutputStream
            fileChannelOutput = fileOutputStream.getChannel();
            // transfer the data of fileChannelInput into fileChannelOutput
            fileChannelInput.transferTo(0, fileChannelInput.size(), fileChannelOutput);
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            try {
                if (fileInputStream != null) {
                    fileInputStream.close();
                }
                if (fileChannelInput != null) {
                    fileChannelInput.close();
                }
                if (fileOutputStream != null) {
                    fileOutputStream.close();
                }
                if (fileChannelOutput != null) {
                    fileChannelOutput.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        File fromFile = new File("D:/readFile.txt");
        File toFile = new File("D:/outputFile.txt");

        // warm up
        fileCopyNormal(fromFile, toFile);
        fileCopyWithFileChannel(fromFile, toFile);

        long start = System.currentTimeMillis();
        for (int i = 0; i < 1000; i++) {
            fileCopyNormal(fromFile, toFile);
        }
        System.out.println("fileCopyNormal time: " + (System.currentTimeMillis() - start));

        start = System.currentTimeMillis();
        for (int i = 0; i < 1000; i++) {
            fileCopyWithFileChannel(fromFile, toFile);
        }
        System.out.println("fileCopyWithFileChannel time: " + (System.currentTimeMillis() - start));
    }
}
Test results:
fileCopyNormal time: 14271
fileCopyWithFileChannel time: 6632
The FileChannel version takes less than half the time (with a file of about 8MB); the difference should be even more noticeable with larger files.
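For completeness, note that since Java 7 the same copy can also be expressed through java.nio.file.Files, which can take optimized paths internally; a minimal sketch (the temp-file names are arbitrary):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class NioCopy {
    // Copies a file with java.nio.file.Files, a convenient
    // alternative to hand-rolled FileChannel code.
    public static long copy(Path from, Path to) throws IOException {
        Files.copy(from, to, StandardCopyOption.REPLACE_EXISTING);
        return Files.size(to);
    }

    public static void main(String[] args) throws IOException {
        Path from = Files.createTempFile("src", ".txt");
        Files.write(from, "hello".getBytes());
        Path to = Files.createTempFile("dst", ".txt");
        System.out.println(copy(from, to)); // prints 5
    }
}
```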
6. DirectBuffer allocation
The core buffer of Java NIO is the ByteBuffer; all IO operations go through it. There are two kinds of ByteBuffer. A HeapByteBuffer is allocated with:
ByteBuffer buffer = ByteBuffer.allocate(int capacity);
A DirectByteBuffer is allocated with:
ByteBuffer buffer = ByteBuffer.allocateDirect(int capacity);
The difference between the two:
6.1. Why is HeapByteBuffer copied one more time?
6.1.1. FileChannel Force API Description
The FileChannel.force() method forces any data in the channel that has not yet been written to disk to be flushed to disk. For performance reasons the operating system caches data in memory, so data written to a FileChannel is not guaranteed to be on disk immediately; call force() to guarantee it. The force() method takes a boolean parameter indicating whether file metadata (permission information, etc.) should also be flushed to disk.
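A small sketch of the API described above; the file path and payload are arbitrary:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ForceDemo {
    // Writes a buffer and forces it to disk; force(true) also flushes metadata.
    public static long writeAndForce(Path path, byte[] data) throws IOException {
        try (FileChannel ch = FileChannel.open(path,
                StandardOpenOption.WRITE, StandardOpenOption.CREATE)) {
            ch.write(ByteBuffer.wrap(data));
            ch.force(true); // flush file content and metadata to the device
            return ch.size();
        }
    }

    public static void main(String[] args) throws IOException {
        Path p = Files.createTempFile("force", ".bin");
        System.out.println(writeAndForce(p, new byte[] {1, 2, 3})); // prints 3
    }
}
```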
6.1.2. IOUtil source code parsing for FileChannel and SocketChannel dependencies
Both FileChannel and SocketChannel implement their read and write methods on top of the same helper class, IOUtil. Let's look at its write method here (IOUtil.java):
static int write(FileDescriptor var0, ByteBuffer var1, long var2, NativeDispatcher var4) throws IOException {
    // If a DirectBuffer is passed in, write directly
    if (var1 instanceof DirectBuffer) {
        return writeFromNativeBuffer(var0, var1, var2, var4);
    } else {
        int var5 = var1.position();
        int var6 = var1.limit();
        assert var5 <= var6;
        // Otherwise allocate a temporary DirectByteBuffer of the remaining size
        int var7 = var5 <= var6 ? var6 - var5 : 0;
        ByteBuffer var8 = Util.getTemporaryDirectBuffer(var7);
        int var10;
        try {
            var8.put(var1);
            var8.flip();
            var1.position(var5);
            // Write through the temporary DirectBuffer
            int var9 = writeFromNativeBuffer(var0, var8, var2, var4);
            if (var9 > 0) {
                var1.position(var5 + var9);
            }
            var10 = var9;
        } finally {
            // Recycle the allocated temporary DirectByteBuffer
            Util.offerFirstTemporaryDirectBuffer(var8);
        }
        return var10;
    }
}
// The read method is similar and omitted here
6.1.3. Why copy to DirectByteBuffer to read and write (system call)
First of all, a thread executing a native method is treated as being at a safepoint, so GC can run during the call and rearrange objects in memory; if NIO did not first copy the data into a DirectByteBuffer, the buffer's address could change mid-call (see my other article: Blog.csdn.net/zhxdick/art…
Traditional BIO is stream-oriented. Its underlying implementation can be understood as writing byte arrays: a native method is called to do the IO and the array reference is passed as a parameter. Even if GC changes the array's memory address, the native code can still find the latest location through the reference. The corresponding method is FileOutputStream.write:
private native void writeBytes(byte b[], int off, int len, boolean append)
throws IOException;
NIO, however, passes the raw memory address for efficiency, eliminating one level of indirection, so it must use a DirectByteBuffer whose address cannot change. The corresponding method is NativeDispatcher.write:
abstract int write(FileDescriptor fd, long address, int len)
throws IOException;
So why would a memory address change? GC not only collects unneeded objects but also defragments, moving objects around in memory to reduce fragmentation. A DirectByteBuffer is not managed by GC. If a HeapByteBuffer were used for the system call instead of a DirectByteBuffer, and GC happened during the call, the HeapByteBuffer's memory location would change; the kernel would not be aware of the change and would read or write the wrong data. So IO system calls must always go through a DirectByteBuffer, which is not affected by GC.
Suppose we want to read a piece of data from the network and then send it out. The non-direct ByteBuffer process looks like this:
Network -> temporary DirectByteBuffer -> application's non-direct ByteBuffer -> temporary DirectByteBuffer -> Network
With a direct buffer, native memory is allocated outside the heap to store the data, and the program reads/writes it directly through JNI. Since the data is written straight to off-heap memory, there is no need to allocate memory in the JVM-managed heap to hold a copy, so no copying between in-heap and off-heap data occurs. When performing IO, only the off-heap memory address needs to be passed to JNI's IO functions.
The flow with Direct ByteBuffer looks like this:
Network -> application's direct ByteBuffer -> Network
As you can see, apart from the time spent constructing and destructing temporary direct ByteBuffers, at least two memory copies are saved. So should a direct buffer be used in all cases?
No. For most applications, the time spent on two memory copies is negligible, while constructing and destructing a DirectBuffer is relatively expensive. In the JVM implementation, some methods cache a pool of temporary direct ByteBuffers, meaning that using a direct ByteBuffer there only saves the two memory copies, not the construction and destruction costs. In Sun's implementation, the write(ByteBuffer) and read(ByteBuffer) methods reuse cached temporary direct ByteBuffers, while write(ByteBuffer[]) and read(ByteBuffer[]) create a new temporary direct ByteBuffer each time.
6.2. ByteBuffer created
6.2.1. ByteBuffer Creates a HeapByteBuffer
A HeapByteBuffer is allocated on the heap and garbage-collected by the Java virtual machine directly; you can think of it as a wrapper class around a byte array:
class HeapByteBuffer
extends ByteBuffer
{
HeapByteBuffer(int cap, int lim) { // package-private
super(-1, 0, lim, cap, new byte[cap], 0);
/*
hb = new byte[cap];
offset = 0;
*/
}
}
public abstract class ByteBuffer
extends Buffer
implements Comparable<ByteBuffer>
{
// These fields are declared here rather than in Heap-X-Buffer in order to
// reduce the number of virtual method invocations needed to access these
// values, which is especially costly when coding small buffers.
//
final byte[] hb; // Non-null only for heap buffers
final int offset;
boolean isReadOnly; // Valid only for heap buffers
// Creates a new buffer with the given mark, position, limit, capacity,
// backing array, and array offset
//
ByteBuffer(int mark, int pos, int lim, int cap, // package-private
byte[] hb, int offset)
{
super(mark, pos, lim, cap);
this.hb = hb;
this.offset = offset;
}
}
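Since a HeapByteBuffer is just a wrapper around a byte[], its backing array is directly observable via hasArray()/array(), while a direct buffer reports no accessible array. A quick sketch (the class name is mine):

```java
import java.nio.ByteBuffer;

public class BackingArrayDemo {
    public static boolean heapHasArray() {
        ByteBuffer heap = ByteBuffer.allocate(16);
        heap.put((byte) 7);
        // The backing byte[] is directly visible for a heap buffer
        return heap.hasArray() && heap.array()[0] == 7;
    }

    public static boolean directHasArray() {
        // A direct buffer lives in native memory: no accessible byte[]
        return ByteBuffer.allocateDirect(16).hasArray();
    }

    public static void main(String[] args) {
        System.out.println(heapHasArray() + " " + directHasArray()); // prints "true false"
    }
}
```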
6.2.2. DirectByteBuffer
This class is not as simple as HeapByteBuffer:
DirectByteBuffer(int cap) { // package-private
    super(-1, 0, cap, cap);
    boolean pa = VM.isDirectMemoryPageAligned();
    int ps = Bits.pageSize();
    long size = Math.max(1L, (long) cap + (pa ? ps : 0));
    Bits.reserveMemory(size, cap);

    long base = 0;
    try {
        base = unsafe.allocateMemory(size);
    } catch (OutOfMemoryError x) {
        Bits.unreserveMemory(size, cap);
        throw x;
    }
    unsafe.setMemory(base, size, (byte) 0);
    if (pa && (base % ps != 0)) {
        // Round up to page boundary
        address = base + ps - (base & (ps - 1));
    } else {
        address = base;
    }
    cleaner = Cleaner.create(this, new Deallocator(base, size, cap));
    att = null;
}
The Bits.reserveMemory(size, cap) method:
static void reserveMemory(long size, int cap) {
    synchronized (Bits.class) {
        if (!memoryLimitSet && VM.isBooted()) {
            maxMemory = VM.maxDirectMemory();
            memoryLimitSet = true;
        }
        // -XX:MaxDirectMemorySize limits the total capacity rather than the
        // actual memory usage, which will differ when buffers are page
        // aligned.
        if (cap <= maxMemory - totalCapacity) {
            reservedMemory += size;
            totalCapacity += cap;
            count++;
            return;
        }
    }

    System.gc();
    try {
        Thread.sleep(100);
    } catch (InterruptedException x) {
        // Restore interrupt status
        Thread.currentThread().interrupt();
    }
    synchronized (Bits.class) {
        if (totalCapacity + cap > maxMemory)
            throw new OutOfMemoryError("Direct buffer memory");
        reservedMemory += size;
        totalCapacity += cap;
        count++;
    }
}
The Bits class has a global totalCapacity variable that records the total size of all DirectByteBuffers. By default, the off-heap memory limit is similar to the in-heap memory limit (set by -Xmx), and can be reset with -XX:MaxDirectMemorySize.
If not specified, this parameter defaults to the value of -Xmx minus the size of one survivor space. For example, with the startup parameters -Xmx20M -Xmn10M -XX:SurvivorRatio=8, 20M - 1M = 19M of direct memory can be requested.
If the limit is exceeded, System.gc() is actively executed in the hope of reclaiming some off-heap memory; this triggers a full GC, provided you have not explicitly set -XX:+DisableExplicitGC. Also, note that calling System.gc() does not guarantee that a full GC runs immediately. The allocating thread then sleeps for 100 milliseconds and checks whether totalCapacity has dropped; if memory is still insufficient, an OOM exception is raised. If the reservation succeeds, the famous sun.misc.Unsafe is called to allocate the memory and return its base address.
As a result, most frameworks request a large chunk of DirectByteBuffer at startup and do their own memory management.
Finally, a Cleaner is created and bound to a Deallocator that represents the cleanup action: reducing totalCapacity in Bits and calling Unsafe's freeMemory to release the memory.
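As noted above, frameworks often allocate one large DirectByteBuffer up front and manage it themselves. A deliberately naive sketch of the idea, carving fixed-size chunks out of a single slab with slice() (this is an illustration, not Netty's actual allocator):

```java
import java.nio.ByteBuffer;

public class SlabAllocator {
    // Hypothetical minimal sketch: hand out fixed-size chunks from one big
    // DirectByteBuffer instead of allocating many small ones, roughly the
    // idea behind pooled allocators such as Netty's.
    private final ByteBuffer slab;
    private final int chunkSize;
    private int next = 0;

    SlabAllocator(int slabSize, int chunkSize) {
        this.slab = ByteBuffer.allocateDirect(slabSize); // one native allocation
        this.chunkSize = chunkSize;
    }

    ByteBuffer allocate() {
        if (next + chunkSize > slab.capacity()) {
            throw new OutOfMemoryError("slab exhausted");
        }
        slab.position(next).limit(next + chunkSize);
        next += chunkSize;
        ByteBuffer chunk = slab.slice(); // view over the slab, no new native allocation
        slab.clear();                    // reset slab bounds for the next carve
        return chunk;
    }

    public static void main(String[] args) {
        SlabAllocator slab = new SlabAllocator(1024, 256);
        System.out.println(slab.allocate().capacity()); // prints 256
    }
}
```

A real allocator would also track freed chunks for reuse; this sketch only ever moves forward.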
6.2.3. ByteBuffer recycling
HeapByteBuffers are reclaimed by GC. A DirectByteBuffer object in the heap is very small, storing only a few attributes such as the base address and size, plus a Cleaner, but it represents a large chunk of memory allocated behind it: a so-called iceberg object. The Cleaner class has a static first variable; when a Cleaner object is initialized it is added to the Cleaner linked list, forming a reference relationship with first. A ReferenceQueue is used to collect Cleaner objects whose referents need reclaiming.
If the DirectByteBuffer object is reclaimed in a GC, at that point only the Cleaner object still holds the off-heap memory's data (start address, size and capacity); on the next full GC, the Cleaner object is put into the ReferenceQueue and its clean method is triggered.
A quick review of the in-heap GC mechanism: young GC occurs when the young generation fills up; objects that are still referenced are not reclaimed; after surviving a few young GCs, objects are promoted to the old generation; full GC occurs when the old generation also fills up.
The DirectByteBuffer object itself is very small; once it survives young GC it can sit comfortably in the old generation, even after it is no longer needed, without easily filling the old generation and triggering a full GC. It just stays there, holding on to a large amount of extra memory.
In this case, the System.gc() triggered when the quota is exceeded comes to the rescue. But this last line of insurance is not great either: first it pauses the entire process, then it puts the current thread to sleep for a full 100 milliseconds, and if GC has not finished within those 100 milliseconds it still relentlessly throws an OOM exception. And if, following some tuning guide, someone has set -XX:+DisableExplicitGC to disable System.gc(), that safety net is gone entirely.
Therefore, it is better to reclaim off-heap memory proactively, as Netty does
7. Check DirectBuffer usage:
7.1. In-process fetch:
MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
ObjectName objectName = new ObjectName("java.nio:type=BufferPool,name=direct");
MBeanInfo info = mbs.getMBeanInfo(objectName);
for (MBeanAttributeInfo i : info.getAttributes()) {
    System.out.println(i.getName() + ": " + mbs.getAttribute(objectName, i.getName()));
}
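Alternatively, since Java 7 the same numbers are available through the typed BufferPoolMXBean API, avoiding the ObjectName plumbing; a small sketch (the class name is mine):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;
import java.util.List;

public class DirectPoolStats {
    // Reads the direct buffer pool stats through the typed MXBean API,
    // a simpler route than querying the MBeanServer by ObjectName.
    public static long directCount() {
        List<BufferPoolMXBean> pools =
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        for (BufferPoolMXBean pool : pools) {
            if ("direct".equals(pool.getName())) {
                return pool.getCount(); // number of live direct buffers
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        ByteBuffer keep = ByteBuffer.allocateDirect(1024); // ensure at least one exists
        System.out.println(directCount() >= 1); // prints true
        keep.clear();
    }
}
```

The same bean also exposes getMemoryUsed() and getTotalCapacity() for the pool.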
7.2. Remote processes
Fetch via JMX. If JMX is not enabled on the target machine, add these JVM arguments:
-Dcom.sun.management.jmxremote.port=9999
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
Restart the process and then local access via JMX connection:
String jmxURL = "service:jmx:rmi:///jndi/rmi://10.125.6.204:9999/jmxrmi";
JMXServiceURL serviceURL = new JMXServiceURL(jmxURL);
Map map = new HashMap();
String[] credentials = new String[] { "monitorRole", "QED" };
map.put("jmx.remote.credentials", credentials);
JMXConnector connector = JMXConnectorFactory.connect(serviceURL, map);
MBeanServerConnection mbsc = connector.getMBeanServerConnection();
ObjectName objectName = new ObjectName("java.nio:type=BufferPool,name=direct");
MBeanInfo mbInfo = mbsc.getMBeanInfo(objectName);
for (MBeanAttributeInfo i : mbInfo.getAttributes()) {
    System.out.println(i.getName() + ": " + mbsc.getAttribute(objectName, i.getName()));
}
You can also view it locally using the JConsole tool:
But be careful not to poll too frequently, as each collection forces all threads to a safepoint (i.e. stop-the-world).
7.3. Run the JCMD command to query information
This requires native memory tracking to be enabled, which often forces all threads into a safepoint (i.e. stop-the-world), so it is not recommended for online applications.
Example:
$ jcmd 71 VM.native_memory
71:
Native Memory Tracking:
Total: reserved=1631932KB, committed=367400KB
- Java Heap (reserved=131072KB, committed=131072KB)
(mmap: reserved=131072KB, committed=131072KB)
- Class (reserved=1120142KB, committed=79830KB)
(classes # 15267)
( instance classes #14230, array classes #1037)
(malloc=1934KB #32977)
(mmap: reserved=1118208KB, committed=77896KB)
( Metadata: )
( reserved=69632KB, committed=68276KB)
( used=667KB)
( free=157KB)
( waste=0KB =0.00%)
( Class space: )
( reserved=10486KB, committed=9624KB)
( used=8939KB)
( free=685KB)
( waste=0KB =0.00%)
- Thread (reserved=24786KB, committed=5294KB)
(thread #56)
(stack: reserved=24500KB, committed=5008KB)
(malloc=198KB # 293)
(arena=88KB # 110)
- Code (reserved=250635KB, committed=45907KB)
(malloc=2947KB # 13459)
(mmap: reserved=247688KB, committed=42960KB)
- GC (reserved=48091KB, committed=48091KB)
(malloc=10439KB # 18634)
(mmap: reserved=37652KB, committed=37652KB)
- Compiler (reserved=358KB, committed=358KB)
(malloc=249KB # 1450)
(arena=109KB # 5)
- Internal (reserved=1165KB, committed=1165KB)
(malloc=1125KB # 3363)
(mmap: reserved=40KB, committed=40KB)
- Other (reserved=16696KB, committed=16696KB)
(malloc=16696KB # 35)
- Symbol (reserved=15277KB, committed=15277KB)
(malloc=13543KB # 180850)
(arena=1734KB # 1)
- Native Memory Tracking (reserved=4436KB, committed=4436KB)
(malloc=378KB # 5359)
(tracking overhead=4058KB)
- Shared class space (reserved=17144KB, committed=17144KB)
(mmap: reserved=17144KB, committed=17144KB)
- Arena Chunk (reserved=1850KB, committed=1850KB)
(malloc=1850KB)
- Logging (reserved=4KB, committed=4KB)
(malloc=4KB # 179)
- Arguments (reserved=19KB, committed=19KB)
(malloc=19KB # 512)
- Module (reserved=258KB, committed=258KB)
(malloc=258KB # 2356)
The memory used by DirectBuffer is included in the Other category.