This is the second day of my participation in the August More Text Challenge. For details, see: August More Text Challenge
Preface
- The last article introduced some basics of Synchronized and Monitor.
- So this time we'll look at the memory layout of objects in the JVM.
Object memory analysis
- In the HotSpot virtual machine, an object's in-memory layout is divided into three parts: the object header, instance data, and padding.
- Object header: stores things like the hash code, GC generational age, lock state flags, the lock held by a thread, the biased-thread ID, bias timestamp, and (for arrays) the array length. The header normally occupies two machine words (on a 32-bit VM a machine word is 4 bytes, i.e. 32 bits; on a 64-bit VM it is 8 bytes, i.e. 64 bits). If the object is an array, three machine words are required: the JVM determines an object's size from its class metadata, but an array's size cannot be known without a slot recording its length.
- Instance data: stores the field data of the class, including fields inherited from parent classes.
- Padding: the VM requires that an object's starting address be an integer multiple of 8 bytes. This part does not necessarily exist; it is used only for byte alignment.
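The 8-byte alignment rule above can be sketched with a tiny helper. Note that `alignTo8` is a hypothetical illustration, not a JVM API: it rounds a raw size up to the next multiple of 8, which is exactly what the padding achieves.

```java
// Minimal sketch of the 8-byte alignment (padding) rule.
// alignTo8 is a hypothetical helper, not part of any JVM API.
public class AlignmentSketch {
    static int alignTo8(int rawSize) {
        return (rawSize + 7) & ~7; // round up to a multiple of 8
    }

    public static void main(String[] args) {
        // e.g. a 12-byte header plus a 1-byte field -> padded to 16 bytes
        System.out.println(alignTo8(13)); // prints 16
        System.out.println(alignTo8(16)); // prints 16 (already aligned, no padding)
    }
}
```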
Object header
- The object header of the HotSpot virtual machine consists of two parts. The first part is the "MarkWord", which stores the object's own runtime data, including the hash code, GC generational age, lock state flags, the lock held by a thread, the biased-thread ID, bias timestamp, and so on. It is the key to lightweight locking and biased locking. The second part is the Klass pointer, which points to the object's class metadata and lets the JVM determine which class the object is an instance of.
- On 32-bit and 64-bit VMs, the MarkWord is 32 bits and 64 bits long respectively (without considering pointer compression). For space efficiency, the MarkWord is designed as a non-fixed data structure so that it can store as much information as possible in a very small space; it reuses its storage space according to the state of the object.
- In a 32-bit VM, when the object is unlocked, the 32-bit MarkWord uses 25 bits to store the object's hash code, 4 bits for the generational age, 2 bits for the lock flag, and 1 bit fixed at 0. The storage layout in the other states (lightweight lock, heavyweight lock, GC mark, biased lock) is as follows:
- The 64-bit VM structure is as follows:
Pointer compression
- Today our virtual machines are mostly 64-bit, and full 64-bit pointers in object headers waste space, so the JVM enables pointer compression by default and stores those pointers in 32-bit form:
-XX:+UseCompressedOops
- What information does pointer compression affect?
- Global static variables of the object (class attributes)
- Object header: on 64-bit, the raw object header is 16 bytes; compressed, 12 bytes
- Object references: a reference itself is 8 bytes raw; compressed, 4 bytes
- Object arrays: the array object itself is 24 bytes raw; compressed, 16 bytes
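To see or control this behavior, the standard HotSpot flag can be inspected and toggled from the command line (`MyApp` below is a placeholder for your own main class):

```shell
# Check whether compressed oops are in effect for the current JVM
java -XX:+PrintFlagsFinal -version | grep UseCompressedOops

# Explicitly enable / disable pointer compression
java -XX:+UseCompressedOops MyApp   # enabled (the default for smaller heaps)
java -XX:-UseCompressedOops MyApp   # disabled: references grow back to 8 bytes
```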
- Chapter 8 of Java Performance: The Definitive Guide states that when the heap grows beyond 32 GB, pointer compression can no longer be used; object references then take up roughly 20 percent more memory, so about 38 GB of heap is needed to hold the same data as a 32 GB heap with pointer compression enabled.
Analysis of Synchronized lock expansion process
There are four lock states: unlocked, biased lock, lightweight lock, and heavyweight lock. As lock contention grows, a biased lock is upgraded to a lightweight lock, and a lightweight lock to a heavyweight lock. Lock escalation is one-way; a lock is never demoted. Since JDK 1.6, biased locking and lightweight locking have been enabled by default.
Biased locking
- It is an optimization of the locking operation. Because in most cases a lock is not only uncontended by multiple threads but is always acquired by the same thread, biased locking was introduced to reduce the cost for that same thread of re-acquiring the lock (which otherwise involves CAS operations and is more time-consuming).
- Optimization: when a thread acquires the lock, the lock enters biased mode and the MarkWord's structure changes to the biased layout. When the same thread requests the lock again, it can acquire it without doing anything else, saving a lot of lock-acquisition work and improving performance.
- For situations where there is no lock competition, biased locking has a good optimization effect. However, in cases where lock contention is high, biased locking fails because the thread applying for the lock may be different each time. When the biased lock fails, it does not immediately become a heavyweight lock, but is upgraded to a lightweight lock first.
Biased locking is enabled by default
- Enable biased locking: -XX:+UseBiasedLocking -XX:BiasedLockingStartupDelay=0
- Disable biased locking: -XX:-UseBiasedLocking
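The single-thread pattern that biased locking targets can be sketched as below: one thread repeatedly entering the same monitor with no contention, so after the first acquisition re-entry is cheap. This is only an illustration of the access pattern; the biasing itself happens inside the VM.

```java
// Sketch of the pattern biased locking optimizes: the SAME thread repeatedly
// enters the SAME monitor with no contention, so the lock biases toward it
// and later acquisitions avoid CAS operations.
public class BiasedPattern {
    private int count = 0;
    private final Object lock = new Object();

    int run(int iterations) {
        for (int i = 0; i < iterations; i++) {
            synchronized (lock) {   // same thread, same monitor every time
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(new BiasedPattern().run(1_000_000)); // prints 1000000
    }
}
```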
Lightweight lock
- When biased locking fails, the virtual machine falls back to the lightweight-lock optimization, and the MarkWord's layout changes to the lightweight-lock format.
- Optimization mechanism: for most locks there is no contention during the entire synchronization period; in this scenario, threads execute the synchronized blocks alternately rather than simultaneously.
- If the same lock is contended at the same time, the lightweight lock is upgraded to a heavyweight lock.
Spin locks
- When the lightweight lock fails, a spin-lock optimization is attempted.
- Optimization mechanism: this is based on the fact that, in most cases, threads hold locks for a very short time, so suspending a thread at the operating-system level would cost more than it is worth (suspension requires the OS to switch between user mode and kernel mode). The spin lock therefore assumes that the current thread can acquire the lock in the near future: the virtual machine makes the thread trying to acquire the lock perform several empty loops (spins). If the lock is acquired within that number of spins, the thread enters the critical section; if not, the thread is suspended at the operating-system level and the lock is upgraded to a heavyweight lock.
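The spinning idea can be illustrated with a minimal user-level spin lock built on CAS. This is only a sketch of the mechanism; the JVM's own spinning happens inside the VM, not in Java code like this.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal user-level spin lock: instead of suspending, a waiting thread
// loops (spins) on a CAS until the flag is released.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        // busy-wait (spin) until we flip the flag from false to true
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // CPU hint that we are spinning (JDK 9+)
        }
    }

    void unlock() {
        locked.set(false);
    }

    public static void main(String[] args) throws InterruptedException {
        SpinLock lock = new SpinLock();
        int[] count = {0};
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock.lock();
                try { count[0]++; } finally { lock.unlock(); }
            }
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(count[0]); // prints 200000
    }
}
```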
Lock elimination
- Lock elimination is another optimization for virtual machines
- Optimization mechanism: during JIT (just-in-time) compilation, the JVM scans and analyzes the running context and removes locks that cannot possibly contend for shared resources; eliminating these unnecessary locks saves meaningless lock-acquisition time.
- For example, the append method of StringBuffer is synchronized, but when the StringBuffer in a method is a local variable that is not used by other threads, there is no lock contention and the JVM eliminates the lock automatically. Lock elimination is based on escape analysis.
If Java is running in Server mode (which is more aggressively optimized than client mode) and escape analysis is enabled, -XX:+DoEscapeAnalysis enables escape analysis and -XX:+EliminateLocks enables lock elimination.
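The StringBuffer case described above can be sketched as follows; the class and method names are illustrative. Because `sb` is purely local and never escapes the method, escape analysis lets the JIT elide the monitor operations inside `append()`, even though `append()` is declared synchronized.

```java
// The StringBuffer lock-elimination example: sb never escapes concat(), so
// no other thread can ever contend for its monitor, and the JIT may remove
// the locking done inside StringBuffer.append().
public class LockElision {
    static String concat(String a, String b) {
        StringBuffer sb = new StringBuffer(); // purely local: never escapes
        sb.append(a);
        sb.append(b);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(concat("lock ", "elision")); // prints "lock elision"
    }
}
```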
Escape analysis
- Using escape analysis, the JIT compiler can apply the following optimizations:
1. Synchronization elision: if an object is found to be accessible from only one thread, operations on that object can be performed without synchronization.
2. Converting heap allocation to stack allocation: if an object is allocated in a subroutine and no pointer to it ever escapes, the object may be allocated on the stack instead of the heap.
3. Separating objects, or scalar replacement: some objects do not need to exist as a contiguous memory structure to be accessed, so part or all of the object may be stored not in memory but in CPU registers.
- So, are all objects and arrays allocated on the heap? Not necessarily.
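A typical candidate for optimizations 2 and 3 looks like the sketch below (the `Point` class and `manhattan` method are made up for illustration): each `Point` is created and consumed entirely inside the method and never escapes, so the JIT may stack-allocate it or break it into its two int fields (scalar replacement) rather than allocating it on the heap.

```java
// Sketch of allocations that escape analysis can optimize away: neither
// Point escapes manhattan(), so both are candidates for stack allocation
// or scalar replacement.
public class EscapeSketch {
    static class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static int manhattan(int x1, int y1, int x2, int y2) {
        Point a = new Point(x1, y1); // never escapes this method
        Point b = new Point(x2, y2); // never escapes this method
        return Math.abs(a.x - b.x) + Math.abs(a.y - b.y);
    }

    public static void main(String[] args) {
        System.out.println(manhattan(0, 0, 3, 4)); // prints 7
    }
}
```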
Finally
- Study with an open mind and make progress together