Ali Technologies – Talk about the JVM internal lock upgrade process
What is the memory layout of an object?
After new, an object consists of four main parts in memory:
- Mark word: hash code, GC generational age, lock state flags, pointer to the thread holding the lock, biased-thread information
- Class (klass) pointer: a pointer from the object to its class metadata
- Instance data: the object's field values
- Padding: on a 64-bit server VM the object's size must be a multiple of 8 bytes; if it is not, alignment padding makes up the difference
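These parts can be inspected directly with OpenJDK's JOL tool. A minimal sketch, assuming the org.openjdk.jol:jol-core dependency is on the classpath (the Point class is just an illustration):

```java
import org.openjdk.jol.info.ClassLayout;

// Minimal sketch using OpenJDK JOL to print an object's layout:
// mark word, class pointer, instance data, and alignment padding.
// Exact offsets depend on the JVM version and flags.
public class ObjectLayoutDemo {
    static class Point {
        int x;   // instance data
        int y;   // instance data
    }

    public static void main(String[] args) {
        Point p = new Point();
        // Prints the header (mark word + class pointer), the two int fields,
        // and any padding needed to reach a multiple of 8 bytes.
        System.out.println(ClassLayout.parseInstance(p).toPrintable());
    }
}
```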
Describe the underlying implementations of synchronized and ReentrantLock, and the principle behind their reentrancy.
Synchronized:
Each object has a monitor associated with it. The monitor is locked while it is held, and a thread attempts to take ownership of the monitor when it executes the monitorenter instruction:
- If the monitor's entry count is 0, the thread enters the monitor, sets the count to 1, and becomes the monitor's owner.
- If the thread already owns the monitor and is simply re-entering it, the entry count is incremented by 1.
- If another thread already holds the monitor, the thread blocks until the entry count drops to 0, then tries again to acquire ownership.
The thread executing monitorexit must be the owner of the monitor associated with the objectref. When the instruction executes, the entry count is decremented by 1; when the count reaches 0, the thread exits the monitor and is no longer its owner, and other threads blocked on the monitor may then compete to acquire ownership.
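The reentrancy can be seen in a small illustrative sketch (the class and method names are mine): the same thread enters the same monitor twice, so the entry count goes 1 → 2 → 1 → 0 instead of the thread blocking on itself. Running `javap -c` on the compiled class shows the monitorenter/monitorexit instructions for the synchronized blocks.

```java
// Illustrative sketch of reentrancy with synchronized: the same thread
// acquires the same monitor twice; the entry count goes 0 -> 1 -> 2 -> 1 -> 0.
public class ReentrantDemo {
    private final Object lock = new Object();

    void outer() {
        synchronized (lock) {        // monitorenter: count 0 -> 1
            inner();                 // same thread re-enters the same monitor
        }                            // monitorexit: count 1 -> 0
    }

    void inner() {
        synchronized (lock) {        // monitorenter: count 1 -> 2, no blocking
            System.out.println("re-entered by " + Thread.currentThread().getName());
        }                            // monitorexit: count 2 -> 1
    }

    public static void main(String[] args) {
        new ReentrantDemo().outer(); // completes without deadlock because synchronized is reentrant
    }
}
```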
Why is AQS built on CAS + volatile underneath?
If the requested shared resource is free, the requesting thread is set as the effective worker thread and the shared resource is marked as locked. If the shared resource is already in use, the thread needs a mechanism for blocking, waiting, and later being woken up and granted the lock. AQS implements this mechanism with a CLH queue lock: threads that cannot acquire the lock right away are placed in the queue.
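As an illustration of the CAS + volatile idea, here is a hypothetical minimal exclusive lock built on AbstractQueuedSynchronizer; the class is my own sketch, not ReentrantLock's actual Sync implementation, which is more involved:

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// Hypothetical minimal exclusive lock on top of AQS, illustrating CAS + volatile:
// AQS keeps a volatile int state; acquiring the lock is a CAS from 0 to 1, and
// threads that lose the CAS are parked in the CLH wait queue by AQS itself.
public class SimpleMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int acquires) {
            // CAS on the volatile state: 0 (free) -> 1 (held)
            if (compareAndSetState(0, 1)) {
                setExclusiveOwnerThread(Thread.currentThread());
                return true;
            }
            return false; // lose the race -> AQS enqueues and parks this thread
        }

        @Override
        protected boolean tryRelease(int releases) {
            setExclusiveOwnerThread(null);
            setState(0);   // volatile write makes the release visible to waiters
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }
    public void unlock() { sync.release(1); }
}
```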
Describe the four lock states and the lock upgrade process?
- No lock (01): the resource is not locked. All threads can access and try to modify the shared resource, but only one thread's modification succeeds at a time.
- Biased lock (01): when a synchronized block is only ever entered by one thread, there is no contention, so that thread acquires the lock automatically on later entries, which cuts the cost of acquiring the lock.
- Lightweight lock (00): when a biased lock is accessed by a second thread, it is upgraded to a lightweight lock. Threads that fail to acquire it spin instead of blocking, which keeps performance up when the lock is held only briefly.
- Heavyweight lock (10): once one thread holds the lock, all other threads that want it block, and the operating system schedules and wakes them.
Lock upgrade process:
1. A freshly created object that no one has locked is an ordinary object: the lock flag is 01 and the biased bit is 0.
2. When the object is used as a synchronization lock and a thread A acquires it, the lock flag stays 01 and the biased bit is set to 1: the object enters the biased-lock state and records thread A's ID.
3. When thread B tries to acquire the lock and finds it is biased but the recorded thread ID is not B's, B attempts to take the lock with CAS. If the CAS succeeds, the biased thread ID is simply switched to B. If it fails, the biased lock is revoked and upgraded to a lightweight lock with lock flag 00; a thread that fails to acquire the lightweight lock spins and retries.
4. When the spin count exceeds a threshold (by default 10 spins), or the number of spinning threads exceeds half the CPU cores, the lightweight lock is inflated to a heavyweight lock with lock flag 10. In this state, threads that fail to grab the lock are blocked.
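A rough way to watch these transitions is to print the mark word with JOL at each stage. This is an illustrative sketch, not part of the original answer; the exact output depends on the JDK version and flags (biased locking typically needs -XX:BiasedLockingStartupDelay=0 to show up immediately, and was deprecated by JEP 374 in newer JDKs):

```java
import org.openjdk.jol.info.ClassLayout;

// Rough sketch: watch the mark word change as the lock state changes.
// Run on a JDK that still has biased locking, e.g. with
// -XX:BiasedLockingStartupDelay=0; output varies by JDK version and flags.
public class LockUpgradeDemo {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();
        System.out.println("fresh object:\n" + ClassLayout.parseInstance(lock).toPrintable());

        synchronized (lock) { // single thread: biased (or thin) lock
            System.out.println("locked by main:\n" + ClassLayout.parseInstance(lock).toPrintable());
        }

        Thread t = new Thread(() -> {
            synchronized (lock) { // second thread: bias revoked, lightweight lock
                System.out.println("locked by t:\n" + ClassLayout.parseInstance(lock).toPrintable());
            }
        });
        t.start();
        t.join();

        // Real contention (two threads fighting for the lock at the same time)
        // is what inflates it to a heavyweight lock (flag 10).
    }
}
```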
Object o = new Object() How many bytes in memory?
- With class pointer compression enabled (the default): an 8-byte mark word plus a 4-byte class pointer make 12 bytes, and 4 bytes of alignment padding bring the total to 16 bytes.
- With class pointer compression disabled: an 8-byte mark word plus an 8-byte class pointer make 16 bytes; no padding is needed, so the total is still 16 bytes. (A plain Object is not an array, so there is no array-length field.)
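These numbers can be checked with the same JOL tool sketched above; a minimal check, assuming jol-core is on the classpath (the class name is mine):

```java
import org.openjdk.jol.info.ClassLayout;

// Minimal sketch: print the layout and total size of a bare Object.
// With compressed class pointers (default): 8-byte mark word + 4-byte class
// pointer + 4 bytes padding = 16 bytes. With -XX:-UseCompressedClassPointers:
// 8 + 8 = 16 bytes, no padding needed.
public class ObjectSizeDemo {
    public static void main(String[] args) {
        Object o = new Object();
        System.out.println(ClassLayout.parseInstance(o).toPrintable());
        System.out.println("instance size: "
                + ClassLayout.parseInstance(o).instanceSize() + " bytes");
    }
}
```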
Is a spin (lightweight) lock always more efficient than a heavyweight lock?
Not always. If there are too many threads, say 10,000, then the CAS spinning itself takes time, and the CPU burns a lot of resources just switching among those 10,000 threads. In that case it is better to upgrade to a heavyweight lock and hand management over to the operating system's wait queue: the 10,000 threads sleep in the queue and are woken up one at a time instead of all spinning.
Does enabling biased locking always improve efficiency?
Not necessarily. When it is known in advance that threads will contend for the lock, enabling biased locking is not a good choice.
Once multiple threads compete for the shared resource after a lock has been biased, the biased lock must be revoked and upgraded to a lightweight lock, and that revocation and upgrade work reduces efficiency.
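If contention is expected from the start, biased locking can simply be turned off on JDKs that still support it (it was deprecated by JEP 374). An illustrative HotSpot flag:

```
-XX:-UseBiasedLocking    # skip biased locking entirely when threads are known to contend
```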
Where does the "weight" of a heavyweight lock come from?
With a heavyweight lock the JVM hands thread management over to the operating system: lock synchronization is delegated to the OS, threads that fail to acquire the lock must first be enqueued in an OS-level wait queue, and blocking, waking, and scheduling them are kernel operations. Those operations consume a lot of resources, which is what makes the lock "heavyweight".