The JVM and GC

The JVM buckets

List of JVM knowledge
JVM memory partition

Method area

GC is used to unload constant pools and types

Store VM loaded class information, constants, static variables, and JIT-compiled code

Threads share

Run-time constant pool

The virtual machine stack

Stack frame, store local variable table, operand stack, dynamic link, method exit

Thread private

Local variable table

Local method stack

Native method

The heap

Thread sharing, object instances, GC

Program counter

The only region that never throws OOM; selects the next bytecode instruction to execute

Why does JDK 8 use Metaspace instead of the permanent generation?

Metaspace (reasons for replacing the permanent generation, Metaspace features, how to inspect Metaspace memory)

  • Interned strings were stored in the permanent generation, which was prone to performance problems and memory overflow
  • The size of class and method metadata is hard to predict, so it is hard to size the permanent generation: too small and the permanent generation overflows easily, too large and it squeezes the old generation
  • The permanent generation introduced unnecessary complexity into the GC and was inefficient to collect

The method area is moved to Metaspace, and string constants are moved to the Java Heap.

Determine object survival

Reference counting: cannot handle circular references

Reachability analysis from GC Roots:

Objects that can serve as GC Roots in Java include the following:

  • Objects referenced in the virtual machine stack (the local variable table in the stack frame);
  • The object referenced by the static property of the class in the method area;
  • The object referenced by the constant in the method area;
  • Objects referenced by JNI (that is, native methods) in the native method stack
Strong Reference, Soft Reference, Weak Reference, Phantom Reference

Strong Reference: The garbage collector does not collect as long as the Strong Reference exists

Soft Reference: describes objects that are useful but not required. Before an OOM is thrown, the system includes these objects in a second round of collection; if memory is still insufficient afterwards, the OOM is thrown. Java provides the SoftReference class to implement this.

Weak Reference: describes non-essential objects. A weakly referenced object survives only until the next garbage collection: it is reclaimed regardless of whether memory is sufficient. Implemented by the WeakReference class.

Phantom Reference: the weakest reference type. A phantom reference neither affects an object's lifetime nor can be used to obtain the object instance. The sole purpose of a phantom reference is to receive a system notification when the object is collected by the garbage collector. Implemented by the PhantomReference class.
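A minimal sketch of the four reference types described above (class and method names here are illustrative):

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static boolean weakIsClearedAfterGc() throws InterruptedException {
        WeakReference<Object> weak = new WeakReference<>(new Object());
        // A weakly referenced object is reclaimed at the next GC regardless of free memory;
        // System.gc() is only a hint, so poll a few times.
        for (int i = 0; i < 50 && weak.get() != null; i++) {
            System.gc();
            Thread.sleep(10);
        }
        return weak.get() == null;
    }

    public static void main(String[] args) throws InterruptedException {
        Object strong = new Object();                                   // strong: never collected while reachable
        SoftReference<Object> soft = new SoftReference<>(new Object()); // collected only under memory pressure
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        PhantomReference<Object> phantom = new PhantomReference<>(new Object(), queue);
        // phantom.get() always returns null; the reference is enqueued after collection
        System.out.println("phantom.get() = " + phantom.get());
        System.out.println("weak cleared: " + weakIsClearedAfterGc());
    }
}
```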

GC algorithm, garbage collector, collection strategy?

JVM memory allocation strategy and garbage collector

-XX:+UseTLAB

Runtime data area

Stack allocation, TLAB allocation

TLAB (Thread-Local Allocation Buffer) is a thread-private allocation buffer. Each thread carves its TLAB out of Eden, and when the thread is destroyed the TLAB memory is naturally reclaimed with it. Objects are normally allocated on the heap, which is shared by all threads, so concurrent allocation requests would have to be synchronized, making allocation slow. Since object allocation is just about the most frequent operation in Java, the JVM uses thread-private areas such as the TLAB to avoid multithreaded contention and speed up object allocation.

Enable the TLAB with -XX:+UseTLAB

Advantages: Thread safety, reduced garbage collection stress.

Disadvantages: TLAB space size is fixed, not flexible enough when facing large objects

ClassFile structure

No one usually asks that.

Class file structure

Class loading mechanism

Class loading

The Bootstrap ClassLoader loads the Java core class libraries and cannot be referenced directly by Java programs.

Extension Class Loader: loads the Java extension libraries. The Java virtual machine implementation provides an extension library directory, and this class loader finds and loads Java classes in that directory.

System Class Loader: loads Java classes from the application's CLASSPATH. In general, application classes are loaded by it. It can be obtained via ClassLoader.getSystemClassLoader().

User-defined class loaders are implemented by extending java.lang.ClassLoader.
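A minimal sketch of a user-defined loader, which also demonstrates parent delegation: the request climbs to the bootstrap loader, so core classes resolve without ever reaching findClass (class name illustrative):

```java
public class MyClassLoader extends ClassLoader {
    public MyClassLoader() {
        super(MyClassLoader.class.getClassLoader()); // parent = application (system) loader
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        // A real loader would read the bytecode here and call
        // defineClass(name, bytes, 0, bytes.length).
        throw new ClassNotFoundException(name);
    }

    public static void main(String[] args) throws Exception {
        // Parent delegation: java.lang.String is resolved by the bootstrap loader.
        Class<?> c = new MyClassLoader().loadClass("java.lang.String");
        System.out.println(c == String.class);             // true
        System.out.println(String.class.getClassLoader()); // null: loaded by bootstrap
    }
}
```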

Parent delegation mechanism

Parent delegation

How Tomcat breaks parent delegation

Tomcat class loading mechanism

How can spin locks be fair

Simple unfair spin lock and implementation of fair spin lock based on queuing
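The queue-based fair spin lock mentioned above can be sketched as a ticket lock: threads take a ticket and spin until their number is served, guaranteeing FIFO order (a minimal sketch; class and field names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class TicketSpinLock {
    private final AtomicInteger ticket = new AtomicInteger(0);  // next ticket to hand out
    private final AtomicInteger serving = new AtomicInteger(0); // ticket currently allowed in

    public void lock() {
        int my = ticket.getAndIncrement();   // take a number: arrival order = service order
        while (serving.get() != my) {
            Thread.onSpinWait();             // JDK 9+: hint to the CPU that we are busy-waiting
        }
    }

    public void unlock() {
        serving.incrementAndGet();           // hand the lock to the next ticket in line
    }
}
```

An unfair spin lock, by contrast, would simply CAS a single flag, letting a late arrival barge past threads already spinning.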

TLAB

Problems to be solved by TLAB

The problem TLAB solves is obvious: avoid allocating memory directly from the shared heap, which would cause frequent lock contention.

OutOfMemoryError: have you seen it? How do you locate it?

OOM analysis

MAT usage

JVM heap memory analysis tool MAT

java.lang.StackOverflowError

The reason:

  • Infinite recursive calls
  • A large number of local variables declared
  • A very deep chain of method calls

Solution:

  • Fix the infinite-recursion bug
  • Check for cyclic dependencies
  • Increase the thread stack size with -Xss (limited improvement)
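The first cause is easy to reproduce; a minimal sketch that triggers and catches the overflow (class and method names illustrative):

```java
public class StackDepth {
    private static int depth = 0;

    private static void recurse() {
        depth++;
        recurse(); // no base case: each frame consumes stack until StackOverflowError
    }

    public static int measureDepth() {
        depth = 0;
        try {
            recurse();
        } catch (StackOverflowError e) {
            // expected: the thread stack (sized by -Xss) was exhausted
        }
        return depth;
    }

    public static void main(String[] args) {
        System.out.println("overflowed after ~" + measureDepth() + " frames");
    }
}
```

Running with a larger -Xss raises the reported depth, which is why increasing the stack only postpones, never fixes, a true infinite recursion.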
java.lang.OutOfMemoryError: Java heap space

The reason:

  • Creating very large objects or arrays
  • Traffic beyond the expected volume, e.g. flash sales and snap-up events
  • Overusing finalize(), so that objects are never collected
  • A memory leak

Solution:

  • Increase -Xmx
  • Check for oversized objects
  • Add machines; apply rate limiting and degradation
  • Memory leaks must be located in the code
What is the difference between a memory leak and a memory overflow, and what are common memory leak scenarios?

Memory overflow (OOM) means the program cannot obtain enough memory when it requests it.

A memory leak means the program fails to release memory it requested once it no longer needs it.

Memory leakage scenarios are as follows:

  • Static collection classes, e.g. a static HashMap
  • Fields of objects in a collection are modified, so remove() fails because hashCode() has changed
  • Listeners not removed when the objects they observe are released
  • Database connections and network connections not closed
  • Inner classes referenced by outer modules
  • The singleton pattern
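The second scenario is easy to reproduce: if a key's hashCode() changes after insertion, the entry is stranded in its old bucket and can no longer be removed. A minimal sketch (class name illustrative):

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Mutable key whose equals/hashCode depend on a field that is later changed.
public class MutableKey {
    int id;
    MutableKey(int id) { this.id = id; }
    @Override public boolean equals(Object o) {
        return o instanceof MutableKey && ((MutableKey) o).id == id;
    }
    @Override public int hashCode() { return Objects.hash(id); }

    public static boolean leakAfterMutation() {
        Set<MutableKey> set = new HashSet<>();
        MutableKey key = new MutableKey(1);
        set.add(key);          // stored in the bucket for hashCode(1)
        key.id = 2;            // hashCode changes; the entry stays in the old bucket
        set.remove(key);       // looks in the bucket for hashCode(2): not found
        return !set.isEmpty(); // true: the entry is stranded and cannot be removed
    }

    public static void main(String[] args) {
        System.out.println("entry leaked: " + leakAfterMutation());
    }
}
```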
GC overhead limit exceeded

Reason: the Java process spends more than 98% of its time in GC but reclaims less than 2% of the heap; when this repeats five times in a row, java.lang.OutOfMemoryError: GC overhead limit exceeded is thrown.

Solution:

  • Check for infinite loops and code that uses large amounts of memory
  • Dump the memory and analyze it
  • Increase the heap size
Direct buffer memory

Reason: Java allows applications to access off-heap memory directly via Direct ByteBuffer, and many high-performance programs implement fast I/O with Direct ByteBuffer combined with memory-mapped files. If the heap is barely used, the JVM rarely runs GC, so the Direct ByteBuffer objects are never collected. The heap then has plenty of room while native memory runs out, and an OOM occurs.

Solution:

  • Check usages of ByteBuffer.allocateDirect, and frameworks such as Netty and Jetty
  • Raise the Direct ByteBuffer limit with the startup parameter -XX:MaxDirectMemorySize
  • Check whether the JVM parameter -XX:+DisableExplicitGC is present; if so, remove it, since it turns System.gc() into a no-op
  • Inspect the off-heap code for leaks, or call sun.misc.Cleaner's clean() method via reflection to actively release the memory held by a Direct ByteBuffer
  • Increase the memory
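A minimal sketch of off-heap allocation with a direct buffer (the demo class name is illustrative):

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static long roundTrip(long value) {
        // Allocated outside the Java heap; the native memory is released only when
        // the (small) Java-side buffer object is eventually collected, or sooner
        // via implementation-specific cleanup (e.g. sun.misc.Cleaner).
        ByteBuffer buf = ByteBuffer.allocateDirect(Long.BYTES);
        buf.putLong(value);
        buf.flip();              // prepare for reading
        return buf.getLong();
    }

    public static void main(String[] args) {
        System.out.println(ByteBuffer.allocateDirect(16).isDirect()); // true
        System.out.println(roundTrip(42L));
    }
}
```

The heap-side ByteBuffer object is tiny, which is exactly why a barely used heap can starve native memory before any GC runs.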
Unable to create new native thread

The reason:

Each Java thread consumes a certain amount of memory, and this error is reported when the JVM requests the underlying operating system to create a new native thread if there are not enough resources to allocate

Solution:

  • Find ways to reduce the number of threads created in your application, and determine whether your application really needs to create so many threads
  • If you really do need many threads, raise the OS-level limit: run ulimit -a to view the current limits and ulimit -u xxx to adjust the maximum number of user processes
Metaspace

The reason:

Metaspace is the implementation of the method area in HotSpot. Its biggest difference from the permanent generation is that Metaspace lives not in virtual machine memory but in native memory. Native memory can also fill up, hence this exception.

Solution:

  • In application scenarios that generate large numbers of dynamic classes at runtime, pay special attention to whether those classes get unloaded

  • -XX:MaxMetaspaceSize specifies the maximum Metaspace size. The default is -1, meaning unlimited; the JVM adjusts the actual threshold dynamically.

  • -XX:MetaspaceSize specifies the initial size, in bytes, of the Metaspace at which a GC is triggered to unload types; the collector then adjusts the threshold

  • -XX:MinMetaspaceFreeRatio controls the minimum percentage of free Metaspace after a GC, reducing garbage collections caused by insufficient Metaspace; MaxMetaspaceFreeRatio works analogously

Requested array size exceeds VM limit

The requested array is too large; check whether the business really needs to create such a large array.

Out of swap space

When the total memory requested by the JVM is greater than the available physical memory, the operating system starts swapping content out of memory to the hard drive. This error indicates that all available virtual memory has been used up. Virtual Memory consists of Physical Memory and Swap Space

Kill process or sacrifice child

The reason:

By default, the Linux kernel allows processes to apply for more memory than the available memory of the system. In this way, system resources can be used more efficiently.

However, this approach will inevitably bring a certain “oversold” risk. For example, some processes continue to occupy system memory, which then causes other processes to have no memory available. At this point, OOM Killer is automatically activated to look for low-scoring processes and “kill” them to release memory resources.

Solution:

  • Upgrade the server configuration / isolate the deployment to avoid memory contention
  • Tune the OOM Killer's priorities
Common CMS GC problem analysis

Meituan CMS GC

The GC collector

Garbage collector family bucket

The GC tuning parameter

Boilerplate notes on GC tuning parameters

GC tuning purposes

Minimize the number of objects promoted to the old generation

Reduce GC execution time.

Strategy 1: keep new objects in the young generation. Since the cost of a Full GC is much higher than that of a Minor GC, it is wise to allocate objects in the young generation whenever possible. In a real project, analyze the GC logs to see whether the young generation is sized reasonably, and adjust it with the -Xmn parameter. Minimize the chance of new objects going straight into the old generation.

Strategy 2: large objects go into the old generation. Although it usually makes sense to allocate objects in the young generation, this is debatable for large objects: if large objects are allocated in the young generation first, the space occupied by many small objects that are not yet old enough may run short, the layout of the young generation is disrupted, and frequent Full GCs may follow. Therefore you can have large objects go straight into the old generation (of course, short-lived large objects are a garbage collection nightmare). -XX:PretenureSizeThreshold sets the object size above which objects are allocated directly in the old generation.

Strategy 3: -XX:MaxTenuringThreshold sets the age at which objects are promoted to the old generation, to reduce old-generation memory usage and the frequency of Full GCs.

Strategy 4: set a stable heap size with two parameters: -Xms (initial heap size) and -Xmx (maximum heap size), set to the same value.

Strategy 5: note that GC tuning is generally unnecessary if the following metrics are all met:

MinorGC takes less than 50ms to execute;

Minor GC is performed infrequently, about once every 10 seconds;

Full GC takes less than 1s to execute;

Full GC is performed infrequently, no more than roughly once every 10 minutes.

CMS vs G1

JVM various recyclers, their advantages and disadvantages, focus on CMS, G1

CMS vs G1 vs ZGC

Java garbage collection CMS, G1, ZGC

CMS

Concurrent Mark Sweep is a collector whose goal is the shortest possible collection pauses, implemented with concurrent mark-sweep. It was the go-to low-pause collector of the JDK 7/8 era, before G1 became the default in JDK 9; it collects concurrently with short pauses.

Advantages:

Concurrent, low pause

Disadvantages:

1. Very CPU-sensitive: although it does not stall user threads during the concurrent phases, it does slow the application down by occupying some of the CPU threads

2. Unable to handle floating garbage: during the final concurrent sweep phase, user threads are still running and keep producing garbage; since this garbage appears after marking, it can only be cleaned up at the next GC. This part is called floating garbage

3. Because CMS uses mark-sweep, it produces a large amount of space fragmentation. Too much fragmentation makes allocating large objects troublesome. To address this, CMS provides a switch parameter that enables defragmentation (compaction) when CMS falls back to a Full GC; however, the compaction cannot run concurrently, so the fragmentation disappears but the pause gets longer

Process:

1. Initial mark: occupies the CPU exclusively (STW); marks only the objects directly reachable from GC Roots

Sidebar: the four kinds of objects GC Roots can reach directly

(1) objects referenced in the VM stack; (2) objects referenced by static fields of classes in the method area; (3) objects referenced by constants in the method area; (4) objects referenced by JNI (native methods) in the native method stack

2. Concurrent mark: runs concurrently with user threads, marking all reachable objects

3. Remark: occupies the CPU exclusively (STW); marks the objects whose reachability changed while user threads ran during the concurrent marking phase

4. Concurrent sweep: runs concurrently with user threads, sweeping the garbage

Causes of FullGC in CMS:

1. There is not enough contiguous space in the old generation for young-generation objects to be promoted, most likely due to memory fragmentation

2. During the concurrent cycle, the JVM predicts that the heap will fill up before the cycle finishes and triggers a Full GC ahead of time (concurrent mode failure)

G1

Garbage First is a garbage collector aimed at server-side applications. G1 has been the default collector since JDK 9; its hallmark is maintaining a high collection rate while reducing pauses.

Features:

1. Parallelism and concurrency: G1 makes full use of multi-core hardware, using multiple CPUs (or cores) to shorten stop-the-world pauses. Where other collectors would pause the Java threads while GC runs, G1 can let the Java program continue executing concurrently.

2. Generational collection: the generational concept is retained in G1. Although G1 can manage the entire GC heap on its own without cooperating with other collectors, it treats newly created objects differently from old objects that have been around a while and survived multiple GCs, for better collection results. In other words, G1 manages both the young generation and the old generation by itself.

3. Space integration: because G1 uses independent regions, it is based on a mark-compact algorithm as a whole and on a copying algorithm between regions locally; either way, G1 does not produce memory fragmentation while it runs.

4. Predictable pauses: this is another big advantage of G1 over CMS. Reducing pause time is a shared goal of G1 and CMS, but besides pursuing low pauses, G1 can build a predictable pause-time model: the user can explicitly specify that within a time slice of M milliseconds, no more than N milliseconds may be spent on garbage collection.

Setting aside the maintenance of the Remembered Set, the operation of the G1 collector can be roughly divided into the following steps:

1. Initial Marking

2. Concurrent Marking

3. Final Marking

4. Live Data Counting and Evacuation

ZGC

The target garbage collection pause time is no more than 10 ms, while handling heaps from relatively small (a few hundred megabytes) to very large (terabytes). Throughput is reduced by no more than 15% compared with G1. It also lays a foundation for implementing new GC features later and for further optimizing the collector with read barriers.

ZGC overview: in general, ZGC is a concurrent, non-generational, region-based, NUMA-aware, compacting collector. Because STW is used only while enumerating root nodes, pause times do not grow with heap size or the number of live objects.

A core design of ZGC is the combination of read barriers with colored object pointers (Colored Oops for short): pointers that use the unused bits of a 64-bit pointer to store metadata. This is why ZGC can run concurrently with user threads. From a Java thread's perspective, every read of a reference variable passes through the read barrier. Rather than simply fetching an object's memory address, the read barrier uses the information contained in the colored pointer to decide whether something must be done before the Java thread is allowed to use the pointer's address value. For example, the object may have been moved by the garbage collector; the read barrier notices this and performs the necessary fix-up.

According to the designers, colored pointers bring several benefits:

  • Memory can be reclaimed and reused during the relocation/compaction phase, before pointers into the reclaimed/reused regions have been fixed (the original: "It allows us to reclaim and reuse memory during the relocation/compaction phase, before pointers into the reclaimed/reused regions have been fixed"). This helps keep heap overhead down, and it also means a separate mark-compact algorithm is no longer needed for Full GC.
  • Relatively fewer and simpler GC barriers, which reduces the JVM's runtime overhead and makes GC code in the bytecode interpreter and JIT compiler easier to implement and optimize.
  • Mark and relocation data are currently stored in the colored pointers, but as long as unused bits remain, more information useful to the read barrier can be stored there. This is a good foundation for future features: for example, in a complex and changing memory environment, tracking information could be stored in the colored pointers so that, while moving objects, the collector can move rarely used objects to less frequently accessed memory areas.

JUC and multithreading

Thread pool core parameters and initialization
JUC interview questions
CountDownLatch, CyclicBarrier, and Semaphore
Thread pool parameters IO intensive /CPU intensive Settings
ThreadLocal related
ThreadLocal source code analysis

HashMap & ConcurrentHashMap

ConcurrentHashMap

ConcurrentHashMap 1.7, 1.8

HashMap

AbstractQueuedSynchronizer

What is your understanding of AQS?

AQS and ReentrantLock

AQS source code analysis

AQS source code – Blog garden

Analyzing AQS from the ReentrantLock source code

Inheritance relationships

  • ReentrantLock has three inner classes: Sync, NonfairSync, and FairSync
  • AQS extends AOS (AbstractOwnableSynchronizer)
  • Sync extends AQS
  • NonfairSync (non-fair lock) and FairSync (fair lock) each extend Sync

Call relationship

Inside AbstractQueuedSynchronizer there is a queue we call the synchronization wait queue. It holds threads waiting on the lock (blocked by the lock() operation). In addition, to track threads waiting on a condition variable, AbstractQueuedSynchronizer also maintains a condition wait queue for threads blocked by Condition.await().

Because a single reentrant lock can create multiple condition variable objects, a single reentrant lock can have multiple condition wait queues. In effect, each condition variable object maintains its own wait list.
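A sketch of those per-Condition wait queues in action: one ReentrantLock with two Condition objects, each parking threads on its own queue (a hypothetical bounded buffer, for illustration only):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer<T> {
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();  // waiters: producers
    private final Condition notEmpty = lock.newCondition(); // waiters: consumers

    public BoundedBuffer(int capacity) { this.capacity = capacity; }

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) notFull.await(); // park on notFull's wait queue
            items.addLast(item);
            notEmpty.signal(); // move one waiter from notEmpty's queue to the lock's sync queue
        } finally {
            lock.unlock();
        }
    }

    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) notEmpty.await();
            T item = items.removeFirst();
            notFull.signal();
            return item;
        } finally {
            lock.unlock();
        }
    }
}
```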

AQS provides two basic lock-acquisition modes, shared and exclusive, each in interruptible and non-interruptible variants.

Exclusive lock: acquires the lock in exclusive mode, ignoring interrupts. tryAcquire is called at least once; if it fails, the thread is enqueued and tryAcquire is retried until it succeeds.

Shared lock: acquires the lock in shared mode, ignoring interrupts. tryAcquireShared is called at least once; if it fails, the thread joins the queue and retries until it succeeds.

ReentrantLock, ReentrantReadWriteLock, CountDownLatch and Semaphore are all implemented based on AQS.
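As a sketch of how those classes build on AQS, here is a minimal non-reentrant exclusive lock (a toy example, not the real ReentrantLock implementation): the subclass only defines what state 0/1 means, while AQS handles queueing and parking.

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class Mutex {
    // state 0 = free, 1 = held
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override protected boolean tryAcquire(int arg) {
            return compareAndSetState(0, 1); // CAS 0 -> 1 claims the lock
        }
        @Override protected boolean tryRelease(int arg) {
            setState(0); // no CAS needed: only the holder releases
            return true;
        }
        @Override protected boolean isHeldExclusively() {
            return getState() == 1;
        }
    }

    private final Sync sync = new Sync();

    public void lock()        { sync.acquire(1); }           // enqueue + park on failure
    public boolean tryLock()  { return sync.tryAcquire(1); } // single CAS attempt, no queueing
    public void unlock()      { sync.release(1); }           // unpark the queue head
    public boolean isLocked() { return sync.isHeldExclusively(); }
}
```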

CountDownLatch source code

CountDownLatch source code analysis – Juejin

CountDownLatch mainly uses its counter state to control whether subsequent operations may proceed; if not, the thread is suspended via LockSupport.park() until it is woken by other threads.
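A minimal usage sketch of that counter semantics: await() parks the caller until countDown() drives the shared state to zero (class and method names illustrative):

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static int sumInWorkers(int workers) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(workers);
        int[] partial = new int[workers];
        for (int i = 0; i < workers; i++) {
            final int id = i;
            new Thread(() -> {
                partial[id] = id + 1; // do some work
                done.countDown();     // decrement the shared count; releases waiters at zero
            }).start();
        }
        done.await(); // parks (via LockSupport) until the count reaches zero
        int sum = 0;
        for (int p : partial) sum += p;
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(sumInWorkers(4)); // 1 + 2 + 3 + 4
    }
}
```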