The structure of heap memory

1. Structure diagram

This diagram comes from the official documentation. As you can see, there are two kinds of heap memory, committed memory and virtual (reserved) memory, and the heap is divided into generations: the Young Generation and the Tenured Generation.

The structure also shows that between the generations there is reserved memory that has not been used yet.

2, Young generation & old generation division

The young generation contains two concepts: Eden and Survivor, and Survivor is further divided into two regions. Generational memory is mainly about object creation and object collection (GC), because most objects live for only one generation: you create an object, call one of its methods, and then the object can be collected. Only a small number of objects survive for long, in which case they are promoted in the order Eden -> Survivor -> Tenured (not always exactly in this order; if, for example, Survivor space runs out, objects may be allocated directly in Tenured). Collection can then use different algorithms tailored to the characteristics of each generation. For example, Eden holds objects that live for a single generation and therefore needs frequent garbage collection (most of its objects are never used again), while objects that survive more collections are promoted to Survivor space. Why is Survivor split into two regions? Because that makes it a good fit for the copying algorithm, which will be discussed with the garbage collection algorithms.
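To make the copying idea concrete, here is a toy sketch (not JVM code; the names and the promotion rule are simplified for illustration) of one minor collection: live objects in Eden and the from-space are copied into the to-space or promoted, and the two survivor spaces then swap roles.

#include <vector>
#include <utility>

struct Obj { bool live; int age; };

// One simplified minor GC: survivors are copied out of eden and
// from-space; dead objects are simply left behind. The cost is
// proportional to live objects only, which is why copying suits
// a mostly-dead young generation.
void minor_gc(std::vector<Obj>& eden, std::vector<Obj>& from,
              std::vector<Obj>& to, std::vector<Obj>& tenured,
              int tenuring_threshold) {
  auto evacuate = [&](std::vector<Obj>& space) {
    for (Obj& o : space) {
      if (!o.live) continue;          // dead: dropped at no cost
      o.age++;
      if (o.age >= tenuring_threshold)
        tenured.push_back(o);         // promoted to the old generation
      else
        to.push_back(o);              // still young: copied to to-space
    }
    space.clear();
  };
  evacuate(eden);
  evacuate(from);
  std::swap(from, to);                // the emptied space becomes to-space
}

With only one survivor space the collector would have to compact survivors in place; with two, it always copies into an empty space, which keeps allocation a simple pointer bump.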

Some basic classes & Data structure definitions (based on OpenJDK9)

Most books I have read describe this only conceptually, so let's look at the process of allocating heap space in terms of the JVM's implementation code. First, let's look at some of the basic classes the JVM defines.

1, CHeapObj

class CHeapObj {
 public:
  void* operator new(size_t size) throw();
  void  operator delete(void* p);
  void* new_array(size_t size);
};

This is the base class for objects allocated on the C heap; you can see that it mainly overloads the new and delete operators.
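As a simplified analogy of what this buys us (illustrative code only, not the HotSpot implementation, which routes allocation through os::malloc and tags the memory for Native Memory Tracking):

#include <cstddef>
#include <cstdlib>
#include <new>

// Toy stand-in for CHeapObj: every instance of a subclass is
// allocated on the C heap through our own hooks, so the VM can
// centralize bookkeeping for its internal objects.
class ToyCHeapObj {
 public:
  void* operator new(std::size_t size) {
    void* p = std::malloc(size);           // HotSpot: os::malloc + tracking
    if (p == nullptr) throw std::bad_alloc();
    return p;
  }
  void operator delete(void* p) { std::free(p); }
};

class ToyPolicy : public ToyCHeapObj {     // hypothetical subclass
 public:
  int flag = 0;
};

int main() {
  ToyPolicy* p = new ToyPolicy();   // goes through ToyCHeapObj::operator new
  delete p;                         // goes through ToyCHeapObj::operator delete
}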

2, CollectedHeap

class CollectedHeap : public CHeapObj<mtInternal> {
   .........
   public:
  enum Name { GenCollectedHeap, ParallelScavengeHeap, G1CollectedHeap };
  .........

CollectedHeap is the base class of the heap implementations; as the Name enum shows, its subclasses are GenCollectedHeap, ParallelScavengeHeap, and G1CollectedHeap.

3, GenCollectedHeap

class GenCollectedHeap : public CollectedHeap {
  friend class GenCollectorPolicy;
  friend class Generation;
  friend class DefNewGeneration;
  friend class TenuredGeneration;
  friend class ConcurrentMarkSweepGeneration;
  friend class CMSCollector;
  friend class GenMarkSweep;
  ...

  enum GenerationType {
    YoungGen,
    OldGen
  };

private:
  Generation* _young_gen;
  Generation* _old_gen;

  // The singleton CardTable Remembered Set.
  CardTableRS* _rem_set;

  // The generational collector policy.
  GenCollectorPolicy* _gen_policy;
  ...

You can see that it defines two generation types, YoungGen and OldGen, along with the corresponding _young_gen and _old_gen fields.

4, CollectorPolicy

class CollectorPolicy : public CHeapObj<mtGC> {
 protected:
  virtual void initialize_alignments() = 0;
  virtual void initialize_flags();
  virtual void initialize_size_info();

  DEBUG_ONLY(virtual void assert_flags());
  DEBUG_ONLY(virtual void assert_size_info());

  size_t _initial_heap_byte_size;
  size_t _max_heap_byte_size;
  size_t _min_heap_byte_size;

  size_t _space_alignment;
  size_t _heap_alignment;
  ...
 public:
  virtual void initialize_all() {
    initialize_alignments();
    initialize_flags();
    initialize_size_info();
  }
  ...

This is the base class of the collector policies, from which policies such as GenCollectorPolicy and MarkSweepPolicy derive. You can see the heap's maximum and minimum sizes, the space alignment, and so on. The key member is the initialize_all() method, which is called when the heap is first created.
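initialize_all() is a classic template method: the base class fixes the order of the initialization steps while subclasses override the individual steps. A minimal sketch of the pattern (toy names, not JVM code):

#include <cstdio>

class ToyCollectorPolicy {
 protected:
  virtual void initialize_alignments() = 0;          // subclass must supply
  virtual void initialize_flags()     { std::puts("base flags"); }
  virtual void initialize_size_info() { std::puts("base sizes"); }
 public:
  virtual ~ToyCollectorPolicy() {}
  virtual void initialize_all() {   // the template method: order is fixed here
    initialize_alignments();
    initialize_flags();
    initialize_size_info();
  }
};

class ToyMarkSweepPolicy : public ToyCollectorPolicy {
 protected:
  virtual void initialize_alignments() { std::puts("mark-sweep alignments"); }
};

int main() {
  ToyMarkSweepPolicy p;
  p.initialize_all();   // runs the three steps in the fixed order
}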

5, GenCollectorPolicy

class GenCollectorPolicy : public CollectorPolicy {
  friend class TestGenCollectorPolicy;
  friend class VMStructs;
 protected:
  size_t _min_young_size;
  size_t _initial_young_size;
  size_t _max_young_size;
  size_t _min_old_size;
  size_t _initial_old_size;
  size_t _max_old_size;
  ...
  GenerationSpec* _young_gen_spec;
  GenerationSpec* _old_gen_spec;

class GenerationSpec : public CHeapObj<mtGC> {
  friend class VMStructs;
private:
  Generation::Name _name;
  size_t           _init_size;
  size_t           _max_size;
  .........

You can see the specifications that define the two generations, _young_gen_spec and _old_gen_spec, each a GenerationSpec holding a generation name, an initial size, and a maximum size.

6, MarkSweepPolicy

class MarkSweepPolicy : public GenCollectorPolicy {
 protected:
  void initialize_alignments();
  void initialize_generations();

 public:
  MarkSweepPolicy() {}

  MarkSweepPolicy* as_mark_sweep_policy() { return this; }

  void initialize_gc_policy_counters();
};

This is the mark-sweep collector policy.

7, ReservedSpace

class ReservedSpace VALUE_OBJ_CLASS_SPEC {
  friend class VMStructs;
 protected:
  char*  _base;
  size_t _size;
  size_t _noaccess_prefix;
  size_t _alignment;
  bool   _special;
 private:
  bool   _executable;
  // ReservedSpace
  ReservedSpace(char* base, size_t size, size_t alignment, bool special,
                bool executable);
 protected:
  void initialize(size_t size, size_t alignment, bool large,
                  char* requested_address,
                  bool executable);
  ...
  // Accessors
  char*  base()            const { return _base;      }
  size_t size()            const { return _size;      }
  size_t alignment()       const { return _alignment; }
  bool   special()         const { return _special;   }

This class is very important: it manages allocated space. You can see the _base and _size fields; _base is where the allocated memory starts and _size is its size, and together they describe the allocated memory.

Subclasses of ReservedSpace

// Class encapsulating behavior specific of memory space reserved for Java heap.
class ReservedHeapSpace : public ReservedSpace {
 private:
  void try_reserve_heap(size_t size, size_t alignment, bool large,
                        char *requested_address); . };Copy the code

ReservedHeapSpace represents the reserved Java heap space.

// Class encapsulating behavior specific memory space for Code
class ReservedCodeSpace : public ReservedSpace {
 public:
  // Constructor
  ReservedCodeSpace(size_t r_size, size_t rs_align, bool large);
};

ReservedCodeSpace, likewise, describes reserved code space.

// VirtualSpace is data structure for committing a previously reserved address range in smaller chunks.
class VirtualSpace VALUE_OBJ_CLASS_SPEC {
  friend class VMStructs;
 private:
  // Reserved area
  char* _low_boundary;
  char* _high_boundary;

  // Committed area
  char* _low;
  char* _high;
  .........

This VirtualSpace corresponds to the committed/reserved split in the heap diagram above: _low_boundary and _high_boundary delimit the reserved area, while _low and _high delimit the part that is actually committed.
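A toy sketch of that bookkeeping (illustrative only; the real class also asks the OS to commit pages when _high grows, which is omitted here):

#include <cassert>
#include <cstddef>

// The reserved range [_low_boundary, _high_boundary) is fixed once;
// the committed range [_low, _high) grows inside it as more of the
// reservation becomes backed by usable memory.
class ToyVirtualSpace {
  char* _low_boundary; char* _high_boundary;  // reserved area
  char* _low;          char* _high;           // committed area
 public:
  void initialize(char* base, std::size_t reserved) {
    _low_boundary = base;
    _high_boundary = base + reserved;
    _low = _high = base;                      // nothing committed yet
  }
  void expand_by(std::size_t bytes) {         // commit more of the reservation
    assert(_high + bytes <= _high_boundary && "cannot commit past reservation");
    _high += bytes;                           // real code: os::commit_memory()
  }
  std::size_t committed_size() const { return _high - _low; }
  std::size_t reserved_size()  const { return _high_boundary - _low_boundary; }
};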

The general flow of heap request & allocation in the JVM source code

1, universe.cpp - universe_init()

The starting point of initialization is the universe_init() method in the universe.cpp file.

jint universe_init() {
  ...
  jint status = Universe::initialize_heap();
  if (status != JNI_OK) {
    return status;
  }

  Metaspace::global_initialize();
  ...
  if (UseSharedSpaces) {
    ...........
    MetaspaceShared::initialize_shared_spaces();
    StringTable::create_table();
  } else {
    ...
    if (DumpSharedSpaces) {
      MetaspaceShared::prepare_for_dumping();
    }
  }
  ...
  return JNI_OK;
}

The heap is initialized by the Universe::initialize_heap() method. Then two JVM flags, UseSharedSpaces and DumpSharedSpaces, are checked; we will leave the shared-spaces path aside for now.

2, universe.cpp - initialize_heap()

1), initialize_heap()

jint Universe::initialize_heap() {
  jint status = JNI_ERR;
  ...
  if (_collectedHeap == NULL) {
    _collectedHeap = create_heap();
  }
  status = _collectedHeap->initialize();
  if (status != JNI_OK) {
    return status;
  }
  ThreadLocalAllocBuffer::set_max_size(Universe::heap()->max_tlab_size());
  ...
  return JNI_OK;
}

// The particular choice of collected heap.
static CollectedHeap* _collectedHeap;
...
static CollectedHeap* heap() { return _collectedHeap; }

Here create_heap() creates the CollectedHeap (_collectedHeap), and then its initialize() method is called to initialize it.

2), create_heap()

CollectedHeap* Universe::create_heap() {
  ...
  if (UseParallelGC) {
    return Universe::create_heap_with_policy<ParallelScavengeHeap, GenerationSizer>();
  } else if (UseG1GC) {
    return Universe::create_heap_with_policy<G1CollectedHeap, G1CollectorPolicy>();
  } else if (UseConcMarkSweepGC) {
    return Universe::create_heap_with_policy<GenCollectedHeap, ConcurrentMarkSweepPolicy>();
#endif
  } else if (UseSerialGC) {
    return Universe::create_heap_with_policy<GenCollectedHeap, MarkSweepPolicy>();
  }
  ShouldNotReachHere();
  return NULL;
}

You can see that different CollectedHeap and CollectorPolicy are used for different garbage collection algorithms.

In my current debug build, the default path taken is UseSerialGC.

template <class Heap, class Policy>
CollectedHeap* Universe::create_heap_with_policy() {
  Policy* policy = new Policy();
  policy->initialize_all();
  return new Heap(policy);
}

create_heap_with_policy() is defined as a template method: it creates the policy, calls its initialize_all() method, and then constructs the heap with that policy.

3) Initialization assignments

Here is the chain of initialization calls:

GenCollectorPolicy::GenCollectorPolicy() :
    _min_young_size(0),
    _initial_young_size(0),
    _max_young_size(0),
    _min_old_size(0),
    _initial_old_size(0),
    _max_old_size(0),
    _gen_alignment(0),
    _young_gen_spec(NULL),
    _old_gen_spec(NULL)
{}

virtual void initialize_all() {
    CollectorPolicy::initialize_all();
    initialize_generations();
}

 public:
  virtual void initialize_all() {
    initialize_alignments();
    initialize_flags();
    initialize_size_info();
  }
  ...

void GenCollectorPolicy::initialize_size_info() {
  CollectorPolicy::initialize_size_info();
  _initial_young_size = NewSize;
  _max_young_size = MaxNewSize;
  _initial_old_size = OldSize;
  ...

These are the initial assignments in GenCollectorPolicy; the key step is the call to initialize_generations(), implemented by MarkSweepPolicy:

void MarkSweepPolicy::initialize_generations() {
  _young_gen_spec = new GenerationSpec(Generation::DefNew, _initial_young_size, _max_young_size, _gen_alignment);
  _old_gen_spec   = new GenerationSpec(Generation::MarkSweepCompact, _initial_old_size, _max_old_size, _gen_alignment);
}
public:
 // The set of possible generation kinds.
 enum Name {
   DefNew,
   ParNew,
   MarkSweepCompact,
   ConcurrentMarkSweep,
   Other
 };

At this point the initialization of _young_gen_spec and _old_gen_spec is complete. We can see that the current policy uses DefNew for the young generation and MarkSweepCompact for the old generation.

3, gencollectedheap.cpp - initialize()

jint GenCollectedHeap::initialize() {
  ...
  // Allocate space for the heap.
  char* heap_address;
  ReservedSpace heap_rs;

  size_t heap_alignment = collector_policy()->heap_alignment();
  heap_address = allocate(heap_alignment, &heap_rs);
  ...
  initialize_reserved_region((HeapWord*)heap_rs.base(), (HeapWord*)(heap_rs.base() + heap_rs.size()));

  _rem_set = collector_policy()->create_rem_set(reserved_region());
  set_barrier_set(rem_set()->bs());

  ReservedSpace young_rs = heap_rs.first_part(gen_policy()->young_gen_spec()->max_size(), false, false);
  _young_gen = gen_policy()->young_gen_spec()->init(young_rs, rem_set());
  heap_rs = heap_rs.last_part(gen_policy()->young_gen_spec()->max_size());

  ReservedSpace old_rs = heap_rs.first_part(gen_policy()->old_gen_spec()->max_size(), false, false);
  _old_gen = gen_policy()->old_gen_spec()->init(old_rs, rem_set());
  clear_incremental_collection_failed();
  ...
  return JNI_OK;
}

This method does the memory request & the generation division.

1), allocate(heap_alignment, &heap_rs)

char* GenCollectedHeap::allocate(size_t alignment,
                                 ReservedSpace* heap_rs){
  // Now figure out the total size.
  const size_t pageSize = UseLargePages ? os::large_page_size() : os::vm_page_size();
  assert(alignment % pageSize == 0, "Must be");

  GenerationSpec* young_spec = gen_policy()->young_gen_spec();
  GenerationSpec* old_spec = gen_policy()->old_gen_spec();

  // Check for overflow.
  size_t total_reserved = young_spec->max_size() + old_spec->max_size();
  ...
  assert(total_reserved % alignment == 0,
         "Gen size; total_reserved=" SIZE_FORMAT ", alignment=" SIZE_FORMAT,
         total_reserved, alignment);

  *heap_rs = Universe::reserve_heap(total_reserved, alignment);
  ...
  return heap_rs->base();
}

As you can see, young_spec and old_spec are obtained first, and the sum of their maximum sizes gives the total space to request, total_reserved. The memory is then formally reserved with the reserve_heap method, and heap_rs->base() returns the starting address of the space, _base.
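A worked example of that arithmetic with made-up sizes (a 64 MB young-gen max and a 192 MB old-gen max; the real values come from the flags seen in initialize_size_info()):

#include <cstddef>
#include <cstdio>

// align_size_up for a power-of-two alignment, as HotSpot computes it.
static std::size_t align_size_up(std::size_t size, std::size_t alignment) {
  return (size + alignment - 1) & ~(alignment - 1);
}

int main() {
  const std::size_t M = 1024 * 1024;
  std::size_t young_max = 64 * M;   // illustrative: would come from MaxNewSize
  std::size_t old_max   = 192 * M;  // illustrative old-generation maximum
  std::size_t alignment = 2 * M;    // illustrative heap alignment
  std::size_t total_reserved = align_size_up(young_max + old_max, alignment);
  std::printf("reserve %zu MB in one block\n", total_reserved / M);  // 256 MB
  return 0;
}

Both generations are then carved out of this single contiguous reservation.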

2), Universe::reserve_heap(total_reserved, alignment)

ReservedSpace Universe::reserve_heap(size_t heap_size, size_t alignment) {
  ...
  size_t total_reserved = align_size_up(heap_size, alignment);
  ...
  // Now create the space.
  ReservedHeapSpace total_rs(total_reserved, alignment, use_large_pages);

  if (total_rs.is_reserved()) {
    assert((total_reserved == total_rs.size()) && ((uintptr_t)total_rs.base() % alignment == 0),
           "must be exactly of required size and alignment");
    // We are good.

    if (UseCompressedOops) {
      // Universe::initialize_heap() will reset this to NULL if unscaled
      // or zero-based narrow oops are actually used.
      // Else heap start and base MUST differ, so that NULL can be encoded nonambigous.
      Universe::set_narrow_oop_base((address)total_rs.compressed_oop_base());
    }

    return total_rs;
  }
  ...
  return ReservedHeapSpace(0, 0, false);
}

ReservedHeapSpace total_rs(total_reserved, alignment, use_large_pages) is what creates the space.

ReservedHeapSpace::ReservedHeapSpace(size_t size, size_t alignment, bool large) : ReservedSpace() {
  if (size == 0) {
    return;
  }
  ...
  if (UseCompressedOops) {
    initialize_compressed_heap(size, alignment, large);
    if (_size > size) {
      // We allocated heap with noaccess prefix.
      // It can happen we get a zerobased/unscaled heap with noaccess prefix,
      // if we had to try at arbitrary address.
      establish_noaccess_prefix();
    }
  } else {
    initialize(size, alignment, large, NULL, false);
  }
  ...
  if (base() > 0) {
    MemTracker::record_virtual_memory_type((address)base(), mtJavaHeap);
  }
}

You can see that the UseCompressedOops flag is checked here. It currently defaults to true, but for simplicity we change it so that the code takes the plain initialize(size, alignment, large, NULL, false) path.

void ReservedSpace::initialize(size_t size, size_t alignment, bool large,
                               char* requested_address,
                               bool executable) {
  ......
  _base = NULL;
  _size = 0;
  _special = false;
  _executable = executable;
  _alignment = 0;
  _noaccess_prefix = 0;
  if (size == 0) {
    return;
  }

  // If OS doesn't support demand paging for large page memory, we need
  // to use reserve_memory_special() to reserve and pin the entire region.
  bool special = large && !os::can_commit_large_page_memory();
  char* base = NULL;
  ...
  if (base == NULL) {
    // Optimistically assume that the OSes returns an aligned base pointer.
    // When reserving a large address range, most OSes seem to align to at
    // least 64K.

    // If the memory was requested at a particular address, use
    // os::attempt_reserve_memory_at() to avoid over mapping something
    // important. If available space is not detected, return NULL.

    if (requested_address != 0) {
      base = os::attempt_reserve_memory_at(size, requested_address);
      if (failed_to_reserve_as_requested(base, requested_address, size, false)) {
        // OS ignored requested address. Try different address.
        base = NULL;
      }
    } else {
      base = os::reserve_memory(size, NULL, alignment);
    }

    if (base == NULL) return;
  }
  ...
  // Done
  _base = base;
  _size = size;
  _alignment = alignment;
}

base = os::reserve_memory(size, NULL, alignment) requests the memory, and the result is then assigned to _base via _base = base.

4, os.cpp - reserve_memory(size, NULL, alignment)

char* os::reserve_memory(size_t bytes, char* addr, size_t alignment_hint) {
  char* result = pd_reserve_memory(bytes, addr, alignment_hint);
  if (result != NULL) {
    MemTracker::record_virtual_memory_reserve((address)result, bytes, CALLER_PC);
  }
  return result;
}

The pd_reserve_memory method is platform-dependent; here we look at the Linux implementation.

char* os::pd_reserve_memory(size_t bytes, char* requested_addr,
                            size_t alignment_hint) {
  return anon_mmap(requested_addr, bytes, (requested_addr != NULL));
}
static char* anon_mmap(char* requested_addr, size_t bytes, bool fixed) {
  char * addr;
  int flags;

  flags = MAP_PRIVATE | MAP_NORESERVE | MAP_ANONYMOUS;
  if (fixed) {
    assert((uintptr_t)requested_addr % os::Linux::page_size() == 0, "unaligned address");
    flags |= MAP_FIXED;
  }
  // Map reserved/uncommitted pages PROT_NONE so we fail early if we
  // touch an uncommitted page. Otherwise, the read/write might
  // succeed if we have enough swap space to back the physical page.
  addr = (char*)::mmap(requested_addr, bytes, PROT_NONE,
                       flags, -1, 0);

  return addr == MAP_FAILED ? NULL : addr;
}

We have finally arrived at the underlying Linux call that requests the memory: mmap(requested_addr, bytes, PROT_NONE, flags, -1, 0). Note PROT_NONE: the range is reserved but cannot be touched until it is committed.
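Here is a minimal standalone Linux demo of the same reserve-then-commit pattern (my own sketch, not JVM code): mmap with PROT_NONE reserves address space that cannot yet be touched, and a later mprotect "commits" part of it, which is roughly what the JVM's commit path does on top of this reservation.

#include <sys/mman.h>
#include <cstddef>
#include <cstdio>

int main() {
  const std::size_t reserved = 256 * 1024 * 1024;   // 256 MB of address space
  // Reserve: no access rights, no swap backing -- same flags as anon_mmap above.
  char* base = (char*)mmap(NULL, reserved, PROT_NONE,
                           MAP_PRIVATE | MAP_NORESERVE | MAP_ANONYMOUS, -1, 0);
  if (base == MAP_FAILED) { perror("mmap"); return 1; }

  // Commit the first 1 MB by making it readable/writable. Touching
  // anything beyond this still faults, which is exactly why the range
  // was reserved PROT_NONE in the first place.
  const std::size_t committed = 1024 * 1024;
  if (mprotect(base, committed, PROT_READ | PROT_WRITE) != 0) {
    perror("mprotect"); return 1;
  }
  base[0] = 42;   // fine: this page is committed
  std::printf("reserved %zu bytes at %p, committed %zu\n",
              reserved, (void*)base, committed);
  munmap(base, reserved);
  return 0;
}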

5, Generation division & subdivision of the young generation

1) Generational logic

Now that the memory has been allocated by the logic above, let's go back and look at how it is partitioned between the generations:

ReservedSpace young_rs = heap_rs.first_part(gen_policy()->young_gen_spec()->max_size(), false, false);
_young_gen = gen_policy()->young_gen_spec()->init(young_rs, rem_set());
heap_rs = heap_rs.last_part(gen_policy()->young_gen_spec()->max_size());

ReservedSpace old_rs = heap_rs.first_part(gen_policy()->old_gen_spec()->max_size(), false, false);
_old_gen = gen_policy()->old_gen_spec()->init(old_rs, rem_set());

heap_rs.first_part and heap_rs.last_part are the two methods that carve up the reserved space.

ReservedSpace ReservedSpace::first_part(size_t partition_size, size_t alignment,
                                        bool split, bool realloc) {
  assert(partition_size <= size(), "partition failed");
  if (split) {
    os::split_reserved_memory(base(), size(), partition_size, realloc);
  }
  ReservedSpace result(base(), partition_size, alignment, special(), executable());
  return result;
}
ReservedSpace
ReservedSpace::last_part(size_t partition_size, size_t alignment) {
  assert(partition_size <= size(), "partition failed");
  ReservedSpace result(base() + partition_size, size() - partition_size, alignment, special(), executable());
  return result;
}

ReservedSpace::ReservedSpace(char* base, size_t size, size_t alignment,
                             bool special, bool executable) {
  assert((size % os::vm_allocation_granularity()) == 0, "size not allocation aligned");
  _base = base;
  _size = size;
  _alignment = alignment;
  _noaccess_prefix = 0;
  _special = special;
  _executable = executable;
}

Comparing the two: first_part returns a ReservedSpace that starts at base() and spans partition_size, while last_part returns one that starts at base() + partition_size. So heap_rs = heap_rs.last_part(gen_policy()->young_gen_spec()->max_size()) moves the base past the space already handed to _young_gen, and the next first_part call then starts from that new base when allocating _old_gen. This is how the division into the two regions is completed.
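A toy sketch of that pointer arithmetic (illustrative address and sizes; first_part/last_part request no new memory, they only slice the existing reservation):

#include <cstddef>
#include <cstdio>

struct Range { char* base; std::size_t size; };

// Mirrors ReservedSpace::first_part / last_part: pure pointer arithmetic.
Range first_part(const Range& r, std::size_t n) { return { r.base, n }; }
Range last_part (const Range& r, std::size_t n) { return { r.base + n, r.size - n }; }

int main() {
  const std::size_t M = 1024 * 1024;
  Range heap_rs = { (char*)0x100000000, 256 * M };    // illustrative values

  std::size_t young_max = 64 * M;                     // illustrative young max
  Range young_rs = first_part(heap_rs, young_max);    // [base, base + 64M)
  heap_rs        = last_part(heap_rs, young_max);     // base moves up by 64M
  Range old_rs   = first_part(heap_rs, heap_rs.size); // [base + 64M, base + 256M)

  std::printf("young at %p, %zu MB\n", (void*)young_rs.base, young_rs.size / M);
  std::printf("old   at %p, %zu MB\n", (void*)old_rs.base, old_rs.size / M);
  return 0;
}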

2) Segmentation of the younger generation

Next, look at young_gen_spec()->init(young_rs, rem_set()).

Generation* GenerationSpec::init(ReservedSpace rs, CardTableRS* remset) {
  switch (name()) {
    case Generation::DefNew:
      return new DefNewGeneration(rs, init_size());

    case Generation::MarkSweepCompact:
      return new TenuredGeneration(rs, init_size(), remset);
#if INCLUDE_ALL_GCS
    case Generation::ParNew:
      return new ParNewGeneration(rs, init_size());
    case Generation::ConcurrentMarkSweep: {
      assert(UseConcMarkSweepGC, "UseConcMarkSweepGC should be set");
      if (remset == NULL) {
        vm_exit_during_initialization("Rem set incompatibility.");
      }
      // Otherwise
      // The constructor creates the CMSCollector if needed,
      // else registers with an existing CMSCollector
      ConcurrentMarkSweepGeneration* g = NULL;
      g = new ConcurrentMarkSweepGeneration(rs, init_size(), remset);
      g->initialize_performance_counters();
      return g;
    }
#endif // INCLUDE_ALL_GCS
    default:
      guarantee(false, "unrecognized GenerationName");
      return NULL;
  }
}

This switches on the Name we defined earlier; we defined the young generation as DefNew, so it takes the new DefNewGeneration(rs, init_size()) branch.

Then the constructor performs the initial setup:

DefNewGeneration::DefNewGeneration(ReservedSpace rs,
                                   size_t initial_size,
                                   const char* policy)
  : Generation(rs, initial_size),
    _preserved_marks_set(false /* in_c_heap */),
    _promo_failure_drain_in_progress(false),
    _should_allocate_from_space(false)
{
  MemRegion cmr((HeapWord*)_virtual_space.low(), (HeapWord*)_virtual_space.high());
  GenCollectedHeap* gch = GenCollectedHeap::heap();
  gch->barrier_set()->resize_covered_region(cmr);

  _eden_space = new ContiguousSpace();
  _from_space = new ContiguousSpace();
  _to_space   = new ContiguousSpace();
  ...
  // Compute the maximum eden and survivor space sizes. These sizes
  // are computed assuming the entire reserved space is committed.
  // These values are exported as performance counters.
  uintx alignment = gch->collector_policy()->space_alignment();
  uintx size = _virtual_space.reserved_size();
  _max_survivor_size = compute_survivor_size(size, alignment);
  _max_eden_size = size - (2*_max_survivor_size);

  // allocate the performance counters
  GenCollectorPolicy* gcp = gch->gen_policy();

  // Generation counters -- generation 0, 3 subspaces
  _gen_counters = new GenerationCounters("new", 0, 3,
      gcp->min_young_size(), gcp->max_young_size(), &_virtual_space);
  _gc_counters = new CollectorCounters(policy, 0);

  _eden_counters = new CSpaceCounters("eden", 0, _max_eden_size, _eden_space,
                                      _gen_counters);
  _from_counters = new CSpaceCounters("s0", 1, _max_survivor_size, _from_space,
                                      _gen_counters);
  _to_counters = new CSpaceCounters("s1", 2, _max_survivor_size, _to_space,
                                    _gen_counters);
  compute_space_boundaries(0, SpaceDecorator::Clear, SpaceDecorator::Mangle);
  update_counters();
  _old_gen = NULL;
  _tenuring_threshold = MaxTenuringThreshold;
  _pretenure_size_threshold_words = PretenureSizeThreshold >> LogHeapWordSize;

  _gc_timer = new (ResourceObj::C_HEAP, mtGC) STWGCTimer();
}

We can see that the young generation is divided again into three spaces: _eden_space, _from_space, and _to_space. There is also an age count used for promotion: _tenuring_threshold is set from MaxTenuringThreshold (15 by default), the age at which objects are promoted to the tenured generation. Finally there are the performance counters _eden_counters("eden"), _from_counters("s0"), and _to_counters("s1").
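A worked example of the split, assuming compute_survivor_size divides the reserved young size by SurvivorRatio + 2 (the default SurvivorRatio of 8 gives an eden : s0 : s1 ratio of 8 : 1 : 1; treat the exact formula as an assumption here):

#include <cstddef>
#include <cstdio>

int main() {
  const std::size_t M = 1024 * 1024;
  std::size_t young_size     = 10 * M;  // illustrative reserved young-gen size
  std::size_t survivor_ratio = 8;       // default -XX:SurvivorRatio

  // eden + two survivors = (ratio + 2) equal parts; eden takes `ratio` parts.
  std::size_t max_survivor = young_size / (survivor_ratio + 2);  // 1 MB each
  std::size_t max_eden     = young_size - 2 * max_survivor;      // 8 MB
  std::printf("eden=%zu MB, s0=%zu MB, s1=%zu MB\n",
              max_eden / M, max_survivor / M, max_survivor / M);
  return 0;
}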

This is the generation division of the heap from the perspective of the JVM.

3) Reserved space

The Generation base-class constructor calls _virtual_space.initialize(rs, initial_size) to set up the generation's reserved virtual space:

class Generation: public CHeapObj<mtGC> {
  friend class VMStructs;
 private:
  jlong _time_of_last_gc; // time when last gc on this generation happened (ms)
  MemRegion _prev_used_region; // for collectors that want to "remember" a value for
                               // used region at some specific point during collection.

 protected:
  // Minimum and maximum addresses for memory reserved (not necessarily
  // committed) for generation.
  // Used by card marking code. Must not overlap with address ranges of
  // other generations.
  MemRegion _reserved;

  // Memory area reserved for generation
  VirtualSpace _virtual_space;
  ...

class DefNewGeneration: public Generation {
  ...

4. Application and Allocation of Metaspace

jint universe_init() {
  ...
  jint status = Universe::initialize_heap();
  if (status != JNI_OK) {
    return status;
  }

  Metaspace::global_initialize();
  ...
  return JNI_OK;
}

So far we have mainly walked through the heap-space memory request and generation division. Now let's look at the request process for the metaspace.

1, metaspace.cpp - global_initialize()

void Metaspace::global_initialize() {
  MetaspaceGC::initialize();

  // Initialize the alignment for shared spaces.
  int max_alignment = os::vm_allocation_granularity();
  size_t cds_total = 0;
  MetaspaceShared::set_max_alignment(max_alignment);
  if (DumpSharedSpaces) {
     ............
#endif // _LP64
#endif // INCLUDE_CDS
  } else {
    ...
#ifdef _LP64
    if (!UseSharedSpaces && using_class_space()) {
      char* base = (char*)align_ptr_up(Universe::heap()->reserved_region().end(), _reserve_alignment);
      allocate_metaspace_compressed_klass_ptrs(base, 0);
    }
#endif // _LP64
    ...
    _tracer = new MetaspaceTracer();
  }
}

static CollectedHeap* heap() { return _collectedHeap; }

You can see that the base address for the class space is taken right at the end of the heap, Universe::heap()->reserved_region().end(), aligned up to _reserve_alignment. Then allocate_metaspace_compressed_klass_ptrs(base, 0) is called to request the space.
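A small sketch of where that lands the class space (the address and alignment are made up; only the align_ptr_up arithmetic matters):

#include <cstdint>
#include <cstdio>

// Round a pointer up to the next alignment boundary (power of two),
// as align_ptr_up does above.
static char* align_ptr_up(char* p, uintptr_t alignment) {
  return (char*)(((uintptr_t)p + alignment - 1) & ~(alignment - 1));
}

int main() {
  char* heap_end  = (char*)0x7C0001000;  // illustrative end of the heap
  uintptr_t align = 16 * 1024 * 1024;    // illustrative _reserve_alignment
  char* class_space_base = align_ptr_up(heap_end, align);
  // The compressed class space is requested starting right here, which
  // is why the heap and the class/metaspace end up back to back.
  std::printf("heap ends at %p, class space requested at %p\n",
              (void*)heap_end, (void*)class_space_base);
  return 0;
}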

2, metaspace.cpp - allocate_metaspace_compressed_klass_ptrs()

void Metaspace::allocate_metaspace_compressed_klass_ptrs(char* requested_addr, address cds_base) {
  ...
  // Don't use large pages for the class space.
  bool large_pages = false;

#ifndef AARCH64
  ReservedSpace metaspace_rs = ReservedSpace(compressed_class_space_size(),
                                             _reserve_alignment,
                                             large_pages,
                                             requested_addr);
#else // AARCH64
  ReservedSpace metaspace_rs;

  // Our compressed klass pointers may fit nicely into the lower 32
  // bits.
  if ((uint64_t)requested_addr + compressed_class_space_size() < 4*G) {
    metaspace_rs = ReservedSpace(compressed_class_space_size(),
                                 _reserve_alignment,
                                 large_pages,
                                 requested_addr);
  }
  ...
  initialize_class_space(metaspace_rs);
  ...
}

ReservedSpace::ReservedSpace(size_t size, size_t alignment,
                             bool large,
                             char* requested_address) {
  initialize(size, alignment, large, requested_address, false);
}

This ReservedSpace constructor calls initialize to request the memory.

void ReservedSpace::initialize(size_t size, size_t alignment, bool large,
                               char* requested_address,
                               bool executable) {
  ...
  _base = NULL;
  _size = 0;
  _special = false;
  _executable = executable;
  _alignment = 0;
  _noaccess_prefix = 0;
  if (size == 0) {
    return;
  }
  ...
  if (base == NULL) {
    if (requested_address != 0) {
      base = os::attempt_reserve_memory_at(size, requested_address);
      if (failed_to_reserve_as_requested(base, requested_address, size, false)) {
        // OS ignored requested address. Try different address.
        base = NULL;
      }
    } else {
      base = os::reserve_memory(size, NULL, alignment);
    }
  }
  ...
  // Done
  _base = base;
  _size = size;
  _alignment = alignment;
}

As you can see, this time requested_address is not NULL as it was when the heap was reserved, so base = os::attempt_reserve_memory_at(size, requested_address) is called instead.

char* os::attempt_reserve_memory_at(size_t bytes, char* addr) {
  char* result = pd_attempt_reserve_memory_at(bytes, addr);
  if (result != NULL) {
    MemTracker::record_virtual_memory_reserve((address)result, bytes, CALLER_PC);
  }
  return result;
}
char* os::pd_attempt_reserve_memory_at(size_t bytes, char* requested_addr) {
  const int max_tries = 10;
  char* base[max_tries];
  size_t size[max_tries];
  const size_t gap = 0x000000;
  ...
  char* addr = anon_mmap(requested_addr, bytes, false);
  if (addr == requested_addr) {
    return requested_addr;
  }
  ...
  if (i < max_tries) {
    return requested_addr;
  } else {
    return NULL;
  }
}
static char* anon_mmap(char* requested_addr, size_t bytes, bool fixed) {
  char * addr;
  int flags;
  ...
  // Map reserved/uncommitted pages PROT_NONE so we fail early if we
  // touch an uncommitted page. Otherwise, the read/write might
  // succeed if we have enough swap space to back the physical page.
  addr = (char*)::mmap(requested_addr, bytes, PROT_NONE,
                       flags, -1, 0);

  return addr == MAP_FAILED ? NULL : addr;
}

Then anon_mmap(requested_addr, bytes, false) again ends up in the Linux mmap call to request the memory. Having walked through all of this, we know that in memory the heap and the metaspace (class space) are logically laid out contiguously, end to end.

This concludes the analysis of how heap memory and metaspace memory are requested.