A few unimportant words first

Yesterday I did a bit of research on arrays for another article, "Diagramming Data Structures in JS – Array Structures".

I spent two days reading nearly 20 articles on the V8 array implementation, writing no fewer than 50 test cases, and taking nearly 100 memory snapshots to learn how V8 stores arrays in memory. I thought for a long time about how to make all of this easy to understand, and that turned out to be a real challenge.

So let's start with the first question:

Is the memory for the elements of a JavaScript Array allocated contiguously? Why or why not?

At first glance, I thought the answer was: yes, an array is allocated a contiguous block of memory. But then why ask such a question? Is the answer really that simple? I wasn't sure, since I had never dug very deeply into the underlying implementation of JavaScript's Array. So off I went on a Google tour with the question in hand... and it turns out there is more to it than that.

What is an array

Before we talk about JS arrays, let's review the definition of an array in data structures. (If you aren't familiar with array structures, take a look at "Diagramming Data Structures in JS – Array Structures".)

In computer science, an array data structure, or simply array, is a data structure consisting of a collection of elements of the same type, stored in one contiguous block of memory, so that the storage address of an element can be computed from its index. (Quoted from Wikipedia.)

According to the definition of arrays given by Wikipedia, arrays satisfy:

  1. All elements in an array are of the same type (same-type elements take the same amount of storage, so the address of an element can be computed directly from its index; see the small sketch after this list);
  2. The array occupies one contiguous block of memory (fixed length, contiguous).
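To make the index-to-address calculation concrete, here is a tiny illustration (the numbers are made up for the example, not taken from any real engine):

// Illustrative only: how a classic contiguous array locates element i.
function elementAddress(baseAddress, index, elementSize) {
  return baseAddress + index * elementSize
}

elementAddress(1000, 0, 4) // 1000
elementAddress(1000, 3, 4) // 1012: base address 1000, 4-byte elements, index 3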

If you think back to JS arrays, you’ll see that JS arrays are a little “special”…

JS arrays are a bit special

If you have written JS code, you are familiar with the following code:

let arr = [1, true, "Yuechi is looking for a job", { name: "Yuechi", age: 21 }] // 1. An array can hold elements of any type, and they can differ from one another
console.log(arr.length) // 4

arr.push("please refer me internally") // 2. You can append any element to the array; its length can change
console.log(arr.length) // 5

As you can see from the above code, JS arrays are somewhat special compared to arrays in data structures:

  1. The elements of a single JS array can be of different data types, so we clearly can't allocate a fixed-size slot for every element, and such an array can't compute an element's storage address directly from its index.
  2. A JS array can grow to any size. Do you still believe a JS array is allocated contiguous memory? If it were, and we kept growing the array indefinitely, how could we guarantee that the memory right after it is still free to hand to the array?

This article is long, but bear with me and I'll walk through it slowly.

Looking at arrays from V8's perspective

With the above questions in mind, we need a systematic look at JavaScript arrays, which means looking at V8 objects (elements, properties, hidden classes, descriptor arrays, fast and slow properties). If you are not familiar with V8 objects, take a look at "Fast and Slow Properties and Fast and Slow Arrays in V8".

Fast and slow properties in V8

To keep this article from getting even longer, I won't go into depth on V8's fast and slow properties; here is the conclusion: V8 distinguishes array-indexed properties from named properties, and the array-indexed properties are typically traversed first. The two kinds are stored in two separate data structures, reached through the elements and properties pointers respectively, as shown below:

The reason for keeping two separate data structures is to make adding, deleting, updating, and looking up properties efficient in different situations. V8 also has a further strategy: if the number of named properties fits within the object's predefined initial size, the named properties are stored directly in the object itself, so there is no need to follow the properties pointer and then look up the key. Skipping that intermediate step makes property lookup faster. Properties stored directly in the object are called in-object properties, and they sit at the same level as the properties and elements pointers.
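As a rough illustration (the property names here are arbitrary, and the layout itself can only be observed indirectly, for example through heap snapshots), named and indexed properties of a single object end up in the two different stores:

const obj = { name: "foo", age: 20 } // named properties: reached via the properties pointer,
                                     // or stored directly in the object as in-object properties
obj[0] = "a"
obj[1] = "b"                         // array-indexed properties: reached via the elements pointer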

Fast and slow arrays

By analogy with fast and slow properties, consider the following example:

const LIMIT = 6 * 1024 * 1024;
let arr = new Array(LIMIT); // a fast array
arr[arr.length + 1026] = 1; // the fast array is converted to a slow array

In this example, after line 2, arr is an empty array of length LIMIT, but line 3 immediately writes index arr.length + 1026. Creating a contiguous backing store of arr.length + 1026 + 1 slots just to hold such sparse data would waste a lot of memory, so V8 degrades the array to a slow array and creates a dictionary to store "key, value, descriptor" triples. (Likewise, when keys, values, or descriptors are defined with Object.defineProperty, V8 uses slow properties, which correspond to the slow array.) That raises the obvious questions: what exactly are fast and slow arrays, when does a fast array convert to a slow array, and what are the conversion rules?

// v8/src/objects/js-array.h
// The JSArray describes JavaScript Arrays
// Such an array can be in one of two modes:
// - fast, backing storage is a FixedArray and length <= elements.length();
// Please note: push and pop can be used to grow and shrink the array.
// - slow, backing storage is a HashTable with numbers as keys.
class JSArray : public JSObject {
 public:
  // [length]: The length property.
  DECL_ACCESSORS(length, Object)
  // ...
}

As you can see from the V8 array definition, arrays can be in one of two modes:

  1. Fast mode: the backing store is a FixedArray and length is less than or equal to elements.length(); the array can be grown and shrunk with push and pop;
  2. Slow mode: the backing store is a HashTable with numbers as keys.

Fast array FixedArray

  1. A fast array uses linear storage: its elements live in contiguous memory (a newly created empty array defaults to being a fast array);
  2. A fast array's length is variable; the storage can be resized dynamically as elements are added and removed, implemented through the expansion and shrinkage mechanisms.

From the linearity and continuity of point 1, I can confirm that FixedArray conforms to the definition of an array structure in data structures.

Data structure – Array structure

Expansion mechanism

// v8/src/objects/js-array.h 105
// Number of element slots to pre-allocate for an empty array.
// (The default pre-allocated size of an empty array is 4.)
static const int kPreallocatedArrayElements = 4;

// v8/src/objects/js-objects.h 537
static const uint32_t kMinAddedElementsCapacity = 16;

// v8/src/objects/js-objects.h 540
// Computes the new capacity when expanding the elements of a JSObject.
static uint32_t NewElementsCapacity(uint32_t old_capacity) {
  // (old_capacity + 50%) + kMinAddedElementsCapacity
  // (Expansion formula: new_capacity = old_capacity + old_capacity / 2 + 16)
  return old_capacity + (old_capacity >> 1) + kMinAddedElementsCapacity;
}

// v8/src/code-stub-assembler.cc 5137
Node* CodeStubAssembler::CalculateNewElementsCapacity(Node* old_capacity,
                                                      ParameterMode mode) {
  CSA_SLOW_ASSERT(this, MatchesParameterMode(old_capacity, mode));
  Node* half_old_capacity = WordOrSmiShr(old_capacity, 1, mode);
  Node* new_capacity = IntPtrOrSmiAdd(half_old_capacity, old_capacity, mode);
  Node* padding =
      IntPtrOrSmiConstant(JSObject::kMinAddedElementsCapacity, mode);
  return IntPtrOrSmiAdd(new_capacity, padding, mode);
}

// v8/src/code-stub-assembler.cc 5202
// Allocate the new backing store.
Node* new_elements = AllocateFixedArray(to_kind, new_capacity, mode);
// Copy the elements from the old elements store to the new.
// The size-check above guarantees that the |new_elements| is allocated
// in new space so we can skip the write barrier.
CopyFixedArrayElements(from_kind, elements, to_kind, new_elements, capacity,
                       new_capacity, SKIP_WRITE_BARRIER, mode);
StoreObjectField(object, JSObject::kElementsOffset, new_elements);

By default, an empty array is pre-allocated 4 element slots. When the array runs out of room, for example when a push no longer fits, the array is expanded; the minimum capacity added in one expansion is 16. The elements are copied from the old backing store to the new one, length is incremented, and the new length is returned.
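Written out as plain JS, the expansion formula from the source above looks like this (my own measurements later in the article suggest the observed growth is slightly different, so treat this as the documented formula rather than gospel):

function newElementsCapacity(oldCapacity) {
  // new_capacity = old_capacity + old_capacity / 2 + 16
  return oldCapacity + (oldCapacity >> 1) + 16
}

newElementsCapacity(4)  // 22
newElementsCapacity(22) // 49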

For example, if the array's current capacity is 4, the capacity after expansion will be:

4 + 4/2 + 16 = 22

We could give it a try

Memory used by elements = capacity * element size + 8 (an 8-byte header). For a newly created empty array the pre-allocated capacity is 4, and each element here takes 4 bytes.

function A(){
    let arr = new Array(4).fill(1)
    this.arr =arr
}
let a = new A()

We define an array of four elements, each initialized to 1.

new Array(len) or Array(len) allocates len element slots, and an array literal like [1, 2, 3, ...] allocates as many slots as it has elements.

So the memory occupied by its elements is:

4 * 4 + 8 = 24 bytes

Taking a heap snapshot to check, the result matches our expectation.
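If you want to check the arithmetic without opening a snapshot, a tiny helper reproduces the numbers (the 8-byte header is an observation from the snapshots, i.e. my assumption, not something quoted from the V8 source):

function elementsSizeBytes(capacity, bytesPerElement = 4) {
  // observed layout: capacity * element size + 8 bytes of header
  return capacity * bytesPerElement + 8
}

elementsSizeBytes(4)  // 24
elementsSizeBytes(22) // 96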

When we push one more element into the array, its backing store is already full, so the expansion mechanism kicks in and a larger backing store is created. The capacity after expansion follows the expansion formula:

4 + 4/2 + 16 = 22

Therefore, the memory space occupied by it should be:

22 * 4 + 8 = 96

function A(){
    let arr = new Array(4).fill(1)
+   arr.push(1) // The capacity expansion mechanism is triggered
    this.arr =arr
}
let a = new A()

It turns out the measured elements footprint is 100 bytes, i.e. a capacity of 23, one slot more than the 22 we calculated above. And when arr[23] is then set to 1, the capacity grows to 52, two more than the calculated (23 + 11 + 16) = 50.

So the formula (len + 1) + (len + 1)/2 + 16 seems to fit the observations better.

So I went on to check the next two expansions against this formula.

Sure enough, the next two capacities are 95 and 160.

Does anyone know why this differs from the expansion mechanism in the source above? If you do, please don't hesitate to leave a comment.

Shrinkage mechanism

// If capacity >= 2 * length + 16, shrink the backing store.
if (2 * length + JSObject::kMinAddedElementsCapacity <= capacity) {
  // If more than half the elements won't be used, trim the array.
  // Do not trim from short arrays to prevent frequent trimming on
  // repeated pop operations.
  // Leave some space to allow for subsequent push operations.
  int elements_to_trim = length + 1 == old_length
                             ? (capacity - length) / 2
                             : capacity - length;
  isolate->heap()->RightTrimFixedArray(*backing_store, elements_to_trim);
  // Fill the non-trimmed elements with holes.
  BackingStore::cast(*backing_store)
      ->FillWithHoles(length,
                      std::min(old_length, capacity - elements_to_trim));
} else {
  // Otherwise, fill the unused slots with holes.
  BackingStore::cast(*backing_store)->FillWithHoles(length, old_length);
}

If the capacity is greater than or equal to 2 * length + 16, the backing store is shrunk; otherwise the unused slots are simply filled with holes. elements_to_trim is the number of slots to cut off, and whether all of the free space or only half of it is trimmed is decided from length + 1 and old_length.
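Paraphrased in JS, the trimming decision above looks roughly like this (a sketch of the logic, not V8's actual code):

const kMinAddedElementsCapacity = 16

function elementsToTrim(length, oldLength, capacity) {
  if (2 * length + kMinAddedElementsCapacity <= capacity) {
    // Popping one element at a time? Trim only half of the free space,
    // leaving room for later pushes; otherwise trim everything past length.
    return length + 1 === oldLength
      ? Math.floor((capacity - length) / 2)
      : capacity - length
  }
  return 0 // no trimming; unused slots are just filled with holes
}

elementsToTrim(10, 11, 40) // 15 (pop case: trim half of the 30 free slots)
elementsToTrim(10, 40, 40) // 30 (length shrank a lot: trim all free slots)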

I didn't verify this one (lazy); knowing the mechanism is enough, and the difference shouldn't be big. Ha ha ha.

Slow array HashTable

const arr = [1, 2, 3]
arr[1999] = 1999
// How will arr be stored?

In this example, after line 1, arr is a fully packed array, but line 2 immediately defines index 1999 as 1999. Creating a full backing store of length 2000 just to hold such sparse data would waste a lot of memory, so V8 degrades the array to a slow array and creates a dictionary to store "key, value, descriptor" triples. This is also what the Object.defineProperty(object, key, descriptor) API does.

Since there is no way at the JavaScript API level to have V8 reach into the HiddenClass and store descriptor information there, when keys, values, or descriptors are defined with Object.defineProperty, V8 uses slow properties, which for arrays corresponds to the slow array.

Object.defineProperty is the core API of Vue 2's reactivity system. When objects or arrays get large, access inevitably slows down, because under the hood they end up in this slow, dictionary-based storage.
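For example (a hedged illustration; exactly when V8 switches modes is internal, but a non-default descriptor has to be stored somewhere):

const arr = [1, 2, 3]

Object.defineProperty(arr, 1, {
  value: 42,
  writable: false,   // a non-default descriptor can't live in the plain elements store,
  enumerable: true,  // so V8 falls back to the "key, value, descriptor" dictionary
  configurable: false
})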

When a fast array is converted to a slow array

// From: src/objects/js-objects.h
static const uint32_t kMaxGap = 1024;

// From: src/objects/dictionary.h
// JSObjects prefer dictionary elements if the dictionary saves this much
// memory compared to a fast elements backing store.
static const uint32_t kPreferFastElementsSizeFactor = 3;

// ...
class NumberDictionaryShape : public NumberDictionaryBaseShape {
 public:
  static const int kPrefixSize = 1;
  static const int kEntrySize = 3;
};


// From: src/objects/js-objects-inl.h
// If the fast-case backing storage takes up much more memory than a dictionary
// backing storage would, the object should have slow elements.
// static
static inline bool ShouldConvertToSlowElements(uint32_t used_elements,
                                               uint32_t new_capacity) {
  uint32_t size_threshold = NumberDictionary::kPreferFastElementsSizeFactor *
                            NumberDictionary::ComputeCapacity (used_elements) *
                            NumberDictionary::kEntrySize;
  return size_threshold <= new_capacity;
}

static inline bool ShouldConvertToSlowElements(JSObject object,
                                               uint32_t capacity,
                                               uint32_t index,
                                               uint32_t* new_capacity) {
  STATIC_ASSERT(JSObject::kMaxUncheckedOldFastElementsLength <=
                JSObject::kMaxUncheckedFastElementsLength);
  if (index < capacity) {
    *new_capacity = capacity;
    return false;
  }
  if (index - capacity >= JSObject::kMaxGap) return true;
  *new_capacity = JSObject::NewElementsCapacity(index + 1);
  DCHECK_LT(index, *new_capacity);
  // TODO(ulan): Check if it works with young large objects.
  if (*new_capacity <= JSObject::kMaxUncheckedOldFastElementsLength ||
      (*new_capacity <= JSObject::kMaxUncheckedFastElementsLength &&
       ObjectInYoungGeneration(object))) {
    return false;
  }
  return ShouldConvertToSlowElements(object.GetFastElementsUsage(),
                                     *new_capacity);
}

Reading the source, the conclusion is that there are two cases in which a fast array is converted to a slow array (a JS sketch of the check follows the list):

  • If the capacity required after expansion is at least kPreferFastElementsSizeFactor (3) * kEntrySize (3), i.e. roughly nine, times the dictionary capacity computed for the used elements, the fast backing store would take more memory than HashTable storage, so the fast array is converted to a slow array;
  • If the new index exceeds the current capacity by 1024 (kMaxGap) or more, the fast array is converted to a slow array.
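Here is that check paraphrased as a JS sketch (simplified: the real ComputeCapacity rounds up to a power of two and has a minimum size, which is glossed over here):

const kMaxGap = 1024
const kPreferFastElementsSizeFactor = 3
const kEntrySize = 3

function shouldConvertToSlowElements(usedElements, capacity, newIndex) {
  if (newIndex < capacity) return false            // still fits in the current backing store
  if (newIndex - capacity >= kMaxGap) return true  // rule 2: the new index is too far past the end
  const newCapacity = (newIndex + 1) + ((newIndex + 1) >> 1) + 16
  // rule 1: would the fast backing store be ~3x larger than a dictionary holding the same elements?
  const dictionaryCapacity = usedElements + (usedElements >> 1) // simplified ComputeCapacity
  const sizeThreshold = kPreferFastElementsSizeFactor * dictionaryCapacity * kEntrySize
  return sizeThreshold <= newCapacity
}

shouldConvertToSlowElements(3, 3, 1999) // true: 1999 - 3 >= 1024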

So, the previous example:

const arr = [1, 2, 3]
arr[1999] = 1999

Since 1999 exceeds the array's capacity by far more than 1024, arr is converted from a fast array to a slow array stored in hash form.

But my test of the first case didn't seem to match:

let arr = new Array(4).fill(1)
arr[500] = 1

According to the source, the expanded capacity of arr should be 22 or 23, and 500 is much larger than 23 * 9, yet the array still isn't converted to a slow array. Comments are welcome if anyone knows why.

How do you tell whether an array is fast or slow? You can infer it from the memory occupied by elements in a heap snapshot, as described later.
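Alternatively, if you have d8 or Node at hand, run with the --allow-natives-syntax flag and call the %DebugPrint intrinsic, which prints an object's internals (how much detail you get, including whether the elements kind is shown, depends on the V8 build):

// run with: node --allow-natives-syntax fast-or-slow.js (the file name is just an example)
const fast = [1, 2, 3]
const slow = [1, 2, 3]
slow[1999] = 1999 // the gap is larger than 1024, so this should switch to dictionary mode

%DebugPrint(fast) // debug builds print the elements kind, e.g. PACKED_SMI_ELEMENTS
%DebugPrint(slow) // debug builds show a NumberDictionary backing store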

When a slow array is converted to a fast array

According to the V8 source, as long as a slow array has not been explicitly marked as not convertible back to fast elements:

  • A slow array is converted back to a fast array only when doing so would save at least 50% of the space.

I won't paste the source code here.
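For reference, the check in V8, as far as I can tell from the source (so treat this sketch as approximate), compares twice the dictionary's size with the capacity a fast backing store would need, and it only runs when elements are being added:

const kEntrySize = 3

// Rough sketch; the V8 comment reads roughly "turn fast if the dictionary
// only saves 50% space" compared with a fast backing store.
function shouldConvertToFastElements(dictionaryCapacity, fastCapacityNeeded) {
  const dictionarySize = dictionaryCapacity * kEntrySize
  return 2 * dictionarySize >= fastCapacityNeeded
}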

But I couldn't seem to reproduce this conversion in practice:

let arr = []
arr[0] = 1
arr[2] = 1
arr[3] = 1
arr[10000] = 1 // elements now occupy 120 bytes
arr[1] = 1
delete arr[10000]

arr stays a slow array throughout this code, even though converting it to a fast array would shrink its elements to only 24 bytes, clearly saving more than half of the 120 bytes. Yet it doesn't seem to be converted back to a fast array, which confuses me. Comments are welcome if you know why.

V8's classification of numbers

Numbers in ECMAScript

I believe many people have been asked this interview question:

Why doesn’t 0.2+0.1 equal 0.3?

I explain how this works in my article "0.1 + 0.2 !== 0.3?", so I won't go into too much detail here; otherwise I won't finish writing even by the day after tomorrow, woo ~~

So, here’s a problem

JS numbers are specified as 64-bit values, but does V8 actually use 64 bits to represent every number?

Why ask? As we know, a number can be represented in memory in many ways (see the table below), and the 64-bit representation is by far the most expensive.

Taking the number 42 as an example:

representation                              bits
8-bit two's complement                      0010 1010
32-bit two's complement                     0000 0000 0000 0000 0000 0000 0010 1010
binary-coded decimal (BCD)                  0100 0010
32-bit IEEE-754 single-precision float      0100 0010 0010 1000 0000 0000 0000 0000
64-bit IEEE-754 double-precision float      0100 0000 0100 0101 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000

In practice, most of the numbers we use when programming in JS can be represented in 32 bits, so the engine optimizes for that.

The ECMAScript standard says a Number must behave as a 64-bit double-precision floating-point value, but always storing every number in 64 bits would be very inefficient, so JavaScript engines don't always do so. Internally the engine can use other representations (such as 32 bits), as long as everything observable from the outside behaves exactly like a 64-bit double.

Smi and HeapNumber in V8

V8 doesn't simply use 32 bits for every number; it classifies numbers into Smi and HeapNumber.

Note: this is purely engine-level. At the language level, JS only has the number type and does not distinguish integers from floating-point numbers.

-Infinity   // HeapNumber
-(2**30)-1  // HeapNumber
-(2**30)    // Smi
-42         // Smi
-0          // HeapNumber
0           // Smi
4.2         // HeapNumber
42          // Smi
2**30-1     // Smi
2**30       // HeapNumber
Infinity    // HeapNumber
NaN         // HeapNumber

As you can see above, Smi represents small integers, while HeapNumber represents floating-point numbers and values that can't be represented as a small integer, such as NaN, Infinity, and -0.

Why distinguish between the two? The reason, again, is that small integers are so common in our coding that V8 has singled them out and optimized them for fast integer operations

So how does this optimization work?

Say we declare an object whose x value is a Smi and whose y value is a HeapNumber. V8 allocates a dedicated memory object to hold the HeapNumber value and makes o.y a pointer to that memory entity.

When we update their values, the Smi is updated in place, while the HeapNumber, being immutable, forces V8 to allocate a new memory entity for the new value and point o.y at it.

If we need to update a HeapNumber frequently, performance will be much worse than with a Smi:
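The code for this loop seems to have been lost in formatting; reconstructing it from the description below, it would look something like this:

const o = { x: 1, y: 0.1 }

for (let i = 0; i < 5; i++) {
  o.y += 1 // each update allocates a fresh HeapNumber: 1.1, 2.1, 3.1, 4.1, 5.1
}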

During this short loop, the engine has to create six HeapNumber instances (0.1, 1.1, 2.1, 3.1, 4.1, 5.1), and by the end of the loop five of them are already garbage.

To avoid this problem, V8 provides an optimization similar to updating a Smi in place: when a numeric field holds a value outside the Smi range, V8 marks the field as a Double field and assigns it a MutableHeapNumber instance that stores the value as a 64-bit float.

Later, when that field is updated again, V8 no longer needs to create a new HeapNumber instance; it can update the value directly inside the MutableHeapNumber.

As mentioned, both HeapNumber and MutableHeapNumber are referenced through pointers to memory entities, but MutableHeapNumber is mutable. If you now assign o.x, whose value lives in a MutableHeapNumber, to another variable y, you don't want y to change the next time o.x changes. To prevent that, when o.x is shared, the MutableHeapNumber behind o.x is re-boxed into a regular HeapNumber before being handed to y:
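In code, the sharing step looks like this (what happens underneath is invisible from JS, of course; the comments describe V8's bookkeeping as explained above):

const o = { x: 0.1 }  // o.x lives in a MutableHeapNumber (the field is marked Double)

o.x += 1              // updated in place inside the MutableHeapNumber

const y = o.x         // o.x is re-boxed into a regular, immutable HeapNumber for y

o.x += 1              // a new value for o.x; y still sees the old one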

Packed and holey arrays

const o = ['a', 'b', 'c']
console.log(o[1]) // 'b'

delete o[1]
console.log(o[1]) // undefined

o.__proto__ = { 1: 'B' }
console.log(o[0]) // 'a'
console.log(o[1]) // 'B'  But how does it decide to go to the prototype chain?
console.log(o[2]) // 'c'
console.log(o[3]) // undefined

If every position in an array has a value, we call it a packed array. If some positions are holes, either left empty at initialization (for example arr[1] in const arr = [1, , 3]) or deleted after definition (as in the example above), we call it a holey array.

How V8 handles the accesses in this example can be explained by the following figure:

The array o starts out packed, so reading o[1] fetches the value directly without touching the prototype. delete o[1] then punches a hole (the_hole) into the array; the_hole marks properties that don't exist. Next, property 1 is defined on o's prototype, so when o[1] is read again the hole tells V8 to continue the lookup up the prototype chain. Prototype-chain lookups are expensive, and checking whether a slot holds the_hole is how V8 decides whether that lookup can be skipped.

How V8 classifies array elements

V8 currently distinguishes 21 different elements kinds, each of which comes with its own set of possible optimizations. The six most common are:

  • PACKED_SMI_ELEMENTS: a packed array whose elements are all Smi values
  • HOLEY_SMI_ELEMENTS: a holey array whose elements are all Smi values
  • PACKED_DOUBLE_ELEMENTS: a packed array whose elements are doubles (HeapNumber)
  • HOLEY_DOUBLE_ELEMENTS: a holey array whose elements are doubles (HeapNumber)
  • PACKED_ELEMENTS: a packed array whose elements cannot be represented as Smi or double
  • HOLEY_ELEMENTS: a holey array whose elements cannot be represented as Smi or double

When we define an array:

var a = [0, 1, 2]; // PACKED_SMI_ELEMENTS

The JS engine gives it the elements kind PACKED_SMI_ELEMENTS, meaning all elements in the array are Smi (small integers);

a.push(3.45);      // PACKED_DOUBLE_ELEMENTS

Then we push the floating-point value 3.45 into a, and the elements kind of a becomes PACKED_DOUBLE_ELEMENTS, which is used for floating-point numbers and integers that cannot be represented as Smi;

a.push("a");       // PACKED_ELEMENTS

Finally, pushing the string "a" converts the elements kind of a to PACKED_ELEMENTS, which is used for values that cannot be represented as Smi or double. (Note that the transition to PACKED_ELEMENTS causes extra boxing: the existing double 3.45 has to be boxed into a heap object.)

The following figure shows how the elements kind of array a transitions.

So if I set the fourth element of a back to a float, will its kind change back to PACKED_DOUBLE_ELEMENTS?

The answer is no. Elements kinds form a strict one-way lattice: like jumping off a building, you can go from the 10th floor down to the 9th, but there is no way to jump from the 9th back up to the 10th.

So, we’re going to assign the array, and at this point, we’re going to assign its sixth bit to be 6;

a[6] = 6;

Printing a in the browser console shows:

console.log(a) // [0, 1, 2, 3.45, 'a', empty, 6]

The assignment skips index 5, so index 5 becomes a hole and the elements kind of the array becomes HOLEY_ELEMENTS.

Can we change the array back to PACKED_ELEMENTS by now setting index 5 to 1?

The answer, again: there is no jumping back up.

How do JS arrays store different data types

As explained above, V8 adjusts the array's elements kind according to the data types stored in it. For example:

let arr = new Array(100).fill(1) // PACKED_SMI_ELEMENTS, elements size is 408 bytes

arr[0] = 0.1 // PACKED_DOUBLE_ELEMENTS, elements size is 808 bytes

When elements transitions from PACKED_SMI_ELEMENTS to PACKED_DOUBLE_ELEMENTS, the backing store is reallocated: each element now takes 8 bytes instead of the 4 bytes a PACKED_SMI_ELEMENTS element takes, so arr's memory footprint roughly doubles.

let a = [1, "hello", true, function () {
  return 1;
}];
Copy the code

When non-number types are present, the kind changes to PACKED_ELEMENTS. The number of bytes per element in PACKED_ELEMENTS depends on whether any double data is present: 8 bytes if there is, 4 bytes if there isn't.

That’s because all data stored in arrays except floating point numbers are stored in memory addresses, which take up only 4 bytes.

Note: An attempt is made to adjust the number of bytes per element only if the elements type of the array changes

Such as:

let arr = new Array(100).fill(1) // PACKED_SMI_ELEMENTS, 4 bytes per element
arr[0] = 0.1 // PACKED_DOUBLE_ELEMENTS, 8 bytes per element
arr[0] = 1 // still PACKED_DOUBLE_ELEMENTS; the elements kind does not change back, so still 8 bytes per element

But what about the following?

let arr = new Array(100).fill(1) // PACKED_SMI_ELEMENTS, 4 bytes per element
arr[0] = 0.1 // PACKED_DOUBLE_ELEMENTS, 8 bytes per element
arr[0] = {} // PACKED_ELEMENTS, 4 bytes per element, because the kind changed and no DOUBLE element remains

So we only try to adjust the number of bytes for each element if the elements type of the array changes

Conclusion

There was so much to cover that I wrote from 9 a.m. to 5 p.m.

Arrays are divided into fast arrays and slow arrays. Fast arrays are stored linearly in contiguous memory and indexed by position; slow arrays are stored in a HashTable.

Array elements are classified into many kinds, and the memory each element occupies depends on the elements kind and whether any floating-point numbers are present: if there are floating-point numbers (including in a PACKED_ELEMENTS array), each element takes 8 bytes; otherwise 4 bytes.

All data types other than numbers are stored as pointers, so apart from the numbers that Smi cannot represent, every type and value can be represented in 32 bits (4 bytes).

Expansion and shrinkage of arrays: expansion is triggered when you assign to an index beyond the current capacity (as long as the gap isn't too large, otherwise the array is converted to a slow array); if the capacity is greater than or equal to 2 * length + 16, the shrink mechanism is triggered.

What distinguishes fast and slow arrays is their storage format and when conversions between them happen.

Some remaining issues:

Some of the actual test results in this article are inconsistent with the V8 source code; they are collected here. Feel free to leave a comment.

The test environment was Google Chrome 93.0.4577.63

Question 1: Why are the results of the expansion mechanism tests different from those described in the source code?

The expansion mechanism in the source code is

function NewElementsCapacity(len){
    return len + (len >> 1) + 16
}

And that’s what my tests showed

function NewElementsCapacity(len){
    len++
    return len + (len >> 1) + 16
}

Question 2: With arr = [1,2,3,4]; arr[500] = 1;, the new index is more than 9 times the capacity after expansion. Why is the array not converted to a slow array?

Question 3: Why is the slow array not converted to the fast array in the following code? What kind of example would be converted to a fast array?

let arr = []
arr[0] = 1
arr[2] = 1
arr[3] = 1
arr[10000] = 1 // elements occupy 120 B
arr[1] = 1
delete arr[10000] // if converted to a fast array of length 4, elements would need only 24 B, saving far more than 50% of the space

Question 4: Why does elements in a fast array take up 8 bytes more space than the actual elements?

Reference articles

JS code Optimization –Array Array

Array types in V8

“Fast and Slow properties and Fast and Slow Arrays in V8”

How much do you know about memory allocation of JS variables?

"JS V8 | A Deep Understanding of JS Arrays – Is the memory of a JS Array allocated contiguously?"

Exploring the underlying implementation of Array in the V8 engine

Chromium Code Search

V8 Array Optimization you May not Know