• JavaScript array.push is 945x faster than array.concat 🤯🤔
  • By Shi Ling
  • Translation by: the Gold Translation Project
  • Permalink to this translation: github.com/xitu/gold-m…
  • Translator: Xuyuey
  • Proofread by: Qian Junying, MLS

TL;DR: if you are merging arrays with thousands of elements, you can save time by using arr1.push(...arr2) instead of arr1 = arr1.concat(arr2). If you want to go even faster, you can write your own function to merge arrays.

Wait a minute… how long does it take to merge 15,000 arrays with .concat?

Recently, we had a user complain that their UI tests running on UI-licious had slowed down significantly. Commands such as I.click, I.fill, and I.see, which normally take about 1 second to complete (including post-processing such as screenshots), were now taking more than 40 seconds, so tests that would normally finish in 20 minutes were taking hours instead, seriously slowing down their deployment process.

I quickly set up some timers to narrow down which part of the code was causing the slowness, and when I found the culprit, I got a surprise:

arr1 = arr1.concat(arr2)

The array's .concat method.

To let you write tests using simple commands such as I.click("Login") instead of CSS or XPath selectors such as I.click("#login-btn"), UI-licious uses dynamic code analysis to analyse the DOM tree, based on the site's semantics, accessibility attributes, and a variety of popular but non-standard patterns, to determine what to test and how to test it. The .concat operations are used to squash the DOM tree for analysis, but when the DOM tree is very large and deep, performance suffers badly. That is what happened when our user recently updated their application: the update also made their pages significantly more bloated (a performance problem on their side, but that's another topic).
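To make the problem concrete, here is a rough sketch (my reconstruction for illustration, not UI-licious's actual code) of the kind of recursive tree-squashing that triggers this behaviour:

// Flatten a DOM-like tree into a single array via repeated .concat.
// `node.children` is assumed to be an indexable list of child nodes.
function flattenTree(node) {
  var result = [node]
  for (var i = 0; i < node.children.length; i++) {
    // Each .concat allocates a new array and copies everything
    // accumulated so far, so large, deep trees get very expensive.
    result = result.concat(flattenTree(node.children[i]))
  }
  return result
}

Swapping that line for Array.prototype.push.apply(result, flattenTree(node.children[i])) is essentially the one-line fix this article arrives at.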

It took 6 seconds to merge 15,000 arrays, each with an average size of 5 elements, using .concat.

What?

6 seconds…

Just 15,000 arrays with an average of five elements?

It’s not a huge amount of data.

Why is it so slow? Is there a faster way to merge arrays?


Comparing the benchmarks

Push vs. concat: merging 10,000 arrays of 10 elements each

So I started researching (by which I mean googling) benchmarks comparing .concat with other ways of merging arrays in JavaScript.

It turns out that the fastest way to merge arrays is to use .push, which accepts n arguments:

arr1.push(arr2[0], arr2[1], arr2[2], ..., arr2[n])

// Because my array sizes are not fixed, I used the `apply` method
Array.prototype.push.apply(arr1, arr2)

And by comparison, it is faster. Leaps and bounds faster.

How fast?

I ran some performance benchmarks to see for myself. Lo and behold, this is the difference on Chrome:

👉 Link to the tests on JsPerf

Merging arrays of size 10, 10,000 times over, .concat performs at 0.40 ops/sec (operations per second), while .push performs at 378 ops/sec. push is 945 times faster than concat! The difference may not be linear, but it is already evident at this small scale.
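If you want to sanity-check numbers like these locally, here is a minimal console harness along the same lines (my own sketch, assuming plain integer arrays; the published figures come from the JsPerf tests linked above):

// Build 10,000 arrays of 10 integers each
var arrays = []
for (var n = 0; n < 10000; n++) {
  var a = []
  for (var j = 0; j < 10; j++) a.push(n * 10 + j)
  arrays.push(a)
}

// Merge them all with .concat
console.time("concat")
var merged1 = []
for (var i = 0; i < arrays.length; i++) {
  merged1 = merged1.concat(arrays[i])
}
console.timeEnd("concat")

// Merge them all with .push (via apply)
console.time("push")
var merged2 = []
for (var i = 0; i < arrays.length; i++) {
  Array.prototype.push.apply(merged2, arrays[i])
}
console.timeEnd("push")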

In Firefox, the execution result is as follows:

Generally, Firefox's SpiderMonkey JavaScript engine is slower than Chrome's V8, but .push still comes out on top, 2,260 times faster than .concat.

We made the above change to our code, and it fixed the whole slowdown problem.

Push vs. concat: merging 2 arrays with 50,000 elements each

But OK, what if instead of merging 10,000 arrays of 10 elements, we merged 2 huge arrays with 50,000 elements each?

Here are the results on Chrome:

👉 Link to the tests on JsPerf

.push is still faster than .concat, but this time only 9 times faster.

Not as dramatic as 945 times slower, but still plenty slow.


The more elegant spread operator

If you find Array.prototype.push.apply(arr1, arr2) verbose, there is a concise variant using the ES6 spread operator:

arr1.push(...arr2)

The performance difference between Array.prototype.push.apply(arr1, arr2) and arr1.push(...arr2) is negligible.
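One caveat worth flagging (my note, not part of the original benchmarks): both apply and the spread operator pass every element of arr2 to .push as a separate function argument, and engines cap how many arguments a single call may take, so merging a very large arr2 this way can throw a RangeError. A chunked variant avoids this while keeping most of the speed benefit:

// Merge arr2 into arr1 in fixed-size chunks to stay under the
// engine's argument-count limit. The chunk size is an assumption;
// safe limits vary by engine.
function mergeInPlace(arr1, arr2) {
  var CHUNK = 10000
  for (var i = 0; i < arr2.length; i += CHUNK) {
    Array.prototype.push.apply(arr1, arr2.slice(i, i + CHUNK))
  }
  return arr1
}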


But why is Array.concat so slow?

It has a lot to do with the JavaScript engine. I didn't know the exact answer, so I asked my friend @picocreator, co-founder of gpu.js, who has spent a fair amount of time digging around in the V8 source code. He also lent me the gaming PC he uses to benchmark gpu.js to run the JsPerf tests, because my MacBook didn't have enough memory even to run .concat on two arrays of 50,000 elements.

Obviously, the answer has a lot to do with how the two methods work: .concat creates a new array, while .push simply modifies the first array. The extra work .concat does, copying the elements of the first array into the returned array, is the key to the slowdown.
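A quick illustration of that semantic difference (my example):

var a = [1, 2]
var b = [3, 4]

// .concat allocates a brand-new array and copies both inputs into it
var c = a.concat(b)
console.log(a) // [1, 2] (untouched)
console.log(c) // [1, 2, 3, 4]

// .push appends in place: no new array, no copying of a's elements
a.push(...b)
console.log(a) // [1, 2, 3, 4]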

Me: "Nani? Seriously? That's it? But why such a big gap? It can't be!"
picocreator: "I'm not kidding, try writing naive implementations of .concat and .push yourself and you'll see!"

So I gave it a try, wrote several implementations, and compared them against Lodash's _.concat as well:

👉 Link to the tests on JsPerf

Naive implementation 1

Let's discuss the first pair of naive implementations:

Naive implementation of .concat

// Create a new array to hold the result
var arr3 = []

// Add the elements of arr1 to arr3
for (var i = 0; i < arr1Length; i++) {
  arr3[i] = arr1[i]
}

// Add the elements of arr2 to arr3
for (var i = 0; i < arr2Length; i++) {
  arr3[arr1Length + i] = arr2[i]
}

Naive implementation of .push

// Append the elements of arr2 to arr1 in place
for (var i = 0; i < arr2Length; i++) {
  arr1[arr1Length + i] = arr2[i]
}

As you can see, the only difference between the two is that the .push implementation modifies the first array directly.

Results of the built-in methods:

  • .concat: 75 ops/sec
  • .push: 793 ops/sec (10 times faster)

Results of naive implementation 1:

  • .concat: 536 ops/sec
  • .push: 11,104 ops/sec (20 times faster)

It turns out my naive concat and push are faster than the built-in implementations… And as you can see, simply creating a new array and copying the contents of the first array into it slows the whole process down significantly.

Naive implementation 2 (preallocating the size of the final array)

We can improve the naive implementations further by preallocating the size of the final array before adding the elements, and this makes a huge difference.

Naive implementation of .concat with preallocation

// Preallocate the final size of the array
var arr3 = new Array(arr1Length + arr2Length)

// Add the elements of arr1 to arr3
for (var i = 0; i < arr1Length; i++) {
  arr3[i] = arr1[i]
}

// Add the elements of arr2 to arr3
for (var i = 0; i < arr2Length; i++) {
  arr3[arr1Length + i] = arr2[i]
}

Naive implementation of .push with preallocation

// Preallocate the final size of arr1
arr1.length = arr1Length + arr2Length

// Add the elements of arr2 to arr1
for (var i = 0; i < arr2Length; i++) {
  arr1[arr1Length + i] = arr2[i]
}

Results of naive implementation 1:

  • .concat: 536 ops/sec
  • .push: 11,104 ops/sec (20 times faster)

Results of naive implementation 2:

  • .concat: 1,578 ops/sec
  • .push: 18,996 ops/sec (12 times faster)

Preallocating the size of the final array improves the performance of each implementation by 2-3x, presumably because the engine no longer has to repeatedly grow the backing store as elements are added.

.push an entire array vs. .push elements one at a time

Well, what if we just .push elements one at a time? Would that be faster than Array.prototype.push.apply(arr1, arr2)?

// Push the elements of arr2 onto arr1 one at a time
for (var i = 0; i < arr2Length; i++) {
  arr1.push(arr2[i])
}

The results:

  • .push the entire array: 793 ops/sec
  • .push one element at a time: 735 ops/sec (slower)

So pushing individual elements is slower than pushing the entire array in one call. Makes sense.

Conclusion: Why .push is faster than .concat

All in all, the main reason .concat is so much slower than .push is that it creates a new array and has to copy the elements of the first array into it.

Now there’s another mystery for me…

Another mystery

Why are the built-in implementations slower than the naive implementations? 🤔 Again, I asked @picocreator for help.

We looked at Lodash's _.concat implementation for hints about what the built-in .concat might be doing, since the two are comparable in performance (Lodash's is slightly faster).

As it turns out, because the built-in .concat has to follow the ECMAScript specification, the method is overloaded and supports two ways of passing arguments:

  1. Pass the n values to be added as arguments, e.g. [1,2].concat(3,4,5)
  2. Pass the array to be merged as an argument, e.g. [1,2].concat([3,4,5])

You can even mix the two: [1,2].concat(3,4,[5,6])

Lodash also handles both overloaded signatures. It puts all of the arguments into an array and then flattens it. That makes sense if you are passing in several arrays as separate arguments. But when you pass in a single array to merge, it doesn't just use that array as-is: it still copies everything into a new array and then flattens it.

… Well…

So there is clearly room to optimise for the single-array case, and that is why you might want to implement array merging yourself.
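For instance, here is a hedged sketch of a special-purpose merge for the "many small arrays" case from the start of this article: plain arrays in, one preallocated array out, no overload handling, no flattening.

// Merge an array of arrays into a single preallocated result array,
// skipping the argument handling and flattening that generic
// implementations must do.
function mergeAll(arrays) {
  var total = 0
  for (var i = 0; i < arrays.length; i++) {
    total += arrays[i].length
  }
  var out = new Array(total) // preallocate the final size
  var k = 0
  for (var i = 0; i < arrays.length; i++) {
    var a = arrays[i]
    for (var j = 0; j < a.length; j++) {
      out[k++] = a[j]
    }
  }
  return out
}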

Again, this is just @picocreator's and my understanding of how the built-in .concat works under the hood, based on Lodash's source code and his slightly outdated knowledge of the V8 source.

You can read Lodash's source code at your leisure here.


Additional notes

  1. Our tests only used arrays of integers. JavaScript engines are known to run faster on arrays of a single element type, so if the arrays contained objects, the results could be expected to be slower.

  2. Here are the specs of the PC used to run the benchmarks:


Why were we doing such large array operations in UI-licious tests anyway?

Under the hood, the UI-licious test engine scans the DOM tree of the target application, evaluating semantics, accessibility attributes, and other common patterns to determine which element to target and how to test it.

This way we can ensure that we write the test as simply as this:

// Go to dev.to
I.goto("https://dev.to")

// Fill the search box with "uilicious" and search
I.fill("Search", "uilicious")
I.pressEnter()

// I should be able to see myself and my co-founder
I.see("Shi Ling")
I.see("Eugene Cheah")

No CSS or XPath selectors are used, which makes the tests easier to read, less sensitive to changes in the UI, and easier to maintain.

Note: a public service announcement. Keep your DOM node count low!

Unfortunately, DOM trees tend to grow as people build increasingly complex and dynamic applications with modern front-end frameworks. Frameworks are a double-edged sword: they let us develop faster, but people often forget how much redundant markup they add. When reviewing the source code of various websites, I am often shocked by the number of elements that exist simply to wrap other elements.

If you want to know whether your site has too many DOM nodes, you can run a Lighthouse audit (there is also a quick console spot check sketched after the list below).

According to Google, the optimal DOM tree:

  • Has fewer than 1,500 nodes
  • Is less than 32 levels deep
  • Has no parent node with more than 60 children
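And here is the quick console spot check mentioned above, a rough sketch of my own (not part of Lighthouse) that measures the same three numbers:

// Total number of DOM nodes on the page
console.log("Total nodes:", document.getElementsByTagName("*").length)

// Maximum depth of the element tree, via a simple recursive walk
function depthOf(node) {
  var max = 0
  for (var i = 0; i < node.children.length; i++) {
    max = Math.max(max, depthOf(node.children[i]))
  }
  return max + 1
}
console.log("Max depth:", depthOf(document.body))

// Largest number of children on any single parent
var most = 0
var all = document.getElementsByTagName("*")
for (var i = 0; i < all.length; i++) {
  most = Math.max(most, all[i].children.length)
}
console.log("Max children:", most)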

A quick check of the dev.to feed shows that its DOM tree size is pretty good:

  • A total of 941 nodes
  • Maximum depth is 14
  • The maximum number of child elements is 49

Not bad!


