What if I told you that everything you know is false, and that some of the beloved features added to ECMAScript in recent years can easily cause performance problems?

Let's rewind a few years and go back to the innocent days of ES5…

I remember the day ES5 was released, when our favorite JavaScript gained some great array methods (forEach, reduce, map, filter) that made the language more powerful, made writing code more fun and smooth, and made code more accessible.

Around the same time, Node.js was introduced, which allowed us to smoothly transition from the front end to the back end and really redefined full-stack development.

Today, Node.js, which runs the latest ECMAScript on the V8 engine, is fighting for recognition as one of the dominant server-side development platforms, so it needs to prove itself efficient in terms of performance. Of course, there are many performance parameters to consider, and no single language outperforms all others in every respect. But does writing JavaScript out of the box, using the methods mentioned above, help or hurt your application's performance?

In addition, JavaScript is now considered a reasonable choice for client-side development that goes well beyond merely rendering views, on the assumption that users' machines keep getting better and networks keep getting faster. But can we rely on the user's machine when we need a very high-performance or very complex application?

To test these questions, I tried to compare several scenarios and gain insight into my results, which I did on Node.js v10.11.0, Chrome, and macOS.

1. Iterating over an array

The first scenario was summing an array of 100,000 items. This is a real-world task: fetch a list from the database and sum it, with no extra DB operations.

I compared for, for-of, while, forEach, and reduce, each summing the same 100,000 random items. The results are as follows:

    For Loop, average loop time: ~10 microseconds
    For-Of, average loop time: ~110 microseconds
    ForEach, average loop time: ~77 microseconds
    While, average loop time: ~11 microseconds
    Reduce, average loop time: ~113 microseconds

A Google search for how to sum an array recommends reduce as the best implementation, yet it performed the worst. My go-to method, forEach, didn't perform well either. Even for-of, the newest ES6 construct, was among the slowest: roughly 10 times slower than the old for loop, which was the fastest of all.

There are two main reasons the newest and most recommended methods turn out so much slower: reduce and forEach invoke a callback function for every element, so each iteration pays the cost of a function call on the stack, plus the additional bookkeeping and validation these methods perform per element.

2. Copy the array

Copying arrays may not seem like an exciting scenario, but it is the cornerstone of immutable functions, which produce their output without modifying their input.

The performance tests again showed interesting results: when copying 100,000 random items, the old ways beat the new. The ES6 spread operator [...arr] and Array.from, as well as the ES5 map method arr.map(x => x), all performed worse than the venerable arr.slice() and concatenation via [].concat(arr).

    Duplicate using Slice, average: ~367 microseconds
    Duplicate using Map, average: ~469 microseconds
    Duplicate using Spread, average: ~512 microseconds
    Duplicate using Concat, average: ~366 microseconds
    Duplicate using Array From, average: ~1,436 microseconds
    Duplicate manually, average: ~412 microseconds
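For reference, here is a sketch of the six copy techniques compared above. All of them produce a new array holding the same elements (a shallow copy), and none mutates the source:

```javascript
// Six ways to produce a shallow copy of an array.
const src = Array.from({ length: 100_000 }, () => Math.random());

const bySlice  = src.slice();          // ES5, fastest in the test above
const byConcat = [].concat(src);       // ES5, on par with slice
const byMap    = src.map(x => x);      // ES5, pays a callback per element
const bySpread = [...src];             // ES6 spread operator
const byFrom   = Array.from(src);      // ES6, slowest in the test above
const manual   = [];                   // plain loop with push
for (let i = 0; i < src.length; i++) manual.push(src[i]);
```

Note that these are shallow copies: for an array of objects, all six copies would still share the same object references.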

3. Object iteration

Another common scenario is object iteration, usually when we can't look up a value by a specific key but have to walk an entire JSON structure or object. We have the old for-in loop (for (let key in obj)) as well as the newer Object.keys(obj) and Object.entries(obj) methods.
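The three iteration styles look like this on a small object (a sketch; the benchmarked objects were much larger):

```javascript
const obj = { a: 1, b: 2, c: 3 };

// Old style: for-in yields keys one at a time.
let sum1 = 0;
for (const key in obj) sum1 += obj[key];

// Object.keys + forEach: first materializes an array of own keys.
let sum2 = 0;
Object.keys(obj).forEach(key => { sum2 += obj[key]; });

// Object.entries + for-of: first materializes an array of [key, value] pairs.
let sum3 = 0;
for (const [, value] of Object.entries(obj)) sum3 += value;
```

All three visit the same own enumerable properties here; they differ in how much intermediate structure they allocate along the way.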

I ran the methods above over 100,000 objects, each containing 1,000 random keys and values. The results are as follows:

    Object iterate For-In, average: ~240 microseconds
    Object iterate Keys For Each, average: ~294 microseconds
    Object iterate Entries For-Of, average: ~535 microseconds

The reason for this result is that the latter two approaches first materialize an enumerable array of keys or entries, instead of simply iterating the object without building that intermediate structure. Still, the final results are worth noting.
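That allocation difference is easy to see directly. Object.keys and Object.entries each return a freshly allocated array up front, while for-in produces keys lazily without building any collection:

```javascript
// Object.keys / Object.entries allocate arrays; for-in does not.
const point = { x: 10, y: 20 };

const keys = Object.keys(point);       // new array of own key strings
const entries = Object.entries(point); // new array of [key, value] pairs

console.log(Array.isArray(keys), keys);
console.log(Array.isArray(entries), entries);
```

For the benchmark's 1,000-key objects, that means an extra 1,000-element array (or 1,000 two-element pair arrays) per object before iteration even begins.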

Conclusion

My conclusion is obvious: if performance is critical to your application, or if your servers need to handle heavy load, then the cool, more readable, cleaner methods can come with a significant performance penalty, sometimes making your code 10 times slower!

Next time, before blindly following a shiny new trend, make sure the new approach actually meets your requirements. For small applications, fast iteration and high readability are exactly what the code needs; but for heavily loaded servers and large client-side applications, they may not be best practice.

Original text: hackernoon.com/3-javascrip…