Having introduced the leading front-end libraries in earlier posts (reading the 2020 JavaScript trends report and looking ahead to ES2020), this article takes a look at the ultimate performance battle between JavaScript frameworks.
Browsing around recently, I realized there hasn't been a good head-to-head comparison of front-end JavaScript frameworks in a couple of years. With 2020 shaping up to be an extraordinary year, let's take a look at how the popular libraries stack up on performance.
Which JavaScript frameworks are the most popular? Here we take the 20 most-starred frameworks on GitHub and compare them using the JS Framework Benchmark.
Disclaimer: this comparison is meant to be fun and, hopefully, educational. As always, every library here is fast enough for most things, and good performance can be achieved with many different techniques. The numbers can serve as a reference, but the performance of your particular use case should be verified independently. The latest official results can be found here. And to be clear: this comparison only looks at objective metrics; a real framework choice also needs to weigh team experience, collaboration, productivity, and other factors.
The comparison
The key results come from the JS Framework Benchmark running on the latest Google Chrome 87, on a Core i7 Razer Blade 15 under Fedora 33 with mitigations turned off.
I filtered out all implementations flagged as problematic and took the top 20 libraries on GitHub by star count. Where a library has multiple implementations, I used the latest version and the highest-performing variant that doesn't rely on third-party libraries. GitHub's top 20 are listed below.
- Vue (177k)
- React (161k)
- Angular (68.9k)
- Svelte (40.5k)
- Preact (27.9k)
- Ember (21.7k)
- HyperApp (18.2k)
- Inferno (14.6k)
- Riot (14.4k)
- Yew (14.2k)
- Mithril (12.5k)
- Alpine (12.4k)
- Knockout (9.9k)
- Marko (9.9k)
- lit-html (6.9k)
- Rax (7k)
- Elm (6.2k)
- Ractive (5.8k)
- Solid (4.7k)
- Imba (4.1k)
Note: the LitElement implementation will be used as the lit-html entry, because the standard lit-html example has been flagged as problematic. The overhead should be minimal, since it's raw lit-html wrapped in a single Web Component.
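For reference, wrapping a lit-html template in a single Web Component via LitElement looks roughly like this. It's only a sketch of the shape of the approach; the element name and property are made up, not taken from the benchmark.

```js
import { LitElement, html } from 'lit-element';

// Hypothetical element, for illustration only.
class GreetingCard extends LitElement {
  static get properties() {
    return { name: { type: String } };
  }

  render() {
    // The template itself is plain lit-html; LitElement just re-renders it
    // into this element's shadow root whenever `name` changes.
    return html`<p>Hello, ${this.name}!</p>`;
  }
}

customElements.define('greeting-card', GreetingCard);
```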
This is a pretty good cross-section of the current web development ecosystem. GitHub stars aren't everything, but the benchmark contains well over 100 implementations, so some cutoff for relative popularity was needed.
Each library will be compared in three categories:
- DOM performance
- Startup metrics
- Memory usage
In addition, the frameworks are split into four groups so they can be compared against their closest performance peers. However, the libraries will still be ranked across all three categories overall.
Within each group there is a reference Vanilla JavaScript entry. Built with all the best techniques, this implementation is hand-optimized for performance and serves as the baseline for all comparisons.
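For a sense of what that baseline looks like, here is a rough sketch of imperative DOM creation. It is simplified for illustration and is not the benchmark's actual implementation, which also does things like cloning template nodes and batching updates.

```js
// Simplified sketch of hand-written row creation (illustrative data shape).
function buildRows(tbody, rows) {
  const fragment = document.createDocumentFragment();
  for (const row of rows) {
    const tr = document.createElement('tr');
    const td = document.createElement('td');
    td.textContent = row.label;
    tr.appendChild(td);
    fragment.appendChild(tr);
  }
  // Appending the fragment once keeps DOM mutations (and layout work) to a minimum.
  tbody.appendChild(fragment);
}

buildRows(document.querySelector('tbody'), [{ label: 'row 1' }, { label: 'row 2' }]);
```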
Group 4 – Standard performance
This is the largest group, made up of some of the most popular libraries, many of them backed by companies such as Facebook, Google, eBay, and Alibaba. These libraries either aren't particularly performance-focused in any one area, or they shine in one area and perform poorly in others.
There's a lot of red and orange here, but keep in mind that on average these libraries are only about 2x slower than the painstakingly hand-crafted imperative Vanilla JavaScript reference. How much does the difference between 400ms and 200ms really matter?
React is the leader of this group in terms of raw performance. Given the architectural diversity, React, Marko, Angular, and Ember aren't that far apart on the whole. But it doesn't stop there: it's the React Hooks implementation that leads here. For everyone pointing to the extra functions and closures created by Hooks as the reason to stick with class components, the performance numbers are not on your side. React Hooks are the most performant way to use React.
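To make that comparison concrete, here is the same toy counter as a Hooks function component and as a class component. This is only an illustration of the two styles, not the benchmark code.

```jsx
import React, { useState } from 'react';

// Hooks version: a plain function, no instance or lifecycle boilerplate.
function Counter() {
  const [count, setCount] = useState(0);
  return <button onClick={() => setCount(count + 1)}>{count}</button>;
}

// Class version of the same component, for contrast.
class CounterClass extends React.Component {
  state = { count: 0 };
  render() {
    return (
      <button onClick={() => this.setState({ count: this.state.count + 1 })}>
        {this.state.count}
      </button>
    );
  }
}
```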
Most of the libraries here have either naive list reconciliation, which leads to really poor swap rows performance, or expensive creation costs. Ember is the extreme case: its update performance is actually better than much of the group, but it has some of the worst creation performance.
The slowest libraries (Knockout, Ractive, and Alpine) are fine-grained reactive libraries with similar architectures. Knockout and Ractive (also written by Rich Harris, the author of Svelte) date from the early 2010s, before VDOM libraries took over. I also doubt Alpine ever expected to render 10,000 rows through its JavaScript methods. We won't see another purely fine-grained reactive library until near the end of the comparison.
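As a reminder of what fine-grained reactivity looks like, here is a minimal Knockout sketch (the names are made up for illustration). Each observable tracks its own subscribers, so changing one value only re-runs the computations that depend on it.

```js
import ko from 'knockout';

const firstName = ko.observable('Ada');
const lastName = ko.observable('Lovelace');

// Recomputed only when one of the observables it reads actually changes.
const fullName = ko.computed(() => `${firstName()} ${lastName()}`);

fullName.subscribe(value => console.log('full name is now', value));
firstName('Grace'); // logs "full name is now Grace Lovelace"
```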
Next, we'll compare startup metrics, a category driven primarily by bundle size.
The order shuffles quite a bit here. Alpine, which did worst on DOM performance, has the smallest bundle size and fastest startup time, followed by Marko (from eBay) and then Rax (from Alibaba). All three libraries are built with server-side rendering in mind and lighter client-side interactivity. That's why they sit in Group 4 on raw performance but lead on startup here.
The bottom half of the table contains the largest bundles in the benchmark, ending with Ember at more than twice the size of any other implementation. I don't know why it takes more than half a megabyte to render this table, but it clearly hurts startup performance.
The last category we’ll look at is memory consumption.
Memory tends to mirror the patterns we've already seen, since it has a big impact on performance, and larger libraries tend to use more memory. Alpine, Marko, and React lead the way, while the aging fine-grained reactive libraries and Ember use the most. Ember is huge: after rendering only six buttons on the page, it already uses more memory than the Vanilla implementation does across the entire suite.
Group 4 results
As a whole, this group represents over 300,000 GitHub stars and probably the lion's share of NPM downloads, yet Marko and Alpine rank highest on average within it, with React in third place behind them.
This is also the group where we say goodbye to our old reactive libraries. Let's stay optimistic as we move on.
Group 3 – Performance awareness
With these frameworks, you know performance has been taken into account. They are conscious of scale and strike a balance between creation and update costs. We also see a variety of approaches: a WebAssembly framework in Yew (written in Rust), and Web Components via LitElement. Without further ado, let's see how they do.
The scores improve a bit here, though we also see a wider spread. Preact is the top performer of the group, with LitElement close behind. Vue 3 and Riot land together in the middle, and both have histories that mix reactivity and a VDOM. Mithril, one of the first VDOM libraries to put performance first, follows, and Yew, the only WASM library, brings up the rear.
All of these libraries have a similar performance profile. None of them is purely reactive: they all use top-down rendering, whether through a VDOM or a simple tagged template literal diff. Their list reconciliation is smarter than the previous group's (see the swap rows performance), but most still have some of the slowest select rows performance.
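As a concrete example of that top-down style, here is a minimal Mithril counter (a sketch for illustration, not benchmark code). Mithril re-runs the view after each event handler and diffs the resulting virtual DOM against the previous one.

```js
import m from 'mithril';

let count = 0;

// The whole view is re-described on every redraw; Mithril diffs it against the last render.
const Counter = {
  view: () => m('button', { onclick: () => count++ }, `clicks: ${count}`),
};

m.mount(document.getElementById('app'), Counter);
```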
Yew is the exception to that slow select rows pattern, but it is slower in other respects. Let's see if the rest of the tests help it.
Things improve here too, with Preact the clear leader on startup metrics. Yew is the only truly large library in the bunch; WASM bundles do lean to the bigger side.
Again we see the results pairing up. Vue is the second-largest library after Yew, Preact and Riot are very compact, and Mithril and LitElement sit in the middle.
Preact, the 4KB React replacement, is definitely the smallest library we've seen so far, though smaller ones are still to come. In any case, no library in this range needs to worry much about its bundle size.
Yew won this time. It has the smallest memory footprint of all the frameworks tested. The WASM library does this very well. Everything else is very close. Mithril and Preact were the worst, but not by much.
There isn't much more to say here. You might have expected the LitElement example to be lighter than the other non-Yew libraries, since it doesn't use a virtual DOM like the rest. But as we'll see later, a VDOM doesn't automatically mean more memory.
Group 3 results
Riot and Preact rank best on average, with LitElement in third. Riot, while not the top performer, had no real weaknesses in this group, which is what won it the comparison. But you won't be disappointed with any of these frameworks. Between WASM and Web Components, they represent what many see as the future of the web.
The next group represents a different vision of the future of the web.
Group 2 – Performance champion
The libraries in this group are highly competitive. Most of them are compiled languages, each with its own character: the immutable, functional Elm, the Ruby-inspired Imba, and the "disappearing" Svelte.
Note: not everyone may be familiar with Svelte's "disappearing framework" moniker. It describes how the framework essentially compiles itself out of the output.
The odd one out is HyperApp, which is the complete opposite: no compiler, no templates, just the h function and a minimal virtual DOM.
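A minimal sketch of that style, roughly following the Hyperapp 2 API (the counter app itself is made up for illustration):

```js
import { h, text, app } from 'hyperapp';

// Actions are plain functions from state to new state.
const Increment = state => ({ ...state, count: state.count + 1 });

app({
  init: { count: 0 },
  view: state =>
    h('main', {}, [
      h('h1', {}, text(state.count)),
      h('button', { onclick: Increment }, text('+1')),
    ]),
  node: document.getElementById('app'),
});
```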
Well, the smallest virtual DOM wins. Contrary to recent claims, it turns out that a virtual DOM is not a recipe for poor performance, and having a compiler doesn't automatically put the other libraries ahead.
Among the compiled libraries we actually see three different rendering approaches, all landing at roughly the same average performance:
- Imba uses DOM reconciliation (closest to the LitElement approach we saw earlier)
- Elm uses a virtual DOM
- Svelte uses a component-level reactive system (see the sketch after this list)
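Here is the sketch promised above for the Svelte flavor: a made-up counter component where a plain assignment is all the compiler needs to generate a targeted DOM update.

```svelte
<!-- Counter.svelte: compiled ahead of time into imperative DOM code. -->
<script>
  let count = 0;
</script>

<!-- Reassigning `count` is what triggers the surgical update of this text node. -->
<button on:click={() => (count += 1)}>
  clicked {count} times
</button>
```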
You should note that the virtual DOM libraries have the worst select rows performance, because that's where their extra work shows up, but they also have faster initial rendering. If you look closely at the results so far, you'll notice this trade-off between virtual DOM libraries and reactive libraries keeps showing up. Beyond that, though, performance here is close across the board.
So let's move on. How do our compilers fare on startup time and bundle size?
Well, as you can see, the little virtual DOM library not only performs better, it's also smaller than the others. In fact, HyperApp is the smallest implementation of any library here. The compilers can't claim this one.
Both it and Svelte are smaller than our Vanilla JavaScript reference build. How is that possible? Abstractions let you express the same behavior with less, more reusable code, and the Vanilla JS implementation was optimized for performance rather than size.
Elm stays competitive in this group, but Imba is starting to drift toward the territory of some of the Group 4 libraries.
All right, on to memory, the compilers' last chance to shine.
Memory is close, almost a tie, but Svelte ultimately takes it for the compilers: some sweet revenge on the little virtual DOM library that is otherwise smaller and faster than it.
To be honest, all of these libraries have excellent memory profiles. It should be clear by now that there is a relationship between less memory and better performance.
Group 2 results
Don’t believe the hype?
Not quite. There's more going on than meets the eye. A well-designed library, whether it does its work at runtime or at compile time, and regardless of the technical approach, can deliver high performance.
HyperApp is the clear winner of this group, followed by Svelte, Elm, and Imba. With their focus on performance, you know these libraries will hold up in most scenarios, and they consistently show up near the top of the benchmark.
So what’s left?
What if I told you there are declarative JavaScript libraries so confident in their performance that they don't need to reach for raw WASM, Web Workers, or whatever other technology?
Group 1 – Performance elite
In a way, this group might be called "blindingly fast", and I believe that was once a catchphrase for one of these libraries. If you're keeping track, only two libraries from our top 20 are left. In truth, there are a handful of libraries in this class constantly pushing the boundaries, but only two of them are this popular. On average, they are less than 20% slower than the hand-optimized Vanilla JS reference.
It's worth a close look. Here we have two libraries that, judging by their code, might be mistaken for siblings, yet they work in completely different ways. Inferno is one of the most performant virtual DOM libraries around, and yes, three of the top five overall are virtual DOM libraries; the slowdown on the select rows test is the giveaway.
Solid, on the other hand, uses fine-grained reactivity, just like the slowest legacy libraries back in Group 4. It's an odd place for that technique to reappear, but as we've seen, Solid has addressed their weaknesses: creation is as fast as updates, landing an incredible 5% or so off Vanilla JavaScript.
Oddly, what Inferno and Solid have in common is JSX templating and a React-inspired API. With all the other libraries carrying optimized custom DSLs, you might not expect to find JSX at the peak of performance. But, as HyperApp also shows, these choices affect performance less than one might think.
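A minimal sketch of that shared JSX style on the Solid side (the counter is made up for illustration): signals drive fine-grained updates, while the JSX compiles to real DOM nodes created once.

```jsx
import { createSignal } from 'solid-js';
import { render } from 'solid-js/web';

function Counter() {
  const [count, setCount] = createSignal(0);
  // Only the text node bound to count() is updated on click; the button is created once.
  return <button onClick={() => setCount(count() + 1)}>{count()}</button>;
}

render(() => <Counter />, document.getElementById('app'));
```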
Solid joins HyperApp and Svelte as the third library smaller than the Vanilla JS implementation. And Inferno is no slouch either.
It seems that once a performance-focused library is small, sometimes adding more code actually improves performance: better list reconciliation algorithms, more specific fast paths, more fine-grained updates.
Inferno may be larger than some of the libraries in the previous groups, but it’s still a library under 10KB that beats almost all of them in terms of performance.
There it is. Apart from Yew and its use of WASM, these are the lowest memory-consuming frameworks in the comparison. Given their performance, that's not surprising.
These memory numbers reflect very careful attention to every object and closure created, much of it coming from the custom JSX transformations both libraries perform.
This memory efficiency matters especially for Solid because, like most fine-grained reactive libraries, it trades memory consumption for lower CPU overhead. Conquering that memory overhead is a big part of how Solid took a technique shared by some of the slowest libraries here and made it the fastest.
Group 1 results
The sky is the limit.
… Or Vanilla JavaScript is. But our declarative libraries here come so close that you'd hardly know the difference. We need to think carefully about the problems we face when working with the DOM, and many different techniques can render it efficiently. We see that here: Solid took the performance crown with a technique that a decade ago would have been considered ancient and slow, and Inferno proved once again that a virtual DOM is no barrier to doing things efficiently.
conclusion
When building a JavaScript front end, we have a number of options. This is just a quick look at the performance overhead of the framework. User code has a bigger impact when it comes to actual performance in your application.
But what I really want to drive home here is that it's important to test your solution and understand its performance. Reality is always different from the marketing. A virtual DOM is not guaranteed to be slow. A compiler is not guaranteed to produce the smallest bundle. Custom template DSLs are not guaranteed to be optimal.
Finally, here is the complete table showing all the libraries together. A library landing near the bottom doesn't necessarily mean it's slow, just that it scored worse against these highly competitive peers.
All of the frameworks
All frameworks in a single chart.
performance
startup
memory
The ranking
All results are added up, with 20 points for first place down to 1 point for last. In the event of a tie, the performance results take precedence.
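As a sanity check on that arithmetic, here is a tiny hypothetical helper that reproduces the scoring:

```js
// 20 points for 1st place down to 1 point for 20th, summed over the three categories.
function totalScore(ranks) {
  return Object.values(ranks).reduce((sum, rank) => sum + (21 - rank), 0);
}

// e.g. 1st in performance, 2nd in startup, 3rd in memory -> 20 + 19 + 18 = 57
console.log(totalScore({ performance: 1, startup: 2, memory: 3 })); // 57
```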
- Solid (57)
- HyperApp (54)
- Inferno (51)
- Svelte (51)
- Elm (46)
- Riot (40)
- Preact (39)
- Imba (36)
- lit-html (36)
- Yew (32)
- Vue (29)
- Mithril (29)
- Marko (28)
- Alpine (28)
- React (19)
- Rax (16)
- Angular (12)
- Knockout (11)
- Ractive (8)
- Ember (6)