• Benchmarks for the Top Server-Side Swift Frameworks vs. Node.js
  • The Nuggets Translation Project
  • Translator: Tuccuay
  • Proofreaders: Eel fish, Nicolas (Yifei) Li

Preface

Recently when I was working on server-side Swift, I was asked the following question:

“Can Swift beat Node.js on the server?”

Swift is a versatile language that can be used for just about anything, including servers, and it has fascinated me ever since it was first open-sourced and ported to Linux. I’m sure many of you are as curious as I am, so I’d love to share what I’ve learned.

The most popular server-side Swift frameworks

As of this writing, the most popular server-side Swift frameworks, ordered by number of stars on GitHub, are as follows:

  • Perfect ⭐️ 7956
  • Vapor ⭐️ 5183
  • Kitura ⭐️ 4017
  • Zewo ⭐️ 1186

Structure of this article

This article is organized as follows:

  • A quick overview
  • Summary of results
  • Methodology
  • Detailed results
  • Conclusions and notes

Summary of results

Here is a summary of the main test results, which can be stated in one line:

Whatever the individual scores, every one of these frameworks performs impressively well.

Methodological notes

Why a blog and JSON?

A blog makes a better benchmark than printing “Hello, World!” to the screen. While that kind of demo is common, JSON is also a very common real-world use case. A good benchmark considers how each framework performs under a comparable, realistic load, which stresses it far more than printing two words to the screen.

Keeping things identical

In each test project I tried to keep the blogs as similar as possible while still fitting the syntax and style of each framework. To produce the same content verbatim across the different frameworks, each one works with an identical data model, though aspects such as URL routing vary to suit each framework’s conventions.

Some minor differences

There are some minor differences between the Swift server frameworks that are worth noting directly.

  • Kitura and Zewo have problems building when the project’s absolute path contains whitespace, and both also have problems building in Xcode.

  • Zewo uses the Swift snapshot version 05-09-A, which has problems building in release mode, so it runs in debug mode. Because of this, all Zewo tests were run in debug mode (which excludes release optimizations).

  • Static file serving is not a primary focus of every server-side Swift framework. Vapor and Zewo both recommend putting Nginx in front as a proxy for static files, with the framework as the back end. Perfect recommends its built-in handlers, and I haven’t seen guidance from IBM (Kitura) on this. Since this study was not designed to explore how the frameworks pair with servers such as Nginx, static files are served by each framework itself. You may want to keep this in mind when choosing Vapor or Zewo for performance reasons, and it is one reason why I included the JSON tests.

  • [Updated September 1] Zewo is a single-threaded application; you can get an additional performance boost by running one instance per CPU, since the instances run concurrently rather than Zewo itself being multi-threaded. In this study, only one instance of each application was run.

  • Toolchains. Each framework targets a different snapshot from the toolchains released by Apple. The versions tested at the time of writing were:

    • DEVELOPMENT-SNAPSHOT-2016-08-24-a for Perfect
    • DEVELOPMENT-SNAPSHOT-2016-07-25-a for Vapor & Kitura
    • DEVELOPMENT-SNAPSHOT-2016-05-09-a for Zewo
  • Vapor has special syntax for running in release mode. If you simply execute the built binary, you get some logging in the console intended to help with development and debugging, which introduces extra performance overhead. To make Vapor run in release mode you need to add --env=production, for example:

    .build/release/App --env=production
  • With Zewo, even though release mode does not build on the 05-09-A toolchain, you can still enable release optimizations by building with:

    swift build -Xswiftc -O
  • Node.js/Express has no comparable build step, since it does not distinguish debug from release.

  • Vapor includes static file handling in its default middleware. If you are not serving static files and want maximum speed, you can drop the default middleware by including code like this (as I did in VaporJSON):

    drop.middleware = []

Why Node.js/Express?

I decided to include Node.js with Express in the tests as a control, because it has a very similar syntax to the Swift server frameworks and is widely used. It helps establish a baseline showing just how impressive Swift can be.

Development of the blogs

At times this felt like what I’d call “chasing pinball.” The Swift server frameworks are under very active development, and each Swift 3 preview brought a ton of changes relative to the last, so Apple’s Swift team effectively forced every server-side Swift framework to cut frequent new releases. Documentation is sparse, so I am very grateful to the framework team members and the broader server-side Swift community for helping me along the way. It was a lot of fun, and I was happy to do it.

One additional note, even though no license notice is required: all of the image assets included in the source are license-free images from Pixabay, which was helpful for creating a realistic sample program.

Environment and variables

To minimize the impact of different environments, I used a 2012 Mac mini with a fresh install of El Capitan (10.11.6), then downloaded Xcode 8 beta 6 and set the command line tools to Xcode 8. I then used swiftenv to install the necessary snapshots, cloned the repositories, and did a clean compile of each blog in release mode, never running two tests at the same time. The test server’s specs are as follows:

For development I used my 2015 rMBP, and the build tests ran there, which makes sense since it is my real-world development device. I used wrk to generate the scores, connecting the two machines with a Thunderbolt 2 cable, because a Thunderbolt bridge has enormous bandwidth, so a router is never the limitation. Generating load from a separate machine while the blog ran on the dedicated server makes the tests more reliable and provides a consistent environment, so every blog ran under the same hardware and conditions. To satisfy some curiosity, the specs of my development machine are as follows:

Benchmark test

In my tests, I decided to have wrk use four threads to generate 20 connections for 10 minutes per run. Ten minutes is long enough to gather a large amount of data, while four threads and 20 connections produce a heavy but realistic load without overburdening the blogs to the point of dropped connections.
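As a concrete sketch, the load pattern above can be wrapped in a small shell function. The address is the Thunderbolt bridge’s link-local IP that appears in the commands later in this article; the port and route arguments are illustrative, since each framework listens on its own port:

```shell
# Minimal sketch of the benchmark invocation described above:
# 4 wrk threads, 20 connections, 10 minutes per run.
# The port differs per framework, so it is passed as an argument.
bench() {
    port=$1
    route=$2
    wrk -d 10m -t 4 -c 20 "http://169.254.237.101:${port}/${route}"
}
```

For example, `bench 8080 blog` would run the blog benchmark against a framework listening on port 8080 (the port here is a hypothetical placeholder).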

The source code

If you’d like to explore the project’s source code or run any tests of your own, I’ve assembled all of the test code into a repository, which you can find here:

Github.com/rymcol/Serv…

Detailed results

Build time

I think you might want to look at build times first. Build time is a big part of day-to-day development, and it counts as part of a framework’s performance, so I wanted to explore real numbers and get a sense of duration.

How to run

For each framework, first run

    swift build --clean=dist

then

    time swift build

After that, for a second measurement, run

    swift build --clean

and finally

    time swift build

Both builds use SPM (the Swift Package Manager) to manage dependencies: the first is a full clean build that re-downloads dependencies, and the second is a regular clean build with dependencies already downloaded.
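The sequence above can be sketched as a small script. This is a hedged sketch: the --clean flags match the 2016-era SPM used in the article (modern SPM uses `swift package clean` / `swift package reset` instead), and the commands are kept in variables so the sequence can be exercised without a Swift toolchain installed:

```shell
# Sketch of the two-phase timed build described above.
# BUILD / CLEAN_DIST / CLEAN default to the 2016-era SPM commands
# from the article, but can be overridden.
BUILD=${BUILD:-"swift build"}
CLEAN_DIST=${CLEAN_DIST:-"swift build --clean=dist"}
CLEAN=${CLEAN:-"swift build --clean"}

timed_builds() {
    echo "== full clean build (dependencies re-fetched) =="
    $CLEAN_DIST
    time $BUILD

    echo "== regular clean build (dependencies cached) =="
    $CLEAN
    time $BUILD
}
```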

How does it work

These builds ran on my 2015 rMBP in debug mode, as that is the normal process when developing applications in Swift.

Build time results


Memory usage

My second concern was the amount of memory each framework used while running.

How to run

Step 1: Measure the idle footprint (simply start the process)

Step 2: Measure the peak memory footprint under load from my test server with

    wrk -d 1m -t 4 -c 10

Step 3: Measure the footprint a second time under heavier load with

    wrk -d 1m -t 8 -c 100
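These three steps can be sketched as one shell function. This is a hypothetical sketch: the binary path and URL are placeholders, and where the article read peak usage off Activity Monitor, `ps -o rss=` is used here to show one command-line memory sample instead:

```shell
# Sketch of the three-step memory measurement above. "app" is the path
# to a framework's release binary and "url" its address; both are
# placeholders, not values from the article.
measure_memory() {
    app=$1
    url=$2
    "$app" &                       # step 1: launch; note the idle footprint
    pid=$!
    wrk -d 1m -t 4 -c 10 "$url"    # step 2: footprint under light load
    wrk -d 1m -t 8 -c 100 "$url"   # step 3: footprint under heavy load
    ps -o rss= -p "$pid"           # one memory sample (resident set, KB)
    kill "$pid" 2>/dev/null
}
```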

How does it work

The test ran on the clean, dedicated Mac mini test server, with each framework in release mode where possible, reflecting how each framework behaves in deployment. Only one framework ran on the command line at a time, and each was restarted before the next test. The only window open during testing was Activity Monitor, which I used to watch memory usage, simply noting the peak as each framework ran.

Memory usage results


Thread usage

The third thing I looked at was each framework’s thread usage under load.

How to run

Step 1: Measure the idle footprint (simply start the process)

Step 2: Generate load from my test server with the following command:

    wrk -d 1m -t 4 -c 10

How does it work

This ran on the clean, dedicated Mac mini test server, with each framework in release mode where possible. Only one framework ran on the command line at a time, and each was restarted before the next test. The only window open during testing was Activity Monitor, which I used to watch thread counts, simply noting the peak as each framework ran.

A note on these results

There is no “winning” category here. Different applications manage threads differently, and these frameworks are no exception. Zewo, for example, is single-threaded and will never use more than one thread (unless you deliberately run one instance per CPU). Perfect uses every available CPU, while Vapor uses a one-thread-per-CPU model. The goal of this chart is simply to make the peak thread load easy to see.

Thread usage results


Blog test

The first benchmark hits the /blog route, a mock blog post page that returns five random images per request.

How to run

wrk -d 10m -t 4 -c 20 http://169.254.237.101:(PORT)/blog

This was run against each blog over the Thunderbolt bridge from my rMBP.

How does it work

As in the memory tests, each framework ran in release mode and was restarted before each test. Only one framework ran on the server at a time, and all other activity was kept to a minimum to keep the environment as consistent as possible.

Results


JSON test

Since every framework has its own way of handling static files, it seemed fairer to run the same comparison against a simple endpoint, so I added a /json route to each application that returns a random number between 0 and 1000. This test is done separately to ensure that static file handlers and middleware do not affect the results.

How to run

wrk -d 10m -t 4 -c 20 http://169.254.237.101:(PORT)/json

This was run against each JSON project.

How does it work

As in the other tests, each framework ran in release mode and was restarted before each test. Only one framework ran on the server at a time, and all other activity was kept to a minimum to keep the environment as consistent as possible.

Results

Conclusion

The answer to my question is an overwhelming yes. Not only does Swift work as a server framework, but all of the Swift server frameworks perform incredibly well, with Node.js landing in the bottom two in every test.

Server-side Swift can also save you a lot of time, because it can share a common code base with your other Swift applications. And as the results here show, the server-side Swift frameworks are very strong competitors in this space. I personally plan to use Swift as much as possible, especially on the server, and I can’t wait to see the amazing projects the community produces.

Get involved

If you are interested in server-side Swift, now is a great time to get involved! These frameworks still have plenty of work to do, documentation in particular, and there are some really cool examples out there (both open and closed source). You can learn more here:

  • Perfect: Website | Github | Slack | Gitter
  • Vapor: Website | Github | Slack
  • Kitura: Website | Github | Gitter
  • Zewo: Website | Github | Slack

Keep in touch

If you have any questions, just get in touch with me on Twitter @rymcol.

Full disclosure (this section was added on September 1, 2016, after some data was corrected and the Zewo builds were re-run with release optimizations): a grant from PerfectlySoft provided the impetus for this research. I am also a contributor to the Perfect & Vapor projects on GitHub, but I am not an employee of either, and my opinions do not represent their views. I have tried to be absolutely fair, since I develop on all four platforms, and all of the code used in this research is open, so you can always inspect the tests or repeat them yourself.