
  • HTTP/3 is Fast
  • Request Metrics
  • The Nuggets translation Project
  • Permanent link to this article: github.com/xitu/gold-m…
  • Translator: jaredliw
  • Proofreader: luochen1992, finalwhy

HTTP/3 is here, and it’s a big deal for Web performance. Let’s see how much it speeds up the site!

Wait, isn’t HTTP/2 bad? Hasn’t it been very popular in recent years? It is, but it still has some problems. To address these issues, the new version of the protocol is working towards “standards Track” (one of the RFC’s categories).

Hmm, but does HTTP/3 really make the web faster? It certainly does, and we’ll prove it with benchmarks.

Preview

Before we dive into the details, let’s take a quick look at the benchmark results. In the chart below, we request the same site on the same network using the same browser; the only difference is the version of the HTTP protocol. Each site was requested 20 times, and the response time was measured using the browser’s Performance API. (More details on the benchmark setup below.)
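To make the methodology concrete, here is a minimal Python sketch of how each protocol’s per-run response times could be summarized. The sample values are invented for illustration (and only a handful of them, not the full 20 runs); in the real test, the raw numbers come from the browser’s Performance API (e.g. performance.getEntriesByType("navigation")).

```python
from statistics import mean, median

def summarize(times_ms):
    """Summarize one protocol's response-time samples (milliseconds)."""
    return {
        "median": median(times_ms),
        "mean": round(mean(times_ms), 1),
        "min": min(times_ms),
        "max": max(times_ms),
    }

# Hypothetical samples for one site; real values would come from the
# browser's Performance API, one measurement per automated page load.
http2 = [340, 310, 355, 325, 330]
http3 = [240, 225, 250, 235, 230]
```

Comparing medians rather than means keeps a single outlier run from skewing the comparison.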

You can clearly see the performance improvement of each new version of the HTTP protocol over HTTP/1.1.

These differences become even more pronounced over greater geographic distances or less reliable networks.

Before we dive into the details of HTTP/3 benchmarks, we need to know some background.

A brief history of HTTP

The first official version of HTTP (Hypertext Transfer Protocol 1.0) was finalized in 1996. However, practical issues and needed clarifications quickly surfaced, so HTTP/1.1 was released a year later, in 1997. As the authors put it:

However, HTTP/1.0 does not sufficiently take into consideration the effects of hierarchical proxies, caching, the need for persistent connections, and virtual hosts. In addition, the proliferation of incompletely-implemented applications calling themselves “HTTP/1.0” has necessitated a protocol version change in order for two communicating applications to determine each other’s true capabilities.

Eighteen years passed before the next version of HTTP arrived. In 2015, RFC 7540 was published with much fanfare, standardizing HTTP/2 as the next major version of the protocol.

One connection, one file

If a web page requires 10 JavaScript files, the browser needs to retrieve all 10 of them to finish loading. In the days of HTTP/1.1, only one file could be downloaded at a time over a TCP connection to the server. That means the files download in sequence, and a delay in any one file blocks everything behind it. This phenomenon is called head-of-line blocking, and it is bad for page performance.

To work around this, browsers open multiple TCP connections to retrieve resources in parallel. But this approach is resource-intensive: every new TCP connection consumes resources on both the client and the server, and once you add TLS, each connection also pays the cost of its own TLS handshake. A better solution was needed.
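A toy model makes the trade-off visible. The sketch below is not from the original benchmark; the file sizes are invented, and it ignores bandwidth sharing and connection setup costs. It compares the total time to fetch 10 files sequentially over one connection versus greedily over six parallel connections:

```python
import heapq

# Hypothetical per-file transfer times, in arbitrary units.
FILES = [5, 3, 8, 2, 7, 4, 6, 1, 9, 5]

def sequential(files):
    """HTTP/1.1 on one connection: files download one after another."""
    return sum(files)

def parallel(files, connections=6):
    """Multiple connections: assign each file to the connection that
    frees up first (min-heap of per-connection busy times)."""
    heap = [0] * connections
    heapq.heapify(heap)
    for f in files:
        heapq.heappush(heap, heapq.heappop(heap) + f)
    return max(heap)  # total time = when the busiest connection finishes
```

With these numbers, the sequential transfer takes 50 units while six parallel connections finish in 13, which is exactly why browsers opened extra connections despite the per-connection overhead.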

Multiplexing in HTTP/2

The headline feature of HTTP/2 is its multiplexing mechanism. By framing data in binary streams, HTTP/2 makes it possible to multiplex file downloads, solving head-of-line blocking at the application layer. The client can request all 10 files at once and download them in parallel over a single TCP connection.

Unfortunately, HTTP/2 communication still suffers from head-of-line blocking; the source has simply moved one layer down, where TCP becomes the weakest link in the chain. Any flow that loses a packet must wait for the retransmission before it can proceed.

However, because TCP’s loss-recovery mechanism cannot see the parallel nature of HTTP/2 multiplexing, a single lost or out-of-order packet stalls all active transactions, whether or not they were directly affected by the loss.

In fact, HTTP/1.1 can actually perform better in high-packet-loss environments, precisely because it spreads traffic over multiple parallel TCP connections while HTTP/2 puts everything on one!

True multiplexing in QUIC and HTTP/3

Now, HTTP/3. The main difference between HTTP/2 and HTTP/3 is the transport protocol. Instead of TCP, HTTP/3 uses an entirely new protocol, QUIC. QUIC is a general-purpose transport protocol that solves HTTP/2’s TCP head-of-line blocking problem by providing a set of stateful streams over UDP (much like TCP connections).

The QUIC transport protocol includes stream multiplexing and per-stream flow control, similar to what HTTP/2 implements at the application layer. By providing reliability at the stream level and congestion control across the entire connection, QUIC can improve the performance of HTTP compared to a TCP mapping.
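The difference between connection-level and stream-level ordering can be sketched with a toy delivery model (entirely illustrative; real TCP and QUIC loss recovery are far more involved). One packet is lost and retransmitted later; under TCP, every packet behind the hole waits for it to fill, while under QUIC only the lost packet’s own stream waits:

```python
def delivery_times(packets, lost, retx):
    """packets: list of stream ids, packet i normally arrives at time i.
    Packet `lost` is delayed until time `retx`. Returns the time each
    packet is delivered to the application under TCP vs. QUIC."""
    tcp, quic = {}, {}
    for i, stream in enumerate(packets):
        arrival = retx if i == lost else i
        # TCP: connection-wide in-order delivery, so every packet at or
        # after the hole waits for the retransmission.
        tcp[i] = arrival if i < lost else max(arrival, retx)
        # QUIC: in-order delivery per stream, so a packet waits only for
        # earlier packets of its own stream.
        earlier_same = [quic[j] for j in range(i) if packets[j] == stream]
        quic[i] = max([arrival] + earlier_same)
    return tcp, quic
```

For packets ["a", "b", "a", "b", "b"] with packet 2 (stream "a") retransmitted at time 10, TCP delivers packets 2, 3, and 4 all at time 10, while QUIC delivers stream "b"'s packets on schedule and delays only stream "a".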

If you don’t care how the test works, skip to the results below!

Benchmarks for HTTP/3

To understand how HTTP/3 makes a difference in performance, we need to set up a benchmark environment.

HTML

To better match real-world usage, the test covers three scenarios: a small site, a content site (lots of images and some JavaScript), and a single-page application (lots of JavaScript). I looked at several real sites, calculated the average number of images and JavaScript files each one uses, and built demo sites matching those resource counts (and sizes).

  • Small site
    • 10 JavaScript files from 2 kB to 100 kB;
    • 10 images from 1 kB to 50 kB;
    • total payload 600 kB, with 20 blocking resources in total.
  • Content site
    • 50 JavaScript files from 2 kB to 1 MB;
    • 55 images from 1 kB to 1 MB;
    • total payload 10 MB, with 105 blocking resources in total (check out cnn.com in developer tools to see why this is so large).
  • Single-page application
    • 85 JavaScript files from 2 kB to 1 MB;
    • 30 images from 1 kB to 50 kB;
    • total payload 15 MB, with 115 blocking resources in total (check out JIRA in developer tools).

The server

Caddy served as the web server for this test, delivering the HTML and all resources.

  • All responses are served with Cache-Control: "no-store" to ensure the browser re-downloads every resource on each run;
  • HTTP/1.1 and HTTP/2 use TLS 1.2;
  • HTTP/3 uses TLS 1.3;
  • 0-RTT is enabled for all HTTP/3 connections.
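For reference, a Caddyfile along these lines would reproduce the setup above. This is a hypothetical sketch (the site address and paths are invented), and in recent Caddy releases HTTP/3 is enabled by default; the global protocols option just makes it explicit:

```
# Hypothetical Caddyfile for the benchmark server.
{
	servers {
		protocols h1 h2 h3
	}
}

benchmark.example.com {
	root * /srv/demo-site
	file_server
	header Cache-Control "no-store"
}
```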

Geographic locations

The tests were run from my computer in Minnesota against three separate data centers (hosted by DigitalOcean):

  • New York, USA
  • London, England
  • Bangalore, India

The client

The browser requested the same page 20 times in a row, three seconds apart, in a fully automated process. The network is rated at 200 Mbps, and no other applications were running on the computer during data collection.

How fast is HTTP/3?

New York, USA

Here are the HTTP/2 and HTTP/3 response times for the three sites when requested from the New York data center:

Compared with HTTP/2, HTTP/3 was:

  • 200 milliseconds faster for the small site
  • 325 milliseconds faster for the content site
  • 300 milliseconds faster for the single-page app

Minnesota is about 1,000 miles from New York, which is not a big distance for an Internet connection. What is notable is that HTTP/3 delivers this much improvement even over a relatively short distance.

London, England

In this test, I also included HTTP/1.1 benchmarks. To show how much faster HTTP/2 and HTTP/3 are, I kept the axis scales the same in the chart below. You can see how slow HTTP/1.1 is for the content site; so slow that it barely fits on the chart!

As you can see, the speed-up is even more pronounced when the client and server are farther apart.

  • 600 milliseconds faster for the small site (three times the improvement seen from New York)
  • 1,200 milliseconds faster for the content site (3.5 times the New York improvement)
  • 1,000 milliseconds faster for the single-page app (three times the New York improvement)

Bangalore, India

The improvement from HTTP/3 is most noticeable when loading pages from India. I didn’t even benchmark HTTP/1.1 here because it was simply too slow. Here is HTTP/2 compared with HTTP/3:

HTTP/3 extends its lead as requests cross greater geographic distances and more network hops. Even more notable is how tightly clustered HTTP/3’s response times are. When packets travel thousands of miles, QUIC really earns its keep.

In every case, HTTP/3 is faster than its predecessors!

Why is HTTP/3 so fast?

True multiplexing

The true multiplexing nature of HTTP/3 means that head-of-line blocking occurs nowhere in the stack. When you request resources from farther away, packet loss is more likely and retransmissions become more frequent; with QUIC, a retransmission delays only the stream it belongs to.

Game-changing 0-RTT

HTTP/3 also supports 0-RTT QUIC connections, which reduce the number of round trips needed to establish a secure TLS connection.

The 0-RTT feature in QUIC allows a client to send application data before the handshake completes. It works by reusing parameters from a previous connection: the client remembers the important parameters and supplies the server with a TLS session ticket that lets it recover the same information.
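Some back-of-the-envelope arithmetic shows why this matters. The round-trip counts below are the standard ones for each handshake; the 250 ms round-trip time is an invented figure, roughly what a long intercontinental path might see:

```python
# Round trips spent on handshakes before the first HTTP request can be
# sent on a new connection. (0-RTT assumes a resumed session.)
RTTS_BEFORE_REQUEST = {
    "TCP + TLS 1.2": 1 + 2,  # TCP handshake, then two TLS round trips
    "TCP + TLS 1.3": 1 + 1,  # TCP handshake, then one TLS round trip
    "QUIC (1-RTT)": 1,       # transport and TLS 1.3 handshakes combined
    "QUIC (0-RTT)": 0,       # the request rides along in the first flight
}

def handshake_delay_ms(protocol, rtt_ms):
    """Time spent handshaking before the request leaves, for one RTT."""
    return RTTS_BEFORE_REQUEST[protocol] * rtt_ms
```

On a 250 ms round trip, resumed 0-RTT QUIC would save 750 ms compared with TCP + TLS 1.2 before any page data even flows.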

However, you should not enable 0-RTT blindly; depending on your threat model, it may introduce security issues.

0-RTT data has weaker security properties than other kinds of TLS data. Specifically:

  1. The data is not forward secret; it is encrypted solely under keys derived from the pre-shared key (PSK).
  2. There are no guarantees of non-replay between connections.

Can I use HTTP/3 now?

Maybe. Although the protocol is still an Internet-Draft, many implementations are already available.

For these benchmarks, I deliberately chose Caddy: enabling HTTP/3 only required a small change to the Caddyfile.

Nginx also has experimental HTTP/3 support, with an official release planned for the near future.

Tech giants like Google and Facebook are already serving traffic over HTTP/3. In modern browsers, google.com is served entirely over HTTP/3.

For those “stuck” in the Microsoft ecosystem, Windows Server 2022 will reportedly support HTTP/3, but you’ll need to perform some “esoteric” steps to enable it.

Conclusion

HTTP/3 can deliver a significant improvement in user experience. In general, the more resources a site requires, the bigger the performance gain from HTTP/3 and QUIC. As the standard approaches finalization, it may be time to consider enabling HTTP/3 for your site.
