Starting from the evolution of HTTP, this article walks through how the protocol has changed, the problems that motivated those changes, the QUIC transport behind HTTP3, and where HTTP3 is heading.
HTTP, HTTP2, and HTTP3
Let's first review the history of HTTP and see what improvements HTTP2 and HTTP3 bring over their predecessors.
HTTP VS HTTP2
Multiplexing: with multiplexing, transferring multiple files often needs only a single TCP connection. With HTTP1, a browser opens at most six TCP connections per origin to transfer data in parallel, and further requests have to wait behind them. HTTP2 multiplexes many requests over one connection and thereby solves HTTP-level head-of-line blocking.
Header compression and Server Push: HTTP2 compresses headers via HPACK. HTTP2 is also a binary protocol, which is more efficient to parse than text-based HTTP1.x, and it adds Server Push.
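To make the header-compression idea concrete, here is a toy Python sketch. It is not real HPACK (which also has a static table and Huffman coding); it only illustrates the core trick: a header that has been sent before is replaced by a small table index instead of resending the full strings.

```python
class ToyHeaderTable:
    def __init__(self):
        self.table = []  # dynamic table of (name, value) pairs already sent

    def encode(self, headers):
        out = []
        for nv in headers:
            if nv in self.table:
                # seen before: only a small index goes on the wire
                out.append(("index", self.table.index(nv)))
            else:
                # first time: send the literal header and remember it
                self.table.append(nv)
                out.append(("literal", nv))
        return out

enc = ToyHeaderTable()
request = [(":method", "GET"), ("user-agent", "a/very/long/browser/identifier")]
print(enc.encode(request))  # first request: all literals
print(enc.encode(request))  # repeat request: only tiny indices
```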
However, HTTP2 still runs over TCP, so TCP-level problems remain. Delays such as slow start, introduced to avoid congestion at the start of a connection, are not addressed, and neither is TCP head-of-line blocking: when a packet is lost, data for every stream on the connection is held up until the lost segment is retransmitted.
So although HTTP2 improves on HTTP1, once the packet loss rate reaches about 2%, HTTP2 loses its performance advantage.
The figure above shows the cause of TCP head-of-line blocking. HTTP2 is built on TCP, and TCP itself does not solve this problem. Why? Because HTTP2 transfers all files over a single TCP connection, and TCP delivers bytes strictly in order: as soon as one packet is lost on that connection, everything behind it must wait until the lost segment is retransmitted, so all streams are blocked.
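A toy simulation may help illustrate the difference: with TCP's single ordered byte stream, one lost packet stalls everything behind it, whereas with per-stream ordering only the affected stream waits. The stream IDs and packet layout below are made up for illustration; this is not a protocol implementation.

```python
def tcp_delivery(packets, lost_index):
    """TCP hands bytes up strictly in order: everything after a lost
    packet is stuck until that packet is retransmitted."""
    delivered = []
    for seq, (stream_id, data) in enumerate(packets):
        if seq == lost_index:
            return delivered, packets[seq:]  # the rest is blocked
        delivered.append((stream_id, data))
    return delivered, []

def per_stream_delivery(packets, lost_index):
    """Per-stream ordering: only the stream that owns the lost packet stalls."""
    lost_stream = packets[lost_index][0]
    delivered, stalled = [], []
    for seq, (stream_id, data) in enumerate(packets):
        if stream_id == lost_stream and seq >= lost_index:
            stalled.append((stream_id, data))
        else:
            delivered.append((stream_id, data))
    return delivered, stalled

# three files interleaved on one connection; packet 1 (stream 2's first chunk) is lost
packets = [(1, "a1"), (2, "b1"), (1, "a2"), (3, "c1"), (2, "b2")]
print(tcp_delivery(packets, lost_index=1))        # only stream 1's first chunk gets through
print(per_stream_delivery(packets, lost_index=1)) # everything except stream 2 gets through
```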
HTTP3 VS HTTP2
HTTP3 is not an improvement on the HTTP protocol per se, but rather focuses on improving transport efficiency. Here are some of the improvements HTTP3 has made over HTTP2:
- Multiplexing with streams: HTTP3 extends HTTP2's multiplexing using QUIC streams. Roughly one stream is created per file being transferred, so when a packet carrying part of one file is lost, only the stream that file belongs to has to wait for the retransmission; the other streams keep going.
- UDP transport: HTTP3 is no longer based on TCP; it runs over QUIC on top of UDP, with transmission reliability guaranteed in user space. Compared with TCP, QUIC over UDP establishes connections faster and with lower latency.
- Connection migration: by introducing a Connection ID, HTTP3 supports connection migration and NAT rebinding (see the sketch after this list).
- Built-in security: HTTP3 has a built-in TLS security mechanism, providing authentication and encryption of the data and payload.
- Congestion control: TCP's congestion control is implemented in the kernel, while HTTP3 moves it out of the kernel into user space. The advantage is that you don't have to wait for kernel updates, which makes it easy to iterate quickly.
- Header compression: HTTP2 uses HPACK; HTTP3 replaces it with QPACK, an HPACK-compatible scheme that keeps comparable compression while adding support for out-of-order delivery.
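Regarding the connection-migration item above, here is a toy sketch (not real QUIC) of why a Connection ID lets a connection survive an address change: TCP identifies a connection by the 4-tuple, so a new client IP or port means a new connection, whereas QUIC looks the connection up by the Connection ID carried in every packet. The dictionary keys and packet fields below are purely illustrative.

```python
# TCP: connection state is keyed by the 4-tuple, so a new client address = a new connection
tcp_conns = {("1.2.3.4", 50000, "9.9.9.9", 443): "tcp session state"}

# QUIC: connection state is keyed by the Connection ID carried in every packet
quic_conns = {"c0ffee": "quic session state"}

def quic_lookup(packet):
    return quic_conns.get(packet["dcid"])

pkt_on_wifi = {"dcid": "c0ffee", "src": ("1.2.3.4", 50000)}
pkt_on_lte  = {"dcid": "c0ffee", "src": ("10.0.0.7", 41000)}  # address changed after migration

# QUIC still finds the same session after the client's address changes
assert quic_lookup(pkt_on_wifi) == quic_lookup(pkt_on_lte) == "quic session state"

# the TCP table, by contrast, has no entry for the new address tuple
print(tcp_conns.get(("10.0.0.7", 41000, "9.9.9.9", 443)))  # None
```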
Why QUIC
As the figure shows, the QUIC protocol layer implements reliable data transmission, congestion control, encryption, and multiplexed streams.
As for why QUIC uses UDP, there is one thing to understand first: frequent switching between user mode and kernel mode is inefficient. In theory, you could move application-layer work into the kernel to improve efficiency, but doing so would affect the stability of the operating system. The alternative is to move this work into user space instead. For example, with the currently popular DPDK, packets arriving at the network card bypass the kernel and are handled directly in user space by the application. Cloud DNS services have already gone through such a transformation, improving overall efficiency by 5-10x.
Now let's formally talk about why QUIC uses UDP. The main reasons are the following:
- Avoiding ossification: encrypting QUIC's payload is also a way to avoid protocol ossification. Middleboxes that handle UDP traffic only need to process it as opaque packets, without looking at the details of the inner protocol, so the protocol remains free to evolve.
- Giving up on improving TCP itself: changes to TCP have to land in operating system kernels and middleboxes across the Internet and roll out very slowly, so QUIC builds on UDP instead.
- Direction of innovation: QUIC was proposed by Google, so the innovation starts from the browser: building on UDP lets the protocol evolve from the browser and application side rather than waiting on the transport stack.
TLS 1.3 VS TLS 1.2
The main improvements of TLS 1.3 over TLS 1.2 are the points below; a general look is enough:
- TLS 1.3 uses a new set of cipher suites
- TLS 1.3 redefines the certificate types and key exchange mechanisms. You no longer need to spell them out in the cipher suite: the key exchange and certificate type are negotiated through separate parameters, from which they can be deduced.
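One way to see the difference described in this list is to inspect the cipher suites your local TLS stack exposes. The sketch below uses Python's standard ssl module; the printed names are typical for an OpenSSL 1.1.1+ build, and the actual output depends on your system.

```python
import ssl

ctx = ssl.create_default_context()
suites = ctx.get_ciphers()  # list of dicts describing every enabled cipher suite

tls13 = [c["name"] for c in suites if c["protocol"] == "TLSv1.3"]
tls12 = [c["name"] for c in suites if c["protocol"] == "TLSv1.2"]

print(tls13)      # e.g. ['TLS_AES_256_GCM_SHA384', 'TLS_CHACHA20_POLY1305_SHA256', 'TLS_AES_128_GCM_SHA256']
print(tls12[:2])  # e.g. ['ECDHE-ECDSA-AES256-GCM-SHA384', 'ECDHE-RSA-AES256-GCM-SHA384']
# Note how the TLS 1.3 names carry only the AEAD cipher and hash, while the TLS 1.2
# names also encode the key exchange (ECDHE) and certificate type (ECDSA/RSA).
```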
Problems with QUIC
Let’s talk about the current problems with QUIC.
First of all, years of performance optimization have gone into TCP, while UDP performance has been left comparatively neglected. Of course, with HTTP3 now being rolled out, I believe corresponding investment will follow.
The second problem is security, specifically reflection attacks, that is, forging the source address. The source address of the sent packet is spoofed rather than the real one, which enables amplification attacks. The reason is that the QUIC handshake is asymmetric: on the first request the client only needs to send a few bytes to the server, while the server returns much more data to the client, so this asymmetry amplifies the attack. Draft 27 defines two rules to limit reflection attacks: when the client sends the Initial packet, the first packet must be at least 1200 bytes long, with the remaining space filled with PADDING frames; and when the server is uncertain about the client's legitimacy, it can send a Retry packet asking the client to provide validation information again.
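A minimal sketch of the first rule described above, assuming nothing beyond what the text says: the client pads its Initial datagram to at least 1200 bytes with PADDING frames (which are zero bytes on the wire), and the server refuses undersized Initials, so a tiny spoofed packet cannot trigger a large response.

```python
MIN_INITIAL_SIZE = 1200  # minimum client Initial datagram size from the text above

def build_client_initial(crypto_payload: bytes) -> bytes:
    """Pad the datagram up to 1200 bytes; PADDING frames are zero bytes on the wire."""
    padding_needed = max(0, MIN_INITIAL_SIZE - len(crypto_payload))
    return crypto_payload + b"\x00" * padding_needed

def server_accepts_initial(datagram: bytes) -> bool:
    """Drop undersized Initials: a tiny spoofed packet must not trigger a big reply."""
    return len(datagram) >= MIN_INITIAL_SIZE

initial = build_client_initial(b"...ClientHello and friends...")
print(len(initial), server_accepts_initial(initial))   # 1200 True
print(server_accepts_initial(b"tiny spoofed packet"))  # False -> ignore or send Retry
```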
Open source implementation of QUIC
Next, let's briefly look at the current open source implementations:
- quiche: Cloudflare's library written in Rust, which can be called from Nginx. Google's own library is also called QUICHE and is written in C++.
- ATS: Apache Traffic Server
- Golang: quic-go, which Caddy ships with
- Python + C: aioquic
- Microsoft: msquic
These are just a few of the many open source QUIC implementations out there; I did some simple tests with quiche and aioquic.
The HTTP3-capable curl build uses Cloudflare's quiche, and the value returned is HTTP/3 200. The Alternative Service header (h3-27) advertises HTTP3 draft-27 support on UDP port 443. Alt-Svc has existed since HTTP2 and carries over to HTTP3: a browser often doesn't know whether the server supports QUIC, so it first makes a request over TCP, and once it sees that h3 is supported, it connects over UDP.
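Here is a small sketch, using only Python's standard library, of the discovery step this paragraph describes: make an ordinary TCP+TLS request and read the Alt-Svc header to learn whether the server advertises h3. The host name is just an example of an HTTP3-enabled site, and the exact header value will vary.

```python
import http.client

# Ordinary HTTPS request over TCP, the same first step a browser takes.
conn = http.client.HTTPSConnection("cloudflare-quic.com", timeout=10)
conn.request("HEAD", "/")
resp = conn.getresponse()
alt_svc = resp.getheader("alt-svc")
conn.close()

print(alt_svc)
# Something like: h3-27=":443"; ma=86400
# -> the server advertises HTTP/3 draft-27 on UDP port 443, so a follow-up
#    connection can be attempted over QUIC instead of TCP.
```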
Note that with this curl you still can't open Google over HTTP3.
How to deploy an HTTP3 implementation based on QUIC
There are two main ways to deploy it: one is via a proxy, the other is through Nginx.
Tencent takes the route of integrating it into Nginx. Also, since QUIC encrypts the headers of each request, Tencent runs a separate hardware encryption cluster, so if you use Tencent's service, all of your encryption and decryption is accelerated by their hardware.
If all subsequent requests go through HTTP3, you need to plan for the increase in CPU usage. Of course, just like the DPDK approach mentioned above, offloading the data to an FPGA for encryption and decryption is a solution worth considering.
As for the proxy approach, there is no official release at the moment. You can do the integration on the outside, the way Cloudflare does, and then connect it to Nginx on the inside.
What we have in front at present are the LB and Marco layers; the LB only handles TCP/UDP and does not see HTTP. In other words, we built a load-balancing cluster through which traffic reaches Nginx. UDP traffic effectively passes through layer-4 load balancing, so you can try to pull the QUIC layer out and have the proxy do the encryption and decryption, thus improving efficiency.