
Compared with HTTP/1.1, HTTP/2 greatly improves web page performance; simply upgrading to it eliminates much of the optimization work that used to be necessary. But HTTP/2 is not perfect, and HTTP/3 was introduced to solve some of its remaining problems.

The defects of HTTP/1.1

  1. High latency — head-of-line blocking
  2. Statelessness — large, repetitive headers
  3. Plaintext transmission — insecure
  4. No support for server push

1. High latency — slows down page loading

Although network bandwidth has increased rapidly in recent years, we have not seen a corresponding reduction in network latency. The latency problem is mainly due to head-of-line blocking, which prevents bandwidth from being fully utilized.

Head-of-line blocking means that when one request in a sequence of ordered requests is blocked for some reason, all subsequent requests are blocked as well, delaying the client's receipt of data. The following workarounds have been tried:

  • Spread a page's resources across different domain names to raise the connection cap (domain sharding). By default, Chrome allows up to six simultaneous persistent TCP connections per domain name. A persistent connection lets multiple requests share one TCP pipe, but the pipe can process only one request at a time; other requests are blocked until the current one completes. Moreover, if ten requests arrive for the same domain name, six go out at once and the other four are queued until a connection frees up.
  • Merge small files to reduce the number of resources, e.g. CSS sprites: combine several small images into one larger image, then "crop" out the individual images with JavaScript or CSS.
  • Inline resources. Another technique to avoid sending many small image requests is to embed the raw image data as a data URL inside the CSS file, reducing the number of network requests (see the sketch after this list).
  • Concatenate files to reduce the number of requests. Tools such as Webpack bundle several small JavaScript files into one larger file, but if any one of them changes, the whole bundle has to be re-downloaded.
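To make the inlining idea concrete, here is a minimal Node.js sketch (TypeScript) that base64-encodes an image into a data URL inside a CSS rule; the file name logo.png and the class name are placeholder assumptions:

```ts
// inline.ts: a minimal sketch of "inlining resources"
// (the file name logo.png and the CSS class are placeholder assumptions)
import { readFileSync } from "node:fs";

// Base64-encode the image so the browser needs no separate request for it.
const image = readFileSync("logo.png");
const dataUrl = `data:image/png;base64,${image.toString("base64")}`;

// Emit a CSS rule that carries the image inside the stylesheet itself.
console.log(`.logo { background-image: url("${dataUrl}"); }`);
```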

2. Statelessness — large HTTP headers

Statelessness means the protocol has no memory of connection state. Bare HTTP provides no mechanism such as cookies; every connection is a brand-new one.

A message's header usually carries many fixed fields such as User-Agent, Cookie, Accept, and Server, which can add up to hundreds or even thousands of bytes, while the body is often only tens of bytes (think of GET requests or 204/301/304 responses): the message is all head and little body. Carrying so much in the header increases the cost of transmission, and worse, many field values are repeated across requests and responses, which is very wasteful.
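A rough, illustrative measurement of that overhead (the header values below are made up, but typical in shape):

```ts
// headerSize.ts: an illustrative measurement of HTTP/1.1 header overhead
// (all header values here are invented for the sketch)
const headers =
  [
    "GET /api/user HTTP/1.1",
    "Host: example.com",
    "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Encoding: gzip, deflate, br",
    "Cookie: sessionid=abc123; theme=dark; tracking=xyz789",
  ].join("\r\n") + "\r\n\r\n";

const body = ""; // a GET request carries no body at all

console.log(`header bytes: ${Buffer.byteLength(headers)}`); // a few hundred bytes
console.log(`body bytes:   ${Buffer.byteLength(body)}`);    // zero
```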

3. Plaintext transmission — insecure

When HTTP/1.1 transmits data, everything is sent in plaintext. Neither the client nor the server can verify the identity of the other, so data security cannot be guaranteed.

4. No support for server push

Introduction to the SPDY protocol and HTTP/2

1. SPDY protocol

As mentioned above, because of HTTP/1.x's defects, we introduced workarounds such as sprite sheets, inlined images, and multiple domain names to improve performance. These optimizations all sidestepped the protocol itself, until 2009, when Google unveiled its own SPDY protocol to address HTTP/1.1's inefficiency. With SPDY, Google essentially reinvented HTTP itself: reduced latency, compressed headers, and so on. SPDY demonstrated the effectiveness of these optimizations and eventually led to HTTP/2.

HTTP/1.1 has two major drawbacks: insufficient security and low performance. Because of HTTP/1.x's enormous historical baggage, compatibility was the first priority for any protocol change; otherwise countless existing assets on the Internet would break. SPDY sits below HTTP and above TCP and SSL, so it can easily accommodate older versions of HTTP (encapsulating HTTP/1.x content in a new frame format) while reusing existing SSL capabilities.

Once SPDY proved itself in Chrome, it became the basis of HTTP/2, which inherited its main features.

2. Introduction to HTTP/2

In 2015, HTTP/2 was released. HTTP/2 is a replacement for the current HTTP protocol (HTTP/1.x), but not a rewrite: HTTP methods, status codes, and semantics remain the same. HTTP/2 is based on SPDY and focuses on performance; one of its biggest goals is to use only one connection between the user and the site. Most top sites at home and abroad have already deployed HTTP/2, and using it can bring a 20%~60% efficiency improvement.

HTTP/2 consists of two specifications:

  1. Hypertext Transfer Protocol version 2 – RFC 7540
  2. HPACK: Header Compression for HTTP/2 – RFC 7541

New features of HTTP/2

1. Binary transmission

HTTP/2 reduces the volume of transmitted data mainly for two reasons: binary transmission and header compression. Let's start with binary transmission. HTTP/2 transmits data in binary format rather than the plain-text messages of HTTP/1.x; binary protocols are more efficient to parse. HTTP/2 splits request and response data into smaller frames, each encoded in binary.

HTTP/2 moves some TCP-like features up to the application layer: it "shreds" the original Header+Body message into small binary frames, using HEADERS frames to carry header data and DATA frames to carry entity data. Once HTTP/2 data is divided into frames, the Header+Body message structure disappears completely; the protocol sees only "fragments".

In HTTP/2, all communication with a given domain name is done over a single connection that can carry any number of bidirectional streams. Each stream carries a message, which in turn consists of one or more frames. Frames can be sent out of order and are reassembled according to the stream identifier in each frame's header.
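As a concrete illustration, this sketch parses the fixed 9-byte frame header defined by RFC 7540: a 24-bit payload length, an 8-bit type, 8-bit flags, and a 31-bit stream identifier (the sample bytes are hand-constructed for the example):

```ts
// frame.ts: a sketch of parsing the 9-byte HTTP/2 frame header (RFC 7540 §4.1)

interface FrameHeader {
  length: number;   // 24-bit payload length
  type: number;     // 8-bit frame type (e.g. 0x0 = DATA, 0x1 = HEADERS)
  flags: number;    // 8-bit flags
  streamId: number; // 31-bit stream identifier (the high bit is reserved)
}

function parseFrameHeader(buf: Buffer): FrameHeader {
  return {
    length: buf.readUIntBE(0, 3),               // bytes 0-2: payload length
    type: buf.readUInt8(3),                     // byte 3: frame type
    flags: buf.readUInt8(4),                    // byte 4: flags
    streamId: buf.readUInt32BE(5) & 0x7fffffff, // bytes 5-8: mask the reserved bit
  };
}

// A HEADERS frame header: 16-byte payload, type 0x1, END_HEADERS flag, stream 1.
const raw = Buffer.from([0x00, 0x00, 0x10, 0x01, 0x04, 0x00, 0x00, 0x00, 0x01]);
console.log(parseFrameHeader(raw)); // { length: 16, type: 1, flags: 4, streamId: 1 }
```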

2. Header compression

HTTP/2 does not use a traditional compression algorithm. Instead it developed the dedicated HPACK algorithm, which builds a "dictionary" on both the client and the server, represents repeated strings with index numbers, and compresses integers and strings with Huffman coding, achieving compression rates as high as 50%~90%.

To be specific:

  • Both the client and the server use a "header table" to track and store previously sent key-value pairs; identical data no longer has to be sent with every request and response.
  • The header table persists for the entire lifetime of the HTTP/2 connection and is updated incrementally by both the client and the server.
  • Each new header key-value pair is either appended to the end of the table or replaces an earlier value in the table.

For example, across two consecutive requests, the first sends all header fields while the second sends only the fields that differ, which reduces redundant data and overhead.
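The following toy sketch illustrates the indexing idea only; real HPACK also has a static table, entry eviction, and Huffman coding:

```ts
// hpackSketch.ts: a toy illustration of HPACK-style indexing
// (not the real HPACK encoder; for illustration only)

class HeaderTable {
  private entries: [string, string][] = [];

  // Encode one header: emit an index if we've seen it, else the literal pair.
  encode(name: string, value: string): string {
    const idx = this.entries.findIndex(([n, v]) => n === name && v === value);
    if (idx !== -1) return `index:${idx}`; // repeat: just a few bits on the wire
    this.entries.push([name, value]);      // first occurrence: remember it
    return `literal:${name}=${value}`;     // full bytes on the wire
  }
}

const table = new HeaderTable();
// Request 1: everything is a literal.
console.log(table.encode("user-agent", "Mozilla/5.0"));  // literal:...
console.log(table.encode("cookie", "sessionid=abc123")); // literal:...
// Request 2: identical headers shrink to tiny index references.
console.log(table.encode("user-agent", "Mozilla/5.0"));  // index:0
console.log(table.encode("cookie", "sessionid=abc123")); // index:1
```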

3. Multiplexing

HTTP/2 introduced multiplexing, which sidesteps the browser's limit on the number of requests to the same domain name. It also makes full-speed transmission easier to reach, since every new TCP connection has to ramp its transmission rate up slowly (TCP slow start).


Thanks to binary framing, HTTP/2 no longer needs multiple TCP connections to achieve multi-stream parallelism. In HTTP/2:

  • All communication with the domain name is done on a single connection.
  • A single connection can carry any number of bidirectional data streams.
  • Each stream is sent as a message consisting of one or more frames; frames can be sent out of order, because they are reassembled based on the stream identifier in each frame's header.

This feature provides a significant performance boost:

  • A domain name needs only one TCP connection. Sending multiple requests and responses in parallel over a single connection means the entire page load incurs only one slow start, and avoids having multiple TCP connections compete for bandwidth.
  • Multiple requests/responses are sent in parallel and interleaved. The requests/responses do not affect each other.
  • In HTTP/2, each stream can carry priority information: a dependency on another stream plus a weight. Using these priorities, the client and server can apply different policies to different streams and send streams, messages, and frames in the optimal order.

Multiplexing thus allows all request data to be transmitted over a single TCP connection.
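A small client-side sketch using Node's built-in http2 module; the server address https://localhost:8443 and the three paths are assumptions:

```ts
// multiplex.ts: a sketch of several parallel requests over one HTTP/2 session
// (assumes an HTTP/2 server is reachable at https://localhost:8443)
import http2 from "node:http2";

const session = http2.connect("https://localhost:8443", {
  rejectUnauthorized: false, // accept a self-signed cert in this local sketch
});

let pending = 3;
// Each request below is an independent stream inside the same TCP connection.
for (const path of ["/", "/style.css", "/app.js"]) {
  const req = session.request({ ":path": path });
  let bytes = 0;
  req.on("data", (chunk) => (bytes += chunk.length));
  req.on("end", () => {
    console.log(`${path}: ${bytes} bytes`);
    if (--pending === 0) session.close(); // all three streams are done
  });
}
```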

4. Server push

HTTP/2 also changes the traditional "request-response" model to some extent. The server no longer responds only passively; it can also create new streams and proactively send messages to the client. For example, it can send the JS and CSS files a page will need before the browser even requests the HTML's subresources, reducing wait time. This is called "server push".

For example, the server can push JS and CSS files to the client directly, so the client does not have to parse the HTML first and then request them.

The server can push proactively, but the client retains the right to refuse: if the server pushes a resource the browser has already cached, the browser can reject it with an RST_STREAM frame. Pushes must also obey the same-origin policy; the server cannot push third-party resources to the client at will, only resources both parties have agreed on.
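A minimal server-push sketch with Node's http2 module; the key.pem/cert.pem paths and the pushed stylesheet are assumptions for illustration:

```ts
// push.ts: a sketch of server push with Node's built-in http2 module
// (key.pem / cert.pem paths are assumptions for this sketch)
import http2 from "node:http2";
import { readFileSync } from "node:fs";

const server = http2.createSecureServer({
  key: readFileSync("key.pem"),
  cert: readFileSync("cert.pem"),
});

server.on("stream", (stream, headers) => {
  if (headers[":path"] === "/") {
    // Push the stylesheet before the browser discovers it in the HTML.
    stream.pushStream({ ":path": "/style.css" }, (err, pushStream) => {
      if (err) return; // the client may refuse the push (e.g. via RST_STREAM)
      pushStream.respond({ ":status": 200, "content-type": "text/css" });
      pushStream.end("body { margin: 0; }");
    });
    stream.respond({ ":status": 200, "content-type": "text/html" });
    stream.end('<link rel="stylesheet" href="/style.css"><h1>hi</h1>');
  }
});

server.listen(8443);
```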

5. Improved security

For compatibility, HTTP/2 keeps HTTP/1's "plaintext" option: data may still be transmitted unencrypted, and encrypted communication is not mandatory. The format is binary, but binary is not encryption; it requires no key to decode.

But because HTTPS is clearly the way forward, and the major browsers (Chrome, Firefox, and others) have publicly declared that they support only encrypted HTTP/2, the de facto HTTP/2 is encrypted. In other words, the HTTP/2 commonly seen on the Internet uses the "https" scheme and runs over TLS. The HTTP/2 protocol defines two string identifiers: "h2" for HTTP/2 over TLS, and "h2c" for plaintext HTTP/2.
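A sketch of how a server can offer h2 via ALPN while still accepting HTTP/1.1 clients, using Node's allowHTTP1 option (certificate paths again assumed):

```ts
// alpn.ts: a sketch of negotiating h2 via ALPN with an HTTP/1.1 fallback
// (key.pem / cert.pem are assumptions, as in the push example above)
import http2 from "node:http2";
import { readFileSync } from "node:fs";

const server = http2.createSecureServer({
  key: readFileSync("key.pem"),
  cert: readFileSync("cert.pem"),
  allowHTTP1: true, // clients that can't speak h2 fall back to HTTP/1.1
});

server.on("request", (req, res) => {
  // httpVersion is "2.0" when ALPN selected h2, "1.1" on the fallback path.
  res.end(`negotiated HTTP/${req.httpVersion}\n`);
});

server.listen(8443);
```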

Drawbacks of HTTP/2

Although HTTP/2 solves many of the problems of earlier versions, it still has significant problems, mostly stemming from the underlying TCP protocol:

  1. TCP and TCP+TLS connection setup latency
  2. TCP head-of-line blocking is not completely solved
  3. Multiplexing can increase server pressure
  4. Multiplexing makes timeouts more likely

Connection setup latency

HTTP/2 is carried over TCP, and HTTPS additionally uses TLS for security, which requires its own handshake, so there are two handshake delays:

(1) Establishing a TCP connection requires a three-way handshake with the server to confirm the connection, consuming 1.5 RTT before any data can be transmitted.

(2) There are two common TLS versions, TLS 1.2 and TLS 1.3; each takes a different time to establish a session, roughly 1 or 2 RTT.

In total, then, we spend roughly 3~4 RTT before transferring any data.

Round-Trip Time (RTT): the total delay from the moment the sender transmits data to the moment it receives the receiver's acknowledgement (assuming the receiver sends the acknowledgement immediately upon receipt).
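A back-of-the-envelope sketch that turns the RTT counts above into concrete numbers (the 100 ms RTT is an assumed example; real stacks vary, and TCP Fast Open or TLS session resumption changes the math):

```ts
// handshakeCost.ts: a rough sketch of pre-data handshake delay
// (the 1.5/2/1 RTT counts follow the text above; real stacks vary)
function handshakeDelayMs(rttMs: number, tls: "1.2" | "1.3"): number {
  const tcpRtts = 1.5;                   // TCP three-way handshake
  const tlsRtts = tls === "1.2" ? 2 : 1; // full TLS handshake
  return (tcpRtts + tlsRtts) * rttMs;
}

const rtt = 100; // e.g. a 100 ms round trip on a mobile network
console.log(`TCP+TLS1.2: ${handshakeDelayMs(rtt, "1.2")} ms`); // 350 ms
console.log(`TCP+TLS1.3: ${handshakeDelayMs(rtt, "1.3")} ms`); // 250 ms
```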

Head-of-line blocking is not completely solved

As mentioned earlier, in HTTP/2 multiple requests run inside one TCP pipe. When packet loss occurs, however, HTTP/2 can actually perform worse than HTTP/1. To guarantee reliable transmission, TCP has a retransmission mechanism: a lost packet must wait to be retransmitted and acknowledged. So when an HTTP/2 connection loses a packet, the whole TCP connection waits for the retransmission, blocking every request on that connection. HTTP/1.1, by contrast, can open multiple TCP connections, so a loss like this affects only one of them; the remaining TCP connections can still transmit data.

RTO (Retransmission TimeOut): a dynamic value that changes with network conditions, calculated from the connection's round-trip time (RTT). The ACK the receiver returns carries the sequence number of the next packet it expects.

One might ask: why not just fix TCP? Unfortunately, that is by now an impossible task. TCP has been around for so long, is deployed on so many devices, and is implemented by operating systems themselves, that updating it is impractical.

Multiplexing can increase server pressure

Multiplexing does not limit the number of simultaneous requests. The average request rate looks the same as before, but in practice there are many short bursts of requests, causing instantaneous QPS spikes.

Multiplexing makes timeouts more likely

When a large number of requests are sent at the same time, an HTTP/2 connection carries many parallel streams while network bandwidth and server resources are finite, so each stream's share of resources is diluted; even though every stream starts sooner, all of them may end up timing out.

Even with a load balancer such as Nginx in front, it can be tricky to throttle correctly. And even if you introduce or tune queuing in your application, the number of connections that can be handled at once is finite. When a request has been queued, take care to discard it once its response has timed out, to avoid wasting resources.
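One way to implement that discard rule on the client side, sketched with Node's http2 module; the URL, path, and 3-second budget are assumptions:

```ts
// streamTimeout.ts: a sketch of dropping an HTTP/2 request once it times out
// (https://localhost:8443, /slow, and the 3 s budget are assumptions)
import http2 from "node:http2";

const session = http2.connect("https://localhost:8443", {
  rejectUnauthorized: false,
});

const req = session.request({ ":path": "/slow" });

// If no response arrives within the budget, cancel the stream (sends RST_STREAM)
// instead of letting it keep consuming server resources.
req.setTimeout(3000, () => {
  console.log("timed out, cancelling stream");
  req.close(http2.constants.NGHTTP2_CANCEL);
});

req.on("response", (headers) => console.log("status:", headers[":status"]));
req.on("data", () => {});
req.on("close", () => session.close());
```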

New features of HTTP/3

1. Introduction to HTTP/3

Google was already aware of these problems when it introduced SPDY, and so created the QUIC protocol on top of UDP, letting HTTP run over QUIC instead of TCP. This "HTTP over QUIC" is the next major version of the HTTP protocol: HTTP/3. It is a qualitative leap beyond HTTP/2 that truly, "perfectly", solves the head-of-line blocking problem.

Although QUIC is based on UDP, it adds many new capabilities on top of it; we will focus on several of them next. Note that HTTP/3 is still in draft form and may change before release, so this article avoids details that are likely to shift.

2. New QUIC features

As mentioned above, QUIC is based on UDP, which is connectionless and needs no handshake or teardown at all, so it is faster than TCP. On top of that, QUIC implements reliable transmission, guaranteeing that data reaches its destination, and it introduces streams and multiplexing similar to HTTP/2's: an individual stream is ordered and may block on packet loss, but other streams are unaffected. Specifically, the QUIC protocol has the following characteristics:

  • It implements flow control and TCP-like transmission reliability

    Although UDP itself does not provide reliable transmission, QUIC adds a layer on top of UDP that guarantees reliable delivery. It provides packet retransmission, congestion control, and other features found in TCP.

    In what ways does QUIC improve on TCP? The main points:

    1. Pluggable congestion control: different algorithms can be implemented at the application level.
    2. Monotonically increasing packet numbers, used instead of TCP's sequence numbers.
    3. No reneging: once a packet is ACKed, it is guaranteed to have been received correctly.
    4. Forward error correction (FEC).
    5. More ACK blocks, plus an explicit ACK delay time.
    6. Flow control at both the stream and connection levels.
  • It implements a fast handshake

    Because QUIC is based on UDP, it can establish a connection with 0-RTT or 1-RTT, meaning it can start sending and receiving data as soon as possible; this can greatly improve the time to first page load. 0-RTT connection establishment is arguably QUIC's biggest performance advantage over HTTP/2.

  • TLS encryption is integrated

    The current version of QUIC uses TLS 1.3, which has several advantages over earlier TLS versions, the most important being a reduction in the number of RTTs spent on the handshake.

    A full handshake needs 1 RTT to establish the connection. With TLS 1.3 session resumption, encrypted application data can be sent immediately, without an extra TLS handshake: that is, 0-RTT.

    But TLS 1.3 is not perfect either: its 0-RTT mode cannot guarantee forward secrecy. In short, if an attacker somehow obtains the Session Ticket Key, they can decrypt previously captured encrypted data.

    To mitigate this, the DH static parameters associated with the Session Ticket Key can be set to expire after a short time (usually a few hours).

  • Multiplexing that completely solves TCP's head-of-line blocking

    Unlike TCP, QUIC runs multiple independent logical streams over the same physical connection. Because each stream is transmitted independently, TCP-style head-of-line blocking is eliminated.

  • Connection migration

    TCP identifies a connection by a 4-tuple (client IP and port, server IP and port). QUIC instead has the client generate a 64-bit Connection ID to distinguish connections: as long as the Connection ID stays the same, the connection need not be re-established, even when the client's network changes. Because the migrating client keeps using the same session key to encrypt and decrypt packets, QUIC also authenticates the migrated connection cryptographically (see the sketch below).
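A conceptual sketch (not a real QUIC stack) of why a connection-ID lookup survives an address change while a 4-tuple lookup does not; all IDs, keys, and addresses are made up:

```ts
// connectionId.ts: a conceptual sketch of QUIC-style connection migration
// (not a real QUIC implementation; all values are invented)
type FourTuple = string; // "clientIp:clientPort->serverIp:serverPort"

interface Session { connectionId: string; sessionKey: string }

const sessionsById = new Map<string, Session>();
sessionsById.set("c0ffee", { connectionId: "c0ffee", sessionKey: "k1" });

// TCP-style lookup is keyed by addresses, so a Wi-Fi -> LTE switch (new client
// IP and port) no longer resolves to the existing session.
const sessionsByTuple = new Map<FourTuple, Session>();
sessionsByTuple.set("10.0.0.2:5000->93.184.216.34:443", sessionsById.get("c0ffee")!);
console.log(sessionsByTuple.get("172.16.0.9:6000->93.184.216.34:443")); // undefined

// QUIC-style lookup is keyed by the Connection ID carried in every packet,
// so the same session is found no matter which address the packet came from.
console.log(sessionsById.get("c0ffee")); // the session survives the migration
```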

Conclusion

  • HTTP/1.1 has two major disadvantages: insufficient security and low performance.
  • HTTP/2 is fully compatible with HTTP/1.x and is a "more secure HTTP, a faster HTTPS"; binary transmission, header compression, multiplexing, server push, and other techniques make full use of bandwidth and reduce latency, greatly improving the browsing experience.
  • QUIC, built on UDP, is the underlying transport of HTTP/3; it takes the essence of TCP to achieve a protocol that is both fast and reliable.

A classic interview question: what are the differences between HTTP/2 and HTTP/1.1, and what do you know about HTTP/3?

Having just read this article, try to summarize the answer yourself; it will deepen your memory!



If there are any mistakes in this post, feel free to point them out in the comments.