
Background: HTTP/1.1

Over the years since HTTP/1.1 was introduced, both the number and the size of the requests made by web pages have kept growing.

Defects of HTTP/1.1

HTTP/1.1 is simple, but it is neither secure nor particularly fast.

1. High latency: slow page loads. While network bandwidth has increased dramatically in recent years, latency has not dropped correspondingly. Much of the latency problem comes from head-of-line blocking, which prevents the available bandwidth from being fully utilized.

Head-of-line blocking means that when one request in a sequential queue of requests is blocked for some reason, all subsequent requests are blocked as well, so the client cannot receive their data. People have tried to work around it with sprite sheets, inlining small images as Base64, sharding across multiple domain names, merging small files, and so on.

2. Statelessness: huge HTTP headers

Packet headers generally carry many fixed header fields such as User-Agent, Cookie, Accept and Server, adding up to hundreds or even thousands of bytes, while the body is often only dozens of bytes (for example a GET request, or a 204/301/304 response), so the header dwarfs the body. Headers this large increase transmission cost, and worse, thousands of request and response packets repeat the same field values over and over, which is very wasteful.

3. Plaintext transmission: insecure. HTTP/1.1 transmits all content in plaintext, and the client and server cannot verify each other's identity, so data security cannot be guaranteed.

For example, if you join a Wi-Fi hotspot someone has set up maliciously, all of your traffic can be intercepted and your account credentials extracted.

4. No server push: the server cannot proactively push messages to the client.

Features of HTTP/2

At its core is a dramatic reduction in the amount of data transferred, achieved mainly through binary transmission and header compression.

HTTP/2 also adds server push. It addresses the problems above: browsers limit concurrent requests per domain, TCP starts slowly, and loading many resources takes a long time; multiplexing and header compression tackle these directly.

1. Multiplexing: multiple requests are sent concurrently over a single TCP connection (only one TCP connection is used per domain, where HTTP/1.1 could open six).

2. Server push: the server can proactively push resources to the client.

3. A new binary format that is easier to parse and more extensible than HTTP/1.1's text format.

4. Header compression reduces transmission size

Multiplexing: HTTP/1.1 has persistent connections, but requests on a connection are still sent serially, so the number of connections becomes the bottleneck. HTTP/2 introduces the concepts of frames and streams: each stream consists of multiple frames, and multiplexing means one TCP connection can carry many streams, so many requests can be in flight at once. The frame header identifies which stream a frame belongs to, so the receiver knows which request it corresponds to, and transmission performance improves greatly.

Multiplexing

Simply put, a domain needs only one TCP connection, and messages can be interleaved on it without blocking one another.

In HTTP/1, for performance reasons, we introduced sprite sheets, inlined small images as Base64, used multiple domain names, merged small files, and so on. All of this exists because browsers limit the number of concurrent requests per domain (Chrome typically allows six connections). When a page requests many resources, head-of-line blocking means that once the maximum number of in-flight requests is reached, the remaining resources must wait for earlier requests to complete before they can be started.

The introduction of multiplexing in HTTP/2 allows all requested data to be transmitted over a single TCP connection. Multiplexing neatly solves the browser's limit on requests to the same domain, and it also indirectly makes full-speed transmission easier to reach, since every new TCP connection has to ramp its transmission speed up from slow start.

Multiplexing means that multiple streams can exist in one TCP connection. In other words, multiple requests can be in flight at the same time, and the peer can tell which request a frame belongs to from the identifier in the frame header. This avoids the HTTP-layer head-of-line blocking of older versions of HTTP and greatly improves transmission performance.
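The idea above can be sketched in a few lines. This is a toy illustration, not a real HTTP/2 implementation: frames from several streams are interleaved on one "connection", and the receiver reassembles them using the stream identifier carried with each frame.

```python
from collections import defaultdict

def interleave(streams):
    """streams: dict of stream_id -> list of data chunks (frames).
    Emits (stream_id, payload) frames round-robin, simulating one connection."""
    frames = []
    queues = {sid: list(chunks) for sid, chunks in streams.items()}
    while any(queues.values()):
        for sid, chunks in queues.items():
            if chunks:
                frames.append((sid, chunks.pop(0)))
    return frames

def reassemble(frames):
    """Group interleaved frames back into per-stream payloads by stream id."""
    out = defaultdict(list)
    for sid, payload in frames:
        out[sid].append(payload)
    return {sid: b"".join(parts) for sid, parts in out.items()}

streams = {1: [b"GET /a ", b"part2"], 3: [b"GET /b"]}
wire = interleave(streams)          # frames from streams 1 and 3 interleaved
print(reassemble(wire))             # each stream's data is intact
```

Even though the frames of stream 1 and stream 3 are mixed on the wire, neither blocks the other, which is exactly the property HTTP/1.1's serial pipeline lacked.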

  • tips

In HTTP/2, each request (stream) can be assigned a priority, where 0 indicates the highest priority and higher values indicate lower priority. Using these priorities, clients and servers can apply different policies to different streams, so that streams, messages and frames are sent in an optimal order.

Binary transmission

HTTP/1.1 transmits headers and bodies as text. HTTP/2 instead uses a binary format, which is faster to parse: messages are split into frames and sent as streams. Frames can be transmitted out of order, and the stream identifier in each frame header lets the receiver reassemble them.
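To make the frame structure concrete, here is a small parser for the 9-byte HTTP/2 frame header defined in RFC 7540 (24-bit payload length, 8-bit type, 8-bit flags, one reserved bit, and the 31-bit stream identifier used for reassembly):

```python
def parse_frame_header(header: bytes):
    """Parse the fixed 9-byte HTTP/2 frame header (RFC 7540, section 4.1)."""
    assert len(header) == 9
    length = int.from_bytes(header[0:3], "big")          # payload length
    frame_type = header[3]                               # e.g. 0x1 = HEADERS
    flags = header[4]                                    # e.g. 0x4 = END_HEADERS
    stream_id = int.from_bytes(header[5:9], "big") & 0x7FFFFFFF  # clear reserved bit
    return length, frame_type, flags, stream_id

# Example: a HEADERS frame (type 0x1) with END_HEADERS (0x4) on stream 5
hdr = (16).to_bytes(3, "big") + bytes([0x1, 0x4]) + (5).to_bytes(4, "big")
print(parse_frame_header(hdr))  # (16, 1, 4, 5)
```

Because every frame carries its stream identifier in this fixed-size header, the receiver can demultiplex frames no matter how they are interleaved.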

Header compression

In HTTP/1, headers are transmitted as text, and when a header carries a cookie, hundreds to thousands of bytes may be retransmitted with every request.

In HTTP/2, transmitted headers are encoded with the HPACK compression format, which reduces their size. Both ends maintain an index table recording the headers already seen; later in the connection, only the index of a recorded header needs to be sent, and the receiver looks up the corresponding name and value in its table.

Server push

Usage scenario: when the browser requests just the HTML, the server sends the JS and CSS files the page will likely need ahead of time, reducing the wait. This is called "server push".

Of course, you can use prefetch instead if the browser supports it.

The server can push proactively, but the client retains the right to refuse: if the server pushes a resource the browser has already cached, the browser can reject it by sending RST_STREAM. Push also obeys the same-origin policy, meaning the server cannot push third-party resources to the client.

Security

Strictly speaking, encryption is a property of HTTPS rather than of HTTP/2 itself. In practice, though, mainstream browsers only support HTTP/2 over encryption, so it must be used with the "https" scheme.

Disadvantages of HTTP/2

Most of HTTP/2's remaining problems are really caused by TCP.

1. Both TCP and TLS require handshakes, which add latency.

2. TCP head-of-line blocking

Because all streams share one TCP connection, TCP's in-order delivery means that if a single TCP packet is lost, every stream on the connection stalls until it is retransmitted. In this respect HTTP/2 can be even worse than HTTP/1.1, where six separate TCP connections mean that a loss on one connection does not affect the others.

HTTP/3

A truly "perfect" solution to the head-of-line blocking problem.

A new protocol, QUIC, was created on top of UDP, adding many features such as multiplexing, 0-RTT, TLS 1.3 encryption, flow control, ordered delivery, and retransmission.

Multiplexing

QUIC allows multiple independent logical data streams on the same physical connection. Each stream is transmitted independently, which solves TCP's head-of-line blocking problem.

Within a single stream, delivery is still ordered, but a loss in one stream does not affect the others; this is exactly the problem TCP could not solve.

QUIC also performs better than TCP on mobile networks. TCP identifies a connection by IP address and port, which is fragile in a changing mobile network environment, whereas QUIC identifies a connection by a connection ID: no matter how your network changes, as long as the ID stays the same, the connection can resume quickly.

Fast handshake

UDP is connectionless, and QUIC uses a recent TLS version (TLS 1.3), so the handshake is fast.

Reliability features similar to TCP's

QUIC adds a layer on top of UDP to ensure reliable data transmission. It provides packet retransmission, congestion control, and other features found in TCP.

For example, to send three packets, the protocol computes the XOR of the three packets and sends it as a separate checksum packet, so four packets go out in total.

If one of the non-checksum packets is lost, its contents can be recomputed from the other three packets.

Of course, this technique only works when a single packet is missing; if multiple packets are lost, the error-correction mechanism cannot be used and retransmission is the only option.
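The XOR-based forward error correction described above can be demonstrated in a few lines (a sketch of the general technique; note that this FEC scheme was experimental in QUIC): one extra parity packet, the XOR of all data packets, lets the receiver rebuild any single lost packet.

```python
from functools import reduce

def xor_packets(packets):
    """XOR a list of equal-length byte strings together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

data = [b"pkt1", b"pkt2", b"pkt3"]
parity = xor_packets(data)            # the 4th packet sent on the wire

# Suppose data[1] is lost in transit; XOR the survivors with the parity:
survivors = [data[0], data[2], parity]
recovered = xor_packets(survivors)
print(recovered)  # b'pkt2': the single lost packet is rebuilt without retransmission
```

This works because XOR-ing the parity with every surviving packet cancels them out, leaving exactly the missing packet; with two or more losses the equation has too many unknowns, hence the retransmission fallback.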

Summary

HTTP/2 greatly improves performance through multiplexing, binary framing, header compression, and so on, but problems remain.

QUIC, the protocol underpinning HTTP/3, is based on UDP and borrows the best parts of TCP to achieve a protocol that is both fast and reliable.
