
Preface

The HTTP protocol has gone through a long evolution, from the original HTTP/0.9 all the way to HTTP/3.0, and it has undoubtedly become a very powerful protocol. Today we will talk about HTTP/2.0 and HTTP/3.0.

What improvements have been made to HTTP/2.0?

Binary framing

Binary framing divides all information transmitted over a connection into smaller messages and frames and encodes them in binary format. Header information is carried in HEADERS frames, and the message body is encapsulated in DATA frames.

Q&A:

  1. Why can HTTP/2.0 convert all content to binary?

Because the binary framing layer sits between the application layer and the transport layer, all information passes through it and can be converted.

  2. Why binary?

Binary is more efficient to parse, and computers work natively in binary.
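To make the frame layout concrete, here is a minimal Python sketch of packing an HTTP/2 frame. The 9-byte header layout (24-bit length, 8-bit type, 8-bit flags, 31-bit stream ID) follows RFC 7540; the `pack_frame` helper itself is a hypothetical name for illustration, not a real library API:

```python
import struct

# Frame type codes from RFC 7540
DATA, HEADERS = 0x0, 0x1

def pack_frame(frame_type: int, flags: int, stream_id: int, payload: bytes) -> bytes:
    """Build the 9-byte HTTP/2 frame header followed by the payload.

    Layout: 24-bit length, 8-bit type, 8-bit flags,
    1 reserved bit + 31-bit stream identifier.
    """
    header = struct.pack(">I", len(payload))[1:]         # low 3 bytes = 24-bit length
    header += struct.pack(">BB", frame_type, flags)      # type, flags
    header += struct.pack(">I", stream_id & 0x7FFFFFFF)  # reserved bit zeroed
    return header + payload

frame = pack_frame(DATA, 0x1, 1, b"hello")  # flags=0x1 is END_STREAM for DATA
assert len(frame) == 9 + 5
assert frame[3] == DATA and frame[-5:] == b"hello"
```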

Multiplexing

Multiplexing is a technique that sends requests in parallel without waiting for responses to return, eliminating unnecessary delays and reducing page load time.

Q&A:

  1. What is the difference between HTTP/2.0 multiplexing and HTTP/1.1 pipelining?

Pipelining can also send requests in parallel, but responses must be returned in the order the requests were sent, whereas multiplexing requires no such waiting and is far less prone to blocking. Binary framing is the foundation of multiplexing, because frames tagged with stream IDs can be sent and received out of order.
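As a rough illustration (the frame data below is made up), a receiver can demultiplex interleaved frames purely by their stream IDs, so no stream has to wait for another:

```python
from collections import defaultdict

# Interleaved frames as (stream_id, chunk) pairs -- with multiplexing,
# frames from different requests can arrive in any order on one connection.
arrived = [(1, b"<html>"), (3, b'{"ok":'), (1, b"</html>"), (3, b"true}")]

streams = defaultdict(bytes)
for stream_id, chunk in arrived:
    streams[stream_id] += chunk  # each stream reassembles independently

assert streams[1] == b"<html></html>"
assert streams[3] == b'{"ok":true}'
```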

Header compression

Most header fields are repeated after the first request, and as pages grow more complex, more and more requests originate from the same page. Without header compression, there is significant traffic overhead.

Q&A:

  1. How does header compression work?

Browsers and servers that support HTTP/2.0 maintain identical static and dynamic tables, along with a built-in Huffman coding table. The static table stores common header names and common name-value pairs, such as :method: GET and cookie. The dynamic table starts empty; when a header matches a name in the static table but carries a new value, the full name-value pair (for example Cookie: XXXX) is added to the dynamic table. The point of all this is that a request or response header that hits the static or dynamic table needs only one byte on the wire: an index pointing to the entry in the table.
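The table-lookup idea can be sketched with a toy encoder. This is illustrative only: real HPACK (RFC 7541) uses a fixed 61-entry static table, eviction rules, and Huffman coding, and the entries below are abbreviated stand-ins:

```python
# Toy HPACK-style lookup (illustrative; real HPACK is defined in RFC 7541).
STATIC_TABLE = [(":method", "GET"), (":path", "/"), ("cookie", "")]

dynamic_table = []  # starts empty; both peers grow it identically

def encode(name: str, value: str):
    """Return a small integer index if the pair is already in a table,
    else return the literal pair (and add it to the dynamic table)."""
    table = STATIC_TABLE + dynamic_table
    if (name, value) in table:
        return table.index((name, value))    # one small integer on the wire
    dynamic_table.append((name, value))      # future requests can hit it
    return (name, value)                     # sent as a literal the first time

first  = encode("cookie", "sid=42")   # literal: not in any table yet
second = encode("cookie", "sid=42")   # index: dynamic table hit
assert first == ("cookie", "sid=42")
assert second == 3  # index right after the 3 static entries
```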

Server push

The server can send multiple responses for a single client request. For example, when a browser asks the server for index.html, which references a style sheet and an image, the traditional approach requires the browser to send three requests. With server push, the server can deliver all three files in response to the one request, reducing the number of requests and improving page performance.

Q&A:

  1. What are the drawbacks of server push?

If the server pushes content that the browser already has in its cache, bandwidth is wasted. One way to avoid this is to configure the server to push only on the first request.

The significance of HTTP/3.0

HTTP/3.0 is based on the QUIC (Quick UDP Internet Connections) protocol, which runs over UDP.

Connection migration

A TCP connection is uniquely identified by a four-tuple: source IP, source port, destination IP, and destination port. The server-side IP address and port are generally stable, but the client-side IP address and port may change (for example, when the device switches networks). Once the IP address or port changes, the TCP connection breaks. Because QUIC is based on UDP, a connection is no longer identified by the four-tuple but by a 64-bit connection ID randomly generated by the client. As long as the ID stays the same, the connection survives the address change, and the upper-layer business logic does not notice anything.
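The difference in how connections are keyed can be sketched like this (the addresses and session state below are made-up placeholders):

```python
import secrets

# TCP keys connections by the 4-tuple, so an address change loses the state.
tcp_conns = {("1.2.3.4", 5555, "9.9.9.9", 443): "session state"}
# After switching from Wi-Fi to cellular, the client address changes:
assert ("10.0.0.7", 6666, "9.9.9.9", 443) not in tcp_conns  # connection broken

# QUIC keys connections by a random 64-bit connection ID instead.
conn_id = secrets.randbits(64)
quic_conns = {conn_id: "session state"}
# The same ID arrives from the new address, so the session survives:
assert quic_conns[conn_id] == "session state"
```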

No head-of-line blocking

First, let’s look at the two kinds of head-of-line blocking:

  - HTTP-layer head-of-line blocking
  - TCP-layer head-of-line blocking

HTTP/2.0 multiplexing solves head-of-line blocking at the HTTP layer, but TCP head-of-line blocking still exists: when a packet times out or is lost, the connection waits for retransmission, which blocks the receive window from sliding right and stalls everything behind it. QUIC, being based on UDP, innovates here: it uses strictly monotonically increasing packet numbers, and each packet carries a stream ID and a stream offset. Even if a packet is lost or times out along the way, subsequent packets do not wait; the receiver reassembles the data from the stream ID and offset after the packets arrive, which avoids the problem.
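A minimal sketch of reassembly by stream ID and offset (the packet contents are invented for illustration): a late packet stalls only its own stream, and arrival order does not matter.

```python
# Packets carry (packet_number, stream_id, offset, data); a lost or late
# packet only delays its own stream, and data is reassembled by
# stream ID + offset, not by arrival order.
arrived = [
    (2, 1, 5, b"world"),         # packet 1 of stream 1 is late...
    (3, 7, 0, b"other stream"),  # ...but stream 7 is not blocked by it
    (1, 1, 0, b"hello"),         # the late packet finally arrives
]

streams = {}
for _pkt_num, stream_id, offset, data in arrived:
    streams.setdefault(stream_id, {})[offset] = data

def assemble(buf: dict) -> bytes:
    """Concatenate a stream's chunks in offset order."""
    return b"".join(buf[off] for off in sorted(buf))

assert assemble(streams[1]) == b"helloworld"
assert assemble(streams[7]) == b"other stream"
```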

Custom congestion control

TCP sits in the transport layer and is implemented in the operating system by default, while QUIC sits in the application layer, making it far more flexible to redefine the congestion-control algorithm to fit the actual situation.

Forward secrecy and forward error correction

UDP is said to be unreliable, but Google added this mechanism to QUIC: after each group of packets is sent, the XOR of that group is computed and sent as well. The receiver thus has redundant information and can verify the data and recover a lost packet in the group. This provides reliability.
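A minimal sketch of the XOR recovery idea, assuming equal-length packets in a group (the packet contents are made up):

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Sender: after a group of equal-length packets, also send their XOR.
group = [b"pkt1", b"pkt2", b"pkt3"]
fec = reduce(xor, group)  # the redundancy packet

# Receiver: exactly one packet of the group was lost in transit.
received = [group[0], group[2]]  # pkt2 was dropped
recovered = reduce(xor, received + [fec])
assert recovered == b"pkt2"  # XOR of the survivors and the FEC packet
```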

Conclusion

⚽ This article mainly introduced the significance of HTTP/2.0 and HTTP/3.0~ ⚾ If you found it interesting, a like is welcome; more great content is waiting for you! 😘 🏀 My GitHub blog is at github.com/Awu1227. 🏉 I also have other columns, welcome to read~ 🏐 The Beauty of CSS 🎱 Vue from Giving Up to Getting Started 🎳 Simple JavaScript