concept

HTTP is the communication medium of the Internet; its full name is HyperText Transfer Protocol. To understand this concept, let’s break it down into three parts:

  • hypertext
  • transfer
  • protocol

hypertext

In short, it is more than plain text. To be specific: ordinary text is just the characters that make up an article, but the text of a web page can be composed of text, pictures, audio, video, and more. Most importantly, it can contain hyperlinks that jump from one text to another.

HTML (HyperText Markup Language) is one format for hypertext; you could also call it a specification.

transfer

Transfer is a familiar word, of course: something moving from one place to another.

With HTTP, packets can travel from the client to the server, and data from the server back to the client, so the transfer is bidirectional.

protocol

In the same environment, both individuals need to meet certain norms. For example, every social citizen needs to abide by laws and regulations, which is a kind of agreement

HTTP is a protocol used in the computer world. It establishes a specification for communication between computers (two or more participants), as well as related controls and error handling (behavior conventions and specifications), in a language that computers can understand.

Putting it all together: HTTP is a “convention and specification” in the computer world for “transferring” text, pictures, audio, video, and other “hypertext” data between “two points”.

HTTP/1.0 features

advantages

HTTP/1.0 has a great many advantages, and it brought us into the Internet age:

  1. HTTP/1.0 is relatively simple: the basic message is a request header plus a request body (header + body), and each header field is a simple key-value pair (a raw-socket sketch follows this list);
  2. HTTP/1.0 is flexible and extensible: the request methods, URLs, ports, and headers in the protocol can all be specified by the developer;
  3. Cross-platform: the biggest advantage of HTTP is that it is platform-independent. Any platform that speaks the transport protocol can take part, which is why HTTP applications became so widespread, from the client/server (C/S) programs of the past to the browser/server (B/S) applications of today.
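
To make item 1 concrete, here is a minimal sketch of a raw HTTP/1.0 exchange written by hand over a plain socket; example.com is only a stand-in host for illustration:

```python
# A minimal sketch of a raw HTTP/1.0 exchange over a plain socket.
# example.com is only a stand-in host for illustration.
import socket

request = (
    "GET / HTTP/1.0\r\n"     # request line: method, path, version
    "Host: example.com\r\n"  # header fields are simple key-value pairs
    "Accept: text/html\r\n"
    "\r\n"                   # blank line ends the header; the body (none here) would follow
)

with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):  # an HTTP/1.0 server closes the connection when done
        response += chunk

print(response.split(b"\r\n\r\n", 1)[0].decode())  # show just the response header block
```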

disadvantages

Everything is a double-edged sword with a good side and a bad side. So what were the problems of the HTTP/1.0 era?

  1. HTTP transport is stateless: the server does no state processing, so anyone can access it and it cannot tell one visitor from another. The solutions to this are cookies, sessions, tokens, and later JWT (a cookie sketch follows this list);
  2. Plaintext transmission. One misconception worth correcting here: it is not only GET that transmits in plaintext; POST is exactly the same, as you can verify with a packet-capture tool while debugging. Plaintext transmission leaves your information naked on the Internet. The fix is HTTPS, which builds on HTTP’s extensibility by inserting an SSL/TLS layer between TCP and HTTP;
  3. The most critical disadvantage of HTTP/1.0 is its serial response model: the next request cannot be sent until the previous one has returned, so if one request is blocked, the whole page is blocked and no data can be fetched. This behavior is called head-of-line blocking. On top of that, every network request pays for a TCP three-way handshake and four-way teardown.
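
Since the protocol itself keeps no state, state is layered on top. Here is a minimal sketch of the cookie approach with the standard http.client module; the host, paths, and cookie value are hypothetical:

```python
# A minimal sketch: layering state on top of stateless HTTP with a cookie.
# The host, paths, and cookie value are hypothetical, for illustration only.
import http.client

conn = http.client.HTTPSConnection("example.com")

# First request: no identity yet; a real server would answer with Set-Cookie.
conn.request("GET", "/login")
resp = conn.getresponse()
session_cookie = resp.getheader("Set-Cookie")  # e.g. "sid=abc123; Path=/"
resp.read()  # drain the body so the connection can be reused

# Later requests replay the cookie, so the server can recognize this client.
if session_cookie:
    conn.request("GET", "/profile",
                 headers={"Cookie": session_cookie.split(";")[0]})
    print(conn.getresponse().status)

conn.close()
```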

HTTP/1.0 had all these problems, so HTTP/1.1, 2, and 3 are successive rounds of optimization against them.

HTTP/1.1

The main problems HTTP/1.1 set out to solve are:

  • In HTTP/1.0, every connection requires a three-way handshake and four-way teardown (and each request must wait for the previous one to return), and requests are serial, adding extra communication overhead;
  • HTTP/1.0 connections cannot be kept alive.

  1. Long connections

In HTTP/1.1, Connection: keep-alive in the request header tells the server that the client wants a persistent connection. The benefit is that it reduces the extra overhead of repeatedly establishing and tearing down TCP connections and lightens the load on the server.

The defining characteristic of a persistent connection is that the TCP connection stays open as long as neither end explicitly asks to close it.
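
With the standard http.client module, which speaks HTTP/1.1 and keeps the connection open by default, reusing one TCP connection looks like this sketch (example.com and the paths are placeholders):

```python
# A minimal sketch of a persistent (keep-alive) connection:
# several requests ride on one TCP connection instead of opening one each.
# example.com and the paths are placeholders.
import http.client

conn = http.client.HTTPSConnection("example.com")  # one TCP connection
for path in ("/", "/about", "/contact"):
    conn.request("GET", path)   # HTTP/1.1, so keep-alive is the default
    resp = conn.getresponse()
    resp.read()                 # must drain the body before the next request
    print(path, resp.status)

conn.close()                    # explicitly tear the connection down
```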

  2. Pipelining

Building on HTTP/1.1’s long connections, pipelining becomes achievable.

The idea of pipelining is that the client does not have to wait for the previous response before sending the next request.

However, pipelined responses must still come back in the order the requests were sent, so the request-reply pattern continues to hurt HTTP’s performance: head-of-line blocking is still easy to hit.
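
Standard client libraries rarely expose pipelining, but the idea can be sketched with a raw socket: write two requests back-to-back, then read whatever comes back. Real servers vary in how well they support this, and example.com is only a placeholder:

```python
# A minimal sketch of pipelining over a raw socket: both requests are written
# before any response is read. Real servers vary in how well they support
# this, and example.com is only a placeholder.
import socket

def request(path: str) -> str:
    return f"GET {path} HTTP/1.1\r\nHost: example.com\r\n\r\n"

with socket.create_connection(("example.com", 80)) as sock:
    # Two requests go out back-to-back, with no waiting in between.
    sock.sendall((request("/") + request("/about")).encode("ascii"))
    sock.settimeout(3)  # the connection stays open, so stop on silence
    data = b""
    try:
        while chunk := sock.recv(4096):
            data += chunk
    except socket.timeout:
        pass

# Responses must come back in the order requested; count the status lines.
print(data.count(b"HTTP/1.1 "))
```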

The bottlenecks of HTTP/1.1

  1. The server responds in the order requests arrive, so if one response is slow, the client cannot get the data queued behind it: head-of-line blocking again;
  2. No request priority control;
  3. Requests can only be initiated by the client; the server can only respond passively;
  4. Request and response headers are sent uncompressed, and the more header information there is, the greater the latency. Only the body can be compressed;
  5. Sending the same headers back and forth on every request adds extra communication overhead.

HTTP/2

HTTP/2 is built on HTTPS, so its security is guaranteed. (To keep this article from running too long, HTTPS deserves a post of its own.) HTTP/2 makes the following changes:

  1. Header compression

In HTTP/2, if you send multiple requests whose headers are identical or similar, the protocol compresses them and eliminates the duplication for you, using the HPACK algorithm. HPACK stores the fields of the current request headers in two tables used together: a static table and a dynamic table. Client and server each maintain this header table; every field stored in it is assigned an index number, and from then on the same field is never re-sent, only its index.
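
The effect is easy to observe with the third-party hpack package (an assumption here: pip install hpack), a Python implementation of the HPACK algorithm; the header values below are made up:

```python
# A minimal sketch of HPACK header compression, assuming the third-party
# hpack package (pip install hpack) is available. Header values are made up.
from hpack import Encoder

encoder = Encoder()
headers = [
    (":method", "GET"),
    (":path", "/index.html"),
    ("user-agent", "demo-client/1.0"),
    ("cookie", "sid=abc123"),
]

first = encoder.encode(headers)   # first send: fields enter the dynamic table
second = encoder.encode(headers)  # repeat send: mostly just index numbers now

print(len(first), len(second))    # the second block is noticeably smaller
```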

  2. Binary format

In HTTP/1.1 we can read the packet data we capture because it is UTF-8 encoded text. HTTP/2 uses a binary encoding instead, which is unfriendly to us but friendly to the computer: since a computer only understands binary anyway, it no longer needs to convert a received plaintext message, but parses the binary packet directly, which improves data transmission efficiency.

So HTTP/2 is not a plain-text protocol like HTTP/1.1; it is fully binary. Headers and data bodies alike are binary, and both are collectively called frames: header frames and data frames.
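
Every HTTP/2 frame begins with a fixed 9-byte header: a 24-bit payload length, an 8-bit type, an 8-bit flags field, and a reserved bit plus a 31-bit stream identifier (per RFC 7540). A sketch of packing one by hand:

```python
# A minimal sketch of the fixed 9-byte HTTP/2 frame header (RFC 7540):
# 24-bit payload length, 8-bit type, 8-bit flags, reserved bit + 31-bit stream id.
import struct

def frame_header(length: int, frame_type: int, flags: int, stream_id: int) -> bytes:
    return (
        length.to_bytes(3, "big")                    # 24-bit payload length
        + bytes([frame_type, flags])                 # type and flags, one byte each
        + struct.pack(">I", stream_id & 0x7FFFFFFF)  # reserved bit forced to zero
    )

# A DATA frame (type 0x0) carrying "hello" with the END_STREAM flag (0x1)
# on stream 1 (odd, so client-initiated).
payload = b"hello"
print(frame_header(len(payload), 0x0, 0x1, 1).hex(), payload)
```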

  3. Data streams

HTTP/2 packets are not sent strictly in sequence; consecutive packets within the same connection may belong to different responses. Each packet therefore has to be tagged to indicate which response it belongs to.

All the packets that make up one request or response are collectively called a stream, and every stream is marked with a unique number. The numbering rule is that streams initiated by the client are odd and streams initiated by the server are even.

The client can also assign a priority to a stream; the server responds to higher-priority requests first.

  4. Server push

As mentioned above, HTTP/1.1 uses a strict “request-reply” model, which prevents the server from ever speaking first. HTTP/2 breaks this: instead of only responding passively, the server can open new streams and actively send messages to the client. For example, when the browser requests an HTML page, the server can send the JS and CSS files it will likely need before the browser asks for them, reducing latency. This is called “Server Push” (also known as Cache Push).

The server may push, but the client keeps the right to refuse: if the server pushes a resource the browser has already cached, the browser can reject it by sending an RST_STREAM frame. Push also obeys the same-origin policy; the server cannot push arbitrary third-party resources to the client, only resources both parties have agreed on.

  5. Multiplexing

HTTP/2 allows multiple requests or responses to be sent concurrently on a single connection, rather than being returned in a one-to-one sequence.


Multiplexing sidesteps the browser’s limit on concurrent requests to the same domain. It also makes it easier to run at full transmission speed, since every new TCP connection has to ramp its rate up slowly. Dropping HTTP/1.1’s serial requests removes the queueing, eliminates head-of-line blocking at the HTTP layer, reduces latency, and greatly improves connection utilization.
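
One way to see this from code is the third-party httpx client with HTTP/2 enabled (an assumption here: pip install "httpx[http2]"); concurrent requests share a single connection as separate streams, and the URL below is a placeholder:

```python
# A minimal sketch of HTTP/2 multiplexing, assuming the third-party httpx
# package with its HTTP/2 extra (pip install "httpx[http2]") is installed.
# The URL is a placeholder; the point is that all requests share one connection.
import asyncio
import httpx

async def main() -> None:
    async with httpx.AsyncClient(http2=True) as client:
        urls = ["https://example.com/"] * 3  # same origin, one connection
        # All three requests are in flight concurrently as separate streams.
        responses = await asyncio.gather(*(client.get(u) for u in urls))
        for r in responses:
            print(r.http_version, r.status_code)

asyncio.run(main())
```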

The defects of HTTP/2

It looks like a great deal, but some problems remain. Why else would there be HTTP/3?

The main problem with HTTP/2 is that multiple HTTP requests share one TCP connection, while the underlying TCP protocol has no idea how many HTTP requests it is carrying. When a packet is lost, TCP’s retransmission mechanism kicks in, and every HTTP request on that connection has to wait for the lost packet to be retransmitted.

HTTP/3

In HTTP/2, a lost packet triggers TCP’s retransmission mechanism, and the TCP layer cannot know how many HTTP requests ride on it. So in HTTP/3, the transport underneath HTTP is changed from TCP to UDP!

There are plenty of diagrams online comparing the two protocol stacks.

UDP is known to be unreliable, but QUIC (Quick UDP Internet Connections), which is built on UDP, can achieve reliable transmission comparable to TCP’s.

How does HTTP/3 work

QUIC is, in effect, a reimplementation of TCP + TLS + HTTP/2-style multiplexing on top of UDP.
