• HTTP/1.1
    • Improved persistent connections
      1. Multiple HTTP requests can be sent over a single TCP connection, and the connection stays open until the browser or the server explicitly closes it
    • Immature HTTP pipelining
      1. Persistent connections reduce the overhead of TCP setup and teardown, but each request must still wait for the previous response before it can be sent. If one request in the TCP channel is not answered in time for some reason, all subsequent requests are blocked. This is known as head-of-line blocking.
      2. HTTP/1.1 tried to solve head-of-line blocking with pipelining, a technique for submitting multiple HTTP requests to the server in a batch. Although requests can be sent in batches, the server must still respond in the order the requests arrived, so the approach was eventually abandoned
    • Provide virtual host support
      1. The Host field in the HTTP/1.1 request header indicates the current domain name, so the server can apply different processing based on the Host value (a sketch of both features follows this list)
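
As a rough illustration of both features, here is a minimal sketch using Node's built-in http module (the host names and paths are hypothetical): a keep-alive agent reuses one TCP connection across requests, and the Host header selects the virtual host.

```typescript
import http from "node:http";

// A keep-alive agent reuses one TCP connection for successive requests,
// which is what HTTP/1.1 persistent connections provide.
const agent = new http.Agent({ keepAlive: true, maxSockets: 1 });

function get(path: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const req = http.request(
      {
        host: "example.com", // hypothetical server address
        path,
        agent, // requests through this agent share the same socket
        headers: { Host: "site-a.example.com" }, // virtual host selection
      },
      (res) => {
        let body = "";
        res.on("data", (chunk) => (body += chunk));
        res.on("end", () => resolve(body));
      }
    );
    req.on("error", reject);
    req.end();
  });
}

async function main() {
  // Both requests reuse one TCP connection thanks to keep-alive, but each
  // still waits for the previous response -- there is no multiplexing.
  await get("/index.html");
  await get("/style.css");
}

main();
```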

  • HTTP/2
    • HTTP/1.1 makes poor use of bandwidth, which is its core problem, for the following three reasons
      1. TCP slow start
        • Once a TCP connection is established, it enters the sending state. At first, TCP sends data very slowly and then ramps up until the sending rate reaches an ideal state. This process is called slow start
        • Slow start is a TCP strategy for avoiding network congestion, and there is no way to turn it off
        • Slow start causes performance problems because the key resources on a page are usually not large, such as HTML documents, CSS files, and JavaScript files. These files are typically requested as soon as the TCP connection is established, but that connection is still in slow start, so they take much longer to arrive than they otherwise would, which delays the precious first render of the page (see the sketch after this list)
      2. Bandwidth competition among TCP connections
        • If multiple TCP connections are open at the same time, they compete for the fixed bandwidth
        • For example, when the system establishes several TCP connections at once and bandwidth is plentiful, each connection's sending and receiving rate slowly ramps up; when bandwidth is scarce, the connections throttle each other. Suppose a page references 200 files served from 3 CDNs: since the browser opens up to 6 connections per domain, 3 × 6 = 18 TCP connections are needed to download the resources, and during the download each connection has to dynamically adjust its receiving rate
        • Some of those TCP connections carry critical resources while others carry ordinary resources such as images and videos, but the connections cannot negotiate among themselves which critical resources should be downloaded first, so the download of critical resources may be delayed
      3. Head-of-line blocking
        • With persistent connections in HTTP/1.1, although a TCP pipe can be shared, only one request can be in flight in the pipe at a time; every other request is blocked until the current one completes, which means we cannot freely send and receive requests over a single pipe
        • This is a serious problem because many factors can block a request, and some of them are unpredictable: if one request is blocked for 5 seconds, every request queued behind it is also delayed by 5 seconds, and during that wait both bandwidth and CPU are wasted
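
To make the slow-start cost concrete, here is a small sketch (a simplified model; real TCP stacks differ in many details) that counts how many round trips it takes to deliver a file when the congestion window starts small and doubles every round trip:

```typescript
// Simplified slow-start model: the congestion window starts at about
// 10 segments (RFC 6928) and doubles every round trip.
const MSS = 1460; // bytes per segment, a typical Ethernet MSS

function roundTripsToDeliver(fileBytes: number, initialCwnd = 10): number {
  let segmentsLeft = Math.ceil(fileBytes / MSS);
  let cwnd = initialCwnd;
  let rtts = 0;
  while (segmentsLeft > 0) {
    segmentsLeft -= cwnd; // send a full window this round trip
    cwnd *= 2;            // exponential growth during slow start
    rtts += 1;
  }
  return rtts;
}

// Even a small 100 KB CSS file needs several round trips on a cold
// connection, which is why slow start delays first render.
console.log(roundTripsToDeliver(100 * 1024)); // → 4
```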

HTTP/2 multiplexing

      • Of HTTP/1.1's problems, slow start and bandwidth competition between TCP connections are caused by TCP itself, while head-of-line blocking is caused by HTTP/1.1's own mechanism
      • Based on this, the idea behind HTTP/2 is that each domain uses only one persistent TCP connection to transfer data, so downloading all of a page's resources requires only a single slow start, and the bandwidth competition between multiple TCP connections is avoided as well
      • HTTP/2 therefore needs to support parallel requests: requests can be sent to the server at any time without waiting for other requests to complete, and the server can return the processed responses to the browser at any time
  • The HTTP/2 solution can be summarized as follows:
  1. Use only one persistent TCP connection per domain
  2. Eliminate head-of-line blocking at the HTTP level

  • This is multiplexing, HTTP/2's core, most important, and most disruptive mechanism.
  • You can see that each request has a corresponding ID, such as stream 1 for index.html and stream 2 for foo.css, so the browser can send requests to the server at any time (see the sketch after this list).
  • After receiving these requests, the server decides what to return first based on its own priorities. For example, the server may already have the response headers of index.html and bar.js cached, so on receiving the requests it can immediately return those response headers to the browser, and then send the response bodies of index.html and bar.js afterwards. The data can be sent in any order because each piece carries a stream ID; the browser filters the received data by ID and concatenates each stream into a complete HTTP response
  • HTTP/2's multiplexing breaks requests into frames for transmission, which has the added benefit that when the server receives a higher-priority request, such as a request for a critical JavaScript or CSS resource, it can suspend earlier requests in order to prioritize the critical one
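
Here is a minimal sketch of multiplexing using Node's built-in http2 module (the origin and paths are hypothetical). All three requests travel as separate streams over one TCP connection and can be answered in any order:

```typescript
import http2 from "node:http2";

// One session = one persistent TCP (TLS) connection to the origin.
const session = http2.connect("https://example.com"); // hypothetical origin

function fetchPath(path: string): Promise<string> {
  return new Promise((resolve, reject) => {
    // Each request() opens a new stream with its own stream ID, so the
    // responses can arrive in any order and are matched back by that ID.
    const stream = session.request({ ":path": path });
    let body = "";
    stream.setEncoding("utf8");
    stream.on("data", (chunk) => (body += chunk));
    stream.on("end", () => resolve(body));
    stream.on("error", reject);
  });
}

async function main() {
  // Sent in parallel over the same connection -- no request has to wait
  // for an earlier response, unlike HTTP/1.1 pipelining.
  const [html, css, js] = await Promise.all([
    fetchPath("/index.html"),
    fetchPath("/foo.css"),
    fetchPath("/bar.js"),
  ]);
  console.log(html.length, css.length, js.length);
  session.close();
}

main();
```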

Implementation of multiplexing

  • First, the browser prepares the request data: the request line, the request headers, and, if the method is POST, the request body.
  • This data is processed by the binary framing layer and converted into frames tagged with a request (stream) ID, which are sent to the server through the protocol stack
  • After the server receives the request frames, it merges all frames with the same ID into one complete request message.
  • The server then processes the request and hands the response line, response headers, and response body to the binary framing layer.
  • Likewise, the binary framing layer converts the response data into frames tagged with the same stream ID, which are sent through the protocol stack to the browser.
  • After the browser receives the response frames, it uses the stream ID to deliver each frame's data to the corresponding request

As you can see from the flow above, HTTP/2 implements multiplexing by introducing the binary framing layer.
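
As a sketch of what the framing layer does, here is a toy frame format (not the real HTTP/2 frame layout, which RFC 7540 defines with a 9-byte header carrying a type and flags): each chunk is tagged with its stream ID so the receiver can reassemble interleaved streams.

```typescript
// Toy frame format: [4-byte stream ID][4-byte payload length][payload].
function encodeFrame(streamId: number, payload: Buffer): Buffer {
  const header = Buffer.alloc(8);
  header.writeUInt32BE(streamId, 0);
  header.writeUInt32BE(payload.length, 4);
  return Buffer.concat([header, payload]);
}

// The receiver groups incoming frames by stream ID and concatenates
// their payloads back into complete messages, however the frames of
// different streams were interleaved on the wire.
function reassemble(frames: Buffer[]): Map<number, Buffer> {
  const streams = new Map<number, Buffer[]>();
  for (const frame of frames) {
    const id = frame.readUInt32BE(0);
    const len = frame.readUInt32BE(4);
    if (!streams.has(id)) streams.set(id, []);
    streams.get(id)!.push(frame.subarray(8, 8 + len));
  }
  return new Map([...streams].map(([id, parts]) => [id, Buffer.concat(parts)]));
}

// Frames from streams 1 and 2 interleaved on the wire:
const wire = [
  encodeFrame(1, Buffer.from("<html>")),
  encodeFrame(2, Buffer.from("body {")),
  encodeFrame(1, Buffer.from("</html>")),
  encodeFrame(2, Buffer.from("}")),
];
console.log(reassemble(wire).get(1)?.toString()); // "<html></html>"
```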

HTTP is the language that browsers and servers use to communicate. Although HTTP/2 introduces the binary framing layer, its semantics remain the same as HTTP/1.1's; the language of communication has not changed. For example, developers can still use the Accept request header to tell the server what type of file they want, still use cookies to maintain login state, and still cache files locally. Nothing has changed except the transfer mechanism, which matters a great deal for developers: it means we don't need to rebuild the ecosystem for HTTP/2.
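
For example, this browser-side request (the URL is hypothetical) is written the same way whether the connection underneath is HTTP/1.1 or HTTP/2; only the wire format differs:

```typescript
// The semantics are identical across HTTP versions: the Accept header,
// cookies, and caching all behave exactly as they did under HTTP/1.1.
async function loadData() {
  const res = await fetch("https://example.com/data", {
    headers: { accept: "application/json" }, // same content negotiation
  });
  console.log(res.headers.get("cache-control")); // same caching headers
  return res.json();
}
```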

HTTP/1.1 and HTTP/2

HTTP/2

HTTP/2's core feature is multiplexing: multiple URL requests can be sent over a single TCP connection. Multiplexing makes full use of bandwidth and avoids, as far as possible, the problems caused by TCP slow start; HTTP/2 also adds header compression, server push, and other features. Together these greatly improve the transfer speed of page resources.

In the era of HTTP/1.1, browsers maintained up to six TCP connections per domain to improve parallel download efficiency. With HTTP/2, the browser needs to maintain only one persistent TCP connection per domain, and HTTP/1.1's head-of-line blocking problem is solved.
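
As a sketch of server push with Node's http2 module (the certificate paths and file contents are placeholders): when the HTML is requested, the server proactively pushes the stylesheet it knows the page will need.

```typescript
import http2 from "node:http2";
import fs from "node:fs";

const server = http2.createSecureServer({
  key: fs.readFileSync("key.pem"),  // placeholder certificate files
  cert: fs.readFileSync("cert.pem"),
});

server.on("stream", (stream, headers) => {
  if (headers[":path"] === "/index.html") {
    // Push the stylesheet before the browser even asks for it.
    stream.pushStream({ ":path": "/style.css" }, (err, pushStream) => {
      if (err) return;
      pushStream.respond({ ":status": 200, "content-type": "text/css" });
      pushStream.end("body { margin: 0 }");
    });
    stream.respond({ ":status": 200, "content-type": "text/html" });
    stream.end('<link rel="stylesheet" href="/style.css">');
  }
});

server.listen(8443);
```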

  • Like HTTP/1.1, HTTP/2 is still built on TCP, and TCP was designed around a single ordered stream. You can think of a TCP connection as a virtual pipe between two computers: data to be transferred enters the pipe in order at one end and eventually appears in the same order at the other end

During TCP transmission, the blocking caused by the loss of a single packet is called TCP head-of-line blocking.

We know that in HTTP/2 multiple requests run inside a single TCP pipe, so if packet loss occurs in any of the streams, all requests on that TCP connection are blocked.

This is different from HTTP/1.1: there, the browser opens six TCP connections per domain, so if one of them is blocked the other five can still transfer data. As the packet loss rate increases, HTTP/2's transmission efficiency therefore gets worse and worse; some test data show that HTTP/1.1 outperforms HTTP/2 once the packet loss rate reaches 2%.