
http1.0

HTTP/1.0 came into use on the web in 1996, when it served simple web pages and network requests. It is a stateless, connectionless application-layer protocol

http1.1

  1. HTTP/1.1 is a text-based protocol: requests and responses are parsed as plain text
  2. HTTP/1.1 adds caching (strong caching and negotiated caching)
  3. HTTP/1.1 supports persistent (long-lived) connections and request pipelining (see the sketch after this list)
  4. HTTP/1.1 flow control is tied to the TCP connection: when a connection is established, both ends create buffers through the operating system's default mechanism, and because HTTP/1.1 relies on the transport layer to avoid overflow, each TCP connection needs its own flow-control mechanism
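A minimal Go sketch of persistent connections: the default http.Transport keeps HTTP/1.1 connections alive, and httptrace shows that requests after the first reuse the same TCP connection. The local test server here is just a stand-in for any HTTP/1.1 server.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"net/http/httptrace"
)

func main() {
	// A throwaway local server; any HTTP/1.1 server would do.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	}))
	defer srv.Close()

	client := srv.Client() // the default Transport keeps idle connections alive

	for i := 0; i < 3; i++ {
		trace := &httptrace.ClientTrace{
			GotConn: func(info httptrace.GotConnInfo) {
				// After the first request, the same TCP connection is reused.
				fmt.Printf("request %d: connection reused = %v\n", i, info.Reused)
			},
		}
		req, _ := http.NewRequest("GET", srv.URL, nil)
		req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

		resp, err := client.Do(req)
		if err != nil {
			panic(err)
		}
		io.Copy(io.Discard, resp.Body) // drain the body so the connection can be reused
		resp.Body.Close()
	}
}
```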

Cache handling (strong and negotiated caching)

Browser caching improves performance. It is divided into strong caching and negotiated caching, both of which read the resource from the client

  1. On a strong cache hit, the browser sends no request at all; it reads the resource straight from the local cache and reports a 200 status code
  2. The HTTP headers Expires and Cache-Control both indicate how long a resource may be cached. Expires sets an absolute expiration date, while Cache-Control offers more options and finer control; if both are set, Cache-Control takes priority over Expires (see the sketch after this list)
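A minimal Go sketch of strong caching, with a hypothetical /static/app.css resource: the handler sets both Cache-Control and Expires, and Cache-Control wins if the two disagree.

```go
package main

import (
	"net/http"
	"time"
)

// serveStatic marks a resource as strongly cacheable: the browser may serve it
// from its local cache without contacting the server again until max-age
// (or Expires) passes.
func serveStatic(w http.ResponseWriter, r *http.Request) {
	// Cache-Control takes priority over Expires when both are present.
	w.Header().Set("Cache-Control", "public, max-age=86400") // 1 day
	w.Header().Set("Expires", time.Now().Add(24*time.Hour).UTC().Format(http.TimeFormat))
	w.Write([]byte("/* cached asset body */"))
}

func main() {
	http.HandleFunc("/static/app.css", serveStatic)
	http.ListenAndServe(":8080", nil)
}
```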

Negotiated caching

  1. The server checks whether the cached resource is still usable: it matches the validators carried in the request headers, and on a match it returns a 304 status code with fresh response headers, telling the client to take the body from its cache
  2. A normal refresh triggers negotiated caching and bypasses the strong cache; the strong cache is used only when you enter a URL in the address bar, open a bookmark, reference a resource through a link, and so on
  3. If the cached copy has expired, the browser sends a request to the server carrying If-None-Match and If-Modified-Since headers, and the flow returns to step 1 (see the sketch after this list)
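A minimal Go sketch of negotiated caching with a hypothetical, hard-coded ETag: when the client's If-None-Match matches the current ETag, the handler answers 304 and sends no body, so the browser reuses its cached copy.

```go
package main

import "net/http"

// serveWithETag revalidates the client's cached copy on every request.
func serveWithETag(w http.ResponseWriter, r *http.Request) {
	const etag = `"v1-abc123"` // in practice derived from the content (hash, version, ...)

	w.Header().Set("ETag", etag)
	w.Header().Set("Cache-Control", "no-cache") // always revalidate with the server

	if r.Header.Get("If-None-Match") == etag {
		w.WriteHeader(http.StatusNotModified) // 304: client keeps using its cached body
		return
	}
	w.Write([]byte("full response body"))
}

func main() {
	http.HandleFunc("/api/data", serveWithETag)
	http.ListenAndServe(":8080", nil)
}
```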

http2

  1. HTTP/2 offers significant performance improvements over HTTP/1.1
  2. HTTP/2 multiplexes many streams over a single connection
  3. HTTP/2 adds header compression
  4. HTTP/2 uses a new binary format: a binary framing layer encapsulates all messages in binary while keeping HTTP semantics intact
  5. HTTP/2 lets clients and servers implement their own flow control instead of relying on the transport layer: the two sides exchange available buffer sizes so that each can maintain its own receive window on every multiplexed stream
  6. HTTP/2 allows the server to proactively “push” responses into the client's cache (see the sketch after this list)
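A minimal Go sketch of HTTP/2 server push: Go's net/http negotiates HTTP/2 automatically over TLS, and a handler can push a resource through the http.Pusher interface. The cert.pem and key.pem paths are placeholders for your own certificate files.

```go
package main

import (
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Server push: proactively send /style.css into the client's cache
	// before it is requested. Only available on HTTP/2 connections.
	if pusher, ok := w.(http.Pusher); ok {
		if err := pusher.Push("/style.css", nil); err != nil {
			log.Printf("push failed: %v", err)
		}
	}
	w.Write([]byte(`<link rel="stylesheet" href="/style.css"><p>hello h2</p>`))
}

func main() {
	http.HandleFunc("/", handler)
	http.HandleFunc("/style.css", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/css")
		w.Write([]byte("p { color: teal; }"))
	})
	// net/http negotiates HTTP/2 automatically over TLS (ALPN "h2").
	log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", nil))
}
```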

Http2 header compression

  1. HTTP/2 header compression is known as HPACK. The HPACK format is intentionally simple and inflexible, which reduces the risk of interoperability or security problems caused by implementation errors (see the sketch after this list)
  2. HTTP/1.1 has no header compression, so redundant header fields unnecessarily consume bandwidth and measurably increase latency as the number of requests grows. HPACK eliminates those redundant fields, limits exposure to known security attacks, and keeps memory requirements bounded so it can be used in constrained environments
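A minimal sketch of HPACK encoding and decoding, assuming the golang.org/x/net/http2/hpack package: common fields such as ":method: GET" hit HPACK's static table and encode to very few bytes, and a decoder with a 4 KB dynamic table recovers the original header list.

```go
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/net/http2/hpack"
)

func main() {
	var buf bytes.Buffer
	enc := hpack.NewEncoder(&buf)

	// Fields in the static table encode compactly; repeated custom fields
	// enter the dynamic table and shrink on subsequent requests.
	enc.WriteField(hpack.HeaderField{Name: ":method", Value: "GET"})
	enc.WriteField(hpack.HeaderField{Name: ":path", Value: "/index.html"})
	enc.WriteField(hpack.HeaderField{Name: "user-agent", Value: "example-client/1.0"})

	fmt.Printf("encoded header block: %d bytes\n", buf.Len())

	// Decoding emits the fields back; 4096 is the dynamic table size limit.
	dec := hpack.NewDecoder(4096, func(f hpack.HeaderField) {
		fmt.Printf("decoded: %s: %s\n", f.Name, f.Value)
	})
	dec.Write(buf.Bytes())
}
```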

Http2 multiplexing

  1. The key to HTTP performance optimization is not high bandwidth, but low latency
  2. TCP connections “tune” themselves over time: the maximum speed of a connection is limited at first and rises as data is transferred successfully. This tuning is called TCP slow start, and it makes bursty, short-lived HTTP connections very inefficient
  3. HTTP/2 uses TCP connections more efficiently by letting all data streams share the same connection, so high bandwidth can genuinely translate into HTTP performance gains. HTTP/1.1 also suffers from head-of-line blocking, which hurts performance when a request stalls or times out (see the sketch after this list)
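A minimal Go sketch of multiplexing from the client side: ten concurrent requests to a local HTTP/2 server (for example, the push server above) are carried as separate streams on one TCP connection instead of ten slow-starting ones. InsecureSkipVerify is only to let the sketch run against a self-signed test certificate.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"sync"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig:   &tls.Config{InsecureSkipVerify: true}, // test-only
			ForceAttemptHTTP2: true,
		},
	}

	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			// Over HTTP/2 these requests become separate streams multiplexed
			// on a single TCP connection, sharing its congestion window.
			resp, err := client.Get("https://localhost:8443/")
			if err != nil {
				fmt.Println("request", n, "error:", err)
				return
			}
			io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
			fmt.Println("request", n, "proto:", resp.Proto) // expect HTTP/2.0
		}(i)
	}
	wg.Wait()
}
```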

http3

HTTP/3 runs on top of a new transport-layer protocol, QUIC; it is a new specification of HTTP defined to run over QUIC. Earlier HTTP versions ran over TCP. QUIC is intended to replace TCP and to address some of its shortcomings
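A minimal HTTP/3 client sketch, assuming the third-party github.com/quic-go/quic-go/http3 package (type names vary between library versions, so treat this as illustrative rather than definitive): the transport dials the server over QUIC on UDP instead of TCP.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"

	"github.com/quic-go/quic-go/http3"
)

func main() {
	// http3.RoundTripper speaks HTTP/3 over QUIC (UDP); TLS 1.3 is built in.
	rt := &http3.RoundTripper{
		TLSClientConfig: &tls.Config{},
	}
	defer rt.Close()

	client := &http.Client{Transport: rt}

	resp, err := client.Get("https://cloudflare-quic.com/") // a public HTTP/3 endpoint
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println("protocol:", resp.Proto, "bytes:", len(body)) // expect HTTP/3.0
}
```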

tcp

  1. TCP has no stream-level multiplexing: it serializes everything into a single byte stream, so it is unaware of object boundaries inside TCP segments and cannot multiplex independent objects over the same connection
  2. TCP performs redundant communication: the three-way connection handshake involves redundant rounds of message exchange
  3. Data transfer can stall intermittently: because TCP processes segments strictly in order, a lost segment holds up everything behind it, a flaw known as head-of-line blocking (see the sketch after this list)
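A minimal Go sketch of points 1 and 3: two logically separate objects written to one TCP connection arrive as a single, strictly ordered byte stream with no object boundaries, which is why a lost segment belonging to the first object would also delay the second.

```go
package main

import (
	"fmt"
	"io"
	"net"
)

func main() {
	ln, _ := net.Listen("tcp", "127.0.0.1:0")
	defer ln.Close()

	go func() {
		conn, _ := net.Dial("tcp", ln.Addr().String())
		defer conn.Close()
		// Two logically independent "objects" sent on the same connection...
		conn.Write([]byte("object-A"))
		conn.Write([]byte("object-B"))
	}()

	conn, _ := ln.Accept()
	defer conn.Close()

	// ...arrive as one strictly ordered byte stream with no object boundaries.
	// If a segment of object-A were lost, the kernel would also hold back
	// object-B until the retransmission arrives (head-of-line blocking).
	data, _ := io.ReadAll(conn)
	fmt.Printf("received %d bytes: %q\n", len(data), data)
}
```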

QUIC

  1. QUIC also has header compression, with optimized support for out-of-order delivery and a better compression ratio
  2. Connections are established over UDP instead of TCP, which speeds up connection setup and reduces latency
  3. TCP by itself cannot solve head-of-line blocking; QUIC solves this problem
  4. The connection ID enables HTTP/3 to support connection migration and NAT rebinding