Some differences between HTTP1.0 and HTTP1.1

Cache handling

In HTTP1.0, the If-Modified-Since header (which compares the resource's last modification time) and the Expires header (which depends on the client's local clock) are the criteria for cache validation.

HTTP1.1 introduces more cache control policies:

  • Entity tag (ETag): an opaque identifier for the current version of a resource
  • If-Unmodified-Since: succeeds only if the resource has not been modified since the given time
  • If-Match: succeeds only if the resource's ETag matches the one supplied
  • If-None-Match: succeeds only if the resource's ETag does not match (used for cache validation)

More cache headers are available to control the cache strategy.
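To make the validation flow concrete, here is a minimal server-side sketch of how these headers interact. The function name `validate_cache` and the example resource (ETag `"abc"`, modified on 1 Jan 2024) are illustrative, not from any particular framework; real servers also handle weak validators and malformed dates.

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def validate_cache(request_headers, etag, last_modified):
    """Decide whether the client's cached copy is still fresh (sketch).

    Returns 304 (Not Modified) if the client's validators match the
    current resource, else 200 (send the full body again).
    """
    # HTTP1.1: If-None-Match compares ETags and takes precedence.
    if "If-None-Match" in request_headers:
        return 304 if request_headers["If-None-Match"] == etag else 200
    # HTTP1.0 fallback: If-Modified-Since compares timestamps.
    ims = request_headers.get("If-Modified-Since")
    if ims and parsedate_to_datetime(ims) >= last_modified:
        return 304
    return 200

# Example resource: last modified on 1 Jan 2024, ETag "abc".
LAST_MODIFIED = datetime(2024, 1, 1, tzinfo=timezone.utc)
ETAG = '"abc"'
```

Note that the ETag check runs first: timestamps have only one-second resolution and depend on clocks, while an entity tag changes whenever the representation changes, which is why HTTP1.1 introduced it.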

Bandwidth optimization

HTTP1.0 wastes bandwidth in some scenarios: for example, a client may need only part of an object, but the server sends the whole object, and resumable downloads are not supported.

HTTP1.1 supports range requests (the Range and Content-Range headers, with the 206 Partial Content status), which enable partial transfers and resumable downloads.
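A sketch of how a server might interpret a simple single-range Range header; the function name is hypothetical, and multi-range requests (`bytes=0-99,200-299`) and malformed input are deliberately ignored here.

```python
def parse_range(header, total):
    """Parse a simple single-range header like 'bytes=0-499' (sketch).

    Returns an inclusive (start, end) pair, or None if the header is
    absent or not a bytes range. Suffix ranges like 'bytes=-500'
    (the last 500 bytes) are also handled.
    """
    if not header or not header.startswith("bytes="):
        return None
    spec = header[len("bytes="):]
    start_s, _, end_s = spec.partition("-")
    if start_s == "":                          # suffix range: last N bytes
        length = int(end_s)
        return (max(total - length, 0), total - 1)
    start = int(start_s)
    end = int(end_s) if end_s else total - 1   # open-ended: to end of object
    return (start, min(end, total - 1))
```

The server would then respond with status 206, `Content-Range: bytes start-end/total`, and only `body[start:end+1]` as the payload; this is exactly what lets an interrupted download resume where it left off.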

The Host header processing

HTTP1.0 assumes that each server is bound to a unique IP address, so the URL in the request message does not carry a hostname. With the development of virtual hosting, however, a single physical server can host multiple virtual hosts that share one IP address. HTTP1.1 therefore requires request messages to carry the Host header field; a request without one is rejected with 400 Bad Request.
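A toy illustration of why Host is mandatory: two sites resolve to the same IP, so only the Host header can tell the server which one the client wants. The hostnames and document roots below are made up for the example.

```python
# Hypothetical mapping from Host header to a site's document root;
# both names resolve to the same IP, so only Host distinguishes them.
VHOSTS = {
    "blog.example.com": "/srv/blog",
    "shop.example.com": "/srv/shop",
}

def route(headers):
    """Pick a document root by Host, or fail with 400 as HTTP1.1 requires."""
    host = headers.get("Host", "").split(":")[0]   # strip an optional :port
    if not host:
        return (400, None)                          # missing Host -> Bad Request
    return (200, VHOSTS.get(host))
```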

Persistent connections

HTTP1.0 requires the Connection: keep-alive header to ask the server for a persistent connection, whereas HTTP1.1 uses persistent connections by default, avoiding the cost of establishing a new connection for every request.

HTTP is based on the TCP/IP protocol suite. Creating a TCP connection requires a three-way handshake, which carries a certain overhead. If a connection has to be re-established for every request, performance suffers. It is therefore best to keep a long-lived connection open and use it for multiple requests.

HTTP1.1 supports persistent connections and pipelining, which deliver multiple HTTP requests and responses over a single TCP connection, reducing the cost and latency of repeatedly establishing and closing connections.
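The effect can be observed with the standard library alone: the sketch below starts a throwaway HTTP/1.1 server on localhost and sends several requests over one `HTTPConnection`, so only one TCP handshake is paid. The handler is minimal (every GET answers `ok`); a real server would of course do more.

```python
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"          # keep-alive is the default here

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):          # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One HTTPConnection, several requests: the underlying TCP socket is
# created once and then reused for every request on this connection.
conn = HTTPConnection("127.0.0.1", server.server_address[1])
statuses = []
for _ in range(3):
    conn.request("GET", "/")
    resp = conn.getresponse()
    resp.read()                            # drain the body so the socket can be reused
    statuses.append(resp.status)
conn.close()
server.shutdown()
```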

Management of error notifications

HTTP1.1 adds 24 new error status codes. For example, 409 (Conflict) indicates that the request conflicts with the current state of the resource, and 410 (Gone) indicates that a resource has been permanently removed from the server.

New Request Mode

  • PUT: asks the server to store a resource at the given URI
  • DELETE: asks the server to delete the identified resource
  • OPTIONS: queries the capabilities of the server, or the options and requirements associated with a resource
  • CONNECT: reserved for use with proxies that can switch to being a tunnel
  • TRACE: asks the server to echo back the received request, for testing or diagnosis

Differences between HTTP2.0 and HTTP1.x

The problems of HTTP1.x can be summarized as follows: requests block. Browsers limit the number of simultaneous requests to the same domain, and any request beyond that limit is blocked behind the others.

Binary framing

HTTP1.x parsing is text-based, and text-based protocol parsing has natural defects: text can take many forms, so many cases must be handled to achieve robustness. Binary is different, recognizing only combinations of 0 and 1. For this reason, HTTP2.0 adopts a binary format, which is convenient and robust to parse.

HTTP2.0 adds a binary framing layer between the application layer (HTTP2.0) and the transport layer (TCP). Without changing the semantics, methods, status codes, URIs, or header fields of HTTP1.x, it overcomes the performance limitations of HTTP1.1, improves transmission performance, and achieves low latency and high throughput. In the binary framing layer, HTTP2.0 splits all transmitted information into smaller messages and frames and encodes them in binary format: the HTTP1.x headers are encapsulated in HEADERS frames, and the request body is wrapped in DATA frames.

  • Frame: the smallest unit of HTTP2.0 data communication.

  • Message: a logical HTTP message in HTTP2.0, such as a request or a response; a message consists of one or more frames.

  • Stream: a virtual channel within a connection. Streams can carry bidirectional messages, and each stream has a unique integer ID.
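The binary format is simple enough to sketch directly. Every HTTP/2 frame starts with a fixed 9-byte header: a 24-bit payload length, an 8-bit type, an 8-bit flags field, and a reserved bit followed by a 31-bit stream identifier. The helpers below pack and unpack that header; error handling and the full set of frame types are omitted.

```python
import struct

# Two of the frame types defined by the HTTP/2 spec.
DATA, HEADERS = 0x0, 0x1

def pack_frame(ftype, flags, stream_id, payload):
    """Build a wire-format frame: 9-byte header + payload (sketch)."""
    header = struct.pack("!I", len(payload))[1:]              # low 3 bytes = 24-bit length
    header += struct.pack("!BBI", ftype, flags, stream_id & 0x7FFFFFFF)
    return header + payload

def unpack_frame(buf):
    """Inverse of pack_frame: returns (type, flags, stream_id, payload)."""
    length = int.from_bytes(buf[:3], "big")
    ftype, flags, stream_id = struct.unpack("!BBI", buf[3:9])
    return ftype, flags, stream_id & 0x7FFFFFFF, buf[9:9 + length]
```

Because every frame self-describes its length, type, and stream, a receiver can parse the byte stream without any of the ambiguity of text-based parsing.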

Multiplexing

Multiplexing allows multiple request-response exchanges to be carried simultaneously over a single HTTP2.0 connection; this connection sharing improves utilization and reduces latency. Each request corresponds to a stream with its own ID, so one connection can carry many requests at once. Frames belonging to different requests can be freely interleaved on the wire, and the receiver reassembles them into the correct requests by stream ID.

In the HTTP1.1 protocol, browser clients are limited in the number of requests for the same domain name at the same time. Requests exceeding the limit are blocked. This is one of the reasons why some sites have multiple static resource CDN domains.

Of course, HTTP1.1 can also establish several more TCP connections to support handling more concurrent requests, but creating TCP connections is inherently expensive.

TCP connections also need to warm up: TCP slow start raises the transmission rate gradually as data is confirmed delivered, so a freshly opened connection responds slowly to a sudden burst of requests. It is best to serve bursts of concurrent requests over an already-established connection.

HTTP2.0 can easily implement multi-stream parallelism without relying on the establishment of multiple TCP connections. The same domain requires only one TCP connection, eliminating the latency and memory consumption caused by multiple TCP connections. HTTP2.0 reduces the basic unit of HTTP protocol communication to frames that correspond to messages in a logical flow. Messages are exchanged bidirectionally over the same TCP connection in parallel.
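The reassembly step is easy to demonstrate: given frames arriving interleaved on the single connection, group their payloads by stream ID. This is a toy model (real HTTP/2 also tracks frame types, flags, and stream states), but it captures why interleaving is safe.

```python
from collections import defaultdict

def demultiplex(frames):
    """Reassemble per-stream payloads from interleaved frames (sketch).

    `frames` is a sequence of (stream_id, chunk) pairs in arrival order.
    Chunks from different streams may be freely interleaved, but within
    one stream their order is preserved, so concatenation suffices.
    """
    streams = defaultdict(bytearray)
    for stream_id, chunk in frames:
        streams[stream_id] += chunk
    return {sid: bytes(data) for sid, data in streams.items()}
```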

The header compression

The HTTP1.x header carries a large amount of information and is re-sent in full with every request. HTTP2.0 uses the HPACK algorithm to compress header data, reducing the size of the headers that must be transmitted. Both communicating parties cache a header field table and update headers differentially, avoiding repeated transmission of the same headers and shrinking the amount of data sent.

Headers are compressed with the following strategy:

  • HTTP2.0 uses “header tables” on both the client and server sides to track and store previously sent key-value pairs; the same data is no longer re-sent with every request and response;
  • the header table exists for the lifetime of the HTTP2.0 connection and is incrementally updated by both the client and the server;
  • each new header key-value pair is either appended to the end of the table or replaces a previous value in the table.
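The table idea can be shown with a toy encoder and decoder. This is deliberately not real HPACK (no static table, no Huffman coding, no eviction); it only illustrates sending a small index instead of a repeated key-value pair.

```python
def encode(headers, table):
    """Encode header pairs against a shared dynamic table (toy, not HPACK)."""
    out = []
    for pair in headers:
        if pair in table:
            out.append(("index", table.index(pair)))   # repeat: send the index only
        else:
            table.append(pair)                         # new: send literal, remember it
            out.append(("literal", pair))
    return out

def decode(encoded, table):
    """Decoder mirrors the encoder's table updates, so indices resolve."""
    result = []
    for kind, value in encoded:
        if kind == "index":
            result.append(table[value])
        else:
            table.append(value)
            result.append(value)
    return result
```

After the first request, a repeated header block costs only a handful of small indices, which is the “differential update” the text describes.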

Server push

Server push is a mechanism for sending data before the client requests it.

The server can proactively push other resources while sending the page HTML, instead of waiting for the browser to parse to the relevant position, issue a request, and then respond. For example, the server can push JS and CSS files to the client so that the client does not have to request them while parsing the HTML.

These server-pushed resources are cached on the client, so the client can load them directly from the local cache without a network round trip, which is naturally much faster.


