You’ve probably heard a lot about HTTP/2 lately. If you’ve looked at the features of Java 9, you’ll have noticed a new way to handle HTTP calls: a new HTTP Client intended to replace HttpURLConnection, which also provides support for WebSocket and HTTP/2. Tomcat, Undertow, and Jetty, the web containers supported by Spring Boot 2.0, all support HTTP/2 as well.
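As a quick taste, here is a minimal sketch of that client. The API was incubating in Java 9 (jdk.incubator.http) and was standardized in Java 11 as java.net.http, which is the form used below; the Akamai demo URL is only an illustrative HTTP/2 endpoint.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Http2Demo {
    public static void main(String[] args) throws Exception {
        // Prefer HTTP/2; the client falls back to HTTP/1.1
        // if the server does not support it.
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://http2.akamai.com/demo"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println("Protocol: " + response.version()); // e.g. HTTP_2
        System.out.println("Status:   " + response.statusCode());
    }
}
```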

So, in this article, let’s take a look at what HTTP/2 really is.

HTTP

HyperText Transfer Protocol (HTTP) is an application-layer protocol for distributed, collaborative, hypermedia information systems. HTTP was originally designed as a way to publish and receive HTML pages. Resources requested over HTTP or HTTPS are identified by Uniform Resource Identifiers (URIs).

HTTP is a stateless, request-and-response application-layer protocol that runs on top of TCP/IP and transmits its messages as plain ASCII text. It does not concern itself with how packets are transmitted; it mainly defines the communication format between the client and the server. Port 80 is used by default.
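To make this concrete, here is a minimal sketch that speaks HTTP/1.1 “by hand” over a plain TCP socket: the request is nothing more than ASCII text written to port 80, and the response comes back as text as well. The host example.com is just an illustrative target.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class RawHttpGet {
    public static void main(String[] args) throws Exception {
        // HTTP is just ASCII text sent over a TCP connection (port 80 by default).
        try (Socket socket = new Socket("example.com", 80)) {
            OutputStream out = socket.getOutputStream();
            String request =
                    "GET / HTTP/1.1\r\n" +
                    "Host: example.com\r\n" +
                    "Connection: close\r\n" +   // ask the server to close after responding
                    "\r\n";                     // blank line ends the request headers
            out.write(request.getBytes(StandardCharsets.US_ASCII));
            out.flush();

            // The response is text too: a status line, headers, then the body.
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```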

There are three major versions of HTTP, namely HTTP/1.0, HTTP/1.1, and HTTP/2. HTTPS is not a separate version; it is simply a secure variant of HTTP.

HTTP/1.0

HTTP/1.0 was released in May 1996. In order to improve the efficiency of the system, HTTP/1.0 stipulated that the browser and the server maintain only a short-lived connection: each browser request had to establish a new TCP connection to the server, and the server disconnected that TCP connection as soon as it finished processing the request. The server did not keep track of clients or record past requests.

Note that, as described above, in HTTP/1.0 the browser and server only maintain a transient connection that cannot be reused. This means only one request can be sent per TCP connection: once the response has been sent, the connection is closed, and requesting additional resources requires creating a new connection.

We know that setting up a TCP connection requires a three-way handshake, which takes time, so HTTP/1.0 performs poorly. Today, when a single web page contains many images, videos, and other resources, HTTP/1.0 clearly cannot meet the performance requirements.

HTTP/1.1

HTTP/1.1 was created in 1999 to address the shortcomings of HTTP/1.0. Its main improvement is the introduction of persistent connections: multiple HTTP requests and responses can be sent over a single TCP connection, reducing the cost and latency of repeatedly establishing and closing connections.
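With the java.net.http client from the first example, connection reuse happens automatically: a single client instance keeps a pool of open connections, so sequential requests to the same host can typically ride on one TCP connection instead of opening a new one each time. The URLs below are only illustrative.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class KeepAliveDemo {
    public static void main(String[] args) throws Exception {
        // One client instance keeps a pool of open connections.
        HttpClient client = HttpClient.newHttpClient();

        HttpRequest first  = HttpRequest.newBuilder(URI.create("http://example.com/")).build();
        HttpRequest second = HttpRequest.newBuilder(URI.create("http://example.com/index.html")).build();

        // Both requests go to the same host, so the second one can typically
        // reuse the TCP connection opened for the first instead of paying for
        // another three-way handshake.
        System.out.println(client.send(first,  HttpResponse.BodyHandlers.discarding()).statusCode());
        System.out.println(client.send(second, HttpResponse.BodyHandlers.discarding()).statusCode());
    }
}
```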

With the introduction of persistent connections, HTTP improved significantly in terms of performance and became good enough for everyday use, which is why this version is still in use today. Of course, it still has shortcomings, which are discussed in detail below.

There are other differences between HTTP/1.0 and HTTP/1.1 that are not covered here; there is plenty of material on the Internet if you want to look them up yourself.

SPDY

Although HTTP/1.1 improves on HTTP/1.0 by providing persistent connections, there is still a lot of room for improvement.

It was these shortcomings of HTTP that led to the birth of SPDY. In 2009, Google unveiled its own SPDY protocol to address the inefficiencies of HTTP/1.1, with the goal of reducing page load times by 50%. SPDY mainly provides the following features (covered in more detail under HTTP/2 below):

  • Multiplexing: multiple requests share a single TCP connection.
  • Header compression: redundant HTTP headers are deleted or compressed.
  • Server push: a mechanism that lets the server initiate communication and push data to the client.

SPDY sits below HTTP and above TCP and SSL, making it easy to accommodate older versions of HTTP.

In fact, SPDY was popular for quite a while before HTTP/2 came along. Many well-known Internet companies adopted the SPDY family of protocols (the latest version being SPDY/3.1) on their websites or in their apps, because the performance improvement was obvious. The major browsers (Chrome, Firefox, Opera) supported SPDY, and it effectively became an industry standard. The HTTP Working Group finally decided to develop HTTP/2 based on SPDY/2.

HTTP/2

Below is an official demo created by Akamai to illustrate the difference in performance between HTTP/1.1 and HTTP/2: 379 images are requested at the same time, and the page loads in 4.54 s over HTTP/1.1 versus 1.47 s over HTTP/2.

HTTP/2 is the first update to the HTTP protocol since HTTP/1.1 was released in 1999, and it is based on the SPDY protocol. It was developed by the Hypertext Transfer Protocol Bis (httpbis) working group of the Internet Engineering Task Force (IETF). The working group submitted the HTTP/2 standard proposal to the IESG for discussion in December 2014, and it was approved on 17 February 2015. The HTTP/2 standard was officially published as RFC 7540 in May 2015.

Here’s how HTTP/2 improves over HTTP/1.1:

Binary framing

In HTTP/2, a new layer has been added between the application layer (HTTP/2) and the transport layer (TCP): the binary framing layer. This is the biggest change in HTTP/2, and its performance improvement over HTTP/1.1 is largely due to the introduction of this layer.

At the binary framing layer, HTTP/2 splits all transmitted information into smaller messages and frames and encodes them in binary format.
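To give a feel for what those frames look like, here is a small sketch of the 9-byte frame header defined in RFC 7540: a 24-bit payload length, an 8-bit type, an 8-bit flags field, and a 31-bit stream identifier (plus one reserved bit). This is an illustration, not a complete HTTP/2 codec.

```java
import java.nio.ByteBuffer;

// Minimal sketch of the 9-byte HTTP/2 frame header from RFC 7540:
// 24-bit payload length, 8-bit type, 8-bit flags, 1 reserved bit, 31-bit stream id.
public class FrameHeader {
    final int length;    // payload length in bytes
    final int type;      // e.g. 0x0 = DATA, 0x1 = HEADERS
    final int flags;
    final int streamId;

    FrameHeader(int length, int type, int flags, int streamId) {
        this.length = length;
        this.type = type;
        this.flags = flags;
        this.streamId = streamId;
    }

    static FrameHeader parse(byte[] bytes) {
        ByteBuffer buf = ByteBuffer.wrap(bytes);
        int length = ((buf.get() & 0xFF) << 16) | ((buf.get() & 0xFF) << 8) | (buf.get() & 0xFF);
        int type = buf.get() & 0xFF;
        int flags = buf.get() & 0xFF;
        int streamId = buf.getInt() & 0x7FFFFFFF; // drop the reserved high bit
        return new FrameHeader(length, type, flags, streamId);
    }
}
```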

This single-connection, multi-resource approach reduces pressure on the server, resulting in a smaller memory footprint and higher connection throughput. In addition, using fewer TCP connections reduces network congestion, and spending less time in TCP slow start makes recovery from congestion and packet loss faster.

Multiplexing

Multiplexing allows multiple request-response exchanges to be in flight simultaneously over a single HTTP/2 connection. Under HTTP/1.1, the browser limits the number of concurrent requests to the same domain, and requests beyond that limit are blocked until a connection frees up; HTTP/2 removes this bottleneck by letting all of them share one connection.

Once the TCP connection for HTTP/2 is established, subsequent requests are sent as streams, and the basic unit that makes up each stream is the frame. Clients and servers can break an HTTP message into discrete frames, send them out of order, and reassemble them at the other end.

In other words, all HTTP/2 communication happens over a single connection that can carry any number of bidirectional streams. For example, when I request the page http://www.hollischuang.com, every resource on that page is requested and delivered over one TCP connection between the client and the server!
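From the client’s point of view, multiplexing is as simple as firing several requests concurrently at one HTTP/2-capable client. The resource paths below are hypothetical, but against an HTTP/2 server these concurrent requests would all share a single TCP connection, each on its own stream.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class MultiplexingDemo {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();

        // Hypothetical resource paths on a single origin; over HTTP/2 these
        // concurrent requests are carried as separate streams on one connection.
        List<String> paths = List.of("/style.css", "/app.js", "/logo.png");

        List<CompletableFuture<HttpResponse<byte[]>>> futures = paths.stream()
                .map(p -> HttpRequest.newBuilder(URI.create("https://www.hollischuang.com" + p)).build())
                .map(req -> client.sendAsync(req, HttpResponse.BodyHandlers.ofByteArray()))
                .collect(Collectors.toList());

        for (CompletableFuture<HttpResponse<byte[]>> future : futures) {
            HttpResponse<byte[]> response = future.join();
            System.out.println(response.uri() + " -> " + response.statusCode());
        }
    }
}
```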

Header compression

HTTP/1.1 headers carry a lot of information and are re-sent in full with every request. To reduce this overhead, HTTP/2 uses the HPACK header compression algorithm to compress headers.
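The sketch below illustrates only the core idea behind HPACK, namely indexing: a header that has been sent once can later be referenced by a small table index instead of being retransmitted in full. Real HPACK (RFC 7541) additionally uses a predefined static table and Huffman coding for literal strings, neither of which is modeled here.

```java
import java.util.ArrayList;
import java.util.List;

// Deliberately simplified illustration of the indexing idea behind HPACK:
// once a header has been sent, both sides remember it in a table, and later
// occurrences are transmitted as a small index instead of the full text.
public class HeaderTableSketch {
    private final List<String> table = new ArrayList<>();

    /** Returns what would be sent on the wire for this header. */
    String encode(String header) {
        int index = table.indexOf(header);
        if (index >= 0) {
            return "index:" + index;     // just a few bits on the wire
        }
        table.add(header);
        return "literal:" + header;      // full text, but only the first time
    }

    public static void main(String[] args) {
        HeaderTableSketch encoder = new HeaderTableSketch();
        System.out.println(encoder.encode("user-agent: Mozilla/5.0")); // sent as a literal
        System.out.println(encoder.encode("user-agent: Mozilla/5.0")); // sent as an index
    }
}
```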

Server push

In simple terms, once the user’s browser and the server have established a connection, the server can proactively push resources to the browser, which caches them. Later, when the browser needs one of those cached resources, it can read it directly from the cache instead of making another request.
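On the Java side, the java.net.http client introduced earlier can receive pushed resources through an HttpResponse.PushPromiseHandler (Java 11+). Whether anything is actually pushed is entirely up to the server; the URL below is only an illustrative HTTP/2 endpoint.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

public class ServerPushDemo {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();

        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://http2.akamai.com/")).build();  // illustrative HTTP/2 server

        // Collect any resources the server decides to push alongside the main response.
        ConcurrentMap<HttpRequest, CompletableFuture<HttpResponse<String>>> pushed =
                new ConcurrentHashMap<>();
        HttpResponse.PushPromiseHandler<String> pushHandler =
                HttpResponse.PushPromiseHandler.of(
                        promise -> HttpResponse.BodyHandlers.ofString(), pushed);

        HttpResponse<String> main =
                client.sendAsync(request, HttpResponse.BodyHandlers.ofString(), pushHandler).join();

        System.out.println("Main response: " + main.statusCode());
        pushed.forEach((req, future) ->
                System.out.println("Pushed: " + req.uri() + " -> " + future.join().statusCode()));
    }
}
```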

