This is the first day of my participation in the Gwen Challenge.
Shortcomings of HTTP/1.1
1. Head-of-line blocking
HTTP/1.1 improved the efficiency of the HTTP protocol by introducing pipelining, which allows a client to send multiple requests over the same TCP connection without waiting for each response. However, the server must return responses in the order the requests were sent. So when a client requests several files in sequence and one request is blocked for some reason, all subsequent responses are blocked behind it. Because of this head-of-line blocking, the bandwidth cannot be fully utilized no matter how large it is.
[Question]
- What are the solutions to head-of-line blocking? Are there measures on the front end?
- Use multiple domain names. [Concurrent connections – a browser opens at most about 6 connections to the same domain; domain sharding – serve the same resources from different domain names, which effectively raises the concurrency limit]
- Use sprite images: combine several small images into one large image and let CSS/JS cut out the pieces, reducing the number of resource requests. What’s the problem? When one small image is updated, the whole sprite must be re-downloaded, which wastes a lot of bandwidth.
- Inline small images: encode the image binary with Base64 and embed the encoded data in HTML/CSS files to reduce the number of network requests. – [Front-end measure]
- Use tools such as Webpack to bundle and compress files. – [Front-end measure]
- Load on demand – reduce the number of HTTP requests on first load. – [Front-end measure]
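As a sketch of the inlining technique above: a small image can be converted into a `data:` URI and embedded directly in HTML/CSS. The helper name `to_data_uri` is illustrative, not a standard API.

```python
import base64

def to_data_uri(image_bytes: bytes, mime: str = "image/png") -> str:
    """Encode raw image bytes as a data: URI that can be embedded in HTML/CSS."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# A 4-byte stand-in for real image data; a real PNG would be much larger.
print(to_data_uri(b"\x89PNG"))  # data:image/png;base64,iVBORw==
```

The trade-off mirrors sprites: the encoded data is about 33% larger than the binary, and it cannot be cached separately from the page that embeds it.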
2. Plaintext transmission: insecure
HTTP/1.1 transmits data in plaintext, so it can be eavesdropped on or tampered with in transit; data security cannot be guaranteed.
3. Stateless, with large and repetitive headers
Because the HTTP protocol is stateless, every request must carry its HTTP headers again, and the headers can be large, especially when they carry cookies. The body is often small by comparison, making requests top-heavy.
4. Server push is not supported
HTTP/1.1 does not support server-pushed messages, so when the client needs to receive notifications, it can only poll the server repeatedly on a timer, which wastes a lot of bandwidth and server resources.
New features of HTTP/2
In response to HTTP/1.1’s shortcomings, HTTP/2 adds several new features. HTTP/2 is a replacement for the current HTTP protocol (HTTP/1.x), but it is not a rewrite: HTTP methods, status codes, and semantics are the same as in HTTP/1.x. HTTP/2 is based on SPDY and is focused on performance, with the biggest goal being to use only one connection between the user and the site.
1. Binary transmission
One of the key changes is a binary framing layer added between the application layer (HTTP/2) and the transport layer (TCP). In this layer, HTTP/2 splits all transmitted information into smaller messages and frames and encodes them in binary format. The HTTP/1.x header is carried in HEADERS frames, and the corresponding request body is carried in DATA frames. After HTTP/2 frames the data, the “Header+Body” packet structure disappears completely; the protocol sees only “fragments”. In HTTP/2, all communication under a domain name is done over a single connection that can carry any number of bidirectional streams. Each stream carries a message, which in turn consists of one or more frames. Frames can be sent out of order and are reassembled according to the stream ID at the beginning of each frame.
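The frame layout is fixed by the spec: every HTTP/2 frame begins with a 9-byte header holding a 24-bit payload length, an 8-bit type, 8-bit flags, and a reserved bit plus a 31-bit stream identifier (RFC 7540, §4.1). A minimal sketch of packing and unpacking that header:

```python
import struct

def pack_frame_header(length: int, frame_type: int, flags: int, stream_id: int) -> bytes:
    """Pack the 9-byte HTTP/2 frame header: 24-bit length, 8-bit type,
    8-bit flags, reserved bit + 31-bit stream identifier."""
    return struct.pack(">BHBBI",
                       (length >> 16) & 0xFF, length & 0xFFFF,
                       frame_type, flags, stream_id & 0x7FFFFFFF)

def unpack_frame_header(header: bytes):
    """Inverse of pack_frame_header; clears the reserved bit on read."""
    hi, lo, frame_type, flags, stream_id = struct.unpack(">BHBBI", header)
    return ((hi << 16) | lo, frame_type, flags, stream_id & 0x7FFFFFFF)

# A HEADERS frame (type 0x1) with END_HEADERS (0x4) on stream 1:
hdr = pack_frame_header(16, 0x1, 0x4, 1)
print(len(hdr), unpack_frame_header(hdr))  # 9 (16, 1, 4, 1)
```

Because every frame carries its stream ID up front, a receiver can interleave frames from different streams and still reassemble each message.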
2. Header compression (HPACK)
On the one hand, header field values are compressed with a static Huffman code before being sent. On the other hand, both the client and the server maintain an indexed header table storing fields that have already been transmitted; a repeated field is then replaced by its index number instead of being sent again, which reduces size and increases speed.
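A sketch of the indexing idea (not the wire format) — the table below copies a few real entries from the RFC 7541 static table, while `encode_headers` is an illustrative helper, and real HPACK would also Huffman-code the literal fields:

```python
# A few real entries from the RFC 7541 static table (Appendix A).
STATIC_TABLE = {
    (":method", "GET"): 2,
    (":path", "/"): 4,
    (":scheme", "https"): 7,
    (":status", "200"): 8,
}

def encode_headers(headers):
    """Replace known header fields with their table index; pass
    unknown fields through literally (sketch only)."""
    out = []
    for field in headers:
        idx = STATIC_TABLE.get(field)
        out.append(idx if idx is not None else field)
    return out

print(encode_headers([(":method", "GET"), (":path", "/"), ("x-custom", "abc")]))
# [2, 4, ('x-custom', 'abc')]
```

In real HPACK there is also a dynamic table, so even custom fields like `x-custom` only pay their full cost once per connection.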
3. Multiplexing
Multiplexing refers to the presence of multiple streams in one TCP connection. This means multiple requests can be in flight at once, and the peer can tell which request a frame belongs to by the stream identifier in the frame.
Multiplexing
HTTP/2 multiplexes the TCP connection so that both the client and the server can send multiple requests or responses concurrently over a single connection, without having to follow a fixed order, thus avoiding HTTP/1.1’s “head-of-line blocking”.
Data streams
HTTP/2 sends multiple requests/responses in parallel, interleaved with one another. HTTP/2 calls all the frames belonging to one request or response a stream. Each stream has a unique ID, and every frame must be tagged with the ID of the stream it belongs to so that the receiver can tell them apart. In addition, the protocol stipulates that streams initiated by the client must have odd IDs, and streams initiated by the server must have even IDs.
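The odd/even rule can be captured in a tiny generator; `stream_ids` is an illustrative helper, not part of any HTTP/2 library:

```python
from itertools import count

def stream_ids(is_client: bool):
    """Yield stream IDs in the order a peer may open them:
    client-initiated streams are odd, server-initiated ones even."""
    return count(1 if is_client else 2, step=2)

client = stream_ids(is_client=True)
print([next(client) for _ in range(3)])  # [1, 3, 5]
```

This split means the two peers can open new streams independently without ever colliding on an ID.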
Priority
In HTTP/2, each stream can declare a dependency on another stream and a weight (an 8-bit field giving weights from 1 to 256); a higher weight tells the peer to give the stream a larger share of resources.
4. Server Push
HTTP/2 allows a server to send resources to a client unsolicited; this is called server push. A common scenario: a client requests a web page that references many static resources. Normally, the client must receive the page, parse the HTML source, discover the static resources, and only then request them. But the server can anticipate that a client requesting the page will very likely request those static resources too, and proactively push them to the client along with the page.
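As one concrete example (hedged: the file paths here are illustrative, and nginx’s `http2_push` directive was added in 1.13.9 but removed again in 1.25.1, so this only applies to versions in between), an nginx server can push a stylesheet along with the page that references it:

```nginx
server {
    listen 443 ssl http2;

    location = /index.html {
        # Proactively push the stylesheet the page is known to reference,
        # so the browser does not have to parse the HTML to discover it.
        http2_push /styles/main.css;
    }
}
```

Pushed resources land in the browser cache, so a client that already has the file can cancel the push with RST_STREAM.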
5. Improved security
For compatibility reasons, HTTP/2 keeps HTTP/1’s plaintext mode and does not require encrypted communication. But HTTPS is the trend, and the major browsers declared that they would only support HTTP/2 over TLS, so in practice HTTP/2 is now mostly encrypted.