HTTP/2 multiplexing
The biggest problem with HTTP/1.1 is that, by default, each request requires its own TCP connection, with the familiar three-way handshake to open it and four-way wave to close it. This setup and teardown takes considerable time per request and is logically unnecessary: a page typically requests many resources in a row, so it would be far more efficient to establish one connection and then reuse that channel to download the remaining files. To address this, HTTP/1.1 provides keep-alive, which lets a single connection carry multiple request/response pairs. But two problems remain:
- HTTP/1.1 transfers data serially, so requests on one connection must be answered in order. Keep-alive therefore only saves connection-setup time, not data-transfer time: a slow response still blocks everything queued behind it.
- If we set the maximum number of concurrent connections to 300 in Apache, and the browser (e.g. Chrome) limits itself to 6 connections per domain, then the server can effectively serve only 300 / 6 = 50 concurrent users. When the 51st person accesses the server, they have to wait for one of the previous connections to be released.
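The keep-alive behavior described above can be sketched with Python's standard library. This is an illustrative toy, not a real deployment: a throwaway local server (paths and port chosen arbitrarily) answers two requests over one reused TCP connection, but the second request still has to wait for the first response, showing the serial ordering that keep-alive does not fix.

```python
import http.server
import http.client
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 enables keep-alive by default

    def do_GET(self):
        # Echo the request path back as the response body.
        body = self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Start a throwaway server on a free port.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One TCP connection, two sequential requests: the connection is reused
# (no second handshake), but each request must fully wait for the
# previous response before it can be sent.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/first")
r1 = conn.getresponse().read()
conn.request("GET", "/second")
r2 = conn.getresponse().read()
conn.close()
server.shutdown()
print(r1, r2)  # b'/first' b'/second'
```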
HTTP/2 introduces binary data frames and streams. Each frame is tagged with identifying information, so the browser can reassemble the received frames in the correct order without corrupting the data. And because frames are identified this way, the server can transfer data for multiple responses in parallel over one connection.
HTTP/2 carries all requests under the same domain as streams, which means that no matter how many files are fetched from one domain, only one connection is established. If Apache's maximum number of connections is still 300, this feature raises the maximum number of concurrent users from 50 to 300, a six-fold increase. HTTP/2 also delivers data in binary format, replacing HTTP/1.x's text format, which is more efficient to parse. Multiplexing replaces the serial, blocking model of HTTP/1.x, where multiple concurrent requests required multiple TCP connections and browsers capped TCP connections at 6-8 per domain for resource control.
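The frame-and-stream idea can be modeled with a short sketch. This is a deliberately simplified toy layout (stream id, sequence number, payload), not the real HTTP/2 wire format: frames from two hypothetical streams are interleaved on one "connection" and then reassembled independently on arrival, which is why interleaving does not corrupt either response.

```python
from collections import defaultdict

def split_into_frames(stream_id, data, frame_size=4):
    # Tag every chunk with its stream id and position within the stream.
    return [(stream_id, seq, data[i:i + frame_size])
            for seq, i in enumerate(range(0, len(data), frame_size))]

def interleave(*streams):
    # Round-robin over the streams, mimicking parallel delivery
    # of several responses on a single connection.
    queues = [list(s) for s in streams]
    wire = []
    while any(queues):
        for q in queues:
            if q:
                wire.append(q.pop(0))
    return wire

def reassemble(frames):
    # Group frames by stream id, then restore each stream's order
    # using the sequence numbers.
    buckets = defaultdict(list)
    for stream_id, seq, chunk in frames:
        buckets[stream_id].append((seq, chunk))
    return {sid: b"".join(chunk for _, chunk in sorted(chunks))
            for sid, chunks in buckets.items()}

html = split_into_frames(1, b"index.html content")
css = split_into_frames(3, b"styles.css rules")
out = reassemble(interleave(html, css))
print(out[1])  # b'index.html content'
print(out[3])  # b'styles.css rules'
```

Because every frame carries its stream id and position, the receiver never depends on arrival order, which is exactly what removes HTTP/1.x's serial constraint.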
HTTP/2:
- All communication under the same domain name is completed on a single connection, eliminating the delay and memory consumption caused by multiple TCP connections.
- Requests and responses can be interleaved in parallel on a single connection, with no interference between them.
Part two: a more vivid analogy
Head-of-line blocking is a performance-limiting phenomenon in computer networks: when the first packet in a queue is blocked, the entire queue is blocked behind it. It can occur, for example, in switches with buffered inputs, when transfers arrive out of order, or with multiple pipelined requests in HTTP/1.x. Think of a restaurant:
- HTTP/1: each waiter takes exactly one order, delivers that one item, and then handles nothing else. So for 10 items we would have to call 10 waiters. The restaurant has to keep hiring staff, and summoning a waiter for every single item wastes time.
- HTTP/1.1: the waiters have been retrained to take multiple orders, but each still delivers them one at a time. To order faster, you can call 2-3 waiters as needed. This is fairly good overall, but not optimal. These waiters are the persistent (keep-alive) connections.
- HTTP/2: the restaurant adopts new technology. A waiter still writes down all the orders, but now brings the dishes as each becomes ready, alternating between orders and labeling every dish with the order it belongs to. Waiting time drops sharply, and a single waiter can serve a whole table, cutting staffing costs.
Multiplexing sends multiple requests and responses interleaved in parallel over one connection, without their interfering with each other. Because all requests under the same domain are carried as streams on that single connection, same-domain requests no longer block one another waiting for a free parallel connection.