Server Push is described in detail in RFC7540 #section-8.2. In simple terms, HTTP/2 allows the Server to preemptively push a Response, along with the corresponding promised request, to the Client; the pushed response is associated with a request the Client previously initiated.
Its application scenarios are limited: Server Push only makes sense when the Server knows in advance which resources the Client will need, so push rules are usually configured ahead of time.
No Server Push
In HTTP/1.x, there is no Server Push. Suppose an index.html page involves three resources: the HTML itself, a CSS file, and a JS file. To load the page completely, you need three request-response exchanges, that is, three RTTs, as follows:
Server Push principle
The same index.html page involves three resources: the HTML itself, a CSS file, and a JS file. When index.html is requested (and hits a Server Push rule), the flow is as follows:
Compared with the figure above, the Server Push flow saves two requests (style.css and main.js) when loading index.html, thus reducing page load time.
To summarize, the principle of Server Push is as follows:
- The Client initiates a request.
- When the request matches a Server Push rule, the Server first responds with PUSH_PROMISE frames, promising to push resources on new Streams.
- The Server responds to the current request on that request's Stream.
- The Server pushes the promised resources on the promised Streams.
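This four-step flow can also be sketched in code. Go's standard library exposes Server Push through the http.Pusher interface, so a minimal, hypothetical server might look like the sketch below; the file paths and certificate names are illustrative assumptions, not part of the original article's setup.

package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/serverpush.html", func(w http.ResponseWriter, r *http.Request) {
		// Over HTTP/2, the ResponseWriter also implements http.Pusher.
		if pusher, ok := w.(http.Pusher); ok {
			// Step 2: send PUSH_PROMISE frames for the promised resources
			// before responding to the current request.
			if err := pusher.Push("/css/style.css", nil); err != nil {
				log.Printf("push /css/style.css failed: %v", err)
			}
			if err := pusher.Push("/js/main.js", nil); err != nil {
				log.Printf("push /js/main.js failed: %v", err)
			}
		}
		// Step 3: respond to the current request on its own Stream.
		// Step 4, pushing the promised resources, is carried out by the
		// server invoking the /css/ and /js/ handlers below.
		http.ServeFile(w, r, "html/serverpush.html")
	})
	// Handlers that serve the pushed resources themselves.
	http.Handle("/css/", http.FileServer(http.Dir(".")))
	http.Handle("/js/", http.FileServer(http.Dir(".")))

	// Browsers only speak HTTP/2 over TLS; cert.pem/key.pem are placeholders.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
}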
Let’s look at the two key frames used in Server Push.
PUSH_PROMISE
A PUSH_PROMISE frame (RFC7540 #section-6.6) is the key to Server Push: it tells the Client that a resource is about to be pushed on a given Stream. Its Payload format is as follows:
+---------------+
|Pad Length? (8)|
+-+-------------+-----------------------------------------------+
|R| Promised Stream ID (31) |
+-+-----------------------------+-------------------------------+
| Header Block Fragment (*) ...
+---------------------------------------------------------------+
| Padding (*) ...
+---------------------------------------------------------------+
1. Pad Length
An 8-bit field containing the length of the frame padding in octets. This field is present only if the PADDED flag is set.
2. R
A single reserved bit.
3. Promised Stream ID
An unsigned 31-bit integer identifying the Stream the Server reserves for the pushed response.
4. Header Block Fragment
A header block fragment containing the request header fields of the promised request, including the path of the resource to be pushed.
5. Padding
Padding octets.
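For illustration, here is a minimal, hypothetical Go sketch that decodes these fields from a raw PUSH_PROMISE payload. It deliberately skips what a real implementation (such as golang.org/x/net/http2) must also handle: CONTINUATION frames and HPACK decoding of the header block.

package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// PADDED flag bit for PUSH_PROMISE (RFC7540 #section-6.6).
const flagPadded = 0x8

// parsePushPromise extracts the payload fields described above.
func parsePushPromise(flags byte, payload []byte) (promisedStreamID uint32, headerBlock []byte, err error) {
	padLen := 0
	if flags&flagPadded != 0 {
		if len(payload) < 1 {
			return 0, nil, errors.New("payload too short for Pad Length")
		}
		padLen = int(payload[0]) // 1. Pad Length (8 bits)
		payload = payload[1:]
	}
	if len(payload) < 4+padLen {
		return 0, nil, errors.New("payload too short")
	}
	// 2 and 3: mask off the reserved bit R; the remaining
	// 31 bits are the Promised Stream ID.
	promisedStreamID = binary.BigEndian.Uint32(payload[:4]) & 0x7FFFFFFF
	// 4. Header Block Fragment: everything between the stream ID and the padding.
	headerBlock = payload[4 : len(payload)-padLen]
	return promisedStreamID, headerBlock, nil
}

func main() {
	// No padding, promised Stream 2, followed by a fake 3-byte header fragment.
	payload := []byte{0x00, 0x00, 0x00, 0x02, 0xAA, 0xBB, 0xCC}
	id, hb, err := parsePushPromise(0, payload)
	fmt.Println(id, len(hb), err) // prints: 2 3 <nil>
}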
RST_STREAM
The stream reset frame (RFC7540 #section-6.4). If a Client receives a PUSH_PROMISE frame and finds the resource already in its local cache, it sends RST_STREAM (Error Code = REFUSED_STREAM) to reject the stream, telling the Server: “I don’t need this resource anymore; stop pushing it.”
The Payload format of the RST_STREAM frame is as follows:
+---------------------------------------------------------------+
| Error Code (32) |
+---------------------------------------------------------------+
Two error codes are most relevant here:
1. REFUSED_STREAM: the stream is refused.
2. CANCEL: the stream is cancelled.
There are other error codes as well; see RFC7540 #section-7 for the full list.
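As a sketch of what the Client puts on the wire in that case, the following Go snippet (assuming the golang.org/x/net/http2 package) serializes an RST_STREAM frame that rejects promised Stream 2 with REFUSED_STREAM:

package main

import (
	"bytes"
	"fmt"

	"golang.org/x/net/http2"
)

func main() {
	var buf bytes.Buffer
	// The Framer writes frames into buf; the reader side is unused here.
	framer := http2.NewFramer(&buf, &buf)

	// Reject promised Stream 2: "I already have this resource, stop pushing."
	if err := framer.WriteRSTStream(2, http2.ErrCodeRefusedStream); err != nil {
		panic(err)
	}

	// 9-byte frame header plus the 4-byte Error Code payload (0x7 = REFUSED_STREAM).
	fmt.Printf("% x\n", buf.Bytes())
	// prints: 00 00 04 03 00 00 00 00 02 00 00 00 07
}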
The push race problem
Does Server Push always reduce page load (request-response) time? No. As soon as the Server has responded to the page request, it starts pushing on the promised Streams. Even if the Client detects the resources in its local cache and sends RST_STREAM frames to reject the pushes, the Server may, due to timing, already have started pushing before the RST_STREAM arrives, wasting Server bandwidth. Like this:
Let’s do a test: load a page with Server Push repeatedly. Because Chrome caches the resources, the Client sends RST_STREAM frames. The following figure shows the Wireshark packet capture:
As shown above, the Client sends RST_STREAM[6] and RST_STREAM[8], but the Server has already started to push data on Streams 6 and 8, wasting bandwidth.
Push test
Finally, let’s run a test from scratch to see how Server Push and No Server Push behave, using static resources as an example.
1. Prepare resources
First, prepare two HTML pages on the server, serverpush.html and noserverpush.html, with identical content:
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <script src="/js/main.js"></script>
  <link rel="stylesheet" href="/css/style.css">
  <title>Server Push</title>
</head>
<body>
  <p>hello server push</p>
</body>
</html>
Then prepare the /js/main.js and /css/style.css files.
2. Configure rules
Next, configure the Server Push rules for serverpush.html in nginx.conf:
location /serverpush.html {
root /root/tomcat/html;
index index.html index.htm;
http2_push /css/style.css;
http2_push /js/main.js;
}
Save and reload nginx (nginx -s reload).
3. Start the test
Then we request www.laoqingcai.com/serverpush…. Wireshark shows the packet exchange process as follows:
As shown above, the whole push process is simple:
- The Client requests /serverpush.html, opening Stream 1.
- The Server responds with PUSH_PROMISE frames on Stream 1, promising to push /css/style.css on Stream 2 and /js/main.js on Stream 4.
- The Server responds to serverpush.html on Stream 1.
- The Server pushes the promised resource /css/style.css on Stream 2.
- The Server pushes the promised resource /js/main.js on Stream 4.
The whole process matches the push principle analyzed above. The Wireshark capture for noserverpush.html is not shown here; if you are interested, capture the packets yourself.
4. Comparison of results
Let’s look at the actual benefits of Server Push. Two screenshots were taken: the first with Server Push, the second with No Server Push.
After many tests and comparisons (excluding the impact of network jitter), we found that pages loaded with Server Push were consistently about 20–30 ms faster than those without. As Chrome DevTools shows, the pushed resources skip several loading stages:
1. Stalled
The time the request stalls waiting for a connection, including connection negotiation.
2. Request Sent
The time spent sending the request.
3. Waiting (TTFB)
The elapsed time from sending the request to receiving the first byte of the response, which depends on network conditions and the Server’s processing capability.
The bottom line is that Server Push reduces the number of requests a page makes, which in turn reduces request wait time and round trips (RTTs); bandwidth is not the bottleneck here.
Conclusion
Finally, we conclude that Server Push can reduce the number of RTTs and the load wait time of page resources, ultimately reducing page load time, especially for pages with many static resources.
However, clients nowadays cache static resources, often for a long time. With Server Push, a push race may occur, and resources pushed before the RST_STREAM arrives waste Server bandwidth. Weigh the actual benefits and problems carefully before using this feature.
References
The original blog post
RFC7540
Ele.me’s Server Push