The Internet Engineering Task Force (IETF) submitted the HTTP/2 standard proposal to the Internet Engineering Steering Group (IESG) for discussion in December 2014; it was approved on 17 February 2015 and formally published as RFC 7540 in May 2015. – Wikipedia
In retrospect, five years have passed since the HTTP/2 standard was released. In those five years, front-end technology has changed dramatically: the gradual decline of Internet Explorer, the rise and flourishing of MVVM frameworks, and the emergence of WebAssembly.
Over those same five years, HTTP/2 has also been rolled out and adopted at a phenomenal rate.
According to the latest data, HTTP/2 browser support is at an impressive 96%.
See HTTP/2
How does HTTP/2 improve over HTTP/1.x?
The question above has become a staple of front-end interviews. Binary framing, multiplexing, header compression, server push: we could hardly be more familiar with these keywords.
With such high browser support and such a low adoption cost, HTTP/2 offers a noticeable performance boost that is hard to pass up, and websites in China now widely use it to improve loading performance and user experience.
But look at the HTTP/2 feature list again. Binary framing, multiplexing, and header compression need no further discussion: as soon as HTTP/2 is enabled, they start optimizing the site for free. So where did Server Push go?
We checked all the major portal sites in China and found no trace of Server Push. What is going on?
Why not Push
Briefly review the concept of Server Push:
Server Push is when the server sends resources to the browser before the browser has requested them.
The diagram above looks wonderful: when Server Push is used, the CSS is returned together with the HTML, saving one round trip (RTT).
Let's start with the first question:
Why is Server Push not widely used?
Is it because configuration is required?
Server Push is the only HTTP/2 feature that requires configuration by the developer; every other feature is handled automatically by the server and the browser, with no developer involvement at all.
Is it this need for configuration that keeps Server Push from catching on?
Nginx, Apache, and other gateways have long supported Server Push. A simple configuration change adds the following header to the HTML response:
Link: </style.css>; rel=preload; as=style
and with that, Server Push is enabled 😁.
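If you run your own origin in Node.js rather than behind a gateway, the same push can be issued from application code with the built-in http2 module. A minimal sketch, assuming index.html, style.css, and a self-signed certificate sit next to the script:

```typescript
import * as http2 from "http2";
import * as fs from "fs";

// Browsers only speak HTTP/2 over TLS, so a key/cert pair is assumed here.
const server = http2.createSecureServer({
  key: fs.readFileSync("server-key.pem"),
  cert: fs.readFileSync("server-cert.pem"),
});

server.on("stream", (stream, headers) => {
  if (headers[":path"] !== "/") return; // other routes omitted for brevity

  // Push style.css before the browser has had a chance to ask for it.
  stream.pushStream({ ":path": "/style.css" }, (err, pushStream) => {
    if (err) return; // e.g. the client disabled push via SETTINGS_ENABLE_PUSH
    pushStream.respondWithFile("style.css", { "content-type": "text/css" });
  });

  // Then answer the original request with the HTML document.
  stream.respondWithFile("index.html", { "content-type": "text/html" });
});

server.listen(8443);
```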
A line or two of configuration will hardly stop developers who are chasing every last bit of performance, so what else could be the reason?
CDN brings pain
The comparison between "Without Push" and "With Push" in the figure above covers only the simplest scenario, which is not how real production systems are set up.
In real scenarios, a CDN is generally used to serve resources, both to speed up loading and to reduce the load on the origin server. As a result, our loading sequence diagram looks like this:
As long as the CDN supports Server Push the way Nginx does, we can enable it with the corresponding header.
Looking further at CDN support for Server Push: on April 28, 2016, CloudFlare, the largest overseas provider, announced support for Server Push.
In China, however, the major CDN vendors seem to turn a blind eye to this HTTP/2 feature. The CDN product documentation of the two big cloud vendors, Tencent Cloud and Alibaba Cloud, does not mention it at all. Only one smaller vendor, Upyun ("Youpaiyun"), published a PR article about it, back in 2018:
Making the Internet faster: a detailed look at the Server Push feature and how to enable it
After a lot of searching, we found some clues:
An article on the Tencent Cloud+ Community begins by stating that Tencent Cloud already supports Server Push and has run performance tests on it.
Everything seems ready, yet the crucial last piece is missing, and the historical baggage is heavy: serving static resources and HTML from different domains makes Server Push over a CDN little more than a dream.
Take the common setup where the origin site and the CDN use different domains. The HTML, as the entry point of the web request, usually bypasses the CDN and is served directly by the origin Nginx, so that a stale CDN cache can never prevent users from receiving application updates.
Static resources such as JS, CSS, and images are served from the CDN domain.
The HTML and the CSS therefore live on different domains, and the CSS cannot be pushed at all.
Visit any major website in China and you will find that essentially all of them use this split between static resources and HTML. To make Server Push work, HTML and static resources would have to be brought under the same CDN-served primary domain.
Unifying the primary domain behind the CDN is a complex and risky project, especially while the business is running at full speed: it is like changing a tire on the highway. Such a plan has to go through layer upon layer of design, and this article will not explore it further.
Assuming the primary domain has been unified behind the CDN, will Server Push immediately improve performance?
An unknown quantity: Push Cache
At present, the server has no way of knowing which resources the client already has in its cache.
If the client already has a resource cached, pushing it simply wastes bandwidth. The browser can cancel the push with an RST_STREAM frame, but by then part of the resource is already in flight on the network.
Some explorations
Push CGI
An article I wrote earlier:
HTTP/2 Server Push
The article explains that, to sidestep both the different-domain problem between static resources and HTML and the Push Cache problem, we chose not to push static resources at all and to push CGI (API) responses instead.
Compared with preloading the CGI request, pushing it saves one RTT without any cache concerns, which yields a steady performance improvement.
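As a rough illustration of the idea (reusing the http2 server from the sketch above; the /api/user path and its payload are invented for the example), the stream handler pushes the first-screen API response together with the HTML, so the page does not pay an extra RTT for it after the JavaScript boots:

```typescript
// server is the http2 secure server created in the earlier sketch.
server.on("stream", (stream, headers) => {
  if (headers[":path"] !== "/") return;

  // Push the first-screen API (CGI) response alongside the HTML.
  stream.pushStream({ ":path": "/api/user" }, (err, pushStream) => {
    if (err) return;
    pushStream.respond({ ":status": 200, "content-type": "application/json" });
    pushStream.end(JSON.stringify({ uid: 1, nick: "demo" }));
  });

  stream.respondWithFile("index.html", { "content-type": "text/html" });
});
```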
Cache Digests (Draft)
The IETF has been discussing a draft, Cache Digests for HTTP/2, in which the client sends a digest of its cache along with its requests so that the server can tell which resources are already cached, solving Server Push's Push Cache problem.
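The draft is still evolving, but the server-side decision it enables looks roughly like the sketch below. This is purely illustrative: the real draft transmits a compact Golomb-coded set of hashed URLs, whereas here the client is simply assumed to send a comma-separated list of short hashes in a hypothetical cache-digest header.

```typescript
import { createHash } from "crypto";

// Hypothetical, simplified stand-in for the Cache Digests check:
// decide whether to push a resource based on the client's digest of its cache.
function shouldPush(resourcePath: string, cacheDigest: string | undefined): boolean {
  if (!cacheDigest) return true; // no digest sent: assume nothing is cached

  const digest = new Set(cacheDigest.split(","));
  const hash = createHash("sha256").update(resourcePath).digest("hex").slice(0, 8);

  // Push only when the digest does not claim the resource is already cached.
  return !digest.has(hash);
}

// e.g. inside the stream handler:
// if (shouldPush("/style.css", headers["cache-digest"] as string | undefined)) { /* push */ }
```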
Early Hints (Draft)
The 103 Early Hints informational status code is also still at an early stage of standardization. It lets the server send a small preliminary response right away, so the browser can start preloading resources while the server is still preparing the real response.
Compared with Server Push it avoids the Push Cache problem, but it does not perform quite as well.
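Recent Node.js releases expose this through response.writeEarlyHints(). A minimal sketch, in which the hypothetical renderPage() stands in for whatever slow work delays the real response:

```typescript
import * as http from "http";

const server = http.createServer(async (req, res) => {
  // 103 Early Hints: tell the browser what to preload while we keep rendering.
  res.writeEarlyHints({
    link: "</style.css>; rel=preload; as=style",
  });

  const html = await renderPage(); // stand-in for templates, database calls, etc.
  res.writeHead(200, { "content-type": "text/html" });
  res.end(html);
});

// Hypothetical slow server-side rendering step.
async function renderPage(): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, 200));
  return '<link rel="stylesheet" href="/style.css"><h1>Hello</h1>';
}

server.listen(8080);
```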
Performance data
Theoretical performance data
The following chart compares the theoretical costs of the various schemes (Preload, 103 Early Hints, Server Push, Server Push + Cache Digests):
(Note: the RTTs of the TLS handshake are not counted.)
As you can see, Server Push + Cache Digests has a clear performance advantage. Hopefully it will become the norm soon.
Measured data
The Nginx team benchmarked Server Push in a real-world scenario, as shown below:
Introducing HTTP/2 Server Push with NGINX 1.13.9
Server Push can also provide performance improvements in this scenario.
When to Push?
Under what circumstances should we use Server Push?
When the RTT is long
There is no doubt that when the round trip between client and server is long and bandwidth is plentiful, the RTT saved by Server Push is a worthwhile optimization.
This raises a new question: how long does the RTT have to be before Server Push is worth using?
An article from the Google Chrome team gives a rule of thumb:
Rules of Thumb for HTTP/2 Push
Roughly speaking, while the server waits for the browser's next request the connection sits idle for about one round trip, so it only makes sense to push up to about bandwidth × RTT bytes of data to fill that idle window.
Unfortunately, we do not normally know the user's bandwidth or RTT.
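As a back-of-the-envelope example (the numbers below are assumptions, not measurements): on a 5 Mbit/s link with a 100 ms RTT, only about 60 KB fits into the idle window, so pushing much more than that just delays the HTML itself.

```typescript
// Rough "push budget": bandwidth × RTT, i.e. how many bytes fit on the wire
// while the connection would otherwise sit idle waiting for the next request.
const rttSeconds = 0.1;                        // assumed 100 ms round trip
const bandwidthBytesPerSecond = (5 * 1e6) / 8; // assumed 5 Mbit/s link
const pushBudgetBytes = rttSeconds * bandwidthBytesPerSecond;

console.log(`~${Math.round(pushBudgetBytes / 1024)} KB of pushed data fits in the idle window`);
```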
First-time visitors
To avoid the Push Cache problem, we can push only on a user's first visit.
One option is to use a cookie to identify first-time visitors, but note that a cookie cannot fully describe the state of the static resource cache. For example:
The client's cookie may still be valid while its cached CSS has already expired. The server sees the valid cookie, decides not to push, and the client ends up fetching the CSS the slow way.
Of course, you could design a more elaborate cookie scheme that tells the server exactly when to push.
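A minimal sketch of the naive first-visit check described above, again on the Node.js http2 server from earlier (the cookie name h2pushed is invented for the example):

```typescript
// server is the http2 secure server created in the earlier sketch.
server.on("stream", (stream, headers) => {
  if (headers[":path"] !== "/") return;

  // Push only when the marker cookie is absent, i.e. on a (probable) first visit.
  const cookies = String(headers["cookie"] ?? "");
  const firstVisit = !cookies.includes("h2pushed=1");

  if (firstVisit) {
    stream.pushStream({ ":path": "/style.css" }, (err, pushStream) => {
      if (err) return;
      pushStream.respondWithFile("style.css", { "content-type": "text/css" });
    });
  }

  stream.respondWithFile("index.html", {
    "content-type": "text/html",
    // Remember that this client has (probably) cached the pushed assets now.
    "set-cookie": "h2pushed=1; Max-Age=31536000; Path=/",
  });
});
```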
Fortunately, ready-made schemes for this exist:
The H2O server offers a feature called cache-aware server push, which records the cached resources in a cookie so that the server knows which resources do not need to be pushed.
However, recording every resource path in a cookie takes up a lot of space, so the paths need to be compressed.
A Bloom filter can be used to shrink the cookie data; see this article, and the sketch after it:
NGINX supports HTTP/2 server push
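To give a feel for the compression idea (this is not H2O's actual implementation, which uses a Golomb-coded set in its cookie), here is a tiny Bloom filter that packs cached resource paths into a value small enough for a cookie:

```typescript
import { createHash } from "crypto";

// Minimal Bloom filter: K_HASHES bit positions per path over an M_BITS-bit array.
const M_BITS = 256; // 32 bytes before base64: small enough for a cookie
const K_HASHES = 3;

function positions(path: string): number[] {
  const digest = createHash("sha256").update(path).digest();
  const out: number[] = [];
  for (let i = 0; i < K_HASHES; i++) {
    out.push(digest.readUInt16BE(i * 2) % M_BITS);
  }
  return out;
}

function addPath(filter: Buffer, path: string): void {
  for (const p of positions(path)) filter[p >> 3] |= 1 << (p & 7);
}

function mightContain(filter: Buffer, path: string): boolean {
  // A "true" may be a false positive, so at worst a useful push is skipped;
  // a "false" is always correct, so already-cached files are never re-pushed.
  return positions(path).every((p) => (filter[p >> 3] & (1 << (p & 7))) !== 0);
}

// Client side: add each cached asset, send filter.toString("base64") in a cookie.
// Server side: skip pushing any path for which mightContain() returns true.
const filter = Buffer.alloc(M_BITS / 8);
addPath(filter, "/style.css");
console.log(mightContain(filter, "/style.css")); // true
console.log(mightContain(filter, "/app.js"));    // false (with high probability)
```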
Client rendering
If an application uses SSR (server-side rendering), pushed static resources compete with the HTML for bandwidth and can increase the first-screen time. Client-side rendering (CSR) is a better fit for Server Push.
Push only the current page; load the next page when the network is idle
Because Server Push competes for the client's already limited bandwidth, it can backfire in some scenarios.
Another reason is that a push may arrive over a cold TCP connection, and TCP slow start makes resources load more slowly than they would over a warm connection.
Prefetching the next page's resources is therefore better left to a Service Worker.
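A sketch of that approach: during idle time the page asks its Service Worker to warm the cache with the next page's resources over the already-established connection (the /next-page/* paths are made up for the example).

```typescript
// page.ts: once the network is idle, ask the Service Worker to prefetch.
requestIdleCallback(() => {
  navigator.serviceWorker.controller?.postMessage({
    type: "prefetch",
    urls: ["/next-page/app.js", "/next-page/style.css"], // hypothetical assets
  });
});

// sw.ts: fetch those resources over the warm connection and cache them.
self.addEventListener("message", (event: any) => {
  if (event.data?.type !== "prefetch") return;
  event.waitUntil(
    caches.open("next-page").then((cache) => cache.addAll(event.data.urls))
  );
});
```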
References
Introducing HTTP/2 Server Push with NGINX 1.13.9
Rules of Thumb for HTTP/2 Push
HTTP/2 Server Push
Making the Internet faster: a detailed look at the Server Push feature and how to enable it
Server Push best practices for HTTP/2
To push, or not to push? ! – The future of HTTP/2 server push – Patrick Hamann – JSConf EU 2018