Last time we looked at Nginx configuration for a single-page application; this time we'll take a brief look at page-load optimization from a front-end perspective. This only scratches the surface, since I'm not a professional Nginx tuner.
Page load
First, let's look at which phases of page loading cost the most time — for example, what the DevTools Network panel shows when we visit GitHub:
- Queued, Queueing: time the request spends waiting in the queue. HTTP/1.1 suffers from head-of-line blocking, and the browser opens at most 6 concurrent connections per domain, so requests wait while the browser allocates resources and schedules connections.
- DNS Lookup: resolving the domain name to an IP address.
- Initial Connection, SSL: establishing the connection to the server — the TCP handshake, plus the TLS handshake if you are on HTTPS.
- Request sent: the browser sends the request data.
- TTFB: waiting for the response to start coming back over the network, i.e. the time to first byte.
- Content Download: receiving the response data.
As you can see from the graph, connecting to the server and downloading the data take most of the time. DNS resolution would too, but it is cached locally, so it costs almost nothing.
Gzip – reduce the payload size
First we can gzip-compress our JS and CSS at build time. In `vue.config.js`:

```js
const CompressionWebpackPlugin = require('compression-webpack-plugin')

const productionGzipExtensions = ['js', 'css']

module.exports = {
  configureWebpack: (config) => {
    config.plugins.push(
      new CompressionWebpackPlugin({
        test: new RegExp('\\.(' + productionGzipExtensions.join('|') + ')$'),
        threshold: 8192, // only compress assets larger than 8 KB
        minRatio: 0.8    // only keep the .gz file if it shrinks below 80% of the original
      })
    )
  }
}
```
Then enable gzip in Nginx, inside the `server` block:

```nginx
# compress responses on the fly with gzip
gzip on;
gzip_min_length 1024;
gzip_buffers 4 16k;
gzip_comp_level 6;
gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript;
gzip_vary on;
gzip_disable "MSIE [1-6]\.";
# serve pre-compressed .gz files when they exist
gzip_static on;
```
To put it simply: with `gzip_static` on, Nginx automatically looks for a pre-built `.gz` version of the requested file and serves it if found. This is independent of whether `gzip` is on and of `gzip_types`; you can think of the `.gz` file as simply taking priority. `gzip`, by contrast, compresses the requested file on the fly, which costs CPU: a response whose `Content-Length` exceeds `gzip_min_length` is compressed before being returned. So if your build already produces `.gz` files, enabling `gzip_static` alone is enough; if not, enable `gzip` for real-time compression. I recommend the former.
To check whether gzip took effect, look for `Content-Encoding: gzip` in the response headers. In my case a 124 KB file came back as 44 KB — a substantial saving.
Cache control – No request is the best request
How the browser and server negotiate caching deserves an article of its own, and there are plenty of thorough write-ups elsewhere, so here I'll start straight from the configuration:
```nginx
location /mobile {
    alias /usr/share/nginx/html/mobile/;
    try_files $uri $uri/ /mobile/index.html;
    # HTML: negotiated cache only
    if ($request_filename ~ .*\.(htm|html)$) {
        add_header Cache-Control no-cache;
    }
    # JS/CSS: strong cache for 30 days
    if ($request_uri ~* /.*\.(js|css)$) {
        # add_header Cache-Control max-age=2592000;
        expires 30d;
    }
    index index.html;
}
```
Negotiated cache
Last-Modified
Our single-page app's entry file is index.html, which determines which JS and CSS get loaded, so we set `Cache-Control: no-cache` on HTML files to use the negotiated cache. On first load the status code is 200 and the server returns a `Last-Modified` timestamp indicating when the file last changed. On a refresh, the browser sends that timestamp back via `If-Modified-Since`. If the file has not changed (and the Etag check also passes), the server returns a 304 status code, saying "my file has not changed, just use your cache."
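The handshake above can be sketched as a small decision function (plain Node; `negotiate` is a hypothetical helper for illustration, a simplification of what the server does, not an Nginx API):

```javascript
// Decide 200 vs 304 from the file's Last-Modified time and the
// If-Modified-Since header the browser replays on refresh.
function negotiate(lastModified, ifModifiedSince) {
  if (ifModifiedSince && ifModifiedSince === lastModified) {
    return { status: 304 } // unchanged: the browser keeps using its cache
  }
  return {
    status: 200,
    headers: { 'Cache-Control': 'no-cache', 'Last-Modified': lastModified }
  }
}

const mtime = 'Wed, 21 Oct 2015 07:28:00 GMT'
console.log(negotiate(mtime, undefined).status) // first visit
console.log(negotiate(mtime, mtime).status)     // refresh, file unchanged
```

Note the request still travels to the server either way — a 304 only saves the response body, not the round trip.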
Etag
HTTP defines the Etag as an entity tag for the requested resource — think of it as an ID that changes whenever the file changes. It works much like `Last-Modified`: the server returns an Etag and the browser sends it back for validation. Be aware that different servers may compute Etags differently, which can break caching behind a distributed (load-balanced) setup; in that case you can turn Etags off and rely on `Last-Modified` alone.
Strong cache
When our single-page app is packaged, tools like webpack generate filenames from a hash of each file's contents — if a file doesn't change, neither does its hash. So we can use a strong cache for JS, CSS and similar assets: within the cache window the browser uses the cached copy directly, without making any request. For example, with `expires` (or `Cache-Control: max-age`, which has higher priority) set to 30 days, repeat visits within that window load the same JS and CSS as "200 (from cache)" without ever asking our server.
Note: all of this depends on the content hash in the filename. If instead you emit fixed names like 1.js and 2.js, then change the contents of 1.js and ship it under the same name, the browser will still serve the stale copy from cache without requesting it. In short: make sure a content change changes the emitted filename.
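In webpack this is exactly what `[contenthash]` in the output filename is for — a minimal sketch of the idea (field names come from webpack's documented output options; paths are illustrative):

```javascript
// vue.config.js sketch: tie the emitted filename to the file's contents,
// so the strong-cache key changes exactly when the code does.
module.exports = {
  configureWebpack: {
    output: {
      filename: 'js/[name].[contenthash:8].js',
      chunkFilename: 'js/[name].[contenthash:8].js'
    }
  }
}
```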
Force refresh
Strong caching is great when used well, but what if it's applied in the wrong place and the browser keeps serving stale content from cache? The common remedies are Ctrl+F5 or checking "Disable cache" in the browser devtools. Both make the browser add a `Cache-Control: no-cache` request header — "I don't want the cached copy" — so the request goes straight to the server.
Long connections – reduce the number of handshakes
TCP and TLS handshakes are fairly expensive. Before HTTP/1.1, each request needed its own connection and three-way TCP handshake, which was very costly; HTTP/1.1 uses persistent (keep-alive) connections by default, cutting that overhead. But if you ever upload a large file you'll find the connection drops after a while: Nginx closes idle keep-alive connections after 75s by default. When a page loads many resources, you can lengthen `keepalive_timeout` and raise `keepalive_requests` (the number of requests allowed over one connection) to reduce reconnects. As for large file uploads, chunked upload is the usual answer, which I won't cover here.
In the `server` block:

```nginx
keepalive_timeout 75;    # keep idle connections open for 75s
keepalive_requests 100;  # max requests served over one connection
```
HTTP/2 – more secure HTTP, faster HTTPS
The HTTP/2 spec does not require TLS, but browsers only speak HTTP/2 over it, so in practice you need SSL enabled. Header compression, streams, and multiplexing make much fuller use of the bandwidth and cut latency considerably. Enabling it in Nginx is straightforward:
```nginx
server {
    listen 443 ssl http2;
    ssl_certificate     /etc/nginx/conf.d/ssl/xxx.com.pem;
    ssl_certificate_key /etc/nginx/conf.d/ssl/xxx.com.key;
    ssl_ciphers ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-CHACHA20-POLY1305:ECDHE+AES128:!MD5:!SHA1;
    ssl_prefer_server_ciphers on;  # prefer the server's cipher order (mitigates BEAST attacks)
}
```
HSTS – reduce redirects
Most websites run on HTTPS now, but users rarely type `https://` into the address bar — the first request still lands on port 80, so the usual approach is a rewrite there:
```nginx
server {
    listen 80;
    server_name test.com;
    rewrite ^(.*)$ https://$host$1 permanent;
}
```
However, this redirect costs an extra request on every plain-HTTP visit. Can the browser go straight to HTTPS the next time? Yes — with HSTS. Leave port 80 unchanged and add this to the port-443 `server` block:
```nginx
add_header Strict-Transport-Security "max-age=15768000; includeSubDomains";
```
This tells the browser: my site strictly uses HTTPS, and plain HTTP is not allowed within the max-age window, so go straight to HTTPS next time. The browser then only hits the port-80 redirect on the very first visit; every later visit goes directly to HTTPS. (With `includeSubDomains` specified, the rule also applies to all subdomains of the site.)
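To make the header concrete, here's a small sketch of how a browser would read the two directives we set above (`parseHsts` is a hypothetical helper for illustration, not a real browser API):

```javascript
// Parse max-age and includeSubDomains out of a Strict-Transport-Security value.
function parseHsts(header) {
  const directives = header.split(';').map((d) => d.trim()).filter(Boolean)
  const maxAgeDirective = directives.find((d) => d.startsWith('max-age=')) || 'max-age=0'
  return {
    maxAge: Number(maxAgeDirective.slice('max-age='.length)),
    includeSubDomains: directives.includes('includeSubDomains')
  }
}

const policy = parseHsts('max-age=15768000; includeSubDomains')
console.log(policy) // the browser enforces HTTPS for maxAge seconds
```

15768000 seconds is about six months; once it expires without being refreshed, the browser falls back to the port-80 redirect again.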
Session Ticket – HTTPS session reuse
We know the SSL handshake is the expensive part of HTTPS traffic: asymmetric encryption protects the generation of the session key, and the actual traffic is then encrypted symmetrically with that key. Doing a full handshake on every refresh is too slow. Since both sides already hold the session key, they can simply resume with it — this is session reuse. The server encrypts the session state into a "session ticket" and sends it to the client. If the client reconnects later (within the timeout), it presents the ticket while establishing the next SSL connection; the server decrypts it, recovers the session key, and encrypted communication resumes without a full handshake.
```nginx
ssl_protocols TLSv1.2 TLSv1.3;  # only allow TLS 1.2 and above
ssl_session_timeout 5m;         # how long a session stays resumable
ssl_session_tickets on;         # enable session tickets
```
I can only work on this series on weekends and after work, so updates will come slowly. I hope it helps — a star, like, or bookmark is much appreciated.
This article address: xuxin123.com/web/nginx-s…