- Original article: 10X Performance Increases: Optimizing a Static Site
- By JonLuca De Caro
- Translated by: the Gold Miner Translation Project
- Permalink to this translation: github.com/xitu/gold-m…
- Translator: Starrier
- Proofreader: Dandyxu, Hopsken
10X Performance Increases: Optimizing a Static Site
A few months ago, I was traveling abroad and wanted to show a friend a link on my personal (static) website. I tried to browse to it, but it took far longer than I expected. There is absolutely no dynamic content on the site, just some animation and responsive design, and the content never changes. I was shocked by the results: DOMContentLoaded took 4 s, and the full page took 6.8 s to load. There were 20 requests, with 1 MB of data transferred in total, for a static site. I'm used to a 1 Gb/s, low-latency connection between Los Angeles and my server in San Francisco, which made this monster of a site seem lightning fast. In Italy, at 8 Mb/s, it was a very different story.
This was my first attempt at optimization of any kind. Until then, whenever I wanted to add a library or resource, I would simply pull it in and point to it with src="". I had paid no attention to performance in any form, from caching to inlining to lazy loading.
I started looking for people who had gone through the same thing. Unfortunately, much of the literature on static-site optimization goes out of date quickly; advice from 2010 or 2011 either discusses libraries, makes assumptions that simply no longer hold, or just repeats the same handful of principles over and over.
But I did find two great sources of information: High Performance Browser Networking and Dan Luu's experience optimizing static websites. While I didn't go as far as Dan in stripping down my formatting and content, I did manage to make my pages load roughly ten times faster. DOMContentLoaded takes about a fifth of a second, and the full page loads in just 388 ms (which is actually slightly misleading, for lazy-loading reasons explained below).
The Process
The first step in the process was to profile the site. I wanted to figure out what was taking the most time and how best to parallelize everything. I ran various tools to profile the site and test it from around the world, including:
- tools.pingdom.com/
- www.webpagetest.org/
- tools.keycdn.com/speed
- developers.google.com/web/tools/l…
- developers.google.com/speed/pages…
- webspeedtest.cloudinary.com/
Some of them offered suggestions for improvement, but there is only so much you can do when your static site is making 50 requests, from spacer GIFs left over from the '90s to resources that were no longer used (I was loading six fonts and using only one).
Timeline for my site. I tested this against the Web Archive because I didn't capture a screenshot of the original, but it looks very similar to what I saw a few months ago.
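As an aside (not one of the tools above), the two headline numbers, DOMContentLoaded and full page load, can also be read directly from the browser console with the standard Navigation Timing API. This is just a convenience snippet I'm adding for reference:
// Paste into the browser console after the page has finished loading;
// timings are in milliseconds relative to the start of navigation.
var nav = performance.getEntriesByType('navigation')[0];
console.log('DOMContentLoaded:', Math.round(nav.domContentLoadedEventEnd), 'ms');
console.log('Full page load:', Math.round(nav.loadEventEnd), 'ms');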
I wanted to improve everything I could control, from the content and speed of the JavaScript to the actual web server (Nginx) and the DNS settings.
Optimizations
Minify and merge resources
The first thing I noticed for both the CSS and the JS was that I was making around a dozen requests (without any kind of HTTP keepalive) to various sites, and some of them were over HTTPS. This adds multiple round trips to various CDNs or servers, and some of the JS files were requesting other files, causing the blocking cascade shown above.
I used webpack to combine all of my resources into a single JS file. Every time I change the content, it automatically minifies and bundles all of my dependencies into one file.
const UglifyJsPlugin = require('uglifyjs-webpack-plugin');
const ZopfliPlugin = require("zopfli-webpack-plugin");

module.exports = {
    entry: './js/app.js',
    mode: 'production',
    output: {
        path: __dirname + '/dist',
        filename: 'bundle.js'
    },
    module: {
        rules: [{
            test: /\.css$/,
            loaders: ['style-loader', 'css-loader']
        }, {
            test: /(fonts|images)/,
            loaders: ['url-loader']
        }]
    },
    plugins: [new UglifyJsPlugin({
        test: /\.js($|\?)/i
    }), new ZopfliPlugin({
        asset: "[path].gz[query]",
        algorithm: "zopfli",
        test: /\.(js|html)$/,
        threshold: 10240,
        minRatio: 0.8
    })]
};
I tried different configurations. Right now, the bundle.js file sits in the <head> of my site and is blocking. Its final size was 829 KB, including every non-image resource (fonts, CSS, all the libraries, dependencies, and the JS itself). The vast majority of that was the Font Awesome fonts, which accounted for 724 of the 829 KB.
I went through the Font Awesome library and removed every icon except the three I actually use: fa-github, fa-envelope, and fa-code. I used a service called Fontello to extract just the icons I needed. The new size is only 94 KB.
With the way the site is currently built, it doesn't render correctly with only the stylesheets, so I accepted the blocking nature of the single bundle.js. Its load time is about 118 ms, more than an order of magnitude better than before.
This also brings some additional benefits — I no longer point to a third-party resource or CDN, so the user does not need to (1) perform a DNS query for the resource, (2) perform an HTTPS handshake, and (3) wait for the resource to be fully downloaded.
While CDNs and distributed caching may make sense for a large, distributed site, they don't for my small static site. Whether that extra 100 ms or so is worth optimizing away is a trade-off worth weighing.
Compressing resources
I was loading an 8 MB profile photo and displaying it at 10% of its width and height. This wasn't just a lack of optimization; it was close to negligent with users' bandwidth.
I used webspeedtest.cloudinary.com/ to compress all of my images. It also suggested switching to WebP, but I wanted to stay compatible with as many browsers as possible, so I stuck with JPG. It's entirely possible to build a system that delivers WebP only to browsers that support it, but I wanted to keep things as simple as possible, and the benefit of adding that abstraction layer didn't seem obvious.
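For what it's worth, that abstraction layer doesn't have to be big. A rough client-side sketch of the idea (not something my site actually does) could detect WebP support and swap in WebP sources for images that carry a hypothetical data-webp attribute:
// Rough sketch of client-side WebP negotiation (not what this site does).
// Assumes markup like <img src="photo.jpg" data-webp="photo.webp">, where
// data-webp is a made-up attribute pointing at the WebP variant.
function supportsWebP() {
    // Heuristic: a browser that can encode WebP via canvas can also decode it.
    // Conservative by design; browsers that fail the check simply keep the JPG.
    var canvas = document.createElement('canvas');
    canvas.width = canvas.height = 1;
    return canvas.toDataURL('image/webp').indexOf('data:image/webp') === 0;
}

if (supportsWebP()) {
    document.querySelectorAll('img[data-webp]').forEach(function(img) {
        img.src = img.getAttribute('data-webp');
    });
}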
Improving the web server – HTTP2, TLS, and more
The first thing I did was transition to HTTPS. Initially, I was running Nginx on port 80 and serving files from /var/www/html only.
server {
    listen 80;
    server_name jonlu.ca www.jonlu.ca;
    root /var/www/html;
    index index.html index.htm;

    location ~ /.git/ {
        deny all;
    }

    location ~ / {
        allow all;
    }
}
First, I set up HTTPS and redirected all HTTP requests to HTTPS. I got my TLS certificates from Let's Encrypt (a great organization that just started signing wildcard certificates!).
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name jonlu.ca www.jonlu.ca;
    root /var/www/html;
    index index.html index.htm;

    location ~ /.git {
        deny all;
    }

    location / {
        allow all;
    }

    ssl_certificate /etc/letsencrypt/live/jonlu.ca/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/jonlu.ca/privkey.pem; # managed by Certbot
}
By adding the http2 directive, Nginx can take advantage of all the latest HTTP features. Note that to use HTTP2 (formerly SPDY), you must be serving over HTTPS. You can read more about it here.
You can also push resources proactively with the HTTP2 push directive: http2_push images/headshot.jpg;
Note: enabling gzip together with TLS can expose you to the BREACH attack. Since this is a static site and the actual risk from BREACH is low, I felt comfortable leaving compression on.
Taking advantage of caching and compression directives
What else can be done with Nginx alone? The first things are the caching and compression directives.
I had been sending raw, uncompressed HTML. With a single gzip on; directive, I went from 16,000 bytes to 8,000 bytes, a 50% reduction.
We can actually improve this even further by setting Nginx's gzip_static to on, which makes it look for pre-compressed versions of requested files. This ties in with the webpack configuration above: we can use the ZopfliPlugin to pre-compress all of our files at build time! That saves computing resources and lets us maximize compression without sacrificing speed.
Also, my site changes fairly rarely, so I wanted resources to be cached for as long as possible. That way, users don't need to re-download everything (especially bundle.js) on future visits.
My updated server configuration is below. Note that I won't cover every change I made, such as the TCP setting tweaks, gzip directives, and file caching. If you want to learn more, read this article on tuning Nginx.
worker_processes auto;
pid /run/nginx.pid;
worker_rlimit_nofile 30000;
events {
worker_connections 65535;
multi_accept on;
use epoll;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# Turn off server tokens specifying nginx version
server_tokens off;
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
include /etc/nginx/mime.types;
default_type application/octet-stream;
add_header Referrer-Policy "no-referrer";
##
# SSL Settings
##

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_dhparam /location/to/dhparam.pem;
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_stapling on;
ssl_stapling_verify on;
add_header Strict-Transport-Security 'max-age=31536000; includeSubDomains; preload';
ssl_certificate /location/to/fullchain.pem;
ssl_certificate_key /location/to/privkey.pem;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;
gzip_min_length 256;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
And the corresponding server block
server {
    listen 443 ssl http2;
    server_name jonlu.ca www.jonlu.ca;
    root /var/www/html;
    index index.html index.htm;

    location ~ /.git/ {
        deny all;
    }

    location ~* /(images|js|css|fonts|assets|dist) {
        gzip_static on; # Tell Nginx to look for pre-compressed versions of all requested files first.
        expires 15d; # 15-day expiration for all static assets
    }
}
Lazy loading
In the end, there was one small change to the actual site that made a non-negligible improvement. There are five images that can't be seen until you click the corresponding tab, but they were being loaded at the same time as everything else (because they were in <img> tags with a real src).
I wrote a short script to modify the attributes of every element with the lazyload class, so that the images are only loaded when the corresponding tab is clicked.
$(document).ready(function() {
    $("#about").click(function() {
        $('#about > .lazyload').each(function() {
            // set the img src from data-src
            $(this).attr('src', $(this).attr('data-src'));
        });
    });

    $("#articles").click(function() {
        $('#articles > .lazyload').each(function() {
            // set the img src from data-src
            $(this).attr('src', $(this).attr('data-src'));
        });
    });
});
So once the document has loaded, the script registers these click handlers; when a tab is clicked, it rewrites the <img> tags (setting src from data-src) and the images load in the background.
Future improvements
There are a few other changes that could make the pages load even faster. The most notable are using Service Workers to cache and intercept requests, which would keep the site working even offline (a rough sketch of the idea is below), and caching content on a CDN so that users don't have to do a full round trip to the server in SF. These are valuable changes, but not particularly important for a personal static site that serves as an online resume ("about me" page).
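For reference, a cache-first Service Worker doesn't take much code. The sketch below is only an illustration of the idea, not something the site actually ships, and the file names are assumptions based on the setup described above:
// sw.js - a minimal cache-first Service Worker sketch (assumed file names)
var CACHE = 'static-site-v1';
var ASSETS = ['/', '/index.html', '/dist/bundle.js'];

self.addEventListener('install', function(event) {
    // Pre-cache the core assets when the service worker is installed.
    event.waitUntil(caches.open(CACHE).then(function(cache) {
        return cache.addAll(ASSETS);
    }));
});

self.addEventListener('fetch', function(event) {
    // Answer from the cache first; fall back to the network and cache the result.
    event.respondWith(caches.match(event.request).then(function(cached) {
        return cached || fetch(event.request).then(function(response) {
            var copy = response.clone();
            caches.open(CACHE).then(function(cache) {
                cache.put(event.request, copy);
            });
            return response;
        });
    }));
});

// Registered from the page, for example in app.js:
// if ('serviceWorker' in navigator) { navigator.serviceWorker.register('/sw.js'); }
Bumping the CACHE name whenever the bundle changes is the simplest way to invalidate stale assets.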
Conclusion
This took my page load time from 8 s on first load down to 350 ms, with subsequent page loads around 200 ms. I really recommend reading High Performance Browser Networking: you can get through it quickly, and it provides an excellent overview of the modern Internet, with optimizations available at every layer of the stack.
Am I missing anything? Violating any best practices? Could the write-up itself be improved? Please feel free to correct me: JonLuca De Caro!
The Gold Miner Translation Project is a community that translates high-quality technical articles from around the Internet, covering Android, iOS, front end, back end, blockchain, product, design, artificial intelligence, and more. For more high-quality translations, please follow the Gold Miner Translation Project, its official Weibo account, and its Zhihu column.