Doing Differential Serving in 2019
If you're reading this, you're probably one of those people who's always looking for pragmatic, forward-thinking ways to speed up their website. So when I read Philip Walton's guide on a technique called differential serving, I was intrigued. If you haven't heard of it, the idea is that you compile and serve two separate JavaScript bundles for your site:
- One bundle contains all the Babel transforms and polyfills needed by older browsers, and is served only to the older browsers that actually need them. This is probably the bundle you're already generating today.
- The second bundle has the same functionality as the first, but with few or no transforms and polyfills. It is served only to the modern browsers that can take advantage of it.
We use Babel to transform our scripts so they run everywhere, but there's a hazard in doing so: in most configurations, the extra code it adds simply isn't necessary for users on modern browsers. With some effort, you can change your build process to ship far less code to the large share of users on modern browsers, while maintaining compatibility with legacy clients such as IE 11. The point of differential serving isn't only to improve delivery time, although it certainly helps with that. It also reduces blocking on the main thread, because the browser has less script to process, and processing script is resource-intensive.
In this guide, you'll learn how to set up differential serving in your 2019 build pipeline, from configuring Babel, to the tweaks you need to make in webpack, to the benefits of doing all this work.
Set your Babel configuration
Outputting multiple builds of the same application means giving Babel a configuration for each target. Maintaining multiple Babel configurations in a single project isn't uncommon, after all; it's usually done by placing each configuration object under a key of the env object. Here's what a Babel configuration for differential serving looks like:
```js
// babel.config.js
module.exports = {
  env: {
    // This is the configuration we'll use to generate the bundle for older browsers
    legacy: {
      presets: [
        ["@babel/preset-env", {
          modules: false,
          useBuiltIns: "entry",
          // This query should be reasonable for older browsers
          targets: "> 0.25%, last 2 versions, Firefox ESR"
        }]
      ],
      plugins: [
        "@babel/plugin-transform-runtime",
        "@babel/plugin-syntax-dynamic-import"
      ]
    },
    // This is the configuration we'll use to generate the bundle for modern browsers
    modern: {
      presets: [
        ["@babel/preset-env", {
          modules: false,
          targets: {
            // Target browsers that support ES modules
            esmodules: true
          }
        }]
      ],
      plugins: [
        "@babel/plugin-transform-runtime",
        "@babel/plugin-syntax-dynamic-import"
      ]
    }
  }
};
```
You'll notice there are two configurations: legacy and modern. These control how Babel transforms each bundle. Ironically, the tool that polyfills and adds often unnecessary transforms to our code is the same tool we can use to send less of it. In Philip's original article, he used Babel 6's babel-preset-env to achieve this. Now that Babel 7 has been released, I've used @babel/preset-env instead.
The first thing to note is that @babel/preset-env takes different options in each configuration. For legacy, we pass a browserslist query describing older browsers to the targets option. We also tell the preset to include polyfills from @babel/polyfill via the useBuiltIns option. Beyond the preset, we include a couple of necessary plugins.
Note: useBuiltIns accepts two values besides false: "entry" and "usage". The documentation explains the differences well, but it's worth noting that "usage" is considered experimental. It usually produces smaller bundles than "entry", but I found I needed to specify "entry" to get my scripts running in IE 11.
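To make the note above concrete, here's a minimal sketch of what "entry" expects. The file path is illustrative and not from the original article: you import the polyfill once at your entry point, and @babel/preset-env replaces that import with only the polyfills your legacy targets actually need.

```js
// src/index.js (illustrative entry point)
// With `useBuiltIns: "entry"`, this single import is rewritten at build time
// into only the polyfills required by the browsers in the legacy `targets` query.
import "@babel/polyfill";

// ...the rest of your application code follows as usual
```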
For the modern configuration, things look largely the same, except for the value of targets. Instead of passing a browserslist query, I pass an object with the esmodules option set to true. With this, @babel/preset-env applies far fewer transforms, because it targets only browsers that natively support ES modules, async/await, and other modern features. The useBuiltIns option is also removed, because none of the features used in this project need polyfills in those browsers. That said, if you're using cutting-edge features that even modern browsers don't support well, your application may still need some polyfills; if that describes your app, set useBuiltIns appropriately.
Configure webpack for differential serving
webpack, like most other bundlers, offers a feature called multi-compiler mode, and it's critical for differential serving. Multi-compiler mode lets you pass an array of configuration objects and get a set of bundles for each one:
```js
// webpack.config.js
module.exports = [{
  // Config object one
}, {
  // Config object two
}];
```
This is important because we can pass two separate configurations that use the same entry point. We can also adjust the rules in each configuration as needed.
But that's easier said than done. webpack can be very complex, and things don't get any simpler when you're juggling multiple configurations. Still, it's far from impossible, so let's figure out what it takes to get there.
Start with common configurations
Because you're building the same entry point for two separate targets, your configurations will have a lot in common. A common configuration object is a convenient way to manage those similarities:
```js
// webpack.config.js
const commonConfig = {
  // `devMode` is the result of `process.env.NODE_ENV !== "production"`
  mode: devMode ? "development" : "production",
  entry: path.resolve(__dirname, "src", "index.js"),
  plugins: [
    // Plugins common to both configurations
  ]
};
```
From here, you can write the individual webpack configurations and use object spread syntax to merge the common configuration into each of them:
```js
// webpack.config.js
const legacyConfig = {
  name: "client-legacy",
  output: {
    path: path.resolve(__dirname, "dist"),
    filename: "index.js"
  },
  module: {
    rules: [
      // loaders...
    ]
  },
  // Merge the common configuration into this object with spread syntax
  ...commonConfig
};

const modernConfig = {
  name: "client-modern",
  output: {
    path: path.resolve(__dirname, "dist"),
    // Note the .mjs extension
    filename: "index.mjs"
  },
  module: {
    rules: [
      // loaders...
    ]
  },
  // Same as above
  ...commonConfig
};

module.exports = [legacyConfig, modernConfig];
```
The point here is that by merging the commonalities between the two configurations into a common object, you minimize how much configuration you have to write and can focus only on the key differences between the two targets.
Manage two configurations
Now that you know how to manage what the two configurations share, you need to manage where they differ. Loaders and plugins can get tricky when you're compiling a common entry point for different targets, especially if you're dealing with more than just JavaScript assets. Here are some pointers I hope you'll find helpful.
babel-loader
Arguably the most common loader you'll see in any webpack configuration is babel-loader. For what we want to achieve, you'll need babel-loader in both the legacy and the modern configuration objects, configured slightly differently in each. The babel-loader rule for the legacy browser target looks like this:
```js
// webpack.config.js
const legacyConfig = {
  // ...
  module: {
    rules: [
      {
        test: /\.js$/i,
        // Make sure your third-party libraries are bundled into a separate chunk,
        // otherwise this exclude pattern may break your build for some clients
        exclude: /node_modules/i,
        use: {
          loader: "babel-loader",
          options: {
            envName: "legacy" // Points to env.legacy in babel.config.js
          }
        }
      },
      // Other loaders...
    ]
  },
  // ...
};
```
For the modern browser target, the only differences are that we change the test regular expression to /\.m?js$/i, so that it also matches the ES module file extension (.mjs) some npm packages ship, and that we change options.envName to "modern". options.envName points to the corresponding configuration under env in the babel.config.js shown earlier.
```js
// webpack.config.js
const modernConfig = {
  // ...
  module: {
    rules: [
      {
        test: /\.m?js$/i,
        exclude: /node_modules/i,
        use: {
          loader: "babel-loader",
          options: {
            envName: "modern" // Points to env.modern in babel.config.js
          }
        }
      },
      // Other loaders...
    ]
  },
  // ...
};
```
Other loaders and plugins
Depending on your project, you may have other loaders or plugins that handle asset types other than JavaScript. How to handle them for each target browser depends on your project's requirements, but here are some suggestions.
- You may not need to change your other loaders at all. It's worth remembering that webpack manages far more than JavaScript: CSS, images, fonts, and so on each require a loader. What matters is that each target browser ends up with the same output (or at least with references to the same assets).
- Some loaders let you disable file emission, which is useful in a differential serving build. For example, suppose you use file-loader to handle imports of non-JavaScript resources. You can let the modern configuration emit those files and specify emitFile: false in the legacy configuration so the same files aren't written to disk twice (see the sketch after this list). This may also speed up your builds a bit. null-loader can likewise be useful for controlling which configuration loads and emits files.
- Be careful with hashed assets. Suppose you use an image optimization loader (such as image-webpack-loader) to optimize images. You'll probably need it in both configurations; otherwise one asset manifest will reference unoptimized images while the other references optimized ones. Because the file contents differ between builds, the file hashes will differ too, and the result is that one group of users gets unoptimized image assets while the rest get optimized ones.
- Plugins are a different matter entirely, and the best guidance for using them in a differential serving setup varies. For example, if you use copy-webpack-plugin to copy files or entire directories from src to dist, you only need it in one configuration, not both. That said, using the same plugin in both configurations won't necessarily cause problems, but it may slow down your builds.
- If your loader and plugin configuration starts to get messy, npm scripts can be a good substitute. For simpler projects, I often install image optimization binaries (such as pngquant-bin) locally via npm and run them with npx in an npm script after the build finishes. This keeps the clutter out of my webpack configuration, which is a welcome change.
- If you use assets-webpack-plugin to generate an asset manifest for both builds, things get a bit more complicated. You need to create a single plugin instance, pass it to the plugins array of each configuration, and follow this advice. In one of my projects, I use the manifest assets-webpack-plugin produces in a Node script that injects the script references into the generated HTML (more on this later).
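As an illustration of the file-loader point above, here's a rough sketch. It assumes file-loader handles your static assets; the test pattern and name template are illustrative, not prescribed by this article. The modern configuration emits the files, and the legacy configuration resolves the same URLs without writing the files to disk again.

```js
// A rough sketch: one rule per configuration, differing only in `emitFile`.

// In the modern configuration: emit the files as usual.
const modernFileRule = {
  test: /\.(png|jpe?g|gif|svg|woff2?)$/i,
  use: {
    loader: "file-loader",
    options: {
      name: "[name].[hash:8].[ext]"
    }
  }
};

// In the legacy configuration: resolve the same file names, but skip writing
// them to disk a second time.
const legacyFileRule = {
  test: /\.(png|jpe?g|gif|svg|woff2?)$/i,
  use: {
    loader: "file-loader",
    options: {
      name: "[name].[hash:8].[ext]",
      emitFile: false
    }
  }
};
```

Because both rules use the same name template, both builds reference identical file names, and only the modern build actually writes the files.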
The bottom line is that you should keep asset references consistent between the two builds and, in general, avoid writing the same assets to disk more than once. That said, in what can be a fairly complex build setup, it's also fine to do whatever is reasonable and convenient.
Manage your Uglifier
Until recently, uglify-js was webpack's default minifier; Terser became the default in version 4.26.0. If you're using webpack 4.26.0 or later, good news: you're already set and don't need to do any extra work!

However, if you're using an earlier version, uglify-js is the default rather than Terser, and you'll need Terser in your modern configuration. That's because uglify-js doesn't understand any JavaScript syntax newer than ES5; it chokes on things like arrow functions, async/await, and so on.

Your legacy configuration doesn't need anything extra, since its output should already be ES5. For your modern configuration, though, you'll need to npm install terser-webpack-plugin in your project and add it to the optimization.minimizer array:
```js
// webpack.config.js
const TerserWebpackPlugin = require("terser-webpack-plugin");

const modernConfig = {
  // ...
  optimization: {
    minimizer: [
      new TerserWebpackPlugin({
        test: /\.m?js$/i, // Needed if you output .mjs files
        terserOptions: {
          ecma: 6 // Could also be 7 or 8
        }
      })
    ]
  }
  // ...
};
```
In the modern configuration, we output files with the .mjs extension. For Terser to recognize and minify those files, we need to adjust the test regular expression accordingly. We also set the ecma option to 6 (although 7 or 8 would also be valid).
Inject script references into HTML
You might be using html-webpack-plugin to generate the HTML for your app shell, and for good reason: it's a flexible plugin that handles much of the busy work of inserting and managing script references in your markup. The catch, for our purposes, is that it doesn't natively handle the two sets of script references that a differential serving setup requires.
Fortunately, it only takes a little extra work to get around this. For projects using differential serving, I use assets-webpack-plugin to collect the assets webpack generates, like so:
```js
// webpack.config.js
const AssetsWebpackPlugin = require("assets-webpack-plugin");

const assetsWebpackPluginInstance = new AssetsWebpackPlugin({
  filename: "assets.json",
  update: true,
  fileTypes: ["js", "mjs"]
});
```
From there, I add this single assets-webpack-plugin instance to the plugins array of both the legacy and the modern configuration. I've configured the plugin instance's options to fit my project:

- filename tells the plugin where the assets JSON file should be written.
- Setting update to true tells the plugin to reuse the same assets JSON file for both the legacy and the modern configuration.
- I set fileTypes to make sure the .mjs files generated by the modern configuration end up in assets.json alongside the .js files.
From here on out is where things get a bit hacky. To get the script references for both bundles into my HTML, with type="module" on the modern script and nomodule on the legacy one, I read assets.json in a small Node script and inject the tags myself. Hopefully html-webpack-plugin will support this natively some day, because I personally don't love this approach, and you may need to devise your own stopgap solution.
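To make that concrete, here's a rough sketch of what such a script could look like. It is not the actual script from my project: the file paths, the entry name ("main"), and the shape of assets.json are assumptions you'd need to adapt to your own build output.

```js
// inject-scripts.js: illustrative only; adjust the paths and the assets.json
// shape to match what assets-webpack-plugin actually emits for your build.
const fs = require("fs");

// Assumed shape: { "main": { "js": "/js/index.js", "mjs": "/js/index.mjs" } }
const assets = JSON.parse(fs.readFileSync("assets.json", "utf8"));
const html = fs.readFileSync("dist/index.html", "utf8");

// Modern browsers run the type="module" script and skip the nomodule one;
// older browsers do the opposite, so each audience gets the right bundle.
const scripts = [
  `<script type="module" src="${assets.main.mjs}"></script>`,
  `<script nomodule defer src="${assets.main.js}"></script>`
].join("\n");

fs.writeFileSync("dist/index.html", html.replace("</body>", `${scripts}\n</body>`));
```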
Is it worth it?
If you've made it this far, you're probably still asking yourself: is this technique worth it? My answer is an emphatic yes. I use differential serving on my site, and I think the benefits speak for themselves:
| Bundle | All JS assets | Gzip (level 9) | Brotli (level 11) |
|---|---|---|---|
| Legacy | 112.14 KB | 38.6 KB | 33.58 KB |
| Modern | 34.23 KB | 12.94 KB | 12.12 KB |
For my site, total JavaScript size was reduced by nearly 70%. To be fair, I've noticed that as bundle sizes grow, the savings from differential serving shrink considerably. At work I often come across bundles over 300 KB, and on those I've seen savings closer to 10%, but that's still a meaningful reduction! Much of what worked in my favor here is that my particular project required a fairly large amount of polyfilling in the legacy bundle, nearly all of which could be skipped in the modern bundle with little or no other change.
It can be tempting to look at the compressed figures and decide the difference doesn't matter, but always keep in mind that compression only reduces the transfer time of a given asset. It has no effect on parse, compile, and execute time. If Brotli compresses a 100 KB JavaScript file down to 30 KB, yes, the user will receive it much faster, but the browser still has to process 100 KB of JavaScript.
On devices with less processing power and memory, this is a crucial difference. In my regular testing on a Nokia 2 Android phone, the impact of differential serving on loading performance is clear. Here's a performance trace in Chrome's DevTools of my personal website on that device before implementing differential serving:
A site performance trace in Chrome's DevTools showing a large amount of scripting activity before differential serving was implemented.
And here's the same trace on the same device after deploying differential serving:
A site performance trace in Chrome's DevTools showing a significant reduction in scripting activity after differential serving was implemented.
A roughly 66% reduction in scripting activity is a solid win. When sites become interactive faster, they're more usable and more enjoyable for everyone.
Conclusion
Differential serving is a good thing. If this trend from HTTP Archive is any indicator, most sites in production still ship a lot of polyfilled and transformed legacy JavaScript that many of their users simply don't need. If we have to support users on older browsers, we should seriously consider this two-pronged approach to delivering JavaScript.
If nothing else, it should make you look more closely at how much JavaScript you're bundling, and at how your JavaScript tooling can help you stay proactive about reducing the amount of code you send to users. Depending on your audience, you may not even need to serve two different bundles, but the configuration shown here may give you ideas for how to send less code than you currently do.
It's worth noting that the state of JavaScript tooling changes frequently; it can feel very much like a "move fast and break things" space. For example, an alpha of webpack 5 already exists and brings a lot of changes, so it's not unreasonable to assume some of this could break in the future. Then again, I still come across projects at work running webpack 3, since some projects take longer to upgrade than others. This technique, as documented here, should remain useful for a while yet.
If you're interested in seeing how my site uses this technique, check out this repo. I hope you can learn as much from my project as I did.
Resources
- Deploying ES2015+ Code in Production Today
- Kristofer Baxter’s repo example of this technique
- ECMAScript modules in browsers
- Using JavaScript modules on the Web
- The @babel/preset-env documentation
- The Terser documentation