This is not a pure study post; it was originally intended for a production project. The company has a new, small campaign, so I hoped to learn some new techniques and apply them. The new project exists as a subsystem of an old project, so it has to stay somewhat consistent with it. The version of FIS, the build tool used in the old project, is quite old; I dare not upgrade it for fear of mishaps, so it cannot be touched. After reading many strategies online, I tried to build a configuration of my own, solved some problems, but was also left with some doubts.
Project resource path
Repository: github.com/johnshere/a…
1. Environment configuration
Node: v10.6.0 (and v6.12.3), Yarn: 1.7.0, webpack: 4.16.1, operating system: Windows 10
Yarn is a package management tool similar to npm but more efficient. For the command mapping, see yarnpkg.com/zh-Hans/doc… You can install it using npm (which makes me wonder again why IE exists, -_-):
npm install -g yarn
You can also go to the official website to download the client
2. Directory structure
As shown in the figure:
src: project source code; release: the published build output; webpack.config: the webpack configuration files.
The published HTML keeps the same paths as in src, so pages that reference each other with relative paths keep working without structural problems.
3. Switching between two build environments (independent of webpack)
The company's original FIS is an early version that nobody updates or maintains anymore; it is years behind current tooling, but it must still be used. webpack 4 was not compatible with it, so I installed two different Node versions, v10.6.0 and v6.12.3, and switch between them when needed:
set dir=D:
set name1=node612
set name2=node106
set name=node
if exist %dir%\%name1% (
echo "node612 ==> node"
ren %dir%\%name% %name2%
ren %dir%\%name1% %name%
)else (
echo "node106 ==> node"
ren %dir%\%name% %name1%
ren %dir%\%name2% %name%
)
pause
By switching Node, you essentially switch the entire development environment, since both build tools depend on Node. Execute the switch script in CMD or PowerShell:
changeNodeName
4. Webpack configuration
This is the core part, but before writing the configuration I first listed the problems I wanted to solve:
- Keep the directory structure unchanged after publishing
- Split out common files, such as styles and images, for caching purposes
- Keep split files from getting too large, since users load them frequently (unresolved)
- Make sure files are cached well and do not invalidate each other's caches
- Transpile ES6 syntax
1. webpack.entry.util.js
const path = require("path");
const Glob = require("glob");
const fs = require("fs");

let obj = {
    /**
     * Get js entries from a directory
     * @param {String} globPath glob pattern
     * @return {Object} entries map
     */
    getEntryJs: function (globPath) {
        globPath = path.resolve(__dirname, globPath);
        let entries = {};
        Glob.sync(globPath).forEach(function (entry) {
            let basename = path.basename(entry, path.extname(entry)),
                pathname = path.dirname(entry),
                paths = pathname.split('/'),
                fileDir = paths.splice(paths.indexOf("src") + 1).join('/');
            // Only js in the page path
            if (pathname.indexOf("page") > -1) { // && fileDir && fileDir.indexOf("page") === 0
                entries[(fileDir ? fileDir + '/' : fileDir) + basename] = pathname + '/' + basename;
            }
        });
        // Directory page reserved
        entries["index"] = path.resolve(__dirname, "../src/index").split("\\").join("/");
        console.log("---------------------------------------------\nentries:");
        console.log(entries);
        console.log("---------------------------------------------");
        return entries;
    },
    /**
     * Get HTML entries from a directory
     * @param {String} globPath glob pattern
     * @return {Array} entry configs for HtmlWebpackPlugin
     */
    getEntryHtml: function (globPath) {
        globPath = path.resolve(__dirname, globPath);
        let entries = [];
        Glob.sync(globPath).forEach(function (entry) {
            let basename = path.basename(entry, path.extname(entry)),
                pathname = path.dirname(entry),
                paths = pathname.split('/'),
                // @see https://github.com/kangax/html-minifier#options-quick-reference
                minifyConfig = process.env.NODE_ENV === "production" ? {
                    removeComments: true,
                    // collapseWhitespace: true,
                    minifyCSS: true,
                    minifyJS: true
                } : "";
            // Only HTML in the page directory
            if (entry.indexOf("page") > 1) {
                let chunkName = paths.splice(paths.indexOf("src") + 1).join('/') + "/" + basename;
                entries.push({
                    filename: chunkName + ".html",
                    template: entry,
                    chunks: ['public/vendor', chunkName],
                    minify: minifyConfig
                });
            }
        });
        // Keep the directory page
        entries.push({
            filename: "index.html",
            template: path.resolve(__dirname, "../src/index.html").split("\\").join("/"),
            chunks: ['public/vendor', "index"]
        });
        // Save the entries to a JSON file
        this.entry2JsonFile(entries);
        return entries;
    },
    /**
     * Generate the corresponding JSON file
     * @param entries
     */
    entry2JsonFile: function (entries) {
        console.log(entries);
        let json = {};
        if (entries) {
            entries.forEach(v => {
                json[v.filename] = v.filename;
            });
        }
        console.log(json);
        // Write the file synchronously
        let fd = fs.openSync(path.resolve(__dirname, "../src/entry.json"), "w");
        fs.writeSync(fd, JSON.stringify(json), 0, "utf-8");
        fs.closeSync(fd);
    }
};
// obj.getEntryJs("../src/page/**/*.js");
// obj.getEntryHtml('../src/page/**/index.html');
module.exports = obj;
The entry identification here is based on:
GitHub: github.com/givebest/we…
The entry tool mainly identifies JS and HTML entries. I modified the original logic so that only entries under the page directory are picked up. I also added a method that writes all the HTML paths to a JSON file (used later in dev-server mode). The tool does special processing on the chunk key: as you can see, the path starting after src is used as the key. Since webpack's [name] placeholder supports paths, this achieves goal 1 (the directory structure stays unchanged).
2. webpack.base.conf.js
const path = require("path");
const HtmlWebpackPlugin = require('html-webpack-plugin');
// const ExtractTextPlugin = require('extract-text-webpack-plugin');
const CleanWebpackPlugin = require("clean-webpack-plugin");
const MiniCssExtractPlugin = require("mini-css-extract-plugin");
const entryUtil = require("./webpack.entry.util");

let entryJs = entryUtil.getEntryJs('../src/page/**/index.js');
let conf = {
    entry: entryJs, // js entry recognition
    output: {
        path: path.resolve(__dirname, "../release"),
        filename: "[name].[chunkhash].js"
        // publicPath: "../../public"
    },
    module: {
        rules: [
            {
                test: /\.css$/,
                // loader: ExtractTextPlugin.extract({
                //     fallback: 'style-loader',
                //     use: 'css-loader'
                // })
                use: [MiniCssExtractPlugin.loader, 'css-loader'] // 'style-loader',
            },
            {
                test: /\.html$/,
                loader: 'html-withimg-loader'
            },
            {
                test: require.resolve("jquery"),
                loader: "expose-loader?$!expose-loader?jQuery"
            }
        ]
    },
    plugins: [
        // new HtmlWebpackPlugin({
        //     filename: "index.html",
        //     template: "src/page/index.html",
        //     chunks: ["main", "vender"]
        // }),
        // new ExtractTextPlugin("./[name].[chunkHash].css")
        new CleanWebpackPlugin(["release"], {
            root: path.resolve(__dirname, ".."),
            verbose: true,
            dry: false
        }),
        new MiniCssExtractPlugin({
            filename: "[name].[contenthash:7].css",
            chunkFilename: "[name].[contenthash].css"
        })
    ],
    optimization: {
        splitChunks: {
            cacheGroups: {
                commons: {
                    name: "public/vendor",
                    chunks: "all",
                    minChunks: 2
                }
            }
        }
    },
    resolve: {
        extensions: [".js", ".jsx"],
        alias: {
            layer: path.resolve(__dirname, "../src/public/js/layer/mobile/layer.js"),
            "layer.css": path.resolve(__dirname, "../src/public/js/layer/mobile/need/layer.css")
        }
    }
};

// HTML entries
let entryHtml = entryUtil.getEntryHtml('../src/page/**/index.html');
entryHtml.forEach(function (v) {
    conf.plugins.push(new HtmlWebpackPlugin(v));
});
module.exports = conf;
Some explanation is needed here. I was just starting to learn webpack, so I searched various posts online and studied, modified, and tested these configuration files. Some of the changes took so long that I have even forgotten why I made them.
2.1 Obtaining Entry and HtmlWebpackPlugin
Use the tool to collect the specified HTML and JS files. I limit the match to files named index, because many of the company's template files also use the .html suffix. webpack's entry only recognizes JS, so HtmlWebpackPlugin is needed here: without it there is no correspondence between an HTML page and its JS, and each page needs its own new HtmlWebpackPlugin. That is why each entryHtml item above is pushed as a plugin, and why the production environment is checked while building entryHtml (to decide minification).
2.2 Splitting the CSS File
I am now using MiniCssExtractPlugin, but I originally used ExtractTextPlugin, the one left in the comments (I keep my memories in comments too, hahaha). The stable release of ExtractTextPlugin does not work on webpack 4; you must install the @next version, as follows:
yarn add extract-text-webpack-plugin@next
Once configured, I used it for a while, but finally replaced it while thinking about the fourth goal above: ExtractTextPlugin does not seem to support contenthash. Our company builds a BSS system; the business is complex and the business logic changes often, so index.js changes a lot while the styles and images barely change. You cannot have users re-download the CSS and images just because an if/else changed. So I switched to MiniCssExtractPlugin, as it is now. Its configuration:
filename is the naming configuration for CSS extracted from entry chunks; chunkFilename is the naming configuration for CSS extracted from other chunks, such as the common chunk.
2.3 Loading jQuery
jQuery predates modularization, so a loader does special handling for it. After that, you can use require or import to pull jQuery into each JS file; in reality this only triggers the import, and $ is still a global object.
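For reference, the inline loader string in webpack.base.conf.js ("expose-loader?$!expose-loader?jQuery") can also be written in the more readable use-array form; this is a sketch assuming expose-loader 0.7.x, the version contemporary with webpack 4:

```javascript
// Equivalent jQuery rule using a use array instead of the inline query string.
// Both loader applications run, exposing the module as window.$ and window.jQuery.
{
    test: require.resolve("jquery"),
    use: [
        { loader: "expose-loader", options: "$" },
        { loader: "expose-loader", options: "jQuery" }
    ]
}
```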
2.4 Image path in HTML
In some earlier posts I saw that you have to add special markers and loader syntax to image references in HTML tags, which is not very friendly. Here we use a loader, html-withimg-loader; with it, image links in HTML need no special handling.
2.5 Cleaning up
If you do not clear existing files, residual files accumulate with every release; although harmless, it is intolerable. CleanWebpackPlugin can also be given specific glob patterns to clean, for example:
new CleanWebpackPlugin(["release"],{
root: path.resolve(__dirname, ".."),
verbose: true,
dry: false
}),
new CleanWebpackPlugin(["release/*.js", "release/**/*.*"], {
root: path.resolve(__dirname, ".."),
verbose: true,
dry: false
}),
3. webpack.devServer.conf.js
The development environment
'use strict';
const path = require("path");
const webpack = require("webpack");
const merge = require('webpack-merge');
const base = require('./webpack.base.conf');
// process.env.NODE_ENV = "development";

module.exports = merge(base, {
    mode: "development",
    devtool: "eval-source-map",
    output: {
        path: path.resolve(__dirname, "../release"), // "../release_dev"
        filename: "[name].[hash].js"
    },
    module: {
        rules: [
            {
                test: /\.(png|jpg|gif)$/,
                // loader: 'url-loader?limit=8192&name=./public/images/[name].[hash].[ext]'
                loader: {
                    loader: 'url-loader',
                    options: {
                        name: '[name].[hash].[ext]'
                        // limit: 8192, // images below the limit are inlined as base64; larger ones are emitted as files
                        // outputPath: '/public/images' // output folder for images
                    }
                }
            }
        ]
    },
    plugins: [new webpack.HotModuleReplacementPlugin()],
    devServer: {
        port: 8080,
        contentBase: path.resolve(__dirname, "../release"), // directory the local server serves pages from
        historyApiFallback: true, // fall back to the index page instead of 404
        inline: true, // live reload
        hot: true // enable hot module replacement
        // proxy: {
        //     '/o2o/*': {
        //         target: 'https://www.baidu.com',
        //         secure: true,
        //         changeOrigin: true
        //     }
        // }
    }
});
This is tweaked slightly from base, mainly to use webpack-dev-server; this configuration file exists for that purpose.
3.1 Output hash
Here is why I changed chunkhash to hash: after enabling HotModuleReplacementPlugin, chunkhash and contenthash can no longer be used. Some posts say removing hot: true fixes this, but in my own testing it does not; the error remains even with hot removed. So I simply switched to hash. It is only local debugging anyway, so the impact is small.
3.2 devServer
This feature is powerful and very developer-friendly. Install webpack-dev-server:
yarn add webpack-dev-server
The proxy feature is also very powerful, directing backend service requests to our test environment or to a local server. Our original FIS setup was wrapped with a layer of nginx, which had to be opened and configured separately each time. Integrating the proxy here reduces local development dependencies and makes debugging easier.
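A minimal proxy sketch for reference (the path and target host here are placeholders, not our actual services):

```javascript
// In devServer: forward requests starting with /api to a test backend
proxy: {
    '/api/*': {
        target: 'https://test.example.com', // placeholder backend host
        secure: true,       // verify the target's HTTPS certificate
        changeOrigin: true  // rewrite the Host header to match the target
    }
}
```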
3.3 Entry directory page
The entry tool wrote all entries to a JSON file earlier; this is where that file comes in. Our project is not an SPA at all, so using webpack is already a bit of a stretch. When webpack-dev-server starts, it opens index.html in the root directory by default. Our project has many pages, and no single default page is convenient for development, so I simply turned index.html into a directory page that lists every path in entry.json; click a link to open each page.
// const $ = require("jquery");
import $ from "jquery";
const entryJson = require("./entry.json");
console.log(1122333, entryJson);
$(() => {
    $("html").css("font-size", "16px");
    for (let k in entryJson) {
        $("body").append("<a href='" + entryJson[k] + "'>" + entryJson[k] + "</a></br>");
    }
});
4. webpack.pro.conf.js
The production environment
'use strict';
const path = require("path");
const merge = require('webpack-merge');
const base = require('./webpack.base.conf');

module.exports = merge(base, {
    mode: "production",
    optimization: {
        splitChunks: {
            cacheGroups: {
                commons: {
                    name: "public/vendor",
                    chunks: "all",
                    minChunks: 2
                }
            }
        }
    },
    module: {
        rules: [
            {
                test: /\.js$/,
                exclude: /node_modules/,
                loader: "babel-loader"
            },
            {
                test: /\.(png|jpg|gif)$/,
                // loader: 'url-loader?limit=8192&name=./public/images/[name].[hash].[ext]'
                loader: {
                    loader: 'url-loader',
                    options: {
                        name: '[name].[hash].[ext]',
                        limit: 8192, // images below the limit are inlined as base64; larger ones are emitted as files
                        outputPath: 'public/images' // output folder for images
                    }
                }
            }
        ]
    }
});
This production configuration is also adjusted based on the previous base.
4.1 Publishing Directory Adjustment
The small project exists as a subproject of the old project, so it cannot be accessed directly at the site root; its URLs need a first-level path segment named after the project. The outputPath of url-loader and all chunk names need an "activity" segment prepended; you will need to debug this yourself. For example:
xxxx.com/index.html → xxxx.com/activity/in… ; xxxx.com/public/1.cs… → xxxx.com/activity/pu…
One thing to note: when I first tried this, I thought I could just change output.path, but testing showed it did not work. The reason is simple: to the browser, an image src is a URL. Changing output.path only changes where files are saved on publish; it does not add "activity" to the URL. Instead, every emitted name needs to go one directory level deeper at release time, for example:
optimization: {
splitChunks: {
cacheGroups: {
commons: {
name: "activity/public/vendor",
chunks: "all",
minChunks: 2
}
}
}
},
{
    test: /\.(png|jpg|gif)$/,
    // loader: 'url-loader?limit=8192&name=./public/images/[name].[hash].[ext]'
    loader: {
        loader: 'url-loader',
        options: {
            name: '[name].[hash].[ext]',
            limit: 8192, // images below the limit are inlined as base64
            outputPath: 'activity/public/images' // output folder for images
        }
    }
}
4.2 Image splitting
As the code shows, url-loader is used with a limit: when an image exceeds the limit it is emitted as a separate file, otherwise it is inlined as base64. But I hit a snag here: when an image is emitted as a separate file, the hash in options.name cannot be contenthash or chunkhash, and I have not found a proper solution. (Using hash works to a degree, but it does not feel right.)
4.3 Babel compilation
Use Babel to transpile ES6 syntax so it works in older browsers. Create a .babelrc file in the project root directory:
{
    "presets": ["env"]
}
Install Babel
yarn add babel-core babel-loader babel-preset-env
4.4 mode and NODE_ENV
mode: "production" is set here in the webpack configuration file, NODE_ENV is set to production in the startup script, and devtool is removed. The environment set here, together with the environment check in the entry tool, controls the minification settings.
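That environment check boils down to something like this simplified, self-contained sketch (minifyConfigFor is a made-up name for illustration, extracted from the logic in getEntryHtml):

```javascript
// Choose html-minifier options based on NODE_ENV, as getEntryHtml does.
function minifyConfigFor(env) {
    return env === "production"
        ? { removeComments: true, minifyCSS: true, minifyJS: true }
        : ""; // a falsy value leaves the HTML unminified
}

console.log(minifyConfigFor(process.env.NODE_ENV || "development"));
```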
5. package.json scripts
As follows:
{
    "scripts": {
        "dev": "cross-env NODE_ENV=development webpack --config ./webpack.config/webpack.dev.conf.js",
        "pro": "cross-env NODE_ENV=production webpack --config ./webpack.config/webpack.pro.conf.js --progress",
        "devServer": "webpack-dev-server --config ./webpack.config/webpack.devServer.conf.js --open --mode development",
        "watch": "webpack --config ./webpack.config/webpack.dev.conf.js --watch"
    }
}
First, install cross-env, which sets the Node environment variable in a cross-platform way; you can see cross-env used in the scripts above:
yarn add cross-env
The two webpack configuration files above are set up but never invoked directly; what you actually run are the commands in scripts. They just simplify operation: we only need to start a script. Development environment:
yarn run devServer
Production environment:
yarn run pro
The run can also be omitted. In webpack-dev-server mode, the built output is not written to disk. If we need to inspect the output files ourselves, we can run:
yarn run watch
It just is not very useful, because every change generates a new batch of hashed files; junk accumulates quickly and it becomes hard to find anything in it.
Remaining problems
- How to make large images cacheable long-term
- The common file is too large: the vendor chunk of my test project is over 1 MB. That may not sound huge, but it would be terrible in the real project. And ours is a mobile project, so the blank-screen time while large files download is painful.