Enterprise-level scaffolding construction
A senior engineer at Alibaba gave me an assignment: build an enterprise-level scaffold. I had written some scaffolding-related pieces before, but never a complete, integrated version published to npm; they were just records of my own learning. So when I saw this assignment, my first reaction was to integrate and optimize what I had already learned.
Scaffolding mind maps
This is also a personal habit of mine. Before I start to write something, I first think about what it should contain and then expand on that. Although this was one of the requirements, it also happens to match my personal development habits.
I've divided the scaffolding into five parts, which are really four main parts, since argv_store is a library I wrote myself to manage CLI parameters:
- argv_store
- Webpack configuration
- Project code
- CLI tool
- Deployer (one-click deployment)
argv_store
It manages the parameters we enter on the command line, for example:
mj-cli create --rename test
When mj-cli is used, argv_store picks up your arguments and their corresponding values:
```js
// The command argument is "create"
// The parsed options are:
{
  '--rename': 'test'
}
```
The content is simple, there is honestly not much to say about it, but this is exactly the effect I wanted.
Compared with commander, the community's standard solution, my library cannot compete on completeness for now. So why did I write such a tool myself instead of using commander? Because I wanted higher readability: you should be able to see at a glance which operations the project supports, so the next person can understand it directly.
At that level, compare the two implementations:
```js
// commander
program
  .command('setup [env]')
  .description('run setup commands for all envs')
  .option('-s, --setup_mode [mode]', 'Which setup mode to use')
  .action(callback);

program
  .command('exec <cmd>')
  .description('execute the given remote cmd')
  .option('-e, --exec_mode <mode>', 'Which exec mode to use')
  .action(callback);

program.parse();

// argv_store
program
  .command('exec', 'execute the given remote cmd', callback)
  .option('-e, --exec', 'Which exec mode to use')
  .command('setup', 'run setup commands for all envs', callback)
  .option('-s, --set', 'Which setup mode to use')
  .parse();
```
Here argv_store and commander are interchangeable; both achieve the same effect.
I admit that I was going to use commander at first, but when I saw the way it is used, something in me simply didn't want to use it (purely personal reasons), so I wrote argv_store, borrowing heavily from commander. In terms of completeness it is not as good as commander, but in terms of readability I think it reads more clearly.
I won't go into the details of how argv_store is used here; it is all on GitHub, and it is pretty simple.
Webpack configuration
Strictly speaking, this section should have come first, but I put argv_store before it because argv_store sits outside the scaffolding rather than being part of it; covering it first was simply my habit and a change of pace.
When I started writing, I kept thinking back over what I had learned before and what could be reflected in this configuration. Roughly classified, it breaks down into the following categories:
- The basic configuration
- Cache handling
- Environment distinction
- Packaging optimization
- Externally configurable
- Script commands
The basic configuration
Yes, the basic configuration: the parts we have to write every time we set up webpack. Below I briefly go through the bits that differ slightly from the usual.
entry
For the entry, I thought about it and decided on a pattern that works for both single-page and multi-page projects, which requires some deep coupling with the business code.
If an index.jsx exists directly under src, it is used as the single-page entry; otherwise every folder under src/view that contains an index.jsx becomes an entry, and each folder's name is used as the name of the generated HTML file.
```js
// signalPath, viewPath, entryObj, htmlPlugin and DIR are defined earlier in the config
if (fs.existsSync(signalPath)) {
  entryObj.index = signalPath;
} else {
  const data = fs.readdirSync(viewPath);
  data.map(name => {
    const dirPath = `${viewPath}/${name}`;
    const stats = fs.statSync(dirPath);
    if (stats.isDirectory() && fs.existsSync(`${dirPath}/index.jsx`)) {
      entryObj[name] = `${dirPath}/index.jsx`;
    }
  });
}

Object.keys(entryObj).map(key => htmlPlugin.push(new htmlWebpackPlugin({
  template: `${DIR}/template/index.tpl.ejs`,
  filename: `${key}.html`,
  minify: true,
  inject: true,
  collapseWhitespace: true
})));
```
optimization
For compression, the configuration distinguishes between development and production: nothing is compressed in the development environment, while JS and CSS are compressed in production.
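To make that concrete, here is a minimal sketch of switching the minimizers on only in production; the plugin choices (terser-webpack-plugin and optimize-css-assets-webpack-plugin) are typical for webpack 4 and are assumptions rather than the project's exact code.

```js
// Sketch only: minify JS and CSS in production, leave development output untouched.
const TerserPlugin = require('terser-webpack-plugin');
const OptimizeCSSAssetsPlugin = require('optimize-css-assets-webpack-plugin');

const isProd = process.env.NODE_ENV === 'production';

module.exports = {
  mode: isProd ? 'production' : 'development',
  optimization: {
    minimize: isProd,
    minimizer: isProd
      ? [new TerserPlugin({ parallel: true }), new OptimizeCSSAssetsPlugin()]
      : []
  }
};
```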
module
Besides the usual compilation of JS, CSS, images, fonts and so on, the module section adds special support for less/sass: a project can use both CSS-module and non-module styles, and `oneOf` ensures that only one matching rule is applied to each file.
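As an illustration, a rule like the following can support both styles; the `.module.less` naming convention here is an assumption for the sketch, not necessarily what the project uses.

```js
// Sketch: oneOf stops at the first matching rule, so each file is handled exactly once.
module.exports = {
  module: {
    rules: [
      {
        test: /\.less$/,
        oneOf: [
          {
            // files written as CSS modules (assumed *.module.less convention)
            test: /\.module\.less$/,
            use: [
              'style-loader',
              { loader: 'css-loader', options: { modules: true } },
              'less-loader'
            ]
          },
          {
            // everything else is treated as plain global styles
            use: ['style-loader', 'css-loader', 'less-loader']
          }
        ]
      }
    ]
  }
};
```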
Cache handling
When we use webpack, the hash serves as the identity of the output file. If the hash changes frequently, our strong-cache policy keeps being invalidated. To solve this, webpack 4 provides a new capability, contenthash, together with hash-based module ids.
With these two set, the output changes far less between builds, and a new hash is only generated after a file is actually modified, because contenthash is calculated from the file's content. Why change the moduleId if we already have contenthash? Because modules are numbered in file order by default, so adding or deleting a file would still shift the ids and change other files' names; switching module ids to hashes keeps additions and deletions from invalidating unrelated files.
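In webpack 4 terms, the two settings look roughly like this (a sketch, not the project's exact configuration):

```js
module.exports = {
  output: {
    // contenthash is calculated from the file's content, so unchanged files keep their names
    filename: '[name].[contenthash:8].js',
    chunkFilename: '[name].[contenthash:8].chunk.js'
  },
  optimization: {
    // hash-based module ids, so adding or deleting a file does not shift the ids
    // of unrelated modules (webpack 4; HashedModuleIdsPlugin does the same thing)
    moduleIds: 'hashed'
  }
};
```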
Environment distinction
The webpack packaging configuration is split into three environments:
- development
- production
- preProduction
webpack.config.js exports a function by default, and its parameter carries the environment (dev and the variables listed above). Different configuration is selected for each environment, and the build or dev commands pass the corresponding flags; for example, an analyze mode can be used to obtain the bundle information.
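A minimal sketch of that function form, assuming the config is split into base/dev/prod/pre files and merged with webpack-merge (the file names and the merge helper are assumptions, not the project's actual layout):

```js
// webpack.config.js (sketch)
const merge = require('webpack-merge');
const base = require('./webpack.base');

module.exports = (env = 'development') => {
  const envConfig = {
    development: require('./webpack.dev'),
    production: require('./webpack.prod'),
    preProduction: require('./webpack.pre')
  }[env];
  return merge(base, envConfig);
};
```

The dev and build scripts then only differ in the value they pass, e.g. `webpack --env production`, and an analyze flag could be handled the same way.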
Packaging optimization
This is a well-worn topic. Webpack 4 already shows plenty of out-of-the-box packaging ability, and only a little configuration is needed to finish a build. There are several commonly used optimizations (a combined sketch follows the list):
- Use TerserPlugin to compress JS
- Use HappyPack to run the JS loader conversion in parallel
- Use DllPlugin to pre-package stable dependencies and speed up packaging
- Enable the packaging cache to speed up repeat builds
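As referenced above, here is a rough sketch of how those pieces could be wired together in a webpack 4 config; the HappyPack id and the DLL manifest path are placeholders, not the project's actual values.

```js
const webpack = require('webpack');
const HappyPack = require('happypack');
const TerserPlugin = require('terser-webpack-plugin');

module.exports = {
  module: {
    rules: [
      {
        test: /\.jsx?$/,
        exclude: /node_modules/,
        // hand JS/JSX conversion off to HappyPack worker processes
        use: 'happypack/loader?id=js'
      }
    ]
  },
  plugins: [
    new HappyPack({ id: 'js', loaders: ['babel-loader?cacheDirectory'] }),
    // reference dependencies that were pre-built into a DLL bundle
    new webpack.DllReferencePlugin({
      manifest: require('./dll/vendor-manifest.json') // placeholder path
    })
  ],
  optimization: {
    minimizer: [
      // cache + parallel speed up repeated production builds (terser-webpack-plugin v1/v2)
      new TerserPlugin({ cache: true, parallel: true })
    ]
  }
};
```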
Externally configurable
You can use process.cwd() to get the directory the command is run from, and then read a config.js in that project root to override the parts of the configuration that need to change. Another option is to split the configuration into multiple files and provide different webpack profiles for different situations.
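A sketch of what reading such a root-level config.js might look like; the file name and the default values are assumptions for illustration.

```js
const path = require('path');
const fs = require('fs');

function loadUserConfig() {
  // process.cwd() is the directory the command is run from, i.e. the user's project root
  const configPath = path.resolve(process.cwd(), 'config.js');
  const defaults = { publicPath: '/', outputDir: 'dist' }; // illustrative defaults
  if (fs.existsSync(configPath)) {
    return Object.assign({}, defaults, require(configPath));
  }
  return defaults;
}

module.exports = loadUserConfig;
```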
Script commands
An npm package can expose commands that are executed directly from the command line. A package with a bin field will, by default, have its executable linked into the corresponding node_modules/.bin directory; it is locally executable, or globally executable if the package is installed globally.
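For reference, a minimal sketch of how the bin field and the entry file fit together; the file paths are illustrative, not necessarily the ones mj-cli uses.

```js
#!/usr/bin/env node
// bin/index.js – the file referenced by package.json:
//   "bin": { "mj-cli": "./bin/index.js" }
// On a local install npm links it into node_modules/.bin;
// on a global install it lands on the PATH, so `mj-cli` becomes executable.
console.log('mj-cli is runnable from the command line');
```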
Dev and build
Packages for the development environment, or builds the production bundle.
add
Adds a new page so you don't have to reinvent the wheel.
deploy
Deploys directly to the corresponding server: one-click deployment.
I did not add this part of the scaffolding at the beginning. Later, when I reviewed the whole project, I found it would be more effective to have a way to automate deployment, so I added the npm run deploy command. For now it can only deploy to a single environment and does not extend to multiple environments; it packages for a single-person application in a development environment (this will be updated later; the goal is to run straight through the development, test, and production pipeline).
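A rough sketch of what a single-environment deploy script could look like; the host, remote directory, and the use of scp are placeholders, not the project's actual implementation.

```js
// script/deploy.js (sketch)
const { execSync } = require('child_process');

const HOST = 'user@example.com';       // placeholder server
const REMOTE_DIR = '/var/www/my-app';  // placeholder remote directory

// 1. build the project first
execSync('npm run build', { stdio: 'inherit' });

// 2. copy the build output to the server (assumes ssh key access and scp installed)
execSync(`scp -r ./dist/. ${HOST}:${REMOTE_DIR}`, { stdio: 'inherit' });

console.log('deploy finished');
```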
Project code
In this project I used some common React libraries, including react-router, redux, and redux-thunk. The setup mainly targets single-page use and does not consider multi-page setups, so after downloading you may need to handle that yourself by following the entry steps above.
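For context, a minimal store setup with those libraries might look like this; the reducer shape is a placeholder, not the project's actual code.

```js
import { createStore, applyMiddleware, combineReducers } from 'redux';
import thunk from 'redux-thunk';

const rootReducer = combineReducers({
  // placeholder slice; the real project defines its own reducers
  app: (state = {}, action) => state
});

// redux-thunk lets action creators return functions for async flows
const store = createStore(rootReducer, applyMiddleware(thunk));

export default store;
```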
CLI tool
This is really just the scaffolding command, much like the webpack script commands, but it saves other people a lot of work: it uses Node to download the corresponding Git package and then deletes unnecessary files according to the configuration, so the project works out of the box.
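A rough sketch of that flow; the template repository URL, the list of files to strip, and the helper name are all illustrative rather than taken from mj-cli.

```js
const { execSync } = require('child_process');
const fs = require('fs');
const path = require('path');

function create(projectName, templateRepo) {
  const target = path.resolve(process.cwd(), projectName);

  // 1. download the template with git
  execSync(`git clone --depth=1 ${templateRepo} ${target}`, { stdio: 'inherit' });

  // 2. remove files the new project does not need, so it works out of the box
  ['.git', 'docs'].forEach(name => {
    const p = path.join(target, name);
    if (fs.existsSync(p)) {
      fs.rmSync(p, { recursive: true, force: true }); // Node 14.14+; older Node would use rimraf
    }
  });

  console.log(`created ${projectName}`);
}

module.exports = create;
```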
Conclusion
Full GitHub addresses:
mj_script
mj_react
argv_store
mj-cli
Having finished this, I feel my webpack and Node abilities have improved a lot. As a whole, though, the React code in the project still has problems: the files are not well separated, the overall architecture is still lacking to some degree, and some trade-offs were not well considered. I wanted everything, but some code blocks are confusing, and the webpack commands and the Git packages need to be distinguished in a more meaningful way.