Over the past six months, most of my energy went into our business component library, and one of the major pieces of that work was building out a complete engineering workflow for the component library (or, more broadly, our material assets).
This article is not a tutorial; it is only a summary of part of my work over the past six months, and also a memo for myself. Because these are internal company assets, in many places I cannot share source code or screenshots.
A few words on front-end engineering
In my view, front-end engineering means distilling the tedious, repetitive parts of the everyday workflow into tools, so that developers do less mechanical, repetitive work. The tools here are not limited to local project scripts; they also include cloud services and more. For example, we initialize projects with CRA, bundle and build them with Webpack/Vite, and deploy front-end projects through cloud services.
In my experience, virtually all of the simple, repetitive, mechanical labor in development can be improved, often significantly, through engineering capabilities.
Front-end engineering is as broad a topic as performance optimization. The main focus of this article is automation: using engineering automation to run through the entire lifecycle of the material system, from initialization -> development -> build -> release, and to reject everything inefficient along the way.
The whole engineering process through the lens of the lifecycle
Initialization phase
In the initialization phase, the material side provides component/block templates so that a new component or block can be scaffolded quickly.
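A minimal sketch of such an init script; the template directory layout, placeholder handling, and target paths are assumptions of mine rather than the real setup:

```ts
// init-component.ts — scaffold a new component from a local template directory.
// "templates/component" and "packages" are placeholder paths.
import fs from "node:fs";
import path from "node:path";

function initComponent(name: string, templateDir: string, targetRoot: string): void {
  const target = path.join(targetRoot, name);
  if (fs.existsSync(target)) {
    throw new Error(`Directory ${target} already exists`);
  }
  // Copy the template tree, then fill in the package name.
  fs.cpSync(templateDir, target, { recursive: true });
  const pkgPath = path.join(target, "package.json");
  const pkg = JSON.parse(fs.readFileSync(pkgPath, "utf8"));
  pkg.name = name;
  fs.writeFileSync(pkgPath, JSON.stringify(pkg, null, 2));
}

initComponent(process.argv[2], "templates/component", "packages");
```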
Development phase
The material system's showcase site hosts front-end assets of several kinds, such as components, blocks, solutions, and toolsets, so it adopts a MultiRepo architecture and moves away entirely from the traditional npm link development mode.
- On the component/material side, once the development script starts, source code changes are watched in real time and synced to the site's cache directory.
- On the site side, a .dev directory serves as the cache directory for local development, and the component library is aliased to .dev in dev mode; with Vite underneath, the development experience keeps improving.
- On cold start, the local development script pre-checks whether the .dev directory is valid; if it is not, it copies the full component source from node_modules into the temporary cache.
- For hot updates, the dev server watches the files in .dev and hot-updates them in real time.
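A minimal sketch of the two sides of this setup, assuming chokidar for file watching; the package name "@scope/materials" and all paths are placeholders, not our real ones:

```ts
// dev-sync.ts — watch component source and mirror changes into the site's .dev cache.
// COMPONENT_SRC and DEV_CACHE are placeholder paths.
import fs from "node:fs";
import path from "node:path";
import chokidar from "chokidar";

const COMPONENT_SRC = path.resolve(__dirname, "../materials/src");
const DEV_CACHE = path.resolve(__dirname, "../site/.dev");

chokidar
  .watch(COMPONENT_SRC, { ignoreInitial: true })
  .on("all", (event, file) => {
    const target = path.join(DEV_CACHE, path.relative(COMPONENT_SRC, file));
    if (event === "unlink") {
      fs.rmSync(target, { force: true });
    } else if (event === "add" || event === "change") {
      fs.mkdirSync(path.dirname(target), { recursive: true });
      fs.copyFileSync(file, target); // the site's dev server picks this up via the alias below
    }
  });
```

```ts
// vite.config.ts (site side) — alias the component package to the .dev cache in dev mode.
// "@scope/materials" is a placeholder package name.
import path from "node:path";
import { defineConfig } from "vite";

export default defineConfig(({ mode }) => ({
  resolve: {
    alias:
      mode === "development"
        ? { "@scope/materials": path.resolve(__dirname, ".dev") }
        : {},
  },
}));
```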
For more on local development, check out my previous article: Speed up Your Productivity with Vite
Build phase
On the material side, the build phase simply bundles the packages into ESM and ES5 outputs, so here I will mainly cover what the site-side build does.
Generating documentation
Since the component library is written entirely in TS with JSDoc-compliant annotations, the documentation step is driven by a script that:
- scans all component directories;
- parses the component type declarations and annotations through TypeDoc to generate a document AST JSON.
The site then parses the document AST JSON and renders the documentation at runtime.
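A minimal sketch of that script, assuming the TypeDoc 0.23-era programmatic API; the component glob and output path are placeholders of my own:

```ts
// generate-docs.ts — the documentation step, assuming the TypeDoc 0.23-era API.
import path from "node:path";
import fg from "fast-glob";
import { Application, TSConfigReader } from "typedoc";

async function generateDocJson(): Promise<void> {
  // 1. Scan all component entry files.
  const entryPoints = await fg("packages/*/src/index.ts");

  // 2. Parse type declarations and JSDoc annotations into a reflection tree.
  const app = new Application();
  app.options.addReader(new TSConfigReader());
  app.bootstrap({ entryPoints, excludePrivate: true });

  const project = app.convert();
  if (!project) throw new Error("TypeDoc conversion failed");

  // 3. Emit the document AST as JSON; the site parses it at runtime.
  await app.generateJson(project, path.resolve("site/docs/doc-ast.json"));
}

generateDocJson();
```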
Generating dependencies
Another feature of the showcase site is listing the atomic components that each business component depends on and rendering dependency diagrams. Much like the documentation step, a script scans the business component directories, analyzes each component's AST, and generates a dependency JSON; the site loads this JSON dynamically at runtime to display the dependency Sankey diagram.
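As a rough illustration of the dependency scan, here is a sketch built directly on @babel/parser and @babel/traverse; the atomic library name "@scope/base-ui", the glob, and the output path are assumptions:

```ts
// generate-deps.ts — collect each business component's atomic-component imports.
import fs from "node:fs";
import fg from "fast-glob";
import { parse } from "@babel/parser";
import traverse from "@babel/traverse";

async function generateDependencyJson(): Promise<void> {
  const deps: Record<string, string[]> = {};

  for (const file of await fg("packages/*/src/**/*.tsx")) {
    const component = file.split("/")[1];
    const ast = parse(fs.readFileSync(file, "utf8"), {
      sourceType: "module",
      plugins: ["typescript", "jsx"],
    });

    traverse(ast, {
      ImportDeclaration({ node }) {
        // Only record imports from the atomic component library.
        if (node.source.value.startsWith("@scope/base-ui")) {
          (deps[component] ??= []).push(node.source.value);
        }
      },
    });
  }

  // The site loads this JSON at runtime to render the dependency Sankey diagram.
  fs.writeFileSync("site/docs/deps.json", JSON.stringify(deps, null, 2));
}

generateDependencyJson();
```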
Generating basic component information
Component metadata already lives in the component source code, so there is no need to maintain it a second time on the site side. While generating dependencies, we also parse the metadata out of each component via its AST and generate a basic-information JSON. The site loads this JSON dynamically at runtime to display the component name, designer, description, design draft, and other information.
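For illustration, a simplified sketch that pulls the tags out of each component's leading JSDoc block with a regex rather than a full AST pass; the tag names and paths are assumptions:

```ts
// generate-meta.ts — extract component metadata from the leading JSDoc block.
import fs from "node:fs";
import fg from "fast-glob";

interface ComponentMeta {
  name: string;
  title?: string;
  author?: string;
  description?: string;
  designUrl?: string;
}

// Collect `@tag value` pairs from the first /** ... */ block in the file.
function parseJsDocTags(source: string): Record<string, string> {
  const block = source.match(/\/\*\*([\s\S]*?)\*\//)?.[1] ?? "";
  const tags: Record<string, string> = {};
  for (const [, tag, value] of block.matchAll(/@(\w+)\s+([^\n*]+)/g)) {
    tags[tag] = value.trim();
  }
  return tags;
}

async function generateMetaJson(): Promise<void> {
  const meta: ComponentMeta[] = [];
  for (const file of await fg("packages/*/src/index.tsx")) {
    const tags = parseJsDocTags(fs.readFileSync(file, "utf8"));
    meta.push({
      name: file.split("/")[1],
      title: tags.title,
      author: tags.author,
      description: tags.description,
      designUrl: tags.designUrl,
    });
  }
  fs.writeFileSync("site/docs/meta.json", JSON.stringify(meta, null, 2));
}

generateMetaJson();
```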
Syncing block code
In a previous article on building a front-end material ecosystem around the business component library, I mentioned that our blocks are stored as source code, so the build only needs to pull the latest source in real time. The engineering script therefore does the following (a sketch follows the two lists below):
For blocks:
- Read the block Container directory;
- Analyze the block variant dependencies under the Container;
- Pull the master/sub-blocks with the tool;
- Load the code at site runtime.
For solutions:
- Read the material.json file under the solution to get the solution's dependencies;
- Pull the master/sub-blocks with the tool;
- Load the code at site runtime.
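A rough sketch of that sync script; pullMaterial stands in for the materials-tools-cli kernel, and its signature, the Container directory, and the manifest field names are all assumptions:

```ts
// sync-blocks.ts — pull the latest block/solution source during the site build.
import fs from "node:fs";
import path from "node:path";
import { pullMaterial } from "materials-tools-cli"; // hypothetical API

const CONTAINER_DIR = "site/blocks"; // placeholder Container directory

async function syncBlocks(): Promise<void> {
  // 1. Read the block Container directory.
  for (const container of fs.readdirSync(CONTAINER_DIR)) {
    const manifestPath = path.join(CONTAINER_DIR, container, "material.json");
    if (!fs.existsSync(manifestPath)) continue;

    // 2. Analyze the variant dependencies declared by the container.
    const { blocks = [] } = JSON.parse(fs.readFileSync(manifestPath, "utf8"));

    // 3. Pull the master/sub-block source so the site can load it at runtime.
    for (const name of blocks as string[]) {
      await pullMaterial(name, path.join(CONTAINER_DIR, container));
    }
  }
}

syncBlocks();
```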
Uploading demos
Demo previews in our business component library work differently from most component libraries, which write demos in Markdown at development time and compile the Markdown into HTML at build time. Our demos are kept as TSX source on the site side, uploaded to the CDN during the build, and the CDN-hosted source is loaded at runtime for live preview.
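A sketch of the upload step; uploadToCdn is a hypothetical wrapper around whichever CDN/OSS SDK is in use, and the paths are placeholders:

```ts
// upload-demos.ts — upload TSX demo source to the CDN at build time.
import fs from "node:fs";
import fg from "fast-glob";
import { uploadToCdn } from "./cdn"; // hypothetical helper: (key, content) => Promise<url>

async function uploadDemos(): Promise<void> {
  const manifest: Record<string, string> = {};

  // Demos live as TSX source next to each component; upload every one of them.
  for (const file of await fg("packages/*/demo/*.tsx")) {
    manifest[file] = await uploadToCdn(file, fs.readFileSync(file, "utf8"));
  }

  // The site reads this manifest at runtime and fetches the TSX source from the CDN
  // to render a live preview.
  fs.writeFileSync("site/docs/demo-manifest.json", JSON.stringify(manifest, null, 2));
}

uploadDemos();
```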
Release phase
We have a strict, chained release process from the materials through to the site:
First, for components and blocks, npm and CDN publishing is automated through our publishing tool materials-tools-release, which cuts out the manual cost of code merging, branch checking, version checking, automated scripts, publishing, and other steps, and reduces the risk of mistakes.
The previous release flow (taking the component library as an example) was a long list of manual steps; commenting on them one by one:
- This one is routine, nothing to say;
- You may have forgotten this step;
- You might forget this step too;
- You might forget this step too;
- You might forget this step too;
- This step is probably the one you forget most often, but a prepublish check can make up for it;
- This step may be the only one you actually do;
- I suspect you don't even know what this one is.
Now, the release flow is:
Fewer steps means fewer mistakes.
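To make the comparison concrete, here is a sketch of the kind of checks a release tool like materials-tools-release bundles together; the branch name, scripts, and CDN step are assumptions of mine, not the tool's actual implementation:

```ts
// release.ts — the kind of checks an automated release bundles together.
// Assumes the package has been published at least once (otherwise `npm view` exits non-zero).
import { execSync } from "node:child_process";
import fs from "node:fs";

function run(cmd: string): string {
  return execSync(cmd, { stdio: "pipe" }).toString().trim();
}

function release(): void {
  // 1. Branch check: only release from master.
  if (run("git rev-parse --abbrev-ref HEAD") !== "master") {
    throw new Error("Release must run on master");
  }

  // 2. Version check: refuse to publish a version that already exists on npm.
  const { name, version } = JSON.parse(fs.readFileSync("package.json", "utf8"));
  if (run(`npm view ${name} versions --json`).includes(`"${version}"`)) {
    throw new Error(`${name}@${version} is already published`);
  }

  // 3. Build, publish to npm, then push the built assets to the CDN.
  run("npm run build");
  run("npm publish");
  run("npm run upload:cdn"); // hypothetical CDN upload script
}

release();
```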
Finally, site deployment.
Tooling distilled along the way
In the process, we distilled a set of tools that serve common component library scenarios.
materials-tools-ast
A thin wrapper around Babel that makes it easy to extract a file's imports and exports and to analyze the derivation relationships between components.
materials-tools-cli
materials-tools-cli provides two things: a material-pull kernel and a material-pull CLI tool. The kernel serves the CLI tool and the VS Code plugin, and mainly does the following:
- Checks whether the material exists and whether the current directory already contains a material with the same name;
- Pulls the material from the CDN by its name;
- Merges dependencies and prompts based on the pulled material's dependencies.
materials-tools-utils
Encapsulates the utility functions commonly needed when writing engineering scripts:
- file.ts: file operations;
- log.ts: log output;
- command.ts: command-line execution.
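As an example of the granularity involved, a sketch of what command.ts might wrap; the exact API surface here is an assumption:

```ts
// command.ts — thin wrapper for running shell commands from engineering scripts.
import { execSync, type ExecSyncOptions } from "node:child_process";

export function runCommand(cmd: string, options: ExecSyncOptions = {}): string {
  // Return trimmed stdout so callers can chain results in scripts.
  return execSync(cmd, { encoding: "utf8", ...options }).toString().trim();
}
```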
materials-tools-release
An automated publishing tool that reduces the cost and error risk of code merging, branch checking, version checking, automated scripts, publishing, and other steps.
Automated publishing of the toolset
The toolset uses a Lerna-based MonoRepo architecture, but Lerna alone didn't meet some of our requirements (for example, we have many unified custom npm hooks), so we built a wrapper on top of Lerna (a sketch follows the list below):
- Use lerna changed to get the list of updated packages and their version info;
- Bump the versions of the packages to be published with lerna version;
- Sort the packages to publish in dependency order: for example, if A depends on B and B depends on C, the order is C -> B -> A;
- Walk the sorted list and run our custom hooks for each package;
- Publish.
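A sketch of that wrapper flow; lerna changed --json and lerna version are real Lerna commands, while the custom hook name and the publish invocation are placeholders of mine:

```ts
// publish-toolset.ts — sketch of the Lerna wrapper described above.
import { execSync } from "node:child_process";
import fs from "node:fs";
import path from "node:path";

interface Pkg {
  name: string;
  location: string;
  dependencies: Record<string, string>;
}

function run(cmd: string): string {
  return execSync(cmd, { encoding: "utf8" }).trim();
}

// Topological sort: if A depends on B and B depends on C, publish C -> B -> A.
function sortByDependency(pkgs: Pkg[]): Pkg[] {
  const names = new Set(pkgs.map((p) => p.name));
  const visited = new Set<string>();
  const sorted: Pkg[] = [];

  const visit = (pkg: Pkg): void => {
    if (visited.has(pkg.name)) return;
    visited.add(pkg.name);
    for (const dep of Object.keys(pkg.dependencies)) {
      if (names.has(dep)) visit(pkgs.find((p) => p.name === dep)!);
    }
    sorted.push(pkg);
  };

  pkgs.forEach(visit);
  return sorted;
}

function publishChanged(): void {
  // 1. Which packages changed since the last release, plus their dependencies.
  const changed: Pkg[] = JSON.parse(run("lerna changed --json")).map((p: Pkg) => ({
    ...p,
    dependencies:
      JSON.parse(fs.readFileSync(path.join(p.location, "package.json"), "utf8"))
        .dependencies ?? {},
  }));

  // 2. Bump the versions of the packages to be published.
  run("lerna version --yes");

  // 3. Publish in dependency order, running the unified custom hook for each package.
  for (const pkg of sortByDependency(changed)) {
    run(`npm run prepublish-custom --prefix ${pkg.location}`); // hypothetical hook
    run(`npm publish ${pkg.location}`);
  }
}

publishChanged();
```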