Written by F(X) Team-Rem
In the Taobao internal version, imgcook provides dependency-management-like functionality for introducing other dependency packages, such as axios, underscore, and @rax/video, when writing functions in the imgcook editor.
Dependency management is a GUI interface, so opening each function's code to check its dependencies is tedious. As a result, every time you finish developing an imgcook module, if it relies on other packages (and most of the time it does), you need to open the functions one by one, confirm the version numbers, and add them in dependency management, which in my case is often a painful process.
How to solve
imgcook also provides Schema source-code development: you can replace the GUI steps by modifying the module protocol (Schema) directly in the editor. Searching the Schema for dependencies, I found that the dependency management function is implemented via the imgcook.dependencies field in the protocol:
{
  "alias": "Axios",
  "packageRax1": "axios",
  "versionRax1": "^0.24.0",
  "packageRaxEagle": "axios",
  "versionRaxEagle": "^0.24.0",
  "checkDepence": true
}
Since the code of each function also lives in the protocol, couldn't we just process the original protocol document, scan out the corresponding dependencies, save them onto the node, and then click "Save" to see the package list in dependency management updated?
Implementing the function
To this end, I implemented pulling module protocol content in @imgcook/cli. The specific pull requests are imgcook/imgcook-cli#12 and imgcook/imgcook-cli#15. With them, you can use the command-line tool to pull the corresponding protocol (Schema) as follows:
$ imgcook pull <id> -o json
After execution, the module protocol content is printed to stdout on the command line.
With this capability, we can build command-line tools that consume imgcook-cli's output through Unix pipelines. For example, the JSON printed by imgcook pull is not very readable, so let's write an imgcook-prettyprint command to beautify it:
#!/usr/bin/env node
let originJson = '';
process.stdin.on('data', (buf) => {
  originJson += buf.toString('utf8');
});
process.stdin.on('end', () => {
  const origin = JSON.parse(originJson);
  console.log(JSON.stringify(origin, null, 2));
});
The program above receives the upstream data of the pipeline (the imgcook protocol content) via process.stdin, then parses and pretty-prints it in the end event. Run it like this:
$ imgcook pull <id> -o json | imgcook-prettyprint
This is a simple example of a Unix Pipeline program.
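The essence of imgcook-prettyprint is just re-serializing the JSON with indentation. That core step can be isolated as a small pure function for a quick sanity check (the helper name prettyPrint is mine, not part of imgcook):

```javascript
// prettyPrint: re-serialize a JSON string with 2-space indentation,
// mirroring what imgcook-prettyprint does with the piped-in protocol.
function prettyPrint(originJson) {
  const origin = JSON.parse(originJson);
  return JSON.stringify(origin, null, 2);
}
```

Feeding it a compact protocol fragment such as '{"imgcook":{"dependencies":[]}}' returns the same data spread over indented lines.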
Let's look at how to automate dependency generation in this way. Similar to the example above, create another file, ckdeps:
#!/usr/bin/env node
let originJson = '';
process.stdin.on('data', (buf) => {
  originJson += buf.toString('utf8');
});
process.stdin.on('end', () => {
  transform();
});

async function transform() {
  const origin = JSON.parse(originJson);
  const funcs = origin.imgcook?.functions || [];
  if (funcs.length === 0) {
    process.stdout.write(originJson);
    return;
  }
  console.log(JSON.stringify(origin));
}
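The pass-through behavior of transform — emit the input unchanged when the module declares no functions — can be captured as a small testable function (the name transformJson is hypothetical, not from the article's ckdeps):

```javascript
// transformJson: core logic of ckdeps' transform step. If the protocol
// declares no functions, the original JSON passes through untouched;
// otherwise it is re-serialized (dependency extraction happens in between).
function transformJson(originJson) {
  const origin = JSON.parse(originJson);
  const funcs = (origin.imgcook && origin.imgcook.functions) || [];
  if (funcs.length === 0) {
    return originJson; // nothing to scan, pass through unchanged
  }
  return JSON.stringify(origin);
}
```

Passing the input through untouched on the empty case keeps the tool safe to insert into any pipeline.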
Each element of imgcook.functions holds the code content of a function, for example:
{
  "content": "export default function mounted() {\n\n}",
  "name": "mounted",
  "type": "lifeCycles"
}
Next, we parse each content field to find the import statements in the code and generate the corresponding dependency objects for origin.imgcook.dependencies. For that we use @swc/core to parse the JavaScript code:
const swc = require('@swc/core');
await Promise.all(funcs.map(async ({ content }) => {
const ast = await swc.parse(content);
// the module AST(Abstract Syntax Tree)
}));
After obtaining the AST, we could walk it manually to find the import statements, but since the AST is complex, @swc/core provides a dedicated traversal mechanism:
const { Visitor } = require('@swc/core/visitor');

/**
 * Stores the list of dependency objects collected while parsing functions.
 */
const liveDependencies = [];

/**
 * Defines the visitor.
 */
class ImportsExtractor extends Visitor {
  visitImportDeclaration(node) {
    let alias = 'Default';
    liveDependencies.push({
      alias,
      packageRax1: node.source.value,
      versionRax1: '',
      packageRaxEagle: node.source.value,
      versionRaxEagle: '',
      checkDepence: true,
    });
    return node;
  }
}

// Usage
const importsExtractor = new ImportsExtractor();
importsExtractor.visitModule(ast);
Our visitor class ImportsExtractor inherits from the Visitor in @swc/core/visitor. Because we want to iterate over import declarations, whose syntax type name is ImportDeclaration, we simply implement the visitImportDeclaration(node) method; it receives every import statement in the module, and we convert each into a dependency object according to the structure of the node. Once the extractor is defined, all that remains is to feed it the AST so that all of the module's dependencies are collected for subsequent generation.
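For comparison, if you did not want a full parser, a rough regex scan could pull the module specifiers out of import statements. It misses cases the AST approach handles correctly (imports inside comments or strings, unusual formatting), so treat this as an illustrative fallback, not the article's actual method:

```javascript
// extractImports: naive regex-based scan for module specifiers in
// top-level import statements. Illustrative only; the AST-based
// ImportsExtractor is more robust.
function extractImports(content) {
  const importRe = /import\s+(?:[\w*\s{},$]+\s+from\s+)?['"]([^'"]+)['"]/g;
  const specifiers = [];
  let match;
  while ((match = importRe.exec(content)) !== null) {
    specifiers.push(match[1]);
  }
  return specifiers;
}
```

On a function body containing import axios from 'axios' it yields ['axios'], which could then be mapped to dependency objects the same way as in the visitor.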
In the visitor code, the version number is currently an empty string, which would lose the dependency's version information when we update the protocol content, so we need to define a way to obtain the version.
Since front-end dependencies are stored in the npm registry, we can retrieve version information via its HTTP interface, for example:
const axios = require('axios');

async function fillVersions(dep) {
  const pkgJson = await axios.get(`https://registry.npmjs.org/${dep.packageRax1}`);
  if (pkgJson.data['dist-tags']) {
    const latestVersion = pkgJson.data['dist-tags'].latest;
    dep.versionRax1 = `^${latestVersion}`;
    dep.versionRaxEagle = `^${latestVersion}`;
  }
  return dep;
}
Following the URL convention https://registry.npmjs.org/${packageName}, we can fetch a package's information from the registry. data['dist-tags'].latest is the version tagged latest, which is simply the newest version of the package, and we prepend a ^ range prefix to it (you can also adjust the final version and the npm registry as desired).
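The version-picking rule can be isolated from the HTTP call for a quick check (the helper name caretRange is my own, not part of the article's code):

```javascript
// caretRange: derive the "^x.y.z" range from a registry dist-tags
// object, as fillVersions does after fetching package metadata.
function caretRange(distTags) {
  if (distTags && distTags.latest) {
    return `^${distTags.latest}`;
  }
  return ''; // no latest tag: leave the version empty
}
```

Keeping this logic separate from the network request also makes the fallback behavior (no dist-tags on the package) explicit.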
The last step is to write the dependency information we extracted from the function code back into the protocol and print it:
async function transform() {
// ...
origin.imgcook.dependencies = newDeps;
console.log(JSON.stringify(origin));
}
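One detail the snippet glosses over is how the newly extracted dependencies relate to entries already declared in the protocol. A reasonable policy (my assumption; the article's ckdeps does not show its merge logic) is to keep existing entries and only add packages not yet present:

```javascript
// mergeDependencies: combine existing protocol dependencies with newly
// extracted ones, de-duplicating by package name. Hypothetical helper;
// the exact merge policy of ckdeps is not shown in the article.
function mergeDependencies(existing, extracted) {
  const byPackage = new Map(existing.map((dep) => [dep.packageRax1, dep]));
  for (const dep of extracted) {
    if (!byPackage.has(dep.packageRax1)) {
      byPackage.set(dep.packageRax1, dep);
    }
  }
  return [...byPackage.values()];
}
```

Preferring the existing entry preserves any version pinned by hand in dependency management.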
Then run:
$ imgcook pull <id> -o json | ckdeps
> { ... , "dependencies": [{ ...<updated dependencies> }] }
The developer then simply copies the output JSON into the editor and saves it. Oh wait, you can't save raw JSON in the editor; it needs to be in ECMAScript Module form (export default { ... }), so let's write one more command, imgcook-save, to do the conversion:
#!/usr/bin/env node
let originJson = '';
process.stdin.on('data', (buf) => {
  originJson += buf.toString('utf8');
});
process.stdin.on('end', () => {
  transform();
});

async function transform() {
  const origin = JSON.parse(originJson);
  console.log(`export default ${JSON.stringify(origin, null, 2)}`);
}
The last complete command is:
$ imgcook pull <id> -o json | ckdeps | imgcook-save
> export default { ... }
This way, we can copy content directly to the editor.
Effect of experience
For example, I added a dependency on axios in the created function of one of my projects, closed the window, hit Save (making sure the Schema was saved), and then ran the following command:
$ imgcook pull <id> -o json | ckdeps -f | imgcook-save
Then open Schema editing in the editor, paste and save the generated content, and open dependency management to see:
The dependencies parsed from the code have been updated into the dependencies panel, finally freeing me to do other things.
One more thing?
Is that the end of it? Not quite. On macOS, the pbcopy command copies stdin to the clipboard:
$ imgcook pull <id> -o json | ckdeps | imgcook-save | pbcopy
This saves the manual copy step: after the command finishes, you can open the editor and paste (⌘V) the content directly.
@imgcook/cli supports outputting JSON text, which makes imgcook part of the Unix pipeline ecosystem. In this way, we can build some interesting and useful tools that collaborate with many Unix utilities (such as pbcopy, grep, cat, sort, etc.).
In this article, I used the dependency auto-generation example to verify the feasibility of working with the imgcook editor through Unix pipelines. For now, this approach makes up for gaps in the editor experience and lets me use imgcook's core functions more conveniently.