Xixian, Traffic Products front-end team, Alibaba Cloud
In an era of rapidly advancing intelligent technology, applications of it appear at every layer of the stack. Centered on the theme of "efficiency", the front-end team of Alibaba Cloud's traffic products has built an intelligent front-end code generation solution based on image recognition, grounded in our rich mid- and back-office business scenarios.
As our exploration progressed and the platform's capabilities improved, we were no longer satisfied with generated code that was just a pile of React render functions; generating Dva modules that match our daily development style became an urgent need. This article introduces our team's overall thinking on D2C (Design to Code) and our general solution.
Background
Dumbo is an intelligent development platform that uses image recognition algorithms to generate front-end code in one click. It has already landed in several Alibaba Cloud console and mid- and back-office projects.
Dumbo's basic pipeline is: use intelligent techniques to generate, from a single image, a JSON description (Schema) that conforms to an agreed specification; then fine-tune and correct it manually on a visual building platform; and finally generate React module code. Of course, for requirements without strict design constraints, users can even assemble the page directly by drag-and-drop on the building platform and then generate the code.
One might ask why we generate code at all when the building platform already has a runtime that can render the JSON description directly. The reason is that complex interaction scenarios and non-standard UI are unavoidable, and an engineer's goal is to deliver the requirements. To keep the platform from becoming overly complex, secondary development on top of intelligently generated code is the optimal solution given the team's current staffing.
Approach
Before we implemented Dva module generation, code generation only output an index.jsx from the limited information in the Schema. In the early implementation, the final code was produced by manually creating an AST node for each Schema node and then assembling the whole AST into code. However, manipulating an AST is costly, its readability is close to zero, and for scenarios that need optimization the AST is bulky and bloated. We therefore abandoned the AST approach and, drawing on code generation experience elsewhere in the group, adopted direct string concatenation. To minimize unnecessary manual intervention, the overall code generation flow can be summarized as follows:
Schema preprocessing performs the initial supplementation and adjustment after the recognition algorithm runs. After preprocessing, users can make a series of manual adjustments and fill in details on the Dumbo platform. The Schema then enters the enhancement stage, which mainly adjusts the style of the final generated code. Finally, the completed Schema contains all the information for the entire project and is assembled into code.
Solution
Let's expand each step and briefly describe its implementation.
Schema preprocessing
First, the preprocessing step. The image recognition result is just an array of components whose properties contain only each component's name and position, and nothing more. Through a series of position calculations and nested assembly, we obtain a very primitive Schema tree that conforms to the Alibaba economy's mid- and back-office standard protocol; in the figure below we refer to it as the Dumbo Schema. Since image recognition cannot make deeper judgments about interactions and behavior, the information in the Schema tree at this point is very limited. To make the generated code as complete as possible, we preprocess the Schema here and add the interaction logic of common functions.
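For illustration, the recognized input and the assembled Schema might look roughly like this (field names are hypothetical and simplified from the actual protocol):

// Hypothetical shape of the raw recognition result: a flat component list.
const recognized = [
  { componentName: 'Table', rect: { x: 24, y: 120, width: 960, height: 400 } },
  { componentName: 'Button', rect: { x: 24, y: 72, width: 88, height: 32 } }
];

// After position analysis and nested assembly, a primary Schema tree emerges.
const dumboSchema = {
  componentName: 'Page',
  props: {},
  children: [
    { componentName: 'Button', props: { type: 'primary' }, children: [] },
    { componentName: 'Table', props: { dataSource: [] }, children: [] }
  ]
};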
PrePlugins
Using Table as an example, here is briefly what the PrePlugins do. A loading property is added to the Table node, with a matching value on this.state. Based on the information already in the Schema, onSort and rowSelection attributes are added to the Table node, with simplified example functions as their values. In addition, to keep the Schema complete, a fetchTableData method is mounted on schema.methods, implementing simple isLoadingTableData linkage. Finally, fetchTableData is invoked once in schema.lifeCycles.componentDidMount.
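As a sketch, such a PrePlugin might look like the following, assuming a plugin simply receives and mutates the Schema; the plugin signature and the JSExpression/JSFunction value shapes are simplified and hypothetical here:

// Hypothetical PrePlugin: enriches a recognized Table node with common interaction logic.
function tablePrePlugin(schema, tableNode) {
  // Bind loading to a state field so the fetch method can drive it.
  tableNode.props.loading = { type: 'JSExpression', value: 'this.state.isLoadingTableData' };
  // Simplified example handlers for sorting and row selection.
  tableNode.props.onSort = {
    type: 'JSFunction',
    value: 'function(dataIndex, order) { this.fetchTableData({ dataIndex, order }); }'
  };
  tableNode.props.rowSelection = {
    type: 'JSFunction',
    value: 'function(selectedRowKeys) { this.setState({ selectedRowKeys }); }'
  };
  // Mount fetchTableData on schema.methods with simple isLoadingTableData linkage.
  schema.methods = schema.methods || {};
  schema.methods.fetchTableData = {
    type: 'JSFunction',
    value: 'function(params) { this.setState({ isLoadingTableData: true }); /* request ... */ }'
  };
  // Invoke it once on mount.
  schema.lifeCycles = schema.lifeCycles || {};
  schema.lifeCycles.componentDidMount = {
    type: 'JSFunction',
    value: 'function() { this.fetchTableData(); }'
  };
  return schema;
}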
Schema enhancements
Before generating code, we put the preliminary Schema onto the canvas and let the user make a series of adjustments. The adjusted Schema still needs a series of treatments before it becomes code, including code style adjustments and Dva support; this is the last stage at which the Schema is manipulated. The enhanced Schema is then traversed directly to generate a code Chunk for each node, and the Chunks are finally spliced into complete code.
PostPlugins
Again using Table as an example, here is briefly what the PostPlugins do. In the Table scenario, the default output would spell out the entire Table with every Table.Column inline, which does not match our normal coding style. Here we extract all the child elements under the Table and render them in a loop with map.
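As an illustration, the difference in output might look like this (the fragments below are illustrative, with hypothetical column names, not the exact generated code):

// Default output: every Table.Column spelled out inline.
<Table dataSource={this.state.tableDataAsync.data.list}>
  <Table.Column title="Name" dataIndex="name" />
  <Table.Column title="Status" dataIndex="status" />
</Table>

// After the PostPlugin: columns are lifted into a config array and rendered with map.
const columns = [
  { title: 'Name', dataIndex: 'name' },
  { title: 'Status', dataIndex: 'status' }
];

<Table dataSource={this.state.tableDataAsync.data.list}>
  {columns.map(col => <Table.Column key={col.dataIndex} {...col} />)}
</Table>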
DvaPlugins
In principle, DvaPlugins should also be categorized as PostPlugins, but because of their particularity they are explained separately here. It should be pointed out that the Dva model design at this stage still has some deficiencies.
In principle, actions and sagas should carry most of the business logic so that the view layer stays simple. However, because the canvas currently has limited control over the context of the whole front-end application, the generated Dva module still amounts to a single-page store, and that store only handles asynchronous requests, carrying very little business logic. Given the diversity of the logic involved, Dumbo can currently only generate models in this "style"; there is no better choice for now.
Take our current team project as an example: a proper Dva module consists of five files: actions.js, index.js, model.js, selectors.js, and service.js. The contents of actions.js and selectors.js are relatively fixed; index.js is the main page content; service.js collects the requests initiated by the page; and model.js is the most important part of Dva generation.
Since all of the Dva generation revolves around asynchronous data, every data interaction request can be configured through the dataSource attribute of the group-specification Schema. In the generation logic, to better output the side-effect functions in the Dva module and initialize the model's state, we agreed that data source IDs take the form xxxTypeAsync: xxx is a user-defined data name in lower camelCase, Type indicates the data type, and Async is a fixed suffix. The generated code mounts everything under the data field returned by the API onto state. For side effects that should not be mounted onto state, use a data source ID whose xxx part starts with set.
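For example, a dataSource configuration on the Page node under this convention might look roughly like this (field names are simplified from the group specification; the URIs are hypothetical):

// Hypothetical dataSource configuration on the outermost Page node.
// "tableDataAsync" = user-defined name "table" + type "Data" + fixed suffix "Async".
const pageSchema = {
  componentName: 'Page',
  dataSource: {
    list: [
      {
        id: 'tableDataAsync',
        isInit: true,        // fire automatically when the page mounts
        type: 'fetch',
        options: {
          uri: 'https://example.com/api/table/list',
          method: 'GET',
          params: { pageNum: 1, pageSize: 10 }
        }
      },
      {
        // A set-prefixed ID: its result is NOT mounted onto state.
        id: 'setRecordStatusAsync',
        isInit: false,
        type: 'fetch',
        options: { uri: 'https://example.com/api/record/status', method: 'POST' }
      }
    ]
  }
};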
The basic design principle of Dva module generation is to keep the canvas rendering correctly. In other words, all configuration must ensure that the exported code also works properly on the canvas.
Code generation
At this point, the Schema contains all the information about what is on the canvas. We now traverse each Schema node and splice each node into a Chunk object. Each Chunk has at least three attributes: name, content, and linkAfter, which respectively represent the Chunk's name, the code fragment the node represents, and the Chunk's position. linkAfter is an array of other Chunks' name attributes, indicating that the current Chunk must appear after those Chunks; it controls the output order. Generating content is mainly a recursive node-splicing process: once line-level formatting information is discarded, each node can independently express its own code fragment, and the boundary conditions just need to be handled patiently.
The splicing process iterates over the Chunks several times. Each pass finds all Chunks whose linkAfter has length 0, records their names, appends their content to the result string, and then deletes those names from every other Chunk's linkAfter. This continues until every Chunk's linkAfter is empty and all Chunks have been spliced in order; the result is the generated code.
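The splicing pass is essentially a topological sort over the linkAfter dependencies. A minimal sketch of the loop described above:

// Minimal sketch of the chunk-splicing pass.
function spliceChunks(chunks) {
  let result = '';
  const remaining = [...chunks];
  while (remaining.length > 0) {
    // Find every chunk whose dependencies have all been emitted.
    const ready = remaining.filter(c => c.linkAfter.length === 0);
    if (ready.length === 0) {
      throw new Error('Circular dependency between chunks');
    }
    for (const chunk of ready) {
      result += chunk.content;
      remaining.splice(remaining.indexOf(chunk), 1);
      // Unblock chunks that were waiting on this one.
      for (const other of remaining) {
        other.linkAfter = other.linkAfter.filter(name => name !== chunk.name);
      }
    }
  }
  return result;
}

// Usage: each chunk carries { name, content, linkAfter }.
const code = spliceChunks([
  { name: 'render', content: 'render() { /* ... */ }\n', linkAfter: ['imports'] },
  { name: 'imports', content: "import React from 'react';\n", linkAfter: [] }
]);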
Example
An example is worth a thousand words.
The initial request
According to the specification, data can only be configured on the outermost Page node, so first click the outermost Page container on the canvas; then click "Data" in the configuration panel on the right; finally, click "Add Custom Data Source" to open the data configuration form.
After filling in the form, we drag an ordinary antd Table onto the canvas, edit the Table's column configuration in the attribute panel on the right, and adjust the columns to match the interface fields. For the data array, choose "use a variable" and fill in this.state.tableDataAsync.data.list. Don't be intimidated by the length of this field: tableDataAsync is just our data source ID, and the data.list that follows is the field path within the data source.
At this point, the canvas renders with real data.
Triggering a data source manually
Now let's configure pagination for the same Table component as above. In the properties panel on the right, click the pager and add the pager's onChange property. Note how the data source is invoked.
According to the group specification, dataSourceMap must be used as the identifier for invoking data sources. Here tableDataAsync is still the data source ID configured above; call its load method and pass in the parameters. load returns a Promise, so chained then calls are supported; if a follow-up request is needed inside then, another load call can be nested (see the sketch after the handler below).
function(val) {
  this.dataSourceMap.tableDataAsync.load({
    pageNum: val
  }).then(res => {
    this.setState({
      tableDataAsync: res
    })
  })
}
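For instance, if a second request depends on the result of the first, the load call can be nested inside then. A sketch (detailDataAsync is a hypothetical second data source ID):

function(val) {
  this.dataSourceMap.tableDataAsync.load({ pageNum: val }).then(res => {
    this.setState({ tableDataAsync: res });
    // Nesting another load issues a follow-up request based on the first result.
    this.dataSourceMap.detailDataAsync.load({ id: res.data.list[0].id });
  });
}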
Viewing the code
Finally, click "View Code" in the upper right corner to get the generated code.
A few details of code generation are worth highlighting:
- All data source IDs must be in the xxxTypeAsync format.
- The generated side effect is named get${dataSourceId}; by default it mounts the data field of the response (res.data) onto state, so pay attention to the interface's return shape. A sketch of the generated model appears after this list.
- If the data source ID matches /^set(\w+)/, nothing is mounted onto state.
- Currently, the code in index.jsx only manages state related to data source IDs, which is converted into the corresponding props; other state is not processed. Whether all state should be moved into the model later needs further discussion.
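Putting these conventions together, the model.js generated for the tableDataAsync data source might look roughly like the sketch below (the namespace and the service function name are hypothetical; the real output may differ in detail):

// Sketch of a generated dva model for the tableDataAsync data source.
import { fetchTableData } from './service';  // hypothetical service function

export default {
  namespace: 'tablePage',   // hypothetical namespace
  state: {
    tableDataAsync: {},     // state initialized from the data source ID
  },
  effects: {
    // Side effect named get + data source ID.
    *getTableDataAsync({ payload }, { call, put }) {
      const res = yield call(fetchTableData, payload);
      // By convention, the data field of the response is mounted onto state.
      yield put({ type: 'save', payload: { tableDataAsync: res.data } });
    },
  },
  reducers: {
    save(state, { payload }) {
      return { ...state, ...payload };
    },
  },
};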
Problems
- After a data source is defined, the group dataSource specification requires invoking this.dataSourceMap['xxxTypeAsync'] for any non-initialization request, which is somewhat complex for non-front-end developers.
- The generated Dva module is produced from the information in the Schema; there is still no good way to generate the business logic that Dva ought to carry.
- Currently, the dataSource URI on the canvas requires the interface itself to support CORS.
Looking ahead
At present, all operations on the canvas side are handled uniformly according to the group specification, and we have made a series of conventions to coordinate Dva module generation. After the data-related configuration is complete, the configured data source must be invoked actively, per the convention, to trigger the page's Ajax calls. We will keep watching this and optimizing the usage. In addition, we plan to support forwarding the data interface through a gateway, so that real data can be proxied and the canvas and the generated code are fully connected.
Finally, thank you for reading to the end. The whole project is still iterating rapidly; we have many ideas, and we have made many compromises due to limited staffing. We look forward to advancing together on the road to intelligence!