As one of the four technical directions of the Alibaba Economy Front-end Committee, the front-end intelligence project passed the stage test of the 2019 Double 11 (Singles' Day) with good results: 79.34% of the online code for new modules in the Tmall and Taobao Double 11 venues was generated automatically by the project. Along the way, the R&D team worked through many difficulties and much reflection. This series, "How Is Front-end Code Generated Intelligently," shares the dribs and drabs of technology and thinking behind the front-end intelligence project.


Text by: card raccoon dog

【 Reading Tips 】

  • The full text is long, and some terms are specific to the marketing domain inside Alibaba's Taobao business. If anything is confusing, please leave a comment and discuss.
  • This article focuses on the problems and solutions of D2C in Alibaba's internal Taobao big-promotion scenarios. We will move into more R&D scenarios in the near future.
  • imgcook currently provides only some of the features mentioned in this article; the community version of imgcook does not yet offer them.

Overview

imgcook is a creative chef: taking all kinds of design drafts (Sketch/PSD/static images) as raw material, it generates maintainable UI view code and logic code in one click through intelligent means.

Logic development is the last and most time-consuming step in front-end requirement development. Viewed across the whole front-end workflow, beyond the initial static view, all of the data mapping, animations, function writing, event flow, tracking logs ("buried points") and other code is essentially a supplement to the static view's information.

In the figure below, the requirement output is produced collaboratively by the product manager, interaction designer, and visual designer, while requirement implementation is done entirely by programmers. If "visual artwork plus PRD interaction documentation" equals the final deliverable requirements document, then "static view + logic" equals the final code of a front-end page.

Exploration

Front-end development is a form of GUI (Graphical User Interface) programming. From the command-line era into the graphical-interface era and up to the present, the exploration of more convenient interface development has never stopped. The front-end field produced design ideas such as MVC and MVVM early on, along with excellent frameworks and libraries such as jQuery, Backbone.js, Angular, Vue, and React.

Separation of concerns is the guiding principle of GUI development: separating the View from the Model data simplifies software complexity. Most design ideas and frameworks in interface development follow this basic idea, and the Web's HTML + CSS + JS split is itself a manifestation of it.

The React idea advocated in our group is even closer to this core: it provides only V, M, and the rendering process connecting them. Put simply, UI = render(Data). This recognition is one of the bases on which we affirmed and launched the D2C project of restoring view code from the visual draft.

The business logic explored in this article is everything in the project code other than the View. If D2C is a visual-to-code process, the code still missing from the final live page is what this article's business logic layer supplies.

Layering

The business logic layer sits at the most downstream point of D2C's core capabilities: every intelligent capability exposed as a service must ultimately land in the logic layer.

Challenges and Difficulties

Analysis of the situation

In the D2C system, most of the technical subsystems work along the dimension of the design-draft visuals, aiming at the accuracy and soundness of layout structure, field class names, and inline component identification. The business logic layer, however, has to make up for what D2C's other abilities lack, so its technical scheme differs somewhat from the rest of the project.

At the same time, as the key layer connecting upstream and downstream, the business logic layer must take in D2C's intelligent outputs and hand them to the visual orchestration platform. An intelligent result is probabilistic, a value with some chance of being wrong, whereas the downstream visual orchestration IDE is a program that needs a deterministic protocol to guarantee the generated code can finally go live. Whether business logic can land intelligence stably is a great challenge for us.

In the existing D2C link diagram, besides the UI structure produced by the layout algorithm, the input of the business logic layer includes the following items:

  • Semantically inferred class names
  • Guessed bindable fields (covering image classes and NLP classes)
  • Components identified by component recognition
  • …

The output of the business logic layer is a visual orchestration protocol carrying logic. Its fields can be divided into the following types according to the functions they finally realize:

  • View model
  • Field binding
  • Function logic
  • Custom component transformation
  • …

Target link

In the traditional development link, both the UI and the logic must be coded by hand. In the current D2C link, the visual-restore ability already automates UI coding and essentially removes UI development time. The goal of the D2C business logic layer is to automate logic coding as well: we want to upgrade D2C across the board, restore vision + logic in a unified way, and reach zero front-end coding input.





Problem analysis

Step 1: Think about the real logical development process

Before making logic generation intelligent, we need to analyze how a piece of logic code actually gets written.

Suppose the static view is already developed. Writing the page's logic code next requires a process of input: the input sources can be the visual draft, past experience, conventions specific to the framework, the PD's requirement document, and so on. From these sources we obtain requirement information and represent it as a logic point.

For example, there is a "search" button in the visual draft. While observing it we retrieve past experience: this search is most likely click-triggered network-request logic. With the help of the requirement document and communication with the relevant interface owners, we learn the request method and the returned content. The shape of a requirement thus crystallizes into a logic point, and what remains is coding and testing.

To generate logic intelligently, the whole pipeline above must become fully automatic. Observing the path of requirement coding, we can clearly split the process into two stages, with the requirement representation as the dividing line: the first stage collects requirements, the second implements them, and turning each logic point in between into code is exactly our development work.

In the link we want, the business logic layer adds two incremental capabilities to D2C: logic pre-configuration and logic restoration. Mapped onto the requirement path, they correspond to requirement collection and requirement implementation. In the D2C system we call them logic recognition and logic expression, and they plug into the link at the following two positions.

Step 2: How to identify logic?

In the D2C system, logic recognition is a pre-configured process: different plans can be configured for different logic points, and when D2C runs, a logic point is considered present if its configuration is matched.

In the Taobao marketing scenario where D2C first landed, we tried to analyze the biggest problems faced by intelligent logic generation.

Thought 1: “How to identify the logical points contained in the module?”


A glance at the layout of the module above shows that it is a one-row, two-item module that needs loop logic.

The “scan” action can be identified by analyzing the structure of the page given by the layout algorithm.
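A minimal sketch of how such a "scan" might detect a loop candidate: sibling subtrees that share the same structural signature suggest a one-row-N loop. The function names (`signature`, `isLoopCandidate`) and node shape are illustrative assumptions, not the actual layout-algorithm API.

```javascript
// Compute a structural signature for a layout node: its type plus the
// signatures of its children, ignoring text content and exact styles.
function signature(node) {
  const children = (node.children || []).map(signature).join(",");
  return `${node.type}(${children})`;
}

// A container is a loop candidate when it has >= 2 children and all
// children share one structural signature.
function isLoopCandidate(container) {
  const kids = container.children || [];
  if (kids.length < 2) return false;
  const sig = signature(kids[0]);
  return kids.every((k) => signature(k) === sig);
}

// Example: a row holding two structurally identical "item" cards.
const row = {
  type: "div",
  children: [
    { type: "div", children: [{ type: "image" }, { type: "text" }] },
    { type: "div", children: [{ type: "image" }, { type: "text" }] },
  ],
};
console.log(isLoopCandidate(row)); // true
```

Real loop detection also has to tolerate small differences (an item with a badge, say), which a pure signature match does not capture.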

Analyzing each piece of text, "cross-shop full reduction, miss it and wait till next year" is interest-point copy.

A human "analyzing the text" is really extracting features from it, and different features are usually extracted for different purposes. Interest-point words such as "cross-shop" and "full reduction" appear frequently in this copy, and our classifier based on the AliWS word-segmentation algorithm and naive Bayes multi-classification can distinguish them effectively.

"1000 units sold" looks like it needs to be bound to the monthly-sales field.

This, too, is text analysis, and it can be distinguished accurately with regular expressions.
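A minimal sketch of that regex discrimination. The patterns and the returned field names (`monthSoldCount`, `itemActPrice`) are illustrative assumptions, not imgcook's actual rules.

```javascript
// Guess a bindable field from a text node's content using regular expressions.
function guessField(text) {
  // sales copy such as "1000 units sold"
  if (/^[\d,]+\s*units sold$/i.test(text)) {
    return "monthSoldCount"; // hypothetical field name
  }
  // price copy such as "¥99.00"
  if (/^[¥￥$]\s*[\d,]+(\.\d+)?$/.test(text)) {
    return "itemActPrice"; // hypothetical field name
  }
  return null; // no bindable field recognized
}

console.log(guessField("1000 units sold")); // "monthSoldCount"
console.log(guessField("¥99.00")); // "itemActPrice"
```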

The coupon at the lower left carries some distinctive visual features; a block like this is usually abstracted into a business component.

That coupon is a business component whose style, text, and node count can be turned into feature data and analyzed with traditional machine learning.

The product picture is a white-background image, and with high probability should be bound to the white-background-image field.

The product image is a class of image with obvious characteristics and can easily be identified with an image-classification algorithm.

The "Buy now" button may need a click event that jumps to the detail page, or it may not jump at all, delegating the jump to the module's outer layer.

This logic can only be recognized by domain experience.

Thought 2: “Can we cover our scene?”

There are many similar analyses, which we will not go through one by one. To size the problem, we urgently needed to know how these logic points are distributed in real scenarios, so we analyzed the modules of Taobao's big-promotion marketing and obtained the following histogram of logic-point distribution:

Data-binding logic accounts for more than 50%, followed by buried-point (tracking) logic, loop logic, business-processing function logic, component logic, and so on. Overall, module development logic in the Taobao marketing scenario has rules and norms to follow; it is enumerable, reusable, patterned, and systematic. After years of Double 11 verification, we can basically confirm that the current specification meets business needs and that no massive influx of unknown requirements will arrive in the short term.

Thought 3: “What are our means of recognition?”

The D2C system itself already has many low-level intelligent capabilities which, assisted by expert experience, can comprehensively retrieve and identify the logic above. Examples:

Random forest algorithm: a random forest is a classifier containing multiple decision trees; its output class is the mode of the classes output by the individual trees. It works for both regression and classification tasks, and makes it easy to see the relative importance of input features.

XGBoost (eXtreme Gradient Boosting): a gradient-boosting algorithm that performs remarkably well in machine-learning competitions. It is a massively parallel boosted-tree tool and among the fastest and best open-source boosted-tree toolkits available.

Text NLP classification: text analysis based on the AliWS word-segmentation algorithm and the naive Bayes multi-classifier provided by Alibaba's PAI platform. AliWS's main features include ambiguity segmentation, multi-granularity segmentation, named-entity recognition, part-of-speech tagging, semantic tagging, user-maintained dictionaries, and user intervention to correct segmentation errors.

Image classification: classifies the images in the business scenario using a CNN, with transfer learning based on ResNet. It is also deployed on the PAI platform, with exactly the same productized link as text NLP classification.

Semantic service: D2C's customized class-name semantic service for mobile scenarios. Internally an expert system drives a strategy tree; the concrete discrimination uses AliNLP services such as semantic-entity extraction, lexical analysis, and translation, while a self-built iconfont service identifies small icons.

Layout algorithm: based on its own row-scan strategy, D2C developed a rule algorithm converting absolute positioning to Flex layout, while providing loop detection, local grouping, and other key functions. And so on.

In addition, some business-domain-specific logic has no identifiable features; that part is handled through human intervention.

In the end, we decided to combine multiple recognition methods, discriminating logic from layout vision, text semantics, image features, and empirical rules, and attaching the extra information that logic expression will need. These programs that identify module logic are named logic recognizers.

Each recognizer gives results within its own field of expertise, and their combined retrieval over the visual draft approximates human reasoning.
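A minimal sketch of that combination: run every configured recognizer over the restored layout and pool whatever fires into one ordered list of logic points. The recognizer shape (`id` plus a `recognize` function returning a result or `null`) is an illustrative assumption, not imgcook's actual interface.

```javascript
// Run each recognizer; collect non-null results, tagged with the logic id.
function runRecognizers(layoutJson, recognizers, options) {
  const results = [];
  for (const r of recognizers) {
    const res = r.recognize(layoutJson, options); // each expert inspects the draft
    if (res) results.push({ logicId: r.id, ...res });
  }
  // sort by `order` so downstream expression is deterministic
  return results.sort((a, b) => (a.order || 0) - (b.order || 0));
}

// Hypothetical recognizers for illustration only.
const textRecognizer = {
  id: "nlp-interest-point",
  recognize: (json) =>
    json.texts.some((t) => t.includes("full reduction"))
      ? { order: 2, element: "Text_1", meta: { kind: "interestPoint" } }
      : null,
};
const defaultRecognizer = {
  id: "default-slice",
  recognize: () => ({ order: 1, element: null, meta: {} }), // always fires
};

const out = runRecognizers(
  { texts: ["cross-shop full reduction"] },
  [textRecognizer, defaultRecognizer],
  {}
);
console.log(out.map((r) => r.logicId)); // ["default-slice", "nlp-interest-point"]
```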

Step 3: How to express logic?

Logical expression depends on two elements: the expression form of logic and the concrete content of logic.

Thought 1: “What form does logic take?”

To pin down the form logic takes, D2C needs a place that can carry the intelligent results. Concretely, in implementation terms, D2C needs a set of protocols plus an intelligent intervention platform that can carry those results.

imgcook (the application implementing D2C capability), drawing on React, Vue, and other excellent front-end frameworks and referencing various comparable products, implements a simplified data-driven (UI = render(Data)) lifecycle, and expects users to develop and write components against this specification.

In 2019, Taobao marketing upgraded its marketing-module specification to the hooks-based Rax 1.0 system, and imgcook implemented code-generation services for mobile and PC on the new component specification. This gives developers a lifecycle usable under hooks; developers need not care whether the module uses hooks or some other technical solution, only follow the module development style specified by imgcook.

Having constrained the user's coding specification, imgcook provides visual operations to implement the code. The imgcook IDE can already produce most static modules visually; below is the panel for configuring module logic visually.

Take typical requirements in Taobao marketing as an example. In imgcook's visual editor we usually implement them like this (assume the current module is a single-loop, one-row, two-item module, and the iteration object is item):

  • Bind data to the price node: click the node's Properties -> Data -> add a data binding whose value is item.itemActPrice.
  • Truncate the module when a row is incomplete: go to Quick Settings -> Code Writing -> add a method -> write truncation code that trims itemsData to a length divisible by 2.
  • Buried-point logic: for a common buried point, switch the node's type to a buried-point link, then add data bindings for data-track-type and exp-type; besides the node-type conversion and data binding, a real-time-exposure node must be added to the loop node with its exposure type set to real-time exposure.
  • Load more: sliding load-more needs an exposure event on the module plus load-more code in the event handler; click load-more needs a click event on the module plus load-more code in the event handler.
  • …

Abstracting and sorting the operation steps of each logic yields the following logic implementation steps:


Each column of the imgcook IDE table is an abstraction of a kind, so most logic, as you can see, can be implemented through configuration. Since marketing modules rarely contain large-scale business logic, we decided to generate function operators by reuse rather than speculation, controlling flow only through execution order and the presence or absence of a return value. More fine-grained arithmetic, logic operators, and flow-control statements were left aside for now.

Thought 2: “What content is logic filled with?”

The intermediary between visualization and real code is imgcook's protocol, which we now need to generate automatically.

The core of automatic protocol generation is content: what matters is not what operation a node undergoes, but what the node is bound to and what goes into the generated code. To that end, the node information, global variables, manual configuration, and other content involved in the current process are collected by the logic recognizer when it runs and injected at the runtime of logic expression. The data necessary for the current logic to express itself clearly and accurately is named the logical context in the D2C business logic layer.

Here are some real logical contexts:

Logic 1: bind the promotion price to a node

// "Logical context"
const recognizeResult = {
	element: "Text_0", // which node needs the data binding
	meta: {
		expression: "item.itemActPrice" // what the binding expression is
	}
};

// Finally translated to the imgcook private protocol.
// Data binding is managed by the outermost node.
const layoutJson = {
	id: "root", // the root node
	children: [...], // node tree 🌲
	dataBindStore: [
		{
			belongId: "Text_0", // {{element}}
			value: {
				isStatic: false,
				expression: "item.itemActPrice" // {{meta.expression}}
			}
		}
	],
	...
};

Logic 2: truncate the render array when a row is incomplete. The function content of this logic is a reusable XTPL template that can directly access recognizeResult as its render context.

// "Logical context": scope is one of the logic layer's core page-layout
// analysis results, accessible to every logical context
const recognizeResult = {
	"element": null, // this logic mounts on no node
	"meta": {}, // this logic needs no extra arguments
	"scope": {
		"gSize": 2, // the module shows 2 items per row; incomplete rows are removed
		"loop": "items", // the module loops over the array property data.items
		"loopArgs": "item" // the object inside the loop is called item
	}
};

// Finally translated to a function in the imgcook private protocol
const layoutJson = {
	id: "root", // the root node
	children: [...], // node tree 🌲
	scriptStore: [
		{
			content: `...
				{{~#if(ctx.userLogicConfig.sliceFloor)}}
				// truncation handling
				const count = Math.floor(data.{{scope.loop}}.length / {{scope.gSize}}) * {{scope.gSize}};
				data.{{scope.loop}} = data.{{scope.loop}}.slice(0, count);
				{{~/if}}
			...`,
			name: "getModuleRows", // the truncation logic is written into this function
			type: "custom"
		}
	],
	...
};

The render context is used to fill in the protocols that will finally be written, achieving an accurate expression of the protocol.
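A minimal sketch of that "filling" step. The real implementation renders XTemplate; here a naive `{{path}}` substitution stands in, and the helper name `fillTemplate` is an illustrative assumption.

```javascript
// Replace {{dotted.path}} placeholders in a template with values from the
// logical context.
function fillTemplate(tpl, ctx) {
  return tpl.replace(/\{\{([\w.]+)\}\}/g, (_, path) =>
    path.split(".").reduce((obj, key) => (obj == null ? obj : obj[key]), ctx)
  );
}

// Logical context produced by the recognizer (see Logic 1 above).
const recognizeResult = {
  element: "Text_0",
  meta: { expression: "item.itemActPrice" },
};

// Fill the protocol entry from the context.
const entry = {
  belongId: fillTemplate("{{element}}", recognizeResult),
  value: {
    isStatic: false,
    expression: fillTemplate("{{meta.expression}}", recognizeResult),
  },
};
console.log(entry.belongId, entry.value.expression); // Text_0 item.itemActPrice
```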

Through this lengthy analysis, our proposition of intelligently generating business logic has been refined to two points: using intelligent abilities to recognize logic and produce a context, and automatically expressing logic based on that context. Once both were confirmed feasible, we started the formal design.

Intelligent logic layer design

Solution overview

Based on the full derivation of the proposition, we now have a complete outline of the business logic layer's capabilities.

Logic recognition + logic expression = logic intent. A logic recognizer decides how to perform recognition according to the actual scenario; at this stage it receives the visual draft and manual rules and outputs recognition results. By analogy with human thought, D2C has now "confirmed the module's requirements."

The logic expresser is a preset version of imgcook's visual operations: it translates the recognition results and displays the logic's effect directly on the final module. By analogy with human thought, D2C has now "implemented the requirement."


Functional division

Based on the derivation and responsibility definitions above, we divide the business logic layer into logic recognition, logic expression, the logic core, and other modules.

  • Logic recognition provides unified access to the intelligent capabilities, ensuring that business logic is accurately recognized by the designated recognizer and produces a unified logical context.
  • Logic expression is responsible for configuring the logic's visual protocol; it can apply the logical context automatically, so that once logic is recognized it is displayed on the module automatically.
  • The logic core provides serial integration and extension of the two, such as timing control, layout-pattern support, fallback VO (View Object) generation, logical-context injection, and manual intervention, controlling the whole flow from static design draft to logic globally.
  • Libs provides basic capabilities, some called by the core and some exposed as functions callable from the logical context.


As mentioned above, D2C was piloted in the marketing domain of Alibaba's internal Taobao system. To ease access for other business scenarios in the group, we introduced the team-scoped concept of a logical scenario: a set of logic solving one business domain. As long as the business logic of a domain is enumerable, standardized, and customizable, you can build your own logical scenario, making module development easier for other students on your team.

Logical core function disassembly

Layout pattern recognition

D2C supports recognizing one-row-N modules, horizontal and vertical loops, and loops nested to any depth, covering most static module layouts in the marketing domain. Note that D2C places high demands on how standard the design draft is, and the layout-restore accuracy required of modules in Double 11 activities is 100%, which means intelligence must be backstopped by rules. For this purpose we upgraded the D2C design draft protocol: you can organize the draft with marks placed on it, guaranteeing that the restored layout structure is accurate and loop detection works.


View-model deduction

Intelligence presupposes standardization. Views recognized by D2C need to map onto a data model (schema) to express logic properly. D2C retrieves the schema synchronously during module layout recognition to keep loop levels and per-level fields in correspondence. But developers cannot currently guarantee that a module has a formed schema, so imgcook implements view-model deduction, automatically predicting the model when no schema exists and ensuring D2C remains a complete system starting from nothing but the visual draft.

Deduction is the process of building a data-model tree. While processing the layout structure, we treat each loop as a branch and each node with a data binding as a leaf of that branch; the concrete content of a leaf comes from aggregated data of previous modules.
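The branch-and-leaf walk above can be sketched as follows. The node shape (`loop`, `binding`, `sampleText`) and the function name are illustrative assumptions about what the layout recognition hands over, not imgcook's real data structures.

```javascript
// Walk the layout tree: loop containers become array branches, data-bound
// nodes become leaf fields whose sample value seeds the schema.
function deduceSchema(node, schema = {}) {
  if (node.loop) {
    // a loop container becomes an array property holding the item model
    const itemSchema = {};
    (node.children || []).forEach((c) => deduceSchema(c, itemSchema));
    schema[node.loop] = [itemSchema];
    return schema;
  }
  if (node.binding) {
    schema[node.binding] = node.sampleText || ""; // leaf field with sample value
  }
  (node.children || []).forEach((c) => deduceSchema(c, schema));
  return schema;
}

// Example: a one-row-two-items loop over data.items.
const layout = {
  loop: "items",
  children: [
    { binding: "itemTitle", sampleText: "cross-shop full reduction" },
    { binding: "itemActPrice", sampleText: "99.00" },
  ],
};
console.log(JSON.stringify(deduceSchema(layout)));
// {"items":[{"itemTitle":"cross-shop full reduction","itemActPrice":"99.00"}]}
```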

Render context injection

Function recognizers and view expressers are the two places in the logic layer that execute functions in a Node VM; the interface definitions show the connection between them. Function recognizer:

// Function recognizer input parameters
export interface LayoutJson {
	children: LayoutJson[];
	style: any;
}
export interface LayoutResult {
	ctx: Ctx; // development context, omitted
	scope: Scope; // global variables, omitted
	userLogicConfig: UserLogicConfig; // user input, omitted
}
export interface Options {
	utils: Utils; // utility methods, omitted
	_: any; // lodash, see https://www.lodashjs.com/docs/latest
}
// Function recognizer output (emitted only when there is a recognition result)
export interface RecognizeResult {
	order: number; // when this logic needs ordering, results are sorted ascending by order
	element: number; // id of the node the logic mounts on
	meta?: any; // other recognition results
}
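A sketch of a custom function recognizer matching the interfaces above: it walks the node tree and flags a "Buy now" button as the carrier of a jump-to-detail event. The output fields follow the `RecognizeResult` interface; the traversal and matching rule are illustrative assumptions.

```javascript
// Scan the layout for a "Buy now" text node and emit a recognition result.
function recognize(layoutJson, layoutResult, options) {
  let found = null;
  (function walk(node) {
    if (found) return;
    if (node.type === "text" && /buy now/i.test(node.text || "")) {
      found = node;
      return;
    }
    (node.children || []).forEach(walk);
  })(layoutJson);
  if (!found) return null; // output only when there is a recognition result
  return {
    order: 10, // hypothetical ordering
    element: found.id, // id of the node the logic mounts on
    meta: { event: "onClick", action: "jumpToDetail" }, // other recognition results
  };
}

const json = {
  id: 0,
  type: "div",
  children: [
    { id: 1, type: "image" },
    { id: 3, type: "text", text: "Buy Now" },
  ],
};
console.log(recognize(json, {}, {}).element); // 3
```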

View expresser

// View expresser input parameter 1
export interface LayoutJson {
	// same as the recognizer's input
}
// View expresser input parameter 2
export interface RecognizeResult {
	// the recognizer's output
}
// View expresser input parameter 3
export interface Options {
	// same as the recognizer's input parameter 3
}
// View expresser output parameter
export interface ViewResult {
	layout: LayoutJson; // the processed layout JSON
}

Function operator timing control

In real scenarios, relatively little of the logic generated through D2C needs hand coding, so we use ordering plus the presence of a return value for simple data-flow control, and flow control happens only inside the handler of a lifecycle event or node event. For example, suppose three pieces of logic must run in created: "insert an image into the loop array," "truncate the array by row," and "send an exposure buried point." From each operator's configured order and whether it returns a value, we obtain this created function:

function created() {
	let data = this.get();
	// created flow starts here
	data = this.addImage(data); // order 1, has return value
	data = this.sliceArray(data); // order 2, has return value
	this.expTrack(data); // order 3, no return value

	return data;
}
export default created;
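A minimal sketch of how such a flow could be assembled from function operators using only `order` and `hasReturn`, per the timing-control rule above. The registry shape and operator names are illustrative assumptions.

```javascript
// Compose operators into one flow function: sort by order; only operators
// flagged hasReturn rewrite the data stream, the rest are fire-and-forget.
function composeFlow(operators) {
  const sorted = [...operators].sort((a, b) => a.order - b.order);
  return function (initial) {
    let data = initial;
    for (const op of sorted) {
      const out = op.fn(data);
      if (op.hasReturn) data = out; // only returning operators rewrite data
    }
    return data;
  };
}

const created = composeFlow([
  // order 2: truncate to a whole number of 2-item rows
  { order: 2, hasReturn: true, fn: (d) => d.slice(0, Math.floor(d.length / 2) * 2) },
  // order 1: stuff an extra image into the loop array
  { order: 1, hasReturn: true, fn: (d) => d.concat(["extraImage"]) },
  // order 3: e.g. send an exposure buried point, no return value
  { order: 3, hasReturn: false, fn: (d) => d.length },
]);

console.log(created(["a", "b"])); // ["a", "b"]: image added, then sliced back to even length
```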

Human intervention

We also know that much logic cannot be inferred from the design draft alone. To handle such featureless logic we add human intervention as coordination. The first coordination method is parameterization: define a custom form for user input, which the link can access as layoutResult.userLogicConfig. The second is logic filtering: each configured logic has a developer-intervention option; when it is checked, module developers have the right to disable that logic. These controls appear in the query popup during module restoration; if the popup's content is unclear, contact the owner of the current logical scenario.


Logical recognizer at a glance

Logic recognizers are optional configuration. For a given logic, the following diagram guides the user to the right recognizer:


1. UI material identifier

  • Features:
    • In our data set, the XGBoost algorithm outperformed random forests at identifying the visually featured, logic-bearing nodes in a module
    • Suitable when a sub-component's visual features are obvious and the logic component has some complexity
    • A material recognizer is essentially a classifier that tells the administrator a node is a logic carrier; it tells us nothing more
    • When the UI material recognizer misses expectations, the draft-injection protocol can be used to override this logic

2. NLP text recognizer

  • Features:
    • The text-classification model is implemented with the AliWS word-segmentation algorithm and naive Bayes multi-classification; training on the text samples you provide helps classify the text in your problem domain effectively
    • Recommended when you have a large volume of text
    • Built-in recognition results cover common classes for which building an NLP training link is inconvenient, e.g. price, original price, product image, white-background image

3. Custom function recognizer

  • Features:
    • Used when the target logic can be identified by analyzing style, structure, text, and other information in JavaScript code
    • A good way to build a logic library without training samples for the UI material recognizer or the NLP text recognizer
    • A function recognizer accepts user input to make decisions. For example, the Tmall business scenario has two completely different buried-point expressions; we can write both buried-point logics and let a recognition function choose between them. Function recognizers can also grab component-tree information to supply a more powerful logical context to the expression

4. Default recognizer:

  • Features:
    • Default logic applies to all modules; its recognition result is always true
    • For relatively general logic with no visual-level features
    • Mostly used with "developer intervention," leaving the developer to decide whether to apply it
    • The default recognizer cannot read component-tree information; if you need that, use the function recognizer

5. Regular recognizer

A regular-expression version of the NLP recognizer; its capability is a subset of NLP recognition.

A glance at the logical expression

A logic expression is the combined result of abstracted sub-expressions. We break the implementation of a logic down into the finest-grained visual operations; by analyzing the concrete implementation we configure expressions one by one in the back end to assemble the logic. When a recognizer tells the expresser that the current logic is active, the code implementing that logic is produced automatically.


1. View child expression

    • Handles view-level changes; it is extremely powerful and could in theory cover the abilities of all the other expressers. Since imgcook wants view operations to become visually configurable too, the view expresser should focus only on view-level changes, with clear responsibilities, and not take over the other expressers' duties
    • This expresser receives a view-handling function executed in the VM

2. Data binding subexpression

    • Adds a standard data binding, with automatic de-duplication. Most of the time you express things dynamically using attributes from the logical context
    • This expresser receives a data-binding configuration

3. Event binding child expression

    • Adds an event binding with a default event handler; each event handler's flow can be populated with function operators
    • This expresser receives an event-binding configuration

4. Function operator subexpression

    • Constructs a custom method and decides which handler calls it. In general, function operators can only be called in the flow of event handlers and lifecycle functions, with flow controlled by ordering and whether an object is returned
    • This expresser receives a function configuration whose content can be written in XTemplate syntax

5. Dependency management sub-expression

    • Used when the preceding sub-expressions need to introduce third-party dependencies
    • This expresser receives a dependency injection
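A minimal sketch of how a logic bundles sub-expressions and applies them to the layout JSON once its recognizer fires. Only the data-binding and dependency sub-expressions are modeled, and all shapes (including the package name) are illustrative assumptions.

```javascript
// Apply each sub-expression of a logic to the protocol JSON.
function applyLogic(layoutJson, logic, recognizeResult) {
  for (const expr of logic.expressions) {
    if (expr.type === "dataBind") {
      layoutJson.dataBindStore = layoutJson.dataBindStore || [];
      const belongId = recognizeResult.element;
      const expression = recognizeResult.meta.expression;
      // de-duplicate on (belongId, expression), as the sub-expresser promises
      const dup = layoutJson.dataBindStore.some(
        (b) => b.belongId === belongId && b.value.expression === expression
      );
      if (!dup) {
        layoutJson.dataBindStore.push({
          belongId,
          value: { isStatic: false, expression },
        });
      }
    } else if (expr.type === "dependency") {
      layoutJson.dependencies = layoutJson.dependencies || [];
      if (!layoutJson.dependencies.includes(expr.pkg)) layoutJson.dependencies.push(expr.pkg);
    }
  }
  return layoutJson;
}

const json = { id: "root", children: [] };
applyLogic(
  json,
  { expressions: [{ type: "dataBind" }, { type: "dependency", pkg: "some-tracking-lib" }] },
  { element: "Text_0", meta: { expression: "item.itemActPrice" } }
);
console.log(json.dataBindStore.length, json.dependencies); // 1 ["some-tracking-lib"]
```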

Landing results

Double 11 Logical Scenario Construction

During Double 11 module development, imgcook built a dedicated business logic scenario for Taobao marketing activities on top of the intelligent logic link described in this article, with built-in default logic such as margin settings based on the Taobao marketing visual specification, buried points based on the Taobao marketing tracking specification, and module render splitting based on the Rax hooks solution. In addition, a large amount of data-binding and component-recognition logic was configured against the intelligent abilities of recognizers such as text NLP and the UI material classification algorithm. When the visual draft contains such features, the logic code is applied to the result automatically for the developer.





Double 11 logic restoration metrics

According to statistics, about 78.94% of the new Double 11 modules in 2019 were generated through the imgcook business logic link, and 79.34% of the generated module code was retained in the online code after restoration. On average, about 14 business logics were matched per module; in other words, every new module developed on the D2C link saved its developer a dozen or so pieces of logic. The ratio is even higher for static-UI modules with little logic: such modules can be restored directly to a ready-for-testing state, which greatly reduces the developers' workload.


Future

Current shortcomings

The Double 11 landing also exposed many problems, such as an unfriendly process: the Taobao module development process runs requirement -> design draft -> module development -> front-end/back-end joint debugging -> module launch. As a new concept that turns a design draft directly into a publishable module by endowing it with logic, the system had no development time reserved for it in this process. At present we make up for this by getting people involved before development starts. In the future, we will bring the production of requirements and design drafts into the scope of the business logic layer and provide a one-stop, closed-loop R&D experience for the modules we can support: the designer is responsible for the UI, the PD is responsible for attaching requirements to the UI, and the developer is responsible for maintaining reusable logic behind the scenes. Combined with the team's future direction, D2C's practical experience in the business logic field will effectively support the whole system.

Future plans

The D2C intelligent logic system has validated its ideas and taken a solid first step. In the future, logic intelligence system 2.0 will focus on the following directions:

Product Form upgrade

As stated at the beginning of this article, design draft + PRD equals requirements. D2C is a technical system based on the design draft; in the future, we will integrate PRD structuring capabilities to replace imgcook's current manual-intervention steps and achieve zero development across the full link.

Metric-driven optimization

Based on statistics from the Double 11 modules, we formed the concept of code availability, i.e. the proportion of imgcook-generated code that is retained in the live code. In the future, we will integrate more logic recognizer algorithms, provide more abstract and easier-to-use logic expressers, and organize them around a business-logic-layer kernel. Imgcook will be developed in a metrics-driven way that lets intelligently generated code collide with real business, ultimately moving toward providing superior intelligent services.
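As a rough illustration of the metric (the article does not specify the exact counting granularity, so counting by lines here is an assumption), code availability can be computed as the share of generated code that survives into the online code:

```typescript
// Hypothetical sketch of the "code availability" metric: the fraction of
// imgcook-generated lines that remain in the final online code.
// Counting by lines is an assumption; imgcook's real accounting may differ.
function codeAvailability(
  retainedGeneratedLines: number,
  totalGeneratedLines: number
): number {
  // No generated code means there is nothing to measure.
  if (totalGeneratedLines === 0) return 0;
  return retainedGeneratedLines / totalGeneratedLines;
}

// e.g. 7934 of 10000 generated lines kept online -> 0.7934,
// matching the 79.34% figure cited in this article.
const availability = codeAvailability(7934, 10000);
```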

Imgcook Studio

We will upgrade the imgcook editor kernel. Based on this kernel, we will derive platforms for marketing, community, mini-program, and other businesses, and extend logic-scenario construction to the group level to achieve customization for more business scenarios, finally delivering front-end intelligent capabilities in a stable way.

Summary

To put it simply, we want stronger intelligence, broader service scenarios, and higher efficiency, so that the whole system truly becomes an intelligent service that derives all module logic from the design-draft view. From 79.34% of front-end code being generated, to front-end "zero development", to requirement "zero development", and finally to "zero investment" for the whole requirement.


More recommendations:

  • Experience smart code generation with one click
  • Imgcook Zhihu column brings you cutting-edge information on front-end intelligence
  • Imgcook Nuggets Column Design draft Intelligently generates code
  • DingTalk group for communication


Welcome to join us: [experienced hire/campus hire] [Hangzhou] [Alibaba Taobao Technology – Channels and D2C Intelligence] front-end recruitment: if not now, when; if not me, who!