A thought experiment


As one of the four technical directions of the Front-end Committee of the Alibaba Economy, front-end intelligence inevitably raises questions: what can the front end do in combination with machine learning, how is it done, and will it have a major impact on the front end in the future? This article takes the scenario of “automatically generating code from design drafts” as an example to describe our thinking and practice in detail.

Combining front-end intelligence with a cloud IDE can further improve the development experience of front-end developers: through the services and interfaces the cloud IDE exposes, the ability to generate code from design drafts can be extended so that a design draft is turned into a deployable application or page at any time. At the end of this article, I will show how to use the imgCook plug-in on the Alibaba Cloud development platform to generate an application from a design draft with one click and start your journey of intelligent development.

Background of front-end intelligence

Machine learning is in full swing in the industry, and “AI is the future” has frequently appeared in the media. In his book AI: Future, Kai-fu Lee points out that nearly 50 percent of human jobs will be replaced by artificial intelligence within 15 years, especially simple, repetitive ones, and that white-collar jobs are easier to replace than blue-collar ones: blue-collar jobs may need breakthroughs in robotics and related hardware and software before they can be replaced, while white-collar jobs generally need only breakthroughs in software. So will the “white-collar” jobs of front-end development be replaced, and if so, when and to what extent?

Back in 2010, software was already swallowing almost every industry, leading to the boom of recent years. And in 2019, the software development industry itself is being swallowed by AI. Look around: in the DBA world there is Question-to-SQL, which generates SQL statements for a domain simply from natural-language questions; TabNine, a machine-learning-based source code analysis tool, assists code completion; the designer industry has Luban, the intelligent banner designer; and intelligence is flourishing in the testing field as well. What about the front end?

Here we have to mention a familiar scenario: Design2Code (D2C for short). At the current stage, the front-end intelligence direction of the Alibaba Economy Front-end Committee focuses on how AI can help the front end improve efficiency and eliminate simple, repetitive work, so that front-end engineers can focus on more challenging tasks!

What is imgCook?

imgCook takes Sketch files, PSD files, static images and other forms of input and, through intelligent techniques, generates maintainable front-end code with one click, including view code, data field bindings, component code, and part of the business logic code.

It aims, through intelligent means and under only light constraints on the design draft, to achieve a high degree of visual restoration, release front-end productivity, help front-end developers and designers collaborate efficiently, and let engineers focus on more challenging work!

The goal of generating code from design drafts is to let AI help the front end improve efficiency and eliminate simple, repetitive work. Let’s first analyze the work of a “conventional” front-end developer, especially one facing consumer-side (C-side) business; the general workflow is as follows:

The development workload of such a “conventional” front end is mainly concentrated in view code, logic code, and data integration, which we break down and analyze piece by piece below.

1. Problem definition

View code development generally means writing HTML and CSS based on a visual draft. How can this be made more efficient? Faced with the repetitive work of UI view development, the natural solution is packaging and reusing materials through componentization and modularization. On top of this solution, various UI libraries have accumulated, and even higher-level production methods such as visual assembly, but material reuse cannot cover every scenario: personalized businesses and personalized views keep popping up everywhere. Is it possible to directly generate usable HTML and CSS code?

This is a proposition the industry has kept trying. Basic layer information can be exported through a design tool’s developer plug-in. However, the core difficulties are the high requirements placed on the design draft and the poor maintainability of the generated code.

1) High requirements for design draft

High requirements on design drafts raise costs for designers, which effectively transfers front-end workload to designers and makes the approach very hard to promote.

One possible solution is to use computer vision (CV) to export layer information and remove the constraints on the design draft. Of course, the ideal input is a plain image, which places no requirements on the designer at all; that is our dream solution. We have been trying to separate suitable layers from static images, but this is not yet reliable enough for production (problems such as small-object recognition accuracy and extraction against complex backgrounds remain to be solved). After all, the meta-information contained in a design draft is far more accurate than the meta-information extracted from a single image.

2) Code maintainability

Generated code structures typically face maintainability challenges:

  • Reasonable layout and nesting: converting absolute positioning to relative positioning, deleting redundant nodes, reasonable grouping, loop detection, and so on;
  • Element adaptation: element expansibility, element alignment, fault tolerance for maximum width and height;
  • Semantic naming: multi-level semantic class names;
  • CSS style expression: background colors, rounded corners, lines, etc. can be analyzed and extracted with CV, expressing styles in CSS as far as possible instead of falling back to images.
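To make the first point concrete, here is a minimal sketch of converting absolutely positioned layers into a relative Flex layout; the `Layer`/`FlexNode` types and `toFlexContainer` function are illustrative assumptions, not imgCook’s real internals.

```typescript
interface Layer {
  name: string;
  x: number; y: number; width: number; height: number;
}

interface FlexNode {
  direction: "row" | "column";
  children: { name: string; marginLeft?: number; marginTop?: number }[];
}

// Decide flex direction by whether siblings overlap horizontally,
// then express the absolute gaps between neighbours as relative margins.
function toFlexContainer(layers: Layer[]): FlexNode {
  const sorted = [...layers].sort((a, b) => a.x - b.x || a.y - b.y);
  // If no sibling overlaps the previous one horizontally, treat as a row.
  const isRow = sorted.every(
    (l, i) => i === 0 || sorted[i - 1].x + sorted[i - 1].width <= l.x
  );
  if (isRow) {
    return {
      direction: "row",
      children: sorted.map((l, i) => ({
        name: l.name,
        marginLeft:
          i === 0 ? l.x : l.x - (sorted[i - 1].x + sorted[i - 1].width),
      })),
    };
  }
  const byY = [...layers].sort((a, b) => a.y - b.y);
  return {
    direction: "column",
    children: byY.map((l, i) => ({
      name: l.name,
      marginTop: i === 0 ? l.y : l.y - (byY[i - 1].y + byY[i - 1].height),
    })),
  };
}
```

A real layout engine must additionally handle overlapping layers, redundant nodes, and grouping, which is exactly where the rule set grows large.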

These problems are broken down into special-purpose solutions one by one; the list seems endless, but fortunately the major problems found so far have basically been solved.

Many people argue that implementing this capability seems to have nothing to do with intelligence at all, but rather with an expert rule system built around layout algorithms. True: at the present stage this part is better suited to a rule system, because for users the layout algorithm needs close to 100% availability. In addition, most of the problems here involve combinations of countless attribute values, where rules are currently more controllable. If a model must be used, the task could be defined as a multi-classification problem. In any case, applying a deep learning model requires a clear problem definition, and clearly defining the rules of the problem is an essential part of that process.

But where rules become cumbersome, models can assist. For example, in judging reasonable grouping (as shown in the figure below) and loop items, we found in practice that misjudgments still occur in various cases and the rules are hard to enumerate; these judgments require contextual semantic recognition between elements, which is exactly the kind of problem a model should solve.

2. Technical solutions

Based on the above overview and problem decomposition, we layered the capabilities of the existing D2C intelligent technology system into the following three parts:

  • Recognition capability: the ability to recognize the design draft, intelligently extracting multidimensional information from it: layers, basic components, business components, layout, semantics, data fields, business logic, and so on. Where intelligent recognition is inaccurate, it is supplemented and corrected by manual visual intervention. On the one hand, this provides low-cost intervention to generate highly available code; on the other hand, the corrected data become labeled samples that feed back into improving recognition accuracy.

  • Expression capability: mainly responsible for data output and project access: a) standardized structured description, Schema2Code, through DSL adaptation; b) project access through IDE plug-in capability;

  • Algorithm engineering: to better support the intelligent capabilities D2C requires, high-frequency capabilities are turned into services, mainly covering data generation/processing and model services:

    • Sample generation: processing sample data from various channels and generating samples;
    • Model services: providing model APIs, service encapsulation, and data backflow.

(Front-end intelligent D2C capability summary stratification)

Across the whole scheme, we use a single data protocol specification (D2C Schema) to connect the capabilities of each layer, ensuring that recognition results can be mapped to concrete fields, and that the expression stage, via the code engine and related schemes, generates the code correctly.
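To give a feel for such a protocol, below is an illustrative sketch of what one node of a D2C-style schema might look like; the field names (`componentType`, `smart`, `layerProtocol`, etc.) are assumptions for illustration, not the real D2C Schema specification.

```typescript
// Hypothetical shape of one node in a D2C-style protocol tree.
interface D2CNode {
  componentType: "div" | "image" | "text";
  props: { className?: string; src?: string };
  style: Record<string, string | number>;
  // Intelligence results (semantics, field bindings) ride along with the node.
  smart?: {
    layerProtocol?: { semantics?: string; fieldBinding?: string };
  };
  children?: D2CNode[];
}

const node: D2CNode = {
  componentType: "div",
  props: { className: "item" },
  style: { display: "flex", flexDirection: "row" },
  smart: { layerProtocol: { semantics: "product-card" } },
  children: [
    {
      componentType: "image",
      props: { src: "cover.png" },
      style: { width: 80 },
    },
    {
      componentType: "text",
      props: { className: "title" },
      style: { fontSize: 14 },
      smart: { layerProtocol: { fieldBinding: "item.title" } },
    },
  ],
};
```

A code engine would walk such a tree and emit a specific DSL (React, Vue, mini-program, ...) from it.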

1) Intelligent identification technology stratification

Within the whole D2C project, the core is the machine-intelligence recognition part of the recognition capability above, which decomposes further as follows:

  • Material recognition layer: recognizes materials in the image through image recognition capabilities (module recognition, atomic-module recognition, basic-component recognition, business-component recognition);

  • Layer processing layer: separates the layers in the design draft or image and sorts out layer meta-information, combined with the recognition results of the previous layer;

  • Layer reprocessing layer: further normalizes the layer data from the layer processing layer;

  • Layout algorithm layer: converts the 2D, absolutely positioned layer layout into relative positioning and Flex layout;

  • Semantic layer: expresses the semantics of layers on the code generation side through multi-dimensional layer features;

  • Field binding layer: maps static data in the layers to dynamic data fields of the data interface;

  • Business logic layer: generates the business logic code protocol for configured business logic through business logic recognition and expression;

  • Code engine layer: finally outputs the code protocol processed intelligently by each layer and, through the expression capability (the protocol-to-code engine), emits code in various DSLs.

(D2C identification capability technology stratification)
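The layering above can be sketched as a simple pipeline in which each stage consumes the protocol produced by the previous one; the stage bodies and field names here are stand-ins, not imgCook’s real implementation.

```typescript
type LayerNode = { [key: string]: any };
type Stage = (nodes: LayerNode[]) => LayerNode[];

// Each entry mirrors one layer of the recognition stack described above.
const pipeline: Stage[] = [
  (n) => n.map((l) => ({ ...l, material: "unknown" })),   // material recognition
  (n) => n.map((l) => ({ ...l, normalized: true })),      // layer (re)processing
  (n) => n.map((l) => ({ ...l, layout: "flex" })),        // layout algorithm
  (n) => n.map((l) => ({ ...l, semantic: "auto-name" })), // semantic layer
  (n) => n.map((l) => ({ ...l, fieldBinding: null })),    // field binding
];

function runPipeline(layers: LayerNode[]): LayerNode[] {
  // Fold the layer data through every stage in order.
  return pipeline.reduce((nodes, stage) => stage(nodes), layers);
}
```

The value of this shape is that each stage can be swapped between rules and models independently, as long as the protocol between stages stays stable.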

2) Technical pain points

Of course, incomplete recognition and low recognition accuracy have always been a perennial topic for D2C and a core technical pain point of imgCook. We try to analyze the causes from the following perspectives:

  • Inaccurate problem definition: problem definition is the primary factor affecting model recognition. Many people believe that the samples and the model are the major factors, but before those, the very definition of the problem may itself be flawed. We need to decide what our recognition model is suited to do and, if it is applicable, how to define its rules clearly;

  • Lack of high-quality data sets: each machine-intelligence recognition capability depends on different samples. How many front-end development scenarios can our samples cover? Is the data quality of each sample up to standard? Are the data standards unified? Is the feature engineering process unified? Are the samples ambiguous, and are they interoperable? These are the problems we currently face;

  • Low model recall and misjudgment: we tend to pile up many different kinds of samples from different scenarios as training data, expecting a single model to solve all recognition problems, but this often leads to low recall for some classes and misjudgment of ambiguous classes.

[Problem Definition]

Deep learning models in computer vision are currently best suited to classification and target detection problems. The premise for deciding whether a recognition problem should use a deep model is whether we ourselves can judge and understand that problem: if it is ambiguous and a person cannot judge it accurately, then the problem may not be suitable for a deep model at all.

If a problem is judged suitable for deep learning classification, the next step is to define all the classes, and the definitions need to be rigorous, mutually exclusive, and completely enumerable. For example, when working on the semantics of image class names, what are the common general names for images? The analysis process runs as follows:

  • Step 1: find as many relevant design drafts as you can, and enumerate the types of images in them one by one;

  • Step 2: summarize and classify the image types reasonably. This is the step most likely to cause disputes; poor or ambiguous definitions will cause most of the model accuracy problems;

  • Step 3: analyze the features of each image type, and check whether these features are typical and core, because they determine the inference and generalization ability of the subsequent model;

  • Step 4: check whether a data sample source exists for each image type. If not, can samples be produced automatically? If data samples are unavailable, the problem is not suited to a model; algorithm rules or other means can be tried first to see the effect.
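The steps above boil down to maintaining an exclusive, enumerable label set and rejecting ambiguous samples early. A tiny sketch of that check follows; the label names are illustrative guesses, not imgCook’s real taxonomy.

```typescript
// Hypothetical, mutually exclusive label set for image semantics.
const IMAGE_CLASSES = ["avatar", "logo", "goods-image", "background", "icon"] as const;
type ImageClass = (typeof IMAGE_CLASSES)[number];

interface LabeledSample {
  id: string;
  labels: ImageClass[];
}

// Return the ids of ambiguous samples: any sample carrying zero labels or
// more than one label violates the "exclusive" requirement and should be
// re-examined before training.
function findAmbiguousSamples(samples: LabeledSample[]): string[] {
  return samples.filter((s) => s.labels.length !== 1).map((s) => s.id);
}
```

Running such a check during sample collection surfaces definition disputes (Step 2) before they show up as model accuracy problems.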

In D2C projects, many problem definitions need to be very precise and need a scientific basis, which is itself difficult because there is no precedent to draw on. We can only try first with known experience and then correct after user testing reveals problems; it is a persistent pain point that requires continuous iteration.

[Sample quality]

For the sample problem, we need to establish standard specifications for the data sets, construct multi-dimensional data sets by scene, process and serve the collected data uniformly, and in the long run establish a standard data system.

In this system, we use a unified sample data storage format, provide a unified sample evaluation tool for different problem types (classification, target detection) to evaluate the quality of each data set, and for specific models adopt whatever feature engineering (normalization, edge amplification, etc.) works better. Samples for similar problems are also expected to circulate across subsequent models for comparison, to evaluate the accuracy and efficiency of different models.

(Data Sample Engineering System)
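As a sketch of what such a unified storage format might look like, the record below covers both classification and detection samples in one shape, plus a first-cut class-balance check; all field names are assumptions for illustration.

```typescript
// Hypothetical unified sample record for the data system described above.
interface Sample {
  dataset: string; // e.g. "marketing-c-side"
  task: "classification" | "detection";
  imagePath: string;
  label?: string; // used by classification samples
  boxes?: { x: number; y: number; w: number; h: number; label: string }[]; // detection
}

// Count samples per label: a cheap quality signal that exposes class
// imbalance before any model is trained on the data set.
function labelCounts(samples: Sample[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const s of samples) {
    const labels =
      s.task === "classification"
        ? [s.label ?? "unlabeled"]
        : (s.boxes ?? []).map((b) => b.label);
    for (const l of labels) counts[l] = (counts[l] ?? 0) + 1;
  }
  return counts;
}
```

A real evaluation tool would add per-set metrics (resolution, annotation agreement, duplicates), but even this counting step catches badly skewed data sets early.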

[Model]

To address low model recall and misjudgment, we try to converge the scenarios to improve accuracy. Samples from different scenes often share similar features, or local key feature points that affect some classes, causing misjudgment and low recall. We expect that converging scenes for model recognition will improve model accuracy.

We converged on the following three scenes: the wireless C-side marketing channel scene, the mini-program scene, and the PC back-office scene. The design patterns of these scenes each have their own characteristics, and designing a vertical recognition model for each scene can effectively improve the recognition accuracy of that single scene.

(D2C scenario)

3) Thinking and solution

In general, once a deep model is used, there is a practical problem: the model’s generalization ability is limited, and recognition accuracy will never satisfy users 100%. Besides continuously adding the batches of sample data that fail to be recognized, what else can we do?

In the D2C restoration link, for each recognition model we also follow a methodology: design a set of protocols or rules that can bypass the deep model’s recognition, ensuring that users can still accomplish their goals even when model recognition is inaccurate. The priority order is: manual protocol > rule strategy > machine learning > deep model. For example, to identify a loop body in a design draft:

  • at the beginning, the loop body can be marked manually by protocol in the design draft;
  • based on the contextual information of the layers, rules can be written to judge whether something is a loop body;
  • using machine learning on layer features, we can try to optimize the upstream rule strategy;
  • positive and negative samples of loop bodies can be generated for a deep model to learn.

Manual protocol annotations in the design draft take the highest priority, which also ensures that the subsequent flow is never blocked by recognition errors.
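The fallback chain above can be sketched as a chain of detectors where each one may abstain and the first confident verdict wins; the detector bodies (the `#loop#` marker, the sibling-count rule) are illustrative stand-ins, not imgCook’s real protocol or rules.

```typescript
// A detector returns true/false when confident, or null to abstain.
type Detector = (layers: { name: string }[]) => boolean | null;

// Highest priority: a designer-marked protocol in the layer name
// (the "#loop#" marker is a hypothetical convention for this sketch).
const manualProtocol: Detector = (ls) =>
  ls.some((l) => l.name.includes("#loop#")) ? true : null;

// Next: a crude stand-in rule, e.g. three or more similar siblings.
const ruleStrategy: Detector = (ls) => (ls.length >= 3 ? true : null);

// Lowest priority: the model service, stubbed out here as always abstaining.
const deepModel: Detector = () => null;

function detectLoop(layers: { name: string }[]): boolean {
  for (const d of [manualProtocol, ruleStrategy, deepModel]) {
    const verdict = d(layers);
    if (verdict !== null) return verdict;
  }
  return false; // default: not a loop, so the pipeline is never blocked
}
```

Because the manual protocol sits first, a user correction always overrides whatever the rules or the model would have said.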

imgCook × cloud development platform

Why integrate imgCook into a cloud development platform? By connecting with each other, imgCook and the cloud development platform can solve each other’s pain points and provide a new experience for both cloud developers and imgCook developers.

For imgCook developers, one pain point is managing design drafts and writing the front-end/back-end interaction logic. With the cloud development platform, developers no longer need to install Sketch locally; they can directly upload the design draft to the cloud development platform and start generating code, which is truly zero-cost, one-click generation.

The Midway Serverless framework is available directly on the cloud development platform. Through plug-in customization, developers can directly select the functions a page will use, saving them from writing the basic logic and request code for front-end/back-end interaction.

imgCook, in turn, lowers the threshold for using the cloud development platform. For example, a FaaS application engineer no longer needs to learn how to slice images and write CSS, but only needs to write the logic of the FaaS functions; the rest of the front-end work can be done on the platform through the imgCook plug-in, which is a great experience!
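As a flavor of what “only writing the function logic” means, here is a minimal sketch of a function a generated page could call. It follows a generic async-handler shape and is not the exact Midway Serverless API; the function name, context shape, and return payload are all assumptions (consult the Midway documentation for the real decorators and types).

```typescript
// Hypothetical context: only the pieces this sketch needs.
interface FnContext {
  query: Record<string, string>;
}

// Business logic only; the view code and request plumbing on the page side
// come from imgCook. (In a real project this would be exported and wired up
// through the framework's configuration.)
async function getGoodsList(
  ctx: FnContext
): Promise<{ data: { id: number; title: string }[] }> {
  const page = Number(ctx.query.page ?? "1");
  // Stubbed data source: a real function would query a service or database.
  return { data: [{ id: page * 100 + 1, title: "demo item" }] };
}
```

The engineer writes this handler, and the imgCook-generated page binds its list view to the returned `data` field.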

So, let’s look at how to quickly generate code from 0 to 1.

To create an app, open the cloud development platform and click imgCook:

Then, in the WebIDE of your application, you can open the imgCook cloud plug-in from the right-click menu.

Step 1: Select “Import” in the plug-in to open the interface for uploading the design draft:

Step 2: Edit in the imgCook visual editor:

Step 3: Generate code:

Step 4: Export code to the application:

Step 5: Launch the application:

```
$ npm install
$ npm run dev
---------------------------------------
Development server has successfully started
Please open >>> http://*****3000.xide.aliyun.com/
---------------------------------------
Thanks for using Midway Serverless, welcome to Star!
https://github.com/midwayjs/midway
---------------------------------------
```

After a successful start, open the page at the address printed on the command line. Isn’t it simple?

Conclusion

With imgCook on the cloud development platform, you can start developing intelligently today. Some tedious, time-consuming parts of daily work will be handed over to AI, so that engineers can focus on more interesting and valuable things. I also believe that in the near future, front-end engineers will work more happily and calmly with the help of AI!

Click the link to experience it immediately: yunqi.aliyun.com/2020/hol
