In the design-to-code generation process, we first resolve the design draft's layers into UI nodes, and then generate code through the layout algorithm.

Basic process of transferring design draft to code

As the first step toward front-end intelligence, the quality of the parsed UI data is critical to the subsequent code restoration, so we need a solution that guarantees general and valid UI nodes are produced during the parsing phase.

For generality and effectiveness, we divide the parsing process into two steps: layer abstraction and layer optimization.

Layer abstraction

To make the UI nodes universal and compatible with different design draft formats, such as PSD, Sketch, and XD, we abstract the layers of a design draft into three types of UI node: Shape, Text, and Image:

  1. Shape, layers that can be realized purely with styles, such as a rectangle with a solid-color border, a rounded rectangle, or a circle;

  2. Text, text layers that can be implemented with styles;

  3. Image, layers that cannot be implemented with styles, such as complex graphics, textured shapes, and bitmaps.

In addition to the layer type abstraction, other layer information is abstracted into primitive attributes, which fall into three categories:

  • Basic properties, such as name, ID, and layer type

  • Positional properties, such as width, height, and coordinates

  • Style properties, which describe layer colors, borders, and so on

UINode properties

The code for the UINode interface is as follows:

```typescript
/**
 * Layer class interface
 */
class UINode {
  // Layer ID
  id: string = '';
  // Layer type: Text, Shape, or Image
  type: string;
  // Layer name
  name: string = '';
  // Width
  width: number = 0;
  // Height
  height: number = 0;
  // Absolute X coordinate
  abX: number = 0;
  // Absolute Y coordinate
  abY: number = 0;
  // Layer styles
  styles: UIStyle = {};
}
```
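As a concrete illustration, here is a minimal sketch of what a parsed layer might look like as a UINode. The field values and the pared-down UIStyle shape are hypothetical, chosen only to show the structure:

```typescript
// Simplified stand-ins for the article's types (shapes assumed for illustration)
interface UIStyle {
  background?: string;
  border?: string;
}

interface UINode {
  id: string;
  type: 'Text' | 'Shape' | 'Image';
  name: string;
  width: number;
  height: number;
  abX: number;
  abY: number;
  styles: UIStyle;
}

// A solid rounded rectangle in the design draft parses to a Shape node
const button: UINode = {
  id: 'layer-01',
  type: 'Shape',
  name: 'Submit Button',
  width: 120,
  height: 44,
  abX: 24,
  abY: 300,
  styles: { background: '#1677ff' },
};
```

Every downstream step (cleaning, merging, layout) operates on this uniform structure regardless of which design tool produced the layer.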

Layer optimization

Parsed layers often contain invalid information, such as redundant or fragmented layers. We need to optimize the UI node information through data preprocessing to improve the accuracy of code restoration.

The preprocessing stage is divided into two steps: 1. layer cleaning; 2. layer merging.

1. Layer cleaning

A design draft often contains invisible layers. Removing them does not affect the visual result, so these layers are redundant.

The design draft has an invisible layer

Layer cleaning removes invisible layers, which fall into the following four situations:

1.1 The layer style is transparent and has no background

```typescript
const isTransparentStyle = function(node: UINode): boolean {
  const { background, border, shadows } = node.styles;
  return (
    !node.childNum &&
    (node.isTransparent ||
      (background &&
        background.hasOpacity &&
        background.type === 'color' &&
        +background.color.a === 0) ||
      (border && +border.color.a === 0) ||
      (node.type === UINodeTypes.Shape && !background && !border && !shadows))
  );
};
```
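The core of the first condition is that a color fill with zero alpha contributes nothing visually. A simplified sketch of just that check, using a hypothetical pared-down style shape rather than the article's full UIStyle type:

```typescript
interface Color { r: number; g: number; b: number; a: number }
interface Styles {
  background?: { type: string; hasOpacity: boolean; color: Color };
}

// A color background whose alpha channel is 0 is fully transparent
const isFullyTransparentBg = (styles: Styles): boolean =>
  !!styles.background &&
  styles.background.hasOpacity &&
  styles.background.type === 'color' &&
  +styles.background.color.a === 0;

const invisible: Styles = {
  background: { type: 'color', hasOpacity: true, color: { r: 0, g: 0, b: 0, a: 0 } },
};
const visible: Styles = {
  background: { type: 'color', hasOpacity: true, color: { r: 0, g: 0, b: 0, a: 1 } },
};
```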

1.2 The layer is covered by other primitives

```typescript
const isCovered = function(node: UINode, nodelist: Array<UINode>): boolean {
  const index = nodelist.indexOf(node);
  const arr2 = nodelist.slice(index + 1).filter(n => !isContained(n, node));
  // If the node is covered by a later sibling, and none of its own attributes
  // (e.g. shadow) affect that sibling, the node can be removed
  return arr2.some(
    brother =>
      brother.type !== QNodeTypes.QLayer &&
      isBelong(node, brother) &&
      !brother.hasComplexStyle
  );
};
```
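The helpers `isBelong`/`isContained` used above are not shown in the article. Under the assumption that they test simple rectangle containment, a sketch might look like this:

```typescript
interface Box { abX: number; abY: number; width: number; height: number }

// True when `inner` lies entirely within `outer` (assumed containment semantics)
const isBelong = (inner: Box, outer: Box): boolean =>
  inner.abX >= outer.abX &&
  inner.abY >= outer.abY &&
  inner.abX + inner.width <= outer.abX + outer.width &&
  inner.abY + inner.height <= outer.abY + outer.height;

const card: Box = { abX: 0, abY: 0, width: 200, height: 100 };
const badge: Box = { abX: 10, abY: 10, width: 40, height: 20 };
```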

1.3 The layer color is the same as the color of the pixels beneath it

```typescript
const isCamouflage = function(node: UINode, nodelist: Array<UINode>): boolean {
  const { pureColor } = node;
  if (!pureColor) return false;
  const nodeIndex = nodelist.indexOf(node);
  // Look below the node for a layer of the same color that contains it
  const bgNode = nodelist
    .slice(0, nodeIndex)
    .reverse()
    .find(n => isSameColor(pureColor, n.pureColor) && (!n.parent || isBelong(node, n)));
  if (!bgNode) return false;
  const bgNodeIndex = nodelist.indexOf(bgNode);
  // Redundant only if no layer in between intersects the node
  if (bgNodeIndex + 1 < nodeIndex)
    return !nodelist
      .slice(bgNodeIndex + 1, nodeIndex)
      .some(n => isIntersect(node, n));
  return false;
};
```

1.4 The layer is outside the visible boundary

```typescript
// abXops/abYops denote the right/bottom edges of a node (abX + width, abY + height)
const isOutside = function(node: UINode, rootNode: UINode): boolean {
  return (
    node.abX >= rootNode.abXops ||
    node.abY >= rootNode.abYops ||
    node.abXops <= rootNode.abX ||
    node.abYops <= rootNode.abY
  );
};
```
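A quick self-contained check of this boundary test, assuming `abXops`/`abYops` are the right/bottom edges and using a hypothetical 375×667 artboard as the root:

```typescript
interface Rect { abX: number; abY: number; abXops: number; abYops: number }

// A node is outside when it lies entirely beyond any edge of the root
const isOutside = (node: Rect, root: Rect): boolean =>
  node.abX >= root.abXops ||
  node.abY >= root.abYops ||
  node.abXops <= root.abX ||
  node.abYops <= root.abY;

const root: Rect = { abX: 0, abY: 0, abXops: 375, abYops: 667 };
const offscreen: Rect = { abX: 400, abY: 10, abXops: 450, abYops: 60 }; // right of artboard
const onscreen: Rect = { abX: 10, abY: 10, abXops: 60, abYops: 60 };
```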

We define these as cleaning functions, iterate over the layer node list, and filter out a node if it meets any of the above four criteria.

```typescript
function clean(nodes: UINode[]) {
  const [rootNode] = nodes;
  return nodes.filter((node: UINode) => {
    const needClean =
      isTransparentStyle(node) || // the node's style is invisible
      isOutside(node, rootNode) || // the node is outside the visible boundary
      isCovered(node, nodes) || // the node is covered by other nodes
      isCamouflage(node, nodes); // the node is color-camouflaged
    // Matching any of the four cases marks the node as redundant
    return !needClean;
  });
}
```
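To see the pipeline end to end, here is a runnable sketch with trivial stand-in predicates (the two predicates below are hypothetical simplifications, not the real implementations above):

```typescript
interface Node { id: string; visible: boolean; inBounds: boolean }

// Trivial stand-ins for the cleaning predicates
const isTransparentStyle = (n: Node) => !n.visible;
const isOutside = (n: Node) => !n.inBounds;

function clean(nodes: Node[]): Node[] {
  return nodes.filter(n => !(isTransparentStyle(n) || isOutside(n)));
}

const kept = clean([
  { id: 'a', visible: true, inBounds: true },
  { id: 'b', visible: false, inBounds: true }, // transparent, removed
  { id: 'c', visible: true, inBounds: false }, // off-canvas, removed
]);
```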

2. Merge layers

This step determines which layers in the design draft need to be merged. Take the smiley icon below: if its layers are exported without grouping, four scattered images will be output.

Scattered layers to be merged

We judge whether layers should be merged based on whether they intersect spatially, which involves the following two steps:

2.1 Determine the intersection relationship between two nodes

As shown in the figure above, the eye intersects the face, and the mouth intersects the face, yielding two intersection relations: A: [eye, face] and B: [mouth, face]. The code is as follows:

```typescript
let isCollision = (node: UINode, brother: UINode) =>
  !(
    node.abY + node.height < brother.abY ||
    node.abY > brother.abY + brother.height ||
    node.abX + node.width < brother.abX ||
    node.abX > brother.abX + brother.width
  );
```
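A quick check of this test with hypothetical coordinates for the smiley example (an eye overlapping the face, and the mouth placed away from the eye):

```typescript
interface Box { abX: number; abY: number; width: number; height: number }

// Standard axis-aligned bounding-box overlap test
const isCollision = (node: Box, brother: Box) =>
  !(
    node.abY + node.height < brother.abY ||
    node.abY > brother.abY + brother.height ||
    node.abX + node.width < brother.abX ||
    node.abX > brother.abX + brother.width
  );

const face: Box = { abX: 0, abY: 0, width: 100, height: 100 };
const eye: Box = { abX: 20, abY: 20, width: 15, height: 15 };
const mouth: Box = { abX: 30, abY: 70, width: 40, height: 15 };
```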

2.2 Merging Multiple Nodes

We then merge the groups (edges) of intersection relations. For example, the face layer in relation A also appears in relation B, so we merge A and B to get C: [eye, face, mouth].

```typescript
function mergeJudge(nodelist: UINode[]): Array<Set<UINode>> {
  const groups: Array<Set<UINode>> = [];
  const relations: Array<[UINode, UINode]> = [];
  // Collect every intersecting pair of nodes as an edge
  for (let i = 0; i < nodelist.length; i++) {
    const node = nodelist[i];
    for (let j = i + 1; j < nodelist.length; j++) {
      const brother = nodelist[j];
      if (isCollision(node, brother)) relations.push([node, brother]);
    }
  }
  relations.forEach(([node, brother]) => {
    // Find the groups that already contain either endpoint of this edge
    const res = groups.filter(group => group.has(node) || group.has(brother));
    if (res.length) {
      // Merge all matched groups into one
      const unionGroup = res.reduce((p, c) => p.concat(Array.from(c)), [] as UINode[]);
      res.forEach(g => groups.splice(groups.indexOf(g), 1));
      groups.push(new Set(unionGroup).add(node).add(brother));
    } else {
      // Otherwise start a new group
      groups.push(new Set([node, brother]));
    }
  });
  return groups;
}
```
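The grouping behaves like computing connected components over intersection edges. A minimal re-implementation over string labels (hypothetical, just to show relations A and B merging into one group):

```typescript
// Group items connected by pairwise "edges" into sets (connected components)
function groupByEdges<T>(edges: Array<[T, T]>): Array<Set<T>> {
  const groups: Array<Set<T>> = [];
  edges.forEach(([a, b]) => {
    // Groups already containing either endpoint must be unified
    const hits = groups.filter(g => g.has(a) || g.has(b));
    if (hits.length) {
      const union = new Set<T>([a, b]);
      hits.forEach(g => {
        g.forEach(x => union.add(x));
        groups.splice(groups.indexOf(g), 1);
      });
      groups.push(union);
    } else {
      groups.push(new Set([a, b]));
    }
  });
  return groups;
}

// Relation A: [eye, face] and relation B: [mouth, face] share "face",
// so they collapse into a single group C: [eye, face, mouth]
const groups = groupByEdges<string>([['eye', 'face'], ['mouth', 'face']]);
```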

Finally, we merge each group into a new node according to these relations:

```typescript
function merge(nodes: UINode[]) {
  // Merge layers based on their spatial relationships
  if (!nodes.length) return;
  const groupArr = mergeJudge(nodes);
  return groupArr.map((item: Set<UINode>) => {
    if (item.size > 1) {
      // Union the grouped layers into a single Image node
      const newNode = union([...item], UINodeTypes.Image);
      return newNode;
    }
    return item;
  });
}
```
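The `union` helper is not shown in the article. Assuming it produces a new Image node whose bounds are the bounding box of the group, a sketch might be:

```typescript
interface Box { abX: number; abY: number; width: number; height: number }
interface ImageNode extends Box { type: 'Image' }

// Merge a group of layers into one Image node covering their combined bounding box
function union(items: Box[]): ImageNode {
  const abX = Math.min(...items.map(n => n.abX));
  const abY = Math.min(...items.map(n => n.abY));
  const right = Math.max(...items.map(n => n.abX + n.width));
  const bottom = Math.max(...items.map(n => n.abY + n.height));
  return { type: 'Image', abX, abY, width: right - abX, height: bottom - abY };
}

const merged = union([
  { abX: 20, abY: 20, width: 15, height: 15 }, // eye
  { abX: 0, abY: 0, width: 100, height: 100 }, // face
  { abX: 30, abY: 70, width: 40, height: 15 }, // mouth
]);
```

With this, the scattered smiley layers export as one image instead of four.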

Conclusion

In this paper, we parsed layers in two steps: abstraction and optimization. The abstraction step resolves layers from different design tools into a unified data structure; the optimization step then removes redundant nodes and merges scattered ones, yielding a "clean" set of UI nodes. In a later article, we will show how to lay out these UI nodes to generate the final code.

For more lessons on front-end intelligence, please refer to the course I shared earlier: ke.qq.com/course/2995…

Intelligent front-end – from image recognition UI style: zhuanlan.zhihu.com/p/207308196