1. Preface

1.1 The original intention of this paper

When you first encounter JavaScript on the front end, you will see it described as an object-oriented, weakly typed language. As a programmer, you will sooner or later hear about object-oriented programming (OOP) and the MVC pattern. I often think about how to apply these concepts to my daily projects and how to let them guide project development.

Generally speaking, a product or project goes through five stages: requirements analysis, software design, development, testing and launch. Most of the front-end developers I know take a pre-sale prototype and jump straight into development. As the old joke goes, development feels great until iteration becomes a crematorium: without a proper software design stage, subsequent iterations of the project are very likely to become painful.

It has been almost 3 years since I graduated, so I want to summarize and review the interesting projects I have done, share some of my experience and lessons learned, and make progress together with you. In this first article, I will share how to design software step by step with MVC thinking. I hope readers will find that the front end can also make good use of OO programming to advance a project.

Getting back to the subject: as the first article in this series, I would like to start with the basics of Canvas usage. There are already many Canvas tutorials on the Internet, but there are still interesting things to say about taking a simple rendering requirement through the software design stage all the way to rendering a complete rich-text effect, which is my original motivation for writing this article.

1.2 Introducing Canvas

The history of Canvas

Before learning about a new technology, it helps to understand its history and how it developed.

Historically, Canvas was first proposed by Apple and used in the Dashboard widgets built on WebKit in Mac OS X. Before canvas entered the HTML draft and standard, the front end drew with alternative methods: the much-criticized Flash, the very powerful SVG (Scalable Vector Graphics), and VML (Vector Markup Language), which was only available in IE (versions 5.0 and up). Some front ends even managed to draw with div + CSS.

In general, drawing graphics in the browser was relatively complicated before canvas, while drawing 2D graphics became relatively easy after Canvas appeared.

What is Canvas

Canvas is a new tag introduced in HTML5. It is used to generate images in real time on a web page and to manipulate image content; essentially it is a bitmap that can be operated on with JavaScript. The element has no drawing behavior of its own; instead it exposes an API that supports scripted client-side drawing, with drawing actions performed through JS.

Canvas features

  • Resolution dependent (drawing in px units)
  • Ability to save the resulting image in .png or .jpg format
  • Event handlers are not supported
  • Weak text rendering capability
  • For local changes, you need to redraw the entire Canvas

Common Canvas application scenarios

  1. Data visualization: charts and the like.
  2. Graphic editors and similar tools.
  3. Embedding rich media content and special effects, with good Web compatibility.
  4. Games.

Why should we learn Canvas

Talking about technology divorced from the business is pointless; and as another saying goes, extra skills are never a burden. As the times move on, front-end development runs into cool and complicated business scenarios more and more frequently. Pure CSS interactions or SVG effects struggle to handle complex game-like logic while keeping performance smooth, and that is exactly what Canvas can do.

1.3 Closing the preface

This is the first technical sharing article I have written. It took about a month and a half and went through 16 rounds of revision, but there may still be rough or wordy places here and there; please bear with me.

2. Requirements analysis

2.1 Background

Now imagine a Canvas-related rich-text rendering project with a simple business scenario:

  • On the client, users draw page effects by dragging, and set interactive animations between pages;
  • Users can then view the drawn pages on the mobile side, with the corresponding interactive animations.

We are assigned the second task: restore the pages through Canvas according to the known data structure and rendering requirements, and handle the interactive animations well.

2.2 Sorting out the implementation ideas

The simpler a requirement sounds, the more general it is. The second requirement above gives no foothold for any concrete function, so let's work backwards.

Start from the rendered page effect

Simulating the rendering with CSS is feasible, but the complexity of a page effect is unknown in advance, so the nesting of the DOM elements involved could become very deep; and considering the interactions between pages, rendering and animating purely with HTML + CSS would also raise serious performance problems.

One direction for performance optimization is to eliminate the unknown nesting depth of DOM elements by exporting the rendered result as an image, turning the page from a layered DOM rendering into a purely graphical presentation. The performance issues around interaction are then resolved as well. Canvas is exactly the tool that can render such a page and output it as an image.

Implementation approach

  1. Get the rendering information provided by the client;
  2. Render it through Canvas and output it as an image;
  3. Listen for events on the mobile side and respond to them to complete the interactive animations.

Now that the approach for the whole requirement has been worked out, the next thing to think about is: what logic should be used to render the rich-text effect through Canvas?

2.3 Rendering Principle

Let's start with an analogy: designers use drawing software such as Photoshop to build page effects by stacking different layers or layer groups to achieve the final visual result. This is also how we render rich-text effects on Canvas.

If a complex page needs to be rendered, break it down into layers according to certain rules. If the effect of a layer is still relatively complex, refine it further, until every piece can be drawn with the Canvas API.

The image below shows how the left side of the page is broken down into layers. The maskLayer deserves a special note: it is not directly visible in the rendered result, but mainly serves as the skeleton of the layer body, assisting and positioning the rendering of the other effect layers.

One more addition is the concept of the artboard. For the user, the smallest unit of interaction is called a page. From a development point of view, "page" is too broad a term, so in the later sections it is uniformly called an Artboard.

2.4 Rendering Ideas

Breaking complex page effects down into layers simple enough to be recombined is the core principle behind the whole rendering effect. But to bring this ideal down to earth, it must be backed by concrete, quantified rendering requirements.

First, according to their content and extensibility, layers are classified into three categories: ShapeLayer, TextLayer and GroupLayer.

According to the current rendering requirements, the rendering effects involved can be divided into four categories: Fill, Stroke, Shadow and Blur.
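To make the classification concrete, here is a minimal sketch of how a renderer might dispatch on the three layer types. The function and the strings it records are illustrative only; the project's real renderer drives the Canvas API instead:

```javascript
// Sketch: dispatch a layer to a type-specific rendering branch.
// The recorded strings stand in for real ctx drawing calls.
function renderLayer(layer, ctx, sink = []) {
  switch (layer.layerType) {
    case 'shape':
      sink.push(`shape:${layer.shape.type}`); // a real renderer would call ctx drawing APIs here
      break;
    case 'text':
      sink.push(`text:${layer.text.rawText}`);
      break;
    case 'group':
      // A group layer only aggregates children; recurse into each child
      for (const child of layer.group) renderLayer(child, ctx, sink);
      break;
    default:
      throw new Error(`unknown layerType: ${layer.layerType}`);
  }
  return sink;
}

// Example: a group containing a rectangle and a line of text
const calls = renderLayer({
  layerType: 'group',
  group: [
    { layerType: 'shape', shape: { type: 'rect' } },
    { layerType: 'text', text: { rawText: 'hello' } },
  ],
}, null);
```

The group branch simply recurses, which mirrors the idea that GroupLayer exists for composition rather than for drawing anything itself.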

2.5 Data Structure

In the previous section, layer types and rendering effects were roughly classified. Based on that information, a first version of the data structure can be drafted.

First of all, the structure shown below is an artboard data structure. In principle, layer groups are attached to the artboard, so they are treated as one of its properties. Besides the rendering effects there is also an important transform field, which describes the position, scale and rotation angle of the current layer within the artboard.

The corresponding JS object structure looks like this:

{
  // Artboard information
  artboard: {
    width,          // Width
    height,         // Height
    visibleHeight,  // Visible height
    // Style information
    style: {},
    // Layer list
    layerList: [{
      layerType,  // Layer type. shape: shape, text: text, group: layer group
      shape: {},  // Shape rendering information
      text: {},   // Text information
      group: [],  // Layer set
      // Rendering effects
      style: {
        fill: {},    // Fill effect
        stroke: {},  // Stroke effect
        // Filter effects
        filter: {
          shadow: {},  // Shadow effect
          blur: {},    // Blur effect
          // ...
        }
      },
      transform: {}  // Transform information
    }]
  }
}
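To make the shape of this draft structure concrete, the sketch below builds a small artboard instance (all values are made up) and walks its layerList, tallying the layer types, including those nested inside groups:

```javascript
// A minimal artboard instance following the draft structure (values are made up)
const artboard = {
  width: 750,
  height: 1334,
  visibleHeight: 1206,
  style: {},
  layerList: [
    { layerType: 'shape', shape: {}, style: { fill: {} }, transform: {} },
    { layerType: 'text', text: {}, style: {}, transform: {} },
    { layerType: 'group', group: [{ layerType: 'shape', shape: {} }], style: {}, transform: {} },
  ],
};

// Tally layer types, descending into layer groups
function countLayerTypes(layerList, counts = { shape: 0, text: 0, group: 0 }) {
  for (const layer of layerList) {
    counts[layer.layerType] += 1;
    if (layer.layerType === 'group') countLayerTypes(layer.group, counts);
  }
  return counts;
}

const counts = countLayerTypes(artboard.layerList);
```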

3. Software design

The preface mentioned the five stages of developing software. After requirements analysis comes the software design stage, which is also the key chapter of this article. During software design I will use the domain model and UML (Unified Modeling Language) as tools to lay out the logic of the code in diagram form in advance.

3.1 Introduction to domain model

Domain model: a visual representation of conceptual classes in the domain, or of objects in the real world [from Baidu]. As an analysis model, it is used during the analysis stage of software development to examine how the system's functional requirements can be met. In UML, class diagrams are the main tool for describing domain models.

Why do domain models?

As mentioned in the preface, a product or project goes through five stages: requirements analysis, software design, development, testing and launch. When the boundary between software design and development is not obvious, that is actually a very dangerous signal.

First, without a proper software design stage, iteration leads to confused and convoluted logic inside the code. Take the Vue projects I have worked on as an example: after several iterations and handovers between different people, the most obvious symptom is that the data structures defined in Vuex become jumbled. Sometimes a small logical change forces the whole project workflow to be run through again, which is very inefficient.

Second, without good software design it is hard to handle a complex project during development, especially when requirements change rapidly. Without strategic domain modeling, you are likely to miss the things that really matter, and the whole project risks spinning out of control.

Learning to do domain modeling therefore helps others quickly grasp the macro architecture and internal logic of a project, while minimizing risk during development. It also makes it easy to cut in if a code refactoring is needed later.

3.2 Domain Model

Use cases

Before we begin formal domain modeling, let’s review what the current use cases are.

  1. Restore the page effects drawn on the client
  2. Restore the dynamic interactions between pages set on the client

The domain model

Obviously, the contents of the use cases are hard to map directly onto real-world things. Fortunately, the domain boundary of these use cases is relatively small, so a preliminary division can be made directly along the MVC structure. For the view layer (View) and the control layer (Controller), two roles can be created directly, a view controller (UIController) and a core controller (APP), each taking on the corresponding responsibilities within its own scope. The view layer is tightly coupled to the browser.

Next, let's focus on the logic of the Model layer. First, from use case 1 we perceive the need for a rendering role, which we call the Renderer. From use case 2 we can perceive that the responsibilities related to interactive animation deserve to be separated into a single role and encapsulated internally; we call it the AnimationController. The resulting domain model is shown below.

3.3 Rendering model

Once we have the domain model, we need to flesh out the responsibilities of the Renderer role in order to output a more complete class diagram.

The first step is to lay out all of the layer rendering requirements. Earlier, layer rendering effects were only divided into four types, without further explaining the specifics of each; the rendering model below fills in the remaining details. The shape layer gains a stroke module, which drafts the outline of the shape and specifies its concrete appearance through related fields.

I also placed some general rendering attributes in the layer-group part, such as whether the layer is fixed, visible or transformed. Shape and text layers have these properties too, but to avoid duplication they are only shown in the layer-group part of the rendering model.

3.4 Data Model

Before thinking about class diagrams, consider how the data formats flow through the overall domain model. I divide the flow into three stages:

  1. Original data structure (originData): encapsulates the original rendering properties and the interaction-related properties; returned from the server through an interface;
  2. Render data structure (renderData): the original data reorganized, following a fixed rendering flow, into a structure convenient for Canvas API calls;
  3. View data structure (viewData): integrates the image resources output by Canvas rendering; suitable for direct display on the mobile side.

Next, we build the data model in turn.

Raw data structure

{
  artboardList: [],      // Artboard list
  clipShapeList: [],     // Mask layer list
  gradientList: [],      // Gradient information list
  interactionList: [],   // Interaction information list
  resourceList: [],      // Image resource list
  perviewArtboardId: ''  // ID of the artboard to preview
}

The object structure inside the artboard list is consistent with the figure in Section 2.5.

Render data structure

When the original data is a tree structure, there are two ways to render it:

  1. Traverse with a depth-first search and render as you go;
  2. Use a depth-first search to post-process the data during traversal and re-output it as a queue structure.

Consider method 1 first. Although it achieves immediate rendering, the rendering logic is coupled with the tree-traversal logic, so future extensions may require large adjustments to the original structure; its extensibility is poor.

Therefore scheme 2 is adopted: first traverse the nodes (layers) depth-first and output a queue ordered by hierarchy. While decoupling the layers, the transform, gradient and mask attributes involved in each layer are recalculated at the same time. When the renderer receives the information of a layer, it only needs to render according to that layer's own attributes, without considering whether it is affected by a parent layer.
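The flattening in scheme 2 can be sketched as follows. This is a reduced model: only a translation offset is accumulated here, whereas the real processing also recalculates rotation, gradient and mask attributes:

```javascript
// Flatten a layer tree into a render queue (depth-first), baking the
// parent group's offset into each layer so the renderer never has to
// look at ancestors. Only translation is modeled in this sketch.
function flattenLayers(layerList, offsetX = 0, offsetY = 0, queue = []) {
  for (const layer of layerList) {
    const x = offsetX + (layer.x || 0);
    const y = offsetY + (layer.y || 0);
    if (layer.layerType === 'group') {
      flattenLayers(layer.group, x, y, queue); // groups dissolve into their children
    } else {
      queue.push({ layerType: layer.layerType, x, y }); // shape/text layers enter the queue
    }
  }
  return queue;
}

const queue = flattenLayers([
  { layerType: 'shape', x: 10, y: 10 },
  {
    layerType: 'group', x: 100, y: 0,
    group: [{ layerType: 'text', x: 5, y: 5 }],
  },
]);
// queue: [{ layerType: 'shape', x: 10, y: 10 }, { layerType: 'text', x: 105, y: 5 }]
```

Note that the group layer itself does not appear in the output queue, which matches the later observation that the render data's layerType only offers shape and text.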

This structure differs little from the object structure inside the artboard list of the original data, except that the layerType attribute now only offers shape and text layers. It also adds an outterRect attribute, which is later used to locate the click area of interactive events, since the Canvas element has weak event support.

{
  canvasList: [{
    id, // Unique identifier
    // ...
    layerList: [{
      layerType, // Layer type. shape: shape, text: text
      // ...
      outterRect // Enclosing rectangle, used to locate the click area of interactions
    }]
  }],
  interactionList,
  perviewArtboardId
}

View data structure

This is the structure of the data output by the renderer. Note that the layer list no longer carries layer-type attributes; instead it has img and layerImage attributes.

Consider that when the page height is greater than the visible height, the page can slide. If at that moment a layer has a fixed, background-blurred effect, rendering it as a single static image of the layer's original attributes (img) no longer works. Such a layer still needs real-time rendering: the background-blur logic is re-applied as the page moves, and this rendering mode requires the support of the layerImage attribute.

const viewData = {
  pageList: [
    {
      id: '',
      // ...
      layerList: [{
        width,
        height,
        position,
        outterRect, // Enclosing rectangle of the layer
        img,        // Rendered image
        layerImage, // Image of the layer body alone, without rendered effects
        fixed,      // Whether the layer is fixed
        bgBlur,     // Whether the background is blurred
      }]
    }
  ],
  interactionList,
  perviewArtboardId
}

Layer detail structure of render data

The earlier mentions of the layer structure only touched on it briefly, because it has many internal attributes; it is therefore shown separately below. It essentially follows the larger data structure above, but extends it with many concrete rendering effects.

{
  // true: visible, false: invisible
  visible,
  // true: the layer is fixed, false: not fixed
  fixed,
  // Layer type. shape: shape, text: text, group: layer group
  layerType,
  // Enclosing rectangle area
  bound: {
    x,
    y,
    width,
    height
  },
  // Style information
  style: {
    // Fill effect
    fill: {
      // Fill type. solid: solid color, pattern: pattern fill, gradient: gradient fill
      type,
      // `solid` === type
      color: {
        r,
        g,
        b,
        alpha
      },
      pattern: {
        // Fill form: cover/fill/none
        objectFit,
        resource // Image resources
      },
      gradient: {
        // Gradient type. linear: linear gradient, radial: radial gradient
        type,

        // `linear` === type: endpoints of the gradient line
        x0,
        y0,
        x1,
        y1,

        // `radial` === type 
        cx,
        cy,
        cr,
        fx,
        fy,
        fr,

        // Set of breakpoints
        stopList: [{
            color: {
              r,
              g,
              b,
              alpha
            },
            offset,
          },
          // ...
          {
            color: {
              r,
              g,
              b,
              alpha
            },
            offset,
          }
        ]
      }
    },
    // Stroke effect
    stroke: {
      // Stroke width
      lineWidth,
      // Stroke alignment. outside: outside, inside: inside, center: center (default)
      align,
      // Strokes can only be filled with a solid color
      color: {
        r,
        g,
        b,
        alpha
      },
      // Same as the context2D lineJoin property
      lineJoin,
      // Same as the context2D lineCap property
      lineCap,
      // Length of the dash segments (cf. the context2D setLineDash method)
      dashLength,
      // Same as the context2D lineDashOffset property
      dashOffset,
    },
    // Filter effect
    filter: {
      // Shadow style
      shadow: {
        // same as context2D shadowColor property
        shadowColor: {
          r,
          g,
          b,
          alpha
        },
        // same as context2D shadowOffsetX property
        shadowOffsetX,
        // same as context2D shadowOffsetY property
        shadowOffsetY,
        // same as context2D shadowBlur property
        shadowBlur,
      },
      // Blur effect
      blur: {
        // Blur type. bgBlur: background blur, objectBlur: object blur
        type,
        // Blur amount
        blurAmount,
        // Brightness value
        brightnessAmount,
        // Opacity value
        opacity
      },
      // Grayscale percentage
      grayscale,

      // ... refer to the filter parameters for more filter styles
    }
  },
  // Transform information
  transform: {
    // Scale horizontally
    a,
    // Vertical tilt
    b,
    // Horizontal tilt
    c,
    // Vertical scaling
    d,
    // Move horizontally
    tx,
    // Move vertically
    ty
  },
  // Shape information
  shape: {
    // Shape type. rect: rectangle, circle: circle, ellipse: ellipse, line: line segment, path: vector path, compound: combined shape
    type,

    // `rect` === type
    // For a rounded rectangle this field holds the corner radii of the four corners (array type)
    r: [],
    width,
    height,

    // `circle` === type
    // x coordinate of the center
    cx,
    // y coordinate of the center
    cy,
    // Radius
    r,

    // `ellipse` === type
    cx,
    cy,
    // The radius of the major axis of the ellipse
    rx,
    // The radius of the short axis of the ellipse
    ry,

    // `line` === type
    // Start coordinates
    x1,
    y1,
    // End coordinates
    x2,
    y2,

    // `path` === type || `compound` === type
    // SVG path description
    path

  },
  // Text information
  text: {
    // Font style
    font,
    // Letter spacing
    letterSpacing,
    // Raw text content
    rawText,
    // Text position
    position: {
      x,
      y
    },
    // Underline
    underline: {
      // Whether it is visible
      visible,
      // Distance from the text
      margin,
      // Underline width
      borderWidth,
      color: {
        r,
        g,
        b,
        a
      }
    }
  },
  // Layer group information
  group: {
    children: [{}]
  }
}
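As a hedged illustration of how a renderer might consume the style.fill portion of this structure, the sketch below maps the fill data onto context2D state. applyFill is a hypothetical helper and the context is stubbed; the real renderer calls CanvasRenderingContext2D directly:

```javascript
// Hypothetical helper: translate the draft fill structure into
// context2D state. `ctx` only needs fillStyle / createLinearGradient.
function applyFill(ctx, fill) {
  switch (fill.type) {
    case 'solid': {
      const { r, g, b, alpha } = fill.color;
      ctx.fillStyle = `rgba(${r}, ${g}, ${b}, ${alpha})`;
      break;
    }
    case 'gradient': {
      const grad = fill.gradient;
      if (grad.type === 'linear') {
        const g2d = ctx.createLinearGradient(grad.x0, grad.y0, grad.x1, grad.y1);
        for (const stop of grad.stopList) {
          const { r, g, b, alpha } = stop.color;
          g2d.addColorStop(stop.offset, `rgba(${r}, ${g}, ${b}, ${alpha})`);
        }
        ctx.fillStyle = g2d;
      }
      break;
    }
    // 'pattern' would use ctx.createPattern(...), omitted in this sketch
  }
  return ctx.fillStyle;
}

// Stub context, just to demonstrate the solid branch
const stubCtx = {};
const style = applyFill(stubCtx, { type: 'solid', color: { r: 255, g: 0, b: 0, alpha: 1 } });
```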

3.5 a UML class diagram

Next, we take the flow of data transformations as the main axis of analysis. Since three data formats need to be handled, and following the principle of high cohesion, a separate class is set up for each kind of processing. For interface requests I also wrapped a class: a Data Transfer Object (DTO). The view layer and control layer of the domain model map directly onto the class diagram, and the final class diagram is shown below.

  • View (view layer)
    • UIController: the view-layer controller. It encapsulates interactive event listening and some auxiliary rendering methods, and exposes methods matching the APP class;
      • Initialize the page and the preview image
      • Initialize the interaction events
      • Notify the animation controller to perform an interactive animation
    • AnimationController: the animation controller, encapsulating the logic of the concrete interactive animation effects. Currently the interactive animations fall into four categories: overlay animation, cancel-overlay animation, slide entry/exit animation, and push entry/exit animation;
      • Four externally exposed interactive-animation methods
      • The concrete implementation of the four interactive animations
      • Calculation of the frame animation, i.e. the position at each frame
      • Encapsulated easing functions
  • Controller (control layer)
    • APP: the core controller. It exposes two entry methods externally (data initialization and launch), and internally encapsulates the other message-forwarding methods;
      • Externally exposed initialization interface
      • Externally exposed launch interface
      • Interfaces for forwarding various messages
  • Model (logic layer)
    • ProcessService: the data processing class. It schedules the raw data class to obtain the original data and integrates it into render data; it also schedules the rendering class to obtain the view data and forwards it to the core controller;
      • Notify the raw data class downward to fetch the original data;
      • Integrate the original data into render data
        • Convert the tree structure of the original data into a queue structure convenient for layer-by-layer Canvas rendering
        • Calculate the attributes affected by layer-group nesting (for example the rotation attribute of a transform: a child layer's rotation angle is affected by its parent)
        • Calculate the positional relations of the text runs in complex text effects (Canvas text support is weak; effects such as letter spacing require transforming position relations)
    • RenderService: the rendering class, which takes on the Renderer's responsibility and converts render data into view data;
      • Render the artboard list
      • Render the layer list of each artboard
      • Draw the layers of the render class (some layers need further decomposition, so in a sense they can also be seen as artboards)
      • Render the related effects
    • CanvasDataService: the raw data class, which obtains artboard and layer information from the data transfer object and integrates the original data;
    • DTO: the data transfer object, encapsulating XMLHttpRequest and used to call interfaces and fetch server-side data;

3.6 UML sequence diagram

This section shows the internal class-scheduling process, starting from initialization.

  1. The core controller (APP) receives a request to initialize and display the page;
  2. The data processing class (ProcessService) receives the APP's instruction and makes a downward request (queryOriginData) to get the original data;
  3. The data processing class first converts the raw data internally into a structure that RenderService can easily render;
  4. The data processing class then calls the rendering class's initRenderData method to get the view data, and returns the result upward;
  5. The core controller forwards the view data to the view-layer controller (UIController), and the view layer does the presentation.
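The five steps above can be sketched with stubbed classes. Every method body here is a placeholder standing in for the real logic, and the actual signatures may differ:

```javascript
// Stubbed collaboration matching the initialization sequence above.
// All data payloads are placeholders.
class DTO {
  fetch() { return { artboardList: [] }; } // stands in for the XHR call
}

class CanvasDataService {
  constructor(dto) { this.dto = dto; }
  queryOriginData() { return this.dto.fetch(); }
}

class RenderService {
  initRenderData(renderData) { return { pageList: [], from: renderData.stage }; }
}

class ProcessService {
  constructor(dataService, renderService) {
    this.dataService = dataService;
    this.renderService = renderService;
  }
  init() {
    const origin = this.dataService.queryOriginData();     // step 2
    const renderData = { ...origin, stage: 'renderData' }; // step 3: tree -> queue, etc.
    return this.renderService.initRenderData(renderData);  // step 4
  }
}

class APP {
  constructor(process, ui) { this.process = process; this.ui = ui; }
  init() { this.ui.show(this.process.init()); }            // steps 1 and 5
}

// Wire it together
const shown = [];
const app = new APP(
  new ProcessService(new CanvasDataService(new DTO()), new RenderService()),
  { show: (viewData) => shown.push(viewData) },
);
app.init();
```

The point of the sketch is the direction of the calls: APP only forwards messages, ProcessService orchestrates, and neither the UI nor the DTO knows about the other.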

4. Introduction to Canvas

Before development, it is worth briefly mentioning some pitfalls and basic concepts you will run into when getting started with Canvas.

4.1 Creating a Canvas

Canvas and the rendering context

A Canvas element can be understood as a node in the DOM tree that provides a fixed-size canvas and exposes one or more rendering contexts. A rendering context is used to draw and manipulate the content to be displayed on the canvas.

The size of the rendering context is set through the width and height attributes of the Canvas element, while the displayed width and height of the element itself are set through its style attribute. When the width and height attributes are not set, the rendering context defaults to 300 pixels wide and 150 pixels high.

When first working with canvas, pay special attention to the width and height settings: if the aspect ratio of the canvas element differs from that of the rendering context, the displayed content will very likely be distorted.

CanvasRenderingContext2D

CanvasRenderingContext2D provides the 2D rendering context for drawing on a Canvas element; it can be used to draw shapes, text, images and other objects. The API calls that produce the rich-text rendering effects in the rest of this article are all provided by this object.

const canvas = document.querySelector('canvas');
const context = canvas.getContext('2d'); // Returns the CanvasRenderingContext2D object

WebGLRenderingContext

The WebGLRenderingContext object provides a drawing context based on the OpenGL ES 2.0 (OpenGL for Embedded Systems) specification for drawing inside the Canvas element. OpenGL provides a fully featured 2D and 3D graphics API, and it is the cornerstone of 3D effects on Canvas.

const canvas = document.querySelector('canvas');
const context = canvas.getContext('webgl'); // Returns the WebGLRenderingContext object

Device pixel ratio (devicePixelRatio)

The device pixel ratio is the ratio of the physical-pixel resolution of the current display device to its CSS-pixel resolution. It can also be read as a ratio of pixel sizes: the size of one CSS pixel versus the size of one physical pixel.

Simply put, the device pixel ratio tells the browser how many physical screen pixels should be used to draw a single CSS pixel. Keep in mind that everything the canvas element draws is a bitmap, which becomes blurry when scaled up. Typically a desktop browser has a pixel ratio of 1, while mobile devices commonly have 2 (iPhone 5/6/7) or 3 (iPhone X). Content drawn at the same width and height looks sharp in a desktop browser but blurry on mobile, purely because of the difference in device pixel ratio. Therefore, to ensure that content drawn on mobile, especially text, stays sharp, the width and height of the rendering context must be multiplied by the device pixel ratio.

const canvas = document.querySelector('canvas');
const context = canvas.getContext('2d');

// The canvas element itself covers the entire screen (CSS pixels)
canvas.style.setProperty('width', `${document.documentElement.clientWidth}px`);
canvas.style.setProperty('height', `${document.documentElement.clientHeight}px`);

// The rendering context covers the entire screen, scaled by the device pixel ratio
canvas.width = document.documentElement.clientWidth * window.devicePixelRatio;
canvas.height = document.documentElement.clientHeight * window.devicePixelRatio;
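One detail worth adding: after enlarging the backing store by the device pixel ratio, it is common to also call context.scale(dpr, dpr), so that drawing code can keep working in CSS pixels. A small helper, sketched with the canvas and context passed in explicitly (the function name is ours, not a standard API):

```javascript
// Size a canvas for high-DPI screens: the CSS size stays in CSS pixels,
// the backing store is multiplied by dpr, and the context is scaled so
// that existing drawing code keeps working in CSS-pixel coordinates.
function setupHiDPICanvas(canvas, context, cssWidth, cssHeight, dpr) {
  canvas.style.setProperty('width', `${cssWidth}px`);
  canvas.style.setProperty('height', `${cssHeight}px`);
  canvas.width = cssWidth * dpr;   // backing-store resolution
  canvas.height = cssHeight * dpr;
  context.scale(dpr, dpr);         // 1 unit in drawing code == 1 CSS pixel
  return { width: canvas.width, height: canvas.height };
}
```

In a browser this would be called as `setupHiDPICanvas(canvas, canvas.getContext('2d'), clientWidth, clientHeight, window.devicePixelRatio)`.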

4.2 Coordinates of Canvas

The Canvas coordinate system

There is only one Canvas coordinate system, and it is unique and constant: its origin is the upper-left corner of the canvas, the positive X axis extends to the right, and the positive Y axis extends downward.

Canvas drawing coordinate system

The drawing coordinate system is not constant; it is tied to the canvas transformation matrix. When the matrix changes, the drawing coordinate system changes with it, and the changes accumulate rather than reset. The matrix is changed by setting translate, rotate, scale and skew.

Each matrix change applies relative to the current drawing coordinate system.

const canvas = document.querySelector('#canvas');
const context = canvas.getContext('2d');

canvas.width = document.documentElement.clientWidth * window.devicePixelRatio;
canvas.height = document.documentElement.clientHeight * window.devicePixelRatio;

context.fillStyle = '#444';
context.fillRect(0, 0, canvas.width, canvas.height);

context.strokeStyle = '#cffcff';
context.lineWidth = 10;

const rectOption = {
  width: canvas.width / 2,
  height: canvas.height / 4,
  top: 0,
  left: 0,
};

// The origin of the drawing coordinate system defaults to the upper-left corner of the canvas
context.strokeRect(rectOption.left, rectOption.top, rectOption.width, rectOption.height);

// Shift the origin
context.translate(canvas.width / 4, canvas.height / 2);
context.strokeRect(rectOption.left, rectOption.top, rectOption.width, rectOption.height);

// Rotate the drawing coordinate system by 30°
context.rotate(30 * Math.PI / 180);
context.strokeRect(rectOption.left, rectOption.top, rectOption.width, rectOption.height);

The figure below shows the result of running this code. The first rectangle is drawn directly, starting from the origin. Then the drawing coordinate system is translated and the rectangle is drawn again: with the origin moved, the rectangle's position changes. Finally the coordinate system is rotated 30° and the rectangle is drawn once more; it rotates about the translated origin.

save() and restore()

save() and restore() save and restore the canvas state, which is a snapshot of all the styles and transformations applied to the canvas at that moment. Canvas states are stored on a stack: each call to save() pushes the current state onto the stack, and each call to restore() pops the most recently saved state and restores all of its settings.

A painting state includes:

  • The transforms currently applied (translate, rotate, and scale)
  • Values of strokeStyle, fillStyle, globalAlpha, etc.
  • Current clipping path
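The stack behavior can be illustrated with a simplified model (plain JS, no canvas required; MiniContext is my own name, and it snapshots only fillStyle and lineWidth rather than the full state a real context saves):

```javascript
// Simplified model of the canvas state stack: save() pushes a snapshot
// of the current drawing state, restore() pops the last snapshot back.
class MiniContext {
  constructor() {
    this.fillStyle = '#000000';
    this.lineWidth = 1;
    this._stack = [];
  }
  save() {
    this._stack.push({ fillStyle: this.fillStyle, lineWidth: this.lineWidth });
  }
  restore() {
    const state = this._stack.pop();
    if (state) Object.assign(this, state);
  }
}

const ctx = new MiniContext();
ctx.fillStyle = '#cffcff';
ctx.save();                 // snapshot 1: #cffcff, lineWidth 1

ctx.fillStyle = '#444444';
ctx.lineWidth = 10;
ctx.save();                 // snapshot 2: #444444, lineWidth 10

ctx.fillStyle = 'red';
ctx.restore();              // back to snapshot 2
console.log(ctx.fillStyle, ctx.lineWidth); // → #444444 10

ctx.restore();              // back to snapshot 1
console.log(ctx.fillStyle, ctx.lineWidth); // → #cffcff 1
```

Each restore() undoes everything changed since the matching save(), which is why wrapping transform-heavy code in a save()/restore() pair is the standard pattern.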

Rotation around the midpoint

context.strokeRect(rectOption.left, rectOption.top, rectOption.width, rectOption.height);

// Before adjusting the coordinate system, save the current canvas state so the
// drawing coordinate system can be restored for subsequent operations
context.save();

// Move the origin to the midpoint of the shape to be rotated
context.translate(rectOption.left + rectOption.width / 2, rectOption.top + rectOption.height / 2);
// Set the rotation angle
context.rotate(45 * Math.PI / 180);
// The origin is now at the center of the shape; shift it back so that only the
// angle has changed relative to the original coordinate system
context.translate(-(rectOption.left + rectOption.width / 2), -(rectOption.top + rectOption.height / 2));

context.strokeRect(rectOption.left, rectOption.top, rectOption.width, rectOption.height);

// After drawing, restore the drawing coordinate system
context.restore();

Mirror symmetry

Take a horizontal mirror flip as an example. In CSS it is done with the following property:

transform: scaleX(-1);

Flipping canvas content horizontally works the same way. If the flipped graphic needs to stay in the same position as the original layer, translate the drawing coordinate system first; this is not expanded in detail here.

context.scale(-1, 1);
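One hedged sketch of the translation just mentioned: after scale(-1, 1) the x axis points left, so translating by the canvas width first keeps the flipped graphic inside its original area. The resulting coordinate mapping is easy to verify without a canvas (the 800px width is just an example value):

```javascript
// Equivalent canvas calls:
//   context.translate(canvas.width, 0);
//   context.scale(-1, 1);
//   context.drawImage(img, 0, 0); // appears flipped, same area
//
// After those two transforms, a drawing x coordinate maps to this
// canvas pixel column:
const mirroredX = (x, canvasWidth) => canvasWidth - x;

// A rectangle spanning [left, left + width] maps to
// [canvasWidth - left - width, canvasWidth - left]: same size,
// mirrored about the vertical center line of the canvas.
const canvasWidth = 800;
console.log(mirroredX(0, canvasWidth));   // → 800
console.log(mirroredX(100, canvasWidth)); // → 700
console.log(mirroredX(400, canvasWidth)); // → 400 (the center line is fixed)
```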

Those who are interested can refer to this article, which has a very detailed explanation:

[www.zhangxinxu.com/wordpress/2…

4.3 Drawing Pictures

drawImage()

The CanvasRenderingContext2D.drawImage() method provides several ways to draw an image onto the canvas; see the related documentation for details. It is worth noting that the meaning of the numeric arguments differs between the 5-argument form (destination rectangle) and the 9-argument form (source rectangle first).
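The three call signatures are easy to mix up. Below is a small illustrative helper (my own sketch, not part of any real API) that labels the numeric arguments of each form, making the 5- vs 9-argument difference explicit:

```javascript
// Normalize drawImage's numeric arguments into named source/destination
// rectangles, mirroring the three call signatures:
//   drawImage(img, dx, dy)
//   drawImage(img, dx, dy, dw, dh)
//   drawImage(img, sx, sy, sw, sh, dx, dy, dw, dh)
function normalizeDrawImageArgs(...args) {
  if (args.length === 2) {
    const [dx, dy] = args;
    return { dx, dy };
  }
  if (args.length === 4) {
    const [dx, dy, dw, dh] = args;
    return { dx, dy, dw, dh };
  }
  if (args.length === 8) {
    const [sx, sy, sw, sh, dx, dy, dw, dh] = args;
    return { sx, sy, sw, sh, dx, dy, dw, dh };
  }
  throw new TypeError('drawImage expects 3, 5, or 9 arguments');
}

// In the 5-argument form, the four numbers are the DESTINATION rectangle...
console.log(normalizeDrawImageArgs(10, 20, 100, 50));
// → { dx: 10, dy: 20, dw: 100, dh: 50 }

// ...but in the 9-argument form the same first four numbers become the
// SOURCE rectangle, which is the easy mistake to make.
console.log(normalizeDrawImageArgs(10, 20, 100, 50, 0, 0, 200, 100));
// → { sx: 10, sy: 20, sw: 100, sh: 50, dx: 0, dy: 0, dw: 200, dh: 100 }
```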

Image preloading and cross-origin access

When a canvas draws an image, the image source must not cross domains (or must be served with CORS headers). At the same time, only a fully loaded image can be drawn; otherwise, calling the method draws nothing onto the artboard.

let img = new Image();
// Allow cross-origin loading
img.setAttribute(`crossOrigin`, `anonymous`);
// Set the image path
img.setAttribute(`src`, path);

new Promise(resolve => {
    // If the image is already loaded, resolve immediately; otherwise wait for onload
    img.complete ? resolve(img) : (img.onload = () => resolve(img));
}).then(img => {
    context.drawImage(img, 0, 0);
});

Image size should not be too large

In actual development, when the canvas size, an image inserted with drawImage, or a resource read with getImageData exceeds a certain threshold (best to stay well under roughly 10,000 pixels on a side), the entire rendered picture comes out blank. The exact threshold is uncertain and depends on the runtime environment, which makes it a potential hazard of drawImage-based drawing.
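A defensive sketch for this hazard: clamp the requested size before creating the drawing canvas. The MAX_SIDE value below is an assumption based on the rough threshold mentioned above; real limits differ per browser and device and should be verified in the target environment.

```javascript
// Clamp a requested canvas size so neither side exceeds a conservative
// threshold, preserving the aspect ratio. MAX_SIDE is an assumed safe
// value; real per-browser limits vary.
const MAX_SIDE = 10000;

function clampCanvasSize(width, height, maxSide = MAX_SIDE) {
  const scale = Math.min(1, maxSide / Math.max(width, height));
  return {
    width: Math.floor(width * scale),
    height: Math.floor(height * scale),
    scale, // callers can use this to scale their drawing transforms
  };
}

console.log(clampCanvasSize(4000, 3000));   // → unchanged, scale 1
console.log(clampCanvasSize(20000, 10000)); // → 10000 x 5000, scale 0.5
```

The returned scale factor can be applied via context.scale() so existing drawing code keeps working at the reduced resolution.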

5. Program development

Repeating the entire code logic would obviously make the article too long, so in this chapter I will focus on some code details of rendering graphic layers.

5.1 Rendering Process

Taking a graphic layer as an example, the following image shows the steps for rendering a static layer. The general idea follows the order of path, fill, stroke, shadow, and blur. Two points in the process need special attention.

  • When rendering the stroke of a layer, first check whether there is a mask layer: the visible area of the stroke layer is affected by the mask, so it needs to be handled in advance.
  • The background blur operation depends on the entire background artboard and affects the rendering of the fill layer, so it also needs to be handled in advance.

The key code for rendering a graphic layer is shown below; it belongs to the rendering service class.

  • The paintShapeCanvas() method is responsible for integration: following the rendering idea above, it outputs the final rendered graphic-layer artboard. Some logical details depend on the chosen technology, so there are local adjustments; one of them is the order in which the artboards are rendered.
  • The paintFillCanvas() method is responsible for producing the fill artboard. It dispatches to different rendering methods according to the fill type, and after obtaining the rendered artboard it clips the visible range according to whether there is a mask layer. The first part of the rendering process may differ slightly, but this does not affect the final result.
class RenderService {
  constructor() {}

  // ... otherMethod

  /**
   * Permission level: private
   * Description: draw the graphic-layer artboard
   * Parameters: width: artboard width
   *             height: artboard height
   *             layer: layer information
   *             artboard: background artboard
   * Returns: Canvas object
   */
  paintShapeCanvas(width, height, layer, artboard) {
    // Create a new canvas element as a blank artboard for the rendered output
    let canvas = document.createElement(`canvas`);
    let context = canvas.getContext(`2d`);

    canvas.width = width;
    canvas.height = height;

    // Skip if the layer is invisible, fully transparent, or has neither fill nor stroke
    if (!layer.visible || 0 === layer.globalAlpha || (!layer.isFill && !layer.isStroke)) {
      return canvas;
    }

    // Get the fill artboard
    layer.isFill && (layer.fillCanvas = this.paintFillCanvas(width, height, layer));

    // Get the artboard with the background blurred
    layer.isBackgroundBlur && (layer.fillCanvas = this.renderBgBlurCanvas(artboard, layer));

    // Get the stroke artboard
    layer.isStroke && (layer.strokeCanvas = this.paintStrokeCanvas(width, height, layer));

    // Render the fill effect
    layer.isFill && context.drawImage(layer.fillCanvas, 0, 0);

    // Render the stroke effect
    layer.isStroke && context.drawImage(layer.strokeCanvas, 0, 0);

    // Render the shadow effect
    layer.isShadow && this.renderShadow(canvas, layer);

    // Render the object blur effect
    !layer.isBackgroundBlur && this.renderObjBlur(canvas, layer);

    return canvas;
  }

  /**
   * Permission level: private
   * Description: draw the fill-effect artboard
   * Parameters: width: artboard width
   *             height: artboard height
   *             layer: layer information
   * Returns: Canvas object
   */
  paintFillCanvas(width, height, layer) {
    let canvas = document.createElement(`canvas`);
    let context = canvas.getContext(`2d`);

    canvas.width = width;
    canvas.height = height;

    context.save();

    // Render the fill style according to the fill type
    switch (layer.style.fill.type) {
      // Solid color fill
      case `solid`:
        this.renderFillSolid(canvas, layer);
        break;
      // Pattern fill
      case `pattern`:
        this.renderFillPattern(canvas, layer);
        break;
      // Gradient fill
      case `gradient`:
        this.renderFillGradient(canvas, layer);
        break;
      default:
        break;
    }

    // If there is a mask, only show the content inside the mask's visible area
    if (layer.isClip) {
      let maskCanvas = this.paintMaskCanvas(width, height, layer);

      context.restore();
      context.globalCompositeOperation = `destination-in`;
      context.drawImage(maskCanvas, 0, 0);
    }

    context.restore();

    return canvas;
  }
}

5.2 Transformation of rendering data structure

The layer relationships in the original data structure form a tree, in which some attributes of a child node, such as its transform and opacity, are affected by its parent. During rendering, however, we would rather focus on the attributes of the current layer itself and perform the corresponding rendering operation directly. It is therefore necessary to transform the tree structure into a flatter queue structure. The code below shows how to convert the raw data structure into the rendering data structure using depth-first search.

class ProcessService {
  constructor() {}

  // ... otherMethod

  processLayerGroup(layerGroup) {
    let layerQueue = [];

    if (!layerGroup || !layerGroup.length) {
      return layerQueue;
    }

    let stack = [];

    // Stack for layer groups
    let groupStack = [];

    // For each group on groupStack, the number of its children not yet traversed
    let groupChildrenCountStack = [];

    // Put the first-tier nodes on the stack
    for (let layer of layerGroup) {
      stack.push(layer);
    }

    // Depth-first search (children are prepended, so shift() takes the stack top)
    while (stack.length) {

      let layer = stack.shift();

      layerQueue.push(layer);

      let parentLayer;

      groupStack.length - 1 >= 0 && (parentLayer = groupStack[groupStack.length - 1]);

      // ... handle layer-related information, such as isFill, isStroke, isClip and other attributes

      // Apply the enclosing group's transform to the layer
      groupStack.length > 0 && this.iterateGroupTransform(layer, groupStack[groupStack.length - 1]);

      // If the node has children, push them onto the top of the stack
      if (`group` === layer.type &&
        layer.group.children &&
        layer.group.children.length) {

        stack = layer.group.children.concat(stack);

        // Record how many sub-layers the layer group has
        groupChildrenCountStack.push(layer.group.children.length);
        groupStack.push(layer);

      } else {
        layer.style.opacity = (layer.style.opacity === undefined) ? 1 : layer.style.opacity;

        // Calculate the interaction area of the current layer
        this.calculateLayerOutterRect(layer);

        // Merge it into the interaction area of its parent layer groups
        this.iterateGroupOutterRect(layer, groupStack, groupChildrenCountStack);
      }
    }

    return layerQueue;
  }
}

5.3 Tracing the path of each shape in a graphic layer

Rectangle

context.rect(0, 0, layer.shape.width, layer.shape.height);

Rounded rectangle

// r: Array holding the corner radii of the rectangle's four corners
context.beginPath();

context.moveTo(layer.shape.r[0], 0);

// Math.min() clamps each radius so it never exceeds the width or height
context.arcTo(width, 0, width, height, Math.min(layer.shape.r[1], width, height));
context.arcTo(width, height, 0, height, Math.min(layer.shape.r[2], width, height));
context.arcTo(0, height, 0, 0, Math.min(layer.shape.r[3], width, height));
context.arcTo(0, 0, width, 0, Math.min(layer.shape.r[0], width, height));

context.closePath();

Circle

context.arc(layer.shape.cx, layer.shape.cy, layer.shape.r, 0, 2 * Math.PI);

Ellipse

context.ellipse(layer.shape.cx, layer.shape.cy, layer.shape.rx, layer.shape.ry, 0, 0, 2 * Math.PI);

Line segment

context.beginPath();

context.moveTo(startX, startY);
context.lineTo(endX, endY);

context.closePath();

Bezier curves (SVG/Composite Graphics)

let pathList = layer.shape.pathList;

context.beginPath();	

for (let path of pathList) {

    switch (path.action) {
        case `M`:
            context.moveTo(path.points[0], path.points[1]);
            break;
        case `C`:
            context.bezierCurveTo(path.points[0], path.points[1], path.points[2], path.points[3], path.points[4], path.points[5]);
            break;
        case `L`:
            context.lineTo(path.points[0], path.points[1]);
            break;
        default:
            break;
    }
}

context.closePath();

5.4 Fill the border of the graphic layer

Rendering requirements

For a shape's border, besides the width value, designs also call for three rendering modes: internal stroke, external stroke, and center stroke. But the canvas lineWidth property only extends from the path's center toward both sides, i.e. the default is a center stroke. It is therefore best to have a common routine that renders the different effects based on a configuration item alone.

Implementation approach

Here are two different implementations:

  1. Exploit lineWidth's nature: keep the border width equal to the configured value, but offset the shape's top-left coordinates and adjust its width and height before calling stroke();
  2. Keep the shape at its original position and size and adjust the border width instead: for internal or external strokes, set the border width to 2x, then erase the excess half.

Scheme 1 may be more convenient for simple shapes. However, with complex shapes such as vector graphics, scheme 1 runs into all kinds of trouble in the subsequent processing. Although the effect might still be achieved by scaling along the lines above, that treats the symptoms rather than the root cause. So we expand on scheme 2 here.

The first ingredient is globalCompositeOperation (the compositing, or blending-mode, operation). When drawing multiple layers, different modes achieve different effects. Here we use two of its modes.

  • destination-in: the existing canvas content is kept only where it overlaps the new shape; everything else becomes transparent.

  • destination-out: the existing canvas content is kept only where it does not overlap the new shape.
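For intuition, both modes reduce to how they treat the destination's alpha channel: destination-in keeps destination pixels in proportion to the source's alpha, destination-out in proportion to its inverse. A per-pixel sketch of the standard Porter-Duff formulas (alpha only; color handling omitted):

```javascript
// Per-pixel alpha result of the two compositing modes used here
// (alpha values in [0, 1]).
const destinationIn = (destAlpha, srcAlpha) => destAlpha * srcAlpha;
const destinationOut = (destAlpha, srcAlpha) => destAlpha * (1 - srcAlpha);

// Where the new (source) shape is opaque, destination-in keeps the
// existing content while destination-out erases it:
console.log(destinationIn(1, 1));  // → 1 (kept)
console.log(destinationOut(1, 1)); // → 0 (erased)

// Where the source is transparent, the roles swap:
console.log(destinationIn(1, 0));  // → 0 (cleared)
console.log(destinationOut(1, 0)); // → 1 (untouched)
```

This is why filling the original shape with destination-out erases the inner half of a doubled-width stroke, and destination-in keeps only the inner half.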

Here’s how to implement different stroke modes:

  • Center stroke: use the lineWidth property as normal;
  • External stroke: set lineWidth to 2x and stroke, then set globalCompositeOperation to destination-out and fill the original shape, erasing the inner half so only the outer border remains as a border layer; restore globalCompositeOperation and draw the shape body again if needed;
  • Internal stroke: same as the external stroke, but with globalCompositeOperation set to destination-in.

The demo code

html, body {
	padding: 0;
	margin: 0;
	background: #9e9e9e;
}

.canvas {
	width: 100vw;
	height: 100vh;
	position: absolute;
	top: 0;
	left: 0;
}
<canvas id="outSideCanvas" class="canvas"></canvas>
<canvas id="centerCanvas" class="canvas"></canvas>
<canvas id="insideCanvas" class="canvas"></canvas>
let canvasList = document.querySelectorAll(`.canvas`);

let canvasWidth = document.documentElement.clientWidth * devicePixelRatio;
let canvasHeight = document.documentElement.clientHeight * devicePixelRatio;

for (let canvas of canvasList) {
    canvas.width = canvasWidth;
    canvas.height = canvasHeight;
}

// Encapsulate rendering of the configured stroke type
let renderStrokeCanvas = (canvas, options) => {
    let context = canvas.getContext(`2d`);

    context.save();

    // For a non-center stroke, set the border width to twice the configured value
    context.lineWidth = (`center` === options.strokeAlign) ? options.lineWidth : options.lineWidth * 2;
    context.strokeStyle = `white`;
    context.rect(options.left, options.top, options.width, options.height);
    context.stroke();

    // Set the blending-layer parameters according to the stroke mode
    if (`outside` === options.strokeAlign) {
        context.globalCompositeOperation = `destination-out`;
    } else if (`inside` === options.strokeAlign) {
        context.globalCompositeOperation = `destination-in`;
    }

    // Fill the original shape to erase (or keep) half of the doubled stroke
    if (`center` !== options.strokeAlign) {
        context.rect(options.left, options.top, options.width, options.height);
        context.fill();
    }

    // Draw a thin red rectangle at the original bounds for comparison
    context.restore();
    context.strokeStyle = `red`;
    context.lineWidth = 2;
    context.strokeRect(options.left, options.top, options.width, options.height);
}

// A rectangle is used by default for the demo
let options = {
    width: canvasWidth / 4,
    height: canvasWidth / 4,
    strokeAlign: `outside`, // Stroke type - inside: internal, outside: external, center: center
    left: canvasWidth * (3 / 8),
    top: canvasHeight / 8,
    lineWidth: 30 // border width
}

// Canvas 0: outside stroke, canvas 1: center stroke, canvas 2: inside stroke
renderStrokeCanvas(canvasList[0], options);

options.strokeAlign = `center`;
options.top += (options.height + options.lineWidth + 150);
renderStrokeCanvas(canvasList[1], options);

options.strokeAlign = `inside`;
options.top += (options.height + options.lineWidth + 150);
renderStrokeCanvas(canvasList[2], options);

The final running effect is shown in the figure below:

5.5 Application of blur effect

Rendering requirements

Blur requirements generally come in two kinds: object blur, which blurs the layer itself, and background blur, which, in a multi-layer hierarchy, blurs the layers beneath the current one.

Implementation approach

For object blur I have found two approaches. First, where the Context2D filter attribute is supported, the blur can be achieved directly by setting the filter's blur radius; however, this attribute has compatibility problems, so use it according to the scenario. The second is StackBlur, a Gaussian-blur library I found on GitHub; the link is listed below for those interested: Github.com/flozz/Stack… ;

Background blur builds on object blur. First, find the layers below the background-blur layer and blur each of them according to the configuration items. Then set the globalCompositeOperation property according to the positional relationships so that each layer only shows the area covered by the background-blur layer. The following is pseudo-code-like.

/**
 * canvas: the rendered canvas used as the source for the background blur
 * layer: layer configuration parameters
 */
renderBgBlurCanvas(canvas, layer) {
    let context = canvas.getContext(`2d`);

    // Create an artboard for the blurred background
    let blurCanvas = document.createElement(`canvas`);
    let blurContext = blurCanvas.getContext(`2d`);

    // Initialize the artboard width and height
    blurCanvas.width = canvas.width;
    blurCanvas.height = canvas.height;

    // First draw the background layer onto the blur artboard
    blurContext.drawImage(canvas, 0, 0);

    // Blur the artboard using the interface provided by the blur library
    let stackBlur = new StackBlur();
    stackBlur.canvasRGB(blurCanvas, 0, 0,
                        canvas.width, canvas.height, layer.blurFilter.blurAmount * 2.2);

    // Based on the background-blur layer's information, get the mask artboard
    let maskCanvas = this.paintMaskCanvas(canvas.width, canvas.height, layer);

    // Use the mask artboard as a mask layer so only the blurred area shows
    blurContext.save();
    blurContext.globalCompositeOperation = `destination-in`;
    blurContext.drawImage(maskCanvas, 0, 0);
    blurContext.restore();

    // ... then stroke the border of the background-blur layer itself
}

6. Conclusion

It took a month and a half of back-and-forth revision. The initial draft covered only parts of sections 3 and 5 and was aimed at readers with Canvas experience. But after I showed a colleague the first draft, he said he couldn't follow the whole story and that it would be better to lower the bar a little, so I repositioned it.

  • For those completely new to Canvas, chapter 4 helps you read quickly and form an impression of Canvas.
  • For those with some Canvas experience, section 5 shows some rarely used Canvas techniques.

The core of this article is to take rich-text rendering as an example and show my thinking during the software design stage, for communication and discussion. The following articles will likewise mainly present my ideas in the software design stage; interested friends are welcome to comment with their own ideas and experience.

PS: 50 cents bet there’s no comment _(:з