ENOW Big Front End

Company website: CVTE (Guangzhou Shiyuan Co., Ltd.)

Team: the ENOW team of the CVTE Future Education Software Platform Center

The author:

First Look at WebGL

There are four ways to draw graphics on the front end: HTML+CSS, SVG, Canvas2d, and WebGL. WebGL is the least used of the four, but it nonetheless has an irreplaceable place. Let's get to know WebGL by starting with these four drawing methods.

Four ways to draw front-end graphics

HTML+CSS

HTML+CSS is the most commonly used way to draw graphics. It is simple to use, its performance is generally good, and DOM events can be bound to the graphics, but it is not so convenient for complex, irregular shapes.



SVG

SVG is an image format based on XML syntax, so it supports HTML-style event binding. SVG also provides rich encapsulation, making it easy to draw rectangles, circles, Bezier curves, and more, and with a variety of built-in filters it makes complex, irregular shapes much easier to achieve than HTML+CSS. Meanwhile, SVG can be scaled at will without distortion. Of course, SVG is still based on DOM nodes, so rendering performance degrades as the number of nodes grows.



Canvas2d

Canvas2d draws dynamically through JavaScript commands, so it offers very flexible logic handling, such as saving and restoring the state of the canvas. At the same time, Canvas supports pixel-level manipulation, which enables fancier effects such as image filters.
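As a minimal sketch of that pixel-level idea: in the browser you would read pixels with `ctx.getImageData()` and write them back with `ctx.putImageData()`; here a plain RGBA array stands in for the image data so the filter logic is visible on its own. The function name and the luminance weights are illustrative, not tied to any particular library.

```javascript
// Grayscale filter over a flat RGBA pixel array (4 values per pixel).
function grayscale(rgba) {
  const out = rgba.slice();
  for (let i = 0; i < out.length; i += 4) {
    // Luminance-weighted average, a common grayscale formula
    const v = Math.round(0.299 * out[i] + 0.587 * out[i + 1] + 0.114 * out[i + 2]);
    out[i] = out[i + 1] = out[i + 2] = v; // alpha (i + 3) is left unchanged
  }
  return out;
}

// One red pixel and one white pixel
console.log(grayscale([255, 0, 0, 255, 255, 255, 255, 255]));
// red becomes [76, 76, 76, 255]; white stays [255, 255, 255, 255]
```

In a real page the same loop would run over `imageData.data` before drawing it back to the canvas.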



Whereas SVG requires many HTML tags, Canvas2d needs only a single canvas tag, so it can perform better when drawing the same complex graphics. As shown in the figure below, Canvas2d's rendering time grows as the drawn area increases; however, as the number of elements in the scene increases, Canvas2d's rendering time grows much more slowly than SVG's.



WebGL

The three drawing methods above all handle 2D graphics. For 3D, the earliest option was browser plug-ins such as Flash or Silverlight. The emergence of the WebGL standard lets us draw 3D graphics in a more unified, standard way. Simply put, WebGL is OpenGL ES exposed to the JavaScript language, and it draws graphics inside the HTML canvas element.



Beyond drawing 3D graphics, WebGL can call the underlying graphics interface directly, gaining hardware acceleration. Compared with Canvas2d, WebGL also performs better for 2D drawing. Below, the same algorithm is used to achieve an image-dissolve effect: Figure 1 shows the Canvas2d version, Figure 2 the WebGL version. Watching the FPS counter, the WebGL version is noticeably smoother.



With HTML/CSS, SVG, and Canvas, we do not need to care about the underlying mechanism; we can draw graphics directly with the tags and APIs they provide. For example, we can output graphics as long as we know how to create SVG elements or execute the corresponding Canvas drawing commands. WebGL, however, can only draw basic primitives: points, lines, and triangles. To accomplish anything more complex, you must write the appropriate code yourself and build the result out of points, lines, and triangles. So to draw with WebGL, we have to go into the details; in other words, we have to deal with memory and the GPU, and really control every detail of the graphics output.
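To make "building everything out of triangles" concrete, here is a sketch of how a circle, which WebGL cannot draw directly, can be approximated as a triangle fan: one center vertex plus points around the circumference. The function name and segment count are illustrative assumptions, not part of any WebGL API.

```javascript
// Generate 2D vertex data for a circle approximated as a triangle fan.
function circleVertices(cx, cy, radius, segments) {
  const vertices = [cx, cy]; // the fan's center vertex
  for (let i = 0; i <= segments; i++) {
    const angle = (i / segments) * 2 * Math.PI;
    // One point on the circumference per step; the last repeats the first
    vertices.push(cx + radius * Math.cos(angle), cy + radius * Math.sin(angle));
  }
  return vertices; // suitable for gl.drawArrays(gl.TRIANGLE_FAN, ...)
}

// 32 segments -> 1 center + 33 rim points = 34 vertices = 68 floats
console.log(circleVertices(0, 0, 0.5, 32).length); // 68
```

The more segments, the smoother the circle looks; the GPU still only ever sees triangles.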

Graphics Basics

Graphics system

The graphics system consists of six main elements: input devices, the CPU, the GPU, memory, the frame buffer, and output devices.



The CPU is mainly responsible for loading and preparing scene data, setting render state such as materials, textures, and light-source attributes, and issuing commands to the GPU to render basic primitives.

The GPU receives the CPU's drawing commands and, through vertex processing and fragment processing, produces a pixel array that is written to the frame buffer.

The frame buffer holds the pixels; the number of pixels in the frame buffer is called the resolution. The depth of the frame buffer is the number of bits per pixel, which determines how many colors the system can represent. For example, an 8-bit frame buffer can represent 256 (2^8) colors.
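The relationship between depth and representable colors is just a power of two, which a one-liner can confirm:

```javascript
// Number of colors representable by a frame buffer of the given depth
const colors = (bitsPerPixel) => 2 ** bitsPerPixel;

console.log(colors(8));  // 256
console.log(colors(24)); // 16777216 ("true color": 8 bits each for R, G, B)
```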

The rendering pipeline

The process by which graphics are rendered is called the rendering pipeline. It has two main jobs: transforming the 3D coordinates of objects into 2D screen-space coordinates, and coloring every pixel on the screen. The general flow of the pipeline is shown in the figure below: vertex processing, primitive assembly and clipping, rasterization, and fragment processing.



Vertex processing: this stage mainly performs coordinate transformations on vertices, converting the object from the coordinate system in which it is defined into the camera coordinate system.



Primitive assembly and clipping: primitive assembly combines vertices into primitives (points/lines/triangles); clipping discards geometry outside the viewport and is performed per primitive rather than per vertex.





Rasterization: the primitives produced by the clipping stage must be processed further to generate pixels in the frame buffer. Rasterization is the process of converting geometric primitives into a two-dimensional image, and it involves two tasks: first, determining which integer grid cells in window coordinates are covered by each primitive; second, assigning a color value and a depth value to each of those cells. The rasterization stage outputs fragments.
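The coverage test can be sketched in a few lines: for each pixel center in the triangle's bounding box, edge functions (signed areas) decide whether it lies inside. Real GPUs do this massively in parallel with many refinements; this toy version only shows the idea, and all names are illustrative.

```javascript
// Toy rasterizer: collect the pixels of a width x height grid whose
// centers fall inside (or on the edge of) a 2D triangle.
function rasterize(tri, width, height) {
  const [[x0, y0], [x1, y1], [x2, y2]] = tri;
  // Signed area of (a, b, p): positive on one side of edge ab, negative on the other
  const edge = (ax, ay, bx, by, px, py) =>
    (bx - ax) * (py - ay) - (by - ay) * (px - ax);
  const fragments = [];
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const px = x + 0.5, py = y + 0.5; // sample at the pixel center
      const w0 = edge(x0, y0, x1, y1, px, py);
      const w1 = edge(x1, y1, x2, y2, px, py);
      const w2 = edge(x2, y2, x0, y0, px, py);
      // Inside if all three edge functions agree in sign
      if ((w0 >= 0 && w1 >= 0 && w2 >= 0) || (w0 <= 0 && w1 <= 0 && w2 <= 0)) {
        fragments.push([x, y]);
      }
    }
  }
  return fragments;
}

// A right triangle covering half of a 4x4 grid
console.log(rasterize([[0, 0], [4, 0], [0, 4]], 4, 4).length); // 10
```

Each surviving `[x, y]` is a fragment, which the next stage then colors and depth-tests.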



Fragment processing: this stage uses the fragment shader to compute each fragment's final color and depth, and uses the depth test and stencil test to determine whether the fragment is visible. If a fragment passes the tests, it is written to the frame buffer.
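The depth test itself is a simple comparison: an incoming fragment is kept only if it is closer than what the depth buffer already holds for that pixel. A minimal sketch with illustrative buffer and function names (real depth buffers live on the GPU):

```javascript
// Keep a fragment only if it is closer (smaller depth) than the stored value.
function processFragment(depthBuffer, colorBuffer, index, depth, color) {
  if (depth < depthBuffer[index]) {
    depthBuffer[index] = depth; // remember the closest depth so far
    colorBuffer[index] = color; // this fragment wins the pixel
    return true;
  }
  return false; // hidden behind an earlier fragment: discarded
}

// One pixel, two overlapping fragments drawn in order
const depthBuffer = [Infinity];
const colorBuffer = ["black"];
processFragment(depthBuffer, colorBuffer, 0, 0.8, "red");  // kept
processFragment(depthBuffer, colorBuffer, 0, 0.9, "blue"); // discarded
console.log(colorBuffer[0]); // "red"
```

This is why draw order does not matter for opaque geometry when the depth test is enabled.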

WebGL program

A complete WebGL program includes at least HTML, JavaScript, and OpenGL shader code (GLSL). The HTML mainly provides a canvas; the JavaScript obtains the WebGL context, processes the coordinates, colors, and other vertex information of the model, and passes the processed data to the GPU; GLSL is a C-like shader programming language with two main parts, the vertex shader and the fragment shader. Let's walk through a WebGL program using an example that draws a triangle.

  1. Declare a canvas
<canvas id="webgl" width="500" height="300" style="border: 1px solid;"></canvas>
  2. Get the WebGL context
// As with canvas2D, get the WebGL context first
const canvas = document.getElementById("webgl");
const gl = canvas.getContext("webgl");
  3. Write the vertex shader and fragment shader code
// Shader source code
const vertexShaderSource = `
  // Declare an attribute variable a
  attribute vec3 a;
  void main() {
    gl_Position = vec4(a, 1.0);
  }
`;

const fragmentShaderSource = `
  void main() {
    // Set the fragment color (RGBA)
    gl_FragColor = vec4(0.1, 0.7, 0.3, 1.0);
  }
`;
  4. Initialize and use the shaders
 // Initialize the shader method
 function initShader(gl, vertexSource, fragmentSource) {
   const vertexShader = gl.createShader(gl.VERTEX_SHADER);
   const fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
   // Attach the shader source code to the shader
   gl.shaderSource(vertexShader, vertexSource);
   gl.shaderSource(fragmentShader, fragmentSource);

  // Compile the shader
  gl.compileShader(vertexShader);
  gl.compileShader(fragmentShader);

  // Create a program object
  const program = gl.createProgram();
  // Appends the compiled shader to the program object
  gl.attachShader(program, vertexShader);
  gl.attachShader(program, fragmentShader);
  // Link program objects
  gl.linkProgram(program);
  // The WebGL engine uses this program object
  gl.useProgram(program);

  return program;
}

 const program = initShader(gl, vertexShaderSource, fragmentShaderSource);
  5. Pass vertex data to the vertex shader
function sendDataToShader(gl, data) {
  // Write the vertex data into a buffer and pass it to the vertex shader
  const vertexBuffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(data), gl.STATIC_DRAW);
  const vertexAttribLocation = gl.getAttribLocation(program, "a");
  gl.vertexAttribPointer(vertexAttribLocation, 3, gl.FLOAT, false, 0, 0);
  // Tell WebGL to read the buffered data as the attribute's vertex array
  gl.enableVertexAttribArray(vertexAttribLocation);
}

// Define the vertex data: the three vertices of the triangle, with the
// center of the canvas as the origin and z fixed at 0
const data = [0.0, 0.5, 0.0, -0.5, -0.5, 0.0, 0.5, -0.5, 0.0];
sendDataToShader(gl, data);
  6. Draw the graphic
// Draw using the buffered vertex array
gl.drawArrays(gl.TRIANGLES, 0, 3);

The resulting triangle looks like this:



Demo address: codesandbox.io/s/the-draw-tria…



By now you should have a basic understanding of WebGL. In the next article I will explain how to write your own shaders. Stay tuned!



References:

  1. "Learn Visualization with Yueying", Geek Time column: time.geekbang.org/column/intr…
  2. Interactive Computer Graphics: A Top-Down Approach with WebGL (7th Edition)
  3. OpenGL Programming Guide (8th Edition)
  4. "Introduction and Practice of WebGL", juejin.cn/book/684473…