How does a graphics system draw?

A general computer graphics system consists of six main parts: input devices, a central processing unit (CPU), a graphics processing unit (GPU), memory, a frame buffer, and output devices. Although I have included a diagram of the drawing process below, a few more words first about the role these devices play in visualization.

Raster: Almost all modern graphics systems draw graphics based on a raster, an array of pixels that together form an image.

Pixel: A pixel corresponds to one point on an image and usually holds information such as the color at that specific location.

Frame Buffer: During drawing, pixel information is stored in the frame buffer, a block of memory.

Central Processing Unit (CPU): responsible for logical computation.

Graphics Processing Unit (GPU): responsible for graphics computation.

First, the CPU processes the data into geometric information with a specific structure. This information is then sent to the GPU, where two steps generate raster information. The raster information is output to the frame buffer and eventually rendered to the screen.

This drawing process is a general process for any graphics system in modern computers.

It does two main things:

  • First, it combines the given data with the scene elements of the drawing (such as the camera, light sources, occluding objects, etc.) and transforms the geometry into 2D coordinates in screen space.
  • Second, it colors each pixel in screen space and outputs the final graphics to the display device.

The whole process runs step by step: the output of each step is the input of the next, which is why we also call this process the render pipeline.
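The first of those two steps ends with a viewport mapping from normalized device coordinates into screen pixels. A minimal sketch of that mapping in JavaScript (the function name is my own, not part of any API):

```javascript
// Map normalized device coordinates (x, y in [-1, 1]) to pixel
// coordinates on a canvas of the given size. Note that NDC y points
// up while screen y points down, so y is flipped.
function ndcToScreen(x, y, width, height) {
  return {
    x: (x + 1) / 2 * width,
    y: (1 - y) / 2 * height,
  };
}
```

The GPU performs this mapping for us automatically; the sketch only shows the arithmetic involved.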

Why use the GPU instead of the CPU for image rendering?

Because rendering an image is essentially a computation of the position of each small block of pixels, processing a single image means handling tens of thousands of pixels. CPUs are capable of parallelism, but because of their pipeline design, it is expensive for a CPU to run that many threads in parallel. A GPU is different: each of its cores is less powerful than a CPU core, but there are far more of them, so image processing is generally run on the GPU.

How WebGL draws

The WebGL API provided by the browser is a JavaScript binding of OpenGL ES, which gives developers the ability to manipulate the GPU. However, this power makes working with WebGL considerably more complicated. The steps are:

  1. Create a WebGL context
  2. Create a WebGL program (WebGLProgram)
  3. Store the data in a buffer
  4. Read the buffer data into the GPU
  5. The GPU executes the WebGL program and outputs the result

Let’s look at the steps in detail, using drawing a triangle as an example.

1. Create a WebGL context

const canvas = document.querySelector('canvas'); 
const gl = canvas.getContext('webgl');

2. Create a WebGL program

The WebGL program here is a WebGLProgram object: the program containing the shaders that the GPU will ultimately run, not the JavaScript program we are writing to draw the graphics.

To create this WebGL program, we need to write two shaders. Shaders are snippets of code written in the GLSL programming language.

Before we talk about shaders, we need to look at the basic concepts of vertices and primitives.

In drawing, WebGL describes geometric information in terms of vertices and primitives. Vertices are the corner points of a geometric figure: a triangle has three, a quadrilateral four. Primitives are the graphic units that WebGL can handle directly, determined by WebGL’s drawing mode: points, lines, triangles, and so on.

Vertices and primitives are essential to the drawing process. Therefore, WebGL generally uses two shaders to draw a graphic: one called the vertex shader, which processes the vertex information of the graphic, and the other called the fragment shader, which processes the pixel information.

Vertex shaders and fragment shaders are both written in GLSL and are basically used as a pair. Inside a shader, you must define a main function to hold your own processing. Furthermore, when passing data from the JavaScript side to a shader, you need to use variables declared with special qualifiers.

We use the attribute qualifier to declare variables when we want to pass per-vertex information to a shader. We use the uniform qualifier to declare variables when we want to pass information that is the same for all vertices.

In addition, when passing data from the vertex shader to the fragment shader, the variable is declared with the varying qualifier.

The vertex shader’s built-in gl_Position variable must be assigned; the fragment shader’s built-in gl_FragColor variable does not strictly have to be, but generally is.

Vertex shader

We can think of a vertex shader as GPU program code that processes vertices. It can modify vertex information (such as vertex coordinates, normal direction, material, etc.) to change the shape or size of the graphic we draw.

Once vertex processing is complete, WebGL calculates which pixels need coloring based on the vertices and the primitives specified by the drawing mode, and then executes the fragment shader program on them. Put simply, it colors the pixels inside the specified primitives.

Fragment shader

The process by which WebGL derives the pixels to color from the vertex shader’s output and the primitives, and then runs the fragment shader code on them, is the raster-information-generating step mentioned above, also called rasterization. The job of the fragment shader, then, is to process the pixel information after rasterization.

For example, if we set the primitive to line segments, the fragment shader will process the pixels on the segments between the vertices, producing a hollow shape. If we set the primitive to triangles, the fragment shader will process all the pixels inside the triangle, producing a solid shape.

One thing to note here is that since primitives are the graphic units WebGL can handle directly, anything that is not a primitive must eventually be converted to primitives before WebGL can process it. For example, if we want to draw a solid quadrilateral, we split it into two triangles and hand them to WebGL to draw separately.
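As a sketch of that splitting idea, here is a hypothetical helper (the function name and vertex ordering are my own, not part of the WebGL API) that expands a quadrilateral’s four corners into the six vertices of two triangles:

```javascript
// Split a quad given as four corner points (in order a, b, c, d)
// into two triangles: (a, b, c) and (a, c, d).
// Each point is [x, y]; the result is a flat array ready for a buffer.
function quadToTriangles(a, b, c, d) {
  return new Float32Array([
    ...a, ...b, ...c, // first triangle
    ...a, ...c, ...d, // second triangle
  ]);
}
```

The resulting flat array can then be drawn with gl.TRIANGLES, exactly as the triangle example below does.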

At the same time, the fragment shader’s coloring runs in parallel: no matter how many pixels there are, they can all be processed simultaneously.

So let’s draw a triangle together

First, declare the two shaders:

// In GLSL, attribute declares a per-vertex variable; vec2 is the variable's
// type (a two-dimensional vector); position is the variable name. Later we
// will bind the buffer data to the vertex shader's position variable.
const vertex = `
  attribute vec2 position;

  void main() {
    gl_PointSize = 1.0;
    gl_Position = vec4(position, 1.0, 1.0);
  }
`;

const fragment = `
  precision mediump float;

  void main() {
    // Define the pixel color used when drawing in the fragment shader
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
  }
`;

Then create shader objects from the source you just wrote and compile them:

const vertexShader = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vertexShader, vertex);
gl.compileShader(vertexShader);


const fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fragmentShader, fragment);
gl.compileShader(fragmentShader);

Then create the WebGLProgram object and associate the two shaders with it: attach the vertexShader and the fragmentShader, then link the program.

const program = gl.createProgram();
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);
gl.linkProgram(program);

Finally, enable the WebGL program:

gl.useProgram(program);

At this point, the WebGL program is fully configured. Next, we simply store the triangle’s data in a buffer and feed it to the GPU.

WebGL uses a right-handed coordinate system and is 3D by default; since we only need a flat triangle, we can simply set the z axis to zero.

By default, the center of the canvas is the origin (0, 0), with x pointing right and y pointing up, and both axes ranging from -1 to 1.
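In WebGL clip space the origin sits at the center of the canvas with y pointing up, so converting a canvas pixel coordinate (origin top left, y down) into a clip coordinate takes a small scale and flip. A sketch (the function name is my own):

```javascript
// Convert a canvas pixel coordinate (origin top-left, y down)
// to a WebGL clip coordinate (origin at center, y up, range [-1, 1]).
function pixelToClip(px, py, width, height) {
  return [
    (px / width) * 2 - 1,
    1 - (py / height) * 2,
  ];
}
```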

We define the vertex data as a Float32Array:

const points = new Float32Array([-1, -1, 0, 1, 1, -1]);

You then need to write the defined data into the WebGL buffer. This process can be summarized in three simple steps: create a buffer object, bind it as the current operation target, and write the data into it. The createBuffer, bindBuffer, and bufferData methods implement these three steps:

const bufferId = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, bufferId);
gl.bufferData(gl.ARRAY_BUFFER, points, gl.STATIC_DRAW);

We have now written the data into the buffer, but our shader cannot read it yet; we still need to bind the data to the position variable in the vertex shader.

const vPosition = gl.getAttribLocation(program, 'position'); // Get the location of the position variable in the vertex shader
gl.vertexAttribPointer(vPosition, 2, gl.FLOAT, false, 0, 0); // Set the size and type of the variable
gl.enableVertexAttribArray(vPosition); // Activate the variable

This way, the values in the points array we defined can be read in the vertex shader through the position variable.

The last step! Draw it!

After we pass the data into the buffer, the GPU can read the bound data into the shader variables. Now we just need to call the draw command to execute the shader program and complete the drawing.

We first call gl.clear to clear the current canvas, then call gl.drawArrays, passing in the draw mode. Using gl.TRIANGLES as the primitive, WebGL passes the corresponding buffer array to the vertex shader and draws using the given vertex offset and vertex count. The code is as follows:

gl.clear(gl.COLOR_BUFFER_BIT);
gl.drawArrays(gl.TRIANGLES, 0, points.length / 2);

gl.drawArrays(gl.LINE_LOOP, 0, points.length / 2); // This draws a hollow triangle instead

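Note that the third argument to gl.drawArrays is the number of vertices, not the number of floats, hence the division by 2: each vertex here uses two components. A tiny helper (the name is my own) makes the relationship explicit:

```javascript
// Number of vertices in a flat coordinate array, given how many
// components (floats) each vertex uses.
function vertexCount(points, componentsPerVertex) {
  return points.length / componentsPerVertex;
}
```

For our triangle, vertexCount(points, 2) gives 3, matching the three vertices in the Float32Array above.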

A note about shaders

Whether the triangle is large or small, whether it covers dozens of pixels or millions, the GPU processes every pixel at the same time. That is, the shader program is executed on the GPU simultaneously, once for every pixel in the graphic.

Keep in mind that WebGL computes in parallel.

1. What if we need to scale the triangle?

Normally you would reset the coordinates of the points, but not in WebGL, because each point is rendered independently and in parallel.

We can scale the triangle down directly by modifying the value of position in the vertex shader:

// Before:
gl_Position = vec4(position, 1.0, 1.0);
// After, scaled down by half:
gl_Position = vec4(position * 0.5, 1.0, 1.0);

2. Passing data from the vertex shader to the fragment shader

The vertex shader can also pass data to the fragment shader via a varying variable. These values are then linearly interpolated based on the position of each fragment’s pixel coordinates relative to the vertex pixel coordinates.

Put simply, this lets us create gradient effects.

attribute vec2 position;
varying vec3 color;

void main() {
  gl_PointSize = 1.0;
  color = vec3(0.5 + position * 0.5, 0.0);
  gl_Position = vec4(position * 0.5, 1.0, 1.0);
}

We modified the vertex shader to define a color variable, a three-dimensional vector. We use a small mathematical trick, vec3(0.5 + position * 0.5, 0.0), to map vertex coordinates to an RGB color value (mapping vertex coordinates to RGB colors will be described in more detail in a later lesson).

Thus, vertex [-1, -1] maps to [0, 0, 0] (black), vertex [0, 1] maps to [0.5, 1, 0] (light green), and vertex [1, -1] maps to [1, 0, 0] (red). In this way, the three vertices get three different color values.
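We can check that mapping in plain JavaScript by mirroring the same arithmetic as the GLSL expression vec3(0.5 + position * 0.5, 0.0) (the helper function is my own sketch, not part of any API):

```javascript
// Mirror of the GLSL expression vec3(0.5 + position * 0.5, 0.0):
// maps x and y from [-1, 1] into [0, 1] and uses them as red and green;
// blue stays 0.
function vertexColor([x, y]) {
  return [0.5 + x * 0.5, 0.5 + y * 0.5, 0.0];
}
```

Running it on the triangle’s three vertices reproduces the black, light-green, and red values listed above.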

We then pass color to the fragment shader via the varying variable. The code in the fragment shader is as follows:

precision mediump float;
varying vec3 color;

void main()
{
  gl_FragColor = vec4(color, 1.0);
}  

3. WebGL basic primitives

WebGL supports seven primitive types:

  • gl.POINTS (points)

  • gl.LINES (line segments)

  • gl.LINE_STRIP (line strip)

  • gl.LINE_LOOP (line loop)

  • gl.TRIANGLES (triangles, as used above)

  • gl.TRIANGLE_STRIP (triangle strip)

  • gl.TRIANGLE_FAN (triangle fan)

    What if we need to make the triangle hollow?

    To draw a hollow triangle, gl.LINE_STRIP, gl.LINES, and gl.LINE_LOOP would all work. But gl.LINES (line segments) needs six vertices ([-1, -1, 0, 1, 0, 1, 1, -1, 1, -1, -1, -1]), and gl.LINE_STRIP (line strip) still needs four ([-1, -1, 0, 1, 1, -1, -1, -1]), while gl.LINE_LOOP (line loop) needs only three ([-1, -1, 0, 1, 1, -1]). So gl.LINE_LOOP is the best choice.

Conclusion

Learning GLSL

First, you need to set up your local development environment.

Note that there is an installation package that needs to be downloaded and configured; otherwise an error will be reported when installing glsl-lint in VS Code.


Vertex buffers

Since vertex position coordinates are required, how do we define this information?

When vertices are finally drawn on the screen, they go through model, view, and projection coordinate transformations, as I have said many times. But before that coordinate information can be used, you must first define the vertices that make up the geometry; otherwise there is nothing to start from.

Where the vertices are placed is expressed as coordinates, usually called local coordinates. Local coordinates are the coordinates of the model’s vertices relative to the origin (where x, y, and z are all 0).

For example, a vertex with local coordinates (1.0, 0.0, 0.0) is 1.0 away from the origin along the x axis. Each vertex in turn has its local coordinates defined, and together they form the vertex positions.

A buffer is a common computer term for a data storage space. WebGL has frame buffers, index buffers, and various other buffers, but whatever kind it is, you can think of it simply as a space to store data. Vertex buffers are used to store vertex information; in WebGL a vertex buffer is called a VBO (Vertex Buffer Object).

The purpose of the vertex buffer is not only to store vertex positions but also to store vertex-related information beyond position.

For example, vertex normals, colors, texture coordinates, and any other vertex-related information can be stored and managed using vertex buffers. However, note that when appending extra information to the vertices, a corresponding VBO is needed for each attribute.

The overall flow of using a vertex buffer is as follows:

  • Store the various vertex information in an array

  • Generate a VBO using a WebGL method

  • Pass the information in the array to the VBO using WebGL methods

  • Combine the attribute variable in the vertex shader with the VBO
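The steps above can be sketched as a small helper (a hypothetical function of my own; only the gl.createBuffer, gl.bindBuffer, and gl.bufferData calls are real WebGL API):

```javascript
// Create a VBO from a plain array of vertex data, following the steps:
// store data in an array -> generate the VBO -> copy the data into it.
function createVBO(gl, data) {
  const vbo = gl.createBuffer();        // generate the VBO
  gl.bindBuffer(gl.ARRAY_BUFFER, vbo);  // make it the current buffer
  gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(data), gl.STATIC_DRAW);
  gl.bindBuffer(gl.ARRAY_BUFFER, null); // unbind when done
  return vbo;
}
```

The last step of the list, combining the attribute variable with the VBO, is done separately with gl.getAttribLocation, gl.vertexAttribPointer, and gl.enableVertexAttribArray, exactly as in the triangle example earlier.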
