Preface (very important)
In the last article, I shared how to get started with WebGL and draw a point on the canvas.
Admittedly, the process we see is quite cumbersome.
The point to make here is that learning WebGL is a lot like learning JavaScript: if you jump straight to jQuery, Vue, React, Angular and the like without the fundamentals, your ceiling will be very, very low.
The learning curve determines how hard it is to get started, but the foundation determines how high the ceiling is.
Learning WebGL can be troublesome at the beginning, and there are many difficult points to understand. Things like: lighting, 3D coordinates, matrix transformations, matrix manipulation (product, dot product, normalization, etc.), vectors, precision, textures, rendering pipelines…
Later on you will get to use 3D libraries and frameworks, which greatly improve development efficiency and ease. But no matter what, it all comes back to the basics.
All right, without further ado, let’s move on to today’s sharing session.
1. Render pipeline
1.1 Basic concepts
WebGL rendering depends on the rendering capability of the underlying GPU, so the WebGL rendering process follows the rendering pipeline inside the GPU.
The rendering pipeline converts a 3D model into a 2D image.
In the early days, rendering pipelines were not programmable and were called fixed-function pipelines: the details of the workflow were fixed, and only certain parameters could be adjusted.
Modern GPUs include a programmable rendering pipeline, in which some details of the rendering stages can be controlled by writing shaders in GLSL, the shading language.
1.2 Rendering Process
(Students with weak psychological tolerance can skip this section)
WebGL’s rendering process breaks down into the following stages:
- Vertex shader
- Primitive assembly
- Rasterization
- Fragment shader
- Per-fragment operations (not covered in this article)
  - Scissor test
  - Multisampling
  - Back-face culling
  - Stencil test
  - Depth test
  - Blending
  - Buffer writes
This list of terms may scare many students, but don’t let the names put you off; they will be explained in more detail in the next sections. Hopefully this article helps you understand the basic rendering process. A diagram is attached to help you understand.
2. Vertex shaders
The purpose of the vertex shader is to calculate the final vertex coordinates, as in:
A --> () => { …… } ==> A1   (the coordinate A, after a series of calculations, becomes A1)
B --> () => { …… } ==> B1   (the coordinate B, after a series of calculations, becomes B1)
…
The coordinates calculated by the vertex shader will be rendered to the final display canvas.
In addition to coordinates, the vertex shader can compute things such as color, texture coordinates, and point size.
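The "series of calculations" above can be pictured in plain JavaScript (real vertex shaders are written in GLSL, and the translation here is just an illustrative assumption):

```javascript
// A vertex "shader" sketched in plain JavaScript: the input
// coordinate A is transformed into the final coordinate A1.
// Here the "series of calculations" is simply a translation.
function vertexShaderSketch([x, y, z], [tx, ty, tz]) {
  return [x + tx, y + ty, z + tz];
}

const A = [0.0, 0.0, 0.0];
const A1 = vertexShaderSketch(A, [0.5, 0.25, 0.0]);
console.log(A1); // [0.5, 0.25, 0]
```

Any transform (rotation, scaling, a full matrix multiply) could stand in for the translation step.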
Three types of variables are usually involved in the vertex shader stage:
- attribute: applies to a single vertex.
  - Typically used for per-vertex data: vertex position, point size, etc.
- uniform: affects all vertices.
  - Typically used for transforms such as rotation, translation, and scaling, and for color processing.
- varying: passed from the vertex shader to the fragment shader.
We have already set the position and size of a vertex before, the kind of per-vertex data an attribute carries. Let’s review the code:
```javascript
// Vertex shader
const VERTEX_SHADER = `
  void main() {
    gl_Position = vec4(0.0, 0.0, 0.0, 1.0);
    gl_PointSize = 15.0;
  }
`;

// Fragment shader
const FRAGMENT_SHADER = `
  void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
  }
`;
```
The other two variables are not used for the moment, but they will be used in the following content. Stay tuned 😁
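As a preview, here is a hypothetical shader pair (not code from the previous article; the names `a_Position`, `u_Translation`, and `v_Color` are made up for illustration) showing where each of the three variable types lives:

```javascript
// Hypothetical GLSL showing all three variable types.
// attribute: per-vertex input; uniform: shared by every vertex;
// varying: handed from the vertex shader to the fragment shader.
const VERTEX_SHADER_DEMO = `
  attribute vec4 a_Position;   // per-vertex position
  uniform vec4 u_Translation;  // same offset applied to all vertices
  varying vec4 v_Color;        // passed on to the fragment shader
  void main() {
    gl_Position = a_Position + u_Translation;
    v_Color = vec4(1.0, 0.0, 0.0, 1.0);
  }
`;

const FRAGMENT_SHADER_DEMO = `
  precision mediump float;
  varying vec4 v_Color;        // received from the vertex shader
  void main() {
    gl_FragColor = v_Color;
  }
`;
```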
3. Primitive assembly and rasterization
What is a primitive?
Official explanation:
The functions that describe graphic elements are called primitives; primitives that describe geometric elements (points, line segments, or polygons) are called geometric primitives. Points and lines are the simplest geometric primitives.
The coordinates computed by the vertex shader are assembled into primitives.
Let’s go through a set of diagrams to see how the renderer performs primitive assembly and rasterization.
In plain terms:
A primitive is a point, a line segment, or a polygon.
3.1 Primitive assembly
What is primitive assembly? Simply put, it is the process of assembling the vertices, colors, textures, etc. we set up into a renderable polygon.
Note:
How things are assembled depends on the type argument in gl.drawArrays(type, first, count), which is detailed at the end of this article.
In the previous article, we used gl.POINTS to draw a point.
3.2 Rasterization
Fragment: each point on the two-dimensional image carries color, depth, and texture data; such a point is called a fragment.
Rasterization can be understood simply as follows:
The polygons produced by primitive assembly are computed and filled with fragments; the invisible parts are culled and the out-of-view parts are clipped away. Finally, visible graphics carrying color data are generated and drawn.
Diagram of raster process:
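Conceptually, rasterization walks the pixels a primitive covers and emits one fragment per pixel. A toy sketch (an illustration only, nothing like a real GPU rasterizer) for a single horizontal span:

```javascript
// Toy rasterizer: turn a horizontal span from (x0, y) to (x1, y)
// into fragments, each carrying a position and a color.
function rasterizeSpan(x0, x1, y, color) {
  const fragments = [];
  for (let x = x0; x <= x1; x++) {
    fragments.push({ x, y, color }); // one fragment per covered pixel
  }
  return fragments;
}

const frags = rasterizeSpan(2, 5, 1, [1, 0, 0, 1]);
console.log(frags.length); // 4 fragments, for pixels x = 2..5
```

A real rasterizer does this for every scanline a triangle covers, interpolating depth and texture coordinates along the way.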
Culling and clipping
- Culling: take an everyday example. In daily life, the back of an opaque object is not visible to the observer. Similarly, in WebGL we can mark the back of an object as invisible, so during rendering the invisible parts are culled and take no part in drawing, saving rendering overhead.
- Clipping: another example. In daily life, whether we are watching TV or observing objects, there is a visual range, and we cannot see things beyond it. Similarly, after the graphics are generated, parts of them may fall outside the visible range; those parts are clipped and excluded from drawing, to improve performance.
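In WebGL, back-face culling is opt-in. A typical setup, assuming `gl` is a WebGL rendering context (a stub stands in for the browser context here so the sketch can run anywhere):

```javascript
// Back-face culling is off by default in WebGL; enabling it lets
// the pipeline discard triangles facing away from the viewer.
function enableBackFaceCulling(gl) {
  gl.enable(gl.CULL_FACE); // turn face culling on
  gl.cullFace(gl.BACK);    // discard back-facing triangles
}

// In a browser, `gl` would come from canvas.getContext("webgl");
// this stub just records the calls made.
const calls = [];
const glStub = {
  CULL_FACE: 0x0b44,
  BACK: 0x0405,
  enable: (cap) => calls.push(["enable", cap]),
  cullFace: (mode) => calls.push(["cullFace", mode]),
};
enableBackFaceCulling(glStub);
console.log(calls.length); // 2
```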
4. Fragment shader
This stage receives the fragments generated during rasterization. Rasterization has already computed the color information of each fragment; here the fragments are processed one by one, and the processed fragments are passed on to the next stage.
Per-fragment operations
Stencil and depth tests determine whether a fragment should be displayed; during testing, useless fragment content is discarded, after which the two-dimensional image can be drawn and displayed.
- Depth test: tests the z-axis value of each fragment; fragment content with a smaller value overwrites content with a larger value (similar to near objects blocking far objects).
- Stencil test: simulates the observation behavior of an observer, as with mirror observation. Fragments that appear in the mirrors are marked, and only the marked content is finally drawn.
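The depth test described above can be sketched as: keep an incoming fragment only if its z value is smaller (closer) than what the depth buffer already holds for that pixel. This is an illustrative model, not the GPU's actual implementation:

```javascript
// Toy depth test: depthBuffer maps a pixel key to the smallest
// (closest) z seen so far; an incoming fragment wins only if closer.
function depthTest(depthBuffer, pixelKey, fragmentZ) {
  const stored = depthBuffer.has(pixelKey) ? depthBuffer.get(pixelKey) : Infinity;
  if (fragmentZ < stored) {
    depthBuffer.set(pixelKey, fragmentZ); // near fragment overwrites far one
    return true;  // fragment passes and will be drawn
  }
  return false;   // fragment is occluded; discard it
}

const buf = new Map();
console.log(depthTest(buf, "10,20", 0.8)); // true  (buffer was empty)
console.log(depthTest(buf, "10,20", 0.3)); // true  (0.3 is closer than 0.8)
console.log(depthTest(buf, "10,20", 0.5)); // false (0.5 is behind 0.3)
```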
5. Case analysis
First, let’s look at the code from the previous article that drew a point:
```javascript
// Vertex shader
const VERTEX_SHADER = `
  void main() {
    gl_Position = vec4(0.0, 0.0, 0.0, 1.0);
    gl_PointSize = 15.0;
  }
`;

// Fragment shader
const FRAGMENT_SHADER = `
  void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
  }
`;

// Create the vertex and fragment shaders
const vertexShader = gl.createShader(gl.VERTEX_SHADER);
const fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
// Attach the vertex shader source to the vertex shader
gl.shaderSource(vertexShader, VERTEX_SHADER);
// Attach the fragment shader source to the fragment shader
gl.shaderSource(fragmentShader, FRAGMENT_SHADER);
// After attaching, the shaders need to be compiled
gl.compileShader(vertexShader);
gl.compileShader(fragmentShader);
// Create a program object
const program = gl.createProgram();
// Attach the shaders to the program object
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);
// Link the program object
gl.linkProgram(program);
// Use the program object
gl.useProgram(program);
// Draw a point
gl.drawArrays(gl.POINTS, 0, 1);
```
Analysis:
- We first create two source strings, VERTEX_SHADER for the vertex shader and FRAGMENT_SHADER for the fragment shader. The code that follows attaches these to the shaders.
- Two shaders are then created through gl.createShader(); passing different parameters creates the different shader types. The steps for creating and wiring up shaders are tedious; once you are familiar with the process, wrap the code in a helper for reuse.
- Next, the shaders and program object are initialized: attach the shader source to the shaders, compile the shaders, create a program object, attach the shaders to the program object, link the program object, and use the program object.
- When all the preparation is done, the gl.drawArrays() method can be called to draw the desired graphics.
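As suggested above, the whole create/compile/attach/link/use flow is easy to wrap once familiar. A minimal sketch of such a helper (the name `initShaderProgram` is made up; production code would also check compile and link status):

```javascript
// Helper bundling shader creation, compilation, program linking,
// and activation. `gl` is a WebGL rendering context; vsSource and
// fsSource are GLSL source strings.
function initShaderProgram(gl, vsSource, fsSource) {
  const compile = (type, source) => {
    const shader = gl.createShader(type); // make an empty shader object
    gl.shaderSource(shader, source);      // attach its GLSL source
    gl.compileShader(shader);             // compile it
    return shader;
  };
  const program = gl.createProgram();
  gl.attachShader(program, compile(gl.VERTEX_SHADER, vsSource));
  gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fsSource));
  gl.linkProgram(program);
  gl.useProgram(program);
  return program;
}
```

With this in place, the setup in the example collapses to `const program = initShaderProgram(gl, VERTEX_SHADER, FRAGMENT_SHADER);` followed by the draw call.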
5.1 gl.drawArrays(type, first, count)
1. type
Specifies the kind of primitive to draw; it takes the following values:
- gl.POINTS: draws a series of points
- gl.LINES: draws a series of unconnected line segments (individual segments)
- gl.LINE_STRIP: draws a series of connected line segments
- gl.LINE_LOOP: draws a series of connected line segments, also joining the first and last vertices to form a loop
- gl.TRIANGLES: draws a set of separate triangles
- gl.TRIANGLE_STRIP: draws a triangle strip
- gl.TRIANGLE_FAN: draws a triangle fan
2. first
Specifies the index of the vertex to start from
3. count
Specifies how many vertices to use
Example:
```javascript
gl.drawArrays(gl.POINTS, 0, 1); // Draw points: one vertex, starting from the first.
```
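As a rough rule of thumb for how `count` vertices map to shapes in each mode (a sketch for intuition, not part of the WebGL API):

```javascript
// How many primitives do `count` vertices yield for each draw mode?
function primitiveCount(mode, count) {
  switch (mode) {
    case "POINTS":         return count;                  // one point per vertex
    case "LINES":          return Math.floor(count / 2);  // separate segments
    case "LINE_STRIP":     return Math.max(count - 1, 0); // chained segments
    case "LINE_LOOP":      return count >= 3 ? count : 0; // chain plus closing segment
    case "TRIANGLES":      return Math.floor(count / 3);  // separate triangles
    case "TRIANGLE_STRIP": return Math.max(count - 2, 0); // each new vertex adds a triangle
    case "TRIANGLE_FAN":   return Math.max(count - 2, 0); // fan around the first vertex
    default:               return 0;
  }
}

console.log(primitiveCount("TRIANGLES", 6));      // 2
console.log(primitiveCount("TRIANGLE_STRIP", 6)); // 4
```

So six vertices give you two separate triangles with gl.TRIANGLES but four with gl.TRIANGLE_STRIP, which is why strips and fans are the cheaper way to draw large meshes.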
Try the other shapes too, they can produce unexpected results.
This section only introduces part of what there is to know about shaders; compared with the full picture there is still a long way to go, but it is more than enough to move on to deeper content. The more you work with it, the more skilled you will become; sharing at this stage still feels a little like gilding the lily.
Well, that’s all for today’s sharing, Bye~