Hello again, readers! Today, ByteDance's Yancy is bringing you a new WebGL article, "WebGL Basics: Shader Basics"! Are you ready? Let's begin our magical journey into the 3D world.
Author: Yancy
1. Getting to know 3D
First, let's introduce a few concepts we need before moving into 3D. Get to know them.
1.1 Viewpoint, line of sight, target point, and up direction
These concepts are among the most common in WebGL.
- Viewpoint: can be understood as the eye; also called the observation point.
- Target point: the object (any object) we want to look at.
- Up direction: the direction the top of the viewer's head points.
In real life, our line of sight always starts from our eyes and ends at the object we want to see. At the same time, the object takes on a different shape depending on the angle from which we observe it. Let me illustrate with a picture.
Together these concepts form the basic viewing model of the 3D world, which shows how important they are. We'll talk later about how to set them up in WebGL. Some people call the viewpoint the camera and the target point the canvas; they're the same thing, so remember whichever term makes sense to you.
1.2 Visual range
Visual range refers to the maximum range that we can see. Normally we can’t see what is behind us.
It is well known that three-dimensional objects have the concept of depth. In our understanding, depth is the Z-axis.
Although objects can be placed anywhere in three-dimensional space, WebGL does not draw objects outside the visual range, again to save overhead.
1.3 Visual Space
The horizontal field of view, the vertical field of view, and the visual depth together define the visual space.
There are two kinds of visual space:
- Orthographic projection: an object's drawn size is independent of its distance; usually used in architectural design and modeling.
- Perspective projection: this is how we observe the real world; distant objects appear smaller, giving a stronger sense of depth.
1.4 Shaders
If you want to render 3D graphics, you need to go through a series of steps called the rendering pipeline. When developing WebGL programs, we communicate with the GPU through the shader language, which describes the graphics we want to render and display.
It follows that shaders are the single most important part of writing WebGL (not just one of the most important). The reason we can generate and manipulate 3D images is that shaders do the work. There are two kinds of shaders in WebGL: vertex shaders and fragment shaders.
1.4.1 Vertex shaders
The vertices here are the individual points that make up an object.
The vertex shader mainly transforms position data through matrix transformations, computes lighting to generate vertex colors, and transforms texture coordinates, then outputs the generated data to the fragment shader, as in the sketch below.
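To make this concrete, here is a minimal sketch of vertex shader source, written as the embedded string this article uses later. The names a_Position and u_Matrix are hypothetical examples supplied from the JavaScript side, not fixed WebGL names:

```js
// A minimal vertex shader sketch: transform per-vertex position data by a matrix.
// a_Position and u_Matrix are hypothetical names.
const VERTEX_SHADER = `
  attribute vec4 a_Position; // position data for this vertex
  uniform mat4 u_Matrix;     // a transformation (rotation/translation/scale)
  void main() {
    gl_Position = u_Matrix * a_Position;
  }
`;
```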
1.4.2 Fragment shaders
The fragment shader computes the final color of each fragment generated in the rasterization stage.
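As a sketch, a fragment shader that gives every fragment a fixed color might look like this:

```js
// A minimal fragment shader sketch: compute the final color of each fragment.
const FRAGMENT_SHADER = `
  void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // r, g, b, a -> opaque red
  }
`;
```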
Note:
Because shaders are so important, we introduce the two concepts here first and will cover them in depth in a dedicated article later.
2. Drawing graphics
2.1 Obtaining the drawing context
Now that we’ve covered the first section, we’re ready to dive into WebGL development.
Remember what the first step in Canvas is?
Yes, you need to get the canvas element and the drawing context. WebGL development is no exception: it also needs to get the element and the drawing context first. The code looks like the following:
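(The original snippet is not reproduced here; below is a minimal sketch, assuming a canvas element with the hypothetical id "webgl".)

```js
// Get the canvas element and the WebGL drawing context.
const canvas = document.getElementById('webgl'); // hypothetical element id
const gl = canvas.getContext('webgl');
if (!gl) {
  console.log('Failed to get the WebGL drawing context');
}
```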
2.2 Initializing shaders
2.2.1. Writing shader code
After obtaining the drawing context, we need to initialize the WebGL shaders. Shader code exists as strings embedded in the program and is passed to the rendering process at render time, so we need to write two shader strings.
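A minimal sketch of the two strings, matching the description below:

```js
// Vertex shader: fix the point's position and size.
const VERTEX_SHADER = `
  void main() {
    gl_Position = vec4(0.0, 0.0, 0.0, 1.0); // x, y, z, w
    gl_PointSize = 10.0;                    // size of the point
  }
`;
// Fragment shader: color the point red.
const FRAGMENT_SHADER = `
  void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // r, g, b, a
  }
`;
```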
Description:
- void main() {}: creates the main function.
- gl_Position: specifies the coordinates to draw; it receives a four-component vec4 value representing x, y, z, w.
- gl_PointSize: indicates the size of the point to be drawn.
- gl_FragColor: defines the graphic's color; 1.0, 0.0, 0.0, 1.0 represent r, g, b, a.
2.2.2. Creating shaders
Of course, just writing the shader code won't complete the rendering; we also need to add the shaders to the rendering process.
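A sketch of this step, using the standard WebGL API:

```js
// Create the shader objects; the argument selects which kind of shader to create.
const vertexShader = gl.createShader(gl.VERTEX_SHADER);
const fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
```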
2.2.3. Shader compilation
After the above two steps, we need to attach the shader source code to the shader objects and compile it. Let's look at an example.
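A sketch, continuing from the shader objects created above:

```js
// Attach the source strings to the shader objects, then compile them.
gl.shaderSource(vertexShader, VERTEX_SHADER);
gl.compileShader(vertexShader);
gl.shaderSource(fragmentShader, FRAGMENT_SHADER);
gl.compileShader(fragmentShader);
```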
2.2.4. Creating the program
After compiling, we need to create a program object, attach the shaders to it, and put it to use.
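A sketch of creating and using the program object:

```js
// Create a program object, attach both shaders, link, and use the program.
const program = gl.createProgram();
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);
gl.linkProgram(program);
gl.useProgram(program);
```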
2.2.5. Drawing graphics
After completing the above steps, we are ready to draw. Let's start small: draw a single point on the canvas.
gl.drawArrays(gl.POINTS, 0, 1);
Now you can open the page and see the point we drew.
Summary code:
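(The original summary block is not preserved; here is a minimal end-to-end sketch of the steps above, assuming a canvas with the hypothetical id "webgl".)

```js
// 1. Get the element and the drawing context.
const canvas = document.getElementById('webgl');
const gl = canvas.getContext('webgl');

// 2. Write the shader code as strings.
const VERTEX_SHADER = `
  void main() {
    gl_Position = vec4(0.0, 0.0, 0.0, 1.0);
    gl_PointSize = 10.0;
  }
`;
const FRAGMENT_SHADER = `
  void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
  }
`;

// 3. Create, source, and compile the shaders.
const vertexShader = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vertexShader, VERTEX_SHADER);
gl.compileShader(vertexShader);
const fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fragmentShader, FRAGMENT_SHADER);
gl.compileShader(fragmentShader);

// 4. Create the program, attach the shaders, link, and use it.
const program = gl.createProgram();
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);
gl.linkProgram(program);
gl.useProgram(program);

// 5. Draw a single point.
gl.drawArrays(gl.POINTS, 0, 1);
```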
3. Render pipeline
3.1 Introduction to basic content
WebGL rendering depends on the rendering capability of the underlying GPU. So the WebGL rendering process is consistent with the rendering pipeline inside the GPU.
The rendering pipeline converts a 3D model into a 2D image.
In the early days, rendering pipelines were not programmable and were called fixed-function pipelines: the details of the workflow were fixed, and only certain parameters could be adjusted.
Modern GPUs provide a programmable rendering pipeline, in which details of the rendering stages can be controlled by writing shader programs in GLSL, the shader language.
3.2 Rendering Process
(Faint-hearted students can skip this section.)
WebGL's rendering process is divided into the following stages:
- Vertex shader
- Primitive assembly
- Rasterization
- Fragment shader
- Per-fragment operations (not covered in this article)
  - Scissor test
  - Multisampling
  - Back-face culling
  - Stencil test
  - Depth test
  - Blending
  - Writing to the buffer
This list of terms may look intimidating, but don't let the names scare you; they will be explained in more detail later. Hopefully this article helps you understand the basic rendering process. A diagram is attached to help you understand.
4. Vertex shaders
The purpose of the vertex shader is to calculate the final vertex coordinates, as in:
A --> () => { …… } ==> A1
B --> () => { …… } ==> B1
That is, coordinate A goes through a series of calculations to yield the final coordinate A1; likewise, coordinate B yields B1, and so on.
The coordinates calculated by the vertex shader will be rendered to the final display canvas.
In addition, the vertex shader also computes things like color, texture coordinates, and vertex size.
There are usually three types of variables involved in the vertex shader stage.
- attribute: applies to a single vertex.
  - Typically used for per-vertex data such as vertex position and vertex size.
- uniform: affects all vertices.
  - Typically used for position transformations such as rotation, translation, and scaling, as well as color processing.
- varying: passes data from the vertex shader to the fragment shader.
We have already used an attribute variable above to set the position and size of vertices; review the earlier code if needed.
The other two kinds haven't been used yet, but they will appear in later content; a sketch follows below. Stay tuned 😁
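As a sketch, here is how the three kinds of variables might appear in shader source (the names a_Position, u_Matrix, and v_Color are hypothetical):

```js
const VERTEX_SHADER = `
  attribute vec4 a_Position; // per-vertex: this vertex's position
  uniform mat4 u_Matrix;     // shared by all vertices: a transformation
  varying vec4 v_Color;      // handed on to the fragment shader
  void main() {
    gl_Position = u_Matrix * a_Position;
    v_Color = vec4(1.0, 0.0, 0.0, 1.0);
  }
`;
const FRAGMENT_SHADER = `
  precision mediump float;   // required before declaring floats in WebGL 1
  varying vec4 v_Color;      // received, interpolated, from the vertex shader
  void main() {
    gl_FragColor = v_Color;
  }
`;
```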
5. Primitive assembly and rasterization
What is a primitive?
Official explanation:
Functions that describe various graphic elements are called primitives, and primitives that describe geometric objects (points, line segments, or polygons) are called geometric primitives. Points and line segments are the simplest geometric primitives.
The coordinates computed by the vertex shader are assembled into primitives.
Let's go through a set of diagrams to see how the renderer performs primitive assembly and rasterization.
In plain terms: a primitive is a point, a line segment, or a polygon.
5.1 Primitive Assembly
What is primitive assembly? Simply put, it is the process of assembling the vertices, colors, textures, and so on that we set up into renderable polygons.
Note:
How primitives are assembled depends on the type argument of gl.drawArrays(type, first, count), which is detailed at the end of this article.
Earlier, we used gl.POINTS to draw a point.
5.2 Rasterization
Fragment: each point on a two-dimensional image carries color, depth, and texture data; such a point is called a fragment.
Rasterization can be simply understood as the following:
The polygons produced by primitive assembly are filled in, invisible parts are culled, and parts outside the view are clipped away. Finally, visible fragments carrying color data are generated and drawn.
Diagram of the rasterization process:
Culling and clipping
- Culling: take an everyday example. In daily life, the back of an opaque object is not visible to the observer. Similarly, in WebGL we can mark the back faces of objects as invisible, so during rendering the invisible parts are culled and do not participate in drawing, saving rendering overhead. (A code sketch follows this list.)
- Clipping: again, an everyday example. Whether we are watching TV or observing objects, there is a limited visual range, beyond which we cannot see. Similarly, after graphics are generated, parts of them may fall outside the visual range; those parts are clipped and excluded from drawing, improving performance.
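In WebGL, back-face culling is opt-in; a minimal sketch (clipping against the visible volume happens automatically):

```js
// Enable back-face culling: back-facing triangles are skipped during rendering.
gl.enable(gl.CULL_FACE);
gl.cullFace(gl.BACK); // gl.BACK is the default; stated explicitly for clarity
```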
6. Fragment shader
The fragment shader receives the fragments generated in the rasterization stage, where each fragment's color information has already been computed. In this stage, the fragments are processed one by one, and the processed fragments are passed on to the next stage.
Per-fragment operations
The stencil test and depth test determine whether each fragment should be displayed; during testing, useless fragments are discarded, after which the two-dimensional image can be drawn and displayed. (A code sketch follows the list below.)
- **Depth test:** tests the z-axis value of each fragment; content with a smaller z value overwrites content with a larger one (similar to near objects blocking far objects).
- **Stencil test:** simulates the observation behavior of an observer, such as viewing through mirrors. The fragments that appear in the mirrors are marked, and only the marked content is finally drawn.
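Both tests are opt-in; a minimal sketch of enabling them (the stencil test assumes the context was created with a stencil buffer):

```js
// Enable the depth test so nearer fragments hide farther ones.
gl.enable(gl.DEPTH_TEST);
// Enable the stencil test; assumes getContext('webgl', { stencil: true }).
gl.enable(gl.STENCIL_TEST);
// Clear all buffers before drawing.
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT | gl.STENCIL_BUFFER_BIT);
```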
7. Case analysis
The following walks through the code from section 2.2.5 to deepen our understanding.
Analysis:
- We first create two variables, VERTEX_SHADER (the vertex shader source) and FRAGMENT_SHADER (the fragment shader source). These two pieces of source code are attached to the vertex shader and fragment shader by the code that follows.
- Two shaders are then created through gl.createShader(); passing different parameters creates different kinds of shaders. The steps for creating and associating shaders are involved; once you are familiar with the process, encapsulate it in a reusable helper (a sketch of such a helper follows this list).
- Next, you initialize the shaders and the program object: attach the shader source to the shaders, compile the shaders, create a program object, attach the shaders to the program object, link the program object, and use the program object.
- When all the preparation is done, call gl.drawArrays() to draw the desired graphics.
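A sketch of such a helper (initShaders here is a hypothetical name, not a built-in WebGL function):

```js
// Encapsulate the create/compile/link/use steps into one reusable function.
function initShaders(gl, vertexSource, fragmentSource) {
  const vertexShader = gl.createShader(gl.VERTEX_SHADER);
  gl.shaderSource(vertexShader, vertexSource);
  gl.compileShader(vertexShader);

  const fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
  gl.shaderSource(fragmentShader, fragmentSource);
  gl.compileShader(fragmentShader);

  const program = gl.createProgram();
  gl.attachShader(program, vertexShader);
  gl.attachShader(program, fragmentShader);
  gl.linkProgram(program);
  gl.useProgram(program);
  return program;
}
```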
7.1 gl.drawArrays(type, first, count)
1. type
Indicates the kind of primitive to draw; it takes the following values:
- gl.POINTS: draws a series of points.
- gl.LINE_STRIP: draws a series of connected line segments.
- gl.LINE_LOOP: draws a series of connected line segments, and also joins the first and last vertices to form a loop.
- gl.TRIANGLES: draws a set of separate triangles.
- gl.TRIANGLE_STRIP: draws a triangle strip.
- gl.TRIANGLE_FAN: draws a triangle fan.
2. first
The index of the vertex at which to start drawing.
3. count
The number of vertices to use.
Example:
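(The original example is not preserved; here is a hedged sketch that draws a triangle instead of a point. It assumes a program whose vertex shader declares a hypothetical attribute named a_Position.)

```js
// Three 2D vertices for one triangle.
const vertices = new Float32Array([
   0.0,  0.5,
  -0.5, -0.5,
   0.5, -0.5,
]);
// Upload the vertices and wire them to a_Position (hypothetical attribute name).
const buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
const aPosition = gl.getAttribLocation(program, 'a_Position');
gl.vertexAttribPointer(aPosition, 2, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(aPosition);
// type = gl.TRIANGLES, first = 0, count = 3 vertices.
gl.drawArrays(gl.TRIANGLES, 0, 3);
```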
Try the other shapes as well; they can produce unexpected effects.