Preface
After the autumn recruitment season, I finally wrote a fairly complete scene with three.js, covering most of the basic 3D features, and published it on Juejin (cn). Many people turned out to be interested in Web3D, but I really could not disclose the code (as an intern at the time I had little sense of confidentiality), so the source was removed from GitHub in November and will probably not be made public again.
Next, I will continue with a wrap-up article on Web3D based on the notes I took back then. This article is mainly about the graphics pipeline, i.e. the rendering process; understanding it can be helpful when optimizing rendering performance.
My knowledge is limited, so if you spot mistakes, feel free to point them out in the comments.
CPU and GPU features
Why start an article about the graphics pipeline with the CPU and GPU?
Because in the rendering process the CPU and GPU work together to produce the image; neither can do it alone. The CPU is responsible for preparing and computing data and handing it to the GPU, which dispatches a large number of worker-like computing units to process that data and finally assemble the image to be displayed.
Compared with the CPU, the two main characteristics of the GPU are pipelined rendering and parallel computing.
Both the vertex data sent by the CPU to the GPU and the pixel data produced by the GPU's rasterizer are independent of each other and can be processed in parallel (a frame is made up of tens of thousands of pixels). Moreover, vertex data (XYZW) and pixel data (RGBA) are generally expressed as four-component vectors, which are well suited to parallel computation. At the same time, since each pixel is processed independently of the others, efficiency can also be improved through pipelining, where each stage takes the output of the previous stage as its input.
The process below is the graphics assembly line, i.e. the Render Pipeline. The diagram shows the steps involved in rendering a colored triangle, and hopefully gives readers an intuitive picture to hold on to.
1. Application Stage
This part is usually not considered part of the "pipeline" proper, but it is the preparation stage before work is handed over to the GPU.
This is the phase where the CPU is in charge and the developer has full control. Here, the CPU decides what data to hand to the GPU (for example the lights in the target scene, the models in the scene, and the camera position), and sometimes pre-processes that data (for example handing the GPU only the elements the camera can see and culling the rest). It also tells the GPU the render state of that data (textures, materials, shaders, and so on). In the application stage we mainly work in a high-level programming language rather than a shader language. A minimal three.js sketch of this stage follows below.
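To make this concrete, here is a minimal three.js sketch of the application stage; the scene contents (a box, one directional light) and all names are illustrative assumptions of mine, not code from the original project.

```js
import * as THREE from 'three';

// Application stage: the CPU-side code decides what to send to the GPU.
const scene = new THREE.Scene();

// The camera position determines which objects are potentially visible.
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.set(0, 2, 5);

// Lights in the scene are part of the data handed to the GPU.
const light = new THREE.DirectionalLight(0xffffff, 1);
light.position.set(3, 5, 2);
scene.add(light);

// Geometry (vertex data) plus material (render state: color, shader, texture, ...).
const geometry = new THREE.BoxGeometry(1, 1, 1);
const material = new THREE.MeshStandardMaterial({ color: 0x44aa88 });
const cube = new THREE.Mesh(geometry, material);
cube.frustumCulled = true; // objects outside the camera frustum are culled before drawing
scene.add(cube);

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// renderer.render() is where the prepared data and render state are handed to the GPU pipeline.
renderer.render(scene, camera);
```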
2. Vertex Shader: Determines each vertex's position and color
From here we enter the Geometry Stage, which is dominated by the GPU. In other words, from this stage on we are inside the "assembly line" mentioned above.
- Input to vertex shaders:
  This stage takes Vertex Data as input: a collection of vertices that includes vertex positions, vertex colors, normal vectors, light direction, light source position, light color, and other data.
- What the vertex shader does:
  - It runs the vertex shader program to perform vertex transformations, e.g. applying rotation, translation, and other matrix transformations to the vertex position coordinates; the transformed coordinates become the output of the vertex shader and the input of the next stage.
    Incidentally, many of these steps can be thought of as transformations between representations of an object in different coordinate systems. In the virtual-camera imaging model, for example, transformations are involved all the way from the object's representation in its own coordinate system to its final appearance on the output device. Each change of coordinate system is a matrix multiplication, and this chain of transformations ending in the projection of the geometric data is exactly what suits a pipeline architecture.
  - In addition to the geometric transformation matrices mentioned above, which change the vertex positions, lighting data similarly changes the vertex color data. Assigning vertex colors can be as simple as specifying a color in the program, or as involved as computing it with a real lighting model.
    At this stage developers can compute per-vertex lighting information such as light and shadow. Besides lighting, other vertex-related operations can also be performed here. It is worth stressing that this is only "information processing", not actual coloring; think of it as "providing information for the coloring calculation that comes later" (a minimal vertex shader sketch follows after this list).
- Output from the vertex shader:
  This stage outputs the modified or transformed Vertex Data, i.e. the same kind of per-vertex collection described above, now with updated positions and colors.
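As a concrete (but hypothetical) illustration of the two points above, here is a minimal raw-WebGL vertex shader written as a GLSL string; the attribute and uniform names (a_Position, u_ModelMatrix, u_LightDirection, ...) are my own and would have to match whatever your program actually declares.

```js
// A minimal GLSL vertex shader as a JavaScript string (WebGL 1 / GLSL ES 1.0).
const vertexShaderSource = `
  attribute vec4 a_Position;       // vertex position from the CPU-side vertex data
  attribute vec4 a_Color;          // per-vertex color
  attribute vec3 a_Normal;         // per-vertex normal vector
  uniform mat4 u_ModelMatrix;      // object -> world transform
  uniform mat4 u_ViewMatrix;       // world -> camera transform
  uniform mat4 u_ProjectionMatrix; // camera -> clip-space (projection) transform
  uniform mat4 u_NormalMatrix;     // transform for normals (inverse transpose of the model matrix)
  uniform vec3 u_LightDirection;   // normalized light direction
  uniform vec3 u_LightColor;       // light color
  varying vec4 v_Color;            // handed on to be interpolated for the fragment shader

  void main() {
    // 1. Vertex transformation: each change of coordinate system is a matrix multiplication.
    gl_Position = u_ProjectionMatrix * u_ViewMatrix * u_ModelMatrix * a_Position;

    // 2. Per-vertex lighting "information processing": a simple Lambert (diffuse) term.
    vec3 normal = normalize(vec3(u_NormalMatrix * vec4(a_Normal, 0.0)));
    float nDotL = max(dot(normal, u_LightDirection), 0.0);
    v_Color = vec4(u_LightColor * a_Color.rgb * nDotL, a_Color.a);
  }
`;
```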
3. Shape Assembly: Connecting points into lines and triangles to determine the shape
- Input to primitive assembly:
  This stage takes all the vertices output by the vertex shader as input and assembles them into primitives of the specified type.
  For example, a line primitive is a line segment defined by two vertices, with a default width of 1 pixel; a triangle primitive is the entire area inside the triangle's boundary, not just its edges. Primitives are not actual display units, i.e. a primitive is not what we call a pixel.
- Functions of primitive assembly:
  - This stage takes the set of vertices as input and, following the specified primitive type, constructs entirely new primitives to generate geometric shapes.
  - In WebGL, from the program's point of view it is the first argument of drawArrays() or drawElements(), the drawing mode, that controls how vertices are assembled into primitives. For example, gl.LINES assembles every two vertices into a line primitive, gl.TRIANGLES assembles every three vertices into a triangle primitive, and gl.POINTS turns each vertex into a point primitive (see the sketch after this list).
- Output of primitive assembly:
  The assembled primitives.
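A quick sketch of this in raw WebGL; it assumes gl is an existing WebGLRenderingContext with a buffer of 6 vertices already bound and its attributes set up:

```js
// The first argument (the drawing mode) decides how the 6 vertices are assembled into primitives.
gl.drawArrays(gl.POINTS, 0, 6);    // each vertex becomes its own point primitive (6 points)
gl.drawArrays(gl.LINES, 0, 6);     // every 2 vertices form a line primitive (3 line segments)
gl.drawArrays(gl.TRIANGLES, 0, 6); // every 3 vertices form a triangle primitive (2 triangles)

// drawElements() assembles primitives the same way, but picks vertices through an index buffer:
// gl.drawElements(gl.TRIANGLES, 6, gl.UNSIGNED_SHORT, 0);
```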
4. Rasterization: Transform geometry into actual screen pixels
- Input to rasterization:
  The primitives from the primitive assembly stage.
- Functions of rasterization:
  - This stage maps primitives to the corresponding pixels on the final screen, generating fragments. A fragment is all the data (including interpolated color values) needed to render one pixel.
    One special reminder: at this point a fragment's color is not final. The rasterizer does turn primitives into fragments, and each pixel of the canvas corresponds to one fragment, so you can loosely think of a fragment as a pixel, but they are not the same concept, and the rasterizer does not assign the final color either; you can customize the fragment's color in the fragment shader in the next step (a conceptual sketch of this interpolation follows after this list).
  - Vertex coordinates are defined with reference to the world coordinate system, while fragment coordinates are defined with reference to the window coordinate system of the canvas. In other words, after this step the data has effectively been converted from "3D" data into "2D" data that can be displayed on the screen (the screen is always 2D, whether the content was originally 2D or 3D).
- Output of rasterization:
  Fragments; a fragment can also be understood as a pixel that has not yet been given a color.
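Rasterization itself is fixed-function hardware, not something you program, but as a rough mental model here is what "producing a fragment with interpolated values" could look like for a single pixel inside a triangle. This is plain JavaScript and purely illustrative; rasterizePixel and the tri structure are made-up names.

```js
// Conceptual sketch (not real WebGL): what the rasterizer roughly does for one pixel.
function rasterizePixel(px, py, tri) {
  const [a, b, c] = tri.positions; // three vertices in canvas/window coordinates: { x, y }
  // Twice the signed area of the triangle, used to normalize the barycentric weights.
  const area = (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);
  // Barycentric weights of the pixel relative to the triangle's three vertices.
  const w0 = ((b.x - px) * (c.y - py) - (c.x - px) * (b.y - py)) / area;
  const w1 = ((c.x - px) * (a.y - py) - (a.x - px) * (c.y - py)) / area;
  const w2 = 1 - w0 - w1;
  if (w0 < 0 || w1 < 0 || w2 < 0) return null; // pixel lies outside the triangle: no fragment

  const [ca, cb, cc] = tri.colors; // per-vertex colors as [r, g, b, a] arrays
  // The fragment's color is interpolated from the vertex colors -- still not the final color.
  const color = ca.map((_, i) => w0 * ca[i] + w1 * cb[i] + w2 * cc[i]);
  return { x: px, y: py, color }; // a "fragment": a pixel position plus interpolated data
}

// Example: a fragment inside a triangle whose corners are red, green and blue.
const fragment = rasterizePixel(40, 30, {
  positions: [{ x: 0, y: 0 }, { x: 100, y: 0 }, { x: 0, y: 100 }],
  colors: [[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]],
});
```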
5. Fragment Shader: Coloring the useful screen pixels (interpolated shading)
- Input to the fragment shader:
  The fragments output by rasterization. Like the vertex shader, the fragment shader is a programmable functional unit in the GPU rendering pipeline that executes a shader program; the vertex shader processes vertex data vertex by vertex, while the fragment shader processes fragment data fragment by fragment.
- Functions of the fragment shader:
  - In this stage, clipping is performed on the input fragments: everything outside the view is discarded to improve efficiency. In WebGL you can also use the discard keyword to decide which fragments to throw away; discarded fragments never reach the frame buffer and are not displayed on the canvas.
  - Coloring the fragments.
    WebGL colors each fragment by assigning to the built-in variable gl_FragColor, which can be a fixed RGBA value, a value derived from the fragment's position, or an interpolated vertex color. This stage is fully programmable: after the GPU receives the large amount of data for this stage, the programmer decides what color each fragment should be, and can also use a texture map to color it. A minimal shader sketch follows after this list.
- Output from the fragment shader:
  Clipped and colored fragments, i.e. pixels that have been filtered and given a color.
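Here is a minimal fragment shader sketch, again as a GLSL string with made-up names (v_Color, v_TexCoord, u_Sampler) that would have to match your own program; it shows both the discard keyword and gl_FragColor being assigned from a texture sample combined with the interpolated vertex color.

```js
// A minimal GLSL fragment shader as a JavaScript string (WebGL 1 / GLSL ES 1.0).
const fragmentShaderSource = `
  precision mediump float;
  varying vec4 v_Color;        // the interpolated per-vertex color produced by rasterization
  varying vec2 v_TexCoord;     // interpolated texture coordinate
  uniform sampler2D u_Sampler; // a texture map can also be used to color the fragment

  void main() {
    // Fragments can be thrown away explicitly; they never reach the frame buffer.
    if (v_Color.a < 0.1) {
      discard;
    }
    // The final color: here, the texture color tinted by the interpolated vertex color.
    gl_FragColor = texture2D(u_Sampler, v_TexCoord) * v_Color;
  }
`;
```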
6. Tests and Blending: Check layer depth and opacity and blend layers
If you have used Photoshop and its layers, this stage should feel familiar.
- For 3D data, fragments at the same (x, y) coordinate are compared by their depth value z; by default this is gl_FragCoord.z. A larger depth value means the fragment lies behind another, so some fragments may end up invisible because the surface they belong to is defined behind other surfaces.
- This stage also checks the alpha value (which defines the transparency of a pixel) in order to blend the layers.
- RGBA pixel values are stored in the color buffer. At the same (x, y) position, complex scenes often have many overlapping fragments, so fragment-by-fragment comparisons are made repeatedly, constantly updating and replacing the existing depth and color values in the frame buffer.
- If the depth buffer (the depth test unit) is not disabled, the blending calculation cannot be carried out. When a scene contains both opaque and transparent objects, the general approach is: enable the depth test, draw the opaque geometry first, then set the depth buffer to read-only mode and draw the transparent geometry. Because the depth buffer can still be read, transparent fragments that lie behind opaque fragments are discarded; because it can no longer be written, the depth value at each (x, y) coordinate cannot be replaced by a smaller one, which ensures that the blending calculation can be performed (a short WebGL sketch follows below).
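To close, a rough WebGL sketch of the opaque-then-transparent drawing order described above; drawOpaqueObjects and drawTransparentObjects stand in for your own draw calls and are not real API functions.

```js
gl.enable(gl.DEPTH_TEST);                           // turn on the depth test unit
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA); // classic alpha blending

gl.depthMask(true);         // depth buffer writable
drawOpaqueObjects(gl);      // 1. draw the opaque geometry first

gl.depthMask(false);        // 2. make the depth buffer read-only
drawTransparentObjects(gl); // 3. transparent fragments behind opaque ones still fail the depth
                            //    test and are discarded, but transparent fragments no longer
                            //    overwrite each other's depth, so they can all be blended
gl.depthMask(true);         // restore depth writes for the next frame
```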