Rendering pipeline
- The rendering pipeline is the sequence of stages that every frame passes through on its way to the screen
- WebGL is a state machine: all state must be configured in advance. A draw call then tells the GPU to run the whole pipeline with that state, and the result ends up in a framebuffer (see the minimal sketch after this list)
- Setting up the state runs on the CPU; the pipeline triggered by the draw call runs on the GPU
- The pipeline contains programmable stages, configurable stages, fixed (non-configurable) stages, and optional stages
- GPU memory holds the vertex data and texture data (everything the shaders read)
- Different GPU vendors implement parts of the rendering pipeline differently
- The pipeline covered in this article: Vertex Data -> Vertex Shader -> Primitive Assembly (Clipping -> Back-face Culling -> Perspective Division -> Viewport Transform) -> Rasterization -> Fragment Shader -> Tests and Blending (Scissor Test -> Alpha Test -> Depth Test -> Stencil Test -> Alpha Blending -> Dithering)
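To make the state-machine idea concrete, here is a minimal, hedged TypeScript/WebGL sketch: all state (program, buffers, render states) is set on the CPU, and a single draw call hands the work to the GPU. The canvas selector and the shader sources are placeholder assumptions, not code from the original article.

```ts
// Minimal sketch: CPU-side state setup followed by one draw call.
const gl = (document.querySelector("#glcanvas") as HTMLCanvasElement).getContext("webgl")!;

function compile(type: number, source: string): WebGLShader {
  const shader = gl.createShader(type)!;
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    throw new Error(gl.getShaderInfoLog(shader) ?? "shader compile failed");
  }
  return shader;
}

// 1. Programmable state: the shader program.
const program = gl.createProgram()!;
gl.attachShader(program, compile(gl.VERTEX_SHADER,
  "attribute vec2 aPosition; void main() { gl_Position = vec4(aPosition, 0.0, 1.0); }"));
gl.attachShader(program, compile(gl.FRAGMENT_SHADER,
  "precision mediump float; void main() { gl_FragColor = vec4(1.0, 0.5, 0.2, 1.0); }"));
gl.linkProgram(program);
gl.useProgram(program);

// 2. Vertex data uploaded to GPU memory.
const buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([0, 0.5, -0.5, -0.5, 0.5, -0.5]), gl.STATIC_DRAW);
const aPosition = gl.getAttribLocation(program, "aPosition");
gl.enableVertexAttribArray(aPosition);
gl.vertexAttribPointer(aPosition, 2, gl.FLOAT, false, 0, 0);

// 3. Configurable state for later pipeline stages.
gl.enable(gl.DEPTH_TEST);

// 4. The draw call: everything after this runs on the GPU.
gl.drawArrays(gl.TRIANGLES, 0, 3);
```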
Vertex Data, Uniforms, Textures, and Defines
- Buffer objects store vertex data (positions, texture coordinates, normals, etc.) in GPU memory, commonly called video memory
- The advantage of buffer objects is that a large batch of data can be sent to the graphics card at once, instead of one vertex at a time (see the upload sketch after this list)
- Commonly used per-vertex data includes position, uv, normal, color, and the index list
- Loading data from disk into RAM is time-consuming
- Once the data has been uploaded to video memory, it can be released from RAM
- The GPU accesses video memory much faster than RAM, and most GPUs cannot read RAM directly
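A hedged sketch of uploading vertex and index data into buffer objects; the interleaved layout and the data values are assumptions chosen only for illustration.

```ts
// Upload positions/uvs and indices into GPU buffer objects (video memory).
function uploadMesh(gl: WebGLRenderingContext) {
  // Interleaved per-vertex data: x, y, z, u, v (placeholder values)
  const vertices = new Float32Array([
    -0.5, -0.5, 0, 0.0, 0.0,
     0.5, -0.5, 0, 1.0, 0.0,
     0.0,  0.5, 0, 0.5, 1.0,
  ]);
  const indices = new Uint16Array([0, 1, 2]);

  const vbo = gl.createBuffer();               // vertex buffer object
  gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
  gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW); // one bulk upload

  const ibo = gl.createBuffer();               // index buffer object
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, ibo);
  gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);

  return { vbo, ibo, indexCount: indices.length };
}
```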
Vertex Shader
- If the model matrix contains (non-uniform) scaling, normals must be transformed with the inverse transpose of that matrix (see the shader sketch after this list)
- The built-in variables involved include gl_Position, gl_PointSize, and other gl_-prefixed variables
- Common primitive types include GL_POINTS, GL_LINES, GL_LINE_STRIP, and GL_TRIANGLES
- The vertex shader outputs clip-space x, y, z coordinates between the near and far planes (not yet divided by w)
- The vertex shader runs once per vertex: N invocations for N vertices drawn
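A hedged GLSL ES 1.0 (WebGL 1) vertex shader sketch. The attribute/uniform names, and the assumption that the application computes the inverse-transpose normal matrix on the CPU, are illustrative only.

```ts
// Vertex shader source as a TypeScript string.
const vertexShaderSource = `
  attribute vec3 aPosition;
  attribute vec3 aNormal;

  uniform mat4 uModel;          // model matrix (may contain scaling)
  uniform mat4 uViewProjection; // view * projection
  uniform mat3 uNormalMatrix;   // inverse transpose of the model matrix, supplied by the app

  varying vec3 vNormal;

  void main() {
    vNormal = normalize(uNormalMatrix * aNormal);                  // normals use the inverse transpose
    gl_Position = uViewProjection * uModel * vec4(aPosition, 1.0); // clip-space output, not yet divided by w
    gl_PointSize = 4.0;                                            // only relevant when drawing GL_POINTS
  }
`;
```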
Tessellation (Surface Subdivision)
- An optional stage
Geometry Shader
- An optional stage
Primitive Setup
Culling And Clipping
- Primitives (or parts of primitives) whose x, y, and z components do not all fall within [-1, 1] are discarded
- If a line segment has one vertex inside the view frustum and the other outside, the segment is clipped and a new vertex is created where it intersects the frustum boundary
Back-face Culling
- Before a polygon can be culled, we must define which winding order counts as front-facing and which as back-facing
- By default counterclockwise winding is front-facing; this is controlled with glFrontFace(GL_CCW / GL_CW)
- Points and lines only have a front face, so this stage never discards them
- Use glCullFace() to choose whether front or back faces are culled: GL_FRONT, GL_BACK (the default), or GL_FRONT_AND_BACK
- Back-face culling is enabled with glEnable(GL_CULL_FACE) (see the sketch after this list)
- This technique should not be used when rendering translucent objects or objects whose back faces need to be visible
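A hedged WebGL (TypeScript) sketch of the same configuration; WebGL spells the calls gl.frontFace / gl.cullFace rather than glFrontFace / glCullFace.

```ts
// Enable back-face culling: counterclockwise triangles are treated as front-facing,
// and back faces are discarded before rasterization.
function enableBackFaceCulling(gl: WebGLRenderingContext) {
  gl.enable(gl.CULL_FACE);
  gl.frontFace(gl.CCW);   // counterclockwise = front-facing (the default)
  gl.cullFace(gl.BACK);   // cull back faces (the default)
}

// For translucent or double-sided geometry, turn it off again:
// gl.disable(gl.CULL_FACE);
```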
Perspective Division
- Perspective division (dividing by the w component) maps clip space into normalized device space, also known as the Canonical View Volume (CVV); a tiny sketch follows this list
- Perspective division is performed automatically by the hardware
- An orthographic projection does not change the w component (it stays 1), so the division has no visible effect
- After perspective division, normalized device coordinates (NDC) lie in a cube with x, y, and z values between -1.0 and 1.0
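A hedged sketch of what the hardware does at this step; the function name is illustrative only.

```ts
// Perspective division: clip-space (x, y, z, w) -> normalized device coordinates (x/w, y/w, z/w).
// The GPU does this automatically after clipping; shown here purely for illustration.
function clipToNdc([x, y, z, w]: [number, number, number, number]): [number, number, number] {
  return [x / w, y / w, z / w];
}

// With a perspective projection, w encodes depth: clipToNdc([2, 1, 8, 10]) -> [0.2, 0.1, 0.8].
// With an orthographic projection, w stays 1, so the division changes nothing.
```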
Viewport Transform
- The viewport transform maps x and y from NDC to window coordinates and scales the z coordinate from [-1, 1] to a depth value in [0, 1]; the depth range can be changed with glDepthRange(GLclampd near, GLclampd far), as sketched below
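In WebGL the same configuration looks like this (a hedged sketch; the canvas size is whatever the application uses):

```ts
// Map NDC to window coordinates and choose how NDC z maps to stored depth values.
function setUpViewport(gl: WebGLRenderingContext, width: number, height: number) {
  gl.viewport(0, 0, width, height);  // x, y span the whole canvas
  gl.depthRange(0.0, 1.0);           // NDC z in [-1, 1] -> depth in [0, 1] (the default range)
}
```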
Rasterization
- Rasterization mainly consists of two processes: triangle setup and triangle traversal
- Triangle setup: to determine which pixels the triangle covers, the pixel coordinates of its edges (the edge equations) are computed
- Triangle traversal: each candidate pixel's sample point is tested against those edges; covered samples generate fragments whose attributes are interpolated from the triangle's vertices (see the sketch after this list)
- Instead of filling polygons, rasterization can draw only edges or points; in desktop OpenGL this is controlled with glPolygonMode(GL_FRONT / GL_BACK / GL_FRONT_AND_BACK, GL_FILL / GL_LINE / GL_POINT) (WebGL does not expose this call)
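A hedged CPU-side sketch of triangle traversal using edge functions. It illustrates the idea, not how any particular GPU implements the stage.

```ts
type Point = { x: number; y: number };

// Signed area term: > 0 when p lies to the left of the edge a -> b.
function edgeFunction(a: Point, b: Point, p: Point): number {
  return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Returns the pixels whose centers are covered by a counterclockwise triangle.
function traverseTriangle(v0: Point, v1: Point, v2: Point): Point[] {
  const minX = Math.floor(Math.min(v0.x, v1.x, v2.x));
  const maxX = Math.ceil(Math.max(v0.x, v1.x, v2.x));
  const minY = Math.floor(Math.min(v0.y, v1.y, v2.y));
  const maxY = Math.ceil(Math.max(v0.y, v1.y, v2.y));
  const covered: Point[] = [];

  for (let y = minY; y <= maxY; y++) {
    for (let x = minX; x <= maxX; x++) {
      const p = { x: x + 0.5, y: y + 0.5 };  // sample at the pixel center
      const w0 = edgeFunction(v1, v2, p);
      const w1 = edgeFunction(v2, v0, p);
      const w2 = edgeFunction(v0, v1, p);
      // Inside the triangle when the sample is on the same side of all three edges.
      // w0, w1, w2 are also the unnormalized barycentric weights used to interpolate attributes.
      if (w0 >= 0 && w1 >= 0 && w2 >= 0) covered.push({ x, y });
    }
  }
  return covered;
}
```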
Fragment Shader
- The fragment shader computes each fragment's color (and, optionally, its depth value)
- The color it writes is not necessarily the final color on screen; later tests and blending can still modify or discard the fragment (a minimal shader sketch follows)
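A hedged GLSL ES fragment shader sketch pairing with the vertex shader above; the varying and uniform names are assumptions.

```ts
// Fragment shader source as a TypeScript string; runs once per fragment produced by rasterization.
const fragmentShaderSource = `
  precision mediump float;

  varying vec3 vNormal;    // interpolated from the vertex shader
  uniform vec3 uLightDir;  // normalized light direction, supplied by the app

  void main() {
    float diffuse = max(dot(normalize(vNormal), -uLightDir), 0.0);
    // This is only the fragment's provisional color; tests and blending come afterwards.
    gl_FragColor = vec4(vec3(diffuse), 1.0);
  }
`;
```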
Tests and Blending Stage (Tests & Blending)
- At each test, fragments that fail are discarded and skip all subsequent operations; fragments that pass every test go through operations such as blending and are finally written into the FrameBuffer for display
- Tests are toggled with glEnable/glDisable(GL_ALPHA_TEST / GL_STENCIL_TEST / GL_DEPTH_TEST) and blending with glEnable/glDisable(GL_BLEND); writes to the RGBA, depth, and stencil buffers can be controlled with glColorMask(GLboolean...), glDepthMask(GLboolean), and glStencilMask(GLuint), as sketched below
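The WebGL equivalents, as a hedged sketch (WebGL has no fixed-function alpha test, so it is omitted here):

```ts
// Enable the per-fragment tests and blending, and control which buffers may be written.
function configureTestsAndBlending(gl: WebGLRenderingContext) {
  gl.enable(gl.SCISSOR_TEST);
  gl.enable(gl.STENCIL_TEST);
  gl.enable(gl.DEPTH_TEST);
  gl.enable(gl.BLEND);

  gl.colorMask(true, true, true, true); // allow writes to all RGBA channels
  gl.depthMask(true);                   // allow depth writes
  gl.stencilMask(0xff);                 // allow writes to all stencil bits
}
```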
Scissor Test
- The scissor test checks each fragment against a user-specified scissor rectangle
- It avoids wasted rendering when the area of interest is smaller than the window
- Its main purpose is to discard pixels that fall outside the specified window region
- Enable it with glEnable(GL_SCISSOR_TEST) and specify the rectangle with glScissor() (see the sketch below)
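A hedged WebGL sketch restricting rendering to the lower-left quarter of the canvas (the region is arbitrary):

```ts
// Only fragments inside the scissor rectangle survive; everything else is discarded early.
function scissorToLowerLeftQuarter(gl: WebGLRenderingContext, width: number, height: number) {
  gl.enable(gl.SCISSOR_TEST);
  gl.scissor(0, 0, Math.floor(width / 2), Math.floor(height / 2)); // x, y, width, height in pixels
}
```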
Stencil Test
- The stencil test compares fragments against the stencil buffer; the stencil buffer can be updated conditionally based on the results of the stencil and depth tests
- It is enabled with glEnable(GL_STENCIL_TEST)
- The stencil test can be used to implement flat mirrors, planar shadows, and object outlines (see the outline sketch below)
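A hedged two-pass object-outline sketch in WebGL; drawObject and drawScaledObject are hypothetical draw routines the application would provide.

```ts
// Classic object-outline trick with the stencil buffer (two passes).
function drawOutlined(gl: WebGLRenderingContext,
                      drawObject: () => void,         // hypothetical: draws the object
                      drawScaledObject: () => void) { // hypothetical: draws it slightly enlarged
  gl.enable(gl.STENCIL_TEST);

  // Pass 1: draw the object normally and write 1 into the stencil buffer wherever it is drawn.
  gl.stencilFunc(gl.ALWAYS, 1, 0xff);
  gl.stencilOp(gl.KEEP, gl.KEEP, gl.REPLACE);
  gl.stencilMask(0xff);
  drawObject();

  // Pass 2: draw a slightly larger version only where the stencil is NOT 1, leaving a visible border.
  gl.stencilFunc(gl.NOTEQUAL, 1, 0xff);
  gl.stencilMask(0x00);          // do not modify the stencil buffer in this pass
  drawScaledObject();

  gl.stencilMask(0xff);          // restore the default write mask
}
```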
Alpha Test
- The fragment's alpha value is compared against a fixed reference value; fragments that fail the comparison are not written to the output
- Alpha testing is expensive and hurts performance, so it should only be used when necessary (see the sketch below)
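WebGL has no fixed-function alpha test; the usual replacement is discard in the fragment shader. A hedged sketch, with the texture and cutoff uniform names as assumptions:

```ts
// Alpha-test-style cutout via discard: fragments below the cutoff never reach the later stages.
const alphaCutoutFragmentShader = `
  precision mediump float;

  varying vec2 vUv;
  uniform sampler2D uTexture;
  uniform float uAlphaCutoff;   // e.g. 0.5

  void main() {
    vec4 color = texture2D(uTexture, vUv);
    if (color.a < uAlphaCutoff) {
      discard;                  // fails the "alpha test": the fragment is thrown away
    }
    gl_FragColor = color;
  }
`;
```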
Depth Test
- Fragments that lie behind what has already been drawn are discarded, because they would be covered anyway
- Enable the depth test with glEnable(GL_DEPTH_TEST)
- Set the depth comparison operator with glDepthFunc()
- When rendering translucent objects, keep the depth test on but turn depth writes off (see the sketch below)
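A hedged WebGL sketch of the usual opaque/translucent split:

```ts
// Opaque pass: test and write depth as usual.
function setUpOpaqueDepthState(gl: WebGLRenderingContext) {
  gl.enable(gl.DEPTH_TEST);
  gl.depthFunc(gl.LESS);   // keep the fragment only if it is closer than the stored depth
  gl.depthMask(true);
}

// Translucent pass: still test against opaque depth, but do not write depth,
// so translucent surfaces behind other translucent surfaces are not lost.
function setUpTranslucentDepthState(gl: WebGLRenderingContext) {
  gl.enable(gl.DEPTH_TEST);
  gl.depthMask(false);
}
```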
Alpha blending
- Alpha blending combines the fragment's color with the color already in the framebuffer according to the fragment's alpha value, producing a translucent effect
- Blending is enabled with glEnable(GL_BLEND)
- Use glBlendFunc(), glBlendFuncSeparate(), and glBlendEquation() to set up different blend effects; common factors and equations include GL_ZERO, GL_ONE, GL_SRC_ALPHA, GL_FUNC_ADD, and so on (see the sketch below)
- More complex blending can be implemented in the fragment shader, but it will be slower than the built-in blending hardware
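A hedged WebGL sketch of the classic source-alpha blend:

```ts
// Classic alpha blending: out = src.rgb * src.a + dst.rgb * (1 - src.a).
function enableAlphaBlending(gl: WebGLRenderingContext) {
  gl.enable(gl.BLEND);
  gl.blendEquation(gl.FUNC_ADD);
  gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);

  // If the alpha channel should blend differently from RGB, use blendFuncSeparate:
  // gl.blendFuncSeparate(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, gl.ONE, gl.ONE_MINUS_SRC_ALPHA);
}
```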
Dithering
- Dithering is controlled with glEnable/glDisable(GL_DITHER) and is enabled by default
- Dithering is a technique for systems with few available colors: it trades resolution for apparent color depth by varying (dithering) the color values of neighboring pixels
Framebuffer (FBO, FrameBufferObject)
- A framebuffer is a collection of buffers: color buffer, depth buffer, stencil buffer, and accumulation buffer
- The actual render target can be a Texture or a RenderBuffer; the slot it is bound to on the framebuffer is called an Attachment
- The framebuffer object itself does not store the image data; the textures and renderbuffers attached to it do
- A framebuffer can carry three kinds of attachments: a color attachment (ColorAttachment), a depth attachment (DepthAttachment), and a stencil attachment (StencilAttachment); their corresponding storage areas are known as the color buffer (ColorBuffer), depth buffer (DepthBuffer), and stencil buffer (StencilBuffer) (see the FBO sketch after this list)
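A hedged WebGL sketch of creating an off-screen FBO with a texture color attachment and a renderbuffer depth-stencil attachment (the size is arbitrary):

```ts
// Create an off-screen framebuffer: color goes to a texture, depth + stencil to a renderbuffer.
function createFbo(gl: WebGLRenderingContext, width: number, height: number) {
  const fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);

  // Color attachment: a texture we can sample from later.
  const colorTexture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, colorTexture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, colorTexture, 0);

  // Depth + stencil attachment: a renderbuffer (it is never sampled, so a texture is not needed).
  const depthStencil = gl.createRenderbuffer();
  gl.bindRenderbuffer(gl.RENDERBUFFER, depthStencil);
  gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_STENCIL, width, height);
  gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_STENCIL_ATTACHMENT, gl.RENDERBUFFER, depthStencil);

  if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) {
    throw new Error("framebuffer is incomplete");
  }
  gl.bindFramebuffer(gl.FRAMEBUFFER, null); // switch back to the default framebuffer
  return { fbo, colorTexture, depthStencil };
}
```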
Color buffer (common precisions)
- 16-bit: RGB565, 5 bits red, 6 bits green, 5 bits blue, no alpha channel (green gets the extra bit because the human eye is most sensitive to green)
- 24-bit: RGB888, 8 bits each for red, green, and blue, no alpha channel
- 32-bit: RGBA8888, 8 bits each for red, green, blue, and alpha
- If Multiple Render Targets (MRT) are used, there may be more than one color attachment
Depth buffer (depth cache)
- Stores the depth of each pixel so the pipeline can decide whether to keep or discard a new fragment: if the new fragment is in front of the stored one, its color overwrites the old color; if it is behind (occluded), it is discarded
Stencil buffer
- Used to control whether a given location in the color buffer may be written; often used for shadow effects