1. Problems arising from the rendering process
The previous article introduced drawing pyramids, garlands, and flower discs with a flat shader. To make the rendering look more like the real world, lighting effects need to be added.
The image above shows a donut (torus) rendered with the default light shader. The torus is a built-in shape provided by the GLUT library and can be drawn with a single call.
When stationary, the donut looks perfect. But once it rotates, back faces show through.
The reason hidden surfaces become visible is as follows:
With the default light shader, front faces are rendered normally while back faces are rendered black. When the donut rotates, the back faces that should stay hidden are no longer occluded by the front faces.
The pyramids, garlands, and flower discs drawn with the flat shader did not show this phenomenon when rotated. That is because the flat shader renders all primitives in a single color, so front and back faces look identical and there is no visual difference after rotation.
2. The painter's algorithm
A possible solution is to sort the triangles, then render the more distant triangles first and the closer ones afterwards. This approach is known as the painter's algorithm.
In the picture, the distant mountains are drawn first, then the grass, and finally the tree, which solves the hidden-surface problem.
But this kind of processing is very inefficient for a computer, because every overlapping pixel has to be written multiple times (overdraw).
More importantly, for certain scenes the painter's algorithm cannot solve the problem at all. When triangles overlap each other cyclically, as shown, there is no correct back-to-front order for whole triangles, and the painter's algorithm simply gives up.
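The back-to-front ordering at the heart of the painter's algorithm can be sketched in plain C. The `Tri` record and function names here are illustrative, not part of any OpenGL API; each triangle is assumed to carry a precomputed representative depth (e.g. its centroid's distance from the camera):

```c
#include <stdlib.h>

/* Hypothetical triangle record: only the data the painter's algorithm needs. */
typedef struct {
    int id;       /* which triangle */
    float depth;  /* representative distance from the camera */
} Tri;

/* Comparator: larger depth (farther away) sorts first. */
static int farther_first(const void *a, const void *b) {
    float da = ((const Tri *)a)->depth, db = ((const Tri *)b)->depth;
    return (da < db) - (da > db);
}

/* Painter's algorithm: draw back to front so nearer triangles
 * overwrite farther ones in the color buffer. */
void painter_sort(Tri *tris, size_t n) {
    qsort(tris, n, sizeof(Tri), farther_first);
}
```

After sorting, the triangles would be drawn in array order; the cyclic-overlap case above fails precisely because no single per-triangle depth produces a correct order.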
3. Face culling
One of the reasons triangles are classified as front-facing or back-facing is so that they can be culled.
Looking at any 3D object from any direction, you can never see its entire surface, so some faces must be invisible. To the observer, these invisible faces neither render nor affect what is seen; for the computer, there is no reason to render what cannot be seen, so it can simply be thrown away.
The problem then becomes how to distinguish visible faces from invisible ones, which is equivalent to distinguishing front faces from back faces.
Front and back are distinguished by the order in which the triangle's vertices are connected, and that winding order determines the direction of the normal vector:
- Front face: a triangle whose vertices are connected in counterclockwise order, with its normal vector pointing toward the observer
- Back face: a triangle whose vertices are connected in clockwise order, with its normal vector pointing away from the observer
But the front and back of a solid also depend on the viewer’s point of view.
- When the observer is on the right, the right triangle is wound counterclockwise and is a front face, while the left triangle is wound clockwise and is a back face
- When the observer is on the left, the left triangle is wound counterclockwise and is a front face, while the right triangle is wound clockwise and is a back face
Intuitively, whichever face the observer sees is the front face.
So front and back are determined both by the winding order of the triangle's vertices and by the observer's direction; as the observer's angle changes, so do the front and back.
Note that "back" here refers to an occluded face, not to the two sides of a single surface. Points at the same 3D position on the two sides of the same face have exactly the same color; a face has no concept of thickness.
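In screen space, the winding order can be checked with a signed-area (cross product) test. A minimal sketch follows; the `Vec2` type and function names are illustrative, and it assumes OpenGL's default convention that counterclockwise means front-facing:

```c
/* 2D screen-space vertex (after projection). */
typedef struct { float x, y; } Vec2;

/* Twice the signed area of the triangle, i.e. the z component of the
 * cross product of its two edges. Positive => vertices are in
 * counterclockwise order; negative => clockwise. */
float signed_area2(Vec2 a, Vec2 b, Vec2 c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

/* Front face under the default GL_CCW convention. */
int is_front_face(Vec2 a, Vec2 b, Vec2 c) {
    return signed_area2(a, b, c) > 0.0f;
}
```

Swapping any two vertices flips the sign, which is exactly why the same triangle can be a front face from one side and a back face from the other.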
Because OpenGL is a state machine, using face culling is extremely simple:
- Enable face culling (back faces are culled by default)
glEnable(GL_CULL_FACE);
- Disable face culling
glDisable(GL_CULL_FACE);
Because of the state machine, any capability you enable must be disabled after use, otherwise it will affect subsequent rendering.
As shown in the picture, face culling solves the donut's hidden-surface problem, but when it rotates to certain angles the donut looks as if a bite has been taken out of it.
This is because, with culling alone, when fragments overlap OpenGL has no way to decide which front face should be drawn on top.
To solve this problem, you need depth testing.
4. Depth testing
What is depth?
Depth is simply how far a pixel is from the camera in the 3D world: its Z value.
What is a depth buffer?
The depth buffer is an area of memory dedicated to storing the depth value of each pixel. The greater the depth (Z) value, the farther the pixel is from the camera.
Why do we need a depth buffer?
When a scene contains many models, the depth buffer lets opaque objects occlude each other correctly regardless of the order in which they are rendered.
In real-time rendering, depth testing works the other way around from the painter's algorithm: objects close to the observer can be handled first, and it determines per pixel which parts of which objects are rendered and which are occluded, so it too solves the visibility problem, without any sorting.
It works like this: when a fragment is about to be rendered, its depth value is compared with the depth value already stored in the depth buffer for that pixel. If the fragment is farther from the camera than the stored value, it should not be visible and is discarded directly; otherwise its color overwrites the corresponding value in the color buffer and, if depth writing is enabled, its depth value is written into the depth buffer.
Depth writing: updating the depth buffer with the depth value of the fragment that has just passed the test. The purpose of depth writing is to update the depth threshold; fragments tested afterwards that fail against the new threshold are discarded.
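The test-then-write logic above can be sketched in plain C. The `Pixel` record and `depth_test` function are illustrative names, and the sketch assumes the default "nearer wins" comparison (as with GL_LESS):

```c
/* One pixel's state in the color and depth buffers. */
typedef struct {
    float color;  /* stand-in for an RGBA value */
    float depth;  /* stored depth; 1.0 = far plane (the clear value) */
} Pixel;

/* Process one incoming fragment against the buffers.
 * Returns 1 if the fragment passed (and was written), 0 if discarded. */
int depth_test(Pixel *px, float frag_color, float frag_depth, int depth_write) {
    if (frag_depth >= px->depth)     /* not nearer than what is stored: discard */
        return 0;
    px->color = frag_color;          /* passed: overwrite the color buffer */
    if (depth_write)
        px->depth = frag_depth;      /* update the threshold only if writing is on */
    return 1;
}
```

With `depth_write` set to 0, a passing fragment still colors the pixel but leaves the stored threshold untouched, which is exactly the semi-transparent case discussed in the blending section.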
Thanks to the state machine, depth testing is also very simple to use:
- Enable depth testing
glEnable(GL_DEPTH_TEST);
- Disable depth testing
glDisable(GL_DEPTH_TEST);
- Enable or disable depth writing
glDepthMask(GLboolean flag);
Note that if no depth buffer has been requested, depth-testing commands are ignored.
- Request the color, depth (and stencil) buffers
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_STENCIL);
As shown, with depth testing enabled the donut's hidden-surface and notch problems are both solved.
Depth values are usually stored with 16, 24, or 32 bits, most commonly 24; the more bits, the more precise the depth. Depth values range over [0, 1]: smaller values are closer to the observer, larger values farther away. Because of the limited precision of the depth buffer, when two depth values are extremely close the surfaces interleave and flicker. (Imagine rendering an airplane with a sticker logo on the wing: the wing surface and the sticker have nearly identical depth values.)
This phenomenon is called Z-fighting. To resolve it, use polygon offset.
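Why limited precision causes Z-fighting can be shown by quantizing depths the way a fixed-point depth buffer stores them. This is an illustrative sketch (valid for bit widths below 32): two depths that differ by less than one buffer step collapse to the same stored value, so which surface "wins" becomes arbitrary.

```c
/* Quantize a depth in [0,1] to an n-bit integer depth-buffer value,
 * as a fixed-point depth buffer would store it. Assumes bits < 32. */
unsigned quantize_depth(float z, int bits) {
    unsigned max = (1u << bits) - 1u;
    return (unsigned)(z * (float)max + 0.5f);
}
```

Two depths 0.000005 apart are indistinguishable in a 16-bit buffer but distinct in a 24-bit one, which is why more depth bits reduce (but never fully eliminate) Z-fighting.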
5. Polygon offset
Polygon offset already appeared in the outline step of the pyramid example in the previous article.
The idea of polygon offset is: make a slight change to the depth values before the depth test is performed, creating a gap between them, so that two overlapping surfaces can be told apart by depth.
Again thanks to the state machine, polygon offset is easy to use:
- Enable polygon offset (the enum corresponds to the polygon's fill mode)
glEnable(GL_POLYGON_OFFSET_POINT);
glEnable(GL_POLYGON_OFFSET_LINE);
glEnable(GL_POLYGON_OFFSET_FILL);
- Disable polygon offset
glDisable(...);
- With polygon offset enabled, you also need to specify the offset
glPolygonOffset(GLfloat factor, GLfloat units);
An offset greater than 0 pushes the model farther from the camera; an offset less than 0 pulls it closer. In general, passing -1 and 1 to glPolygonOffset is sufficient. For the pyramid example, the outline uses -1.
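Per the OpenGL specification, the depth offset actually applied is factor * DZ + r * units, where DZ is the polygon's maximum depth slope and r is the smallest implementation-dependent value guaranteed to produce a resolvable depth difference. A sketch of that formula, under the assumption of a 24-bit depth buffer (the real r is up to the implementation):

```c
/* Sketch of the offset glPolygonOffset adds to a polygon's depth values:
 * offset = factor * DZ + units * r, where DZ is the polygon's maximum
 * depth slope and r the smallest resolvable depth difference.
 * Here r is assumed to be 1/2^24 (a 24-bit buffer). */
float polygon_offset(float factor, float units, float max_depth_slope) {
    const float r = 1.0f / 16777216.0f;  /* assumed resolvable difference */
    return factor * max_depth_slope + units * r;
}
```

This is why the factor term matters for steeply sloped polygons (large DZ) while the units term supplies a constant minimum nudge; a negative result pulls the polygon toward the camera, as the -1 outline case does.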
Preventing Z-fighting:
- Do not place two objects too close together, so their triangles do not overlap when rendered. This only requires inserting a small offset between objects in the scene, and it avoids Z-fighting entirely.
- Set the near clipping plane as far from the observer as possible. Depth precision is highest near the near clipping plane, so pushing it farther out improves precision across the whole clipping range. However, objects close to the observer may then be clipped, so the clipping-plane parameters need to be tuned.
- Use a depth buffer with more bits. The depth buffer is usually 24 bits, but some hardware now uses 32 bits to improve precision.
6. Blending
Related terms:
Destination color: the color value already stored in the color buffer
Source color: the color value entering the color buffer as the result of the current render command
If a fragment passes all the tests and blending is enabled, the existing color in the color buffer (the destination color) is blended with the current fragment's color (the source color); how the two combine is controlled by the blending equation.
The default blending equation is:

Cf = (Cs * S) + (Cd * D)

- Cf: the final computed color
- Cs: the source color
- Cd: the destination color
- S: the source blending factor
- D: the destination blending factor
To set the blending factors, call glBlendFunc:

glBlendFunc(GLenum S, GLenum D);

- S: the source blending factor
- D: the destination blending factor
The most commonly used combination of blending factors is:

glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
If the alpha value of the source color is 0.6, then S = 0.6 and the destination blending factor D = 1 - 0.6 = 0.4.
Substituting into the blending equation:

Cf = (Cs * 0.6) + (Cd * 0.4);
The higher the source color's alpha, the larger its contribution to the result and the less of the destination color is retained.
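For a single color channel, the equation with this factor pair can be sketched as follows (`blend_channel` is an illustrative name, not an OpenGL function):

```c
/* One channel of the default blending equation Cf = Cs*S + Cd*D,
 * using the common GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA factor pair. */
float blend_channel(float src, float dst, float src_alpha) {
    float S = src_alpha;         /* GL_SRC_ALPHA */
    float D = 1.0f - src_alpha;  /* GL_ONE_MINUS_SRC_ALPHA */
    return src * S + dst * D;
}
```

With src_alpha = 0.6, a white source over a black destination yields 0.6, matching the worked example above.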
Besides the blending factors, the blending equation itself can also be changed:

glBlendEquation(GLenum mode);
The blending equation can be switched among the available modes; the default is addition (GL_FUNC_ADD).
Again, thanks to the state machine, blending is easy to use:
- Enable blending
glEnable(GL_BLEND);
- Disable blending
glDisable(GL_BLEND);
Note that when blending image layers you generally do not need to touch the blending equation itself, only toggle blending on and off; when the fragment shader outputs a color directly rather than sampling a layer, the blending factors and equation need to be configured explicitly.
Because of depth testing and depth writing, opaque objects can be rendered in any order. But once blending is enabled, semi-transparent objects exist in the scene, and the render order becomes important.
If depth writing is on when a semi-transparent object is drawn in front, its fragments update the depth buffer; anything behind those depth values is then not rendered, so the semi-transparent object incorrectly blocks the scene behind it.
So when rendering semi-transparent objects, depth writing must be turned off, but depth testing must remain on.
Why keep depth testing?
Because the purpose of depth testing is to decide, by comparing depth values, whether a fragment should be discarded. A fragment behind an opaque object should be discarded, and depth testing is naturally what makes that decision.
Based on these rendering properties of translucent and opaque objects, a rendering engine will usually sort objects first and render them in this order:
- Render all opaque objects first, with depth testing and depth writing both enabled
- Then render semi-transparent objects from back to front, with depth testing enabled but depth writing disabled
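That two-phase ordering can be sketched as a sort over a hypothetical draw queue (the `Item` record and function names are illustrative):

```c
#include <stdlib.h>

/* Hypothetical draw record. */
typedef struct {
    int id;
    int transparent;  /* 1 = semi-transparent */
    float depth;      /* distance from the camera */
} Item;

/* Opaque objects first (any relative order works, thanks to the depth
 * buffer); then transparent objects, farthest first (back to front). */
static int draw_order(const void *pa, const void *pb) {
    const Item *a = pa, *b = pb;
    if (a->transparent != b->transparent)
        return a->transparent - b->transparent;  /* opaque (0) before transparent (1) */
    if (!a->transparent)
        return 0;                                /* opaque order does not matter */
    return (a->depth < b->depth) - (a->depth > b->depth); /* farther first */
}

void sort_draw_queue(Item *items, size_t n) {
    qsort(items, n, sizeof(Item), draw_order);
}
```

While iterating the sorted queue, a renderer would keep depth writing on for the opaque prefix and call the equivalent of glDepthMask(GL_FALSE) before the transparent suffix.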
For blending, compare the effect of toggling depth testing and depth writing in blending demo 1 and blending demo 2.
7. The difference between face culling and depth testing
I once puzzled over face culling versus depth testing; perhaps you are wondering the same thing by this point.
Face culling and depth testing seem to do the same job: both ultimately remove occluded pixels. So why are two steps needed? It seems that doing only depth testing would give the same final result, so what is the difference?
Face culling judges by the triangle's winding and normal vector: once a triangle is determined to face away, the whole triangle is discarded at once. Depth testing, by contrast, judges pixel by pixel, which is far more expensive. This efficiency difference is the essential reason the two coexist.
Consider two special cases:
Depth testing only:
Conceivably, processing every pixel one by one gives the same rendered result, just less efficiently.
Face culling only:
The donut example shows that in the dynamic case, face culling alone causes rendering errors. But does it give the right result in the static case? The answer is still no.
Besides being fully visible or fully invisible, a face can also be partially visible. The normal-vector test would classify a partially visible face as fully visible, producing rendering errors; further testing is needed to guarantee a correct result, and that further test is the depth test.