“What do you know about the render pipeline?”
When I heard this interview question, I was at my wits' end. I had studied graphics long ago, read the Red Book and the Blue Book, and was familiar with OpenGL, but many details of the rendering pipeline were still fuzzy to me, which was embarrassing. Back in 2019, unsure whether to go into industry, I interviewed for an engine engineer position and got dissed over exactly this question.
After a few months of study and review, I got an internship offer from TiMi, a Tencent studio. Unfortunately, my lab advisor would not let me go, so I had to wait for the autumn recruitment season. In this article, I will lay out my understanding of the rendering pipeline flow. The article is divided into three parts:
- GPU Rendering Process
- Graphics rendering pipeline
- How to implement a rendering pipeline in pure C++ with support for PBR effects.
This article involves no heavy mathematics; it is all conceptual, and I will try to describe the whole rendering process in plain language. If you want to know how the perspective projection matrix is derived, or how the View matrix is determined, I recommend Mathematics for 3D Game Programming and Computer Graphics. That book covers a great deal of what a game engine needs.
1. GPU Rendering Process
All rendering is the process of transferring data from the CPU to the GPU.
From the GPU’s point of view, rendering is very straightforward. The overall framework of the program can be roughly divided into the following steps:
- The application calls the graphics API (OpenGL/DX12).
- The API calls into the GPU driver.
- The GPU driver converts the graphics API calls into code the GPU can recognize.
- The CPU passes data in memory to the GPU.
- At this point, the GPU has the data and program code to execute and render the image to the screen.
That is the rendering process viewed globally: the essence of rendering is transferring CPU data to the GPU for computation.
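To make the last two steps concrete, here is a minimal sketch of handing vertex data to the GPU through OpenGL. It assumes an OpenGL context and function loader (e.g., GLFW + GLAD) are already set up; that boilerplate is omitted.

```cpp
// Sketch: the CPU hands vertex data in memory to the GPU via the
// graphics API (OpenGL). Context/loader setup is assumed and omitted.
float vertices[] = {
    // x,     y,    z
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.0f,  0.5f, 0.0f,
};

GLuint vbo = 0;
glGenBuffers(1, &vbo);               // ask the driver for a buffer object
glBindBuffer(GL_ARRAY_BUFFER, vbo);  // make it the active vertex buffer
glBufferData(GL_ARRAY_BUFFER,        // copy CPU memory into GPU memory
             sizeof(vertices), vertices, GL_STATIC_DRAW);
```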
2. Graphics Rendering Pipeline
Having looked at rendering from a global perspective in the previous section, this section dives into the rendering pipeline to understand how rendering actually happens and how those cool images get drawn.
Even the most beautiful images are created by the render pipeline we are discussing.
As shown in Figure 2, the colors are gorgeous and vivid. Let's analyze how this image is rendered. The stone texture of the road and the wood texture of the hut are the work of texture maps. The flame light source and the natural light of the sky are the work of lighting. The beautiful models are the result of modelers' efforts in tools like 3ds Max… Of course, to push visual quality further, most current AAA games add PBR lighting, layered maps, bloom, particle systems, and so on. But all of these advanced features are built on top of the render pipeline. We can abstract what rendering needs as follows:
Things needed for rendering. It looks very simple, but as the saying goes, the great affairs of the world must be done in detail.
Now it's time to speed up and get on the mental roller coaster of the render pipeline. Hold on tight.
2.1 Render Pipeline Overview
Render pipeline overview. Don’t be afraid. This is everything.
We’ll take a look at each of these stages next, but first, let’s cover some basic concepts:
- Application phase: a phase that runs on the CPU, typically handling input, animation, event processing, and so on.
- Geometry: responsible for vertex-by-vertex and primitive-by-primitive operations.
- Rasterization: draws pixel by pixel based on the transformed, projected vertices and their shading information; it converts 2D points in screen space into pixels on the screen.
- Buffers: each buffer stores a different kind of rendering information. For example, the Z-buffer stores per-pixel depth, and the stencil buffer can be used for effects such as shadows. Drawing a gorgeous image is, in essence, stacking buffers and computations.
- Shader: in OpenGL you generally write a vertex shader and a fragment shader. The mathematics of the lighting equation, such as the Phong model or a PBR lighting model, is usually evaluated in the shader.
It doesn’t matter if you don’t understand these concepts for the time being, just understand them first, and with further discussion later, you will fill in these pits one by one.
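To make the Shader concept a little more tangible, here is a tiny vertex/fragment shader pair written as GLSL source inside C++ string literals, as many OpenGL programs do. It is a minimal sketch with made-up names, not a full lighting model:

```cpp
// Minimal GLSL shader pair stored as C++ raw string literals.
// The vertex shader runs once per vertex; the fragment shader runs
// once per pixel-sized fragment.
const char* vertexSrc = R"(
    #version 330 core
    layout (location = 0) in vec3 aPos;  // per-vertex position
    uniform mat4 uMVP;                   // model-view-projection matrix
    void main() {
        gl_Position = uMVP * vec4(aPos, 1.0);  // transform to clip space
    }
)";

const char* fragmentSrc = R"(
    #version 330 core
    out vec4 FragColor;
    void main() {
        // The lighting math (Phong, PBR, ...) would live here.
        FragColor = vec4(1.0, 0.5, 0.2, 1.0);  // flat orange for now
    }
)";
```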
2.2 Application Phase
The Application phase runs on the CPU, so the developer has complete control over what happens in it. This phase may be hard to grasp on its own, so let's take an ACT game as an example:
- Our characters need gorgeous character animation, such as breaking through the sky.
- When our character slashes at a monster, the blade has to connect with it, which means collision detection.
- Keyboard and mouse input must be handled. D + D + J + K: whirlwind slash.
- Logically there is a monster right in front of us, but we can't see it because nothing has been rendered yet. We need to pass the monster's information on to the next stage, the geometry stage, and determine which models need to be rendered.
- Water is flowing, creating a beautiful environment through texture animation.
In summary, the work in the Application phase is to handle animation, collisions, input, and which models need to be rendered. It's that simple.
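Put together, one frame of the Application phase could look like the following sketch. Every type and function here is hypothetical, invented just to show the shape of the CPU-side work:

```cpp
#include <vector>

// Hypothetical stand-ins for a real engine's data structures.
struct Model  { bool visible = true; /* vertices, transform, ... */ };
struct Camera { /* position, orientation, ... */ };
struct Scene  { std::vector<Model> models; Camera camera; };

// One frame of the Application phase, running on the CPU.
std::vector<const Model*> applicationPhase(Scene& scene, float dt) {
    // 1. Input: translate keyboard/mouse (D + D + J + K...) into commands.
    // 2. Animation: advance character and texture animations by dt.
    // 3. Collision: test the sword's hitbox against the monsters.
    // (Steps 1-3 elided; they are ordinary game logic.)
    (void)dt;  // used by the animation step in a real engine

    // 4. Visibility: decide which models the geometry stage should receive.
    std::vector<const Model*> toRender;
    for (const Model& m : scene.models)
        if (m.visible)                 // real engines do frustum culling here
            toRender.push_back(&m);
    return toRender;                   // handed off to the geometry stage
}
```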
2.3 Geometry Stage
The geometry stage handles vertex-by-vertex operations. Why do I emphasize vertex by vertex? Because a model, no matter how complex, is made up of vertices. The relationship between vertices is determined by indices, and the vertices are combined into triangles. In the geometry stage, we focus on the vertices of the model.
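For instance, the quad below is described by four vertices plus six indices that stitch them into two triangles (the positions are made up):

```cpp
// A quad expressed as vertices + indices: four vertices, and index
// triples that combine them into two triangles.
float vertices[] = {
    // x,     y,    z
    -0.5f, -0.5f, 0.0f,   // 0: bottom-left
     0.5f, -0.5f, 0.0f,   // 1: bottom-right
     0.5f,  0.5f, 0.0f,   // 2: top-right
    -0.5f,  0.5f, 0.0f,   // 3: top-left
};
unsigned int indices[] = {
    0, 1, 2,   // first triangle
    2, 3, 0,   // second triangle
};
```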
2.3.1 Model & View Transform
For each object, we have a local coordinate system, which we call the Model coordinate system. It's easy to understand: when any of us talks about the position of our arms, we don't usually give the arms' latitude and longitude in the world; we say the arms are on the left and right sides of the body. In this frame, the reference point is ourselves. This is the Model frame. The Model coordinate system is great for artists when modeling objects, like an artist sticking a golden wing on my back.
But when it comes to rendering, just knowing each object's local coordinate system is not enough. For example, I am writing this article in a laboratory at Tianjin University. Suppose I want to state my position in the world and I say: I am in the middle of the laboratory. That completely fails to locate me. Therefore, we need to transform the local coordinate system into the world coordinate system.
The formula for the transform into the world coordinate system is then: my world coordinate = (Tianjin University's transform in the world) × (the laboratory's transform within Tianjin University) × (my local coordinate within the laboratory).
Okay, that’s enough examples, let’s get to some serious ones.
- Model Transform: transforms the vertices and normals of the model.
- Each model has its own Model Transform.
- The world coordinate system is unique. After every model has been transformed by its Model Transform, its position can be uniquely described in that one world coordinate system.
But for a model, world coordinates alone are still not enough. The world is only meaningful to us insofar as we observe it. The only way for us as players to see the virtual world is through a Camera. Our goal is to express the world-space positions of the other models relative to the Camera. This introduces the View Transform.
- View Transform: converts the model's world coordinates into the Camera's observation (view) coordinate system. It expresses the relationship between the model and the Camera.
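Here is a minimal sketch of the two transforms using GLM, a C++ math library commonly paired with OpenGL. All the positions are invented for illustration:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    // Model Transform: place this object at (10, 0, -5) in the world.
    glm::mat4 model = glm::translate(glm::mat4(1.0f),
                                     glm::vec3(10.0f, 0.0f, -5.0f));

    // View Transform: a Camera at (0, 0, 3) looking at the origin.
    glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 0.0f, 3.0f),  // camera position
                                 glm::vec3(0.0f, 0.0f, 0.0f),  // look-at target
                                 glm::vec3(0.0f, 1.0f, 0.0f)); // up direction

    // A vertex in the model's local coordinate system (homogeneous).
    glm::vec4 localPos(1.0f, 2.0f, 0.0f, 1.0f);

    // local -> world -> view: exactly the chain described above.
    glm::vec4 viewPos = view * model * localPos;
    (void)viewPos;
}
```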
2.3.2 Projection
Anyone who has studied graphics knows about orthographic projection and perspective projection. What may not be clear, however, is where these odd-looking constructions fit into rendering. Projection is one of the most critical steps in graphics. Its purpose is to map coordinates in the 3D virtual space onto a 2D plane. This 2D plane is the basis for our rasterization: every pixel of our display is generated based on this 2D plane.
In the rendering pipeline, Projection is easy to carry out. From the aspect ratio of the current screen and a few other given parameters, we can construct a Projection matrix. By multiplying the vertex positions in View Space by the Projection matrix, the Clip coordinates of each vertex are easily obtained. After the perspective divide (dividing by w), the x and y values become the corresponding 2D coordinates.
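Continuing the GLM sketch from the previous section, constructing the Projection matrix and obtaining Clip coordinates looks like this; the field of view, aspect ratio, and plane distances are illustrative values:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    // Build a perspective Projection matrix from screen parameters.
    glm::mat4 projection = glm::perspective(
        glm::radians(45.0f),   // vertical field of view
        16.0f / 9.0f,          // aspect ratio of the current screen
        0.1f,                  // near plane
        100.0f);               // far plane

    // A vertex position already in View Space.
    glm::vec4 viewPos(1.0f, 2.0f, -5.0f, 1.0f);

    // Multiply by the Projection matrix to get Clip coordinates.
    glm::vec4 clipPos = projection * viewPos;

    // Perspective divide: x/w and y/w are the 2D coordinates (NDC)
    // that rasterization will map onto screen pixels.
    float ndcX = clipPos.x / clipPos.w;
    float ndcY = clipPos.y / clipPos.w;
    (void)ndcX; (void)ndcY;
}
```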