Texture mapping
The texture acts as the “skin” of an object and is responsible for providing its base color, so that, say, a yellow object and a black object look clearly different even under identical lighting. When performing texture mapping, computers usually store the texture information of a three-dimensional object in a two-dimensional array, as shown in the following figure. To use a texture, each triangle vertex stores extra (u, v) coordinates as a vertex attribute, which map the vertex into texture space (a triangle is planar, so it maps conveniently onto the planar texture space). For any point inside the triangle, its (u, v) coordinates can be obtained by barycentric interpolation of the three vertices. In this way, every pixel covered by the triangle can look up the color at its corresponding position in the texture.
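To make the lookup concrete, here is a minimal sketch (the triangle, UVs, and sample point are illustrative values, not from the original) of interpolating per-vertex (u, v) with barycentric coordinates:

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates (alpha, beta, gamma) of point p in triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    beta = (d11 * d20 - d01 * d21) / denom
    gamma = (d00 * d21 - d01 * d20) / denom
    return 1.0 - beta - gamma, beta, gamma

# Screen-space triangle and its per-vertex (u, v) attributes (made-up values).
a, b, c = np.array([10.0, 10.0]), np.array([100.0, 20.0]), np.array([40.0, 90.0])
uv_a, uv_b, uv_c = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])

alpha, beta, gamma = barycentric(np.array([50.0, 40.0]), a, b, c)
uv = alpha * uv_a + beta * uv_b + gamma * uv_c  # interpolated (u, v) for this pixel
print(uv)
```

Note that a real rasterizer must additionally make this interpolation perspective-correct, which is omitted here.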
Texture mapping problems and solutions
Texture is too small
The problem of a texture that is too small is relatively easy to understand. If we apply a 100×100 texture to a 500×500 region of the screen, several screen-space pixels fall within a single texel of the texture, which inevitably causes blocky distortion. Simply using the texel nearest to the mapped (u, v) coordinate (nearest-neighbor sampling) tends to produce serious artifacts. For example, if the mapped coordinate lands on the red dot in the figure, nearest-neighbor sampling selects the closest orange dot. This seems reasonable but is actually undesirable; bilinear interpolation is introduced to alleviate this aliasing.
Bilinear Interpolation
Bilinear interpolation is easy to understand and easy to implement: the color of the red dot is computed from the colors of the four surrounding texels through two rounds of linear interpolation.
- Compute the horizontal ratio s of the red point between the abscissae of the surrounding texels; interpolate the colors of u00 and u10 to obtain P, and the colors of u01 and u11 to obtain Q.
- Compute the vertical ratio t of the red point between the ordinates of the surrounding texels; interpolate the colors of P and Q to obtain the final color of the red point.
From the image above, we can see that bilinear interpolation largely solves the problem of a too-small texture. (There is another method, bicubic interpolation, which interpolates with cubic polynomials over a 4×4 neighborhood of texels. Its quality may be better, but it is much slower to compute, so it is not discussed here.)
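A minimal sketch of the two-step interpolation (assuming the texture is a NumPy array and that (u, v) are already scaled to texel units; both are illustrative assumptions):

```python
import numpy as np

def lerp(x, v0, v1):
    """Linear interpolation: v0 + x * (v1 - v0)."""
    return v0 + x * (v1 - v0)

def sample_bilinear(texture, u, v):
    """Bilinearly sample an (H x W x 3) texture at continuous texel coords (u, v)."""
    h, w = texture.shape[:2]
    # Texel centers sit at integer + 0.5, so shift before flooring.
    x, y = u - 0.5, v - 0.5
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    s, t = x - x0, y - y0                      # horizontal and vertical ratios
    x0, y0 = max(x0, 0), max(y0, 0)            # clamp at the texture border
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    u00, u10 = texture[y0, x0], texture[y0, x1]
    u01, u11 = texture[y1, x0], texture[y1, x1]
    p = lerp(s, u00, u10)    # first horizontal lerp
    q = lerp(s, u01, u11)    # second horizontal lerp
    return lerp(t, p, q)     # vertical lerp between P and Q
```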
The texture is too large
Intuitively, a texture that is too small does cause problems, but wouldn’t a bigger texture always be better? Why would that cause problems too? In fact, the distortion caused by an oversized texture can be even worse, because one screen pixel now maps to a whole region of the texture. (Imagine a 500×500 texture and a 100×100 screen: spreading the screen pixels evenly across texture space, each screen pixel covers on average 5×5 = 25 texels. This is what happens when the texture is too large.)

This phenomenon is mostly caused by perspective projection. Picture a floor covered with a repeating grid texture: nearby tiles appear large, while a complete distant tile may occupy only a few pixels in screen space. A single screen pixel then inevitably corresponds to a whole range of points on the texture map. Intuitively, trying to represent the color information of a whole region of texture space with a single point sample must lead to serious distortion! (From a signal-processing point of view, the sampling frequency is too low to reconstruct the original signal.)

As can be seen from the figure above, when the sampling rate is high enough, one pixel corresponds to one (u, v) coordinate on the texture map, whereas when the sampling rate is too low, one pixel corresponds to an entire region of the texture map. The farther a blue screen-space pixel is from the camera, the larger its corresponding range in texture space. This increasingly severe undersampling is vividly described as the footprint of a screen pixel in texture space. Rather than using supersampling (too much computation), this problem is addressed with the well-known Mipmap technique.
Mipmap
As mentioned above, the color of a single sample point cannot represent the color of an entire footprint region. Would computing the average of all the colors within such a region be feasible? Yes, so our goal shifts from a Point Query to a Range Query. There is still a problem, though: as the figure above shows, different screen pixels have footprints of different sizes. Consider the following example: the footprints of the pixels in the near and far circles differ in size, and the footprint of the distant circle is necessarily larger than that of the near one. We therefore need range queries at different scales, which is exactly what a Mipmap provides. Level 0 is the original texture, with the highest precision. With each level up, the average of four adjacent texels is merged into one texel, so the higher the level, the larger the footprint area a single texel summarizes. The next step is to select a texture level based on the size of the screen pixel’s footprint and then perform a point query on it, which is equivalent to a range query on the original texture!
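A minimal sketch of building the chain (assuming a square, power-of-two texture; the random base texture is just a stand-in):

```python
import numpy as np

def build_mipmaps(texture):
    """Build a mipmap chain: each level averages 2x2 blocks of the level below.
    Assumes a square power-of-two (H x W x 3) float texture."""
    levels = [texture.astype(np.float64)]
    while levels[-1].shape[0] > 1:
        prev = levels[-1]
        # Merge each 2x2 block of texels into one by averaging.
        next_level = (prev[0::2, 0::2] + prev[1::2, 0::2] +
                      prev[0::2, 1::2] + prev[1::2, 1::2]) / 4.0
        levels.append(next_level)
    return levels

base = np.random.rand(256, 256, 3)          # level 0: the original texture
chain = build_mipmaps(base)
print([lv.shape[0] for lv in chain])         # 256, 128, 64, ..., 1
```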
So how do we determine which level of the texture to use? The neighboring pixels of a screen pixel are used to estimate the footprint size and hence the level D, as in the diagram below: in screen space, take the pixel to the right of the current pixel and the pixel above it (four neighbors can also be used), look up the texture-space coordinates of these points, compute the texture-space distances from the current pixel to the right pixel and to the top pixel, and take the maximum L of the two. As the formula in the figure shows, the level is then D = log2 L.
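In code, the estimate might look like this (a sketch; the neighbor UVs are assumed to be supplied by the rasterizer):

```python
import numpy as np

def mipmap_level(uv, uv_right, uv_above, tex_size):
    """Estimate mipmap level D = log2(L) from a pixel's texture-space footprint.
    uv, uv_right, uv_above: (u, v) of the pixel and its right/upper neighbors
    in [0, 1]; tex_size converts them to texel units."""
    duv_dx = (np.asarray(uv_right) - np.asarray(uv)) * tex_size
    duv_dy = (np.asarray(uv_above) - np.asarray(uv)) * tex_size
    L = max(np.linalg.norm(duv_dx), np.linalg.norm(duv_dy))
    return np.log2(max(L, 1.0))  # clamp so D >= 0 when the footprint is sub-texel
```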
The value of D is generally not an integer. There are two ways to handle this:
- Round D to the nearest integer and query that level.
- Use trilinear interpolation: perform bilinear interpolation on level ⌊D⌋ and on level ⌈D⌉ separately, then linearly interpolate between the two results (sketched below).
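A sketch of the trilinear lookup, reusing `sample_bilinear` and the mipmap chain from the earlier sketches ((u, v) assumed in [0, 1]):

```python
import numpy as np

def sample_trilinear(mipmaps, u, v, D):
    """Bilinear on levels floor(D) and ceil(D), then lerp by the fraction of D."""
    D = float(np.clip(D, 0.0, len(mipmaps) - 1))
    d0 = int(np.floor(D))
    d1 = min(d0 + 1, len(mipmaps) - 1)
    # Scale (u, v) from [0, 1] to texel units on each level before sampling.
    c0 = sample_bilinear(mipmaps[d0], u * mipmaps[d0].shape[1], v * mipmaps[d0].shape[0])
    c1 = sample_bilinear(mipmaps[d1], u * mipmaps[d1].shape[1], v * mipmaps[d1].shape[0])
    t = D - d0                      # fractional part of D
    return (1.0 - t) * c0 + t * c1  # linear interpolation across levels
```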
Anisotropic filtering
Mipmap’s default Range Query assumes a square region, but a pixel’s footprint is often not square, as shown below. The principle of anisotropic filtering: on top of the Mipmap, also store texture levels scaled independently in the horizontal and vertical directions (so-called Ripmaps), which accommodates horizontally or vertically stretched footprints. However, this only reduces the over-blurring described above, because actual rendering also produces diagonal or other footprint shapes that are hard to approximate. Although anisotropic filtering gives better quality, its extra storage cost is 3 times that of the original-resolution texture.
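A quick back-of-the-envelope check of that cost (my own calculation, not from the original): storing every combination of horizontal and vertical half-resolutions costs (1 + 1/2 + 1/4 + …) × (1 + 1/2 + 1/4 + …) = 2 × 2 = 4 times the original texture, i.e. 3× extra, whereas the ordinary Mipmap chain costs only 1 + 1/4 + 1/16 + … = 4/3, i.e. 1/3 extra.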
Texture mapping application
Normal Maps
In the Blinn-Phong lighting model, normal vectors play an important role: different normals significantly change the result of the lighting calculation. For example, applying the normal information of a high-precision model to a low-precision model greatly improves the rendering quality of the low-precision model.
Bump Maps
Bump Maps are actually very similar to Normal Maps. A Normal Map stores the normal information directly, whereas a Bump Map stores each point’s logical relative height (which can be negative). The variation in height indicates the unevenness of the object’s surface; from it, the normal vector at each point can be derived and then used in the lighting calculation. Compared with Normal Maps, it just takes one extra step to go from height to normal.
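A minimal sketch of that height-to-normal step (assuming a flat local frame where the unperturbed normal is (0, 0, 1); the `scale` factor and forward differences are illustrative choices):

```python
import numpy as np

def bump_to_normal(height, scale=1.0):
    """Derive per-texel normals from a Bump Map (H x W height field).
    Finite differences give the slopes dh/du and dh/dv; the normal is the
    normalized (-dh/du, -dh/dv, 1). np.roll wraps at the border, which is
    acceptable for tiling textures."""
    dh_du = np.roll(height, -1, axis=1) - height   # forward difference in u
    dh_dv = np.roll(height, -1, axis=0) - height   # forward difference in v
    n = np.stack([-scale * dh_du, -scale * dh_dv, np.ones_like(height)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)
```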
Displacement Maps
As mentioned above, a Bump Map is a logical change in height, while a Displacement Map is a physical change in height (it actually moves the vertices). This is where the two differ, and it can be seen at the silhouette and shadow edges of the object:
Environment Map
Environment mapping, as the name suggests, stores the ambient light in a map. Imagine that the light source is very far away from the object: the incoming light direction is then nearly identical for every point on the object, and the only variable is the viewing direction. Light arriving from all directions can therefore be stored on a sphere, with every 3D direction marking one texel. Environment mapping mainly uses the following two kinds of mapping.
Spherical mapping
For complex spatial shapes, spherical mapping is usually used: the environment is mapped onto a sphere, the sphere is then unwrapped into a rectangle, and a one-to-one correspondence is obtained. The idea of spherical mapping is as follows (a small coordinate-conversion sketch follows the list):
- Suppose a unit sphere surrounds the center of the object. When looking from the object’s center toward some position (x, y, z), a ray is emitted from the center in that direction; the ray intersects the unit sphere at some point.
- From the coordinates (xo, yo, zo) of the intersection between the ray and the unit sphere, compute the intersection’s yaw and pitch angles, and map them to the corresponding coordinate (u, v) on the spherical map.
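A small sketch of the direction-to-(u, v) conversion (assuming y is the up axis; the exact angle conventions vary between implementations):

```python
import numpy as np

def direction_to_uv(d):
    """Map a 3D direction to spherical-map (u, v) via yaw and pitch."""
    d = np.asarray(d, dtype=float)
    x, y, z = d / np.linalg.norm(d)
    yaw = np.arctan2(z, x)                       # angle around the up axis, (-pi, pi]
    pitch = np.arcsin(np.clip(y, -1.0, 1.0))     # elevation, [-pi/2, pi/2]
    u = (yaw + np.pi) / (2.0 * np.pi)            # wrap yaw into [0, 1)
    v = (pitch + np.pi / 2.0) / np.pi            # wrap pitch into [0, 1]
    return u, v
```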
Cube map
A spherical map distorts things in some places (especially near the poles) when it is unwrapped into a rectangle, so a cube map can be used instead: the object is surrounded by a cube, and the directional information on the sphere is stored on the cube’s six faces.
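A sketch of the cube-map lookup (the face names and per-face orientations are illustrative; real graphics APIs fix their own conventions):

```python
def cubemap_lookup(d):
    """Pick the cube face hit by direction d and the (u, v) inside that face."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    # The face is chosen by the component with the largest magnitude;
    # the remaining two components, divided by it, give in-face coordinates.
    if ax >= ay and ax >= az:
        face, s, t = ('+x' if x > 0 else '-x'), y / ax, z / ax
    elif ay >= ax and ay >= az:
        face, s, t = ('+y' if y > 0 else '-y'), x / ay, z / ay
    else:
        face, s, t = ('+z' if z > 0 else '-z'), x / az, y / az
    # Map the in-face coordinates from [-1, 1] to [0, 1].
    return face, (s + 1.0) / 2.0, (t + 1.0) / 2.0
```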
Shadow Mapping
The illumination calculations introduced previously do not include Shadow; Shadow Mapping is a common way to compute shadows. The basic principle of a shadow map is as follows (in the figure, points that the eye sees but the light’s red rays cannot reach are in shadow):
- Place an extra camera at the light source looking along the light direction, and render a map called the Shadow Map that records the depth seen at each pixel (each pixel stores only the nearest depth; no shading is done in this pass).
- When the main camera (the eye) shades a pixel, apply the light camera’s MVP transformation to the corresponding point to obtain its position (x′, y′, z′) in the light camera’s screen space. Compare the depth z′ with the occluder depth sampled from the Shadow Map at (x′, y′); if z′ is greater than the stored depth (meaning the light toward this point is blocked), reduce the brightness of the pixel (see the sketch below).
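A sketch of that second pass (assuming `light_mvp` and `shadow_map` come from the first pass rendered from the light; the depth convention and bias value are illustrative):

```python
import numpy as np

def in_shadow(p_world, light_mvp, shadow_map, bias=1e-3):
    """Return True if point p_world is occluded from the light."""
    # Transform the world-space point into the light camera's clip space.
    p = light_mvp @ np.append(p_world, 1.0)
    p = p[:3] / p[3]                         # perspective divide -> NDC in [-1, 1]
    x = int((p[0] + 1.0) / 2.0 * shadow_map.shape[1])
    y = int((p[1] + 1.0) / 2.0 * shadow_map.shape[0])
    if not (0 <= x < shadow_map.shape[1] and 0 <= y < shadow_map.shape[0]):
        return False                         # outside the light's view: assume lit
    # Blocked if this point is farther from the light than the recorded depth;
    # the small bias suppresses self-shadowing noise.
    return p[2] > shadow_map[y, x] + bias
```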
The above is only the basic Shadow Mapping scheme. In practice it still has many shortcomings (such as self-shadowing noise and the lack of Soft Shadows), and the industry tends to adopt improved Shadow Mapping techniques (such as depth Bias, PCF and PCSS).