This section covers bilinear interpolation, Mipmap, trilinear interpolation, and related topics.

Texture magnification

When the texture resolution is too low, the magnified image becomes blurred.

Suppose we render a wall at 4K resolution, but its texture is only 256×256. For any point on the wall, we can find a corresponding position on the texture, and that position is generally not an integer. If we simply round it to the nearest integer, we are looking up a single pixel of the texture, called a texel, and many screen pixels end up mapping to the same texel, which is not what we want.

Bilinear interpolation

If we simply snap the sampling point to the nearest texel (by rounding), we get a blocky, blurry image. But if we instead take the four texels surrounding the sampling point and perform Bilinear Interpolation, we get a much better result.

Let u00, u10, u01, u11 denote the four surrounding texels. First interpolate horizontally: between u00 and u10, and between u01 and u11, obtaining the intermediate values u0 and u1. Then interpolate u0 and u1 vertically to get the final color value. So bilinear interpolation is actually three linear interpolations: two horizontal and one vertical.

Bicubic Interpolation gives even better results than bilinear interpolation. It takes the 16 texels surrounding the sampling point and, using four texels at a time, performs cubic interpolation instead of linear interpolation.

Texture minification

When the texture is too large, sampling artifacts such as moiré patterns appear in the distance.

In the near region, the texture area covered by a pixel is relatively small, so we can simply query the texture at the position corresponding to the pixel center.

At a distance, a single pixel covers a large area of the texture, so it is obviously unreasonable to use the single texel at the pixel center as the color of the pixel; what we really want is the average value over the area the pixel covers.

Such artifacts can be reduced by supersampling each distant pixel at a very high rate, but supersampling obviously costs a lot of performance. Since sampling is what causes the aliasing, what if we avoided sampling altogether?

Originally we need to compute the color of a pixel that covers a large texture area, that is, the average of the texture within that range. But if we could immediately look up which range of the texture a distant pixel covers, and what the average value over that range is, we could avoid sampling entirely.

Furthermore, when the range is only about one texel in size, the looked-up value is already accurate. And because pixels at different distances cover texture regions of different sizes, we need to be able to query the average of the texture over a range of any size — a range query.

Mipmap

Mipmap creates a series of images from the original, each level having half the width and height of the previous one. Clearly, mipmapping an image requires storing a number of additional images.
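Building the chain of levels can be sketched as repeated 2×2 box-filter downsampling. This is an illustrative sketch assuming a square, power-of-two grayscale image stored as nested lists; `downsample` and `build_mipmap` are names chosen here, not from any library.

```python
def downsample(img):
    """Halve a square image by averaging each 2x2 block of texels."""
    n = len(img) // 2
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(n)] for y in range(n)]

def build_mipmap(img):
    """Return the list of mip levels, from the original down to 1x1."""
    levels = [img]                 # level 0 is the original image
    while len(levels[-1]) > 1:
        levels.append(downsample(levels[-1]))
    return levels

base = [[0, 0, 4, 4],
        [0, 0, 4, 4],
        [8, 8, 12, 12],
        [8, 8, 12, 12]]
chain = build_mipmap(base)
print(len(chain))        # 3 levels: 4x4, 2x2, 1x1
print(chain[2][0][0])    # the 1x1 level is the average of everything: 6.0
```

Each texel at level k thus stores the average of a 2^k × 2^k block of the original, which is exactly the precomputed range average we wanted.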

How much extra storage does Mipmap need?

In fact, the extra storage is only one third of the original image size. This is the limit of a geometric series, which may seem a little abstract, so we can also see it in a more visual way.

Suppose we take three copies of every level; the final proportion is unaffected. Put the three copies of the level-0 image together to form a large square with its lower-right quarter missing. The three copies of the next level fit into that corner, which again leaves a smaller lower-right corner missing; the three copies of the level after that fill it, and so on until the last level. In the limit, the three copies of all the extra levels exactly fill an area equal to one original image, so the extra storage itself is 1/3 of the original.
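The geometric-series version of the same argument is easy to check numerically: each level has 1/4 as many texels as the previous one, so the extra storage is 1/4 + 1/16 + 1/64 + … = 1/3 of the original.

```python
# Partial sum of the geometric series for the extra mip-level storage.
extra = sum((1/4) ** k for k in range(1, 50))
print(extra)   # converges to 1/3
```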

How do we know the size of the texture region covered by the pixel we want to query?

We know that any pixel maps to an area of the texture, so we need to estimate the size of that area. Each sampling point has neighboring pixels, so we can project the pixel and its neighbors into texture space as well.

In screen space, each pixel is exactly one pixel away from its neighbors. In texture space, the distance between the projections of two neighboring pixel centers can be denoted L, and its value can be approximated by finite differences of the texture coordinates.


L=\max \left(\sqrt{\left(\frac{d u}{d x}\right)^{2}+\left(\frac{d v}{d x}\right)^{2}}, \sqrt{\left(\frac{d u}{d y}\right)^{2}+\left(\frac{d v}{d y}\right)^{2}}\right)

However, the distances to the different neighbors may vary, so for simplicity we take the largest one as L. Using L as the side length and the projected pixel center as the center, we obtain a square region in texture space.
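The formula above can be sketched with finite differences. Here `uv(x, y)` is a hypothetical mapping from pixel coordinates to texture coordinates, standing in for the interpolated (u, v) a rasterizer would provide; `footprint_L` is a name chosen for this sketch.

```python
import math

def footprint_L(uv, x, y):
    """Estimate the texture-space footprint side length for pixel (x, y)."""
    u0, v0 = uv(x, y)
    u1, v1 = uv(x + 1, y)      # neighbor in x: approximates du/dx, dv/dx
    u2, v2 = uv(x, y + 1)      # neighbor in y: approximates du/dy, dv/dy
    lx = math.hypot(u1 - u0, v1 - v0)
    ly = math.hypot(u2 - u0, v2 - v0)
    return max(lx, ly)         # take the larger of the two distances

# Example: a mapping that stretches the texture 4x in u and 2x in v.
uv = lambda x, y: (4.0 * x, 2.0 * y)
print(footprint_L(uv, 10, 10))  # -> 4.0
```

The max is exactly the simplification mentioned above: the square footprint uses the larger of the two neighbor distances as its side length.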

How do we query the average value of a region of side length L using Mipmap?

If the region of side length L is 1×1, it corresponds to one texel, so the pixel can look up its value directly on the original texture. If the region covered on the texture is 4×4, it shrinks to 2×2 at the first mip level and to 1×1 at the second; in other words, at level 2 the square footprint corresponds to exactly one texel. In general, we just need to find the mip level at which the square footprint on the texture corresponds to one texel, and there is clearly a logarithmic relationship between the footprint size and the mip level.


D=\log _{2} L
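A small sketch of the level computation, with a clamp so that footprints smaller than one texel map to level 0 rather than a negative level (`mip_level` is a name chosen here):

```python
import math

def mip_level(L):
    """Mip level for a square footprint of side L; level 0 is full resolution."""
    return max(0.0, math.log2(max(L, 1e-9)))  # clamp avoids log of 0

print(mip_level(1.0))  # 0.0  -> sample the original texture
print(mip_level(4.0))  # 2.0  -> a 4x4 footprint is one texel at level 2
print(mip_level(1.8))  # ~0.85, a fractional level
```

The last example already hints at the problem addressed next: D is generally not an integer.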

Trilinear interpolation

If you look carefully, we have only computed discrete levels. A fractional level like 1.8 has no image of its own.

How do we handle these fractional levels in Mipmap?

The answer is interpolation. We take the two integer levels adjacent to the fractional level and perform bilinear interpolation within each of them. Then we linearly interpolate between the two results according to the fractional part of the level, giving a continuous result between levels. This is known as Trilinear Interpolation.
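The between-level step can be sketched as follows. `sample_level(level, u, v)` stands for a bilinear lookup within one mip level (as in the bilinear section); here it is stubbed with a per-level constant just to make the between-level lerp visible. All names are chosen for this sketch.

```python
import math

def trilinear(sample_level, u, v, D):
    """Interpolate between the two mip levels surrounding fractional level D."""
    d0 = int(math.floor(D))
    d1 = d0 + 1
    t = D - d0                       # fractional part of the level
    c0 = sample_level(d0, u, v)      # bilinear result at the finer level
    c1 = sample_level(d1, u, v)      # bilinear result at the coarser level
    return c0 + t * (c1 - c0)        # one more lerp, between the levels

# Stub: level k returns the constant 10*k, so the answer is easy to check.
stub = lambda level, u, v: 10.0 * level
print(trilinear(stub, 0.5, 0.5, 1.8))  # ~18.0, i.e. 80% of the way to level 2
```

Two bilinear lookups plus one linear interpolation between them is why the scheme is called trilinear.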

The defects of Mipmap

Mipmap is not perfect: it still produces excessive blur (overblur) in the distance. This is because Mipmap can only answer range queries over square regions. If a pixel's footprint in texture space is a rectangle, Mipmap's square query will inevitably box in a lot of useless texels, so the averaged result is naturally unsatisfactory.

Anisotropic filtering

Anisotropic Filtering partially solves this problem. Its storage overhead, however, is three times the size of the original image.

It can be seen that Mipmap only generates uniformly scaled squares of different sizes, that is, the images on the diagonal of the grid; it does not generate images with different aspect ratios.

Compared with Mipmap, anisotropic filtering additionally compresses the original image independently in the horizontal and vertical directions. Therefore it can also answer fast queries for texture footprints that are stretched into axis-aligned rectangles.

Even so, for awkward diagonal footprints, a rectangle will still box in many useless texels, so overblur cannot be entirely avoided.

EWA filtering

For an arbitrary irregular footprint, EWA Filtering (elliptically weighted average) decomposes it into multiple ellipses and queries the shape several times until it is fully covered. These multiple queries avoid overblur quite well, but they obviously cost more performance.