Shadow Mapping
Real-time shadow generation is an important part of real-time rendering: shadows make the rendered image noticeably more realistic. So how do we generate a shadow? One basic method is Shadow Mapping.

Basic principle of Shadow Mapping:
1. Shadow generation Pass:
   - Place an extra camera (the Light Camera) at the light source's position, looking along the light's direction.
   - Render the scene into a texture (the Shadow Map) that records, for each pixel, the depth of the closest surface seen by the Light Camera (only the nearest depth is kept; no other shading is computed). This is the occluder depth.
2. Render Pass:
   - Transform each shading point into the Light Camera's space to obtain its coordinates shadowCoord = (x', y', z').
   - Sample the Shadow Map at (x', y') to get the occluder depth and compare it with z': if the stored depth is smaller than z', the point is occluded and lies in shadow; otherwise it is lit. (A minimal code sketch of this test follows the list.)
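As a concrete illustration of the render-pass test, here is a minimal GLSL sketch. It assumes the shading point has already been transformed into the Light Camera's space, and it uses an `unpack` helper (which decodes a depth packed into RGBA) like the one used by the code later in this article; the names are illustrative, not part of any particular engine.

```glsl
// Pass 2: hard-shadow visibility test for one shading point.
// shadowCoord = (x', y', z') is the point transformed into the Light Camera's
// space and remapped to [0, 1]; unpack() decodes a depth stored in RGBA.
float hardShadow(sampler2D shadowMap, vec4 shadowCoord) {
  // Closest depth the light sees at (x', y'), recorded in pass 1
  float occluderDepth = unpack(texture2D(shadowMap, shadowCoord.xy).rgba);
  // If something nearer to the light was recorded, this point is in shadow
  return occluderDepth < shadowCoord.z ? 0.0 : 1.0;
}
```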
Shadow Mapping is a very useful method for generating shadows in real time. It does not need to know the scene's geometry in advance; it only requires one extra rendering pass before the main pass to build the Shadow Map, and it already gives a good shadow result. It usually applies to directional (parallel) lights. Note that if the light is not directional, you may need to generate several shadow maps in different directions. For example, for a point light you may need to render the six faces of a cubemap, each with a 90 degree field of view and a 1:1 aspect ratio, along the six axis directions.
Shadow Bias
Like an ordinary digital image, a shadow map is made of pixels, but each pixel stores a depth value rather than an RGB color. The recorded depth is therefore discrete rather than continuous, and the resolution of the shadow map directly determines how coarse this discretization is.
Shadow Bias is a technique introduced to deal with Shadow Acne. Since Shadow Bias itself can cause problems, an adaptive bias method is presented later.
- If Shadow Bias is too small, it will cause Shadow Acne
- If Shadow Bias is too large, it will lead to Peter Panning
Shadow Acne
The root cause of shadow acne is that the shadow map's resolution is insufficient, so multiple shading points map to the same shadow-map texel. In the figure, the yellow arrows are the incoming light rays, the black segment is the actual surface, and the yellow step-like line shows the depth values stored in the shadow map.
Because the shadow map is a discrete sampling of the scene, the stored depth follows a step-like pattern. Compared with the actual surface, part of the stored depth is larger than the true depth (those parts, along the black segment, do not receive shadow), while part of it is smaller than the true depth (those parts, along the yellow segment, are wrongly judged to be in shadow), so alternating shadow stripes appear.
This happens because the object's actual depth is not equal to its sampled depth (the actual depth is greater than the sampled depth), so the sampled copy of the surface effectively shadows the actual surface; that is why it is called self-shadowing.
Solutions:
- Directly add an offset (bias) to the sampled shadow-map depth. This effectively pushes the occluder depth farther away, making it less likely to register occlusion (see the sketch after this list).
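As a minimal sketch, assuming the same `unpack` helper and light-space coordinates as above, the fix changes the depth test of the basic render pass like this:

```glsl
// Biased depth test: the sampled occluder depth is pushed slightly farther
// away before the comparison, so a surface no longer occludes itself.
float hardShadowBiased(sampler2D shadowMap, vec4 shadowCoord) {
  const float bias = 0.005; // illustrative constant; an adaptive version is discussed below
  float occluderDepth = unpack(texture2D(shadowMap, shadowCoord.xy).rgba);
  return (occluderDepth + bias) < shadowCoord.z ? 0.0 : 1.0;
}
```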
Peter Panning
This phenomenon only appears when a shadow bias is used. As shown in the figure above, light often leaks between objects that should be in contact. Because the bias is too large, the computed shadow no longer fits the object tightly, leaving a gap in between, as if the shadow had detached from the object (like Peter Pan flying away); hence the name Peter Panning.
Solutions:
- Do not use an excessively large bias.
- Avoid thin geometry (e.g. thin walls or thin floors): as long as the geometry is thicker than the bias, the shadow boundary falls inside the geometry, so the separation between shadow and object is hard to notice.
- Skip the bias entirely: when generating the shadow map in the first pass, render only back faces (cull front faces), which effectively raises the stored occluder depth and acts like a large threshold.
Slope Scale Based Depth Bias
As seen above, if the bias is too small it may fail to remove shadow acne, and if it is too large it may cause severe Peter Panning. Slope Scale Based Depth Bias adapts the bias to the inclination of the surface in order to minimize both problems (for example, when the light hits the surface perpendicularly the bias should be close to 0; the shallower the angle between the light and the surface, the larger the bias should be).
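A minimal sketch of such an adaptive bias, assuming the surface normal and the direction towards the light are available in the shader; the constants and the clamp are illustrative, not taken from any specific engine:

```glsl
// Slope-scaled depth bias: small when the light hits the surface head-on,
// larger when the light grazes the surface.
float slopeScaledBias(vec3 normal, vec3 lightDir) {
  float cosTheta = clamp(dot(normal, lightDir), 0.0, 1.0);                 // cos of the angle between light and normal
  float tanTheta = sqrt(1.0 - cosTheta * cosTheta) / max(cosTheta, 1e-4);  // tan of that angle
  const float constantBias = 0.001;
  const float slopeScale  = 0.005;
  return clamp(constantBias + slopeScale * tanTheta, 0.0, 0.01);
}
```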
Anti-Aliasing
Shadow Mapping still suffers from shadow aliasing (jagged shadow edges). Percentage Closer Filtering (PCF) is the standard fix for these jaggies: instead of deciding the shadow from a single sample, it takes multiple samples within a certain range, so visibility is no longer just 0 or 1 but a gradient of values, and the shadow edges become much less jagged. PCSS and VSSM, discussed later, build on this to produce and optimize soft shadows.
PCF
There are two main problems encountered during PCF sampling:
- Selection of sampling points
- Determination of sampling range
We tackle the first problem, the selection of sampling points, here. When sampling several coordinates within a certain range around the original point, the NUM_SAMPLES sampling locations can be generated by a distribution sampling function. To make shadow edges softer, we can use better distributions:
- Uniform-disk sampling: coordinates are chosen uniformly at random within a disk; the resulting pattern looks chaotic and the sampled result is quite noisy.
- Poisson-disk sampling: coordinates are also chosen randomly within a disk, but under the constraint that any two samples are at least a minimum distance apart (a sketch of such a sample generator follows this list).
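For illustration, a minimal sketch of a sample-offset generator in the spirit of Poisson-disk sampling. The global names `disk` and `NUM_SAMPLES` match the PCF code below; the spiral-style placement and the `rand_2to1` hash are assumptions of this sketch (a common cheap stand-in for a true Poisson-disk pattern).

```glsl
#define NUM_SAMPLES 16
#define PI2 6.283185307179586

vec2 disk[NUM_SAMPLES];  // sample offsets shared with the PCF / PCSS code below

// Simple pseudo-random value in [0, 1) from a 2D seed (assumed helper).
float rand_2to1(vec2 uv) {
  return fract(sin(dot(uv, vec2(12.9898, 78.233))) * 43758.5453);
}

// Fill 'disk' with roughly evenly spaced offsets on the unit disk: samples
// spiral outwards, so neighbouring samples keep some distance from each other.
void poissonDiskSamples(vec2 seed) {
  float angle = rand_2to1(seed) * PI2;                  // random start angle per pixel
  float angleStep = PI2 * 10.0 / float(NUM_SAMPLES);    // several turns around the disk
  float radius = 1.0 / float(NUM_SAMPLES);
  float radiusStep = radius;
  for (int i = 0; i < NUM_SAMPLES; ++i) {
    disk[i] = vec2(cos(angle), sin(angle)) * pow(radius, 0.75);
    radius += radiusStep;
    angle += angleStep;
  }
}
```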
The specific algorithm process of PCF is as follows:
- When computing visibility, instead of sampling the Shadow Map only at the original coordinate, sample several coordinates within a certain range around it.
- Each sampled depth is compared with z', and the average of the comparison results is taken as the visibility.
```glsl
float visibility_PCF(sampler2D shadowMap, vec4 coords) {
  const float bias = 0.005;
  float sum = 0.0;
  // Initialize the Poisson-disk sample offsets
  poissonDiskSamples(coords.xy);
  for (int i = 0; i < NUM_SAMPLES; ++i) {
    // Sample the shadow map at an offset position and unpack the stored depth
    float depthInShadowmap = unpack(texture2D(shadowMap, coords.xy + disk[i] * 0.001).rgba);
    // 0.0 if this sample occludes the point, 1.0 if it is lit
    sum += ((depthInShadowmap + bias) < coords.z ? 0.0 : 1.0);
  }
  return sum / float(NUM_SAMPLES);
}
```
As you can see from the figure below, the jagged appearance of shadows is much reduced with the use of PCF.
PCSS
The second problem in PCF is the choice of the sampling range, i.e. the filter kernel size:
- The smaller the filter kernel, the less blurring and the harder (sharper) the shadow
- The larger the filter kernel, the more blurring and the softer the shadow
The core idea of Percentage Closer Soft Shadows (PCSS) is to control the filter kernel size in the PCF algorithm in order to generate soft shadows.
In general, the implementation process of PCSS algorithm is as follows:
1. Blocker Search: obtain the average occluder depth dBlocker over a region.
2. Penumbra estimation: use dBlocker to compute the filter kernel size wPenumbra.
3. Percentage Closer Filtering: run the PCF algorithm with the computed filter kernel size.
Blocker Search
Blocker Search obtains the average occluder depth over a region. To do this, we sample the shadow map within a certain range; whenever a sampled depth is smaller than the receiver depth dReceiver, that sample is considered a blocker and contributes to the average occluder depth. The average obtained after multiple samples is taken as dBlocker. So how do we determine this range? As shown in the figure, the query range is determined by the size of the (area) light and the distance between the shading point and the light source; the specific derivation is omitted here.
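A minimal sketch of this step, reusing the `disk` offsets and `unpack` helper from the code above. The fixed search radius is an assumption of this sketch; ideally it would be derived from the light size and the receiver's distance as described above.

```glsl
// STEP 1 of PCSS: average depth of the occluders found around the shading point.
// uv is the shadow-map coordinate (x', y'); zReceiver is the receiver depth z'.
float findBlocker(sampler2D shadowMap, vec2 uv, float zReceiver) {
  const float searchRadius = 0.01; // assumed fixed search range
  float blockerDepthSum = 0.0;
  int blockerCount = 0;
  for (int i = 0; i < NUM_SAMPLES; ++i) {
    float sampleDepth = unpack(texture2D(shadowMap, uv + disk[i] * searchRadius).rgba);
    if (sampleDepth < zReceiver) {   // this sample occludes the receiver
      blockerDepthSum += sampleDepth;
      blockerCount += 1;
    }
  }
  if (blockerCount == 0) return zReceiver; // no blocker found: penumbra size becomes 0
  return blockerDepthSum / float(blockerCount);
}
```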
Penumbra estimation
The next problem is how to determine the filter kernel size. An intuitive observation is that the farther a region is from its occluder, the softer its shadow and the larger the filter kernel needed. The relation between the kernel size and the occluder depth follows from similar triangles (illustrated in the figure below):

wPenumbra = (dReceiver - dBlocker) / dBlocker * wLight

where wLight is the size of the (area) light source, dBlocker is the average occluder depth, and dReceiver is the depth of the shading point.
Percentage Closer Filtering
The third step generates the softened shadow using the computed filter kernel; the code idea is as follows:
```glsl
float visibility_PCSS(sampler2D shadowMap, vec4 coords) {
  poissonDiskSamples(coords.xy);

  // STEP 1: average blocker depth
  float dBlocker = findBlocker(shadowMap, coords.xy, coords.z);

  // STEP 2: penumbra size
  const float wLight = 0.006;
  float penumbra = (coords.z - dBlocker) / dBlocker * wLight;

  // STEP 3: filtering with a kernel scaled by the penumbra size
  const float bias = 0.005;
  float sum = 0.0;
  for (int i = 0; i < PCF_NUM_SAMPLES; ++i) {
    float depthInShadowmap = unpack(texture2D(shadowMap, coords.xy + disk[i] * penumbra).rgba);
    sum += ((depthInShadowmap + bias) < coords.z ? 0.0 : 1.0);
  }
  return sum / float(PCF_NUM_SAMPLES);
}
```
VSSM
Variance Soft Shadow Mapping (VSSM) is essentially an optimization of the Blocker Search and PCF steps in PCSS, since both require many samples per pixel, which seriously hurts the algorithm's performance. How to optimize these two steps is what we consider next.
To avoid many per-pixel samples, VSSM assumes that the depth distribution within a range follows a normal distribution. As long as we know the mean and variance of that range, we have its normal-distribution model (i.e. the corresponding probability density function, PDF). From the PDF we can obtain the CDF (cumulative distribution function), which quickly tells us what fraction of the depths in the range is greater (or smaller) than a given value. The remaining problem is how to obtain the mean and variance of depth within a range, i.e. the expectation E(X) and variance Var(X).
Some optimization methods
- E(X) can be evaluated using mipmaps or a Summed-Area Table (SAT, i.e. a 2D prefix-sum array).
- Var(X) can be computed from the formula Var(X) = E(X²) - E(X)², where E(X²) comes from an additional shadow map that stores the squared depths.
- The evaluation of the CDF is approximated using Chebyshev's inequality (a sketch of this estimate follows this list).
- The Blocker Search algorithm and PCF algorithm can be accelerated by the above means
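For illustration, a minimal sketch of the Chebyshev-based visibility estimate as used in variance shadow mapping. The two-channel `moments` input, holding E(X) and E(X²) fetched from the filtered (e.g. mip-mapped) shadow map, and the small variance floor are assumptions of this sketch.

```glsl
// Visibility estimate from the first two depth moments of a region, using the
// one-sided Chebyshev inequality instead of many PCF samples.
// moments.x = E(X), moments.y = E(X^2); t = receiver depth.
float chebyshevVisibility(vec2 moments, float t) {
  float mean = moments.x;
  float variance = max(moments.y - mean * mean, 1e-5); // Var(X) = E(X^2) - E(X)^2, with a small floor
  if (t <= mean) return 1.0;               // receiver is in front of the average occluder
  float d = t - mean;
  return variance / (variance + d * d);    // approx. fraction of depths farther than t
}
```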