Unbiased light transport methods
Many ray-tracing algorithms estimate the rendering equation with Monte Carlo path tracing, and some of them are unbiased light transport methods: no matter how many samples are used, the expectation of the estimator is always correct, i.e. it equals the value of the definite integral we want. If the expected value of the estimator differs from the true value, the method is called a biased light transport method.
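In symbols (notation added here for clarity, not taken from the original): let $I = \int f(x)\,dx$ be the true value of the integral and $F_N$ the estimate computed from $N$ samples. An unbiased method satisfies $E[F_N] = I$ for every $N$; a biased method has $E[F_N] \neq I$ for finite $N$, and it is called consistent if it still converges, $\lim_{N\to\infty} F_N = I$.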
Bidirectional path tracing
The path tracing introduced above starts from the camera and generates a path linking the camera and the light source. Bidirectional path tracing (BDPT) extends this concept by exploiting the reversibility of light paths: it generates two half-paths (sub-paths), one starting from the camera and one from the light source, and then connects the endpoints of the two sub-paths to form a complete path.
Although the principle of BDPT is simple, it is difficult to implement; still, BDPT works very well in certain scenes.
The scene below uses 32 samples per pixel, and BDPT is clearly much better here. In this scene the spotlight is directed toward the ceiling and everything is lit by indirect light, so ordinary path tracing has a hard time finding paths that carry a large amount of energy.
For example, in unidirectional path tracing a camera ray might hit the right wall, bounce diffusely, then hit the left wall, and still receive no light from the source; only when a bounce happens to hit the small bright area on the ceiling does the path carry high energy. BDPT works best when light transport is easier to compute on the light-source side. On the other hand, BDPT is noticeably slower than unidirectional path tracing.
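As a rough illustration of the idea (not a full BDPT estimator with multiple importance sampling), the following C++ sketch traces one sub-path from the camera and one from the light, then connects every pair of vertices with a visibility test. The types Scene and Vertex and the helpers traceCameraSubpath, traceLightSubpath, visible and connectContribution are hypothetical placeholders, not real APIs.

```cpp
#include <vector>

// Minimal vector type for the sketch.
struct Vec3 {
    float x = 0, y = 0, z = 0;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(const Vec3& o) const { return {x * o.x, y * o.y, z * o.z}; }
};

// A path vertex: position, normal, and the throughput accumulated along its sub-path.
struct Vertex { Vec3 p, n, beta; };
struct Scene;   // hypothetical: provides intersection / visibility queries

// Assumed helpers (placeholders):
std::vector<Vertex> traceCameraSubpath(const Scene&, int maxDepth);
std::vector<Vertex> traceLightSubpath (const Scene&, int maxDepth);
bool visible(const Scene&, const Vec3& a, const Vec3& b);          // shadow ray between two points
Vec3 connectContribution(const Vertex& cam, const Vertex& light);  // BSDFs + geometry term of the connecting edge

// One BDPT sample for a pixel: build both sub-paths, then try to connect
// every camera vertex with every light vertex.
Vec3 bdptSample(const Scene& scene, int maxDepth) {
    std::vector<Vertex> cam   = traceCameraSubpath(scene, maxDepth);
    std::vector<Vertex> light = traceLightSubpath(scene, maxDepth);

    Vec3 L;
    for (const Vertex& c : cam)
        for (const Vertex& l : light)
            if (visible(scene, c.p, l.p))   // the connecting edge must be unoccluded
                L = L + c.beta * l.beta * connectContribution(c, l);
    return L;   // a full implementation would also weight each strategy (multiple importance sampling)
}
```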
Metropolis light transport
Metropolis Light Transport (MLT) uses a tool from statistics known as Markov chains. A Markov chain generates the next sample from the current sample, which is completely different from the independent random sampling of the Monte Carlo integration mentioned above. Its advantage is that, given enough time, it can generate a sequence of samples whose PDF takes the shape of any target function. Plain Monte Carlo can also sample from an arbitrary PDF, but its variance is minimized only when the sampling PDF has the same shape as the function being integrated.
Generating the next sample from the current one, applied to path sampling, means mutating an existing path: new paths are generated that are similar to the current path.
The blue path shown in the figure below is one that has already been found; slightly perturbing this path produces new paths.
This works well in most cases and is ideal for difficult, complex light-path configurations, because once you find one seed path you can find more paths around it.
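A minimal sketch of the Metropolis step, under these assumptions: Path, f (the scalar brightness of a path), mutate (a small, symmetric perturbation) and record (splatting onto the image) are hypothetical placeholders. The acceptance rule below is the standard Metropolis rule; a real MLT implementation uses more elaborate mutation strategies and weighting.

```cpp
#include <algorithm>
#include <random>

struct Path { /* vertices of a complete light path (omitted) */ };
double f(const Path& x);                           // scalar contribution (brightness) of a path, assumed
Path   mutate(const Path& x, std::mt19937& rng);   // small perturbation of x, assumed symmetric
void   record(const Path& x, double weight);       // splat the path's contribution onto the image, assumed

// Run a Markov chain in path space: each new sample is proposed by perturbing
// the current path and accepted with probability min(1, f(y)/f(x)).
void metropolisChain(Path x, int nSamples, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    for (int i = 0; i < nSamples; ++i) {
        Path y = mutate(x, rng);
        double a = (f(x) > 0.0) ? std::min(1.0, f(y) / f(x)) : 1.0;
        if (u(rng) < a) x = y;   // accept the mutation, otherwise keep the current path
        record(x, 1.0);          // in the limit, the chain's samples are distributed proportionally to f
    }
}
```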
On the left side of the picture below, only the doorway is directly illuminated while the rest of the scene is lit indirectly. On the right is a swimming pool; we see the caustic pattern on its bottom because light is focused by the rippling water surface onto the floor, similar to a prism. Rendering scenes like these is very difficult: light reaches the diffuse pool bottom through a specular surface and travels back to the camera through a specular surface again, a so-called SDS (Specular-Diffuse-Specular) path.
However, it is hard to analyze MLT's convergence speed theoretically, for example to estimate how long an image will take to render; with Monte Carlo path tracing this can be estimated roughly, but with MLT it is difficult. In addition, all operations are local: each pixel converges on its own, some fast and some slow, which makes the image look dirty. Since we do not know when an image has converged, MLT is also hard to use for rendering animations, where it produces visible flickering between frames.
Biased light transport methods
Photon mapping
Photon mapping is especially good at rendering so-called caustics: patterns formed when a bundle of light is focused by a complex object, refracted and concentrated in one place.
In this case, light hitting the inner wall of the ring is reflected onto the floor, forming a strong pattern where the light is focused.
Photon mapping is also very well suited to rendering SDS paths.
Implementation of photon mapping
Step 1: Shoot photons from the light source. Each photon refracts and reflects normally until it hits an object with a diffuse material, where it stays.
Step 2: Shoot sub-paths from the camera. They likewise refract and reflect normally until they hit a diffuse-material object, where they stop.
Then a local density estimation is performed. The observation behind it: after the first pass, photons are distributed over the surfaces of objects; after the second pass, the camera rays have also landed on the surfaces of diffuse objects. Clearly, the more concentrated the photon distribution around a point, the brighter it is; the sparser the distribution, the dimmer. So for every shading point reached from the camera we do a local density estimate: take the nearest N photons around that point, which is a classic nearest-neighbor problem.
By organizing all the photons into an acceleration structure (such as a kd-tree), the photons around a shading point can be located quickly. Once they are found, compute the area occupied by the N photons, as shown in the figure below, and divide the number of photons by that area to get the density.
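A sketch of this gather step, assuming the photons are already stored in a nearest-neighbor structure; the PhotonMap type, its kNearest query and the brdf helper are hypothetical placeholders.

```cpp
#include <vector>

struct Vec3 {
    float x = 0, y = 0, z = 0;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(const Vec3& o) const { return {x * o.x, y * o.y, z * o.z}; }
    Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
};

// A stored photon: position, incoming direction, carried power (flux).
struct Photon { Vec3 p, dir, power; };

// Hypothetical photon map: kNearest returns the N photons closest to x and
// writes the distance to the farthest of them into rMax (e.g. via a kd-tree).
struct PhotonMap {
    std::vector<Photon> kNearest(const Vec3& x, int N, float& rMax) const;
};

Vec3 brdf(const Vec3& wi, const Vec3& wo);   // BRDF at the shading point, assumed diffuse

// Density estimation at a shading point: sum the photon powers weighted by the
// BRDF and divide by the area of the disc that encloses the N nearest photons.
Vec3 estimateRadiance(const PhotonMap& map, const Vec3& x, const Vec3& wo, int N) {
    const float kPi = 3.14159265f;
    float rMax = 0.0f;
    std::vector<Photon> nearest = map.kNearest(x, N, rMax);

    Vec3 L;
    for (const Photon& ph : nearest)
        L = L + brdf(ph.dir, wo) * ph.power;

    float area = kPi * rMax * rMax;   // this finite area (instead of dA) is the source of the bias
    return L * (1.0f / area);
}
```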
As one would expect, using a small number of photons gives a noisy image, while using a large number gives a smoother result but blurs the details. The reason photon mapping is a biased method is that the density estimate should, in principle, be taken over a tiny area around the current point, but we actually compute it over the finite area covered by a given number of photons. The area estimate is therefore incorrect; it is only correct in the limit where dA becomes infinitesimally small.
If we shoot more photons in total while still gathering the same 100 nearest photons, those 100 photons will obviously be distributed over a smaller area, which is closer to the tiny area we want. In other words, shooting enough photons approaches the right answer, but for any finite number the result is slightly blurry: not exactly correct, but consistent.
The reason photon mapping gathers a fixed number of photons adjacent to the shading point, rather than all photons inside a fixed area, is that with a fixed area, even with infinitely many photons projected into it, the area never shrinks toward the differential dA, so the estimate cannot be consistent.
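Written out with the (assumed) standard photon-mapping estimate, the radiance at a shading point $x$ is approximated as

$$L(x, \omega_o) \approx \frac{1}{\pi r^2} \sum_{i=1}^{N} f_r(x, \omega_i, \omega_o)\,\Phi_i,$$

where $r$ is the radius of the disc enclosing the $N$ nearest photons and $\Phi_i$ are their powers. With a fixed gather count $N$, shooting more photons drives $r \to 0$ and the estimate converges (consistent but biased); with a fixed radius, $r$ never shrinks toward the differential area, so the bias never vanishes.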
VCM
Vertex connection and merging (VCM) combines photon mapping with bidirectional path tracing.
When we perform bidirectional path tracing, some pairs of sub-paths have a particular property: their endpoints are very close to each other on the same local surface. Casting yet another ray to connect them would waste performance; instead, we can treat the light sub-path's endpoint in that region as a photon and merge it, as in photon mapping. A sketch of that decision follows.
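This sketch uses hypothetical types and helpers (Scene, Vertex, visible, connectContribution, mergeContribution): when a light-sub-path vertex falls within a small merge radius of the camera vertex it is reused as a photon, otherwise the two vertices are connected with a shadow ray. A real VCM implementation combines both strategies with multiple importance sampling rather than picking one.

```cpp
#include <cmath>

struct Vec3 { float x = 0, y = 0, z = 0; };

// A sub-path endpoint: position, normal, accumulated throughput.
struct Vertex { Vec3 p, n, beta; };
struct Scene;   // hypothetical

// Assumed helpers (placeholders):
bool visible(const Scene&, const Vec3& a, const Vec3& b);
Vec3 connectContribution(const Vertex& cam, const Vertex& light);              // BDPT-style connection
Vec3 mergeContribution(const Vertex& cam, const Vertex& light, float radius);  // photon-mapping-style merge

float distanceBetween(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Contribution of one camera-vertex / light-vertex pair: merge if the light
// vertex lies within the merge radius, otherwise connect with a shadow ray.
Vec3 vcmPairContribution(const Scene& scene, const Vertex& cam,
                         const Vertex& light, float mergeRadius) {
    if (distanceBetween(cam.p, light.p) < mergeRadius)
        return mergeContribution(cam, light, mergeRadius);   // vertex merging
    if (visible(scene, cam.p, light.p))
        return connectContribution(cam, light);              // vertex connection
    return Vec3{};
}
```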
Instant radiosity
Instant Radiosity (IR) is also called a many-light approach.
When we analyze the propagation of light, we do not really distinguish whether the light leaving a surface is emitted by the surface itself or reflected by it: both are simply radiance.
The idea of the algorithm is that places that have already been illuminated can themselves be considered light sources and used to illuminate others. That is, light sub-paths shot from the light source stop somewhere after several bounces, and each stopping point is turned into a new light source, a Virtual Point Light (VPL). When we shade a point, we illuminate it with all of these new lights, so each VPL contributes direct lighting to the shading point.
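A sketch of the shading pass under these assumptions (VPL, Scene and the helper functions are placeholders, not real APIs): VPLs have already been deposited by light sub-paths, and each shading point sums the direct contribution of every VPL. Note the 1/distance² factor in the geometry term, which the next paragraph discusses.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 {
    float x = 0, y = 0, z = 0;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(const Vec3& o) const { return {x * o.x, y * o.y, z * o.z}; }
    Vec3 operator*(float s)       const { return {x * s, y * s, z * s}; }
};
float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
float length(const Vec3& v)             { return std::sqrt(dot(v, v)); }
Vec3  normalize(const Vec3& v)          { return v * (1.0f / length(v)); }

// A virtual point light deposited where a light sub-path stopped.
struct VPL { Vec3 p, n, flux; };
struct Scene;   // hypothetical

// Assumed helpers (placeholders):
std::vector<VPL> traceLightSubpathsAndDepositVPLs(const Scene&, int nPaths);
bool visible(const Scene&, const Vec3& a, const Vec3& b);
Vec3 brdf(const Vec3& wi, const Vec3& wo);   // BRDF at the shading point

// Shade one point by treating every VPL as a small light source.
Vec3 shadeWithVPLs(const Scene& scene, const std::vector<VPL>& vpls,
                   const Vec3& x, const Vec3& n, const Vec3& wo) {
    Vec3 L;
    for (const VPL& v : vpls) {
        if (!visible(scene, x, v.p)) continue;
        Vec3  d    = v.p - x;
        float dist = length(d);
        Vec3  wi   = normalize(d);
        // Geometry term of the area-form integral: the 1/dist^2 factor blows up
        // when a VPL sits extremely close to the shading point, producing the
        // bright spikes at corners discussed below.
        float G = std::max(0.0f, dot(n, wi)) * std::max(0.0f, dot(v.n, normalize(x - v.p)))
                  / (dist * dist);
        L = L + brdf(wi, wo) * v.flux * G;
    }
    return L;
}
```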
As shown in the figure below, a scene rendered with VPLs glows at concave corners and joints. This is because we change the sampling over solid angle into sampling over area: when a VPL is extremely close to the shading point, the squared distance in the denominator of the formula becomes extremely small, so the whole term blows up and the result turns white. Another limitation is that VPLs cannot handle glossy objects well.