1. Background

Three.js gives us encapsulated shadows that, while simple to use, are too high level and not flexible enough. When we were prototyping the new look of Cm Show, we had to “translate” the shader code from Unity into Three.js custom materials. We found that the first pass defined by the new look in Unity relies on shadows, which means we need the shadow data before we can “translate” the subsequent passes. But Three.js does not expose its shadow information. And even if we could retrieve the Three.js-generated shadow texture through various workarounds, we would still need to understand it and consume it in the shaders of our custom materials. In short, it is better to implement our own set of shadows in Three.js: it is flexible enough, and we keep control of the details ourselves.

2. How are shadows created



In nature, a non-luminous object needs a light source to be seen. Since light travels in a straight line, when the light is blocked by something (the orange object in the picture), the areas that would otherwise be lit (such as point C) appear black because no light reaches them. These areas are called shadows.
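To make the idea concrete, here is a minimal brute-force sketch of that occlusion test using Three.js's Raycaster. This is not what the article implements (the comparison happens on the GPU via a shadow map, as described next), and the function name isInShadow and the occluders list are illustrative only.

function isInShadow(point, lightPosition, occluders) {
  // Direction and distance from the shaded point to the light source.
  const toLight = new THREE.Vector3().subVectors(lightPosition, point);
  const distanceToLight = toLight.length();
  // Cast a ray from the point toward the light; only hits closer than the light count.
  const raycaster = new THREE.Raycaster(point, toLight.normalize(), 0.001, distanceToLight);
  return raycaster.intersectObjects(occluders, true).length > 0; // blocked => in shadow
}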

3. How to use ShadowMap to generate shadows

In theory, when shading a point, you can decide whether to draw it in shadow as long as you can tell whether the point is “blocked”. There are many ways to test for occlusion; the most commonly used is the ShadowMap. We only need to know whether, on the line between the point and the light source, there is another point that is closer to the light. The distance from a point to the light source is the depth stored in the ShadowMap. The specific approach is as follows (a small sketch follows the list):

  • (1) Generate the depth texture map: the depth texture map stores the minimum depth at each position. We stand at the light source, look at the scene along the direction of light propagation, compute the distance from each object in the scene to the light source (its depth from this viewpoint), and keep the minimum value at each position; the result is a depth texture.
  • (2) Use the depth texture map: for a point P in the world, we compute its depth from the light source's point of view and compare it with the corresponding value in the depth texture map to decide whether it is in shadow.
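Here is a minimal, illustrative sketch of those two steps in plain JavaScript (the function names and the flat texel layout are assumptions made for this sketch; the real work is done in shaders later in the article):

// Step 1: from the light's viewpoint, keep only the smallest depth seen at each texel.
function buildDepthMap(samplesPerTexel) {
  // samplesPerTexel[i] lists the depths of every surface projected onto texel i.
  return samplesPerTexel.map((depths) => Math.min(...depths));
}

// Step 2: a point is lit only if nothing the light sees at that texel is closer than it.
function isLit(pointDepthFromLight, texelIndex, depthMap, bias = 0.01) {
  // The small bias avoids false self-shadowing caused by limited depth precision.
  return pointDepthFromLight <= depthMap[texelIndex] + bias;
}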

P.S.: For more on how ShadowMap generates shadows and how to progressively improve shadow quality, see my other article “Shadows in 3D World — Analysis of ShadowMap Principle”.

4. Implementation code

The following demo code implements the shadow effect for the scene above; the blue cube marks the position of the light source.

1. Rendering process of each frame

The images we see on the device are drawn frame by frame. The rendering of each frame in this demo consists of the following two parts:

  • (1) Off-screen rendering: move the camera to the light source, render the scene into a buffer, and obtain the shadow map.
  • (2) Switch the render target back and render the scene to the screen. The cube and floor in this scene use custom materials that consume the shadow map.

Here is the code for the rendering process:

import { OrbitControls } from "./assets/orbitcontrols.js";

let renderer, stats, camera, camera4SM;
let scene, bufferScene, bufferTexture;

const domElement = document.getElementById("canvas-frame");

// Frame-rate stats
function initStatus() { /* ... */ }

// Initialize the renderer
function initThree() { /* ... */ }

// Initialize the scene
function initScene() {
  // The scene to draw to the screen
  scene = new THREE.Scene();
  const axisHelper = new THREE.AxesHelper(100);
  scene.add(axisHelper);
  // Off-screen buffer
  bufferScene = new THREE.Scene();
  bufferTexture = new THREE.WebGLRenderTarget(
    domElement.clientWidth,
    domElement.clientHeight
  );
  bufferTexture.depthBuffer = true;
  bufferTexture.depthTexture = new THREE.DepthTexture();
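  // Note: the later passes sample bufferTexture.texture (the color attachment) as the
  // shadow map; in this demo the DepthTexture attachment is not sampled directly,
  // because the depth is written into the R channel manually by a custom shader.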
}

// Initialize the camera
function initCamera() {
  const width = domElement.clientWidth;
  const height = domElement.clientHeight;

  camera = new THREE.PerspectiveCamera(45, width / height, 1, 10000);
  camera.position.set(50, 50, 200);
  camera.lookAt(scene.position);
  camera.aspect = width / height;
  camera.updateProjectionMatrix();
  scene.add(camera);
  // The camera at the light source is used to generate the shadow map
  camera4SM = new THREE.OrthographicCamera(-100, 100, 100, -100, 1, 70);
  camera4SM.position.set(20, 50, 0);
  camera4SM.lookAt(bufferScene.position);
  camera4SM.aspect = width / height;
  camera4SM.updateProjectionMatrix();
  bufferScene.add(camera4SM);

}

// Initialize the light source
function initLight() { /* ... */ }

// Render function called per frame
function render() {
  // Render to the target buffer
  renderer.setClearColor("RGB (255255255)".1.0);
  renderer.setRenderTarget(bufferTexture);
  renderer.render(bufferScene, camera4SM);

  // Render to screen
  renderer.setClearColor("RGB (150150150)".1.0);
  renderer.setRenderTarget(null);
  renderer.render(scene, camera);

  stats.update();
  requestAnimationFrame(render);
}

function start() {
  initStatus();
  initThree();
  initScene();
  initCamera();
  initLight();
  initObject();

  render();
}

start();

2. Generate the depth texture map (shadow map)

Each frame, use off-screen rendering to render the depth map into a buffer.
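Taken together, the off-screen pass is the standard Three.js render-to-texture pattern. Below is a condensed sketch of steps (1) to (3) that follow; shadowTarget is an illustrative name (the demo uses bufferTexture for this role):

// Condensed sketch of the off-screen pass described in steps (1)-(3) below.
const shadowTarget = new THREE.WebGLRenderTarget(
  domElement.clientWidth,
  domElement.clientHeight
);
renderer.setRenderTarget(shadowTarget);   // draw into the off-screen buffer instead of the canvas
renderer.render(bufferScene, camera4SM);  // render from the camera placed at the light source
renderer.setRenderTarget(null);           // switch back to the default framebuffer (the screen)
// shadowTarget.texture can then be passed to custom materials as a uniform and sampled in shaders.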

(1) Add a new custom material to record the depth

function initObject() {
  // add object in buffer scene
  const getSMMaterial = new THREE.ShaderMaterial({
    uniforms: {
      projectionMatrixSM: { value: camera4SM.projectionMatrix },
    },
    vertexShader: document.getElementById("vertexShaderSM").textContent,
    fragmentShader: document.getElementById("fragmentShaderSM").textContent,
  });
  // ...
}

Related shader code: the approach is to write the fragment's depth after projection (gl_FragCoord.z, which lies in [0, 1]) into the R channel of the output color. This is why our shadow map looks red.

<script id="vertexShaderSM" type="x-shader/x-vertex">
    uniform mat4 projectionMatrixSM;
    void main(){
      gl_Position = projectionMatrixSM * modelViewMatrix * vec4( position, 1.0 );
    }
</script>
<script id="fragmentShaderSM" type="x-shader/x-fragment">
    void main() {
      gl_FragColor = vec4(gl_FragCoord.z, 0.0, 0.0, 0.0); // Write the fragment's depth into the R channel
    }
</script>

(2) Place the cube and floor using this material in the off-screen rendering scene

function initObject() {
  // ...
  const groundGeo = new THREE.BoxGeometry(40, 40, 1);
  const groundInBuffer = new THREE.Mesh(groundGeo, getSMMaterial);
  groundInBuffer.rotation.x = Math.PI / 2;
  groundInBuffer.name = "groundPlane";
  bufferScene.add(groundInBuffer);
  
  const cubeGeo = new THREE.BoxGeometry(20, 20, 20);
  const cubeInBuffer = new THREE.Mesh(cubeGeo, getSMMaterial);
  cubeInBuffer.position.y += 10;
  cubeInBuffer.name = "cubeInBuffer";
  bufferScene.add(cubeInBuffer);
  // ...
}

(3) Use the camera at the light source to render the scene into the cache

function render() {
  // Render to the target buffer
  renderer.setClearColor("RGB (255255255)".1.0);
  renderer.setRenderTarget(bufferTexture);
  renderer.render(bufferScene, camera4SM);
  // ...
}

3. Use the depth texture map to generate shadows

(1) Add a custom material that uses the depth map to create shadows

function initObject() {
  // add object in buffer scene
  // ...
  // add object in screen scene
  const useSM4CubeMat = new THREE.ShaderMaterial({
    // Attributes failed to upload
    uniforms: {
      modelViewMatrixSM: { value: cubeInBuffer.modelViewMatrix },
      projectionMatrixSM: { value: camera4SM.projectionMatrix },
      depthTexture: { value: bufferTexture.texture },
      color: { value: new THREE.Vector3(0, 1, 0) },
    },
    vertexShader: document.getElementById("vertexShader").textContent,
    fragmentShader: document.getElementById("fragmentShader").textContent,
  });

  const useSM4GroundMat = new THREE.ShaderMaterial({
    uniforms: {
      modelViewMatrixSM: { value: groundInBuffer.modelViewMatrix },
      projectionMatrixSM: { value: camera4SM.projectionMatrix },
      depthTexture: { value: bufferTexture.texture },
      color: { value: new THREE.Vector3(1, 1, 1) },
    },
    vertexShader: document.getElementById("vertexShader").textContent,
    fragmentShader: document.getElementById("fragmentShader").textContent,
  });
}

Related shader code:

  • Normalize first: for gl_Position the GPU automatically performs the perspective divide and maps the result to the viewport, but our result varying gets no such treatment. After dividing by w it is in normalized device coordinates, which range over [-1, 1], while texture coordinates and the stored depth values range over [0, 1], so we do the remapping (/ 2.0 + 0.5) ourselves.
  • Get the depth: sample the depth stored in the depth texture at that coordinate.
  • Check whether the fragment is in shadow: if the fragment's depth is greater than the depth stored in the shadow map at the same position, some other point there is closer to the light source, so this fragment is blocked. The small offset added to the stored depth is a bias that avoids false self-shadowing.
<script id="vertexShader" type="x-shader/x-vertex">
    uniform mat4 modelViewMatrixSM;
    uniform mat4 projectionMatrixSM;
    varying vec4 result;
    void main(){
      gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
      result = projectionMatrixSM * modelViewMatrixSM * vec4( position, 1.0 );
    }
</script>
<script id="fragmentShader" type="x-shader/x-fragment">
    uniform sampler2D depthTexture;
    uniform vec3 color;
    varying vec4 result;
    void main() {
      vec3 shadowCoord = (result.xyz / result.w) / 2.0 + 0.5; // perspective divide, then map NDC [-1, 1] to [0, 1]
      vec4 rgbaDepth = texture2D(depthTexture, shadowCoord.xy); 
      float depth = rgbaDepth.r; // Get the depth of the coordinates stored in the depth texture

      float visibility = (shadowCoord.z > depth + 0.3) ? 0.0 : 1.0; // Check whether the fragment is in shadow (0.3 is a depth bias)
      vec4 v_Color = vec4(color, 1.0);
      gl_FragColor = vec4(v_Color.rgb * visibility, v_Color.a);
    }
</script>
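As a quick sanity check of the remap used in the fragment shader above (plain JavaScript, illustrative only): values in the NDC range [-1, 1] land in [0, 1] after dividing by 2 and adding 0.5.

const toTexSpace = (ndc) => ndc / 2.0 + 0.5;
console.log(toTexSpace(-1.0)); // 0   (left / bottom / near edge)
console.log(toTexSpace(0.0));  // 0.5 (center)
console.log(toTexSpace(1.0));  // 1   (right / top / far edge)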

(2) Place the cube and floor using this material into the scene rendered on the screen

function initObject() {
  // ...
  const cubeBufGeo = new THREE.BufferGeometry();
  cubeBufGeo.fromGeometry(cubeGeo);
  const cubeInScreen = new THREE.Mesh(cubeBufGeo, useSM4CubeMat);
  cubeInScreen.position.y += 10;
  cubeInScreen.name = "cubeInScreen";
  scene.add(cubeInScreen);

  const planeInScreen = new THREE.Mesh(groundGeo, useSM4GroundMat);
  planeInScreen.rotation.x = Math.PI / 2;
  planeInScreen.name = "planeInScreen";
  scene.add(planeInScreen);

  // Display the shadow map
  // ...
}

(3) Render the scene to the screen

// Render function called per frame
function render() {
  // Render to the target buffer
  // ...

  // Render to screen
  renderer.setClearColor("RGB (150150150)".1.0);
  renderer.setRenderTarget(null);
  renderer.render(scene, camera);

  stats.update();
  requestAnimationFrame(render);
}

5. Final effect

6. Appendix

  • Related code: github.com/Zack921/vis…
  • ShadowMap in 3D world: juejin.cn/post/694007…