Volume rendering is commonly used to display 3D distribution fields and scan data. This article discusses a volume rendering method suitable for WebGL.
Design ideas [1]
For simplicity, this article only discusses volume rendering of a cube-shaped region. The 3D image data can be stored in a 3D texture, and the sampling and color computation is done mainly in the fragment shader.
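As a minimal sketch, reading a scalar density value from a 3D texture in the fragment shader looks roughly like this (the uniform name matches the one used in the full shader below; sampleDensity is only an illustrative helper):
// Sketch: reading a scalar density from a 3D texture in the fragment shader.
precision highp sampler3D;
uniform sampler3D densityTexture;

float sampleDensity(vec3 texCoord) {
    // texCoord is expected to lie in [0, 1] on each axis
    return texture(densityTexture, texCoord).r;
}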
As shown in the figure above, O is the position of the camera, V is the viewing direction, and P1 and P2 are the two intersection points of the line of sight with the surface of the cube. Once the coordinates of P1 and P2 are known, we can take uniformly spaced samples from P1 to P2 and alpha-blend them to obtain the final color seen by the eye.
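A minimal sketch of this sampling-and-blending step is shown below. Here sampleAs3DTexture() stands for the lookup of color and opacity from the 3D texture (it is defined in the full shader later), and rayMarch is only an illustrative helper, not part of the original code:
// Sketch: uniformly sample from P1 to P2 and alpha-blend front to back.
vec4 rayMarch(vec3 P1, vec3 P2, int numSamples) {
    vec3 stepVec = (P2 - P1) / float(numSamples);
    vec3 pos = P1;
    vec3 accumulatedColor = vec3(0.0);
    float accumulatedAlpha = 0.0;
    for (int i = 0; i < numSamples; i++) {
        vec4 s = sampleAs3DTexture(pos);          // rgb = sample color, a = sample opacity
        float a = s.a * (1.0 - accumulatedAlpha); // weight by the remaining transparency
        accumulatedColor += s.rgb * a;
        accumulatedAlpha += a;
        pos += stepVec;
    }
    return vec4(accumulatedColor, accumulatedAlpha);
}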
How do we calculate the coordinates of P1 and P2?
To keep the math in the fragment shader simple, the computation is carried out in the cube's local coordinate system, whose origin is at the center of the cube. The coordinates of O are obtained by multiplying the camera's world coordinates by the inverse of the cube's transformation matrix, and the coordinates of P1 are just this fragment's position transformed into the same space. The view direction vector V is then determined from O and P1.
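A short sketch of these transforms is shown below; toCubeSpace is only an illustrative helper (the shade() function in the full shader performs the same steps), uInvTransform is the inverse of the cube's transformation matrix, and fragPos is the current fragment's position on the cube surface:
// Sketch: move the camera and the current fragment into the cube's local space.
uniform mat4 uInvTransform;

void toCubeSpace(vec3 fragPos, vec3 cameraWorldPos, out vec3 O, out vec3 P1, out vec3 V) {
    O  = (uInvTransform * vec4(cameraWorldPos, 1.0)).xyz; // camera position O in cube space
    P1 = (uInvTransform * vec4(fragPos, 1.0)).xyz;        // entry point P1: this fragment's position
    V  = normalize(P1 - O);                               // view direction V
}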
Assuming the cube is centered at the origin (0, 0, 0) and extends 0.5 in each direction along every axis (boxSize = (0.5, 0.5, 0.5)), the distances from O to P1 and from O to P2 along the ray can be calculated by the following method [2].
// axis aligned box centered at the origin, with size boxSize
vec2 boxIntersection(vec3 ro, vec3 rd, vec3 boxSize) {
    vec3 m = 1.0 / rd; // can precompute if traversing a set of aligned boxes
    vec3 n = m * ro;   // can precompute if traversing a set of aligned boxes
    vec3 k = abs(m) * boxSize;
    vec3 t1 = -n - k;
    vec3 t2 = -n + k;
    float tN = max(max(t1.x, t1.y), t1.z);
    float tF = min(min(t2.x, t2.y), t2.z);
    if (tN > tF || tF < 0.0) return vec2(1.0); // no intersection
    return vec2(tN, tF);
}
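For example, given O and the normalized direction V in the cube's local space, the entry and exit points can be recovered from the returned distances. The helper below (computeEntryExit) is only illustrative and not part of the original code; the actual shader folds these steps into its shade() function:
// Sketch: recover the entry/exit points P1 and P2 from the returned ray parameters.
// Assumes O and V are already in the cube's local space and that the ray actually hits the box.
void computeEntryExit(vec3 O, vec3 V, out vec3 P1, out vec3 P2) {
    vec2 t = boxIntersection(O, V, vec3(0.5)); // boxSize = 0.5 on each axis
    P1 = O + V * max(t.x, 0.0);                // entry point (clamped when the camera is inside the cube)
    P2 = O + V * t.y;                          // exit point
}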
Given the coordinates of O, the vector V, and the distances from O to P1 and from O to P2, the coordinates of P1 and P2 can be calculated. We can then take uniformly spaced samples from P1 to P2 and alpha-blend them to obtain the final color seen by the eye.
Implementation
The example below uses ZEN3D as the WebGL rendering library.
The main code of the fragment shader is as follows [1]:
precision highp sampler3D;
#include <common_frag>
varying vec3 v_modelPos;
uniform sampler2D platteTexture;
uniform sampler3D densityTexture;
uniform mat4 uInvTransform;
uniform float uAlphaCorrection;
const float STEP = 1.73205081 / 256.0; // sqrt(3) / 256: the cube's diagonal divided by the maximum number of samples
// http://iquilezles.org/www/articles/intersectors/intersectors.htm
// axis aligned box centered at the origin, with size boxSize
vec2 boxIntersection(vec3 ro, vec3 rd, vec3 boxSize) {
    vec3 m = 1.0 / rd; // can precompute if traversing a set of aligned boxes
    vec3 n = m * ro;   // can precompute if traversing a set of aligned boxes
    vec3 k = abs(m) * boxSize;
    vec3 t1 = -n - k;
    vec3 t2 = -n + k;
    float tN = max(max(t1.x, t1.y), t1.z);
    float tF = min(min(t2.x, t2.y), t2.z);
    if (tN > tF || tF < 0.0) return vec2(1.0); // no intersection
    return vec2(tN, tF);
}
vec4 getColor(float intensity) {
    // makes the volume look brighter
    intensity = min(0.46, intensity) / 0.46;
    vec2 _uv = vec2(intensity, 0);
    vec4 color = texture2D(platteTexture, _uv);
    float alpha = intensity;
    if (alpha < 0.03) {
        alpha = 0.01;
    }
    return vec4(color.r, color.g, color.b, alpha);
}
vec4 sampleAs3DTexture(vec3 texCoord) {
    texCoord += vec3(0.5); // map cube-local coordinates [-0.5, 0.5] to texture coordinates [0, 1]
    return getColor(texture(densityTexture, texCoord).r);
}
vec3 shade(inout float transparent, in vec3 P, in vec3 V) {
    // Transform to model space.
    vec3 frontPos = (uInvTransform * vec4(P.xyz, 1.0)).xyz;
    vec3 cameraPos = (uInvTransform * vec4(u_CameraPosition.xyz, 1.0)).xyz;
    vec3 rayDir = normalize(frontPos - cameraPos);
    vec3 backPos = frontPos;
    vec2 t = boxIntersection(cameraPos, rayDir, vec3(0.5));
    if (t.x > 1.0 && t.y > 1.0) {
        backPos = cameraPos + rayDir * t.y;
    }
    float rayLength = length(backPos - frontPos);
    int steps = int(max(1.0, floor(rayLength / STEP)));
    // Calculate how far to advance in each step.
    float delta = rayLength / float(steps);
    // The increment in each direction for each step.
    vec3 deltaDirection = rayDir * delta;
    // Start the ray casting from the front position.
    vec3 currentPosition = frontPos;
    // The color accumulator.
    vec4 accumulatedColor = vec4(0.0);
    // The alpha value accumulated so far.
    float accumulatedAlpha = 0.0;
    vec4 colorSample;
    float alphaSample;
    // Perform the ray marching iterations.
    for (int i = 0; i < steps; i++) {
        colorSample = sampleAs3DTexture(currentPosition);
        alphaSample = colorSample.a * uAlphaCorrection;
        alphaSample *= (1.0 - accumulatedAlpha);
        // Perform the composition.
        accumulatedColor += colorSample * alphaSample;
        // Store the alpha accumulated so far.
        accumulatedAlpha += alphaSample;
        // Advance the ray.
        currentPosition += deltaDirection;
    }
    transparent = accumulatedAlpha;
    return accumulatedColor.xyz;
}
void main() {
    vec3 V = normalize(v_modelPos - u_CameraPosition);
    vec3 P = v_modelPos;
    float transparent;
    vec3 color = shade(transparent, P, V);
    gl_FragColor = vec4(color, transparent);
}
The complete sample code is here [1]; simplex-noise [3] is used to generate the random 3D texture data.
Test results
Screenshot of the rendered result:
Online demo
References
[1] github: modelo/API_samples/volume-rendering
[2] iquilezles.org
[3] github: simplex-noise