Preface
Following up on the last article, where we discussed how to enable MSAA in WebGPU: remember our "three steps" approach? If you have forgotten it, it may be worth going back for a quick review. Today we move on to a new topic, drawing 3D graphics in WebGPU. Let's start by drawing a cube!
Vertex data passing
Remember how we specified the vertices of the triangle when we drew one? (Take a moment to recall it.)
A century later…
Yes, we hardcoded the vertex information in the vertex shader. Could we do the same for the cube? We could, but if we kept doing this our shader programs would never be reusable. So this time we are going to pass the vertex data from the CPU to the GPU instead.
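As a quick reminder, the triangle chapter did something roughly like the following (a sketch in the same generation of WGSL syntax this article uses; the exact code in the first chapter may differ):

```wgsl
[[stage(vertex)]]
fn main([[builtin(vertex_index)]] VertexIndex : u32)
     -> [[builtin(position)]] vec4<f32> {
  // The triangle's vertices live directly in the shader source,
  // so drawing anything else means rewriting the shader.
  var pos = array<vec2<f32>, 3>(
    vec2<f32>( 0.0,  0.5),
    vec2<f32>(-0.5, -0.5),
    vec2<f32>( 0.5, -0.5));
  return vec4<f32>(pos[VertexIndex], 0.0, 1.0);
}
```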
First, let’s take a look at how it works in WebGL.
Pass vertex data in WebGL
```js
const buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, bufferData, gl.STATIC_DRAW);

const a_position = gl.getAttribLocation(program, 'a_position');
// 2 components per vertex, stride of fsize * 4 bytes, starting at offset 0
gl.vertexAttribPointer(a_position, 2, gl.FLOAT, false, fsize * 4, 0);
gl.enableVertexAttribArray(a_position);
```
It can be roughly divided into the following steps:
- Create a Buffer
- Fill the Buffer with data
- Get the position of the vertex variable in the GPU, and pass data to the GPU.
If you are not familiar with this topic, you can check out my WebGL overview – Principles
Pass vertex data in WebGPU
Now let's look at how vertex data is passed in WebGPU. Just like in WebGL, we first need to create a buffer object.
```js
const verticesBuffer = device.createBuffer({
  size: cubeVertexArray.byteLength,
  usage: GPUBufferUsage.VERTEX,
  mappedAtCreation: true,
});
new Float32Array(verticesBuffer.getMappedRange()).set(cubeVertexArray);
verticesBuffer.unmap();
```
GPUBuffer is explained in the W3C standard as follows:
> A GPUBuffer represents a block of memory that can be used in GPU operations. Data is stored in linear layout, meaning that each byte of the allocation can be addressed by its offset from the start of the GPUBuffer, subject to alignment restrictions depending on the operation. Some GPUBuffers can be mapped which makes the block of memory accessible via an ArrayBuffer called its mapping.
In other words, a GPUBuffer represents a chunk of memory that the GPU can operate on, and its data is stored in a linear layout, which means each byte can be addressed directly by its offset. Some GPUBuffers can be mapped, which exposes their memory as an ArrayBuffer that can be read from and written to.
A few more words about the GPUBuffer object, which has a few important concepts:
- `[[state]]`: an internal attribute of the GPUBuffer (think of it as a private variable). It can take the following values:
  - "mapped": the buffer is accessible to the CPU, but not to the GPU.
  - "mapped at creation": the buffer is put into the "mapped" state at creation time; in other words, it is available to the CPU as soon as it is created.
  - "mapping pending": a mapping has been requested but is not ready yet; the buffer is unavailable to both the CPU and the GPU.
  - "unmapped": the buffer is available to the GPU.
  - "destroyed": the buffer has been destroyed and is no longer available.
So when a GPUBuffer is in the mapped state, it can be read and written by the CPU.
- `getMappedRange(offset, size)`: returns an ArrayBuffer containing the GPUBuffer's data in the requested range.
- How do we write data into that ArrayBuffer? If you are familiar with ArrayBuffer, you know it cannot be written to directly: you need to wrap it in a typed array (such as Float32Array) so that the data is written into memory with the correct layout. See the sketch right after this list.
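Here is a minimal sketch of that pattern, using made-up data purely for illustration:

```js
const buf = device.createBuffer({
  size: 8 * Float32Array.BYTES_PER_ELEMENT,
  usage: GPUBufferUsage.VERTEX,
  mappedAtCreation: true, // the buffer starts out in the "mapped" state
});
// getMappedRange() returns a raw ArrayBuffer; the Float32Array view lets us
// write 32-bit floats into it with the layout the GPU expects.
new Float32Array(buf.getMappedRange()).set([1, -1, 1, 1, -1, -1, 1, 1]);
buf.unmap(); // now "unmapped": available to the GPU, no longer to the CPU
```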
OK, now we are done creating and writing the vertex Buffer. Next, we need to modify the configuration of the Pipeline State Object.
Modify Pipeline State Object
The PSO object is configured as follows:
```js
const pipeline = device.createRenderPipeline({
  vertex: {
    module: device.createShaderModule({
      code: basicVertWGSL,
    }),
    entryPoint: 'main',
    buffers: [
      // Configure the buffer layout for the vertex shader
      {
        arrayStride: cubeVertexSize,
        attributes: [
          {
            // position
            shaderLocation: 0,
            offset: cubePositionOffset,
            format: 'float32x4',
          },
          {
            // uv
            shaderLocation: 1,
            offset: cubeUVOffset,
            format: 'float32x2',
          },
        ],
      },
    ],
  },
  fragment: {
    module: device.createShaderModule({
      code: vertexPositionColorWGSL,
    }),
    entryPoint: 'main',
    targets: [
      {
        format: presentationFormat,
      },
    ],
  },
  primitive: {
    topology: 'triangle-list',
    // Backface culling since the cube is a solid piece of geometry.
    // Faces pointing away from the camera will be occluded by faces
    // pointing toward the camera.
    cullMode: 'back',
  },
  multisample: {
    count: 4,
  },
  // Enable depth testing so that the fragment closest to the camera
  // is rendered in front.
  depthStencil: {
    depthWriteEnabled: true,
    depthCompare: 'less',
    format: 'depth24plus',
  },
});
```
Let’s focus on the parts that differ from the first chapter of Drawing a triangle:
- A buffer layout is configured for the vertex shader, describing the cube's vertex positions and UV coordinates.
```js
{
  arrayStride: cubeVertexSize,
  attributes: [
    {
      // position
      shaderLocation: 0,
      offset: cubePositionOffset,
      format: 'float32x4',
    },
    {
      // uv
      shaderLocation: 1,
      offset: cubeUVOffset,
      format: 'float32x2',
    },
  ],
}
```
Those familiar with WebGL should find the information above easy to understand.

- `arrayStride`: denotes the total size of all the information contained in one vertex. For example, our data is organized like this:
```js
// position          color           uv
 1, -1,  1, 1,      1, 0, 1, 1,     1, 1, // point1
-1, -1,  1, 1,      0, 0, 1, 1,     0, 1, // point2
-1, -1, -1, 1,      0, 0, 0, 1,     0, 0, // point3
```
We can see that each vertex of the cube contains position data, color data, and texture-coordinate data: 10 values in total. Since data of type float32 occupies exactly 4 bytes, one vertex takes 4 × 10 = 40 bytes, which is the size of a vertex.
- `attributes`: describes the individual attributes within the vertex data:
  - `shaderLocation`: which location in the shader this attribute corresponds to.
  - `offset`: the byte offset of the attribute within a vertex, so that the GPU can quickly address the corresponding data. The position data is at the beginning of each vertex, so its offset is 0; the UV data comes last, with 8 values before it, so its offset is 4 × 8 = 32 bytes (see the sketch after this list for how these constants can be derived).
  - `format`: the data type of this attribute. float32 is the component type, and the x4 suffix means the attribute consists of four float32 values.
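For concreteness, here is a minimal sketch of how the layout constants used above could be computed; the names mirror the earlier snippet, but this derivation is an assumption about how the sample defines them:

```js
const FLOAT_BYTES = Float32Array.BYTES_PER_ELEMENT; // 4 bytes
// 10 floats per vertex: position (4) + color (4) + uv (2)
const cubeVertexSize = 10 * FLOAT_BYTES;   // 40 bytes -> arrayStride
const cubePositionOffset = 0;              // position sits at the start
const cubeUVOffset = 8 * FLOAT_BYTES;      // 8 floats precede uv -> 32 bytes
```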
Now that the vertex-related data has been configured, has the fragment stage been modified? As it turns out, the fragment configuration in the PSO is unchanged.
Next, let's look at the primitive section:
```js
primitive: {
  topology: 'triangle-list',
  // Backface culling since the cube is a solid piece of geometry.
  // Faces pointing away from the camera will be occluded by faces
  // pointing toward the camera.
  cullMode: 'back',
},
```
Here we add the cullMode attribute, which specifies the culling mode; 'back' means the back faces of the model are culled. So which faces count as front and which as back?

Those familiar with WebGL will recall that a triangle whose vertices are wound counterclockwise is front-facing, and one whose vertices are wound clockwise is back-facing. The same default applies in WebGPU, so cullMode: 'back' means that triangles with clockwise-wound vertices are not drawn.
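In fact, which winding counts as the front face is itself configurable. A minimal sketch (the frontFace value shown is the default, so this is equivalent to the PSO above):

```js
primitive: {
  topology: 'triangle-list',
  frontFace: 'ccw', // counterclockwise triangles are front faces (the default)
  cullMode: 'back', // cull back faces, i.e. clockwise-wound triangles
},
```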
Finally, we noticed that a depthStencil configuration was added, which is used to configure depth and stencil testing. Here we describe only the depth-testing configuration.
```js
depthStencil: {
  depthWriteEnabled: true,
  depthCompare: 'less',
  format: 'depth24plus',
},
```
Its meaning is as follows:

- `depthWriteEnabled`: enables depth writes, meaning that the depth values of the rendered geometry will be written into the depth buffer during this pass.
- `depthCompare`: the comparison function used for the depth test. The depth value of the fragment to be drawn is compared with the value already in the depth buffer; if the comparison fails, the depth test fails and the fragment is not rendered. 'less' here means that a new fragment is kept only if its depth value is less than the value in the depth buffer.
- `format`: the data format of the depth values.
So far, the modification of PSO has also been completed.
Set the UniformBuffer
Next, it is time to set up the UniformBuffer so that we can pass uniform-type values, such as the MVP matrix, to the GPU.
Note:
This article does not discuss the MVP matrix in detail, nor does it provide any information on how to calculate the MVP matrix.
Create it as follows:
```js
const uniformBufferSize = 4 * 16; // a 4x4 matrix of float32 values = 64 bytes
const uniformBuffer = device.createBuffer({
  size: uniformBufferSize,
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});
```
Let’s look at passing uniform variables in WebGL:
```js
const uniformLocation = gl.getUniformLocation(program, 'modelViewProjectionMatrix');
gl.uniformMatrix4fv(uniformLocation, false, matrix);
```
We can see that passing a uniform variable in WebGL does not require creating an extra buffer: we obtain the location of the uniform variable in the GPU through one API call, and then assign a value directly through another.

In WebGPU, however, passing uniform variables requires creating a buffer, just like vertex data. Let's compare the vertex buffer we just created with our uniform buffer:
```js
// usage
GPUBufferUsage.VERTEX                             // vertex buffer
GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST  // uniform buffer

// mappedAtCreation
true   // vertex buffer
false  // uniform buffer (the default)
```
We can see that the two buffers differ in both usage and mappedAtCreation. When mappedAtCreation is true, we write the buffer's contents right after creation, as we did for the vertex buffer. We did not set mappedAtCreation to true for the uniform buffer, because we may need to update it every frame during rendering, so we will write its data in a different way (shown below).
Next, we create a bind group for the uniform buffer.
```js
const uniformBindGroup = device.createBindGroup({
  layout: pipeline.getBindGroupLayout(0),
  entries: [
    {
      binding: 0,
      resource: {
        buffer: uniformBuffer,
      },
    },
  ],
});
```
First, let's explain what a bind group is. createBindGroup returns a GPUBindGroup object, which defines a set of resources to be bound together and specifies how they should be used in the shader.
Let’s take a look at their properties:
- `layout`: indicates the layout position of the bind group in the shader. In the shader, [[binding(0), group(0)]] specifies which bind group and binding slot a variable uses.
- `entries`: indicates the resources being bound and their binding locations. For example, the shader declaration below corresponds to the uniformBuffer in the bind group we just created:

```wgsl
[[binding(0), group(0)]] var<uniform> uniforms : Uniforms;
```
OK, our buffers are now basically ready. Next, let's modify the rendering process.
Modify the rendering process
What we need to change in the rendering process is the configuration in renderPass
```js
const renderPassDescriptor: GPURenderPassDescriptor = {
  colorAttachments: [
    {
      view: context.getCurrentTexture().createView(), // Assigned later
      loadValue: { r: 0.5, g: 0.5, b: 0.5, a: 1.0 },
      storeOp: 'store',
    },
  ],
  depthStencilAttachment: {
    view: depthTexture.createView(),
    depthLoadValue: 1.0,
    depthStoreOp: 'store',
    stencilLoadValue: 0,
    stencilStoreOp: 'store',
  },
};
```
We can see that a depthStencilAttachment has been added, which provides the attachment and initial values for depth and stencil testing. If we had not set depthStencil in the PSO object, the depthStencilAttachment would be unnecessary; the two always come in pairs.
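One detail the snippet above takes for granted: the depthTexture has to be created by us. A minimal sketch under this article's assumptions (the format must match the PSO's depthStencil format, and since we enabled 4x MSAA the sample counts must match too; the canvas variable is assumed from the earlier chapters):

```js
const depthTexture = device.createTexture({
  size: [canvas.width, canvas.height],
  format: 'depth24plus',  // must match depthStencil.format in the PSO
  sampleCount: 4,         // must match multisample.count in the PSO
  usage: GPUTextureUsage.RENDER_ATTACHMENT,
});
```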
Another important step is to write values to the uniform Buffer we created earlier.
```js
device.queue.writeBuffer(
  uniformBuffer,
  0,
  transformationMatrix.buffer,
  transformationMatrix.byteOffset,
  transformationMatrix.byteLength
);
```
Note that if we want to use this API to write values to a buffer, the following conditions must be met:

- The buffer must be valid and usable:
  - the buffer object itself must be valid;
  - the buffer.device object must be valid;
  - buffer.device must be the same device that created the buffer.
- The buffer's `[[state]]` must be `unmapped`, i.e. the state in which it is available to the GPU.
- The buffer's `usage` must contain the `COPY_DST` flag.
- `bufferOffset`, converted to bytes, must be a multiple of 4.
- `bufferOffset` + `contentSize`, converted to bytes, must not exceed the buffer's `size`.
Finally, we need to set the uniform buffer's bind group on the render pass:
```js
passEncoder.setBindGroup(0, uniformBindGroup);
```
This last step is pretty easy, and the rest of the steps are much the same as when we drew the triangle.
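For completeness, here is a minimal sketch of the per-frame encoding, assuming the objects created above. The draw count of 36 assumes the usual 36-vertex cube data (6 faces × 2 triangles × 3 vertices), and endPass matches the generation of the API used throughout this article (newer builds call it end):

```js
const commandEncoder = device.createCommandEncoder();
const passEncoder = commandEncoder.beginRenderPass(renderPassDescriptor);
passEncoder.setPipeline(pipeline);
passEncoder.setBindGroup(0, uniformBindGroup);
// This attaches our vertex buffer to slot 0 of the PSO's buffers array.
passEncoder.setVertexBuffer(0, verticesBuffer);
passEncoder.draw(36);
passEncoder.endPass();
device.queue.submit([commandEncoder.finish()]);
```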
Modify WGSL
Vertex shader
```wgsl
struct Uniforms {
  modelViewProjectionMatrix : mat4x4<f32>;
};
[[binding(0), group(0)]] var<uniform> uniforms : Uniforms;

struct VertexOutput {
  [[builtin(position)]] Position : vec4<f32>;
  [[location(0)]] fragUV : vec2<f32>;
  [[location(1)]] fragPosition : vec4<f32>;
};

[[stage(vertex)]]
fn main([[location(0)]] position : vec4<f32>,
        [[location(1)]] uv : vec2<f32>) -> VertexOutput {
  var output : VertexOutput;
  output.Position = uniforms.modelViewProjectionMatrix * position;
  output.fragUV = uv;
  output.fragPosition = 0.5 * (position + vec4<f32>(1.0, 1.0, 1.0, 1.0));
  return output;
}
```
Here we use structs to make our code clearer. Here’s the data binding part:
[[binding(0), group(0)]]: these correspond to the layout and binding parameters of the bind group we created, so we know exactly where this variable's value comes from.
VertexOutput is our output into the rasterization stage of the render pipeline. For example, output.Position specifies the final vertex position, while output.fragUV and output.fragPosition are interpolated during rasterization; we can then read the interpolated results in the fragment shader.
Fragment shader
```wgsl
[[stage(fragment)]]
fn main(
  [[location(0)]] fragUV: vec2<f32>,
  [[location(1)]] fragPosition: vec4<f32>,
) -> [[location(0)]] vec4<f32> {
  return fragPosition;
}
```
We can see that the inputs here are [[location(0)]] fragUV and [[location(1)]] fragPosition, which need to correspond to the location in the VertexOutput of the vertex shader. Note that fragUV and fragPosition are interpolated results.
The final render looks like this:
Conclusion
Finally, let’s sum up what we have learned today
- Vertex data passing: we create a GPUBuffer via createBuffer and fill it with data.
- Uniform data passing: also done by creating a GPUBuffer via createBuffer and filling it with data, with a few differences from vertex data:
  - The uniform buffer's usage must include GPUBufferUsage.COPY_DST in addition to GPUBufferUsage.UNIFORM, because without COPY_DST we cannot write data into it through the writeBuffer API.
  - The vertex data buffer sets mappedAtCreation: true at creation time, which puts the buffer into the mapped state so that we can write data into it directly through a typed array.
  - A buffer of uniform type passes its data to the GPU through a bind group.
- We learned how to use structs in shader programs to make our code clearer, and how to pass variables between the vertex shader and the fragment shader.
- We also touched on depth/stencil testing and writing, but that is not the focus of this section, so it is fine to ignore it for now; you can even delete the related configuration items.
That's all for today's main content. If you found this article useful, please give it a thumbs up; your support is my motivation to keep updating.
References
- WebGPU Samples (github.com)
- WebGPU Shading Language (gpuweb.github.io)
- WebGPU (gpuweb.github.io)