Basics
Terminology review
- Vertex: Vertex data is a set of vertices. A vertex is a collection of data associated with a point in 3D space, represented by vertex attributes, which can contain any data we want. For simplicity, each vertex in our example consists of a 3D position and a color value.
- Primitive: Now that we have coordinates and colors, we need to specify what kind of shape the data represents. It is usually a point, line, or triangle.
- Vertex shader: Vertex shaders process vertex attributes and perform coordinate transformations.
- Rasterization: Maps primitives to the corresponding pixels on the final screen, generating fragments for the fragment shader to consume.
- Fragment shader: Its main purpose is to calculate the final color of each pixel.
- Depth testing and blending: Depth testing determines whether a pixel is in front of or behind other objects, and therefore whether it should be discarded. Blending is needed because pixels can be transparent; when objects overlap, the final pixel color must be computed from both.
- Texture: When we say texture, we usually mean a two-dimensional image that is attached to geometry like a sticker (via sampling) to make the scene look the way we want. Physically, though, a texture is a contiguous region of GPU memory that stores image data.
Coordinate system
Metal defines several coordinate systems for converting graphic data at different stages of the rendering pipeline. A four-dimensional homogeneous vector (x, y, z, w) represents a three-dimensional point in clip space. The vertex shader generates clip-space points, and Metal converts x, y, and z from clip coordinates to normalized device coordinates by dividing them by the w component.
- Normalized device coordinates (NDC): Metal uses a left-handed coordinate system, while OpenGL's is right-handed. NDC describes positions in a window independently of the screen size. In the (x, y) plane, the lower left corner is (-1.0, -1.0) and the upper right corner is (1.0, 1.0). +Z points into the screen, and visible depth lies between z = 0.0 (near plane) and z = 1.0 (far plane). The Metal render pipeline clips primitives to this cube.
- Screen space coordinates: The rasterization stage converts normalized device coordinates to screen space coordinates. The (x, y) coordinates in this space are measured in pixels.
- Texture coordinates: Texture coordinates are similar to screen space coordinates. For a 2D texture, normalized texture coordinates are x and y values between 0.0 and 1.0. (0.0, 0.0) corresponds to the first byte of image data (the top-left corner of the image); (1.0, 1.0) corresponds to the last byte (the bottom-right corner).
The Metal API
MTLCommandQueue
Created from the device, the command queue is the queue for all drawing done by the app. A command buffer stores the commands for one rendering pass, that is, all the information that pass needs, until the commands are committed to the GPU for execution. The command queue is used to create and organize MTLCommandBuffers; it can hold multiple command buffers and ensures the commands are sent to the GPU in order.
A render pass is a sequence of commands that draws into a set of render targets (textures). We configure a render pass with an MTLRenderPassDescriptor.
The command encoder, MTLCommandEncoder, takes the high-level instructions we describe, encodes them into commands the GPU can understand, and writes them into the command buffer. The MTLRenderCommandEncoder we use here is the encoder for graphics rendering tasks.
The render pipeline processes draw commands and writes data to the render pass's render targets. Shader functions are attached to the render pipeline. Its corresponding object is MTLRenderPipelineState, which we configure with an MTLRenderPipelineDescriptor object.
When a texture has been rendered, it is not automatically shown on screen. In Metal, a texture is displayed through a Drawable object.
Draw triangles using the render pipeline
The official demo
Understanding Metal
Render pipeline
The render pipeline is the process that handles draw commands and writes data to the render pass's targets. It consists of many stages: some are programmable shaders, such as the vertex and fragment shaders, which let you specify exactly how your model is rendered; others are fixed or merely configurable.
This example focuses on the three main stages of the rendering pipeline: the vertex shader, rasterization, and the fragment shader. We can write the vertex and fragment shader functions in MSL, the Metal Shading Language; rasterization is a built-in operation. Rendering begins with a draw command that specifies the number of vertices and the type of primitive to draw. For example, the draw command in this example:
// Draw the triangle
[renderEncoder drawPrimitives:MTLPrimitiveTypeTriangle vertexStart:0 vertexCount:3];
Vertex shaders provide data for each vertex, and when enough vertices have been processed, the rendering pipeline rasterizes primitives to determine which pixels in the rendered object are inside the primitives’ boundaries. The fragment shader determines the color of each of these pixels.
What happens to the data in the render pipeline
We need to feed data into the render pipeline and control how it flows through. To demonstrate the transformation in the vertex shader, we provide a set of coordinates relative to the view's center point, expressed in pixels, which must then be converted to Metal's coordinate system.
Declare an AAPLVertex structure that stores positions and colors with SIMD vectors.
typedef struct
{
    vector_float2 position;
    vector_float4 color;
} AAPLVertex;
SIMD types are very common in Metal Shading Language and you should use them in your own projects as well. A SIMD type holds multiple channels of a particular scalar type, so declaring the position as vector_float2 means it contains two 32-bit float values (the x and y coordinates). Colors are stored as vector_float4, with four components: red, green, blue, and alpha.
In this case, our input data is constant:
static const AAPLVertex triangleVertices[] =
{
    // 2D positions,    RGBA colors
    { {  250,  -250 }, { 1, 0, 0, 1 } },
    { { -250,  -250 }, { 0, 1, 0, 1 } },
    { {    0,   250 }, { 0, 0, 1, 1 } },
};
The vertex shader generates vertex data, so it outputs a color value and the converted position value. We declare a RasterizerData structure for this.
struct RasterizerData
{
    // In the vertex shader's return value, the [[position]] attribute
    // qualifier marks this value as the vertex's position in clip space.
    float4 position [[position]];
    // Because this member has no qualifier, the rasterizer interpolates it
    // across the triangle's vertices and passes the result to each fragment.
    float4 color;
};
The output position must be declared as float4. The color value is passed through unchanged.
You need to tell Metal which value passed to rasterization provides the position data, since Metal enforces no particular naming convention. Declaring the position field with the [[position]] qualifier marks it as the position value.
Vertex shader method
Declare it with the vertex keyword:
vertex RasterizerData vertexShader(uint vertexID [[vertex_id]],
constant AAPLVertex *vertices [[buffer(AAPLVertexInputIndexVertices)]],
constant vector_uint2 *viewportSizePointer [[buffer(AAPLVertexInputIndexViewportSize)]])
The first argument, vertexID, uses the [[vertex_id]] attribute qualifier, another Metal keyword. When a draw command executes, the GPU calls the vertex shader function multiple times, once per vertex, with a unique vertexID each time.
The second argument, vertices, is an array of vertices of the previously declared AAPLVertex type.
Converting vertices to Metal's coordinate system requires the size of the viewport (the pixel size of the render target), which is stored in the third argument, viewportSizePointer.
float2 pixelSpacePosition = vertices[vertexID].position.xy;
// Get the viewport size and cast to float.
vector_float2 viewportSize = vector_float2(*viewportSizePointer);
Get the current vertex's data from the vertex array using vertexID. The vertex shader must output clip coordinates: a four-dimensional homogeneous vector (x, y, z, w) representing a 3D position. Rasterization divides the x, y, and z coordinates by the w value to produce 3D normalized device coordinates. Because our example is 2D, initialize the output coordinate with w set to 1 and everything else set to 0, then divide the x and y coordinates by half the viewport size to convert them to normalized device coordinates.
out.position = vector_float4(0.0, 0.0, 0.0, 1.0);
out.position.xy = pixelSpacePosition / (viewportSize / 2.0);
out.color = vertices[vertexID].color;
Fragment shader method
Rasterization determines which pixels of the render target lie inside the primitive; only pixels whose center point is inside the triangle are rendered. The fragment shader receives the rasterizer's output and calculates a color value for each of these positions. These fragment values are processed by the rest of the render pipeline and eventually written to the render target. A fragment shader function is declared with the fragment keyword. It takes a single argument, the RasterizerData structure produced by the vertex shader; the [[stage_in]] qualifier indicates that this value is generated by rasterization.
fragment float4 fragmentShader(RasterizerData in [[stage_in]]) {
// Return the interpolated color.
return in.color;
}
The rasterization stage generates the parameters for each fragment and calls the fragment shader function. The color rasterization produces for each fragment is a blend of the three vertex colors: the closer the fragment is to a vertex, the greater that vertex's weight in the final color.
Create the render pipeline State object
Now that the shader functions are complete, we configure the render pipeline to use them. First, obtain each shader function as an MTLFunction. Then create the MTLRenderPipelineState object, using an MTLRenderPipelineDescriptor to configure the render pipeline:
id<MTLLibrary> defaultLibrary = [_device newDefaultLibrary];
id<MTLFunction> vertexFunction = [defaultLibrary newFunctionWithName:@"vertexShader"];
id<MTLFunction> fragmentFunction = [defaultLibrary newFunctionWithName:@"fragmentShader"];
MTLRenderPipelineDescriptor *pipelineStateDescriptor = [[MTLRenderPipelineDescriptor alloc] init];
pipelineStateDescriptor.label = @"Simple Pipeline";
pipelineStateDescriptor.vertexFunction = vertexFunction;
pipelineStateDescriptor.fragmentFunction = fragmentFunction;
pipelineStateDescriptor.colorAttachments[0].pixelFormat = mtkView.colorPixelFormat;
NSError *error = nil;
_pipelineState = [_device newRenderPipelineStateWithDescriptor:pipelineStateDescriptor error:&error];
The pixel format specified by the render pipeline state must be compatible with the one specified by the render pass commands, which in this case is the pixel format used by mtkView.
Set the Render Pass
Once the render pipeline is created, we configure it in the render pass, along with the vertex data and the viewport size.
[renderEncoder setRenderPipelineState:_pipelineState];
[renderEncoder setVertexBytes:triangleVertices length:sizeof(triangleVertices) atIndex:AAPLVertexInputIndexVertices];
[renderEncoder setVertexBytes:&_viewportSize length:sizeof(_viewportSize) atIndex:AAPLVertexInputIndexViewportSize];
Displays the drawing result to the screen
This method tells Metal to display the final drawing result via Core Animation once the render command finishes executing.
[commandBuffer presentDrawable:drawable];
Draw images with textures
We can use textures to draw and manipulate images in Metal. A texture is made up of texels (texture elements, usually pixels); the structure of a texel is determined by the texture's type. A texture is usually a 2D image, though 1D and 3D textures also exist. The texture in this example is a 2D array of color data holding an image. The fragment shader generates a color for each fragment by sampling the texture.
We create an MTLTexture object, which defines the texture's format, including its size, the layout of its texels, the number of texels, and how those texels are composed. Once created, the texture's format cannot be changed, but we can change its contents by copying data into the texture.
Metal itself provides no API for loading an image from a file into a texture. Metal only creates textures and provides methods to copy data into and out of them. We must implement our own image-loading code or rely on other frameworks such as MetalKit, Image I/O, UIKit, or AppKit. This example implements a custom texture loader.
Load and change the image format
You need to create a texture and update its contents yourself when:
- the image is in a special format;
- the texture's contents must be generated at runtime;
- texture data received from a server must be updated dynamically.
In this example, the AAPLImage class loads and parses image data from a TGA file, converting the pixel data into a format Metal can understand. We then create a texture from the image's metadata and use the pixel data as its contents.
Metal requires every texture to have a specific pixel format, MTLPixelFormat, which describes the layout of the texture's pixels. This example uses MTLPixelFormatBGRA8Unorm: each pixel is 32 bits, with 8 bits per channel, in the order blue, green, red, alpha.
Create a texture
MTLTextureDescriptor *textureDescriptor = [[MTLTextureDescriptor alloc] init];
textureDescriptor.pixelFormat = MTLPixelFormatBGRA8Unorm;
textureDescriptor.width = image.width;
textureDescriptor.height = image.height;
id<MTLTexture> texture = [_device newTextureWithDescriptor:textureDescriptor];
Configure attributes such as the texture's size and pixel format with an MTLTextureDescriptor object, then call newTextureWithDescriptor: to create the texture. Metal creates the MTLTexture object and allocates memory for the texture data, but does not initialize it, so next we need to fill in the texture's contents.
Fill in the image data for the texture
Metal manages the texture's memory, so we cannot obtain a pointer to the texture data and write to it directly; instead we call a method on the MTLTexture object. Here we need to copy the data from the AAPLImage object into the texture. An MTLRegion specifies which part of the texture to update; in this case, the entire texture.
MTLRegion region = { { 0, 0, 0 }, { image.width, image.height, 1 } };
Image data is read row by row, so we need to specify the stride in bytes of each row.
NSUInteger bytesPerRow = 4 * image.width;
[texture replaceRegion:region mipmapLevel:0 withBytes:image.data.bytes bytesPerRow:bytesPerRow];
Map textures to primitives
When applying a texture to a primitive, the fragment shader needs to know which texture data to use, so we use texture coordinates: floating-point positions in the texture mapped onto the primitive. As mentioned above, texture coordinates start at the top left, with x and y values ranging from 0.0 to 1.0. We therefore add a texture-coordinate field to the vertex shader's input.
typedef struct
{
    // Positions in pixel space. A value of 100 indicates 100 pixels from the origin (center).
    vector_float2 position;
    // 2D texture coordinate
    vector_float2 textureCoordinate;
} AAPLVertex;
Add the texture coordinates to the vertex data.
static const AAPLVertex quadVertices[] =
{
    // Pixel positions, Texture coordinates
    { {  250,  -250 }, { 1.f, 1.f } },
    { { -250,  -250 }, { 0.f, 1.f } },
    { { -250,   250 }, { 0.f, 0.f } },

    { {  250,  -250 }, { 1.f, 1.f } },
    { { -250,   250 }, { 0.f, 0.f } },
    { {  250,   250 }, { 1.f, 0.f } },
};
Add texture coordinates to the RasterizerData structure to pass to the fragment shader.
struct RasterizerData {
float4 position [[position]];
float2 textureCoordinate;
};
In the vertex shader, the texture coordinate is passed through to the rasterization stage, which interpolates it across the primitive and delivers the result to each fragment.
out.textureCoordinate = vertexArray[vertexID].textureCoordinate;
Sampling texture
The fragment shader method’s argument colorTexture is a reference to the texture object MTLTexture, which is needed for sampling.
fragment float4 samplingShader(RasterizerData in [[stage_in]],
                               texture2d<half> colorTexture [[texture(AAPLTextureIndexBaseColor)]])
Sample the texture data using the built-in sample() method. sample() takes two parameters: a sampler describing how to sample, and the texture coordinate to sample at. It computes the final color from one or more texels in the texture.
A sampling method must be specified for when the drawn region and the texture size differ. mag_filter selects the filter used when the drawn area is larger than the texture, and min_filter the filter used when it is smaller. Setting them to linear makes the sampler blend the colors of the texels surrounding the texture coordinate, which makes the image look smoother.
constexpr sampler textureSampler (mag_filter::linear, min_filter::linear);
// Sample the texture to obtain a color
const half4 colorSample = colorTexture.sample(textureSampler, in.textureCoordinate);
Customize a Metal View
Although MetalKit's MTKView makes it quick to get Metal features working, sometimes you need full control over how your Metal content is rendered. This example demonstrates a custom Metal view that inherits from UIView and uses a CAMetalLayer to display the rendered content.
Render a custom view
A CAMetalLayer maintains a pool of drawable objects (CAMetalDrawable). At any given time, one object from the pool provides the layer's contents. To change what the layer displays, obtain a drawable from the layer, render into it, and then present it. When rendering a frame, call the layer's nextDrawable method to get a drawable; the drawable provides the texture that Core Animation displays.
id<CAMetalDrawable> currentDrawable = [metalLayer nextDrawable];
// Skip rendering this frame if no drawable is available
if (!currentDrawable)
{
    return;
}
_drawableRenderDescriptor.colorAttachments[0].texture = currentDrawable.texture;
Implement repeated rendering
For repeated rendering, this example uses a CADisplayLink. CADisplayLink is a timer that lets us draw to the screen in step with the display's refresh rate. Once a CADisplayLink is added to a run loop in a particular mode, its selector is invoked every time the screen refreshes.
- (void)setupCADisplayLinkForScreen:(UIScreen *)screen
{
    [self stopRenderLoop];
    _displayLink = [screen displayLinkWithTarget:self selector:@selector(render)];
    _displayLink.paused = self.paused;
    _displayLink.preferredFramesPerSecond = 60;
    // The display link must be added to a run loop before it fires.
    [_displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSRunLoopCommonModes];
}
Debug shader
Xcode can capture a frame, let you modify the shader code, and preview the result in real time. Start shader debugging by capturing a frame.
Debug vertex shader methods
After the capture succeeds, switch to Group by Pipeline State in the left navigation bar and select Geometry to view all triangle primitives. Select a triangle and click the Debug button to open the corresponding vertex shader function.
Debug fragment shader methods
Go to Attachments in the left navigation bar, click the zoom button in the toolbar, adjust the position to select a pixel, and click the Debug button; Xcode displays the fragment shader function that drew that pixel, and you can step through it line by line. Select a line of code in the call list on the left and the shader debugger executes it in real time. The variable view on the right shows each variable's value at the current point in time, which is useful when debugging loops. You can also inspect variables in more detail: clicking the square button to the right of a variable's value makes Xcode display, for example, a visual swatch for a color value, or the internal structure of a nested type. You can also change the shader code and click the refresh button in the toolbar to update the current frame and see the result.