“This is the 14th day of my participation in the November Gengwen Challenge. Check out the details: The Last Gengwen Challenge of 2021.”

Overview

In “Metal Framework: Using Metal to Draw View Content”, we saw how to set up an MTKView object and use a render pass to change the view’s contents, rendering a background color into the view. This example shows how to configure a render pipeline that, as part of a render pass, draws a simple 2D color triangle into the view. The sample provides a position and a color for each vertex, and the render pipeline renders the triangle, interpolating color values between the specified vertex colors.

In this example, you’ll learn how to write vertex and fragment functions, create a render pipeline state object, and finally encode the drawing commands.

Understand the Metal rendering pipeline

The render pipeline processes drawing commands and writes data to a render pass’s targets. A complete render pipeline consists of many stages, some of which are programmed with shaders and some of which are configured as fixed functions. The pipeline in this example consists of three stages: the vertex stage, the rasterization stage, and the fragment stage. The vertex and fragment stages are programmable and are written in the Metal Shading Language (MSL), while the rasterization stage is fixed-function and is configured directly through its parameters.

Rendering starts with a drawing command, which includes a vertex count and the type of primitive to render. Here is the drawing command for this example:


// Draw the triangle.

[renderEncoder drawPrimitives:MTLPrimitiveTypeTriangle

                  vertexStart:0

                  vertexCount:3];


The vertex stage processes the data for each vertex. After the vertices pass through the vertex stage, the render pipeline rasterizes the primitives, determining which pixels in the render target lie within a primitive’s boundaries (that is, which fragments the primitive covers). The fragment stage then determines the values written to those pixels of the render target.

Custom render pipeline

A vertex function generates data for a single vertex, and a fragment function generates data for a single fragment; you write these functions to specify how they work. You configure the stages of the pipeline based on what you want the pipeline to do and how it should do it.

Deciding which data to pass into the render pipeline and which data to pass to its later stages typically happens in three places:

  • The pipeline’s input, provided by the app and passed to the vertex stage.

  • The output of the vertex stage, passed to the rasterization stage.

  • The input of the fragment stage, provided by the app or generated by the rasterization stage.

In this example, the pipeline’s input data is the positions of the vertices and their colors. To demonstrate the kind of transformation a vertex function performs, the input coordinates are defined in a custom coordinate space, measured in pixels from the center of the view. These coordinates need to be converted into Metal’s coordinate system.

Declare an AAPLVertex structure that uses SIMD vector types to hold position and color data.


typedef struct

{

    vector_float2 position;

    vector_float4 color;

} AAPLVertex;


SIMD types are very common in the Metal Shading Language; in the app you use the corresponding types from the simd library. A SIMD type contains multiple channels of a particular data type, so declaring the position as vector_float2 means it holds two 32-bit floating-point values (the x and y coordinates). Colors are stored as vector_float4, so they have four channels: red, green, blue, and alpha.

In the app, the input data is specified with a constant array:

static const AAPLVertex triangleVertices[] =
{
    // 2D positions,   RGBA colors
    { {  250, -250 }, { 1, 0, 0, 1 } },
    { {  250,  250 }, { 0, 1, 0, 1 } },
    { {    0,  250 }, { 0, 0, 1, 1 } },
};

The vertex stage generates a vertex’s data and needs to provide a color and a transformed position. Declare a RasterizerData structure containing position and color values, using SIMD types.


struct RasterizerData

{

    // The [[position]] attribute of this member indicates that this value

    // is the clip space position of the vertex when this structure is

    // returned from the vertex function.

    float4 position [[position]];



    // Since this member does not have a special attribute, the rasterizer

    // interpolates its value with the values of the other triangle vertices

    // and then passes the interpolated value to the fragment shader for each

    // fragment in the triangle.

    float4 color;

};


The output position (described in detail below) must be defined as a float4. The color is declared the same way it was in the input data structure.

Metal doesn’t enforce any particular naming convention for fields in the structure, so you need to tell Metal which field in the rasterization data provides the position. The position field is annotated with the [[position]] attribute qualifier to declare that it holds the output position.

The fragment function simply passes the rasterization stage’s data on to later stages, so it doesn’t need any additional parameters.

Declare the vertex function

Use the vertex keyword to define a vertex function, declaring its input and output parameters.


vertex RasterizerData

vertexShader(uint vertexID [[vertex_id]],

             constant AAPLVertex *vertices [[buffer(AAPLVertexInputIndexVertices)]],

             constant vector_uint2 *viewportSizePointer [[buffer(AAPLVertexInputIndexViewportSize)]])


The first argument, vertexID, uses the [[vertex_id]] attribute qualifier, a Metal keyword. When the render command executes, the GPU calls the vertex function multiple times, once per vertex, passing a unique value each time.

The second argument, vertices, is an array of vertex data, using the AAPLVertex structure defined earlier.

To convert the positions into Metal’s coordinates, the function needs the size, in pixels, of the viewport the triangle is being drawn into, so that size is stored in the viewportSizePointer argument.

The second and third parameters have the [[buffer(n)]] attribute qualifier. By default, Metal assigns each parameter a slot in the argument table automatically; adding the [[buffer(n)]] qualifier to a buffer parameter tells Metal explicitly which slot to use. Declaring slots explicitly makes it easier to change the shader code without also changing the app code.

Write the vertex function

The vertex function must generate both fields of the output structure. Use the vertexID parameter to index the vertices array and read the vertex’s input data, and also read the viewport size.


float2 pixelSpacePosition = vertices[vertexID].position.xy;

// Get the viewport size and cast to float.

vector_float2 viewportSize = vector_float2(*viewportSizePointer);

A vertex function must provide position data in clip-space coordinates: 3D points expressed as four-dimensional homogeneous vectors (x, y, z, w). The rasterization stage takes the output position and divides the x, y, and z coordinates by w to produce a 3D point in normalized device coordinates. Normalized device coordinates are independent of the viewport size.

Normalized device coordinates use a left-handed coordinate system to map positions in the viewport. Primitives are clipped to a clip box in this coordinate system and then rasterized. The lower-left corner of the clip box is at (-1.0, -1.0) and the upper-right corner is at (1.0, 1.0). Positive z values point away from the camera (into the screen). The visible range of the z coordinate is between 0.0 (the near clipping plane) and 1.0 (the far clipping plane).

The following figure shows the transformation of the input coordinate system to the normalized device coordinate system.

Since this is a two-dimensional application that doesn’t need homogeneous coordinates, first write a default value to the output coordinate, with w set to 1.0 and the other coordinates set to 0.0. This means the (x, y) coordinates the vertex function generates are already in normalized device coordinate space. Divide the input position by half the viewport size to produce normalized device coordinates. Because this calculation uses SIMD types, both channels are computed with one line of code: the division is performed and the result placed in the x and y channels of the output position.

out.position = vector_float4(0.0, 0.0, 0.0, 1.0);
out.position.xy = pixelSpacePosition / (viewportSize / 2.0);

Finally, assign the color value to out.color as the return value.


out.color = vertices[vertexID].color;


Write the fragment function

The fragment stage can modify the render targets. The rasterizer determines which pixels of the render target the primitive covers; only fragments whose pixels lie inside the triangle are rendered.

A fragment function processes the rasterizer’s position information and calculates an output value for each render target. These fragment values are processed by later stages of the pipeline and eventually written to the render targets.

The fragment shader in this example receives the same parameters that were declared in the vertex shader’s output. Declare the fragment function with the fragment keyword. It takes a single input parameter, the same RasterizerData structure the vertex stage provides. Add the [[stage_in]] attribute qualifier to indicate that this parameter is generated by the rasterizer.


fragment float4 fragmentShader(RasterizerData in [[stage_in]])


If a fragment function writes to multiple render targets, it must declare a field for each of them. Because this example has a single render target, you can specify a floating-point vector directly as the function’s output: the color to write to the render target.

The rasterization stage computes values for each fragment’s parameters and calls the fragment function with them. It computes the color parameter as a blend of the colors at the triangle’s vertices: the closer a fragment is to a vertex, the more that vertex contributes to the final color.

Return the interpolated color as the output of the function.


return in.color;


Create a render pipeline state object

After writing the shader functions, create the render pipeline that uses them. First, obtain an MTLFunction object for each shader function from the MTLLibrary.


id<MTLLibrary> defaultLibrary = [_device newDefaultLibrary];


id<MTLFunction> vertexFunction = [defaultLibrary newFunctionWithName:@"vertexShader"];

id<MTLFunction> fragmentFunction = [defaultLibrary newFunctionWithName:@"fragmentShader"];


Next, create an MTLRenderPipelineState object, configuring the pipeline with an MTLRenderPipelineDescriptor.


MTLRenderPipelineDescriptor *pipelineStateDescriptor = [[MTLRenderPipelineDescriptor alloc] init];

pipelineStateDescriptor.label = @"Simple Pipeline";

pipelineStateDescriptor.vertexFunction = vertexFunction;

pipelineStateDescriptor.fragmentFunction = fragmentFunction;

pipelineStateDescriptor.colorAttachments[0].pixelFormat = mtkView.colorPixelFormat;



_pipelineState = [_device newRenderPipelineStateWithDescriptor:pipelineStateDescriptor
                                                         error:&error];




In addition to specifying the vertex and fragment functions, you also specify the pixel format of the render target. A pixel format (MTLPixelFormat) defines the memory layout of pixel data; for simple formats, this covers the number of bytes per pixel, the number of channels stored in a pixel, and the bit layout of those channels. To render correctly, the render pipeline state must use a pixel format compatible with the one specified by the render pass. Because this example has only one render target, and the view provides it, the view’s pixel format is copied into the render pipeline descriptor.

When Metal creates the render pipeline state object, it configures the pipeline to convert the fragment function’s output into the render target’s pixel format. To target a different pixel format, you create a different pipeline state object; this lets you reuse the same shaders across multiple pipelines that target different pixel formats.

Set the viewport

Once you have the render pipeline state object, you can use the render command encoder to render the triangle. First, set the viewport to tell Metal which part of the render target to draw into.

// Set the region of the drawable to draw into.
[renderEncoder setViewport:(MTLViewport){0.0, 0.0, _viewportSize.x, _viewportSize.y, 0.0, 1.0}];

Set the render pipeline state

Specify the render pipeline state object for the render command encoder.


[renderEncoder setRenderPipelineState:_pipelineState];


Send parameter data to the vertex function

Data is usually passed to shaders in buffers (MTLBuffer). However, when only a small amount of data needs to be passed to the vertex function, it can be copied directly into the command buffer.

This example copies the data for both parameters into the command buffer: the vertex data is copied from the array defined earlier, and the viewport data is copied from the same variable used when setting the viewport. The fragment function uses only the data it receives from the rasterizer, so it takes no parameters.


[renderEncoder setVertexBytes:triangleVertices

                       length:sizeof(triangleVertices)

                      atIndex:AAPLVertexInputIndexVertices];



[renderEncoder setVertexBytes:&_viewportSize

                       length:sizeof(_viewportSize)

                      atIndex:AAPLVertexInputIndexViewportSize];


Encode the drawing command

Specify the primitive type, starting index, and number of vertices. When the triangle is rendered, the vertex function is called with vertexID values of 0, 1, and 2.


// Draw the triangle.

[renderEncoder drawPrimitives:MTLPrimitiveTypeTriangle

                  vertexStart:0

                  vertexCount:3];


As when drawing to the screen with Metal, you end the encoding process and submit the command buffer. The difference is that you can use the same set of steps to encode more render commands. The commands execute in the order they were specified to produce the final rendered image. (For performance, the GPU may process commands, or even parts of commands, in parallel, as long as the end result appears to have been rendered sequentially.)

Color interpolation

In this example, color values are interpolated across the triangle. Sometimes you want a value generated by one vertex to remain constant across the whole primitive; to do this, specify the flat attribute qualifier on that output of the vertex function. In the sample project, try this by adding the [[flat]] qualifier to the color field.


float4 color [[flat]];


The render pipeline then uses the color value of the triangle’s first vertex (called the provoking vertex) and ignores the colors of the other two vertices. You can mix flat-shaded and interpolated values simply by adding or removing the [[flat]] qualifier on the vertex function’s outputs.

Conclusion

This article showed how to configure the render pipeline, write vertex and fragment functions, create a render pipeline state object, and finally encode the drawing commands that draw a simple 2D color triangle in the view.

Download the sample code for this article