This article uses Metal to render a video file. The general idea is as follows:

  • Use AVFoundation utilities to decode the video and obtain the video track output object readerVideoTrackOutput
  • Each time the draw method is called, fetch the CMSampleBuffer sample for the next frame of the video
  • Generate two texture objects from the CMSampleBuffer and pass them to the fragment shader function
  • In the fragment shader function, read the color values of the textures in YUV format
  • Using the defined conversion matrix, convert YUV to RGB and display the result

1. Prepare

1.1 Type definition header file

Compared with the previous header file, this one adds a ZConvertMatrix structure, which is used during the YUV->RGB conversion.

ZFragmentBufferIndex is the fragment function buffer index: an MTLBuffer is used to pass the ZConvertMatrix structure to the fragment function, and this index value tells both sides which buffer slot carries it.

ZFragmentTextureIndex is the fragment function texture index: two textures are needed in this case, so two texture indexes are defined so each texture can be fetched at the matching slot.

```c
typedef struct {
    // Vertex coordinates (x, y, z, w)
    vector_float4 position;
    // Texture coordinates (s, t)
    vector_float2 textureCoordinate;
} ZVertex;

typedef struct {
    // Conversion matrix
    matrix_float3x3 matrix;
    // Offset
    vector_float3 offset;
} ZConvertMatrix;

// Vertex function input index
typedef enum ZVertexInputIndex {
    ZVertexInputIndexVertices = 0,
} ZVertexInputIndex;

// Fragment function buffer index
typedef enum ZFragmentBufferIndex {
    ZFragmentBufferIndexMatrix = 0,
} ZFragmentBufferIndex;

// Fragment function texture index
typedef enum ZFragmentTextureIndex {
    ZFragmentTextureIndexTextureY  = 0,
    ZFragmentTextureIndexTextureUV = 1,
} ZFragmentTextureIndex;
```

1.2 MTKView

The MTKView creation process should be familiar by now, so here is the code without further commentary:

```swift
func setupMTKView() {
    //1. Create the MTKView
    mtkView = MTKView(frame: view.bounds)
    // Get the default device
    mtkView.device = MTLCreateSystemDefaultDevice()
    // Set self.view = self.mtkView
    view = mtkView
    // Set the delegate
    mtkView.delegate = self
    // Get the viewport size
    viewportSize = vector_int2(Int32(mtkView.drawableSize.width),
                               Int32(mtkView.drawableSize.height))
    mtkView.preferredFramesPerSecond = 24
}
```

1.3 setupPipeline

Set up the render pipeline:

```swift
func setupPipeline() {
    //1. Load the .metal file
    /*
     makeDefaultLibrary() loads the default .metal library; to read from a
     specified .metal file use newLibraryWithFile:error:, or load from binary
     data with newLibraryWithData:error:
     */
    let defaultLibrary = mtkView.device?.makeDefaultLibrary()
    let vertexFunction = defaultLibrary?.makeFunction(name: "vertexShader")
    let fragmentFunction = defaultLibrary?.makeFunction(name: "fragmentShader")

    //2. Render pipeline descriptor
    let pipelineStateDescriptor = MTLRenderPipelineDescriptor()
    pipelineStateDescriptor.vertexFunction = vertexFunction
    pipelineStateDescriptor.fragmentFunction = fragmentFunction
    // Set the color format
    pipelineStateDescriptor.colorAttachments[0].pixelFormat = mtkView.colorPixelFormat

    //3. Create the render pipeline state
    pipelineState = try! mtkView.device?.makeRenderPipelineState(descriptor: pipelineStateDescriptor)

    //4. Create the command queue
    commandQueue = mtkView.device?.makeCommandQueue()
}
```

1.4 Vertex Data

```swift
// Set up the vertex data
func setupVertex() {
    //1. Vertex coordinates (x, y, z, w) and texture coordinates (s, t)
    let quadVertices = [
        ZVertex(position: [ 1.0, -1.0, 0.0, 1.0], textureCoordinate: [1.0, 1.0]),
        ZVertex(position: [-1.0, -1.0, 0.0, 1.0], textureCoordinate: [0.0, 1.0]),
        ZVertex(position: [-1.0,  1.0, 0.0, 1.0], textureCoordinate: [0.0, 0.0]),
        ZVertex(position: [ 1.0, -1.0, 0.0, 1.0], textureCoordinate: [1.0, 1.0]),
        ZVertex(position: [-1.0,  1.0, 0.0, 1.0], textureCoordinate: [0.0, 0.0]),
        ZVertex(position: [ 1.0,  1.0, 0.0, 1.0], textureCoordinate: [1.0, 0.0]),
    ]

    //2. Create the vertex buffer
    vertices = mtkView.device?.makeBuffer(bytes: quadVertices,
                                          length: MemoryLayout<ZVertex>.size * quadVertices.count,
                                          options: .storageModeShared)

    //3. Save the vertex count
    numVertices = quadVertices.count
}
```

1.5 Transformation Matrix

The YUV->RGB calculation formula was introduced in the previous article. Three conversion matrices are defined below. During conversion, the YUV color value is multiplied by the matrix; the matrix entries act as the fixed coefficients of the formula, each multiplying a different YUV component to yield the final RGB value. Define the matrices:

```swift
//1. Conversion matrices
// BT.601, the standard for SDTV
let kColorConversion601DefaultMatrix = matrix_float3x3(columns: (
    simd_float3(1.164,  1.164, 1.164),
    simd_float3(0.0,   -0.392, 2.017),
    simd_float3(1.596, -0.813, 0.0)))

// BT.601 full range
let kColorConversion601FullRangeMatrix = matrix_float3x3(columns: (
    simd_float3(1.0,    1.0,   1.0),
    simd_float3(0.0,   -0.343, 1.765),
    simd_float3(1.4,   -0.711, 0.0)))

// BT.709, the standard for HDTV
let kColorConversion709DefaultMatrix = matrix_float3x3(columns: (
    simd_float3(1.164,  1.164, 1.164),
    simd_float3(0.0,   -0.213, 2.112),
    simd_float3(1.793, -0.533, 0.0)))
```

Define the offset. The offset is fixed.

YUV has several variants: Y′UV, YUV, YCbCr, YPbPr, and so on. YCbCr is the one mainly used on computers, so YUV here refers to YCbCr (hereafter simply YUV); Cb represents the blue chroma offset and Cr the red chroma offset.

```swift
//2. Offset
let kColorConversion601FullRangeOffset = vector_float3(-(16.0 / 255.0), -0.5, -0.5)
```
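As a quick sanity check (an illustration, not Demo code), applying the full-range matrix and offset on the CPU expands to the familiar scalar formulas; matrix_float3x3 is column-major, so the product is c0·y′ + c1·u′ + c2·v′:

```swift
import simd

// Worked check: expand rgb = matrix * (yuv + offset)
let yuv = simd_float3(0.5, 0.5, 0.5)                      // an arbitrary sample value
let shifted = yuv + kColorConversion601FullRangeOffset    // (y', u', v')
let rgb = kColorConversion601FullRangeMatrix * shifted
// The column-major multiply expands to:
//   R = y' + 1.400 * v'
//   G = y' - 0.343 * u' - 0.711 * v'
//   B = y' + 1.765 * u'
```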

Create the transformation matrix structure and store the data into the transformation matrix buffer

```swift
//3. Create the conversion matrix structure
/*
 Available matrices:
 kColorConversion601DefaultMatrix
 kColorConversion601FullRangeMatrix
 kColorConversion709DefaultMatrix
 */
var matrix = ZConvertMatrix()
matrix.matrix = kColorConversion601FullRangeMatrix
matrix.offset = kColorConversion601FullRangeOffset

//4. Create the conversion matrix buffer
convertMatrix = mtkView.device?.makeBuffer(bytes: &matrix,
                                           length: MemoryLayout<ZConvertMatrix>.size,
                                           options: .storageModeShared)
```

2. The utility class

The utility class is responsible for loading the video resource from the given path and providing the image sample data for each video frame when needed.

  • Create an AVAssetReader from the path. AVAssetReader obtains decoded audio and video data from the asset's raw data

  • Clients read the media data of an asset track by adding an instance of AVAssetReaderTrackOutput to the AVAssetReader via its addOutput: method. AVAssetReaderOutput has three concrete output types:

    • AVAssetReaderTrackOutput: reads media samples from a single track of the asset reader's asset
    • AVAssetReaderAudioMixOutput: reads audio samples that result from mixing one or more audio tracks
    • AVAssetReaderVideoCompositionOutput: reads video frames composited from one or more video tracks
  • The image sample data (CMSampleBuffer) for each video frame can then be obtained from the AVAssetReaderTrackOutput; a sketch of this setup follows
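A minimal sketch of that setup, assuming the same property names used later in readBuffer (assetReader, readerVideoTrackOutput) and an NV12 output format to match the two-plane textures; the file name and error handling are placeholders, and the exact Demo code may differ:

```swift
import AVFoundation

// A sketch of setupAsset() under the assumptions above.
func setupAsset() {
    guard let url = Bundle.main.url(forResource: "video", withExtension: "mp4") else { return }
    let asset = AVAsset(url: url)
    guard let videoTrack = asset.tracks(withMediaType: .video).first else { return }

    //1. Create the reader that vends decoded data from the asset
    assetReader = try? AVAssetReader(asset: asset)

    //2. Ask for NV12 output: one Y plane plus one interleaved UV plane,
    //   matching the two textures sampled in the fragment shader
    let outputSettings: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String:
            kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
    ]
    readerVideoTrackOutput = AVAssetReaderTrackOutput(track: videoTrack,
                                                      outputSettings: outputSettings)

    //3. Attach the output and start reading
    assetReader?.add(readerVideoTrackOutput)
    assetReader?.startReading()
}
```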

3. Draw

The draw method does two main things:

  • Get the image sample data sampleBuffer from the reader
  • Generate two textures, textureY and textureUV, from the sampleBuffer (a sketch of the full method follows this list)
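Putting the two steps together, a minimal sketch of draw(in:) might look like this; it assumes the properties set up above plus a makeTextures(from:) helper like the one sketched in section 3.2, and the Demo may differ in details:

```swift
import MetalKit

// A sketch of draw(in:) under the assumptions above.
func draw(in view: MTKView) {
    guard let sampleBuffer = readBuffer(),                            // 3.1
          let (textureY, textureUV) = makeTextures(from: sampleBuffer), // 3.2
          let descriptor = view.currentRenderPassDescriptor,
          let drawable = view.currentDrawable,
          let commandBuffer = commandQueue.makeCommandBuffer(),
          let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: descriptor)
    else { return }

    encoder.setRenderPipelineState(pipelineState)
    // Pass the vertex data
    encoder.setVertexBuffer(vertices, offset: 0,
                            index: Int(ZVertexInputIndexVertices.rawValue))
    // Pass the two textures to the fragment shader
    encoder.setFragmentTexture(textureY, index: Int(ZFragmentTextureIndexTextureY.rawValue))
    encoder.setFragmentTexture(textureUV, index: Int(ZFragmentTextureIndexTextureUV.rawValue))
    // Pass the conversion matrix buffer
    encoder.setFragmentBuffer(convertMatrix, offset: 0,
                              index: Int(ZFragmentBufferIndexMatrix.rawValue))
    // Draw the two triangles of the quad
    encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: numVertices)
    encoder.endEncoding()

    commandBuffer.present(drawable)
    commandBuffer.commit()
}
```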

3.1 readBuffer

  • Check whether readerVideoTrackOutput was created successfully, i.e. whether an image can be output
  • Copy the next frame's sample buffer
  • If reading has completed, re-initialize to prepare for the next read
  • Return the sampleBuffer
```swift
func readBuffer() -> CMSampleBuffer? {
    // Lock
    lock.lock()
    var sampleBuffer: CMSampleBuffer?

    //1. Check whether readerVideoTrackOutput was created successfully
    if readerVideoTrackOutput != nil {
        // Copy the next sample buffer
        sampleBuffer = readerVideoTrackOutput.copyNextSampleBuffer()
    } else {
        lock.unlock()
        return nil
    }

    //2. If the assetReader exists and its status is .completed,
    //   clear readerVideoTrackOutput / assetReader and re-initialize them
    if assetReader != nil && assetReader.status == AVAssetReader.Status.completed {
        print(Date().timeIntervalSince1970)
        print("customInit")
        readerVideoTrackOutput = nil
        assetReader = nil
        setupAsset()
    }

    lock.unlock()

    //3. Return the sampleBuffer
    return sampleBuffer
}
```

3.2 Generating textures

  • First create a CVMetalTextureCache, a texture cache provided by CoreVideo that can be shared between the CPU and GPU
  • Get the CVPixelBuffer from the CMSampleBuffer
  • Using the CVMetalTextureCache, generate a CVMetalTexture object from the CVPixelBuffer that lives in GPU-accessible memory
  • From the CVMetalTexture object, obtain the MTLTexture that Metal can use
  • Pass the generated MTLTexture objects to the fragment shader function

For the complete code, see the Demo; a sketch of these steps follows.
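A minimal sketch, assuming an NV12 pixel buffer and a textureCache property created earlier with CVMetalTextureCacheCreate; the function name and structure here are illustrative, not the Demo's exact code:

```swift
import CoreVideo
import Metal

// A sketch of the texture generation under the assumptions above.
func makeTextures(from sampleBuffer: CMSampleBuffer) -> (y: MTLTexture, uv: MTLTexture)? {
    //1. CMSampleBuffer -> CVPixelBuffer
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }

    func texture(plane: Int, format: MTLPixelFormat) -> MTLTexture? {
        let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, plane)
        let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, plane)
        var cvTexture: CVMetalTexture?
        //2. CVPixelBuffer -> CVMetalTexture through the shared cache
        let status = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                               textureCache,
                                                               pixelBuffer, nil,
                                                               format, width, height,
                                                               plane, &cvTexture)
        guard status == kCVReturnSuccess, let cvTexture = cvTexture else { return nil }
        //3. CVMetalTexture -> MTLTexture
        return CVMetalTextureGetTexture(cvTexture)
    }

    // Plane 0 is Y (single channel); plane 1 is interleaved UV (two channels)
    guard let y = texture(plane: 0, format: .r8Unorm),
          let uv = texture(plane: 1, format: .rg8Unorm) else { return nil }
    return (y, uv)
}
```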

4. Color conversion

  • Set up the texture sampler
  • Sample the textures to obtain the color values in YUV format
  • Convert YUV -> RGB
  • Build an RGBA value and return it
```cpp
// stage_in marks rasterized data (rasterization is a fixed vertex-processing
// step that the application layer cannot modify)
// texture2d marks texture data; ZFragmentTextureIndexTextureY and
// ZFragmentTextureIndexTextureUV are the texture indexes
// buffer marks buffer data; ZFragmentBufferIndexMatrix is the buffer index
fragment float4 fragmentShader(RasterizerData input [[stage_in]],
                               texture2d<float> textureY [[texture(ZFragmentTextureIndexTextureY)]],
                               texture2d<float> textureUV [[texture(ZFragmentTextureIndexTextureUV)]],
                               constant ZConvertMatrix *convertMatrix [[buffer(ZFragmentBufferIndexMatrix)]])
{
    //1. Texture sampler
    constexpr sampler textureSampler(mag_filter::linear, min_filter::linear);

    /*
     2. Sample the textures to get the YUV color values:
     textureY.sample(textureSampler, input.textureCoordinate).r   -> Y
     textureUV.sample(textureSampler, input.textureCoordinate).rg -> UV
     */
    float3 yuv = float3(textureY.sample(textureSampler, input.textureCoordinate).r,
                        textureUV.sample(textureSampler, input.textureCoordinate).rg);

    //3. YUV -> RGB
    float3 rgb = convertMatrix->matrix * (yuv + convertMatrix->offset);

    //4. Return the RGBA value
    return float4(rgb, 1.0);
}
```

Complete demo