The main contents of this section are:

  1. Static portrait data
  2. Static portrait segmentation data
  3. Saving static portrait data

Static portrait data

At WWDC 2018, with iOS 12, Apple introduced portrait features built on top of depth capture and opened up the portrait effects matte API. At present, portrait matte data can only be captured for still photos, not in real time like depth data.

First let’s talk about some of the caveats of the portrait API:

  • The system version needs to be iOS 12 or later, and the device currently needs to be an iPhone X or later.
  • A camera is required; it does not have to be a depth-capable camera, but use one whenever it is available.
  • Only still image capture is supported.
  • A person must be present in the frame.
  • The result is a matte image that can be interpreted as a grayscale mask.

Capturing a static portrait matte works the same way as capturing static depth data. In iOS 12, AVCapturePhotoOutput exposes several new portrait-related properties, as shown below:

  • First, use AVCapturePhotoOutput's isDepthDataDeliverySupported and isPortraitEffectsMatteDeliverySupported to check whether the device supports depth and portrait matte delivery, then set AVCapturePhotoOutput's isDepthDataDeliveryEnabled and isPortraitEffectsMatteDeliveryEnabled (a minimal configuration sketch follows this list);
  • A tip from experience: do this after session.addOutput(output) has been called, otherwise isPortraitEffectsMatteDeliverySupported may still be false;
  • Then, when capturing, set AVCapturePhotoSettings' isDepthDataDeliveryEnabled and isPortraitEffectsMatteDeliveryEnabled;
  • Finally, retrieve the matte data in photoOutput(_:didFinishProcessingPhoto:error:).
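
A minimal configuration sketch of the steps above, assuming session and photoOutput have already been created elsewhere and self acts as the AVCapturePhotoCaptureDelegate:

// Add the output before enabling delivery, otherwise the supported flags may still be false
session.addOutput(photoOutput)
// Enable depth and portrait matte delivery only when the device supports them
photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
photoOutput.isPortraitEffectsMatteDeliveryEnabled = photoOutput.isPortraitEffectsMatteDeliverySupported

// Mirror the same flags on the per-capture settings before triggering the capture
let captureSetting = AVCapturePhotoSettings()
captureSetting.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliveryEnabled
captureSetting.isPortraitEffectsMatteDeliveryEnabled = photoOutput.isPortraitEffectsMatteDeliveryEnabled
photoOutput.capturePhoto(with: captureSetting, delegate: self)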

Starting an operation such as capturing data with Capture has already been described in a number of similar scenarios, so I won’t go into detail here. Let’s focus on the acquisition of portrait data, sample code is as follows:

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    // Get the portrait matte data
    if var portraitEffectsMatte = photo.portraitEffectsMatte {
        // Read the orientation of the original image
        if let orientation = photo.metadata[String(kCGImagePropertyOrientation)] as? UInt32 {
            // Rotate the portrait matte to match the original image
            portraitEffectsMatte = portraitEffectsMatte.applyingExifOrientation(CGImagePropertyOrientation(rawValue: orientation)!)
        }
        // Get the underlying pixel buffer
        let portraitEffectsMattePixelBuffer = portraitEffectsMatte.mattingImage
    }
}

Through the above operations, we can get the portrait matte data.
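
As a small usage sketch (originalImage and background are assumed CIImage values, not part of the code above), the matte's pixel buffer can be wrapped in a CIImage and used as a grayscale mask, for example with a blend filter:

import CoreImage

// Wrap the matte pixel buffer in a CIImage; it behaves like a single-channel grayscale mask
let matteImage = CIImage(cvPixelBuffer: portraitEffectsMattePixelBuffer)
// The matte is smaller than the full-resolution photo, so in practice scale it up before blending
let blended = originalImage.applyingFilter("CIBlendWithMask",
                                           parameters: [kCIInputBackgroundImageKey: background,
                                                        kCIInputMaskImageKey: matteImage])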

For those who have played with depth maps, you can already get similar person data from the front-facing depth camera. So what are the advantages of portrait data captured with the new portrait API over portrait data derived from depth? Let's compare the two:

We can draw several conclusions from this:

  • The portrait matte has a much higher resolution than the depth map and retains more detail, thanks to on-device machine learning;
  • Because the portrait matte is produced by machine learning rather than depth hardware, devices that do not support depth capture can also enjoy portrait features.

Static portrait segmentation data

In iOS 13, Apple introduced a new capability to portrait mode: portrait segmentation. Compared with iOS 12, in addition to the overall matte, you can also get separate matte images for the skin area, the hair area, and the teeth area (the last one is black in the example because the subject does not show her teeth).

The portrait segmentation API has limitations similar to the portrait API:

  • The system version needs to be iOS 13 or later, and the device currently needs to be an iPhone XR or later.
  • A camera is required; it does not have to be a depth-capable camera, but use one whenever it is available.
  • Only still image capture is supported.
  • A person must be present in the frame.
  • The result is a set of matte images, one per segmentation type.

So how do you turn this on? A code example is as follows:

// Enable the semantic segmentation matte types supported by the output; currently only skin, hair, and teeth are supported
self.photoOutput.enabledSemanticSegmentationMatteTypes = self.photoOutput.availableSemanticSegmentationMatteTypes
// Set the same types on AVCapturePhotoSettings
captureSetting.enabledSemanticSegmentationMatteTypes = self.photoOutput.enabledSemanticSegmentationMatteTypes

Finally, retrieve the segmentation data in photoOutput(_:didFinishProcessingPhoto:error:):

func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    for ssmType in output.enabledSemanticSegmentationMatteTypes {
        // Each enabled type (skin, hair, teeth) delivers its own matte
        if let segmentationMatte = photo.semanticSegmentationMatte(for: ssmType) {
            let pixelBuffer = segmentationMatte.mattingImage
        }
    }
}
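
If the individual mattes need different handling, the matte type can be checked inside the loop; a small sketch under that assumption:

// Inside the loop above: route each matte by its type
switch ssmType {
case .skin:
    print("got skin matte")   // e.g. feed into a skin-smoothing pass
case .hair:
    print("got hair matte")
case .teeth:
    print("got teeth matte")
default:
    break
}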

Saving static portrait data

Static portrait data, like depth data, is stored as auxiliary information attached to the image, so the saving process is basically the same as for depth. Here I take the portrait matte as an example.

guard let destination = CGImageDestinationCreateWithData(incrementData, "public.jpeg" as CFString, 1, nil) else { return }
// Add the original image and its metadata
CGImageDestinationAddImage(destination, cgImage, metaDataDic as CFDictionary)
// Get the portrait matte's dedicated auxiliary data type and dictionary representation
var portraitAuxDataType: NSString?
let portraitAuxData = portraitEffectMatte.dictionaryRepresentation(forAuxiliaryDataType: &portraitAuxDataType)
// Attach the matte to the original image as auxiliary data
CGImageDestinationAddAuxiliaryDataInfo(destination, portraitAuxDataType! as CFString, portraitAuxData! as CFDictionary)
// Finish writing so the data is actually produced
CGImageDestinationFinalize(destination)
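
As a quick sanity check (a sketch, with savedData standing in for the finalized JPEG data produced above), the matte can be read back out of the saved image through ImageIO and rebuilt as an AVPortraitEffectsMatte:

import AVFoundation
import ImageIO

// savedData is assumed to be the Data produced by the destination above
if let source = CGImageSourceCreateWithData(savedData as CFData, nil),
   let auxInfo = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, kCGImageAuxiliaryDataTypePortraitEffectsMatte) as? [AnyHashable: Any],
   let matte = try? AVPortraitEffectsMatte(fromDictionaryRepresentation: auxInfo) {
    // Same mattingImage pixel buffer as on the capture side
    let pixelBuffer = matte.mattingImage
}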

Conclusion

This concludes the portrait summary for now. Since depth data was covered earlier, the portrait matte is relatively easy to pick up, and there are not many complex scenarios. I will continue to track new portrait-related technologies and keep this document updated. If anything in the article is inaccurate or inappropriate, corrections are welcome.