iOS Audio and Video (1) AVFoundation core classes

iOS Audio and Video (2) AVFoundation video capture

iOS Audio and Video (3) AVFoundation playback and recording

iOS Audio and Video (43) AVFoundation Audio Session

iOS Audio Queue Services for AVFoundation

iOS Audio and Video (45) HTTPS self-signed certificates for playback

iOS Audio and Video (46) Offline and online speech recognition solutions

  • The previous post in this series, iOS Audio and Video (1): AVFoundation core classes, gave a rough overview of some of AVFoundation's core classes; this post covers AVFoundation's video capture capabilities.
  • Demo projects: Swift video capture Demo, OC video capture camera Demo

1. Media capture concept

  • To understand media capture, you need to understand some basic concepts:
  • Capture session:

AVCaptureSession is an object that manages capture activity and coordinates the flow of data from input devices to capture outputs. It connects input and output resources, taking data streams from physical devices such as cameras and microphones and delivering them to one or more destinations. A session can additionally be configured with a session preset, which controls the format and quality of the captured data; the default value is AVCaptureSessionPresetHigh.

To perform real-time capture, you need to instantiate the AVCaptureSession object and add the appropriate inputs and outputs. The following code snippet demonstrates how to configure the capture device to record audio.

// Create the capture session.
let captureSession = AVCaptureSession()

// Find the default audio device.
guard let audioDevice = AVCaptureDevice.default(for: .audio) else { return }

do {
    // Wrap the audio device in a capture device input.
    let audioInput = try AVCaptureDeviceInput(device: audioDevice)
    // If the input can be added, add it to the session.
    if captureSession.canAddInput(audioInput) {
        captureSession.addInput(audioInput)
    }
} catch {
    // Configuration failed. Handle error.
}

You can call startRunning() to start the data flow from input to output, and stopRunning() to stop the flow.

Note: The startRunning() method is a blocking call and can take some time, so session setup and startup should be performed on a dedicated serial queue to avoid blocking the main queue (which keeps the UI responsive). See Apple's AVCam sample, an example implementation of building a camera application.

  • Capture device:

AVCaptureDevice represents a device, such as a camera or microphone, that provides input (audio or video) to a capture session and offers control over hardware-specific capture features. It defines a unified interface for physical devices along with a number of control methods. The default device of a given media type can be obtained as follows: self.activeVideoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

  1. An AVCaptureDevice object represents a physical capture device and the properties associated with that device. You can use the capture device to configure the properties of the underlying hardware. The capture device also provides input data (such as audio or video) to the AVCaptureSession object.
  • Capture device input:

AVCaptureDevice cannot be added to AVCaptureSession directly. It needs to be encapsulated as AVCaptureDeviceInput.

 self.captureVideoInput = [AVCaptureDeviceInput deviceInputWithDevice:self.activeVideoDevice error:&videoError];
    if (self.captureVideoInput) {
        if ([self.captureSession canAddInput:self.captureVideoInput]) {
            [self.captureSession addInput:self.captureVideoInput];
        }
    } else if (videoError) {
    }
  • Capture output:

AVCaptureOutput is an abstract base class that provides an output destination for a capture session's data streams. Several concrete subclasses extend it:

  1. AVCaptureStillImageOutput – still images (deprecated since iOS 10; use AVCapturePhotoOutput instead)
  2. AVCaptureMovieFileOutput – movie files
  3. AVCaptureAudioFileOutput – audio files
  4. AVCaptureAudioDataOutput – raw audio sample buffers
  5. AVCaptureVideoDataOutput – raw video sample buffers
  • Capture connection:

AVCaptureConnection: A connection between a specific pair of capture input and capture output objects in a capture session. AVCaptureConnection is used to determine which inputs produce video and which produce audio, with the ability to disable specific connections or access individual audio tracks.

  1. A capture input has one or more input ports (instances of AVCaptureInputPort), and a capture output can accept data from one or more sources (for example, an AVCaptureMovieFileOutput accepts both video and audio data). You can add an AVCaptureConnection instance to a session with addConnection: only when canAddConnection: returns YES. When you use addInput: or addOutput:, the session automatically forms connections between all compatible inputs and outputs; you only need to add connections manually when adding inputs or outputs without connections, as in the sketch below. You can also use connections to enable or disable the flow of data from a given input or to a given output.
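  • A rough sketch of wiring a connection manually (the output and port lookup below are illustrative, not taken from the demo project; captureVideoInput is assumed to be the camera input already added to the session):
AVCaptureMovieFileOutput *movieOutput = [[AVCaptureMovieFileOutput alloc] init];
// Add the output without letting the session form connections automatically.
[self.captureSession addOutputWithNoConnections:movieOutput];

// Find the video port of the existing camera input.
AVCaptureInputPort *videoPort = nil;
for (AVCaptureInputPort *port in self.captureVideoInput.ports) {
    if ([port.mediaType isEqualToString:AVMediaTypeVideo]) {
        videoPort = port;
        break;
    }
}

if (videoPort) {
    AVCaptureConnection *connection = [AVCaptureConnection connectionWithInputPorts:@[videoPort] output:movieOutput];
    // Only add the connection if the session accepts it.
    if ([self.captureSession canAddConnection:connection]) {
        [self.captureSession addConnection:connection];
    }
}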
  • Capture preview:

AVCaptureVideoPreviewLayer is a subclass of CALayer used to preview captured video data in real time.

2. Video capture example

  • The project code for this example can be downloaded here: OC video capture camera Demo
  • The project is written in Objective-C, and the main functionality is implemented in THCameraController, as shown below:
  • The main interface is declared in the header file THCameraController.h:

#import <AVFoundation/AVFoundation.h>

extern NSString *const THThumbnailCreatedNotification;

// 1. Error events that the delegate is asked to handle
@protocol THCameraControllerDelegate <NSObject>
- (void)deviceConfigurationFailedWithError:(NSError *)error;
- (void)mediaCaptureFailedWithError:(NSError *)error;
- (void)assetLibraryWriteFailedWithError:(NSError *)error;
@end

@interface THCameraController : NSObject

@property (weak, nonatomic) id<THCameraControllerDelegate> delegate;
@property (nonatomic, strong, readonly) AVCaptureSession *captureSession;

// 2. Set up and configure the video capture session
- (BOOL)setupSession:(NSError **)error;
- (void)startSession;
- (void)stopSession;

// 3. Switch between different cameras
- (BOOL)switchCameras;
- (BOOL)canSwitchCameras;

@property (nonatomic, readonly) NSUInteger cameraCount;
@property (nonatomic, readonly) BOOL cameraHasTorch;            // Torch (flashlight)
@property (nonatomic, readonly) BOOL cameraHasFlash;            // Flash
@property (nonatomic, readonly) BOOL cameraSupportsTapToFocus;  // Tap to focus
@property (nonatomic, readonly) BOOL cameraSupportsTapToExpose; // Tap to expose
@property (nonatomic) AVCaptureTorchMode torchMode;             // Torch mode
@property (nonatomic) AVCaptureFlashMode flashMode;             // Flash mode

// 4. Focus, exposure, and reset methods
- (void)focusAtPoint:(CGPoint)point;
- (void)exposeAtPoint:(CGPoint)point;
- (void)resetFocusAndExposureModes;
- (void)captureStillImage;   // Capture a still image
- (void)startRecording;      // Start recording
- (void)stopRecording;       // Stop recording
- (BOOL)isRecording;         // Recording state
- (CMTime)recordedDuration;  // Recorded duration

@end

  • We also need to request access permissions; if camera and microphone permissions are not granted, creating captureVideoInput will fail with an error.
// Check the AVAuthorization status.
// Pass in the AVMediaType to check: AVMediaTypeVideo or AVMediaTypeAudio.
/// Returns whether permission is currently available.
- (BOOL)ifAVAuthorizationValid:(NSString *)targetAVMediaType grantedCallback:(void (^)(void))grantedCallback
{
    NSString *mediaType = targetAVMediaType;
    BOOL result = NO;
    if ([AVCaptureDevice respondsToSelector:@selector(authorizationStatusForMediaType:)]) {
        AVAuthorizationStatus authStatus = [AVCaptureDevice authorizationStatusForMediaType:mediaType];
        switch (authStatus) {
            case AVAuthorizationStatusNotDetermined: { // Authorization has not been requested yet
                [AVCaptureDevice requestAccessForMediaType:targetAVMediaType completionHandler:^(BOOL granted) {
                    dispatch_async(dispatch_get_main_queue(), ^{
                        if (granted) {
                            grantedCallback();
                        }
                    });
                }];
                break;
            }
            case AVAuthorizationStatusDenied: { // Explicitly denied
                if ([mediaType isEqualToString:AVMediaTypeVideo]) {
                    // Prompt the user to grant camera permission
                    [METSettingPermissionAlertView showAlertViewWithPermissionType:METSettingPermissionTypeCamera];
                } else if ([mediaType isEqualToString:AVMediaTypeAudio]) {
                    // Prompt the user to grant microphone permission
                    [METSettingPermissionAlertView showAlertViewWithPermissionType:METSettingPermissionTypeMicrophone];
                }
                break;
            }
            case AVAuthorizationStatusRestricted: { // Access is restricted and cannot be changed
                break;
            }
            case AVAuthorizationStatusAuthorized: { // Authorized
                result = YES;
                break;
            }
            default:
                break;
        }
    }
    return result;
}

2.1 Creating a Preview View

  • You can add an AVCaptureVideoPreviewLayer object directly to a view's layer:
    self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] init];
    [self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
    [self.previewLayer setSession:self.cameraHelper.captureSession];
    self.previewLayer.frame = CGRectMake(0, 0, SCREEN_WIDTH, SCREEN_HEIGHT - 50);
    [self.previewImageView.layer addSublayer:self.previewLayer];
  • We can also make a view's backing CALayer an AVCaptureVideoPreviewLayer by overriding the view's layerClass class method:
+ (Class)layerClass {
    return [AVCaptureVideoPreviewLayer class];
}

- (AVCaptureSession*)session {
    return [(AVCaptureVideoPreviewLayer*)self.layer session];
}

- (void)setSession:(AVCaptureSession *)session {
    [(AVCaptureVideoPreviewLayer*)self.layer setSession:session];
}
  • AVCaptureVideoPreviewLayer defines two methods for converting between the screen (layer) coordinate system and the device coordinate system, where the device coordinate system runs from (0, 0) at the top-left corner to (1, 1) at the bottom-right corner.
  1. - (CGPoint)captureDevicePointOfInterestForPoint:(CGPoint)pointInLayer converts a point from screen coordinates to device coordinates
  2. - (CGPoint)pointForCaptureDevicePointOfInterest:(CGPoint)captureDevicePointOfInterest converts a point from device coordinates to screen coordinates
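  • For example, a tap on the preview can be converted to a device point of interest before focusing. This is only a sketch: handleTap: is a hypothetical gesture handler, and it assumes the previewLayer above plus the cameraHelper's focusAtPoint: method shown in section 2.5.1.
- (void)handleTap:(UITapGestureRecognizer *)recognizer {
    // The tap location in the preview layer's (screen) coordinate space.
    CGPoint layerPoint = [recognizer locationInView:recognizer.view];
    // Convert to the device coordinate space expected by focusPointOfInterest (0,0 top-left to 1,1 bottom-right).
    CGPoint devicePoint = [self.previewLayer captureDevicePointOfInterestForPoint:layerPoint];
    [self.cameraHelper focusAtPoint:devicePoint];
}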

2.2 Setting Up a Capture Session

  • The first step is to initialize the capture session:
    self.captureSession = [[AVCaptureSession alloc]init];
    [self.captureSession setSessionPreset:(self.isVideoMode)?AVCaptureSessionPreset1280x720:AVCaptureSessionPresetPhoto];
  • Select different presets depending on whether you want to take a video or a photo, and then set the session input:
- (void)configSessionInput
{
    // Camera input
    NSError *videoError = [[NSError alloc] init];
    self.activeVideoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    self.flashMode = self.activeVideoDevice.flashMode;
    self.captureVideoInput = [AVCaptureDeviceInput deviceInputWithDevice:self.activeVideoDevice error:&videoError];
    if (self.captureVideoInput) {
        if ([self.captureSession canAddInput:self.captureVideoInput]) {
            [self.captureSession addInput:self.captureVideoInput];
        }
    } else if (videoError) {
    }

    if (self.isVideoMode) {
        // Microphone input
        NSError *audioError = [[NSError alloc] init];
        AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio] error:&audioError];
        if (audioInput) {
            if ([self.captureSession canAddInput:audioInput]) {
                [self.captureSession addInput:audioInput];
            }
        } else if (audioError) {
        }
    }
}
  • Encapsulate both camera and microphone devices as AVCaptureDeviceInput and add them to the session. Then configure session output:
- (void)configSessionOutput
{
    if (self.isVideoMode) {
        // Video output
        self.movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
        if ([self.captureSession canAddOutput:self.movieFileOutput]) {
            [self.captureSession addOutput:self.movieFileOutput];
        }
    } else {
        // Image output
        self.imageOutput = [[AVCaptureStillImageOutput alloc] init];
        // Configure the outputSettings property to capture images in JPEG format
        self.imageOutput.outputSettings = @{AVVideoCodecKey: AVVideoCodecJPEG};
        if ([self.captureSession canAddOutput:self.imageOutput]) {
            [self.captureSession addOutput:self.imageOutput];
        }
    }
}
  • Alternatively, the entire capture session can be set up in a single method:
- (BOOL)setupSession:(NSError **)error {

    
    // Create a capture session. AVCaptureSession is a central hub for capturing scenes
    self.captureSession = [[AVCaptureSession alloc]init];
    
    /* AVCaptureSessionPresetHigh AVCaptureSessionPresetMedium AVCaptureSessionPresetLow AVCaptureSessionPreset640x480 AVCaptureSessionPreset1280x720 AVCaptureSessionPresetPhoto */
    // Set the image resolution
    self.captureSession.sessionPreset = AVCaptureSessionPresetHigh;
    
    // Get the default video capture device iOS and return to the rear camera
    AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    
    // Encapsulate the capture device as AVCaptureDeviceInput
    // Note: To add a capture device for a session, the device must be wrapped as an AVCaptureDeviceInput object
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:error];
    
    // Determine if videoInput is valid
    if (videoInput)
    {
        //canAddInput: tests whether it can be added to a session
        if ([self.captureSession canAddInput:videoInput])
        {
            // Add videoInput to captureSession
            [self.captureSession addInput:videoInput];
            self.activeVideoInput = videoInput;
        }
    }
    else
    {
        return NO;
    }
    
    // Selecting the default audio capture device returns a built-in microphone
    AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    
    // Create a capture device input for the device
    AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:error];
   
    // Check whether the audioInput is valid
    if (audioInput) {
        
        //canAddInput: tests whether it can be added to a session
        if ([self.captureSession canAddInput:audioInput])
        {
            // Add audioInput to captureSession
            [self.captureSession addInput:audioInput];
        }
    }
    else
    {
        return NO;
    }

    // Create an AVCaptureStillImageOutput instance to capture still images from the camera
    self.imageOutput = [[AVCaptureStillImageOutput alloc]init];
    
    // Configure dictionary: you want to capture images in JPEG format
    self.imageOutput.outputSettings = @{AVVideoCodecKey:AVVideoCodecJPEG};
    
    // The output connection determines whether it is available and adds it to the output connection if it is available
    if ([self.captureSession canAddOutput:self.imageOutput])
    {
        [self.captureSession addOutput:self.imageOutput];
        
    }
    
    
    // Create an AVCaptureMovieFileOutput instance to record the Quick Time movie to the file system
    self.movieOutput = [[AVCaptureMovieFileOutput alloc]init];
    
    // The output connection determines whether it is available and adds it to the output connection if it is available
    if ([self.captureSession canAddOutput:self.movieOutput])
    {
        [self.captureSession addOutput:self.movieOutput];
    }
    
    
    self.videoQueue = dispatch_queue_create("com.kongyulu.VideoQueue", NULL);
    
    return YES;
}

2.3 Starting and Stopping a Session

  • A session can be started and stopped during a view controller's lifetime. Since starting it is a time-consuming synchronous operation, it is recommended to perform it asynchronously on a background queue, as follows:
- (void)startSession {

    // Check whether the session is already running
    if (![self.captureSession isRunning])
    {
        // startRunning is a synchronous call that takes some time, so dispatch it asynchronously
        dispatch_async(self.videoQueue, ^{
            [self.captureSession startRunning];
        });
    }
}

- (void)stopSession {

    // Check whether the session is running
    if ([self.captureSession isRunning])
    {
        // Stop it asynchronously as well
        dispatch_async(self.videoQueue, ^{
            [self.captureSession stopRunning];
        });
    }
}

2.4 Switching Cameras

  • The AVCaptureDevicePosition enumeration identifies the front and rear cameras on iOS devices:
typedef NS_ENUM(NSInteger, AVCaptureDevicePosition) {
    AVCaptureDevicePositionUnspecified = 0, // Unknown
    AVCaptureDevicePositionBack        = 1, // Rear camera
    AVCaptureDevicePositionFront       = 2, // Front-facing camera
};
  • Next, get the currently active device and the currently inactive device:
- (AVCaptureDevice *)activeCamera {
    // Returns the device property of the camera corresponding to the current capture session
    return self.activeVideoInput.device;
}

// Returns the currently inactive camera
- (AVCaptureDevice *)inactiveCamera {

    // Get by looking for the reverse camera of the currently active camera, return nil if the device has only 1 camera
       AVCaptureDevice *device = nil;
      if (self.cameraCount > 1)
      {
          if ([self activeCamera].position == AVCaptureDevicePositionBack) {
               device = [self cameraWithPosition:AVCaptureDevicePositionFront];
         }else
         {
             device = [self cameraWithPosition:AVCaptureDevicePositionBack];
          }
      }
    return device;
}

  • Check whether more than one camera is available
// Check whether more than one camera is available
- (BOOL)canSwitchCameras {
    return self.cameraCount > 1;
}

  • Number of video capture devices available:
// The number of available video capture devices
- (NSUInteger)cameraCount {
     return [[AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo] count];
}
  • A camera at a specific position can then be found by traversing the available video devices:
#pragma mark - Device Configuration

// Configure camera support: return the camera at the specified position
- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position {
    
    // Get the available video device
    NSArray *devicess = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    
    // Traverse the available video devices and return the one matching the position argument
    for (AVCaptureDevice *device in devicess)
    {
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}

  • Before switching cameras, first check whether switching is possible:
// Switch the camera
- (BOOL)switchCameras {

    // Check whether there are multiple cameras
    if (![self canSwitchCameras])
    {
        return NO;
    }
    
    // Get the reverse device of the current device
    NSError *error;
    AVCaptureDevice *videoDevice = [self inactiveCamera];
    
    // Encapsulate the input device as AVCaptureDeviceInput
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
    
    // Determine if videoInput is nil
    if (videoInput)
    {
        // Mark the start of the configuration change
        [self.captureSession beginConfiguration];
        
        // Remove the original capture input device from the capture session
        [self.captureSession removeInput:self.activeVideoInput];
        
        // Check whether the new device can be added
        if ([self.captureSession canAddInput:videoInput])
        {
            // Use videoInput as the new video capture device
            [self.captureSession addInput:videoInput];
            
            // Change the get device to videoInput
            self.activeVideoInput = videoInput;
        }else
        {
            // If a new device cannot be added. The original video capture device is re-added to the capture session
            [self.captureSession addInput:self.activeVideoInput];
        }
        
        // After the configuration is complete, AVCaptureSession commitConfiguration will batch all changes together.
        [self.captureSession commitConfiguration];
    }else
    {
        // Failed to create AVCaptureDeviceInput; notify the delegate to handle the error
        [self.delegate deviceConfigurationFailedWithError:error];
        return NO;
    }
    
    return YES;
}

Note:

  1. AVCaptureDevice defines a number of ways for developers to control the camera on iOS devices: you can independently adjust and lock the camera's focus, exposure, and white balance. Focus and exposure can be set based on specific points of interest, which is what enables tap-to-focus and tap-to-expose in an app. It also lets you control the device's LED as a camera flash or as a torch (flashlight).
  2. Whenever you modify the camera device, you must first test whether the modification is supported by that device; not all cameras support all features. The front-facing camera, for example, does not support focusing because it is usually within arm's length of the subject, whereas most rear cameras support full focus capability. Attempting to apply an unsupported action throws an exception and crashes, so always check for support before modifying the camera device.
  • After obtaining the corresponding device, you can encapsulate it as an AVCaptureInput object and then configure it:

// beginConfiguration and commitConfiguration make the modifications atomic, keeping the running session in a consistent state.
            [self.captureSession beginConfiguration]; // Start configuring the new video input
            [self.captureSession removeInput:self.captureVideoInput]; // Remove the old input before adding the new one
            if ([self.captureSession canAddInput:newInput]) {
                [self.captureSession addInput:newInput];
                self.activeVideoDevice = newActiveDevice;
                self.captureVideoInput = newInput;
            } else {
                [self.captureSession addInput:self.captureVideoInput];
            }
            [self.captureSession commitConfiguration];

2.5 Adjusting Focus, Exposure, Flash and Torch Modes

2.5.1 Focus

  • For focusing, isFocusPointOfInterestSupported is used to determine whether the device supports a focus point of interest, and isFocusModeSupported: determines whether a particular focus mode (such as AVCaptureFocusModeAutoFocus) is supported; then the focus can be set. The code is as follows:
#pragma mark - Focus Methods

// Tap-to-focus implementation
- (BOOL)cameraSupportsTapToFocus {
    
    // Ask whether the active camera supports point of interest focusing
    return [[self activeCamera]isFocusPointOfInterestSupported];
}

- (void)focusAtPoint:(CGPoint)point {
    
    AVCaptureDevice *device = [self activeCamera];
    
    // Whether point of interest focus is supported & whether auto focus mode is supported
    if (device.isFocusPointOfInterestSupported && [device isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {
        
        NSError *error;
        // Lock the device ready for configuration, if the lock is obtained
        if ([device lockForConfiguration:&error]) {
            
            // Set the focusPointOfInterest property to CGPoint
            device.focusPointOfInterest = point;
            
            // Set focusMode to AVCaptureFocusModeAutoFocus
            device.focusMode = AVCaptureFocusModeAutoFocus;
            
            // Release the lock
            [device unlockForConfiguration];
        } else {
            // Return the error to the error-handling delegate
            [self.delegate deviceConfigurationFailedWithError:error];
        }
    }
}

2.5.2 Exposure

  • Ask if the device supports exposure to a point of interest
- (BOOL)cameraSupportsTapToExpose {
    
    // Ask whether the device supports exposure of a point of interest
    return [[self activeCamera] isExposurePointOfInterestSupported];
}
  • Exposure and focus are very similar, the core method is as follows:
static const NSString *THCameraAdjustingExposureContext;

- (void)exposeAtPoint:(CGPoint)point {

    AVCaptureDevice *device = [self activeCamera];
    
    AVCaptureExposureMode exposureMode =AVCaptureExposureModeContinuousAutoExposure;
    
    // Determine whether AVCaptureExposureModeContinuousAutoExposure is supported
    if (device.isExposurePointOfInterestSupported && [device isExposureModeSupported:exposureMode]) {
        
        [device isExposureModeSupported:exposureMode];
        
        NSError *error;
        
        // Lock the device and prepare for configuration
        if ([device lockForConfiguration:&error])
        {
            // Set the expected value
            device.exposurePointOfInterest = point;
            device.exposureMode = exposureMode;
            
            // Check whether the device supports the lock exposure mode.
            if ([device isExposureModeSupported:AVCaptureExposureModeLocked]) {
                
                // Use KVO to observe the device's adjustingExposure property, so exposure can be locked once the adjustment finishes
                [device addObserver:self forKeyPath:@"adjustingExposure" options:NSKeyValueObservingOptionNew context:&THCameraAdjustingExposureContext];
                
            }
            
            // Release the lock
            [device unlockForConfiguration];
            
        } else {
            [self.delegate deviceConfigurationFailedWithError:error];
        }
    }
}

2.5.3 Flash

  • With focusing handled, we can just as easily adjust the flash and torch modes.
  • Flash and Torch are two different modes, defined as follows:
typedef NS_ENUM(NSInteger, AVCaptureFlashMode) {
    AVCaptureFlashModeOff  = 0,
    AVCaptureFlashModeOn   = 1,
    AVCaptureFlashModeAuto = 2,
};

typedef NS_ENUM(NSInteger, AVCaptureTorchMode) {
    AVCaptureTorchModeOff  = 0,
    AVCaptureTorchModeOn   = 1,
    AVCaptureTorchModeAuto = 2,
};
  • Usually the flash is used when taking pictures and the torch when recording video. The configuration code is as follows:
  • To determine whether there is a flash:
// Check whether there is a flash
- (BOOL)cameraHasFlash {
    return [[self activeCamera] hasFlash];
}
// Flash mode
- (AVCaptureFlashMode)flashMode {
    return [[self activeCamera] flashMode];
}

// Set the flash
- (void)setFlashMode:(AVCaptureFlashMode)flashMode {

    // Get the active device
    AVCaptureDevice *device = [self activeCamera];
    
    // Determine whether flash mode is supported
    if ([device isFlashModeSupported:flashMode]) {
    
        // If supported, the device is locked
        NSError *error;
        if ([device lockForConfiguration:&error]) {

            // Change the flash mode
            device.flashMode = flashMode;
            // Unlock the device after the modification is complete
            [device unlockForConfiguration];
            
        } else {
            [self.delegate deviceConfigurationFailedWithError:error];
        }
    }
}

2.5.4 Torch (Flashlight)

  • Whether flashlight is supported:
// Whether flashlight is supported
- (BOOL)cameraHasTorch {

    return [[self activeCamera]hasTorch];
}
  • Get the torch mode and set whether to turn on the torch:
// Flashlight mode
- (AVCaptureTorchMode)torchMode {

    return [[self activeCamera]torchMode];
}


// Set whether to turn on the flashlight
- (void)setTorchMode:(AVCaptureTorchMode)torchMode {

    AVCaptureDevice *device = [self activeCamera];
    
    if ([device isTorchModeSupported:torchMode]) {
        
        NSError *error;
        if ([device lockForConfiguration:&error]) {
            
            device.torchMode = torchMode;
            [device unlockForConfiguration];
        } else {
            [self.delegate deviceConfigurationFailedWithError:error];
        }
    }
}

2.6 Taking Still Pictures

  • When setting up the capture session we added an AVCaptureStillImageOutput (note: AVCaptureStillImageOutput is deprecated since iOS 10; use AVCapturePhotoOutput instead), which can be used to take a still image. The following code:
    AVCaptureConnection *connection = [self.cameraHelper.imageOutput connectionWithMediaType:AVMediaTypeVideo];
    if ([connection isVideoOrientationSupported]) {
        [connection setVideoOrientation:self.cameraHelper.videoOrientation];
    }
    if (!connection.enabled || !connection.isActive) { // The connection is not available
        // Handle illegal situations
        return;
    }
  • After obtaining an AVCaptureConnection object from the AVCaptureStillImageOutput instance, you need to set the connection's video orientation. There are two ways to obtain the orientation value:
  1. Monitor the gravity (motion) sensor and derive the orientation from it
  2. Obtain the value from UIDevice
  • Modify orientation by listening to gravity sensors:
    // Monitor the gravity sensor and adjust the orientation
    CMMotionManager *motionManager = [[CMMotionManager alloc] init];
    motionManager.deviceMotionUpdateInterval = 1/15.0;
    if (motionManager.deviceMotionAvailable) {
        [motionManager startDeviceMotionUpdatesToQueue:[NSOperationQueue currentQueue]
                                           withHandler:^(CMDeviceMotion *motion, NSError *error) {
            double x = motion.gravity.x;
            double y = motion.gravity.y;
            if (fabs(y) >= fabs(x)) { // The y component is larger than the x component
                if (y >= 0) { // Top facing down
                    self.videoOrientation = AVCaptureVideoOrientationPortraitUpsideDown; // UIDeviceOrientationPortraitUpsideDown
                } else { // Top facing up
                    self.videoOrientation = AVCaptureVideoOrientationPortrait; // UIDeviceOrientationPortrait
                }
            } else {
                if (x >= 0) { // Top facing right
                    self.videoOrientation = AVCaptureVideoOrientationLandscapeLeft; // UIDeviceOrientationLandscapeRight
                } else { // Top facing left
                    self.videoOrientation = AVCaptureVideoOrientationLandscapeRight; // UIDeviceOrientationLandscapeLeft
                }
            }
        }];
        self.motionManager = motionManager;
    } else {
        self.videoOrientation = AVCaptureVideoOrientationPortrait;
    }
  • Then we call the capture method to obtain a CMSampleBufferRef (a Core Foundation-style object defined by Core Media), which can be converted to NSData with the AVCaptureStillImageOutput class method jpegStillImageNSDataRepresentation:. The following code:
    @weakify(self)
    [self.cameraHelper.imageOutput captureStillImageAsynchronouslyFromConnection:connection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        @strongify(self)
        if (!error && imageDataSampleBuffer) {
            NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];
            if (!imageData) { return; }
            UIImage *image = [UIImage imageWithData:imageData];
            if (!image) { return; }
            // Use the image here (e.g. save it or display it)
        }
    }];
  • Finally, we can save the obtained image. Note that the Assets Library framework has been superseded by PHPhotoLibrary since iOS 8, so PHPhotoLibrary is used here to save the image. The code is as follows:
    [[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
        PHAssetChangeRequest *changeRequest = [PHAssetChangeRequest creationRequestForAssetFromImage:targetImage];
        NSString *imageIdentifier = changeRequest.placeholderForCreatedAsset.localIdentifier;
    } completionHandler:^( BOOL success, NSError * _Nullable error ) {
    }];
  • We can later find this image in the photo library using the imageIdentifier (local identifier) recorded when saving it, as sketched below.
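  • A minimal sketch of looking the asset up again by its local identifier (assuming imageIdentifier was captured from the change request above; the target size and delivery mode are arbitrary choices, not from the demo project):
    // Fetch the previously saved asset by its local identifier and request a thumbnail-sized image for it.
    PHFetchResult<PHAsset *> *result = [PHAsset fetchAssetsWithLocalIdentifiers:@[imageIdentifier] options:nil];
    PHAsset *asset = result.firstObject;
    if (asset) {
        PHImageRequestOptions *options = [[PHImageRequestOptions alloc] init];
        options.deliveryMode = PHImageRequestOptionsDeliveryModeHighQualityFormat;
        [[PHImageManager defaultManager] requestImageForAsset:asset
                                                    targetSize:CGSizeMake(200, 200)
                                                   contentMode:PHImageContentModeAspectFill
                                                       options:options
                                                 resultHandler:^(UIImage * _Nullable image, NSDictionary * _Nullable info) {
            // Use the image here, e.g. show it as a thumbnail.
        }];
    }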

  • Complete static image capture code is as follows:

#pragma mark - Image Capture Methods

// Take a still picture
/* AVCaptureStillImageOutput is a subclass of AVCaptureOutput, used to capture still images */
- (void)captureStillImage {
    
    // Get the connection
    AVCaptureConnection *connection = [self.imageOutput connectionWithMediaType:AVMediaTypeVideo];
    
    // The program only supports portrait, but if the user takes a picture horizontally, the direction of the resulting photo needs to be adjusted
    // Check whether the video direction is supported
    if (connection.isVideoOrientationSupported) {
        
        // Get the direction value
        connection.videoOrientation = [self currentVideoOrientation];
    }
    
    // Define a handler block that returns 1 image's NSData
    id handler = ^(CMSampleBufferRef sampleBuffer,NSError *error)
                {
                    if (sampleBuffer != NULL) {
                        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer];
                        UIImage *image = [[UIImage alloc]initWithData:imageData];
                        
                        // Key: After capturing the image successfully, pass it on
                        [self writeImageToAssetsLibrary:image];
                    }else
                    {
                        NSLog(@"NULL sampleBuffer:%@",[error localizedDescription]); }};// Capture a still image
    [self.imageOutput captureStillImageAsynchronouslyFromConnection:connection completionHandler:handler];
    
    
    
}

// Get the direction value
- (AVCaptureVideoOrientation)currentVideoOrientation {
    
    AVCaptureVideoOrientation orientation;
    
    // Get the orientation of UIDevice
    switch ([UIDevice currentDevice].orientation) {
        case UIDeviceOrientationPortrait:
            orientation = AVCaptureVideoOrientationPortrait;
            break;
        case UIDeviceOrientationLandscapeRight:
            orientation = AVCaptureVideoOrientationLandscapeLeft;
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            orientation = AVCaptureVideoOrientationPortraitUpsideDown;
            break;
        default:
            orientation = AVCaptureVideoOrientationLandscapeRight;
            break;
    }
    
    return orientation;

}


/* The Assets Library framework allows developers to access the iOS photo library through code. Note: this accesses the photo album, so the corresponding usage-description key must be added to Info.plist, otherwise the app will crash */

- (void)writeImageToAssetsLibrary:(UIImage *)image {

    // Create an instance of ALAssetsLibrary
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc]init];
    
    // parameter 1: image (the parameter is CGImageRef so image.cgimage)
    // Parameter 2: The direction parameter is converted to NSUInteger
    // Parameter 3: Write success or failure
    [library writeImageToSavedPhotosAlbum:image.CGImage
                             orientation:(NSUInteger)image.imageOrientation
                         completionBlock:^(NSURL *assetURL, NSError *error) {
                             // On success, post a capture-image notification; the thumbnail is drawn in the lower-left corner of the app
                             if (!error) {
                                 [self postThumbnailNotifification:image];
                             } else {
                                 // On failure, print an error message
                                 id message = [error localizedDescription];
                                 NSLog(@"%@", message);
                             }
                         }];
}

// Send a thumbnail notification
- (void)postThumbnailNotifification:(UIImage *)image {
    
    // return to the main queue
    dispatch_async(dispatch_get_main_queue(), ^{
        // Send the request
        NSNotificationCenter *nc = [NSNotificationCenter defaultCenter];
        [nc postNotificationName:THThumbnailCreatedNotification object:image];
    });
}


2.7 Video Capture

  • For QuickTime movies, the metadata is at the beginning of the movie file. This helps the video player quickly read the header file to determine the content, structure, and sample location, but the recording process requires that all samples are captured before the header data is created and attached to the end of the file. This way, if a crash or interruption occurs while recording, the movie header cannot be created, resulting in an unreadable file on disk.

  • Therefore, AVFoundation’s AVCaptureMovieFileOutput class provides segmentation capture capability, generating minimal header information at the beginning of recording, and creating headers again at a certain interval during recording, thus gradually completing creation. By default, a fragment is written every 10s, which can be modified via the movieFragmentInterval property.
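  • For example, the fragment interval can be shortened before recording starts. This is only a sketch: the 5-second value is an arbitrary choice, and movieFileOutput is assumed to be the AVCaptureMovieFileOutput configured earlier.
    // Write a movie fragment every 5 seconds instead of the default 10 seconds.
    self.movieFileOutput.movieFragmentInterval = CMTimeMakeWithSeconds(5, 600);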

  • First, start video shooting:

    AVCaptureConnection *videoConnection = [self.cameraHelper.movieFileOutput connectionWithMediaType:AVMediaTypeVideo];
    if ([videoConnection isVideoOrientationSupported]) {
        [videoConnection setVideoOrientation:self.cameraHelper.videoOrientation];
    }
    
    if ([videoConnection isVideoStabilizationSupported]) {
        [videoConnection setPreferredVideoStabilizationMode:AVCaptureVideoStabilizationModeAuto];
    }
    
    [videoConnection setVideoScaleAndCropFactor:1.0];
    if (![self.cameraHelper.movieFileOutput isRecording] && videoConnection.isActive && videoConnection.isEnabled) {
        // The video connection is available
        self.countTimer = [NSTimer scheduledTimerWithTimeInterval:1 target:self selector:@selector(refreshTimeLabel) userInfo:nil repeats:YES];
        NSString *urlString = [NSTemporaryDirectory() stringByAppendingString:[NSString stringWithFormat:@"%.0f.mov", [[NSDate date] timeIntervalSince1970] * 1000]];
        NSURL *url = [NSURL fileURLWithPath:urlString];
        [self.cameraHelper.movieFileOutput startRecordingToOutputFileURL:url recordingDelegate:self];
        [self.captureButton setTitle:@"End" forState:UIControlStateNormal];
    } else {
    }
  • Setting preferredVideoStabilizationMode improves the stability and quality of the recorded video, but the stabilization effect is only visible in the recorded footage; it cannot be perceived in the live preview.
  • We write the video to a temporary file first. When recording ends, the AVCaptureFileOutputRecordingDelegate method - (void)captureOutput:(AVCaptureFileOutput *)captureOutput didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray *)connections error:(NSError *)error is called, where we can save the video and generate a video thumbnail.
- (void)saveVideo:(NSURL *)videoURL
{
    __block NSString *imageIdentifier;
    @weakify(self)
    [[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
        // Save the video
        PHAssetChangeRequest *changeRequest = [PHAssetChangeRequest creationRequestForAssetFromVideoAtFileURL:videoURL];
        imageIdentifier = changeRequest.placeholderForCreatedAsset.localIdentifier;
    } completionHandler:^( BOOL success, NSError * _Nullable error ) {
        @strongify(self)
        dispatch_async(dispatch_get_main_queue(), ^{
            @strongify(self)
            [self resetTimeCounter];
            if (!success) {
                // Error handling
            } else {
                PHAsset *asset = [PHAsset fetchAssetsWithLocalIdentifiers:@[imageIdentifier] options:nil].firstObject;
                if (asset && asset.mediaType == PHAssetMediaTypeVideo) {
                    PHVideoRequestOptions *options = [[PHVideoRequestOptions alloc] init];
                    options.version = PHImageRequestOptionsVersionCurrent;
                    options.deliveryMode = PHVideoRequestOptionsDeliveryModeAutomatic;
                    [[PHImageManager defaultManager] requestAVAssetForVideo:asset options:options resultHandler:^(AVAsset * _Nullable obj, AVAudioMix * _Nullable audioMix, NSDictionary * _Nullable info) {
                        @strongify(self)
                        [self resolveAVAsset:obj identifier:asset.localIdentifier];
                    }];
                }
            }
        });
    }];
}

- (void)resolveAVAsset:(AVAsset *)asset identifier:(NSString *)identifier
{
    if (!asset) {
        return;
    }
    if (![asset isKindOfClass:[AVURLAsset class]]) {
        return;
    }
    AVURLAsset *urlAsset = (AVURLAsset *)asset;
    NSURL *url = urlAsset.URL;
    NSData *data = [NSData dataWithContentsOfURL:url];
    
    AVAssetImageGenerator *generator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
    generator.appliesPreferredTrackTransform = YES; // Consider video orientation changes when capturing thumbnails to avoid wrong thumbnail orientation
    CMTime snaptime = kCMTimeZero;
    CGImageRef cgImageRef = [generator copyCGImageAtTime:snaptime actualTime:NULL error:nil];
    UIImage *assetImage = [UIImage imageWithCGImage:cgImageRef];
    CGImageRelease(cgImageRef);
}
  • Let's walk through the video capture process step by step.
  • (1) Check the recording state
// Determine whether to record the status
- (BOOL)isRecording {

    return self.movieOutput.isRecording;
}
  • (2) Start recording
// Start recording
- (void)startRecording {

    if (![self isRecording]) {
        
        // Get the current video capture connection information, used to capture video data to configure some core properties
        AVCaptureConnection * videoConnection = [self.movieOutput connectionWithMediaType:AVMediaTypeVideo];
        
        // Determine whether the videoOrientation property is supported.
        if([videoConnection isVideoOrientationSupported])
        {
            // Change the direction of the current video
            videoConnection.videoOrientation = [self currentVideoOrientation];
            
        }
        
        // Determine whether to support video stability can significantly improve the quality of video. Will only be involved in recording video files
        if([videoConnection isVideoStabilizationSupported])
        {
            videoConnection.enablesVideoStabilizationWhenAvailable = YES;
        }
        
        AVCaptureDevice *device = [self activeCamera];
        
        // Smooth autofocus mode slows down the lens's focusing speed; without it the camera tries to refocus quickly whenever the scene moves, which can look jarring in recorded video.
        if (device.isSmoothAutoFocusEnabled) {
            NSError *error;
            if ([device lockForConfiguration:&error]) {
                
                device.smoothAutoFocusEnabled = YES;
                [device unlockForConfiguration];
            } else {
                [self.delegate deviceConfigurationFailedWithError:error];
            }
        }

        // Find a unique file system URL to write the captured video to
        self.outputURL = [self uniqueURL];
        
        // Start recording. Parameter 1: the output file URL. Parameter 2: the recording delegate.
        [self.movieOutput startRecordingToOutputFileURL:self.outputURL recordingDelegate:self];
    }
}

- (CMTime)recordedDuration {
    return self.movieOutput.recordedDuration;
}


// Write the unique file system URL of the video
- (NSURL *)uniqueURL {

    NSFileManager *fileManager = [NSFileManager defaultManager];
    
    // temporaryDirectoryWithTemplateString creates a uniquely named directory for the file to be written into
    NSString *dirPath = [fileManager temporaryDirectoryWithTemplateString:@"kamera.XXXXXX"];
    
    if (dirPath) {
        NSString *filePath = [dirPath stringByAppendingPathComponent:@"kamera_movie.mov"];
        return  [NSURL fileURLWithPath:filePath];
    }
    return nil;
}
  • (3) Stop recording
// Stop recording
- (void)stopRecording {

    // Whether to record
    if ([self isRecording]) {
        [self.movieOutput stopRecording];
    }
}
  • (4) The AVCaptureFileOutputRecordingDelegate callback for the captured video
#pragma mark - AVCaptureFileOutputRecordingDelegate

- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
      fromConnections:(NSArray *)connections
                error:(NSError *)error {

    // An error occurred during recording
    if (error) {
        [self.delegate mediaCaptureFailedWithError:error];
    } else {
        // Write the video to the assets library
        [self writeVideoToAssetsLibrary:[self.outputURL copy]];
    }
    
    self.outputURL = nil;
}
  • (5) Write the captured video data to the photo library
// Write the captured video
- (void)writeVideoToAssetsLibrary:(NSURL *)videoURL {
    
    // The ALAssetsLibrary instance provides an interface for writing videos
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc]init];
    
    // Check whether the video can be written before writing to the repository (try to make a judgment before writing)
    if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:videoURL]) {
        
        // Create a block
        ALAssetsLibraryWriteVideoCompletionBlock completionBlock;
        completionBlock = ^(NSURL *assetURL,NSError *error)
        {
            if (error) {
                
                [self.delegate assetLibraryWriteFailedWithError:error];
            } else {
                // Used to display the video thumbnail
                [self generateThumbnailForVideoAtURL:videoURL];
            }
        };

        // Perform the actual write to the library
        [library writeVideoAtPathToSavedPhotosAlbum:videoURL completionBlock:completionBlock];
    }
}
  • (6) Get the video thumbnail
// Get the lower-left thumbnail of the video
- (void)generateThumbnailForVideoAtURL:(NSURL *)videoURL {

    // On videoQueue,
    dispatch_async(self.videoQueue, ^{
        
        // Create a new AVAsset & AVAssetImageGenerator
        AVAsset *asset = [AVAsset assetWithURL:videoURL];
        
        AVAssetImageGenerator *imageGenerator = [AVAssetImageGenerator assetImageGeneratorWithAsset:asset];
        
        // Set maximumSize to 100 and height to 0 to calculate the height of the image based on the aspect ratio of the video
        imageGenerator.maximumSize = CGSizeMake(100.0f, 0.0f);
        
        // Capturing a video thumbnail takes into account changes in the video (such as the direction of the video). If this is not set, the thumbnail orientation may be wrong
        imageGenerator.appliesPreferredTrackTransform = YES;
        
        // Note that you need to manage the creation and release of the CGImageRef yourself
        CGImageRef imageRef = [imageGenerator copyCGImageAtTime:kCMTimeZero actualTime:NULL error:nil];
        
        // Convert the image to UIImage
        UIImage *image = [UIImage imageWithCGImage:imageRef];
        
        // Release CGImageRef imageRef to prevent memory leaks
        CGImageRelease(imageRef);
        
        // return to the main thread
        dispatch_async(dispatch_get_main_queue(), ^{
            
            // Send a notification to pass the latest image
            [self postThumbnailNotifification:image];
            
        });
        
    });
    
}

2.8 Video Zooming

  • Since iOS 7.0, AVCaptureDevice has provided a videoZoomFactor property that applies zoom to the captured video output. Its minimum value is 1.0, and its maximum value is given by: self.cameraHelper.activeVideoDevice.activeFormat.videoMaxZoomFactor
  • Therefore, whether a device supports zooming can be determined by checking this property:
- (BOOL)cameraSupportsZoom
{
    return self.cameraHelper.activeVideoDevice.activeFormat.videoMaxZoomFactor > 1.0f;
}
  • The device zooms by cropping the image captured by the camera sensor around its center. When the zoom factor is small and the cropped picture is still at least as large as the output size, the image can be returned without upscaling. When the zoom factor is large, the device has to scale the cropped image up to the output size, which loses image quality. The exact point at which upscaling begins is given by the videoZoomFactorUpscaleThreshold property.
// tests on iphone6s and iphone8plus show a value of around 2.0
self.cameraHelper.activeVideoDevice.activeFormat.videoZoomFactorUpscaleThreshold;
  • Scaling can be controlled by a UISlider that varies from 0.0 to 1.0. The code is as follows:
{
    [self.slider addTarget:self action:@selector(sliderValueChange:) forControlEvents:UIControlEventValueChanged];
}

- (void)sliderValueChange:(id)sender
{
    UISlider *slider = (UISlider *)sender;
    [self setZoomValue:slider.value];
}

- (CGFloat)maxZoomFactor
{
    return MIN(self.cameraHelper.activeVideoDevice.activeFormat.videoMaxZoomFactor, 4.0f);
}

- (void)setZoomValue:(CGFloat)zoomValue
{
    if (!self.cameraHelper.activeVideoDevice.isRampingVideoZoom) {
        NSError *error;
        if ([self.cameraHelper.activeVideoDevice lockForConfiguration:&error]) {
            CGFloat zoomFactor = pow([self maxZoomFactor], zoomValue);
            self.cameraHelper.activeVideoDevice.videoZoomFactor = zoomFactor;
            [self.cameraHelper.activeVideoDevice unlockForConfiguration];
        }
    }
}
  • Before configuring these attributes you must lock the device, otherwise an exception may be thrown. Also note that zooming grows exponentially while the incoming slider value is linear, so a pow operation is needed to obtain the desired zoom factor. In addition, videoMaxZoomFactor can be very large (16 on the iPhone 8 Plus), and zooming that far is not very useful, so a maximum zoom value is set manually; 4.0 is chosen here.

  • The zoom above takes effect immediately. The following methods can instead ramp smoothly to a zoom factor at a given rate, or cancel an in-flight ramp:

- (void)rampZoomToValue:(CGFloat)zoomValue {
    CGFloat zoomFactor = pow([self maxZoomFactor], zoomValue);
    NSError *error;
    if ([self.activeCamera lockForConfiguration:&error]) {
        [self.activeCamera rampToVideoZoomFactor:zoomFactor
                                        withRate:THZoomRate];
        [self.activeCamera unlockForConfiguration];
    } else {
    }
}

- (void)cancelZoom {
    NSError *error;
    if ([self.activeCamera lockForConfiguration:&error]) {
        [self.activeCamera cancelVideoZoomRamp];
        [self.activeCamera unlockForConfiguration];
    } else {
    }
}
  • Of course, we can also listen to the videoZoomFactor of the device to get the current zoom value:
    [RACObserve(self, activeVideoDevice.videoZoomFactor) subscribeNext:^(id x) {
        NSLog(@"videoZoomFactor: %f".self.activeVideoDevice.videoZoomFactor);
    }];
  • You can also listen to the rampingVideoZoom of the device to know whether the device is zooming smoothly:
    [RACObserve(self, activeVideoDevice.rampingVideoZoom) subscribeNext:^(id x) {
        NSLog(@"rampingVideoZoom : %@", (self.activeVideoDevice.rampingVideoZoom)? @"true": @"false");
    }];

2.9 Processing Video Data

  • AVCaptureMovieFileOutput can simply capture video, but can’t interact with video data, so you need to use the AVCaptureVideoDataOutput class. AVCaptureVideoDataOutput is a subclass of AVCaptureOutput that provides direct access to video frames captured by the camera sensor. The counterpart is the AVCaptureAudioDataOutput class that handles audio input.

  • AVCaptureVideoDataOutput delivers frames to a delegate object that conforms to the AVCaptureVideoDataOutputSampleBufferDelegate protocol, which has the following two main methods:

- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection; // Called when a new video frame is written, the data will be decoded or re-encoded based on the output videoSetting
- (void)captureOutput:(AVCaptureOutput *)output didDropSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection; // Called when a late video frame is discarded, usually because a time-consuming operation was performed in one of the above methods
  • CMSampleBufferRef is a Core Foundation-style object provided by the Core Media framework for moving digital samples through the media pipeline. We can therefore process the pixel data of each CMSampleBufferRef video frame, as in the following example, which converts each frame to grayscale:
    int BYTES_PER_PIXEL = 4;
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); //CVPixelBufferRef stores pixel data in main memory
    CVPixelBufferLockBaseAddress(pixelBuffer, 0); // Get the lock of the corresponding memory block
    size_t bufferWidth = CVPixelBufferGetWidth(pixelBuffer);
    size_t bufferHeight = CVPixelBufferGetHeight(pixelBuffer);// Get the pixel width and height
    unsigned char *pixel = (unsigned char *)CVPixelBufferGetBaseAddress(pixelBuffer); // Get the starting position of the pixel buffer
    unsigned char grayPixel;
    for (int row = 0; row < bufferHeight; row++) {
        for (int column = 0; column < bufferWidth; column ++) { // Iterate over each pixel
            grayPixel = (pixel[0] + pixel[1] + pixel[2]) / 3.0;
            pixel[0] = pixel[1] = pixel[2] = grayPixel;
            pixel += BYTES_PER_PIXEL;
        }
    }
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer]; // Generate the corresponding CIImage from the buffer
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0); // Release the lock
  • A CMSampleBufferRef also carries format information for each frame, and the CMFormatDescription.h header defines a number of functions for retrieving it:
    CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
    CMMediaType mediaType = CMFormatDescriptionGetMediaType(formatDescription);
  • You can also read the timing information:
    CMTime presentation = CMSampleBufferGetPresentationTimeStamp(sampleBuffer); // Get the original timestamp of the frame sample
    CMTime decode = CMSampleBufferGetDecodeTimeStamp(sampleBuffer); // Get the decoding timestamp of the frame sample
  • Attached metadata, such as the EXIF dictionary, can be retrieved:
    CFDictionaryRef exif = (CFDictionaryRef)CMGetAttachment(sampleBuffer, kCGImagePropertyExifDictionary, NULL);
  • AVCaptureVideoDataOutput is configured in much the same way as AVCaptureMovieFileOutput, but with its delegate object and callback queue specified. To ensure that video frames are delivered sequentially, the queue must be a serial queue.
    self.videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    self.videoDataOutput.videoSettings = @{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)}; // The initial format of the camera is biplanar 420V, which is a YUV format, while OpenGL ES usually uses BGRA format
    if ([self.captureSession canAddOutput:self.videoDataOutput]) {
        [self.captureSession addOutput:self.videoDataOutput];
        [self.videoDataOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
    }
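  • The snippet above delivers sample buffers on the main queue (which is serial, so frame order is preserved), but in practice a dedicated serial queue is often used so that frame processing does not compete with UI work. A sketch, assuming a queue created by the app itself:
    // A dedicated serial queue for sample-buffer delivery (the queue name is arbitrary).
    dispatch_queue_t videoDataQueue = dispatch_queue_create("com.example.videoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
    [self.videoDataOutput setSampleBufferDelegate:self queue:videoDataQueue];
    // Optionally drop late frames instead of queueing them, to keep latency low.
    self.videoDataOutput.alwaysDiscardsLateVideoFrames = YES;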

2.10 High frame rate capture

  • In addition to the ordinary video capture described above, we can also capture at a high frame rate. High-frame-rate video capture was added in iOS 7; it produces more realistic, clearer footage with enhanced detail and smoother motion, especially when recording fast-moving content, and enables high-quality slow-motion effects.
  • The basic idea is as follows: first, retrieve all of the formats supported by the device from its formats property, each an AVCaptureDeviceFormat object; then inspect each format's videoSupportedFrameRateRanges property, which describes the supported minimum frame rate, maximum frame rate, and frame durations; finally, manually set the device's active format and frame duration.
  • The concrete implementation is as follows:
  • Write a category for AVCaptureDevice and get the maximum frame rate of the supported format as follows:
    AVCaptureDeviceFormat *maxFormat = nil;
    AVFrameRateRange *maxFrameRateRange = nil;
    for (AVCaptureDeviceFormat *format in self.formats) {
        FourCharCode codecType = CMVideoFormatDescriptionGetCodecType(format.formatDescription);
        //codecType is an unsigned 32-bit data type, but consists of four bytes corresponding to four characters. The possible value is "420V" or "420F". The 420V format is selected.
        if (codecType == kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) {
            NSArray *frameRateRanges = format.videoSupportedFrameRateRanges;
            for (AVFrameRateRange *range in frameRateRanges) {
                if (range.maxFrameRate > maxFrameRateRange.maxFrameRate) {
                    maxFormat = format;
                    maxFrameRateRange = range;
                }
            }
        }
    }
Copy the code
  • We can then decide whether the device supports high frame rate capture by checking whether the maximum frame rate is greater than 30:
- (BOOL)isHighFrameRate {
    return self.frameRateRange.maxFrameRate > 30.0f;
}
Copy the code
  • Now we can configure:
    if ([self hasMediaType:AVMediaTypeVideo] && [self lockForConfiguration:error] && [self.activeCamera supportsHighFrameRateCapture]) {
        CMTime minFrameDuration = self.frameRateRange.minFrameDuration;
        self.activeFormat = self.format;
        self.activeVideoMinFrameDuration = minFrameDuration;
        self.activeVideoMaxFrameDuration = minFrameDuration;
        [self unlockForConfiguration];
    }
Copy the code
  • You lock the device, and then set both the minimum frame duration and the maximum frame duration to minFrameDuration. Frame duration is reciprocal to frame rate, so the maximum frame rate corresponds to the minimum frame duration.
  • Variable speed playback can be achieved by setting different rate values on AVPlayer during playback. As measured on an iPhone 8 Plus, a rate between 0 and 0.5 still plays back at 0.5.
  • Also note the audioTimePitchAlgorithm property of AVPlayerItem. It specifies how audio is rendered when the video plays at various rates; for variable speed playback you will typically use AVAudioTimePitchAlgorithmSpectral or AVAudioTimePitchAlgorithmTimeDomain. The available options are listed below, and a short sketch follows the list:
  1. AVAudioTimePitchAlgorithmLowQualityZeroLatency: low quality, suitable for fast forward, fast rewind, or low-quality voice
  2. AVAudioTimePitchAlgorithmTimeDomain: modest quality, computationally cheap, suitable for voice
  3. AVAudioTimePitchAlgorithmSpectral: highest quality, most expensive computation, preserves the pitch of the original item
  4. AVAudioTimePitchAlgorithmVarispeed: high quality playback with no pitch correction
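  • A minimal Swift sketch of variable speed playback; the asset URL is an assumption, and the chosen algorithm should match your content:
import AVFoundation

// Assumed local asset URL; replace with your own.
let item = AVPlayerItem(url: URL(fileURLWithPath: "/path/to/video.mp4"))
// Choose how audio should be rendered at non-1x rates.
item.audioTimePitchAlgorithm = .spectral

let player = AVPlayer(playerItem: item)
player.play()
player.rate = 2.0   // 2x playback; slow motion would use a rate below 1.0
Copy the code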
  • In addition, AVFoundation provides face recognition and QR code recognition.

2.11 Face recognition

  • Face recognition requires AVCaptureMetadataOutput as the output, which is first added to the capture session:
    self.metaDataOutput = [[AVCaptureMetadataOutput alloc] init];
    if ([self.captureSession canAddOutput:self.metaDataOutput]) {
        [self.captureSession addOutput:self.metaDataOutput];
        NSArray *metaDataObjectType = @[AVMetadataObjectTypeFace];
        self.metaDataOutput.metadataObjectTypes = metaDataObjectType;
        [self.metaDataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
    }
Copy the code
  • As you can see, you need to set the metadataObjectTypes property of AVCaptureMetadataOutput to an array containing AVMetadataObjectTypeFace, which represents the face metadata object. You then set a delegate object conforming to the AVCaptureMetadataOutputObjectsDelegate protocol and a callback queue; when a face is detected, the following method is called:
- (void)captureOutput:(AVCaptureOutput *)output didOutputMetadataObjects:(NSArray<__kindof AVMetadataObject *> *)metadataObjects fromConnection:(AVCaptureConnection *)connection
{
    if (self.detectFaces) {
        self.detectFaces(metadataObjects);
    }
}
Copy the code
  • metadataObjects is an array of AVMetadataObject instances, which in this case can be treated as instances of the subclass AVMetadataFaceObject. An AVMetadataFaceObject has four important properties:
  1. faceID: identifies each detected face
  2. rollAngle: the face's roll angle, i.e. how far the head tilts toward a shoulder
  3. yawAngle: the yaw angle, i.e. the rotation of the face around the Y axis
  4. bounds: the region of the detected face
        @weakify(self)
        self.cameraHelper.detectFaces = ^(NSArray *faces) {
            @strongify(self)
            NSMutableArray *transformedFaces = [NSMutableArray array];
            for (AVMetadataFaceObject *face in faces) {
                AVMetadataObject *transformedFace = [self.previewLayer transformedMetadataObjectForMetadataObject:face];
                [transformedFaces addObject:transformedFace];
            }
            NSMutableArray *lostFaces = [self.faceLayers.allKeys mutableCopy];
            for (AVMetadataFaceObject *face in transformedFaces) {
                NSNumber *faceId = @(face.faceID);
                [lostFaces removeObject:faceId];
                
                CALayer *layer = self.faceLayers[faceId];
                if (!layer) {
                    layer = [CALayer layer];
                    layer.borderWidth = 5.0f;
                    layer.borderColor = [UIColor colorWithRed:0.188 green:0.517 blue:0.877 alpha:1.000].CGColor;
                    [self.previewLayer addSublayer:layer];
                    self.faceLayers[faceId] = layer;
                }
                layer.transform = CATransform3DIdentity;
                layer.frame = face.bounds;
                
                if (face.hasRollAngle) {
                    layer.transform = CATransform3DConcat(layer.transform, [self transformForRollAngle:face.rollAngle]);
                }
                
                if (face.hasYawAngle) {
                    NSLog(@"%f", face.yawAngle);
                    layer.transform = CATransform3DConcat(layer.transform, [self transformForYawAngle:face.yawAngle]);
                }
            }
            for (NSNumber *faceID in lostFaces) {
                CALayer *layer = self.faceLayers[faceID];
                [layer removeFromSuperlayer];
                [self.faceLayers removeObjectForKey:faceID];
            }
        };

// Rotate around Z-axis
- (CATransform3D)transformForRollAngle:(CGFloat)rollAngleInDegrees {
    CGFloat rollAngleInRadians = THDegreesToRadians(rollAngleInDegrees);
    return CATransform3DMakeRotation(rollAngleInRadians, 0.0f, 0.0f, 1.0f);
}

// Rotate around Y-axis
- (CATransform3D)transformForYawAngle:(CGFloat)yawAngleInDegrees {
    CGFloat yawAngleInRadians = THDegreesToRadians(yawAngleInDegrees);
    
    CATransform3D yawTransform = CATransform3DMakeRotation(yawAngleInRadians, 0.0f, -1.0f, 0.0f);
    
    return CATransform3DConcat(yawTransform, [self orientationTransform]);
}

- (CATransform3D)orientationTransform {
    CGFloat angle = 0.0;
    switch ([UIDevice currentDevice].orientation) {
        case UIDeviceOrientationPortraitUpsideDown:
            angle = M_PI;
            break;
        case UIDeviceOrientationLandscapeRight:
            angle = -M_PI / 2.0f;
            break;
        case UIDeviceOrientationLandscapeLeft:
            angle = M_PI / 2.0f;
            break;
        default: // as UIDeviceOrientationPortrait
            angle = 0.0;
            break;
    }
    return CATransform3DMakeRotation(angle, 0.0f, 0.0f, 1.0f);
}

static CGFloat THDegreesToRadians(CGFloat degrees) {
    return degrees * M_PI / 180;
}
Copy the code
  • We use a dictionary keyed by faceID to manage the layer that presents each face; inside the callback we update existing face layers and remove the ones that are no longer needed. For each face we also transform its layer according to the face's rollAngle and yawAngle.

  • Also note that the transformedMetadataObjectForMetadataObject: method converts metadata from the device coordinate system, whose range is (0, 0) to (1, 1), into the preview layer's coordinate system.

2.12 QR code identification

  • Machine readable codes include one-dimensional barcodes and two-dimensional codes. AVFoundation supports many one-dimensional code types and three two-dimensional code types, of which the most common is the QR code.
  • Scanning still uses AVCaptureMetadataOutput as the output, which is first added to the capture session:
    self.metaDataOutput = [[AVCaptureMetadataOutput alloc] init];
    if ([self.captureSession canAddOutput:self.metaDataOutput]) {
        [self.captureSession addOutput:self.metaDataOutput];
        [self.metaDataOutput setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];
        NSArray *types = @[AVMetadataObjectTypeQRCode];
        self.metaDataOutput.metadataObjectTypes = types;
    }
Copy the code
  • Then implement the delegate method:
- (void)captureOutput:(AVCaptureOutput *)output didOutputMetadataObjects:(NSArray<__kindof AVMetadataObject *> *)metadataObjects fromConnection:(AVCaptureConnection *)connection
{
    [metadataObjects enumerateObjectsUsingBlock:^(__kindof AVMetadataObject * _Nonnull obj, NSUInteger idx, BOOL * _Nonnull stop) {
        if ([obj isKindOfClass:[AVMetadataMachineReadableCodeObject class]]) {
            NSLog(@"%@", ((AVMetadataMachineReadableCodeObject *)obj).stringValue);
        }
    }];
}
Copy the code
  • An AVMetadataMachineReadableCodeObject has three important properties:
  1. stringValue: the decoded string contained in the code
  2. bounds: the rectangular bounds of the code
  3. corners: an array of corner dictionaries, which describes the code area more precisely than bounds
  • We can use these properties to highlight the detected QR code area in the UI.
  • First, note that an AVMetadataMachineReadableCodeObject delivered by the capture session uses device coordinates, so a coordinate transformation is required:
- (NSArray *)transformedCodesFromCodes:(NSArray *)codes {
    NSMutableArray *transformedCodes = [NSMutableArray array];
    [codes enumerateObjectsUsingBlock:^(id  _Nonnull obj, NSUInteger idx, BOOL * _Nonnull stop) {
        AVMetadataObject *transformedCode = [self.previewLayer transformedMetadataObjectForMetadataObject:obj];
        [transformedCodes addObject:transformedCode];
    }];
    return [transformedCodes copy];
}
Copy the code
  • Second, since the bounds property of each AVMetadataMachineReadableCodeObject is a CGRect, a UIBezierPath can be drawn from it directly:
- (UIBezierPath *)bezierPathForBounds:(CGRect)bounds {
    return [UIBezierPath bezierPathWithRect:bounds];
}
Copy the code
  • The corners property is an array of corner dictionaries; each dictionary must be converted to a CGPoint manually, and the points then connected to build a UIBezierPath:
- (UIBezierPath *)bezierPathForCorners:(NSArray *)corners {
    UIBezierPath *path = [UIBezierPath bezierPath];
    for (int i = 0; i < corners.count; i++) {
        CGPoint point = [self pointForCorner:corners[i]];
        if (i == 0) {
            [path moveToPoint:point];
        } else {
            [path addLineToPoint:point];
        }
    }
    [path closePath];
    return path;
}

- (CGPoint)pointForCorner:(NSDictionary *)corner {
    CGPoint point;
    CGPointMakeWithDictionaryRepresentation((CFDictionaryRef)corner, &point);
    return point;
}
Copy the code
  • Each corner dictionary simply stores an X and a Y value, and the CGPointMakeWithDictionaryRepresentation convenience function converts it to a CGPoint. A corners array typically contains four corner dictionaries. Once you have the two UIBezierPath objects for a code, you can add a corresponding CALayer to the view to highlight the detected area.

3. Example

3.1 Capture photos and record video Demo Swift version

  • This demo comes from the official Apple documentation; for details, see AVCam: Building a Camera App. It focuses on capturing photos (including depth data) and recording video using the front and rear cameras of iPhone and iPad. The demo uses the latest iOS SDK and requires iOS 13.0 or later to run.
  • The iOS Camera app lets you capture photos and movies from the front and rear cameras. Depending on the device, it also supports still capture of depth data, portrait effects mattes, and Live Photos.
  • This sample code project, AVCam, shows how to implement these capture capabilities in your own camera application. It takes advantage of the basic features of the built-in front and rear cameras on iPhone and iPad.
  • To use AVCam, you need an iOS device running iOS 13 or later. The sample does not work in the Simulator because Xcode has no access to the device camera. AVCam hides the mode buttons for features the current device does not support, such as portrait effects matte delivery on the iPhone 7 Plus.
  • The code structure of the project is shown below:

3.1.1 Configuring the capture session

  • AVCaptureSession accepts input data from capture devices such as cameras and microphones. After receiving the input, AVCaptureSession sends the data to the appropriate outputs for processing, eventually producing a movie file or still photo. After configuring the capture session's inputs and outputs, you tell it to start capture and later to stop.
private let session = AVCaptureSession()
Copy the code
  • AVCam selects the rear camera by default and configures the capture session to stream content to a video preview view. PreviewView is a custom UIView subclass backed by an AVCaptureVideoPreviewLayer. AVFoundation does not provide a PreviewView class; the sample code creates one to make session management easier.
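  • A minimal sketch of such a preview view, written here as an illustration rather than the exact AVCam source:
import UIKit
import AVFoundation

class PreviewView: UIView {
    // Make AVCaptureVideoPreviewLayer the backing layer of this view.
    override class var layerClass: AnyClass {
        return AVCaptureVideoPreviewLayer.self
    }

    var videoPreviewLayer: AVCaptureVideoPreviewLayer {
        return layer as! AVCaptureVideoPreviewLayer
    }

    // Assigning a session here starts feeding frames into the layer.
    var session: AVCaptureSession? {
        get { return videoPreviewLayer.session }
        set { videoPreviewLayer.session = newValue }
    }
}
Copy the code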

  • The following figure shows how the session manages the input device and captures the output:

  • Delegate any interaction with AvCapturesEssiessie, including its inputs and outputs, to a dedicated serial scheduling queue (sessionQueue) so that the interaction does not block the main queue. Perform any configuration that involves changing the session topology or interrupting its running video stream on a separate scheduling queue, because session configuration always blocks the execution of other tasks until the queue processes the changes. Similarly, the sample code assigns other tasks to the session queue, such as resuming an interrupted session, switching capture modes, switching cameras, and writing media to files, so that their processing does not block or delay user interaction with the application.
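  • A minimal sketch of this pattern; the queue label and helper function name are assumptions:
import AVFoundation

let session = AVCaptureSession()
// A dedicated serial queue for all session-related work.
let sessionQueue = DispatchQueue(label: "session queue")

func configureAndStart() {
    sessionQueue.async {
        session.beginConfiguration()
        session.sessionPreset = .photo
        // ... add inputs and outputs here ...
        session.commitConfiguration()
        session.startRunning()   // A blocking call, safely off the main queue.
    }
}
Copy the code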

  • In contrast, the code dispatches tasks that affect the UI (such as updating the preview view) to the main queue, because AVCaptureVideoPreviewLayer, the backing layer of the sample's preview view, is a CALayer subclass, and UIView subclasses must be manipulated on the main thread so that they display in a timely, interactive manner.

  • In viewDidLoad, AVCam creates the session and assigns it to the preview view: previewView.session = session

  • For more information about configuring an image capture session, see Setting up a Capture Session.

3.1.2 Requesting authorization to access input devices

  • Once the session is configured, it can accept input. Every AVCaptureDevice, whether a camera or a microphone, requires user authorization before it can be accessed. AVFoundation enumerates the authorization state with AVAuthorizationStatus, which tells the application whether the user has granted, restricted, or denied access to a capture device.
  • For more information about preparing your app's Info.plist and customizing the authorization request, see Requesting Authorization for Media Capture on iOS.
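  • As a sketch, the video authorization check might look like this (sessionQueue is the serial queue described above):
import AVFoundation

switch AVCaptureDevice.authorizationStatus(for: .video) {
case .authorized:
    break   // Access was already granted.
case .notDetermined:
    // Suspend session setup until the user answers the prompt.
    sessionQueue.suspend()
    AVCaptureDevice.requestAccess(for: .video) { granted in
        if !granted {
            // Handle denial, for example by pointing the user to Settings.
        }
        sessionQueue.resume()
    }
default:
    break   // .denied or .restricted: the app cannot use the camera.
}
Copy the code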

3.1.3 Switching between front and rear Cameras

  • The changeCamera method handles switching between cameras when the user taps a button in the UI. It uses a discovery session, which lists available device types in order of preference, and accepts the first device in its devices array. For example, AVCam's videoDeviceDiscoverySession queries the device the app runs on for available input devices; if a camera on the user's device is broken, it will not appear in that array. A sketch of creating such a discovery session follows the snippet below.
switch currentPosition {
case .unspecified, .front:
    preferredPosition = .back
    preferredDeviceType = .builtInDualCamera
    
case .back:
    preferredPosition = .front
    preferredDeviceType = .builtInTrueDepthCamera
    
@unknown default:
    print("Unknown capture position. Defaulting to back, dual-camera.")
    preferredPosition = .back
    preferredDeviceType = .builtInDualCamera
}
Copy the code
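  • The discovery session itself might be created roughly like this; the exact device types you list are your own choice:
import AVFoundation

let videoDeviceDiscoverySession = AVCaptureDevice.DiscoverySession(
    deviceTypes: [.builtInDualCamera, .builtInWideAngleCamera, .builtInTrueDepthCamera],
    mediaType: .video,
    position: .unspecified)

// Devices are returned in the order the types were listed above.
let availableDevices = videoDeviceDiscoverySession.devices
Copy the code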
  • If the discovery session finds a camera in the desired position, changeCamera removes the previous input from the capture session and adds the new camera as the input.
// Remove the existing device input first, because AVCaptureSession doesn't support
// simultaneous use of the rear and front cameras.
self.session.removeInput(self.videoDeviceInput)

if self.session.canAddInput(videoDeviceInput) {
    NotificationCenter.default.removeObserver(self, name: .AVCaptureDeviceSubjectAreaDidChange, object: currentVideoDevice)
    NotificationCenter.default.addObserver(self, selector: #selector(self.subjectAreaDidChange), name: .AVCaptureDeviceSubjectAreaDidChange, object: videoDeviceInput.device)
    
    self.session.addInput(videoDeviceInput)
    self.videoDeviceInput = videoDeviceInput
} else {
    self.session.addInput(self.videoDeviceInput)
}
Copy the code

3.1.4 Handling interrupts and errors

  • Interruptions such as phone calls, notifications from other applications, and music playback may occur while a session is capturing. Handle them by adding observers that listen for AVCaptureSessionWasInterrupted:
NotificationCenter.default.addObserver(self,
                                       selector: #selector(sessionWasInterrupted),
                                       name: .AVCaptureSessionWasInterrupted,
                                       object: session)
NotificationCenter.default.addObserver(self,
                                       selector: #selector(sessionInterruptionEnded),
                                       name: .AVCaptureSessionInterruptionEnded,
                                       object: session)
Copy the code
  • When AVCam receives an interruption notification, it can pause or suspend the session and offer an option to resume activity when the interruption ends. AVCam registers sessionWasInterrupted as the notification handler so it can inform the user when a capture session is interrupted:
if reason == .audioDeviceInUseByAnotherClient || reason == .videoDeviceInUseByAnotherClient {
    showResumeButton = true
} else if reason == .videoDeviceNotAvailableWithMultipleForegroundApps {
    // Fade-in a label to inform the user that the camera is unavailable.
    cameraUnavailableLabel.alpha = 0
    cameraUnavailableLabel.isHidden = false
    UIView.animate(withDuration: 0.25) {
        self.cameraUnavailableLabel.alpha = 1
    }
} else if reason == .videoDeviceNotAvailableDueToSystemPressure {
    print("Session stopped running due to shutdown system pressure level.")
}
Copy the code
  • The camera view controller also observes AVCaptureSessionRuntimeError to receive a notification when an error occurs:
NotificationCenter.default.addObserver(self,
                                       selector: #selector(sessionRuntimeError),
                                       name: .AVCaptureSessionRuntimeError,
                                       object: session)
Copy the code
  • Restart the capture session when a runtime error occurs:
// If media services were reset, and the last start succeeded, restart the session.
if error.code == .mediaServicesWereReset {
    sessionQueue.async {
        if self.isSessionRunning {
            self.session.startRunning()
            self.isSessionRunning = self.session.isRunning
        } else {
            DispatchQueue.main.async {
                self.resumeButton.isHidden = false
            }
        }
    }
} else {
    resumeButton.isHidden = false
}
Copy the code
  • Capture sessions can also stop if the device comes under system pressure, such as overheating. The camera will not degrade quality or drop frames on its own; to avoid surprising your users, you can have your application manually reduce the frame rate, turn off depth delivery, or adjust performance based on feedback from AVCaptureDevice.SystemPressureState:
let pressureLevel = systemPressureState.level
if pressureLevel == .serious || pressureLevel == .critical {
    if self.movieFileOutput == nil || self.movieFileOutput?.isRecording == false {
        do {
            try self.videoDeviceInput.device.lockForConfiguration()
            print("WARNING: Reached elevated system pressure level: \(pressureLevel). Throttling frame rate.")
            self.videoDeviceInput.device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 20)
            self.videoDeviceInput.device.activeVideoMaxFrameDuration = CMTime(value: 1, timescale: 15)
            self.videoDeviceInput.device.unlockForConfiguration()
        } catch {
            print("Could not lock device for configuration: \(error)")
        }
    }
} else if pressureLevel == .shutdown {
    print("Session stopped running due to shutdown system pressure level.")
}
Copy the code

3.1.5 Capture a photo

  • Photos are taken on the session queue. The process starts by updating the AVCapturePhotoOutput connection to match the video orientation of the video preview layer, so the camera captures exactly what the user sees on screen:
if let photoOutputConnection = self.photoOutput.connection(with: .video) {
    photoOutputConnection.videoOrientation = videoPreviewLayerOrientation!
}
Copy the code
  • After the orientation is set, AVCam creates an AVCapturePhotoSettings object to configure capture parameters such as focus, flash, and resolution:
var photoSettings = AVCapturePhotoSettings()
// Capture HEIF photos when supported. Enable auto-flash and high-resolution photos.
if  self.photoOutput.availablePhotoCodecTypes.contains(.hevc) {
    photoSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.hevc])
}

if self.videoDeviceInput.device.isFlashAvailable {
    photoSettings.flashMode = .auto
}

photoSettings.isHighResolutionPhotoEnabled = true
if !photoSettings.__availablePreviewPhotoPixelFormatTypes.isEmpty {
    photoSettings.previewPhotoFormat = [kCVPixelBufferPixelFormatTypeKey as String: photoSettings.__availablePreviewPhotoPixelFormatTypes.first!]
}
// Live Photo capture is not supported in movie mode.
if self.livePhotoMode == .on && self.photoOutput.isLivePhotoCaptureSupported {
    let livePhotoMovieFileName = NSUUID().uuidString
    let livePhotoMovieFilePath = (NSTemporaryDirectory() as NSString).appendingPathComponent((livePhotoMovieFileName as NSString).appendingPathExtension("mov")!)
    photoSettings.livePhotoMovieFileURL = URL(fileURLWithPath: livePhotoMovieFilePath)
}

photoSettings.isDepthDataDeliveryEnabled = (self.depthDataDeliveryMode == .on
    && self.photoOutput.isDepthDataDeliveryEnabled)

photoSettings.isPortraitEffectsMatteDeliveryEnabled = (self.portraitEffectsMatteDeliveryMode == .on
    && self.photoOutput.isPortraitEffectsMatteDeliveryEnabled)

if photoSettings.isDepthDataDeliveryEnabled {
    if !self.photoOutput.availableSemanticSegmentationMatteTypes.isEmpty {
        photoSettings.enabledSemanticSegmentationMatteTypes = self.selectedSemanticSegmentationMatteTypes
    }
}

photoSettings.photoQualityPrioritization = self.photoQualityPrioritizationMode
Copy the code
  • The sample uses a separate object, the PhotoCaptureProcessor, as the photo capture delegate to isolate each capture life cycle. This clear separation of capture cycles is necessary for Live Photos, where a single capture cycle may involve capturing several frames.
  • Each time the user presses the central shutter button, AVCam captures a photo with the previously configured settings by calling capturePhoto(with:delegate:):
self.photoOutput.capturePhoto(with: photoSettings, delegate: photoCaptureProcessor)
Copy the code
  • The capturePhoto method takes two arguments:
  1. An AVCapturePhotoSettings object that encapsulates the settings the user configured through the application, such as exposure, flash, focus, and torch.
  2. A delegate conforming to the AVCapturePhotoCaptureDelegate protocol, to which the system delivers callbacks during the subsequent photo capture.
  • Once the application calls capturePhoto(with:delegate:), the work of starting the photo capture is done; everything that happens to that individual photo afterward takes place in delegate callbacks.

3.1.6 Tracking results through a photo capture delegate

  • The capturePhoto method just starts the process of taking a picture. The rest of the process takes place in the delegate methods of the application implementation.

  • When you call capturePhoto, photoOutput(_:willBeginCaptureFor:) arrives first. The resolved settings represent the actual settings the camera will apply for the upcoming photo. AVCam uses this method only for Live Photo specific behavior: it checks livePhotoMovieDimensions to decide whether the photo is a Live Photo, and if so, increments a counter to keep track of Live Photos in progress:

self.sessionQueue.async {
    if capturing {
        self.inProgressLivePhotoCapturesCount += 1
    } else {
        self.inProgressLivePhotoCapturesCount -= 1
    }
    
    let inProgressLivePhotoCapturesCount = self.inProgressLivePhotoCapturesCount
    DispatchQueue.main.async {
        if inProgressLivePhotoCapturesCount > 0 {
            self.capturingLivePhotoLabel.isHidden = false
        } else if inProgressLivePhotoCapturesCount == 0 {
            self.capturingLivePhotoLabel.isHidden = true
        } else {
            print("Error: In progress Live Photo capture count is less than 0.")
        }
    }
}
Copy the code
  • photoOutput(_:willCapturePhotoFor:) arrives just after the shutter sound plays. AVCam uses this opportunity to flash the screen, signaling to the user that the camera captured a photo. The sample code implements this flash by animating the opacity of the preview view's layer from 0 to 1.
// Flash the screen to signal that AVCam took a photo.
DispatchQueue.main.async {
    self.previewView.videoPreviewLayer.opacity = 0
    UIView.animate(withDuration: 0.25) {
        self.previewView.videoPreviewLayer.opacity = 1
    }
}
Copy the code
  • photoOutput(_:didFinishProcessingPhoto:error:) arrives once the system finishes processing the depth data and the portrait effects matte. AVCam checks the portrait effects matte and depth metadata at this stage:
self.sessionQueue.async {
    self.inProgressPhotoCaptureDelegates[photoCaptureProcessor.requestedPhotoSettings.uniqueID] = nil
}
Copy the code
  • You can apply other visual effects in this delegate method, such as animating a preview thumbnail of a captured photo.
  • For more information about tracking photo progress through delegate callbacks, see Tracking Photo Capture Progress.

Taking a picture with an iOS device's camera is a complex process involving the physical camera mechanism, image signal processing, the operating system, and the application. While your application can ignore many stages of that process and simply wait for the final result, you can create a more responsive camera interface by monitoring each step. After calling capturePhoto(with:delegate:), your delegate object can follow the five main steps of the process (or more, depending on your photo settings). Depending on your capture workflow and the capture UI you want to create, your delegate can handle some or all of the following steps:

At every step of the process, the capture system provides an AVCaptureResolvedPhotoSettings object. Because multiple captures can be in flight at the same time, each resolved photo settings object has a uniqueID whose value matches the uniqueID of the AVCapturePhotoSettings you used to take the photo.
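To make the callback order concrete, a skeleton delegate might look like the following sketch, which covers the main callbacks and omits error handling:
import AVFoundation

class PhotoCaptureProcessor: NSObject, AVCapturePhotoCaptureDelegate {
    // 1. Capture is about to start; the resolved settings are now known.
    func photoOutput(_ output: AVCapturePhotoOutput,
                     willBeginCaptureFor resolvedSettings: AVCaptureResolvedPhotoSettings) { }

    // 2. Fires around the shutter sound; a good moment for UI feedback.
    func photoOutput(_ output: AVCapturePhotoOutput,
                     willCapturePhotoFor resolvedSettings: AVCaptureResolvedPhotoSettings) { }

    // 3. The processed photo (and any depth or matte data) is available.
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) { }

    // 4. The whole capture, including any Live Photo movie, is finished.
    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishCaptureFor resolvedSettings: AVCaptureResolvedPhotoSettings,
                     error: Error?) { }
}
Copy the code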

3.1.7 Capture Live Photos

  • When you enable Live Photo capture, the camera takes one still image and a short movie around the moment of capture. The application triggers Live Photo capture the same way as still photo capture: with a single call to capturePhoto(with:delegate:), passing the URL for the short movie through the livePhotoMovieFileURL property of the settings. You can enable Live Photos at the AVCapturePhotoOutput level, or configure them per capture at the AVCapturePhotoSettings level, as sketched below.
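  • As a sketch, enabling Live Photo capture might look like this; photoOutput is assumed to be the configured AVCapturePhotoOutput, and the temporary file name is an assumption:
import AVFoundation

// Enable Live Photo capture on the output only if the device supports it.
photoOutput.isLivePhotoCaptureEnabled = photoOutput.isLivePhotoCaptureSupported

// Then opt in per capture and provide a destination for the movie component.
let settings = AVCapturePhotoSettings()
if photoOutput.isLivePhotoCaptureEnabled {
    let movieName = UUID().uuidString + ".mov"   // Assumed temporary file name.
    settings.livePhotoMovieFileURL = URL(fileURLWithPath: NSTemporaryDirectory())
        .appendingPathComponent(movieName)
}
Copy the code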

  • Because Live Photo capture creates a short movie file, AVCam must express the location to save the movie file as a URL. In addition, because Live Photo captures can overlap, the code must track the number of in-progress Live Photo captures to ensure that the Live Photo label stays visible while any of them are running. The photoOutput(_:willBeginCaptureFor:) delegate method in the previous section implements this counter.

  • photoOutput(_:didFinishRecordingLivePhotoMovieForEventualFileAt:resolvedSettings:) fires when recording of the short movie ends, and AVCam dismisses its Live Photo indicator here. Because the camera has finished recording the movie, AVCam has the Live Photo processor decrement the in-progress counter: livePhotoCaptureHandler(false)

  • photoOutput(_:didFinishProcessingLivePhotoToMovieFileAt:duration:photoDisplayTime:resolvedSettings:error:) fires last, indicating that the movie has been fully written to disk and is ready to use. AVCam takes this opportunity to surface any capture errors and to redirect the saved file URL to its final output location:

if error != nil {
    print("Error processing Live Photo companion movie: \(String(describing: error))")
    return
}
livePhotoCompanionMovieURL = outputFileURL
Copy the code
  • For more information about integrating Live Photo capture into your application, see Capturing Still and Live Photos.

3.1.8 Capture depth data and portrait effects mattes

  • Using AVCapturePhotoOutput, AVCam queries the capture device to see whether its configuration can deliver depth data and a portrait effects matte with a still image. If the input device supports either of these modes and you enable them in the capture settings, the camera attaches depth and portrait effects matte data as auxiliary metadata on a per-photo basis. If the device supports depth data delivery, portrait effects mattes, or Live Photos, the app shows a button that toggles the setting on or off.
if self.photoOutput.isDepthDataDeliverySupported {
    self.photoOutput.isDepthDataDeliveryEnabled = true

    DispatchQueue.main.async {
        self.depthDataDeliveryButton.isEnabled = true
    }
}
if self.photoOutput.isPortraitEffectsMatteDeliverySupported {
    self.photoOutput.isPortraitEffectsMatteDeliveryEnabled = true

    DispatchQueue.main.async {
        self.portraitEffectsMatteDeliveryButton.isEnabled = true
    }
}
if !self.photoOutput.availableSemanticSegmentationMatteTypes.isEmpty {
    self.photoOutput.enabledSemanticSegmentationMatteTypes = self.photoOutput.availableSemanticSegmentationMatteTypes
    self.selectedSemanticSegmentationMatteTypes = self.photoOutput.availableSemanticSegmentationMatteTypes

    DispatchQueue.main.async {
        self.semanticSegmentationMatteDeliveryButton.isEnabled = (self.depthDataDeliveryMode == .on) ? true : false
    }
}
DispatchQueue.main.async {
    self.livePhotoModeButton.isHidden = false
    self.depthDataDeliveryButton.isHidden = false
    self.portraitEffectsMatteDeliveryButton.isHidden = false
    self.semanticSegmentationMatteDeliveryButton.isHidden = false
    self.photoQualityPrioritizationSegControl.isHidden = false
    self.photoQualityPrioritizationSegControl.isEnabled = true
}
Copy the code
  • The camera stores the depth and portrait effects matte metadata as auxiliary images that can be discovered and addressed through the Image I/O API. AVCam accesses this metadata by searching for an auxiliary image of type kCGImageAuxiliaryDataTypePortraitEffectsMatte (a sketch of reading it back with Image I/O follows the snippet):
if var portraitEffectsMatte = photo.portraitEffectsMatte {
    if let orientation = photo.metadata[String(kCGImagePropertyOrientation)] as? UInt32 {
        portraitEffectsMatte = portraitEffectsMatte.applyingExifOrientation(CGImagePropertyOrientation(rawValue: orientation)!)
    }
    let portraitEffectsMattePixelBuffer = portraitEffectsMatte.mattingImage
Copy the code
  • For more information about depth data capture, see Capturing Photos with Depth.

On iOS devices with a dual rear camera or a front-facing TrueDepth camera, the capture system can record depth information. A depth map is like an image, but instead of each pixel providing a color, it indicates the distance from the camera to that part of the scene, either in absolute terms or relative to other pixels in the depth map. You can use a depth map together with a photo to create image processing effects that treat the foreground and background elements of the photo differently, like the Portrait mode in the iOS Camera app. By saving the color and depth data separately, you can even apply these effects long after the photo is captured.

3.1.9 Capture semantic segmentation

  • Using AVCapturePhotoOutput, AVCam can also capture semantic segmentation mattes, which segment a person's hair, skin, and teeth into separate matte images. Capturing these auxiliary images alongside the main photo simplifies applying photo effects such as changing hair color or brightening a smile. You capture them by setting the photo output's enabledSemanticSegmentationMatteTypes property to the desired values (hair, skin, and teeth). To capture all supported types, set this property to match the photo output's availableSemanticSegmentationMatteTypes property.
// Capture all available semantic segmentation matte types.
photoOutput.enabledSemanticSegmentationMatteTypes = 
    photoOutput.availableSemanticSegmentationMatteTypes
Copy the code
  • When the photo output finishes capturing a photo, you can call the photo's semanticSegmentationMatte(for:) method to retrieve the matte image for a given segmentation type. This method returns an AVSemanticSegmentationMatte containing the matte image plus other metadata you can use when processing the image. The sample app adds the semantic segmentation matte image data to an array so it can be written to the user's photo library.
// Find the semantic segmentation matte image for the specified type.
guard var segmentationMatte = photo.semanticSegmentationMatte(for: ssmType) else { return }

// Retrieve the photo orientation and apply it to the matte image.
if let orientation = photo.metadata[String(kCGImagePropertyOrientation)] as? UInt32,
   let exifOrientation = CGImagePropertyOrientation(rawValue: orientation) {
    // Apply the Exif orientation to the matte image.
    segmentationMatte = segmentationMatte.applyingExifOrientation(exifOrientation)
}

var imageOption: CIImageOption!

// Switch on the AVSemanticSegmentationMatteType value.
switch ssmType {
case .hair:
    imageOption = .auxiliarySemanticSegmentationHairMatte
case .skin:
    imageOption = .auxiliarySemanticSegmentationSkinMatte
case .teeth:
    imageOption = .auxiliarySemanticSegmentationTeethMatte
default:
    print("This semantic segmentation type is not supported!")
    return
}

guard let perceptualColorSpace = CGColorSpace(name: CGColorSpace.sRGB) else { return }

// Create a new CIImage from the matte's underlying CVPixelBuffer.
let ciImage = CIImage( cvImageBuffer: segmentationMatte.mattingImage,
                       options: [imageOption: true,
                                 .colorSpace: perceptualColorSpace])

// Get the HEIF representation of this image.
guard let imageData = context.heifRepresentation(of: ciImage,
                                                 format: .RGBA8,
                                                 colorSpace: perceptualColorSpace,
                                                 options: [.depthImage: ciImage]) else { return }

// Add the image data to the SSM data array for writing to the photo library.
semanticSegmentationMatteDataArray.append(imageData)
Copy the code

3.1.10 Saving photos to the User photo Library

  • Before you can save an image or movie to the user's photo library, you must first request access to that library. The process of requesting write authorization mirrors capture device authorization: the system shows an alert using the text you provide in Info.plist. AVCam checks for authorization in the fileOutput(_:didFinishRecordingTo:from:error:) callback, where the AVCaptureOutput provides the media data to save; a sketch of the authorization request follows below.
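  • A minimal sketch of the request; photoData is assumed to hold the captured photo's file data representation:
import Photos

PHPhotoLibrary.requestAuthorization { status in
    guard status == .authorized else { return }
    PHPhotoLibrary.shared().performChanges({
        // Create a new asset from the captured photo data.
        let creationRequest = PHAssetCreationRequest.forAsset()
        creationRequest.addResource(with: .photo, data: photoData, options: nil)
    }, completionHandler: { success, error in
        if !success {
            print("Could not save the photo to the photo library: \(String(describing: error))")
        }
    })
}
Copy the code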

  • For more information about requesting access to a user’s photo library, see Requesting Authorization to Access photos.

  1. The user must explicitly grant your application access to Photos. Prepare your app by providing a usage string, a localizable message you add to your app's Info.plist file that explains why the app needs access to the photo library. When Photos prompts the user to grant access, the alert displays that string in the locale chosen on the user's device.
  2. The first time your app uses PHAsset, PHCollection, PHAssetCollection, or PHCollectionList methods to fetch content from the library, or uses one of the photo library's change methods to modify library content, Photos automatically and asynchronously prompts the user for authorization. After the user grants permission, the system remembers the choice for future use in your app, but the user can change it at any time in the Settings app. If the user denies access, has not yet responded to the permission prompt, or cannot grant access because of restrictions, any attempt to fetch photo library content returns an empty PHFetchResult object, and any attempt to change the library fails. If the authorization status is PHAuthorizationStatus.notDetermined, you can call requestAuthorization(_:) to prompt the user for photo library access.
  3. To use classes that interact with the photo library, such as PHAsset, PHPhotoLibrary, and PHImageManager, your app's Info.plist file must include the user-facing NSPhotoLibraryUsageDescription key with text the system displays when requesting access. Without this key, apps on iOS 10 or later will crash.

3.1.11 Record a Video File

  • AVCam supports video capture by querying for and adding input devices with the .video media type. The app defaults to the rear dual camera; if the device has no dual camera, it falls back to the wide-angle camera.
if let dualCameraDevice = AVCaptureDevice.default(.builtInDualCamera, for: .video, position: .back) {
    defaultVideoDevice = dualCameraDevice
} else if let backCameraDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) {
    // If a rear dual camera is not available, default to the rear wide angle camera.
    defaultVideoDevice = backCameraDevice
} else if let frontCameraDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front) {
    // If the rear wide angle camera isn't available, default to the front wide angle camera.
    defaultVideoDevice = frontCameraDevice
}
Copy the code
  • Instead of passing a settings object to the system as with a still photo, you pass an output URL, as with a Live Photo. The delegate callback provides the same URL back, so the application does not need to keep it in an intermediate variable.
  • Once the user taps Record to begin capturing, AVCam calls startRecording(to:recordingDelegate:):
movieFileOutput.startRecording(to: URL(fileURLWithPath: outputFilePath), recordingDelegate: self)

Copy the code
  • Just as capturePhoto triggers delegate callbacks for Still Capture, startRecording triggers a series of delegate callbacks for movie recording.

  • Track the progress of the movie recording through a chain of delegate callbacks; instead of AVCapturePhotoCaptureDelegate, implement AVCaptureFileOutputRecordingDelegate. Because the movie recording delegate callbacks need to interact with the capture session, AVCam makes the CameraViewController the delegate rather than creating a separate delegate object.
  • fileOutput(_:didStartRecordingTo:from:) fires when the file output starts writing data to a file. AVCam uses this opportunity to change the record button into a stop button:
DispatchQueue.main.async {
    self.recordButton.isEnabled = true
    self.recordButton.setImage(#imageLiteral(resourceName: "CaptureStop"), for: [])
}
Copy the code
  • fileOutput(_:didFinishRecordingTo:from:error:) fires last, indicating that the movie has been completely written to disk and is ready to use. AVCam takes this opportunity to move the temporarily saved movie from the given URL to the user's photo library or the application's documents folder:
PHPhotoLibrary.shared().performChanges({
    let options = PHAssetResourceCreationOptions()
    options.shouldMoveFile = true
    let creationRequest = PHAssetCreationRequest.forAsset()
    creationRequest.addResource(with: .video, fileURL: outputFileURL, options: options)
}, completionHandler: { success, error in
    if !success {
        print("AVCam couldn't save the movie to your photo library: \(String(describing: error))")
    }
    cleanup()
}
)
Copy the code
  • If AVCam goes into the background, for example when the user accepts a phone call, the application must ask the system for permission to finish the recording. AVCam does this by requesting extra time through a background task, which ensures there is enough time to write the file to the photo library even after AVCam moves to the background. To end background execution, AVCam calls endBackgroundTask(_:) inside fileOutput(_:didFinishRecordingTo:from:error:), as sketched after the snippet below.
self.backgroundRecordingID = UIApplication.shared.beginBackgroundTask(expirationHandler: nil)

Copy the code
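  • The corresponding cleanup at the end of recording might look roughly like this sketch; backgroundRecordingID mirrors the property shown above:
import UIKit

// Inside fileOutput(_:didFinishRecordingTo:from:error:), after the movie has been saved:
if let currentBackgroundRecordingID = self.backgroundRecordingID {
    self.backgroundRecordingID = UIBackgroundTaskIdentifier.invalid
    if currentBackgroundRecordingID != UIBackgroundTaskIdentifier.invalid {
        UIApplication.shared.endBackgroundTask(currentBackgroundRecordingID)
    }
}
Copy the code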

3.1.12 Capture photos while recording video

  • Like the iOS camera app, AVCam can take pictures while recording video. AVCam captures these photos at the same resolution as video. The implementation code is as follows:
let movieFileOutput = AVCaptureMovieFileOutput()
if self.session.canAddOutput(movieFileOutput) {
    self.session.beginConfiguration()
    self.session.addOutput(movieFileOutput)
    self.session.sessionPreset = .high
    if let connection = movieFileOutput.connection(with: .video) {
        if connection.isVideoStabilizationSupported {
            connection.preferredVideoStabilizationMode = .auto
        }
    }
    self.session.commitConfiguration()
    
    DispatchQueue.main.async {
        captureModeControl.isEnabled = true
    }
    
    self.movieFileOutput = movieFileOutput
    
    DispatchQueue.main.async {
        self.recordButton.isEnabled = true
        
        /* For photo captures during movie recording, Speed quality photo processing is prioritized to avoid frame drops during recording. */
        self.photoQualityPrioritizationSegControl.selectedSegmentIndex = 0
        self.photoQualityPrioritizationSegControl.sendActions(for: UIControl.Event.valueChanged)
    }
}
Copy the code
  • Click here to download the complete code: Swift video capture Demo