Preface

This article introduces AVFoundation, Apple's framework for working with audiovisual media on iOS. It provides photo and video capture and is responsible for collecting media data; any audio/video feature — short video, live streaming, and so on — depends on it.

1. Core Classes

Let’s start with some of the core classes in the AVFoundation framework 👇🏻

  • Capture session: AVCaptureSession
  • Capture device: AVCaptureDevice
  • Capture device input: AVCaptureDeviceInput
  • Capture output: AVCaptureOutput (abstract class)
    ◦ Still images: AVCaptureStillImageOutput
    ◦ QuickTime movie files: AVCaptureMovieFileOutput
    ◦ Audio data: AVCaptureAudioDataOutput
    ◦ Video data: AVCaptureVideoDataOutput
  • Capture connection: AVCaptureConnection
  • Capture preview: AVCaptureVideoPreviewLayer

Next, we will use the AVFoundation framework to build a basic case: implementing the functions of a camera.

2. The Video Preview Layer

First of all, we define a video preview view 👉🏻 THPreviewView. When taking a photo or recording video we need a live preview; without one the screen is black and nothing can be seen. THPreviewView associates an AVCaptureSession with an AVCaptureVideoPreviewLayer, so the header needs to declare 👇🏻

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

@interface THPreviewView : UIView

// The session associates the AVCaptureVideoPreviewLayer with an active AVCaptureSession
@property (strong, nonatomic) AVCaptureSession *session;

@end

Then THPreviewView.m implements the session getter/setter 👇🏻

// Override the session getter to return the layer's capture session
- (AVCaptureSession *)session {
    return [(AVCaptureVideoPreviewLayer *)self.layer session];
}

// Override the session setter to access the view's layer.
// Setting the session on the AVCaptureVideoPreviewLayer makes the layer render the
// captured data directly, keeping it in sync with the session's state.
- (void)setSession:(AVCaptureSession *)session {
    [(AVCaptureVideoPreviewLayer *)self.layer setSession:session];
}

Of course, we also need to override the view's layerClass method so that its backing layer is an AVCaptureVideoPreviewLayer 👇🏻

// Overriding layerClass in a UIView subclass lets it back itself with a custom layer type
+ (Class)layerClass {
    return [AVCaptureVideoPreviewLayer class];
}

2.1 Coordinate conversion problem

When taking a photo we need to focus — on a person's face, say — usually by tapping that spot in the camera's preview layer 👇🏻

⚠️ The tap location is in the view's coordinate system, which is not the same as the coordinate system the camera uses.

So we need to convert the screen's coordinate system into the camera's coordinate system. The conversion has to account for gravity, mirroring, the layer's video gravity, and orientation changes, which is quite involved; fortunately the system provides a conversion method.

We can wrap it in a private conversion method 👇🏻

// Convert a touch point in screen coordinates to a point in the camera's coordinate system
- (CGPoint)captureDevicePointForPoint:(CGPoint)point {
    AVCaptureVideoPreviewLayer *layer = (AVCaptureVideoPreviewLayer *)self.layer;
    return [layer captureDevicePointOfInterestForPoint:point];
}
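To build intuition for what the system method does, here is a simplified sketch — an illustration under stated assumptions, not the real implementation. It assumes portrait video orientation, no mirroring, and a preview that exactly fills the layer, ignoring the videoGravity cropping that captureDevicePointOfInterestForPoint: actually accounts for. The camera's point-of-interest space runs from (0,0) at the top-left to (1,1) at the bottom-right of the sensor in landscape orientation, so in portrait the axes swap 👇🏻

```objc
// Illustration only — assumes portrait orientation, no mirroring, no cropping.
// The real captureDevicePointOfInterestForPoint: handles all of those cases.
- (CGPoint)simplifiedDevicePointForPoint:(CGPoint)point {
    CGSize size = self.bounds.size;
    // Device x follows the view's y axis; device y runs opposite to the view's x axis
    return CGPointMake(point.y / size.height, 1.0f - point.x / size.width);
}
```

For example, under these assumptions a tap at the top-left corner of a portrait preview maps to (0, 1) in device space.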

2.2 Custom gesture events

When taking photos we usually need focus and exposure controls. Mimicking the system camera's gestures, we add single-tap-to-focus, double-tap-to-expose, and a two-finger double tap to reset 👇🏻

  1. Add the gesture recognizers at initialization time 👇🏻

    _singleTapRecognizer =
    [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleSingleTap:)];

    _doubleTapRecognizer =
    [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleDoubleTap:)];
    _doubleTapRecognizer.numberOfTapsRequired = 2;

    _doubleDoubleTapRecognizer =
    [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(handleDoubleDoubleTap:)];
    _doubleDoubleTapRecognizer.numberOfTapsRequired = 2;
    _doubleDoubleTapRecognizer.numberOfTouchesRequired = 2;

    [self addGestureRecognizer:_singleTapRecognizer];
    [self addGestureRecognizer:_doubleTapRecognizer];
    [self addGestureRecognizer:_doubleDoubleTapRecognizer];
    // A single tap should wait until the double tap has failed
    [_singleTapRecognizer requireGestureRecognizerToFail:_doubleTapRecognizer];
  2. Implement the handler methods 👇🏻
- (void)handleSingleTap:(UIGestureRecognizer *)recognizer {
    CGPoint point = [recognizer locationInView:self];
    // self.focusBox is the "focus" indicator view; run its animation
    [self runBoxAnimationOnView:self.focusBox point:point];
    if (self.delegate) {
        [self.delegate tappedToFocusAtPoint:[self captureDevicePointForPoint:point]];
    }
}

- (void)handleDoubleTap:(UIGestureRecognizer *)recognizer {
    CGPoint point = [recognizer locationInView:self];
    // self.exposureBox is the "exposure" indicator view; run its animation
    [self runBoxAnimationOnView:self.exposureBox point:point];
    if (self.delegate) {
        [self.delegate tappedToExposeAtPoint:[self captureDevicePointForPoint:point]];
    }
}

- (void)handleDoubleDoubleTap:(UIGestureRecognizer *)recognizer {
    // Run the reset animation
    [self runResetAnimation];
    if (self.delegate) {
        [self.delegate tappedToResetFocusAndExposure];
    }
}

Of course, these events are forwarded through a delegate protocol 👇🏻

@protocol THPreviewViewDelegate <NSObject>
- (void)tappedToFocusAtPoint:(CGPoint)point;   // focus
- (void)tappedToExposeAtPoint:(CGPoint)point;  // exposure
- (void)tappedToResetFocusAndExposure;         // reset focus & exposure
@end

3. Video Capture Configuration

Next, do some video/photo capture configuration.

3.1 Setting Up the Session

This is usually done in a controller. For example, in a controller THCameraController we declare the AVCaptureSession for capture 👇🏻

@interface THCameraController : NSObject

@property (nonatomic, strong, readonly) AVCaptureSession *captureSession;

// Set up and configure the video capture session
- (BOOL)setupSession:(NSError **)error;
- (void)startSession;
- (void)stopSession;

@end

Of course, capturing can't be done without devices: a camera for video and a microphone for audio. The input/output objects involved are usually declared in a class extension 👇🏻

@interface THCameraController () <AVCaptureFileOutputRecordingDelegate>

@property (strong, nonatomic) dispatch_queue_t videoQueue;
@property (strong, nonatomic) AVCaptureSession *captureSession;        // capture session
@property (weak, nonatomic) AVCaptureDeviceInput *activeVideoInput;    // active video input
@property (strong, nonatomic) AVCaptureStillImageOutput *imageOutput;  // still image output
@property (strong, nonatomic) AVCaptureMovieFileOutput *movieOutput;   // movie file output
@property (strong, nonatomic) NSURL *outputURL;

@end

The next step is to implement setupSession: 👇🏻

- (BOOL)setupSession:(NSError **)error {
    // Create the capture session
    self.captureSession = [[AVCaptureSession alloc] init];

    /*
     AVCaptureSessionPresetHigh
     AVCaptureSessionPresetMedium
     AVCaptureSessionPresetLow
     AVCaptureSessionPreset640x480
     AVCaptureSessionPreset1280x720
     AVCaptureSessionPresetPhoto
     */
    // Set the session preset (resolution)
    self.captureSession.sessionPreset = AVCaptureSessionPresetHigh;

    // Get the default video capture device; iOS returns the rear camera
    AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Wrap the capture device in an AVCaptureDeviceInput.
    // Note: to add a capture device to a session, it must be wrapped as an AVCaptureDeviceInput.
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:error];
    if (videoInput) {
        // canAddInput: checks whether the input can be added
        if ([self.captureSession canAddInput:videoInput]) {
            // Add videoInput to the captureSession
            [self.captureSession addInput:videoInput];
            self.activeVideoInput = videoInput;
        }
    } else {
        return NO;
    }

    // Get the default audio capture device, i.e. the built-in microphone
    AVCaptureDevice *audioDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];

    // Create a capture device input for the microphone
    AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:error];
    if (audioInput) {
        if ([self.captureSession canAddInput:audioInput]) {
            // Add audioInput to the captureSession
            [self.captureSession addInput:audioInput];
        }
    } else {
        return NO;
    }

    // AVCaptureStillImageOutput captures still images from the camera
    self.imageOutput = [[AVCaptureStillImageOutput alloc] init];
    // Configuration dictionary: we want to capture JPEG images
    self.imageOutput.outputSettings = @{AVVideoCodecKey: AVVideoCodecJPEG};
    // Check whether the output can be added; if so, add it
    if ([self.captureSession canAddOutput:self.imageOutput]) {
        [self.captureSession addOutput:self.imageOutput];
    }

    // AVCaptureMovieFileOutput writes QuickTime movie files
    self.movieOutput = [[AVCaptureMovieFileOutput alloc] init];
    // Check whether the output can be added; if so, add it
    if ([self.captureSession canAddOutput:self.movieOutput]) {
        [self.captureSession addOutput:self.movieOutput];
    }

    self.videoQueue = dispatch_queue_create("cc.VideoQueue", NULL);

    return YES;
}

The above steps are 👇🏻

  1. Initialize the session
  2. Set the preset (resolution)
  3. Configure the input devices ◦ wrap each in an AVCaptureDeviceInput ◦ both video and audio inputs are needed
  4. Configure the outputs ◦ still image output ◦ movie file output
  5. When configuring a session's inputs/outputs, always check canAddInput:/canAddOutput: first — the camera and microphone are shared system devices, not owned by the app
  6. Configure Info.plist 👉🏻 this case uses the camera, photo library, and microphone, so the privacy usage descriptions must be added to prompt the user for permission
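As a minimal sketch, the Info.plist entries could look like this — the keys are Apple's standard privacy keys, and the description strings are placeholders to adapt to your app:

```xml
<key>NSCameraUsageDescription</key>
<string>The camera is needed to take photos and record video</string>
<key>NSMicrophoneUsageDescription</key>
<string>The microphone is needed to record audio</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>Access to the photo library is needed to save captured photos</string>
```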

3.2 Starting and Ending a Session

- (void)startSession {
    // Check whether the session is already running
    if (![self.captureSession isRunning]) {
        // startRunning is a blocking call that takes time, so dispatch it asynchronously
        dispatch_async(self.videoQueue, ^{
            [self.captureSession startRunning];
        });
    }
}

- (void)stopSession {
    // Only stop if the session is running
    if ([self.captureSession isRunning]) {
        // Likewise, stop the session asynchronously
        dispatch_async(self.videoQueue, ^{
            [self.captureSession stopRunning];
        });
    }
}

There are two points to note:

  1. Check whether the session is already running before starting or stopping it
  2. Start and stop the session asynchronously — synchronous calls can block the main thread and cause stutter

4. Switching Front and Rear Cameras

An iOS device usually has two cameras 👉🏻 front and rear, with the rear camera as the default. (Of course, devices since the iPhone X series have more than two.)

4.1 Preparations

Before implementing the camera switch, we need a few helper methods 👇🏻

  • Number of cameras
- (NSUInteger)cameraCount {
     return [[AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo] count];
}
  • Camera device at specified location
- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position {
    // Get the available video devices
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
    // Iterate over all available video devices and return the one at the requested position
    for (AVCaptureDevice *device in devices) {
        if (device.position == position) {
            return device;
        }
    }
    return nil;
}
  • Active and inactive cameras
// Return the camera device backing the capture session's active video input
- (AVCaptureDevice *)activeCamera {
    return self.activeVideoInput.device;
}

// Return the currently inactive camera
- (AVCaptureDevice *)inactiveCamera {
    // Look up the opposite of the currently active camera;
    // if the device has only one camera, return nil
    AVCaptureDevice *device = nil;
    if (self.cameraCount > 1) {
        if ([self activeCamera].position == AVCaptureDevicePositionBack) {
            device = [self cameraWithPosition:AVCaptureDevicePositionFront];
        } else {
            device = [self cameraWithPosition:AVCaptureDevicePositionBack];
        }
    }
    return device;
}

4.2 Switching Cameras

- (BOOL)switchCameras {
    // Make sure there is more than one camera
    if (![self canSwitchCameras]) {
        return NO;
    }

    // Get the opposite of the current device
    NSError *error;
    AVCaptureDevice *videoDevice = [self inactiveCamera];

    // Wrap it in an AVCaptureDeviceInput
    AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];

    if (videoInput) {
        // Mark the beginning of the configuration change
        [self.captureSession beginConfiguration];

        // Remove the original capture input from the session
        [self.captureSession removeInput:self.activeVideoInput];

        // Check whether the new input can be added
        if ([self.captureSession canAddInput:videoInput]) {
            // Use videoInput as the new video capture input
            [self.captureSession addInput:videoInput];
            self.activeVideoInput = videoInput;
        } else {
            // If the new device cannot be added, re-add the original video input
            [self.captureSession addInput:self.activeVideoInput];
        }

        // Commit the configuration; AVCaptureSession applies all the changes in one batch
        [self.captureSession commitConfiguration];
    } else {
        // Creating the AVCaptureDeviceInput failed; let the delegate handle the error
        [self.delegate deviceConfigurationFailedWithError:error];
        return NO;
    }

    return YES;
}

There are a few things to note when switching 👇🏻

  • Check that there are multiple cameras
  • Check whether the new device can actually be added
  • If an error occurs, let the delegate handle it 👇🏻
// When an error occurs, the delegate is asked to handle it
@protocol THCameraControllerDelegate <NSObject>
- (void)deviceConfigurationFailedWithError:(NSError *)error;  // device configuration error
@end

5. Camera Autofocus

AVCaptureDevice defines a number of ways for developers to control the camera on iOS devices, such as 👇🏻

  • The focal length, exposure, and white balance can each be adjusted and locked independently
  • Focus and exposure can be set to specific points of interest, enabling tap-to-focus and tap-to-expose in an app
  • The device's LED can be controlled for use as a photo flash or as a torch (flashlight)

5.1 Checking Feature Support

Whenever you modify a camera device, first test whether the device supports the change — not all cameras support all features. For example, the front camera typically doesn't support tap-to-focus, because it's usually at arm's length from the subject, while most rear cameras support full-range focus.

⚠️ Note: attempting to apply an unsupported action throws an exception and crashes.

Therefore, before modifying the camera device, check whether it supports the change 👇🏻

- (BOOL)cameraSupportsTapToFocus {
    // Ask whether the active camera supports point-of-interest focus
    return [[self activeCamera] isFocusPointOfInterestSupported];
}

5.2 Focusing

- (void)focusAtPoint:(CGPoint)point {
    // Get the currently active camera device
    AVCaptureDevice *device = [self activeCamera];

    // Check that it supports point-of-interest focus and auto-focus mode
    if (device.isFocusPointOfInterestSupported &&
        [device isFocusModeSupported:AVCaptureFocusModeAutoFocus]) {
        NSError *error;
        // Lock the device for configuration
        if ([device lockForConfiguration:&error]) {
            // Set the focusPointOfInterest property
            device.focusPointOfInterest = point;
            // Set focusMode to AVCaptureFocusModeAutoFocus
            device.focusMode = AVCaptureFocusModeAutoFocus;
            // Release the lock
            [device unlockForConfiguration];
        } else {
            // On error, forward it to the delegate
            [self.delegate deviceConfigurationFailedWithError:error];
        }
    }
}

⚠️ Note: before modifying the device, lock it with lockForConfiguration:; after the modification, unlock it with unlockForConfiguration.

6. Auto Exposure & Exposure Lock

Apple exposes only limited control over the camera's exposure logic, and in normal development we sometimes encounter scenes with special exposure requirements, so we need a workaround to implement auto exposure and exposure lock.

Again, you need to determine whether the device supports exposure 👇🏻

- (BOOL)cameraSupportsTapToExpose {
    // Ask whether the device supports exposure at a point of interest
    return [[self activeCamera] isExposurePointOfInterestSupported];
}

Then expose at a given point 👇🏻

static const NSString *THCameraAdjustingExposureContext;

- (void)exposeAtPoint:(CGPoint)point {
    // Get the currently active camera device
    AVCaptureDevice *device = [self activeCamera];

    // Use continuous auto-exposure mode
    AVCaptureExposureMode exposureMode = AVCaptureExposureModeContinuousAutoExposure;

    // Check whether auto-exposure mode is supported
    if (device.isExposurePointOfInterestSupported &&
        [device isExposureModeSupported:exposureMode]) {
        NSError *error;
        // Lock the device for configuration
        if ([device lockForConfiguration:&error]) {
            // Configure the desired point and mode
            device.exposurePointOfInterest = point;
            device.exposureMode = exposureMode;

            // Check whether the device supports locked exposure mode
            if ([device isExposureModeSupported:AVCaptureExposureModeLocked]) {
                // If so, use KVO to observe the device's adjustingExposure property
                [device addObserver:self
                         forKeyPath:@"adjustingExposure"
                            options:NSKeyValueObservingOptionNew
                            context:&THCameraAdjustingExposureContext];
            }
            // Release the lock
            [device unlockForConfiguration];
        } else {
            [self.delegate deviceConfigurationFailedWithError:error];
        }
    }
}

As with focusing, we lock the device before modifying it. Meanwhile, notice that the exposure lock works by observing the device's adjustingExposure property via KVO. Why adjustingExposure?

Because adjustingExposure is a read-only property, we cannot set it or intervene in its changes directly, so we observe it with KVO and react when the value changes.

What do we do when the observed value changes? 👇🏻

- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context {
    // Check that the context is THCameraAdjustingExposureContext
    if (context == &THCameraAdjustingExposureContext) {
        AVCaptureDevice *device = (AVCaptureDevice *)object;

        // If the device is no longer adjusting exposure and supports locked mode,
        // its exposureMode can be set to AVCaptureExposureModeLocked
        if (!device.isAdjustingExposure &&
            [device isExposureModeSupported:AVCaptureExposureModeLocked]) {
            // Remove self as the observer of adjustingExposure,
            // so no further change notifications are delivered
            [object removeObserver:self
                        forKeyPath:@"adjustingExposure"
                           context:&THCameraAdjustingExposureContext];

            // Back on the main queue, lock the exposure
            dispatch_async(dispatch_get_main_queue(), ^{
                NSError *error;
                if ([device lockForConfiguration:&error]) {
                    // Switch exposureMode to locked
                    device.exposureMode = AVCaptureExposureModeLocked;
                    // Release the lock
                    [device unlockForConfiguration];
                } else {
                    [self.delegate deviceConfigurationFailedWithError:error];
                }
            });
        }
    } else {
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    }
}

The KVO handler's real job 👉🏻 is to set the device's exposureMode to AVCaptureExposureModeLocked once the adjustment finishes.

Finally, we add a reset method 👇🏻

// Reset focus & exposure
- (void)resetFocusAndExposureModes {
    AVCaptureDevice *device = [self activeCamera];

    AVCaptureFocusMode focusMode = AVCaptureFocusModeContinuousAutoFocus;
    // Whether point-of-interest focus and continuous auto-focus are supported
    BOOL canResetFocus = [device isFocusPointOfInterestSupported] &&
                         [device isFocusModeSupported:focusMode];

    AVCaptureExposureMode exposureMode = AVCaptureExposureModeContinuousAutoExposure;
    // Whether the exposure can be reset
    // (note: this must check the exposure point of interest, not the focus one)
    BOOL canResetExposure = [device isExposurePointOfInterestSupported] &&
                            [device isExposureModeSupported:exposureMode];

    // Center of the frame
    CGPoint centerPoint = CGPointMake(0.5f, 0.5f);

    NSError *error;
    if ([device lockForConfiguration:&error]) {
        // If focus can be reset, do it
        if (canResetFocus) {
            device.focusMode = focusMode;
            device.focusPointOfInterest = centerPoint;
        }
        // If exposure can be reset, do it
        if (canResetExposure) {
            device.exposureMode = exposureMode;
            device.exposurePointOfInterest = centerPoint;
        }
        // Release the lock
        [device unlockForConfiguration];
    } else {
        [self.delegate deviceConfigurationFailedWithError:error];
    }
}

7. Camera Flashlight and Flash Modes

The LED has two modes: torch (flashlight) and flash. The related code is very simple 👇🏻

// Whether the device has a flash
- (BOOL)cameraHasFlash {
    return [[self activeCamera] hasFlash];
}

// Current flash mode
- (AVCaptureFlashMode)flashMode {
    return [[self activeCamera] flashMode];
}

// Set the flash mode
- (void)setFlashMode:(AVCaptureFlashMode)flashMode {
    // Get the active device
    AVCaptureDevice *device = [self activeCamera];

    // Check whether the flash mode is supported
    if ([device isFlashModeSupported:flashMode]) {
        // If so, lock the device
        NSError *error;
        if ([device lockForConfiguration:&error]) {
            // Change the flash mode
            device.flashMode = flashMode;
            [device unlockForConfiguration];
        } else {
            [self.delegate deviceConfigurationFailedWithError:error];
        }
    }
}

// Whether the device has a torch
- (BOOL)cameraHasTorch {
    return [[self activeCamera] hasTorch];
}

// Current torch mode
- (AVCaptureTorchMode)torchMode {
    return [[self activeCamera] torchMode];
}

// Turn the torch on or off
- (void)setTorchMode:(AVCaptureTorchMode)torchMode {
    AVCaptureDevice *device = [self activeCamera];

    if ([device isTorchModeSupported:torchMode]) {
        NSError *error;
        if ([device lockForConfiguration:&error]) {
            device.torchMode = torchMode;
            [device unlockForConfiguration];
        } else {
            [self.delegate deviceConfigurationFailedWithError:error];
        }
    }
}

8. Still Image Capture

Measured against the system camera, we have so far implemented its basic building blocks 👇🏻

  • Session configuration, setting input and output devices
  • Auxiliary functions: Switch front and rear cameras, focus, exposure, flashlight and flash etc.

Moving on, we implement still image capture. The general requirements are 👇🏻

  1. Take photos and save them to the system album after completion
  2. Provide a thumbnail image to display to the user

Also, there are two things to consider when taking a still image 👇🏻

  1. Device orientation: even if the app only supports portrait, photos should still come out correctly when shot in landscape
  2. Accessing the photo library requires the Info.plist privacy permission, otherwise the app will crash

8.1 Orientation

// Get the current video orientation
- (AVCaptureVideoOrientation)currentVideoOrientation {
    AVCaptureVideoOrientation orientation;

    // Map the UIDevice orientation to a video orientation
    switch ([UIDevice currentDevice].orientation) {
        case UIDeviceOrientationPortrait:
            orientation = AVCaptureVideoOrientationPortrait;
            break;
        case UIDeviceOrientationLandscapeRight:
            orientation = AVCaptureVideoOrientationLandscapeLeft;
            break;
        case UIDeviceOrientationPortraitUpsideDown:
            orientation = AVCaptureVideoOrientationPortraitUpsideDown;
            break;
        default:
            orientation = AVCaptureVideoOrientationLandscapeRight;
            break;
    }
    return orientation;
}

8.2 Taking Pictures

/*
 AVCaptureStillImageOutput is a subclass of AVCaptureOutput,
 used to capture still images from the camera.
 */
- (void)captureStillImage {
    // Get the connection
    AVCaptureConnection *connection = [self.imageOutput connectionWithMediaType:AVMediaTypeVideo];

    // If the connection supports setting the video orientation, update it
    if (connection.isVideoOrientationSupported) {
        connection.videoOrientation = [self currentVideoOrientation];
    }

    // Completion handler
    id handler = ^(CMSampleBufferRef sampleBuffer, NSError *error) {
        if (sampleBuffer != NULL) {
            NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:sampleBuffer];
            UIImage *image = [[UIImage alloc] initWithData:imageData];
            // Key step: after a successful capture, save the image to the system photo library
            [self writeImageToAssetsLibrary:image];
        } else {
            NSLog(@"NULL sampleBuffer: %@", [error localizedDescription]);
        }
    };

    // Capture the still image
    [self.imageOutput captureStillImageAsynchronouslyFromConnection:connection
                                                  completionHandler:handler];
}

Then the code that saves to the photo library 👇🏻

- (void)writeImageToAssetsLibrary:(UIImage *)image {
    // Create an ALAssetsLibrary instance
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];

    // Parameter 1: the image (as a CGImageRef, hence image.CGImage)
    // Parameter 2: the orientation, cast to NSUInteger
    // Parameter 3: a completion block handling success and failure
    [library writeImageToSavedPhotosAlbum:image.CGImage
                              orientation:(NSUInteger)image.imageOrientation
                          completionBlock:^(NSURL *assetURL, NSError *error) {
        if (!error) {
            // On success, post a notification carrying the captured image,
            // used to draw the thumbnail in the lower-left corner
            [self postThumbnailNotifification:image];
        } else {
            // On failure, print the error message
            id message = [error localizedDescription];
            NSLog(@"%@", message);
        }
    }];
}

Send thumbnail notifications 👇🏻

- (void)postThumbnailNotifification:(UIImage *)image {
    // Dispatch back to the main queue
    dispatch_async(dispatch_get_main_queue(), ^{
        NSNotificationCenter *nc = [NSNotificationCenter defaultCenter];
        [nc postNotificationName:THThumbnailCreatedNotification object:image];
    });
}
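The observer side is not shown here; as a hedged sketch, a hypothetical view controller could subscribe to the notification like this (updateThumbnail: and thumbnailButton are illustrative names, not from the original code):

```objc
// Hypothetical observer — THThumbnailCreatedNotification is posted above;
// the view controller and its thumbnailButton are assumptions for illustration.
- (void)viewDidLoad {
    [super viewDidLoad];
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(updateThumbnail:)
                                                 name:THThumbnailCreatedNotification
                                               object:nil];
}

- (void)updateThumbnail:(NSNotification *)notification {
    // The captured image travels as the notification's object
    UIImage *image = notification.object;
    [self.thumbnailButton setBackgroundImage:image forState:UIControlStateNormal];
}
```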

Summary

This article used the AVFoundation framework to reproduce the system camera's features, mainly covering AVCaptureSession configuration, switching between front and rear cameras, focus and exposure handling, and auxiliary features such as the flash and torch. None of it is difficult, but several points deserve attention 👇🏻

  1. Video preview layer

    • Converting between the screen's coordinate system and the camera's coordinate system
  2. AVCaptureSession configuration

    • Inputs must be wrapped as AVCaptureDeviceInput, and the video and audio devices are added separately
    • Outputs can be added directly
    • Always check canAddInput: / canAddOutput: before adding
    • Configure Info.plist, otherwise the app will crash
  3. Switching front and rear cameras

    • Call beginConfiguration before the change and commitConfiguration after it
    • If the new device cannot be added, re-add the original video input to the capture session
  4. Focus and exposure

    • Check whether auxiliary features such as focus and exposure are supported
    • Lock the device with lockForConfiguration: before changing it, and release the lock with unlockForConfiguration afterwards
    • Exposure lock works by observing the read-only adjustingExposure property via KVO
  5. Still image capture

    • Pay attention to the device orientation
    • Saving to the photo library requires the Info.plist privacy permission, otherwise the app will crash