A direct look at camera focus and exposure

To get up and running with audio and video quickly, use AVKit. For fine-grained control, use AVFoundation.


Key points:

  • Loading resources (typically videos, images, and Live Photos)
  • Playback
  • Editing
  • Capture (taking photos and recording video)
  • Exporting resources (saving processed and edited photos and videos to the photo library)

AVFoundation makes heavy use of asynchrony for video loading and exporting. Simply firing off a synchronous request will not work: it blocks the current thread and stalls the UI. AVFoundation is designed to take full advantage of 64-bit hardware and multithreading.
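For example, loading an asset's metadata should be done asynchronously. A minimal sketch (videoURL is a placeholder for a local or remote file URL, not from this article's project):

    let asset = AVAsset(url: videoURL)
    // Ask for the "duration" key asynchronously instead of blocking the current thread
    asset.loadValuesAsynchronously(forKeys: ["duration"]) {
        var error: NSError?
        if asset.statusOfValue(forKey: "duration", error: &error) == .loaded {
            print("Duration: \(CMTimeGetSeconds(asset.duration)) seconds")
        } else {
            print("Could not load duration: \(String(describing: error))")
        }
    }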


The first topic is playback: playing local video files, and playing remote video via streaming.

Local file, single playback

AVKit provides AVPlayerViewController, which is a subclass of UIViewController.

AVPlayerViewController is very powerful on tvOS. (This article covers iOS only.)

Apple’s own AVPlayerViewController comes with a full set of playback controls. When playing a local file you can play, pause, fast-forward, rewind, and adjust the video’s aspect behavior (whether the picture fits inside the screen or fills it).

Here is the code for Apple’s straightforward design:

    // Create a player with the URL of the local video file
    let player = AVPlayer(url: url)

    // Create an AVPlayerViewController instance and hand it the player
    let playerViewController = AVPlayerViewController()
    playerViewController.player = player

    present(playerViewController, animated: true) {
        // The completion closure runs once the interface is on screen, then playback starts
        playerViewController.player?.play()
    }

Local files, consecutive playback

For sequential playback, use AVQueuePlayer, a subclass of AVPlayer: put multiple items into a queue and they play back to back, one after another.

AVPlayerItem carries a lot of information about a video resource, including its URI, track information, duration, and so on.

According to Apple’s documentation, AVPlayerItem manages the timing and presentation state of a resource played by the player. It has an AVAsset property holding the resource to play.

    var queue = [AVPlayerItem]()
    let videoClip = AVPlayerItem(url: url)
    queue.append(videoClip)
    // The queue can keep taking more AVPlayerItem instances

    let queuePlayer = AVQueuePlayer(items: queue)
    let playerViewController = AVPlayerViewController()
    playerViewController.player = queuePlayer
    present(playerViewController, animated: true) {
        playerViewController.player?.play()
    }

Picture-in-picture on the iPad

For the picture in picture feature, support background audio through AVAudioSession: add the code below in the AppDelegate's didFinishLaunchingWithOptions, and first enable the relevant background mode under the Capabilities tab of the Xcode target.

    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(AVAudioSessionCategoryPlayback)
        try session.setActive(true)
    } catch let error {
        print("AVFoundation configuration error: \(error.localizedDescription)")
    }
    // This is necessary because Apple treats picture-in-picture video as a background task.

Streaming media and online video

Simply replace the local file URL with a network URL.
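A minimal sketch (the HLS URL below is only a placeholder):

    if let remoteURL = URL(string: "https://example.com/stream/playlist.m3u8") {
        let player = AVPlayer(url: remoteURL)
        let playerViewController = AVPlayerViewController()
        playerViewController.player = player
        present(playerViewController, animated: true) {
            player.play()
        }
    }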

Optimization: dismiss the playback interface when playback completes.

    override func viewDidLoad() {
        super.viewDidLoad()
        // Observe the playback-finished notification
        NotificationCenter.default.addObserver(self,
                                               selector: #selector(playerItemDidReachEnd),
                                               name: NSNotification.Name.AVPlayerItemDidPlayToEndTime,
                                               object: nil)
    }

    // Dismiss the player interface when playback finishes
    @objc func playerItemDidReachEnd() {
        self.presentedViewController?.dismiss(animated: true, completion: {})
    }

Next, taking photos: set up a capture session and preview the camera feed in real time.

Setting the front and rear cameras, focus and exposure, and taking photos (still images)

The core class for capture is AVCaptureSession, which establishes a video stream session between the app and iOS. AVCaptureSession acts as a scheduling hub that controls the input and output streams of the devices, specifically the camera and the microphone.

The AVCaptureDeviceInput class is the input source of the video stream; its data is what the preview displays and what ends up in the exported video file. Once the session object has been created it can be reconfigured, so the session's configuration and its input/output routes can be changed dynamically. For example, a single session can export photos through AVCapturePhotoOutput and video files through AVCaptureMovieFileOutput.

Starting a Video Session

Before calling captureSession.startRunning(), add the AVCaptureDeviceInput input and the AVCapturePhotoOutput / AVCaptureMovieFileOutput outputs, and prepare the AVCaptureVideoPreviewLayer preview.

    // The session object
    let captureSession = AVCaptureSession()

    // Two outputs: one for still photos, one for movie files
    let imageOutput = AVCapturePhotoOutput()
    let movieOutput = AVCaptureMovieFileOutput()

    func setupSession() -> Bool {
        // Set the session preset first; it controls the quality of the output
        captureSession.sessionPreset = AVCaptureSession.Preset.high

        // The default camera is the back-facing camera, not the selfie camera
        let camera = AVCaptureDevice.default(for: .video)
        do {
            let input = try AVCaptureDeviceInput(device: camera!)
            if captureSession.canAddInput(input) {
                captureSession.addInput(input)   // add the camera input for photos and video
                activeInput = input
            }
        } catch {
            print("Error setting device input: \(error)")
            return false
        }

        // Set up the microphone
        let microphone = AVCaptureDevice.default(for: .audio)
        do {
            let micInput = try AVCaptureDeviceInput(device: microphone!)
            if captureSession.canAddInput(micInput) {
                captureSession.addInput(micInput)   // add the microphone input
            }
        } catch {
            print("Error setting device audio input: \(String(describing: error.localizedDescription))")
            fatalError("Mic")
        }

        // Add the two outputs: photos and movie files
        if captureSession.canAddOutput(imageOutput) {
            captureSession.addOutput(imageOutput)
        }
        if captureSession.canAddOutput(movieOutput) {
            captureSession.addOutput(movieOutput)
        }
        return true
    }
Setting up the video session preview

AVCaptureVideoPreviewLayer is a subclass of CALayer used to display the camera feed.

    func setupPreview() {
        // Configure previewLayer, initializing it with the capture session
        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        // Set its size; videoGravity controls how the video stretches to fill the layer
        previewLayer.frame = camPreview.bounds
        previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
        // camPreview is a UIView laid on top of self.view
        camPreview.layer.addSublayer(previewLayer)
    }
Shooting: startSession

Starting the video stream: if the session is already running, do nothing; if not, start it. Starting a session is a time-consuming operation, so to avoid blocking the main thread it usually runs asynchronously on a background queue.

let videoQueue = DispatchQueue.global(qos: .default)

    func startSession() {
        if !captureSession.isRunning {
            videoQueue.async {
                self.captureSession.startRunning()
            }
        }
    }

Taking a photo: the code below captures a still JPEG, not a Live Photo.

    // Configuration for still images
    var outputSetting = AVCapturePhotoSettings(format: [AVVideoCodecKey: AVVideoCodecType.jpeg])

    func capturePhoto() {
        guard PHPhotoLibrary.authorizationStatus() == PHAuthorizationStatus.authorized else {
            PHPhotoLibrary.requestAuthorization(requestAuthorizationHander)
            return
        }
        let settings = AVCapturePhotoSettings(from: outputSetting)
        // imageOutput captures a still image from the sample buffer
        imageOutput.capturePhoto(with: settings, delegate: self)
    }

    extension ViewController: AVCapturePhotoCaptureDelegate {
        func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
            // If the sample buffer holds data, unpack it
            if let imageData = photo.fileDataRepresentation() {
                let image = UIImage(data: imageData)
                // Composite the photo (see the GitHub repo below for penguinPhotoBomb)
                let photoBomb = image?.penguinPhotoBomb(image: image!)
                // Finally, save the composited photo to the system photo library
                self.savePhotoToLibrary(image: photoBomb!)
            } else {
                print("Error capturing photo: \(String(describing: error?.localizedDescription))")
            }
        }
    }

Front facing camera for selfies

First, make sure the phone has more than one camera. If it does, the camera input can be switched. The trick is: begin the configuration, make the changes, and commit them.

Call captureSession.beginConfiguration(), then make the changes; they take effect only when captureSession.commitConfiguration() is called.

UIView rendering works the same way with CATransaction: begin, set, commit, and the refreshed interface appears on screen.
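For comparison, a tiny CATransaction sketch (someLayer is a placeholder CALayer, not from this article's project):

    CATransaction.begin()                     // start
    CATransaction.setAnimationDuration(0.25)  // set
    someLayer.opacity = 0.5
    CATransaction.commit()                    // commit: the change shows up on screen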

    @IBAction func switchCameras(_ sender: UIButton) {
        // Don't switch while a recording is in progress
        guard movieOutput.isRecording == false else {
            return
        }
        // Make sure the phone has both cameras
        guard let frontCamera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front),
              let backCamera = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else {
            return
        }
        // Create a new AVCaptureDeviceInput to switch to, and update the captureSession configuration
        do {
            var input: AVCaptureDeviceInput?
            // Identify the current camera so we can pick the other one
            if activeInput.device == frontCamera {
                input = try AVCaptureDeviceInput(device: backCamera)
            } else {
                input = try AVCaptureDeviceInput(device: frontCamera)
            }
            // With the new input source ready, start configuring the captureSession
            captureSession.beginConfiguration()
            // Remove the old input source, i.e. stop taking input from the current camera
            captureSession.removeInput(activeInput)
            // Add the new input source, i.e. let the other camera provide input
            if captureSession.canAddInput(input!) {
                captureSession.addInput(input!)
                activeInput = input
            }
            // Changes made since beginConfiguration() take effect only once committed
            captureSession.commitConfiguration()
        } catch {
            print("Error, switching cameras: \(String(describing: error))")
        }
    }

Focus POI: tap the screen and the camera focuses on that point of interest

The implementation converts the tapped screen/UI coordinates, i.e. the point in the preview layer, into the camera's coordinate system, then uses the converted point to set the focus point and focus mode.

Configuring focus is user input that uses the phone's camera hardware. While configuring the POI there can be contention (for example from background processes), so the device must be locked for configuration: device.lockForConfiguration().

Note: selfies cannot be focused. The front-facing camera has no focus POI.

    @objc
    func tapToFocus(recognizer: UIGestureRecognizer) {
        if activeInput.device.isFocusPointOfInterestSupported {
            // Get the tapped point in the preview layer's coordinates
            let point = recognizer.location(in: camPreview)
            // Convert that point into the camera's coordinate system
            let pointOfInterest = previewLayer.captureDevicePointConverted(fromLayerPoint: point)
            // Update the related UI as you like (focusMarker is the focus indicator from the sample project)
            showMarkerAtPoint(point: point, marker: focusMarker)
            focusAtPoint(pointOfInterest)
        }
    }

    // Set the focus using the converted point
    func focusAtPoint(_ point: CGPoint) {
        let device = activeInput.device
        // First check whether the device can focus at a point
        if device.isFocusPointOfInterestSupported, device.isFocusModeSupported(.autoFocus) {
            do {
                // Lock the device for configuration
                try device.lockForConfiguration()
                device.focusPointOfInterest = point
                device.focusMode = .autoFocus
                // Configuration is done, unlock
                device.unlockForConfiguration()
            } catch {
                print("Error focusing on POI: \(String(describing: error.localizedDescription))")
            }
        }
    }

Exposure: double-tap to set the exposure point

Similar to focusing, the implementation converts the tapped screen/UI coordinates, i.e. the point in the preview layer, into the camera's coordinate system, then uses the converted point to set the exposure point and exposure mode. Unlike focus, the exposure mode changes twice.

The exposure mode goes from the default locked to continuousAutoExposure at the selected point; when the system finishes, it switches back to the default locked. Because there is no way to know when the system finishes the continuous auto exposure, KVO is used on activeInput.device's adjustingExposure property: when the adjustment ends, the exposure mode is set to locked.

(The timing matters: on double-tap the camera auto-exposes at that point; to prevent interference, once the exposure finishes the mode is changed back to locked, so the camera is not continuously re-exposing.)

(This is a bit like listening for the keyboard, where system notifications are used.)
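For comparison, listening for the keyboard also goes through a system notification. A small sketch (keyboardWillShow(_:) is a placeholder selector):

    NotificationCenter.default.addObserver(self,
                                           selector: #selector(keyboardWillShow(_:)),
                                           name: UIResponder.keyboardWillShowNotification,
                                           object: nil)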

Configuring exposure is user input that uses the phone's camera hardware. While configuring, there can be contention (for example from background processes), so the device is locked first: device.lockForConfiguration().

Note: unlike focus, the front-facing camera does support exposure adjustment.

    // Double-tap to expose; see the GitHub repo for the gesture setup
    @objc func tapToExpose(recognizer: UIGestureRecognizer) {
        if activeInput.device.isExposurePointOfInterestSupported {
            // As with focus, get the tapped point in the preview layer's coordinates
            let point = recognizer.location(in: camPreview)
            // Convert that point into the camera's coordinate system
            let pointOfInterest = previewLayer.captureDevicePointConverted(fromLayerPoint: point)
            showMarkerAtPoint(point: point, marker: exposureMarker)
            exposeAtPoint(pointOfInterest)
        }
    }

    private var adjustingExposureContext: String = "Exposure"
    private let kExposure = "adjustingExposure"

    func exposeAtPoint(_ point: CGPoint) {
        let device = activeInput.device
        if device.isExposurePointOfInterestSupported, device.isExposureModeSupported(.continuousAutoExposure) {
            do {
                try device.lockForConfiguration()
                device.exposurePointOfInterest = point
                device.exposureMode = .continuousAutoExposure
                // First check that the camera can lock exposure, i.e. the exposure property can be adjusted
                if device.isExposureModeSupported(.locked) {
                    // Unlike focus, exposure changes mode twice. The second change happens at an
                    // unknown time (whenever the system finishes adjusting), so use KVO.
                    device.addObserver(self,
                                       forKeyPath: kExposure,
                                       options: .new,
                                       context: &adjustingExposureContext)
                }
                // Changes made, finish the configuration
                device.unlockForConfiguration()
            } catch {
                print("Error exposing on POI: \(String(describing: error.localizedDescription))")
            }
        }
    }

    override func observeValue(forKeyPath keyPath: String?, of object: Any?,
                               change: [NSKeyValueChangeKey: Any]?, context: UnsafeMutableRawPointer?) {
        // Make sure this is the context we are observing
        if context == &adjustingExposureContext {
            let device = object as! AVCaptureDevice
            // If the camera is no longer adjusting exposure, i.e. the exposure is done, handle it
            if !device.isAdjustingExposure, device.isExposureModeSupported(.locked) {
                // The observed property changed; this only needs to fire once,
                // so tear down the KVO and reconfigure asynchronously
                device.removeObserver(self, forKeyPath: kExposure, context: &adjustingExposureContext)
                DispatchQueue.main.async {
                    do {
                        // When done, lock the exposure again
                        try device.lockForConfiguration()
                        device.exposureMode = .locked
                        device.unlockForConfiguration()
                    } catch {
                        print("Error exposing on POI: \(String(describing: error.localizedDescription))")
                    }
                }
            }
        } else {
            super.observeValue(forKeyPath: keyPath, of: object, change: change, context: context)
        }
    }


See the full code: Github.com/BoxDengJZ/A…

More:


Second is recording video and exporting the video file to the photo library.

There is also video composition: combining multiple video clips into a single video file.

Common AVFoundation video routines: video composition and export, the video torch and the flash


Finally, adding graphic overlays and animations to video.


Recommended Resources:

WWDC 2016: Advances in iOS Photography

Apple’s AVFoundation Programming Guide

Video tutorial

A blog post on playing local and online video with AVPlayer