Audio and video capture consists of two parts: video capture and audio capture. On iOS, audio and video can be captured simultaneously, and the system framework AVFoundation does the heavy lifting. We can also switch between the front and rear cameras while capturing video, and finally write the recorded video into the sandbox.

1. Capture and display of audio and video data

Initialize the video input and output

// Lazily load a session; all capture operations are performed through it
fileprivate lazy var session: AVCaptureSession = AVCaptureSession()
// Save the video output
fileprivate var videoOutput: AVCaptureVideoDataOutput?
// Save the video input
fileprivate var videoInput: AVCaptureDeviceInput?
// Save the preview layer
fileprivate var previewLayer: AVCaptureVideoPreviewLayer?

Set the video input source and output source

// Set the video input source
guard let devices = AVCaptureDevice.devices() as? [AVCaptureDevice] else { return }
// Get the front camera (use .back for the rear camera)
guard let device = devices.filter({ $0.position == .front }).first else { return }
guard let input = try? AVCaptureDeviceInput(device: device) else { return }
self.videoInput = input

// Set the video output source
let output = AVCaptureVideoDataOutput()
let queue = DispatchQueue.global()
// Set the delegate that receives the captured data;
// the delegate must conform to AVCaptureVideoDataOutputSampleBufferDelegate
output.setSampleBufferDelegate(self, queue: queue)
self.videoOutput = output

Set the audio input source and output source

// Set the audio input source
guard let device = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio) else { return }
guard let input = try? AVCaptureDeviceInput(device: device) else { return }

// Set the audio output source
let output = AVCaptureAudioDataOutput()
let queue = DispatchQueue.global()
// The delegate must conform to AVCaptureAudioDataOutputSampleBufferDelegate
output.setSampleBufferDelegate(self, queue: queue)

Add the audio and video inputs and outputs to the session, checking whether each one can be added before adding it

// Add input and output

// Note: call the session's beginConfiguration() method before configuring it,
// to tell the system that configuration is starting; then call commitConfiguration() to commit the changes
session.beginConfiguration()
if session.canAddInput(input) {
    session.addInput(input)
}
if session.canAddOutput(output) {
    session.addOutput(output)
}
session.commitConfiguration()

2. Implement the audio and video capture delegates

Although the delegate protocols for audio and video have different names, the method to implement is the same. Therefore, to tell audio data from video data, we first need to make a check, which uses this method of AVCaptureOutput:

// This convenience method returns the first AVCaptureConnection in the receiver's
// connections array that has an AVCaptureInputPort of the specified mediaType. If
// no connection with the specified mediaType is found, nil is returned.
open func connection(withMediaType mediaType: String!) -> AVCaptureConnection!

extension ViewController: AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate {
    func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
        if videoOutput?.connection(withMediaType: AVMediaTypeVideo) == connection {
            print("Video data")
        } else {
            print("Audio data")
        }
    }
}

3. Initialize a preview layer to display the captured video.

// Create a preview layer
guard let previewLayer = AVCaptureVideoPreviewLayer(session: session) else { return }
previewLayer.frame = view.bounds

// Add the layer to the controller's View layer
view.layer.insertSublayer(previewLayer, at: 0)
self.previewLayer = previewLayer

Now that the basic setup is complete, to start or stop capturing audio and video you just call:

// Start capturing
session.startRunning()
// Stop capturing
session.stopRunning()

Switch the camera

This simply replaces the current video input; the process is the same as setting the input source above.

// 1. Get the position opposite to the current camera's
guard let videoInput = videoInput else { return }
let position: AVCaptureDevicePosition = videoInput.device.position == .front ? .back : .front

guard let devices = AVCaptureDevice.devices() as? [AVCaptureDevice] else { return }
guard let device = devices.filter({ $0.position == position }).first else { return }
guard let newInput = try? AVCaptureDeviceInput(device: device) else { return }

// 2. Remove the previous input and add a new input
session.beginConfiguration()
session.removeInput(videoInput)
if session.canAddInput(newInput) {
    session.addInput(newInput)
}
session.commitConfiguration()

// 3. Save the latest input
self.videoInput = newInput
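The toggle in step 1 is just a ternary on the current camera position. As a standalone sketch with a stand-in enum (hypothetical, used here only because `AVCaptureDevicePosition` requires AVFoundation):

```swift
// Stand-in for AVCaptureDevicePosition, for illustration only
enum Position {
    case front, back
}

// Toggle the capture position, mirroring the ternary used above
func toggled(_ position: Position) -> Position {
    return position == .front ? .back : .front
}
```

Calling `toggled(.front)` yields `.back`, so switching twice returns to the original camera.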

Record video and write it to a file

fileprivate var movieOutput: AVCaptureMovieFileOutput?

Start writing to the file as soon as audio and video capture starts

// Start writing files

// Create an output that writes to a file
let fileOutput = AVCaptureMovieFileOutput()
// Save it so we can stop writing to the file later
self.movieOutput = fileOutput

// Configure the video connection, otherwise an error is reported
let connection = fileOutput.connection(withMediaType: AVMediaTypeVideo)
connection?.automaticallyAdjustsVideoMirroring = true

if session.canAddOutput(fileOutput) {
    session.addOutput(fileOutput)
}

// Start writing to the file
let filePath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true).first! + "/abc.mp4"
let fileUrl = URL(fileURLWithPath: filePath)
fileOutput.startRecording(toOutputFileURL: fileUrl, recordingDelegate: self)
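The output path above is built by appending a file name to the sandbox's Documents directory. A minimal Foundation-only sketch of that step (the `abc.mp4` name comes from the snippet above):

```swift
import Foundation

// Locate the sandbox's Documents directory and append the output file name
let documents = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true).first!
let filePath = documents + "/abc.mp4"
// AVCaptureMovieFileOutput expects a file URL, not a plain path string
let fileUrl = URL(fileURLWithPath: filePath)
```

Using `URL(fileURLWithPath:)` rather than `URL(string:)` matters here: the former produces a `file://` URL, which is what the file output requires.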

Stop writing to the file when audio and video capture stops

// Stop writing to the file
movieOutput?.stopRecording()

See DEMO for details.