Taking photos is one of the most important uses of a mobile phone, so it is worth understanding how photography and video processing work.
Record a video and export the video file to the photo album
Working with AVFoundation follows a routine: configure a session, add an input and an output, and open up the pipeline for the video stream. A capture device acts as the input and supplies the data; the session sits as a bridge between input and output, controlling and scheduling the flow; finally you specify the kind of output you want. AVCaptureDevice.default(for: .audio) supplies the audio input, and the video stream is written out through the AVCaptureMovieFileOutput class.
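Below is a minimal sketch of that session setup, consistent with the routine just described. The property name captureSession is an assumption here; movieOutput and activeInput match the names used later in this article.

```swift
import AVFoundation

let captureSession = AVCaptureSession()
let movieOutput = AVCaptureMovieFileOutput()
var activeInput: AVCaptureDeviceInput!

func setupSession() -> Bool {
    captureSession.sessionPreset = AVCaptureSession.Preset.high
    do {
        // Camera supplies the video input
        let camera = AVCaptureDevice.default(for: .video)!
        let videoInput = try AVCaptureDeviceInput(device: camera)
        if captureSession.canAddInput(videoInput) {
            captureSession.addInput(videoInput)
            activeInput = videoInput
        }
        // Microphone supplies the audio input
        let microphone = AVCaptureDevice.default(for: .audio)!
        let audioInput = try AVCaptureDeviceInput(device: microphone)
        if captureSession.canAddInput(audioInput) {
            captureSession.addInput(audioInput)
        }
    } catch {
        print("Error configuring device input: \(error)")
        return false
    }
    // The movie file output receives the recorded video stream
    if captureSession.canAddOutput(movieOutput) {
        captureSession.addOutput(movieOutput)
    }
    return true
}
```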
The code for shooting video is as follows:
```swift
func captureMovie() {
    // First check the current state and toggle: only start if not already recording
    guard movieOutput.isRecording == false else {
        print("movieOutput.isRecording\n")
        stopRecording()
        return
    }
    // Get the connection for the video output
    let connection = movieOutput.connection(with: .video)
    // Control the orientation of the connection so the video's orientation matches the phone's
    if (connection?.isVideoOrientationSupported)! {
        connection?.videoOrientation = currentVideoOrientation()
    }
    // Enable automatic video stabilization; the phone picks a suitable format and frame rate
    if (connection?.isVideoStabilizationSupported)! {
        connection?.preferredVideoStabilizationMode = AVCaptureVideoStabilizationMode.auto
    }
    let device = activeInput.device
    // The camera needs to refocus responsively while recording
    if device.isSmoothAutoFocusSupported {
        do {
            try device.lockForConfiguration()
            // If set to true, lens movements are slower and smoother
            device.isSmoothAutoFocusEnabled = false
            device.unlockForConfiguration()
        } catch {
            print("Error setting configuration: \(String(describing: error.localizedDescription))")
        }
    }
    let output = URL.tempURL
    movieOutput.startRecording(to: output!, recordingDelegate: self)
}
```
Unlike taking still photos, recording video works through a connection: movieOutput.connection(with: .video).
When the recording finishes, the delegate method of AVCaptureFileOutputRecordingDelegate is called; there you save the video file and update the UI.
The system calls the delegate back with the outputFileURL parameter: the video file has already been written to that resource locator inside the app sandbox. All you have to do is copy the video file from the sandbox to the system photo album (a sketch of that step follows the delegate method below).
```swift
func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
    if let error = error {
        print("Error, recording movie: \(String(describing: error.localizedDescription))")
    } else {
        // Save to the photo album; see the GitHub repo for saveMovieToLibrary
        saveMovieToLibrary(movieURL: outputFileURL)
        // Update the UI
        captureButton.setImage(UIImage(named: "Capture_Butt"), for: .normal)
        // Stop the recording timer
        stopTimer()
    }
}
```
While shooting a video, it is nice to show the user how the recording is going. The elapsed time is tracked with a timer and displayed in a label; the label update is sketched after the startTimer code below.
```swift
func startTimer() {
    // Destroy any existing timer first
    if updateTimer != nil {
        updateTimer.invalidate()
    }
    updateTimer = Timer(timeInterval: 0.5,
                        target: self,
                        selector: #selector(self.updateTimeDisplay),
                        userInfo: nil,
                        repeats: true)
    RunLoop.main.add(updateTimer, forMode: .commonModes)
}
```
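The updateTimeDisplay selector referenced by the timer is not shown in the snippet above; here is a minimal sketch. It assumes a timeLabel UILabel and reads the elapsed time from movieOutput.recordedDuration.

```swift
@objc func updateTimeDisplay() {
    // recordedDuration is invalid before recording starts, so guard against NaN
    let elapsed = CMTimeGetSeconds(movieOutput.recordedDuration)
    guard elapsed.isFinite else { return }
    let seconds = Int(elapsed)
    timeLabel.text = String(format: "%02d:%02d", seconds / 60, seconds % 60)
}
```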
When the environment is dark you may want to turn on the light, which means adjusting properties on the capture classes.
For still photos you use the flash: set flashMode on an AVCapturePhotoSettings object. A new AVCapturePhotoSettings must be created for every capture, because each settings object is single-use (a sketch of a capture call using these settings follows the flash and torch code below).
For video you use the torch: torchMode is set directly as a property of AVCaptureDevice.
Apple's design ties the light mode to the output type: the photo flash fires for an instant at the moment of capture, while the video torch stays lit continuously.
```swift
// MARK: Flash Modes (Still Photo)
func setFlashMode(isCancelled: Bool = false) {
    let device = activeInput.device
    // The flash is only available on the rear camera.
    // The front camera brightens the screen instead.
    if device.isFlashAvailable {
        // Cycle the flash through its three states: off, on, auto
        var currentMode = currentFlashOrTorchMode().mode
        currentMode += 1
        if currentMode > 2 || isCancelled == true {
            currentMode = 0
        }
        let new_mode = AVCaptureDevice.FlashMode(rawValue: currentMode)
        self.outputSetting.flashMode = new_mode!
        flashLabel.text = currentFlashOrTorchMode().name
    }
}

// MARK: Torch Modes (Video)
func setTorchMode(isCancelled: Bool = false) {
    let device = activeInput.device
    if device.hasTorch {
        // Cycle the torch through its three states: off, on, auto
        var currentMode = currentFlashOrTorchMode().mode
        currentMode += 1
        if currentMode > 2 || isCancelled == true {
            currentMode = 0
        }
        let new_mode = AVCaptureDevice.TorchMode(rawValue: currentMode)
        if device.isTorchModeSupported(new_mode!) {
            do {
                // As before, the device must be locked for configuration
                try device.lockForConfiguration()
                device.torchMode = new_mode!
                device.unlockForConfiguration()
                flashLabel.text = currentFlashOrTorchMode().name
            } catch {
                print("Error setting flash mode: \(String(describing: error.localizedDescription))")
            }
        }
    }
}
```
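To show how the chosen flash mode might be applied, here is a minimal sketch of a still-photo capture. A fresh AVCapturePhotoSettings instance is created per capture because settings objects are not meant to be reused; photoOutput and the capturePhoto() helper name are assumptions, and the flash mode is copied from the outputSetting used above.

```swift
func capturePhoto() {
    // A new settings object for every capture; settings objects are single-use
    let settings = AVCapturePhotoSettings()
    settings.flashMode = outputSetting.flashMode
    photoOutput.capturePhoto(with: settings, delegate: self)
}
```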
Video composition: combining multiple audio and video clips into a single video file, and adding background music to the video.
When compositing video, the object you operate on is the media resource, AVAsset.
AVAsset has a subclass, AVComposition, and composition is normally done with its mutable subclass, AVMutableComposition.
AVComposition lets you combine media from multiple source files, arrange them in time, and produce the video you want. Concretely this is done through a set of audio and video tracks, AVMutableCompositionTrack.
An AVCompositionTrack holds a set of track segments. Its mutable subclass, AVMutableCompositionTrack, lets you add and remove segments and scale the time range of a segment.
You take the tracks of each AVAsset video resource and insert them into an AVMutableCompositionTrack as segments, as sketched below.
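A minimal sketch of those two operations (inserting a segment and scaling its time range), assuming an AVMutableComposition named composition and a source asset already exist:

```swift
let videoTrack = composition.addMutableTrack(withMediaType: .video,
                                             preferredTrackID: kCMPersistentTrackID_Invalid)
let sourceTrack = asset.tracks(withMediaType: .video)[0]
let fullRange = CMTimeRangeMake(kCMTimeZero, asset.duration)
do {
    // Insert the asset's video track as a segment of the composition track
    try videoTrack?.insertTimeRange(fullRange, of: sourceTrack, at: kCMTimeZero)
} catch {
    print("Error inserting time range: \(error)")
}
// Scale the inserted range to half its duration, so the clip plays at 2x speed
let halfDuration = CMTimeMake(asset.duration.value / 2, asset.duration.timescale)
videoTrack?.scaleTimeRange(fullRange, toDuration: halfDuration)
```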
You preview the composited AVAsset with an instance of AVPlayer, and export the composited file with an AVAssetExportSession.
Preview the composited video
The approach is to take the URL of each resource and create an AVAsset from it. An AVAsset contains both the audio information (background sound, speech, noise) and the video information.
Using AVMutableComposition, the mutable subclass of AVComposition, add an audio track with composition.addMutableTrack(withMediaType: .audio, ...) and a video track with composition.addMutableTrack(withMediaType: .video, ...).
```swift
var previewURL: URL?

@IBAction func previewComposition(_ sender: UIButton) {
    // Compositing needs recorded clips, and no compositing task may currently be running
    guard videoURLs.count > 0,
          activityIndicator.isAnimating == false else {
        return
    }
    var player: AVPlayer!
    defer {
        let playerViewController = AVPlayerViewController()
        playerViewController.allowsPictureInPicturePlayback = true
        playerViewController.player = player
        present(playerViewController, animated: true) {
            playerViewController.player!.play()
        }
    }
    // If a composited file already exists, just play it
    guard previewURL == nil else {
        player = AVPlayer(url: previewURL!)
        return
    }
    // No composited file has been written yet, so build a composition from the recorded clips
    var videoAssets = [AVAsset]()
    for urlOne in videoURLs {
        let av_asset = AVAsset(url: urlOne)
        videoAssets.append(av_asset)
    }
    // Use AVMutableComposition, the mutable subclass of AVComposition, to edit the tracks
    let composition = AVMutableComposition()
    // Create two tracks: one for video and one for audio
    let videoTrack = composition.addMutableTrack(withMediaType: .video,
                                                 preferredTrackID: kCMPersistentTrackID_Invalid)
    let audioTrack = composition.addMutableTrack(withMediaType: .audio,
                                                 preferredTrackID: kCMPersistentTrackID_Invalid)
    var startTime = kCMTimeZero
    for asset in videoAssets {
        do {
            // Insert the asset's video track into the composition's video track
            try videoTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, asset.duration),
                                            of: asset.tracks(withMediaType: .video)[0],
                                            at: startTime)
        } catch {
            print("Inserting composited video track, video error")
        }
        do {
            // Insert the asset's audio track into the composition's audio track
            try audioTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, asset.duration),
                                            of: asset.tracks(withMediaType: .audio)[0],
                                            at: startTime)
        } catch {
            print("Inserting composited video track, audio error")
        }
        // Each clip starts where the previous one ends
        startTime = CMTimeAdd(startTime, asset.duration)
    }
    let playItem = AVPlayerItem(asset: composition)
    player = AVPlayer(playerItem: playItem)
}
```
For finer control over the composited video, use AVMutableVideoCompositionLayerInstruction.
This class adjusts the transform of a composition track (translation and scaling), as well as properties such as cropping and opacity.
Setting up an AVMutableVideoCompositionLayerInstruction generally involves two parameters:
the composition track, from which the instruction is created, let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track),
and the source AVAsset, whose information is used to configure it.
A phone screen is typically 375 × 667 points, which is small compared with the video: the video frames are 1280.0 × 720.0, far larger than the phone screen, so the video has to be scaled down.
```swift
func videoCompositionInstructionForTrack(track: AVCompositionTrack, asset: AVAsset) -> AVMutableVideoCompositionLayerInstruction {
    let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
    let assetTrack = asset.tracks(withMediaType: .video)[0]
    // Use the preferredTransform of the source track to tell portrait from landscape
    let transform = assetTrack.preferredTransform
    // orientationFromTransform() — see the GitHub repo
    let assetInfo = transform.orientationFromTransform()
    // Scale the video so it fits the HD render size
    var scaleToFitRatio = HDVideoSize.width / assetTrack.naturalSize.width
    if assetInfo.isPortrait {
        // Portrait
        scaleToFitRatio = HDVideoSize.height / assetTrack.naturalSize.width
        let scaleFactor = CGAffineTransform(scaleX: scaleToFitRatio, y: scaleToFitRatio)
        let concatTranform = assetTrack.preferredTransform.concatenating(scaleFactor)
        instruction.setTransform(concatTranform, at: kCMTimeZero)
    } else {
        // Landscape
        let scale_factor = CGAffineTransform(scaleX: scaleToFitRatio, y: scaleToFitRatio)
        let scale_factor_two = CGAffineTransform(rotationAngle: .pi / 2.0)
        let concat_transform = assetTrack.preferredTransform.concatenating(scale_factor).concatenating(scale_factor_two)
        instruction.setTransform(concat_transform, at: kCMTimeZero)
    }
    // Return the configured layer instruction
    return instruction
}
```
Video compositing and exporting to the album is a time-consuming operation.
To export, take the AVMutableComposition, create an AVAssetExportSession from it, and start the export with the session's exportAsynchronously method. The exported file ends up at session.outputURL, from which it is written to the album (the completion handler is sketched after the code below).
```swift
// Video compositing and export to the album. This is a time-consuming operation.
private func mergeAndExportVideo() {
    // Show the spinner to give the user feedback
    activityIndicator.isHidden = false
    activityIndicator.startAnimating()
    // Reset the preview URL; it is set again once compositing and export succeed
    previewURL = nil
    var videoAssets = [AVAsset]()
    for url_piece in videoURLs {
        let av_asset = AVAsset(url: url_piece)
        videoAssets.append(av_asset)
    }
    // Create the AVMutableComposition object for compositing
    let composition = AVMutableComposition()
    // Create one audio track for the whole composition
    let audioTrack = composition.addMutableTrack(withMediaType: .audio,
                                                 preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
    // An AVMutableVideoCompositionInstruction adjusts the composited tracks (position, cropping, opacity)
    // by controlling a group of AVMutableVideoCompositionLayerInstruction layer objects
    let mainInstruction = AVMutableVideoCompositionInstruction()
    var startTime = kCMTimeZero
    // Walk every video resource and add its audio and video tracks to the AVMutableComposition
    for asset in videoAssets {
        // A layer instruction applies to a whole track,
        // so each resource gets its own video track
        let videoTrack = composition.addMutableTrack(withMediaType: .video,
                                                     preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
        do {
            try videoTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, asset.duration),
                                            of: asset.tracks(withMediaType: .video)[0],
                                            at: startTime)
        } catch {
            print("Error creating Video track.")
        }
        // If there is background music, do not add the video's own sound
        if musicAsset == nil {
            do {
                try audioTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, asset.duration),
                                                of: asset.tracks(withMediaType: .audio)[0],
                                                at: startTime)
            } catch {
                print("Error creating Audio track.")
            }
        }
        // For the added resource, create its AVMutableVideoCompositionLayerInstruction
        let instruction = videoCompositionInstructionForTrack(track: videoTrack!, asset: asset)
        instruction.setOpacity(1.0, at: startTime)
        if asset != videoAssets.last {
            // Fade the clip out at its end, a transition that keeps clips from interfering with each other
            instruction.setOpacity(0.0, at: CMTimeAdd(startTime, asset.duration))
        }
        mainInstruction.layerInstructions.append(instruction)
        // The next clip starts where this one ends
        startTime = CMTimeAdd(startTime, asset.duration)
    }
    let totalDuration = startTime
    // If there is background music, lay it along the whole timeline
    if musicAsset != nil {
        do {
            try audioTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, totalDuration),
                                            of: musicAsset!.tracks(withMediaType: .audio)[0],
                                            at: kCMTimeZero)
        } catch {
            print("Error creating soundtrack total.")
        }
    }
    // Set the time range of the main instruction
    mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, totalDuration)
    // AVMutableVideoComposition describes how the video tracks are composited along the timeline:
    // instructions, frame duration (render interval) and render size
    let videoComposition = AVMutableVideoComposition()
    videoComposition.instructions = [mainInstruction]
    videoComposition.frameDuration = CMTimeMake(1, 30)
    videoComposition.renderSize = HDVideoSize
    videoComposition.renderScale = 1.0
    // Create the export session from the composition
    let exporter: AVAssetExportSession = AVAssetExportSession(asset: composition,
                                                              presetName: AVAssetExportPresetHighestQuality)!
    exporter.outputURL = uniqueURL
    // Output format: QuickTime movie file
    exporter.outputFileType = .mov
    // Optimize for network playback
    exporter.shouldOptimizeForNetworkUse = true
    exporter.videoComposition = videoComposition
    // Start the export session
    exporter.exportAsynchronously {
        DispatchQueue.main.async {
            self.exportDidFinish_deng(session: exporter)
        }
    }
}
```
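The completion handler exportDidFinish_deng referenced above is not shown in the snippet; here is a minimal sketch. It assumes the activityIndicator, previewURL and saveMovieToLibrary members from earlier in the article, checks the session status, and saves the file at session.outputURL to the album.

```swift
func exportDidFinish_deng(session: AVAssetExportSession) {
    // Stop the spinner now that the export has finished
    activityIndicator.stopAnimating()
    activityIndicator.isHidden = true
    if session.status == .completed, let outputURL = session.outputURL {
        // Keep the URL for previewing, and copy the exported file to the photo album
        previewURL = outputURL
        saveMovieToLibrary(movieURL: outputURL)
    } else {
        print("Export failed: \(String(describing: session.error?.localizedDescription))")
    }
}
```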
See the full code:Github.com/BoxDengJZ/A…
More:
Finally, there is also adding graphic overlays and animation to the video.
Related: Photo focusing and Exposure, AVFoundation concise tutorial
Recommended Resources:
WWDC 2016: Advances in iOS Photography
AVFoundation Programming Guide for Apple
Video tutorial