Preface

Recently I built a small video-editing project. I ran into a few pitfalls along the way, but managed to implement the features without too much trouble. Apple does provide UIVideoEditController for video processing, but it is hard to extend or customize, so instead we will use Apple's AVFoundation framework to build customized video processing. I also found that there was little systematic material on this topic online, so I wrote this article hoping to help other beginners (like me) who are also working on video processing. Original article | address

Project demo

The project lets you undo, split, and delete clips on the video track, and drag a video block to extend or roll back the video.

Feature implementation

1. Select the video and play it

There is nothing special to explain here: select a video via UIImagePickerController and jump to the custom edit controller:

    // Select a video (kUTTypeMovie requires `import MobileCoreServices`)
    @objc func selectVideo() {
        if UIImagePickerController.isSourceTypeAvailable(.photoLibrary) {
            // Initialize the picker controller
            let imagePicker = UIImagePickerController()
            // Set the delegate
            imagePicker.delegate = self
            // Specify the source type
            imagePicker.sourceType = .photoLibrary
            // Only allow movies to be picked
            imagePicker.mediaTypes = [kUTTypeMovie as String]
            // Present the picker
            self.present(imagePicker, animated: true, completion: nil)
        } else {
            print("Error reading album")
        }
    }

    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
        // The video URL (the selected video is automatically copied to the app's temporary folder)
        guard let videoURL = info[UIImagePickerController.InfoKey.mediaURL] as? URL else { return }
        let pathString = videoURL.relativePath
        print("Video address: \(pathString)")
        // Dismiss the picker, then present the custom editor controller
        picker.dismiss(animated: true, completion: {
            let editorVC = EditorVideoViewController.init(with: videoURL)
            editorVC.modalPresentationStyle = UIModalPresentationStyle.fullScreen
            self.present(editorVC, animated: true) { }
        })
    }

2. Get thumbnails frame by frame to initialize the video track

CMTime

CMTime can describe time much more precisely. Suppose we want to express a moment in a video, for example 1:01. Most of the time NSTimeInterval t = 61.0 is fine, but a big problem with floating-point numbers is accumulated error: adding up 0.0000001 ten million times does not give exactly 1.0, the result may become something like 1.0000000000079181. During video stream processing a huge number of additions and subtractions are performed and these errors accumulate, so we need another way to express time, which is CMTime.

CMTime is a C struct with four members:

    typedef struct {
        CMTimeValue value;      // the current value
        CMTimeScale timescale;  // the reference scale for value (e.g. 1000)
        CMTimeFlags flags;
        CMTimeEpoch epoch;
    } CMTime;

For example, with timescale = 1000, one second is value = 1000 * 1 = 1000, and value = 100 represents 0.1 second (100 milliseconds).

CMTimeScale timescale: the reference scale for the current CMTimeValue, i.e. how many parts one second is divided into. It is especially important because it controls the precision of the whole CMTime. For example, when the timescale is 1, a CMTime cannot represent fractions of a second or increments smaller than one second. Similarly, when the timescale is 1000, each second is divided into 1000 parts and the CMTime's value represents the number of milliseconds.
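A minimal sketch of how this looks in code (the values are just for illustration):

    import CoreMedia

    // 61 seconds (1:01) expressed with a timescale of 1000, i.e. millisecond precision
    let oneMinuteOne = CMTimeMake(value: 61_000, timescale: 1000)
    let seconds = CMTimeGetSeconds(oneMinuteOne)   // 61.0 as Float64

    // With timescale = 1, CMTime cannot represent fractions of a second
    let coarse = CMTimeMake(value: 61, timescale: 1)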

Implementation method

The method to call is generateCGImagesAsynchronously(forTimes requestedTimes: [NSValue], completionHandler handler: @escaping AVAssetImageGeneratorCompletionHandler):

 /**
    	@method			generateCGImagesAsynchronouslyForTimes:completionHandler:
    	@abstract		Returns a series of CGImageRefs for an asset at or near the specified times.
    	@param			requestedTimes
    					An NSArray of NSValues, each containing a CMTime, specifying the asset times at which an image is requested.
    	@param			handler
    					A block that will be called when an image request is complete.
    	@discussion		Employs an efficient "batch mode" for getting images in time order.
    					The client will receive exactly one handler callback for each requested time in requestedTimes.
    					Changes to generator properties (snap behavior, maximum size, etc...) will not affect outstanding asynchronous image generation requests.
    					The generated image is not retained.  Clients should retain the image if they wish it to persist after the completion handler returns.
    */
    open func generateCGImagesAsynchronously(forTimes requestedTimes: [NSValue], completionHandler handler: @escaping AVAssetImageGeneratorCompletionHandler)

Looking at the official comments, you can see that two parameters are needed: requestedTimes: [NSValue]: an array of requested times (each NSValue wraps a CMTime) specifying the asset times at which images are requested. completionHandler handler: @escaping AVAssetImageGeneratorCompletionHandler: a block called when each image request completes; because the method is asynchronous, you must return to the main thread before updating the UI.

Example:

    func splitVideoFileUrlFps(splitFileUrl: URL, fps: Float, splitCompleteClosure: @escaping (Bool, [UIImage]) -> Void) {
        var splitImages = [UIImage]()

        // Create the asset
        let optDict = NSDictionary(object: NSNumber(value: false), forKey: AVURLAssetPreferPreciseDurationAndTimingKey as NSCopying)
        let urlAsset = AVURLAsset(url: splitFileUrl, options: optDict as? [String : Any])

        let cmTime = urlAsset.duration
        let durationSeconds: Float64 = CMTimeGetSeconds(cmTime)

        // Build the array of requested times, one per frame
        var times = [NSValue]()
        let totalFrames: Float64 = durationSeconds * Float64(fps)
        var timeFrame: CMTime
        for i in 0..<Int(totalFrames) {
            timeFrame = CMTimeMake(value: Int64(i), timescale: Int32(fps))
            let timeValue = NSValue(time: timeFrame)
            times.append(timeValue)
        }

        let imageGenerator = AVAssetImageGenerator(asset: urlAsset)
        imageGenerator.requestedTimeToleranceBefore = CMTime.zero
        imageGenerator.requestedTimeToleranceAfter = CMTime.zero

        let timesCount = times.count
        // Call the method that generates the thumbnails
        imageGenerator.generateCGImagesAsynchronously(forTimes: times) { (requestedTime, image, actualTime, result, error) in
            var isSuccess = false
            switch result {
            case AVAssetImageGenerator.Result.cancelled:
                print("cancelled------")
            case AVAssetImageGenerator.Result.failed:
                print("failed++++++")
            case AVAssetImageGenerator.Result.succeeded:
                let framImg = UIImage(cgImage: image!)
                splitImages.append(self.flipImage(image: framImg, orientaion: 1))
                if Int(requestedTime.value) == (timesCount - 1) { // last frame
                    isSuccess = true
                    splitCompleteClosure(isSuccess, splitImages)
                    print("completed")
                }
            @unknown default:
                break
            }
        }
    }

    // Call site: update the UI in the callback
    self.splitVideoFileUrlFps(splitFileUrl: url, fps: 1) { [weak self] (isSuccess, splitImgs) in
        if isSuccess {
            print("imgcount: \(String(describing: self?.imageArr.count))")
        }
    }

3. Jump to a specified time in the video

 /**
     @method			seekToTime:toleranceBefore:toleranceAfter:
     @abstract			Moves the playback cursor within a specified time bound.
     @param				time
     @param				toleranceBefore
     @param				toleranceAfter
     @discussion		Use this method to seek to a specified time for the current player item.
    					The time seeked to will be within the range [time-toleranceBefore, time+toleranceAfter] and may differ from the specified time for efficiency.
    					Pass kCMTimeZero for both toleranceBefore and toleranceAfter to request sample accurate seeking which may incur additional decoding delay. 
    					Messaging this method with beforeTolerance:kCMTimePositiveInfinity and afterTolerance:kCMTimePositiveInfinity is the same as messaging seekToTime: directly.
     */
    open func seek(to time: CMTime, toleranceBefore: CMTime, toleranceAfter: CMTime)

The first parameter, time: CMTime, is simply the time you want to jump to. The last two parameters, toleranceBefore: CMTime and toleranceAfter: CMTime, can be understood as the allowed error: according to the official comment, the seek will land somewhere within the range [time - toleranceBefore, time + toleranceAfter]. Of course, if you pass kCMTimeZero for both (in the current SDK version this has become CMTime.zero), you get sample-accurate seeking, but it incurs additional decoding time.

Example:

    let totalTime = CMTimeGetSeconds(self.avPlayer.currentItem?.duration ?? CMTime.zero)
    let scale = self.avPlayer.currentItem?.duration.timescale
    // width: the distance moved on the video track; videoWidth: the total length of the video track
    let process = Float64(width / videoWidth)
    // Fast-forward: seek to the corresponding time
    self.avPlayer.seek(to: CMTimeMake(value: Int64(totalTime * process * Float64(scale!)), timescale: scale!),
                       toleranceBefore: CMTime.zero,
                       toleranceAfter: CMTime.zero)

4. Player monitoring

By observing the player we can move the video track along with playback, keeping the video player and the video track in sync.

    /**
    	@method			addPeriodicTimeObserverForInterval:queue:usingBlock:
    	@abstract		Requests invocation of a block during playback to report changing time.
    	@param			interval
    					The interval of invocation of the block during normal playback, according to progress of the current time of the player.
    	@param			queue
    					The serial queue onto which block should be enqueued. If you pass NULL, the main queue (obtained using dispatch_get_main_queue()) will be used. Passing a concurrent queue to this method will result in undefined behavior.
    	@param			block
    					The block to be invoked periodically.
    	@result			An object conforming to the NSObject protocol. You must retain this returned value as long as you want the time observer to be invoked by the player. Pass this object to -removeTimeObserver: to cancel time observation.
    	@discussion		The block is invoked periodically at the interval specified, interpreted according to the timeline of the current item.
    					The block is also invoked whenever time jumps and whenever playback starts or stops.
    					If the interval corresponds to a very short interval in real time, the player may invoke the block less frequently than requested. Even so, the player will invoke the block sufficiently often for the client to update indications of the current time appropriately in its end-user interface.
    					Each call to -addPeriodicTimeObserverForInterval:queue:usingBlock: should be paired with a corresponding call to -removeTimeObserver:. Releasing the observer object without a call to -removeTimeObserver: will result in undefined behavior.
    */
    open func addPeriodicTimeObserver(forInterval interval: CMTime, queue: DispatchQueue?, using block: @escaping (CMTime) -> Void) -> Any

The most important parameter is interval: CMTime, which determines how often the callback fires; if you update the video track's frame inside this callback, it also determines how smoothly the track moves.

Example:

    // Observe the player
    self.avPlayer.addPeriodicTimeObserver(forInterval: CMTimeMake(value: 1, timescale: 120),
                                          queue: DispatchQueue.main) { [weak self] (time) in
        // Update the video track position here
    }
Problems with the fast-forward method

This observer, together with the fast-forward (seek) method from point 3, creates a problem: dragging the video track to fast-forward also triggers the callback, producing an endless loop of dragging the track (change frame) -> seek -> callback fires -> change frame. You therefore need to add a check so that the callback is ignored while dragging, as sketched below.
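A minimal sketch of one possible guard, assuming a hypothetical `isDraggingTrack` flag set by the track's pan-gesture handler (the names are illustrative, not the project's exact code):

    // Hypothetical property on the editor controller, toggled by the drag gesture handler
    var isDraggingTrack = false

    // When adding the periodic observer, ignore callbacks while the user is dragging
    self.avPlayer.addPeriodicTimeObserver(forInterval: CMTimeMake(value: 1, timescale: 120),
                                          queue: DispatchQueue.main) { [weak self] time in
        guard let self = self, !self.isDraggingTrack else { return } // break the drag -> seek -> callback -> drag loop
        // It is now safe to move the video track to match `time`
    }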

Linkage problems between the fast-forward method and the player

Video playback is asynchronous, and the seek method needs time to decode the video, so there is a time lag while the two stay in sync. When you think the seek has finished and want to move the video track, the decoding delay means the callback may still deliver a few stale times, making the track wobble back and forth. The current project therefore checks in the callback whether the frame being applied is reasonable (not too large, not too small); a sketch of that check follows.
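A minimal sketch of this kind of sanity check, with hypothetical names (`seekTargetSeconds` is the time just seeked to and `tolerance` an acceptable deviation); this is illustrative, not the project's exact code:

    // Inside the periodic time observer callback (names are illustrative)
    let currentSeconds = CMTimeGetSeconds(time)
    let tolerance: Float64 = 0.5   // treat callbacks further than this from the seek target as stale

    // Only move the video track when the reported time is close to the time we just seeked to
    if abs(currentSeconds - seekTargetSeconds) <= tolerance {
        // Update the video track frame here
    }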

PS: if there is a better solution to these two problems, feel free to discuss!

5. Export video

    /**
    	@method			insertTimeRange:ofTrack:atTime:error:
    	@abstract		Inserts a timeRange of a source track into a track of a composition.
    	@param			timeRange
    					Specifies the timeRange of the track to be inserted.
    	@param			track
    					Specifies the source track to be inserted. Only AVAssetTracks of AVURLAssets and AVCompositions are supported (AVCompositions starting in MacOS X 10.10 and iOS 8.0).
    	@param			startTime
    					Specifies the time at which the inserted track is to be presented by the composition track. You may pass kCMTimeInvalid for startTime to indicate that the timeRange should be appended to the end of the track.
    	@param			error
    					Describes failures that may be reported to the user, e.g. the asset that was selected for insertion in the composition is restricted by copy-protection.
    	@result			A BOOL value indicating the success of the insertion.
    	@discussion		You provide a reference to an AVAssetTrack and the timeRange within it that you want to insert. You specify the start time in the target composition track at which the timeRange should be inserted. Note that the inserted track timeRange will be presented at its natural duration and rate. It can be scaled to a different duration (and presented at a different rate) via -scaleTimeRange:toDuration:.
    */
    open func insertTimeRange(_ timeRange: CMTimeRange, of track: AVAssetTrack, at startTime: CMTime) throws

Three parameters are passed in: timeRange: CMTimeRange: the time range of the source track to insert. track: AVAssetTrack: the source track to insert; only AVAssetTracks of AVURLAssets and AVCompositions are supported (AVCompositions starting in Mac OS X 10.10 and iOS 8.0). startTime: CMTime: the time in the composition track at which the inserted track should be presented; you can pass kCMTimeInvalid to append the time range to the end of the previous track.

Example:

    let composition = AVMutableComposition()
    // Add video and audio tracks to the composition
    let videoTrack = composition.addMutableTrack(withMediaType: AVMediaType.video,
                                                 preferredTrackID: CMPersistentTrackID())
    let audioTrack = composition.addMutableTrack(withMediaType: AVMediaType.audio,
                                                 preferredTrackID: CMPersistentTrackID())

    let asset = AVAsset.init(url: self.url)
    var insertTime: CMTime = CMTime.zero
    let timeScale = self.avPlayer.currentItem?.duration.timescale

    // Loop over the information of each clip
    for clipsInfo in self.clipsInfoArr {
        // Duration of this clip
        let clipsDuration = Double(Float(clipsInfo.width) / self.videoWidth) * self.totalTime
        // Start time of this clip within the source video
        let startDuration = -Float(clipsInfo.offset) / self.perSecondLength

        do {
            try videoTrack?.insertTimeRange(
                CMTimeRangeMake(start: CMTimeMake(value: Int64(startDuration * Float(timeScale!)), timescale: timeScale!),
                                duration: CMTimeMake(value: Int64(clipsDuration * Double(timeScale!)), timescale: timeScale!)),
                of: asset.tracks(withMediaType: AVMediaType.video)[0],
                at: insertTime)
        } catch _ {}

        do {
            try audioTrack?.insertTimeRange(
                CMTimeRangeMake(start: CMTimeMake(value: Int64(startDuration * Float(timeScale!)), timescale: timeScale!),
                                duration: CMTimeMake(value: Int64(clipsDuration * Double(timeScale!)), timescale: timeScale!)),
                of: asset.tracks(withMediaType: AVMediaType.audio)[0],
                at: insertTime)
        } catch _ {}

        insertTime = CMTimeAdd(insertTime, CMTimeMake(value: Int64(clipsDuration * Double(timeScale!)), timescale: timeScale!))
    }

    videoTrack?.preferredTransform = CGAffineTransform(rotationAngle: CGFloat.pi / 2)

    // Path of the merged video
    let documentsPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
    let destinationPath = documentsPath + "/mergeVideo-\(arc4random() % 1000).mov"
    print("Merged video path: \(destinationPath)")
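The snippet above only builds the composition and the output path. A minimal sketch of the actual export step with AVAssetExportSession might look like this (assumed to run inside the same export method, continuing from `composition` and `destinationPath` above):

    // Export the composition to the destination path
    guard let exportSession = AVAssetExportSession(asset: composition,
                                                   presetName: AVAssetExportPresetHighestQuality) else { return }
    exportSession.outputURL = URL(fileURLWithPath: destinationPath)
    exportSession.outputFileType = AVFileType.mov
    exportSession.exportAsynchronously {
        DispatchQueue.main.async {
            switch exportSession.status {
            case .completed:
                print("Export finished: \(destinationPath)")
            case .failed, .cancelled:
                print("Export failed: \(String(describing: exportSession.error))")
            default:
                break
            }
        }
    }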

Conclusion: with these APIs plus some interaction logic, the complete clipping functionality can be implemented. If anything in this article is lacking, you are welcome to point it out!
