1. Video playback overview
AVFoundation provides three main classes for playback: AVPlayer, AVPlayerLayer, and AVPlayerItem.
- AVPlayer
AVPlayer is a controller object for playing time-based audiovisual media. It can play local files, progressively downloaded media, and HTTP Live Streaming content.
HTTP Live Streaming (HLS) is an HTTP-based network transport protocol proposed by Apple, and is part of Apple's QuickTime X and iPhone software systems. It works by splitting the whole stream into a series of small HTTP-based file downloads, fetched a few at a time. While the media stream is playing, the client can switch between alternate versions of the same resource encoded at different rates, allowing the streaming session to adapt to changing data rates. When starting a streaming session, the client downloads an extended M3U (M3U8) playlist file containing the metadata needed to locate the available media streams, such as the sketch below.
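For illustration only, a variant (master) playlist in this format might look like the following; the URIs, bandwidths, and resolutions here are placeholder assumptions:

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=1280000,RESOLUTION=640x360
low/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2560000,RESOLUTION=1280x720
high/index.m3u8

Each #EXT-X-STREAM-INF entry describes one alternate rendition of the stream, which is what lets the client pick a version matching its current data rate.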
Because HLS issues only standard HTTP requests, it can, unlike RTP, pass through any firewall or proxy server that allows HTTP traffic. It is also easy to deliver HLS media streams through content delivery networks.
Apple documented the HLS protocol as an Internet Draft (submitted progressively) and, in the first phase, submitted it to the IETF as an informational standard. However, although Apple has occasionally submitted minor updates, the IETF has taken no further action to develop the standard.
AVPlayer manages the playback of only a single resource; its subclass, AVQueuePlayer, manages a queue of resources, as sketched below.
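A minimal sketch of queued playback; the two URLs are placeholder assumptions:

// AVQueuePlayer plays its items in order
AVPlayerItem *first = [AVPlayerItem playerItemWithURL:[NSURL URLWithString:@"https://example.com/a.mp4"]];
AVPlayerItem *second = [AVPlayerItem playerItemWithURL:[NSURL URLWithString:@"https://example.com/b.mp4"]];
AVQueuePlayer *queuePlayer = [AVQueuePlayer queuePlayerWithItems:@[first, second]];
[queuePlayer play];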
- AVPlayerLayer
AVPlayerLayer is built on top of Core Animation and extends Core Animation's CALayer class. It provides no visual controls, only a surface for rendering content, and it exposes a videoGravity property that can take one of three values: AVLayerVideoGravityResizeAspect (fit within the layer while preserving the aspect ratio), AVLayerVideoGravityResizeAspectFill (fill the layer while preserving the aspect ratio, cropping as needed), and AVLayerVideoGravityResize (stretch to fill the layer, ignoring the aspect ratio).
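A one-line sketch, assuming an avPlayerLayer like the one created in section 2 below:

// Fit the video inside the layer's bounds, letterboxing if necessary
self.avPlayerLayer.videoGravity = AVLayerVideoGravityResizeAspect;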
- AVPlayerItem
AVAsset contains only static information about a media resource. AVPlayerItem models the dynamic state of that resource during playback and holds the playback state used by AVPlayer.
2. Playing the video
To play an AVAsset, start with the following initialization:
self.avPlayerItem = [AVPlayerItem playerItemWithAsset:self.targetAVAsset];
self.avPlayer = [AVPlayer playerWithPlayerItem:self.avPlayerItem];
self.avPlayerLayer = [AVPlayerLayer playerLayerWithPlayer:self.avPlayer];
[self.layer addSublayer:self.avPlayerLayer];
Create the AVPlayerItem, AVPlayer, and AVPlayerLayer, and finally add the AVPlayerLayer to the view that will display the content. However, the video cannot be played immediately: AVPlayerItem has a status property that indicates whether the current item is ready to play, and this property needs to be observed.
@weakify(self);
[RACObserve(self.avPlayerItem, status) subscribeNext:^(id x) {
    @strongify(self);
    if (self.avPlayerItem.status == AVPlayerItemStatusReadyToPlay) {
        // The video is ready
        if (self.autoPlayMode) {
            self.playerButton.hidden = YES;
            [self beginPlay];
        } else {
            self.playerButton.enabled = YES;
            self.playerButton.hidden = NO;
        }
    } else if (self.avPlayerItem.status == AVPlayerItemStatusFailed) {
        NSLog(@"failed");
    }
}];
3. Handling time
If a floating-point type were used to represent time, rounding errors would accumulate across the many time calculations performed during playback, producing noticeable drift; media streams could not stay synchronized, the representation would not be self-describing, and comparing or computing times across different timescales would be difficult. AVFoundation therefore uses the CMTime data structure to represent time.
typedef struct
{
    CMTimeValue value;
    CMTimeScale timescale;
    CMTimeFlags flags;
    CMTimeEpoch epoch; /* Usually 0, but it can be used to distinguish unrelated
                          timelines: for example, incrementing the epoch on each pass
                          of a presentation loop distinguishes time N in loop 0 from
                          time N in loop 1. */
} CMTime;
CMTime represents time = value/timescale.
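For example, a sketch using the standard CMTime constructors (a timescale of 600 is a conventional choice for video):

// value / timescale = 3000 / 600 = 5.0 seconds
CMTime fiveSeconds = CMTimeMake(3000, 600);
// Convenience constructor from a seconds value
CMTime alsoFiveSeconds = CMTimeMakeWithSeconds(5.0, 600);
NSLog(@"%f", CMTimeGetSeconds(fiveSeconds)); // prints 5.000000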
4. Practice
4.1 Creating a Video View
UIView hosts a CALayer instance. A UIView subclass can override the class method + (Class)layerClass to return a specific CALayer subclass, and UIKit will then use that class to create the view's backing layer during initialization.
+ (Class)layerClass {
return [AVPlayerLayer class];
}
Then, in a custom initializer, simply pass in an AVPlayer object and set it on the view's backing layer:
- (id)initWithPlayer:(AVPlayer *)player {
self = [super initWithFrame:CGRectZero];
if (self) {
self.backgroundColor = [UIColor blackColor];
[(AVPlayerLayer *) [self layer] setPlayer:player];
}
return self;
}
When creating the AVPlayerItem, you can specify which asset keys should be loaded automatically, as shown below:
NSArray *keys = @[
    @"tracks",
    @"duration",
    @"commonMetadata",
    @"availableMediaCharacteristicsWithMediaSelectionOptions"
];
self.playerItem = [AVPlayerItem playerItemWithAsset:self.asset
automaticallyLoadedAssetKeys:keys];
self.player = [AVPlayer playerWithPlayerItem:self.playerItem];
self.playerView = [[THPlayerView alloc] initWithPlayer:self.player];
This way the tracks, duration, common metadata, and available media selection options are loaded as the AVPlayerItem is prepared.
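Once loading has been requested, each key's state can be checked individually; a sketch, assuming self.asset holds the AVAsset used above:

NSError *error = nil;
AVKeyValueStatus status = [self.asset statusOfValueForKey:@"duration" error:&error];
if (status == AVKeyValueStatusLoaded) {
    // The duration key has finished loading and is safe to read
    NSLog(@"duration: %f", CMTimeGetSeconds(self.asset.duration));
}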
4.2 Observing the Status
After initialization, the AVPlayerItem is not immediately ready; its status must be observed until it reaches AVPlayerItemStatusReadyToPlay:
@weakify(self);
[RACObserve(self.avPlayerItem, status) subscribeNext:^(id x) {
    @strongify(self);
    if (self.avPlayerItem.status == AVPlayerItemStatusReadyToPlay) {
        // The video is ready
        CMTime duration = self.avPlayerItem.duration;
        [self.avPlayer play];
    } else if (self.avPlayerItem.status == AVPlayerItemStatusFailed) {
        // The video cannot play
    }
}];
4.3 Observing Playback Time
To observe playback time, AVPlayer provides two methods:
- Periodic observation
self.intervalObserver = [self.avPlayer addPeriodicTimeObserverForInterval:CMTimeMake(1, 2) queue:dispatch_get_main_queue() usingBlock:^(CMTime time) {
    NSLog(@"%f", CMTimeGetSeconds(time));
}];
This method invokes the block on the specified queue (which must be a serial queue) at the given interval, passing the player's current time as a CMTime.
- Boundary time observation
self.intervalObserver = [self.avPlayer addBoundaryTimeObserverForTimes:@[[NSValue valueWithCMTime:CMTimeMake(1, 2)], [NSValue valueWithCMTime:CMTimeMake(2, 2)]] queue:dispatch_get_main_queue() usingBlock:^{
    NSLog(@"..");
}];
This method takes an array of CMTime values (wrapped in NSValue) and fires the callback block when playback crosses one of those boundary points, but the block does not receive the current CMTime.
Also remember to remove the observer when it is no longer needed:
if (self.intervalObserver) {
    [self.avPlayer removeTimeObserver:self.intervalObserver];
}
4.4 Detecting the End of Playback
AVPlayerItemDidPlayToEndTimeNotification is posted when a video plays to its end; register for this notification to learn that playback has finished:
[[NSNotificationCenter defaultCenter] addObserverForName:AVPlayerItemDidPlayToEndTimeNotification object:self.avPlayerItem queue:[NSOperationQueue mainQueue] usingBlock:^(NSNotification * _Nonnull note) {
NSLog(@"did play to end");
}];
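If playback should loop, one common approach is to seek back to the start inside this handler; a sketch:

// Inside the notification block: rewind, then resume playback
[self.avPlayer seekToTime:kCMTimeZero completionHandler:^(BOOL finished) {
    if (finished) {
        [self.avPlayer play];
    }
}];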
Another option is to observe AVPlayer's rate property: when the rate drops to 0, compare the current time against the total duration:
@weakify(self);
[RACObserve(self.avPlayer, rate) subscribeNext:^(id x) {
    @strongify(self);
    float currentTime = CMTimeGetSeconds(self.avPlayerItem.currentTime);
    float durationTime = CMTimeGetSeconds(self.avPlayerItem.duration);
    if (self.avPlayer.rate == 0 && currentTime >= durationTime) {
        dispatch_async(dispatch_get_main_queue(), ^{
            [self endPlayer];
        });
    }
}];
4.5 Controlling the Playback Progress
A UISlider can be used to control video playback; it exposes three control events to which selectors can be attached:
- UIControlEventTouchDown: the touch begins
- UIControlEventValueChanged: the value changes while sliding
- UIControlEventTouchUpInside: the touch ends
_scrubberSlider = [[UISlider alloc] init];
[_scrubberSlider addTarget:self action:@selector(sliderValueChange) forControlEvents:UIControlEventValueChanged];
[_scrubberSlider addTarget:self action:@selector(sliderStop) forControlEvents:UIControlEventTouchUpInside];
[_scrubberSlider addTarget:self action:@selector(sliderBegin) forControlEvents:UIControlEventTouchDown];
Once the video duration is known, set the slider's value range:
self.scrubberSlider.minimumValue = 0.0;
self.scrubberSlider.maximumValue = CMTimeGetSeconds(self.avPlayerItem.duration);
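To keep the slider in sync during playback, the periodic time observer from section 4.3 can drive the slider's value; a sketch, assuming the properties used above:

@weakify(self);
self.intervalObserver = [self.avPlayer addPeriodicTimeObserverForInterval:CMTimeMake(1, 2) queue:dispatch_get_main_queue() usingBlock:^(CMTime time) {
    @strongify(self);
    // Skip updates while the user is dragging the slider
    if (!self.scrubberSlider.isTracking) {
        self.scrubberSlider.value = CMTimeGetSeconds(time);
    }
}];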
Then implement the three selectors:
- (void)sliderBegin
{
[self pausePlayer];
}
- (void)sliderValueChange
{
[self.avPlayerItem cancelPendingSeeks];
[self.avPlayerItem seekToTime:CMTimeMakeWithSeconds(self.scrubberSlider.value, NSEC_PER_SEC)];
}
- (void)sliderStop
{
[self beginPlay];
}
While the slider is moving, cancelPendingSeeks cancels any earlier, still-pending seek operations, and then a seekToTime: operation is performed using the slider's current value. Finally, when the slide ends, playback resumes.
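If frame-accurate scrubbing is needed, the tolerance-taking variant of seek can be used instead; a sketch:

// Zero tolerance forces a precise (but more expensive) seek
[self.avPlayerItem seekToTime:CMTimeMakeWithSeconds(self.scrubberSlider.value, NSEC_PER_SEC)
              toleranceBefore:kCMTimeZero
               toleranceAfter:kCMTimeZero];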
4.6 Obtaining an Image Sequence
AVAssetImageGenerator can be used to generate a sequence of images from a video at fixed points in time, as described below.
First, initialize an AVAssetImageGenerator object:
self.imageGenerator = [AVAssetImageGenerator assetImageGeneratorWithAsset:targetAVAsset];
self.imageGenerator.maximumSize = CGSizeMake(400.0f, 0.0f);
[self.imageGenerator setRequestedTimeToleranceBefore:kCMTimeZero];
[self.imageGenerator setRequestedTimeToleranceAfter:kCMTimeZero];
The setRequestedTimeToleranceBefore: and setRequestedTimeToleranceAfter: methods control how far a captured frame may deviate from the requested time; the tighter the tolerance, the more accurate the frame and the higher the performance cost.
Next, generate an array of time values:
CMTime duration = self.targetAVAsset.duration;
NSMutableArray *times = [NSMutableArray array];
CMTimeValue increment = duration.value / 20;
CMTimeValue currentValue = 2.0 * duration.timescale;
while (currentValue <= duration.value) {
CMTime time = CMTimeMake(currentValue, duration.timescale);
[times addObject:[NSValue valueWithCMTime:time]];
currentValue += increment;
}
__block NSUInteger imageCount = times.count;
__block NSMutableArray *images = [NSMutableArray array];
Finally, call the generation method:
[self.imageGenerator generateCGImagesAsynchronouslyForTimes:times completionHandler:^(CMTime requestedTime, CGImageRef _Nullable imageref, CMTime actualTime, AVAssetImageGeneratorResult result, NSError * _Nullable error) {
    if (result == AVAssetImageGeneratorSucceeded) {
        UIImage *image = [UIImage imageWithCGImage:imageref];
        [images addObject:image];
    } else {
        NSLog(@"Error: %@", [error localizedDescription]);
    }
    if (--imageCount == 0) {
        // All requested images have been generated
    }
}];
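For a single thumbnail rather than a sequence, the synchronous variant can be used; a sketch that grabs a frame from the midpoint of the asset:

NSError *error = nil;
CMTime midpoint = CMTimeMultiplyByFloat64(self.targetAVAsset.duration, 0.5);
CGImageRef imageRef = [self.imageGenerator copyCGImageAtTime:midpoint actualTime:NULL error:&error];
if (imageRef) {
    UIImage *thumbnail = [UIImage imageWithCGImage:imageRef];
    // The copied CGImage must be released by the caller
    CGImageRelease(imageRef);
}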
4.7 Displaying Subtitles
AVMediaSelectionOption represents an alternate media rendering within an AVAsset, such as an alternate audio, video, or text track; these may be language-specific audio tracks, alternate camera angles, or subtitles.
First, use AVAsset's availableMediaCharacteristicsWithMediaSelectionOptions property to get all the media characteristics for which the current video offers alternate tracks. The returned strings may include AVMediaCharacteristicVisual (alternate video tracks), AVMediaCharacteristicAudible (alternate audio tracks), AVMediaCharacteristicLegible (subtitles and captions), and so on.
With this array, call mediaSelectionGroupForMediaCharacteristic: to obtain the AVMediaSelectionGroup that contains all tracks of the corresponding type, then iterate the group's options property to retrieve all the AVMediaSelectionOption objects. Once you have an AVMediaSelectionOption, you can apply it to the AVPlayerItem:
NSString *mc = AVMediaCharacteristicLegible;
AVMediaSelectionGroup *group = [self.asset mediaSelectionGroupForMediaCharacteristic:mc];
if (group) {
    NSMutableArray *subtitles = [NSMutableArray array];
    for (AVMediaSelectionOption *option in group.options) {
        [subtitles addObject:option.displayName];
    }
    // subtitles now holds the display names of all supported subtitles
}
Then match the desired display name and select the corresponding option:
NSString *mc = AVMediaCharacteristicLegible;
AVMediaSelectionGroup *group = [self.asset mediaSelectionGroupForMediaCharacteristic:mc];
for (AVMediaSelectionOption *option in group.options) {
    if ([option.displayName isEqualToString:subtitle]) {
        // Set the subtitle option once a match is found
        [self.playerItem selectMediaOption:option inMediaSelectionGroup:group];
    }
}
// Passing nil instead cancels captioning:
// [self.playerItem selectMediaOption:nil inMediaSelectionGroup:group];