Demo
gif1
gif2
The GIFs recorded with LICEcap lost too many frames and came out blurry, so here are two HD screenshots as well:
png1
png2
Preface
In March this year, news of Douyu's $100 million financing round led by Tencent was reposted across major platforms. Now that e-sports and pan-entertainment have become investment hotspots, live streaming platforms have naturally drawn attention from all sides. Here are two trend charts for game live streaming:
Game Live Broadcast Scale
Game Live Broadcast Scale
And that's just the game-streaming slice of the pie. Competition in the live streaming industry will only get fiercer, for anchors and platforms alike. Of course, the industry will also become more and more standardized, with richer and richer forms of live content.
A sneak peek at live video
A live video pipeline can be broken into: capture, pre-processing, encoding, transmission, server processing, and decoding/rendering.
- Capture: the iOS side is relatively easy, because there are few hardware and software variants and hardware support is consistent. On Android there are many models on the market, so you have to adapt to specific devices. The PC side is the most troublesome, with all kinds of odd camera drivers; many small and medium platforms have therefore given up on PC streaming, and some platforms only stream from iOS.
- Pre-processing: beauty algorithms, video blur effects, watermarks and so on happen at this stage. The best-known open source framework on iOS is undoubtedly GPUImage, which ships with 125 built-in filters and supports custom shader scripts. The beauty effect in my Miaobo imitation is also based on GPUImage.
- Encoding: the key difficulty is finding the best balance among resolution, frame rate, bitrate, GOP size and other parameters. Since iOS 8, Apple has opened up VideoToolbox.framework for direct hardware encoding and decoding, which is one reason most live streaming apps require iOS 8 as the minimum version. iOS hardware is consistent enough that hardware encoding can be used directly; hardware encoding on Android is another big pit.
- Transmission: this is usually handed off to a CDN provider. A CDN only provides bandwidth and server-to-server transport; the jitter buffering of the connection between sender and receiver has to be implemented yourself. The largest CDN provider in China is probably Wangsu (ChinaNetCenter).
- Server processing: the server needs to do some stream processing so that the pushed stream can be adapted to each platform's protocols, such as RTMP, HLS, FLV…
- Decoding and rendering: that is, audio and video playback. Decoding, without question, should also be hardware decoding. The iOS side behaves well; Android is again a big pit. The difficulty lies in audio-video synchronization, which many live platforms currently get wrong. A good domestic open source project is Bilibili's ijkplayer. Douyu is built on ijkplayer, and so is this project.
Technical pitfalls: noise reduction, audio decoders, Bluetooth adapters, echo cancellation, signaling control, login, authentication, permission management, state management, in-app messaging, message push, gift systems, instant chat, payment systems, statistics, databases, caching, distributed file storage, message queues, an ops system and so on: pits of every size are waiting for you to fill!!
Capital pitfalls: take bandwidth as an example. With 20,000 viewers online at once, each pulling a 600 Kbps mobile stream (roughly 12 Gbps of sustained egress), the monthly bandwidth bill comes to at least around 300,000 yuan. According to YY's Q4 2015 financial report, their bandwidth cost was 161.1 million yuan, over 50 million yuan per month. Labor costs, channel fees and other expenses are not even counted here.
Social pitfalls: you also have to battle all kinds of dark forces around the clock, including porn, spam ads, throwaway bot accounts, fraudulent top-ups, copyright infringement, DDoS… (I decompiled Miaobo's official app; their project is literally named Shehui ("society"), O(∩_∩)O haha ~)
Project Download Address
GitHub download address
Preparation
The project is primarily based on ijkplayer, which is best packaged into a framework. I originally planned to write a packaging tutorial, but then I found a very detailed packaging blog post on Jianshu, which I share here: www.jianshu.com/p/1f06b27b3… .
If packaging fails even with the tutorial (the odds are small, of course), I keep a packaged copy (Release build). Download address: pan.baidu.com/s/1eRVetdK, password: 2dc0. Download it and unzip directly.
Project file structure
- Frameworks: if the Frameworks folder does not exist, click Classes, choose Show in Finder, create the folder there, and drag the framework you packaged or downloaded into the project. You can also simply create the folder and drag the Frameworks in directly.
- Profile: the personal center, which contains only a ProfileController. I have written this sort of repetitive code so many times I am sick of it, so if you are interested, write it yourself. So easy…
- Network: utility classes for network connections. Real-time network monitoring, network status switching and the network request tools all live here.
- Other: global constants. You can, of course, organize the files inside it in more detail.
- Home: contains modules such as the newest anchors, the hottest streams, the most-followed streams, and the gift leaderboard. And, most importantly, live video playback.
- ShowTime: the name says it all. Pre-processing of the live video, smart beauty, and H264 hard encoding all happen in here.
- Main: configuration of the UITabBarController and UINavigationController.
- Toos: the naming here is slightly off (it should be Tools); it holds all the categories the project uses.
- Login: the login module.
- Resource: resource files used by the project.
The project in detail
- Tip1: Determine the network type
When watching a live stream we are usually on WiFi or, for the big spenders, 3/4G. When an ordinary user switches networks, we should give a friendly reminder: "Your network has changed to XX." If a user switches from WiFi to 4G and your app never alerts them, and their data balance drains to zero, or worse, they end up owing the carrier a pile of money, your app's user experience will drop to zero, or below.
We can monitor network state changes in real time with Apple's Reachability, and read the concrete network type with the code below.
```objc
typedef NS_ENUM(NSUInteger, NetworkStates) {
    NetworkStatesNone, // No network
    NetworkStates2G,   // 2G
    NetworkStates3G,   // 3G
    NetworkStates4G,   // 4G
    NetworkStatesWIFI  // WiFi
};
```
```objc
+ (NetworkStates)getNetworkStates
{
    NSArray *subviews = [[[[UIApplication sharedApplication] valueForKeyPath:@"statusBar"] valueForKeyPath:@"foregroundView"] subviews];
    // Holds the network state
    NetworkStates states = NetworkStatesNone;
    for (id child in subviews) {
        if ([child isKindOfClass:NSClassFromString(@"UIStatusBarDataNetworkItemView")]) {
            // Read the network type code from the status bar
            int networkType = [[child valueForKeyPath:@"dataNetworkType"] intValue];
            switch (networkType) {
                case 0:
                    states = NetworkStatesNone;
                    break;
                case 1:
                    states = NetworkStates2G;
                    break;
                case 2:
                    states = NetworkStates3G;
                    break;
                case 3:
                    states = NetworkStates4G;
                    break;
                case 5:
                    states = NetworkStatesWIFI;
                    break;
                default:
                    break;
            }
        }
    }
    return states;
}
```
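For the real-time monitoring half, a minimal sketch using Apple's Reachability sample class might look like this (it assumes Reachability.h/.m from Apple's sample code is in the project, and `reachability` is an assumed property):

```objc
#import "Reachability.h"

- (void)startMonitoringNetwork
{
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(reachabilityChanged:)
                                                 name:kReachabilityChangedNotification
                                               object:nil];
    self.reachability = [Reachability reachabilityForInternetConnection];
    [self.reachability startNotifier]; // posts kReachabilityChangedNotification on changes
}

- (void)reachabilityChanged:(NSNotification *)note
{
    // Combine the coarse reachability change with getNetworkStates above
    // to tell the user exactly which network they are on now
    NetworkStates states = [[self class] getNetworkStates];
    NSLog(@"Network changed, current state: %lu", (unsigned long)states);
}
```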
- Tip2: The login module
Run the app a few times and you will notice that the login screen has two background videos, one of which plays at a time, looping endlessly: as long as you stay on the login screen, the current video keeps playing on repeat. The login itself is just a few buttons with no real login logic; tapping any of them takes you to the home page.
We need to listen for when the video finishes playing:
```objc
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(didFinish)
                                             name:IJKMPMoviePlayerPlaybackDidFinishNotification
                                           object:nil];
```
When it finishes, tell the IJKFFMoviePlayerController to play again:
```objc
- (void)didFinish
{
    // Play again, producing the infinite loop
    [self.player play];
}
```
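For the full picture, here is a minimal sketch of how such a looping background player can be set up with ijkplayer (the file name, the `player` property and the view layout are assumptions, not the project's exact code):

```objc
- (void)setupLoginPlayer
{
    // Assumed bundled video file
    NSURL *url = [[NSBundle mainBundle] URLForResource:@"login" withExtension:@"mp4"];
    IJKFFOptions *options = [IJKFFOptions optionsByDefault];
    self.player = [[IJKFFMoviePlayerController alloc] initWithContentURL:url withOptions:options];
    self.player.view.frame = self.view.bounds;
    self.player.scalingMode = IJKMPMovieScalingModeAspectFill; // fill the whole screen
    self.player.shouldAutoplay = YES;
    [self.view insertSubview:self.player.view atIndex:0];      // keep it behind the login buttons
    [self.player prepareToPlay];
    // ...then register for IJKMPMoviePlayerPlaybackDidFinishNotification as shown above
}
```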
- Tip3: The home page
Many of you will have seen, or built, this kind of effect. Here is my approach; it is not necessarily the best, it is just to give you an idea.
One parent controller, HomeViewController, plus three child controllers (hot / newest / followed), with each controller managing its own business logic (high cohesion, low coupling). Override HomeViewController's loadView and replace self.view with a UIScrollView, then add the three child controllers' views to the UIScrollView. A sketch of the idea follows; for the other effects, refer to my code, which carries detailed Chinese comments.
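A minimal sketch, assuming child controllers named HotViewController, NewestViewController and FollowViewController (assumed names, not necessarily the project's):

```objc
- (void)loadView
{
    // Replace self.view with a paging scroll view
    UIScrollView *scrollView = [[UIScrollView alloc] initWithFrame:[UIScreen mainScreen].bounds];
    scrollView.pagingEnabled = YES;
    self.view = scrollView;
}

- (void)viewDidLoad
{
    [super viewDidLoad];
    NSArray *children = @[[HotViewController new], [NewestViewController new], [FollowViewController new]];
    UIScrollView *scrollView = (UIScrollView *)self.view;
    CGFloat w = scrollView.bounds.size.width;
    CGFloat h = scrollView.bounds.size.height;
    [children enumerateObjectsUsingBlock:^(UIViewController *vc, NSUInteger idx, BOOL *stop) {
        [self addChildViewController:vc];             // keep the controller hierarchy intact
        vc.view.frame = CGRectMake(idx * w, 0, w, h); // lay the three pages side by side
        [scrollView addSubview:vc.view];
        [vc didMoveToParentViewController:self];
    }];
    scrollView.contentSize = CGSizeMake(children.count * w, h);
}
```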
- Tip4: The live room (audience side)
This is one of the key parts of the whole project. This kind of live-room layout is fairly mainstream by now: many of the live streaming apps I downloaded, YY included, lay things out the way this project does. A lot is involved here, more than can be explained in a few words:
- A: The anchor's live video.
- B: The live video of an associated anchor. By default it shows only the picture, with no sound; tap this view to switch to that anchor.
- C: Pull down to switch to another anchor, a very common feature. The approach: the live controller is a UICollectionViewController with full-screen cells (cell.frame equals self.collectionView.bounds). When we enter the live controller we actually pass in an array of associated anchors, and each pull-down loads the next anchor in the array (see the sketch after this list).
- D: View a viewer's details.
- E: View the anchor's details.
- F: Footprints: particle animation, explained in detail later.
- G: Danmaku: tap the first button in the bottom toolbar to toggle the bullet screen on/off.
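A minimal sketch of that pull-to-switch mechanism, using full-screen paging cells (`LiveViewController`, `relatedAnchors` and `anchors` are assumed names, not the project's exact ones):

```objc
UICollectionViewFlowLayout *layout = [[UICollectionViewFlowLayout alloc] init];
layout.itemSize = [UIScreen mainScreen].bounds.size;              // one full-screen cell per anchor
layout.minimumLineSpacing = 0;
layout.minimumInteritemSpacing = 0;
layout.scrollDirection = UICollectionViewScrollDirectionVertical; // pull down/up to switch

LiveViewController *live = [[LiveViewController alloc] initWithCollectionViewLayout:layout];
live.collectionView.pagingEnabled = YES;   // snap to exactly one anchor per page
live.collectionView.showsVerticalScrollIndicator = NO;
live.relatedAnchors = anchors;             // the associated-anchor array passed in on entry
```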
- Tip5: Particle animation
The layer that renders the visitor-footprint particle animation is added to the player's view. The code below is commented in detail:
```objc
CAEmitterLayer *emitterLayer = [CAEmitterLayer layer];
// Emitter position in the x/y plane
emitterLayer.emitterPosition = CGPointMake(self.moviePlayer.view.frame.size.width - 50, self.moviePlayer.view.frame.size.height - 50);
// Emitter size
emitterLayer.emitterSize = CGSizeMake(20, 20);
// Render mode
emitterLayer.renderMode = kCAEmitterLayerUnordered;
// Enable the 3D effect
// _emitterLayer.preservesDepth = YES;
NSMutableArray *array = [NSMutableArray array];
// Create the particles
for (int i = 0; i < 10; i++) {
    CAEmitterCell *stepCell = [CAEmitterCell emitterCell];
    // Particle creation rate: 1 per second
    stepCell.birthRate = 1;
    // Particle lifetime
    stepCell.lifetime = arc4random_uniform(4) + 1;
    stepCell.lifetimeRange = 1.5;
    // Particle color
    stepCell.color = [[UIColor colorWithRed:0.8 green:0.4 blue:0.2 alpha:0.1] CGColor];
    UIImage *image = [UIImage imageNamed:[NSString stringWithFormat:@"good%d_30x30", i]];
    // Particle contents
    stepCell.contents = (id)[image CGImage];
    // Particle name
    [stepCell setName:[NSString stringWithFormat:@"step%d", i]];
    // Particle velocity
    stepCell.velocity = arc4random_uniform(100) + 100;
    stepCell.velocityRange = 80;
    // Emission angle in the x/y plane
    stepCell.emissionLongitude = M_PI + M_PI_2;
    stepCell.emissionRange = M_PI_2 / 6;
    // Particle scale
    stepCell.scale = 0.3;
    [array addObject:stepCell];
}
emitterLayer.emitterCells = array;
[self.moviePlayer.view.layer insertSublayer:emitterLayer below:self.catEarView.layer];
```
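One way to drive it is to pulse the emitter when a viewer enters the room, using the layer-level birthRate as a master switch (`pulseFootprints:` is an assumed helper; the project may trigger the animation differently):

```objc
- (void)pulseFootprints:(CAEmitterLayer *)emitterLayer
{
    emitterLayer.birthRate = 1; // resume emission (multiplies every cell's own birthRate)
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
        emitterLayer.birthRate = 0; // stop emitting; particles already born live out their lifetime
    });
}
```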
- Tip6: Danmaku
The bullet screen also uses a third-party wheel, BarrageRenderer. The documentation for this open source project is in Chinese, and it is very simple to use.
The basic configuration
```objc
_renderer = [[BarrageRenderer alloc] init];
// Set the display area of the danmaku
_renderer.canvasMargin = UIEdgeInsetsMake(ALinScreenHeight * 0.3, 10, 10, 10);
[self.contentView addSubview:_renderer.view];
```
Configuring a danmaku descriptor:
```objc
#pragma mark - Generate sprite descriptors
/// Descriptor for a walking-text sprite
- (BarrageDescriptor *)walkTextSpriteDescriptorWithDirection:(NSInteger)direction
{
    BarrageDescriptor *descriptor = [[BarrageDescriptor alloc] init];
    descriptor.spriteName = NSStringFromClass([BarrageWalkTextSprite class]);
    descriptor.params[@"text"] = self.danMuText[arc4random_uniform((uint32_t)self.danMuText.count)];
    descriptor.params[@"textColor"] = Color(arc4random_uniform(256), arc4random_uniform(256), arc4random_uniform(256));
    descriptor.params[@"speed"] = @(100 * (double)random() / RAND_MAX + 50);
    descriptor.params[@"direction"] = @(direction);
    descriptor.params[@"clickAction"] = ^{
        UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:@"Tip"
                                                            message:@"Danmaku tapped"
                                                           delegate:nil
                                                  cancelButtonTitle:@"Cancel"
                                                  otherButtonTitles:nil];
        [alertView show];
    };
    return descriptor;
}
```
And the last step: always remember to start it.
```objc
[_renderer start];
```
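To actually see danmaku fly by, descriptors have to be handed to the renderer. A minimal sketch using a timer (the 0.5s interval, the `danmakuTimer` property and the method names are assumptions):

```objc
- (void)startDanmaku
{
    [_renderer start];
    // Fire one danmaku every half second
    self.danmakuTimer = [NSTimer scheduledTimerWithTimeInterval:0.5
                                                         target:self
                                                       selector:@selector(autoSendBarrage)
                                                       userInfo:nil
                                                        repeats:YES];
}

- (void)autoSendBarrage
{
    // Hand a descriptor to the renderer; it handles layout and animation
    [_renderer receive:[self walkTextSpriteDescriptorWithDirection:BarrageWalkDirectionR2L]];
}
```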
- Tip7: Smart beauty
Beauty filters are now standard on live streaming platforms; without them, most anchors would be unwatchable. Beauty algorithms require GPU programming and a grounding in image processing. I am not very familiar with image processing, and the relevant literature left me hazy, so once again I reached for an open source wheel: GPUImage. With roughly 13,000 stars on GitHub (as of July 5), this framework needs no introduction; it ships 125 built-in filters, covering every effect you can think of and more. My project shows the detailed usage, and it is very simple. Below is an annotated excerpt from its umbrella header: partly to make it easy for you to tweak the beauty effect in my project, partly as a backup. (I honestly forget the exact source of these annotations; if anyone finds the original link, please contact me so I can add it.)
```objc
#import "GLProgram.h"

// Base classes
#import "GPUImageOpenGLESContext.h"
#import "GPUImageOutput.h"
#import "GPUImageView.h"
#import "GPUImageVideoCamera.h"
#import "GPUImageStillCamera.h"
#import "GPUImageMovie.h"
#import "GPUImagePicture.h"
#import "GPUImageRawDataInput.h"
#import "GPUImageRawDataOutput.h"
#import "GPUImageMovieWriter.h"
#import "GPUImageFilterPipeline.h"
#import "GPUImageTextureOutput.h"
#import "GPUImageFilterGroup.h"
#import "GPUImageTextureInput.h"
#import "GPUImageUIElement.h"
#import "GPUImageBuffer.h"

// Filters
#import "GPUImageFilter.h"
#import "GPUImageTwoInputFilter.h"

#pragma mark - Handle Color
#import "GPUImageBrightnessFilter.h"                // Brightness
#import "GPUImageExposureFilter.h"                  // Exposure
#import "GPUImageContrastFilter.h"                  // Contrast
#import "GPUImageSaturationFilter.h"                // Saturation
#import "GPUImageGammaFilter.h"                     // Gamma
#import "GPUImageColorInvertFilter.h"               // Color inversion
#import "GPUImageSepiaFilter.h"                     // Sepia (nostalgic)
#import "GPUImageLevelsFilter.h"                    // Levels
#import "GPUImageGrayscaleFilter.h"                 // Grayscale
#import "GPUImageHistogramFilter.h"                 // Color histogram, drawn over the image
#import "GPUImageHistogramGenerator.h"              // Color histogram
#import "GPUImageRGBFilter.h"                       // RGB
#import "GPUImageToneCurveFilter.h"                 // Tone curve
#import "GPUImageMonochromeFilter.h"                // Monochrome
#import "GPUImageOpacityFilter.h"                   // Opacity
#import "GPUImageHighlightShadowFilter.h"           // Brighten shadows
#import "GPUImageFalseColorFilter.h"                // Color replacement (replaces highlight and shadow colors)
#import "GPUImageHueFilter.h"                       // Hue
#import "GPUImageChromaKeyFilter.h"                 // Chroma key
#import "GPUImageWhiteBalanceFilter.h"              // White balance
#import "GPUImageAverageColor.h"                    // Average pixel color
#import "GPUImageSolidColorGenerator.h"             // Solid color
#import "GPUImageLuminosity.h"                      // Average luminosity
#import "GPUImageAverageLuminanceThresholdFilter.h" // Average luminance threshold, black-and-white image (comic-like effect)
#import "GPUImageLookupFilter.h"                    // Lookup-based color adjustment
#import "GPUImageAmatorkaFilter.h"                  // Amatorka lookup
#import "GPUImageMissEtikateFilter.h"               // MissEtikate lookup
#import "GPUImageSoftEleganceFilter.h"              // SoftElegance lookup

#pragma mark - Handle Image
#import "GPUImageCrosshairGenerator.h"              // Crosshairs
#import "GPUImageLineGenerator.h"                   // Lines
#import "GPUImageTransformFilter.h"                 // Shape transform
#import "GPUImageCropFilter.h"                      // Crop
#import "GPUImageSharpenFilter.h"                   // Sharpen
#import "GPUImageUnsharpMaskFilter.h"               // Unsharp mask
#import "GPUImageFastBlurFilter.h"                  // Fast blur
#import "GPUImageGaussianBlurFilter.h"              // Gaussian blur
#import "GPUImageGaussianSelectiveBlurFilter.h"     // Gaussian blur with a selected sharp region
#import "GPUImageBoxBlurFilter.h"                   // Box blur
#import "GPUImageTiltShiftFilter.h"                 // Tilt-shift blur: sharp center, blurred top and bottom
#import "GPUImageMedianFilter.h"                    // Median: slightly blurred edges
#import "GPUImageBilateralFilter.h"                 // Bilateral blur
#import "GPUImageErosionFilter.h"                   // Erosion: blurred edges, black and white
#import "GPUImageRGBErosionFilter.h"                // RGB erosion: blurred edges, keeps color
#import "GPUImageDilationFilter.h"                  // Dilation: blurred edges, black and white
#import "GPUImageRGBDilationFilter.h"               // RGB dilation: blurred edges, keeps color
#import "GPUImageOpeningFilter.h"                   // Opening: black-and-white blur
#import "GPUImageRGBOpeningFilter.h"                // RGB opening: color blur
#import "GPUImageClosingFilter.h"                   // Closing: black-and-white blur, dark areas brightened
#import "GPUImageRGBClosingFilter.h"                // RGB closing: color blur, dark areas brightened
#import "GPUImageLanczosResamplingFilter.h"         // Lanczos resampling, blur effect
#import "GPUImageNonMaximumSuppressionFilter.h"     // Non-maximum suppression: only the brightest pixels show, the rest turn black
#import "GPUImageThresholdedNonMaximumSuppressionFilter.h" // Like the above, but drops more pixels
#import "GPUImageSobelEdgeDetectionFilter.h"        // Sobel edge detection (white edges on black, comic-like inverted look)
#import "GPUImageCannyEdgeDetectionFilter.h"        // Canny edge detection (stronger black-and-white contrast than the above)
#import "GPUImageThresholdEdgeDetectionFilter.h"    // Threshold edge detection (much like the above)
#import "GPUImagePrewittEdgeDetectionFilter.h"      // Prewitt edge detection (similar to Sobel, seemingly smoother)
#import "GPUImageXYDerivativeFilter.h"              // XY-derivative edge detection: mostly blue frame with green edges, in color
#import "GPUImageHarrisCornerDetectionFilter.h"     // Harris corner detection: small green crosses mark the corners
#import "GPUImageNobleCornerDetectionFilter.h"      // Noble corner detection: detects more points
#import "GPUImageShiTomasiFeatureDetectionFilter.h" // Shi-Tomasi corner detection: much like the above
#import "GPUImageMotionDetector.h"                  // Motion detection
#import "GPUImageHoughTransformLineDetector.h"      // Line detection
#import "GPUImageParallelCoordinateLineTransformFilter.h" // Parallel line detection
#import "GPUImageLocalBinaryPatternFilter.h"        // Black-and-white image with heavy noise
#import "GPUImageLowPassFilter.h"                   // Used to brighten the image
#import "GPUImageHighPassFilter.h"                  // Pixels below a threshold show as black

#pragma mark - Visual Effect
#import "GPUImageSketchFilter.h"                    // Sketch
#import "GPUImageThresholdSketchFilter.h"           // Threshold sketch: a noisy sketch
#import "GPUImageToonFilter.h"                      // Toon effect (thick black outlines)
#import "GPUImageSmoothToonFilter.h"                // Finer than the above, which has a rougher style
#import "GPUImageKuwaharaFilter.h"                  // Kuwahara filter: gouache-like blur; slow to process, use with caution
#import "GPUImageMosaicFilter.h"                    // Black-and-white mosaic
#import "GPUImagePixellateFilter.h"                 // Pixellation
#import "GPUImagePolarPixellateFilter.h"            // Concentric-circle pixellation
#import "GPUImageCrosshatchFilter.h"                // Crosshatch shading: black-and-white mesh
#import "GPUImageColorPackingFilter.h"              // Color loss and blur (CCTV-like look)
#import "GPUImageVignetteFilter.h"                  // Vignette: dark circular edges highlighting the center
#import "GPUImageSwirlFilter.h"                     // Swirl: the center curls inward
#import "GPUImageBulgeDistortionFilter.h"           // Bulge distortion, fisheye effect
#import "GPUImagePinchDistortionFilter.h"           // Pinch distortion, concave mirror
#import "GPUImageStretchDistortionFilter.h"         // Stretch distortion, funhouse mirror
#import "GPUImageGlassSphereFilter.h"               // Glass sphere effect
#import "GPUImageSphereRefractionFilter.h"          // Sphere refraction, image inverted
#import "GPUImagePosterizeFilter.h"                 // Posterize, noisy look
#import "GPUImageCGAColorspaceFilter.h"             // CGA colorspace: black, light blue and purple blocks
#import "GPUImagePerlinNoiseFilter.h"               // Perlin noise
#import "GPUImage3x3ConvolutionFilter.h"            // 3x3 convolution: bright blocks darken, edges and lines brighten
#import "GPUImageEmbossFilter.h"                    // Emboss, slightly 3D feel
#import "GPUImagePolkaDotFilter.h"                  // Polka-dot pattern
#import "GPUImageHalftoneFilter.h"                  // Halftone: black-and-white image built from dots

#pragma mark - Blend
#import "GPUImageMultiplyBlendFilter.h"             // Usually used to create shadows and depth
#import "GPUImageNormalBlendFilter.h"               // Normal
#import "GPUImageAlphaBlendFilter.h"                // Alpha blend: applies foreground transparency over the background
#import "GPUImageDissolveBlendFilter.h"             // Dissolve
#import "GPUImageOverlayBlendFilter.h"              // Overlay, usually for shadow effects
#import "GPUImageDarkenBlendFilter.h"               // Darken blend, usually for overlapping type
#import "GPUImageLightenBlendFilter.h"              // Lighten blend, usually for overlapping type
#import "GPUImageSourceOverBlendFilter.h"           // Source-over blend
#import "GPUImageColorBurnBlendFilter.h"            // Color burn blend
#import "GPUImageColorDodgeBlendFilter.h"           // Color dodge blend
#import "GPUImageScreenBlendFilter.h"               // Screen blend, usually for highlights and lens flare
#import "GPUImageExclusionBlendFilter.h"            // Exclusion blend
#import "GPUImageDifferenceBlendFilter.h"           // Difference blend, usually to create more varied colors
#import "GPUImageSubtractBlendFilter.h"             // Subtract blend, usually for an animated darkening blur between two images
#import "GPUImageHardLightBlendFilter.h"            // Hard light blend, usually for shadow effects
#import "GPUImageSoftLightBlendFilter.h"            // Soft light blend
#import "GPUImageChromaKeyBlendFilter.h"            // Chroma key blend
#import "GPUImageMaskFilter.h"                      // Mask blend
#import "GPUImageHazeFilter.h"                      // Haze, darkened
#import "GPUImageLuminanceThresholdFilter.h"        // Luminance threshold
#import "GPUImageAdaptiveThresholdFilter.h"         // Adaptive threshold
#import "GPUImageAddBlendFilter.h"                  // Usually for an animated brightening blur between two images
#import "GPUImageDivideBlendFilter.h"               // Usually for an animated darkening blur between two images

#pragma mark - Purpose unclear
#import "GPUImageJFAVoroniFilter.h"
#import "GPUImageVoroniConsumerFilter.h"
```
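For reference, here is a minimal sketch of how a beauty-style GPUImage pipeline is typically wired up, assuming a bilateral filter for skin smoothing plus a slight brightness lift (the project's actual filter chain may differ):

```objc
GPUImageVideoCamera *camera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset1280x720
                                         cameraPosition:AVCaptureDevicePositionFront];
camera.outputImageOrientation = UIInterfaceOrientationPortrait;

GPUImageBilateralFilter *smooth = [[GPUImageBilateralFilter alloc] init];
smooth.distanceNormalizationFactor = 4.0; // lower value, stronger skin smoothing

GPUImageBrightnessFilter *bright = [[GPUImageBrightnessFilter alloc] init];
bright.brightness = 0.1; // slight brightening

GPUImageView *preview = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[self.view insertSubview:preview atIndex:0];

// camera -> smoothing -> brightening -> on-screen preview
[camera addTarget:smooth];
[smooth addTarget:bright];
[bright addTarget:preview];
[camera startCameraCapture];
```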
- Tip8: H264 hard encoding
If you use ijkplayer for hard decoding, a single line of code turns it on:
```objc
// Turn on hardware decoding (ijkplayer player option)
[option setPlayerOptionValue:@"1" forKey:@"videotoolbox"];
```
Hard-encoding application scenario: we want to push the anchor's video data up to the server.

The camera captures images, the captured images are hardware-encoded, and the encoded data is packed into an H264 stream and sent over the network.

Camera capture: iOS provides AVCaptureSession for capturing camera image data. In this project I used GPUImageVideoCamera from GPUImage directly: set the GPUImageVideoCamera's delegate, and do the encoding inside its delegate method - (void)willOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer;.

One thing to remember: the data that AVCaptureSession and GPUImageVideoCamera hand you is all unencoded CMSampleBuffer data.

The captured data is then hard-encoded with Apple's VideoToolbox. There are plenty of VideoToolbox encode/decode tutorials online, and of course it is best to read Apple's official documentation; but if you only need hard encoding, reading my project is enough.
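Before the callback below can fire, a compression session has to be created, configured and fed frames. A minimal sketch under stated assumptions (the resolution, fps and bitrate values are illustrative, and `_session` is an assumed ivar):

```objc
#import <VideoToolbox/VideoToolbox.h>

- (void)setupCompressionSession
{
    int width = 720, height = 1280, fps = 24; // illustrative values, tune for your stream
    VTCompressionSessionRef session = NULL;
    // didCompressH264 (shown below) is invoked with each encoded frame
    OSStatus status = VTCompressionSessionCreate(NULL, width, height,
                                                 kCMVideoCodecType_H264,
                                                 NULL, NULL, NULL,
                                                 didCompressH264,
                                                 (__bridge void *)self,
                                                 &session);
    if (status != noErr) return;

    // Real-time encoding, as needed for live streaming
    VTSessionSetProperty(session, kVTCompressionPropertyKey_RealTime, kCFBooleanTrue);
    // Average bitrate in bits per second
    int bitRate = 600 * 1024;
    CFNumberRef bitRateRef = CFNumberCreate(NULL, kCFNumberIntType, &bitRate);
    VTSessionSetProperty(session, kVTCompressionPropertyKey_AverageBitRate, bitRateRef);
    CFRelease(bitRateRef);
    // GOP: one key frame roughly every second
    CFNumberRef gopRef = CFNumberCreate(NULL, kCFNumberIntType, &fps);
    VTSessionSetProperty(session, kVTCompressionPropertyKey_MaxKeyFrameInterval, gopRef);
    CFRelease(gopRef);
    VTCompressionSessionPrepareToEncodeFrames(session);
    _session = session;
}

// GPUImageVideoCameraDelegate: raw frames arrive here and are fed to the encoder
- (void)willOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CMTime pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    VTCompressionSessionEncodeFrame(_session, imageBuffer, pts, kCMTimeInvalid, NULL, NULL, NULL);
}
```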
The key encoding callback:
```objc
void didCompressH264(void *outputCallbackRefCon, void *sourceFrameRefCon,
                     OSStatus status, VTEncodeInfoFlags infoFlags,
                     CMSampleBufferRef sampleBuffer)
{
    if (status != 0) return;

    // Make sure the captured, still-unencoded data is ready
    if (!CMSampleBufferDataIsReady(sampleBuffer)) {
        NSLog(@"didCompressH264 data is not ready");
        return;
    }

    ALinH264Encoder *encoder = (__bridge ALinH264Encoder *)outputCallbackRefCon;

    bool keyframe = !CFDictionaryContainsKey((CFDictionaryRef)CFArrayGetValueAtIndex(CMSampleBufferGetSampleAttachmentsArray(sampleBuffer, true), 0),
                                             kCMSampleAttachmentKey_NotSync);
    if (keyframe) { // Key frame: extract SPS and PPS
        CMFormatDescriptionRef format = CMSampleBufferGetFormatDescription(sampleBuffer);
        size_t sparameterSetSize, sparameterSetCount;
        const uint8_t *sparameterSet;
        OSStatus statusCode = CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 0, &sparameterSet, &sparameterSetSize, &sparameterSetCount, 0);
        if (statusCode == noErr) {
            size_t pparameterSetSize, pparameterSetCount;
            const uint8_t *pparameterSet;
            OSStatus statusCode = CMVideoFormatDescriptionGetH264ParameterSetAtIndex(format, 1, &pparameterSet, &pparameterSetSize, &pparameterSetCount, 0);
            if (statusCode == noErr) {
                encoder->sps = [NSData dataWithBytes:sparameterSet length:sparameterSetSize];
                encoder->pps = [NSData dataWithBytes:pparameterSet length:pparameterSetSize];
                NSLog(@"sps:%@ , pps:%@", encoder->sps, encoder->pps);
            }
        }
    }

    // Walk the AVCC-formatted buffer and split it into NAL units
    CMBlockBufferRef dataBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    size_t length, totalLength;
    char *dataPointer;
    OSStatus statusCodeRet = CMBlockBufferGetDataPointer(dataBuffer, 0, &length, &totalLength, &dataPointer);
    if (statusCodeRet == noErr) {
        size_t bufferOffset = 0;
        static const int AVCCHeaderLength = 4;
        while (bufferOffset < totalLength - AVCCHeaderLength) {
            // Each NAL unit is prefixed with a 4-byte big-endian length
            uint32_t NALUnitLength = 0;
            memcpy(&NALUnitLength, dataPointer + bufferOffset, AVCCHeaderLength);
            NALUnitLength = CFSwapInt32BigToHost(NALUnitLength);
            NSData *data = [[NSData alloc] initWithBytes:(dataPointer + bufferOffset + AVCCHeaderLength) length:NALUnitLength];
            bufferOffset += AVCCHeaderLength + NALUnitLength;
            NSLog(@"sendData-->> %@ %lu", data, bufferOffset);
        }
    }
}
```
Reflections
It's a knockoff, admittedly, but everything is in there. For the specifics you will still need to read the project source; a few thousand words really cannot cover this many topics. The title calls this a first glimpse of live video, and it truly is only a first glimpse: there are far too many pits in video live streaming. Walk on, and cherish the road…
Project compilation environment
Xcode 7 (or above), ideally running on a real device. The simulator does not support certain features, so you will not see any effect from them: hard encoding, smart beauty and so on. I have gated these modules so that they require a real device.
Project Download Address
GitHub download address. Stars and forks are welcome; future bug fixes will be posted on GitHub. If you have questions, leave a comment or send me a private message on Jianshu, or DM me on Weibo (my Weibo is linked on my Jianshu profile page).
Contact me
GitHub