Preface
A demo project that implements real-time filters, photo capture, and video recording.
I've been playing around with Hashfilter and learned a lot along the way. I'm not much to look at, so let's see whether a filter can make me a little less so. Fair warning: there are only a few filters and no beautification effect; the author is a no-nonsense kind of guy.
Principle
After obtaining the camera's input stream, each frame is processed into a new image, which is then used to refresh the UI.
Implementation
Apple provides the simple UIImagePickerController, but it is not flexible enough for this use case. The author therefore uses the AVFoundation framework, which exposes the input and output streams and makes it convenient to process and display each frame.
If you are not familiar with the input- and output-related classes, that should not affect your understanding of this article, but I suggest reading Hassumori's blog first. In short, input is managed by AVCaptureInput and output by AVCaptureOutput.
There is a class called AVCaptureMovieFileOutput that records audio and video to a file directly. But because each frame has to be processed, the pipeline has to be taken apart. The author therefore uses AVCaptureAudioDataOutput and AVCaptureVideoDataOutput, processes the video frames, and then reassembles everything.
1. Obtain the input stream
The input stream is initialized with the associated device.
// image
_device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
_cameraDeviceInput = [[AVCaptureDeviceInput alloc] initWithDevice:self.device error:nil];

// audio
AVCaptureDevice *micDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
_microphoneDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:micDevice error:nil];
2. Initialize the output stream
_queue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);

// image
_videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
_videoDataOutput.videoSettings = @{(id)kCVPixelBufferPixelFormatTypeKey : [NSNumber numberWithInteger:kCVPixelFormatType_32BGRA]};
_videoDataOutput.alwaysDiscardsLateVideoFrames = YES;
[_videoDataOutput setSampleBufferDelegate:self queue:_queue];

// audio
_audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];
[_audioDataOutput setSampleBufferDelegate:self queue:_queue];
A serial queue is created so that each frame is processed in order. Both outputs deliver their data through the delegate method:
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection;
3. Create a session to connect input and output
An AVCaptureSession acts as the intermediary between inputs and outputs.
_session = [[AVCaptureSession alloc] init];
if ([_session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
    [_session setSessionPreset:AVCaptureSessionPreset1280x720];
}

// Combine input and output
if ([_session canAddInput:_cameraDeviceInput]) {
    [_session addInput:_cameraDeviceInput];
}
if ([_session canAddOutput:_videoDataOutput]) {
    [_session addOutput:_videoDataOutput];
}
4. Start a session
Start the session so that data flows from the input stream to the output stream.
// start
[_session startRunning];
Note that any change to an input or output stream should be wrapped in a single configuration commit (beginConfiguration / commitConfiguration). Switching the camera, for example, replaces the input stream:
// New input stream
AVCaptureDeviceInput *newInput = [AVCaptureDeviceInput deviceInputWithDevice:newCamera error:nil];
if (newInput != nil) {
    [self.session beginConfiguration];
    // Remove the original input
    [self.session removeInput:self.cameraDeviceInput];
    if ([self.session canAddInput:newInput]) {
        [self.session addInput:newInput];
        self.cameraDeviceInput = newInput;
    } else {
        [self.session addInput:self.cameraDeviceInput];
    }
    [self.session commitConfiguration];
}
5. Output stream data callback method
// Process the image here and save each frame to self.outputImg
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    @autoreleasepool {
        if (output == _audioDataOutput && [_audioWriterInput isReadyForMoreMediaData]) {
            // Process audio
            [_audioWriterInput appendSampleBuffer:sampleBuffer];
        }
        if (output == self.videoDataOutput) {
            // Process the video frame and save the result to self.outputImg
            [self imageFromSampleBuffer:sampleBuffer];
        }
    }
}
This is the step where each frame is processed and the filter is applied.
6. Apply a filter
CIFilter, part of Apple's Core Image framework, is used here for the image processing. It provides hundreds of built-in effects, of which only three are used in this demo. Those interested can consult the official documentation.
// 1. Create a CPU-based CIContext (software renderer)
self.context = [CIContext contextWithOptions:
    [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                forKey:kCIContextUseSoftwareRenderer]];
// 2. Create a GPU-based CIContext
self.context = [CIContext contextWithOptions:nil];
// 3. Create an OpenGL ES-optimized CIContext for real-time performance
self.context = [CIContext contextWithEAGLContext:
    [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2]];

// Convert UIImage to CIImage
CIImage *ciImage = [[CIImage alloc] initWithImage:[UIImage imageNamed:@"WechatIMG1.jpeg"]];
// Create the filter
CIFilter *filter = [CIFilter filterWithName:_dataSourse[indexPath.row]
                              keysAndValues:kCIInputImageKey, ciImage, nil];
[filter setDefaults];
// Get the filtered CIImage
CIImage *outputImage = [filter outputImage];
// Create a CGImage from the filtered image
CGImageRef cgImage = [self.context createCGImage:outputImage fromRect:[outputImage extent]];
imageview.image = [UIImage imageWithCGImage:cgImage];
// Release the CGImage
CGImageRelease(cgImage);
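The callback in step 5 calls imageFromSampleBuffer:, which is never listed in this article. Below is a minimal sketch of what that per-frame processing could look like, assuming a self.filter property that holds the currently selected CIFilter and that self.outputImg stores the filtered frame as a CIImage; this is a reconstruction under those assumptions, not the author's original code.

// Sketch of the per-frame processing (requires CoreImage and AVFoundation imports at the top of the file).
// self.filter and the CIImage type of self.outputImg are assumptions, not taken from the original project.
- (void)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    // Wrap the raw pixel buffer of this frame in a CIImage
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    // Feed the frame into the currently selected filter
    [self.filter setValue:sourceImage forKey:kCIInputImageKey];
    CIImage *filtered = [self.filter outputImage] ?: sourceImage;

    // Keep the filtered frame around for display, photo capture, and video writing
    self.outputImg = filtered;
}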
Take photos and video
In the callback method, the filter has already been applied to each frame. To take a photo, tap the photo button and save the current frame to the album.
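The save step itself is not shown in the article. A minimal sketch, assuming self.outputImg is the filtered CIImage and self.context is the CIContext created above (and that the photo-library usage description is set in Info.plist):

// Sketch: render the latest filtered frame and save it to the photo album.
// Assumes self.outputImg (CIImage) and self.context (CIContext); not the author's original code.
- (void)takePhoto {
    CIImage *frame = self.outputImg;
    if (frame == nil) { return; }

    CGImageRef cgImage = [self.context createCGImage:frame fromRect:[frame extent]];
    UIImage *photo = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);

    // Writes asynchronously to the user's photo library
    UIImageWriteToSavedPhotosAlbum(photo, nil, NULL, NULL);
}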
For video, we mainly use AVAssetWriter and AVAssetWriterInputPixelBufferAdaptor.
Once we have the video frames and the audio stream, we append them to the writer's buffers; with the presentation timestamps determined, AVAssetWriter combines them into a video file.
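The writer setup is not shown in the article. The following is a minimal sketch under assumed names (_assetWriter, _videoWriterInput, _pixelBufferAdaptor, outputURL) and illustrative settings; only _audioWriterInput matches a property used in the callback above, and the exact configuration is not the author's.

// Sketch: assemble processed frames and audio into a movie file.
// _assetWriter, _videoWriterInput, _pixelBufferAdaptor and outputURL are assumed names.
NSError *error = nil;
_assetWriter = [AVAssetWriter assetWriterWithURL:outputURL
                                        fileType:AVFileTypeQuickTimeMovie
                                           error:&error];

NSDictionary *videoSettings = @{AVVideoCodecKey  : AVVideoCodecH264,
                                AVVideoWidthKey  : @1280,
                                AVVideoHeightKey : @720};
_videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                       outputSettings:videoSettings];
_videoWriterInput.expectsMediaDataInRealTime = YES;

// The adaptor lets us append the processed frames as CVPixelBuffers
_pixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor
    assetWriterInputPixelBufferAdaptorWithAssetWriterInput:_videoWriterInput
                                sourcePixelBufferAttributes:nil];

// nil outputSettings passes the captured audio samples through without re-encoding
_audioWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                                       outputSettings:nil];
_audioWriterInput.expectsMediaDataInRealTime = YES;

[_assetWriter addInput:_videoWriterInput];
[_assetWriter addInput:_audioWriterInput];

[_assetWriter startWriting];
// When the first sample arrives, start the session at that sample's timestamp, e.g.
// [_assetWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(firstSampleBuffer)];

// Later, for each filtered frame rendered into a CVPixelBufferRef at time pts:
// if (_videoWriterInput.isReadyForMoreMediaData) {
//     [_pixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:pts];
// }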
Afterword
I ran into a problem along the way, so I'm writing it down as well.
The front camera has a mirroring problem. Most of the solutions found online address the standard iOS front-camera mirroring case and did not work here, probably because this project processes each frame and the data is no longer a native camera image. The fix is simply to mirror the image once more:
if ([[self cameraDeviceInput device] position] == AVCaptureDevicePositionFront) {
    // The front camera causes mirroring; flip the image back
    result = [result imageByApplyingOrientation:UIImageOrientationUpMirrored];
}
Reference
- iOS+Objective-C: Create a real time photo filter app