OpenGL
What exactly is OpenGL? It is generally regarded as an API (Application Programming Interface) containing a series of functions for manipulating graphics and images. Strictly speaking, however, OpenGL itself is not an API but a specification, developed and maintained by the Khronos Group; the actual implementations are provided by graphics hardware vendors.
How does OpenGL work
OpenGL is implemented as a client-server system: the application is the client, and the OpenGL implementation provided by the graphics hardware vendor is the server. The client program calls the OpenGL interface to perform 3D rendering. OpenGL commands and data are first cached in main memory; under certain conditions they are transferred to video memory, and under the control of the GPU they pass through the rendering pipeline to complete the rendering. The result is stored in the frame buffer, and the frames in the frame buffer are finally sent to the display.
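As a generic illustration of this client-side buffering (not code from this article's project), commands are queued on the client until they are flushed to the GPU:

glClearColor(0.0, 0.0, 0.0, 1.0); // queued in the client-side command buffer
glClear(GL_COLOR_BUFFER_BIT);     // still queued, not yet executed by the GPU
glFlush();                        // push all queued commands to the server (GPU)
glFinish();                       // block until the GPU has finished executing them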
Rendering pipeline schematic
1. Vertex processing: each vertex is fed into the vertex shader, whose main purpose is to transform 3D coordinates into another set of 3D coordinates. Only coordinates that fall within the range of so-called Normalized Device Coordinates will eventually appear on the screen; coordinates outside this range are not displayed.
2. Primitive assembly: the vertices output by the vertex shader are taken as input and assembled into the corresponding primitives. OpenGL ES supports only three kinds of primitives: points, line segments, and triangles; complex shapes have to be built by rendering multiple triangles.
3. Primitive processing: the primitives from the previous step are fed into the geometry shader, which takes a set of vertices in the form of a primitive as input. It can generate new shapes by emitting new vertices to construct new primitives (of the same or other primitive types).
4. Rasterization: each primitive is mapped to the corresponding pixels on the final screen, anything outside the view is clipped away (for efficiency), and fragments are generated.
5. Fragment processing: each fragment is fed into the fragment shader, whose main purpose is to compute the final color of a pixel (this is where filters are applied).
6. Test and blend: this stage checks the fragment's depth value and stencil value and uses them to determine whether the fragment is in front of or behind other objects and whether it should be discarded. It also checks the alpha value (which defines an object's transparency) and blends objects accordingly; a minimal configuration sketch follows this list.
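The test-and-blend stage is configured with standard OpenGL ES 2.0 state calls like these (a sketch, not taken from the sample project below):

glEnable(GL_DEPTH_TEST); // keep only fragments that pass the depth comparison
glDepthFunc(GL_LESS);    // a fragment passes if it is closer than the stored depth
glEnable(GL_BLEND);      // enable blending for translucent fragments
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // standard alpha blending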
Key concepts in brief
OpenGL context: OpenGL is itself a huge state machine; a set of variables describes how OpenGL should behave at any given moment. OpenGL's state is often referred to as the OpenGL context.
Vertex data: An array of vertex coordinates needed to render an image, such as three vertices needed to render a triangle.
Shader: a small program running on the GPU that quickly processes data in the graphics rendering pipeline. There are three types: vertex shader, geometry shader, and fragment shader. We only need to configure the vertex and fragment shaders.
Texture: a 2D image (1D and 3D textures also exist) that can be used to add detail to objects.
Frame buffer: a buffer that receives rendering results and specifies the area (a texture or a render buffer) where the GPU stores those results. A default frame buffer is generated and configured when the render window is created, and we can also create our own custom frame buffers.
OpenGL coordinates: range from -1 to 1 in a three-dimensional coordinate system, usually denoted X, Y, and Z, with the positive Z axis pointing out of the screen.
Texture coordinates: a two-dimensional coordinate system ranging from 0 to 1, with its origin at the lower left, used to indicate which part of an image should be sampled (texture coordinates are used to fetch the texture color).
Rasterization: the process of deciding which pixels are covered by the assembled primitives.
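A small sketch tying several of these concepts together: the vertex data and texture coordinates for a full-screen quad (two triangles drawn as a triangle strip). Vertex positions use the -1..1 NDC range; texture coordinates use the 0..1 range with the origin at the lower left. These arrays are illustrative, not the project's exact data:

static const GLfloat squareVertices[] = {
    -1.0f, -1.0f, // bottom left
     1.0f, -1.0f, // bottom right
    -1.0f,  1.0f, // top left
     1.0f,  1.0f, // top right
};
static const GLfloat textureVertices[] = {
    0.0f, 0.0f, // maps bottom left of the texture to the bottom-left vertex
    1.0f, 0.0f,
    0.0f, 1.0f,
    1.0f, 1.0f,
};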
OpenGL programming
Ordinary software development is CPU programming: execution is serial and follows the order of the code. OpenGL programming is GPU programming: a set of state variables describes how OpenGL should behave at any given moment. The core of the programming work lies in the vertex shader and the fragment shader.
The programming flow
Create OpenGL context and render layer -> create frame buffer and render buffer and bind them -> create shader program, load and compile the shader code, link -> configure the viewport size, start the shader program, load vertex data and textures -> draw/render -> clean up
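Sketched as OpenGL ES 2.0 calls, the flow looks roughly like this (names and the render-buffer details are illustrative, not the project's exact code):

EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:context];        // 1. context + render layer

GLuint framebuffer = 0;
glGenFramebuffers(1, &framebuffer);             // 2. create + bind frame buffer
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer); //    (render buffer handled similarly)

GLuint program = glCreateProgram();             // 3. create program; compile,
glLinkProgram(program);                         //    attach shaders, then link

GLint width = 0, height = 0;                    // in practice, queried from the drawable
glViewport(0, 0, width, height);                // 4. configure viewport size,
glUseProgram(program);                          //    start the shader program

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);          // 5. draw/render

glDeleteProgram(program);                       // 6. clean up
glDeleteFramebuffers(1, &framebuffer);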
TikTok filter practice
To tie the above content together, today we will implement a classic Douyin (TikTok) filter: the shake ("dithering") effect.
Preparation
You can download the starter project from github.com/caixindong/… . It provides a basic camera-capture setup. Open the XDCaptureService/Example/XDCaptureService/XDViewController.m file; this is where we will write the code for this exercise:
- (void)captureService:(XDCaptureService *)service getPreviewLayer:(AVCaptureVideoPreviewLayer *)previewLayer {
    if (previewLayer) {
        dispatch_async(dispatch_get_main_queue(), ^{
            [_contentView.layer addSublayer:previewLayer];
            previewLayer.frame = _contentView.bounds;
        });
    }
}

- (void)captureService:(XDCaptureService *)service outputSampleBuffer:(CMSampleBufferRef)sampleBuffer {
}
First, comment out the code inside - (void)captureService:(XDCaptureService *)service getPreviewLayer:(AVCaptureVideoPreviewLayer *)previewLayer. We won't use AVFoundation's default AVCaptureVideoPreviewLayer as our preview, because the default preview view gives us no way to apply further filter processing. Instead, we need a preview view implemented on top of OpenGL so that we can filter every video frame at the rendering level. - (void)captureService:(XDCaptureService *)service outputSampleBuffer:(CMSampleBufferRef)sampleBuffer is where we will put the video-frame rendering logic.
OpenGL in practice
We'll create a new XDOpenGLPreView class as our OpenGL preview view; it exposes a method for rendering video frame data passed in from outside:
#import <UIKit/UIKit.h>
#import <CoreVideo/CoreVideo.h>

@interface XDOpenGLPreView : UIView

- (void)renderPixelBuffer:(CVPixelBufferRef)pixelBuffer;

@end
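With this interface in place, the capture delegate from earlier can feed frames to the view like this (a sketch; _preview is an assumed XDOpenGLPreView ivar on the view controller):

- (void)captureService:(XDCaptureService *)service outputSampleBuffer:(CMSampleBufferRef)sampleBuffer {
    // Pull the pixel buffer out of the sample buffer and hand it to the preview
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer) {
        [_preview renderPixelBuffer:pixelBuffer]; // _preview: assumed XDOpenGLPreView ivar
    }
}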
1. Create OpenGL context and render layers
+ (Class)layerClass {
    return [CAEAGLLayer class];
}

- (instancetype)initWithFrame:(CGRect)frame {
    if (self = [super initWithFrame:frame]) {
#if __IPHONE_OS_VERSION_MAX_ALLOWED >= 80000
        if ([UIScreen instancesRespondToSelector:@selector(nativeScale)])
        {
            self.contentScaleFactor = [UIScreen mainScreen].nativeScale;
        }
        else
#endif
        {
            self.contentScaleFactor = [UIScreen mainScreen].scale;
        }
        CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
        eaglLayer.opaque = YES;
        eaglLayer.drawableProperties = @{ kEAGLDrawablePropertyRetainedBacking : @(NO),
                                          kEAGLDrawablePropertyColorFormat : kEAGLColorFormatRGBA8 };
        _oglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        if (!_oglContext) {
            NSLog(@"Problem with OpenGL context.");
            return nil;
        }
    }
    return self;
}
CAEAGLLayer is a subclass of CALayer that can display arbitrary OpenGL graphics; it serves as our rendering layer. kEAGLDrawablePropertyRetainedBacking : @(NO) configures the rendering layer not to retain its contents after they have been displayed, so they cannot be reused; kEAGLDrawablePropertyColorFormat : kEAGLColorFormatRGBA8 configures the pixel format of the rendering layer; kEAGLRenderingAPIOpenGLES2 specifies OpenGL ES version 2.0.
2. Initialize the related objects, including the frame buffer, the render buffer, and the shader program; see the comments in the code:
- (BOOL)initializeBuffers {
    BOOL success = YES;

    // Disable depth testing; we don't need OpenGL to compare and update a depth buffer
    glDisable(GL_DEPTH_TEST);

    //////// Initialize the frame buffer
    // Create a frame buffer and get an ID for it, stored in _frameBuffer; 1 is the number requested
    glGenFramebuffers(1, &_frameBuffer);
    // Bind the frame buffer: wherever GL_FRAMEBUFFER is referenced later, it means _frameBuffer
    glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);

    //////// Create the render buffer and get an ID for it, stored in _colorBuffer; 1 is the number requested
    glGenRenderbuffers(1, &_colorBuffer);
    // Bind the render buffer: wherever GL_RENDERBUFFER is referenced later, it means _colorBuffer
    glBindRenderbuffer(GL_RENDERBUFFER, _colorBuffer);
    // Attach the render buffer's storage to the CAEAGLLayer
    [_oglContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
    // Query parameters of the currently bound render buffer: the target is GL_RENDERBUFFER,
    // the second argument is the name of the parameter to fetch, and the last is a pointer
    // to an integer that receives the value
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &_width);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &_height);
    // Attach the render buffer to the GL_COLOR_ATTACHMENT0 attachment point of the frame buffer,
    // so the whole data flow is connected: rendered data goes into the frame buffer, actually
    // flows into the render buffer, and the render buffer feeds the rendering layer for display
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _colorBuffer);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        NSLog(@"Failure with framebuffer generation");
        success = NO;
        goto bail;
    }

    // Create a texture cache, which we will need when creating textures
    CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, _oglContext, NULL, &_textureCache);
    if (err) {
        NSLog(@"Error at CVOpenGLESTextureCacheCreate %d", err);
        success = NO;
        goto bail;
    }

    // Create the shader program
    _program = glCreateProgram();
    // Load the vertex shader code
    GLuint verShader = [self loadShader:GL_VERTEX_SHADER withString:kPassThruVertex];
    if (verShader == 0) {
        NSLog(@"Error at verShader");
        success = NO;
        goto bail;
    }
    // Load the fragment shader code
    GLuint fraShader = [self loadShader:GL_FRAGMENT_SHADER withString:kPassThruFragment];
    if (fraShader == 0) {
        NSLog(@"Error at fraShader");
        success = NO;
        goto bail;
    }
    // Attach the vertex shader and fragment shader to the shader program
    glAttachShader(_program, verShader);
    glAttachShader(_program, fraShader);
    // There are two ways to map attribute indices to shader variables. The first is
    // glBindAttribLocation: before linking, we assign an index (typically starting at 0)
    // to each vertex attribute variable in the shader. The other is to specify the index
    // directly in the shader via the GLSL `layout` keyword, which would require changing
    // the vertex shader source.
    glBindAttribLocation(_program, ATTRIB_VERTEX, "position");
    glBindAttribLocation(_program, ATTRIB_TEXTUREPOSITON, "texturecoordinate");
    // Link the shader program
    glLinkProgram(_program);
    // The shaders can be deleted once they are attached and linked
    glDeleteShader(verShader);
    glDeleteShader(fraShader);

    // Look up the uniforms defined by the shaders so we can assign to them later
    _frame = glGetUniformLocation(_program, "videoframe");
    _offset = glGetUniformLocation(_program, "offset");
    _uMvpMatrix = glGetUniformLocation(_program, "uMvpMatrix");

bail:
    if (!success) {
        [self reset];
    }
    return success;
}
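The loadShader:withString: helper used above isn't shown in the original code; a minimal plausible implementation (my sketch, using only standard GL ES 2.0 calls) looks like this:

- (GLuint)loadShader:(GLenum)type withString:(const char *)shaderString {
    GLuint shader = glCreateShader(type);           // create the shader object
    glShaderSource(shader, 1, &shaderString, NULL); // attach the source code
    glCompileShader(shader);                        // compile it
    GLint compiled = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
    if (!compiled) {                                // log and fail on error
        GLchar log[512];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        NSLog(@"Shader compile error: %s", log);
        glDeleteShader(shader);
        return 0;
    }
    return shader;
}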
Let's start with a simple shader code example:
static const char * kPassThruVertex = _STRINGIFY(
    attribute vec4 position;
    attribute mediump vec4 texturecoordinate;
    varying mediump vec2 coordinate;
    void main() {
        gl_Position = position;
        coordinate = texturecoordinate.xy;
    }
);

static const char * kPassThruFragment = _STRINGIFY(
    varying highp vec2 coordinate;
    uniform sampler2D videoframe;
    void main() {
        gl_FragColor = texture2D(videoframe, coordinate);
    }
);
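The _STRINGIFY macro isn't defined in the snippet above; a common definition (an assumption here, not shown in the project) simply turns its argument into a C string literal:

#define _STRINGIFY(x) #x // turns the shader body into a const char * source string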
attribute: this qualifier exists only in vertex shaders and is used to receive per-vertex input. For example, position and texturecoordinate above are defined to receive the vertex position and texture coordinates. vec4 and vec2 are data types: four-component and two-component vectors respectively. mat4 is also a data type: a 4x4 matrix. varying: declares an output of the vertex shader and a corresponding input of the fragment shader; if the same varying is declared identically in both shaders, the fragment shader can read the (interpolated) value written by the vertex shader.
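Note that this pass-through fragment shader doesn't yet use the offset and uMvpMatrix uniforms we looked up in initializeBuffers. As a taste of where we're headed, one possible Douyin-style color-split fragment shader (my own sketch, assuming offset is a float animated from the CPU side; not the project's final code) could look like:

static const char * kShakeFragment = _STRINGIFY(
    varying highp vec2 coordinate;
    uniform sampler2D videoframe;
    uniform highp float offset; // assumption: updated each frame from the CPU side
    void main() {
        // Sample the channels at slightly shifted coordinates to produce the
        // color-split look characteristic of the shake effect
        highp vec4 shiftA = texture2D(videoframe, coordinate + vec2(offset, offset));
        highp vec4 shiftB = texture2D(videoframe, coordinate - vec2(offset, offset));
        highp vec4 base   = texture2D(videoframe, coordinate);
        gl_FragColor = vec4(shiftA.r, shiftB.g, base.b, 1.0);
    }
);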