In this article we will take a look at how you can write your own shaders in iOS using GLSL.

First, let's complete a simple case: using GLSL to render an image.

Vertex shader section

attribute vec4 position; 
attribute vec2 texturePos;
varying lowp vec2 varyingTexturePos;
void main() {
    varyingTexturePos = texturePos;
    gl_Position = position;
}

First let's review what a vertex shader does. It can perform custom per-vertex calculations, all kinds of transformations, lighting, or other vertex-based effects that the traditional fixed-function pipeline doesn't allow. But whatever cool stuff it ends up doing, its ultimate job is to compute the vertex position, so every vertex shader must assign the computed vertex coordinates to the built-in variable gl_Position.

This code is very simple, only seven lines. First, we define two attribute variables: a 4-component vector position representing the vertex coordinates, and a 2-component vector texturePos representing the texture coordinates. They use the attribute qualifier because vertex and texture coordinates vary from vertex to vertex; in other words, position and texturePos hold different values on each execution. Such variables must be declared with attribute, because only attribute-qualified variables can receive their values from the OpenGL vertex APIs or from a vertex buffer object (VBO). If that isn't clear, just remember: use attribute for per-vertex data such as vertex coordinates, texture coordinates, and normals. Also note that attributes can only be used in vertex shaders, as we covered in the basic syntax article.

Next, look at varyingTexturePos, qualified with varying. To understand this line, we must first understand what varying does. The varying qualifier provides the interface between the vertex shader and the fragment shader: in simple terms, a varying variable is passed on to the fragment shader. (Of course, it is not passed directly; the number of vertices and the number of fragments do not match, so the values are actually interpolated during primitive assembly and rasterization before reaching the fragment shader.) With this in mind, we know that varyingTexturePos is a texture coordinate to be passed to the fragment shader. So why define two texture coordinate variables? One is qualified with attribute to receive the per-vertex data from the VBO, and the other is qualified with varying to indicate that it will be passed on to the fragment shader.

Finally, look at the main function. Every shader has exactly one main function, which is its entry point. The texture coordinate texturePos, supplied from the external VBO, is assigned to the varying variable varyingTexturePos so that it carries a value when handed off to the fragment shader. The vertex coordinate supplied from the external VBO is then assigned directly to the built-in variable gl_Position.

The fragment shader section

uniform sampler2D colorMap;
varying lowp vec2 varyingTexturePos;
void main() {
    gl_FragColor = texture2D(colorMap, varyingTexturePos);
}

In the first line, we declare a uniform variable colorMap of type sampler2D, representing the texture sampler. The uniform qualifier is used to declare global variables whose values are the same across the entire primitive being processed; in other words, uniforms hold data that does not change during primitive processing. The texture sampler here is a good example: every fragment samples from the same texture unit.

Consider the sampler2D data type, which represents a 2D texture sampler. What exactly is a texture sampler? Simply put, it is a data type GLSL defines so that developers can pass texture objects into the fragment shader. Similar types exist for other texture dimensions, such as sampler1D and sampler3D. The value assigned to this variable is the index of a texture unit, not the GLuint texture identifier (texture name) itself.

We then see the line varying lowp vec2 varyingTexturePos; isn't it the same declaration as in the vertex shader? Yes: varyingTexturePos is the texture coordinate handed over from the vertex shader (after interpolation, as described above). Note that the declaration, including the variable name, must match the vertex shader's exactly, or you won't receive the texture coordinates.

The fragment shader's task is to compute the color of each fragment. There is only one assignment statement in main: gl_FragColor, a built-in variable, represents the fragment's color. texture2D is a built-in function that takes two parameters, the texture sampler and the texture coordinate, and returns the color of the texture at that coordinate. So in this example, each fragment's color is sampled from the texture and assigned to the built-in variable gl_FragColor.
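Once this basic pipeline works, the fragment shader is also where simple filters live. As a minimal sketch (not part of this example's project), a grayscale filter only changes how the sampled color is used, weighting R, G, and B by the standard luminance coefficients:

uniform sampler2D colorMap;
varying lowp vec2 varyingTexturePos;
void main() {
    lowp vec4 color = texture2D(colorMap, varyingTexturePos);
    // weight R/G/B by the standard luminance coefficients to approximate perceived brightness
    lowp float gray = dot(color.rgb, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(gray, gray, gray, color.a);
}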

How to use your own shaders to render images in iOS

As mentioned earlier, before you can draw with OpenGL you need a rendering context and a drawing surface, so let's set up those two pieces first.

Initialize the render context

This step is the same as before when using GLKit, so I won't repeat it here; if it is unfamiliar, see the earlier article The application of GLKit.

The specific code is as follows:

- (void)setupContext {
    self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
    [EAGLContext setCurrentContext:self.context];
}

Initialize the drawing surface

The drawing surface in GLKit is GLKView: Apple has already wrapped it for us, and we can use it directly. So what do we do without GLKView? Because Apple is the one platform that doesn't support EGL, it provides its own version, EAGL. We've already used EAGLContext from EAGL; it also provides a layer class for drawing surfaces, CAEAGLLayer, which some of you may have already played with. To initialize this layer, look at the following code:

+ (Class)layerClass {
    return [CAEAGLLayer class];
}

- (void)setupDrawableSurface {
    self.eaglLayer = (CAEAGLLayer *)self.layer;
    [self.eaglLayer setContentsScale:[UIScreen mainScreen].scale];
    //self.eaglLayer.drawableProperties = @{kEAGLDrawablePropertyRetainedBacking:@false,kEAGLDrawablePropertyColorFormat:kEAGLColorFormatRGBA8};
}

The code is simple; the one thing worth mentioning is the drawableProperties property, which specifies the characteristics of the drawable surface. It is a dictionary with two keys, kEAGLDrawablePropertyRetainedBacking and kEAGLDrawablePropertyColorFormat. The value for kEAGLDrawablePropertyRetainedBacking is a Boolean indicating whether the surface must retain the contents of a frame after it has been displayed; the default is NO, because retaining it carries a performance and memory cost. kEAGLDrawablePropertyColorFormat specifies the format of the drawable surface's color buffer; the default is kEAGLColorFormatRGBA8, meaning 8 bits each for R, G, B, and A.
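If you prefer to set these explicitly rather than rely on the defaults (as the commented-out line in the code above hints at), a minimal sketch would be:

    self.eaglLayer.drawableProperties = @{
        // NO: don't retain the displayed frame's contents (the default; cheaper in memory and time)
        kEAGLDrawablePropertyRetainedBacking : @(NO),
        // 8 bits per RGBA channel (also the default)
        kEAGLDrawablePropertyColorFormat : kEAGLColorFormatRGBA8
    };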

Clearing the buffers

We know that the final destination of the OpenGL rendering pipeline is the framebuffer, and that a framebuffer object is not actually a buffer itself but an aggregator object containing one or more attachment points to which the actual buffers are attached: the color attachment, depth attachment, stencil attachment, and so on, to which color buffers, depth buffers, and stencil buffers are attached respectively. In this case we just want to render an image, so we only need the color buffer; therefore we declare a color render buffer and a frame buffer. It is best to clean up these buffers before using them to make sure no stale data is left behind. You can skip this step if you are sure the buffers have not been used before. The code for clearing the color render buffer and frame buffer is as follows:

- (void)clearBuffers {
    glDeleteRenderbuffers(1, &_renderBuffer);
    self.renderBuffer = 0;
    glDeleteFramebuffers(1, &_frameBuffer);
    self.frameBuffer = 0;
}

Configure the render buffer and frame buffer

After cleaning up the dirty data, it’s time to configure these buffers as follows:

- (void)setupBuffers {
    glGenRenderbuffers(1, &_renderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, self.renderBuffer);
    [self.context renderbufferStorage:GL_RENDERBUFFER fromDrawable:self.eaglLayer];
    glGenFramebuffers(1, &_frameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, self.frameBuffer);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, self.renderBuffer);
}

Like ordinary buffers, the render buffer and frame buffer are created by generating a buffer identifier (buffer name) and binding it. The interesting call is - (BOOL)renderbufferStorage:(NSUInteger)target fromDrawable:(nullable id<EAGLDrawable>)drawable; which attaches the drawable self.eaglLayer to the GL_RENDERBUFFER target as the storage for the OpenGL render buffer. Here's what the official documentation says: to create a render buffer that can be displayed on screen, you bind the render buffer and then allocate shared storage for it by calling this method; the render buffer can later be displayed by calling presentRenderbuffer: only if its storage was allocated through this method.

The frame buffer is an aggregate object that manages various buffer objects, so how do we attach the render buffer to it? Through glFramebufferRenderbuffer, which attaches the render buffer to the GL_COLOR_ATTACHMENT0 attachment point. Once this is done, the frame buffer, render buffer, and drawing surface are all connected, and we can begin drawing.
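For illustration only: if this example also needed depth testing, a depth render buffer would be attached to the frame buffer's depth attachment point in the same way. A sketch, assuming it runs right after setupBuffers while the color render buffer is still bound (none of this is needed for rendering a flat image):

    // query the color render buffer's size (it must still be bound at this point)
    GLint width = 0, height = 0;
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);
    GLuint depthBuffer;
    glGenRenderbuffers(1, &depthBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, depthBuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthBuffer);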

Preparing to draw

With these prerequisites in place, we can get to the topic of this section: how do we use custom shaders in iOS? A shader is a piece of code that performs a specific function, and that code must be compiled and linked into an executable program before it can run. So to use our custom shaders in iOS, we first compile each shader and then link them into a program. See the following code:

- (GLuint)shaderWithType:(GLenum)type path:(NSString *)path {
    GLuint shader = glCreateShader(type);
    NSString *shaderContent = [NSString stringWithContentsOfFile:path encoding:NSUTF8StringEncoding error:nil];
    const char *content = [shaderContent UTF8String];
    glShaderSource(shader, 1,&content , NULL);
    glCompileShader(shader);
    GLint status;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
    if(status == GL_FALSE) {
        char message[512];
        glGetShaderInfoLog(shader, sizeof(message), NULL, message);
        NSLog(@"shader compile error: %@",[NSString stringWithUTF8String:message]);
    }
    return shader;
}
- (void)createProgram {
    GLuint program = glCreateProgram();
    self.program = program;
    NSString *vertexShaderPath = [[NSBundle mainBundle] pathForResource:@"shader" ofType:@"vsh"];
    NSString *fragShaderPath = [[NSBundle mainBundle] pathForResource:@"shader" ofType:@"fsh"];
    GLuint vertexShader = [self shaderWithType:GL_VERTEX_SHADER path:vertexShaderPath];
    GLuint fragShader = [self shaderWithType:GL_FRAGMENT_SHADER path:fragShaderPath];
    glAttachShader(program, vertexShader);
    glAttachShader(program, fragShader);
    glLinkProgram(program);
    GLint status;
    glGetProgramiv(program, GL_LINK_STATUS, &status);
    if(status == GL_FALSE) {
        char message[512]; // note: a char array, not an array of char pointers
        glGetProgramInfoLog(program, sizeof(message), NULL, message);
        NSLog(@"program link error: %@",[NSString stringWithUTF8String:message]);
    } else {
        NSLog(@"program link success");
    }
    glUseProgram(program);
    glDeleteShader(vertexShader);
    glDeleteShader(fragShader);
}

The first method's job is to produce a compiled shader. It takes two parameters: the first is the shader type (vertex shader or fragment shader), and the second is the path of the file containing the shader source. In this method we first create a shader of the corresponding type with glCreateShader, whose parameter is a GLenum (only GL_VERTEX_SHADER or GL_FRAGMENT_SHADER may be passed, for vertex and fragment shaders respectively). At this point, however, we only have an empty shader with no code in it, so we call glShaderSource to hand it the GLSL source we wrote. Its signature is void glShaderSource(GLuint shader, GLsizei count, const GLchar *const *string, const GLint *length); and its parameters mean the following (see the sketch after this list):

  • shader: the handle of the shader object
  • count: the number of source strings; a shader may be built from multiple source strings, but each shader can have only one main function
  • string: a pointer to an array holding count shader source strings
  • length: a pointer to an array of count integers holding the length of each source string; NULL means every string is null-terminated
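To make count, string, and length concrete, here is a hypothetical sketch that builds a shader from two source strings; OpenGL concatenates them in order, so this effectively prepends a #define to the file's contents (shaderFileContents is assumed to have been loaded elsewhere):

    const GLchar *sources[2];
    sources[0] = "#define USE_EFFECT 1\n";  // injected preamble
    sources[1] = shaderFileContents;        // hypothetical: the .vsh/.fsh file contents
    // count is 2; length is NULL, meaning both strings are null-terminated
    glShaderSource(shader, 2, sources, NULL);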

Once that is done, you can call glCompileShader to compile the shader. But unlike building an iOS app in Xcode, where you can watch the compiler's progress and error messages, here we are compiling from code and wouldn't otherwise know whether it even succeeded. So OpenGL gives us glGetShaderiv and glGetShaderInfoLog. The three parameters of void glGetShaderiv(GLuint shader, GLenum pname, GLint *params) have the following meanings:

  • shader: the handle of the shader object to query
  • pname: the information parameter to fetch; one of GL_COMPILE_STATUS / GL_DELETE_STATUS / GL_INFO_LOG_LENGTH / GL_SHADER_SOURCE_LENGTH / GL_SHADER_TYPE
  • params: a pointer to the integer location where the query result is stored

void glGetShaderInfoLog(GLuint shader, GLsizei maxLength, GLsizei *length, GLchar *infoLog)

  • shader: the handle of the shader object whose info log to retrieve
  • maxLength: the size of the buffer used to store the info log
  • length: the length of the info log written, excluding the NULL terminator; may be NULL if you don't need the length
  • infoLog: a pointer to the character buffer where the info log is stored

These two interfaces let us know if the compilation was successful, and if there was an error, what the error message was.

Now that we can compile our custom shaders from Objective-C code, linking them into a program is the job of the second method. There we first create a program with glCreateProgram, then call the first method twice to get the compiled vertex and fragment shaders, attach each of them to the program with glAttachShader, and finally link the program with glLinkProgram. Of course, linking may also fail, but the API for checking link status and error messages mirrors the one for compiling shaders. Once the link succeeds, we can activate the program with glUseProgram.

Note that the program runs on the GPU, while the rest of our Objective-C code runs on the CPU. So how do we pass data to the program? Take a look at this code:

- (void)setupCoordData {
    // Each vertex: x, y, z position followed by s, t texture coordinates (5 floats)
    GLfloat vertexData[] = {
        -0.65f,  0.414f, 0.0f,    0.0f, 1.0f,   // top left
        -0.65f, -0.414f, 0.0f,    0.0f, 0.0f,   // bottom left
         0.65f, -0.414f, 0.0f,    1.0f, 0.0f,   // bottom right

         0.65f, -0.414f, 0.0f,    1.0f, 0.0f,   // bottom right
         0.65f,  0.414f, 0.0f,    1.0f, 1.0f,   // top right
        -0.65f,  0.414f, 0.0f,    0.0f, 1.0f,   // top left
    };
    GLuint buffer;
    glGenBuffers(1, &buffer);
    glBindBuffer(GL_ARRAY_BUFFER, buffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData), vertexData, GL_STATIC_DRAW);
    int vertexCoordPosition = glGetAttribLocation(self.program, "position");
    glEnableVertexAttribArray(vertexCoordPosition);
    glVertexAttribPointer(vertexCoordPosition, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 5, 0);
    int textureCoordPosition = glGetAttribLocation(self.program, "texturePos");
    glEnableVertexAttribArray(textureCoordPosition);
    glVertexAttribPointer(textureCoordPosition, 2, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 5, (GLfloat *)NULL + 3);
}

This code should look familiar; we wrote similar code earlier when using GLKit. We first define a vertex array storing vertex coordinates and texture coordinates. (The values 0.65 and 0.414 are just to keep the picture from being stretched; my image is a 1300*828 picture found on the internet.) Then we generate a buffer, bind it, and copy the vertex data from the CPU into GPU memory. This pattern has appeared many times before; if it is unfamiliar, take a look at the earlier articles.

After copying the coordinate data (vertex coordinates and texture coordinates) into GPU memory, we still need to tell the program how to read it. In the vertex shader we defined two attribute variables, attribute vec4 position and attribute vec2 texturePos. For variables like these we can use glGetAttribLocation to get their location in the program, then use glVertexAttribPointer to specify how the attribute at that location reads data from the buffer. Note that glEnableVertexAttribArray must be called first to open the attribute's data channel, because for efficiency attribute arrays are disabled by default.

GLKit's attribute locations are fixed, GLKVertexAttribPosition and GLKVertexAttribTexCoord0; after all, its program is fixed, so the attribute names must be fixed as well. Our custom shader gives us more freedom; just make sure that the second parameter of glGetAttribLocation exactly matches the attribute variable's name in the shader.

At this point the GLKit design becomes quite clear, for example the GLKVertexAttrib enumeration:

typedef NS_ENUM(GLint, GLKVertexAttrib)
{
    GLKVertexAttribPosition,
    GLKVertexAttribNormal,
    GLKVertexAttribColor,
    GLKVertexAttribTexCoord0,
    GLKVertexAttribTexCoord1
};

Its five values represent vertex coordinates, normals, vertex colors, and texture coordinate sets 0 and 1, respectively.

Having supplied data for the attribute variables in the shaders, we next supply data for the uniform variables. In this example there is only one, the uniform sampler2D colorMap in the fragment shader, which represents the texture sampler. See the following code:

- (void)setupTextureData {
    UIImage *image = [UIImage imageNamed:@"meinv"];
    CGImageRef imageRef = image.CGImage;
    size_t width = CGImageGetWidth(imageRef);
    size_t height = CGImageGetHeight(imageRef);
    GLubyte *imageData = calloc(width * height * 4, sizeof(GLubyte));
    CGContextRef contextRef = CGBitmapContextCreate(imageData, width, height, 8 , width * 4, CGImageGetColorSpace(imageRef), CGImageGetBitmapInfo(imageRef));
    CGContextTranslateCTM(contextRef, 0, height);
    CGContextScaleCTM(contextRef, 1, -1);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(contextRef);

    glBindTexture(GL_TEXTURE_2D, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); // 2D textures wrap on S and T (R is for 3D textures)
    GLsizei w = (GLsizei)width;
    GLsizei h = (GLsizei)height;
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData);
    free(imageData); // the pixel data has been uploaded to the GPU, so the CPU-side copy can be released
    
    int samplerLocation =  glGetUniformLocation(self.program, "colorMap");
    glUniform1i(samplerLocation, 0);
}

This code actually divides into three parts: 1. decompress the image; 2. load the texture; 3. pass the texture unit into the uniform variable colorMap in the program.

First look at part one, image decompression. Here we use the Core Graphics API to decompress the image. Note that the Core Graphics coordinate system differs from UIKit's: in Core Graphics the origin is at the bottom left of the screen and the X and Y axes extend right and up, while in UIKit the origin is at the top left and the axes extend right and down. So before drawing the image we call CGContextTranslateCTM(contextRef, 0, height) to move the origin from the top left to the bottom left, and then CGContextScaleCTM(contextRef, 1, -1) to flip the Y axis from pointing down to pointing up. This is what we call the texture flip; if you remove these two lines you will find your texture is loaded upside down.
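An alternative worth knowing (a sketch, assuming it runs before glBufferData in setupCoordData): instead of flipping the bitmap context, you can flip the t component of every texture coordinate in the vertex data, which samples the texture upside down and cancels out the inversion:

    // each vertex occupies 5 floats (x, y, z, s, t); index 4 is the t coordinate
    for (int i = 0; i < 6; i++) {
        vertexData[i * 5 + 4] = 1.0f - vertexData[i * 5 + 4];
    }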

Now for part two, loading the texture. Before calling glTexImage2D to load the texture, we need to bind the texture and set the texture parameters. For this part, you can refer to my earlier article, Texture.

Finally, we can pass the texture data into the program. As with attribute variables, we first need the uniform variable's location in the program before setting its value, which we get with glGetUniformLocation. We then call glUniform1i to pass the previously bound texture unit into the uniform variable colorMap in the program.
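One more note: the code above binds texture name 0, the default texture object, which happens to work for a single texture. The more conventional pattern (a sketch, not what this sample does) generates a texture name and selects the texture unit explicitly; glUniform1i still receives the unit index, not the texture name:

    GLuint texture;
    glGenTextures(1, &texture);             // ask OpenGL for a new texture name
    glActiveTexture(GL_TEXTURE0);           // make texture unit 0 active
    glBindTexture(GL_TEXTURE_2D, texture);  // bind our texture to that unit
    // ...set the filter/wrap parameters and call glTexImage2D as above...
    glUniform1i(samplerLocation, 0);        // 0 here is the texture UNIT, matching GL_TEXTURE0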

At this point the preparation stage is complete: we compiled the shaders, linked them into a program, and fed the program the data it needs. The combined code is as follows:

- (void)prepareDraw {
    [self createProgram];
    [self setupCoordData];
    [self setupTextureData];
}

Draw

There isn't much to say about this last stage, so let's get right to the code:

- (void)draw {
    glClearColor(0.3f, 0.4f, 0.5f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    float scale = [UIScreen mainScreen].scale;
    glViewport(self.bounds.origin.x * scale ,self.bounds.origin.y * scale, self.bounds.size.width * scale, self.bounds.size.height * scale);
    glDrawArrays(GL_TRIANGLES, 0, 6);
    [self.context presentRenderbuffer:GL_RENDERBUFFER];
}

Note: don't forget to set the viewport before drawing, and don't forget to call presentRenderbuffer: after drawing to present the render buffer's contents on screen.
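For completeness, here is one possible way to wire all of these steps together in the hosting view. The method names match the ones defined above; layoutSubviews is just one convenient trigger point, not the only arrangement:

- (void)layoutSubviews {
    [super layoutSubviews];
    [self setupContext];          // 1. rendering context
    [self setupDrawableSurface];  // 2. drawing surface (CAEAGLLayer)
    [self clearBuffers];          // 3. delete any stale buffers
    [self setupBuffers];          // 4. render buffer + frame buffer
    [self prepareDraw];           // 5. program, coordinate data, texture
    [self draw];                  // 6. draw and present
}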

Conclusion

This example is very simple, just loading an image, but it shows how to write custom shaders in GLSL and use them in an iOS project, which lays a good foundation for writing complex filters with GLSL in iOS!

Related articles:
  1. OpenGL Lesson 1: Explaining the terminology
  2. OpenGL Lesson 2: The common fixed storage shaders
  3. OpenGL Lesson 3: Matrix transformations and coordinate systems
  4. OpenGL Lesson 4: Depth
  5. OpenGL Lesson 5: Front and back face culling
  6. OpenGL Lesson 6: Clipping and blending
  7. OpenGL Lesson 7: Texture
  8. OpenGL Lesson 8: A supplementary case
  9. A first look at OpenGL ES
  10. The application of GLKit
  11. A first look at GLSL
  12. How to use custom shaders written in GLSL in iOS
  13. The precision qualifiers in GLSL
  14. Use OpenGL to write a rotating 3D photo album for your girlfriend