This article describes how to use OpenGL ES to implement a "long legs" effect, that is, locally stretching an image. Working through this example will deepen our understanding of the texture rendering process, and we will highlight a new technique: "render to texture".

Warning: This is an advanced tutorial. Please make sure you are familiar with the concepts of OpenGL ES texture rendering before reading it, otherwise you may find it hard to follow.

Note: OpenGL ES in the following refers to OpenGL ES 2.0.

1. Effect display

First, let's look at the final effect. In simple terms, this feature performs local stretching of an image; logically, it is not complicated.

2. The approach

1. How to achieve stretching

Recall that when we want to render an image, we split it into two triangles, as follows:

If we want to stretch the image, we can simply change the Y values of the four vertices.

So what if we just want to stretch the middle part of the image?

In fact, the answer is easy to see: we just need to change the way the image is split. As shown below, we split the image into 6 triangles, that is, 3 small rectangles. Then all we need to do is stretch the small rectangle in the middle.
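To make the split concrete, here is a minimal sketch of the two vertex layouts, drawn as triangle strips (the draw calls below are an assumption on my part; the article shows its own rendering code later):

// Plain quad: 4 vertices as a triangle strip = 2 triangles
// order: LT, RT, LB, RB
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

// Split quad: 8 vertices as a triangle strip = 6 triangles,
// i.e. 3 small rectangles stacked vertically
// order: LT, RT, centerLT, centerRT, centerLB, centerRB, LB, RB
glDrawArrays(GL_TRIANGLE_STRIP, 0, 8);

// Only the four "center" vertices move when the middle rectangle is stretched.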

2. How to achieve repeated adjustment

Looking at the animated demo above, we can see that the second operation (compression) is based on the result of the first operation (stretching). Therefore, at each step we need to take the result of the previous step as the new original image and adjust it again.

The "original image" here is a texture. In other words, each adjustment requires creating a new texture.

This step is the focus of this article and will be implemented through the “render to texture” method, which we will describe in more detail later.
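In pseudocode, one adjustment round looks roughly like this (a sketch; the property and method names are illustrative, not the article's actual API):

// 1. The current texture is the "original image" for this round
GLuint inputTexture = self.currentTexture;
// 2. Render the stretched vertices into a brand-new texture (see section 5)
GLuint outputTexture = [self renderToTextureWithInput:inputTexture];
// 3. The new texture becomes the input of the next adjustment
self.currentTexture = outputTexture;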

3. Why use OpenGL ES

Some people might say: this feature looks unremarkable; even without OpenGL ES, it could be implemented in other ways.

Indeed, on iOS we typically use CoreGraphics for drawing. With CoreGraphics we could likewise split and draw the original image following the ideas above, and re-stitch the pieces on every repeated adjustment. Visually, the same feature can be achieved.

However, since CoreGraphics drawing relies on the CPU, we would have to redraw constantly while adjusting the stretch area; CPU usage would inevitably spike, causing the interface to stutter. OpenGL ES has no such problem.

4. Stretch logic

From the above we know that we need 8 vertices to render the image. The key to the stretch logic is computing those vertex coordinates, then re-rendering with the result.
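The method below writes into self.vertices, an array of 8 position/texture-coordinate pairs. A vertex struct consistent with that usage might look like this (the struct and property names are assumptions, not taken from the source):

#import <GLKit/GLKit.h>

typedef struct {
    GLKVector3 positionCoord; // x, y, z in normalized device coordinates
    GLKVector2 textureCoord;  // u, v in the 0~1 texture space
} SenceVertex;

@property (nonatomic, assign) SenceVertex *vertices; // array of 8 vertices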

The key steps to compute vertices are as follows:

/**
 Based on the current control size and texture size, calculate the initial texture coordinates

 @param size the original texture size
 @param startY the starting ordinate of the middle region, 0~1
 @param endY the ending ordinate of the middle region, 0~1
 @param newHeight the height of the new middle region
 */
- (void)calculateOriginTextureCoordWithTextureSize:(CGSize)size
                                            startY:(CGFloat)startY
                                              endY:(CGFloat)endY
                                         newHeight:(CGFloat)newHeight {
    CGFloat ratio = (size.height / size.width) *
                    (self.bounds.size.width / self.bounds.size.height);
    CGFloat textureWidth = self.currentTextureWidth;
    CGFloat textureHeight = textureWidth * ratio;
    
    // How much the height changes after stretching
    CGFloat delta = (newHeight - (endY - startY)) * textureHeight;
    
    // Determine whether the maximum value is exceeded
    if (textureHeight + delta >= 1) {
        delta = 1 - textureHeight;
        newHeight = delta / textureHeight + (endY -  startY);
    }
    
    // Texture vertices
    GLKVector3 pointLT = {-textureWidth, textureHeight + delta, 0};   // top left corner
    GLKVector3 pointRT = {textureWidth, textureHeight + delta, 0};    // top right corner
    GLKVector3 pointLB = {-textureWidth, -textureHeight - delta, 0};  // bottom left corner
    GLKVector3 pointRB = {textureWidth, -textureHeight - delta, 0};   // bottom right corner
    
    // The four vertices of the middle rectangle
    CGFloat startYCoord = MIN(-2 * textureHeight * startY + textureHeight, textureHeight);
    CGFloat endYCoord = MAX(-2 * textureHeight * endY + textureHeight, -textureHeight);
    GLKVector3 centerPointLT = {-textureWidth, startYCoord + delta, 0};  // top left corner
    GLKVector3 centerPointRT = {textureWidth, startYCoord + delta, 0};   // top right corner
    GLKVector3 centerPointLB = {-textureWidth, endYCoord - delta, 0};    // bottom left corner
    GLKVector3 centerPointRB = {textureWidth, endYCoord - delta, 0};     // bottom right corner
    
    // The top two vertices of the texture
    self.vertices[0].positionCoord = pointLT;
    self.vertices[0].textureCoord = GLKVector2Make(0, 1);
    self.vertices[1].positionCoord = pointRT;
    self.vertices[1].textureCoord = GLKVector2Make(1, 1);
    // The four vertices of the middle region
    self.vertices[2].positionCoord = centerPointLT;
    self.vertices[2].textureCoord = GLKVector2Make(0, 1 - startY);
    self.vertices[3].positionCoord = centerPointRT;
    self.vertices[3].textureCoord = GLKVector2Make(1, 1 - startY);
    self.vertices[4].positionCoord = centerPointLB;
    self.vertices[4].textureCoord = GLKVector2Make(0, 1 - endY);
    self.vertices[5].positionCoord = centerPointRB;
    self.vertices[5].textureCoord = GLKVector2Make(1, 1 - endY);
    // The bottom two vertices of the texture
    self.vertices[6].positionCoord = pointLB;
    self.vertices[6].textureCoord = GLKVector2Make(0, 0);
    self.vertices[7].positionCoord = pointRB;
    self.vertices[7].textureCoord = GLKVector2Make(1, 0);
}
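As a usage sketch, stretching the middle third of the image to twice its original height might be invoked like this (a hypothetical call; the texture size and region values are made up):

// Stretch the 0.33~0.66 region of a 1000 x 1500 image to twice its height
[self calculateOriginTextureCoordWithTextureSize:CGSizeMake(1000, 1500)
                                          startY:0.33
                                            endY:0.66
                                       newHeight:0.66];
// self.vertices now holds the 8 recalculated vertices, ready to re-render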

5. Render to texture

As mentioned above, we need to create a new texture for each adjustment and use it for the next adjustment.

To preserve resolution, instead of reading back the frame cache used for on-screen rendering, we take the "render to texture" route and create a new texture with the same width as the original.

Why is that?

Say we have a 1000 × 1000 image and the on-screen control is 100 × 100; then the render cache behind the on-screen texture is only 100 × 100 (ignoring screen density). If we read back the screen rendering at this point, the best we could get is 100 × 100 pixels.

This will cause the image resolution to drop, so we will use a method that preserves the original resolution, namely “render to texture”.

Until now, we have rendered textures directly to the screen, with key steps like this:

GLuint renderBuffer; // render cache
GLuint frameBuffer;  // frame cache
    
// Bind render cache to output layer
glGenRenderbuffers(1, &renderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, renderBuffer);
[self.context renderbufferStorage:GL_RENDERBUFFER fromDrawable:layer];
    
// Bind render cache to frame cache
glGenFramebuffers(1, &frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
                          GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER,
                          renderBuffer);

We generate a render cache, attach it to the frame cache's GL_COLOR_ATTACHMENT0 color attachment, and bind the output layer to the current render cache through the context.
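For completeness, a typical on-screen pass after this setup looks roughly like the following sketch (the viewport size and draw call are assumptions; presentRenderbuffer: is the standard EAGL call that hands the finished render cache to Core Animation):

glViewport(0, 0, drawableWidth, drawableHeight); // match the layer's drawable size
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 8);
[self.context presentRenderbuffer:GL_RENDERBUFFER];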

In fact, if we don't need to display the rendering result on screen, we can also render directly into another texture. More interestingly, that result can itself be used as a normal texture. This is the basis of our repeated-adjustment feature.

Specific operations are as follows:

// Generate frame cache and mount render cache
GLuint frameBuffer;
GLuint texture;
    
glGenFramebuffers(1, &frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
    
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newTextureWidth, newTextureHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// Set non-mipmap filters; without them, a mipmap-less ES 2.0 texture is incomplete when sampled later
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);

By comparison, we can see that a texture takes the place of the render cache: it is likewise attached to GL_COLOR_ATTACHMENT0, but there is no layer to bind.

In addition, we can give the new texture a size that is not limited by the size of the on-screen control, which is why the new texture can retain the original resolution.

At this point, the rendered result is stored in the texture, which can be used as a normal texture.
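Putting it together, one off-screen pass might look like this sketch (the viewport sizes and draw call are assumptions):

// Render into the new texture instead of the screen
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
glViewport(0, 0, newTextureWidth, newTextureHeight); // full texture size, not the control size
glClear(GL_COLOR_BUFFER_BIT);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 8);

// The result can now be sampled like any other texture in the next pass
glBindTexture(GL_TEXTURE_2D, texture);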

6. Save the results

When we are satisfied with the adjusted image, we need to save it. There are two steps: first, generate the final texture as described above; second, convert that texture into an image.

The second step is implemented with glReadPixels, which reads pixel data out of the current frame cache. Straight to the code:

// Returns a UIImage for the current texture; bind the corresponding frame cache before calling
- (UIImage *)imageFromTextureWithWidth:(int)width height:(int)height {
    int size = width * height * 4;
    GLubyte *buffer = malloc(size);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, size, NULL);
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
    CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
    
    // At this point imageRef is upside down, so redraw it with Core Graphics to flip it
    UIGraphicsBeginImageContext(CGSizeMake(width, height));
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    
    // Release the Core Graphics objects to avoid leaks
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    free(buffer);
    return image;
}

So now that we’ve got our UIImage object, we’re ready to save it to our album.
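As a final sketch, the save flow might be wired up like this (hypothetical variable names; UIImageWriteToSavedPhotosAlbum is the standard UIKit call, and error handling is omitted):

// Bind the frame cache that holds the final texture, then read it back
glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
UIImage *result = [self imageFromTextureWithWidth:newTextureWidth
                                           height:newTextureHeight];
UIImageWriteToSavedPhotosAlbum(result, nil, NULL, NULL);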

The source code

Check out the full code on GitHub.

References

  • Use OpenGL in iOS to enhance functionality
  • Learn rendering to textures in OpenGL ES
