OpenGL ES introduction

OpenGL ES (OpenGL for Embedded Systems) is an advanced 3D graphics application programming interface (API) for handheld and embedded systems. OpenGL ES is currently the dominant graphics API on smartphones and is supported on iOS, Android, BlackBerry, bada, Linux, and Windows.

That said, as iOS developers we should of course also refer to the official Apple documentation for OpenGL ES.

The Open Graphics Library (OpenGL) is used for visualizing 2D and 3D data. It is a multipurpose open-standard graphics library that supports applications for 2D and 3D digital content creation, mechanical and architectural design, virtual prototyping, flight simulation, video games, and more. You use OpenGL to configure a 3D graphics pipeline and submit data to it. Vertices are transformed and lit, assembled into primitives, and rasterized to create a 2D image. OpenGL is designed to translate function calls into graphics commands that can be sent to underlying graphics hardware. Because this underlying hardware is dedicated to processing graphics commands, OpenGL drawing is typically very fast. OpenGL for Embedded Systems (OpenGL ES) is a simplified version of OpenGL that eliminates redundant functionality to provide a library that is both easier to learn and easier to implement in mobile graphics hardware.


Since OpenGL ES is a simplified version of OpenGL, it is necessary to understand the basic knowledge of OpenGL before learning OpenGL ES. Therefore, it is recommended that you first read the previous article on OpenGL.

OpenGL ES rendering process analysis



As shown above, the entire rendering process is as follows:

  1. The client passes geometry data (vertex coordinates, texture coordinates, transformation matrices, and so on) and image data (textures) to the vertex shader through the attribute and uniform channels.
  2. The vertex shader applies the transformation matrices to the incoming vertex coordinates to produce the final vertex positions; if lighting is involved, it can also compute a per-vertex light color from the lighting equations.
  3. Note that the output of the vertex shader does not go straight to the fragment shader; it first passes through primitive assembly, where clipping, perspective division, and the viewport transformation are performed, and then through rasterization before reaching the fragment shader.
  4. Rasterization is the process of converting primitives into a set of two-dimensional fragments, which are then processed by the fragment shader. These fragments correspond to the pixels that can be drawn on the screen. The job of the fragment shader is to fill each fragment with a color, which can be a color passed in through an attribute channel, a texel sampled from a texture image (the texture color at a given texture coordinate), or a color computed in the shader itself (mixing, and so on).
  5. Finally, the results reach the framebuffer stage, where transparency, stenciling, depth testing, blending, and so on take place. Note that blending here is slightly different from mixing colors in the fragment shader: it combines the newly generated fragment color with the color value already stored in the framebuffer.


EGL and EAGL

OpenGL ES commands require a rendering context and a drawing surface to complete the drawing of a graphics image. The rendering context stores the state of OpenGL ES. The drawing surface is the surface to which primitives are drawn; it specifies the types of buffers required for rendering, such as the color buffer, depth buffer, and stencil buffer.

However, OpenGL ES does not provide an API for creating a rendering context or for connecting that context to the native windowing system. EGL is the interface between Khronos rendering APIs (such as OpenGL ES) and the native window system. Because every windowing system is defined differently, EGL provides a basic opaque type, EGLDisplay, that encapsulates all system dependencies and interfaces to the native window system. iOS, however, supports OpenGL ES but not EGL; Apple provides its own implementation of the EGL API, called EAGL.

Because OpenGL ES is a C-based API, it is portable and widely supported, and as a C API it integrates seamlessly with Objective-C Cocoa Touch applications. However, the OpenGL ES specification does not define a windowing layer; instead, the hosting operating system must provide functions to create an OpenGL ES rendering context, which receives commands, and a framebuffer, where the results of any drawing commands are written. So using OpenGL ES on iOS requires using iOS classes to set up and present the drawing surface, and using the corresponding platform APIs to render its content.

EAGLContext 

EAGLContext is Apple's rendering context for OpenGL ES. It manages the state information, commands, resources, and so on needed to render graphics with OpenGL ES. In an iOS application, each thread maintains a current context. When the application calls an OpenGL ES API, that call operates on the thread's current context (changing the state, commands, resources, and so on that it manages).

You can set the current context by calling the setCurrentContext: method of the EAGLContext class, and you can get a thread's current context by calling the currentContext method of the EAGLContext class. In addition, you choose which version of the OpenGL ES API to use when creating and initializing the EAGLContext object. To create an OpenGL ES 3.0 context, initialize it as follows:

EAGLContext* myContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
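As a minimal sketch (assuming myContext was created as above), you can then make this context current on the calling thread and read it back:

[EAGLContext setCurrentContext:myContext];            // make myContext the thread's current context
EAGLContext *current = [EAGLContext currentContext];  // read back the thread's current context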

If the device does not support the OpenGL ES version you requested, the initWithAPI: method returns nil. To support multiple versions of the OpenGL ES API, first try to initialize a rendering context for the newest version; if the returned object is nil, initialize a context for the older version instead. For example:

EAGLContext* CreateBestEAGLContext()
{
   EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
   if (context == nil) {
      context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
   }
   return context;
}

The API property of the context indicates the version of OpenGL ES that the context supports. You should test the context's API property and use it to choose the correct rendering path. A common pattern for implementing this behavior is to create a class for each rendering path; the application tests the context once during initialization and creates the appropriate renderer.
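A minimal sketch of this pattern is shown below; the renderer classes OpenGLES3Renderer and OpenGLES2Renderer are hypothetical names used only for illustration:

EAGLContext *context = CreateBestEAGLContext();
id renderer;
if (context.API == kEAGLRenderingAPIOpenGLES3) {
    renderer = [[OpenGLES3Renderer alloc] init];   // render path that uses ES 3.0 features
} else {
    renderer = [[OpenGLES2Renderer alloc] init];   // fallback render path for ES 2.0
}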

EAGLSharegroup

Although we say that the EAGLContext manages the state of OpenGL ES, that state is not managed by the EAGLContext directly, but by its EAGLSharegroup. In other words, the state of OpenGL ES is created and maintained by EAGLSharegroup objects; each EAGLContext contains an EAGLSharegroup object and delegates the maintenance of OpenGL ES state to it. The relationship between the three is shown below:



We all know that resources are scarce on mobile devices, and creating copies of the same content in multiple contexts is a luxury; the device's graphics resources are used far better when common resources are shared. This is where EAGLSharegroup becomes very useful. When multiple contexts are associated with the same EAGLSharegroup, OpenGL ES objects created by any of those contexts are available in all of them. Note: all contexts that share the same sharegroup must be initialized with the same version of the OpenGL ES API.
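For instance, a minimal sketch of creating a second context that shares resources with an existing one:

EAGLContext *firstContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
// The second context reuses the first context's sharegroup, so textures, buffers, and other
// OpenGL ES objects created in either context are visible in both. Both use the same API version.
EAGLContext *secondContext = [[EAGLContext alloc] initWithAPI:firstContext.API
                                                   sharegroup:firstContext.sharegroup];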

When a sharegroup is shared by multiple contexts, it is our responsibility to manage changes to OpenGL ES object state. The rules are:

  1. An application may access an object from multiple contexts, but it must make sure the object is not modified by two contexts at the same time.
  2. While an object is being modified by a command issued to one context, the object must not be read or modified in any other context.
  3. After an object has been modified, all contexts that bind the object must rebind it to see the changes. If a context references the object before binding it, the contents of the object are undefined.

GLKit framework Overview

The GLKit framework is designed to simplify application development based on OpenGL/OpenGL ES, and its introduction sped up OpenGL/OpenGL ES application development. GLKit provides math libraries, background texture loading, pre-created shader effects, and standard views and view controllers that implement the rendering loop.

The GLKit framework provides functionality and classes that reduce the amount of work required to create new shader-based applications, or to support existing applications that rely on the fixed-function pipeline provided by earlier versions of OpenGL ES or OpenGL.

GLKView provides the drawing surface (a view); GLKViewController (an extension of the standard UIKit view controller) is used to manage and render the contents of a drawing view.

Also note that although Apple has deprecated OpenGL ES (the system's underlying rendering has moved from OpenGL ES to Metal), OpenGL ES is a C-based API that integrates seamlessly with Objective-C Cocoa Touch applications, so iOS developers can continue to use it. Using the GLKit API will therefore produce deprecation warnings, but they do not affect the rendering results.

GLKit basic process of rendering graphics

When using GLKit to render graphics, we first create a GLKView as the drawing surface and set its basic OpenGL ES related parameters, such as the render buffer formats. Then we call the relevant GLKit APIs to set the graphics parameters we need (color, texture, vertex coordinates, and so on) and perform the rendering; finally the result is presented on the phone screen. This textual description may be a little abstract; the flow is shown below:

                     

A framebuffer is a two-dimensional array of pixels; each memory cell corresponds to one pixel on the screen, and the entire framebuffer corresponds to a single image, the current screen image. A framebuffer usually includes a color buffer, a depth buffer, a stencil buffer, and an accumulation buffer. These buffers may live in one memory area or be separate, depending on the hardware. Pixel data (called fragments) must pass a series of tests before it can be written to the framebuffer; if a fragment fails one of these tests, no further tests or operations are performed on it. These tests and operations are: fragment in, scissor test, alpha test, stencil test, depth test, blending, dithering, logical operations, write to the framebuffer. Because this series of operations acts on the output of the fragment shader (the fragments), they are also called per-fragment operations.

The framebuffer can simply be understood as the place where drawing results are stored. Once you understand what the framebuffer is, the figure above is easy to follow. First, we set up the framebuffer formats, including the color buffer, depth buffer, and so on; different formats may produce different visual results. Second, we implement the drawing operations in code, and the results are stored temporarily in the framebuffer. Finally, once an image has been drawn, it can be presented on the device screen.
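As a small, hedged illustration of the per-fragment stages described above, the depth test and blending are enabled with ordinary OpenGL ES calls before drawing, for example:

glEnable(GL_DEPTH_TEST);                               // turn on the depth test
glEnable(GL_BLEND);                                    // turn on blending
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);     // combine new fragments with the framebuffer color
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);    // clear the color and depth buffers before drawing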

GLKit API introduction

Here we only introduce the commonly used classes and APIs, which should cover more than 90% of development tasks; for the few less commonly used APIs, consult the documentation when you actually need them.

  1. GLKTextureInfo: an abstraction of OpenGL texture information, encapsulating the various texture-related properties

     - name: the texture name in the OpenGL context
     - target: the target the texture is bound to (indicating the kind of texture)
     - height: height of the texture
     - width: width of the texture
     - textureOrigin: the position of the texture origin
     - alphaState: the state of the alpha component in the texture
     - containsMipmaps: Boolean value indicating whether the texture contains mipmaps


  2. GLKTextureLoader: a texture-loading utility that simplifies loading textures from various resource files

     - initWithSharegroup: initializer; note that the sharegroup parameter is an EAGLSharegroup that manages the OpenGL ES resources associated with one or more EAGLContext objects, so textures created by the loader can be shared with those contexts
     + textureWithContentsOfFile:options:error: loads 2D texture image data from a file and creates a new texture from that data
     - textureWithContentsOfFile:options:queue:completionHandler: asynchronously loads 2D texture image data from a file and creates a new texture object from that data
     + textureWithContentsOfURL:options:error: loads 2D texture image data from a URL and creates a new texture object from that data
     - textureWithContentsOfURL:options:queue:completionHandler: asynchronously loads 2D texture image data from a URL and creates a new texture object from that data
     + textureWithContentsOfData:options:error: loads 2D texture image data from memory and creates a new texture from that data
     - textureWithContentsOfData:options:queue:completionHandler: asynchronously loads 2D texture image data from memory and creates a new texture from that data
     + textureWithCGImage:options:error: loads 2D texture image data from a Quartz image and creates a new texture object from that data
     - textureWithCGImage:options:queue:completionHandler: asynchronously loads 2D texture image data from a Quartz image and creates a new texture object from that data
     + cubeMapWithContentsOfURL:options:error: loads cube-map texture image data from a single URL and creates a new texture from that data
     - cubeMapWithContentsOfURL:options:queue:completionHandler: asynchronously loads cube-map texture image data from a single URL and creates a new texture from that data
     + cubeMapWithContentsOfFile:options:error: loads cube-map texture image data from a single file and creates a new texture from that data
     - cubeMapWithContentsOfFile:options:queue:completionHandler: asynchronously loads cube-map texture image data from a single file and creates a new texture from that data
     + cubeMapWithContentsOfFiles:options:error: loads cube-map texture image data from a series of files and creates a new texture from that data
     - cubeMapWithContentsOfFiles:options:queue:completionHandler: asynchronously loads cube-map texture image data from a series of files and creates a new texture from that data
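As an example of the asynchronous variants, here is a minimal sketch that loads a texture in the background; the file name "sample.png" and the context variable are assumptions made for illustration:

GLKTextureLoader *loader = [[GLKTextureLoader alloc] initWithSharegroup:context.sharegroup];
NSURL *url = [[NSBundle mainBundle] URLForResource:@"sample" withExtension:@"png"];
[loader textureWithContentsOfURL:url
                         options:nil
                           queue:NULL       // NULL: the completion handler runs on the main queue
               completionHandler:^(GLKTextureInfo *textureInfo, NSError *error) {
                   if (textureInfo != nil) {
                       // use textureInfo.name / textureInfo.target to bind the texture for drawing
                   }
               }];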

  3. GLKView: a default implementation of a view that draws its content using OpenGL ES

     - initWithFrame:context: initializes a new view
     - delegate: the view's delegate
     - drawableColorFormat: format of the color render buffer
     - drawableDepthFormat: format of the depth render buffer
     - drawableStencilFormat: format of the stencil render buffer
     - drawableHeight: height of the underlying render buffer object, in pixels
     - drawableWidth: width of the underlying render buffer object, in pixels
     - context: the OpenGL ES context
     - bindDrawable: binds the underlying framebuffer object to OpenGL ES
     - enableSetNeedsDisplay: Boolean value that controls whether setNeedsDisplay takes effect
     - display: redraws the view contents immediately
     - snapshot: draws the view contents and returns them as a new image object (UIImage *); never call this from the draw method, otherwise it recurses infinitely
     - deleteDrawable: deletes the drawable object associated with the view
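A minimal sketch of driving a GLKView manually (for example from a button action); the glkView variable is assumed to be a view that has already been created and configured:

glkView.enableSetNeedsDisplay = NO;   // we trigger redraws ourselves instead of via setNeedsDisplay
[glkView display];                    // redraw the view contents immediately
UIImage *image = glkView.snapshot;    // capture the result; never call snapshot inside the draw method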
 


  4. GLKViewDelegate: the callback interface for GLKView objects

     - glkView:drawInRect: draws the view contents (this delegate method must be implemented)

  5. GLKViewController: a view controller that manages an OpenGL ES rendering loop

     - paused: Boolean value, whether the render loop is paused
     - pauseOnWillResignActive: Boolean value, whether the view controller automatically pauses the render loop when the app is about to resign the active state
     - resumeOnDidBecomeActive: Boolean value, whether the view controller automatically resumes the render loop when the app becomes active
     - framesDisplayed: the number of frame updates sent by the view controller since it was created (that is, how many frames have been drawn)
     - timeSinceFirstResume: the amount of time elapsed since the view controller first resumed sending update events
     - timeSinceLastResume: the amount of time elapsed since the view controller last resumed sending update events
     - timeSinceLastUpdate: the amount of time elapsed since the view controller last called the delegate method (glkViewControllerUpdate:)
     - timeSinceLastDraw: the amount of time elapsed since the view controller last called the view's display method
     - preferredFramesPerSecond: the rate at which the view controller should call the view and update its contents (the ideal number of frame updates per second)
     - framesPerSecond: the actual rate at which the view controller calls the view and updates its contents. This is a read-only property: the actual frame rate is not determined solely by the value the developer sets through preferredFramesPerSecond, but also by the screen refresh rate and other factors, so framesPerSecond only gets as close to preferredFramesPerSecond as the hardware allows.

  6. GLKViewControllerDelegate: callback methods for the render loop

     - glkViewControllerUpdate: called before each frame is displayed
     - glkViewController:willPause: called before the render loop pauses or resumes
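A minimal sketch of wiring these pieces together; the class name MyRenderLoopDelegate and the setup lines are assumptions used only for illustration:

// A hypothetical delegate object that receives the render loop callbacks.
@interface MyRenderLoopDelegate : NSObject <GLKViewControllerDelegate>
@end

@implementation MyRenderLoopDelegate
- (void)glkViewControllerUpdate:(GLKViewController *)controller
{
    NSTimeInterval dt = controller.timeSinceLastUpdate;   // time since the previous update callback
    (void)dt;                                              // advance animation state here
}

- (void)glkViewController:(GLKViewController *)controller willPause:(BOOL)pause
{
    // called just before the render loop pauses or resumes
}
@end

// During setup (keep a strong reference to the delegate elsewhere; the delegate property does not retain it):
// glkViewController.delegate = renderLoopDelegate;
// glkViewController.preferredFramesPerSecond = 30;   // requested rate; the actual rate is framesPerSecond
// glkViewController.pauseOnWillResignActive = YES;   // pause the loop when the app resigns active
// glkViewController.resumeOnDidBecomeActive = YES;   // resume it when the app becomes active again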


  7. GLKBaseEffect: a simple lighting/shading system for shader-based OpenGL rendering

     - label: gives the effect a name
     - transform: the model-view, projection, and texture transformations applied to the vertex data
     - lightingType: the strategy used to calculate lighting for each fragment; its value is a GLKLightingType enumeration. GLKLightingTypePerVertex performs the lighting calculation at each vertex of a triangle and interpolates the result across the triangle; GLKLightingTypePerPixel interpolates the inputs of the lighting calculation across the triangle and performs the calculation for each fragment. In short, one computes lighting in the vertex shader and the other computes it in the fragment shader.
     - lightModelTwoSided: Boolean value indicating whether lighting is calculated for both sides of a primitive
     - material: the material properties used when calculating lighting
     - lightModelAmbientColor: the ambient color, applied to every primitive the effect renders
     - light0: the first light in the scene
     - light1: the second light in the scene
     - light2: the third light in the scene
     - texture2d0: the first texture property
     - texture2d1: the second texture property
     - colorMaterialEnabled: Boolean value indicating whether the vertex color attribute is used when calculating the interaction between light and material
     - useConstantColor: Boolean value indicating whether the constant color is used
     - constantColor: the constant color used when per-vertex color data is not provided
     - prepareToDraw: prepares the effect for rendering
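A minimal sketch of configuring a GLKBaseEffect with a transform and one light before drawing; the aspect value and the specific matrices are illustrative assumptions:

GLKBaseEffect *effect = [[GLKBaseEffect alloc] init];

float aspect = 16.0f / 9.0f;                                   // assumed view aspect ratio (width / height)
effect.transform.projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(60.0f),
                                                              aspect, 0.1f, 100.0f);
effect.transform.modelviewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f);

effect.light0.enabled = GL_TRUE;                               // turn on the first light
effect.light0.diffuseColor = GLKVector4Make(1.0f, 1.0f, 1.0f, 1.0f);
effect.lightingType = GLKLightingTypePerPixel;                 // compute lighting in the fragment shader

[effect prepareToDraw];                                        // call after changing state, before glDrawArrays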

Example – Load an image using GLKit

In OpenGL an image is just a texture, so loading an image means loading a texture. Now we will use GLKit to load a texture and draw it onto the screen.

Since OpenGL ES commands require a rendering context and a drawing surface to draw a graphics image, the first step in drawing with GLKit is to configure the EAGLContext and the GLKView. The code is as follows:

- (EAGLContext *)bestCreateEAGLContext
{
    EAGLContext *temContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
    if (!temContext) {
        temContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    }
    return temContext;
}

- (void)setupConfig
{
    // 1. Create the context and make it current
    context = [self bestCreateEAGLContext];
    [EAGLContext setCurrentContext:context];

    // 2. Create the GLKView drawing surface and set the clear-screen color (white)
    GLKView *temView = [[GLKView alloc] initWithFrame:self.view.bounds context:context];
    temView.delegate = self;
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    [self.view addSubview:temView];
}

This prepares the rendering context and drawing surface and sets the clear-screen color. In more complex cases, if you need to configure the render buffer formats, you can do that directly on the GLKView. For example, add:

temView.drawableDepthFormat = GLKViewDrawableDepthFormat16;
temView.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;

In this case these configurations are not strictly necessary: we only want to load an image, which does not involve depth, and the default color buffer format is already GLKViewDrawableColorFormatRGBA8888, so they can be omitted.

With the rendering context, drawing surface, and other prerequisites in place, the second step is to set the required parameters, such as vertex coordinates and texture coordinates. The code is as follows:

- (void)setupVertexData
{
    // Each vertex: x, y, z position followed by s, t texture coordinates.
    // Two triangles form a quad centered on the screen.
    GLfloat vertexData[] = {
         0.5f, -0.5f, 0.0f,    1.0f, 0.0f, // bottom right
         0.5f,  0.5f, 0.0f,    1.0f, 1.0f, // top right
        -0.5f,  0.5f, 0.0f,    0.0f, 1.0f, // top left

        -0.5f,  0.5f, 0.0f,    0.0f, 1.0f, // top left
        -0.5f, -0.5f, 0.0f,    0.0f, 0.0f, // bottom left
         0.5f, -0.5f, 0.0f,    1.0f, 0.0f, // bottom right
    };

    GLuint bufferId;
    glGenBuffers(1, &bufferId);
    glBindBuffer(GL_ARRAY_BUFFER, bufferId);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertexData), vertexData, GL_STATIC_DRAW);

    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 5, (GLfloat *)NULL + 0);

    glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
    glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, sizeof(GLfloat) * 5, (GLfloat *)NULL + 3);
}

Note that the vertexData array lives in CPU memory, so we first need to copy it into video memory, and storing data in video memory requires a corresponding buffer; that is why we call glGenBuffers and glBindBuffer. So what exactly do these two functions do?

void glGenBuffers(GLsizei n, GLuint *buffers); generates buffer object names. The function takes two arguments: the first is the number of buffer names to generate, and the second is an array in which to store the generated buffer object names.

The glGenBuffers() function only generates the name of a buffer object; at this point the object has no particular meaning. It is just a buffer object, not yet a vertex array buffer, similar to a pointer variable in C: we can allocate a memory object and refer to it by its name. OpenGL has many buffer object types, and the type a buffer is used as is determined by the glBindBuffer() function.

void glBindBuffer(GLenum target, GLuint buffer); binds a named buffer object. The first parameter is the buffer object type, and the second is the name of the buffer object to bind, i.e. the name generated by the previous function; binding attaches the buffer object to the OpenGL context so it can be used. If you bind a target to an already created buffer object, that buffer object becomes the active object for the current target. If, however, the bound buffer value is 0, OpenGL no longer uses any buffer object for the current target. The OpenGL Red Book offers an analogy: binding objects is like setting a switch on a railroad track. Each buffer object of a given type is like a different track; we set the switch to one of them, so that subsequent trains run onto that track. GL_INVALID_VALUE is generated if buffer is not a name previously returned by a call to glGenBuffers. In other words, although the name is just a GLuint, you should not make up a value yourself; use the names returned by glGenBuffers.

After a buffer has been bound to a target, all function calls that address that target operate on the currently bound buffer, e.g. GL_ARRAY_BUFFER, the vertex buffer type. glBufferData transfers data to whatever buffer is bound to the specified target, and every target must have a buffer object bound to it before use, which is what gives the buffer object a type. Note that OpenGL allows us to bind several buffers at the same time as long as their types differ; in other words, two buffer objects of the same type cannot be bound at the same time. For a given target, only one buffer object can be active at a time; binding another buffer of the same type replaces the previous binding.

For example, think about what would happen if data could be sent to a target without first binding a specific buffer. There can be many buffer objects usable as vertex buffers, so which one should receive the data? By binding one in advance, any data we subsequently submit to the vertex buffer target automatically goes into the bound object. In this example, vertexData is passed into the buffer object named bufferId, and the data in CPU memory is copied into video memory.

glBufferData is used to copy data from CPU memory into video memory. Its prototype is: void glBufferData(GLenum target, GLsizeiptr size, const GLvoid *data, GLenum usage); the first parameter is the buffer type, the second is the size of the data in bytes, the third is the data to copy, and the fourth indicates the expected usage pattern (static or dynamic), which will be covered in a later article.

In addition, for performance reasons all vertex shader attribute channels are disabled by default on iOS, which means the vertex data is not yet visible on the shader (server) side, even though it has already been copied from memory into the vertex buffer (GPU memory) with glBufferData. You must therefore open the channel with glEnableVertexAttribArray, specifying the attribute to enable, so that the vertex shader is allowed to read the data copied from the CPU to the GPU. Note: whether the shader can read the data on the GPU is determined by whether the corresponding attribute is enabled; that is the role of glEnableVertexAttribArray. Once the attribute channel is open the data is visible on the GPU side, but how should it be read? How do we distinguish vertex coordinates from texture coordinates? That brings us to the glVertexAttribPointer function.

Let's look at its prototype: glVertexAttribPointer(GLuint indx, GLint size, GLenum type, GLboolean normalized, GLsizei stride, const GLvoid *ptr). This function specifies how the data is read. The index argument specifies the index of the vertex attribute to modify; the size argument is the number of components read each time; the type argument specifies the data type of each component in the array; the normalized argument indicates whether the values should be normalized; the stride argument specifies the offset between consecutive vertices (if 0, the vertex attributes are understood to be tightly packed). In this example the offset between two consecutive vertices is 5 floats, hence sizeof(GLfloat) * 5. The ptr argument points to the first component of the first vertex attribute in the array: for the vertex positions the first component is the first float, hence (GLfloat *)NULL + 0, and for the texture coordinates the first component is the fourth float, hence (GLfloat *)NULL + 3. In short, this function specifies how many values to read for each vertex, the offset between consecutive vertices, and the starting position of the first value, which together determine exactly how the data is read.

The parameter configuration above is the focus of this article, hence the somewhat lengthy explanation. With these parameters configured we can get to the main topic and start the third step, loading the texture. Straight to the code:

- (void)setUpTexture
{
    // 1. Get the texture image path
    NSString *filePath = [[NSBundle mainBundle] pathForResource:@"bui" ofType:@"jpg"];

    // 2. The origin of OpenGL ES texture coordinates is the lower-left corner, while the image's
    //    origin is the upper-left corner, so ask the loader to flip the image accordingly.
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:@(1), GLKTextureLoaderOriginBottomLeft, nil];
    GLKTextureInfo *textureInfo = [GLKTextureLoader textureWithContentsOfFile:filePath options:options error:nil];

    // 3. Create the effect and bind the loaded texture to its first texture unit
    cEffect = [[GLKBaseEffect alloc] init];
    cEffect.texture2d0.enabled = GL_TRUE;
    cEffect.texture2d0.name = textureInfo.name;
}

This part of the code is relatively simple; the comments above explain each step. At this point the basic preparation is complete and we can start the final drawing work, which happens in the GLKViewDelegate method -glkView:drawInRect::

-(void)glkView:(GLKView *)view drawInRect:(CGRect)rect 
{
    glClear(GL_COLOR_BUFFER_BIT);
    [cEffect prepareToDraw];
    glDrawArrays(GL_TRIANGLES, 0, 6);
}


Related articles:
  1. OpenGL first lesson — Name explanation
  2. OpenGL second lesson — The common fixed storage shader
  3. OpenGL third lesson — Matrix transformation and coordinate system
  4. OpenGL fourth lesson — Depth
  5. OpenGL fifth lesson — Front and back elimination
  6. OpenGL sixth lesson — Clipping and blending
  7. OpenGL seventh lesson — Texture
  8. OpenGL eighth lesson — A supplementary case
  9. OpenGL ES que
  10. The application of GLKit
  11. GLSL met
  12. How to use custom shaders written in GLSL in iOS
  13. Precision qualifier in GLSL
  14. Use OpenGL to write a rotating stereo album for your girlfriend