1. OpenGL (Open Graphics Library) is a cross-language, cross-platform graphics programming interface. It abstracts a computer's graphics resources as OpenGL objects, and operations on those resources as OpenGL commands.

OpenGL ES (OpenGL for Embedded Systems) is a subset of the OpenGL 3D graphics API designed for embedded devices such as mobile phones, PDAs, and game consoles. Many unnecessary or low-performance API interfaces have been removed.

DirectX is made up of many APIs and is not a purely graphics API. Above all, DirectX is a multimedia-processing API on Windows. It does not support platforms other than Windows, so it is not a cross-platform framework. By function it can be divided into four parts: display, audio, input, and network.

Metal: Apple's framework for 3D rendering, launched as a new platform for game developers and claimed to improve 3D rendering performance by up to 10 times.

  1. OpenGL state machine. A state machine can be understood as a machine that stores state and produces output according to its current state and its inputs.

Key points:

A state machine has memory: it stores its current state. It receives input and, based on the current state, modifies that state and/or produces output. When it enters a special state (a shutdown state), it no longer receives input and stops working.
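For illustration, the OpenGL API itself behaves this way: a piece of state, once set, stays in effect for later commands until it is changed. A minimal C sketch, assuming OpenGL ES 2.0:

```c
#include <GLES2/gl2.h>

void configure_state(void)
{
    /* Turn state on; it remains on for all later draw calls. */
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);

    /* Query the state machine: it "remembers" what was set earlier. */
    GLboolean depth_on = glIsEnabled(GL_DEPTH_TEST);   /* GL_TRUE */

    /* Changing state only affects commands issued afterwards. */
    glDisable(GL_CULL_FACE);
    (void)depth_on;
}
```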

  1. Before an application can issue any OpenGL command, it must first create an OpenGL context. This context is a very large state machine that holds all of OpenGL's state and is the basis on which OpenGL commands execute.

OpenGL functions, in whatever language, are C-style procedural functions: essentially they manipulate some state or object in this huge state machine, the OpenGL context. By encapsulating OpenGL calls, we can wrap them into an object-oriented graphics API.

Since the OpenGL context is a large state machine, switching contexts tends to be expensive, but different drawing modules may need completely separate state. You can therefore create multiple contexts in an application, use different contexts on different threads, and share textures, buffers, and other resources between them. This approach is more sensible and more efficient than repeatedly switching contexts or making large numbers of render-state changes.

Key points:

OpenGL command execution is based on a very large state machine. Switching OpenGL contexts is expensive. Although multiple contexts may be used, textures, buffers, and other resources can be shared between them. OpenGL's functions are procedural, but related calls can be encapsulated into an object-oriented graphics API.
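For illustration, one way to create multiple contexts that share resources is through EGL, the context API commonly used with OpenGL ES (on iOS the equivalent is EAGLContext with a sharegroup). A minimal C sketch with error checking omitted; the key detail is the third argument of eglCreateContext, which names an existing context to share textures and buffers with:

```c
#include <EGL/egl.h>

/* Sketch: create two OpenGL ES 2.0 contexts that share textures/buffers. */
void create_shared_contexts(void)
{
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(dpy, NULL, NULL);

    EGLConfig cfg;
    EGLint n;
    const EGLint cfg_attribs[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT, EGL_NONE };
    eglChooseConfig(dpy, cfg_attribs, &cfg, 1, &n);

    const EGLint ctx_attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };

    /* Main context. */
    EGLContext main_ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, ctx_attribs);

    /* Second context, e.g. for a worker thread; passing main_ctx as the
       third argument makes the two contexts share textures, buffers, etc. */
    EGLContext worker_ctx = eglCreateContext(dpy, cfg, main_ctx, ctx_attribs);
    (void)worker_ctx;
}
```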

  1. Rendering: the process of converting graphics/image data into a 3D spatial image is called rendering.

  2. Vertex arrays and vertex buffers. Vertex data defines the outline of a shape. All images in OpenGL are made up of primitives; in OpenGL ES there are three types of primitives: points, lines, and triangles.

When a draw call is made, the vertex data can be passed directly from memory; that is, the data was stored in memory beforehand. This is called a vertex array (VertexArray).

For better performance, a block of video memory can be allocated in advance and the vertex data passed into it ahead of time. This portion of video memory is called a vertex buffer (VertexBuffer).

Key points:

There are three types of primitives: points, lines, and triangles. A vertex array lives in (client) memory. A vertex buffer lives in video memory.
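For illustration, a minimal OpenGL ES 2.0 sketch of the two approaches; the triangle data and attribute location here are illustrative assumptions:

```c
#include <GLES2/gl2.h>

static const GLfloat tri[] = {   /* x, y per vertex */
     0.0f,  0.5f,
    -0.5f, -0.5f,
     0.5f, -0.5f,
};

/* Option 1: vertex array — data stays in client memory and is read at draw time. */
void draw_from_client_memory(GLuint position_attrib)
{
    glVertexAttribPointer(position_attrib, 2, GL_FLOAT, GL_FALSE, 0, tri);
    glEnableVertexAttribArray(position_attrib);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}

/* Option 2: vertex buffer — data is uploaded to GPU memory once, then reused. */
void draw_from_vertex_buffer(GLuint position_attrib)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(tri), tri, GL_STATIC_DRAW);

    /* The last argument is now an offset into the bound buffer. */
    glVertexAttribPointer(position_attrib, 2, GL_FLOAT, GL_FALSE, 0, (const void *)0);
    glEnableVertexAttribArray(position_attrib);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}
```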

  1. The pipeline. Rendering graphics in OpenGL passes through a series of nodes, one after another, and this process can be understood as a pipeline. Like an assembly line, tasks are carried out in sequence. The pipeline is an abstract concept; it is called a pipeline because the graphics card processes data in a fixed order, and strictly in that order.

Key points:

Tasks are carried out in strict sequence.

  1. Fixed pipeline / storage shaders. Earlier versions of OpenGL packaged various shader blocks as built-in, fixed shaders covering lighting, coordinate transformation, clipping, and many other functions, to help developers complete graphics rendering. Developers only needed to pass in the corresponding parameters to render quickly. Similarly, iOS development encapsulates many APIs that we simply call to implement functionality, without needing to pay attention to the underlying implementation. But because OpenGL's usage scenarios are so rich, fixed pipelines or storage shaders cannot do every job, so the relevant stages were opened up for programming.

Key points:

Early versions of OpenGL encapsulated shader blocks to aid rapid development. Because the functionality provided was limited, this was later replaced by a programmable form.
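For flavor, a few fixed-function calls from the legacy OpenGL ES 1.x API (a sketch, assuming the standard GLES/gl.h header): the developer only supplies parameters such as matrices and light positions, and the built-in shading does the rest.

```c
#include <GLES/gl.h>   /* OpenGL ES 1.x fixed-function API */

void setup_fixed_pipeline(void)
{
    /* Coordinate transformation handled by built-in matrix stacks. */
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -5.0f);

    /* Built-in lighting: just pass parameters, no shader code required. */
    static const GLfloat light_pos[] = { 0.0f, 1.0f, 1.0f, 0.0f };
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glLightfv(GL_LIGHT0, GL_POSITION, light_pos);
}
```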

  1. Shader programs transform the fixed rendering pipeline architecture into a programmable rendering pipeline.

Before OpenGL actually calls the draw function, it needs a shader program produced by compiling shaders. The common shaders are the vertex shader (VertexShader), the fragment shader (FragmentShader) / pixel shader (PixelShader), the geometry shader, and the tessellation (surface subdivision) shader. Fragment shader and pixel shader are just different names for the same thing in OpenGL and DirectX. Unfortunately, up to OpenGL ES 3.0, OpenGL ES supports only the most basic shaders: vertex shaders and fragment shaders.

OpenGL handles shaders like any compiler: through compiling, linking, and other steps, a shader program (glProgram) is generated that contains the logic of the vertex shader and the fragment shader. When OpenGL draws, the vertex shader first operates on the incoming vertex data; the vertices are then assembled into primitives (primitive assembly); next, rasterization converts the primitives (vector graphics) into rasterized data; finally, the rasterized data is passed to the fragment shader, which operates on each pixel and determines its color.
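The compile-and-link flow described above looks roughly like this in OpenGL ES 2.0 C code (a minimal sketch with trivial shader sources; error checking omitted):

```c
#include <GLES2/gl2.h>

static const char *vs_src =
    "attribute vec4 a_position;\n"
    "void main() { gl_Position = a_position; }\n";

static const char *fs_src =
    "precision mediump float;\n"
    "void main() { gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); }\n";

GLuint build_program(void)
{
    /* Compile the two shaders. */
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vs_src, NULL);
    glCompileShader(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(fs);

    /* Link them into one shader program (the glProgram mentioned above). */
    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);

    /* Select the program before issuing draw calls. */
    glUseProgram(prog);
    return prog;
}
```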

Key points:

Changed the fixed rendering pipeline architecture to a programmable rendering pipeline. Common shaders include vertex shaders, fragment shaders/pixel shaders, geometry shaders, and tessellation (surface subdivision) shaders. OpenGL ES supports only vertex shaders and fragment shaders. OpenGL generates a shader program by compiling, linking, and so on. When drawing, the vertex shader first operates on the incoming vertex data; the vertices are then assembled into primitives; rasterization converts the vector graphics into rasterized data; finally, the rasterized data is passed to the fragment shader, which operates on each pixel and determines its color.

A VertexShader (vertex shader) is a program used in OpenGL to compute vertex attributes.

Generally speaking, the typical vertex attributes to be computed include vertex coordinate transformation, per-vertex lighting, and so on. This is where vertex coordinates are converted from the model's own coordinate system to the normalized coordinate system.

Vertex shaders are per-vertex programs: the program is executed once for each vertex. Execution is parallel, and a vertex shader cannot access the data of other vertices.
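A minimal per-vertex program of this kind, written as a GLSL ES source string embedded in C (the attribute and uniform names here are illustrative assumptions):

```c
/* Vertex shader: runs once per vertex; transforms the vertex position
   by a model-view-projection matrix supplied by the application. */
static const char *vertex_shader_src =
    "attribute vec4 a_position;                    \n"
    "attribute vec2 a_texCoord;                    \n"
    "uniform   mat4 u_mvpMatrix;                   \n"
    "varying   vec2 v_texCoord;                    \n"
    "void main() {                                 \n"
    "    gl_Position = u_mvpMatrix * a_position;   \n"
    "    v_texCoord  = a_texCoord;                 \n"
    "}                                             \n";
```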

Key points:

Generally used to handle per-vertex transformations of a shape (rotation/translation/projection, etc.). Computation is parallel, and the operation cannot access the data of other vertices.

A FragmentShader (fragment shader) is a program in OpenGL that computes the color of a fragment (pixel). It is generally used for the color calculation and filling of each pixel in a shape.

The fragment shader is a per-pixel program, meaning that the fragment shader is executed once per pixel, also in parallel.

Key points:

Generally used for the color calculation and filling of each pixel in a shape. Computation is parallel, and a fragment shader cannot access the data of other fragments.
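A matching per-pixel program, again as an illustrative GLSL ES source string in C; it samples an assumed texture and writes the color of the current fragment:

```c
/* Fragment shader: runs once per fragment (pixel); decides its final color. */
static const char *fragment_shader_src =
    "precision mediump float;                             \n"
    "varying vec2 v_texCoord;                             \n"
    "uniform sampler2D u_texture;                         \n"
    "void main() {                                        \n"
    "    gl_FragColor = texture2D(u_texture, v_texCoord); \n"
    "}                                                    \n";
```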

  1. GLSL (OpenGL Shading Language). GLSL is the shading language used to program OpenGL; it executes on the GPU of the graphics card. It replaces fixed parts of the rendering pipeline, making stages of the pipeline programmable, for example view transformation, projection transformation, and so on. GLSL shader code is divided into two parts: the vertex shader and the fragment shader.

  2. Rasterization is the process of converting vertex data into fragments. It converts a shape into an image made up of a grid, where each element of the grid corresponds to a pixel in the frame buffer.

Rasterization is actually the process of transforming geometric primitives into a two-dimensional image. It involves two parts of work: first, determining which integer grid areas in window coordinates are occupied by the primitives; second, assigning a color value and a depth value to each of those areas.

The process of converting the mathematical description of an object and its associated color information into pixels at the corresponding positions on the screen, and colors that fill those pixels, is called rasterization. It is a process of converting an analog signal into a discrete signal.

Key points:

Rasterization is the process of converting vertex data into fragments. It consists of two parts: (1) determining which integer grid areas in window coordinates are occupied by the primitives; (2) assigning a color value and a depth value to each of those areas.

  1. Texture can be understood as a picture. When rendering graphics, the area enclosed by the vertices needs to be filled with an image to make the scene more realistic; the images used here are usually called textures. In OpenGL we are simply more used to calling them textures rather than images.
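For illustration, creating a texture object from decoded image data in OpenGL ES 2.0 might look like this (a sketch; pixels, width, and height are assumed to come from an image loader):

```c
#include <GLES2/gl2.h>

GLuint create_texture(const void *pixels, GLsizei width, GLsizei height)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* Filtering/wrapping state is stored per texture object. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    /* Upload the image data; the fragment shader can then sample it. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}
```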

  2. Blending. If a pixel has not been culled after the testing phase, its color is blended with the color already attached to the frame buffer. The blending algorithm can be specified with OpenGL functions, but OpenGL offers only a limited number of blending algorithms. If a more complex blending algorithm is needed, it can generally be implemented in the fragment shader, though performance will be worse than with the native blending algorithms.
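For example, the common "source over" alpha blend is a two-call state change (a sketch):

```c
#include <GLES2/gl2.h>

void enable_alpha_blending(void)
{
    glEnable(GL_BLEND);
    /* result = src.rgb * src.a + dst.rgb * (1 - src.a) */
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}
```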

  3. Transformation matrices: to translate, scale, or rotate a shape, you use transformation matrices.

12.1 A projection matrix is used to convert 3D coordinates to 2D screen coordinates, in which the actual lines will then be drawn.
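For example, a translation can be written as a 4x4 matrix and passed to the vertex shader as a uniform (a sketch; the uniform name u_mvpMatrix matches the illustrative vertex shader above, and OpenGL expects column-major order):

```c
#include <GLES2/gl2.h>

/* Column-major 4x4 translation matrix: moves vertices by (tx, ty, tz). */
void set_translation(GLint u_mvpMatrix_loc, float tx, float ty, float tz)
{
    const GLfloat m[16] = {
        1,  0,  0,  0,   /* column 0 */
        0,  1,  0,  0,   /* column 1 */
        0,  0,  1,  0,   /* column 2 */
        tx, ty, tz, 1,   /* column 3: the translation */
    };
    glUniformMatrix4fv(u_mvpMatrix_loc, 1, GL_FALSE, m);
}
```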

  1. Rendering to the screen / swapping buffers (SwapBuffers). Render buffers generally map to system resources such as windows. If the image is rendered directly into the window's corresponding render buffer, the image can be displayed on the screen.

It is worth noting that if each window has only one buffer and the screen is refreshed during drawing, the window may display an incomplete image.

To solve this problem, regular OpenGL programs have at least two buffers. The one displayed on the screen is called the screen buffer, and the one not displayed is called the off-screen buffer. After rendering into the off-screen buffer completes, the image is displayed by swapping the screen buffer with the off-screen buffer.

Because the display refreshes row by row, the swapped image could otherwise contain parts of two different frames (one above and one below the refresh line). To prevent this, the swap generally waits for the display to finish refreshing, that is, it happens in the interval between two refreshes. The signal marking this interval is called the vertical sync signal, and this technique is called vertical synchronization (vsync).

With double buffering and vsync, the frame rate cannot reach the maximum the hardware allows, because rendering of the next frame must always wait for the buffer swap. To solve this, triple buffering was introduced: while waiting for vsync, two off-screen buffers are rendered to alternately; when vsync occurs, the screen buffer is swapped with the most recently rendered off-screen buffer, making full use of the hardware.
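A typical EGL render loop makes the swap and vsync explicit (a sketch; draw_frame and the display/surface handles are assumed to already exist):

```c
#include <EGL/egl.h>

extern void draw_frame(void);   /* assumed application rendering function */

void render_loop(EGLDisplay dpy, EGLSurface surface)
{
    /* Ask EGL to synchronize buffer swaps with the display refresh (vsync). */
    eglSwapInterval(dpy, 1);

    for (;;) {
        draw_frame();                 /* render into the off-screen (back) buffer */
        eglSwapBuffers(dpy, surface); /* swap it with the on-screen buffer */
    }
}
```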

Key points:

If each window has only one buffer and the screen is refreshed during drawing, the window may display an incomplete image; to solve this, regular OpenGL programs have at least two buffers. Vertical sync: because the display refreshes row by row, the buffer swap waits for the display to finish refreshing so that the displayed image does not mix two different frames; the signal marking the interval between two refreshes is called the vertical sync signal, and the technique is called vertical synchronization. Triple buffering: with double buffering and vsync, the frame rate cannot reach the maximum the hardware allows because the next frame must wait for the buffer swap; triple buffering solves this by rendering to two off-screen buffers alternately while waiting for vsync, and swapping the screen buffer with the most recently rendered off-screen buffer when vsync occurs, making full use of the hardware.