• Introduction to the Graphics API

    1. OpenGL (Open Graphics Library): a cross-language, cross-platform graphics programming interface. It abstracts the computer's resources as OpenGL objects and the operations on those resources as OpenGL instructions.
    2. OpenGL ES (OpenGL for Embedded Systems): a subset of the OpenGL 3D graphics API designed for embedded devices such as mobile phones, PDAs, and game consoles. It removes many unnecessary and low-performance API calls.
    3. DirectX: made up of many APIs, DirectX is not a pure graphics API; most importantly, it is a multimedia framework for Windows. It does not support platforms other than Windows, so it is not cross-platform. It can broadly be divided into four parts: display, audio, input, and networking.
    4. Metal: Apple's framework for 3D rendering, launched as a new platform for game developers with the claim that it can render 3D images up to 10 times faster.

      Note: only OpenGL ES and Metal are relevant for mobile development, but because OpenGL ES is a subset of OpenGL, mobile developers should focus on learning OpenGL, OpenGL ES, and Metal.
  • Common OpenGL technical terms explained

    1. OpenGL context:
      • Before your application invokes any OpenGL instruction, it must first create an OpenGL context. This context is a very large state machine that holds all of OpenGL's state and is the basis on which OpenGL instructions execute.
      • OpenGL functions, in whatever language they are called from, are procedural, C-style functions: each one essentially operates on some state or object in the vast state machine of the OpenGL context, and that object must first be set as the current object. By encapsulating OpenGL instructions, it is therefore possible to wrap OpenGL calls into an object-oriented graphics API.
      • Because the OpenGL context is a large state machine, switching contexts tends to be expensive, yet different drawing modules may need completely separate state. You can therefore create several contexts in one application, use a different context on each thread, and share textures, buffers, and other resources between them. This is far more efficient than repeatedly switching a single context or constantly changing render state. Summary:
      1. The context is the basis of all OpenGL instructions (one must be created before any instruction executes). It is also a huge state machine that records all kinds of current state: for example, if the current state accepts input, an instruction can be taken in, the state is updated according to that input, and corresponding output is produced; if the current state is shut down, no input is accepted and work stops.
      2. OpenGL functions are procedural; by wrapping OpenGL instructions they can be turned into an object-oriented graphics API.
      3. Switching contexts is expensive, but textures, buffers, and other resources can be shared between contexts, so using multiple contexts is common (a minimal context-creation sketch follows this list).
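      For concreteness, here is a minimal sketch of creating an OpenGL ES context with EGL, the C binding used on most embedded platforms (iOS expresses the same idea through EAGLContext). The ES version and attribute choices are illustrative.

        #include <EGL/egl.h>

        EGLContext create_context(EGLDisplay *out_display) {
            EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
            eglInitialize(display, NULL, NULL);              /* start up EGL on this display */

            const EGLint config_attribs[] = {
                EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,     /* ask for an ES 2.0-capable config */
                EGL_NONE
            };
            EGLConfig config;
            EGLint num_configs;
            eglChooseConfig(display, config_attribs, &config, 1, &num_configs);

            const EGLint context_attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
            /* Passing an existing context instead of EGL_NO_CONTEXT would share
               textures, buffers, and other resources with it. */
            EGLContext context = eglCreateContext(display, config, EGL_NO_CONTEXT, context_attribs);

            *out_display = display;
            return context;  /* must still be made current (eglMakeCurrent) before any GL call */
        }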
    2. Rendering: the operation of converting graphics and image data into a 2D on-screen image is called rendering. (The JPG and PNG pictures we usually see are compressed files; after decompression they become a large number of pixels, each with a position and a color, which together describe the whole picture. When a professional photo-editing tool manipulates a picture it is operating on these pixels, for example changing a pixel's color saturation. The process of putting those pixels together into the picture you see is rendering.)
    3. VertexArray and VertexBuffer: drawing usually means sketching the skeleton of the image first and then filling that skeleton with color, and OpenGL works the same way. The vertex data is the skeleton of the image to be drawn; unlike in real life, images in OpenGL are made up of primitives, and in OpenGL ES there are three kinds of primitives: points, lines, and triangles. So where is the vertex data stored? The developer can optionally set a function pointer so that the vertex data is passed straight from memory when the draw call is issued; data kept in client memory this way is called a vertex array. For better performance, a block of video memory can be allocated in advance and the vertex data uploaded into it beforehand; that block of video memory is called a vertex buffer.

      Summary:
      1. In OpenGL ES, there are three types of primitives: points, lines, and triangles. For example, a square is made up of two triangles instead of one square.
      2. The difference between a vertex array and a vertex buffer is that one lives in (client) memory and the other in video memory. The vertex buffer is more efficient: the GPU, which works in parallel, reads the data directly from video memory when drawing, whereas data in client memory must be read and copied by the (serial) CPU on every draw. Both approaches are sketched below.
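      A minimal sketch of the two approaches, assuming an OpenGL ES 2.0 context and an already linked shader program whose position attribute location (positionLoc) is known:

        #include <GLES2/gl2.h>   /* on iOS: <OpenGLES/ES2/gl.h> */

        static const GLfloat triangle[] = {
             0.0f,  0.5f, 0.0f,
            -0.5f, -0.5f, 0.0f,
             0.5f, -0.5f, 0.0f,
        };

        /* 1) Vertex array: data stays in client memory and is copied on every draw call. */
        void draw_from_client_memory(GLint positionLoc) {
            glVertexAttribPointer(positionLoc, 3, GL_FLOAT, GL_FALSE, 0, triangle);
            glEnableVertexAttribArray(positionLoc);
            glDrawArrays(GL_TRIANGLES, 0, 3);
        }

        /* 2) Vertex buffer: data is uploaded to video memory once, then reused. */
        GLuint upload_to_vertex_buffer(void) {
            GLuint vbo;
            glGenBuffers(1, &vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, sizeof(triangle), triangle, GL_STATIC_DRAW);
            return vbo;
        }

        void draw_from_vertex_buffer(GLuint vbo, GLint positionLoc) {
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glVertexAttribPointer(positionLoc, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);
            glEnableVertexAttribArray(positionLoc);
            glDrawArrays(GL_TRIANGLES, 0, 3);
        }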
    4. Shader:
      • The fixed rendering pipeline has been completely replaced by a programmable rendering pipeline, so before actually calling a draw function, OpenGL needs to be given a shader program compiled from shaders. Common shaders are the VertexShader, the FragmentShader/PixelShader (just different names for the same stage in OpenGL and DirectX), the GeometryShader, and the tessellation shader. Unfortunately, up to and including OpenGL ES 3.0, only the most basic shaders, the vertex and fragment shaders, are supported.
      • OpenGL handles shaders much like any compiler: through compile and link steps it produces a shader program (glProgram) that contains the operation logic of both the vertex shader and the fragment shader. When OpenGL draws, the vertex shader first operates on the incoming vertex data; the vertices are then assembled into primitives; rasterization then turns those vector primitives into rasterized data; finally the rasterized data is passed to the fragment shader, which runs on each pixel and determines its color. An analogy: a phone-case production line normally has many molds, such as an Apple mold and a Xiaomi mold, and during production different molds are chosen to make different cases; that is the fixed pipeline. If the mold is not fixed but is instead produced by a program (so that any phone case can be made), the program that produces the mold is the OpenGL shader, compiled into a shader program. The compile-and-link steps are sketched below.
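      A minimal sketch of those compile and link steps in OpenGL ES 2.0 (error handling is reduced to a single status check; vertexSrc and fragmentSrc are assumed to hold GLSL source strings):

        #include <GLES2/gl2.h>

        GLuint compile_shader(GLenum type, const char *src) {
            GLuint shader = glCreateShader(type);           /* GL_VERTEX_SHADER or GL_FRAGMENT_SHADER */
            glShaderSource(shader, 1, &src, NULL);
            glCompileShader(shader);

            GLint ok = 0;
            glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);  /* check for compile errors */
            return ok ? shader : 0;
        }

        GLuint build_program(const char *vertexSrc, const char *fragmentSrc) {
            GLuint vs = compile_shader(GL_VERTEX_SHADER, vertexSrc);
            GLuint fs = compile_shader(GL_FRAGMENT_SHADER, fragmentSrc);

            GLuint program = glCreateProgram();             /* the "glProgram" mentioned above */
            glAttachShader(program, vs);
            glAttachShader(program, fs);
            glLinkProgram(program);
            return program;                                 /* activate with glUseProgram(program) */
        }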
    5. Pipeline: when OpenGL renders a graphic, the work passes through a series of stages one after another; this can be understood as a pipeline, with each task executed in pipeline fashion. The pipeline is an abstract concept. It is called a pipeline because the graphics card processes data in a fixed order, and strictly in that order, just as water flows from one end of a pipe to the other: the sequence cannot be broken.
    6. Fixed pipeline / stored shaders: early versions of OpenGL encapsulated many built-in shader blocks, fixed and complete shader programs covering lighting, coordinate transformation, clipping, and other functions, to help developers finish rendering graphics. Developers only needed to pass in the appropriate parameters to render quickly, much as iOS development wraps many APIs that we simply call without worrying about the underlying implementation. But because OpenGL is used in so many different scenarios, a fixed pipeline or stored shader cannot handle every job, so the relevant parts were opened up as programmable. Using the earlier example: if phone cases are made from an existing set of templates and one of those templates is chosen for production, that production line is the fixed pipeline.
    7. VertexShader:
      • Generally used to handle per-vertex transformations of a graphic (rotation, translation, projection, etc.)
      • The vertex shader is the program OpenGL uses to compute vertex attributes. It is a per-vertex program: it executes once for each vertex, in parallel, and it cannot access the data of other vertices.
      • Typical per-vertex work includes vertex coordinate transformation, per-vertex lighting, and so on. This is where vertex coordinates are converted from the object's own coordinate system to the normalized coordinate system. The vertex shader determines the position of the graphic by computing the final position of each vertex in the given vertex array, and by changing those positions it can also translate, rotate, or project the graphic. A minimal example follows.
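      A minimal vertex shader sketch in GLSL ES 2.0, stored as a C string so it can be handed to glShaderSource (the attribute and uniform names are illustrative):

        /* Runs once per vertex: transforms the incoming vertex position by a
           model-view-projection matrix supplied by the application. */
        static const char *vertexShaderSrc =
            "attribute vec4 a_position;                   \n"
            "uniform mat4 u_mvpMatrix;                    \n"
            "void main() {                                \n"
            "    gl_Position = u_mvpMatrix * a_position;  \n"
            "}                                            \n";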
    8. FragmentShader:
      • Generally used to calculate and fill in the color of each pixel in the graphic
      • The fragment shader is the program OpenGL uses to calculate the color of fragments (pixels). It is a per-pixel program: it executes once for each pixel, and of course in parallel. In short, the fragment shader handles the color of the pixels, i.e. it colors them. A minimal example follows.
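      A minimal fragment shader sketch in GLSL ES 2.0, again as a C string; it simply colors every covered pixel red:

        /* Runs once per fragment (pixel) produced by rasterization. */
        static const char *fragmentShaderSrc =
            "precision mediump float;                      \n"
            "void main() {                                 \n"
            "    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);  \n"
            "}                                             \n";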
    9. GLSL (OpenGL Shading Language): the language used for shader programming in OpenGL. Shaders are short custom programs written by developers that execute on the graphics card's GPU in place of fixed parts of the rendering pipeline, making the corresponding stages of the pipeline programmable, for example the view transformation and the projection transformation. GLSL shader code is divided into two parts: the Vertex Shader and the Fragment Shader.
    10. Rasterization:
      • It is the process of converting vertex data into fragments. It turns a graphic into an image made of a grid, where each element of the grid corresponds to a pixel in the framebuffer.
      • Rasterization is the process of transforming geometric primitives into a two-dimensional image. It involves two parts of work: first, determine which integer grid cells in window coordinates are covered by the primitive; second, assign a color value and a depth value to each of those cells. The rasterization process produces fragments.
      • The mathematical description of an object and its associated color information are converted into pixels at the corresponding positions on the screen, together with the colors used to fill those pixels. This process is called rasterization; it converts a continuous (analog) description into discrete signals, filling the primitive with pixels and producing fragments. For example, a triangle made of three lines is really just a wireframe; after rasterization the triangle is filled with pixels, but those pixels have no color yet; the coloring is done in the fragment (pixel) shader.
    11. Texture: a texture can be understood as an image. To make a scene more lifelike, images need to be decoded and filled into the rendered picture. In OpenGL we are more used to calling such an image a texture rather than an image, so a texture can simply be thought of as an image (a texture-upload sketch follows).
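      A minimal sketch of uploading decoded image pixels as a texture; the pixel data is assumed to have been decoded from a PNG/JPG into raw RGBA bytes beforehand:

        #include <GLES2/gl2.h>

        GLuint create_texture(const unsigned char *pixels, int width, int height) {
            GLuint tex;
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, pixels);             /* upload the pixel data */
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            return tex;
        }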
    12. Blending: after the test phase, if a pixel has not been eliminated, its color is blended with the color already attached in the framebuffer. The blending algorithm can be specified with OpenGL functions, but the built-in blending algorithms are limited; if a more complex algorithm is needed it can be implemented in the fragment (pixel) shader, although the performance will be worse than the built-in blending. The most common mode is shown below.
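      A minimal sketch of enabling the most common built-in blend mode, standard alpha blending, where the incoming color is weighted by its alpha and combined with the color already in the framebuffer:

        #include <GLES2/gl2.h>

        /* result = src.rgb * src.a + dst.rgb * (1 - src.a) */
        void enable_alpha_blending(void) {
            glEnable(GL_BLEND);
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        }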
    13. Transformation matrix: when a graphic needs to be translated, scaled, or rotated, a transformation matrix is used.
    14. Projection matrix: used to convert 3D coordinates into 2D screen coordinates; the actual lines are then drawn in those 2D coordinates.
    15. OpenGL projection:
      • Perspective projection: what the human eye sees follows perspective projection; near objects look big and far objects look small.
      • Orthographic projection: a one-to-one representation of an object (somewhat like looking into a mirror). A projection-matrix sketch follows.
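      A minimal sketch of building a perspective projection matrix of the kind gluPerspective produces (column-major, ready to be uploaded as a mat4 uniform). An orthographic projection would instead map a box straight onto the screen, with no division by depth, so objects keep their size:

        #include <math.h>

        void perspective(float out[16], float fovyRadians, float aspect,
                         float nearZ, float farZ) {
            float f = 1.0f / tanf(fovyRadians / 2.0f);
            for (int i = 0; i < 16; i++) out[i] = 0.0f;
            out[0]  = f / aspect;                              /* x scale */
            out[5]  = f;                                       /* y scale */
            out[10] = (farZ + nearZ) / (nearZ - farZ);         /* depth mapping */
            out[11] = -1.0f;                                   /* produces the divide by z */
            out[14] = (2.0f * farZ * nearZ) / (nearZ - farZ);  /* hence "near looks bigger" */
        }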
    16. Coordinates:
      • Object coordinates: record the object's length, width, and height in a coordinate system whose origin is at a corner of the object itself.
      • World coordinates: the coordinates of the object within some larger area; the origin is at a corner of that area, so they record the object's absolute position in the area.
      • Observer coordinates: the position of the object within the observer's (viewport's) visible area; the origin is at a corner of that area.
      • Normalized device coordinates: record the coordinates of the object within the device's display area.
      • Screen coordinates: record the position of the object on the screen (the full chain is sketched in the shader example below).

        The diagram below helps to understand coordinates
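      The same chain can also be written out directly in a vertex shader (a minimal sketch; the uniform names are illustrative): object coordinates are multiplied by the model, view, and projection matrices to reach clip space, after which OpenGL itself performs the perspective divide (giving normalized device coordinates) and the viewport transform (giving screen coordinates).

        static const char *coordinateChainVS =
            "attribute vec4 a_objectPosition;                        \n"
            "uniform mat4 u_model;       // object -> world          \n"
            "uniform mat4 u_view;        // world  -> observer/eye   \n"
            "uniform mat4 u_projection;  // observer -> clip         \n"
            "void main() {                                           \n"
            "    gl_Position = u_projection * u_view * u_model * a_objectPosition; \n"
            "}                                                       \n";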
  • The rendering flow of shaders (to help understand the above concepts)