One. What is OpenGL
So let's see what OpenGL really is. It is generally thought of as an API (Application Programming Interface) containing a series of functions for manipulating graphics and images. However, OpenGL itself is not an API; it is merely a specification, developed and maintained by the Khronos Group.
The OpenGL specification states exactly how each function should behave and what its output should be. How each function is implemented internally is left up to the OpenGL library developers. Because the specification does not give implementation details, different OpenGL libraries are allowed to use different implementations, as long as their behavior and results match the specification (in other words, as a user you will not notice any functional difference).
The developers of an actual OpenGL library are usually the graphics card manufacturers. The version of OpenGL supported by the graphics card you buy is developed specifically for that card series. On Apple systems, the OpenGL library is maintained by Apple itself; on Linux, there are libraries provided by graphics card manufacturers as well as hobbyist-maintained versions. This also means that whenever an OpenGL library behaves differently from the specification, it is basically a bug left by the library's developers.
Since most OpenGL implementations are written by graphics card vendors, a bug can usually be resolved by upgrading the graphics card driver. The driver includes the latest version of OpenGL your card can support, which is why it is always advisable to update your graphics card drivers from time to time.
Two. OpenGL terminology explained
1. OpenGL context
- The state of OpenGL is often called the OpenGL context (Context). We usually change OpenGL state by setting options and manipulating buffers, and then render using the current OpenGL context.
- Before your application calls any OpenGL instruction, it needs to arrange for an OpenGL context to be created first. This context is one very large state machine (State Machine) that stores all of OpenGL's states; it is also the basis on which OpenGL instructions execute.
- The OpenGL library is written in C, and bindings for many other languages are derived from it, but its core is still a C library. Because some of C's language constructs do not translate easily to other high-level languages, OpenGL was designed with several abstractions. The object (Object) is one of them.
- In OpenGL an object is a collection of options that represents a subset of OpenGL's state. For example, we could use an object to represent the settings of the drawing window, and then set its size, the color bits it supports, and so on. You can think of an object as a C-style structure (Struct):
```c
struct object_name {
    float option1;
    int   option2;
    char *name;
};
```
- Switching contexts can be expensive, but different drawing modules may need completely separate state management. You can therefore create multiple contexts in one application, use different contexts on different threads, and share textures, buffers, and so on between them. This approach is much more efficient than repeatedly switching contexts or changing large amounts of render state.
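Context creation itself is not part of OpenGL; it is handled by the windowing system or a helper library. Below is a minimal sketch, assuming the cross-platform GLFW library is used (the window size and title are arbitrary):

```c
#include <GLFW/glfw3.h>

int main(void) {
    // Initialize GLFW, which will talk to the windowing system for us.
    if (!glfwInit())
        return -1;

    // Creating a window also creates an OpenGL context for it.
    GLFWwindow *window = glfwCreateWindow(800, 600, "Demo", NULL, NULL);
    if (!window) {
        glfwTerminate();
        return -1;
    }

    // Make that context current on this thread; all subsequent OpenGL
    // calls on this thread operate on this context's state machine.
    glfwMakeContextCurrent(window);

    /* ... issue OpenGL calls here ... */

    glfwTerminate();
    return 0;
}
```

Once glfwMakeContextCurrent has been called, every OpenGL call issued on this thread reads and writes this context's state.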
2. The OpenGL state machine
- A state machine, as the name implies, is a machine containing various states. A state machine describes the states an object goes through during its life cycle, the transitions between those states, and the causes, conditions, and activities involved in each transition. In other words, a state machine describes the sequence of states an object passes through in response to events during its life cycle, together with its responses to those events. It therefore has the following characteristics:
  - It has memory and can remember its current state.
  - It can receive input; based on the input and its previous state, it modifies its current state and can produce corresponding output.
  - When it enters a special state (the shutdown state), it no longer receives input and stops working.
- Viewed as a state machine, OpenGL can be read like this:
- OpenGL can record its own state (such as the current color, or whether blending is enabled).
- OpenGL can accept input: when we call an OpenGL function, we are really feeding OpenGL input. For example, when we call glColor3f, OpenGL receives this input and modifies its current-color state.
- OpenGL can enter a stop state and no longer accept input. OpenGL always stops working before the program exits.
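A few legacy fixed-function calls make the state-machine behavior concrete. In this minimal sketch, every line either flips a state switch or is interpreted against the current state:

```c
glEnable(GL_BLEND);            /* flip one state switch: blending on  */
glColor3f(1.0f, 0.0f, 0.0f);   /* set the current-color state to red  */
glBegin(GL_TRIANGLES);         /* vertices issued now read that state */
glVertex2f(-0.5f, -0.5f);
glVertex2f( 0.5f, -0.5f);
glVertex2f( 0.0f,  0.5f);
glEnd();
glDisable(GL_BLEND);           /* flip the switch back off            */
```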
3. Rendering
- The operation of converting graphics/image data into an image of a 3D scene is called rendering. Put simply, rendering transforms the graphics and image data to be displayed into the content actually shown on the screen.
4. VertexArray and VertexBuffer
Modern buildings are mostly frame structures: when building, the main frame of the house goes up first, and then different materials (window glass, brick walls, and so on) are filled into the corresponding positions according to the design drawings. OpenGL displays an image in a similar way: first the frame, then the filling. Unlike the real world, images in OpenGL are made up of primitives. In OpenGL ES there are three types of primitives: points, lines, and triangles. So as long as we know the positions of a primitive's vertices, we can determine exactly where the image goes. Where, then, does the vertex data end up being stored?
- When drawing a shape, the points that define the positions of its primitives (two points for a line, three for a triangle) are the vertices.
- The developer can optionally set a function pointer so that the vertex data is passed directly from memory when the draw call is issued; data kept in CPU memory this way is called a vertex array.
- Better performance comes from pre-allocating a block of video memory and transferring the vertex data into it ahead of time. That block of video memory is called the vertex buffer.
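Here is a minimal sketch of moving vertex data from a client-memory vertex array into a vertex buffer object (the coordinates and identifier names are illustrative):

```c
/* Three vertices of a triangle, held in client (CPU) memory --
   this array is the "vertex array". */
float vertices[] = {
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.0f,  0.5f, 0.0f
};

/* For better performance, copy the data once into GPU memory:
   a vertex buffer object (VBO). */
unsigned int vbo;
glGenBuffers(1, &vbo);               /* allocate a buffer object name   */
glBindBuffer(GL_ARRAY_BUFFER, vbo);  /* bind it as the current buffer   */
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
```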
5. Pipeline
An automobile production line is the sequence of steps that assembles an alloy frame, step by step, into a car we can drive. The OpenGL pipeline is similar: it is the process by which we render images. A pipeline is an abstract concept; it is called a pipeline because the graphics card processes data in a fixed order, and strictly in that order, just as water flows from one end of a pipe to the other and the sequence cannot be broken. Pipelines come in two kinds, fixed pipelines (with stored shaders) and programmable pipelines:
- Fixed pipeline, also called stored shaders: when using the fixed pipeline, OpenGL neither requires nor allows you to define the vertex- or pixel-rendering logic yourself. It contains a complete, built-in rendering process; the developer only needs to pass in the required rendering parameters from CPU-side code and toggle particular switches to achieve different rendering results.
- Programmable pipeline: with OpenGL's programmable pipeline the developers must implement the rendering themselves, otherwise no final image can be drawn. Developers can write whatever logic their needs demand in the vertex and pixel stages, which lets them strip the rendering pipeline down to the minimum for efficiency, or implement specific algorithms and logic to achieve effects the fixed pipeline cannot render.
The difference between the fixed and the programmable pipeline is that the fixed pipeline is wrapped up inside OpenGL and can render with nothing more than parameters passed in, while the programmable pipeline requires developers to fully customize the rendering. However, "programmable" does not mean every step in the pipeline can be customized: up to and including OpenGL ES 3.0, only the two most basic shaders can be customized, the vertex shader and the fragment shader.
6. Shader program
**Shaders** are small programs that run on the GPU. Each of these small programs runs at a specific stage of the graphics rendering pipeline. At its most basic, a shader is simply a program that converts input into output. Shaders are also very isolated programs: they cannot communicate with each other; the only communication between them is through their inputs and outputs. Some shader stages are configurable by the developer, which lets us replace the default shaders with ones we write ourselves. This gives us much finer control over specific parts of the rendering pipeline, and because shaders run on the GPU, they also save us valuable CPU time.
- GLSL: the OpenGL Shading Language. GLSL is a C-like language tailored for graphics computation, and it contains useful features aimed at vector and matrix operations. A shader always starts with a version declaration, followed by input and output variables, uniforms, and a main function. Each shader's entry point is its main function, where we process all the input variables and write the results into the output variables.
A typical shader has the following structure:
```glsl
#version version_number
in type in_variable_name;
in type in_variable_name;

out type out_variable_name;

uniform type uniform_name;

void main()
{
    // Process the inputs and do some graphics stuff
    ...
    // Output the processed result to the output variable
    out_variable_name = weird_stuff_we_processed;
}
```
The OpenGL Shading Language (GLSL) is the language used for shader programming in OpenGL, that is, for the short custom programs written by developers. They execute on the graphics card's GPU (Graphics Processing Unit) in place of part of the fixed rendering pipeline, making individual stages of the pipeline programmable, for example view transformation, projection transformation, and so on. GLSL shader code is divided into two parts: the VertexShader (vertex shader) and the FragmentShader (fragment shader).
- Common shaders include the VertexShader, the FragmentShader/PixelShader, the GeometryShader, and the tessellation shader. "Fragment shader" and "pixel shader" are merely the different names used in OpenGL and DirectX respectively.
- OpenGL handles shaders much like any compiler handles source code: through compiling, linking, and other steps it produces a shader program (glProgram), which contains the operating logic of both the vertex shader and the fragment shader. When drawing, OpenGL first runs the vertex shader over the incoming vertex data. The vertices are then assembled into primitives (primitive assembly). Next comes rasterization, which converts the vector primitives into rasterized data. Finally, the rasterized data is fed to the fragment shader, which operates on each fragment and determines its color.
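A minimal C sketch of that compile-and-link process follows; compile/link status checks via glGetShaderiv and glGetProgramiv are omitted for brevity, and the two source strings are assumed to be defined elsewhere (for example as in the sketch after the fragment shader section below):

```c
extern const char *vertexSrc;    /* assumed GLSL vertex shader source   */
extern const char *fragmentSrc;  /* assumed GLSL fragment shader source */

GLuint buildProgram(void) {
    /* Compile the vertex shader. */
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vertexSrc, NULL);
    glCompileShader(vs);

    /* Compile the fragment shader. */
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fragmentSrc, NULL);
    glCompileShader(fs);

    /* Link both into one shader program (glProgram). */
    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);

    /* The individual shader objects are no longer needed once linked. */
    glDeleteShader(vs);
    glDeleteShader(fs);
    return program;
}
```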
1). VertexShader
- Generally used to handle per-vertex transformations such as rotation, translation, and projection.
- The vertex shader is the program OpenGL uses to compute vertex attributes. It is a per-vertex program: the vertex shader executes once for each vertex, in parallel, and while it runs it cannot access the data of any other vertex.
- Typically, the vertex attributes that need computing include vertex coordinate transformation, per-vertex lighting, and so on. This is where vertex coordinates are converted from the model's own coordinate system into the normalized coordinate system.
2). Fragment shader (FragmentShader)
- Generally used to compute and fill the color of each pixel in the image.
- The fragment shader is the program OpenGL uses to compute the color of a fragment (pixel). It is a per-pixel program: the fragment shader executes once for each pixel, in parallel.
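For illustration, here is a hypothetical minimal vertex/fragment shader pair, written as C string literals so it could be fed straight into the compile sketch above. Names such as aPosition, uMVP, and fragColor are illustrative, and OpenGL ES 3.0 syntax is assumed:

```c
/* Hypothetical GLSL ES 3.0 sources; identifier names are illustrative. */
const char *vertexSrc =
    "#version 300 es\n"
    "in vec4 aPosition;\n"
    "uniform mat4 uMVP;\n"                  /* model-view-projection  */
    "void main() {\n"
    "    gl_Position = uMVP * aPosition;\n" /* per-vertex transform   */
    "}\n";

const char *fragmentSrc =
    "#version 300 es\n"
    "precision mediump float;\n"
    "uniform vec4 uColor;\n"
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    fragColor = uColor;\n"             /* per-pixel color fill   */
    "}\n";
```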
7. Rasterization
- Rasterization is the process of converting vertex data into fragments; it transforms a shape into an image made up of individual raster cells. Each fragment corresponds to one pixel in the frame buffer.
- Rasterization is really the process of turning geometric primitives into a two-dimensional image, and it involves two pieces of work. The first: determine which integer grid cells in window coordinates are covered by the primitive. The second: assign a color value and a depth value to each of those cells. The rasterization process produces fragments.
- The mathematical description of an object, together with its color information, is converted into pixels at the corresponding screen positions and colors to fill those pixels. This process is called rasterization, and it is a conversion from an analog signal to a discrete signal.
8. Texture
As we've seen before, we can add color to each vertex to add graphic detail and create interesting images. However, if we want a shape to look realistic, we must have enough vertices to specify enough colors. This carries a lot of extra overhead, because each model then needs many more vertices and each vertex needs a color attribute.
Artists and programmers generally prefer to use textures (Texture). A texture is a 2D image (1D and 3D textures even exist) used to add detail to an object; you can imagine a texture as a piece of paper painted with a brick pattern, folded seamlessly onto your 3D house so that the house looks like it has a brick facade. Because we can pack so much detail into a single image, we can make an object look very fine without specifying extra vertices. As shown in the picture below, a brick image is pasted onto the triangle.
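Uploading a texture takes only a few calls. A minimal sketch, where pixels, width, and height are assumed to come from an image decoder (the decoding step is not shown):

```c
/* Assumed to be filled in by an image decoder beforehand. */
unsigned char *pixels = NULL;   /* decoded RGBA data */
int width = 256, height = 256;

unsigned int texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);

/* How the texture is sampled when shrunk or stretched. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* Upload the image data to the GPU. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
```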
9. Blending
After the testing stage, if a fragment still has not been discarded, its color is blended with the color already in the frame buffer. The blending algorithm can be chosen through OpenGL functions, but the algorithms OpenGL provides are limited; if a more complex blending algorithm is needed, it can be implemented in the fragment shader, though of course the performance will be worse than the built-in blending.
Object transparency comes in two kinds: fully transparent and partially transparent. Full transparency lets the color behind pass through completely, while partial transparency lets the color behind show through mixed with the object's own color. To render these different levels of transparency we need to enable blending (glEnable(GL_BLEND)). A simple way to picture it: two pictures of different colors are placed over each other, and where they overlap, the colors must be mixed to produce a new color.
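A minimal sketch of the most common alpha-blending setup:

```c
/* result = src.rgb * src.a + dst.rgb * (1 - src.a) */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

/* ... draw the partially transparent geometry here ... */

glDisable(GL_BLEND);
```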
10. Transformation
We now know how to create an object, color it, and texture it to give it detail, but it's still not interesting enough, because these are all static objects. We could try changing the objects' vertices every frame and reconfiguring the buffers to make them move, but that is tedious and costs a lot of processing time. There is a better way: use matrix (Matrix) objects to transform (Transform) the object. These matrices are known as transformations.
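A legacy fixed-function sketch of the idea: the vertex data in memory never changes, only the current matrix does:

```c
float angle = 45.0f;  /* rotation angle in degrees, updated each frame */

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.5f, 0.0f, 0.0f);      /* move the object to the right */
glRotatef(angle, 0.0f, 0.0f, 1.0f);  /* spin it around the z axis    */
/* ... draw the object; its vertices in memory are unchanged ... */
```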
11. Projection matrix
- Projection plays a crucial role in OpenGL. Our common display devices, TVs, mobile phones, computers, are two-dimensional, yet much of what we see displayed is stereoscopic, that is, three-dimensional. Projection is what converts 3D coordinates into 2D screen coordinates, where the actual lines will be drawn. Once again we do this with a matrix: the projection matrix.
- Projection is divided into orthographic projection and perspective projection; the two methods produce very different results, as the diagram below illustrates:
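In the legacy fixed-function API the two projections can be set up as below (the view-volume bounds chosen here are arbitrary):

```c
glMatrixMode(GL_PROJECTION);
glLoadIdentity();

/* Orthographic: a box-shaped view volume, no perspective shrinking. */
glOrtho(-1.0, 1.0, -1.0, 1.0, 0.1, 100.0);

/* Or perspective: a frustum, so distant objects appear smaller:     */
/* glFrustum(-1.0, 1.0, -1.0, 1.0, 0.1, 100.0);                      */
```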
12. Rendering to screen / swapping buffers (SwapBuffer)
- A render buffer is usually mapped to a system resource such as a window. If you render the image directly into the render buffer that corresponds to the window, the image appears on screen. In the simplest case there is only one frame buffer, and then reading from and refreshing the frame buffer can be inefficient.
- To solve this, regular OpenGL programs have at least two buffers, which is known as double buffering. The buffer being shown on screen is called the on-screen buffer, and the one not being shown is the off-screen buffer. The GPU pre-renders a frame into one buffer for the video controller to read, and once the next frame has been rendered, the GPU simply points the video controller at the second buffer. This brings a huge gain in efficiency.
- Double buffering solves the efficiency problem but introduces a new one. If the GPU submits a new frame to the frame buffer and swaps the two buffers while the video controller has not yet finished reading, that is, while the screen content is only half displayed, the video controller will show the lower part of the screen from the new frame's data, causing the picture to tear, as shown below:
To solve this, GPUs usually have a mechanism called VSync (vertical synchronization, also written V-Sync). When VSync is enabled, the GPU waits for a VSync signal from the display before rendering a new frame and updating the buffer. This fixes the tearing and makes the picture smoother, but it consumes more computing resources and can introduce some latency.
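Continuing the GLFW sketch from the context section, a double-buffered render loop with VSync looks roughly like this:

```c
glfwSwapInterval(1);  /* wait for one VSync signal per buffer swap */

while (!glfwWindowShouldClose(window)) {
    glClear(GL_COLOR_BUFFER_BIT);

    /* ... render the next frame into the off-screen (back) buffer ... */

    glfwSwapBuffers(window);  /* swap the back and front buffers */
    glfwPollEvents();
}
```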
Resources: learnopengl-cn.github.io, www.w3cschool.cn/stiahw/fb7j…
If you found this helpful, remember to give it a like! I hear that readers who leave a like pass every exam and win every prize. ღ(´·ᴗ·`)