OpenGL ES

1. Introduction to OpenGL ES

OpenGL ES (OpenGL for Embedded Systems) is an advanced 3D graphics API for handheld and embedded systems.

OpenGL ES is currently the dominant graphics API on smartphones. Supported platforms: iOS, Android, BlackBerry, Bada, Linux, Windows.

  • OpenGL ES is a simplified version of OpenGL that eliminates redundancy, providing a library that is both easy to learn and easier to implement in mobile graphics hardware

  • OpenGL ES is an advanced 3D graphics API for embedded and mobile devices, on platforms such as iOS, Android, and Windows
  • OpenGL ES is cross-platform and does not provide any window-related methods

2. OpenGL ES versions

  • OpenGL ES 1.x: for fixed-function pipeline hardware
  • OpenGL ES 2.x: for programmable pipeline hardware
  • OpenGL ES 3.x: an extension of OpenGL ES 2.0

3. OpenGL ES rendering process

I touched on this earlier when covering OpenGL:

  • Operations on the CPU:

The app invokes the OpenGL ES framework through the OpenGL ES API; the OpenGL ES client then drives the OpenGL ES server, and the vertex data needed to render the image is transferred to the GPU for processing.

  • Operations on the GPU:

The data transferred from the client is processed by the corresponding shaders and finally displayed on the screen.
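
To make the client-to-GPU hand-off concrete, here is a minimal sketch using the OpenGL ES 2.0 C API (error handling abbreviated; the GL calls are standard, but the helper functions themselves are just for illustration):

```c
#include <GLES2/gl2.h>
#include <stdio.h>

/* Compile one shader stage; type is GL_VERTEX_SHADER or GL_FRAGMENT_SHADER. */
static GLuint compile_shader(GLenum type, const char *source) {
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, NULL); /* client hands the source to the driver */
    glCompileShader(shader);                  /* compiled for execution on the GPU */

    GLint ok = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[512];
        glGetShaderInfoLog(shader, sizeof log, NULL, log);
        fprintf(stderr, "shader compile failed: %s\n", log);
    }
    return shader;
}

/* Link the two stages into the program object the GPU runs for each draw call. */
static GLuint link_program(GLuint vertex_shader, GLuint fragment_shader) {
    GLuint program = glCreateProgram();
    glAttachShader(program, vertex_shader);
    glAttachShader(program, fragment_shader);
    glLinkProgram(program);
    return program;
}
```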

4. OpenGL ES graphics pipeline

  • We can directly manipulate vertex buffers, vertex shaders, texture coordinates, and fragment shaders by calling the API
  • Vertex data must be stored in memory before being copied into the vertex buffer
  • Normal flow: through API calls, copy vertex data from memory into video memory (so the GPU can read it more efficiently) –> pass the vertex data and texture coordinates into the vertex shader for processing (graphics transformations and other operations) –> perform primitive assembly according to the primitive connection mode –> rasterize on the GPU –> run the fragment shader, which processes the color value for each pixel –> store the processed result in the framebuffer –> display it on the screen (a sketch of the first step follows this list)
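
Here is a minimal sketch of the copy-into-the-vertex-buffer step, assuming an ES 2.0 context is already current; the triangle data and the attrib_position parameter are made up for illustration:

```c
#include <GLES2/gl2.h>

/* Vertex data in CPU memory: x, y for one triangle. */
static const GLfloat vertices[] = {
     0.0f,  0.5f,
    -0.5f, -0.5f,
     0.5f, -0.5f,
};

void upload_and_draw(GLuint attrib_position) {
    /* Copy the vertex data from memory into a GPU-side buffer. */
    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof vertices, vertices, GL_STATIC_DRAW);

    /* Describe the layout so the vertex shader can read the attribute. */
    glVertexAttribPointer(attrib_position, 2, GL_FLOAT, GL_FALSE, 0, 0);
    glEnableVertexAttribArray(attrib_position);

    /* Primitive connection mode: GL_TRIANGLES. */
    glDrawArrays(GL_TRIANGLES, 0, 3);
}
```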

Let’s take a look at the official picture. It actually means the same thing.

  • Application: provides the vertex information and image information used for primitive assembly
  • Vertex shader: handles vertices, i.e. graphics transformations (rotation, scaling, translation)
  • Geometry (primitive assembly): primitive assembly + clipping (the parts beyond the screen are clipped away)
  • Fragment: texturing + fog
  • Framebuffer operations: alpha blending, stencil and depth testing, and finally blending; these operations all happen in the framebuffer just before the result is displayed

Second, a detailed analysis of the whole process

1. Vertex shaders

What does a vertex shader do?

1. Write the final processed result to gl_Position

2. Modify gl_PointSize

3. Compute matrix transformations

4. Evaluate the lighting formula to generate a per-vertex color

5. Generate or transform texture coordinates and pass them on to the fragment shader (the fragment shader cannot receive attributes directly, so they are bridged through the vertex shader)

Summary: it can be used to perform custom calculations and to implement new transformations, lighting, or vertex-based effects that the traditional fixed-function pipeline does not allow.

Example: (note: a .vsh file cannot contain Chinese characters)

```glsl
/* vec4 and vec2 are vector types (a 4-component and a 2-component vector);
   mat4 is a 4x4 matrix; varying is a qualifier that passes the texture
   coordinate on to the fragment shader; lowp means low precision. */
attribute vec4 position;
attribute vec2 textCoordinate;
uniform mat4 rotateMatrix;
varying lowp vec2 varyTextCoord;

void main() {
    /* Bridge the texture coordinate to the fragment shader. */
    varyTextCoord = textCoordinate;

    /* Multiply the vertex by the rotation matrix (column vector times
       column matrix) to get the rotated vertex coordinates, then assign
       them to gl_Position. */
    vec4 vPos = position;
    vPos = vPos * rotateMatrix;
    gl_Position = vPos;
}
```

2. Primitive assembly

**Primitive:** points, lines, triangles, etc.

**Primitive assembly:** assembles vertex data into primitives. Clipping, perspective division, and the viewport transformation are performed at this stage.

Developers cannot program this stage. Once primitive assembly is done, the pipeline moves on to the next step: rasterization.

3. Rasterization

Rasterization is the process of converting primitives into a set of two-dimensional fragments. (Why two-dimensional fragments? Because our screens are two-dimensional.)

Developers cannot program this stage either. Here the pixels corresponding to each primitive (point/line/triangle) are generated, and the rasterized result is then processed by the fragment shader.

4. Fragment shader

What is a fragment shader for? 1. Computing the color of each pixel. 2. Fetching the texture value of the corresponding pixel in the image.

Summary: it can be used to fill in the color of every pixel of an image/video/graphic (for example, adding a filter to a video actually means changing the color value of every pixel in the video).

Example: (note: a .fsh file cannot contain Chinese characters)

```glsl
varying lowp vec2 varyTextCoord;
uniform sampler2D colorMap;

void main() {
    /* texture2D(sampler, coordinate) fetches the color value at the given
       position/coordinate; gl_FragColor is a built-in variable that receives
       the final color value. */
    gl_FragColor = texture2D(colorMap, varyTextCoord);
}
```

If there are 1,000 pixels, the fragment shader executes 1,000 times. So should we worry about performance?

  • The fragment shader is executed by the GPU, which achieves true high concurrency, so there are none of the performance concerns this workload would raise on a CPU

5. Per-fragment operations

This is also a stage where we can’t program.

  • Pixel ownership test: determines whether the pixel at position (Xw, Yw) in the framebuffer is currently owned by OpenGL ES. For example, if a View displaying an OpenGL ES framebuffer is obscured by another View, the windowing system can decide that the obscured pixels do not belong to the OpenGL ES context. The pixel ownership test is part of OpenGL ES; it is not controlled by the developer but carried out internally by OpenGL ES.
  • Scissor test: determines whether (Xw, Yw) lies within the scissor rectangle that is part of the OpenGL ES state. If the fragment is outside the scissor region, it is discarded.
  • Depth test: compares the incoming fragment's depth value with the depth value stored in the framebuffer to determine whether the fragment is kept or discarded.
  • Blend: blending combines the newly generated fragment color with the color value stored at that location in the framebuffer.
  • Dither: dithering can be used to minimize artifacts caused by storing color values with limited precision in the framebuffer.
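
Apart from the pixel ownership test, which OpenGL ES performs internally, these operations are switched on and configured through the API. A minimal sketch, assuming an ES 2.0 context; the scissor rectangle and blend factors are illustrative choices:

```c
#include <GLES2/gl2.h>

void configure_fragment_tests(void) {
    /* Scissor test: discard fragments outside the scissor rectangle. */
    glEnable(GL_SCISSOR_TEST);
    glScissor(0, 0, 320, 480);   /* x, y, width, height in window coordinates */

    /* Depth test: keep a fragment only if it is closer than the stored depth. */
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);

    /* Blending: combine the new fragment color with the framebuffer color. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    /* Dithering: reduce banding when color precision is limited. */
    glEnable(GL_DITHER);
}
```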

Third, understanding EGL and EAGL

As we know, OpenGL ES commands require a rendering context and a drawing surface to complete the drawing of a graphic image. However, the OpenGL ES API does not specify how to create a rendering context or how the context is connected to the native windowing system.

  • Rendering context: stores the relevant OpenGL ES state.
  • Drawing surface: the surface used to draw primitives; it specifies the types of buffers required for rendering, such as the color buffer, depth buffer, and stencil buffer.

EGL (Embedded Graphics Library): the interface between Khronos rendering APIs (such as OpenGL ES) and the native window system. iOS supports OpenGL ES but does not support EGL.
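
On platforms that do use EGL (Android, for example), the rendering context and drawing surface are obtained roughly as follows. A minimal sketch, assuming EGL 1.x; native_window stands in for a platform window handle, and error checking is omitted:

```c
#include <EGL/egl.h>

EGLContext create_context(EGLNativeWindowType native_window) {
    EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(display, NULL, NULL);

    /* Ask for a config with color, depth, and stencil buffers. */
    const EGLint config_attribs[] = {
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_DEPTH_SIZE, 16, EGL_STENCIL_SIZE, 8,
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_NONE
    };
    EGLConfig config;
    EGLint num_configs;
    eglChooseConfig(display, config_attribs, &config, 1, &num_configs);

    /* Drawing surface tied to the native window. */
    EGLSurface surface = eglCreateWindowSurface(display, config, native_window, NULL);

    /* Rendering context that stores the OpenGL ES state. */
    const EGLint ctx_attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext context = eglCreateContext(display, config, EGL_NO_CONTEXT, ctx_attribs);

    eglMakeCurrent(display, surface, surface, context);
    return context;
}
```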

EAGL: since EGL is not supported on iOS, Apple provides its own iOS counterpart to the EGL API, called EAGL.