Contents
- Matrices and matrix transformations
- Coordinate systems
- OpenGL matrices and matrix transformations
- Practice: translation, rotation, scaling, 3D
- References
- Takeaways
1. Matrices and matrix transformations
1.1 A review of matrix basics
OpenGL makes heavy use of vectors and matrices. Among the most important uses of matrices are building projections (e.g., orthographic and perspective projections) and rotating, translating, and scaling objects. Let's look at the commonly used matrix types:
Identity matrix
Scaling matrix
Rotation matrix
Translation matrix
Matrix multiplication: when composing transformations, matrices are applied from right to left; the product is not commutative but is associative: AB ≠ BA, while C(BA) = (CB)A = CBA and C(B(Av)) = (CBA)v.
Orthographic projection matrix
Projection matrix
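The multiplication rules above can be checked with a minimal plain-Java sketch (our own helper names, no Android or OpenGL dependency; the 4×4 layout is row-major for readability): the product of two transformation matrices depends on their order, but grouping does not matter.

```java
public class MatDemo {
    // Multiply two 4x4 row-major matrices: out = a * b
    static float[] mul(float[] a, float[] b) {
        float[] out = new float[16];
        for (int r = 0; r < 4; r++)
            for (int c = 0; c < 4; c++)
                for (int k = 0; k < 4; k++)
                    out[r * 4 + c] += a[r * 4 + k] * b[k * 4 + c];
        return out;
    }

    static float[] identity() {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1f;
        return m;
    }

    public static void main(String[] args) {
        // A scales x by 2; B translates x by 3 (row-major: translation in the last column)
        float[] a = identity(); a[0] = 2f;
        float[] b = identity(); b[3] = 3f;
        float[] ab = mul(a, b);
        float[] ba = mul(b, a);
        // Not commutative: AB shifts x by 6, BA shifts x by 3
        System.out.println(ab[3] + " vs " + ba[3]); // 6.0 vs 3.0
        // Associative: (AB)C == A(BC)
        float[] c = identity(); c[7] = 1f; // translate y by 1
        float[] left = mul(mul(a, b), c);
        float[] right = mul(a, mul(b, c));
        System.out.println(java.util.Arrays.equals(left, right)); // true
    }
}
```

The order difference is exactly why "scale then translate" and "translate then scale" give different results on screen.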
Homogeneous coordinates
What are homogeneous coordinates?
A 3D point can be scaled and rotated by multiplying its 3D vector by a 3×3 matrix. In Cartesian (Euclidean) coordinates, however, translation is an addition rather than a multiplication, so there is no way to express every transformation as one linear operation. Homogeneous coordinates solve this by representing an N-dimensional point with N+1 components: a point (X, Y, Z) becomes (X, Y, Z, w), with x = X/w, y = Y/w, z = Z/w. With homogeneous coordinates, translation can also be expressed as a matrix multiplication, which guarantees formal linear consistency: no matter how many transformations you apply, they can all be represented as a chain of matrix products, which is very convenient. In OpenGL, the fourth component of 3D data is also used to implement the perspective projection transformation.
In short, homogeneous coordinates fold the multiplication and addition of matrix operations into a single form, unifying all transformations (scaling, rotation, translation, and so on) as a series of matrix products and guaranteeing the linear consistency of the form.
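A short plain-Java sketch of this idea (hypothetical helper names, row-major layout): translation, which is addition in Cartesian coordinates, becomes one 4×4 matrix multiplication in homogeneous coordinates, and the Cartesian point is recovered by dividing by w.

```java
public class Homogeneous {
    // Apply a 4x4 row-major matrix to a homogeneous point (X, Y, Z, w)
    static float[] transform(float[] m, float[] p) {
        float[] out = new float[4];
        for (int r = 0; r < 4; r++)
            for (int k = 0; k < 4; k++)
                out[r] += m[r * 4 + k] * p[k];
        return out;
    }

    // Build a translation matrix that moves a point by (tx, ty, tz)
    static float[] translation(float tx, float ty, float tz) {
        return new float[] {
            1, 0, 0, tx,
            0, 1, 0, ty,
            0, 0, 1, tz,
            0, 0, 0, 1,
        };
    }

    // Homogeneous (X, Y, Z, w) -> Cartesian (X/w, Y/w, Z/w)
    static float[] divide(float[] p) {
        return new float[] { p[0] / p[3], p[1] / p[3], p[2] / p[3] };
    }

    public static void main(String[] args) {
        float[] p = { 1, 2, 3, 1 };                      // point (1, 2, 3) with w = 1
        float[] moved = transform(translation(10, 0, 0), p);
        System.out.println(java.util.Arrays.toString(divide(moved))); // [11.0, 2.0, 3.0]
    }
}
```

Because the translation is now a multiplication, it can be chained with rotation and scaling matrices into one product.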
2. Coordinate systems
2.1 Left-handed and right-handed coordinate systems
With the thumb pointing along the positive x-axis and the index finger along the positive y-axis, the direction of the z-axis distinguishes the two systems. Normalized device coordinates use a left-handed coordinate system, while OpenGL traditionally uses a right-handed one, which is why Android's Matrix class generates matrices that invert the z-axis by default.
2.2 Object Coordinate System (modeling coordinate system)
This is a local coordinate system that makes modeling and transformation convenient. For example, when creating a circle, each point on it is usually built with the circle's center as the reference point; in effect, a reference coordinate system with the center as its origin has been constructed.
2.3 World Coordinate System
After each object has been modeled, the next step is to assemble the various figures into the drawing plane. To determine the position of each figure, the objects must leave their individual object (modeling) coordinate systems and be placed into one unified coordinate system, called the world coordinate system.
2.4 Eye coordinate System (Observation coordinate system)
The user defines the viewing region and direction according to the requirements of the display, that is, the position and orientation of the viewpoint (or camera), and the graphics in the world coordinate system are then repositioned and described from the observer's point of view.
2.5 Normalized Device Coordinate System
To make viewing independent of the output device, the scene described in eye coordinates is transformed into an intermediate coordinate system that is both device-independent and easy to map to device coordinates. Its coordinate range is normalized (classically [0, 1]; OpenGL's NDC uses [-1, 1]), which improves the portability of the application.
2.6 Device Coordinate System (Screen Coordinate system)
This defines the pixels on the surface of a particular graphics display device. Each specific display device has its own separate coordinate system.
3. OpenGL matrix transformations
Now that we've reviewed the basics, let's look at matrix transformations in OpenGL.
The vertex shader's position input arrives in object coordinates, and its position output is in clip coordinates.
The Model-View-Projection (MVP) matrices:
- Model matrix: transforms object coordinates to world coordinates
- View matrix: transforms world coordinates to eye coordinates
- Projection matrix: transforms eye coordinates to clip coordinates (homogeneous coordinates)
In traditional OpenGL, the model and view matrices are merged into one matrix, the model-view matrix, which transforms object coordinates to eye coordinates. Any position and orientation in space can be uniquely described by a 4×4 matrix, and if we multiply every vertex of an object by this matrix, the entire object is transformed to the given position and orientation. The matrix can be built with glRotatef, glTranslatef, glScalef, and so on, and the application itself is responsible for managing it.
View transformations: a view transformation lets us place the viewpoint wherever we want and look at the scene in any direction. Determining a view transformation is like placing a camera in the scene and pointing it in a certain direction.
Model transformations: model transformations manipulate the model and specific objects within it, moving objects to the desired position and then rotating and scaling them.
Projection transformations: the projection transformation is applied to vertices after the model-view transformation; it specifies how the finished scene (all model transformations complete) is projected onto the final image on the screen. Orthographic projection: all polygons are drawn on screen at exactly their specified relative sizes. Perspective projection: characterized by foreshortening, which makes distant objects appear smaller than nearby objects of the same size.
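The foreshortening of perspective projection can be sketched with similar triangles: with the eye at the origin and a projection plane at distance n, a point at depth d lands at x' = x·n/d, so an object twice as far away appears half as large. This is a simplified pinhole model in plain Java (our own helper, not the actual OpenGL projection matrix):

```java
public class Foreshorten {
    // Project coordinate x at depth d onto a plane at distance n from the eye
    static float project(float x, float d, float n) {
        return x * n / d;
    }

    public static void main(String[] args) {
        float n = 1f;
        // Two objects of the same size, one at depth 2 and one at depth 4
        System.out.println(project(1f, 2f, n)); // 0.5
        System.out.println(project(1f, 4f, n)); // 0.25
    }
}
```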
Viewport transformations: when all the transformations are complete, you have a two-dimensional projection of the scene that will be mapped to a window somewhere on the screen. This mapping into physical window coordinates is the final transformation, called the viewport transformation.
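The viewport mapping itself is simple: NDC x and y in [-1, 1] are scaled and shifted into the window rectangle passed to glViewport. A plain-Java sketch (the formula follows the standard OpenGL viewport transform; the helper name is ours):

```java
public class Viewport {
    // Map NDC coordinates in [-1, 1] to window coordinates
    // for a viewport at (x0, y0) with size w x h
    static float[] ndcToWindow(float xNdc, float yNdc, int x0, int y0, int w, int h) {
        float xw = x0 + (xNdc + 1f) * 0.5f * w;
        float yw = y0 + (yNdc + 1f) * 0.5f * h;
        return new float[] { xw, yw };
    }

    public static void main(String[] args) {
        // The center of NDC maps to the center of a 640x480 viewport at the origin
        float[] p = ndcToWindow(0f, 0f, 0, 0, 640, 480);
        System.out.println(p[0] + ", " + p[1]); // 320.0, 240.0
    }
}
```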
Image from The OpenGL Programming Guide (the Red Book)
4. Practice
4.1 Translation, rotation, and scaling
```glsl
// A uniform mat4 that does not change per vertex
uniform mat4 u_Matrix;
attribute vec4 a_Position;
attribute vec4 a_Color;
varying vec4 v_Color;

void main() {
    v_Color = a_Color;
    gl_Position = u_Matrix * a_Position;
}
```
Key renderer code:
```java
@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    Log.i(TAG, "onSurfaceCreated: curThread= " + Thread.currentThread());
    GLES20.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    String vertexCode = ShaderHelper.loadAsset(MyApplication.getContext().getResources(), "vertex_shader.glsl");
    String fragmentCode = ShaderHelper.loadAsset(MyApplication.getContext().getResources(), "fragment_shader.glsl");
    programId = ShaderHelper.loadProgram(vertexCode, fragmentCode);
    uMatrixLocation = GLES20.glGetUniformLocation(programId, U_MATRIX);
    int aPosition = GLES20.glGetAttribLocation(programId, A_POSITION);
    Log.i(TAG, "drawFrame: aPosition=" + aPosition);
    mVertexData.position(0);
    GLES20.glVertexAttribPointer(aPosition,
            COORDS_PER_VERTEX,  // number of components per vertex position
            GLES20.GL_FLOAT,    // data type of each component
            false,
            STRIDE,             // bytes from one vertex to the next
            mVertexData);       // the vertex buffer
    // Enable the vertex attribute
    GLES20.glEnableVertexAttribArray(aPosition);
    int aColor = GLES20.glGetAttribLocation(programId, A_COLOR);
    mVertexData.position(COORDS_PER_VERTEX);
    GLES20.glVertexAttribPointer(aColor, COLOR_PER_VERTEX, GLES20.GL_FLOAT, false, STRIDE, mVertexData);
    GLES20.glEnableVertexAttribArray(aColor);
}

@Override
public void onDrawFrame(GL10 gl) {
    Log.i(TAG, "onDrawFrame: curThread= " + Thread.currentThread());
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    GLES20.glUniformMatrix4fv(uMatrixLocation, 1, false, modelMatrix, 0);
    GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6);
}
```
In the examples below, the translation, rotation, and scaling matrices are assigned to the model matrix in onSurfaceChanged.
The original image:
Multiplying by the identity matrix leaves it unchanged:
```java
Matrix.setIdentityM(modelMatrix, 0);
```
Scaling (here shrinking to half size):
```java
Matrix.setIdentityM(modelMatrix, 0);
Matrix.scaleM(modelMatrix, 0, 0.5f, 0.5f, 0);
```
Rotation (no projection or z offset is set, so the result still looks like a flat figure):
```java
Matrix.setIdentityM(modelMatrix, 0);
Matrix.rotateM(modelMatrix, 0, -60, 1, 0, 0);
```
Translation:
```java
Matrix.setIdentityM(modelMatrix, 0);
Matrix.translateM(modelMatrix, 0, 0.3f, 0.3f, 0);
```
4.2 Adding w for a 3D effect
```java
private final static int COORDS_PER_VERTEX = 4;

float[] TRIANGLE_COORDS = {
        //   x,     y,   z,   w,    r,    g,    b
         0.5f,  0.5f,  0,  5f,   1f, 0.5f, 0.5f,
        -0.5f, -0.5f,  0,  1f, 0.5f,   1f, 0.5f,
        -0.5f,  0.5f,  0,  1f, 0.5f, 0.5f,   1f,
         0.5f,  0.5f,  0,  5f,   1f, 0.5f, 0.5f,
         0.5f, -0.5f,  0,  5f, 0.5f, 0.5f,   1f,
        -0.5f, -0.5f,  0,  1f, 0.5f,   1f, 0.5f,
};
```
The effect is as follows:
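Why does this look 3D? The GPU divides gl_Position by its own w component, so a vertex such as (0.5, 0.5) with w = 5 ends up at a fifth of its coordinates, (0.1, 0.1), while a vertex with w = 1 keeps its coordinates; the vertices with the larger w are pulled toward the center, turning the square into a trapezoid. A quick plain-Java check of that divide (helper name is ours):

```java
public class PerspectiveDivide {
    // The divide the GPU performs on gl_Position: (x, y, z, w) -> (x/w, y/w, z/w)
    static float[] divide(float x, float y, float z, float w) {
        return new float[] { x / w, y / w, z / w };
    }

    public static void main(String[] args) {
        // Same model-space corner, different w
        System.out.println(divide(0.5f, 0.5f, 0f, 1f)[0]); // 0.5
        System.out.println(divide(0.5f, 0.5f, 0f, 5f)[0]); // 0.1
    }
}
```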
The complete Render code is as follows:
```java
public class MyRender implements GLSurfaceView.Renderer {
    private static final String TAG = "MyRender";
    private int programId;
    private final static int COORDS_PER_VERTEX = 4; // x, y, z, w
    private final static int COLOR_PER_VERTEX = 3;  // r, g, b
    private final static int BYTES_PER_FLOAT = 4;
    private static final String A_POSITION = "a_Position";
    private static final String A_COLOR = "a_Color";
    private final int STRIDE = (COORDS_PER_VERTEX + COLOR_PER_VERTEX) * BYTES_PER_FLOAT;
    private FloatBuffer mVertexData;

    public MyRender() {
        float[] TRIANGLE_COORDS = {
                //   x,     y,   z,   w,    r,    g,    b
                 0.2f,  0.2f,  0,  5f,   1f, 0.2f, 0.2f,
                -0.2f, -0.2f,  0,  1f, 0.2f,   1f, 0.2f,
                -0.5f,  0.5f,  0,  1f, 0.5f, 0.5f,   1f,
                 0.5f,  0.5f,  0,  5f,   1f, 0.5f, 0.5f,
                 0.5f, -0.5f,  0,  5f, 0.5f, 0.5f,   1f,
                -0.5f, -0.5f,  0,  1f, 0.5f,   1f, 0.5f,
        };
        mVertexData = ByteBuffer
                .allocateDirect(TRIANGLE_COORDS.length * BYTES_PER_FLOAT)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer()
                .put(TRIANGLE_COORDS);
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        Log.i(TAG, "onSurfaceCreated: curThread= " + Thread.currentThread());
        GLES20.glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
        String vertexCode = ShaderHelper.loadAsset(MyApplication.getContext().getResources(), "vertex_shader.glsl");
        String fragmentCode = ShaderHelper.loadAsset(MyApplication.getContext().getResources(), "fragment_shader.glsl");
        programId = ShaderHelper.loadProgram(vertexCode, fragmentCode);
        int aPosition = GLES20.glGetAttribLocation(programId, A_POSITION);
        Log.i(TAG, "drawFrame: aPosition=" + aPosition);
        mVertexData.position(0);
        GLES20.glVertexAttribPointer(aPosition,
                COORDS_PER_VERTEX,  // components per vertex position
                GLES20.GL_FLOAT,    // data type of each component
                false,
                STRIDE,             // bytes from one vertex to the next
                mVertexData);       // the vertex buffer
        // Enable the vertex attribute
        GLES20.glEnableVertexAttribArray(aPosition);
        int aColor = GLES20.glGetAttribLocation(programId, A_COLOR);
        mVertexData.position(COORDS_PER_VERTEX);
        GLES20.glVertexAttribPointer(aColor, COLOR_PER_VERTEX, GLES20.GL_FLOAT, false, STRIDE, mVertexData);
        GLES20.glEnableVertexAttribArray(aColor);
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        Log.i(TAG, "onSurfaceChanged: width=" + width + " h=" + height + " curThread= " + Thread.currentThread());
        GLES20.glViewport(0, 0, width, height);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        Log.i(TAG, "onDrawFrame: curThread= " + Thread.currentThread());
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
        GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, 6);
    }
}
```
But the w values here are hard-coded. Next we generate w automatically through a projection matrix, so it can change dynamically.
4.3 3D with a projection matrix
The first step is to modify the vertex shader: gl_Position is no longer taken directly from the glVertexAttribPointer input, but is produced by a matrix multiplication. The vertex shader code is as follows:
```glsl
// A uniform mat4 that does not change per vertex
uniform mat4 u_Matrix;
attribute vec4 a_Position;
attribute vec4 a_Color;
varying vec4 v_Color;

void main() {
    v_Color = a_Color;
    gl_Position = u_Matrix * a_Position;
}
```
Next we assign u_Matrix in the Render class. First define the four matrix variables and the uniform's location:
```java
private final float[] projectionMatrix = new float[16]; // 4x4 projection matrix
private final float[] modelMatrix = new float[16];      // 4x4 model matrix
private final float[] viewMatrix = new float[16];       // 4x4 view matrix
private final float[] mvpMatrix = new float[16];        // 4x4 MVP matrix
private int uMatrixLocation; // location of the uniform mat4 in the GLSL code
```
Then fill in these matrices in onSurfaceChanged:
```java
@Override
public void onSurfaceChanged(GL10 gl, int width, int height) {
    Log.i(TAG, "onSurfaceChanged: width=" + width + " h=" + height + " curThread= " + Thread.currentThread());
    GLES20.glViewport(0, 0, width, height);
    // Model matrix M: push the figure away from the camera, then tilt it about the x-axis
    Matrix.setIdentityM(modelMatrix, 0);
    Matrix.translateM(modelMatrix, 0, 0, 0, -2.5f);
    Matrix.rotateM(modelMatrix, 0, -60, 1, 0, 0);
    // View matrix V: eye at (0, 0, 1), looking at the origin, y-axis up
    Matrix.setLookAtM(viewMatrix, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0);
    // Projection matrix P: 60-degree vertical field of view
    Matrix.perspectiveM(projectionMatrix, 0, 60, (float) width / height, 1f, 200f);
    // MVP = P * V * M (multiplyMM must not write into one of its own inputs)
    float[] temp = new float[16];
    Matrix.multiplyMM(temp, 0, viewMatrix, 0, modelMatrix, 0);
    Matrix.multiplyMM(mvpMatrix, 0, projectionMatrix, 0, temp, 0);
}
```
Then upload it in onDrawFrame:
```java
GLES20.glUniformMatrix4fv(uMatrixLocation, 1, false, mvpMatrix, 0);
```
The effect is as follows:
5. References
- Computer Graphics Fundamentals (OpenGL Edition)
- OpenGL ES 3.0 Programming Guide
- OpenGL Programming Guide (the Red Book)
- OpenGL ES Application Development Practice Guide
- OpenGL matrix transformations (Songho; highly recommended)
- Why homogeneous coordinates?
- Android OpenGL ES (2): Orthographic projection
- OpenGL coordinate transformations
- OpenGL coordinate system analysis (how a vertex goes from object coordinates to screen coordinates)
- OpenGL perspective projection: model-view matrix / projection matrix / observer matrix
6. Takeaways
This article covers a lot of ground. The most important thing is to have a clear understanding of the concepts; on the code side, each platform provides convenient wrappers, such as Android's Matrix class.
- Reviewed matrix basics (identity matrix, orthographic matrix, projection matrix, multiplication rules, and so on) and left- and right-handed coordinate systems.
- Learned many new concepts: world coordinates, homogeneous coordinates, viewport coordinates, normalized device coordinates, device coordinates, and the view, model, projection, and viewport transformations. It took several books to pin down the basic concepts.
- Got to know the API of Android's Matrix class.
- Practiced translation, scaling, rotation, and a 3D effect.
Thank you for reading
Next we will practice OpenGL texture mapping. Welcome to follow the official account "Audio and Video Development Journey" to learn and grow together.