Title: OpenGL (7): Coordinate Systems
date: 2020-07-11 10:34
Category: graphics
tags: opengl
The project code can be found in 6.1.coordinate_systems.
0. Foreword
In the previous article we used matrix transformations to move vertices around and make objects dynamic. What OpenGL actually expects, however, is that every time the vertex shader runs on the vertex data, all visible vertices end up as Normalized Device Coordinates: the x, y, and z coordinates of each vertex should lie between -1.0 and 1.0. Vertices outside this range are not visible.
Coordinate processing works roughly as follows:
- Define a range of coordinates yourself.
- Transform these coordinates to normalized device coordinates in the vertex shader.
- Pass the converted normalized device coordinates to the rasterizer.
- The rasterizer converts them into two-dimensional coordinates, or pixels, on the screen.
This article focuses on the coordinate systems and coordinate spaces needed in the process above. Before an object's vertices finally become screen coordinates, they pass through several intermediate coordinate systems, which gives us several important spaces:
- Local Space (or Object Space)
- World Space
- View Space (or Eye Space)
- Clip Space
- Screen Space
1. Overview
To transform coordinates from one space to the next, you need several transformation matrices:
- Model
- View
- Projection
Vertex coordinates start out as local coordinates in local space. The model transformation turns them into world coordinates; the view transformation turns those into view coordinates; the projection transformation turns those into clip coordinates; and finally the viewport transformation maps them to screen coordinates for display on the screen.
The flow is as follows:
- Local coordinates are coordinates relative to the local origin, i.e. the object's own coordinates.
- Local coordinates are converted into world-space coordinates, placed relative to the global origin of the world, alongside all other objects. (Model transformation)
- The world coordinates are transformed into view-space coordinates, so that each coordinate is seen from the camera or observer's point of view. (View transformation)
- Once the coordinates arrive in view space, they are projected to clip coordinates, which are processed into the -1.0 to 1.0 range and determine which vertices will appear on the screen. (Projection transformation)
- Finally, the clip coordinates are converted to screen coordinates by a viewport transform, which maps coordinates in the -1.0 to 1.0 range to the range defined by the glViewport function. The resulting coordinates are sent to the rasterizer, which turns them into the familiar fragments. (Viewport transformation)
2. Local space
Local space is the object's own coordinate space: if you model a character, the model is built around an origin you choose, such as (0, 0, 0), even though the object will eventually appear at a different location in the world. All the vertices of a model are defined in this local space, not relative to the world coordinate system.
3. World space
World space is where objects are actually placed: the local-space coordinates of an object are transformed into world space by the **Model Matrix**. This matrix can translate, scale, and rotate an object into its proper position and orientation.
4. View space
View space is also called observation space, or the OpenGL camera. It is the result of converting world-space coordinates into coordinates as seen from the user's point of view: the space viewed from the camera's perspective. This conversion is accomplished by a combination of translations and rotations that place objects in front of the camera. As before, these operations are expressed as matrices, and their combined result is stored in a View Matrix.
5. Clip space
Definition: at the end of a vertex shader run, OpenGL expects coordinates to fall within a certain range; any point outside this range is clipped. Clipped coordinates are discarded, and the remaining coordinates become the fragments visible on the screen. This is where clip space gets its name.
Normalized device coordinates range from **-1.0 to 1.0**, which is obviously not a convenient range to work in directly. Instead, we specify our own coordinate set and transform it back to normalized device coordinates, as OpenGL expects.
To transform vertex coordinates from view space to clip space, we define a Projection Matrix, which specifies a range of coordinates.
For example, with a range of -1000 to 1000 on each dimension, the projection matrix transforms coordinates within that range into normalized device coordinates (-1.0 to 1.0). Any coordinate outside this range will not fall within -1.0 to 1.0 after the mapping and is therefore clipped, hence the name.
Example: the coordinate (1239, 299, 22) would not be visible, because its x component is outside the specified range; during conversion it maps to a normalized device coordinate greater than 1.0 and is clipped out.
Frustum: the viewing box created by the projection matrix; every coordinate inside the frustum will eventually appear on the user's screen.
Projection: the process of converting a specified range of coordinates into normalized device coordinates.
Perspective division: once all vertices are transformed into clip space, the x, y, and z components of each position vector are divided by the vector's homogeneous w component. This converts 4D clip-space coordinates into 3D normalized device coordinates, and it is performed automatically at the end of the vertex shader run.
The projection matrix that transforms view coordinates into clip coordinates comes in two forms, each defining its own frustum: the Orthographic Projection Matrix and the Perspective Projection Matrix, introduced below.
6. Orthographic projection
An orthographic projection matrix defines a cube-like frustum box that delimits the clip space; everything outside it is clipped. Creating an orthographic projection matrix requires specifying the width, height, and length of the visible frustum.
After the orthographic projection matrix transforms coordinates into clip space, all coordinates inside the frustum remain unclipped.
As shown above, the visible coordinates are defined by the width, height, near plane, and far plane. Any coordinate in front of the near plane or behind the far plane is clipped out.
The orthographic projection maps all coordinates inside the frustum directly to normalized device coordinates, because it leaves the w component of each vector unchanged: if w equals 1.0, perspective division does not change the coordinates.
Create an orthographic projection matrix using GLM's built-in function `glm::ortho`:

```cpp
glm::ortho(0.0f, 800.0f, 0.0f, 600.0f, 0.1f, 100.0f);
```
Parameter Description:
The first two parameters specify the left and right coordinates of the frustum, the third and fourth specify its bottom and top, and the fifth and sixth define the distances of the near plane (0.1f) and the far plane (100.0f).
Note: the orthographic projection matrix maps coordinates directly to the 2D plane of the screen, but in practice it produces unrealistic results because it ignores perspective. That is what perspective, and the perspective matrix, address next.
7. Perspective projection
The concept of perspective: in real life, things look smaller the further away they are from you. This effect is called perspective.
The perspective effect is like looking at an infinite stretch of highway or railroad:
As shown in the figure above, the two rails appear to converge in the distance. This is exactly the effect that perspective projection needs to simulate, using a perspective projection matrix.
A perspective projection matrix:
- Maps the given frustum range to clip space.
- Modifies the w value of each vertex coordinate so that the further a vertex is from the observer, the larger its w component becomes (during the mapping, coordinates are divided by w; the larger the w, i.e. the farther away, the smaller it appears visually).
- Transforms coordinates into clip space in the range -w to w (anything outside that range is clipped).

Each component of a vertex coordinate is divided by its w component, so vertex coordinates shrink the further they are from the observer. This operation is what produces the perspective effect.
Create a perspective projection matrix using GLM's built-in function:

```cpp
glm::mat4 proj = glm::perspective(glm::radians(45.0f), (float)width / (float)height, 0.1f, 100.0f);
```
What `glm::perspective` does is create a large frustum that defines the visible space. Anything outside the frustum will not end up in the clip volume and will be clipped.
The first parameter defines the fov value, an angle that represents the field of view. The second sets the aspect ratio, the viewport width divided by its height. The third and fourth set the near and far planes of the frustum: 0.1f for the near distance and 100.0f for the far distance. All vertices between the near and far planes and inside the frustum are rendered.
Note: if you set the near value of the perspective matrix too high (e.g. 10.0f), OpenGL will clip out coordinates close to the camera (between 0.0f and 10.0f). One result is the familiar see-through effect when objects get too close.
3D renderings of perspective and orthographic projection are shown below:
Summary: with perspective projection, distant vertices look smaller, whereas with orthographic projection every vertex appears at the same distance from the observer.
8. Combining the transformation matrices
With the concepts above introduced: drawing an object goes through multiple transformation matrices to achieve a relatively realistic 3D effect. A vertex coordinate multiplied in turn by the model matrix, view matrix, and projection matrix becomes a clip coordinate. The final vertex is assigned to gl_Position in the vertex shader; through this built-in variable OpenGL knows to automatically perform perspective division and clipping.
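The chain of transformations can be written as a single matrix product, read from right to left:

```latex
V_{clip} = M_{projection} \cdot M_{view} \cdot M_{model} \cdot V_{local}
```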
Viewport transformation: the vertex shader's output must be in clip space, which the matrix transformations above already guarantee. OpenGL then performs perspective division on the clip coordinates to convert them to normalized device coordinates, and finally uses the parameters given to glViewport to map the normalized device coordinates to screen coordinates, each associated with a point on the screen, rendering the view to the screen.
9. Going 3D
To draw a 3D image, go through the following process:
- The first step is to create a model matrix containing the translation, scaling, and rotation operations that are applied to all the object's vertices to transform them into world space.
Define a model matrix, again using GLM's built-in functions:

```cpp
glm::mat4 model = glm::mat4(1.0f); // start from the identity matrix
model = glm::rotate(model, glm::radians(55.0f), glm::vec3(1.0f, 0.0f, 0.0f));
```
The vertex coordinates are then mapped to world space by multiplying the vertex coordinates by the model matrix.
- Next, create a view matrix. Why? The operations above only map the object into world space; there is no guarantee it sits anywhere we can see it. The view matrix is created to bring the object into our point of view and make it visible.
How do we make the object visible? We want to move back slightly in the scene so the object comes into view (in world space, we are at the origin (0, 0, 0)).
To move around the scene, keep the following in mind:
Moving the camera backwards is equivalent to moving the entire scene forward.
That is exactly what the view matrix does: we move the scene in the direction opposite to where we want the camera to be. We want the camera to move backwards, and because OpenGL uses a right-handed coordinate system, that means the camera moves along the positive z axis. We achieve this by translating the scene along the negative z axis, which looks and feels like moving backwards.
Don't forget that the positive z axis points from the screen towards you: if we want the camera to move backwards, it moves along the positive z axis. Imagine your screen centred at the origin of the three axes, with the positive z axis passing through the screen towards you.
Right-handed system: OpenGL uses a right-handed coordinate system, which basically means that positive x points to the right, positive y points up, and positive z points towards you, out of the screen.
Right-handed coordinate system in practice:
- Stretch your right arm along the positive y axis, hand pointing up.
- Point your thumb to the right.
- Point your index finger up.
- Bend your middle finger downwards 90 degrees.
Create a view matrix with GLM's built-in function:

```cpp
glm::mat4 view = glm::mat4(1.0f);
// Per the analysis above: to make the objects in the scene visible,
// the scene is translated backwards along the negative z axis.
view = glm::translate(view, glm::vec3(0.0f, 0.0f, -3.0f));
```
- Define a projection matrix to use perspective projection in the scene, again via a GLM built-in function:

```cpp
glm::mat4 projection = glm::perspective(glm::radians(45.0f), screenWidth / screenHeight, 0.1f, 100.0f);
```
- Once the transformation matrices are created, they need to be passed into the shader. Declare them as `uniform` global variables in the vertex shader to receive the matrices defined above, then multiply them with the vertex coordinates.
```glsl
#version 330 core
layout (location = 0) in vec3 aPos;
// ...
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main() {
    // Matrix multiplication is read from right to left.
    gl_Position = projection * view * model * vec4(aPos, 1.0);
    // ...
}
```
- Since the transformation matrices change frequently, pass them into the shader on every render so the vertex coordinates are updated each frame:

```cpp
// ...
int modelLoc = glGetUniformLocation(shader.ID, "model");
glUniformMatrix4fv(modelLoc, 1, GL_FALSE, glm::value_ptr(model));
// ...
```
References
- learnopengl-cn
- OpenGL Projection Matrix