Preface 📚

I originally intended to jump straight into GLSL basic syntax or 3D graphics content, but then it occurred to me that both the WebGL and OpenGL systems share a very important concept – the coordinate system 🤔 The previous article already mentioned several coordinate systems, such as the texture coordinate system and the WebGL coordinate system, but those are fairly general concepts: every object entering the WebGL system follows the standards of those coordinate systems 📐

But once we enter the three-dimensional world, we face more complex graphics spaces, more complex transformations (which still reduce to the basic translation/rotation/scaling 🪐), and more complex graphics data. Displaying a graphic also involves using transformation matrices to convert coordinates from one coordinate system to another 🚀

Coordinate system 📦

Different coordinate systems are also called the corresponding spaces. First, let’s see which coordinate systems (spaces) a vertex passes through before it is finally turned into a fragment:

  • Local Space/Object Space
  • World Space
  • View Space/Eye Space
  • Clip Space
  • Screen Space

Vertex coordinates start in local space, where they are called local coordinates, and are then transformed step by step from local space to screen space in the order above 💻

Moving coordinates from one space to the next requires a transformation matrix to complete the process:

  • Model Matrix
  • View Matrix
  • Projection Matrix

The transformation process is shown in the figure below:

❗️ This section is based on OpenGL introduction ❗️

Local space 🏡

Local space is the coordinate space the object itself lives in, that is, where the object was originally defined. In local space the object sits at the origin, and every adjustment is made relative to the object itself. For example: if you have ever assembled a mini 4WD toy car, you did not need to care about the latitude and longitude of each part, only where on the car the motor component should be installed 🌰

World Space 🌎

World space is the space a virtual scene lives in, such as a game scene! It refers to a vertex’s coordinates relative to the scene. When all objects are first imported into the program, they may all be crowded around the world origin (0.0, 0.0, 0.0), so we need to define a position for each object to place them sensibly in a larger scene 👍🏻 The job of the model matrix is to place objects at different positions in the scene through operations such as translation, scaling, and rotation
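To make the model matrix concrete, here is a minimal sketch with no libraries — column-major 4×4 matrices, as WebGL expects. The helper names (`translate`, `transformPoint`) are illustrative, not from any real API:

```javascript
// Build a column-major 4x4 translation matrix (translation in elements 12..14).
function translate(tx, ty, tz) {
  return [
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    tx, ty, tz, 1,
  ];
}

// Multiply a column-major 4x4 matrix by the point (x, y, z, 1).
function transformPoint(m, [x, y, z]) {
  return [
    m[0] * x + m[4] * y + m[8]  * z + m[12],
    m[1] * x + m[5] * y + m[9]  * z + m[13],
    m[2] * x + m[6] * y + m[10] * z + m[14],
  ];
}

// Move an object sitting at the local origin to (2, 0, -5) in the world.
const modelMatrix = translate(2, 0, -5);
console.log(transformPoint(modelMatrix, [0, 0, 0])); // → [ 2, 0, -5 ]
```

A full model matrix would also fold in rotation and scale factors, but translation alone already shows how a local coordinate becomes a world coordinate.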

Observation space 👀

What WebGL presents to the user on the screen is not everything placed in world space, but the scene as seen through the camera, which acts as the user’s eyes! Observation space is the space observed from the camera’s perspective, also known as camera space or eye space 🤩

It is a space with the camera position as the origin and the viewing direction along the -Z axis; a series of translation and rotation transformations is usually used to convert objects from world space into observation space 📦
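As a simplified sketch of the view matrix: for a camera that is only translated (no rotation), the view matrix is just the inverse translation — it moves the whole world so the camera ends up at the origin looking down -Z. Column-major as before; the helper names are illustrative:

```javascript
// View matrix for a translated-only camera: the inverse translation.
function viewForCameraAt(cx, cy, cz) {
  return [
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    -cx, -cy, -cz, 1, // inverse translation in the last column
  ];
}

// Multiply a column-major 4x4 matrix by the point (x, y, z, 1).
function transformPoint(m, [x, y, z]) {
  return [
    m[0] * x + m[4] * y + m[8]  * z + m[12],
    m[1] * x + m[5] * y + m[9]  * z + m[13],
    m[2] * x + m[6] * y + m[10] * z + m[14],
  ];
}

// Camera at (0, 0, 3): the world origin lands at z = -3 in view space,
// i.e. 3 units in front of the camera along its viewing direction.
console.log(transformPoint(viewForCameraAt(0, 0, 3), [0, 0, 0])); // → [ 0, 0, -3 ]
```

A real camera also rotates, which is why full implementations build the view matrix from a position plus a target and up vector (a "lookAt" construction).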

Clipping space ✂️

The camera also has a limited field of view: everything beyond it cannot be seen and has to be discarded ✁ After each vertex shader run, OpenGL expects the coordinates to fall within a specified range; coordinates beyond that range are clipped away, and only the remaining coordinates move on to the fragment shader stage. That is where the clipping space gets its name! The projection matrix transforms objects from observation space into clipping space 🤔
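The "specified range" can be sketched in one line: a clip-space coordinate (x, y, z, w) survives clipping only when each of x, y, z lies inside [-w, w]. The helper name is illustrative:

```javascript
// Does a clip-space coordinate fall inside the clip volume?
function insideClipVolume([x, y, z, w]) {
  return Math.abs(x) <= w && Math.abs(y) <= w && Math.abs(z) <= w;
}

console.log(insideClipVolume([0.5, -0.2, 0.9, 1.0])); // → true  (kept)
console.log(insideClipVolume([1.5, 0.0, 0.0, 1.0]));  // → false (clipped away)
```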

Research shows that the overlapping horizontal field of the two human eyes is about 124 degrees, narrowing to roughly a fifth of that — about 25 degrees — when concentrating (which is why a focused person can easily miss what happens around them), while comfortable monocular vision covers about 60 degrees 👁 Of course, the camera in OpenGL also has a certain range in the left/right and up/down directions; this range is called the FOV!

The blue W and H in the figure determine how far up, down, left, and right the camera can see. The viewing angle is often set to 90 degrees, and a near clipping plane and a far clipping plane are defined at the same time: objects closer than the near clipping plane or farther than the far clipping plane are removed, and everything between the two clipping planes is mapped onto the projection plane ✄

Let’s go back to the projection matrix. There are two kinds: the orthographic projection matrix and the perspective projection matrix (projection is the process of transforming coordinates within a certain range into the normalized device coordinate (NDC) system). The difference between them is obvious: perspective projection looks more three-dimensional and realistic (the “near objects look larger, far objects look smaller” rule that art teachers teach when you first learn to draw), while orthographic projection has no such effect. With orthographic projection there is no perspective: distant objects do not appear smaller, since every vertex keeps the same apparent distance from the observer. Of course, orthographic projection has its uses too, for example rendering 2D drawings in architectural or engineering programs, where engineers do not want vertices to be distorted by perspective 👷
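Here is a sketch of the classic OpenGL-style perspective projection matrix (column-major, symmetric frustum). `fov` is the vertical field of view in radians; the function and variable names are illustrative:

```javascript
// Column-major perspective projection matrix for a symmetric frustum.
function perspective(fov, aspect, near, far) {
  const f = 1 / Math.tan(fov / 2);      // cotangent of half the field of view
  const nf = 1 / (near - far);
  return [
    f / aspect, 0, 0, 0,
    0, f, 0, 0,
    0, 0, (far + near) * nf, -1,        // -1 copies -z into w for the divide
    0, 0, 2 * far * near * nf, 0,
  ];
}

// Multiply a column-major 4x4 matrix by a 4D vector.
function transformVec4(m, [x, y, z, w]) {
  const out = [];
  for (let i = 0; i < 4; i++) {
    out[i] = m[i] * x + m[i + 4] * y + m[i + 8] * z + m[i + 12] * w;
  }
  return out;
}

// A point sitting on the near clipping plane ends up at NDC z = -1
// after the perspective division by w.
const proj = perspective(Math.PI / 2, 1, 0.1, 100);
const clip = transformVec4(proj, [0, 0, -0.1, 1]);
console.log(clip[2] / clip[3]); // ≈ -1
```

The key trick is the -1 in the third column: it copies the (negated) view-space depth into w, so that dividing by w later shrinks distant objects — exactly the “near large, far small” effect.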

Create a transformation matrix for each of the steps above: the model matrix, the view matrix, and the projection matrix ~ a vertex coordinate is then transformed into a clip coordinate according to the following procedure:
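The procedure is the standard matrix chain (as presented in LearnOpenGL), read right to left — the local coordinate is multiplied by the model, view, and projection matrices in turn:

V_clip = M_projection · M_view · M_model · V_local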


The output of the vertex shader requires all vertices to be in clip space, which is what the transformation matrices accomplish. OpenGL then performs perspective division on the clip coordinates to transform them into normalized device coordinates. Finally, OpenGL uses the viewport parameters to map the normalized device coordinates to screen coordinates, each of which corresponds to a point on the screen. This process is called the viewport transformation!

Perspective division is the process of transforming the 4D clip-space coordinates into 3D normalized device coordinates
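The division itself is tiny — OpenGL performs it for us after the vertex shader, and it is shown here only to illustrate the step:

```javascript
// Dividing a 4D clip coordinate by its w component yields the
// 3D normalized device coordinate.
function perspectiveDivide([x, y, z, w]) {
  return [x / w, y / w, z / w];
}

console.log(perspectiveDivide([2, 4, -6, 2])); // → [ 1, 2, -3 ]
```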

Viewport space 🎥

This space can be simply understood as the application window: the pixels on the projection plane are mapped one-to-one onto the window and displayed there. OpenGL completes this step for us!
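The mapping OpenGL performs here can be sketched in a few lines: NDC x and y in [-1, 1] are stretched onto a viewport of a given origin and size — the same mapping that `gl.viewport(x, y, width, height)` configures. The helper name is illustrative:

```javascript
// Map an NDC (x, y) pair to window pixel coordinates.
function ndcToWindow([ndcX, ndcY], x, y, width, height) {
  return [
    x + ((ndcX + 1) / 2) * width,
    y + ((ndcY + 1) / 2) * height,
  ];
}

// The center of NDC lands at the center of a 640×480 viewport:
console.log(ndcToWindow([0, 0], 0, 0, 640, 480)); // → [ 320, 240 ]
```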

Thanks to LearnOpenGL, from which the figures above are taken

Review ⏎

Now that we know about the coordinate systems above, recall that the example in our previous article seemed to involve only local space and viewport space. In fact, by the time the vertices came out of the vertex shader, they were already normalized device coordinates (NDC). In other words, the flow of our previous example looked like this:

This explains the scene from before: the triangle we drew on a rectangular canvas became distorted after rotation 🧐 Since we skipped so many coordinate conversion steps, could we work on each of those conversions to solve the problems mentioned above? The answer is of course yes!

That is to say, we need to work on two links: world space → observation space, and observation space → clip space. As the figure above also shows, the shape is then scaled uniformly, so that it can be displayed on the screen just as we imagined 😁

That’s it for the interesting coordinate systems. Welcome to follow the public account: Refactor — more fun and useful articles will be published later and shared with you. Thanks for reading 🔚

For some reason, a “URI malformed” error appeared whenever a period was entered in the world-coordinates section. The chapter was due to be published on the public account last Friday, May 15th, but for this reason it was delayed until today. As you can see, the world-coordinates section contains no full stop 😁 HHHHH amazing!