Hi, I'm Kenney, a programmer. My previous article in the OpenGL 3D Rendering Technology series introduced the normal texture, which stores pixel-level normal vector information in a texture as color values. This article discusses the transformations related to those normal vectors.

The normal texture

A normal texture stores normal vectors in a texture map in the form of color values. In general, a model already has normal vectors even without a normal texture, so why do we need one? The model's normal data corresponds one-to-one with its vertices, and a triangle is a flat plane, so the normal used inside a triangle comes from its vertex normals. If you look closely, you will find that for a single triangle the three vertex normals are the same (note the case of vertex reuse: one vertex position may correspond to several normal vectors, depending on which triangle the vertex belongs to). As a result, the normal is the same everywhere on the triangle, and the lighting calculation produces a very smooth-looking surface. Sometimes, however, we want the look of a rough surface, such as this classic brick wall:

If a triangle is a flat plane with the same normal everywhere, how do we make it look rough? We could use a huge number of tiny triangles and point their normals in different directions, but that has an obvious problem: the vertex count grows enormously. So clever people invented the normal texture: store the normal vectors in a texture, sample them in the fragment shader, and you get pixel-level normals, which makes a pixel-level bumpy effect easy to achieve.

The following image shows how normal textures can help us get pixel-level normal vectors:

A normal texture stores normal vectors as color values in the texture map, that is, (x,y,z) is stored as (r,g,b); the color values are then read back as the normal (x,y,z). But it is not that simple. Take the cube above: we have only one normal texture, and at render time we want the same bump effect on all six faces. If the color value read from the texture were used directly as the normal vector, then for corresponding positions on different faces the sampled value would be identical, so those points would all get the same normal vector, as shown on the left below. That is obviously not what we want; we want the effect on the right below:

Of course, we could also make six normal maps, one for each face, which would solve the problem, but it brings drawbacks such as extra resource consumption, so this is generally not done nowadays. With only one normal texture, how do we use it correctly on different faces? The key point is that the normal vector data stored in it is not in model coordinates, so it cannot be used directly after being read out.

Tangent space

To make the normal vectors usable correctly on different surfaces, the concept of tangent space is introduced. Specifically, a tangent space is defined on each triangle. It is also called the TBN coordinate system: T is Tangent, B is Bitangent, N is Normal. T points in the same direction as the U axis of the texture coordinate system, and B points in the same direction as the V axis. After an RGB color value is read from the normal map, it is remapped from [0, 1] to [-1, 1], giving the xyz of the normal in the corresponding tangent space:

Tip: the normal vectors stored in a normal texture are in tangent space, since the model's actual orientation cannot be known in advance; they generally point roughly along the Z axis of the tangent space. The Z component, i.e. the B component of RGB, is therefore usually much larger than the other two, which is why normal textures tend to look blue.
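Written as a formula, the remap mentioned above is simply (the standard convention, assuming color components in [0, 1]):

$$\vec{n}_{tangent} = 2\,\vec{c}_{rgb} - 1$$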

So here is the question: the RGB values in the normal map represent xyz in tangent space, i.e. they are normal vectors in tangent space, while the light position and the model vertices are in the model coordinate system. How can we use these tangent-space normals? We have to convert everything into the same coordinate system. This requires a bit of linear algebra: if you know how the basis vectors of one coordinate system are expressed in another coordinate system, then those basis vectors can be used to transform coordinates expressed in the first system into the second. It sounds a bit convoluted; here is some learning material: zhuanlan.zhihu.com/p/69069042

In our case, T, B and N are a set of basis vectors expressed in the model coordinate system, so using TBN to transform the normal vector read from the normal texture gives the normal vector in the model coordinate system. Once everything is in the same coordinate system, it can be used further, for example transformed on into the world coordinate system.
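As a minimal sketch of this change of basis (my own notation, with T, B and N written as column vectors expressed in model coordinates):

$$\vec{n}_{model} = \begin{bmatrix} T & B & N \end{bmatrix} \vec{n}_{tangent}$$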

So how do we compute the three vectors T, B and N? There are many articles online, and most of them derive it like this: since T points along U and B points along V, construct two equations that transform texture coordinates into model-space vertex coordinates, and solve them for T and B. This article does much the same, but such a direct derivation left me with some doubts when I first read it (maybe that is just me), so I want to share my own understanding.

learnopengl-cn.github.io/05%20Advanc… and many other articles online derive it this way, but first let's look at two pictures:

T points in the same direction as U, and B points in the same direction as V. From these two figures we can obtain:
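In the usual LearnOpenGL notation, with $E_1$ and $E_2$ the triangle's edge vectors in model coordinates and $\Delta U$, $\Delta V$ the corresponding texture-coordinate differences, the two formulas are:

$$E_1 = \Delta U_1\,T + \Delta V_1\,B$$
$$E_2 = \Delta U_2\,T + \Delta V_2\,B$$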

This step is actually fairly easy to understand. E is an edge between two vertices of the triangle, expressed as a vector in the model coordinate system. The meaning of the two formulas is exactly the linear algebra mentioned above: they take a representation in texture coordinates and, through T and B, turn it into a representation in the model coordinate system. In other words, the role of T and B here is to transform texture coordinates into model coordinates. Now recall what we wanted in the first place: a TBN that transforms a normal vector in tangent space into a normal vector in model coordinates. Yet when solving for T and B we set up the constraints with equations that transform texture coordinates into model coordinates; it looks as if we are solving for a T and B that turn texture coordinates into model coordinates.

Why, then, does obtaining T and B this way, together with N, give us the ability to convert a normal vector in tangent space into a normal vector in model coordinates?

I have not seen any article that makes this clear, so here I try to explain it as I understand it. Let me restate our goal: find the three basis vectors of the TBN coordinate system, which can be used to transform normal vectors in tangent space into normal vectors in the model coordinate system.

My line of thought: for a triangle of the model, align the normal texture onto the triangle according to its texture coordinates, so that the T axis lines up with the U axis and the B axis with the V axis; you then get something like the third figure above. At this point the TBN coordinate system is fixed, and it is exactly what we are looking for; we just do not yet know the values of its three basis vectors. Now pull the TBN coordinate system and the normal texture straight off the model, preserving their shape, to get the rightmost picture. The transformation that takes the yellow triangle and the green normal vector from the rightmost figure back into the third figure is precisely the transformation from tangent space to the model coordinate system, and the transformation method is the linear algebra mentioned above: coordinates in the TBN system are transformed using the values of the three TBN basis vectors expressed in the model coordinate system.

Now, to find TBN, note the following: we do not know the transformed coordinates of the green normal vector, because that is exactly what we ultimately want to obtain through TBN; but we do know the transformed coordinates of the yellow triangle, namely the triangle's vertex positions in model coordinates. So we can set up the transformation equations by transforming the triangle in TBN coordinates into the triangle in model coordinates, which gives:
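Written with the vertex positions $P_0$, $P_1$, $P_2$ in model coordinates (so that $E_1 = P_1 - P_0$ and $E_2 = P_2 - P_0$), these are the same two equations as above:

$$P_1 - P_0 = \Delta U_1\,T + \Delta V_1\,B$$
$$P_2 - P_0 = \Delta U_2\,T + \Delta V_2\,B$$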

T and B can be solved by using these two expressions:
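For reference, the standard LearnOpenGL result in matrix form is:

$$\begin{bmatrix} T_x & T_y & T_z \\ B_x & B_y & B_z \end{bmatrix} = \frac{1}{\Delta U_1 \Delta V_2 - \Delta U_2 \Delta V_1} \begin{bmatrix} \Delta V_2 & -\Delta V_1 \\ -\Delta U_2 & \Delta U_1 \end{bmatrix} \begin{bmatrix} E_{1x} & E_{1y} & E_{1z} \\ E_{2x} & E_{2y} & E_{2z} \end{bmatrix}$$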

The solution process is not complicated, just simple linear algebra; the LearnOpenGL steps are fairly clear, so I will not expand on them here. Once you have T and B, put them together with the model's normal vector N and you have the TBN matrix. Done.
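As a concrete illustration, here is a minimal CPU-side sketch of this computation in C++ with glm (the function name computeTB and the use of glm are my own choices for illustration, not taken from any particular engine):

#include <glm/glm.hpp>

// Compute the tangent T and bitangent B of one triangle from its model-space
// positions p0..p2 and texture coordinates uv0..uv2, following the equations above.
void computeTB(const glm::vec3& p0, const glm::vec3& p1, const glm::vec3& p2,
               const glm::vec2& uv0, const glm::vec2& uv1, const glm::vec2& uv2,
               glm::vec3& T, glm::vec3& B)
{
    // Edge vectors E1, E2 in model space and the texture-coordinate deltas.
    glm::vec3 E1 = p1 - p0;
    glm::vec3 E2 = p2 - p0;
    glm::vec2 dUV1 = uv1 - uv0;
    glm::vec2 dUV2 = uv2 - uv0;

    // 1 / (dU1 * dV2 - dU2 * dV1): the determinant factor from the matrix solution.
    float f = 1.0f / (dUV1.x * dUV2.y - dUV2.x * dUV1.y);

    T = glm::normalize(f * ( dUV2.y * E1 - dUV1.y * E2));
    B = glm::normalize(f * (-dUV2.x * E1 + dUV1.x * E2));
}

Together with the triangle's normal N, these give the three columns of the TBN matrix.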

Although the final formula is not too complicated, evaluating it for every fragment during lighting would be expensive, since the fragment shader runs very frequently. There are generally two optimization strategies: one is to compute T and B in advance, the other is to move the lighting calculation into tangent space.

Calculate TB in advance

If you look closely at the derivation, T and B are computed purely from data already in the model (vertex positions and texture coordinates), so the tangent data can be precomputed and stored in the model. The glTF format supports this:

Name: TANGENT
Accessor Type(s): "VEC4"
Component Type(s): 5126 (FLOAT)
Description: XYZW vertex tangents where the w component is a sign value (-1 or +1) indicating handedness of the tangent basis

Source: github.com/KhronosGrou… If you look at some of the official glTF sample models, you can also find fields like this:

...
"meshes" : [
  {
    "name" : "TwoSidedPlane",
    "primitives" : [
      {
        "attributes" : {
          "NORMAL" : 2,
          "POSITION" : 1,
          "TANGENT" : 3,
          "TEXCOORD_0" : 4
        },
        "indices" : 0,
        "material" : 0,
        "mode" : 4
      }
    ]
  }
]
...

Note that glTF only supports storing the tangent, not both the tangent and the bitangent. I guess the reason is that the bitangent can be obtained from the cross product of the normal and the tangent, so it does not need to be saved; storing it would save the cross product at run time, but it would also make the model larger.
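For example, a minimal sketch of that reconstruction in C++ with glm (the cross-product-times-w convention is the one described in the glTF specification; the function name is mine):

#include <glm/glm.hpp>

// Rebuild the bitangent from a glTF NORMAL (vec3) and TANGENT (vec4),
// where tangent.w is the handedness sign (-1 or +1).
glm::vec3 computeBitangent(const glm::vec3& normal, const glm::vec4& tangent)
{
    return glm::cross(normal, glm::vec3(tangent)) * tangent.w;
}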

Convert lighting calculations to tangent space

Given that the normal vector has to be sampled from the normal texture, the simplest and most direct method is: when computing the lighting of a fragment in the fragment shader, sample the normal from the normal texture, transform it into the model coordinate system with the TBN matrix, and finally transform it with the model matrix into the world coordinate system for the lighting calculation. However, because the fragment shader runs a huge number of times, this means a huge number of TBN matrix transformations. A commonly used optimization is to move the lighting calculation into tangent space instead:

// vertex shader
...
out vec3 v_tangentLightPos;
out vec3 v_tangentViewPos;
out vec3 v_tangentFragPos;

void main() {
  ...
  mat3 normalMatrix = transpose(inverse(mat3(u_modelMatrix)));
  // Transform the three TBN axes into the world coordinate system
  mat3 worldTBNMatrix = mat3(
    normalize(normalMatrix * tangent),
    normalize(normalMatrix * bitangent),
    normalize(normalMatrix * a_normal.xyz)
  );
  // Transform the vertex position from model coordinates to world coordinates
  vec3 fragPos = (u_modelMatrix * a_position).xyz;
  // Inverse of the world-space TBN matrix (a transpose suffices because it is orthogonal)
  mat3 inversedWorldTBNMatrix = transpose(worldTBNMatrix);
  // Transform the light position from the world coordinate system into tangent space
  v_tangentLightPos = inversedWorldTBNMatrix * u_lightPos;
  // Transform the view position from the world coordinate system into tangent space
  v_tangentViewPos  = inversedWorldTBNMatrix * u_viewPos;
  // Transform the vertex position from world coordinates into tangent space
  v_tangentFragPos = inversedWorldTBNMatrix * fragPos;
  ...
}

// fragment shader
...
in vec3 v_tangentLightPos;
in vec3 v_tangentViewPos;
in vec3 v_tangentFragPos;
...
void main() {
  ...
  // Read the normal from the normal texture, remap to -1~1 and normalize
  vec3 normal = texture(normalTexture, v_texCoord0).rgb;
  normal = normalize(normal * 2.0 - 1.0);
  // Light direction
  vec3 lightDir = normalize(v_tangentLightPos - v_tangentFragPos);
  // View direction
  vec3 viewDir = normalize(v_tangentViewPos - v_tangentFragPos);
  // Direction of reflected light
  vec3 reflectDir = reflect(-lightDir, normal);
  // Lighting calculation
  ...
}

In the vertex shader, normalMatrix is the transpose of the inverse of the model matrix. This matrix solves the problem that, in some cases, the model's normals are no longer perpendicular to the triangle's surface when they are transformed together with the vertices.

Here’s a simple example:

Assume the triangle and normal vector shown in the left figure, and apply a model-matrix transformation that scales x by 0.5. After scaling, the normal vector is n = (0.5, 1, 1) and A'C = (-0.5, 0, 1); clearly n is no longer perpendicular to A'C, i.e. no longer perpendicular to the triangle. Why does the transpose of the inverse of the model matrix solve this problem? Here is an article for those interested: zhuanlan.zhihu.com/p/72734738
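For completeness, a one-line sketch of the usual argument (my own summary, not taken from the linked article): if $\vec{t}$ is a tangent of the surface and $\vec{n}$ its normal, then $\vec{n}^{T}\vec{t} = 0$; after the vertices are transformed by the model matrix $M$ the tangent becomes $M\vec{t}$, and requiring $(G\vec{n})^{T}(M\vec{t}) = \vec{n}^{T}G^{T}M\,\vec{t} = 0$ for every such tangent leads to $G^{T}M = I$, i.e. $G = (M^{-1})^{T}$, which is exactly the transpose(inverse(...)) used in the shader.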

We use normalMatrix to transform tangent and bitangent into the world coordinate system in the same way as a_normal. The matrix built from the transformed tangent, bitangent and a_normal is called worldTBNMatrix; it now transforms from tangent space to world coordinates instead of from tangent space to model coordinates. Why? Recall the linear algebra from earlier in this article: tangent, bitangent and a_normal are now expressed in world coordinates. Our goal is to compute lighting in tangent space, and we now have a matrix that converts from tangent space to the world coordinate system, so its inverse converts from the world coordinate system to tangent space. Because worldTBNMatrix is composed of three pairwise perpendicular basis vectors, it is an orthogonal matrix, and basic linear algebra tells us that the inverse of an orthogonal matrix equals its transpose, so the inverse can be obtained by a simple transpose, saving an expensive inverse computation.

v_tangentLightPos, v_tangentViewPos and v_tangentFragPos are passed to the fragment shader as varyings. Since v_tangentLightPos and v_tangentViewPos are the same for all three vertices of a triangle, interpolation leaves them unchanged, while v_tangentFragPos is interpolated across the three vertices, giving a different position for each fragment.

The above process is a little bit more complicated, so let me draw a picture to show it:

Well, that’s all for this article. Thanks for reading!

Reference:

learnopengl-cn.github.io/05%20Advanc…

zhuanlan.zhihu.com/p/69069042

zhuanlan.zhihu.com/p/72734738