Hello everyone, my name is Kenney and I'm a programmer. This is part of my OpenGL ES advanced series of articles. There is a companion project on GitHub, welcome to follow it: github.com/kenneycode/…
Today I will introduce the lighting principles of OpenGL, which are generally used in 3D rendering.
First, let's look at what gives an object its color. An object absorbs some components of the incoming light and reflects the rest; the reflected light is the color we see:
lightColor = (x, y, z); objectColor = (a, b, c); reflectedColor = (a*x, b*y, c*z)
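This component-wise product can be sketched in a few lines of plain Python (not shader code; the function name is my own for illustration):

```python
def reflected_color(light_color, object_color):
    """Component-wise product: the object reflects each channel of the
    light in proportion to how much of that channel it doesn't absorb."""
    return tuple(l * o for l, o in zip(light_color, object_color))

# White light on a yellowish object: the object's own color shows.
print(reflected_color((1.0, 1.0, 1.0), (1.0, 0.8, 0.2)))  # (1.0, 0.8, 0.2)

# Pure green light on a pure red object: nothing is reflected -> black.
print(reflected_color((0.0, 1.0, 0.0), (1.0, 0.0, 0.0)))  # (0.0, 0.0, 0.0)
```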
Here are some examples:
Some of these results seem to defy common sense. Take the first one: green light on a red object gives black? You might ask: in real life, wouldn't it look reddish or greenish?
When we see a red object in real life, it isn't reflecting only red light; it's reflecting mostly red light. If you could find an object that reflects nothing but pure red, a light source of extremely pure green, and an environment with no other light at all, then shining that pure green light on the object would show that it reflects no green light, and with no light reflected, it looks black.
So in a program, we can define objects that reflect only pure red, and light colors can be defined just as precisely.
The following explains how illumination is calculated. It involves OpenGL's matrix transformations; if you're unfamiliar with them, see my other article, "OpenGL ES Advanced: coordinate system and matrix transformation". Be sure to master matrix transformations first, or the lighting calculations below will be hard to follow.
To start, I build a scene without lighting: a yellow unit cube rendered at the origin of the world coordinate system:
Here we use a classic illumination model: the Phong illumination model.
The Phong illumination model consists of three main components:
Ambient Lighting: Even when it's dark there is usually still some light somewhere, like moonlight, so ambient lighting gives the object a base brightness and keeps it from being completely black.
Diffuse Lighting: Simulates the directional impact of a light source on an object, and is the most visually significant component of the Phong lighting model. The more directly a part of an object faces the light source, the brighter it is.
Specular Lighting: Simulates the bright spots that appear on shiny objects. The color of a specular highlight is closer to the color of the light than to the color of the object.
Let’s look at the calculation of ambient light:
// fragment shader
void main()
{
    float ambientStrength = 0.2;
    vec3 ambient = ambientStrength * lightColor;
    vec3 result = ambient * objectColor;
    fragColor = vec4(result, 1.0);
}
Simply multiply the ambient light color by the object color, usually scaled by an intensity factor. Here is the result with intensity 0.2 versus 0.5:
Next comes diffuse lighting, which is a bit more complicated because it takes the orientation of the surface into account: the more perpendicular a surface is to the incoming light, the stronger the reflection. Computing this requires the surface's normal vector. When 3D software exports a model, it can usually export the normal vectors along with it; here, since I'm using a simple cube, the normals are written directly into the program.
First, compute the position of the shaded point (the fragment) in world coordinates:
// vertex shader
fragPos = vec3(model * vec4(aPos, 1.0));
Then compute the direction from the fragment to the light source:
// fragment shader
vec3 lightDir = normalize(lightPos - fragPos);
Then calculate the diffuse color:
// fragment shader
float diff = max(dot(norm, lightDir), 0.0);
vec3 diffuse = diff * lightColor;
To explain: as mentioned earlier, the more directly a part of the surface faces the light source, the brighter it is. The dot product of the unit normal vector and the light direction measures how directly the surface faces the light, because for unit vectors it equals the cosine of the angle between them.
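The diffuse factor above can be checked numerically; this is a plain-Python sketch of the shader's `max(dot(norm, lightDir), 0.0)`:

```python
import math

def diffuse_factor(normal, light_dir):
    """diff = max(dot(n, l), 0): for unit vectors the dot product is the
    cosine of the angle; clamping gives 0 for surfaces facing away."""
    d = sum(n * l for n, l in zip(normal, light_dir))
    return max(d, 0.0)

n = (0.0, 0.0, 1.0)  # surface normal pointing along +z
print(diffuse_factor(n, (0.0, 0.0, 1.0)))   # light head-on -> 1.0
l45 = (0.0, math.sin(math.radians(45)), math.cos(math.radians(45)))
print(round(diffuse_factor(n, l45), 4))     # 45 degrees -> cos 45 ~ 0.7071
print(diffuse_factor(n, (0.0, 0.0, -1.0)))  # light from behind -> 0.0
```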
Note that norm is not the raw normal value but the normal direction transformed into the world coordinate system. The raw normal is given in the model's own coordinates, while we compute lighting in world coordinates, so the following transformation is required:
// vertex shader
normal = mat3(transpose(inverse(model))) * aNormal;
// fragment shader
vec3 norm = normalize(normal);
Notice that a normal cannot simply be multiplied by the model matrix the way a vertex position can. A normal is a direction vector with no positional meaning, and parts of the model transform (non-uniform scaling in particular) would tilt it away from the surface, so a different matrix is used: the transpose of the inverse of the model matrix. The derivation is somewhat involved and is omitted here.
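To see why the inverse transpose is needed, here is a small numeric sketch in plain Python (the matrices and vectors are made-up examples, not from the article's code). Under a non-uniform scale, transforming the normal with the model matrix breaks its perpendicularity to the surface, while the inverse-transpose matrix preserves it:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def mat_vec(m, v):  # m is a row-major 3x3 matrix
    return tuple(dot(row, v) for row in m)

# Non-uniform scale: stretch x by 2 (only the 3x3 part of the model matrix matters).
model = ((2.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))
# For a diagonal matrix, transpose(inverse(model)) is just the reciprocal diagonal.
normal_matrix = ((0.5, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))

tangent = (1.0, -1.0, 0.0)  # a vector lying in the surface
normal = (1.0, 1.0, 0.0)    # perpendicular to the tangent

t_world = mat_vec(model, tangent)
print(dot(mat_vec(model, normal), t_world))          # 3.0 -> no longer perpendicular!
print(dot(mat_vec(normal_matrix, normal), t_world))  # 0.0 -> still perpendicular
```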
Now let's look at the calculation of specular lighting:
First calculate the view direction:
// fragment shader
vec3 viewDir = normalize(viewPos - fragPos);
Then calculate the highlight color:
// fragment shader
vec3 reflectDir = reflect(-lightDir, norm);
float spec = pow(max(dot(viewDir, reflectDir), 0.0), 16.0);
vec3 specular = specularStrength * spec * lightColor;
A brief explanation: the first part is similar to the diffuse calculation. The dot product of the view direction and the reflection direction controls the brightness: the more directly you look along the direction of the reflected light, the brighter the highlight you see. Raising the result to a power strengthens and tightens this effect.
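The specular term can also be sketched in plain Python (illustrative names; `reflect` mirrors the GLSL built-in of the same name):

```python
import math

def reflect(incident, n):
    """GLSL-style reflect: i - 2 * dot(n, i) * n; n must be unit length."""
    d = sum(a * b for a, b in zip(n, incident))
    return tuple(i - 2.0 * d * ni for i, ni in zip(incident, n))

def specular_factor(view_dir, light_dir, normal, shininess=16):
    # The shader negates lightDir because reflect expects the incoming ray.
    reflect_dir = reflect(tuple(-c for c in light_dir), normal)
    return max(sum(a * b for a, b in zip(view_dir, reflect_dir)), 0.0) ** shininess

n = (0.0, 0.0, 1.0)
l = (0.0, 0.0, 1.0)  # light straight above the surface
print(specular_factor((0.0, 0.0, 1.0), l, n))  # looking along the reflection -> 1.0
# 30 degrees off the reflection direction: the power makes it fall off fast.
view_off = (0.0, math.sin(math.radians(30)), math.cos(math.radians(30)))
print(round(specular_factor(view_off, l, n, 16), 3))  # ~0.1
```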
So the final color is the sum of the three lighting components multiplied by the object color:
// fragment shader
vec3 result = (ambient + diffuse + specular) * texColor;
fragColor = vec4(result, 1.0);
Here’s the end result:
Note that the object color here comes from sampling a texture (texColor) rather than a fixed color.
Let's keep going and implement several types of light sources. What types are there? In real life we have bare bulbs, light strips, and lamps with shades; each of these lights the same object differently.
Let's start with directional (parallel) light, characterized by rays from the source that are approximately parallel to each other, like sunlight:
It's very simple: the light direction is the same everywhere:
// fragment shader
vec3 lightDir = normalize(-lightDirection);
Take a look at the effect:
A point light source, like a light bulb, shines in all directions, and its light attenuates with distance:
The implementation is also very simple: just multiply the original light values by an attenuation factor:
// fragment shader
float dist = length(lightPos - fragPos);
float kc = 0.1;
float kl = 0.05;
float kq = 0.05;
float attenuation = 1.0 / (kc + kl * dist + kq * (dist * dist));
ambient *= attenuation;
diffuse *= attenuation;
specular *= attenuation;
Take a look at the effect. Doesn't it feel like a light bulb?
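The attenuation formula from the shader can be evaluated directly to see how brightness falls off with distance (plain Python, using the same constants as the snippet above; note that with such a small kc, very close fragments are actually brightened):

```python
def attenuation(dist, kc=0.1, kl=0.05, kq=0.05):
    """Same formula as the shader: 1 / (kc + kl*d + kq*d^2)."""
    return 1.0 / (kc + kl * dist + kq * dist * dist)

# The quadratic term makes the falloff steepen with distance:
for d in (1.0, 2.0, 4.0, 8.0):
    print(d, round(attenuation(d), 3))  # d=1 -> 5.0, d=8 -> ~0.27
```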
Another type of light source, the spotlight, is like a light bulb with a lampshade added:
Its calculation is a modification of the point light:
// fragment shader
if (theta > 0.8) {
    ...
    float dist = length(lightPos - fragPos);
    float kc = 0.1;
    float kl = 0.1;
    float kq = 0.1;
    float attenuation = 1.0 / (kc + kl * dist + kq * (dist * dist));
    ambient *= attenuation;
    diffuse *= attenuation;
    specular *= attenuation;
    vec3 result = (ambient + diffuse + specular) * texColor;
    fragColor = vec4(result, 1.0);
} else {
    fragColor = vec4(ambient * texColor, 1.0);
}
To achieve the lampshade effect, we compute an angle term: theta is the cosine of the angle between the direction to the fragment and the spotlight's axis. When theta is above the cutoff the full lighting is applied; otherwise only ambient light is used. The cutoff value controls the opening angle of the "lampshade". Look at the effect:
Did you get the lampshade effect? But it looks a bit unnatural: the edge is too hard. And indeed, once the angle passes the cutoff, the diffuse and specular components vanish instantly, which is what makes the edge so abrupt.
To improve it, add a transition so the diffuse and specular components fade out gradually:
// fragment shader
float epsilon = 0.15;
float intensity = clamp((theta - 0.85) / epsilon, 0.0, 1.0);
diffuse *= intensity;
specular *= intensity;
Look at that: the edge is much softer now, isn't it?
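The soft edge comes from that clamp; evaluating it in plain Python (same constants as above) shows the intensity ramping smoothly from 0 to 1 across the rim of the cone:

```python
def spot_intensity(theta, outer=0.85, epsilon=0.15):
    """clamp((theta - outer) / epsilon, 0, 1); theta is the cosine of the
    angle between the fragment direction and the spotlight axis."""
    return min(max((theta - outer) / epsilon, 0.0), 1.0)

# Outside the cone -> 0, inside the inner cone -> 1, smooth ramp in between:
for theta in (0.80, 0.90, 0.95, 1.00):
    print(theta, round(spot_intensity(theta), 2))
```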
Now let's talk about normal mapping. What is normal mapping? Look at the brick wall above: does anything seem unnatural? Isn't it too smooth? Normal mapping stores the normal vector of each point in a texture. During rendering, the normal map is loaded as a texture and the normal vector for each point is sampled from it; these normals, varying around the surface's base normal, produce a bumpy effect.
Let's see what a normal map looks like:
The color information in it is actually normal vectors: although it is stored as RGBA, it does not express color; the vector components are simply encoded as color values.
The normal is now sampled from the normal texture instead of using one fixed normal per face:
// fragment shader
vec3 normal = texture(normalTex, texCoord).rgb;
normal = normalize(normal * 2.0 - 1.0);
By the way, if you've worked with normal maps before, you may have noticed that they always look bluish. The normals in a normal map are usually expressed in tangent space, where the z component of the normal is typically large. Since the x, y, z components are stored as the R, G, B channels, the large z (blue) component makes the texture look blue.
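The `normal * 2.0 - 1.0` decode from the shader, sketched in plain Python, also shows why the maps look blue:

```python
def decode_normal(rgb):
    """Map a sampled color from [0, 1] back to a direction in [-1, 1]
    (unnormalized here; the shader normalizes afterwards)."""
    return tuple(c * 2.0 - 1.0 for c in rgb)

# The "flat" tangent-space normal (0, 0, 1) is stored as the bluish
# color (0.5, 0.5, 1.0), which dominates a typical normal map:
print(decode_normal((0.5, 0.5, 1.0)))  # (0.0, 0.0, 1.0)
```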
Here is the effect after applying the normal map:
I made a demo:
The code is in my repository: github.com/kenneycode/…
Thanks for reading!
Reference:
learnopengl-cn.github.io/