Translation

Original title: Android Lesson Two: Ambient and Diffuse Lighting. Original link: www.learnopengles.com/android-les…


Ambient and diffuse lighting

Welcome to lesson two, where we will learn how to implement Lambertian reflectance, also known as standard diffuse lighting, with shaders.



In OpenGL ES 2, we need to implement our own lighting algorithms, so we need to learn how the math works and how to apply it to our scenes.

Prerequisites for reading this article

Each lesson in this series builds on the previous ones. Before you begin, take a look at lesson one, as this lesson builds on the basic concepts introduced there.

What is light?

That’s right! A world without light is dark. Without light we can’t even perceive the world or the objects around us, except through other senses such as sound and touch. Light tells us whether an object is bright or dim, near or far, and at what angle it sits.

In the real world, the light we perceive is actually a collection of trillions of tiny particles called photons. They fly out of a light source, bounce around thousands or millions of times, and eventually reach our eyes; that is what we call light.

How do we model the effects of light in computer graphics? There are two popular methods: ray tracing and rasterization. Ray tracing works by mathematically tracing actual rays of light and seeing where they end up. This technique can produce very accurate and realistic results, but the downside is that simulating all those rays is computationally expensive and usually too slow for real-time rendering. Because of this limitation, most real-time graphics use rasterization instead, which simulates lighting by approximation. Given how realistic current games look, rasterization can look great, and real-time graphics can run quickly even on mobile phones. OpenGL ES is primarily a rasterization library, so that is what we’ll focus on.

Different kinds of light

It turns out that we can abstract away the way light works and come up with three basic kinds of lighting:


Ambient lighting

This is a base level of lighting that seems to spread throughout the whole scene. It doesn’t appear to come from any single light source, because it has bounced many times before reaching you. This type of light can be experienced outdoors on an overcast day, or indoors as the cumulative effect of many different light sources. Instead of calculating every light individually, we can simply set a base brightness for the object or scene.


An example of ambient and diffuse lighting

Diffuse lighting

This is light that bounces off an object and reaches your eye directly. The brightness of an object depends on its angle to the light: a surface facing the light directly will be brighter than one at another angle. Also, an object appears equally bright no matter what angle we view it from; this is known as Lambert’s cosine law. Diffuse lighting, or Lambertian reflectance, is common in everyday life and can easily be seen on a white wall lit by an indoor light.


An example of specular highlights

Specular lighting

Unlike diffuse lighting, specular lighting changes as we move relative to the object. This gives the object a “shine”, and it can be seen on “smoother” surfaces such as glass and other shiny objects.

Simulating light

Just as there are three main kinds of lighting in a 3D scene, there are three main types of light source: directional lights, point lights, and spot lights. These, too, can easily be seen in everyday life.


A bright landscape

Directional lighting

Directional lighting usually comes from a source so far away that it illuminates the whole scene evenly and at the same brightness. This is the simplest type of light source: the light has the same intensity and direction no matter where you are in the scene.


An example of a point light source

Point lighting

Point lights can be added to a scene to provide more varied and realistic lighting. A point light’s illumination falls off with distance, and its rays travel outward in all directions, with the light source at the center.


Spot lighting

In addition to the properties of a point light, a spotlight’s light also attenuates with direction, usually in the shape of a cone.

The math

In this lesson, we will look at ambient and diffuse lighting from a point light source.

Ambient lighting

Ambient lighting is really just indirect diffuse lighting, but it can also be thought of as a low level of light spread throughout the scene. If we think of it that way, it becomes very easy to calculate:

// Final color = Material color * Ambient color
final color = material color * ambient light color

For example, suppose we have a red object and a dim white ambient light. Let’s assume colors are stored as an array of three values (red, green, and blue), using the RGB color model:

// Final color = red * Dark white = dark red
final color = {1, 0, 0} * {0.1, 0.1, 0.1} = {0.1, 0, 0}

The final color of the object will be dark red, which is what you would expect for a red object lit by a dim white light. There really isn’t much more to basic ambient lighting than that, unless you want to add more advanced techniques such as radiosity.
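To make the component-wise multiplication concrete, here is a minimal Java sketch; the array names are illustrative, not from the lesson’s code, and in practice this multiplication happens in the shader:

float[] materialColor = {1.0f, 0.0f, 0.0f};  // a red object
float[] ambientColor  = {0.1f, 0.1f, 0.1f};  // dim white ambient light
float[] finalColor = new float[3];
for (int i = 0; i < 3; i++) {
    // final color = material color * ambient light color, per channel
    finalColor[i] = materialColor[i] * ambientColor[i];
}
// finalColor is now {0.1, 0.0, 0.0}: dark red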

Diffuse lighting – Point light source

For diffuse lighting, we also need attenuation and a light position. The light position is used to calculate the angle between the light and the surface, which affects the surface’s overall lighting level. It is also used to calculate the distance between the light and the surface, which determines the strength of the light at that point.

Step 1: Calculate the Lambert Factor

The first thing we need to figure out is the angle between the surface and the light. A surface facing the light head-on is lit at full strength, while a surface at an angle receives less light; Lambert’s cosine law gives the appropriate way to calculate this. We need two vectors: one from the light to a point on the surface, and the surface normal (if the surface is flat, the normal is a vector perpendicular to the surface, pointing straight away from it). We then normalize each vector so that its length is 1, and calculate the cosine of the angle between them by taking their dot product. This can all be done easily in OpenGL ES 2.

We call this the lambert factor, and it ranges from 0 to 1:

// light vector = light position - object position
light vector = light position - object position
// cosine = dot product of the object normal and the normalized light vector
cosine = dot product(object normal, normalize(light vector))
// lambert factor = the larger of the cosine and 0
lambert factor = max(cosine, 0)

First we get the light vector by subtracting the object’s position from the light’s position, and then we get the cosine by taking the dot product of the object’s normal and the normalized light vector. Normalizing a vector means scaling it so that its length is 1; the object’s normal already has length 1, and the dot product of two normalized vectors is the cosine of the angle between them. Since the dot product ranges from -1 to 1, we clamp it to the range 0 to 1.
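If you want to verify the math on the Java side, here is a small sketch of the three operations; the helper names are hypothetical, and in the actual lesson this work is done by the shader:

// Hypothetical helpers for the lambert factor calculation.
static float[] normalize(float[] v) {
    // Divide each component by the vector's length so the result has length 1
    float length = (float) Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
    return new float[] { v[0] / length, v[1] / length, v[2] / length };
}

static float dot(float[] a, float[] b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

static float lambertFactor(float[] lightPos, float[] surfacePos, float[] normal) {
    // light vector = light position - object position
    float[] lightVector = { lightPos[0] - surfacePos[0],
                            lightPos[1] - surfacePos[1],
                            lightPos[2] - surfacePos[2] };
    // cosine = dot product(object normal, normalize(light vector))
    float cosine = dot(normal, normalize(lightVector));
    // lambert factor = max(cosine, 0)
    return Math.max(cosine, 0.0f);
}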

Here’s an example of a plane at the origin with a surface normal pointing to the sky.

The light is at {0, 10, -10}, and we want to calculate the light at the origin.

// light vector
light vector = {0, 10, -10} - {0, 0, 0} = {0, 10, -10}
// object normal
object normal = {0, 1, 0}

Simply put, if we travel along the light vector, we reach the position of the light source. To normalize the vector, we divide each component by the vector’s length:

// light vector length
light vector length = square root(0*0 + 10*10 + (-10 * -10)) = square root(200) = 14.14
// normalized light vector
normalized light vector = {0/14.14, 10/14.14, -10/14.14} = {0, 0.707, -0.707}

Then we compute the dot product:

// dot product
dot product({0, 1, 0}, {0, 0.707, -0.707}) = (0 * 0) + (1 * 0.707) + (0 * -0.707) = 0.707

There is a nice explanation of the dot product available online if you want more detail.

Finally, we clamp the range:

// lambert factor
lambert factor = max(0.707, 0) = 0.707

OpenGL ES 2’s shading language has built-in support for some of these functions, so we don’t need to do all of this math by hand, but it’s still helpful to understand what’s going on.

Step 2: Calculate the attenuation factor

Next, we need to calculate the attenuation. Real light from a point source falls off according to the inverse square law, which can also be written like this:

// Brightness = 1 / distance squared
luminosity = 1 / (distance * distance)

Going back to our example, since the length of our light vector is 14.14, here is our final luminosity:

luminosity = 1 / (14.14 * 14.14) = 1 / 200 = 0.005

As you can see, the inverse square law makes light fall off sharply with distance. This is how a point light behaves in the real world, but given the limited dynamic range of our displays, it is useful to damp this attenuation factor so that we can still get realistic lighting without everything looking dim.
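To illustrate the difference, here is a Java sketch contrasting raw inverse-square falloff with the damped version this lesson’s shader uses later (the 0.25 constant matches that shader; the method names are hypothetical):

static float rawLuminosity(float distance) {
    // luminosity = 1 / distance^2: physically accurate, but falls off very sharply
    return 1.0f / (distance * distance);
}

static float dampedLuminosity(float distance) {
    // Adding 1 prevents oversaturation at distances below 1,
    // and scaling by 0.25 softens the falloff
    return 1.0f / (1.0f + 0.25f * distance * distance);
}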

Step 3: Calculate the final color

Now that we know the lambert factor and the luminosity, we can calculate our final color:

// Final color = Material color * (Light color * Lambert factor * Brightness)
final color = material color * (light color * lambert factor * luminosity)

Continuing our previous example of a red object and a white light source, here is the final color calculation:

final color = {1, 0, 0} * ({1, 1, 1} * 0.707 * 0.005) = {1, 0, 0} * {0.0035, 0.0035, 0.0035} = {0.0035, 0, 0}

To review: for diffuse lighting, we need to use the angle between the surface and the light, as well as the distance between them, to calculate the final overall diffuse brightness.

Here are the steps:

// Step 1
light vector = light position - object position
cosine = dot product(object normal, normalize(light vector))
lambert factor = max(cosine, 0)

// Step 2
luminosity = 1 / (distance * distance)

// Step 3
final color = material color * (light color * lambert factor * luminosity)
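For reference, here is one way the three steps could be combined in plain Java, reusing the hypothetical helpers sketched earlier; the real calculation happens per vertex in the shader below:

static float[] diffuseColor(float[] materialColor, float[] lightColor,
                            float[] lightPos, float[] surfacePos, float[] normal) {
    // Step 1: the lambert factor from the angle between surface and light
    float lambert = lambertFactor(lightPos, surfacePos, normal);

    // Step 2: attenuation from the inverse square law
    float dx = lightPos[0] - surfacePos[0];
    float dy = lightPos[1] - surfacePos[1];
    float dz = lightPos[2] - surfacePos[2];
    float distance = (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
    float luminosity = 1.0f / (distance * distance);

    // Step 3: final color = material color * (light color * lambert factor * luminosity)
    float[] finalColor = new float[3];
    for (int i = 0; i < 3; i++) {
        finalColor[i] = materialColor[i] * (lightColor[i] * lambert * luminosity);
    }
    return finalColor;
}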

Put it all into an OpenGL ES 2 shader

Vertex shader

final String vertexShader =
        "uniform mat4 u_MVPMatrix; \n" + // a constant representing the composite Model, View, and Projection matrices
        "uniform mat4 u_MVMatrix; \n" + // A constant representing the combined model and View matrices
        "uniform vec3 u_LightPos; \n" + // The position of the light source in the eye space (relative to the camera perspective)

        "attribute vec4 a_Position; \n" + // We will pass in the position information for each vertex
        "attribute vec4 a_Color; \n" + // We will pass in the color information for each vertex
        "attribute vec3 a_Normal; \n" + // We will pass in the normal information for each vertex

        "varying vec4 v_Color; \n" + // This will be passed into the fragment shader

        "void main() \n" + // Vertex shader entry
        "{ \n" +
        // Convert vertices to eye space (relative to camera perspective)
        " vec3 modelViewVertex = vec3(u_MVMatrix * a_Position); \n" +
        // Convert the normal's direction into eye space (relative to the camera)
        " vec3 modelViewNormal = vec3(u_MVMatrix * vec4(a_Normal, 0.0)); \n" +
        // This will be used for attenuation
        " float distance = length(u_LightPos - modelViewVertex); \n" +
        // Get a light vector from the light source to the vertex
        " vec3 lightVector = normalize(u_LightPos - modelViewVertex); \n" +
        // Calculate the dot product of the light vector and the vertex normal; if both point in the same direction, we get maximum illumination
        " float diffuse = max(dot(modelViewNormal, lightVector), 0.1); \n" +
        // Attenuate the light based on the distance
        " diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance))); \n" +
        // Multiply the color by the illumination level; it will be interpolated across the triangle
        " v_Color = a_Color * diffuse; \n" +
        // gl_Position is a special variable to store the final position
        // Multiply the vertices by the matrix to get the final point of the normalized screen coordinates
        " gl_Position = u_MVPMatrix * a_Position; \n" +
        "} \n";

There is quite a bit going on here. In lesson one we had a combined model/view/projection matrix, but here we have also added a model/view matrix. Why? Because we need it to calculate the distance between the light position and the current vertex position. For diffuse lighting, it doesn’t actually matter whether you use world space (model matrix) or eye space (model/view matrix), as long as you can calculate the proper distances and angles.
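For context, here is a sketch of how these uniforms might be fed to the shader each frame; the handle variables and mLightPosInEyeSpace are assumptions for illustration, not necessarily the lesson's exact code:

// Look up the uniforms declared at the top of the vertex shader
int mvMatrixHandle  = GLES20.glGetUniformLocation(programHandle, "u_MVMatrix");
int mvpMatrixHandle = GLES20.glGetUniformLocation(programHandle, "u_MVPMatrix");
int lightPosHandle  = GLES20.glGetUniformLocation(programHandle, "u_LightPos");

// Pass in the model/view matrix (used for the lighting calculations)...
GLES20.glUniformMatrix4fv(mvMatrixHandle, 1, false, mMVMatrix, 0);
// ...the combined model/view/projection matrix (used for the final position)...
GLES20.glUniformMatrix4fv(mvpMatrixHandle, 1, false, mMVPMatrix, 0);
// ...and the light position, already transformed into eye space
GLES20.glUniform3f(lightPosHandle, mLightPosInEyeSpace[0],
        mLightPosInEyeSpace[1], mLightPosInEyeSpace[2]);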

We pass in each vertex’s position and color, as well as its normal. We pass the final color to the fragment shader, which will interpolate it between the vertices; this is also known as Gouraud shading.

Let’s look at the meaning of each part of the shader:

// Convert vertices to eye space (relative to camera perspective)
" vec3 modelViewVertex = vec3(u_MVMatrix * a_Position); \n"

Since the light position is specified in eye space, we convert the current vertex position into eye-space coordinates as well, so that we can calculate the proper distance and angle.


// Convert the normal's direction into eye space (relative to the camera)
" vec3 modelViewNormal = vec3(u_MVMatrix * vec4(a_Normal, 0.0)); \n" +

We also need to transform the direction of the normal. Here we just do a regular matrix multiplication, as above, but this won’t be correct if the model or view matrix contains scaling or skewing: in that case we would actually need to undo those effects by multiplying the normal by the transpose of the inverse of the original matrix. The Lighthouse3D article in the further reading gives a good explanation of why.
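If you do need a proper normal matrix, here is one way to build it on the CPU with android.opengl.Matrix; this is a sketch under the assumption that mMVMatrix holds the model/view matrix, not part of the lesson's code (the lesson gets away with u_MVMatrix because the cube is only rotated, never scaled or skewed):

float[] invertedMatrix = new float[16];
float[] normalMatrix   = new float[16];
if (Matrix.invertM(invertedMatrix, 0, mMVMatrix, 0)) {
    // The normal matrix is the transpose of the inverse of the model/view matrix
    Matrix.transposeM(normalMatrix, 0, invertedMatrix, 0);
    // normalMatrix could now be passed to the shader and used to transform a_Normal
}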


// This will be used for attenuation
" float distance = length(u_LightPos - modelViewVertex); \n"

As shown in the math section above, we need this distance to calculate the attenuation factor.


// Get a light vector from the light source to the vertex
" vec3 lightVector = normalize(u_LightPos - modelViewVertex); \n"

We also need the light vector to calculate the Lambertian reflectance factor.


// Calculate the dot product of the light vector and the vertex normal; if both point in the same direction, we get maximum illumination
" float diffuse = max(dot(modelViewNormal, lightVector), 0.1); \n"

This is the same math as in the section above, just done in an OpenGL ES 2 shader. The 0.1 at the end is a very cheap way of doing ambient lighting: the lighting level will never drop below 0.1.


// Attenuate the light based on the distance
" diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance))); \n"

This differs slightly from the math above. We scale the square of the distance by 0.25 to damp the attenuation, and we also add 1 to the modified distance so that we don’t get oversaturation when the light is very close to the object (otherwise, when the distance is less than 1, this equation would actually brighten the light instead of attenuating it).


// Multiply the color by the illumination level; it will be interpolated across the triangle
" v_Color = a_Color * diffuse; \n" +
// gl_Position is a special variable to store the final position
// Multiply the vertices by the matrix to get the final point of the normalized screen coordinates
" gl_Position = u_MVPMatrix * a_Position; \n"

Once we have the final light color, we multiply it by the vertex color to get the final output color, and then we project the position of that vertex onto the screen.

Fragment shader

final String fragmentShader =
        "precision mediump float; \n" + // We set the default precision to medium, we don't need the high precision in the fragment shader
                "varying vec4 v_Color; \n" + // This is the vertex shader color interpolated from each segment of the triangle
                "void main() \n" + // Fragment shader entry
                "{ \n" +
                " gl_FragColor = v_Color; \n" + // Pass the color directly
                "} \n";

Because we calculate the lighting per vertex, our fragment shader looks the same as in the last lesson: all we do is pass the color straight through. In the next lesson, we will look at per-pixel lighting.

Per-vertex lighting versus per-pixel lighting

In this lesson we focus on implementing per-vertex lighting. This is usually good enough for objects with smooth surfaces (such as terrain), or for diffuse lighting on objects with many triangles. However, when your object doesn’t have many vertices (like our cube here) or has sharp corners, per-vertex lighting can produce artifacts, because the brightness is interpolated linearly across each polygon; these artifacts become even more noticeable once specular highlights are added. See the Wikipedia article on Gouraud shading for more.

The structure of the cube

In the first lesson, we packed the position and color attributes into a single array, but OpenGL ES 2 also lets us store the attributes in separate arrays:

// X, Y, Z
final float[] cubePositionData = {
        // In OpenGL, counter-clockwise winding (the points below are defined in counter-clockwise order) is the default.
        // This means that when we are looking at a triangle, if its points are counter-clockwise we are looking at the front; otherwise we are looking at the back.
        // OpenGL has an optimization that culls all back-facing triangles, since they usually represent the inside of an object and are not visible anyway.
        // Front face
        -1.0f,  1.0f,  1.0f,
        -1.0f, -1.0f,  1.0f,
         1.0f,  1.0f,  1.0f,
        -1.0f, -1.0f,  1.0f,
         1.0f, -1.0f,  1.0f,
         1.0f,  1.0f,  1.0f,
        ...
};

// R, G, B, A
final float[] cubeColorData = {
        // Front face (red)
        1.0f, 0.0f, 0.0f, 1.0f,
        1.0f, 0.0f, 0.0f, 1.0f,
        1.0f, 0.0f, 0.0f, 1.0f,
        1.0f, 0.0f, 0.0f, 1.0f,
        1.0f, 0.0f, 0.0f, 1.0f,
        1.0f, 0.0f, 0.0f, 1.0f,
        ...
};
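As a sketch of how a separate array like this might then be wired up (the buffer and handle names are illustrative), each array is wrapped in a native-order FloatBuffer, as in lesson one, and bound to its attribute:

// Wrap the position data in a direct FloatBuffer
FloatBuffer cubePositions = ByteBuffer
        .allocateDirect(cubePositionData.length * 4)  // 4 bytes per float
        .order(ByteOrder.nativeOrder())
        .asFloatBuffer()
        .put(cubePositionData);
cubePositions.position(0);

// Point a_Position at the buffer: 3 floats per vertex, tightly packed (stride 0)
GLES20.glVertexAttribPointer(positionHandle, 3, GLES20.GL_FLOAT, false, 0, cubePositions);
GLES20.glEnableVertexAttribArray(positionHandle);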

New OpenGL flags

We also enabled culling and depth buffering using the glEnable() call:

// Use culling to get rid of the back
GLES20.glEnable(GLES20.GL_CULL_FACE);
// Enable depth testing
GLES20.glEnable(GLES20.GL_DEPTH_TEST);

As an optimization, you can tell OpenGL to cull the triangles on the back side of an object. When we defined our cube, we defined the three points of each triangle so that they wind counter-clockwise when we look at the front side. When a triangle is flipped around so that we see its back, the points appear clockwise instead. Since you can only ever see three faces of a cube at once, this optimization tells OpenGL not to waste time drawing the back triangles.

Later, when we draw transparent objects, we will want to turn culling back off so that the back of the object becomes visible.

We also turn on depth testing. Depth testing isn’t strictly necessary if you always draw objects from back to front, but enabling it means you don’t need to worry about draw order (although rendering can be faster if you draw the closest objects first), and some graphics cards have optimizations that speed up rendering by spending less time drawing hidden pixels.

Changes to the shader loading code

Because the steps for loading a shader program in OpenGL are mostly the same every time, they can easily be refactored into a separate method. We also added the following calls to retrieve debug information in case of a compile or link failure:

GLES20.glGetProgramInfoLog(programHandle);
GLES20.glGetShaderInfoLog(shaderHandle);
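A sketch of what the refactored helper might look like (the method name compileShader is an assumption):

private int compileShader(int shaderType, String shaderSource) {
    int shaderHandle = GLES20.glCreateShader(shaderType);
    if (shaderHandle != 0) {
        GLES20.glShaderSource(shaderHandle, shaderSource);
        GLES20.glCompileShader(shaderHandle);

        // Check the compile status; log the error and clean up on failure
        final int[] compileStatus = new int[1];
        GLES20.glGetShaderiv(shaderHandle, GLES20.GL_COMPILE_STATUS, compileStatus, 0);
        if (compileStatus[0] == 0) {
            Log.e("Shader", "Compile error: " + GLES20.glGetShaderInfoLog(shaderHandle));
            GLES20.glDeleteShader(shaderHandle);
            shaderHandle = 0;
        }
    }
    return shaderHandle;
}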

Vertex and fragment shaders for the light point

These new vertex and fragment shader programs draw a point on the screen that represents the current position of the light source:

// Define a simple shader
final String pointVertexShader =
        "uniform mat4 u_MVPMatrix; \n" +
        "attribute vec4 a_Position; \n" +
        "void main() \n" +
        "{ \n" +
        " gl_Position = u_MVPMatrix * a_Position; \n" +
        "Gl_PointSize = 5.0; \n" +
        "} \n";
final String pointFragmentShader =
        "precision mediump float; \n" +
        "void main() \n" +
        "{ \n" +
        "Gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0) \n" +
        "} \n";

This shader is similar to the simple shader from lesson one, but with a new element: gl_PointSize, which we set directly to 5.0, the size of the point in pixels. It is used when we draw the point using GLES20.GL_POINTS mode. We also set the output color directly to white.
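For illustration, here is a sketch of how the point might be drawn with this program; the handle and variable names are assumptions:

GLES20.glUseProgram(pointProgramHandle);
int pointMVPMatrixHandle = GLES20.glGetUniformLocation(pointProgramHandle, "u_MVPMatrix");
int pointPositionHandle  = GLES20.glGetAttribLocation(pointProgramHandle, "a_Position");

// Pass in the light's model-space position as a constant vertex attribute
GLES20.glVertexAttrib3f(pointPositionHandle, lightPosInModelSpace[0],
        lightPosInModelSpace[1], lightPosInModelSpace[2]);
GLES20.glDisableVertexAttribArray(pointPositionHandle);

// Pass in the transformation matrix and draw a single point
GLES20.glUniformMatrix4fv(pointMVPMatrixHandle, 1, false, mMVPMatrix, 0);
GLES20.glDrawArrays(GLES20.GL_POINTS, 0, 1);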

Further practice

  • Try removing the “over-saturation” protection and see what happens.
  • There’s a flaw in the lighting. Can you tell what it is? Hint: what is the drawback of the way we did ambient lighting, and what does the alpha channel have to do with it?
  • What happens if you add gl_PointSize to the cube shader and draw it using GL_POINTS?

Further reading

  • Clockworkcoders tutorial: Per-fragment lighting
  • Lighthouse3d.com: The normal matrix
  • Arcsynthesis.org OpenGL tutorial: Normal transformation
  • OpenGL Programming Guide, Chapter 5: Lighting

The further reading above was an invaluable resource while writing this tutorial, so I strongly encourage you to read those articles for more information and explanation.

Tutorial directory

  • OpenGL Android Lesson 1: Getting Started
  • OpenGL Android Lesson 2: Ambient and Diffuse Lighting
  • OpenGL Android Lesson 3: Per-Pixel Lighting
  • OpenGL Android Lesson 4: Introducing the Basics of Texturing
  • OpenGL Android Lesson 5: An Introduction to Blending
  • OpenGL Android Lesson 6: An Introduction to Texture Filtering

Downloads

The code for this lesson is available for download on GitHub, and a compiled version of the app is available on Google Play.