This post takes a look at 2D textures and builds a complete Renderer that draws and displays an image.

2D texture

2D texture is the most basic and commonly used texture form in OpenGL ES. A 2D texture is essentially a two-dimensional array of image data. The individual data elements of a texture are called “Texel (Texture Pixels)”. When rendering with 2D textures, texture coordinates are used as indexes in the texture image. The texture coordinates of a 2D texture are specified by a pair of 2D coordinates (s,t), sometimes called (u,v) coordinates.

Texture coordinates lie on the s and t axes and range from 0 to 1 (for the 2D texture images we are using here). Obtaining a texture color from texture coordinates is called sampling. Texture coordinates start at (0, 0), the lower-left corner of the texture image, and end at (1, 1), the upper-right corner. The image below shows how we map texture coordinates onto a triangle.

We specify three texture coordinate points for the triangle. As shown above, we want the lower-left corner of the triangle to correspond to the lower-left corner of the texture, so we set the texture coordinates of the lower-left vertex to (0, 0); the top vertex corresponds to the top-center of the image, so we set its texture coordinates to (0.5, 1.0); similarly, the lower-right vertex is set to (1, 0). We simply pass these three texture coordinates to the vertex shader, which forwards them to the fragment shader, and the fragment shader interpolates a texture coordinate for each fragment.
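That interpolation step can be sketched in plain Java (the class and method names are hypothetical, and real rasterizers also apply perspective correction): a fragment's texture coordinate is the barycentric-weighted average of the three vertex coordinates.

```java
public class TexCoordInterpolation {

    /**
     * Interpolates an (s, t) texture coordinate inside a triangle using
     * barycentric weights w0 + w1 + w2 = 1 (perspective correction omitted).
     */
    static float[] interpolate(float[] t0, float[] t1, float[] t2,
                               float w0, float w1, float w2) {
        return new float[]{
                w0 * t0[0] + w1 * t1[0] + w2 * t2[0],
                w0 * t0[1] + w1 * t1[1] + w2 * t2[1]
        };
    }
}
```

For the triangle described above, a fragment at the centroid (equal weights of 1/3) ends up with texture coordinate (0.5, 1/3).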

Texture coordinates would look like this:

float texCoords[] = {
        0.0f, 0.0f,  // lower-left corner
        1.0f, 0.0f,  // lower-right corner
        0.5f, 1.0f   // top-center
};

Texture sampling can be interpreted loosely and performed in several different ways, so we need to tell OpenGL ourselves how to sample textures.

Texture wrap

Texture coordinates usually range from (0, 0) to (1, 1), so what happens if we set texture coordinates outside that range? OpenGL's default behavior is to repeat the texture image (it essentially ignores the integer part of the floating-point texture coordinate), but OpenGL offers more options:

Wrap mode Description
GL_REPEAT The default behavior for textures. Repeats the texture image.
GL_MIRRORED_REPEAT Same as GL_REPEAT, but each repetition is mirrored.
GL_CLAMP_TO_EDGE Clamps the coordinates between 0 and 1; out-of-range coordinates sample the edge texels, which stretches the edge pattern outward.
GL_CLAMP_TO_BORDER Out-of-range coordinates are given a user-specified border color.
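To build intuition for these modes, here is a rough Java emulation of how each one maps an out-of-range coordinate back into [0, 1] (GPUs actually clamp with half-texel insets, so treat this as an approximation; the class name is made up):

```java
public class WrapModes {

    /** GL_REPEAT: ignore the integer part of the coordinate. */
    static float repeat(float c) {
        return c - (float) Math.floor(c);
    }

    /** GL_MIRRORED_REPEAT: every other repetition is mirrored. */
    static float mirroredRepeat(float c) {
        float f = repeat(c / 2f) * 2f;   // fold into a period of 2
        return f <= 1f ? f : 2f - f;     // mirror the second half
    }

    /** GL_CLAMP_TO_EDGE: clamp into [0, 1]. */
    static float clampToEdge(float c) {
        return Math.max(0f, Math.min(1f, c));
    }
}
```

For a coordinate of 1.25, for example, GL_REPEAT yields 0.25, GL_MIRRORED_REPEAT yields 0.75, and GL_CLAMP_TO_EDGE yields 1.0.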

Each of the above options can be set per coordinate axis (s, t, and r if you are using 3D textures) using the glTexParameter* functions:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_MIRRORED_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_MIRRORED_REPEAT);

The first parameter specifies the texture target; we are using a 2D texture, so the texture target is GL_TEXTURE_2D. The second parameter requires us to specify both the option to set and the texture axis it applies to: here we configure the WRAP option on the S and T axes. The final parameter is the wrap mode itself; in this case OpenGL applies GL_MIRRORED_REPEAT to the currently bound texture.

Texture filtering

Texture coordinates are resolution-independent and can be any floating-point value, so OpenGL needs to work out which texels to map a texture coordinate to. This becomes especially important when you have a large object with a low-resolution texture. As you might have guessed, OpenGL also has options for Texture Filtering. There are many of them, but for now we will discuss only the two most important: GL_NEAREST and GL_LINEAR.

GL_NEAREST (also called Nearest Neighbor Filtering) is OpenGL's default texture filtering mode. When set to GL_NEAREST, OpenGL selects the texel whose center is closest to the texture coordinate. In the image below you can see four texels, with the plus sign representing the texture coordinate. The center of the upper-left texel is closest to the texture coordinate, so it is selected as the sample color:

GL_LINEAR (also known as LINEAR Filtering) computes an interpolation based on texture pixels near texture coordinates that approximates the colors between those texture pixels. The closer the center of a texture pixel is to the texture coordinates, the more the color of the texture pixel contributes to the final sample color. In the image below you can see that the returned color is a blend of adjacent pixels:

So what are the visual effects of these two texture filters? Let's see what happens when we apply a low-resolution texture to a very large object (the texture is magnified so that individual texels become visible):

GL_NEAREST generates a grainy pattern and we can clearly see the pixels that compose the texture, while GL_LINEAR produces a smoother pattern and it is difficult to see individual texture pixels. GL_LINEAR produces more realistic output, but some developers prefer the 8-bit style, so they use the GL_NEAREST option.
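The two filters can be sketched on a single-channel (grayscale) texture stored as a float[][] (a simplification assumed here for illustration; real texels are RGBA):

```java
public class Filtering {

    /** GL_NEAREST: pick the single texel that the coordinate falls in. */
    static float nearest(float[][] tex, float s, float t) {
        int x = Math.min(tex[0].length - 1, (int) (s * tex[0].length));
        int y = Math.min(tex.length - 1, (int) (t * tex.length));
        return tex[y][x];
    }

    /** GL_LINEAR: weighted average of the four surrounding texels. */
    static float bilinear(float[][] tex, float s, float t) {
        // Shift by half a texel so interpolation runs between texel centers
        float x = s * tex[0].length - 0.5f;
        float y = t * tex.length - 0.5f;
        int x0 = (int) Math.floor(x), y0 = (int) Math.floor(y);
        float fx = x - x0, fy = y - y0;
        int x1 = Math.min(x0 + 1, tex[0].length - 1);
        int y1 = Math.min(y0 + 1, tex.length - 1);
        x0 = Math.max(x0, 0);
        y0 = Math.max(y0, 0);
        float top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx;
        float bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx;
        return top * (1 - fy) + bot * fy;
    }
}
```

On a 2×2 checker texture, sampling the exact center returns one texel's value unchanged with nearest filtering, but an even blend of all four texels with bilinear filtering.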

You can set the filtering option separately for magnification (Magnify) and minification (Minify); for example, you could use nearest-neighbor filtering when the texture is scaled down and linear filtering when it is scaled up. We specify the filters with the glTexParameter* functions, and the code looks similar to the texture wrapping setup:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

Mipmaps

Imagine a large room with thousands of objects, each with a texture. Some objects are far away, yet their textures have the same high resolution as those of nearby objects. Since a distant object may produce only a few fragments, OpenGL struggles to pick the correct color for those fragments from a high-resolution texture: each fragment spans a large portion of the texture but samples only a single texel. This looks unrealistic on small objects, not to mention the memory wasted by using high-resolution textures on them.

OpenGL solves this problem with a concept called mipmaps: simply a series of texture images, each one half the size of the previous one. The idea behind mipmaps is simple: once the observer's distance exceeds a certain threshold, OpenGL switches to the mipmap level that best fits the distance to the object. Because the object is far away, the lower resolution goes unnoticed by the user. Another plus of mipmaps is performance: sampling from a smaller texture level is cheaper. Let's take a look at what a mipmapped texture looks like:

Manually creating a chain of mipmaps for every texture image is cumbersome. Fortunately, OpenGL has the glGenerateMipmap function: call it after creating a texture and OpenGL does all the work.
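The arithmetic behind the chain is easy to verify: a full mipmap chain for a w×h texture has floor(log2(max(w, h))) + 1 levels, and the whole chain costs only about one third more memory than the base level (the helper names below are my own):

```java
public class MipmapMath {

    /** Number of levels in a full mipmap chain: floor(log2(max(w, h))) + 1. */
    static int levelCount(int w, int h) {
        return 32 - Integer.numberOfLeadingZeros(Math.max(w, h));
    }

    /** Total texels across the whole chain (each level halves each dimension). */
    static long totalTexels(int w, int h) {
        long total = 0;
        while (true) {
            total += (long) w * h;
            if (w == 1 && h == 1) break;
            w = Math.max(1, w / 2);
            h = Math.max(1, h / 2);
        }
        return total;
    }
}
```

A 4×4 texture, for example, produces 3 levels (4×4, 2×2, 1×1) totaling 21 texels against 16 for the base level alone.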

Filter mode Description
GL_NEAREST_MIPMAP_NEAREST Uses the mipmap level that best matches the pixel size and samples it with nearest-neighbor interpolation
GL_LINEAR_MIPMAP_NEAREST Uses the nearest mipmap level and samples it with linear interpolation
GL_NEAREST_MIPMAP_LINEAR Linearly interpolates between the two mipmap levels that best match the pixel size and samples with nearest-neighbor interpolation
GL_LINEAR_MIPMAP_LINEAR Linearly interpolates between the two nearest mipmap levels and samples with linear interpolation

Just like texture filtering, we can use glTexParameteri to set the filtering to one of the four methods mentioned above:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

Code implementation

First, define vertex coordinates and texture coordinates

/**
 * Vertex coordinates
 * (x, y, z)
 */
private float[] POSITION_VERTEX = new float[]{
        0f, 0f, 0f,     // vertex coordinate V0
        1f, 1f, 0f,     // vertex coordinate V1
        -1f, 1f, 0f,    // vertex coordinate V2
        -1f, -1f, 0f,   // vertex coordinate V3
        1f, -1f, 0f     // vertex coordinate V4
};

/**
 * Texture coordinates
 * (s, t)
 */
private static final float[] TEX_VERTEX = {
        0.5f, 0.5f,     // texture coordinate V0
        1f, 0f,         // texture coordinate V1
        0f, 0f,         // texture coordinate V2
        0f, 1f,         // texture coordinate V3
        1f, 1f          // texture coordinate V4
};

Here the vertex coordinates and texture coordinates correspond one-to-one; they simply have different origins and value ranges, as shown in the figure below.

/**
 * Indices: when drawing, the corresponding vertex data is fetched through
 * these indices and then drawn in the specified mode
 */
private static final short[] VERTEX_INDEX = {
        0, 1, 2,    // vertices V0, V1, V2 form a triangle
        0, 2, 3,    // vertices V0, V2, V3 form a triangle
        0, 3, 4,    // vertices V0, V3, V4 form a triangle
        0, 4, 1     // vertices V0, V4, V1 form a triangle
};
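Before these arrays can be handed to OpenGL they must be copied into native-order direct NIO buffers; the following helper shows the common pattern (the class and method names are my own, but the resulting buffers are what fields like vertexBuffer and mVertexIndexBuffer in the draw code hold):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.ShortBuffer;

public class BufferUtil {

    /** Copies a float array into a native-order direct buffer, rewound to 0. */
    static FloatBuffer toFloatBuffer(float[] data) {
        FloatBuffer buffer = ByteBuffer
                .allocateDirect(data.length * 4)    // 4 bytes per float
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer()
                .put(data);
        buffer.position(0);
        return buffer;
    }

    /** Same for the short[] index array consumed by glDrawElements. */
    static ShortBuffer toShortBuffer(short[] data) {
        ShortBuffer buffer = ByteBuffer
                .allocateDirect(data.length * 2)    // 2 bytes per short
                .order(ByteOrder.nativeOrder())
                .asShortBuffer()
                .put(data);
        buffer.position(0);
        return buffer;
    }
}
```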
/**
 * Vertex shader
 */
private String vertextShader =
                "#version 300 es\n" +
                "layout (location = 0) in vec4 vPosition;\n" +
                "layout (location = 1) in vec2 aTextureCoord;\n" +
                "// matrix\n" +
                "uniform mat4 u_Matrix;\n" +
                "// output texture coordinates (s,t)\n" +
                "out vec2 vTexCoord;\n" +
                "void main() {\n" +
                "    gl_Position = u_Matrix * vPosition;\n" +
                "    gl_PointSize = 10.0;\n" +
                "    vTexCoord = aTextureCoord;\n" +
                "}\n";

The fragment shader should then take the output variable vTexCoord as the input variable.

The fragment shader should also be able to access the texture object, but how do we pass a texture object to the fragment shader? GLSL has a built-in data type for texture objects called a sampler, which takes the texture type as a suffix, e.g. sampler1D, sampler3D, or in our case sampler2D. We can simply declare a uniform sampler2D to add a texture to the fragment shader, and we will assign our texture to this uniform later.

/**
 * Fragment shader
 */
private String fragmentShader =
                "#version 300 es\n" +
                "precision mediump float;\n" +
                "uniform sampler2D uTextureUnit;\n" +
                "// receive texture coordinates (s,t) from the vertex shader\n" +
                "in vec2 vTexCoord;\n" +
                "out vec4 vFragColor;\n" +
                "void main() {\n" +
                "    vFragColor = texture(uTextureUnit, vTexCoord);\n" +
                "}\n";

We use GLSL’s built-in texture function to sample the color of the texture. Its first parameter is the texture sampler and its second parameter is the corresponding texture coordinate. The texture function samples the corresponding color value using the previously set texture parameter. The output of the fragment shader is the (filtered) color of the texture’s (interpolated) texture coordinates.

public static int loadTexture(Context context, int resourceId) {
    final int[] textureIds = new int[1];
    // Create a texture object
    GLES30.glGenTextures(1, textureIds, 0);
    if (textureIds[0] == 0) {
        Log.e(TAG, "Could not generate a new OpenGL textureId object.");
        return 0;
    }
    final BitmapFactory.Options options = new BitmapFactory.Options();
    // Here we need to load the unscaled data of the original image
    options.inScaled = false;
    final Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId, options);
    if (bitmap == null) {
        Log.e(TAG, "Resource ID " + resourceId + " could not be decoded.");
        GLES30.glDeleteTextures(1, textureIds, 0);
        return 0;
    }
    // Bind textures to OpenGL
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, textureIds[0]);

    // Set the default texture filter parameters
    GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_MIN_FILTER, GLES30.GL_LINEAR_MIPMAP_LINEAR);
    GLES30.glTexParameteri(GLES30.GL_TEXTURE_2D, GLES30.GL_TEXTURE_MAG_FILTER, GLES30.GL_LINEAR);

    // Load the bitmap into the texture
    GLUtils.texImage2D(GLES30.GL_TEXTURE_2D, 0, bitmap, 0);

    // Generate MIP map
    GLES30.glGenerateMipmap(GLES30.GL_TEXTURE_2D);

    // If data is already loaded into OpenGL, the bitmap can be reclaimed
    bitmap.recycle();

    // Unbind the texture
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, 0);

    return textureIds[0];
}

Draw

@Override
public void onDrawFrame(GL10 gl) {
    GLES30.glClear(GLES30.GL_COLOR_BUFFER_BIT);

    // Use the shader program
    GLES30.glUseProgram(mProgram);

    GLES30.glUniformMatrix4fv(uMatrixLocation, 1, false, mMatrix, 0);

    GLES30.glEnableVertexAttribArray(0);
    GLES30.glVertexAttribPointer(0, 3, GLES30.GL_FLOAT, false, 0, vertexBuffer);

    GLES30.glEnableVertexAttribArray(1);
    GLES30.glVertexAttribPointer(1, 2, GLES30.GL_FLOAT, false, 0, mTexVertexBuffer);

    // Activate texture unit 0 and bind the texture to it
    GLES30.glActiveTexture(GLES30.GL_TEXTURE0);
    GLES30.glBindTexture(GLES30.GL_TEXTURE_2D, textureId);

    // Draw
    GLES30.glDrawElements(GLES30.GL_TRIANGLES, VERTEX_INDEX.length, GLES30.GL_UNSIGNED_SHORT, mVertexIndexBuffer);
}

Final presentation:

Source code