Objective of this article: starting from zero, learn WebGL and implement a WebGL image filter.

The main process

First of all, an image is essentially a rectangle, and we apply transformations to it. So our implementation proceeds as follows.

Below, we implement it in stages

Stage 1: Draw a square

In this step, our main goal is to understand the basic knowledge and concepts, learn and draw as you go.

Let’s start by looking at what WebGL’s entire drawing process looks like and how it differs from our everyday JS logic. Step by step: what is a shader program, and how do you create one?

In plain English, the shader program sets the positions to be drawn on the canvas and the colors to draw them with. The whole flow is: create the shaders, then bind them to the corresponding program context. Let’s look at the code.

```html
<canvas id="glCanvas" width="640" height="480">
    Your browser does not seem to support, or has disabled, the HTML5 <code>&lt;canvas&gt;</code> element.
</canvas>
```

```js
// GLSL source for the vertex shader
const vsSource = `
    attribute vec4 aVertexPosition;
    attribute vec4 aVertexColor;
    varying lowp vec4 vColor;

    void main() {
        gl_Position = aVertexPosition;
        vColor = aVertexColor;
    }
`;

// GLSL source for the fragment shader
const fsSource = `
    varying lowp vec4 vColor;

    void main() {
        gl_FragColor = vColor;
    }
`;

// Create a shader. gl is the context; type selects vertex or fragment shader
function createShader(gl, type, source) {
    const shader = gl.createShader(type);
    gl.shaderSource(shader, source);
    gl.compileShader(shader);
    return shader;
}

function initShaderProgram(gl, vsSource, fsSource) {
    const vshader = createShader(gl, gl.VERTEX_SHADER, vsSource);
    const fshader = createShader(gl, gl.FRAGMENT_SHADER, fsSource);
    const shaderProgram = gl.createProgram();
    // Attach both shaders to the program, then link
    gl.attachShader(shaderProgram, vshader);
    gl.attachShader(shaderProgram, fshader);
    gl.linkProgram(shaderProgram);
    return shaderProgram;
}

// Initialize the shader program
const shaderProgram = initShaderProgram(gl, vsSource, fsSource);
```
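One thing worth noting: the createShader above never reports compile failures, which makes GLSL typos hard to debug. Below is a sketch (the function name is ours) of the same helper with status checks added, using the standard gl.getShaderParameter / gl.getShaderInfoLog calls:

```javascript
// Variant of createShader that surfaces GLSL compile errors.
// Assumes `gl` is a WebGLRenderingContext obtained elsewhere.
function createShaderChecked(gl, type, source) {
    const shader = gl.createShader(type);
    gl.shaderSource(shader, source);
    gl.compileShader(shader);
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
        // Grab the compiler log before cleaning up, then fail loudly
        const log = gl.getShaderInfoLog(shader);
        gl.deleteShader(shader);
        throw new Error('Shader compile failed: ' + log);
    }
    return shader;
}
```

The same pattern applies after gl.linkProgram, with gl.getProgramParameter(program, gl.LINK_STATUS) and gl.getProgramInfoLog.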

There are probably a lot of questions at this point; we will answer them step by step. First, the most confusing part is likely the GLSL itself, so you need some basic GLSL syntax concepts.

Basic types

| Type | Description |
| --- | --- |
| void | Empty type; returns no value |
| bool | Boolean: true, false |
| int | Signed integer |
| float | Signed floating-point scalar |
| vec2, vec3, vec4 | N-component floating-point vector |
| bvec2, bvec3, bvec4 | N-component Boolean vector |
| ivec2, ivec3, ivec4 | N-component signed integer vector |
| mat2, mat3, mat4 | 2×2, 3×3, 4×4 float matrix |
| sampler2D | A 2D texture |
| samplerCube | A cube-mapped texture |

Common variable qualifiers

uniform: a variable passed from the external program into the shader; inside the shader it behaves like a constant.

attribute: a variable available only in the vertex shader, carrying per-vertex information.

varying: a variable that passes data from the vertex shader to the fragment shader.

Precision qualifier

  • Precision ranges

    • Floating-point ranges
      • highp: (-2^62, 2^62)
      • mediump: (-2^14, 2^14)
      • lowp: (-2, 2)
    • Integer ranges
      • highp: (-2^16, 2^16)
      • mediump: (-2^10, 2^10)
      • lowp: (-2^8, 2^8)
  • Specify default precision

    • precision declarations

      • Vertex shader predefined values (predefined means these are the defaults)

        ```glsl
        precision highp float;
        precision highp int;
        precision lowp sampler2D;
        precision lowp samplerCube;
        ```

      • Fragment shader predefined values

        ```glsl
        precision mediump int;
        precision lowp sampler2D;
        precision lowp samplerCube;
        ```

        As you can see, the fragment shader has no predefined precision for floating-point numbers. Drawing can easily fail because of a float whose precision was never declared, so it is best to set the float precision manually in your fragment shader code.
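In practice this means a fragment shader usually opens with an explicit declaration, for example:

```glsl
// Required in fragment shaders: float has no default precision there
precision mediump float;
```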

First, the buffered data is received via the attribute declarations aVertexPosition and aVertexColor (see the flow chart). The data is then passed from the vertex shader to the fragment shader via the varying vColor.

Next we define a main function: gl_Position (a built-in variable) describes the position to draw, and we assign the aVertexColor attribute to vColor so the fragment shader can use the data from the buffer. The fragment shader does the matching half: gl_FragColor (also a built-in) describes the color to draw.

After creating the shader program, let’s look at the process of initializing the buffer

Corresponding code implementation

```js
const positionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
// Vertex positions in TRIANGLE_FAN order:
// top-left, bottom-left, bottom-right, top-right
const vertices = [-1.0, 1.0, -1.0, -1.0, 1.0, -1.0, 1.0, 1.0];
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
```

Next is the drawing process; the flow chart is as follows. Let’s go to the code.

```js
{
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer.position);
    // 2 components per vertex (x, y), type FLOAT
    gl.vertexAttribPointer(
        gl.getAttribLocation(shaderProgram, 'aVertexPosition'),
        2, gl.FLOAT, false, 0, 0
    );
    gl.enableVertexAttribArray(
        gl.getAttribLocation(shaderProgram, 'aVertexPosition')
    );
}
{
    // Bind the color buffer and enable the color attribute
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer.color);
    gl.enableVertexAttribArray(
        gl.getAttribLocation(shaderProgram, 'aVertexColor')
    );
    gl.vertexAttribPointer(
        gl.getAttribLocation(shaderProgram, 'aVertexColor'),
        4, gl.FLOAT, false, 0, 0
    );
}
gl.useProgram(shaderProgram);
// Set the drawing mode and the number of vertices to draw
gl.drawArrays(gl.TRIANGLE_FAN, 0, 4);
```

At the end, gl.useProgram tells WebGL to render the current state with our shader program, and finally gl.drawArrays draws the vertices.

A word here about the drawing process: drawing is generally done with triangles, and the drawing modes differ.

Drawing mode

  • gl.TRIANGLES draws separate triangles: every three vertices form one triangle

  • gl.TRIANGLE_STRIP draws triangles with shared edges. Starting with the second triangle, it reads one new vertex at a time and forms a triangle with the last two vertices of the previous triangle. If a rectangle is drawn with four points, the first point should be diagonally opposite the last vertex

  • gl.TRIANGLE_FAN draws triangles with shared edges. Starting with the second triangle, it reads one new vertex at a time and forms a triangle with the previous vertex and the very first vertex

Here you can imagine the order of vertices when drawing a square in different drawing modes

Using only x, y coordinates, the square drawn as gl.TRIANGLE_STRIP would be [-1, 1, -1, -1, 1, 1, 1, -1], whereas for gl.TRIANGLE_FAN it would be [-1, 1, -1, -1, 1, -1, 1, 1]
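As a sanity check, here is a small sketch (plain JavaScript; the helper names are ours) that expands a strip ordering and a fan ordering into the triangles WebGL actually assembles:

```javascript
// Expand a flat [x0, y0, x1, y1, ...] array into [x, y] pairs
function toPoints(flat) {
    const pts = [];
    for (let i = 0; i < flat.length; i += 2) pts.push([flat[i], flat[i + 1]]);
    return pts;
}

// TRIANGLE_STRIP: triangle i uses vertices (i, i+1, i+2)
function stripTriangles(flat) {
    const p = toPoints(flat);
    const tris = [];
    for (let i = 0; i < p.length - 2; i++) tris.push([p[i], p[i + 1], p[i + 2]]);
    return tris;
}

// TRIANGLE_FAN: triangle i uses vertices (0, i+1, i+2)
function fanTriangles(flat) {
    const p = toPoints(flat);
    const tris = [];
    for (let i = 1; i < p.length - 1; i++) tris.push([p[0], p[i], p[i + 1]]);
    return tris;
}
```

Either ordering produces two triangles that together cover the square; only the vertex order differs.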

So we have our first square.

See -> for the complete code

Stage 2: Draw a rectangle with an image

Here we need to understand the concepts of textures and texture mapping. In plain English, we need a picture attached to the corresponding vertex positions; that is a texture map. Let’s look at the code.

```js
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);

gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texImage2D(
    gl.TEXTURE_2D,
    0,
    gl.RGBA,
    gl.RGBA,
    gl.UNSIGNED_BYTE,
    image
);

return texture;
```

The gl.texParameteri lines are what allow us to accommodate images of arbitrary sizes.

To put it simply, by default WebGL only fully supports images whose width and height are powers of two. To break out of that default, some compromises have to be made: the parameters above (clamping to the edge, non-mipmap filtering) are what make non-power-of-two images usable.
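A sketch of how you might branch on this at upload time (the helper names are ours; the bitwise check is a standard trick):

```javascript
// True when n is a positive power of two (e.g. 256, 512).
// A power of two has exactly one bit set, so n & (n - 1) is 0.
function isPowerOfTwo(n) {
    return n > 0 && (n & (n - 1)) === 0;
}

// Choose texture parameters based on the image dimensions.
// `gl` and `image` are assumed to exist as in the snippets above.
function configureTexture(gl, image) {
    if (isPowerOfTwo(image.width) && isPowerOfTwo(image.height)) {
        // Power-of-two images support mipmaps and all wrap modes
        gl.generateMipmap(gl.TEXTURE_2D);
    } else {
        // Non-power-of-two: no mipmaps, wrapping must clamp to the edge
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
        gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    }
}
```

The 640×480 canvas from earlier is itself a non-power-of-two size, which is why the clamping parameters matter here.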

Finally, upload the image. Note that the image must be fully loaded first.

See -> for the complete code

Now that we have completed the first two stages and the image is displayed, we are ready to implement a simple filter.

Stage 3: Filter processing

We started by asking: what is a filter, and how can one be implemented? In my understanding, a filter changes the colors of the picture, adjusting each pixel’s color value to improve the overall look.

So what are the implementation options? The simple everyday ones are linear or power computations per channel; slightly more complicated are convolution computations (where the linear algebra starts to hurt). But the one we are most familiar with is the LUT (Look-Up Table).

The idea, abstractly, is: take the color value of each pixel, look it up in the table, and replace the current color with the converted value. For details, see Zhang Xinxu’s article. This is the plan we settled on!

Without further ado, work began.

```js
const vsSource = `
    attribute vec4 aVertexPosition;
    attribute vec2 aTexCoord;
    varying vec2 vTexcoord;

    void main() {
        gl_Position = aVertexPosition;
        vTexcoord = vec2(aTexCoord.x, 1.0 - aTexCoord.y);
    }
`;

const fsSource = `
    // float precision must be declared manually:
    // the fragment shader has no default for it
    precision lowp float;
    varying vec2 vTexcoord;
    uniform sampler2D uSample;
    uniform sampler2D uSample1;
    uniform lowp float intensity;

    void main() {
        lowp vec4 textureColor = texture2D(uSample, vTexcoord);
        lowp float blueColor = textureColor.b * 63.0;

        lowp vec2 quad1;
        quad1.y = floor(floor(blueColor) / 8.0);
        quad1.x = floor(blueColor) - (quad1.y * 8.0);

        lowp vec2 quad2;
        quad2.y = floor(ceil(blueColor) / 8.0);
        quad2.x = ceil(blueColor) - (quad2.y * 8.0);

        lowp vec2 texPos1;
        texPos1.x = (quad1.x * 0.125) + 0.5 / 512.0 + ((0.125 - 1.0 / 512.0) * textureColor.r);
        texPos1.y = (quad1.y * 0.125) + 0.5 / 512.0 + ((0.125 - 1.0 / 512.0) * textureColor.g);

        lowp vec2 texPos2;
        texPos2.x = (quad2.x * 0.125) + 0.5 / 512.0 + ((0.125 - 1.0 / 512.0) * textureColor.r);
        texPos2.y = (quad2.y * 0.125) + 0.5 / 512.0 + ((0.125 - 1.0 / 512.0) * textureColor.g);

        lowp vec4 newColor1 = texture2D(uSample1, texPos1);
        lowp vec4 newColor2 = texture2D(uSample1, texPos2);
        lowp vec4 newColor = mix(newColor1, newColor2, fract(blueColor));

        gl_FragColor = mix(textureColor, vec4(newColor.rgb, textureColor.w), 1.0);
    }
`;
```

The original author only gave this part of the code; it mainly implements the LUT algorithm (look up the transformed color). There are other fairly mature filter algorithms out there as well.
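To demystify the coordinate math in the fragment shader, here is the same lookup computation mirrored in plain JavaScript (a sketch; the function names are ours, and we assume the common 512×512 LUT image laid out as an 8×8 grid of 64×64 tiles, one tile per quantized blue level):

```javascript
// Given a pixel's r, g, b in [0, 1], compute the two LUT texture
// coordinates the shader samples, mirroring the GLSL line by line.
function lutCoords(r, g, b) {
    const blue = b * 63.0; // quantize blue into 64 levels

    // Tile containing the floor of the blue level (8 tiles per row)
    const q1y = Math.floor(Math.floor(blue) / 8.0);
    const q1x = Math.floor(blue) - q1y * 8.0;

    // Tile containing the ceil of the blue level
    const q2y = Math.floor(Math.ceil(blue) / 8.0);
    const q2x = Math.ceil(blue) - q2y * 8.0;

    // Inside each 1/8-wide tile, r and g select the texel;
    // the 0.5/512 offset samples texel centers to avoid tile bleeding
    const texel = (qx, qy) => [
        qx * 0.125 + 0.5 / 512.0 + (0.125 - 1.0 / 512.0) * r,
        qy * 0.125 + 0.5 / 512.0 + (0.125 - 1.0 / 512.0) * g,
    ];

    return {
        pos1: texel(q1x, q1y),
        pos2: texel(q2x, q2y),
        mixT: blue - Math.floor(blue), // fract(blueColor) in GLSL
    };
}
```

The shader then samples the LUT at both positions and blends the results with mix(..., fract(blueColor)), interpolating between the two nearest blue tiles.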

As for the lookup table itself: it is just another texture (a picture). Once we fetch the RGB at the corresponding position, we can mix it with the original picture. The table is passed into the program from an external variable as a uniform. The code goes after gl.useProgram and before gl.drawArrays.

```js
...
gl.useProgram(shaderProgram);

const textureUnitIndex = 1;
const uImageLoc = gl.getUniformLocation(shaderProgram, 'uSample1');
const filterTexture = gl.createTexture();
gl.activeTexture(gl.TEXTURE0 + textureUnitIndex);
gl.bindTexture(gl.TEXTURE_2D, filterTexture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
// Upload the lookup-table image
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, filter);
// Point the uSample1 uniform at this texture unit
gl.uniform1i(uImageLoc, textureUnitIndex);

gl.drawArrays(gl.TRIANGLE_FAN, 0, 4);
```

Look at the effect (the model is my dearest nephew).

Hey hey, which piece is the most good-looking I want to take credit 😄

Complete code with lookup table ->

Conclusion

Generally speaking, the goal has basically been achieved. From the learning process, a few takeaways:

  1. Most of the work goes into learning WebGL’s way of thinking about drawing. Only by understanding that mental model can you make sense of each step.
  2. The linear algebra is a little hard. Along the way I also had to understand the various rotations; there are plenty of libraries now, but to really understand them I worked a few rotations out by hand. I filled an entire draft pad; classmates who didn’t know probably thought I was preparing for the postgraduate entrance exam.

Recommended learning materials:

webglfundamentals.org/webgl/lesso…

learnopengl-cn.readthedocs.io/useful/latest/0…