Portal:

  1. Overview of WebGL – Principles
  2. Drawing points and triangles
  3. Drawing points and triangles (Advanced)
  4. WebGL Practice Part 3 — Drawing pictures

Preface

(Image-heavy post ahead! If you’re on a metered data plan, proceed with caution.)

Picking up the question from the previous section: why does the image below come out distorted, and how can we draw it without distortion?


The answer is simple: the aspect ratio of the rectangle does not match the aspect ratio of the image itself. The question is: how do we keep the picture’s original proportions?

Option 1:

Some readers might suggest: when setting the vertex positions, keep the rectangle’s aspect ratio the same as the image’s aspect ratio.

This is a good idea, so let’s try it. The image is 533×300, an aspect ratio of 533/300 ≈ 1.78. If we keep the full clip-space width of 2 (from -1 to 1), the height should be about 2 / 1.78 ≈ 1.125, so we need to modify our vertex data to:

const pointPos = [
    -1, 0.5625,
    -1, -0.5625,
    1, -0.5625,
    1, -0.5625,
    1, 0.5625,
    -1, 0.5625,
];

OK, the result is exactly what we wanted.

But wait! Our canvas here happens to be 500×500. What if it isn’t? Let’s change the canvas size to 480×270 and see what happens.

Oh no! Why is it distorted again?! This means the result depends not only on the aspect ratio of the vertex data but also on the aspect ratio of the canvas. By now your head is probably spinning. Is there an easier way to solve this problem?

Option 2:

The answer is yes. Notice that so far we have always used numbers between -1 and 1 to describe coordinates. Could we use actual pixel coordinates for our vertex data instead? For example, on a 500×500 canvas, could we use (300, 400) directly as the position of a point?

We certainly can, and that is part of what we’re going to talk about today: coordinate transformations.

Coordinate transformation

Let’s restate the problem: we need to convert coordinates from one range to another. In WebGL, on the x-axis we map values between the left boundary l and the right boundary r into the range -1 to 1, and the same applies to the y-axis.

We can derive the mapping with a simple chain of inequalities. For an x coordinate in the range l ≤ x ≤ r:

    l ≤ x ≤ r
    0 ≤ x - l ≤ r - l
    0 ≤ (x - l) / (r - l) ≤ 1
    0 ≤ 2 * (x - l) / (r - l) ≤ 2
    -1 ≤ 2 * (x - l) / (r - l) - 1 ≤ 1

So x' = 2 * (x - l) / (r - l) - 1 = 2x / (r - l) - (r + l) / (r - l).

In the same way, for a y coordinate in the range b ≤ y ≤ t, we get y' = 2y / (t - b) - (t + b) / (t - b).

Now that we have these two formulas, we can do the coordinate transformation, mapping the canvas coordinates (for example 0 to 480 on the x-axis) to -1 to 1. Let’s modify the data:


let pointPos = [
    0, 0, 533, 0, 533, 300, 533, 300, 0, 300, 0, 0
];
const texCoordPos = [
    0, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 0
];

Then write a function that performs the coordinate transformation:


const convert = (l, r) => {
    return function (coordinate) {
        return 2 * coordinate / (r - l) - (r + l) / (r - l);
    };
};

const convertX = convert(0, canvas.width);
const convertY = convert(0, canvas.height);

for (let i = 0; i < pointPos.length; i += 2) {
    pointPos[i] = convertX(pointPos[i]);
    pointPos[i + 1] = convertY(pointPos[i + 1]);
}

This way we can describe the image’s position in real pixel coordinates. But doing the conversion in JS feels a little clumsy. Could we do the coordinate transformation in the vertex shader instead? The answer is yes: we can use matrix-vector multiplication.

In the vertex shader, the position of a point is represented by a 4-dimensional vector, so we can transform it with a 4×4 matrix. Before we do that, let’s quickly cover the basics of matrices and matrix multiplication.

Matrices

What is a matrix

An M × N matrix is a rectangular array of elements with M rows and N columns, for example:

    | a11 a12 ... a1N |
    | a21 a22 ... a2N |
    | ... ... ... ... |
    | aM1 aM2 ... aMN |

Matrix multiplication

Two matrices M and N can only be multiplied when the number of columns of M equals the number of rows of N. Each element of the product is the dot product of a row of M with a column of N:

    (M * N)[i][j] = M[i][1] * N[1][j] + M[i][2] * N[2][j] + ... + M[i][k] * N[k][j]

So multiplying a 4×4 transformation matrix by a point (a 4-dimensional column vector) yields:

    | m11 m12 m13 m14 |   | x |   | m11*x + m12*y + m13*z + m14*w |
    | m21 m22 m23 m24 |   | y |   | m21*x + m22*y + m23*z + m24*w |
    | m31 m32 m33 m34 | * | z | = | m31*x + m32*y + m33*z + m34*w |
    | m41 m42 m43 m44 |   | w |   | m41*x + m42*y + m43*z + m44*w |

Each row of the result is one component of the new x, y, z and w. The x, y and z parts are easy to understand; w is the extra term from homogeneous coordinates. A coordinate with a w value can be understood as:

(x, y, z, w) <==> (x / w, y / w, z / w)
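To make the row-times-column rule concrete, here is a minimal sketch in plain JS (illustration only, not part of the WebGL demo code) that multiplies two matrices stored as 2D arrays:

function multiplyMat(M, N) {
    const rows = M.length;       // number of rows of M
    const cols = N[0].length;    // number of columns of N
    const inner = N.length;      // number of rows of N, must equal the number of columns of M
    const C = Array.from({ length: rows }, () => new Array(cols).fill(0));
    for (let i = 0; i < rows; i++) {
        for (let j = 0; j < cols; j++) {
            for (let k = 0; k < inner; k++) {
                C[i][j] += M[i][k] * N[k][j];
            }
        }
    }
    return C;
}

// Example: a 2×2 matrix times a 2×1 column vector
multiplyMat([[1, 2], [3, 4]], [[5], [6]]); // [[17], [39]]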

Turning the two mapping formulas from earlier into this matrix form gives us the coordinate transformation matrix (extended to x, y and z), and we can write a function to generate it:

// l, r, t, b, n, f are the left, right, top, bottom, near and far boundaries.
// The matrix is written in column-major order, as expected by gl.uniformMatrix4fv.
export function createProjectionMat(l, r, t, b, n, f) {
    return [
        2 / (r - l), 0, 0, 0,
        0, 2 / (t - b), 0, 0,
        0, 0, 2 / (f - n), 0,
        -(r + l) / (r - l), -(t + b) / (t - b), -(f + n) / (f - n), 1
    ];
}
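As a quick sanity check, here is a minimal sketch (assuming a 533×300 drawing area; the helper applyMat4 is not part of the original demo) that applies the column-major matrix to a point by hand and confirms the corners land on -1 and 1:

// Multiply the column-major 4×4 matrix by a point (x, y, z, w)
function applyMat4(mat, [x, y, z = 0, w = 1]) {
    return [
        mat[0] * x + mat[4] * y + mat[8] * z + mat[12] * w,
        mat[1] * x + mat[5] * y + mat[9] * z + mat[13] * w,
        mat[2] * x + mat[6] * y + mat[10] * z + mat[14] * w,
        mat[3] * x + mat[7] * y + mat[11] * z + mat[15] * w,
    ];
}

const mat = createProjectionMat(0, 533, 300, 0, 0, 1);
applyMat4(mat, [0, 0]);     // [-1, -1, -1, 1]
applyMat4(mat, [533, 300]); // [1, 1, -1, 1]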

Next, we apply the transformation matrix in the shader and modify our shader program as follows:

const vertexShader = `
    attribute vec4 a_position;
    attribute vec2 a_texCoord;
    uniform mat4 u_projection;
    varying vec2 v_texCoord;
    void main () {
        gl_Position = u_projection * a_position;
        v_texCoord = a_texCoord;
    }
`;

There is now an extra uniform variable in the shader, so we need to pass the matrix in from JS. We can use the gl.uniformMatrix4fv API:

// Get the location of u_projection in the shader
const u_projection = gl.getUniformLocation(program, 'u_projection');
// Generate the coordinate transformation matrix
const projectionMat = createProjectionMat(0, width, height, 0, 0, 1);
// Pass in the data
gl.uniformMatrix4fv(u_projection, false, projectionMat);

Ok, done!

Now that we’re done with coordinate transformations, we’re going to move on to affine transformations.

Affine transformation

What is an affine transformation

Simply put, an affine transformation is a linear transformation plus a translation. We can use affine transformations to translate, scale, rotate, and so on. But why is the translation called out separately? Don’t worry, we’ll explain that shortly. First, let’s look at the linear transformations.
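To make the definition concrete, here is a minimal sketch (plain 2D arrays, illustration only) of “a linear transformation plus a translation”, i.e. p' = A * p + t:

// Apply a 2×2 linear transformation A to a point p, then add a translation t
function affine2D(A, t, p) {
    const [[a, b], [c, d]] = A;
    const [tx, ty] = t;
    const [x, y] = p;
    return [a * x + b * y + tx, c * x + d * y + ty];
}

// Example: scale the point (3, 4) by 2, then translate it by (10, 5)
affine2D([[2, 0], [0, 2]], [10, 5], [3, 4]); // [16, 13]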

Scaling

Scaling should be easy to understand: we simply multiply each coordinate by a factor. In algebraic form:

    x' = sx * x
    y' = sy * y

And in matrix form:

    | x' |   | sx  0 |   | x |
    | y' | = |  0 sy | * | y |

Rotation

It’s not so obvious why rotation is represented the way it is, so here is a quick derivation. Consider a point (x, y) at distance r from the origin, at angle α from the x-axis:

    x = r * cos(α)
    y = r * sin(α)

Rotate it counter-clockwise by θ and the new point (x', y') sits at angle α + θ:

    x' = r * cos(α + θ) = r * cos(α)cos(θ) - r * sin(α)sin(θ) = x * cos(θ) - y * sin(θ)
    y' = r * sin(α + θ) = r * sin(α)cos(θ) + r * cos(α)sin(θ) = x * sin(θ) + y * cos(θ)

In matrix form:

    | x' |   | cos(θ) -sin(θ) |   | x |
    | y' | = | sin(θ)  cos(θ) | * | y |

At this point, the derivation is complete.

Translation

Translation is even simpler: we just add an offset to each coordinate, x' = x + tx and y' = y + ty. But notice that for 2-dimensional coordinates, this addition cannot be expressed as a 2×2 matrix multiplied by a 2-dimensional vector.

So, homogeneous coordinates came into being.

Homogeneous coordinates

Any point on the plane can be represented by a triple (X, Y, W), called the homogeneous coordinates of that point.

If W is not 0, this triple represents the point (X / W, Y / W) in the Euclidean plane.

So, with homogeneous coordinates, translation can be written as a matrix multiplication:

    | x' |   | 1 0 tx |   | x |   | x + tx |
    | y' | = | 0 1 ty | * | y | = | y + ty |
    | 1  |   | 0 0  1 |   | 1 |   | 1      |

Similarly, the scaling and rotation matrices from before can be rewritten in homogeneous form:

    | sx  0  0 |        | cos(θ) -sin(θ)  0 |
    |  0 sy  0 |        | sin(θ)  cos(θ)  0 |
    |  0  0  1 |        |      0       0  1 |
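Here is a quick sketch (row-major 3×3 arrays, illustration only) confirming that multiplying by the homogeneous translation matrix really just adds (tx, ty) to a point:

function translatePoint(tx, ty, [x, y]) {
    const m = [
        [1, 0, tx],
        [0, 1, ty],
        [0, 0, 1],
    ];
    const p = [x, y, 1];
    return m.map(row => row[0] * p[0] + row[1] * p[1] + row[2] * p[2]);
}

translatePoint(10, 5, [3, 4]); // [13, 9, 1], i.e. the point (13, 9)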

One thing to note here:

Since matrix multiplication is not commutative, the order in which scaling, translation and rotation are applied matters. We can multiply the same set of matrices in different orders and compare what happens:

Even though exactly the same scaling, translation and rotation matrices are involved, changing the order of multiplication produces completely different results, so we need to pay attention to the order when we use them.
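A minimal sketch (illustration only, not from the demo code) of this effect on a single point: translating first and then rotating by 90° lands somewhere different from rotating first and then translating:

const theta = Math.PI / 2; // 90 degrees
const rotate = ([x, y]) => [
    Math.cos(theta) * x - Math.sin(theta) * y,
    Math.sin(theta) * x + Math.cos(theta) * y,
];
const translate = ([x, y]) => [x + 1, y];

const p = [1, 0];
rotate(translate(p));    // translate first, then rotate: ≈ [0, 2]
translate(rotate(p));    // rotate first, then translate: ≈ [1, 1]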

Now we’ll rewrite the program so that it can translate, rotate and scale.

Let’s start by writing the functions that create the transformation matrices:


export function createTranslateMat(tx, ty) {
    return [
        1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        tx, ty, 0, 1
    ];
}

export function createRotateMat(rotate) {
    rotate = rotate * Math.PI / 180;
    const cos = Math.cos(rotate);
    const sin = Math.sin(rotate);
    return [
        cos, sin, 0, 0,
        -sin, cos, 0, 0,
        0, 0, 1, 0,
        0, 0, 0, 1
    ];
}

export function createScaleMat(sx, sy) {
    return [
        sx, 0, 0, 0,
        0, sy, 0, 0,
        0, 0, 1, 0,
        0, 0, 0, 1
    ];
}

Modify the shader as follows:

const vertexShader = `
    attribute vec4 a_position;
    attribute vec2 a_texCoord;
    uniform mat4 u_projection;
    uniform mat4 u_rotate;
    uniform mat4 u_scale;
    uniform mat4 u_translate;
    varying vec2 v_texCoord;
    void main () {
        gl_Position = u_projection * u_translate * u_rotate * u_scale * a_position;
        v_texCoord = a_texCoord;
    }
`;

// Get the locations of the new transformation matrices in the shader
const u_translate = gl.getUniformLocation(program, 'u_translate');
const u_scale = gl.getUniformLocation(program, 'u_scale');
const u_rotate = gl.getUniformLocation(program, 'u_rotate');

// Create the transformation matrices
let translateMat = createTranslateMat(0, 0);
let rotateMat = createRotateMat(0);
let scaleMat = createScaleMat(1, 1);

// Pass in the data
gl.uniformMatrix4fv(u_translate, false, translateMat);
gl.uniformMatrix4fv(u_rotate, false, rotateMat);
gl.uniformMatrix4fv(u_scale, false, scaleMat);

By adding some UI controls, we can easily control the position, rotation and scale of the image.
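As a rough sketch of that wiring (assuming three <input type="range"> elements with ids tx, angle and scale exist on the page; they are not part of the demo code above), each change rebuilds the matrices, re-uploads the uniforms and redraws:

function redraw() {
    const tx = Number(document.getElementById('tx').value);
    const angle = Number(document.getElementById('angle').value);
    const s = Number(document.getElementById('scale').value);

    gl.uniformMatrix4fv(u_translate, false, createTranslateMat(tx, 0));
    gl.uniformMatrix4fv(u_rotate, false, createRotateMat(angle));
    gl.uniformMatrix4fv(u_scale, false, createScaleMat(s, s));

    gl.drawArrays(gl.TRIANGLES, 0, 6);
}

['tx', 'angle', 'scale'].forEach(id => {
    document.getElementById(id).addEventListener('input', redraw);
});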

Conclusion

At this point, you have learned how to manipulate the position of an image using affine transformations. Here’s a quick recap of what we did today:

  1. We showed how to convert actual coordinates into the WebGL coordinate space, i.e. the range -1 to 1 (coordinate system transformation).
  2. We talked about matrices, and you should now know how to use matrices to represent transformations.
  3. We talked about affine transformations: an affine transformation is a linear transformation plus a translation.
    • Linear transformations include rotation and scaling.
    • In order to represent translation with a matrix, we introduced homogeneous coordinates, which let us express both linear transformations and translation as matrix multiplications.

Well, that’s it for today. Next time we’ll talk a little more about image processing. If you found this useful, give it a thumbs up!