Portal:
- Overview of WebGL – Principles
- Drawing points and triangles
- Drawing points and triangles (Advanced)
- WebGL Practice Part 3 — Drawing pictures
- Affine transformation
Preface
China’s Photoshop and beauty-camera retouching, jokingly ranked among the world’s four great magic arts, owes its power to image processing. Today we will take a brief look at how images are processed. Some common effects are:
- Graying (that is, turning a color image into black and white)
- Sharpening
- Blurring
- Toning (hue, saturation, contrast, color temperature, etc.)
- Other LUT effects (simply put, the filters found in common photo-editing apps, such as “Summer Sonata” or “Cyberpunk”)
In this article we will look at how to do image processing with both Canvas2D and WebGL.
Digital image processing, at its core, means performing some operation on each pixel of an image and writing the result back to the image data. This is known as image processing in the spatial domain.
There is also frequency-domain image processing: we transform an image from the spatial domain to the frequency domain with a Fourier transform, operate in the frequency domain (for example, filtering out signals in a particular frequency range), and then transform back to the spatial domain. Because frequency-domain processing is more complex, this article will not cover it.
Let’s do it
Black and white
Let’s start with a simple effect: graying, which turns a color image into a black-and-white one.
Principle
Let’s think about how a gray value is represented in RGB. Back in the black-and-white TV era, a single component indicated the brightness of a pixel; generalized to the three RGB channels, a color is gray when its R, G, and B components are equal. The larger the value, the closer the gray is to white; the smaller, the closer to black. There are several ways to make the RGB components equal:
- Set two of the channels to the value of the third, e.g. set G and B to the value of R, or R and B to the value of G
- Take the average of R, G, and B
We can write the following fragment shader:
```glsl
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D u_texture;
void main () {
  vec4 color = texture2D(u_texture, v_texCoord);
  gl_FragColor = vec4(vec3(dot(color.rgb, vec3(0.333, 0.333, 0.333))), color.a);
}
```
`dot(color.rgb, vec3(0.333, 0.333, 0.333))` computes the dot product of two vectors, i.e. `color.r * 0.333 + color.g * 0.333 + color.b * 0.333`, where 0.333 is roughly 1/3. Why not simply write `(color.r + color.g + color.b) / 3.0`? Because the graphics card accelerates GLSL built-in functions in hardware, so they run faster. We therefore use GLSL built-ins wherever possible.
Besides WebGL, we can also do image processing with Canvas2D. The general process is:
- Draw the image onto the canvas
- Get the canvas’s pixel values with getImageData
- Traverse all pixels and process them
- Draw the processed data back onto the canvas with putImageData
The general structure of the code is as follows:
```js
const imageData = ctx.getImageData(0, 0, width, height);
for (let y = 0; y < height; y++) {
  for (let x = 0; x < width; x++) {
    const idx = (y * width + x) * 4; // 4 bytes per pixel: RGBA
    const r = imageData.data[idx];
    const g = imageData.data[idx + 1];
    const b = imageData.data[idx + 2];
    const avg = (r + g + b) / 3;
    imageData.data[idx] = avg;
    imageData.data[idx + 1] = avg;
    imageData.data[idx + 2] = avg;
  }
}
ctx.putImageData(imageData, 0, 0);
```
The effects are as follows:
Toning
Common color filters adjust hue, saturation, and contrast; in CSS these correspond to the filter functions hue-rotate(), saturate(), and contrast().
What is hue? Hue is the primary attribute of a color, and the most precise way to distinguish different colors. Hue determines a color’s base color, as shown below.
Hue rotation means changing a color’s base color without changing its saturation or lightness. Let’s implement CSS’s hue-rotate filter ourselves.
Consulting the W3C standard (Filter Effects Module Level 1), we can find how each filter is defined; for hue rotation it reads:
For type=”hueRotate”, values is a single one real number value (degrees). A hueRotate operation is equivalent to the following matrix operation:
where the terms a00, a01, etc. are computed from the rotation angle as follows (cos and sin stand for cos(hueRotate) and sin(hueRotate)):

```
a00 = 0.213 + cos * 0.787 - sin * 0.213
a01 = 0.715 - cos * 0.715 - sin * 0.715
a02 = 0.072 - cos * 0.072 + sin * 0.928
a10 = 0.213 - cos * 0.213 + sin * 0.143
a11 = 0.715 + cos * 0.285 + sin * 0.140
a12 = 0.072 - cos * 0.072 - sin * 0.283
a20 = 0.213 - cos * 0.213 - sin * 0.787
a21 = 0.715 - cos * 0.715 + sin * 0.715
a22 = 0.072 + cos * 0.928 + sin * 0.072
```
The new RGBA values follow from the rules of matrix multiplication:

```
R' = a00 * R + a01 * G + a02 * B + 0 * A + 0 * 1
G' = a10 * R + a11 * G + a12 * B + 0 * A + 0 * 1
B' = a20 * R + a21 * G + a22 * B + 0 * A + 0 * 1
A' = 0   * R + 0   * G + 0   * B + 1 * A + 0 * 1
```
Following these rules, we can write our image processing shader:
```glsl
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D u_texture;
uniform mat4 hueMat;
void main () {
  vec4 color = texture2D(u_texture, v_texCoord);
  gl_FragColor = hueMat * color;
}
```
We speed up the calculation by building the hue-rotation matrix in JS and passing it to the GPU, taking advantage of the GPU’s ability to compute matrix multiplication. (In Overview of WebGL – Principles I explained how to pass parameters to the GPU.)
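A minimal sketch of that hand-off, assuming a `program` linked from the shader above and set up as in the earlier articles:

```js
// Build the matrix on the CPU, upload it once; the GPU then applies it per pixel.
const hueMat = createHueRotateMatrix(200); // defined just below
const hueMatLocation = gl.getUniformLocation(program, "hueMat");
gl.uniformMatrix4fv(hueMatLocation, false, hueMat); // no transpose: data is already column-major
```

The matrix itself is built as follows: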
```js
export function createHueRotateMatrix(value) {
  const sin = Math.sin((value * Math.PI) / 180);
  const cos = Math.cos((value * Math.PI) / 180);
  // Column-major mat4, as expected by uniformMatrix4fv
  return new Float32Array([
    0.213 + cos * 0.787 - sin * 0.213,
    0.213 - cos * 0.213 + sin * 0.143,
    0.213 - cos * 0.213 - sin * 0.787,
    0.0,
    0.715 - cos * 0.715 - sin * 0.715,
    0.715 + cos * 0.285 + sin * 0.14,
    0.715 - cos * 0.715 + sin * 0.715,
    0.0,
    0.072 - cos * 0.072 + sin * 0.928,
    0.072 - cos * 0.072 - sin * 0.283,
    0.072 + cos * 0.928 + sin * 0.072,
    0.0,
    0.0, 0.0, 0.0, 1.0,
  ]);
}
```
We do the same thing in Canvas2D, except we need to do the matrix multiplication manually.
```js
const hueMat = createHueRotateMatrix(200);
for (let y = 0; y < height; y++) {
  for (let x = 0; x < width; x++) {
    const idx = (y * width + x) * 4;
    const r = imageData.data[idx];
    const g = imageData.data[idx + 1];
    const b = imageData.data[idx + 2];
    // Manual matrix * vector multiply (column-major layout, see createHueRotateMatrix)
    imageData.data[idx] = hueMat[0] * r + hueMat[4] * g + hueMat[8] * b;
    imageData.data[idx + 1] = hueMat[1] * r + hueMat[5] * g + hueMat[9] * b;
    imageData.data[idx + 2] = hueMat[2] * r + hueMat[6] * g + hueMat[10] * b;
  }
}
```
Here is the effect of a hue rotation of 200 degrees:
You can also try implementing the saturation and contrast adjustment algorithms yourself; a starting sketch for saturation is given below. So far these have all been fairly simple effects, each achievable in a single processing pass. Next, let’s try something more complex: the blur effect.
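As a starting point for that exercise, here is a minimal sketch of a saturation matrix. It uses the “saturate” color matrix from the same W3C Filter Effects specification, laid out column-major like createHueRotateMatrix above; the function name is our own:

```js
// Sketch: the W3C "saturate" color matrix, column-major like createHueRotateMatrix.
// value = 1 leaves the image unchanged; value = 0 is fully desaturated.
export function createSaturateMatrix(value) {
  const s = value;
  return new Float32Array([
    0.213 + 0.787 * s, 0.213 - 0.213 * s, 0.213 - 0.213 * s, 0.0,
    0.715 - 0.715 * s, 0.715 + 0.285 * s, 0.715 - 0.715 * s, 0.0,
    0.072 - 0.072 * s, 0.072 - 0.072 * s, 0.072 + 0.928 * s, 0.0,
    0.0, 0.0, 0.0, 1.0,
  ]);
}
```

Passed to the same shader in place of hueMat, it should saturate (value > 1) or desaturate (value < 1) the image. Contrast, by comparison, is defined in CSS as a linear per-channel function with an offset rather than a pure matrix, so it needs a slightly different shader; we leave that one to the reader.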
Blur
Principle
The opposite of blurry is sharp, so how do we judge whether an image is sharp? We look at whether its lines and edges (say, the boundary between hair and background) are clear. At the pixel level, an edge forms where two adjacent pixels differ greatly in value. If we average those two pixel values and assign the result back, the edge weakens. This is the theoretical basis of blurring: average the pixels surrounding a pixel and assign the average back to that pixel.
As shown below:
We average around the middle pixel, and the average is (1 + 2 + 3 + 4 + 5 + 6 + 7 + 8 + 9) / 9 = 5. (The example is a little contrived, since the center pixel is already 5 and does not change, but the process is the same.)
Based on this principle, we can write the following image processing program:
```glsl
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D u_texture;
uniform vec2 u_resolution;
void main () {
  vec2 one_pixel = 1. / u_resolution;
  vec4 color = texture2D(u_texture, v_texCoord);
  // Sample the four direct neighbors (left, right, top, bottom)
  vec4 l = texture2D(u_texture, v_texCoord + vec2(-1.0, 0.0) * one_pixel);
  vec4 r = texture2D(u_texture, v_texCoord + vec2(1.0, 0.0) * one_pixel);
  vec4 t = texture2D(u_texture, v_texCoord + vec2(0.0, 1.0) * one_pixel);
  vec4 b = texture2D(u_texture, v_texCoord + vec2(0.0, -1.0) * one_pixel);
  gl_FragColor = vec4((l + r + t + b + color).rgb / 5., color.a);
}
```
After running the blur above once, we find a problem:
a single processing pass produces only a faint, barely visible blur.
To strengthen the effect, we have to run the image through the blur several times. At this point, we need something called a framebuffer.
Framebuffer
By default, WebGL draws into the color buffer; that is, the rendered image is stored there. A framebuffer object can be used in place of the color buffer or the depth buffer (we will not consider depth buffers here). Drawing into a framebuffer is also known as off-screen rendering.
But the drawing does not happen directly in the framebuffer itself; it happens on objects attached to it. A framebuffer has three attachment points: a color attachment, a depth attachment, and a stencil attachment, which stand in for the color buffer, depth buffer, and stencil buffer respectively. What we need here is the color attachment.
So all we need to do now is create a framebuffer and attach a texture to it as the color attachment. The code is as follows:
```js
export function createFramebufferTexture(gl, number, width, height) {
  const framebuffers = [];
  const textures = [];
  for (let i = 0; i < number; i++) {
    const framebuffer = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
    // The texture here is the color attachment, which replaces the color buffer
    const texture = createTexture(gl);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
    // Attach the texture to the framebuffer
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);
    texture && textures.push(texture);
    framebuffer && framebuffers.push(framebuffer);
  }
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return [framebuffers, textures];
}
```
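For the ping-pong drawing below, we might create two framebuffer/texture pairs like this (a usage sketch; the sizes are assumed to match the canvas being drawn to):

```js
// Two framebuffer/texture pairs to alternate between during repeated blurring
const [framebuffers, textures] = createFramebufferTexture(
  gl,
  2,
  canvas.width,
  canvas.height
);
```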
So here we create two framebuffers and play ping-pong between them: process into buffer A, use A’s result as input and process into buffer B, then back into A, and so on. You might ask: can we do it all in a single buffer? The answer is no. WebGL does not allow a texture to serve as both the input and the output of the same draw, so we must use at least two framebuffers.
```js
function drawByShader() {
  // Use the original texture as the first input
  gl.bindTexture(gl.TEXTURE_2D, texture);
  for (let i = 0; i < 10; i++) {
    // Render into one of the two framebuffers
    gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffers[i % 2]);
    // Draw to the texture attached to the current framebuffer
    gl.drawArrays(gl.TRIANGLES, 0, 6);
    // Bind that texture as the input for the next pass
    gl.bindTexture(gl.TEXTURE_2D, textures[i % 2]);
  }
  // Binding null renders to the color buffer (i.e. the screen)
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.drawArrays(gl.TRIANGLES, 0, 6);
}
```
So let’s look at the effect again:
After 10 passes we get a decent blur, and increasing the number of passes makes the image blurrier still. In Canvas2D, we can write the following code:
```js
function drawImage(img) {
  ctx.drawImage(img, 0, 0);
  const width = canvas2.width;
  const height = canvas2.height;
  const imageData = ctx.getImageData(0, 0, width, height);
  // Run the blur 20 times
  for (let i = 0; i < 20; i++) {
    processImageData(imageData, (x, y, texture2D) => {
      const color = texture2D(x, y);
      const l = texture2D(x - 1, y);
      const r = texture2D(x + 1, y);
      const t = texture2D(x, y - 1);
      const b = texture2D(x, y + 1);
      // Sum the five samples channel by channel, then average
      const sum = [color, l, r, t, b].reduce(
        (prev, cur) => [
          prev[0] + cur[0],
          prev[1] + cur[1],
          prev[2] + cur[2],
          prev[3] + cur[3],
        ],
        [0, 0, 0, 0]
      );
      return sum.map(item => item / 5);
    });
  }
  ctx.putImageData(imageData, 0, 0);
}
```
```js
function processImageData(
  imageData,
  processFunc = (x, y, texture2D) => {
    return texture2D(x, y);
  }
) {
  const width = imageData.width;
  const height = imageData.height;
  // Sample a pixel, clamping coordinates to the image edges
  const texture2D = (x, y) => {
    if (x < 0) x = 0;
    if (x >= width) x = width - 1;
    if (y < 0) y = 0;
    if (y >= height) y = height - 1;
    const idx = (y * width + x) * 4;
    return [
      imageData.data[idx],
      imageData.data[idx + 1],
      imageData.data[idx + 2],
      imageData.data[idx + 3],
    ];
  };
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const idx = (y * width + x) * 4;
      [
        imageData.data[idx],
        imageData.data[idx + 1],
        imageData.data[idx + 2],
        imageData.data[idx + 3],
      ] = processFunc(x, y, texture2D);
    }
  }
}
```
We can use this property of the framebuffer to process an image multiple times and achieve some impressive effects, such as the common Bloom effect:
Bloom is a common in-game effect that makes brighter areas of the screen appear to spread into their surroundings, creating a hazy glow.
The basic principle: extract the brighter parts of the original image, apply a blur to the extracted parts, and finally blend the blurred result back over the original. A full implementation is not listed in this article, and readers are encouraged to try it themselves; a sketch of the first step follows below.
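By way of illustration, that first step (the bright-pass extraction) might look like this in Canvas2D, reusing the processImageData helper from the blur section; the 0.6 threshold here is an arbitrary choice:

```js
// Keep only pixels brighter than a threshold; everything else becomes black.
// The result would then be blurred and blended back over the original image.
processImageData(imageData, (x, y, texture2D) => {
  const [r, g, b, a] = texture2D(x, y);
  const brightness = (r + g + b) / (3 * 255); // 0..1, as in the graying section
  return brightness > 0.6 ? [r, g, b, a] : [0, 0, 0, a];
});
```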
Conclusion
Today we learned how to do image processing with the canvas and covered some basic principles:
- Black and white: when R, G, and B are equal, the image appears grayscale.
- Hue: we looked up the corresponding matrix in the W3C standard and performed hue rotation via matrix multiplication.
Next, we learned about the framebuffer, which lets us feed the output of one processing pass in as the input of the next; we used this to implement the blur effect.
Finally, effects like Bloom require multiple processing passes.
That’s all for today. If you found this article useful, don’t forget to like it before you go!