WebGL smart building glow effect algorithm series: Gaussian blur

If you have used image editing software such as Photoshop, blur filters should be familiar to you: such software provides many blur algorithms, and Gaussian blur is one of them.

In our smart building project, we need to render a glowing building effect. For example, here is a simple building effect:

Gaussian blur is one of the algorithms needed to achieve this glow effect.

Introduction to Gaussian blur

Gaussian blur is a widely used technique in computer graphics. It is an image-space effect that blurs an image to produce a soft, smoothed version of the original. Combined with other algorithms, Gaussian blur can also be used to produce glow, halo, depth of field, heat haze, and frosted glass effects.

The principle of Gaussian blur

The principle of image blur, simply put, is to average each pixel's color with the colors of its surrounding pixels. Different blur algorithms define the surrounding region differently and use different averaging methods. For example, the radial blur implemented in WebGL in my previous article is one such blur algorithm.

Mean blur

Before we look at Gaussian blur, let's look at the simpler mean blur (box blur). Mean blur works by averaging a pixel with its surrounding pixels (above, below, left, right, and diagonal), including the pixel itself. As shown below:

It can be seen that for a given pixel, when the search radius is 1, its color value is affected by 9 pixels (the pixel itself plus the 8 surrounding pixels). Assuming each of these pixels has the same influence on the center pixel, each contributes 1/9. As shown below:

The 3×3 matrix of influence values above is often referred to as a convolution kernel.

The weighted sum for the center point is then computed as shown in the figure below. The final value is:

```
(1 × 8 + 2 × 1) / (8 + 1) = 10 / 9
```

When calculating a pixel's color, the above averaging is performed separately for each of the pixel's RGB channels.

The calculation above is a kind of convolution filter. A convolution filter, in layman’s terms, is an algorithm that combines a set of values.

If the search radius is 2, it becomes an average of 25 pixels, and the larger the search radius, the more blurry it becomes. The relationship between the number of pixels and search radius is as follows:

```
(1 + r * 2)²   // r = 1 → 9, r = 2 → 25, r = 3 → 49
```

N×N is usually referred to as the size of the convolution kernel, for example 3×3 or 5×5.

In mean blur, every participating pixel contributes equally to the center pixel; that is, each pixel has the same weight. This is the defining characteristic of mean blur.
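To make the idea concrete, here is a minimal JavaScript sketch (illustrative only, not part of the project code) that builds such a uniform kernel for a given radius:

```javascript
// Build a (2r + 1) x (2r + 1) mean-blur (box) kernel: every entry has the same weight.
function createMeanKernel(radius) {
  const size = 2 * radius + 1;
  const weight = 1 / (size * size); // e.g. radius 1 -> 1/9, radius 2 -> 1/25
  return Array.from({ length: size }, () => new Array(size).fill(weight));
}

console.log(createMeanKernel(1)); // 3x3 matrix, each entry 1/9
```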

Normal distribution

Using a simple average is obviously not very reasonable: an image is continuous, so pixels close to the center are more closely related to it than pixels farther away. A weighted average is therefore more reasonable: the closer a pixel is, the greater its weight; the farther away it is, the smaller its weight.

The normal distribution satisfies the above distribution requirements, as shown in the figure below:

As you can see, the normal distribution is a bell-shaped curve: the closer to the center, the larger the value; the farther from the center, the smaller the value.

When computing the average, we take the center point as the origin and assign each of the other points a weight according to its position on the normal curve; this gives us a weighted average.

Gaussian function

The Gaussian function is the mathematical formula that describes the normal distribution. In one dimension the formula is:

G(x) = 1 / (σ·√(2π)) · e^( −(x − μ)² / (2σ²) )

where μ is the mean of x, which can be interpreted as the center of the normal distribution, and σ is the standard deviation of x. Because the center point is taken as the origin when averaging, μ equals 0.

If it is two-dimensional (with μ = 0), the formula becomes:

G(x, y) = 1 / (2πσ²) · e^( −(x² + y²) / (2σ²) )

You can see that in the two-dimensional Gaussian function, x and y are independent of each other. In other words:

G(x, y) = G(x) · G(y)

The nice thing about this property is that a two-dimensional Gaussian can be separated into two independent one-dimensional Gaussian functions, which improves efficiency. In practice, Gaussian blur is usually implemented with one-dimensional Gaussian functions rather than a two-dimensional one.
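As a minimal sketch of this separability (illustrative helper names, not from the project code), the following JavaScript evaluates a two-dimensional Gaussian directly and as the product of two one-dimensional Gaussians:

```javascript
// One-dimensional Gaussian with mean 0 and standard deviation sigma.
function gaussian(x, sigma) {
  return Math.exp(-(x * x) / (2 * sigma * sigma)) / (sigma * Math.sqrt(2 * Math.PI));
}

// Two-dimensional Gaussian with mean 0.
function gaussian2D(x, y, sigma) {
  return Math.exp(-(x * x + y * y) / (2 * sigma * sigma)) / (2 * Math.PI * sigma * sigma);
}

const sigma = 1.5;
console.log(gaussian2D(1, 2, sigma));                 // direct 2D evaluation
console.log(gaussian(1, sigma) * gaussian(2, sigma)); // same value via separability
```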

Gaussian blur

The principle of Gaussian blur is basically the same as that of the mean blur introduced above. The difference is that in mean blur every surrounding pixel has the same weight, whereas in Gaussian blur the weights of the surrounding pixels are calculated with the Gaussian function, which is why it is called Gaussian blur.

For example, if σ is 1.5, the weight matrix for a blur radius of 1 is as follows:

The sum of the weights of these 9 points is 0.4787147. Since we compute the weighted average of only these 9 points, their weights must sum to 1, so each of the 9 values is divided by 0.4787147 to obtain the final, normalized weight matrix.
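A minimal JavaScript sketch of building and normalizing this weight matrix (illustrative only; the radius and sigma match the values used above) might look like this:

```javascript
// Build a normalized Gaussian kernel of the given radius (radius 1 -> a 3x3 matrix).
function createGaussianKernel(radius, sigma) {
  const kernel = [];
  let sum = 0;
  for (let y = -radius; y <= radius; y++) {
    const row = [];
    for (let x = -radius; x <= radius; x++) {
      // 2D Gaussian with mean 0: 1 / (2πσ²) · e^(−(x² + y²) / (2σ²))
      const w = Math.exp(-(x * x + y * y) / (2 * sigma * sigma)) / (2 * Math.PI * sigma * sigma);
      row.push(w);
      sum += w;
    }
    kernel.push(row);
  }
  // For radius 1 and sigma 1.5, sum ≈ 0.4787147; dividing by it makes the weights sum to 1.
  return kernel.map(row => row.map(w => w / sum));
}

console.log(createGaussianKernel(1, 1.5));
```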

Rendering process

After understanding the basic principle of Gaussian blur, let’s have a look at the basic rendering process of Gaussian blur in WebGL:

  1. First, render the scene or image as normal onto a texture object; this requires a framebuffer object.
  2. Apply the Gaussian blur algorithm to that texture object to obtain the final blurred texture object.

The second step above, applying the Gaussian blur algorithm, is generally divided into two passes:

  1. First, apply the Gaussian blur in the vertical direction.
  2. Then, apply the horizontal Gaussian blur to the result of the vertical pass.

Of course, you can also blur horizontally first and then vertically; the result is the same. The result of this two-pass Gaussian blur is essentially the same as doing it in a single pass, but it is more efficient. Some readers may ask: why does splitting the blur into two passes improve efficiency? Consider a 3×3 Gaussian blur: the two-pass approach samples 3 + 3 = 6 pixels per output pixel, while the single-pass approach samples 3 × 3 = 9. For a 5×5 Gaussian blur, the two-pass approach samples 5 + 5 = 10 pixels, while the single pass samples 5 × 5 = 25. Clearly, the two-pass approach reduces the work per pixel and improves execution efficiency.
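A tiny sketch of the sample-count comparison, assuming a kernel radius r:

```javascript
// Texture samples per output pixel for a kernel of radius r.
function sampleCounts(r) {
  const size = 2 * r + 1;
  return {
    twoPass: size + size, // separable: one horizontal pass + one vertical pass
    onePass: size * size  // full 2D kernel in a single pass
  };
}

console.log(sampleCounts(1)); // { twoPass: 6,  onePass: 9 }
console.log(sampleCounts(2)); // { twoPass: 10, onePass: 25 }
```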

Render flow code

```javascript
frameBuffer.bind();
renderScene();
frameBuffer.unbind();
...
```

Calling renderScene() between frameBuffer.bind() and frameBuffer.unbind() draws the scene onto the texture object associated with the frameBuffer.
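The FrameBuffer helper itself is project code that is not listed in this article; a minimal sketch of what its bind/unbind might look like on top of the raw WebGL API (an assumption for illustration, not the project's actual implementation) is:

```javascript
class FrameBuffer {
  constructor(gl, width, height, textureUnit = 0) {
    this.gl = gl;
    this.width = width;
    this.height = height;
    this.textureUnit = textureUnit;

    // Color texture that will receive the rendered scene.
    this.colorTexture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, this.colorTexture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

    // Framebuffer object with the color texture attached.
    this.framebuffer = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, this.colorTexture, 0);
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  }

  bind() {
    const gl = this.gl;
    gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer); // subsequent draws go to the texture
    gl.viewport(0, 0, this.width, this.height);
  }

  unbind() {
    const gl = this.gl;
    gl.bindFramebuffer(gl.FRAMEBUFFER, null); // back to the default framebuffer (the screen)
  }
}
```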

The second step is then to apply the Gaussian blur algorithm:

```javascript
pass(params = {}, count = 1, inputFrameBuffer) {
    let { options, fullScreen } = this;
    inputFrameBuffer = inputFrameBuffer || this.inputFrameBuffer;
    let { gl, gaussianBlurProgram, verticalBlurFrameBuffer, horizontalBlurFrameBuffer } = this;
    let { width, height } = options;
    gl.useProgram(gaussianBlurProgram);
    if (width == null) {
        width = verticalBlurFrameBuffer.width;
        height = verticalBlurFrameBuffer.height;
    }

    // Vertical pass
    verticalBlurFrameBuffer.bind();
    fullScreen.enable(gaussianBlurProgram, true);
    gl.activeTexture(gl.TEXTURE0 + inputFrameBuffer.textureUnit); // activate the texture unit
    gl.bindTexture(gl.TEXTURE_2D, inputFrameBuffer.colorTexture); // bind the texture object
    gl.uniform1i(gaussianBlurProgram.uColorTexture, inputFrameBuffer.textureUnit);
    gl.uniform2fv(gaussianBlurProgram.uTexSize, [width, height]);
    gl.uniform2fv(gaussianBlurProgram.uDirection, [0, 1]); // vertical direction
    gl.uniform1f(gaussianBlurProgram.uExposure, params.exposure || 3);
    gl.uniform1f(gaussianBlurProgram.uRadius, params.radius || 5);
    gl.uniform1f(gaussianBlurProgram.uUseLinear, params.useLinear || 0.0);
    fullScreen.draw();
    verticalBlurFrameBuffer.unbind();

    // Horizontal pass
    if (horizontalBlurFrameBuffer) { // otherwise render directly to the screen
        horizontalBlurFrameBuffer.bind(gl);
    }
    gl.activeTexture(gl.TEXTURE0 + verticalBlurFrameBuffer.textureUnit); // activate the texture unit
    gl.bindTexture(gl.TEXTURE_2D, verticalBlurFrameBuffer.colorTexture); // bind the texture object
    gl.uniform1i(gaussianBlurProgram.uColorTexture, verticalBlurFrameBuffer.textureUnit);
    gl.uniform2fv(gaussianBlurProgram.uTexSize, [width, height]);
    gl.uniform2fv(gaussianBlurProgram.uDirection, [1, 0]); // horizontal direction
    gl.uniform1f(gaussianBlurProgram.uExposure, params.exposure || 2);
    gl.uniform1f(gaussianBlurProgram.uRadius, params.radius || 5);
    gl.uniform1f(gaussianBlurProgram.uUseLinear, params.useLinear || 0.0);
    fullScreen.draw();
    if (horizontalBlurFrameBuffer) {
        horizontalBlurFrameBuffer.unbind();
    }

    if (count > 1) {
        this.pass(params, count - 1, this.horizontalBlurFrameBuffer);
    }
    return horizontalBlurFrameBuffer;
}
```

inputFrameBuffer is the frameBuffer object produced by the first rendering step, passed in as a parameter. The function first performs the Gaussian blur in the vertical direction:

```javascript
verticalBlurFrameBuffer.bind();
fullScreen.enable(gaussianBlurProgram, true);
gl.activeTexture(gl.TEXTURE0 + inputFrameBuffer.textureUnit); // activate the texture unit
gl.bindTexture(gl.TEXTURE_2D, inputFrameBuffer.colorTexture); // bind the texture object
gl.uniform1i(gaussianBlurProgram.uColorTexture, inputFrameBuffer.textureUnit);
gl.uniform2fv(gaussianBlurProgram.uTexSize, [width, height]);
gl.uniform2fv(gaussianBlurProgram.uDirection, [0, 1]); // vertical direction
gl.uniform1f(gaussianBlurProgram.uExposure, params.exposure || 3);
gl.uniform1f(gaussianBlurProgram.uRadius, params.radius || 5);
gl.uniform1f(gaussianBlurProgram.uUseLinear, params.useLinear || 0.0);
fullScreen.draw();
verticalBlurFrameBuffer.unbind();
```

Then the horizontal blur is performed:

```javascript
if (horizontalBlurFrameBuffer) { // otherwise render directly to the screen
    horizontalBlurFrameBuffer.bind(gl);
}
gl.activeTexture(gl.TEXTURE0 + verticalBlurFrameBuffer.textureUnit); // activate the texture unit
gl.bindTexture(gl.TEXTURE_2D, verticalBlurFrameBuffer.colorTexture); // bind the texture object
gl.uniform1i(gaussianBlurProgram.uColorTexture, verticalBlurFrameBuffer.textureUnit);
gl.uniform2fv(gaussianBlurProgram.uTexSize, [width, height]);
gl.uniform2fv(gaussianBlurProgram.uDirection, [1, 0]); // horizontal direction
gl.uniform1f(gaussianBlurProgram.uExposure, params.exposure || 2);
gl.uniform1f(gaussianBlurProgram.uRadius, params.radius || 5);
gl.uniform1f(gaussianBlurProgram.uUseLinear, params.useLinear || 0.0);
fullScreen.draw();
if (horizontalBlurFrameBuffer) {
    horizontalBlurFrameBuffer.unbind();
}
```
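Putting the two steps together, a call to pass might look something like this (sceneFrameBuffer and gaussianBlurPass are hypothetical names used only for illustration):

```javascript
// Render the scene into a texture first, then blur it.
sceneFrameBuffer.bind();
renderScene();
sceneFrameBuffer.unbind();

// params.radius / exposure / useLinear are forwarded to the shader uniforms;
// count = 2 runs the vertical + horizontal blur twice for a stronger result.
const blurredFrameBuffer = gaussianBlurPass.pass({ radius: 5, exposure: 2 }, 2, sceneFrameBuffer);
```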

Shader code

The shader code is divided into two parts. The first is the vertex shader:

```javascript
const gaussianBlurVS = `
    attribute vec3 aPosition;
    attribute vec2 aUv;
    varying vec2 vUv;
    void main() {
        vUv = aUv;
        gl_Position = vec4(aPosition, 1.0);
    }
`;
```

The other is the fragment shader code:

```javascript
const gaussianBlurFS = `
    precision highp float;
    precision highp int;
    #define HIGH_PRECISION
    #define SHADER_NAME ShaderMaterial
    #define MAX_KERNEL_RADIUS 49
    #define SIGMA 11
    varying vec2 vUv;
    uniform sampler2D uColorTexture;
    uniform vec2 uTexSize;
    uniform vec2 uDirection;
    uniform float uExposure;
    uniform bool uUseLinear;
    uniform float uRadius;

    // One-dimensional Gaussian probability density function
    float gaussianPdf(in float x, in float sigma) {
        return 0.39894 * exp(-0.5 * x * x / (sigma * sigma)) / sigma;
    }

    void main() {
        vec2 invSize = 1.0 / uTexSize;
        float fSigma = float(SIGMA);
        float weightSum = gaussianPdf(0.0, fSigma);
        vec4 diffuseSum = texture2D(uColorTexture, vUv).rgba * weightSum;
        float radius = uRadius;
        for (int i = 1; i < MAX_KERNEL_RADIUS; i++) {
            float x = float(i);
            if (x > radius) {
                break;
            }
            float w = gaussianPdf(x, fSigma);
            float t = x;
            vec2 uvOffset = uDirection * invSize * t;
            vec4 sample1 = texture2D(uColorTexture, vUv + uvOffset).rgba;
            vec4 sample2 = texture2D(uColorTexture, vUv - uvOffset).rgba;
            diffuseSum += (sample1 + sample2) * w;
            weightSum += 2.0 * w;
        }
        vec4 result = vec4(1.0) - exp(-diffuseSum / weightSum * uExposure);
        gl_FragColor = result;
    }
`;
```

The final rendering looks like this, in this case a wireframe sphere:

Application cases

At present, the main use of this effect in the project is the glowing building effect. Here are a few examples to share with you:

Of course, there are many more application scenarios that readers can explore on their own.

Reference documentation

www.ruanyifeng.com/blog/2012/1…

Conclusion

If you are interested in visualization, you can reach me on WeChat at 541002349. You can also follow the official account "ITMan Biao Shu" to receive more articles.