First things first, let's demo the Bloom effect

Demo Download Address

The article is long overdue.

First, the purpose: today we will implement a blur algorithm. It is commonly used in game post-processing to achieve the Bloom effect, also known as full-screen glow, and at its core it is a Gaussian blur. You sometimes see dreamlike glow effects in games, or the blurred background when you pull down the iOS home screen; these effects differ slightly from one another, but not by much. My implementation of Gaussian blur today is based on convolution, and all sorts of other blur algorithms can then be built on top of it.


First, let's take a look at the Bloom effect: the Gaussian blur on my home page. Click your nickname in the upper-right corner and choose "My home page" to view it.


If you scroll the mouse wheel, you'll notice that the image becomes more and more blurred as you scroll. This Gaussian blur is the key algorithm behind the Bloom effect.




A brief introduction to Bloom implementation methods

Let’s start with a few classic implementations of glow.


1. The first method: a very easy way is to put a glow map on top of a billboard material. This method is simple and crude; as the saying goes, when the effect isn't enough, textures make up the difference. It's simple, but many games ship with it.


2. The second method: run a convolution operation on the image. Convolving an image means superimposing nearby pixels onto each pixel. Why does this blur? Interested readers can try it in Photoshop: make several copies of any image, and each time you place a copy, shift it down one pixel from the previous one, then overlay them all. The result is essentially a blur filter.
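The shift-and-overlay idea can be sketched in a few lines of Python (a hypothetical illustration with a 1-D "image", not part of the demo code): shift copies of the image, add them up, and divide by the number of copies.

```python
def overlay_shifted(image, shifts):
    """Average several copies of `image`, each shifted by an offset in `shifts`.

    Out-of-range samples are clamped to the nearest edge pixel.
    """
    n = len(image)
    result = []
    for i in range(n):
        total = 0.0
        for s in shifts:
            j = min(max(i + s, 0), n - 1)  # clamp to the image border
            total += image[j]
        result.append(total / len(shifts))  # normalize by the copy count
    return result

# A hard edge: black on the left, white on the right.
edge = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
# Overlay the image with copies shifted one pixel left and one pixel right.
blurred = overlay_shifted(edge, [-1, 0, 1])
print(blurred)  # the hard edge becomes a gradual ramp through 1/3 and 2/3
```

The hard 0-to-1 jump is spread across neighboring pixels, which is exactly what a blur filter does.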


The first method is very simple: we let a designer prepare the glow map offline in Photoshop, and at runtime we just look it up, like normal texture mapping. Here we focus on implementing the second method.


1. Pass the texture to the shader (already implemented in temple.html)

2. Get the texture pixels from the map (already implemented in temple.html)

3. Implement the convolution operation

To cut down on unnecessary difficulty, we provide template files with clearly written comments. Even so, I have posted the main shader code here; you can implement the algorithm in GLSL. The following code already covers steps 1 and 2.


<script id="vertexShaderprocess" type="x-shader/x-vertex">
  // UV scale passed in from JavaScript
  uniform vec2 uvScale;
  // Texture coordinates passed on to the fragment shader
  varying vec2 vUv;

  void main() {
    vUv = uvScale * uv;
    // modelViewMatrix and projectionMatrix are supplied by the WebGL
    // framework (three.js); you can ignore them for now.
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    gl_Position = projectionMatrix * mvPosition;
  }
</script>

<script id="fragmentShaderprocess" type="x-shader/x-fragment">
  // Viewport resolution (x and y axes) passed in from JavaScript
  uniform vec2 resolution;
  // Fog parameters passed to the fragment shader
  uniform float fogDensity;
  uniform vec3 fogColor;
  // Noise texture passed to the fragment shader
  uniform sampler2D textureNoise;
  // Image texture passed to the fragment shader
  uniform sampler2D texture2;
  // Texture coordinates passed from the vertex shader
  varying vec2 vUv;

  void main( void ) {
    // Sample the noise texture
    vec4 noise = texture2D( textureNoise, vUv );
    // Sample the image texture
    vec4 color = texture2D( texture2, vUv * 1.0 );
    // Hand the color to WebGL, darkened toward the edges
    gl_FragColor = color / length( vec2( 2.0 ) - vUv * 1.0 );
  }
</script>


Now we're going to implement the third step: using convolution to achieve the blur effect.

float h = 1.0 / 512.0; // one-pixel offset, explained below
vec4 color11 = texture2D( texture2, vec2( vUv.x - 1.0 * h, vUv.y + 1.0 * h ) );
vec4 color12 = texture2D( texture2, vec2( vUv.x - 0.0 * h, vUv.y + 1.0 * h ) );
vec4 color13 = texture2D( texture2, vec2( vUv.x + 1.0 * h, vUv.y + 1.0 * h ) );
vec4 color21 = texture2D( texture2, vec2( vUv.x - 1.0 * h, vUv.y ) );
vec4 color22 = texture2D( texture2, vec2( vUv.x,           vUv.y ) );
vec4 color23 = texture2D( texture2, vec2( vUv.x + 1.0 * h, vUv.y ) );
vec4 color31 = texture2D( texture2, vec2( vUv.x - 1.0 * h, vUv.y - 1.0 * h ) );
vec4 color32 = texture2D( texture2, vec2( vUv.x - 0.0 * h, vUv.y - 1.0 * h ) );
vec4 color33 = texture2D( texture2, vec2( vUv.x + 1.0 * h, vUv.y - 1.0 * h ) );
gl_FragColor = ( 1.0*color11 + 1.0*color12 + 1.0*color13
               + 1.0*color21 + 2.0*color22 + 1.0*color23
               + 1.0*color31 + 1.0*color32 + 1.0*color33 ) / 10.0;

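For readers who want to check the arithmetic outside a shader, here is a hypothetical Python sketch of the same 3×3 kernel, with weight 1 everywhere and 2 in the center, normalized by 10 (plain nested lists stand in for the texture):

```python
KERNEL = [
    [1.0, 1.0, 1.0],
    [1.0, 2.0, 1.0],
    [1.0, 1.0, 1.0],
]  # weights sum to 10, hence the division by 10.0 in the shader

def convolve3x3(image, kernel=KERNEL):
    """Apply a 3x3 kernel to a 2-D grid of floats, clamping at the borders."""
    rows, cols = len(image), len(image[0])
    norm = sum(sum(row) for row in kernel)
    out = [[0.0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), rows - 1)
                    xx = min(max(x + dx, 0), cols - 1)
                    acc += kernel[dy + 1][dx + 1] * image[yy][xx]
            out[y][x] = acc / norm
    return out

# A single bright pixel on a black background spreads into its neighbors.
dot = [[0.0] * 5 for _ in range(5)]
dot[2][2] = 1.0
blurred = convolve3x3(dot)
print(blurred[2][2])  # 0.2  (the center keeps weight 2/10)
print(blurred[2][3])  # 0.1  (each neighbor receives weight 1/10)
```

Note that the total brightness is preserved: the kernel's weights divided by 10 sum to 1, so the blur spreads light around without brightening or darkening the image.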


We can see that the test1.html code does blur the image, but the effect isn't obvious, because our convolution kernel isn't big enough, or in other words, the offset isn't large enough. We'll discuss changing the convolution kernel later. First, let's walk through how the convolution is implemented.


1. Calculate the offset value

float h = 1.0 / 512.0; The image I chose here is 512×512, and my offset is 1 pixel, so the offset value works out to 1/512.
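As a quick sanity check, the offset arithmetic can be written out (a hypothetical helper, not part of the shader; the name `pixel_offset` is my own):

```python
def pixel_offset(pixels: float, texture_size: int = 512) -> float:
    """UV-space offset corresponding to `pixels` texels on a square texture.

    Texture coordinates run from 0.0 to 1.0 across the whole image,
    so one texel on a 512x512 texture is 1/512 in UV space.
    """
    return pixels / texture_size

h = pixel_offset(1)   # one-pixel offset, as in the shader
print(h)              # 0.001953125, i.e. 1/512
```

A larger argument (say `pixel_offset(10)`) gives a proportionally larger UV offset, which becomes relevant when we tune the blurriness later.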


2. Obtain an unoffset image

color22 = texture2D( texture2, vec2( vUv.x, vUv.y ) );


3. Obtain the offset image.

color32 = texture2D( texture2, vec2( vUv.x - 0.0 * h, vUv.y - 1.0 * h ) ); Offsetting in the positive direction gives color12, and so on for the other samples.


4. Overlay the images

Add color13 to color22, then divide by 2. Because two images are superimposed, every color value is roughly doubled, so we divide by the number of superimposed images, which here is 2. Why does superimposing images produce blur? You can simulate the principle in Photoshop:

1. Open Photoshop, create a new layer and place an image

2. Create another layer and place the previous image again

3. The second picture is slightly offset downward

4. Click FX in the lower-right corner of Photoshop and apply a color blend


You can see that the two images produce a motion-blur-like effect because of the offset and overlay. This is the principle behind blurring, and the reason there are so many variants (Gaussian blur, weighted-average blur, and so on) is that each one superimposes a different number of images in different directions.


5. Multiply by the appropriate Gaussian blur coefficients


So this is our Gaussian blur convolution template:

gl_FragColor = ( 1.0*color11 + 1.0*color12 + 1.0*color13
               + 1.0*color21 + 2.0*color22 + 1.0*color23
               + 1.0*color31 + 1.0*color32 + 1.0*color33 ) / 10.0;

So now we have Gaussian blur, but as you might have noticed when you ran the demo, it doesn't look very blurry. In the current code, two factors affect the blurriness: the size of the convolution kernel (what we have now is a 3×3 kernel; how to change it is not covered here for the moment) and the offset coefficient, float h = 1.0/512.0. The current offset is one pixel; we can try increasing it to 10 pixels to make the image much blurrier.
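To see why the offset coefficient matters, here is a hypothetical 1-D comparison: the same three-tap average applied with a 1-pixel and a 10-pixel offset. The larger offset pulls in samples from much farther away, so an edge is smeared over a much wider band.

```python
def three_tap_blur(image, offset):
    """Average samples at -offset, 0, +offset, clamping at the borders."""
    n = len(image)
    out = []
    for i in range(n):
        acc = 0.0
        for d in (-offset, 0, offset):
            acc += image[min(max(i + d, 0), n - 1)]
        out.append(acc / 3.0)
    return out

edge = [0.0] * 32 + [1.0] * 32       # hard edge in the middle
narrow = three_tap_blur(edge, 1)     # h equivalent to 1 pixel
wide = three_tap_blur(edge, 10)      # h equivalent to 10 pixels

# Count pixels that are neither fully black nor fully white: the smeared band.
def band(img):
    return sum(1 for v in img if 0.0 < v < 1.0)

print(band(narrow), band(wide))  # prints 2 20: ten times the offset, ten times the smear
```

The same reasoning applies in the shader: changing h from 1.0/512.0 to 10.0/512.0 widens the sampling footprint without touching the kernel itself.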


This article is in a trial-release stage. Since I haven't written for a long time, it is inevitably hard to match the readers' way of thinking, and it is also difficult to write from a beginner's perspective. If anything is hard to understand, you are welcome to join group 151269709 and give feedback directly. Thanks.