The purpose of this case is to understand how to use GLSL to implement six dynamic filters: zoom, out-of-body, shake, flash white, burr (glitch), and illusion
The preparatory work
The preparation code is the same as in OpenGL ES case 11: the split-screen filter; you only need to modify the corresponding bottom item array, the shader names, and so on, so that part is not explained again here
Filter algorithm
The ideas and implementation of the six filters are shown below
The following explains only the GLSL code in the filter’s custom shader
Scaling filter
Scaling principle: scaling is performed in the vertex shader by enlarging the vertex coordinates while leaving the texture coordinates unchanged, which changes the mapping between vertices and texture
The scaling effect is as follows
The scaling filter algorithm mainly has the following steps:
- Use the mod function to map the current timestamp into the animation period
- Calculate the magnification of the vertices, i.e. the scaling amplitude, in the range [1.0, 1.3]
- Calculate the scaled vertex coordinates from the amplitude, as shown in the figure below
Vertex shader
attribute vec4 Position;
attribute vec2 TextureCoords;
varying vec2 TextureCoordsVarying;

// Timestamp (updated as the timer method is called): keeps incrementing from 0
uniform float Time;

const float PI = 3.1415926;

void main () {
    // Duration of one zoom cycle
    float duration = 0.6;
    // Maximum scaling amplitude
    float maxAmplitude = 0.3;

    // Map the incoming timestamp into one cycle; time is in the range [0, 0.6]
    float time = mod(Time, duration);

    // amplitude is the scaling factor. PI is introduced so that the sin function can be used
    // to keep the amplitude between 1.0 and 1.3 and make it vary over time.
    // abs is not strictly needed here, because the angle stays in the range [0, PI].
    float amplitude = 1.0 + maxAmplitude * abs(sin(time * (PI / duration)));

    // Key scaling code: multiply the vertex x and y by the amplitude while leaving the
    // texture coordinates unchanged, which stretches the image.
    gl_Position = vec4(Position.x * amplitude, Position.y * amplitude, Position.zw);

    // Pass the texture coordinates on to the fragment shader
    TextureCoordsVarying = TextureCoords;
}
The amplitude can also be calculated without the abs function: the angle time * (PI / duration) always falls within [0, π], where sin already returns values in [0, 1], so taking the absolute value changes nothing. Note that only the scaling effect is implemented in the vertex shader; the remaining filters are implemented in the fragment shader. In fact, scaling can also be implemented in the fragment shader, as sketched below.
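As a rough illustration of that last point, here is a minimal fragment-shader sketch (not part of the original project) that produces a similar pulsing zoom by sampling a smaller central region of the texture instead of enlarging the vertices; the uniform and varying names are assumed to match the other shaders in this article:

precision highp float;
uniform sampler2D Texture;
varying vec2 TextureCoordsVarying;
uniform float Time;

const float PI = 3.1415926;

void main () {
    float duration = 0.6;
    float maxAmplitude = 0.3;

    float time = mod(Time, duration);
    // Same amplitude curve as the vertex shader: 1.0 ~ 1.3
    float amplitude = 1.0 + maxAmplitude * abs(sin(time * (PI / duration)));

    // Instead of enlarging the vertices, pull the sampling coordinates toward the center
    // by the same factor, which samples a smaller (i.e. zoomed-in) region of the texture.
    vec2 zoomCoords = vec2(0.5, 0.5) + (TextureCoordsVarying - vec2(0.5, 0.5)) / amplitude;
    gl_FragColor = texture2D(Texture, zoomCoords);
}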
Out-of-body filter
Principle: a copy of the original image is superimposed on top of it and the two layers are color-blended; over time the superimposed layer gradually enlarges while its opacity drops to 0, and then the cycle repeats. This filter algorithm is performed in the fragment shader
The effect of the out-of-body filter is as follows:
Filter algorithm steps:
- Calculate how far through the period the current timestamp is, i.e. the progress
- Calculate the opacity corresponding to the current progress
- Calculate the scale corresponding to the current progress
- Get the magnified texture coordinates
- Sample the original texture and the magnified texture
- Blend the original texture with the magnified texture
The figure below shows how the out-of-body animation is calculated
Fragment shader
precision highp float;
uniform sampler2D Texture;
varying vec2 TextureCoordsVarying;
// Timestamp (keeps increasing, updated by the timer)
uniform float Time;

void main (void) {
    // Duration of one cycle
    float duration = 0.7;
    // Maximum opacity of the enlarged layer
    float maxAlpha = 0.4;
    // Maximum scale of the enlarged layer
    float maxScale = 1.8;

    // Progress within the current cycle: [0, 1)
    float progress = mod(Time, duration) / duration;
    // Opacity of the enlarged layer: fades from maxAlpha down to 0
    float alpha = maxAlpha * (1.0 - progress);
    // Scale of the enlarged layer: grows from 1.0 up to maxScale
    float scale = 1.0 + (maxScale - 1.0) * progress;

    // Shrink the distance between each texture coordinate and the center point by the scale
    // factor. Only the texture coordinates change; the vertex coordinates stay the same,
    // so the sampled region is smaller and therefore appears enlarged.
    float weakX = 0.5 + (TextureCoordsVarying.x - 0.5) / scale;
    float weakY = 0.5 + (TextureCoordsVarying.y - 0.5) / scale;
    vec2 weakTextureCoords = vec2(weakX, weakY);

    // Texel of the enlarged (weak) layer
    vec4 weakMask = texture2D(Texture, weakTextureCoords);
    // Texel of the original layer
    vec4 mask = texture2D(Texture, TextureCoordsVarying);

    // Blend the two colors (same as the built-in mix function / the default mix equation)
    gl_FragColor = mask * (1.0 - alpha) + weakMask * alpha;
}
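The 0.5 + (coordinate - 0.5) / scale mapping above pulls every texture coordinate toward the center of the texture; the same pattern reappears in the shake and illusion filters below. A minimal sketch with a worked example (the helper name is illustrative, not from the original shaders):

// Illustrative helper (not in the original shaders): moves a texture coordinate toward the
// center (0.5, 0.5) by a scale factor, so sampling with the result zooms into the middle.
vec2 scaleToCenter (vec2 coord, float scale) {
    return vec2(0.5, 0.5) + (coord - vec2(0.5, 0.5)) / scale;
}

// Example: with scale = 2.0, the corner (1.0, 1.0) maps to (0.75, 0.75) and (0.0, 0.0) maps
// to (0.25, 0.25), so only the central half of the texture is sampled, which looks enlarged.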
TikTok shake filter
Principle: color offset + slight magnification, i.e. first magnify the texture slightly, then apply a color (channel) offset to the magnified texture coordinates
The shake filter effect is as follows:
Algorithm steps of the shake filter:
- Calculate the current progress (percentage) from the current timestamp
- Calculate the color offset corresponding to the current progress
- Calculate the scale corresponding to the current progress
- Get the magnified texture coordinates
- Apply the color offset to the magnified texel to obtain three groups of colors
- Take the needed RGBA components from the three groups of colors
The way the timestamp is obtained and the current progress is calculated is the same as in the out-of-body filter. The figure below shows how the color offset & zoom scale are calculated, and how the shake filter changes as the timestamp increases
Fragment shader
// The declarations below are not shown in the original snippet but are required,
// matching the other fragment shaders in this article.
precision highp float;
uniform sampler2D Texture;
varying vec2 TextureCoordsVarying;
uniform float Time;

void main () {
    // Duration of one cycle
    float duration = 0.7;
    // Maximum scale
    float maxScale = 1.1;
    // Maximum color offset
    float offset = 0.02;

    // Progress within the current cycle: [0, 1)
    float progress = mod(Time, duration) / duration;
    // Color offset grows from (0, 0) to (0.02, 0.02)
    vec2 offsetCoords = vec2(offset, offset) * progress;
    // Scale grows from 1.0 to maxScale
    float scale = 1.0 + (maxScale - 1.0) * progress;

    // Pull the texture coordinates toward the center to get the slight magnification
    vec2 ScaleTextureCoords = vec2(0.5, 0.5) + (TextureCoordsVarying - vec2(0.5, 0.5)) / scale;

    // Sample with a positive offset, a negative offset, and no offset
    vec4 maskR = texture2D(Texture, ScaleTextureCoords + offsetCoords);
    vec4 maskB = texture2D(Texture, ScaleTextureCoords - offsetCoords);
    vec4 mask = texture2D(Texture, ScaleTextureCoords);

    // Recombine: red from the positively offset sample, green from the negatively offset
    // sample, blue and alpha from the unshifted sample
    gl_FragColor = vec4(maskR.r, maskB.g, mask.b, mask.a);
}
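For intuition, here is how the channel offset grows over one cycle with the constants above (a worked example, not part of the original code):

// With offset = 0.02 and progress running from 0.0 to 1.0 over each cycle (duration = 0.7):
// progress = 0.0  ->  offsetCoords = (0.0,  0.0)    no channel separation
// progress = 0.5  ->  offsetCoords = (0.01, 0.01)   mild separation
// progress = 1.0  ->  offsetCoords = (0.02, 0.02)   strongest separation, then the cycle restarts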
Flash white filter
How it works: add a white mask on top of the original image; the opacity of the white layer changes over time, and its color is mixed with the original image
The flash white effect is as follows:
Algorithm steps of the flash white filter:
- Calculate the time period corresponding to the current timestamp through the mod function
- Set a white mask
- Calculate the amplitude of the white mask, in the range [0.0, 1.0]
- Get the texture of the original image and mix it with the white mask color
Mix function
In other words, the default mix equation is: mask * (1.0 - alpha) + weakMask * alpha
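As a minimal illustration (variable names follow the out-of-body shader above), the built-in mix function and the manual blend give the same result:

// mix(x, y, a) is defined as x * (1.0 - a) + y * a, so these two lines are equivalent:
gl_FragColor = mask * (1.0 - alpha) + weakMask * alpha;   // manual blend (used by the out-of-body filter)
gl_FragColor = mix(mask, weakMask, alpha);                // built-in mix (used by the flash white filter below)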
The calculation process of the algorithm is shown as follows
Fragment shader
precision highp float;
uniform sampler2D Texture;
varying vec2 TextureCoordsVarying;
// Timestamp
uniform float Time;

const float PI = 3.1415926;

void main () {
    // Duration of one cycle
    float duration = 0.6;
    // Time within the current cycle
    float time = mod(Time, duration);

    // The white mask color
    vec4 whiteMask = vec4(1.0, 1.0, 1.0, 1.0);
    // Amplitude of the white mask, in [0.0, 1.0]; abs is optional because the angle stays in [0, PI]
    // float amplitude = abs(sin(time * (PI / duration)));
    float amplitude = sin(time * (PI / duration));

    // Texel of the original image
    vec4 mask = texture2D(Texture, TextureCoordsVarying);

    // Blend the original color with the white mask using the built-in mix function
    gl_FragColor = mix(mask, whiteMask, amplitude);
}
Burr filter
Principle: tearing + a slight color offset. A threshold is set: when a row's random offset is below the threshold, the offset is applied directly; otherwise it is multiplied by a small reduction factor. The rendered result is that most rows get only a tiny offset and only a few rows show a large tear
The burr filter looks like this:
Burr filter algorithm:
- Calculate the time period with the mod function
- Calculate the amplitude, in the range [0, 1]
- Get the random offset value for the fragment (seeded by its y coordinate), in the range [-1, 1]
- Determine whether an offset is required & calculate the texture's x coordinate
- If the offset is required, the tear is large, i.e. the x offset is large
- If not, the tear is small, i.e. the x offset is very small
- Get the texture coordinates after the tear
- Sample three groups of texels after the tear and take the needed RGBA components from each group
The calculation process of the above steps is illustrated below
Fragment shader
precision highp float;
uniform sampler2D Texture;
varying vec2 TextureCoordsVarying;
// Timestamp
uniform float Time;

const float PI = 3.1415926;

// GLSL has no built-in random function; this returns a pseudo-random value in [0.0, 1.0)
// fract(x) returns the fractional part of x
float rand (float n) {
    return fract(sin(n) * 43758.5453123);
}

void main () {
    // Maximum jitter (tear) threshold
    float maxJitter = 0.06;
    // Duration of half a cycle
    float duration = 0.3;
    // Red channel offset
    float colorROffset = 0.01;
    // Blue channel offset
    float colorBOffset = -0.025;

    // Time within the current cycle (one full cycle is duration * 2.0)
    float time = mod(Time, duration * 2.0);
    // Amplitude in [0, 1]
    float amplitude = max(sin(time * (PI / duration)), 0.0);

    // Random offset for this row, remapped from [0, 1) to [-1, 1)
    float jitter = rand(TextureCoordsVarying.y) * 2.0 - 1.0;
    // Whether this row gets a visible tear:
    // abs(jitter) is in [0, 1), maxJitter * amplitude is in [0, 0.06]
    bool needOffset = abs(jitter) < maxJitter * amplitude;

    // If a tear is needed, use the jitter directly; otherwise shrink it to a very small value
    float textureX = TextureCoordsVarying.x + (needOffset ? jitter : (jitter * amplitude * 0.006));
    // Texture coordinates after the tear
    vec2 textureCoords = vec2(textureX, TextureCoordsVarying.y);

    // Torn texel, plus red- and blue-offset texels
    vec4 mask = texture2D(Texture, textureCoords);
    vec4 maskR = texture2D(Texture, textureCoords + vec2(colorROffset * amplitude, 0.0));
    vec4 maskB = texture2D(Texture, textureCoords + vec2(colorBOffset * amplitude, 0.0));

    // The tear is mostly visible in the red and blue parts, so only those channels are taken
    // from the offset samples
    gl_FragColor = vec4(maskR.r, mask.g, maskB.b, mask.a);
}
GLSL has no built-in random-value function, so a custom function is needed; the fract function is used to obtain a pseudo-random number, as sketched below.
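A minimal sketch of this helper, using the same constant as the shader above; the remapping line is how the burr shader turns it into a per-row offset:

// sin(n) * 43758.5453123 gives a large value whose fractional part changes rapidly with n;
// fract() keeps only that fractional part, yielding a pseudo-random number in [0.0, 1.0).
float rand (float n) {
    return fract(sin(n) * 43758.5453123);
}

// Seeding with the fragment's y coordinate means all fragments in the same row share the
// same value, which is why the tear is applied row by row:
// float jitter = rand(TextureCoordsVarying.y) * 2.0 - 1.0;   // remapped to [-1.0, 1.0)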
Illusion filter
Principle: afterimage (residual shadow) + color offset, superimposed
- Residual shadow: every short interval a new layer is created, tinted mainly red, and its opacity gradually decreases over time. Within any period, several layers of different opacity can therefore be seen stacked on top of each other, forming the afterimage, while the picture moves in a circle as time passes
- Color offset: while the picture moves, blue leads and red trails. In other words, during the movement, part of the red channel value is left behind at the previous position at each interval, and that red gradually catches up again as time passes
The illusion filter looks like this:
Illusion filter algorithm:
- Calculate the current time period by mod function
- Set magnification
- Calculate the magnified texture coordinates
- Get the texel of the picture at the current point of its circular motion
- Use a for loop to accumulate the newly created layers, i.e. the phantom colors
- Get the final color by superimposing the original layer and the new layers
The calculation process is illustrated below
Fragment shader
precision highp float;
uniform sampler2D Texture;
varying vec2 TextureCoordsVarying;
uniform float Time;

const float PI = 3.1415926;
const float duration = 2.0;

// Returns the texel of the picture at a given time. Calling it with different times yields the
// layers created at those times.
vec4 getMask (float time, vec2 textureCoords, float padding) {
    // Point on the circular path at this time
    vec2 translation = vec2(sin(time * (PI * 2.0 / duration)),
                            cos(time * (PI * 2.0 / duration)));
    // New texture coordinates = original texture coordinates + offset * circular coordinates
    vec2 translationTextureCoords = textureCoords + padding * translation;
    // Sample the texture for this layer
    vec4 mask = texture2D(Texture, translationTextureCoords);
    return mask;
}

// Returns the fade progress (capped at hideTime) of the layer created at startTime,
// evaluated at currentTime
float maskAlphaProgress (float currentTime, float hideTime, float startTime) {
    // Elapsed time since the layer was created, wrapped into one cycle
    float time = mod(duration + currentTime - startTime, duration);
    // If the elapsed time is less than hideTime (0.9), return it; otherwise return hideTime
    return min(time, hideTime);
}

void main () {
    // Time within the current cycle
    float time = mod(Time, duration);

    // Magnification and the resulting offset (padding ≈ 0.083)
    float scale = 1.2;
    float padding = 0.5 * (1.0 - 1.0 / scale);
    // Magnified texture coordinates (pulled toward the center)
    vec2 textureCoords = vec2(0.5, 0.5) + (TextureCoordsVarying - vec2(0.5, 0.5)) / scale;

    // Time it takes a layer to fade out completely
    float hideTime = 0.9;
    // A new layer is created every 0.2 s
    float timeGap = 0.2;

    // Note: mainly the red channel is kept, because the illusion effect leaves a red trail.
    // Maximum R/G/B opacity of a newly created layer
    float maxAlphaR = 0.5;
    float maxAlphaG = 0.05;
    float maxAlphaB = 0.05;

    // Texel of the picture at its current position
    vec4 mask = getMask(time, textureCoords, padding);
    // Remaining opacity of the original texture for each channel
    float alphaR = 1.0;
    float alphaG = 1.0;
    float alphaB = 1.0;

    // Final layer color, initialized to zero
    vec4 resultMask = vec4(0, 0, 0, 0);

    // Each iteration fetches the color of one ghost layer; the loop runs
    // duration / timeGap = 10 times, accumulating up to 10 ghost layers
    for (float f = 0.0; f < duration; f += timeGap) {
        float tmpTime = f;
        // Color of the layer created at tmpTime
        vec4 tmpMask = getMask(tmpTime, textureCoords, padding);

        // Remaining opacity of that layer at the current time (fades linearly to 0 over hideTime)
        float tmpAlphaR = maxAlphaR - maxAlphaR * maskAlphaProgress(time, hideTime, tmpTime) / hideTime;
        float tmpAlphaG = maxAlphaG - maxAlphaG * maskAlphaProgress(time, hideTime, tmpTime) / hideTime;
        float tmpAlphaB = maxAlphaB - maxAlphaB * maskAlphaProgress(time, hideTime, tmpTime) / hideTime;

        // Accumulate: result += ghost color * its opacity
        resultMask += vec4(tmpMask.r * tmpAlphaR,
                           tmpMask.g * tmpAlphaG,
                           tmpMask.b * tmpAlphaB,
                           1.0);
        // Subtract the opacity taken by the ghost layers from the original texture's share
        alphaR -= tmpAlphaR;
        alphaG -= tmpAlphaG;
        alphaB -= tmpAlphaB;
    }

    // Final color += original texture RGB * its remaining opacity
    resultMask += vec4(mask.r * alphaR, mask.g * alphaG, mask.b * alphaB, 1.0);

    // Output the final color
    gl_FragColor = resultMask;
}
In the getMask function above, since sin and cos both return values in [-1, 1], the offset added to the texture coordinate can be understood as a point moving around a circle (of radius padding) centered on the original texture coordinate, which is the rotation seen in the effect, as shown in the figure below
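As a quick check (a worked example, not part of the original code), sampling the translation vector at quarter-period points traces out the circular path:

// angle = time * (2.0 * PI / duration), so with duration = 2.0:
// time = 0.0  ->  translation = (sin 0,     cos 0)     = ( 0.0,  1.0)
// time = 0.5  ->  translation = (sin PI/2,  cos PI/2)  = ( 1.0,  0.0)
// time = 1.0  ->  translation = (sin PI,    cos PI)    = ( 0.0, -1.0)
// time = 1.5  ->  translation = (sin 3PI/2, cos 3PI/2) = (-1.0,  0.0)
// Multiplying by padding (≈ 0.083) keeps the picture moving on a small circle
// around its original position.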
For the complete code, see GitHub: 15_Zoom + out-of-body + shake, etc. 6 filters (OC) and 15_Zoom + out-of-body + shake, etc. 6 filters (Swift)