This article demonstrates the practical application of GLSL by recreating several of Douyin's special effects.
Preface
This article was inspired by “When an Android Developer Goes Crazy Playing Douyin (II)”.
In that post, the author analyzed and recreated several of Douyin's video effects on the Android platform, using “part GLSL code + part Java code”.
After reading it, full of admiration, I had a bold idea: could I do something similar on iOS just by writing GLSL?
A good idea, but Douyin's effects are filters applied to video, and so far we have only covered rendering images; jumping straight into video might be a bit much.
So I scaled the idea back: could I achieve something similar with static images on iOS, just by writing GLSL?
In this way, we can pay more attention to GLSL itself, rather than video capture and output.
Hence this article. To keep the transition seamless, I will reuse the previous GLSL rendering example and change only the shader code, trying to recreate the six effects implemented in that article.
0. Animation
You may ask: All the effects on Tiktok are dynamic. How do you add dynamic effects to a static image?
Good question, so the first step is to animate the static image.
Recall that in UIKit, all we do is send instructions to Core Animation, and when the screen refreshes, Core Animation calculates frame by frame what image should be displayed.
The emphasis here is on “frame by frame”. Similarly, in OpenGL ES, the way we animate is to calculate the image each frame should display, then re-render it when the screen refreshes.
This frame-by-frame calculation is done in the shader. We pass in the current time via a time parameter on each re-render and let the shader compute the animation's current progress. As for triggering the re-render, we use CADisplayLink.
The code looks something like this:
self.displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(timeAction)];
[self.displayLink addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
- (void)timeAction {
glUseProgram(self.program);
glBindBuffer(GL_ARRAY_BUFFER, self.vertexBuffer);
// Pass time
CGFloat currentTime = self.displayLink.timestamp - self.startTimeInterval;
GLint time = glGetUniformLocation(self.program, "Time");
glUniform1f(time, currentTime);
// Clear the canvas (set the clear color before clearing)
glClearColor(1.0, 1.0, 1.0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
// Redraw
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
[self.context presentRenderbuffer:GL_RENDERBUFFER];
}
Accordingly, there is a uniform Time parameter in the Shader:
uniform float Time;
This allows the shader to use Time to calculate what image should be displayed.
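To make the role of the Time uniform concrete, here is a minimal CPU-side sketch in Python (not part of the original project; the function name is mine) of how the shaders below fold the ever-growing timestamp into one animation cycle with mod(Time, duration):

```python
import math

def animation_progress(elapsed, duration):
    """Fold total elapsed seconds into one cycle, like GLSL's
    mod(Time, duration): the result always lies in [0, duration)."""
    return math.fmod(elapsed, duration)

# With a 0.6 s cycle, 1.5 s of elapsed time lands 0.3 s into the third cycle.
progress = animation_progress(1.5, 0.6)
```

Because the shader only ever sees this folded value, the animation repeats seamlessly no matter how long CADisplayLink has been running.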
1. Scaling
1. Final effect
The first effect we want to achieve is “scaling”. It looks simple and can be achieved by modifying the correspondence between vertex coordinates and texture coordinates.
This basic effect is reused by the effects below. The correspondence can be changed in either the vertex shader or the fragment shader; we cover the vertex-shader approach here and the fragment-shader approach in the later effects.
2. Code implementation
Vertex shader code:
attribute vec4 Position;
attribute vec2 TextureCoords;
varying vec2 TextureCoordsVarying;
uniform float Time;
const float PI = 3.1415926;
void main (void) {
float duration = 0.6;
float maxAmplitude = 0.3;
float time = mod(Time, duration);
float amplitude = 1.0 + maxAmplitude * abs(sin(time * (PI / duration)));
gl_Position = vec4(Position.x * amplitude, Position.y * amplitude, Position.zw);
TextureCoordsVarying = TextureCoords;
}
Here, duration is the length of one scaling cycle, and mod(Time, duration) maps the incoming Time into that cycle, so time ranges from 0 to 0.6. amplitude is the scale factor; PI is introduced so we can use the sine function to keep amplitude in the range 1.0 to 1.3, varying over time.
vec4(Position.x * amplitude, Position.y * amplitude, Position.zw) is the key to the magnification: multiplying the x and y of the vertex coordinates by the amplitude factor produces the stretching effect.
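As a sanity check, the amplitude curve can be reproduced on the CPU. This Python sketch (my own, mirroring the shader's constants) confirms it oscillates between 1.0 and 1.3:

```python
import math

PI = 3.1415926  # same constant as the shader

def scale_amplitude(time, duration=0.6, max_amplitude=0.3):
    """Mirror of the vertex shader: 1.0 at the cycle's endpoints,
    1.0 + max_amplitude at its midpoint."""
    t = math.fmod(time, duration)
    return 1.0 + max_amplitude * abs(math.sin(t * (PI / duration)))
```

The abs() around the sine keeps the factor at or above 1.0, so the image only ever grows from its resting size, never shrinks below it.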
2. Out-of-Body
1. Final effect
The out-of-body effect looks like a superposition of two layers: the upper layer enlarges while its opacity decreases over time. Magnification is used here as well, but this time via the fragment shader.
2. Code implementation
Fragment shader code:
precision highp float;
uniform sampler2D Texture;
varying vec2 TextureCoordsVarying;
uniform float Time;
void main (void) {
float duration = 0.7;
float maxAlpha = 0.4;
float maxScale = 1.8;
float progress = mod(Time, duration) / duration; // 0 ~ 1
float alpha = maxAlpha * (1.0 - progress);
float scale = 1.0 + (maxScale - 1.0) * progress;
float weakX = 0.5 + (TextureCoordsVarying.x - 0.5) / scale;
float weakY = 0.5 + (TextureCoordsVarying.y - 0.5) / scale;
vec2 weakTextureCoords = vec2(weakX, weakY);
vec4 weakMask = texture2D(Texture, weakTextureCoords);
vec4 mask = texture2D(Texture, TextureCoordsVarying);
gl_FragColor = mask * (1.0 - alpha) + weakMask * alpha;
}
First, the magnification. The key is the calculation of weakX and weakY: 0.5 + (TextureCoordsVarying.x - 0.5) / scale shrinks the distance between the texture coordinate and the center point (0.5, 0.5) by a given factor. This time we change the texture coordinates and leave the vertex coordinates unchanged, achieving the same stretching effect.
Then the two-layer superposition. From the calculation above we get two texture color values, weakMask and mask, where weakMask is the enlarged version of mask.
The formula for superimposing the two colors is: final color = base color * (1 - a%) + blend color * a%. It comes from the “normal” mode among the blend modes.
This formula gives the color of a translucent layer composited over an opaque one, hence the expression mask * (1.0 - alpha) + weakMask * alpha.
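Both ideas reduce to two one-liners. This Python sketch (helper names are mine) shows the coordinate mapping and the normal blend formula used by the fragment shader:

```python
def zoom_coord(coord, scale):
    """Pull a texture coordinate toward the center (0.5) by 1/scale;
    sampling at the shrunken coordinates shows an enlarged image."""
    return 0.5 + (coord - 0.5) / scale

def blend_normal(base, blend, alpha):
    """Normal blend mode, per channel: base * (1 - a) + blend * a."""
    return base * (1.0 - alpha) + blend * alpha
```

The center (0.5) is the fixed point of the mapping, which is why the image enlarges around its middle rather than a corner.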
3. Jitter
1. Final effect
“Jitter” is Douyin's classic color-offset effect, and it is actually quite simple to implement. Besides the color offset, you can also see a slight magnification.
2. Code implementation
Fragment shader code:
precision highp float;
uniform sampler2D Texture;
varying vec2 TextureCoordsVarying;
uniform float Time;
void main (void) {
float duration = 0.7;
float maxScale = 1.1;
float offset = 0.02;
float progress = mod(Time, duration) / duration; // 0 ~ 1
vec2 offsetCoords = vec2(offset, offset) * progress;
float scale = 1.0 + (maxScale - 1.0) * progress;
vec2 ScaleTextureCoords = vec2(0.5, 0.5) + (TextureCoordsVarying - vec2(0.5, 0.5)) / scale;
vec4 maskR = texture2D(Texture, ScaleTextureCoords + offsetCoords);
vec4 maskB = texture2D(Texture, ScaleTextureCoords - offsetCoords);
vec4 mask = texture2D(Texture, ScaleTextureCoords);
gl_FragColor = vec4(maskR.r, mask.g, maskB.b, mask.a);
}
The zoom here is the same as above, so let's focus on the color offset. The color offset separates the three color channels and applies different positional offsets to the red and blue channels; the code is straightforward to follow.
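The channel split can be sketched on the CPU as well. In this Python snippet (names are mine), `sample` stands in for texture2D, mapping a coordinate to an (r, g, b, a) tuple:

```python
def rgb_split(sample, coords, offset):
    """Sample the red channel ahead of, and the blue channel behind,
    the base coordinate, keeping green and alpha in place."""
    x, y = coords
    r = sample((x + offset, y + offset))[0]
    _, g, _, a = sample((x, y))
    b = sample((x - offset, y - offset))[2]
    return (r, g, b, a)

# Toy texture whose every channel equals the x coordinate.
tex = lambda c: (c[0], c[0], c[0], 1.0)
```

Pulling r, g, and b from three slightly different positions is all it takes to produce the chromatic-fringe look.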
4. Flash White
1. Final effect
Honestly, “flash white” doesn't look cool at all, and staring at it for long can hurt your eyes. It is also simple to implement: just add a white layer whose opacity changes over time.
2. Code implementation
Fragment shader code:
precision highp float;
uniform sampler2D Texture;
varying vec2 TextureCoordsVarying;
uniform float Time;
const float PI = 3.1415926;
void main (void) {
float duration = 0.6;
float time = mod(Time, duration);
vec4 whiteMask = vec4(1.0, 1.0, 1.0, 1.0);
float amplitude = abs(sin(time * (PI / duration)));
vec4 mask = texture2D(Texture, TextureCoordsVarying);
gl_FragColor = mask * (1.0 - amplitude) + whiteMask * amplitude;
}
In the “out-of-body” example above, we have seen how to achieve the superposition of two layers. Here we just need to create a white layer whiteMask and calculate the final color value based on the current transparency.
5. Glitch
1. Final effect
Finally, a slightly more complex effect: the “glitch”, which looks like tearing plus a faint color offset. We have already implemented the color offset above, so here we mainly explain the tearing.
The idea: offset each row of pixels horizontally by a random distance in the range -1 to 1 (in texture coordinates). But if the whole image were offset by such large values, we might not even recognize the original. So we set a threshold: jitters below it are applied in full, while jitters beyond it are multiplied by a reduction factor.
The result is that most rows are only slightly offset and just a few rows are offset noticeably.
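The per-row decision just described can be expressed compactly. Here is a Python sketch (my own; names are assumptions), where `jitter` is the row's random value in -1 ~ 1 and `amplitude` is the current strength of the effect:

```python
def row_offset(jitter, amplitude, max_jitter=0.06):
    """Per-row horizontal tear: jitters under the threshold are applied
    in full, the rest are scaled down hard by the reduction factor."""
    if abs(jitter) < max_jitter * amplitude:
        return jitter
    return jitter * amplitude * 0.006
```

Either way, the offset stays small: below the threshold it is under max_jitter; above it, a -1 ~ 1 jitter shrinks to at most 0.006 of the texture width.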
2. Code implementation
Fragment shader code:
precision highp float;
uniform sampler2D Texture;
varying vec2 TextureCoordsVarying;
uniform float Time;
const float PI = 3.1415926;
float rand(float n) {
return fract(sin(n) * 43758.5453123);
}
void main (void) {
float maxJitter = 0.06;
float duration = 0.3;
float colorROffset = 0.01;
float colorBOffset = 0.025;
float time = mod(Time, duration * 2.0);
float amplitude = max(sin(time * (PI / duration)), 0.0);
float jitter = rand(TextureCoordsVarying.y) * 2.0 - 1.0; // -1 ~ 1
bool needOffset = abs(jitter) < maxJitter * amplitude;
float textureX = TextureCoordsVarying.x + (needOffset ? jitter : (jitter * amplitude * 0.006));
vec2 textureCoords = vec2(textureX, TextureCoordsVarying.y);
vec4 mask = texture2D(Texture, textureCoords);
vec4 maskR = texture2D(Texture, textureCoords + vec2(colorROffset * amplitude, 0.0));
vec4 maskB = texture2D(Texture, textureCoords + vec2(colorBOffset * amplitude, 0.0));
gl_FragColor = vec4(maskR.r, mask.g, maskB.b, mask.a);
}
The random per-row offset requires a random number but, unfortunately, GLSL has no built-in random function, so we implement one ourselves.
This float rand(float n) implementation looks magical, but it is a widely circulated snippet known as a “noise function”.
It is actually pseudo-random, essentially a hash function, but here we can use it as a random function whose return value lies in the range 0 to 1.
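For a feel of how it behaves, here is the same function in Python; for these values GLSL's fract(x), defined as x - floor(x), corresponds to Python's x % 1.0:

```python
import math

def rand(n):
    """fract(sin(n) * 43758.5453123): deterministic, but the huge
    multiplier scatters nearby inputs across [0, 1)."""
    return (math.sin(n) * 43758.5453123) % 1.0

values = [rand(i / 1000.0) for i in range(1000)]
```

The same input always yields the same output, which is exactly what we need here: every fragment on a given row computes the same jitter.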
6. Hallucination
1. Final effect
The “hallucination” effect is the trickiest, and my result doesn't fully match the original: the real effect is composited from previous frames of the video, which is hard to simulate with a static image. Anyway, having tried my best, like it or not, let me explain my approach.
You can see that the effect is a combination of afterimages and a color offset.
The afterimage is relatively straightforward: as the image moves, a new layer is created at the current position at fixed intervals, and each layer's opacity decreases over time. Within one movement cycle, many layers of different opacity are superimposed, producing the afterimage.
Then the color offset. Notice that the object moves with blue leading and red trailing. So the whole process can be understood as: as it moves, the image leaves some of its red-channel values behind at intervals, and those red values gradually recover over time.
2. Code implementation
Fragment shader code:
precision highp float;
uniform sampler2D Texture;
varying vec2 TextureCoordsVarying;
uniform float Time;
const float PI = 3.1415926;
const float duration = 2.0;
vec4 getMask(float time, vec2 textureCoords, float padding) {
vec2 translation = vec2(sin(time * (PI * 2.0 / duration)),
cos(time * (PI * 2.0 / duration)));
vec2 translationTextureCoords = textureCoords + padding * translation;
vec4 mask = texture2D(Texture, translationTextureCoords);
return mask;
}
float maskAlphaProgress(float currentTime, float hideTime, float startTime) {
float time = mod(duration + currentTime - startTime, duration);
return min(time, hideTime);
}
void main (void) {
float time = mod(Time, duration);
float scale = 1.2;
float padding = 0.5 * (1.0 - 1.0 / scale);
vec2 textureCoords = vec2(0.5, 0.5) + (TextureCoordsVarying - vec2(0.5, 0.5)) / scale;
float hideTime = 0.9;
float timeGap = 0.2;
float maxAlphaR = 0.5; // max R
float maxAlphaG = 0.05; // max G
float maxAlphaB = 0.05; // max B
vec4 mask = getMask(time, textureCoords, padding);
float alphaR = 1.0; // R
float alphaG = 1.0; // G
float alphaB = 1.0; // B
vec4 resultMask = vec4(0.0, 0.0, 0.0, 0.0);
for (float f = 0.0; f < duration; f += timeGap) {
float tmpTime = f;
vec4 tmpMask = getMask(tmpTime, textureCoords, padding);
float tmpAlphaR = maxAlphaR - maxAlphaR * maskAlphaProgress(time, hideTime, tmpTime) / hideTime;
float tmpAlphaG = maxAlphaG - maxAlphaG * maskAlphaProgress(time, hideTime, tmpTime) / hideTime;
float tmpAlphaB = maxAlphaB - maxAlphaB * maskAlphaProgress(time, hideTime, tmpTime) / hideTime;
resultMask += vec4(tmpMask.r * tmpAlphaR,
tmpMask.g * tmpAlphaG,
tmpMask.b * tmpAlphaB,
1.0);
alphaR -= tmpAlphaR;
alphaG -= tmpAlphaG;
alphaB -= tmpAlphaB;
}
resultMask += vec4(mask.r * alphaR, mask.g * alphaG, mask.b * alphaB, 1.0);
gl_FragColor = resultMask;
}
Judging by the line count, this is the most complex effect of all. To create the afterimages, we first move the image in a circle over time.
vec4 getMask(float time, vec2 textureCoords, float padding) computes the image's position at a given moment, which lets us generate a new layer at fixed intervals.
float maskAlphaProgress(float currentTime, float hideTime, float startTime) computes, at currentTime, the opacity of the layer created at startTime.
maxAlphaR, maxAlphaG, and maxAlphaB specify the initial opacity of a new layer's three color channels. Because the final effect leaves red residue, the red channel retains most of its value.
Finally, the superposition, similar to the two-layer case above: in the for loop, each channel of each layer is weighted by its opacity and accumulated into the final color, resultMask.
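To check the weighting scheme, this Python sketch (my own; names are assumptions) reproduces the loop for a single channel: the weight each ghost layer receives, plus the leftover weight given to the current frame:

```python
def ghost_weights(time, duration=2.0, time_gap=0.2,
                  hide_time=0.9, max_alpha=0.5):
    """Mirror of the fragment shader's loop for one channel: a layer
    spawned at start time t gets max_alpha scaled down by how far it is
    through its hide_time; the base image receives what is left over."""
    weights = []
    t = 0.0
    while t < duration - 1e-9:
        # maskAlphaProgress: age of the layer, clamped to hide_time
        progress = min((duration + time - t) % duration, hide_time)
        weights.append(max_alpha - max_alpha * progress / hide_time)
        t += time_gap
    base_weight = 1.0 - sum(weights)
    return weights, base_weight
```

With duration 2.0 and timeGap 0.2 the loop runs ten times, so at any moment up to ten ghost layers contribute to the final color.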
Note: the iOS simulator emulates GPU functionality on the CPU, so the code above runs very slowly there. The last effect in particular involves so much computation that the simulator cannot display it at all. If you want to run the code, it is best to use a real device.
Source Code
Check out the full code on GitHub.
References
- When an Android Developer Goes Crazy Playing Douyin (II)