In the previous article, I gave an overview of short video recording, which mainly covered speeding up and slowing down audio and video. In this article, I will introduce how Douyin's video effects are implemented.

1. Special effects overview


[image: Douyin's list of special effects]

There are currently nine video effects on Douyin, and this article will introduce the implementation of the first six. One might ask why the last three special effects were left out.

Because I’m lazy, of course.


Bet you didn’t expect that!

2. “Out-of-body experience”

The effect of Douyin is as follows:


[GIF: Douyin's out-of-body effect]

My implementation looks like this:


[GIF: my out-of-body implementation]

Code implementation

By observing the effect of Douyin, it can be seen that there are two layers, one is the original image of the video, and the other is a layer enlarged from the center and gradually reduced in transparency. The key codes are as follows.

2.1 Vertex shader
uniform mat4 uTexMatrix;
attribute vec2 aPosition;
attribute vec4 aTextureCoord;
varying vec2 vTextureCoord;
uniform mat4 uMvpMatrix;

void main(){
    gl_Position = uMvpMatrix * vec4(aPosition, 0.1, 1.0);
    vTextureCoord = (uTexMatrix * aTextureCoord).xy;
}
2.2 Fragment shader
#extension GL_OES_EGL_image_external : require
precision mediump float;
varying vec2 vTextureCoord;
uniform samplerExternalOES uTexture;
uniform float uAlpha;
void main(){
    gl_FragColor = vec4(texture2D(uTexture,vTextureCoord).rgb,uAlpha);
}

These two shaders are fairly simple; there is nothing special going on, they just render the texture as-is.

2.3 Animation Code
private float mProgress = 0.0f;
private int mFrames = 0;
private static final int mMaxFrames = 15;
private static final int mSkipFrames = 5;
private float[] mMvpMatrix = new float[16];
// OpenGL uniform locations
private int mMvpMatrixLocation;
private int mAlphaLocation;

public void onDraw(int textureId, float[] texMatrix) {
    // Enable blending
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);
    mProgress = (float) mFrames / mMaxFrames;
    if (mProgress > 1f) {
        mProgress = 0f;
    }
    mFrames++;
    if (mFrames > mMaxFrames + mSkipFrames) {
        mFrames = 0;
    }
    Matrix.setIdentityM(mMvpMatrix, 0);
    glUniformMatrix4fv(mMvpMatrixLocation, 1, false, mMvpMatrix, 0);
    // Opacity of the bottom layer
    float backAlpha = 1f;
    // Opacity of the enlarged layer
    float alpha = 0f;
    if (mProgress > 0f) {
        alpha = 0.2f - mProgress * 0.2f;
        backAlpha = 1 - alpha;
    }
    glUniform1f(mAlphaLocation, backAlpha);
    glUniformMatrix4fv(mUniformTexMatrixLocation, 1, false, texMatrix, 0);
    // Upload the vertex shader data: texture coordinates and vertex coordinates
    mRendererInfo.getVertexBuffer().position(0);
    glVertexAttribPointer(mAttrPositionLocation, 2, GL_FLOAT, false, 0, mRendererInfo.getVertexBuffer());
    mRendererInfo.getTextureBuffer().position(0);
    glVertexAttribPointer(mAttrTexCoordLocation, 2, GL_FLOAT, false, 0, mRendererInfo.getTextureBuffer());
    // Draw the bottom layer
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textureId);
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
    if (mProgress > 0f) {
        // Draw the enlarged, fading layer
        glUniform1f(mAlphaLocation, alpha);
        float scale = 1.0f + 1f * mProgress;
        Matrix.scaleM(mMvpMatrix, 0, scale, scale, scale);
        glUniformMatrix4fv(mMvpMatrixLocation, 1, false, mMvpMatrix, 0);
        GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
    }
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, 0);
    GLES20.glUseProgram(0);
    glDisable(GL_BLEND);
}

The code above ends up drawing the out-of-body effect.

3. “Shake”

The effect of Douyin is as follows:


[GIF: Douyin's shake effect]

My implementation looks like this:


[GIF: my shake implementation]

Code implementation

To build this effect, we first analyze Douyin's version. It consists of two parts:

  • Scaling up from the center
  • Color offset

After we pause the video and take a screenshot, we can see the following image:


[screenshot: a paused frame of the shake effect]

As the picture shows, the original text on the keyboard turns blue, while extra green and red copies of the characters appear toward the upper left and lower right corners respectively. So the color separation consists of splitting out a pixel's RGB values.

3.1 Vertex shader
uniform mat4 uTexMatrix;
attribute vec2 aPosition;
attribute vec4 aTextureCoord;
varying vec2 vTextureCoord;
uniform mat4 uMvpMatrix;

void main(){
    gl_Position = uMvpMatrix * vec4(aPosition, 0.1, 1.0);
    vTextureCoord = (uTexMatrix * aTextureCoord).xy;
}
3.2 Fragment shader
#extension GL_OES_EGL_image_external : require
precision mediump float;
varying vec2 vTextureCoord;
uniform samplerExternalOES uTexture;
// Color offset
uniform float uTextureCoordOffset;

void main(){
    vec4 blue = texture2D(uTexture, vTextureCoord);
    vec4 green = texture2D(uTexture, vec2(vTextureCoord.x + uTextureCoordOffset, vTextureCoord.y + uTextureCoordOffset));
    vec4 red = texture2D(uTexture, vec2(vTextureCoord.x - uTextureCoordOffset, vTextureCoord.y - uTextureCoordOffset));
    gl_FragColor = vec4(red.x, green.y, blue.z, blue.w);
}

To implement the pixel offset, we must first understand that the fragment shader runs once for every pixel. The vTextureCoord in the code holds the current pixel's coordinates (x, y), where x and y each range from 0 to 1. To separate a pixel's colors, all we need to do is transform the coordinates passed to the texture2D function. For example, suppose there is a white pixel at (0.1, 0.1) and the current pixel is at (0.0, 0.0). If we want the green component of that white pixel to be displayed at the current pixel's position, we add 0.1 to the current pixel's x and y coordinates when sampling; the visual effect is that the white pixel appears shifted toward the upper left corner. The red offset works the same way in the opposite direction. Taking the green value from one offset neighbor and the red value from the opposite one, and combining them with the current pixel's blue value, produces the final image.
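Concretely, the example above boils down to a sample like this (a sketch only, with the 0.1 offset hard-coded; the real shader uses the uTextureCoordOffset uniform):

// Sketch: fetch the pixel offset by (0.1, 0.1) from the current fragment
// and keep only its green component at the current position.
vec4 neighbor = texture2D(uTexture, vTextureCoord + vec2(0.1, 0.1));
float green = neighbor.g;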

3.3 Animation key code
private float[] mMvpMatrix = new float[16];
private float mProgress = 0.0f;
private int mFrames = 0;
private static final int mMaxFrames = 8;
private static final int mSkipFrames = 4;

@Override
protected void onDraw(int textureId, float[] texMatrix) {
    mProgress = (float) mFrames / mMaxFrames;
    if (mProgress > 1f) {
        mProgress = 0f;
    }
    mFrames++;
    if (mFrames > mMaxFrames + mSkipFrames) {
        mFrames = 0;
    }
    float scale = 1.0f + 0.2f * mProgress;
    Matrix.setIdentityM(mMvpMatrix, 0);
    // Set the zoom factor
    Matrix.scaleM(mMvpMatrix, 0, scale, scale, 1.0f);
    glUniformMatrix4fv(mMvpMatrixLocation, 1, false, mMvpMatrix, 0);
    // The color offset grows with the animation progress
    float textureCoordOffset = 0.01f * mProgress;
    glUniform1f(mTextureCoordOffsetLocation, textureCoordOffset);
    super.onDraw(textureId, texMatrix);
}

4. “Glitch”

Effect of Douyin:


[GIF: Douyin's glitch effect]

My implementation:


[GIF: my glitch implementation]

My “glitch” effect is not a full reproduction; the animation parameters still need tuning.

Code implementation

To understand this effect, we first analyze the video frame by frame. Here is a screenshot:


[screenshot: a frame of the glitch effect]

Look closely at the picture and you can see that the glitch effect offsets the pixel values of entire rows by some distance, making the image look torn. The offset varies randomly along the y axis, which makes it look more natural. Watching the GIF, you can also see that besides the tearing there is a color offset. Color offset was already covered in the “shake” effect, so here we only need to deal with the tearing.

4.1 Vertex shader
uniform mat4 uTexMatrix;
attribute vec2 aPosition;
attribute vec4 aTextureCoord;
varying vec2 vTextureCoord;
uniform mat4 uMvpMatrix;

void main(){
    gl_Position = uMvpMatrix * vec4(aPosition, 0.1, 1.0);
    vTextureCoord = (uTexMatrix * aTextureCoord).xy;
}
4.2 Fragment shader
#extension GL_OES_EGL_image_external : require
precision highp float;
varying vec2 vTextureCoord;
uniform samplerExternalOES uTexture;
// x is the maximum horizontal offset, y is the tearing threshold
uniform vec2 uScanLineJitter;
// Color offset
uniform float uColorDrift;

float nrand(in float x, in float y){
    return fract(sin(dot(vec2(x, y), vec2(12.9898, 78.233))) * 43758.5453);
}

void main(){
    float u = vTextureCoord.x;
    float v = vTextureCoord.y;
    // A pseudo-random number between -1 and 1, derived from the row
    float jitter = nrand(v, 0.0) * 2.0 - 1.0;
    float drift = uColorDrift;
    float offsetParam = step(uScanLineJitter.y, abs(jitter));
    jitter = jitter * offsetParam * uScanLineJitter.x;
    vec4 color1 = texture2D(uTexture, fract(vec2(u + jitter, v)));
    vec4 color2 = texture2D(uTexture, fract(vec2(u + jitter + v * drift, v)));
    gl_FragColor = vec4(color1.r, color2.g, color1.b, 1.0);
}

Let's focus on the fragment shader code. The random function here is the nrand function.

fract(x) returns x - floor(x), that is, the fractional part of x; dot is the dot product; sin is the sine function. This combination is the classic GLSL one-liner for deriving a pseudo-random number from a coordinate.

As the code above shows, we first extract the x and y values of the current pixel, then use y to generate a random number:

float jitter = nrand(v, 0.0) * 2.0 - 1.0; // a number between -1 and 1

Next, we work out whether the pixels in the current row should be offset at all:

// If the first argument is greater than the second, step returns 0.0; otherwise 1.0
float offsetParam = step(uScanLineJitter.y, abs(jitter));

This checks whether the row's random number exceeds a threshold: rows above the threshold are offset, the rest are left alone. By controlling this threshold we control how chaotic the frame looks: the lower the threshold, the more rows get torn.

Next comes the actual offset for the row of pixels:

// uScanLineJitter.x is the maximum offset
jitter = jitter * offsetParam * uScanLineJitter.x;
// Compute the final pixel values. fract() wraps the coordinate: if it goes
// below 0 the sample wraps to the right edge of the screen, and if it
// exceeds 1 it wraps to the left edge.
vec4 color1 = texture2D(uTexture, fract(vec2(u + jitter, v)));
vec4 color2 = texture2D(uTexture, fract(vec2(u + jitter + v * drift, v)));
4.3 Animation Code

The animation code is not pasted in full here; as with the previous effects, it is driven by the current frame number and updates these two uniforms (a rough sketch follows below):

// x is the maximum horizontal offset, y is the tearing threshold
uniform vec2 uScanLineJitter;
// Color offset
uniform float uColorDrift;

As for these two parameters: the larger uScanLineJitter.x, the farther rows are torn horizontally; and the lower the threshold uScanLineJitter.y, the more rows on screen get torn.
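Roughly, the per-frame update might look like this. This is a minimal sketch: the uniform locations and the constants (0.05, 0.3, 0.02) are placeholder assumptions, not tuned values from the original code.

// Sketch of the omitted animation code: drive the two uniforms
// from the current frame number, like the other effects do.
private int mFrames = 0;
private static final int mMaxFrames = 8;

@Override
protected void onDraw(int textureId, float[] texMatrix) {
    float progress = (float) mFrames / mMaxFrames;
    mFrames++;
    if (mFrames > mMaxFrames) {
        mFrames = 0;
    }
    // Assumed values: the tear distance grows to 0.05, while the threshold
    // drops from 1.0 (no tearing) toward 0.7 (many torn rows).
    glUniform2f(mScanLineJitterLocation, 0.05f * progress, 1.0f - 0.3f * progress);
    glUniform1f(mColorDriftLocation, 0.02f * progress);
    super.onDraw(textureId, texMatrix);
}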

5. “Zoom”

Effect of Douyin:


[GIF: Douyin's zoom effect]

My implementation:


[GIF: my zoom implementation]

Code implementation

This effect is relatively simple: it just zooms in and out in a loop.

5.1 Vertex shader
uniform mat4 uTexMatrix;
attribute vec2 aPosition;
attribute vec4 aTextureCoord;
varying vec2 vTextureCoord;
// The zoom matrix
uniform mat4 uMvpMatrix;

void main(){
    gl_Position = uMvpMatrix * vec4(aPosition, 0.1, 1.0);
    vTextureCoord = (uTexMatrix * aTextureCoord).xy;
}
5.2 Fragment shader
#extension GL_OES_EGL_image_external : require
precision mediump float;
varying vec2 vTextureCoord;
uniform samplerExternalOES uTexture;
void main(){
    gl_FragColor = texture2D(uTexture,vTextureCoord);
}
5.3 Animation Code

The animation code is also simple: it just drives the zoom matrix in and out. The key code is as follows:

private int mScaleMatrixLocation;
// The maximum extra zoom
private static final float mScale = 0.3f;
private int mFrames;
private int mMaxFrames = 14;
private int mMiddleFrames = mMaxFrames / 2;
private float[] mScaleMatrix = new float[16];

public void onDraw(int textureId, float texMatrix[]) {
    // Initialize the matrix
    Matrix.setIdentityM(mScaleMatrix, 0);
    float progress;
    if (mFrames <= mMiddleFrames) {
        progress = mFrames * 1.0f / mMiddleFrames;
    } else {
        progress = 2f - mFrames * 1.0f / mMiddleFrames;
    }
    float scale = 1f + mScale * progress;
    Matrix.scaleM(mScaleMatrix, 0, scale, scale, scale);
    glUniformMatrix4fv(mScaleMatrixLocation, 1, false, mScaleMatrix, 0);
    mFrames++;
    if (mFrames > mMaxFrames) {
        mFrames = 0;
    }
    ...
}

6. “Flash white”

Effect of Douyin:


[GIF: Douyin's flash-white effect]

My implementation:


[GIF: my flash-white implementation]

Code implementation

This effect is quite simple. It looks like camera overexposure and is implemented by adding a fixed value to each of the RGB components.

6.1 Vertex shader
uniform mat4 uTexMatrix;
attribute vec2 aPosition;
attribute vec4 aTextureCoord;
varying vec2 vTextureCoord;

void main(){
    gl_Position = vec4(aPosition, 0.1, 1.0);
    vTextureCoord = (uTexMatrix * aTextureCoord).xy;
}
6.2 Fragment shader
#extension GL_OES_EGL_image_external : require
precision mediump float;
varying vec2 vTextureCoord;
uniform samplerExternalOES uTexture;
// The value added to each color component
uniform float uAdditionalColor;

void main(){
    vec4 color = texture2D(uTexture, vTextureCoord);
    gl_FragColor = vec4(color.r + uAdditionalColor,
                        color.g + uAdditionalColor,
                        color.b + uAdditionalColor,
                        color.a);
}
6.3 Animation Code
public void onDraw(int textureId, float[] texMatrix) {
    float progress;
    if (mFrames <= mHalfFrames) {
        progress = mFrames * 1.0f / mHalfFrames;
    } else {
        progress = 2.0f - mFrames * 1.0f / mHalfFrames;
    }
    mFrames++;
    if (mFrames > mMaxFrames) {
        mFrames = 0;
    }
    glUniform1f(mAdditionColorLocation, progress);
    // ... draw
}

7. “Hallucinations”

Effect of Douyin:


[GIF: Douyin's hallucination effect]

My implementation:


[GIF: my hallucination implementation]

Code implementation

When I first saw this effect, I was a little confused because I didn’t have a clue and I just wanted to throw the computer away.



Then, frame by frame, I found a clue. The effect can be summed up in three parts:

  • A filter
  • Ghosting
  • Color separation of the ghosting
7.1 Filter

Compare these two pictures and you'll get the idea.

Before the filter


[image: frame before the filter]

After the filter


[image: frame after the filter]

As you can see, with the hallucination effect applied the image takes on a darker, bluish tone. How do you reproduce a filter like this? There are generally two options: ask a designer to help you recreate it, or decompile the APK and search the code. I chose the latter. After unzipping the Douyin APK and searching through its resource files, I found an image, lookup_vertigo.png, and that was it:


[image: lookup_vertigo.png, the color lookup table]

What is this? It's a color lookup table (LUT). A filter can either be computed step by step in code, or baked into a LUT image like this one and simply looked up when needed. This particular LUT is 512 x 512, an 8 x 8 grid of 64 x 64 tiles: the blue channel of the input color selects a tile, and the red and green channels pick the pixel within that tile (the code interpolates between the two nearest tiles). The code for applying a LUT file is as follows:

// The texture for the LUT file
uniform sampler2D uTexture2;

vec4 lookup(in vec4 textureColor){
    mediump float blueColor = textureColor.b * 63.0;

    mediump vec2 quad1;
    quad1.y = floor(floor(blueColor) / 8.0);
    quad1.x = floor(blueColor) - (quad1.y * 8.0);

    mediump vec2 quad2;
    quad2.y = floor(ceil(blueColor) / 8.0);
    quad2.x = ceil(blueColor) - (quad2.y * 8.0);

    highp vec2 texPos1;
    texPos1.x = (quad1.x * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.r);
    texPos1.y = (quad1.y * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.g);
    texPos1.y = 1.0 - texPos1.y;

    highp vec2 texPos2;
    texPos2.x = (quad2.x * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.r);
    texPos2.y = (quad2.y * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.g);
    texPos2.y = 1.0 - texPos2.y;

    lowp vec4 newColor1 = texture2D(uTexture2, texPos1);
    lowp vec4 newColor2 = texture2D(uTexture2, texPos2);

    lowp vec4 newColor = mix(newColor1, newColor2, fract(blueColor));
    return newColor;
}

Converting our video frames through this LUT file produces the hallucination filter's look. But there was a problem: the regular sampler2D texture could not be used together with the samplerExternalOES texture; the GLSL code simply would not work with both textures present. So what's the solution? If this were only a video preview there would be many options, for example using the Camera PreviewCallback to get each frame's byte array (YUV data), converting the YUV data to RGB, and then uploading the RGB data as a texture for display. This works, but the data conversions make it inefficient. Is there a more elegant and efficient solution? The answer is the FBO.

In the OpenGL rendering pipeline, geometry and textures are transformed and tested many times before finally being displayed on the screen as two-dimensional pixels. The final rendering destination of the OpenGL pipeline is called the framebuffer: a collection of two-dimensional buffers used by OpenGL, namely the color buffer, depth buffer, stencil buffer, and accumulation buffer. Normally the framebuffer is created and managed entirely by the window system and merely used by OpenGL; this default is called the window-system-provided framebuffer. The GL_EXT_framebuffer_object extension adds an interface for creating additional, non-displayable framebuffer objects, called application-created framebuffers to distinguish them from the default. Using a framebuffer object (FBO), OpenGL can redirect its output to an application-created framebuffer instead of the traditional window-system one, entirely under OpenGL's control.

To sum up, an FBO is like a canvas in memory. We can attach a texture to this canvas, draw our content onto it first, and then use that texture to do whatever we want with the drawn content.

The use of FBO is explained below.
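In outline, the render-to-texture flow is just bind, draw, unbind. Here is a minimal sketch using the RenderBuffer wrapper shown in section 7.4 (drawVideoFrame is a hypothetical stand-in for the actual draw call):

mRenderBuffer.bind();                  // redirect rendering into the FBO
drawVideoFrame(textureId, texMatrix);  // draw the OES video frame off-screen
mRenderBuffer.unbind();                // back to the default framebuffer

// The frame is now available as a regular GL_TEXTURE_2D, which can be
// sampled in the same program as the sampler2D LUT texture.
int frameTexture = mRenderBuffer.getTextureId();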

7.2 Ghosting

One of the most striking traits of the "hallucination" effect is that objects moving across the screen leave trails behind them. How do we do that? With a little thought we arrive at the answer: keep the previous frame's content, make it partially transparent, and blend it with the current frame. Repeat this frame after frame and you get the trailing effect. And how do we keep the previous frame around? Again, the answer is the FBO.
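Ignoring the per-channel weights for now (the next section works them out), the blending step amounts to something like this. It is only a sketch: the 0.8 decay factor is a placeholder value, and uTexture0/uTexture1 follow the naming of the full shader shown later.

vec4 currentFrame = texture2D(uTexture0, vTextureCoord);
// The previous blended output, kept around in an FBO texture
vec4 lastFrame = texture2D(uTexture1, vTextureCoord);
// mix(a, b, t) = a * (1.0 - t) + b * t: keep 80% of the trail each frame
gl_FragColor = mix(currentFrame, lastFrame, 0.8);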

7.3 Color separation of the ghosting

This might be a little confusing, but if you look at the screenshot, you’ll see.


[screenshot: color separation of the ghosting]

As you can see, the trail left by the pen in the screenshot is multicolored.

How do we achieve this? When we blend the current frame with the previous one we are manipulating the RGB components of every pixel, so the rainbow colors must originate there. There has to be a blending formula of the form:

vec4 currentFrame;
vec4 lastFrame;
gl_FragColor = vec4(a1 * currentFrame.r + a2 * lastFrame.r,
                    b1 * currentFrame.g + b2 * lastFrame.g,
                    c1 * currentFrame.b + c2 * lastFrame.b,
                    1.0);

All we have to do is figure out the values of a, b, and c in this formula. How? Here's a little trick: assume that currentFrame's RGB is all 0 and lastFrame's RGB is all 1 (and vice versa). You may ask what kind of crazy trick that is; it simply means making one frame black and the next one white. Without further ado, look at the pictures.

So let's film a white object on a black background, in this case a paper towel on a black mouse pad. It looks something like this:


[image: white paper towel on a black mouse pad]

We go frame by frame, and we get what we want pretty quickly.

First let’s look at the first three frames


[image: the first three frames]

As you can see, as the paper towel moves down, the newly exposed part is blue (there the current frame is white and the previous frame is black), while the part above it is orange (there the previous frame is white and the current frame is black). From this we can conclude that c1 = 1 and c2 = 0, because the blue value in the orange part is 0.

Look at the next few frames


[image: the next few frames]

You can also see that the trail at the very top ends up very red, so a1 must be a number close to 0 and a2 a number close to 1. Why can't a2 be exactly 1? Because if it were, the lastFrame color would stay on screen forever instead of fading away as the frames go by.

Once we have the a and c values, we can roughly guess the b values: try a few numbers and we arrive at our answer. The final formula is as follows:

gl_FragColor = vec4(0.95 * lastFrame.r + 0.05 * currentFrame.r,
                    currentFrame.g * 0.2 + lastFrame.g * 0.8,
                    currentFrame.b,
                    1.0);

This formula is pretty close.
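As a sanity check, plugging the black-and-white assumption back into this formula reproduces exactly what the screenshots showed:

// Newly exposed region: currentFrame = white (1,1,1), lastFrame = black (0,0,0)
//   -> vec4(0.05, 0.2, 1.0, 1.0), a strong blue
// Trail region: currentFrame = black, lastFrame = white
//   -> vec4(0.95, 0.8, 0.0, 1.0), an orange that fades on later frames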

7.4 Key Codes
private RenderBuffer mRenderBuffer;
private RenderBuffer mRenderBuffer2;
private RenderBuffer mRenderBuffer3;
private int mLutTexture;
private int mCurrentFrameProgram;
private int mLastFrameProgram;
private boolean mFirst = true;

@Override
public void draw(int textureId, float[] texMatrix, int canvasWidth, int canvasHeight) {
    if (mRenderBuffer == null) {
        mRenderBuffer = new RenderBuffer(GL_TEXTURE8, canvasWidth, canvasHeight);
        mRenderBuffer2 = new RenderBuffer(GL_TEXTURE9, canvasWidth, canvasHeight);
        mRenderBuffer3 = new RenderBuffer(GL_TEXTURE10, canvasWidth, canvasHeight);
        mLastFrameProgram = GLUtils.buildProgram(FileUtils.readFromRaw(R.raw.vertex_common),
                FileUtils.readFromRaw(R.raw.fragment_common));
        mCurrentFrameProgram = GLUtils.buildProgram(FileUtils.readFromRaw(R.raw.vertex_common),
                FileUtils.readFromRaw(R.raw.fragment_current_frame));
        mLutTexture = GLUtils.genLutTexture();
        android.opengl.GLUtils.texImage2D(GL_TEXTURE_2D, 0,
                BitmapFactory.decodeResource(AppProfile.getContext().getResources(), R.raw.lookup_vertigo), 0);
    }
    // Draw the current video content (samplerExternalOES) into the first buffer
    mRenderBuffer.bind();
    super.draw(textureId, texMatrix, canvasWidth, canvasHeight);
    mRenderBuffer.unbind();
    // Draw the current frame to the screen
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    drawCurrentFrame();
    // Save the blended result of the current frame into buffer 3
    mRenderBuffer3.bind();
    drawCurrentFrame();
    mRenderBuffer3.unbind();
    // Copy buffer 3 into buffer 2; buffer 2 serves as "last frame" next time
    mRenderBuffer2.bind();
    drawToBuffer();
    mRenderBuffer2.unbind();
    mFrames++;
    mFirst = false;
}

private void drawCurrentFrame() {
    glUseProgram(mCurrentFrameProgram);
    int textureId = mRenderBuffer.getTextureId();
    setup(mCurrentFrameProgram, new int[]{textureId,
            mFirst ? textureId : mRenderBuffer2.getTextureId(), mLutTexture});
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
}

private void drawToBuffer() {
    glUseProgram(mLastFrameProgram);
    setup(mLastFrameProgram, new int[]{mRenderBuffer3.getTextureId()});
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
}

private void setup(int programId, int[] textureId) {
    glUseProgram(programId);
    int aPositionLocation = glGetAttribLocation(programId, "aPosition");
    int aTexCoordLocation = glGetAttribLocation(programId, "aTextureCoord");
    mRendererInfo.getVertexBuffer().position(0);
    glEnableVertexAttribArray(aPositionLocation);
    glVertexAttribPointer(aPositionLocation, 2, GL_FLOAT, false, 0, mRendererInfo.getVertexBuffer());
    mRendererInfo.getTextureBuffer().position(0);
    glEnableVertexAttribArray(aTexCoordLocation);
    glVertexAttribPointer(aTexCoordLocation, 2, GL_FLOAT, false, 0, mRendererInfo.getTextureBuffer());
    for (int i = 0; i < textureId.length; i++) {
        // Bind each texture to unit i and point the uniform uTexture<i> at it
        int textureLocation = glGetUniformLocation(programId, "uTexture" + i);
        glActiveTexture(GL_TEXTURE0 + i);
        glBindTexture(GLES20.GL_TEXTURE_2D, textureId[i]);
        glUniform1i(textureLocation, i);
    }
}

Framebuffer code

public class RenderBuffer {
    private int mTextureId;
    private int mActiveTextureUnit;
    private int mRenderBufferId;
    private int mFrameBufferId;
    private int mWidth, mHeight;

    public RenderBuffer(int activeTextureUnit, int width, int height) {
        this.mActiveTextureUnit = activeTextureUnit;
        this.mWidth = width;
        this.mHeight = height;
        int[] buffer = new int[1];
        GLES20.glActiveTexture(activeTextureUnit);
        mTextureId = GLUtils.genTexture();
        IntBuffer texBuffer =
                ByteBuffer.allocateDirect(width * height * 4).order(ByteOrder.nativeOrder()).asIntBuffer();
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, texBuffer);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
        GLES20.glTexParameterf(GLES20.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
        // Generate the framebuffer
        GLES20.glGenFramebuffers(1, buffer, 0);
        mFrameBufferId = buffer[0];
        // Bind the framebuffer
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFrameBufferId);
        // Generate the renderbuffer
        GLES20.glGenRenderbuffers(1, buffer, 0);
        mRenderBufferId = buffer[0];
        // Bind the renderbuffer
        GLES20.glBindRenderbuffer(GLES20.GL_RENDERBUFFER, mRenderBufferId);
        GLES20.glRenderbufferStorage(GLES20.GL_RENDERBUFFER, GLES20.GL_DEPTH_COMPONENT16, width, height);
    }

    public void bind() {
        GLES20.glViewport(0, 0, mWidth, mHeight);
        checkGlError("glViewport");
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFrameBufferId);
        checkGlError("glBindFramebuffer");
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
                GLES20.GL_TEXTURE_2D, mTextureId, 0);
        checkGlError("glFramebufferTexture2D");
        GLES20.glFramebufferRenderbuffer(GLES20.GL_FRAMEBUFFER, GLES20.GL_DEPTH_ATTACHMENT,
                GLES20.GL_RENDERBUFFER, mRenderBufferId);
        checkGlError("glFramebufferRenderbuffer");
    }

    public void unbind() {
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
    }

    public int getTextureId() {
        return mTextureId;
    }
}

Shader code

precision mediump float;
varying vec2 vTextureCoord;
// uTexture0 is the current frame, uTexture1 the previous frame, uTexture2 the LUT
uniform sampler2D uTexture0;
uniform sampler2D uTexture1;
uniform sampler2D uTexture2;

vec4 lookup(in vec4 textureColor){
    mediump float blueColor = textureColor.b * 63.0;

    mediump vec2 quad1;
    quad1.y = floor(floor(blueColor) / 8.0);
    quad1.x = floor(blueColor) - (quad1.y * 8.0);

    mediump vec2 quad2;
    quad2.y = floor(ceil(blueColor) / 8.0);
    quad2.x = ceil(blueColor) - (quad2.y * 8.0);

    highp vec2 texPos1;
    texPos1.x = (quad1.x * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.r);
    texPos1.y = (quad1.y * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.g);
    texPos1.y = 1.0 - texPos1.y;

    highp vec2 texPos2;
    texPos2.x = (quad2.x * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.r);
    texPos2.y = (quad2.y * 0.125) + 0.5/512.0 + ((0.125 - 1.0/512.0) * textureColor.g);
    texPos2.y = 1.0 - texPos2.y;

    lowp vec4 newColor1 = texture2D(uTexture2, texPos1);
    lowp vec4 newColor2 = texture2D(uTexture2, texPos2);

    lowp vec4 newColor = mix(newColor1, newColor2, fract(blueColor));
    return newColor;
}

void main(){
    vec4 lastFrame = texture2D(uTexture1, vTextureCoord);
    vec4 currentFrame = lookup(texture2D(uTexture0, vTextureCoord));
    gl_FragColor = vec4(0.95 * lastFrame.r + 0.05 * currentFrame.r,
                        currentFrame.g * 0.2 + lastFrame.g * 0.8,
                        currentFrame.b,
                        1.0);
}

Conclusion

That is roughly the story of Douyin's effects. If you want to post-process a video afterwards, you only need to record each effect's start and end time, process each frame accordingly in the background, and finally save the result to a new video file; just as during recording, this is an off-screen rendering operation. If you found this article helpful, feel free to like it; if you think it falls short somewhere, please point it out. Thank you!