Hello everyone, I am the Cang King.
Below is my series of related articles; feel free to browse them, and a like or a follow is appreciated.
My OpenGL and audio/video articles are collected under [OpenGL] Future Vision - MagicCamera3 utility open source library, where I record my experience writing this library.
When it comes to Douyin special effects, many readers will have seen this article:
When an Android developer goes crazy playing Douyin (2)
It walks through the implementation of six Douyin effects in Java. The demo it provides does not let you pick an effect; by default the illusion effect is applied after recording a short video.
It took me some time to port these six effects to C++ and OpenGL ES. They can now be selected in the preview interface, and 15-second short-video recording (with hardware-accelerated encoding) is already supported.
The entire framework here is written in C++, so if you're looking for a C++ framework with a variety of filter effects, MagicCamera3 is a good fit; forks and stars are welcome.
1. Out-of-body experience
The effect works by continuously reducing the opacity of the previous frame while superimposing it on the current frame, enlarged so that it appears to diffuse outward.
```cpp
void MagicSoulOutFilter::onDrawArraysPre() {
    // Two layers are drawn, so enable color blending
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_DST_ALPHA);
    // mMaxFrames is 15, so the soul-out layer is shown for at most 15 frames
    mProgress = (float) mFrames / mMaxFrames;
    // Reset progress to 0 once it exceeds 1
    if (mProgress > 1.0f) {
        mProgress = 0;
    }
    mFrames++;
    // mSkipFrames is 8, so the counter resets after 23 frames
    if (mFrames > mMaxFrames + mSkipFrames) {
        mFrames = 0;
    }
    // setIdentityM is ported from Java's Matrix.setIdentityM; it resets the matrix to identity
    setIdentityM(mMvpMatrix, 0);
    glUniformMatrix4fv(mMvpMatrixLocation, 1, GL_FALSE, mMvpMatrix);
    // Background opacity defaults to 1
    float backAlpha = 1;
    if (mProgress > 0) {
        // Compute the opacity of the soul-out layer
        alpha = 0.2f - mProgress * 0.2f;
        backAlpha = 1 - alpha;
    }
    // Set the background opacity (the soul-out layer is not drawn here)
    glUniform1f(mAlphaLocation, backAlpha);
}

void MagicSoulOutFilter::onDrawArraysAfter() {
    if (mProgress > 0) {
        // Opacity of the soul-out layer
        glUniform1f(mAlphaLocation, alpha);
        // Zoom factor grows with progress
        float scale = 1 + mProgress;
        // Enlarge via the MVP matrix
        scaleM(mMvpMatrix, 0, scale, scale, scale);
        glUniformMatrix4fv(mMvpMatrixLocation, 1, GL_FALSE, mMvpMatrix);
        glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    }
    // Turn blending back off
    glDisable(GL_BLEND);
}
```
A few things to notice about this effect:
1. Color blending must be enabled; otherwise the out-of-body image is overlaid directly and the result looks like a plain circular zoom.
2. The camera's capture rate is not constant; it varies with the FPS, so the effect occasionally appears to run fast or slow.
3. The orthographic matrix transforms are ported from Java's android.opengl.Matrix functions. Others have recommended GLM; I have not measured whether it is more efficient.
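The frame-counter logic above can be exercised off-GPU. Here is a minimal sketch of one update step, using the hypothetical constants `kMaxFrames = 15` and `kSkipFrames = 8` in place of the member fields from the listing:

```cpp
#include <cassert>

// Hypothetical constants standing in for mMaxFrames / mSkipFrames above
const int kMaxFrames = 15;
const int kSkipFrames = 8;

struct SoulOutState {
    int frames = 0;
    float progress = 0.0f;
    float alpha = 0.0f;      // opacity of the ghost layer
    float backAlpha = 1.0f;  // opacity of the background layer
};

// One per-frame update, following the same order as onDrawArraysPre
void update(SoulOutState& s) {
    s.progress = (float) s.frames / kMaxFrames;
    if (s.progress > 1.0f) s.progress = 0;  // ghost hidden during the skip window
    s.frames++;
    if (s.frames > kMaxFrames + kSkipFrames) s.frames = 0;
    s.backAlpha = 1.0f;
    if (s.progress > 0) {
        s.alpha = 0.2f - s.progress * 0.2f;  // ghost fades out as it grows
        s.backAlpha = 1.0f - s.alpha;
    }
}
```

Stepping this 24 times brings the counter back to 0, which matches note 2 above: the wall-clock length of one cycle depends entirely on the camera's FPS.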
2. Jitter
The jitter effect combines two basic transformations:
1. Enlargement
2. Color channel offset
The enlargement was already analyzed in the out-of-body effect above; it works the same way here.
The key part is the color offset. The fragment shader below computes it.
```glsl
#version 300 es
precision mediump float;
// Texture coordinate of each fragment
in vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform float uTextureCoordOffset;
out vec4 glFragColor;

void main() {
    // Sample the blue channel directly at the original coordinate
    vec4 blue = texture(inputImageTexture, textureCoordinate);
    // Green and red are the most visible channels, so offset them in opposite
    // directions; otherwise they would blend back together.
    // Shift up and to the left, then sample for the green channel
    vec4 green = texture(inputImageTexture, vec2(textureCoordinate.x + uTextureCoordOffset, textureCoordinate.y + uTextureCoordOffset));
    // Shift down and to the right, then sample for the red channel
    vec4 red = texture(inputImageTexture, vec2(textureCoordinate.x - uTextureCoordOffset, textureCoordinate.y - uTextureCoordOffset));
    // Combine the offset R and G samples with the original B; alpha stays 1
    glFragColor = vec4(red.r, green.g, blue.b, blue.a);
}
```
The offset and magnification values are computed before each draw:
```cpp
void MagicShakeEffectFilter::onDrawArraysPre() {
    mProgress = (float) mFrames / mMaxFrames;
    if (mProgress > 1) {
        mProgress = 0;
    }
    mFrames++;
    if (mFrames > mMaxFrames + mSkipFrames) {
        mFrames = 0;
    }
    float scale = 1.0f + 0.2f * mProgress;
    // Reset the matrix to identity
    setIdentityM(mMvpMatrix, 0);
    // Scale width and height around the original position
    scaleM(mMvpMatrix, 0, scale, scale, 1.0f);
    glUniformMatrix4fv(mMvpMatrixLocation, 1, GL_FALSE, mMvpMatrix);
    // Set the color offset
    float textureCoordOffset = 0.01f * mProgress;
    glUniform1f(mTextureCoordOffsetLocation, textureCoordOffset);
}
```
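Plugging frame numbers into the two formulas above shows how the shake ramps. A small sketch, assuming a hypothetical cycle length of `kMaxFrames = 8` (not the library's actual value):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical cycle length standing in for mMaxFrames
const int kMaxFrames = 8;

// Zoom factor for a given frame, as in onDrawArraysPre above
float shakeScale(int frames) {
    float progress = (float) frames / kMaxFrames;
    if (progress > 1.0f) progress = 0.0f;
    return 1.0f + 0.2f * progress;
}

// Texture-coordinate offset driving the RGB split
float shakeOffset(int frames) {
    float progress = (float) frames / kMaxFrames;
    if (progress > 1.0f) progress = 0.0f;
    return 0.01f * progress;
}
```

Both values grow linearly together over one cycle, so the image is biggest exactly when the color channels are furthest apart, then both snap back to neutral.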
3. Glitch
The effect has two parts:
1. Rows of pixels are offset by some distance, producing a torn look that varies along the y axis:
(1) Feed the row's y texture coordinate into a hash that generates a jitter value in (-1, 1).
(2) Use step to compare the jitter against the input y threshold, deciding whether that row is offset at all.
(3) Multiply the jitter by the input x offset strength.
(4) Sample the RGB components at the offset coordinates.
(5) Combine them into the final output.
2. Color channel offset
Below is the fragment shader. Beginners like me may not follow it at first glance; it helps to understand the GLSL built-in functions, color value offsets, and how visually sensitive each channel is.
```glsl
#version 300 es
precision highp float;
in vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
// x: jitter strength, y: jitter threshold
uniform vec2 uScanLineJitter;
// Color drift amount
uniform float uColorDrift;
out vec4 glFragColor;

// Pseudo-random value in [0, 1); fract(x) = x - floor(x),
// dot is the vector dot product, sin is the sine function
float nrand(in float x, in float y) {
    return fract(sin(dot(vec2(x, y), vec2(12.9898, 78.233))) * 43758.5453);
}

void main() {
    float u = textureCoordinate.x;
    float v = textureCoordinate.y;
    // Derive a random value in [0, 1) from y, then remap it to (-1, 1)
    float jitter = nrand(v, 0.0) * 2.0 - 1.0;
    float drift = uColorDrift;
    // step(edge, x) returns 0.0 when x is below edge, otherwise 1.0, so rows
    // whose |jitter| is below the threshold get no offset at all
    float offsetParam = step(uScanLineJitter.y, abs(jitter));
    jitter = jitter * offsetParam * uScanLineJitter.x;
    // Texture coordinates lie in [0, 1]; fract wraps coordinates that fall
    // outside, so pixels pushed off one edge reappear on the opposite edge
    vec4 color1 = texture(inputImageTexture, fract(vec2(u + jitter, v)));
    vec4 color2 = texture(inputImageTexture, fract(vec2(u + jitter + v * drift, v)));
    glFragColor = vec4(color1.r, color2.g, color1.b, 1.0);
}
```
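The `nrand` hash above is a common GLSL one-liner; it can be reproduced on the CPU to check its range and the thresholding behavior. A sketch in double precision (so values differ slightly from the GPU's float results):

```cpp
#include <cassert>
#include <cmath>

// fract(x) = x - floor(x), as in GLSL
double fract(double v) { return v - std::floor(v); }

// CPU port of the shader hash:
// fract(sin(dot(vec2(x, y), vec2(12.9898, 78.233))) * 43758.5453)
double nrand(double x, double y) {
    double d = x * 12.9898 + y * 78.233;  // dot product
    return fract(std::sin(d) * 43758.5453);
}

// GLSL step(edge, x): 0.0 when x < edge, else 1.0
double step(double edge, double x) { return x < edge ? 0.0 : 1.0; }

// Scan-line jitter for a row at texture coordinate v, as in main() above
double rowJitter(double v, double jitterStrength, double threshold) {
    double jitter = nrand(v, 0.0) * 2.0 - 1.0;  // remap [0, 1) to [-1, 1)
    double offsetParam = step(threshold, std::fabs(jitter));
    return jitter * offsetParam * jitterStrength;
}
```

With a threshold above 1.0 every row passes through untouched, which is why the `mThreshHoldSequence` values below start at 1.0 (a calm frame) and drop toward 0.5 (most rows torn).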
The offsets are picked per frame from precomputed sequences:
```cpp
void MagicGlitchFilter::onDrawArraysPre() {
    glUniform2f(mScanLineJitterLocation, mJitterSequence[mFrames], mThreshHoldSequence[mFrames]);
    glUniform1f(mColorDriftLocation, mDriftSequence[mFrames]);
    mFrames++;
    if (mFrames > mMaxFrames) {
        mFrames = 0;
    }
}

void MagicGlitchFilter::onDrawArraysAfter() {
}

void MagicGlitchFilter::onInit() {
    GPUImageFilter::onInit();
    mScanLineJitterLocation = glGetUniformLocation(mGLProgId, "uScanLineJitter");
    mColorDriftLocation = glGetUniformLocation(mGLProgId, "uColorDrift");
}

void MagicGlitchFilter::onInitialized() {
    GPUImageFilter::onInitialized();
    // Color drift values per frame
    mDriftSequence = new float[9]{0.0f, 0.03f, 0.032f, 0.035f, 0.03f, 0.032f, 0.031f, 0.029f, 0.025f};
    // Jitter x (strength) values per frame
    mJitterSequence = new float[9]{0.0f, 0.03f, 0.01f, 0.02f, 0.05f, 0.055f, 0.03f, 0.02f, 0.025f};
    // Jitter y (threshold) values per frame
    mThreshHoldSequence = new float[9]{1.0f, 0.965f, 0.9f, 0.9f, 0.9f, 0.6f, 0.8f, 0.5f, 0.5f};
}
```
Zoom
The zoom effect is the simplest: progress toward a middle frame drives the scale of the orthographic matrix up, then back down.
```cpp
void MagicScaleFilter::onDrawArraysPre() {
    if (mFrames <= mMiddleFrames) {
        // Progress grows over the first half of the cycle
        mProgress = mFrames * 1.0f / mMiddleFrames;
    } else {
        // and shrinks back over the second half
        mProgress = 2.0f - mFrames * 1.0f / mMiddleFrames;
    }
    setIdentityM(mMvpMatrix, 0);
    float scale = 1.0f + 0.3f * mProgress;
    // Enlarge via the MVP matrix
    scaleM(mMvpMatrix, 0, scale, scale, scale);
    glUniformMatrix4fv(mMvpMatrixLocation, 1, GL_FALSE, mMvpMatrix);
    mFrames++;
    if (mFrames > mMaxFrames) {
        mFrames = 0;
    }
}
```
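The up-then-down progress is simply a triangle wave over the frame counter. A sketch, assuming the hypothetical values `kMaxFrames = 16` with the peak at `kMiddleFrames = 8`:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical frame counts: a 16-frame cycle peaking at frame 8
const int kMaxFrames = 16;
const int kMiddleFrames = kMaxFrames / 2;

// Triangle-wave progress: 0 -> 1 over the first half, 1 -> 0 over the second
float zoomProgress(int frames) {
    if (frames <= kMiddleFrames)
        return frames * 1.0f / kMiddleFrames;
    return 2.0f - frames * 1.0f / kMiddleFrames;
}

// Scale applied to the MVP matrix for that frame
float zoomScale(int frames) {
    return 1.0f + 0.3f * zoomProgress(frames);
}
```

The scale therefore oscillates between 1.0 and 1.3 once per cycle, with no discontinuity at the turnaround frame.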
4. Flash white
The frame counter drives a white ratio that is added to every color channel (an RGB value of 0 is black, 1 is white):
```glsl
#version 300 es
precision mediump float;
in vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
// Controls the exposure level
uniform float uAdditionalColor;
out vec4 glFragColor;

void main() {
    vec4 color = texture(inputImageTexture, textureCoordinate);
    // Channel values cap at 1.0; once they all reach 1.0 the frame is pure white
    glFragColor = vec4(color.r + uAdditionalColor, color.g + uAdditionalColor, color.b + uAdditionalColor, color.a);
}
```
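Output channel values above 1.0 are clamped by the pipeline, which is why a large `uAdditionalColor` washes the whole frame to white. A sketch of the effective per-channel arithmetic (a model of the clamping, not library code):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// What the GPU effectively does with each channel after the shader adds
// uAdditionalColor: the stored result is clamped to the [0, 1] range
float flashChannel(float channel, float additionalColor) {
    return std::min(1.0f, std::max(0.0f, channel + additionalColor));
}
```

So with a progress of 1.0 every channel saturates regardless of the input color, giving one fully white flash per cycle.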
The progress value is uploaded each frame:
```cpp
void MagicShineWhiteFilter::onDrawArraysPre() {
    if (mFrames <= mMiddleFrames) {
        // Ramp the added color up
        mProgress = mFrames * 1.0f / mMiddleFrames;
    } else {
        // then ramp it back down
        mProgress = 2.0f - mFrames * 1.0f / mMiddleFrames;
    }
    mFrames++;
    if (mFrames > mMaxFrames) {
        mFrames = 0;
    }
    glUniform1f(mAdditionColorLocation, mProgress);
}
```
5. Illusion
The illusion effect requires a Lut texture plus blending of frames cached in FBOs.
1. A Lut (lookup table) image is, bluntly, color search-and-replace. Lut images can usually be exported from Photoshop and supplied by a designer, which greatly reduces the work of writing filters.
LUT principle description
How to make a damn LUT diagram correctly
2. An FBO provides a set of logical buffers: the color buffer, depth buffer, and stencil buffer (note that an FBO provides no accumulation buffer). The buffers that can be attached are called framebuffer-attachable images, meaning two-dimensional pixel arrays that can be bound to the FBO.
Two kinds of objects can be attached to an FBO: texture images and renderbuffer images. When a texture object is attached, OpenGL performs render-to-texture; when a renderbuffer object is attached, OpenGL performs offscreen rendering. Both are used here.
Initialize the vertex and fragment shader programs, the Lut texture, and the FBO ids:
```cpp
void MagicVerigoFilter::onInitialized() {
    GPUImageFilter::onInitialized();
    mLastFrameProgram = loadProgram(readShaderFromAsset(mAssetManager, "nofilter_v.glsl")->c_str(),
                                    readShaderFromAsset(mAssetManager, "common_f.glsl")->c_str());
    mCurrentFrameProgram = loadProgram(readShaderFromAsset(mAssetManager, "nofilter_v.glsl")->c_str(),
                                       readShaderFromAsset(mAssetManager, "verigo_f2.glsl")->c_str());
    // Load the Lut texture
    mLutTexture = loadTextureFromAssetsRepeat(mAssetManager, "lookup_vertigo.png");
}

void MagicVerigoFilter::onInputSizeChanged(const int width, const int height) {
    mScreenWidth = width;
    mScreenHeight = height;
    mRenderBuffer = new RenderBuffer(GL_TEXTURE8, width, height);
    mRenderBuffer2 = new RenderBuffer(GL_TEXTURE9, width, height);
    mRenderBuffer3 = new RenderBuffer(GL_TEXTURE10, width, height);
}
```
RenderBuffer handles FBO initialization, texture binding, and unbinding:
```cpp
// FBO initialization
RenderBuffer::RenderBuffer(GLenum activeTextureUnit, int width, int height) {
    mWidth = width;
    mHeight = height;
    // Activate the texture unit
    glActiveTexture(activeTextureUnit);
    // Create the texture id
    mTextureId = get2DTextureID();
    // Allocate texture storage without uploading any pixel data
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    // Generate the framebuffer id
    glGenFramebuffers(1, &mFrameBufferId);
    glBindFramebuffer(GL_FRAMEBUFFER, mFrameBufferId);
    // Generate the renderbuffer id
    glGenRenderbuffers(1, &mRenderBufferId);
    glBindRenderbuffer(GL_RENDERBUFFER, mRenderBufferId);
    // Allocate renderbuffer storage (used here as a 16-bit depth buffer)
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
    // Unbind
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBindRenderbuffer(GL_RENDERBUFFER, 0);
}

// Configure before drawing into the FBO
void RenderBuffer::bind() {
    // Reset the viewport
    glViewport(0, 0, mWidth, mHeight);
    // Bind the framebuffer
    glBindFramebuffer(GL_FRAMEBUFFER, mFrameBufferId);
    // Attach the 2D texture to the FBO as the color attachment
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, mTextureId, 0);
    // Bind the renderbuffer
    glBindRenderbuffer(GL_RENDERBUFFER, mRenderBufferId);
    // Attach the renderbuffer to the FBO as the depth attachment
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, mRenderBufferId);
    // Check the FBO status
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        ALOGE("framebuffer error");
    }
}

void RenderBuffer::unbind() {
    // Remove the bindings
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBindRenderbuffer(GL_RENDERBUFFER, 0);
}
```
The per-frame drawing flow:
1. Render the camera frame into the first FBO.
2. Mix the camera frame with the Lut texture and the previous frame's colors (the second FBO's texture), and display the result on screen.
3. Render the same mixed frame into the third FBO.
4. Copy the third FBO's texture into the second FBO for the next frame's draw. (This step cannot be skipped, otherwise the screen goes grey.)
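The four steps form a small ping-pong scheme. To make the data flow concrete, here is a sketch that models the three FBO textures as plain strings (hypothetical names, not the project's classes), recording what each buffer holds after a frame:

```cpp
#include <cassert>
#include <string>

// Stand-ins for the three FBO textures: buf1 receives the camera frame,
// buf2 holds the previous mixed frame, buf3 receives the current mix
struct VertigoBuffers {
    std::string buf1, buf2, buf3, screen;
};

// One frame of the four-step flow described above
void drawFrame(VertigoBuffers& b, const std::string& camera) {
    b.buf1 = camera;                   // 1. camera frame into the first FBO
    b.screen = b.buf1 + "+" + b.buf2;  // 2. mix with Lut + previous frame (buf2), show on screen
    b.buf3 = b.screen;                 // 3. save the displayed mix into the third FBO
    b.buf2 = b.buf3;                   // 4. copy into the second FBO for the next frame
}
```

Running it for a few frames shows why step 4 matters: without the copy into buf2, step 2 of the next frame would mix against a stale (or never-written) texture.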
```cpp
void MagicVerigoFilter::onDrawPrepare() {
    // Bind the first FBO so the camera frame is drawn into it
    mRenderBuffer->bind();
    glClear(GL_COLOR_BUFFER_BIT);
}

void MagicVerigoFilter::onDrawArraysAfter() {
    mRenderBuffer->unbind();
    // Mix the camera frame with the Lut texture and the previous frame, and draw to screen
    drawCurrentFrame();
    mRenderBuffer3->bind();
    // Draw the same mixed frame into the third FBO
    drawCurrentFrame();
    mRenderBuffer3->unbind();
    mRenderBuffer2->bind();
    // Copy the third FBO's texture into the second FBO for the next frame
    drawToBuffer();
    mRenderBuffer2->unbind();
    mFirst = false;
}
```
The color mixing is done in the fragment shader:
```glsl
#version 300 es
precision mediump float;
in mediump vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
// Texture of the previous frame
uniform sampler2D inputTextureLast;
// Color lookup table texture
uniform sampler2D lookupTable;
out vec4 glFragColor;

// Look up the replacement color in a 512x512 Lut laid out as an 8x8 grid of tiles
vec4 getLutColor(vec4 textureColor, sampler2D lookupTexture) {
    float blueColor = textureColor.b * 63.0;
    mediump vec2 quad1;
    quad1.y = floor(floor(blueColor) / 8.0);
    quad1.x = floor(blueColor) - quad1.y * 8.0;
    mediump vec2 quad2;
    quad2.y = floor(ceil(blueColor) / 8.0);
    quad2.x = ceil(blueColor) - quad2.y * 8.0;
    highp vec2 texPos1;
    texPos1.x = (quad1.x * 0.125) + 0.5 / 512.0 + ((0.125 - 1.0 / 512.0) * textureColor.r);
    texPos1.y = (quad1.y * 0.125) + 0.5 / 512.0 + ((0.125 - 1.0 / 512.0) * textureColor.g);
    texPos1.y = 1.0 - texPos1.y;
    highp vec2 texPos2;
    texPos2.x = (quad2.x * 0.125) + 0.5 / 512.0 + ((0.125 - 1.0 / 512.0) * textureColor.r);
    texPos2.y = (quad2.y * 0.125) + 0.5 / 512.0 + ((0.125 - 1.0 / 512.0) * textureColor.g);
    texPos2.y = 1.0 - texPos2.y;
    lowp vec4 newColor1 = texture(lookupTexture, texPos1);
    lowp vec4 newColor2 = texture(lookupTexture, texPos2);
    // Interpolate between the two neighboring blue slices
    lowp vec4 newColor = mix(newColor1, newColor2, fract(blueColor));
    return newColor;
}

void main() {
    // Previous frame's color
    vec4 lastFrame = texture(inputTextureLast, textureCoordinate);
    // Current frame after the Lut transform
    vec4 currentFrame = getLutColor(texture(inputImageTexture, textureCoordinate), lookupTable);
    // Mix the previous and current frames
    glFragColor = vec4(0.95 * lastFrame.r + 0.05 * currentFrame.r,
                       currentFrame.g * 0.2 + lastFrame.g * 0.8,
                       currentFrame.b, 1.0);
}
```
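The `getLutColor` coordinate arithmetic can be checked on the CPU. A sketch of the texel-position computation for one of the two lookups (before the final `1.0 - y` flip), assuming the same 512x512 Lut split into an 8x8 grid of 64x64 tiles:

```cpp
#include <cassert>
#include <cmath>

struct LutPos { double x, y; };

// Mirrors the texPos1 computation in getLutColor above: the blue channel
// selects one of 64 tiles, and red/green index within that tile, with a
// half-texel inset so sampling never bleeds into a neighboring tile
LutPos lutTexPos(double r, double g, double b) {
    double blueColor = b * 63.0;
    double quadY = std::floor(std::floor(blueColor) / 8.0);
    double quadX = std::floor(blueColor) - quadY * 8.0;
    LutPos p;
    p.x = quadX * 0.125 + 0.5 / 512.0 + (0.125 - 1.0 / 512.0) * r;
    p.y = quadY * 0.125 + 0.5 / 512.0 + (0.125 - 1.0 / 512.0) * g;
    return p;
}
```

Black maps to a half-texel inside the first tile, and white maps just inside the far corner of the last tile, so every lookup stays within [0, 1].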
The hardest part to tune is the blend. When the camera moves, the trailing ghost is visibly blue, so the current frame keeps its full blue channel while the red and green channels are taken mostly from the previous frame. The ratio below is an empirical value found by repeated trial:

```glsl
glFragColor = vec4(0.95 * lastFrame.r + 0.05 * currentFrame.r,
                   currentFrame.g * 0.2 + lastFrame.g * 0.8,
                   currentFrame.b, 1.0);
```
That covers the six filter effects for now; more will be added from time to time. If you found this interesting, please give it a like.
I have also created a new discussion group; interested readers are welcome to join the discussion.