Contents
- The principle of Gaussian blur
- Implementation analysis of GPUImage's Gaussian blur
- Gaussian blur optimization
- References
- Takeaways
Blur is a very common effect in day-to-day development. On Android there are pure-Java open source implementations and RenderScript implementations; today we will learn how to implement Gaussian blur with OpenGL. I have used Gaussian blur at work, but only in the most basic and practical way; why it works, and which optimization points could improve speed and reduce memory, were never considered. Through this article we will learn the principle of Gaussian blur, its implementation, and its optimization. Our journey begins.
First, the principle of Gaussian blur
This section touches on some basic mathematical concepts, such as the normal distribution, the Gaussian function, convolution, and the blur radius. We will review them through the practice below.
"Blur" can be understood as each pixel taking the average value of the surrounding pixels. There are many kinds of blur; let's look at mean (box) blur and Gaussian blur.
Mean blur means that each pixel takes the plain average of its surrounding pixels: every surrounding point gets the same weight, regardless of its distance from the current point.
GAMES101- Introduction to Modern Computer Graphics – Lingqi Yan
Mean blur does produce a blurred result, but if the blurred image should stay closer to the original, the weights must be considered: the closer a point is to the current pixel, the greater its weight; the farther away, the smaller its weight.
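The mean blur described above can be sketched in a few lines. This is an illustrative example, not code from the article's project: a blur over a grayscale array where every in-range neighbor is weighted equally.

```java
public class BoxBlurDemo {
    // Mean (box) blur: every neighbor within the radius gets the same weight,
    // regardless of its distance from the center pixel.
    static double[][] boxBlur(double[][] img, int radius) {
        int h = img.length, w = img[0].length;
        double[][] out = new double[h][w];
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                double sum = 0;
                int count = 0;
                for (int dy = -radius; dy <= radius; dy++) {
                    for (int dx = -radius; dx <= radius; dx++) {
                        int ny = y + dy, nx = x + dx;
                        if (ny >= 0 && ny < h && nx >= 0 && nx < w) {
                            sum += img[ny][nx];
                            count++;
                        }
                    }
                }
                out[y][x] = sum / count; // plain average: equal weights
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // A single bright pixel gets spread evenly over its neighborhood
        double[][] img = {{0, 0, 0}, {0, 9, 0}, {0, 0, 0}};
        double[][] out = boxBlur(img, 1);
        System.out.println(out[1][1]); // center: 9 / 9 = 1.0
    }
}
```

Note how the distance from the center plays no role here; that is exactly the limitation the Gaussian weighting addresses.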
The normal distribution is one such weighting scheme: the closer to the center, the larger the value; the farther from the center, the smaller the value.
Image from: Gaussian blur algorithm
An image is two-dimensional, so it corresponds to the two-dimensional normal distribution, whose density function is called the "Gaussian function".
where σ is the standard deviation (σ² is the variance)
With the Gaussian function we can calculate the weight of each point. Assume the blur radius is 1, construct a 3×3 matrix, and take σ = 1.5 in the Gaussian function. Compute the weight of each point from its x/y coordinates. Since the weights of all points should add up to 1, normalize the computed values.
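As an illustrative sketch (not part of GPUImage), the 3×3 weight matrix for σ = 1.5 can be computed and normalized like this:

```java
public class GaussianKernelDemo {
    // Build a (2r+1)x(2r+1) Gaussian weight matrix and normalize it so all
    // weights sum to 1. G(x, y) = exp(-(x^2 + y^2) / (2*sigma^2)) / (2*pi*sigma^2)
    static double[][] gaussianKernel(int radius, double sigma) {
        int size = 2 * radius + 1;
        double[][] k = new double[size][size];
        double sum = 0;
        for (int y = -radius; y <= radius; y++) {
            for (int x = -radius; x <= radius; x++) {
                double g = Math.exp(-(x * x + y * y) / (2 * sigma * sigma))
                        / (2 * Math.PI * sigma * sigma);
                k[y + radius][x + radius] = g;
                sum += g;
            }
        }
        // Normalize so the blurred image keeps the original brightness
        for (int y = 0; y < size; y++)
            for (int x = 0; x < size; x++)
                k[y][x] /= sum;
        return k;
    }

    public static void main(String[] args) {
        double[][] k = gaussianKernel(1, 1.5);
        for (double[] row : k)
            System.out.println(java.util.Arrays.toString(row));
    }
}
```

For radius 1 and σ = 1.5 this yields roughly 0.0947 at the corners, 0.1183 at the edges, and 0.1478 at the center.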
The normalized weight matrix is used as the convolution kernel, and convolving it with the original image yields the blurred values.
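Continuing the sketch, convolving one pixel means multiplying each neighbor by its kernel weight and summing; the border handling (clamping here) and names are illustrative assumptions, not the article's implementation.

```java
public class ConvolveDemo {
    // Convolve a single pixel with a normalized kernel, clamping at the borders.
    static double convolvePixel(double[][] img, double[][] kernel, int cx, int cy) {
        int r = kernel.length / 2;
        double acc = 0;
        for (int dy = -r; dy <= r; dy++) {
            for (int dx = -r; dx <= r; dx++) {
                // Clamp out-of-range coordinates to the nearest edge pixel
                int y = Math.min(Math.max(cy + dy, 0), img.length - 1);
                int x = Math.min(Math.max(cx + dx, 0), img[0].length - 1);
                acc += img[y][x] * kernel[dy + r][dx + r];
            }
        }
        return acc;
    }

    public static void main(String[] args) {
        // With a normalized kernel, a constant image stays constant after blurring
        double[][] img = {{5, 5, 5}, {5, 5, 5}, {5, 5, 5}};
        double[][] mean = {{1/9.0, 1/9.0, 1/9.0}, {1/9.0, 1/9.0, 1/9.0}, {1/9.0, 1/9.0, 1/9.0}};
        System.out.println(convolvePixel(img, mean, 1, 1)); // ≈ 5.0
    }
}
```

This invariance (a flat region stays unchanged) is exactly why the weights must be normalized to sum to 1.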
Gaussian blur is a low-pass filter: it filters out the high-frequency signals and keeps the low-frequency ones. Removing the sharp boundaries in the image content is what produces the blur.
Second, the implementation analysis of GPUImage Gaussian blur
Now that we understand the principle of Gaussian blur, in this section we look at how to implement it. GPUImage is a very powerful OpenGL open source image-processing library with a rich set of filters; I have extracted part of its filter implementation. The corresponding Gaussian blur filter is GPUImageGaussianBlurFilter. Let's look at how it works.
The vertex shader:

```glsl
attribute vec4 position;
attribute vec4 inputTextureCoordinate;

const int GAUSSIAN_SAMPLES = 9;

uniform float texelWidthOffset;
uniform float texelHeightOffset;

varying vec2 textureCoordinate;
varying vec2 blurCoordinates[GAUSSIAN_SAMPLES];

void main() {
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;

    // Calculate the sample positions for the blur
    int multiplier = 0;
    vec2 blurStep;
    vec2 singleStepOffset = vec2(texelHeightOffset, texelWidthOffset);

    for (int i = 0; i < GAUSSIAN_SAMPLES; i++) {
        multiplier = (i - ((GAUSSIAN_SAMPLES - 1) / 2));
        // Blur in x (horizontal)
        blurStep = float(multiplier) * singleStepOffset;
        blurCoordinates[i] = inputTextureCoordinate.xy + blurStep;
    }
}
```

The fragment shader:

```glsl
uniform sampler2D inputImageTexture;

const lowp int GAUSSIAN_SAMPLES = 9;

varying highp vec2 textureCoordinate;
varying highp vec2 blurCoordinates[GAUSSIAN_SAMPLES];

void main() {
    lowp vec3 sum = vec3(0.0);
    lowp vec4 fragColor = texture2D(inputImageTexture, textureCoordinate);

    sum += texture2D(inputImageTexture, blurCoordinates[0]).rgb * 0.05;
    sum += texture2D(inputImageTexture, blurCoordinates[1]).rgb * 0.09;
    sum += texture2D(inputImageTexture, blurCoordinates[2]).rgb * 0.12;
    sum += texture2D(inputImageTexture, blurCoordinates[3]).rgb * 0.15;
    sum += texture2D(inputImageTexture, blurCoordinates[4]).rgb * 0.18;
    sum += texture2D(inputImageTexture, blurCoordinates[5]).rgb * 0.15;
    sum += texture2D(inputImageTexture, blurCoordinates[6]).rgb * 0.12;
    sum += texture2D(inputImageTexture, blurCoordinates[7]).rgb * 0.09;
    sum += texture2D(inputImageTexture, blurCoordinates[8]).rgb * 0.05;

    gl_FragColor = vec4(sum, fragColor.a);
}
```
From the shader code we can see GAUSSIAN_SAMPLES = 9: four samples on each side plus the center sample point, i.e. 2×4+1 = 9, forming a one-dimensional 1×9 kernel. blurCoordinates stores the computed texture coordinates, and the convolution is then performed in the fragment shader.
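One property worth noting: the hard-coded weights in the fragment shader (0.05, 0.09, 0.12, 0.15, 0.18, mirrored) are already normalized. A quick check confirms they sum to 1:

```java
public class WeightSumCheck {
    // The 9 per-sample weights used by the GPUImage fragment shader above
    static final double[] WEIGHTS = {0.05, 0.09, 0.12, 0.15, 0.18, 0.15, 0.12, 0.09, 0.05};

    static double sum(double[] w) {
        double s = 0;
        for (double v : w) s += v;
        return s;
    }

    public static void main(String[] args) {
        System.out.println(sum(WEIGHTS)); // sums to 1 (up to floating-point rounding)
    }
}
```

Because the weights sum to 1, the blurred output preserves the overall brightness of the input.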
GPUImage applies the Gaussian blur along the X axis and the Y axis in two separate passes, which reduces the complexity of the algorithm.
If the convolution kernel of a Gaussian filter is two-dimensional (m×n) and the image is M×N, the algorithm complexity is O(m·n·M·N), which is high. Splitting the blur into two one-dimensional passes, first along one axis with a 1×m kernel and then along the other with an n×1 kernel, reduces the complexity to O((m+n)·M·N); for a square kernel (m = n) that is O(2·m·M·N).
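The separability that makes the two-pass approach valid can be verified numerically: a normalized 2D Gaussian kernel is exactly the outer product of two normalized 1D kernels, so blurring along X and then Y reproduces the 2D convolution with m+n taps instead of m·n. A sketch with illustrative names:

```java
public class SeparableDemo {
    // Normalized 1D Gaussian kernel of length 2*radius + 1
    static double[] gaussian1D(int radius, double sigma) {
        double[] k = new double[2 * radius + 1];
        double sum = 0;
        for (int x = -radius; x <= radius; x++) {
            k[x + radius] = Math.exp(-(x * x) / (2 * sigma * sigma));
            sum += k[x + radius];
        }
        for (int i = 0; i < k.length; i++) k[i] /= sum; // normalize
        return k;
    }

    public static void main(String[] args) {
        // Outer product of the 1D kernel with itself gives the 2D kernel:
        // k2[y][x] = k1[y] * k1[x]
        double[] k1 = gaussian1D(1, 1.5);
        for (int y = 0; y < k1.length; y++) {
            for (int x = 0; x < k1.length; x++)
                System.out.printf("%.4f ", k1[y] * k1[x]);
            System.out.println();
        }
    }
}
```

The printed matrix matches the normalized 3×3 weights computed directly from the 2D Gaussian (roughly 0.0947 at the corners, 0.1183 at the edges, 0.1478 at the center).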
The renderer is as follows:
```java
public class GPUImageRender implements GLSurfaceView.Renderer {
    private Context context;
    private int inputTextureId;
    private GPUImageGaussianBlurFilter blurFilter;
    private FloatBuffer glCubeBuffer;
    private FloatBuffer glTextureBuffer;

    // Full-screen quad vertices and texture coordinates (the original listing was
    // garbled; these are GPUImage's standard constants)
    public static final float CUBE[] = {
            -1.0f, -1.0f,
             1.0f, -1.0f,
            -1.0f,  1.0f,
             1.0f,  1.0f,
    };

    public static final float TEXTURE_NO_ROTATION[] = {
            0.0f, 1.0f,
            1.0f, 1.0f,
            0.0f, 0.0f,
            1.0f, 0.0f,
    };

    public GPUImageRender(Context context) {
        this.context = context;
    }

    @Override
    public void onSurfaceCreated(GL10 gl, EGLConfig config) {
        String vertexStr = ShaderHelper.loadAsset(context.getResources(), "blur_vertex_gpuimage.glsl");
        String fragStr = ShaderHelper.loadAsset(context.getResources(), "blur_frag_gpuimage.glsl");
        blurFilter = new GPUImageGaussianBlurFilter(vertexStr, fragStr);
        blurFilter.ifNeedInit();

        inputTextureId = TextureHelper.loadTexture(context, R.drawable.bg);

        glCubeBuffer = ByteBuffer.allocateDirect(CUBE.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        glCubeBuffer.put(CUBE).position(0);

        glTextureBuffer = ByteBuffer.allocateDirect(TEXTURE_NO_ROTATION.length * 4)
                .order(ByteOrder.nativeOrder())
                .asFloatBuffer();
        glTextureBuffer.put(TEXTURE_NO_ROTATION).position(0);
    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
        blurFilter.onOutputSizeChanged(width, height);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
        GLES20.glClearColor(0f, 0f, 0f, 1f);
        blurFilter.onDraw(inputTextureId, glCubeBuffer, glTextureBuffer);
    }
}

public class GPUImageTwoPassFilter extends GPUImageFilterGroup {
    public GPUImageTwoPassFilter(String firstVertexShader, String firstFragmentShader,
                                 String secondVertexShader, String secondFragmentShader) {
        super(null);
        addFilter(new GPUImageFilter(firstVertexShader, firstFragmentShader));
        addFilter(new GPUImageFilter(secondVertexShader, secondFragmentShader));
    }
}
```
```java
public GPUImageGaussianBlurFilter(float blurSize, String vertexStr, String fragStr) {
    // The same vertex/fragment shaders are reused for both the horizontal and vertical pass
    super(vertexStr, fragStr, vertexStr, fragStr);
    this.blurSize = blurSize;
}
```
The full code has been uploaded to GitHub: github.com/ayyb1988/me…
It uses the FBO technique discussed in the previous article.
```java
//com.av.mediajourney.opengl.gpuimage.GPUImageFilterGroup#onDraw
public void onDraw(final int textureId, final FloatBuffer cubeBuffer, final FloatBuffer textureBuffer) {
    runPendingOnDrawTasks();
    if (!isInitialized() || frameBuffers == null || frameBufferTextures == null) {
        return;
    }
    if (mergedFilters != null) {
        int size = mergedFilters.size();
        int previousTextureId = textureId;
        for (int i = 0; i < size; i++) {
            GPUImageFilter filter = mergedFilters.get(i);
            boolean isNotLast = i < size - 1;
            // If this is not the last filter, render off-screen into an FBO
            if (isNotLast) {
                GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, frameBuffers[i]);
                GLES20.glClearColor(0, 0, 0, 0);
            }
            // The first filter takes the input texture id, vertex buffer, and texture buffer
            if (i == 0) {
                filter.onDraw(previousTextureId, cubeBuffer, textureBuffer);
            } else if (i == size - 1) {
                filter.onDraw(previousTextureId, glCubeBuffer,
                        (size % 2 == 0) ? glTextureFlipBuffer : glTextureBuffer);
            } else {
                filter.onDraw(previousTextureId, glCubeBuffer, glTextureBuffer);
            }
            // Unbind the FBO and pass its texture on as the next filter's input
            if (isNotLast) {
                GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
                previousTextureId = frameBufferTextures[i];
            }
        }
    }
}
```
For details, see GitHub: github.com/ayyb1988/me…
The effect of Gaussian blur is as follows:
Third, Gaussian blur optimization
Given the same blur quality, how do we improve blur efficiency, i.e. reduce the time spent? The cost is directly driven by the amount of computation, which can be optimized in the following directions:
- Reduce the offset size (blur radius)
- Optimize the algorithm implementation
- Scale the image down first, then apply the Gaussian blur, to reduce the amount of data to process
- Understand how GPUs execute: reduce branch statements, use OpenGL ES 3.0, etc.
**Reduce the offset size (blur radius) and optimize the algorithm implementation: see the GLSL below**
The vertex shader:

```glsl
attribute vec4 position;
attribute vec4 inputTextureCoordinate;

// Optimization: instead of 9 separate samples (GAUSSIAN_SAMPLES = 9), each
// blurCoordinates element packs two sample points: xy holds the negative
// offset and zw holds the positive offset.
const int SHIFT_SIZE = 2;

uniform float texelWidthOffset;
uniform float texelHeightOffset;

varying vec2 textureCoordinate;
varying vec4 blurCoordinates[SHIFT_SIZE];

void main() {
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;

    // Offset of a single step, in texel units
    vec2 singleStepOffset = vec2(texelHeightOffset, texelWidthOffset);
    for (int i = 0; i < SHIFT_SIZE; i++) {
        blurCoordinates[i] = vec4(textureCoordinate.xy - float(i + 1) * singleStepOffset,
                                  textureCoordinate.xy + float(i + 1) * singleStepOffset);
    }
}
```

The fragment shader:

```glsl
uniform sampler2D inputImageTexture;

const int SHIFT_SIZE = 2;

varying highp vec2 textureCoordinate;
varying vec4 blurCoordinates[SHIFT_SIZE];

void main() {
    lowp vec4 currentColor = texture2D(inputImageTexture, textureCoordinate);
    mediump vec3 sum = currentColor.rgb;

    // Accumulate the two packed sample points per iteration
    for (int i = 0; i < SHIFT_SIZE; i++) {
        sum += texture2D(inputImageTexture, blurCoordinates[i].xy).rgb;
        sum += texture2D(inputImageTexture, blurCoordinates[i].zw).rgb;
    }

    // Equal-weight average over 2 * SHIFT_SIZE + 1 samples
    gl_FragColor = vec4(sum * 1.0 / float(2 * SHIFT_SIZE + 1), currentColor.a);
}
```
**Scale the image down first, then apply the Gaussian blur, to reduce the amount of data to process**
```java
private static Bitmap getBitmap(Context context, int resourceId) {
    final BitmapFactory.Options options = new BitmapFactory.Options();
    options.inScaled = false;
    // Read in the resource
    Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), resourceId, options);
    // Optimization point: scaling the original image to 1/4 in each dimension
    // leaves 1/16 of the data; choose the scale for your specific scenario
    bitmap = Bitmap.createScaledBitmap(bitmap, bitmap.getWidth() / 4, bitmap.getHeight() / 4, true);
    return bitmap;
}
```
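To see why the scaling matters: an ARGB_8888 bitmap uses 4 bytes per pixel, so dividing each dimension by 4 reduces the pixel data to 1/16. A quick check, in plain Java with an illustrative resolution and no Android dependency:

```java
public class ScaleSavingsDemo {
    // Bytes needed for an ARGB_8888 bitmap: 4 bytes per pixel
    static long argb8888Bytes(int width, int height) {
        return (long) width * height * 4L;
    }

    public static void main(String[] args) {
        long full = argb8888Bytes(1080, 1920);        // e.g. a full-screen image
        long scaled = argb8888Bytes(1080 / 4, 1920 / 4);
        System.out.println(full / scaled); // 16x less pixel data to blur
    }
}
```

Less data means fewer texture fetches per blur pass, on top of the shader-level optimizations above.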
For details, see GitHub: github.com/ayyb1988/me…
Fourth, references
- Bilibili video: GAMES101, Introduction to Modern Computer Graphics, Lingqi Yan
- The math you need to know: convolution
- Gaussian blur algorithm
- Digital image processing: Gaussian blur explained
- Summary of the Gaussian blur filter algorithm in Photoshop
- Highly recommended: Android image processing, Gaussian blur principle and implementation
- OpenGL ES filter development summary: Gaussian blur implementation and optimization
- OpenGL shader performance optimization strategies (1): reduce branch statements
Fifth, takeaways
Through this study and practice we have:
- Understood the principle of Gaussian blur
- Analyzed the implementation of GPUImage's Gaussian blur
- Learned the optimization directions for Gaussian blur and how to implement them
Thank you for reading. In the next article we will learn and practice Bezier curves/surfaces. Welcome to follow the official account "Audio and Video Development Journey" to learn and grow together. Comments and discussion are welcome.