GPUImage is an open-source image-processing framework built on OpenGL for iOS. Its ideas were later used to build an Android version, and that Android version of GPUImage is what this article analyzes.
Overview
GPUImage covers both still-image processing and camera processing; this article focuses mainly on the camera side.
It breaks down roughly into three parts:
- Acquisition of camera data
- OpenGL image processing and display
- Taking and saving pictures
Camera data acquisition
Acquiring camera data really means converting the camera's image data into an OpenGL texture.
In camera development, a PreviewCallback is registered on the Camera; as long as the camera is in the preview phase, the callback is invoked repeatedly with the contents of the current preview frame.
camera.setPreviewCallback(GPUImageRenderer.this);
camera.startPreview();
By default, the camera returns NV21 (YCbCr_420_SP) data, while OpenGL works with RGB, so the YUV data has to be converted to RGB in each callback.
GPUImageNativeLibrary.YUVtoRBGA(data, previewSize.width, previewSize.height,
        mGLRgbBuffer.array());
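Putting the pieces together, the renderer's onPreviewFrame callback looks roughly like this (a condensed sketch of the GPUImageRenderer source; some checks are omitted, and loadTexture is the helper explained next):

@Override
public void onPreviewFrame(final byte[] data, final Camera camera) {
    final Size previewSize = camera.getParameters().getPreviewSize();
    if (mGLRgbBuffer == null) {
        mGLRgbBuffer = IntBuffer.allocate(previewSize.width * previewSize.height);
    }
    // Queue the conversion and texture upload to run on the GL thread
    runOnDraw(new Runnable() {
        @Override
        public void run() {
            GPUImageNativeLibrary.YUVtoRBGA(data, previewSize.width,
                    previewSize.height, mGLRgbBuffer.array());
            // loadTexture creates the texture on the first frame and updates it afterwards
            mGLTextureId = OpenGlUtils.loadTexture(mGLRgbBuffer, previewSize, mGLTextureId);
        }
    });
}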
With the RGB image data in hand, glGenTextures generates a texture and glTexImage2D uploads the image data into it.
If the texture has already been generated, each new frame only needs to update its data; there is no need to recreate the texture.
// Load a texture from image data
public static int loadTexture(final IntBuffer data, final Size size, final int usedTexId) {
    int[] textures = new int[1];
    if (usedTexId == NO_TEXTURE) {
        GLES20.glGenTextures(1, textures, 0);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textures[0]);
        // Omit some code
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, size.width, size.height,
                0, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, data);
    } else {
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, usedTexId);
        // Update the texture data instead of creating a new texture
        GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, size.width,
                size.height, GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, data);
        textures[0] = usedTexId;
    }
    return textures[0];
}
These operations inside the PreviewCallback complete the transformation from camera image data to OpenGL texture.
The next step is to process the texture data and display it on the screen.
Camera data collection also has some small details, such as the horizontal mirroring difference between the front and rear cameras.
When the front camera's sensor content is displayed as a texture, it needs a horizontal flip, because what we expect to see is mirrored content, matching the normal selfie experience.
Texture coordinates are defined in GPUImage's TextureRotationUtil class, and their origin is the upper-left corner rather than the lower-left corner.
If the lower-left corner were the origin of the texture coordinate system, then besides rotating the texture coordinates 90° clockwise you would also need to flip them vertically. For why the texture coordinates must be rotated 90° clockwise, refer to this article:
Size and orientation issues in Android camera development
With the upper-left corner as the texture-coordinate origin, rotating the texture coordinates 90° clockwise relative to the vertex coordinates displays the image correctly:
// Vertex coordinates
static final float CUBE[] = {
        -1.0f, -1.0f,
         1.0f, -1.0f,
        -1.0f,  1.0f,
         1.0f,  1.0f,
};

// Texture coordinates with the origin in the upper-left corner, rotated 90° clockwise relative to the vertex coordinates
public static final float TEXTURE_ROTATED_90[] = {
        1.0f, 1.0f,
        1.0f, 0.0f,
        0.0f, 1.0f,
        0.0f, 0.0f,
};
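TextureRotationUtil also offers a getRotation helper that combines a rotation with optional horizontal and vertical flips. A sketch of how a renderer might pick coordinates for the front camera in portrait mode (the buffer names follow the GPUImage source):

// Front camera in portrait: rotate 90° and mirror horizontally
float[] textureCords = TextureRotationUtil.getRotation(Rotation.ROTATION_90, true, false);
mGLTextureBuffer.clear();
mGLTextureBuffer.put(textureCords).position(0);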
Image processing and display
Once we have the texture, it is important to be clear about what it is: the image content captured by the camera. Drawing that texture to the screen essentially means drawing a rectangle and pasting the texture onto it.
So here is a refresher on how OpenGL draws a rectangle and attaches a texture to it:
OpenGL Learning series – Texture
In GPUImage, this work is done by the GPUImageFilter class, the base class for all filters.
GPUImageFilter:
The vertex shader and fragment shader scripts that a filter needs are specified in the constructor of GPUImageFilter.
The init method calls the onInit method and the onInitialized method. onInit creates the OpenGL Program and binds the attribute and uniform variables declared in the shader scripts to it. onInitialized assigns initial values to some of the uniform variables. GPUImageFilter also provides methods for assigning values to variables of different types, for example float variables:
protected void setFloat(final int location, final float floatValue) {
    runOnDraw(new Runnable() {
        @Override
        public void run() {
            GLES20.glUniform1f(location, floatValue);
        }
    });
}
The actual drawing happens in the onDraw method, which starts by executing runPendingOnDrawTasks. Assigning shader variables at init time does not take effect immediately; the assignments are queued in a list, so those queued tasks have to run before drawing starts.
public void onDraw(final int textureId, final FloatBuffer cubeBuffer,
                   final FloatBuffer textureBuffer) {
    GLES20.glUseProgram(mGLProgId);
    // Run the queued assignment tasks
    runPendingOnDrawTasks();
    // Vertex and texture coordinates
    GLES20.glVertexAttribPointer(mGLAttribPosition, 2, GLES20.GL_FLOAT, false, 0, cubeBuffer);
    GLES20.glEnableVertexAttribArray(mGLAttribPosition);
    GLES20.glVertexAttribPointer(mGLAttribTextureCoordinate, 2, GLES20.GL_FLOAT, false, 0, textureBuffer);
    GLES20.glEnableVertexAttribArray(mGLAttribTextureCoordinate);
    // Last hook before drawing
    onDrawArraysPre();
    // Finally draw
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
}
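For reference, the pending-task queue is simply a list of Runnables that gets drained before each draw; a minimal sketch along the lines of the GPUImage source:

private final LinkedList<Runnable> mRunOnDraw = new LinkedList<Runnable>();

// Queue a task to be executed on the GL thread before the next draw
protected void runOnDraw(final Runnable runnable) {
    synchronized (mRunOnDraw) {
        mRunOnDraw.addLast(runnable);
    }
}

// Drain the queue; called at the start of onDraw
protected void runPendingOnDrawTasks() {
    while (!mRunOnDraw.isEmpty()) {
        mRunOnDraw.removeFirst().run();
    }
}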
GPUImageFilter does not manage vertex and texture coordinates itself; they are passed in as parameters, so the filter does not have to care about front versus rear camera or portrait versus landscape orientation.
Before the actual glDrawArrays call there is also an onDrawArraysPre hook, a last chance to run code before drawing; some filter implementations make use of it.
Finally, the glDrawArrays method does the drawing.
When a GPUImageFilter is no longer needed for drawing, it must be destroyed via the destroy method; the onDestroy hook lets individual filters perform custom cleanup.
public final void destroy() {
    mIsInitialized = false;
    GLES20.glDeleteProgram(mGLProgId);
    onDestroy();
}

public void onDestroy() {
}
The shader scripts defined in GPUImageFilter itself simply paste the image content onto the rectangle as-is, without any special image manipulation.
Other filters change the shader scripts to process the image in different ways. Across the whole GPUImage project, the real essence lies in the shader scripts; how to do image processing in shaders is a deep subject of its own.
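For reference, the default pass-through shaders in GPUImageFilter look roughly like this (quoted from memory of the source, so treat it as a sketch):

public static final String NO_FILTER_VERTEX_SHADER = "" +
        "attribute vec4 position;\n" +
        "attribute vec4 inputTextureCoordinate;\n" +
        "varying vec2 textureCoordinate;\n" +
        "void main()\n" +
        "{\n" +
        "    gl_Position = position;\n" +
        "    textureCoordinate = inputTextureCoordinate.xy;\n" +
        "}";

public static final String NO_FILTER_FRAGMENT_SHADER = "" +
        "varying highp vec2 textureCoordinate;\n" +
        "uniform sampler2D inputImageTexture;\n" +
        "void main()\n" +
        "{\n" +
        "    gl_FragColor = texture2D(inputImageTexture, textureCoordinate);\n" +
        "}";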
Analyzing the code implementation of GPUImageFilterGroup
When you want to apply multiple processing passes to an image, consider using GPUImageFilterGroup.
GPUImageFilterGroup inherits from GPUImageFilter and, as its name implies, is a collection of GPUImageFilter filters. It is analogous to ViewGroup: just as a ViewGroup can contain Views as well as other ViewGroups, a GPUImageFilterGroup can contain GPUImageFilters as well as other GPUImageFilterGroups.
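As a usage sketch, assuming a GPUImage instance named mGPUImage (the particular filters are arbitrary stock GPUImage filters chosen for illustration):

// Chain a grayscale pass and a contrast pass into a single filter
List<GPUImageFilter> filters = new LinkedList<GPUImageFilter>();
filters.add(new GPUImageGrayscaleFilter());
filters.add(new GPUImageContrastFilter(1.5f));
GPUImageFilterGroup group = new GPUImageFilterGroup(filters);
// mGPUImage is an assumed GPUImage instance
mGPUImage.setFilter(group);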
When a GPUImageFilterGroup contains another GPUImageFilterGroup, every filter still needs to draw its content exactly once. The mMergedFilters variable holds the flattened list of all contained GPUImageFilters that are not themselves GPUImageFilterGroups.
// Collect all GPUImageFilters that are not of type GPUImageFilterGroup
public void updateMergedFilters() {
    List<GPUImageFilter> filters;
    for (GPUImageFilter filter : mFilters) {
        // If the filter is a GPUImageFilterGroup, split it up
        if (filter instanceof GPUImageFilterGroup) {
            // Recursively call updateMergedFilters to flatten it
            ((GPUImageFilterGroup) filter).updateMergedFilters();
            // Fetch its flattened, non-group filters
            filters = ((GPUImageFilterGroup) filter).getMergedFilters();
            if (filters == null || filters.isEmpty())
                continue;
            // Add them all to mMergedFilters
            mMergedFilters.addAll(filters);
            continue;
        }
        // A plain GPUImageFilter is added directly
        mMergedFilters.add(filter);
    }
}
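A quick illustration of the flattening, using the group's stock addFilter method (the concrete filters are again arbitrary examples):

GPUImageFilterGroup inner = new GPUImageFilterGroup();
inner.addFilter(new GPUImageContrastFilter(1.5f));
inner.addFilter(new GPUImageSaturationFilter(0.8f));

GPUImageFilterGroup outer = new GPUImageFilterGroup();
outer.addFilter(new GPUImageGrayscaleFilter());
outer.addFilter(inner);

// outer's mMergedFilters now holds the three leaf filters, in drawing order:
// grayscale -> contrast -> saturation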
Before the actual drawing, GPUImageFilterGroup also creates FrameBuffers and textures — one for every filter except the last, since the last filter draws directly to the screen.
// Use the length of mMergedFilters, since it holds all the flattened filters
if (mMergedFilters != null && mMergedFilters.size() > 0) {
    size = mMergedFilters.size();
    // Number of frame buffers
    mFrameBuffers = new int[size - 1];
    // Number of textures
    mFrameBufferTextures = new int[size - 1];
    for (int i = 0; i < size - 1; i++) {
        // Generate a FrameBuffer
        GLES20.glGenFramebuffers(1, mFrameBuffers, i);
        // Generate a texture
        GLES20.glGenTextures(1, mFrameBufferTextures, i);
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, mFrameBufferTextures[i]);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, width, height, 0,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, null);
        // Omit some code
        GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFrameBuffers[i]);
        // Attach the texture to the frame buffer
        GLES20.glFramebufferTexture2D(GLES20.GL_FRAMEBUFFER, GLES20.GL_COLOR_ATTACHMENT0,
                GLES20.GL_TEXTURE_2D, mFrameBufferTextures[i], 0);
        // Omit some code
    }
}
If you are not familiar with the use of the FrameBuffer, please refer to this article:
OpenGL frame buffering practices
if (mMergedFilters != null) {
    int size = mMergedFilters.size();
    // The texture ID converted from the camera's original image
    int previousTexture = textureId;
    for (int i = 0; i < size; i++) {
        GPUImageFilter filter = mMergedFilters.get(i);
        boolean isNotLast = i < size - 1;
        // Any filter but the last draws into a FrameBuffer; the last one draws to the screen
        if (isNotLast) {
            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, mFrameBuffers[i]);
            GLES20.glClearColor(0, 0, 0, 0);
        }
        // Filter drawing code
        if (i == 0) {
            // The first filter uses the camera texture ID and the vertex and texture coordinates passed in as parameters
            filter.onDraw(previousTexture, cubeBuffer, textureBuffer);
        } else if (i == size - 1) {
            // The last filter draws to the screen, flipping vertically when the filter count is even
            filter.onDraw(previousTexture, mGLCubeBuffer,
                    (size % 2 == 0) ? mGLTextureFlipBuffer : mGLTextureBuffer);
        } else {
            // Middle filters draw from the previous texture using the mGLTextureBuffer texture coordinates
            filter.onDraw(previousTexture, mGLCubeBuffer, mGLTextureBuffer);
        }
        if (isNotLast) {
            // Unbind the FrameBuffer, switching back to the screen
            GLES20.glBindFramebuffer(GLES20.GL_FRAMEBUFFER, 0);
            // The next filter reads from the texture just rendered
            previousTexture = mFrameBufferTextures[i];
        }
    }
}
As long as a filter is not the last one, it is bound to a FrameBuffer and drawn into it; the result lands in the texture attached to that FrameBuffer, after which the FrameBuffer is unbound, switching back to the screen.
The last filter is never bound to a FrameBuffer and draws directly to the screen.
There is a detail here, which is the following code:
filter.onDraw(previousTexture, mGLCubeBuffer,
        (size % 2 == 0) ? mGLTextureFlipBuffer : mGLTextureBuffer);
If the last filter is reached and the total number of filters is even, the mGLTextureFlipBuffer texture coordinates are used; otherwise mGLTextureBuffer is used.
// The corresponding texture coordinates are TEXTURE_NO_ROTATION
mGLTextureBuffer.put(TEXTURE_NO_ROTATION).position(0);

// Also TEXTURE_NO_ROTATION, but the final true parameter flips it vertically
float[] flipTexture = TextureRotationUtil.getRotation(Rotation.NORMAL, false, true);
mGLTextureFlipBuffer.put(flipTexture).position(0);
The first filter draws with the vertex and texture coordinates passed in as parameters; the middle filters use the mGLTextureBuffer coordinates, which are TEXTURE_NO_ROTATION.
As mentioned earlier, GPUImage's texture-coordinate origin is the upper-left corner, so every pass drawn with TEXTURE_NO_ROTATION flips the image vertically, and two such flips cancel out. With size filters there are size - 1 FrameBuffer passes before the final draw; when size is even that is an odd number of flips, so the image reaches the last filter upside down and the mGLTextureFlipBuffer coordinates are needed to flip it back. When size is odd, the flips have already cancelled and mGLTextureBuffer suffices. For example, with two filters the first pass flips the image once into the FrameBuffer, and the final pass uses the flipped coordinates to restore it.
Once you understand the implementation of GPUImageFilter and GPUImageFilterGroup, the code of the Renderer is much easier to follow.
The filter is initialized in onSurfaceCreated, its output width and height are set in onSurfaceChanged, and the actual drawing happens in onDrawFrame.
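A condensed sketch of onDrawFrame, based on the GPUImageRenderer source (details such as the SurfaceTexture update are omitted):

@Override
public void onDrawFrame(final GL10 gl) {
    GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT | GLES20.GL_DEPTH_BUFFER_BIT);
    // Drain queued tasks (e.g. the texture upload queued by onPreviewFrame)
    runAll(mRunOnDraw);
    // Draw the camera texture with the current filter
    mFilter.onDraw(mGLTextureId, mGLCubeBuffer, mGLTextureBuffer);
    // Tasks queued to run after the draw
    runAll(mRunOnDrawEnd);
}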
When switching filters, the previous filter is destroyed first, then the new filter is initialized and given the output size.
final GPUImageFilter oldFilter = mFilter;
mFilter = filter;
if (oldFilter != null) {
    oldFilter.destroy();
}
mFilter.init();
GLES20.glUseProgram(mFilter.getProgram());
mFilter.onOutputSizeChanged(mOutputWidth, mOutputHeight);
Taking and saving pictures
Taking a picture in GPUImage means calling the Camera's takePicture method; its callback returns the raw image data captured by the camera, which is then run through the filter and saved.
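A hedged sketch of that flow; saveImage here is a hypothetical helper standing in for the sample app's filter-and-save logic:

mCamera.takePicture(null, null, new Camera.PictureCallback() {
    @Override
    public void onPictureTaken(byte[] data, Camera camera) {
        // Decode the raw JPEG data returned by the camera
        Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0, data.length);
        // Run the bitmap through the current filter off-screen, then save it
        saveImage(bitmap); // hypothetical helper
    }
});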
In the end, the processed image is read back with the glReadPixels method and saved as a Bitmap.
private void convertToBitmap() {
    int[] iat = new int[mWidth * mHeight];
    IntBuffer ib = IntBuffer.allocate(mWidth * mHeight);
    mGL.glReadPixels(0, 0, mWidth, mHeight, GL_RGBA, GL_UNSIGNED_BYTE, ib);
    int[] ia = ib.array();
    // glReadPixels returns rows bottom-up, so flip the image vertically
    for (int i = 0; i < mHeight; i++) {
        for (int j = 0; j < mWidth; j++) {
            iat[(mHeight - i - 1) * mWidth + j] = ia[i * mWidth + j];
        }
    }
    mBitmap = Bitmap.createBitmap(mWidth, mHeight, Bitmap.Config.ARGB_8888);
    mBitmap.copyPixelsFromBuffer(IntBuffer.wrap(iat));
}
Summary
That is roughly the analysis of GPUImage and its filter architecture, but the architecture is not its real essence. What matters most are the shader scripts; study them to learn how image-processing algorithms are implemented in GLSL.
If you are interested in OpenGL, you are welcome to follow the WeChat public account [paper talk] to get the latest articles.