Hello everyone, today I will introduce how to use OpenGL to render camera effects on Android.

What is camera effect rendering? "Effect" here is a general term: we take the frames captured by the camera, modify them in some way, add something to them, and display the result. Beautification, brightening, stickers overlaid on the picture, zooming in and out, jitter and so on, as seen in many popular apps, are all examples of camera effects.

Camera effect rendering generally involves three steps: first capture the camera frames, then apply effects to them, and finally display the result.

Let's take a look at how to capture data from the camera. On Android, the camera can return frame data in two ways: as a byte array, or as a texture.

The byte array returned by the first method can be processed directly on the CPU, converted into a bitmap, and displayed in an ImageView. This approach is relatively inefficient, because the CPU is far slower than the GPU at image processing and rendering, but it has a low learning threshold since it does not require OpenGL.
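For reference, the byte array delivered by this method is in NV21 (YUV) format by default, so displaying it requires a YUV-to-RGB conversion on the CPU. A minimal sketch of the per-pixel math using the standard BT.601 video-range formula (the helper name is my own):

```kotlin
import kotlin.math.roundToInt

// Convert one YUV (NV21) pixel to RGB using the standard BT.601 video-range formula.
// Y is in [16, 235], U and V are centered at 128; results are clamped to [0, 255].
fun yuvToRgb(y: Int, u: Int, v: Int): Triple<Int, Int, Int> {
    val yf = 1.164f * (y - 16)
    val r = (yf + 1.596f * (v - 128)).roundToInt().coerceIn(0, 255)
    val g = (yf - 0.813f * (v - 128) - 0.391f * (u - 128)).roundToInt().coerceIn(0, 255)
    val b = (yf + 2.018f * (u - 128)).roundToInt().coerceIn(0, 255)
    return Triple(r, g, b)
}
```

Running this formula per pixel over a full preview frame on the CPU is exactly why this path is slow compared with letting the GPU sample the texture directly.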

The second method delivers the data directly into a texture, so everything can be done on the GPU with OpenGL. It is very efficient, and most commercial apps use it; this article covers this method. For background, you can refer to my Android OpenGL ES 2.0 hands-on teaching series and my OpenGL ES advanced series of articles.

To use the camera, we first need to open it. The camera permission is of course essential; I won't go over permission handling here, as there are already many articles covering it:

val cameraId = getCameraId(Camera.CameraInfo.CAMERA_FACING_BACK)
camera = Camera.open(cameraId)
...
// Find the id of the camera facing the requested direction, or -1 if none exists.
private fun getCameraId(facing: Int): Int {
    val numberOfCameras = Camera.getNumberOfCameras()
    for (i in 0 until numberOfCameras) {
        val info = Camera.CameraInfo()
        Camera.getCameraInfo(i, info)
        if (info.facing == facing) {
            return i
        }
    }
    return -1
}

First we need to get the id of the camera to open. Most phones have two cameras, front and rear; here we get the id of the rear camera and open it. Note that "open" does not mean the camera starts capturing frames; it only means we have acquired the camera object. But even though the camera has not started working yet, if we fail to release it, any other application that tries to open it will fail.

Once you have the camera object, you need to set some parameters. There are many parameters you can set, such as preview resolution, focus mode, photo resolution, etc. Here we only set the preview resolution and the display angle.

The camera preview resolution must be chosen from the supported list; arbitrary values cannot be set. In practice you would usually implement selection logic that matches the screen: on a 720p screen, for example, choosing a 1080p preview resolution makes no sense and wastes resources. For simplicity, here I just take the first entry in the supported list.

private fun setPreviewSize(parameters: Camera.Parameters) {
    parameters.setPreviewSize(
        parameters.supportedPreviewSizes[0].width,					   
        parameters.supportedPreviewSizes[0].height
    )    
}
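Taking the first supported size is fine for a demo; in a real app you would pick the supported size closest to a target. A minimal sketch of such selection logic as a pure function (the name and signature are my own, operating on plain width/height pairs rather than `Camera.Size`):

```kotlin
import kotlin.math.abs

// Pick from the supported sizes the one whose pixel count is closest to the target.
fun chooseClosestSize(
    supported: List<Pair<Int, Int>>,   // (width, height) pairs
    targetWidth: Int,
    targetHeight: Int
): Pair<Int, Int> {
    val targetPixels = targetWidth * targetHeight
    return supported.minByOrNull { (w, h) -> abs(w * h - targetPixels) }
        ?: error("no supported sizes")
}
```

In a real app you might also weight the aspect ratio, since a size with the right pixel count but the wrong ratio forces heavier cropping later.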

Here's how to set the rotation angle:

val info = Camera.CameraInfo()
Camera.getCameraInfo(cameraId, info)
camera.setDisplayOrientation(info.orientation)

This rotation angle affects the orientation of the image we see. In general this setting works, but some models have compatibility problems, which require special handling for those specific models.
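For reference, the more robust calculation recommended in the Android `Camera.setDisplayOrientation` documentation combines the camera's mounting orientation with the current display rotation, and mirrors the result for the front camera. The core arithmetic, written as a pure function (the wrapper name is my own):

```kotlin
// Compute the value to pass to setDisplayOrientation, following the calculation
// from the Android Camera.setDisplayOrientation documentation.
// cameraOrientation comes from Camera.CameraInfo.orientation;
// displayDegrees is the current display rotation: 0, 90, 180 or 270.
fun computeDisplayOrientation(
    frontFacing: Boolean,
    cameraOrientation: Int,
    displayDegrees: Int
): Int {
    return if (frontFacing) {
        // The front camera preview is mirrored: compensate, then flip the sign.
        (360 - (cameraOrientation + displayDegrees) % 360) % 360
    } else {
        (cameraOrientation - displayDegrees + 360) % 360
    }
}
```

Using only `info.orientation`, as in the snippet above, is equivalent to this with `displayDegrees = 0`, i.e. it assumes the activity is locked to portrait.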

As mentioned above, the camera will deliver the captured frames into a texture, which is set with the setPreviewTexture method:

camera.setPreviewTexture(surfaceTexture)

Note that what is passed here is a SurfaceTexture, not a texture. A texture is just an Int value, while a SurfaceTexture is an object of the SurfaceTexture class, created from a texture; you can think of it as a wrapper around the texture. Both the texture and the SurfaceTexture are created by us and then handed to the camera.

Creating the texture requires an OpenGL environment; for details see my article OpenGL ES EGL and GL Thread. For OpenGL rendering, GLSurfaceView is normally used, since it comes with an OpenGL environment built in and we don't need to create one ourselves. Here, however, I use TextureView, which has no OpenGL environment of its own, so I have encapsulated one for it.

The other big advantage of TextureView over GLSurfaceView is that it behaves like a regular view. For example, we can display it in a RecyclerView item just like a regular view without any problems, whereas doing that with a GLSurfaceView causes issues; similarly, moving a GLSurfaceView around causes issues. The root cause is that GLSurfaceView does not draw through the view tree the way a regular view does.

So I wrapped TextureView with an OpenGL environment into a class called GLTextureView, which is more capable than GLSurfaceView. The view used to display the camera rendering, GLCameraView, inherits from GLTextureView. It can bind a custom camera class that implements the ICamera interface, which provides GLCameraView with methods for operating the camera and for obtaining the information it needs.

Our texture is created in this OpenGL environment. Note that the texture created here is not an ordinary texture but an OES texture.

OES texture creation:

fun createOESTexture(): Int {
    val textures = IntArray(1)
    GLES30.glGenTextures(textures.size, textures, 0)
    GLES30.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textures[0])
    GLES30.glTexParameteri(
        GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
        GLES30.GL_TEXTURE_WRAP_S,
        GLES30.GL_CLAMP_TO_EDGE
    )
    GLES30.glTexParameteri(
        GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
        GLES30.GL_TEXTURE_WRAP_T, GLES30.GL_CLAMP_TO_EDGE
    )
    GLES30.glTexParameteri(
        GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
        GLES30.GL_TEXTURE_MIN_FILTER, GLES30.GL_LINEAR
    )
    GLES30.glTexParameteri(
        GLES11Ext.GL_TEXTURE_EXTERNAL_OES,
        GLES30.GL_TEXTURE_MAG_FILTER, GLES30.GL_LINEAR
    )
    GLES30.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, 0)
    return textures[0]
}

Once the SurfaceTexture is set, we need to set a callback on it so that we know when the camera has delivered a frame:

...
st?.setOnFrameAvailableListener(this)
...
override fun onFrameAvailable(surfaceTexture: SurfaceTexture?) {
    ...
    surfaceTexture?.updateTexImage()
    ...
}

So each time the camera produces a frame of data, onFrameAvailable is called once to tell us that data is available. Note that this only tells us the data is ready; it does not mean the data has already been written into the texture. We still need to call updateTexImage to update the camera frame into the texture. Since this step operates on the texture, updateTexImage must be called on the GL thread.

Is onFrameAvailable called on the GL thread? Not necessarily; it depends on where you create the SurfaceTexture. If the thread that creates the SurfaceTexture has a Looper, the SurfaceTexture holds that Looper, and onFrameAvailable is delivered through a Handler created on it, so the callback runs on that thread. If the creating thread has no Looper, the callback runs on the main thread. Therefore, onFrameAvailable runs on the GL thread only if the thread that created the SurfaceTexture is a GL thread with a Looper; otherwise it does not, and updateTexImage cannot be called directly inside onFrameAvailable.

In my code, I make sure that the onFrameAvailable callback comes from the GL thread, so I call updateTexImage directly inside.

With all this set up, we call startPreview and the camera actually starts working:

camera.startPreview()

The texture is then used as the input to render the effects. Here I use one of my libraries, FunRenderer, which encapsulates OpenGL and makes it very easy to use.

As mentioned above, we wrapped a GLCameraView to display the render result. It exposes three callback methods, used for initialization, per-frame rendering, and release respectively:

interface RenderCallback {

    fun onInit()
    fun onRenderFrame(oesTexture: Int, stMatrix: FloatArray, cameraPreviewSize: Size, surfaceSize: Size)
    fun onRelease()

}

We use FunRenderer in these three methods:

val cameraWrapper = CameraWrapper()
cameraView.bindCamera(cameraWrapper)
cameraView.renderCallback = object : GLCameraView.RenderCallback {

    private val oes2RGBARenderer = OES2RGBARenderer()
    private val cropRenderer = CropRenderer()
    private val effectRenderer = TestEffectRenderer()
    private val screenRenderer = ScreenRenderer()
    private lateinit var renderChain: RenderChain

    override fun onInit() {
        renderChain = RenderChain.create()
        .addRenderer(oes2RGBARenderer)
        .addRenderer(cropRenderer)
        .addRenderer(effectRenderer)
        .addRenderer(screenRenderer)
        renderChain.init()
    }

    override fun onRenderFrame(oesTexture: Int, stMatrix: FloatArray, cameraPreviewSize: Size, surfaceSize: Size) {
        GLES30.glClearColor(0f, 0f, 0f, 1f)
        GLES30.glClear(GLES30.GL_COLOR_BUFFER_BIT)
        val input = Texture(oesTexture, cameraPreviewSize.height, cameraPreviewSize.width, false)
        val data = mutableMapOf<String, Any>()
        data[Keys.ST_MATRIX] = stMatrix
        data[Keys.CROP_RATIO] = surfaceSize.width.toFloat() / surfaceSize.height
        data[Keys.SURFACE_WIDTH] = surfaceSize.width
        data[Keys.SURFACE_HEIGHT] = surfaceSize.height
        renderChain.render(input, data)
    }

    override fun onRelease() {
        renderChain.release()
    }

}

The operations performed per frame here are: OES to RGBA conversion, cropping, a simple effect, and rendering to the screen.
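The chain-of-renderers idea itself is simple: each renderer takes the previous renderer's output as its input. Here is a minimal pure-Kotlin sketch of the pattern (my own illustration, not FunRenderer's actual API; textures are stood in for by plain strings):

```kotlin
// Each renderer transforms an input "frame" into an output "frame".
fun interface Renderer {
    fun render(input: String): String
}

// A chain simply folds the frame through each renderer in order.
class RenderChain(private val renderers: List<Renderer>) {
    fun render(input: String): String =
        renderers.fold(input) { frame, renderer -> renderer.render(frame) }
}
```

In the real pipeline the "frames" are textures and each intermediate renderer draws into an offscreen framebuffer, but the control flow is the same: output of one stage becomes input of the next.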

As mentioned earlier, the frame data returned by the camera is carried not by a normal texture but by an OES texture, so the first step is to convert it to an RGBA texture via OES2RGBARenderer.

We then perform a cropping step with CropRenderer, because the preview resolution we chose may not match the aspect ratio of the display area. If we skip cropping and force a full fill, the image will be distorted.
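The crop itself is just aspect-ratio arithmetic: keep the target ratio and trim the excess from the larger dimension, centered. A sketch as a pure function (names are my own):

```kotlin
import kotlin.math.roundToInt

// Compute the centered crop size (in source pixels) that matches the target aspect ratio.
fun computeCenterCrop(srcWidth: Int, srcHeight: Int, targetRatio: Float): Pair<Int, Int> {
    val srcRatio = srcWidth.toFloat() / srcHeight
    return if (srcRatio > targetRatio) {
        // Source is wider than the target: trim the width.
        (srcHeight * targetRatio).roundToInt() to srcHeight
    } else {
        // Source is taller than the target: trim the height.
        srcWidth to (srcWidth / targetRatio).roundToInt()
    }
}
```

In the renderer the same math is expressed as texture-coordinate offsets rather than pixel sizes, but the ratio comparison is identical.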

Then we apply a simple effect. TestEffectRenderer inherits SimpleRenderer, and I implement a simple effect with a simple fragment shader:

#version 300 es
precision mediump float;
in vec2 v_textureCoordinate;
layout(location = 0) out vec4 fragColor;
uniform sampler2D u_texture;
void main() {
    vec4 c = texture(u_texture, v_textureCoordinate);
    c.b = 0.5;
    fragColor = c;
}

Here I simply set the blue channel to 0.5, which gives the image a bluish tint, as if a filter had been applied. Real color filters are much more complex than this and are generally implemented with a LUT; this is just a simple example.
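For context, a LUT (lookup table) filter precomputes the output color for every input value and stores it in a table (in shaders, usually a small image sampled per pixel), so arbitrarily complex color curves cost only a lookup at render time. The idea reduced to a single channel in pure Kotlin (my own illustration):

```kotlin
// Build a 256-entry lookup table for one channel from an arbitrary curve,
// then apply it per pixel with a simple array index instead of re-running the math.
fun buildLut(curve: (Int) -> Int): IntArray =
    IntArray(256) { curve(it).coerceIn(0, 255) }

fun applyLut(channel: Int, lut: IntArray): Int = lut[channel]
```

A real color LUT maps all three channels at once, typically as a 3D table flattened into a 2D texture.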

Finally, ScreenRenderer renders the result to the screen. In practice, the camera effects you see are nothing more than various shaders combined with various rendering steps.

Let’s take a look at the effect:

I packaged a library, HiCamera (github.com/kenneycode/…), and in its demo I used my FunRenderer (github.com/kenneycode/…). You can extend FunRenderer's SimpleRenderer or write your own renderers. The code in this article is also in the HiCamera demo.

Thanks for reading!