This article is original; if you repost it, please credit the source: juejin.cn/user/393150…

There are already plenty of articles on Android camera development. Today I want to share a few small secrets of Android camera development with you, along with some basics along the way 😄. If you have no background in camera development, I recommend first reading Google's official Camera documentation and the Camera Guide, then combining them with this article; you will get twice the result with half the effort.

The reference code is attached up front. PS: 😊 the blogger thoughtfully hosts it on Gitee so that readers in mainland China can clone it at full speed.

Gitee: Camera – Android

This article mainly covers Camera1 on Android; Camera2 will come in a later update 🙂 😊

1. Start the camera

The usual startup code, taken from the API documentation and many posts around the web:

/** A safe way to get an instance of the Camera object. */
public static Camera getCameraInstance() {
    Camera c = null;
    try {
        c = Camera.open(); // attempt to get a Camera instance
    }
    catch (Exception e){
        // Camera is not available (in use or does not exist)
    }
    return c; // returns null if camera is unavailable
}

However, we usually call this function to obtain the camera instance directly from the main thread:

 @Override
 protected void onCreate(Bundle savedInstanceState) {
     // ... 
     Camera camera = getCameraInstance();
 }

Let's look at how this is implemented in the Android source code, Camera.java:

/**
 * Creates a new Camera object to access the first back-facing camera on the
 * device. If the device does not have a back-facing camera, this returns
 * null.
 * @see #open(int)
 */
public static Camera open() {
    int numberOfCameras = getNumberOfCameras();
    CameraInfo cameraInfo = new CameraInfo();
    for (int i = 0; i < numberOfCameras; i++) {
        getCameraInfo(i, cameraInfo);
        if (cameraInfo.facing == CameraInfo.CAMERA_FACING_BACK) {
            return new Camera(i);
        }
    }
    return null;
}
    
Camera(int cameraId) {
    mShutterCallback = null;
    mRawImageCallback = null;
    mJpegCallback = null;
    mPreviewCallback = null;
    mPostviewCallback = null;
    mUsingPreviewAllocation = false;
    mZoomListener = null;
    Looper looper;
    if ((looper = Looper.myLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else if ((looper = Looper.getMainLooper()) != null) {
        mEventHandler = new EventHandler(this, looper);
    } else {
        mEventHandler = null;
    }
    String packageName = ActivityThread.currentPackageName();
    native_setup(new WeakReference<Camera>(this), cameraId, packageName);
}

Note that if the thread calling open() has no Looper of its own, mEventHandler falls back to the main thread's Looper. From the source we can see that EventHandler is what dispatches the callbacks coming up from the native layer. Normally we want all callbacks on the UI thread so we can drive the page logic directly, but in some special scenarios we can exploit this behavior. Keep this detail in mind; we will use it later.

2. Set the camera 📷 preview mode

2.1 Previewing with SurfaceHolder

Following the official Guide, we use a SurfaceView directly as the preview target.

@Override
protected void onCreate(Bundle savedInstanceState) {
	// ...
    SurfaceView surfaceView = findViewById(R.id.camera_surface_view);
    surfaceView.getHolder().addCallback(this);
}

@Override
public void surfaceCreated(SurfaceHolder holder) {
    // TODO: Connect Camera.
    if (null != mCamera) {
        try {
            mCamera.setPreviewDisplay(holder);
            mCamera.startPreview();
            mHolder = holder;
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}

Run the program and you should see a preview image, though it may have some orientation problems. At least we are getting frames from the camera.

2.2 Previewing with SurfaceTexture

This approach targets the mode where OpenGL ES is used for a GPU-side camera preview. The target view is now a GLSurfaceView. When using it ⚠️, pay attention to three small details:

  1. Basic setup of GLSurfaceView
GLSurfaceView surfaceView = findViewById(R.id.gl_surfaceview);
surfaceView.setEGLContextClientVersion(2); // Enable OpenGL ES 2.0
surfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY); // Enable passive refresh.
surfaceView.setRenderer(this);

The third point, enabling passive refresh, is explained in more detail below.

  2. Create a SurfaceTexture backed by a texture

@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
	// Init Camera
	int[] textureIds = new int[1];
   	GLES20.glGenTextures(1, textureIds, 0);
   	GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, textureIds[0]);
   	// For coordinates outside the texture, clamp to the edge
   	GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
   	GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);
   	// Filtering (mapping texture pixels to coordinates): GL_NEAREST when minifying, GL_LINEAR when magnifying
   	GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
   	GLES20.glTexParameteri(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
   
   	mSurfaceTexture = new SurfaceTexture(textureIds[0]);
   	mCameraTexture = textureIds[0];
   
   	GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, 0);
    
    try {
        // Set the SurfaceTexture as the camera's preview target
        mCamera.setPreviewTexture(mSurfaceTexture);
        mCamera.startPreview();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

The texture created here uses a special OpenGL ES extension target, GLES11Ext.GL_TEXTURE_EXTERNAL_OES. Only this texture type lets developers process the camera content in real time with their own GPU code.

  3. Data-driven refresh

Change the GLSurfaceView continuous refresh mode to refresh only when data changes.

GLSurfaceView surfaceView = findViewById(R.id.gl_surfaceview);
surfaceView.setEGLContextClientVersion(2);
surfaceView.setRenderer(this);
// Add the following Settings to change to passive GL rendering.
// Change SurfaceView render mode to RENDERMODE_WHEN_DIRTY. 
surfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);

We can be notified of data changes as follows:

mSurfaceTexture.setOnFrameAvailableListener(surfaceTexture -> {
	// Data can be displayed while the GL thread works.
	 mSurfaceView.requestRender();
});

Everything else stays the same. The advantage is that the refresh rate now follows the camera's frame rate, rather than a fixed automatic refresh burning GPU power unnecessarily.

2.3 Previewing with YUV (NV21) data

This section focuses on using the YUV data callback for camera preview. The main scenario for this approach is face detection or real-time data processing for other CV algorithms.

2.3.1 Setting the camera's YUV preview callback buffers

This step uses the old-style interface Camera.setPreviewCallbackWithBuffer. One extra step is required when using it: you must hand the camera the byte[] buffers it will fill.

// Set the target preview resolution; here we use the camera's 1280*720 mode directly
parameters.setPreviewSize(previewSize.first, previewSize.second);
// Set the camera NV21 data callback to use the buffer set by the user
mCamera.setPreviewCallbackWithBuffer(this);
mCamera.setParameters(parameters);
// Add four byte[] buffer objects for camera processing.
mCamera.addCallbackBuffer(createPreviewBuffer(previewSize.first, previewSize.second));
mCamera.addCallbackBuffer(createPreviewBuffer(previewSize.first, previewSize.second));
mCamera.addCallbackBuffer(createPreviewBuffer(previewSize.first, previewSize.second));
mCamera.addCallbackBuffer(createPreviewBuffer(previewSize.first, previewSize.second));

Note ⚠️ that if the preview callback is set with Camera.setPreviewCallback instead, the data passed to onPreviewFrame(byte[] data, Camera camera) is allocated internally by the camera.

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    // TODO: Preprocess camera input data
    if (!bytesToByteBuffer.containsKey(data)) {
        Log.d(TAG, "Skipping frame. Could not find ByteBuffer associated with the image "
                + "data from the camera.");
    } else {
        // Because we use setPreviewCallbackWithBuffer, the buffer must be returned to the camera.
        mCamera.addCallbackBuffer(data);
    }
}

Without mCamera.addCallbackBuffer(byte[]), onPreviewFrame stops firing after exactly four callbacks. You will find that the number of callbacks equals the number of buffers added during camera initialization.
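The helper createPreviewBuffer used above is not shown in the snippet; a minimal sketch could look like the following (the sizing mirrors the approach in Google's vision samples, and the +1 byte is a defensive margin). NV21 uses 12 bits per pixel: a full-resolution Y plane plus an interleaved VU plane at quarter resolution.

```java
public class PreviewBuffers {

    // NV21: 8-bit Y plane (width * height) + interleaved VU plane (width * height / 2).
    private static final int NV21_BITS_PER_PIXEL = 12;

    public static byte[] createPreviewBuffer(int width, int height) {
        long sizeInBits = (long) width * height * NV21_BITS_PER_PIXEL;
        // Round up and add one spare byte so the camera never runs off the end.
        return new byte[(int) Math.ceil(sizeInBits / 8.0d) + 1];
    }
}
```

On Android you would normally use ImageFormat.getBitsPerPixel(ImageFormat.NV21) instead of the hard-coded constant.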

2.3.2 Starting the camera preview

Our goal is to render the data returned by onPreviewFrame ourselves, so the mCamera.setPreviewTexture call needs to be removed. We do not want the camera to keep feeding preview frames into the previously set SurfaceTexture, wasting system resources.

So we comment out the mCamera.setPreviewTexture(mSurfaceTexture) call 😂. Code snippet:

try {
    // mCamera.setPreviewTexture(mSurfaceTexture);
    mCamera.startPreview();
} catch (Exception e) {
    e.printStackTrace();
}

Now onPreviewFrame no longer fires at all. A quick look at the documentation explains why.

/**
 * Starts capturing and drawing preview frames to the screen.
 * Preview will not actually start until a surface is supplied
 * with {@link #setPreviewDisplay(SurfaceHolder)} or
 * {@link #setPreviewTexture(SurfaceTexture)}.
 *
 * <p>If {@link #setPreviewCallback(Camera.PreviewCallback)},
 * {@link #setOneShotPreviewCallback(Camera.PreviewCallback)}, or
 * {@link #setPreviewCallbackWithBuffer(Camera.PreviewCallback)} were
 * called, {@link Camera.PreviewCallback#onPreviewFrame(byte[], Camera)}
 * will be called when preview data becomes available.
 *
 * @throws RuntimeException if starting preview fails; usually this would be
 *    because of a hardware or other low-level error, or because release()
 *    has been called on this Camera instance.
 */
public native final void startPreview();

The camera preview will not actually start until an appropriate Surface has been supplied.

Here comes the magic:

/**
 * The dummy surface texture must be assigned a chosen name. Since we never use an OpenGL context,
 * we can choose any ID we want here. The dummy surface texture is not a crazy hack - it is
 * actually how the camera team recommends using the camera without a preview.
 */
private static final int DUMMY_TEXTURE_NAME = 100;


@Override
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
    // ... codes
	SurfaceTexture dummySurfaceTexture = new SurfaceTexture(DUMMY_TEXTURE_NAME);
    try {
        mCamera.setPreviewTexture(dummySurfaceTexture);
    } catch (IOException e) {
        e.printStackTrace();
    }
    // ... codes
}

After this change, the camera's onPreviewFrame starts firing again. This dummy SurfaceTexture keeps the camera working, and by setting:

dummySurfaceTexture.setOnFrameAvailableListener(surfaceTexture -> {
    Log.d(TAG, "dummySurfaceTexture working.");
});

we will find that onFrameAvailable never fires: the system is apparently able to tell for itself that this SurfaceTexture is not a valid render target.

2.3.3 Rendering the YUV data to the SurfaceView

Android's default YUV preview format is NV21, while OpenGL can only draw RGB colors, so a shader is needed for the format conversion. For the conversion algorithm, see nv21_to_rgba_fs.glsl:

#ifdef GL_ES
precision highp float;
#endif
varying vec2 v_texCoord;
uniform sampler2D y_texture;
uniform sampler2D uv_texture;

void main (void) {
    float r, g, b, y, u, v;
    //We had put the Y values of each pixel to the R,G,B components by
    //GL_LUMINANCE, that's why we're pulling it from the R component,
    //we could also use G or B
    y = texture2D(y_texture, v_texCoord).r;
    //We had put the U and V values of each pixel to the A and R,G,B
    //components of the texture respectively using GL_LUMINANCE_ALPHA.
    //Since U,V bytes are interspread in the texture, this is probably
    //the fastest way to use them in the shader
    u = texture2D(uv_texture, v_texCoord).a - 0.5;
    v = texture2D(uv_texture, v_texCoord).r - 0.5;
    //The numbers are just YUV to RGB conversion constants
    r = y + 1.13983*v;
    g = y - 0.39465*u - 0.58060*v;
    b = y + 2.03211*u;
    //We finally set the RGB color of our pixel
    gl_FragColor = vec4(r, g, b, 1.0);
}

The main idea is to split the NV21 data into two textures and let the fragment shader do the color-space conversion back to RGBA.
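As a sanity check, the shader's per-pixel math can be reproduced in plain Java (a sketch for illustration only; it is not part of the rendering path). Inputs are normalized to [0, 1], with U and V centered at 0.5, just as in the shader:

```java
public class YuvMath {

    /** Mirrors the fragment shader: y from the Y texture, u/v from the UV texture. */
    public static float[] yuvToRgb(float y, float u, float v) {
        float uc = u - 0.5f;
        float vc = v - 0.5f;
        float r = y + 1.13983f * vc;
        float g = y - 0.39465f * uc - 0.58060f * vc;
        float b = y + 2.03211f * uc;
        return new float[] { clamp(r), clamp(g), clamp(b) };
    }

    private static float clamp(float x) {
        return Math.max(0f, Math.min(1f, x));
    }
}
```

Neutral chroma (u = v = 0.5) must leave the pixel gray, which is a quick way to verify the constants.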

mYTexture = new Texture();
created = mYTexture.create(mYuvBufferWidth, mYuvBufferHeight, GLES10.GL_LUMINANCE);
if (!created) {
    throw new RuntimeException("Create Y texture fail.");
}

mUVTexture = new Texture();
// GL_LUMINANCE_ALPHA because the UV plane carries two channels per texel.
created = mUVTexture.create(mYuvBufferWidth / 2, mYuvBufferHeight / 2, GLES10.GL_LUMINANCE_ALPHA);
if (!created) {
    throw new RuntimeException("Create UV texture fail.");
}

// ... some logic omitted

//Copy the Y channel of the image into its buffer, the first (width*height) bytes are the Y channel
yBuffer.put(data.array(), 0, mPreviewSize.first * mPreviewSize.second);
yBuffer.position(0);

//Copy the UV channels of the image into their buffer, the following (width*height/2) bytes are the UV channel; the U and V bytes are interspread
uvBuffer.put(data.array(), mPreviewSize.first * mPreviewSize.second, (mPreviewSize.first * mPreviewSize.second)/2);
uvBuffer.position(0);

mYTexture.load(yBuffer);
mUVTexture.load(uvBuffer);

2.3.4 Performance Optimization

The camera's YUV callback rate and the OpenGL ES rendering rate are not necessarily in step, so there is room for optimization. Since this is a live preview, we must make sure the frame currently being rendered is the latest one. We can use a shared resource, pendingFrameData, to synchronize the rendering thread with the camera callback thread and keep the picture fresh.

synchronized (lock) {
	if (pendingFrameData != null) { // Frame data that has not been processed yet; just return it to the camera.
		camera.addCallbackBuffer(pendingFrameData.array());
		pendingFrameData = null;
	}

	pendingFrameData = bytesToByteBuffer.get(data);
	// Notify the processor thread if it is waiting on the next frame (see below).
    // The Demo tells the GLThread to wake up if it handles the wait state.
    lock.notifyAll();
}

// Notify GLSurfaceView that it can refresh
mSurfaceView.requestRender();

One final optimization tip ㊙️ ties back to the Handler mentioned in the camera startup section. If we call Camera.open() from the Android main thread, or from a child thread without a Looper, all camera callback messages end up being processed on the main thread's Looper (Looper.getMainLooper()). If the UI thread is busy with heavy work, this will drag down the preview frame rate, so the best approach is to open the camera from a dedicated child thread.

final ConditionVariable startDone = new ConditionVariable();

new Thread() {
	@Override
	public void run() {
		Log.v(TAG, "start loopRun");
        // Set up a looper to be used by camera.
        Looper.prepare();
        // Save the looper so that we can terminate this thread
        // after we are done with it.
        mLooper = Looper.myLooper();
        mCamera = Camera.open(cameraId);
        Log.v(TAG, "camera is opened");
        startDone.open();
        Looper.loop(); // Blocks forever until Looper.quit() is called.
        if (LOGV) Log.v(TAG, "initializeMessageLooper: quit.");
        }
}.start();

Log.v(TAG, "start waiting for looper");

if (!startDone.block(WAIT_FOR_COMMAND_TO_COMPLETE)) {
    Log.v(TAG, "initializeMessageLooper: start timeout");
    fail("initializeMessageLooper: start timeout");
}

3. The camera angle problem

The camera's preview orientation depends on how the camera sensor is mounted. That topic deserves an article of its own; here I will go straight to the code.

private void setRotation(Camera camera, Camera.Parameters parameters, int cameraId) {
	WindowManager windowManager = (WindowManager)getSystemService(Context.WINDOW_SERVICE);
    int degrees = 0;
    int rotation = windowManager.getDefaultDisplay().getRotation();
    switch (rotation) {
        case Surface.ROTATION_0:
            degrees = 0;
            break;
        case Surface.ROTATION_90:
            degrees = 90;
            break;
        case Surface.ROTATION_180:
            degrees = 180;
            break;
        case Surface.ROTATION_270:
            degrees = 270;
            break;
        default:
            Log.e(TAG, "Bad rotation value: " + rotation);
    }

    Camera.CameraInfo cameraInfo = new Camera.CameraInfo();
    Camera.getCameraInfo(cameraId, cameraInfo);

    int angle;
    int displayAngle;
    if (cameraInfo.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
        angle = (cameraInfo.orientation + degrees) % 360;
        displayAngle = (360 - angle) % 360; // compensate for it being mirrored
    } else { // back-facing
        angle = (cameraInfo.orientation - degrees + 360) % 360;
        displayAngle = angle;
    }

    // This corresponds to the rotation constants.
    mRotation = angle;

    camera.setDisplayOrientation(displayAngle);
    parameters.setRotation(angle);
}
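The angle arithmetic above can be checked in plain Java, independent of any Android classes (a sketch; sensor orientations of 90 for back cameras and 270 for front cameras are typical but device-specific):

```java
public class RotationMath {

    /** Display angle for a front camera: add, then compensate for the mirror. */
    public static int frontDisplayAngle(int sensorOrientation, int displayDegrees) {
        int angle = (sensorOrientation + displayDegrees) % 360;
        return (360 - angle) % 360;
    }

    /** Display angle for a back camera: subtract, normalized to [0, 360). */
    public static int backDisplayAngle(int sensorOrientation, int displayDegrees) {
        return (sensorOrientation - displayDegrees + 360) % 360;
    }
}
```

For a typical phone in portrait (display rotation 0), a back sensor mounted at 90 and a front sensor mounted at 270 both yield a display angle of 90.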

In the YUV preview mode, however, this does not help, because these angle parameters do not affect the bytes delivered to PreviewCallback#onPreviewFrame. The source comments confirm this.

 /**
  * Set the clockwise rotation of preview display in degrees. This affects
  * the preview frames and the picture displayed after snapshot. This method
  * is useful for portrait mode applications. Note that preview display of
  * front-facing cameras is flipped horizontally before the rotation, that
  * is, the image is reflected along the central vertical axis of the camera
  * sensor. So the users can see themselves as looking into a mirror.
  *
  * <p>This does not affect the order of byte array passed in
  * {@link PreviewCallback#onPreviewFrame}, JPEG pictures, or recorded
  * videos. This method is not allowed to be called during preview.
  *
  * <p>If you want to make the camera image show in the same orientation as
  * the display, you can use the following code.
  * <pre>
  * public static void setCameraDisplayOrientation(Activity activity,
  *         int cameraId, android.hardware.Camera camera) {
  *     android.hardware.Camera.CameraInfo info =
  *             new android.hardware.Camera.CameraInfo();
  *     android.hardware.Camera.getCameraInfo(cameraId, info);
  *     int rotation = activity.getWindowManager().getDefaultDisplay()
  *             .getRotation();
  *     int degrees = 0;
  *     switch (rotation) {
  *         case Surface.ROTATION_0: degrees = 0; break;
  *         case Surface.ROTATION_90: degrees = 90; break;
  *         case Surface.ROTATION_180: degrees = 180; break;
  *         case Surface.ROTATION_270: degrees = 270; break;
  *     }
  *
  *     int result;
  *     if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
  *         result = (info.orientation + degrees) % 360;
  *         result = (360 - result) % 360;  // compensate the mirror
  *     } else {  // back-facing
  *         result = (info.orientation - degrees + 360) % 360;
  *     }
  *     camera.setDisplayOrientation(result);
  * }
  * </pre>
  *
  * <p>Starting from API level 14, this method can be called when preview is
  * active.
  *
  * <p><b>Note: </b>Before API level 24, the default value for orientation is 0. Starting in
  * API level 24, the default orientation will be such that applications in forced-landscape mode
  * will have correct preview orientation, which may be either a default of 0 or
  * 180. Applications that operate in portrait mode or allow for changing orientation must still
  * call this method after each orientation change to ensure correct preview display in all
  * cases.</p>
  *
  * @param degrees the angle that the picture will be rotated clockwise.
  *                Valid values are 0, 90, 180, and 270.
  * @throws RuntimeException if setting orientation fails; usually this would
  *    be because of a hardware or other low-level error, or because
  *    release() has been called on this Camera instance.
  * @see #setPreviewDisplay(SurfaceHolder)
  */
  public native final void setDisplayOrientation(int degrees);

To get the right angle in the YUV rendering path, we need to change the coordinates ourselves. Here I used a rather brute-force method: adjusting the texture coordinates directly.

    private static final float FULL_RECTANGLE_COORDS[] = {
            -1.0f, -1.0f,   // 0 bottom left
             1.0f, -1.0f,   // 1 bottom right
            -1.0f,  1.0f,   // 2 top left
             1.0f,  1.0f,   // 3 top right
    };
    // FIXME: To draw at the correct angle, the texture coordinates are rotated by 90 degrees,
    // which also includes a mirror of the texture data.
    private static final float FULL_RECTANGLE_TEX_COORDS[] = {
            1.0f, 1.0f,     // 0 bottom left
            1.0f, 0.0f,     // 1 bottom right
            0.0f, 1.0f,     // 2 top left
            0.0f, 0.0f      // 3 top right
    };

Restart the program and the preview looks perfect.

Conclusion

Android camera development ultimately comes down to stepping through the pitfalls one by one. For readers who are still learning, I recommend studying the extra material in my references together with the camera source code; you will gain a great deal. I also hope this write-up of my experience helps you learn. 🍻 🍻 🍻

The resources

  1. Grafika
  2. Firebase Quick Start Samples
  3. Android Camera CTS