
This article was published in the Cloud+ Community column by the Qzone development team.

Recently, I was responsible for developing a feature involving the Android camera. The new feature lets users quickly take photos at certain aspect ratios (1:1 or 3:4) with the phone's camera, and also supports adding stickers to the resulting photos. I had no prior experience with Android camera development, so the process cost me a lot of time and energy. This article summarizes the relevant knowledge, the development process, and the pitfalls that are easy to run into, in the hope of helping anyone who takes up Android camera development in the future to get started quickly, save time, and avoid detours.

1. Two ways to develop camera applications on Android

The Android system provides two ways to use the phone's camera to implement shooting. One is to invoke the system camera component directly with an Intent. This way is fast and convenient, and suits scenarios where you just need the photo, such as uploading it to an album or posting it to Weibo or WeChat Moments. The other is to build a custom camera with the camera API. This way suits scenarios that need a customized camera interface or special camera features, such as cropping, filters, stickers, emoticons, or location tags. This article focuses on how to use the camera API to build a custom camera.

2. Key classes in the camera API

The following key classes and interfaces are involved in implementing the shooting function through the camera API:

Camera: The primary class for managing and operating camera resources. It provides a complete camera interface, supporting switching camera resources, setting preview/picture sizes, setting aperture, exposure, focus, and other parameters, and obtaining preview/capture frame data. Its main methods are:

  • open(): Gets a camera instance.
  • setPreviewDisplay(SurfaceHolder): Binds the surface on which the preview image is drawn. A Surface is a handle to the raw buffer of a screen window; through it you can obtain the corresponding canvas and draw a View on the screen. The Camera is connected to the Surface through the SurfaceHolder; once they are connected, the preview frame data obtained by the Camera can be displayed on the screen via the Surface.
  • setParameters(): Sets camera parameters, including front/rear camera, flash mode, focus mode, and preview and picture sizes.
  • startPreview(): Starts the preview, displaying the preview frame data from the camera hardware on the bound surface.
  • stopPreview(): Stops the preview, shutting down the camera's frame data transfer and the drawing on the surface.
  • release(): Releases the Camera instance.
  • takePicture(Camera.ShutterCallback shutter, Camera.PictureCallback raw, Camera.PictureCallback jpeg): Takes a photo. It has three callback parameters: shutter is invoked when the shutter fires, raw delivers the raw photo data, and jpeg delivers the image data compressed into JPEG format.

SurfaceView: The class used to draw the camera preview image, giving the user a real-time preview. Ordinary views and their derived classes share a single surface, and all drawing must be done in the UI thread. SurfaceView is a special view: it does not share that surface with ordinary views but holds an independent one internally, and it manages the surface's format, size, and display position. Because the UI thread also handles other interaction logic, updating an ordinary View cannot guarantee speed and frame rate, whereas a SurfaceView, which holds its own surface and can be drawn from a separate thread, provides a higher frame rate. The custom camera preview, which demands fast updates and a high frame rate, is therefore well suited to a SurfaceView.

SurfaceHolder: An abstract interface for controlling the surface. It can control the surface's size and format, modify its pixels, and monitor its changes. Its typical use is with a SurfaceView: the SurfaceView obtains a SurfaceHolder instance through its getHolder() method, and the holder manages and listens to the surface's state.

The SurfaceHolder.Callback interface is responsible for listening for surface state changes, and has three methods:

  • surfaceCreated(SurfaceHolder holder): Invoked immediately after the surface is created. In a custom camera, you can override this method to call Camera.open() and Camera.setPreviewDisplay(), obtaining the camera resource and connecting the camera to the surface.
  • surfaceChanged(SurfaceHolder holder, int format, int width, int height): Called when the surface's format or size changes. In a custom camera, you can override this method to call Camera.startPreview(), so that the camera's preview frames flow to the surface and the preview image is displayed in real time.
  • surfaceDestroyed(SurfaceHolder holder): Called just before the surface is destroyed. In a custom camera, you can override this method to call Camera.stopPreview() and Camera.release(), stopping the preview and releasing the camera resource.

3. The development process of a custom camera

To build a custom camera application, you usually need to complete the following steps; the flow is shown in Figure 1:

  • Check and access camera resources: Check whether the phone has a camera; if so, request access to it.
  • Create a preview class: A class that inherits from SurfaceView and implements the SurfaceHolder.Callback interface, used to display the live camera preview.
  • Create a preview layout: With the preview class in place, create a layout file that combines the preview image with the designed user interface controls.
  • Set up capture listeners: Bind listeners to the UI controls so they respond to user actions, such as pressing a button to start taking a photo.
  • Take the photo and save the file: Convert the captured image data into a bitmap file, and save the final output in a commonly used format.
  • Release the camera resource: The camera is a shared resource whose life cycle must be managed carefully. When the camera is no longer in use, the application must release it correctly to avoid conflicts when other programs access it.

Figure 1. The development process of a custom camera

The corresponding code can be divided into three steps:

Step 1: Add the camera-related permissions to AndroidManifest.xml, declaring the following:
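The original declarations were shown in an image; a typical set looks like the following (a sketch to adapt to your app's actual needs — for example, WRITE_EXTERNAL_STORAGE is only needed if photos are saved to external storage):

```xml
<!-- Permission to access the camera hardware -->
<uses-permission android:name="android.permission.CAMERA" />
<!-- Only needed if photos are saved to external storage -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />

<!-- Declare camera features so app stores can filter incompatible devices -->
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" android:required="false" />
```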

Step 2: Write the camera helper class CameraOperationHelper. It uses the singleton pattern to manage the camera resource centrally, wraps the direct camera API calls, and exposes callback interfaces for the Activity's custom camera UI. Its main functions include creating/releasing the camera, connecting/disconnecting the preview surface, starting/stopping the preview, taking pictures, autofocus, switching between front and rear cameras, and switching flash modes. For details, refer to the official API documentation.

Step 3: Write the custom camera Activity. This mainly customizes the camera interface and implements the UI interaction logic, such as button click handling, icon resource switching, and the lens-size switching animation. You need to declare a SurfaceView object to display the camera preview in real time, and use the SurfaceHolder and its Callback interface to manage the connection between the surface and the camera resource and to show/close the camera preview image.

4. Some pitfalls encountered during development

Here are some of the pitfalls I ran into while developing the custom camera:

1. When the Activity is set to portrait, the SurfaceView preview image is rotated 90 degrees.

To understand this issue, first consider a few orientation concepts on Android phones:

Screen orientation: On Android, the top-left corner of the screen is the origin (0,0) of the coordinate system; positive X points right from the origin and positive Y points down.

Camera sensor orientation: The image data of the phone camera comes from the camera hardware's image sensor. Once the sensor is mounted in the phone it has a default view direction, as shown in Figure 2. Its coordinate origin is at the upper-left corner of the phone held in landscape, which matches the screen X direction of a landscape app; in other words, it is 90 degrees off from the screen X direction of a portrait app.

Figure 2. Schematic diagram of the camera sensor orientation

Camera preview orientation: Since the screen can be rotated through 360 degrees, to ensure users see a “correct” preview however they rotate the phone (“correct” meaning the same as what the human eye sees), the Android system rotates the data collected by the image sensor according to the current screen orientation before handing it to the display system, so the preview always looks “correct.” You can set the preview orientation with setDisplayOrientation() in the camera API. By default this value is 0, matching the image sensor. For a landscape app the preview image is therefore not rotated, because the screen orientation matches the preview orientation. For a portrait app, however, the screen orientation is perpendicular to the default preview orientation, so the preview appears rotated 90 degrees. To get a correct preview, you must use the API to rotate the camera's preview orientation by 90 degrees, keeping it consistent with the screen orientation, as shown in Figure 3.

Figure 3. Schematic diagram of camera preview direction

(The red arrow is the preview direction; the blue arrow is the screen direction.)

Camera photo orientation: When the photo button is pressed, the photo is generated directly from the data collected by the image sensor and stored on the SD card. The photo orientation is therefore consistent with the sensor orientation.
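The rotation logic above can be captured in a small pure function, modeled on the well-known setCameraDisplayOrientation() snippet from the Android Camera documentation (a sketch: `cameraOrientation` would come from CameraInfo.orientation and `displayDegrees` from the current Display rotation):

```java
public class CameraMath {
    /**
     * Computes the value to pass to Camera.setDisplayOrientation() so the
     * preview appears upright, following the algorithm in the Android docs.
     *
     * @param cameraOrientation CameraInfo.orientation (0, 90, 180, or 270)
     * @param displayDegrees    current screen rotation in degrees (0, 90, 180, or 270)
     * @param frontFacing       true for the front camera
     */
    public static int displayOrientation(int cameraOrientation,
                                         int displayDegrees,
                                         boolean frontFacing) {
        if (frontFacing) {
            int result = (cameraOrientation + displayDegrees) % 360;
            return (360 - result) % 360; // compensate for the mirror effect
        }
        return (cameraOrientation - displayDegrees + 360) % 360;
    }
}
```

For a typical portrait app (displayDegrees = 0) with a rear sensor mounted at 90 degrees, this returns 90, which is exactly the rotation described above.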

2. Stretching distortion in the SurfaceView preview and in captured photos

Before addressing this issue, let me introduce a few size concepts.

SurfaceView size: The size of the view used to display the camera preview in the custom camera app; when it fills the whole screen, this is the screen size. The preview image displayed by the SurfaceView is referred to here as the phone preview image.

PreviewSize: The size of the preview frame data provided by the camera hardware. The preview frame data is passed to the SurfaceView to display the preview. The image corresponding to this preview frame data is referred to here as the camera preview image.

PictureSize: The size of the captured frame data provided by the camera hardware. The frame data can be turned into a bitmap file and saved as a .jpg or .png image. The image corresponding to this captured frame data is referred to here as the camera capture image. Figure 4 illustrates the relationship between these images and photos: the phone preview image is what is shown directly to the user and is generated from the camera preview image, while the photo's data comes from the camera capture image.

Figure 4. Relationship between the several images

Here are three stretching artifacts I encountered during development:

1. Objects in the mobile phone preview screen are stretched and deformed.

2. Objects in the photos are stretched and deformed.

3. When you tap to take a picture, the phone preview freezes briefly; the frozen frame is stretched, and the image returns to normal once the preview resumes.

The cause of artifact 1 is that the SurfaceView and the PreviewSize have inconsistent aspect ratios. Since the phone preview image is the camera preview image scaled to the SurfaceView size, a mismatched aspect ratio inevitably distorts the image. The cause of the latter two artifacts is an aspect-ratio mismatch between PreviewSize and PictureSize; from the information I found, the specifics are related to the underlying camera hardware implementation on some phones. In short, to avoid these distortions, it is best to keep the aspect ratios of all three — SurfaceView, PreviewSize, and PictureSize — the same during development. Concretely, first call Camera.getSupportedPreviewSizes() and Camera.getSupportedPictureSizes() to obtain all preview and picture sizes supported by the camera hardware, then choose sizes that match the SurfaceView's aspect ratio and are of suitable resolution, and apply them with Camera.setParameters(). Note: camera hardware on the market generally supports the mainstream 4:3 or 16:9 sizes, so the SurfaceView size should not be too unusual; it is best to stick to these aspect ratios.
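The selection logic can be sketched as a pure function (a sketch, using a simple Size value class standing in for Camera.Size; in real code you would pass in the lists returned by getSupportedPreviewSizes()/getSupportedPictureSizes()):

```java
import java.util.List;

public class SizeChooser {
    /** Minimal stand-in for android.hardware.Camera.Size. */
    public static class Size {
        public final int width, height;
        public Size(int width, int height) { this.width = width; this.height = height; }
    }

    /**
     * Picks the supported size whose aspect ratio matches the target
     * (within a small tolerance) and whose height is closest to targetHeight.
     * Returns null if no size matches the ratio.
     */
    public static Size bestMatch(List<Size> supported, double targetRatio, int targetHeight) {
        final double tolerance = 0.05;
        Size best = null;
        int bestDiff = Integer.MAX_VALUE;
        for (Size s : supported) {
            double ratio = (double) s.width / s.height;
            if (Math.abs(ratio - targetRatio) > tolerance) continue; // wrong aspect ratio
            int diff = Math.abs(s.height - targetHeight);
            if (diff < bestDiff) { best = s; bestDiff = diff; }
        }
        return best;
    }
}
```

You would call this twice, once for the preview size and once for the picture size, with the same target ratio taken from the SurfaceView, so all three aspect ratios stay consistent.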

3. Various crashes

The reasons for the first two crashes are as follows: before focusing or taking a photo, the camera hardware must be connected to the surface, the camera preview must be running, and the surface must already be receiving preview data. The runtime exception occurs if you call Camera.autoFocus() or Camera.takePicture() before Camera.setPreviewDisplay() has been executed or Camera.startPreview() has been called. In the custom camera code, before executing Camera.autoFocus() or Camera.takePicture() in the button handlers, always check that the camera has a preview SurfaceView set and that the preview is running. One way to track the preview state: Camera.setPreviewCallback() registers a preview-frame callback that is invoked whenever the SurfaceView receives camera preview frame data, so inside it you can set flags indicating whether focusing and taking pictures are allowed.
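The guard flags described above can be isolated into a tiny state holder (a plain-Java sketch; in a real helper you would set the frame flag inside the Camera.setPreviewCallback callback and consult canTakePicture() before calling Camera.autoFocus()/takePicture()):

```java
public class PreviewGate {
    private volatile boolean previewStarted;        // startPreview() has been called
    private volatile boolean previewFrameReceived;  // onPreviewFrame() has fired at least once
    private volatile boolean taking;                // a takePicture() is in flight

    public void onPreviewStarted() { previewStarted = true; }
    public void onPreviewFrame()   { previewFrameReceived = true; }
    public void onPreviewStopped() { previewStarted = false; previewFrameReceived = false; }

    /** Focus/shoot are only safe once preview frames are actually flowing. */
    public boolean canTakePicture() {
        return previewStarted && previewFrameReceived && !taking;
    }

    /** Returns true if the caller won the right to take a picture (debounces double taps). */
    public synchronized boolean tryBeginTakePicture() {
        if (!canTakePicture()) return false;
        taking = true;
        return true;
    }

    /** Call after the JPEG callback has fired and the preview has been restarted. */
    public synchronized void endTakePicture() { taking = false; }
}
```

The `taking` flag also covers the button-debouncing point discussed next: a second tap while a capture is in flight is simply rejected.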

Also note that Camera.takePicture() internally stops the preview (as Camera.stopPreview() does) in order to obtain the still frame data, so the preview freezes. If the user taps the button again at that moment, Camera.takePicture() is called again and the crash above also occurs, so during development you should also guard against rapid repeated taps on the photo button.

The third crash involves image cropping. Because 1:1 and 4:3 lens sizes are supported, the preview needs to be cropped. Since this is a portrait app, the coordinate system of the crop area is at a 90-degree angle to the camera sensor's: the X direction on the screen corresponds to the height direction of the captured image, and the Y direction on the screen corresponds to its width. You must therefore pay attention to coordinate-system transformation and out-of-bounds protection during the calculation.
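As an illustration of that transformation, here is a pure-math sketch that maps a crop rectangle from portrait-screen coordinates into the coordinates of a picture rotated 90 degrees relative to the screen, with clamping for out-of-bounds protection (a hypothetical helper; the exact rotation direction depends on the device, so verify against your hardware):

```java
public class CropMapper {
    /** Simple int rectangle: left, top, width, height. */
    public static class Rect {
        public final int left, top, width, height;
        public Rect(int left, int top, int width, int height) {
            this.left = left; this.top = top; this.width = width; this.height = height;
        }
    }

    /**
     * Maps a crop rect in portrait screen coordinates (screenW x screenH)
     * to picture coordinates (pictureW x pictureH), assuming the picture
     * is rotated 90 degrees relative to the screen: screen Y maps to
     * picture X, and screen X maps (reversed) to picture Y.
     */
    public static Rect toPicture(Rect crop, int screenW, int screenH,
                                 int pictureW, int pictureH) {
        int left   = crop.top * pictureW / screenH;
        int top    = (screenW - crop.left - crop.width) * pictureH / screenW;
        int width  = crop.height * pictureW / screenH;
        int height = crop.width * pictureH / screenW;
        // Out-of-bounds protection: clamp the rect to the picture bounds.
        left   = Math.max(0, Math.min(left, pictureW - 1));
        top    = Math.max(0, Math.min(top, pictureH - 1));
        width  = Math.min(width, pictureW - left);
        height = Math.min(height, pictureH - top);
        return new Rect(left, top, width, height);
    }
}
```

For example, a centered 1:1 crop on a 1080x1920 screen maps to a 1080x1080 region of a 1920x1080 picture, with the screen's vertical offset becoming a horizontal offset in the picture.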

4. Mirror effect of the front camera

A peculiarity of Android camera hardware is that the front camera's preview is mirrored, like looking into a mirror, while the resulting photo is still the un-mirrored image the camera captured. If you are skeptical, try the front camera on an Android phone right now and compare the preview with the resulting photo. The cause is that the camera layer horizontally flips the front camera's preview data, i.e., flips the image 180 degrees about the vertical axis. This also affects the portrait preview rotation discussed earlier: rotating the preview 90 degrees is correct for the rear camera, but if the front camera's preview is also rotated by the same 90 degrees, the image appears upside down (because of the horizontal flip); it must be rotated a further 180 degrees to display correctly, as shown in Figure 5, which can be understood with the camera preview orientation diagram.

Figure 5. Schematic diagram of preview direction of front camera

In addition, since the photo itself is not flipped horizontally, users will find that photos taken with the front camera are mirrored left-to-right relative to what they saw in the preview. This hurts the user experience somewhat. To solve it, a horizontal-flip matrix transform can be applied when generating the bitmap file for images taken by the front camera.
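On Android the usual fix is to create the bitmap through a Matrix with postScale(-1, 1); the underlying pixel operation is just a horizontal mirror, sketched here on a plain row-major ARGB int array (a hypothetical helper for illustration):

```java
public class MirrorFix {
    /** Flips a row-major ARGB pixel buffer horizontally, in place. */
    public static void flipHorizontal(int[] pixels, int width, int height) {
        for (int y = 0; y < height; y++) {
            int rowStart = y * width;
            for (int x = 0; x < width / 2; x++) {
                // Swap the pixel with its mirror partner in the same row.
                int tmp = pixels[rowStart + x];
                pixels[rowStart + x] = pixels[rowStart + width - 1 - x];
                pixels[rowStart + width - 1 - x] = tmp;
            }
        }
    }
}
```

In practice you would avoid touching pixels directly and instead write something like `Matrix m = new Matrix(); m.postScale(-1, 1); Bitmap flipped = Bitmap.createBitmap(src, 0, 0, w, h, m, true);`.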

5. Releasing camera resources on lock screen

To save battery and avoid wasting the camera resource, the custom camera should close the preview and release the camera whenever the preview does not need to be shown, for example after pressing the Home key to go to the background or locking the screen. When the SurfaceView becomes visible, the surface is created and the surfaceCreated callback of the SurfaceHolder.Callback interface fires; when the SurfaceView becomes invisible, the surface is destroyed and the surfaceDestroyed callback fires. In these callbacks we can handle the camera-related operations, such as connecting the surface and opening/closing the preview. Releasing the camera resource can be done in the Activity's onPause(). Correspondingly, to restore the preview, apply for the camera resource and initialize it in the Activity's onResume(), then connect the camera to the surface and start the preview as the SurfaceView is created.

However, during development I found that the program ran normally when sent to the background with the HOME key, but in the lock-screen scenario it crashed while re-applying for the camera resource, indicating that access to the camera had failed. Why? I added debug logs to check the execution order; the result is as follows:

When pressing the HOME key, the custom camera page executes as follows:

  • Run the program -> press the HOME key
  • The Activity's callbacks are invoked in the order onPause -> onStop
  • The SurfaceView's surfaceDestroyed method is called
  • Then switch back to the program
  • The Activity's callbacks are invoked in the order onRestart -> onStart -> onResume
  • The SurfaceView's surfaceCreated -> surfaceChanged methods are called

For the lock screen, the execution flow is:

  • The Activity only calls onPause
  • Once unlocked, the Activity calls onResume
  • None of the SurfaceHolder.Callback methods on the SurfaceView are executed

That revealed the problem: because the callbacks were not executed when the screen was locked, the camera resource was released before the connection between the camera and the preview was broken, so the system crashed when the camera resource was requested again. The fix is to set the SurfaceView's visibility manually in onPause and onResume, so that the callbacks fire normally. Since the SurfaceView should not be visible to the user anyway when the app is backgrounded or the screen is locked, manually changing its visibility does not affect the user experience.
