Audio and video series:
Android audio and video development (1) — recording PCM with AudioRecord; playing audio with AudioTrack
Android audio and video development (2) — preview and photo capture with Camera1
Project: VideoDemo
In the previous chapter we implemented preview and photo capture with Camera1, but we also noted that Camera1 was deprecated in API 21: its API's capability and flexibility are no longer sufficient for today's increasingly complex camera development, so Camera2 was introduced in API 21 to replace it.
1. A brief introduction to Camera2
In terms of functionality, Camera2 discards the Camera1 framework entirely and supports more features, such as:
- Get more frame (preview/photo) information and the parameter configuration for each frame
- Support more image formats (YUV/RAW)
- Some new features…
Here’s what I’m going to do today:
1.1. Pipeline
Camera2 models the camera as a Pipeline: it accepts capture requests as input, each request capturing a single frame, and wraps the resulting data into packets that are output to the image buffers.
These requests are processed sequentially, frame by frame, and each request's result is returned to the client, as shown in the following figure:
- Create a CaptureRequest for Pipeline to get image information
- Create two different Surfaces to receive the image data and add them to the CaptureRequest
- Send the configured CaptureRequest to the Pipeline and wait for the photo result to return
Keep in mind that before the CaptureRequest is created, we have already configured all the camera parameters, such as focus and flash. The request is then fed to the underlying Camera2 framework, which places it into a queue called the in-flight capture queue. As the in-flight capture queue processes each request, the image data is delivered to the corresponding Surfaces and the result metadata is returned as a CaptureResult.
1.2. Supported Hardware Level
Camera2 supports many new features, but what is actually available also depends on the support level provided by your phone's manufacturer. Camera2 therefore uses the Supported Hardware Level to indicate which features a device supports. There are four levels:
- LEGACY: Backward compatibility mode, which supports Camera1 functions but does not support new Camera2 features
- LIMITED: supports some of the advanced features of Camera2 in addition to the features of Camera1
- FULL: supports all advanced Camera2 features
- LEVEL_3: adds further Camera2 features on top of FULL, mainly around YUV data
We will use these features to complete the development of Camera2.
2. Camera preview
Note that Camera2 and Camera1 are two different frameworks. Don't let Camera1 concepts box you in; treat Camera2 as new knowledge to learn.
- Obtain the CameraManager from context.getSystemService(Context.CAMERA_SERVICE)
- Call getCameraCharacteristics() to get all the camera information, such as supported preview sizes and the hardware level
- Open the camera with CameraManager's openCamera() method; the CameraDevice, which represents the current camera device, is delivered through the callback
- Have the CameraDevice create a CaptureRequest.Builder via createCaptureRequest() to configure camera properties, then create a CameraCaptureSession via createCaptureSession(); the session is an instance of the Pipeline and hands requests off to the underlying layer for processing
Don’t forget to add the following permissions:
```xml
<uses-permission android:name="android.permission.CAMERA" />
<!-- Camera hardware features -->
<uses-feature android:name="android.hardware.camera" android:required="true" />
<uses-feature android:name="android.hardware.camera.autofocus" />
```
2.1. Obtain camera information
CameraManager's getCameraCharacteristics() method is used to obtain camera information. CameraManager is a system service responsible for querying cameras and establishing camera connections. Its responsibilities are few, mainly:
- Load each camera's information into a CameraCharacteristics object
- Connect the camera based on the specified camera ID
- Provides a shortcut to set the flash as a flashlight
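That torch shortcut is worth a quick look. A minimal sketch, assuming a CameraManager already in hand and a camera ID whose flash info reports true (API 23+):

```java
// Sketch: turning the flash on as a torch via CameraManager.
try {
    CameraCharacteristics characteristics = mCameraManager.getCameraCharacteristics(cameraId);
    Boolean hasFlash = characteristics.get(CameraCharacteristics.FLASH_INFO_AVAILABLE);
    if (Boolean.TRUE.equals(hasFlash)) {
        mCameraManager.setTorchMode(cameraId, true);   // torch on
        // ...
        mCameraManager.setTorchMode(cameraId, false);  // torch off
    }
} catch (CameraAccessException e) {
    e.printStackTrace();
}
```

Note that setTorchMode() works without opening the camera, which is what makes it a shortcut.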
So it looks like this:
```java
mCameraManager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
try {
    // Traverse the camera IDs supported by the device, e.g. front and rear
    String[] cameraIdList = mCameraManager.getCameraIdList();
    for (String cameraId : cameraIdList) {
        // CameraCharacteristics holds all of this camera's information
        CameraCharacteristics characteristics = mCameraManager.getCameraCharacteristics(cameraId);
        // Which way the camera faces: front, rear or external
        Integer facing = characteristics.get(CameraCharacteristics.LENS_FACING);
        if (facing != null) {
            if (facing == CameraCharacteristics.LENS_FACING_BACK) {
                // Rear camera
                mBackCameraId = cameraId;
                mBackCameraCharacteristics = characteristics;
            } else if (facing == CameraCharacteristics.LENS_FACING_FRONT) {
                // Front-facing camera
                mFrontCameraId = cameraId;
                mFrontCameraCharacteristics = characteristics;
            }
            mCameraId = cameraId;
        }
        // Does this camera support Camera2's advanced features?
        Integer level = characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);
        // Camera2 advanced features are not supported
        if (level == null || level == CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY) {
            // Toast.makeText(this, "Your phone does not support Camera2 advanced features.", Toast.LENGTH_SHORT).show();
            // break;
        }
    }
} catch (CameraAccessException e) {
    e.printStackTrace();
}
```
CameraCharacteristics is what tells us the current camera's orientation. Besides that, it contains plenty of other information about the camera, such as:
- FLASH_INFO_AVAILABLE: whether a flash is available
- CONTROL_AE_AVAILABLE_MODES: which auto-exposure (AE) modes are available

If you are familiar with Camera1, CameraCharacteristics is a bit like Camera1's Camera.CameraInfo or Camera.Parameters.
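Querying those keys might look like this (a sketch; the keys are from the framework, the surrounding variables are assumed):

```java
// Sketch: reading a few CameraCharacteristics keys for the chosen camera.
CameraCharacteristics characteristics = mCameraManager.getCameraCharacteristics(mCameraId);
// Is a flash unit available on this camera?
Boolean flashAvailable = characteristics.get(CameraCharacteristics.FLASH_INFO_AVAILABLE);
// Which auto-exposure modes are supported?
int[] aeModes = characteristics.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_MODES);
// Sensor orientation in degrees (typically 90 for rear cameras)
Integer sensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
```

Each get() may return null when a key is not supported, so null checks are worthwhile in real code.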
2.2 Enabling the Camera
An improvement over Camera1 is that you can configure parameters such as the preview size before opening the camera. The camera needs a Surface to carry the data; here we use a TextureView:
```java
mTextureView = findViewById(R.id.surface);
```
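How do we know when the TextureView is ready? Typically through a SurfaceTextureListener. A minimal sketch, assuming an openCamera(width, height) method like the one defined later in this article:

```java
// Sketch: open the camera only once the TextureView's SurfaceTexture exists.
mTextureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surface, int width, int height) {
        // The SurfaceTexture now exists and has a size, safe to open the camera
        openCamera(width, height);
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surface, int width, int height) { }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        return true; // true lets the view release the SurfaceTexture itself
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) { }
});
```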
So, once the TextureView has been created and we have its width and height, we can open the camera:
```java
private void openCamera(int width, int height) {
    CameraCharacteristics characteristics = mCameraId.equals(mBackCameraId)
            ? mBackCameraCharacteristics : mFrontCameraCharacteristics;
    // Get the stream configuration map
    StreamConfigurationMap map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
    // Get the orientation of the camera sensor
    mSensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
    Size[] previewSizes = map.getOutputSizes(SurfaceTexture.class);
    Size bestSize = getBestSize(width, height, previewSizes);
    /*
     * Set the preview size.
     * Unlike Camera1, Camera2 hands the size information to the Surface
     * (SurfaceTexture, SurfaceView or ImageReader) and outputs images of
     * the corresponding size.
     * Note that for the camera, width > height, while we are in portrait,
     * so width and height must be swapped.
     */
    mTextureView.getSurfaceTexture().setDefaultBufferSize(bestSize.getHeight(), bestSize.getWidth());
    // Pick the largest supported JPEG size for photos
    Size[] sizes = map.getOutputSizes(ImageFormat.JPEG);
    Size largest = Collections.max(Arrays.asList(sizes), new CompareSizesByArea());
    // Create the ImageReader: largest size, JPEG format, at most 1 Image
    mImageReader = ImageReader.newInstance(largest.getWidth(), largest.getHeight(), ImageFormat.JPEG, 1);
    // Listen for captured photos
    mImageReader.setOnImageAvailableListener(new ImageAvailable(), null);
    try {
        // Open the camera and listen for its state
        mCameraManager.openCamera(mCameraId, new CameraDeviceCallback(), null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
```
Before opening the camera with mCameraManager.openCamera(), we first obtained the stream configuration map:

```java
// Get the stream configuration map
StreamConfigurationMap map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
```
This map describes the available streams, so we can query all the sizes the camera supports via getOutputSizes() and similar methods. Unlike Camera1, Camera2 outputs images at whatever size the Surface is configured for, so here we set the size on mTextureView. Note that for the camera, width > height, while we are in portrait, so width and height must be swapped.
Then we set the photo size. The clearer the better, so we choose the largest supported size. We'll come back to ImageReader when we get to taking photos.
Finally we call mCameraManager.openCamera(), which takes three parameters:
- cameraId: the ID of the camera, e.g. front, rear or external
- CameraDevice.StateCallback: invoked once the camera is connected; the callback delivers the CameraDevice
- Handler: the handler on which the CameraDevice.StateCallback is invoked; passing null means the main thread. A handler backed by a HandlerThread is recommended, since this work can be time-consuming.
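Such a background handler could be set up like this (a sketch; the field and method names are assumptions):

```java
// Sketch: a HandlerThread-backed Handler keeps camera callbacks off the main thread.
private HandlerThread mCameraThread;
private Handler mCameraHandler;

private void startCameraThread() {
    mCameraThread = new HandlerThread("CameraBackground");
    mCameraThread.start();
    mCameraHandler = new Handler(mCameraThread.getLooper());
}

// Then pass it instead of null:
// mCameraManager.openCamera(mCameraId, new CameraDeviceCallback(), mCameraHandler);
```

Remember to quit the HandlerThread (quitSafely()) when the camera is released.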
2.3. CameraDevice
CameraDevice represents the current CameraDevice. Its main responsibilities are:
- Create CameraCaptureSession with the specified parameters
- Create a CaptureRequest based on the specified template
- Close the camera device
- Monitor the camera device's state, e.g. disconnected, opened successfully, or error
As follows:
```java
class CameraDeviceCallback extends CameraDevice.StateCallback {
    @Override
    public void onOpened(@NonNull CameraDevice camera) {
        mCameraDevice = camera;
        createPreviewPipeline(camera);
    }

    @Override
    public void onDisconnected(@NonNull CameraDevice camera) {
        camera.close();
    }

    @Override
    public void onError(@NonNull CameraDevice camera, int error) {
        camera.close();
    }
}
```
Create a CaptureRequest in onOpened and set the camera parameters as follows:
```java
private void createPreviewPipeline(CameraDevice cameraDevice) {
    try {
        // Create a CaptureRequest.Builder for preview
        final CaptureRequest.Builder captureBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        Surface surface = new Surface(mTextureView.getSurfaceTexture());
        captureBuilder.addTarget(surface);
        // Create the CameraCaptureSession, which handles both preview requests
        // and photo requests. Every output Surface must be ready before the
        // session is created; the list is passed down to configure the pipeline.
        cameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
                new CameraCaptureSession.StateCallback() {
                    @Override
                    public void onConfigured(@NonNull CameraCaptureSession session) {
                        mCameraCaptureSession = session;
                        try {
                            // Continuous auto focus
                            captureBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                                    CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
                            // Auto exposure with auto flash
                            captureBuilder.set(CaptureRequest.CONTROL_AE_MODE,
                                    CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
                            CaptureRequest build = captureBuilder.build();
                            // Repeat this capture request to keep the preview running
                            session.setRepeatingRequest(build, null, null);
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }

                    @Override
                    public void onConfigureFailed(@NonNull CameraCaptureSession session) {
                        Toast.makeText(Camera2Activity.this, "Configuration failed", Toast.LENGTH_SHORT).show();
                    }
                }, null);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
```

Note: the original snippet set the auto-focus value under the CONTROL_AE_MODE key; the focus mode belongs under CaptureRequest.CONTROL_AF_MODE as above.
Before we start previewing, we need to create a CaptureRequest. A CaptureRequest is the information carrier used to submit a Capture request to the CameraCaptureSession. It contains the Capture's parameter configuration and the Surfaces that receive the image data.
CaptureRequest allows you to configure a lot of information, such as image format, image resolution, focus, and flash control. Most of the configuration is done through CaptureRequest.
The cameraDevice.createCaptureRequest() call above creates a CaptureRequest.Builder object. Its parameter, templateType, specifies which template to use:
- TEMPLATE_PREVIEW: A template for configuring preview
- TEMPLATE_RECORD: Template for video recording.
- TEMPLATE_STILL_CAPTURE: A template for taking photos.
- TEMPLATE_VIDEO_SNAPSHOT: A template that supports taking photos during video recording.
- TEMPLATE_MANUAL: For templates that want to configure most parameters manually.
Here we need a preview CaptureRequest, so select the TEMPLATE_PREVIEW template.
Next, we need to set up the Surface to hold the image data. We use two surfaces, one for TextureView for preview and one for ImageReader for taking photos.
The current CaptureRequest is used for preview, so add the preview Surface with addTarget():

```java
captureBuilder.addTarget(surface);
```
Finally, the CameraCaptureSession is created through cameraDevice.createCaptureSession(). This object handles both preview requests and photo requests, and every output Surface must be ready before the session is created. Once the session is configured (onConfigured()), we set the focus and exposure parameters and send the CaptureRequest down to the underlying pipeline through the session, as the createPreviewPipeline() code above shows.
2.4. Enable and disable preview
In Camera2, previewing essentially means repeating the Capture over and over, with each Capture outputting its preview data to the corresponding Surface. To start it, call:
```java
session.setRepeatingRequest(build, null, null);
```
Its three arguments are as follows:
- request: the CaptureRequest object to execute repeatedly
- callback: a CameraCaptureSession.CaptureCallback object that monitors every state of each Capture; for example, onCaptureStarted() means a Capture has started and onCaptureCompleted() means a Capture has finished
- handler: the handler on which the CameraCaptureSession.CaptureCallback runs; passing null means the main thread, or you can use a handler on another thread
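For illustration, a CaptureCallback passed as the second argument could look like this (a sketch; it only observes each frame, which is all most preview loops need):

```java
// Sketch: observing each repeating capture instead of passing null.
CameraCaptureSession.CaptureCallback previewCallback = new CameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                   @NonNull CaptureRequest request,
                                   @NonNull TotalCaptureResult result) {
        // One preview Capture has finished; per-frame metadata is in `result`
    }
};
// session.setRepeatingRequest(build, previewCallback, null);
```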
Closing previews is also easy:
```java
// Stop the preview
mCameraCaptureSession.stopRepeating();
```
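When leaving the page, the camera should be fully released as well. A sketch of a typical close sequence (session, then device, then ImageReader), using the field names from this article:

```java
// Sketch: release camera resources, e.g. from onPause()/onDestroy().
private void closeCamera() {
    if (mCameraCaptureSession != null) {
        mCameraCaptureSession.close();
        mCameraCaptureSession = null;
    }
    if (mCameraDevice != null) {
        mCameraDevice.close();
        mCameraDevice = null;
    }
    if (mImageReader != null) {
        mImageReader.close();
        mImageReader = null;
    }
}
```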
3. Take pictures
We learned about previewing, and we also mentioned that ImageReader is used to receive image data, so how do you take pictures?
A photo is just another Capture, so we can create a CaptureRequest to perform it:
```java
// Create a capture request from the still-capture template
final CaptureRequest.Builder captureRequest = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
captureRequest.addTarget(mImageReader.getSurface());
// Continuous auto focus
captureRequest.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
// Auto exposure with auto flash
captureRequest.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
// JPEG orientation, derived from the display rotation
int rotation = getWindowManager().getDefaultDisplay().getRotation();
captureRequest.set(CaptureRequest.JPEG_ORIENTATION, getOrientation(rotation));
// Stop the preview first
mCameraCaptureSession.stopRepeating();
```
The code is easy to follow: the only differences are that the request is created from the photo template via mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE), and that the preview is stopped before the photo is taken:
```java
mCameraCaptureSession.stopRepeating();
```
Then the photo can be taken with mCameraCaptureSession.capture():
```java
mCameraCaptureSession.capture(captureRequest.build(), new CameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                   @NonNull CaptureRequest request,
                                   @NonNull TotalCaptureResult result) {
        super.onCaptureCompleted(session, request, result);
        try {
            // After the shot, resume the preview
            CaptureRequest.Builder captureRequest1 = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
            captureRequest1.addTarget(new Surface(mTextureView.getSurfaceTexture()));
            mCameraCaptureSession.setRepeatingRequest(captureRequest1.build(), null, null);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }
}, null);
```
As you can see, once the photo has been taken we start the preview again; whether to do that is of course up to you.
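The getOrientation() helper used when building the photo request is not shown in this article. A common formulation (following Google's Camera2Basic sample; the exact mapping here is an assumption, not this article's code) combines the display rotation with the sensor orientation:

```java
public class JpegOrientation {
    /**
     * Maps the display rotation (0/90/180/270 degrees) and the sensor
     * orientation (CameraCharacteristics.SENSOR_ORIENTATION) to a value
     * suitable for CaptureRequest.JPEG_ORIENTATION.
     */
    public static int getOrientation(int displayRotationDegrees, int sensorOrientation) {
        // Camera2Basic's lookup table maps ROTATION_0 -> 90, ROTATION_90 -> 0,
        // ROTATION_180 -> 270, ROTATION_270 -> 180, i.e. (90 - rotation) mod 360.
        int base = (90 - displayRotationDegrees + 360) % 360;
        return (base + sensorOrientation + 270) % 360;
    }
}
```

With a typical rear sensor orientation of 90°, a portrait phone (rotation 0) yields a JPEG orientation of 90, which matches the rotate(bitmap, 90) applied when saving below.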
3.1 Saving Images
Where do the pictures get saved? Remember that when we opened the camera, we configured an ImageReader listener:

```java
// Listen for captured photos
mImageReader.setOnImageAvailableListener(new ImageAvailable(), null);
```
Before saving, let's take a look at what ImageReader actually is.
3.2 ImageReader
In Camera2, ImageReader is the main way to obtain image data, and it can deliver data in a variety of formats, such as JPEG, YUV and RAW. An ImageReader object is created with the ImageReader.newInstance() method, as follows:
```java
// Largest size, JPEG format, at most 1 Image
mImageReader = ImageReader.newInstance(largest.getWidth(), largest.getHeight(), ImageFormat.JPEG, 1);
mImageReader.setOnImageAvailableListener(new ImageAvailable(), null);
```
The first two parameters are self-explanatory; the third is the format of the Image data you want (JPEG here), and the last is the maximum number of Images, which can be thought of as the size of the Image pool.
When new image data is produced, the onImageAvailable() method of ImageReader.OnImageAvailableListener is invoked:
```java
/**
 * Callback invoked when new image data is available
 */
class ImageAvailable implements ImageReader.OnImageAvailableListener {
    @Override
    public void onImageAvailable(ImageReader reader) {
        new SavePicAsyncTask(reader).executeOnExecutor(AsyncTask.THREAD_POOL_EXECUTOR);
    }
}
```
There we can call ImageReader's acquireLatestImage() (or acquireNextImage()) method to obtain the latest Image object. The data inside an Image is split into several parts depending on the format, each stored in a separate Plane object. Calling Image.getPlanes() returns the array of all Plane objects, as shown below:
```java
// Get the captured photo data
Image image = imageReader.acquireLatestImage();
Image.Plane[] planes = image.getPlanes();
```
Finally, the image buffer stored in each Plane is obtained through Plane.getBuffer(). The layout per format:

| Format | Number of planes | Plane layout |
|---|---|---|
| JPEG | 1 | Compressed data, so the row stride is 0; decompress with BitmapFactory#decodeByteArray |
| YUV | 3 | One luminance (Y) plane plus two chroma (Cb/Cr) planes; the U and V planes are half the width and height of Y |
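For the YUV case, reading the three planes could be sketched like this (assuming the ImageReader was created with ImageFormat.YUV_420_888 instead of JPEG):

```java
// Sketch: pulling the Y, U and V planes out of a YUV_420_888 Image.
Image image = reader.acquireLatestImage();
Image.Plane[] planes = image.getPlanes();

ByteBuffer yBuffer = planes[0].getBuffer(); // luminance, full resolution
ByteBuffer uBuffer = planes[1].getBuffer(); // chroma Cb, half width/height
ByteBuffer vBuffer = planes[2].getBuffer(); // chroma Cr, half width/height

// Row and pixel strides matter: rows may be padded, and on many devices
// the chroma planes are interleaved (pixelStride == 2).
int yRowStride = planes[0].getRowStride();
int uvPixelStride = planes[1].getPixelStride();

image.close(); // always close the Image to return it to the pool
```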
The YUV layout can be pictured as one full-resolution Y plane followed by two quarter-resolution chroma planes. Back to our JPEG photo, saving it looks like this:
```java
FileOutputStream fos = null;
Image image = null;
try {
    fos = new FileOutputStream(file);
    // Get the captured photo data
    image = imageReader.acquireLatestImage();
    Image.Plane[] planes = image.getPlanes();
    // JPEG has a single plane
    ByteBuffer buffer = planes[0].getBuffer();
    byte[] data = new byte[buffer.remaining()];
    // Copy the buffer into the byte array
    buffer.get(data);
    Bitmap bitmap = BitmapFactory.decodeByteArray(data, 0, data.length);
    // Rotate (and mirror) the image
    if (mCameraId.equals(mFrontCameraId)) {
        bitmap = BitmapUtils.rotate(bitmap, 270);
        bitmap = BitmapUtils.mirror(bitmap);
    } else {
        bitmap = BitmapUtils.rotate(bitmap, 90);
    }
    bitmap.compress(Bitmap.CompressFormat.JPEG, 100, fos);
    fos.flush();
    return bitmap;
} catch (Exception e) {
    Log.d(TAG, "zsr doInBackground: " + e.toString());
} finally {
    CloseUtils.close(fos);
    // Remember to close the Image
    if (image != null) {
        image.close();
    }
}
```
So, we’re done. In the next chapter, we’re going to do CameraX.