1. Camera Introduction
There are two camera APIs here: the legacy android.hardware.Camera and Camera2, which lives in the android.hardware.camera2 package. Camera is no longer recommended because it has been marked deprecated since API 21, and Google now strongly recommends android.hardware.camera2 instead. However, Camera2 is only supported from API 21 on. I'll still cover the legacy Camera as well, because it is easy to use.

The camera lets you preview frames, take photos, and record video with the device's camera. To use it, you need to add the CAMERA permission to the manifest file. If you also want auto-focus, you need to declare the corresponding feature as well.
```xml
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />
<uses-feature android:name="android.hardware.camera.autofocus" />
<!-- Storage access is also needed -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
```
Note: if you simply want to take a photo or record a video, you can use an Intent to launch the functionality provided by the system: MediaStore.ACTION_IMAGE_CAPTURE captures photos; MediaStore.ACTION_VIDEO_CAPTURE captures video.
Basic architecture
In terms of API architecture, Camera2 is very different from the old Camera. You can think of the app and the underlying camera as connected by a pipeline, as shown in the figure below:
As shown in the figure above, the camera app sends a CaptureRequest through the CameraCaptureSession. The CameraDevice receives the request and returns the corresponding data to the corresponding Surface; preview data is usually sent to a TextureView. After the request completes, you can query the request parameters actually used and the current state of the CameraDevice in the callback. In general, preview/photo/video data in Camera2 is received by a Surface. The CaptureRequest represents the camera parameters the request controls, while CameraMetadata (CaptureResult) carries the parameters used for the frame just returned and the camera's current state.
Usage flow
- Get the CameraManager via context.getSystemService(Context.CAMERA_SERVICE).
- Call CameraManager.openCamera() and obtain the CameraDevice in its callback.
- Call CameraDevice.createCaptureSession() and obtain the CameraCaptureSession in its callback.
- Build a CaptureRequest; there are three template modes to choose from: preview, still capture, and recording.
- Send the CaptureRequest through the CameraCaptureSession: capture() sends the request once, while setRepeatingRequest() sends it continuously.
- Photo data arrives in the ImageReader's OnImageAvailableListener callback, and CaptureCallback provides the parameters actually used for the shot as well as the camera's current state.
Querying Camera2 feature support
Because Camera2 offers so many features, not all phones support the full feature set; on some low-end devices the support is incomplete. If a low-end phone uses HAL1, some of Camera2's advanced features will be unavailable. The following shows how to query a device's level of Camera2 support.
INFO_SUPPORTED_HARDWARE_LEVEL reports the level of Camera2 support in the hardware layer, divided into five levels:
- INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY: backward-compatibility mode. At this level there are almost no extra features, and the HAL layer is most likely HAL1.
- INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED: supports the most basic features, plus some additional advanced features that are a subset of FULL.
- INFO_SUPPORTED_HARDWARE_LEVEL_FULL: supports per-frame control of capture settings, as well as high-rate still capture.
- INFO_SUPPORTED_HARDWARE_LEVEL_3: additionally supports YUV reprocessing and RAW capture, plus extra output stream configurations.
- INFO_SUPPORTED_HARDWARE_LEVEL_EXTERNAL: added in API 28; represents an external camera, similar in functionality to LIMITED.
```kotlin
private fun getCameraHardwareSupport() {
    // CameraCharacteristics is the camera device's property class, obtained
    // via getCameraCharacteristics(String cameraId)
    val cameraManager = cameraManager ?: return
    val cameraIdList = cameraManager.cameraIdList
    // Simply use the first camera here
    val cameraCharacteristics = cameraManager.getCameraCharacteristics(cameraIdList[0])
    val deviceLevel = cameraCharacteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL)
        ?: return
    when (deviceLevel) {
        CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY -> {}
        CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED -> {}
        CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_FULL -> {}
        CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_3 -> {}
        CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_EXTERNAL -> {}
    }
}
```
What functions are supported can also be queried via REQUEST_AVAILABLE_CAPABILITIES, see the official documentation for details
2. Preview Camera data
There are two well-known ways to do this: SurfaceView and TextureView (but both essentially provide a Surface).
- Using the legacy Camera
```kotlin
private fun previewCamera() {
    requestMultiplePermissions.launch(
        arrayOf(
            Manifest.permission.CAMERA,
            Manifest.permission.WRITE_EXTERNAL_STORAGE
        )
    )
    camera = Camera.open()
    camera?.setDisplayOrientation(90)
    binding.surfaceView.holder.addCallback(object : SurfaceHolder.Callback {
        override fun surfaceCreated(surfaceHolder: SurfaceHolder) {
            camera?.setPreviewDisplay(surfaceHolder)
            camera?.startPreview()
        }

        override fun surfaceChanged(p0: SurfaceHolder, p1: Int, p2: Int, p3: Int) {
        }

        override fun surfaceDestroyed(p0: SurfaceHolder) {
            camera?.release()
        }
    })
}
```
- Using Camera2
On Android 9+, SessionConfiguration is the recommended way to create a CameraCaptureSession.
```kotlin
private fun previewCamera2(surface: Surface) {
    val cameraIdList = cameraManager!!.cameraIdList
    requestMultiplePermissions.launch(
        arrayOf(
            Manifest.permission.CAMERA,
            Manifest.permission.WRITE_EXTERNAL_STORAGE
        )
    )
    if (ActivityCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
        != PackageManager.PERMISSION_GRANTED
    ) {
        ALog.e("xiao", "No camera permission")
        return
    } else {
        // handler: specifies the thread on which the callbacks are invoked.
        // If null, the looper of the current thread is used by default. To run
        // the callbacks on a background thread, pass a handler built on that
        // thread's looper.
        cameraManager?.openCamera(cameraIdList[0], object : CameraDevice.StateCallback() {
            override fun onOpened(camera: CameraDevice) {
                // Called when the camera opens successfully. The camera is now
                // ready, so we can start working with it, e.g. create the first
                // capture session via CameraDevice.createCaptureSession()
                onOpenCamera(camera, surface)
            }

            override fun onDisconnected(camera: CameraDevice) {
                // Called when the camera is no longer available, or when
                // CameraManager.openCamera() fails to open it. Any further call
                // on the CameraDevice will fail with a CameraAccessException.
            }

            override fun onError(camera: CameraDevice, error: Int) {
                // If opening fails, call CameraDevice.close() and release the
                // camera resources here to avoid a series of follow-up problems.
            }

            override fun onClosed(camera: CameraDevice) {
                super.onClosed(camera)
                // Called after CameraDevice.close() has closed the camera device.
            }
        }, null)
    }
}

private fun onOpenCamera(cameraDevice: CameraDevice, surface: Surface) {
    val stateCallback = object : CameraCaptureSession.StateCallback() {
        override fun onConfigured(cameraCaptureSession: CameraCaptureSession) {
            val builder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
            // Bind the CaptureRequest to the Surface
            builder.addTarget(surface)
            cameraCaptureSession.setRepeatingRequest(builder.build(), null, null)
        }

        override fun onConfigureFailed(cameraCaptureSession: CameraCaptureSession) {
            ALog.e("xiao", "onConfigureFailed")
        }
    }
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.P) {
        // Android 9+ way of creating the session
        val sessionConfiguration = SessionConfiguration(
            SessionConfiguration.SESSION_REGULAR,
            listOf(OutputConfiguration(surface)),
            mainExecutor,
            stateCallback
        )
        cameraDevice.createCaptureSession(sessionConfiguration)
    } else {
        // Create the CaptureSession. The first parameter, outputs, is a List of
        // Surfaces that the camera delivers captured image data to. The second
        // parameter is the StateCallback for session creation, and the third
        // describes the thread on which the StateCallback is invoked.
        cameraDevice.createCaptureSession(listOf(surface), stateCallback, null)
    }
}
```
3. Getting NV21 data from the callback
There are two commonly used YUV formats that Google supports for the camera preview callback on Android: NV21 and YV12. Android generally defaults to the YCbCr_420_SP format (NV21).
What is NV21?
NV21 is one storage layout of YUV420 (semi-planar): all the Y samples are stored first, followed by the chroma samples with V and U interleaved (V first).
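To make that layout concrete, here is a small self-contained sketch (my own illustration, not from the original article; the helper names are hypothetical) of how big an NV21 buffer is and where the chroma bytes for a given pixel live:

```kotlin
// An NV21 frame holds width*height Y bytes, then width*height/2 interleaved
// V/U bytes (one VU pair per 2x2 block of pixels).
fun nv21Size(width: Int, height: Int): Int = width * height * 3 / 2

// Index of the V byte covering pixel (x, y): the chroma plane starts right
// after the Y plane; each chroma row is `width` bytes (width/2 VU pairs).
fun nv21VIndex(width: Int, height: Int, x: Int, y: Int): Int =
    width * height + (y / 2) * width + (x / 2) * 2

fun main() {
    val w = 4; val h = 2
    println(nv21Size(w, h))         // 12 bytes: 8 Y + 2 V + 2 U
    println(nv21VIndex(w, h, 0, 0)) // 8: first V, right after the Y plane
    println(nv21VIndex(w, h, 2, 0)) // 10: V for the second 2x2 block
}
```

This is also why raw NV21 preview buffers are always width*height*3/2 bytes long.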
What is YUV?

YUV is a color encoding method mainly used in TV systems and analog video. YUV names the three components: Y is the luma (brightness) component, while U and V are the chroma (color) components.
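As a concrete illustration of the three components (my own example, not from the original article; these are BT.601 full-range coefficients, one of several conventions in use):

```kotlin
import kotlin.math.roundToInt

// BT.601 full-range RGB -> YUV conversion. U and V are offset by 128 so that
// a colorless (gray) pixel has U = V = 128.
fun rgbToYuv(r: Int, g: Int, b: Int): Triple<Int, Int, Int> {
    val y = (0.299 * r + 0.587 * g + 0.114 * b).roundToInt()
    val u = (-0.169 * r - 0.331 * g + 0.500 * b + 128).roundToInt()
    val v = (0.500 * r - 0.419 * g - 0.081 * b + 128).roundToInt()
    return Triple(y.coerceIn(0, 255), u.coerceIn(0, 255), v.coerceIn(0, 255))
}

fun main() {
    println(rgbToYuv(128, 128, 128)) // (128, 128, 128): gray keeps only luma
    println(rgbToYuv(255, 255, 255)) // (255, 128, 128): white is max luma
}
```

Note how any gray value maps to U = V = 128, which is exactly why chroma can be subsampled aggressively (as in YUV420) without the eye noticing much.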
We can configure the format of the data callback:
- camera
```kotlin
val parameter = camera?.parameters
parameter?.previewFormat = ImageFormat.NV21
camera?.parameters = parameter
camera?.setPreviewCallback { bytes, camera ->
}
```
- camera2
```kotlin
// maxImages: the maximum number of images the user wants to access at the same
// time. Keep it as small as possible to limit memory usage. Once maxImages
// images have been acquired, one of them must be closed before a new Image can
// be obtained via acquireLatestImage() or acquireNextImage(). Must be > 0.
val reader = ImageReader.newInstance(
    binding.surfaceViewPreview.width,
    binding.surfaceViewPreview.height,
    ImageFormat.YUV_420_888, // Camera2 does not support NV21; YUV_420_888 is the official recommendation
    2
)
reader.setOnImageAvailableListener({ imageReader ->
    val image = imageReader.acquireNextImage()
    if (image.format == ImageFormat.YUV_420_888) {
        /**
         * YUV_420_888 samples Y, U, V at a 4:1:1 ratio, i.e. each UV pair is
         * shared by four Y samples; "888" means each component is 8 bits.
         * NV21 is also a YUV_420 sampling format, but its U and V components
         * are laid out differently.
         * The Y, U, V data is stored in 3 planes: plane[0] is the Y component,
         * plane[1] is the U component, plane[2] is the V component.
         */
        ALog.e("xiao", "Y: ${image.planes[0].buffer}")
        ALog.e("xiao", "U: ${image.planes[1].buffer}")
        ALog.e("xiao", "V: ${image.planes[2].buffer}")
    }
    image.close()
}, null)
```
```kotlin
builder.addTarget(reader.surface)
```
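Since Camera2 delivers YUV_420_888 rather than NV21, a common follow-up task is repacking the three planes into a single NV21 buffer. The sketch below is my own illustration under a simplifying assumption: all three planes are fully deinterleaved with pixelStride 1 and no row padding. On real devices the chroma planes often have pixelStride 2 and row strides that must be honored, so treat this as the core idea, not a drop-in implementation.

```kotlin
// Repack deinterleaved YUV420 planes (Y, U, V, each with pixelStride 1 and no
// row padding) into one NV21 buffer: all Y first, then interleaved V, U.
fun packNv21(y: ByteArray, u: ByteArray, v: ByteArray, width: Int, height: Int): ByteArray {
    require(y.size == width * height) { "Y plane must be width*height bytes" }
    require(u.size == width * height / 4 && v.size == u.size) {
        "U and V planes must each be width*height/4 bytes"
    }
    val out = ByteArray(width * height * 3 / 2)
    y.copyInto(out, 0)
    var pos = width * height
    for (i in u.indices) {
        out[pos++] = v[i] // V comes first in NV21
        out[pos++] = u[i]
    }
    return out
}
```

For a tiny 4×2 frame, packNv21(y, u, v, 4, 2) yields the 8 Y bytes followed by V0, U0, V1, U1.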
The source address
4. CameraX
Official statement:
In short: Camera is deprecated and Camera2 is too cumbersome to use, so Google did the wrapping for us.
Add the dependencies
```kotlin
dependencies {
    // CameraX core library using the camera2 implementation
    val camerax_version = "1.0.0"
    // The following line is optional, as the core library is included
    // indirectly by camera-camera2
    implementation("androidx.camera:camera-core:${camerax_version}")
    implementation("androidx.camera:camera-camera2:${camerax_version}")
    // If you want to additionally use the CameraX Lifecycle library
    implementation("androidx.camera:camera-lifecycle:${camerax_version}")
    // If you want to additionally use the CameraX View class
    implementation("androidx.camera:camera-view:1.0.0-alpha26")
    // If you want to additionally use the CameraX Extensions library
    implementation("androidx.camera:camera-extensions:1.0.0-alpha26")
}
```
Use CameraX to preview and get frame data
```kotlin
private fun previewCameraX() {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
    cameraProviderFuture.addListener(Runnable {
        val cameraX = cameraProviderFuture.get()
        val preview = Preview.Builder().build()
        preview.setSurfaceProvider(binding.preview.surfaceProvider)
        val imageAnalyzer = ImageAnalysis.Builder().build()
        imageAnalyzer.setAnalyzer(Executors.newSingleThreadExecutor(), { image ->
            if (image.format == ImageFormat.YUV_420_888) {
                /**
                 * YUV_420_888 samples Y, U, V at a 4:1:1 ratio, i.e. each UV
                 * pair is shared by four Y samples; "888" means each component
                 * is 8 bits. NV21 is also a YUV_420 sampling format, but its U
                 * and V components are laid out differently.
                 * The Y, U, V data is stored in 3 planes: plane[0] is Y,
                 * plane[1] is U, plane[2] is V.
                 */
                ALog.e("xiao", "Y: ${image.planes[0].buffer}")
                ALog.e("xiao", "U: ${image.planes[1].buffer}")
                ALog.e("xiao", "V: ${image.planes[2].buffer}")
            }
            image.close()
        })
        val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
        try {
            cameraX.unbindAll()
            cameraX.bindToLifecycle(this, cameraSelector, preview, imageAnalyzer)
        } catch (e: Exception) {
            Log.e("xiao", "Use case binding failed", e)
        }
    }, ContextCompat.getMainExecutor(this))
}
```