I. Background

Android Studio Lint starts reporting the following warning after the project minSdkVersion is upgraded to 21 or higher:

```
'android.hardware.Camera' is deprecated as of API 21: Android 5.0 (Lollipop)
```

The reason is that the camera API used by the scan tool is deprecated. If you open the Camera class, its Javadoc comment points to Camera2 and recommends using the new API:

```java
/**
 * @deprecated We recommend using the new {@link android.hardware.camera2} API for new
 *             applications.
 */
@Deprecated
public class Camera { }
```

Camera2 is not a simple rework of the old Camera interface: it neither tweaks individual parameters and return values nor swaps out single methods one-for-one. Instead, it completely changes the calling model.

II. Framework

The scanning tool in the project was upgraded and rebuilt from Camera to Camera2, while the decoding part is still handled by the Zxing library. The framework diagram of the scanning tool is shown below.

Given the huge difference between Camera2 and Camera, rather than refactoring the original code in place, it was better to discard the old code and write the tool anew. The design of each layer of the framework is described below.

III. UI layer design

1. Scan frame view: customize a View according to your own needs.

```java
public final class QRFinderView extends View {
    private Rect frame;

    public QRFinderView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    public void setFrame(Rect frame) {
        this.frame = frame;
    }

    @Override
    public void onDraw(Canvas canvas) {
        if (frame == null) {
            return;
        }
        // ...
        this.postInvalidateDelayed(14L, frame.left, frame.top, frame.right, frame.bottom);
    }
}
```

2. To make the interaction friendlier, use MediaPlayer to load a local sound resource and play a prompt tone when a code is scanned successfully.

```kotlin
soundPlayer.setDataSource(context, uri)
soundPlayer.prepare()  // MediaPlayer must be prepared before start()
soundPlayer.start()
```

3. Flashlight. Unlike the old Camera, which set camera parameters via Camera.setParameters(), Camera2 turns on the flash by issuing a request with CaptureRequest.FLASH_MODE set.

```java
public void openFlashLight() {
    previewRequestBuilder.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_TORCH);
    try {
        captureSession.setRepeatingRequest(previewRequestBuilder.build(), sessionCaptureCallback, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}
```

4. Preview surface, rendered with a TextureView.

```java
// Build the preview request
previewRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
SurfaceTexture texture = textureView.getSurfaceTexture();
texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
Surface surface = new Surface(texture);
// Add the surface that displays the preview
previewRequestBuilder.addTarget(surface);
```

Instead of using the TextureView's size directly, pick the actual preview size from the sizes supported by the camera: the first supported size that matches the view's aspect ratio without exceeding its dimensions.

```java
private void setUpCameraOutputs(int viewWidth, int viewHeight) {
    try {
        CameraCharacteristics characteristics = getCameraManager().getCameraCharacteristics(getBackCameraId());
        StreamConfigurationMap map = characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
        Size[] supportedSizes = map.getOutputSizes(SurfaceTexture.class);
        // Screen orientation
        int windowRotation = getActivity().getWindowManager().getDefaultDisplay().getRotation();
        // Camera sensor orientation
        int cameraOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
        // Whether the view dimensions must be swapped (required when the screen
        // orientation and the camera sensor orientation are inconsistent)
        boolean needSwappedDimensions = false;
        switch (windowRotation) {
            case Surface.ROTATION_0:
            case Surface.ROTATION_180:
                if (cameraOrientation == 90 || cameraOrientation == 270) {
                    needSwappedDimensions = true;
                }
                break;
            case Surface.ROTATION_90:
            case Surface.ROTATION_270:
                if (cameraOrientation == 0 || cameraOrientation == 180) {
                    needSwappedDimensions = true;
                }
                break;
        }
        int rotatedPreviewWidth = viewWidth;
        int rotatedPreviewHeight = viewHeight;
        if (needSwappedDimensions) {
            rotatedPreviewWidth = viewHeight;
            rotatedPreviewHeight = viewWidth;
        }
        mPreviewSize = new Size(rotatedPreviewWidth, rotatedPreviewHeight);
        float ratio = (float) rotatedPreviewWidth / rotatedPreviewHeight;
        for (Size option : supportedSizes) {
            if ((float) option.getWidth() / option.getHeight() == ratio
                    && option.getWidth() <= rotatedPreviewWidth
                    && option.getHeight() <= rotatedPreviewHeight) {
                mPreviewSize = option;
                break;
            }
        }
    } catch (CameraAccessException | NullPointerException e) {
        e.printStackTrace();
    }
}
```
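The two decisions inside setUpCameraOutputs() — whether to swap the view dimensions, and which supported size to pick — do not depend on any Android classes, so they can be sketched and unit-tested as plain Java. The class name, the int[]{width, height} pairs standing in for android.util.Size, and the display rotation expressed in degrees rather than Surface.ROTATION_* constants are all illustrative assumptions:

```java
import java.util.List;

public class PreviewSizeChooser {

    /**
     * Mirrors the switch statement above: the dimensions must be swapped when
     * the display rotation (degrees) and the sensor orientation sit on
     * different axes.
     */
    public static boolean needSwappedDimensions(int displayRotation, int sensorOrientation) {
        if (displayRotation == 0 || displayRotation == 180) {
            return sensorOrientation == 90 || sensorOrientation == 270;
        }
        return sensorOrientation == 0 || sensorOrientation == 180;
    }

    /**
     * Picks the first supported size with the view's exact aspect ratio that
     * fits inside the view; falls back to the view size itself.
     * Sizes are {width, height} pairs standing in for android.util.Size.
     */
    public static int[] chooseOptimalSize(List<int[]> supported, int viewWidth, int viewHeight) {
        float ratio = (float) viewWidth / viewHeight;
        for (int[] option : supported) {
            if ((float) option[0] / option[1] == ratio
                    && option[0] <= viewWidth && option[1] <= viewHeight) {
                return option;
            }
        }
        return new int[] { viewWidth, viewHeight };
    }
}
```

Isolating the logic this way also makes the fallback explicit: when no supported size matches exactly, the code above falls back to the raw view size, just as the original method leaves mPreviewSize at its initial value.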

IV. Scan-code layer design

1. Open the camera. Instead of calling Camera.open(), you first look up the desired cameraId and then listen for the camera's state through a callback passed to openCamera().

```java
private void openCamera() {
    try {
        cameraManager.openCamera(getBackCameraId(), deviceStateCallback, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}

// Find the id of the back-facing camera
private String getBackCameraId() {
    CameraManager cameraManager = getCameraManager();
    try {
        String[] ids = cameraManager.getCameraIdList();
        for (String id : ids) {
            CameraCharacteristics characteristics = cameraManager.getCameraCharacteristics(id);
            int facing = characteristics.get(CameraCharacteristics.LENS_FACING);
            if (facing == CameraCharacteristics.LENS_FACING_BACK) {
                return id;
            }
        }
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
    return null;
}

// Camera state callback
private CameraDevice.StateCallback deviceStateCallback = new CameraDevice.StateCallback() {
    @Override
    public void onOpened(@NonNull CameraDevice camera) {
        cameraDevice = camera;
        startPreview();
    }

    @Override
    public void onDisconnected(@NonNull CameraDevice camera) {
        camera.close();
        cameraDevice = null;
    }

    @Override
    public void onError(@NonNull CameraDevice camera, int error) {
    }
};
```
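Stripped of the Android framework types, picking the back camera is a linear search over (id, lens-facing) pairs. The sketch below mimics that with a plain Map standing in for CameraManager and CameraCharacteristics; the two constants mirror the values of CameraMetadata.LENS_FACING_FRONT (0) and LENS_FACING_BACK (1):

```java
import java.util.Map;

public class BackCameraFinder {
    // Mirror the CameraMetadata lens-facing constants
    public static final int LENS_FACING_FRONT = 0;
    public static final int LENS_FACING_BACK = 1;

    /** Returns the first camera id whose lens faces backward, or null if none. */
    public static String findBackCameraId(Map<String, Integer> idToFacing) {
        for (Map.Entry<String, Integer> e : idToFacing.entrySet()) {
            if (e.getValue() == LENS_FACING_BACK) {
                return e.getKey();
            }
        }
        return null;
    }
}
```

On a typical phone the back camera happens to be id "0", but as the loop shows, the id should always be discovered by lens facing, never hard-coded.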

2. Start the preview. Once CameraDevice.StateCallback.onOpened() fires, the preview can be started. In the old API, camera.startPreview() started the preview and PreviewCallback received the frames. CameraDevice has no startPreview(); instead, displaying the preview and receiving the preview frames are two separate output targets of one capture session. The display target was covered in the UI layer design above. To receive frames, call createCaptureSession() with a CameraCaptureSession.StateCallback and obtain the CameraCaptureSession in its onConfigured() callback.

```java
private void startPreview() {
    try {
        // Build the preview request
        previewRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        SurfaceTexture texture = textureView.getSurfaceTexture();
        texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
        Surface surface = new Surface(texture);
        // Add the surface that displays the preview
        previewRequestBuilder.addTarget(surface);
        // Create the capture session (preview frames are also output to the ImageReader)
        cameraDevice.createCaptureSession(Arrays.asList(surface, imageReader.getSurface()),
                sessionStateCallback, null);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
}

// Session state callback
private CameraCaptureSession.StateCallback sessionStateCallback = new CameraCaptureSession.StateCallback() {
    @Override
    public void onConfigured(@NonNull CameraCaptureSession session) {
        captureSession = session;
        previewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
        previewRequestBuilder.set(CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);
        try {
            // Issue a repeating preview request
            session.setRepeatingRequest(previewRequestBuilder.build(), sessionCaptureCallback, null);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void onConfigureFailed(@NonNull CameraCaptureSession session) {
    }
};
```

Finally, the preview request is issued by CameraCaptureSession.

```java
session.setRepeatingRequest(previewRequestBuilder.build(), sessionCaptureCallback, null);
```

3. Receive preview results. Displaying the live preview requires a Surface for the preview image, which was covered in the UI layer design above. The key to receiving the preview frames is to specify an ImageReader as the other output target.

```java
imageReader = ImageReader.newInstance(mPreviewSize.getWidth(), mPreviewSize.getHeight(),
        ImageFormat.YUV_420_888, /*maxImages*/ 2);
// Listen for preview output
imageReader.setOnImageAvailableListener(imageAvailableListener, null);
```

Register an ImageReader.OnImageAvailableListener. Each time a preview Image is captured, onImageAvailable() is called back with the image byte-stream data.

```java
// Called when a preview frame is available
private ImageReader.OnImageAvailableListener imageAvailableListener = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireLatestImage();
        if (image == null) {
            return;
        }
        ByteBuffer buffer = image.getPlanes()[0].getBuffer();
        int imageWidth = image.getWidth();
        int imageHeight = image.getHeight();
        byte[] data = new byte[buffer.remaining()];
        buffer.get(data);
        image.close();
        decodeImageByZxing(data, imageWidth, imageHeight);
    }
};
```
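The heart of the listener — copying the Y plane's remaining bytes out of the ByteBuffer before closing the Image — is plain java.nio code and can be exercised in isolation (the input below is fabricated test data, not a real camera frame):

```java
import java.nio.ByteBuffer;

public class YPlaneExtractor {
    /**
     * Copies the remaining bytes of a (Y-plane) buffer into a fresh array.
     * The copy must happen before Image.close(), because closing the Image
     * invalidates its plane buffers.
     */
    public static byte[] toByteArray(ByteBuffer buffer) {
        byte[] data = new byte[buffer.remaining()];
        buffer.get(data);
        return data;
    }
}
```

For YUV_420_888, plane 0 is the luminance (Y) plane, which is all ZXing needs for decoding; the chroma planes can be ignored.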

V. Decoding layer design

Zxing is a very popular decoding library from Google. Check its project page for the latest version number before adding the dependency.

```groovy
implementation 'com.google.zxing:core:3.4.1'
```

Zxing decoding flow: convert the byte[] image data obtained from the camera into a PlanarYUVLuminanceSource, decode that source into a Result, and judge from the Result whether scanning succeeded.

1. Build the PlanarYUVLuminanceSource. The CameraDevice outputs the image data stream byte[] through the ImageReader; combined with the boundary of the framing rect inside the preview, this is enough to construct the PlanarYUVLuminanceSource that Zxing needs for decoding.

```java
private void decodeImageByZxing(byte[] imageData, int imageWidth, int imageHeight) {
    Rect rect = new Rect(mFramingRectInPreview);
    PlanarYUVLuminanceSource planarYUVLuminanceSource = new PlanarYUVLuminanceSource(
            imageData, imageWidth, imageHeight,
            rect.left, rect.top, rect.width(), rect.height(), false);
}
```
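For intuition, the rect handed to PlanarYUVLuminanceSource restricts decoding to the framing window inside the full preview frame. A simplified stand-alone crop over row-major luminance data (not ZXing's actual implementation, which avoids copying by tracking offsets) illustrates the geometry:

```java
public class LuminanceCrop {
    /**
     * Extracts a left/top/width/height window from row-major luminance data
     * whose stride is dataWidth pixels per row.
     */
    public static byte[] crop(byte[] luminance, int dataWidth, int left, int top, int width, int height) {
        byte[] out = new byte[width * height];
        for (int row = 0; row < height; row++) {
            // Each cropped row starts 'left' pixels into the source row
            System.arraycopy(luminance, (top + row) * dataWidth + left, out, row * width, width);
        }
        return out;
    }
}
```

Restricting decoding to the framing rect is also a performance win: the decoder only has to examine the pixels the user actually aimed at the code.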

2. Specify the decoding formats. Formats are chosen from the BarcodeFormat enum; to recognize QR codes you must add BarcodeFormat.QR_CODE.

```java
private MultiFormatReader getMultiFormatReader() {
    MultiFormatReader mMultiFormatReader = new MultiFormatReader();
    Collection<BarcodeFormat> decodeFormats = EnumSet.noneOf(BarcodeFormat.class);
    // QR code
    decodeFormats.add(BarcodeFormat.QR_CODE);
    decodeFormats.add(BarcodeFormat.DATA_MATRIX);
    final Map<DecodeHintType, Object> hints = new EnumMap<>(DecodeHintType.class);
    hints.put(DecodeHintType.POSSIBLE_FORMATS, decodeFormats);
    hints.put(DecodeHintType.CHARACTER_SET, "UTF8");
    hints.put(DecodeHintType.NEED_RESULT_POINT_CALLBACK, new QRFinderResultPointCallback(finderView));
    mMultiFormatReader.setHints(hints);
    return mMultiFormatReader;
}
```

3. Decode. Wrap the PlanarYUVLuminanceSource in a BinaryBitmap and decode it with the MultiFormatReader configured above; the decoding result is returned as a Result object.

```java
private Result decode(PlanarYUVLuminanceSource planarYUVLuminanceSource) {
    MultiFormatReader multiFormatReader = getMultiFormatReader();
    Result result = null;
    if (planarYUVLuminanceSource != null) {
        BinaryBitmap bitmap = new BinaryBitmap(new HybridBinarizer(planarYUVLuminanceSource));
        try {
            result = multiFormatReader.decodeWithState(bitmap);
        } catch (ReaderException re) {
            Log.e(TAG, ": ", re);
        } finally {
            multiFormatReader.reset();
        }
    }
    return result;
}
```
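HybridBinarizer's job is to turn the grayscale luminance source into the black-and-white matrix the decoder works on. As a toy illustration of binarization only — ZXing's real algorithm uses local, block-based thresholds, not a single global one — a mean-threshold binarizer looks like this:

```java
public class ToyBinarizer {
    /**
     * Marks pixels darker than the mean luminance as black (true).
     * A global threshold like this fails under uneven lighting, which is
     * exactly why ZXing's HybridBinarizer thresholds per local block instead.
     */
    public static boolean[] binarize(byte[] luminance) {
        int sum = 0;
        for (byte b : luminance) {
            sum += b & 0xFF;  // treat bytes as unsigned luminance values
        }
        int mean = luminance.length == 0 ? 0 : sum / luminance.length;
        boolean[] black = new boolean[luminance.length];
        for (int i = 0; i < luminance.length; i++) {
            black[i] = (luminance[i] & 0xFF) < mean;
        }
        return black;
    }
}
```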

4. Handle the decoded Result. Scanning works by keeping the repeating preview request running: each captured preview Image is handed to Zxing for decoding while we wait for the outcome. When no QR code is in view, the decoded Result is null, and the preview request must be issued again. Once a valid QR code is scanned, the Result is non-null and the QR code content has been successfully recognized.

```java
private void handleScanResult(Result result) {
    if (result != null) {
        if (qrScanResultCallback != null) {
            qrScanResultCallback.onResult(result.getText());
        }
    } else {
        // Decoding failed: issue the preview request again and keep scanning
        try {
            captureSession.setRepeatingRequest(previewRequestBuilder.build(), sessionCaptureCallback, null);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }
}
```

VI. Integrating QRScanSdk

Check out the source code on GitHub.

To integrate it in Android Studio:

Step 1. Add the JitPack repository at the end of the repositories block in your root build.gradle:

```groovy
allprojects {
    repositories {
        ...
        maven { url 'https://jitpack.io' }
    }
}
```

Step 2. Add the dependency

```groovy
dependencies {
    implementation 'com.github.zouhecan:QRScanSdk:2.1'
}
```

Step 3. Use QRScanSdk in your activity:

```kotlin
class MainActivity : AppCompatActivity(), QRScanResultCallback {

    fun startScan() {
        QRScanManager.startScan(this, this)
    }

    override fun onResult(result: String?) {
        // do something when the scan is completed
    }
}
```