
Previously I shared an out-of-the-box WebRTC development environment, hoping it would help friends interested in WebRTC. But a ready development environment is only the first step of a long march; for everything that follows, we have to read the source code carefully. So here begins the WebRTC Android demo project, the AppRTC-Android source guide series. (The Dagger2 series will have to wait a little longer, please bear with me…)

The amount of WebRTC code is too large to read all at once. In this series, I’m going to try to figure out three things:

  1. How to establish a connection between clients?
  2. How to transfer data between clients?
  3. The complete pipeline of audio and video capture, preview, encoding, transmission, decoding, and rendering.

This article is the first in the series. I will start from what we are most familiar with, capture, and analyze the camera capture implementation in WebRTC Android.

Camera capture in WebRTC Android involves four main classes: Enumerator, Capturer, Session, and SurfaceTextureHelper. Their responsibilities are as follows:

The Enumerator enumerates camera devices and creates the Capturer; the Capturer creates the Session, which performs the actual camera operations; the SurfaceTextureHelper receives the SurfaceTexture frame data.

Enumerator

The CameraEnumerator interface is as follows:

public interface CameraEnumerator {
  public String[] getDeviceNames();
  public boolean isFrontFacing(String deviceName);
  public boolean isBackFacing(String deviceName);
  public List<CaptureFormat> getSupportedFormats(String deviceName);
  public CameraVideoCapturer createCapturer(
      String deviceName, CameraVideoCapturer.CameraEventsHandler eventsHandler);
}

Its responsibilities: get the device list, check which way a camera faces, and create the Capturer.

This is a typical abstraction: it unifies (and encapsulates) the different APIs of Camera1 and Camera2 behind one API-independent interface, and deviceName is at the heart of this unification. Map every camera device under the different APIs to its own deviceName, and all subsequent operations can simply specify a deviceName. An integer ID could be used instead of a string, but readability would suffer.
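
A minimal sketch (my own illustration, not code from the WebRTC source) of how client code typically uses this abstraction: pick an enumerator for the available API, then address cameras only by deviceName:

import android.content.Context;
import org.webrtc.*;

// Choose Camera2 if the device supports it, otherwise fall back to Camera1;
// everything after enumerator construction is API-independent.
static CameraVideoCapturer createFrontCapturer(Context context) {
  CameraEnumerator enumerator = Camera2Enumerator.isSupported(context)
      ? new Camera2Enumerator(context)
      : new Camera1Enumerator(true /* captureToTexture */);
  for (String deviceName : enumerator.getDeviceNames()) {
    if (enumerator.isFrontFacing(deviceName)) {
      return enumerator.createCapturer(deviceName, null /* eventsHandler */);
    }
  }
  return null; // no front camera found
}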

Capturer

WebRTC defines the video capture interface VideoCapturer, which declares operations such as initialization, start/stop, and destruction, as well as callbacks for receiving start/stop events and captured data. The camera implementation is CameraCapturer, with two subclasses: Camera1Capturer and Camera2Capturer. Most of the camera capture logic is encapsulated in CameraCapturer; the CameraSession it drives has a different implementation in each of the two subclasses.

Below, let's look at how CameraCapturer implements each VideoCapturer API.

initialize

initialize is simple: it just stores the related objects passed in.

startCapture

startCapture first checks whether a session is already being created or running, which ensures that multiple sessions never run at the same time. Access to the various state members is protected by stateLock to avoid multithreading issues.

If a session needs to be created, it is created on the camera thread, while the timeout of the camera operation is watched from the main thread. All camera operations are moved to a separate camera thread to avoid blocking the main thread, and the timeout check cannot run on the camera thread itself: if the camera thread blocks, the timeout callback would never execute.
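
Reduced to its essentials, the pattern looks roughly like this (a sketch; the names and the timeout value are assumptions, not WebRTC's actual code):

import android.os.Handler;
import android.os.HandlerThread;
import android.os.Looper;

// Sketch of the "camera thread + main-thread watchdog" pattern.
class CameraOpenWatchdog {
  private static final long OPEN_CAMERA_TIMEOUT_MS = 10_000;

  private final Handler uiHandler = new Handler(Looper.getMainLooper());
  private final Handler cameraHandler;
  private final Runnable timeoutRunnable =
      () -> System.err.println("Camera failed to open in time");

  CameraOpenWatchdog() {
    HandlerThread thread = new HandlerThread("CameraThread");
    thread.start();
    cameraHandler = new Handler(thread.getLooper());
  }

  void openCamera(Runnable blockingOpen) {
    // Arm the timeout on the main thread: even if the camera thread blocks
    // inside the driver, this runnable still fires.
    uiHandler.postDelayed(timeoutRunnable, OPEN_CAMERA_TIMEOUT_MS);
    cameraHandler.post(() -> {
      blockingOpen.run();                         // potentially blocking call
      uiHandler.removeCallbacks(timeoutRunnable); // opened in time: disarm
    });
  }
}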

Notice that there is no actual camera-operation code in the Capturer: opening the camera and starting the preview are all encapsulated in CameraSession. This simplifies the Capturer's logic: whether switching cameras or retrying after a failure, it only needs to create a new session, and the Capturer itself focuses on state maintenance and error handling.

CameraCapturer's state maintenance and error handling are very thorough: camera opening, camera operations, camera switching, error retries, and open timeouts are all accounted for. In addition, camera switching, errors, and other events all have callback notifications. This is where the difference between a demo and a product shows. A demo that just turns on a camera preview can be as short as a dozen lines of code, but handling all the exceptional cases properly takes a lot of work.

There is a slight blemish, though. The error callback takes a string, which is handy for logging, but it doesn't let code determine the type of error. It would be better to pass error codes and provide a conversion function from error codes to error messages.
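
One possible shape for that, as a sketch (this is my suggestion, not an API that exists in WebRTC):

// Hypothetical error codes plus a code-to-message conversion function;
// WebRTC's actual callback passes only the string.
enum CameraErrorCode { DISCONNECTED, EVICTED, OPEN_TIMEOUT, UNKNOWN }

static String errorMessage(CameraErrorCode code) {
  switch (code) {
    case DISCONNECTED: return "Camera disconnected";
    case EVICTED:      return "Camera evicted by another client";
    case OPEN_TIMEOUT: return "Camera failed to open in time";
    default:           return "Unknown camera error";
  }
}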

stopCapture

stopCapture first determines whether a session is being created; if so, it waits until creation completes. After this check, if a session is currently running, it is closed on the camera thread.
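
The waiting part looks roughly like this (a sketch with assumed field names, following the pattern used in CameraCapturer; the open path is expected to call stateLock.notifyAll() when it finishes):

synchronized (stateLock) {
  while (sessionOpening) {
    try {
      stateLock.wait(); // lock released while waiting, re-acquired on wake-up
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt(); // preserve the interrupt status
    }
  }
  // No session is opening now; stop currentSession on the camera thread if set.
}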

changeCaptureFormat

To change the capture format, the code simply calls stopCapture and then startCapture. Both operations are asynchronous; will that be a problem? No: a Handler queue executes all tasks submitted to it in order, so we don't have to worry about closing and reopening the camera in the wrong sequence.
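
A tiny self-contained demonstration of that FIFO guarantee (illustrative, not WebRTC code):

import android.os.Handler;
import android.os.HandlerThread;

// A single Handler executes posted tasks in submission order, which is what
// makes stop-then-start safe even though both calls are asynchronous.
public class HandlerOrderDemo {
  public static void main(String[] args) {
    HandlerThread thread = new HandlerThread("CameraThread");
    thread.start();
    Handler handler = new Handler(thread.getLooper());
    handler.post(() -> System.out.println("stopCapture work"));  // always first
    handler.post(() -> System.out.println("startCapture work")); // always second
    handler.post(thread::quitSafely);
  }
}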

switchCamera

switchCamera will also stop the old session and create a new one, but it additionally needs to check the number of cameras and handle the switch-status notification logic.

There seems to be a slight problem in this code: startCapture sets openAttemptsRemaining to MAX_OPEN_CAMERA_ATTEMPTS, but switchCamera sets it to 1. This asymmetry doesn't look intentional, so I think the two should be consistent.

Session

CameraSession has two implementations, Camera1Session and Camera2Session, which wrap the Camera1 and Camera2 APIs respectively.

The code that starts the camera, starts the preview, and sets the event callbacks lives in the factory methods that create the session: Camera1Session.create and Camera2Session.create. Stopping the camera and the preview is defined by a stop interface.

The camera APIs themselves are relatively simple to use.

Camera1

  • Create the Camera object: Camera.open;
  • Set the preview SurfaceTexture that receives frame data (in video memory): camera.setPreviewTexture;
  • Select appropriate camera preview parameters (size, frame rate, focus): Camera.Parameters, camera.setParameters;
  • If you need a callback with the data in memory, set up buffers and a listener: camera.addCallbackBuffer, camera.setPreviewCallbackWithBuffer;
  • If you want the camera service to adjust the image orientation for us, set the rotation angle: camera.setDisplayOrientation;
  • Start the preview: camera.startPreview;
  • Stop the preview: camera.stopPreview, camera.release;
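
Tying the steps above together, a minimal sketch might look like this (the Camera1 API is deprecated; error handling is omitted and the parameter values are illustrative):

import java.io.IOException;
import android.graphics.SurfaceTexture;
import android.hardware.Camera;

static Camera startCamera1Preview(int cameraId, SurfaceTexture texture)
    throws IOException {
  Camera camera = Camera.open(cameraId);   // create the Camera object
  camera.setPreviewTexture(texture);       // frames land in video memory
  Camera.Parameters params = camera.getParameters();
  params.setPreviewSize(640, 480);         // choose size/fps/focus here
  camera.setParameters(params);
  camera.setDisplayOrientation(90);        // ask the service to rotate output
  camera.startPreview();
  return camera;                           // later: stopPreview() + release()
}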

Camera2

  • Create the CameraManager object; camera operations begin with this "camera manager": context.getSystemService(Context.CAMERA_SERVICE);
  • Create the CameraDevice object: cameraManager.openCamera;
  • Unlike Camera1, Camera2 operations are asynchronous: when calling openCamera we pass in a callback that receives events about the state of the camera operation;
  • Creation succeeded: CameraDevice.StateCallback#onOpened;
  • After creating the camera object, open the preview session and set the data callback: camera.createCaptureSession; again, this operation takes a callback;
  • Session started successfully: CameraCaptureSession.StateCallback#onConfigured;
  • After the session is open, set the data format (size, frame rate, focus) and submit the data request: CaptureRequest.Builder, session.setRepeatingRequest;
  • Stop the preview: cameraCaptureSession.stopRepeating, cameraDevice.close;
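
A condensed sketch of this asynchronous chain (permission checks and real error handling are omitted; everything besides the Camera2 API itself is my illustration):

import android.content.Context;
import android.hardware.camera2.*;
import android.os.Handler;
import android.view.Surface;
import java.util.Arrays;

// Every step completes via a callback: open the device, configure a session,
// then submit a repeating preview request.
static void startCamera2Preview(Context context, String cameraId,
    Surface target, Handler handler) throws CameraAccessException {
  CameraManager manager =
      (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
  manager.openCamera(cameraId, new CameraDevice.StateCallback() {
    @Override public void onOpened(CameraDevice camera) {
      try {
        camera.createCaptureSession(Arrays.asList(target),
            new CameraCaptureSession.StateCallback() {
              @Override public void onConfigured(CameraCaptureSession session) {
                try {
                  CaptureRequest.Builder builder =
                      camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                  builder.addTarget(target); // size/fps/focus go on the builder
                  session.setRepeatingRequest(builder.build(), null, handler);
                } catch (CameraAccessException e) { /* handle */ }
              }
              @Override public void onConfigureFailed(CameraCaptureSession s) {}
            }, handler);
      } catch (CameraAccessException e) { /* handle */ }
    }
    @Override public void onDisconnected(CameraDevice camera) { camera.close(); }
    @Override public void onError(CameraDevice camera, int error) { camera.close(); }
  }, handler);
}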

2017.07.27 UPDATE: I found that I missed a piece of content that would bother many people: image orientation.

Image orientation

Usually the front camera outputs an image rotated 270° counterclockwise and the rear camera 90°, but there are a few exceptions, such as the Nexus 5X, where both front and rear are 270°.

In Camera1 we can control the orientation of the camera's output image through the camera.setDisplayOrientation interface, but in fact, whether we receive the data in memory or in video memory (SurfaceTexture), this call does not change the data itself; it only affects the transformation matrix the camera attaches to the data it outputs. Camera2 has no corresponding interface, but the camera service automatically adjusts the transformation matrix for us, which is equivalent to calling such an interface correctly.

If the preview is implemented with camera.setPreviewDisplay or camera.setPreviewTexture, then camera.setDisplayOrientation does change the orientation of the preview image: the camera service applies the transformation matrix when rendering to the SurfaceView/TextureView, so the preview picture is rotated.

Besides orientation there is also the question of mirroring. In Camera1, the front camera automatically mirrors the picture for us (again by modifying the transformation matrix). For example, the front camera's output image is rotated 270° counterclockwise, so if we set a 90° rotation the picture would come out upside down; what we actually see is upright, which is equivalent to an additional left-right flip, i.e. a mirror image: with the front camera, when we touch the left side of our face with our left hand, the preview also shows it on the left side of the screen.

To compute the parameter for camera.setDisplayOrientation so that the directly previewed picture is oriented correctly, we can use code like the following:

private static int getRotationDegree(int cameraId) {
  // Look up static info for this camera (facing, sensor orientation).
  Camera.CameraInfo cameraInfo = new Camera.CameraInfo();
  Camera.getCameraInfo(cameraId, cameraInfo);

  int orientation = 0;
  WindowManager wm = (WindowManager) applicationContext
      .getSystemService(Context.WINDOW_SERVICE);
  switch (wm.getDefaultDisplay().getRotation()) {
    case Surface.ROTATION_90:
      orientation = 90;
      break;
    case Surface.ROTATION_180:
      orientation = 180;
      break;
    case Surface.ROTATION_270:
      orientation = 270;
      break;
    case Surface.ROTATION_0:
    default:
      orientation = 0;
      break;
  }

  if (cameraInfo.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
    return (720 - (cameraInfo.orientation + orientation)) % 360;
  } else {
    return (360 - orientation + cameraInfo.orientation) % 360;
  }
}

SurfaceTextureHelper

SurfaceTextureHelper is responsible for creating SurfaceTexture, receiving SurfaceTexture data, and managing the camera thread.

Creating SurfaceTexture comes with a few caveats:

  • The thread that creates the OpenGL texture needs to have a GL context ready; WebRTC encapsulates this logic in the EglBase class;
  • The thread on which the SurfaceTexture is created is, by default, the thread on which its onFrameAvailable data callback fires; API 21 introduced an overload that accepts a Handler specifying the callback thread (see the sketch after the comment block below);
// The onFrameAvailable() callback will be executed on the SurfaceTexture ctor thread.
// See: http://grepcode.com/file/repository.grepcode.com/java/ext/com.google.android/
// android/5.1.1_r1/android/graphics/SurfaceTexture.java#195.
// Therefore, in order to control the callback thread on API lvl < 21,
// the SurfaceTextureHelper is constructed on the |handler| thread.
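
So on API 21 and above, the callback thread can simply be chosen explicitly; a sketch (oesTextureId and cameraHandler are assumed to exist in the surrounding code):

// The Handler overload of setOnFrameAvailableListener was added in API 21,
// making the construct-on-|handler|-thread trick unnecessary there.
SurfaceTexture surfaceTexture = new SurfaceTexture(oesTextureId);
surfaceTexture.setOnFrameAvailableListener(
    texture -> { /* updateTexImage() must still run on the GL thread */ },
    cameraHandler);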

Pitfalls

  • On earlier (pre-5.0) systems, when Camera1 stops previewing, do not manually call the following interfaces to set null values: setPreviewDisplay / setPreviewCallback / setPreviewTexture (the documentation does say not to call them…); otherwise all system services may crash and the phone may restart;

  • Stopping the Camera1 preview may deadlock (not resolved):
// Note: stopPreview or other driver code might deadlock. Deadlock in
// android.hardware.Camera._stopPreview(Native Method) has been observed on
// Nexus 5 (hammerhead), OS version LMY48I.
camera.stopPreview();
  • Camera2-related code encounters a VerifyError on systems prior to 4.4.2:
try {
    return cameraManager.getCameraIdList();
    // On Android OS pre 4.4.2, a class will not load because of VerifyError if it contains a
    // catch statement with an Exception from a newer API, even if the code is never executed.
    // https://code.google.com/p/android/issues/detail?id=209129
} catch (/* CameraAccessException */ AndroidException e) {
    Logging.e(TAG, "Camera access exception: " + e);
    return new String[] {};
}
  • When using a SurfaceTexture to receive frame data, some models (I ran into this on the MX5) may produce a black screen: you need to set the SurfaceTexture's buffer size via surfaceTexture.setDefaultBufferSize;
  • When using a SurfaceTexture to receive frame data, the timestamp obtained through the SurfaceTexture.getTimestamp interface is relative time, and the first few frames may report a value of 0. The relative-time problem can be solved by recording the difference between the first frame's timestamp and physical time, then computing the physical timestamp of each subsequent frame; but since the leading frames report 0, we have to wait until the difference is non-zero before recording it, and for those leading frames we can use physical time directly as the timestamp (see the sketch after this list);
  • surfaceTexture.updateTexImage and eglSwapBuffers can deadlock against each other, so we need to synchronize them ourselves:
// SurfaceTexture.updateTexImage apparently can compete and deadlock with eglSwapBuffers,
// as observed on Nexus 5. Therefore, synchronize it with the EGL functions.
// See https://bugs.chromium.org/p/webrtc/issues/detail?id=5702 for more info.
synchronized (EglBase.lock) {
  surfaceTexture.updateTexImage();
}

synchronized (EglBase.lock) {
  EGL14.eglSwapBuffers(eglDisplay, eglSurface);
}
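
Here is the timestamp fix-up from the list above as a sketch (illustrative names, not WebRTC's actual implementation):

// SurfaceTexture.getTimestamp() runs on an arbitrary time base and the first
// few frames may report 0, so fall back to wall-clock time until a usable
// offset between the two time bases has been recorded.
class RelativeTimestampFixer {
  private long offsetNs = 0;

  long alignedTimestampNs(long rawNs /* from SurfaceTexture.getTimestamp() */) {
    long nowNs = System.nanoTime();
    if (rawNs == 0) {
      return nowNs;             // leading frames: use physical time directly
    }
    if (offsetNs == 0) {
      offsetNs = nowNs - rawNs; // first non-zero frame: record the offset
    }
    return rawNs + offsetNs;    // subsequent frames: shift onto the wall clock
  }
}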

Memory jitter optimization

Running AppRTC-Android, we will find that the memory jitter is quite serious.

We can use the Allocation Tracker to analyze and optimize this part. The specifics, along with some other groundwork, will be organized and published separately. Stay tuned!

Subsequent articles

  • WebRTC-Android source guide (2): preview implementation analysis
  • WebRTC-Android source guide (3): video hardware encoding implementation analysis
