I recently ran into an interesting issue: when decoding a video that has a rotation angle with MediaCodec, if the output Surface is provided by a TextureView or SurfaceView, the video frames display correctly on screen (the rotation angle is handled); but if the Surface is created from a SurfaceTexture, the OES texture obtained from that SurfaceTexture does not have the rotation angle applied, and we have to handle the angle ourselves.

In addition, I tested the Camera: first I set the clockwise rotation angle with Camera.setDisplayOrientation, then set an output Surface to receive the Camera preview frames. Unsurprisingly, it led to the same conclusion.

Processing of the video rotation angle

Since all video frames are delivered through a Surface, why is there such a difference? MediaCodec and Camera produce video frames and get them onto the screen as follows:

  1. First, MediaCodec and Camera call the native_window_set_buffers_transform function to set a transform flag on the Surface (ANativeWindow), which is stored in the Surface.mTransform variable.
  2. Next, the producer side (MediaCodec and Camera) holds a BufferQueueProducer through the Surface, so when it queues a BufferItem it assigns Surface.mTransform to BufferItem.mTransform.
  3. Then, the consumer side GLConsumer obtains the BufferItem through BufferQueueConsumer, reads BufferItem.mTransform, restores a 4x4 texture transformation matrix from it, and saves the matrix in the GLConsumer.mCurrentTransformMatrix variable. The GLConsumer user can obtain the texture transformation matrix via the GLConsumer.getTransformMatrix method.
  4. Finally, the GLConsumer user is responsible for applying the texture transformation matrix to the texture so that the video frame displays correctly.

GLConsumer is responsible for converting BufferItem.mGraphicBuffer into a texture and for computing the texture transformation matrix from BufferItem.mTransform; the texture itself remains in its original state (the matrix is not applied to it). It is the responsibility of the GLConsumer user to fetch the texture transformation matrix and apply it to the texture.

Whether the Surface is provided by a TextureView or created from a SurfaceTexture, the first three steps above are the same. The root cause of the difference lies in step 4: whether the GLConsumer user applies the texture transformation matrix to the texture.

When the Surface is provided by a TextureView, the GLConsumer user is DeferredLayerUpdater: it fetches the texture transformation matrix and writes it into the Layer.texTransform variable (frameworks/base/libs/hwui/Layer.h); then, on the hardware-accelerated asynchronous render thread, OpenGLRenderer applies this texture transformation matrix when drawing the DrawLayerOp (created from the Layer above).

When the Surface is provided by a SurfaceView, the Surface is an independent window, and the GLConsumer user is frameworks/native/services/surfaceflinger/Layer: it fetches the texture transformation matrix and writes it into Layer.mTexture.mTextureMatrix; when SurfaceFlinger later composites the Layer, it applies the texture transformation matrix.

So when the output Surface is provided by a TextureView or SurfaceView, the GLConsumer user actively obtains and applies the texture transformation matrix, which is why the video frames we see on screen look correct.

But when the output Surface is created from a SurfaceTexture that we built around an OES texture ID, the GLConsumer user is the business side, i.e. our own code. The business side therefore has to call SurfaceTexture.getTransformMatrix itself to obtain the texture transformation matrix and apply it to the texture. In other words, the OES texture we get is the raw texture, and it displays correctly only after the texture transformation matrix has been applied.

How does the business side use the texture transformation matrix?

  1. First, obtain the texture transformation matrix via SurfaceTexture.getTransformMatrix. The matrix is stored in column-major order, so it can be uploaded to the vertex shader directly with the glUniformMatrix4fv function.
  2. In the vertex shader, multiply the texture coordinates by the texture transformation matrix and pass the transformed texture coordinates on to the fragment shader.
  3. In the fragment shader, sample the texture with those texture coordinates as usual, and the video frames display correctly (see the sketch below).
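Here is a minimal sketch of these three steps; the shader strings and names such as uSTMatrix are illustrative, not taken from any particular codebase:

import android.graphics.SurfaceTexture;
import android.opengl.GLES20;

public class OesFrameRenderer {
    // Vertex shader: multiply the texture coordinates by the texture
    // transformation matrix (uSTMatrix) obtained from SurfaceTexture.
    static final String VERTEX_SHADER =
            "attribute vec4 aPosition;\n"
            + "attribute vec4 aTexCoord;\n"
            + "uniform mat4 uSTMatrix;\n"
            + "varying vec2 vTexCoord;\n"
            + "void main() {\n"
            + "    gl_Position = aPosition;\n"
            + "    vTexCoord = (uSTMatrix * aTexCoord).xy;\n"
            + "}\n";

    // Fragment shader: sample the OES texture with the transformed coordinates.
    static final String FRAGMENT_SHADER =
            "#extension GL_OES_EGL_image_external : require\n"
            + "precision mediump float;\n"
            + "uniform samplerExternalOES uTexture;\n"
            + "varying vec2 vTexCoord;\n"
            + "void main() {\n"
            + "    gl_FragColor = texture2D(uTexture, vTexCoord);\n"
            + "}\n";

    private final float[] mSTMatrix = new float[16];

    public void drawFrame(SurfaceTexture surfaceTexture, int uSTMatrixLocation) {
        // Latch the newest frame into the OES texture.
        surfaceTexture.updateTexImage();
        // The matrix is column-major, so transpose must be false.
        surfaceTexture.getTransformMatrix(mSTMatrix);
        GLES20.glUniformMatrix4fv(uSTMatrixLocation, 1, false, mSTMatrix, 0);
        // ... bind vertex buffers and issue the draw call as usual ...
    }
}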

The flow of the video rotation angle

Let’s take a look at the rotation flow in MediaCodec and Camera.

MediaCodec

When using MediaCodec to decode video, you use MediaFormat to set decoding parameters such as the SPS and PPS. If the video has a rotation angle, configure it with MediaFormat.KEY_ROTATION, which indicates how many degrees the video frame must be rotated clockwise to display upright. But be warned: the rotation angle only takes effect when MediaCodec decodes directly to a Surface.
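For example, configuring a decoder for a video with a 90-degree rotation might look like the following sketch (the MIME type, resolution, and output Surface are placeholders):

import android.media.MediaCodec;
import android.media.MediaFormat;
import android.view.Surface;
import java.io.IOException;

class RotatedDecoderFactory {
    static MediaCodec create(Surface outputSurface) throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat(
                MediaFormat.MIMETYPE_VIDEO_AVC, 1920, 1080);
        // Degrees the frame must be rotated clockwise to display upright;
        // only honored when decoding directly to a Surface.
        format.setInteger(MediaFormat.KEY_ROTATION, 90);
        MediaCodec codec = MediaCodec.createDecoderByType(
                MediaFormat.MIMETYPE_VIDEO_AVC);
        codec.configure(format, outputSurface, null /* crypto */, 0 /* flags */);
        codec.start();
        return codec;
    }
}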

The rotation angle flow in MediaCodec is as follows:

  1. MediaCodec::configure configures the decoder, taking a MediaFormat as a parameter.
  2. MediaCodec sends a kWhatConfigure message so that the decoder is configured on its own thread.
  3. ACodec->initiateConfigureComponent is called to set up ACodec (with the MediaFormat as a parameter).
  4. ACodec sends a kWhatConfigureComponent message so that ACodec is configured on its own thread.
  5. This lands in the ACodec::LoadedState::onConfigureComponent method.
  6. That in turn calls the ACodec::configureCodec method, which configures ACodec from the MediaFormat; "rotation-degrees" is extracted from the format and saved in the ACodec.mRotationDegrees variable.
  7. Finally, ACodec::setupNativeWindowSizeFormatAndUsage calls the global function setNativeWindowSizeFormatAndUsage to set the transform flag on the Surface (ANativeWindow). The core code in setNativeWindowSizeFormatAndUsage is:
// Derive the transform flag from the rotation angle
int transform = 0;
if ((rotation % 90) == 0) {
    switch ((rotation / 90) & 3) {
        case 1:  transform = HAL_TRANSFORM_ROT_90;  break;
        case 2:  transform = HAL_TRANSFORM_ROT_180; break;
        case 3:  transform = HAL_TRANSFORM_ROT_270; break;
        default: transform = 0;                     break;
    }
}

// Set the transform for the Surface (ANativeWindow)
err = native_window_set_buffers_transform(nativeWindow, transform);

The native_window_set_buffers_transform function ends up in the perform method of Surface (the ANativeWindow subclass), which handles the NATIVE_WINDOW_SET_BUFFERS_TRANSFORM message; the call then goes through dispatchSetBuffersTransform -> setBuffersTransform, finally storing the transform in the Surface.mTransform variable.

At this point, the rotation angle has been recorded in Surface.mTransform, which the Surface uses when producing image data.

Next, Surface.mTransform is assigned to BufferItem.mTransform when a buffer is queued; the BufferItem is the element stored in the BufferQueue.

Then GLConsumer obtains the BufferItem from BufferQueueConsumer, restores a 4x4 texture transformation matrix from BufferItem.mTransform, and stores it in the GLConsumer.mCurrentTransformMatrix variable. GLConsumer users can invoke the GLConsumer.getTransformMatrix method to get the texture transformation matrix and then apply it when using the corresponding texture.

Camera

The rotation angle flow in Camera is as follows:

  1. Camera.setDisplayOrientation sets the clockwise rotation angle for the preview display (see the sketch after this list for the standard way to compute this angle).
  2. It corresponds to the native method android_hardware_Camera_setDisplayOrientation.
  3. That reaches the Camera.sendCommand method; the Camera here is the client, and the call goes through Binder to the server-side CameraClient::sendCommand.
  4. CameraClient::sendCommand then sets the orientation.
  5. It then continues to CameraHardwareInterface::setPreviewTransform.
  6. That sets the rotation angle on the camera preview window (a Surface) via native_window_set_buffers_transform.
  7. Finally, the Surface is responsible for using Surface.mTransform; this part of the flow is the same as for MediaCodec.
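The standard way to compute the angle passed to setDisplayOrientation, adapted from the android.hardware.Camera.setDisplayOrientation documentation (the legacy Camera API), looks like this:

import android.app.Activity;
import android.hardware.Camera;
import android.view.Surface;

class CameraOrientationHelper {
    // Compute the clockwise display rotation from the sensor orientation
    // and the current display rotation, then hand it to the Camera.
    static void setCameraDisplayOrientation(
            Activity activity, int cameraId, Camera camera) {
        Camera.CameraInfo info = new Camera.CameraInfo();
        Camera.getCameraInfo(cameraId, info);
        int rotation = activity.getWindowManager()
                .getDefaultDisplay().getRotation();
        int degrees = 0;
        switch (rotation) {
            case Surface.ROTATION_0:   degrees = 0;   break;
            case Surface.ROTATION_90:  degrees = 90;  break;
            case Surface.ROTATION_180: degrees = 180; break;
            case Surface.ROTATION_270: degrees = 270; break;
        }
        int result;
        if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
            result = (info.orientation + degrees) % 360;
            result = (360 - result) % 360;  // compensate for the mirror
        } else {  // back-facing
            result = (info.orientation - degrees + 360) % 360;
        }
        camera.setDisplayOrientation(result);
    }
}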

In short, MediaCodec and Camera set the rotation angle through different paths, but both end up storing it in Surface.mTransform.

Conclusion

Both MediaCodec and Camera are sources of Surface image data, but the source image data may carry a rotation angle or be mirrored. In that case, MediaCodec and Camera set a transform flag on the Surface via native_window_set_buffers_transform, and it is stored in the Surface.mTransform variable, indicating that the produced image data must be transformed before it can display correctly. The available transform flags are as follows:

// Transform flags
typedef enum android_transform {
    // horizontal mirror
    HAL_TRANSFORM_FLIP_H    = 0x01,
    // vertical mirror
    HAL_TRANSFORM_FLIP_V    = 0x02,
    // rotate 90 degrees clockwise
    HAL_TRANSFORM_ROT_90    = 0x04,
    // rotate 180 degrees clockwise
    HAL_TRANSFORM_ROT_180   = 0x03,
    // rotate 270 degrees clockwise
    HAL_TRANSFORM_ROT_270   = 0x07,
    // don't use. see system/window.h
    HAL_TRANSFORM_RESERVED  = 0x08,
} android_transform_t;

The Surface then assigns Surface.mTransform to BufferItem.mTransform and queues the BufferItem into the BufferQueue.

After obtaining the BufferItem through BufferQueueConsumer, GLConsumer restores a 4x4 texture transformation matrix from the BufferItem.mTransform flag and stores it in the GLConsumer.mCurrentTransformMatrix variable. GLConsumer users call GLConsumer.getTransformMatrix to obtain the texture transformation matrix and apply it to the corresponding texture, after which the video frames display correctly.
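To make the flag-to-matrix step concrete, here is a small Java sketch. The matrix values follow the rot-90 texture matrix that GLConsumer uses internally (mtxRot90 in AOSP's GLConsumer.cpp); the helper around it is purely illustrative:

class TransformMatrixDemo {
    // Column-major 4x4 texture matrix for HAL_TRANSFORM_ROT_90; it maps a
    // texture coordinate (s, t) to (1 - t, s).
    static final float[] MTX_ROT_90 = {
            0,  1, 0, 0,   // column 0
           -1,  0, 0, 0,   // column 1
            0,  0, 1, 0,   // column 2
            1,  0, 0, 1,   // column 3 (holds the translation)
    };

    // Apply a column-major 4x4 matrix to a 2D texture coordinate (s, t).
    static float[] transform(float[] m, float s, float t) {
        return new float[] {
                m[0] * s + m[4] * t + m[12],
                m[1] * s + m[5] * t + m[13],
        };
    }

    public static void main(String[] args) {
        // The matrix rotates the texture coordinate (1, 0) to (1, 1).
        float[] st = transform(MTX_ROT_90, 1f, 0f);
        System.out.println(st[0] + ", " + st[1]);  // prints 1.0, 1.0
    }
}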