I recently ran into an interesting issue: when decoding a video that carries a rotation angle with MediaCodec, if the output Surface is provided by a TextureView or a SurfaceView, the video frames are displayed correctly on screen (the rotation angle is handled for us); but if the Surface is created from a SurfaceTexture, the OES texture obtained from the SurfaceTexture does not have the rotation applied, and we have to handle the angle ourselves.
I also tested the Camera: first I set the clockwise rotation angle via Camera.setDisplayOrientation, then set an output Surface to receive the Camera preview frames. Not surprisingly, I reached the same conclusion.
Processing of the video rotation angle
Since all video frames are delivered through a Surface, why is there such a difference? The flow by which MediaCodec and Camera produce video frames and get them onto the screen is as follows:
- First, MediaCodec and Camera set a transform flag for the Surface (ANativeWindow) through the native_window_set_buffers_transform function; it is stored in the Surface.mTransform variable.
- Next, the producer side (MediaCodec and Camera) queues buffers through the BufferQueueProducer held by the Surface, and when a BufferItem is enqueued, Surface.mTransform is assigned to BufferItem.mTransform.
- Then, on the consumer side, GLConsumer acquires the BufferItem through BufferQueueConsumer, reads BufferItem.mTransform, restores a 4x4 texture transformation matrix from it, and saves it in the GLConsumer.mCurrentTransformMatrix variable; the GLConsumer user can obtain this matrix through the GLConsumer.getTransformMatrix method.
- Finally, the GLConsumer user is responsible for applying the texture transformation matrix to the texture so that the video frame is displayed correctly.
GLConsumer is responsible for converting the BufferItem's mGraphicBuffer into a texture and for computing the texture transformation matrix from BufferItem.mTransform; the texture itself is left in its original state (the matrix is not applied to it). It is the responsibility of the GLConsumer user to fetch the texture transformation matrix and apply it to the texture.
Whether the Surface is provided by a TextureView or created from a SurfaceTexture, the first three steps above are exactly the same. The root cause of the difference is step 4: whether the GLConsumer user applies the texture transformation matrix to the texture.
When the Surface is provided by a TextureView, the GLConsumer user is DeferredLayerUpdater, which fetches the texture transformation matrix and fills the Layer.texTransform variable (frameworks/base/libs/hwui/Layer.h); then, on the hardware-accelerated asynchronous render thread, OpenGLRenderer applies this texture transformation matrix when rendering the DrawLayerOp (created from the Layer above).
When the Surface is provided by a SurfaceView, the Surface is an independent window, and the GLConsumer user is frameworks/native/services/surfaceflinger/Layer, which fetches the texture transformation matrix to fill Layer.mTexture.mTextureMatrix; later, when SurfaceFlinger composites the Layer, it applies the texture transformation matrix.
It can be seen that when the output Surface is provided by a TextureView or a SurfaceView, the GLConsumer user actively obtains and applies the texture transformation matrix, so the video frames we see on screen are correct.
When the output Surface is created from a SurfaceTexture that we built on an OES texture ID, the GLConsumer user is the business side. So the business code must actively obtain the texture transformation matrix through SurfaceTexture.getTransformMatrix and apply it to the texture. That is, the OES texture we get is in its original state, and it can be displayed correctly only after the texture transformation matrix is applied.
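For concreteness, here is a minimal sketch of that business-side setup (class and variable names are ours, not from the framework), assuming a GL context is current on the calling thread:

```java
import android.graphics.SurfaceTexture;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.view.Surface;

/** Business-side setup sketch: an OES texture wrapped in a SurfaceTexture,
 *  exposed as a Surface that MediaCodec or Camera can produce into. */
class OesSurfaceHolder {
    final int textureId;
    final SurfaceTexture surfaceTexture;
    final Surface outputSurface;            // hand this to MediaCodec or Camera
    final float[] texMatrix = new float[16];

    OesSurfaceHolder() {
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);
        textureId = tex[0];
        surfaceTexture = new SurfaceTexture(textureId);
        outputSurface = new Surface(surfaceTexture);
    }

    /** Per frame, on the GL thread: latch the newest image and fetch the
     *  texture transformation matrix that we must apply ourselves. */
    void acquireFrame() {
        surfaceTexture.updateTexImage();
        surfaceTexture.getTransformMatrix(texMatrix);
    }
}
```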
How does the business side use the texture transformation matrix?
- First, obtain the texture transformation matrix through SurfaceTexture.getTransformMatrix. The matrix is stored in column-major order, so it can be uploaded to the vertex shader directly with the glUniformMatrix4fv function (see the sketch after this list).
- In the vertex shader, multiply the texture coordinates by the texture transformation matrix and pass the transformed coordinates on to the fragment shader.
- In the fragment shader, sample the texture with those coordinates as usual, and the video frame is displayed correctly.
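A minimal sketch of these three steps, assuming the GL program and the SurfaceTexture from the earlier setup already exist (shader and uniform names such as uTexMatrix are ours):

```java
import android.graphics.SurfaceTexture;
import android.opengl.GLES20;

class FrameRenderer {
    // Step 2: the vertex shader multiplies the texture coordinates by the
    // texture transformation matrix.
    static final String VERTEX_SHADER =
            "attribute vec4 aPosition;\n"
          + "attribute vec4 aTexCoord;\n"
          + "uniform mat4 uTexMatrix;\n"
          + "varying vec2 vTexCoord;\n"
          + "void main() {\n"
          + "    gl_Position = aPosition;\n"
          + "    vTexCoord = (uTexMatrix * aTexCoord).xy;\n"
          + "}\n";

    // Step 3: the fragment shader samples the OES texture as usual.
    static final String FRAGMENT_SHADER =
            "#extension GL_OES_EGL_image_external : require\n"
          + "precision mediump float;\n"
          + "uniform samplerExternalOES uTexture;\n"
          + "varying vec2 vTexCoord;\n"
          + "void main() {\n"
          + "    gl_FragColor = texture2D(uTexture, vTexCoord);\n"
          + "}\n";

    final float[] texMatrix = new float[16];

    /** Step 1: fetch the column-major matrix and upload it unchanged. */
    void drawFrame(int program, SurfaceTexture surfaceTexture) {
        surfaceTexture.updateTexImage();
        surfaceTexture.getTransformMatrix(texMatrix);
        GLES20.glUseProgram(program);
        int loc = GLES20.glGetUniformLocation(program, "uTexMatrix");
        GLES20.glUniformMatrix4fv(loc, 1, /* transpose= */ false, texMatrix, 0);
        // ... bind the OES texture, set up aPosition/aTexCoord, then draw ...
    }
}
```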
The flow of the video rotation angle
Let’s take a look at the rotation flow in MediaCodec and Camera.
MediaCodec
When using MediaCodec to decode video, you configure the decoder through a MediaFormat carrying parameters such as the SPS and PPS. If the video has a rotation angle, configure it with MediaFormat.KEY_ROTATION, which indicates how many degrees the video frame should be rotated clockwise to display correctly. But be warned: the rotation angle only takes effect when MediaCodec decodes directly to a Surface.
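A minimal configuration sketch (the MIME type, dimensions, and csd buffers are illustrative placeholders, not values from this article):

```java
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.view.Surface;
import java.io.IOException;
import java.nio.ByteBuffer;

class DecoderFactory {
    /** Configures an H.264 decoder that renders to outputSurface; sps/pps
     *  are assumed to have been extracted from the stream. */
    static MediaCodec createDecoder(Surface outputSurface,
                                    ByteBuffer sps, ByteBuffer pps) throws IOException {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1920, 1080);
        format.setByteBuffer("csd-0", sps);   // SPS
        format.setByteBuffer("csd-1", pps);   // PPS
        // Rotate decoded frames 90 degrees clockwise for display; this only
        // takes effect because we decode directly to a Surface (API 23+).
        format.setInteger(MediaFormat.KEY_ROTATION, 90);

        MediaCodec decoder = MediaCodec.createDecoderByType("video/avc");
        decoder.configure(format, outputSurface, /* crypto= */ null, /* flags= */ 0);
        decoder.start();
        return decoder;
    }
}
```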
The rotation angle flow in MediaCodec is as follows:
- MediaCodec::configure configures the decoder, with the MediaFormat as a parameter.
- MediaCodec sends a kWhatConfigure message so that the decoder is configured on its own thread.
- It calls ACodec->initiateConfigureComponent to set up ACodec (with the MediaFormat as a parameter).
- ACodec sends a kWhatConfigureComponent message so that ACodec is configured on its own thread.
- This reaches the ACodec::LoadedState::onConfigureComponent method.
- Then comes the ACodec::configureCodec method, which is responsible for configuring ACodec from the MediaFormat; "rotation-degrees" is extracted from the format and saved in the ACodec.mRotationDegrees variable.
- Finally, ACodec::setupNativeWindowSizeFormatAndUsage calls the global function setNativeWindowSizeFormatAndUsage to set the transform flag for the Surface (ANativeWindow).

The core code in the setNativeWindowSizeFormatAndUsage function:
```cpp
// Derive the transform flag from the rotation angle
int transform = 0;
if ((rotation % 90) == 0) {
    switch ((rotation / 90) & 3) {
        case 1:  transform = HAL_TRANSFORM_ROT_90;  break;
        case 2:  transform = HAL_TRANSFORM_ROT_180; break;
        case 3:  transform = HAL_TRANSFORM_ROT_270; break;
        default: transform = 0;                     break;
    }
}

// Set the transform for the Surface (ANativeWindow)
err = native_window_set_buffers_transform(nativeWindow, transform);
```
The native_window_set_buffers_transform function triggers the perform method of Surface (the ANativeWindow subclass) to handle the NATIVE_WINDOW_SET_BUFFERS_TRANSFORM operation, which goes through dispatchSetBuffersTransform -> setBuffersTransform and finally stores the transform in the Surface.mTransform variable.
At this point, the rotation angle has been set into Surface.mTransform, which Surface uses when producing image data: when a buffer is queued, Surface.mTransform is assigned to BufferItem.mTransform (BufferItem being the element type of the BufferQueue).
Then GLConsumer obtains the BufferItem from BufferQueueConsumer and restores a 4x4 texture transformation matrix from BufferItem.mTransform, storing it in the GLConsumer.mCurrentTransformMatrix variable. GLConsumer users can invoke the GLConsumer.getTransformMatrix method to get the texture transformation matrix and then apply it to the corresponding texture when using it, as sketched below.
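To make the "restore a 4x4 matrix from the flag" step concrete, here is a hedged Java sketch of the kind of computation GLConsumer performs natively (helper names are ours; the real implementation also folds in the buffer's crop rectangle and a Y-axis flip, omitted here). The matrices are column-major:

```java
import android.opengl.Matrix;

class TransformMatrix {
    // Transform flag bits, matching the android_transform enum values.
    static final int FLIP_H_BIT = 0x01;
    static final int FLIP_V_BIT = 0x02;
    static final int ROT_90_BIT = 0x04;

    // Column-major building blocks operating on (s, t) texture coordinates.
    static final float[] MTX_FLIP_H = { -1,0,0,0,  0,1,0,0,  0,0,1,0,  1,0,0,1 }; // s -> 1-s
    static final float[] MTX_FLIP_V = { 1,0,0,0,  0,-1,0,0,  0,0,1,0,  0,1,0,1 }; // t -> 1-t
    static final float[] MTX_ROT_90 = { 0,1,0,0,  -1,0,0,0,  0,0,1,0,  1,0,0,1 }; // (s,t) -> (1-t,s)

    /** Expands a transform flag into a 4x4 texture matrix. Note how
     *  ROT_180 (0x03) and ROT_270 (0x07) decompose into these three bits. */
    static float[] fromTransform(int transform) {
        float[] result = new float[16];
        Matrix.setIdentityM(result, 0);
        if ((transform & FLIP_H_BIT) != 0) result = multiply(MTX_FLIP_H, result);
        if ((transform & FLIP_V_BIT) != 0) result = multiply(MTX_FLIP_V, result);
        if ((transform & ROT_90_BIT) != 0) result = multiply(MTX_ROT_90, result);
        return result;
    }

    // Standard column-major 4x4 multiply: out = lhs * rhs.
    static float[] multiply(float[] lhs, float[] rhs) {
        float[] out = new float[16];
        Matrix.multiplyMM(out, 0, lhs, 0, rhs, 0);
        return out;
    }
}
```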
Camera
The flow of the rotation angle in Camera is as follows:
- Camera.setDisplayOrientation sets the clockwise rotation angle for the preview display (see the sketch after this list).
- It corresponds to the native method android_hardware_Camera_setDisplayOrientation.
- This reaches the Camera::sendCommand method; here the Camera is the client, which calls through Binder into the server-side CameraClient::sendCommand.
- CameraClient::sendCommand then sets the orientation.
- It then goes on to CameraHardwareInterface::setPreviewTransform.
- That sets the rotation angle for the Camera preview window (Surface) through native_window_set_buffers_transform.
- Finally, the Surface is responsible for using Surface.mTransform; this part of the flow is the same as in MediaCodec.
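As a reference point for the first step, here is the orientation computation from the android.hardware.Camera documentation, lightly commented (activity, cameraId, and camera are assumed to be available):

```java
import android.app.Activity;
import android.hardware.Camera;
import android.view.Surface;

class CameraOrientation {
    /** Computes and applies the clockwise preview rotation, following the
     *  example in the Camera.setDisplayOrientation documentation. */
    static void setCameraDisplayOrientation(Activity activity, int cameraId, Camera camera) {
        Camera.CameraInfo info = new Camera.CameraInfo();
        Camera.getCameraInfo(cameraId, info);
        int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
        int degrees = 0;
        switch (rotation) {
            case Surface.ROTATION_0:   degrees = 0;   break;
            case Surface.ROTATION_90:  degrees = 90;  break;
            case Surface.ROTATION_180: degrees = 180; break;
            case Surface.ROTATION_270: degrees = 270; break;
        }
        int result;
        if (info.facing == Camera.CameraInfo.CAMERA_FACING_FRONT) {
            result = (info.orientation + degrees) % 360;
            result = (360 - result) % 360;  // compensate for the front camera's mirror
        } else {  // back-facing
            result = (info.orientation - degrees + 360) % 360;
        }
        camera.setDisplayOrientation(result);  // kicks off the flow described above
    }
}
```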
In short, MediaCodec and Camera follow different paths when setting the rotation angle, but both ultimately store it in Surface.mTransform.
Conclusion
Both MediaCodec and Camera are producers of Surface image data, and the source image data may carry a rotation angle or mirroring. In that case, MediaCodec and Camera set a transform flag for the Surface via native_window_set_buffers_transform, which is stored in the Surface.mTransform variable and indicates how the produced image data must be transformed before it can be displayed correctly. The available transform flags are as follows:
```c
// Transform flags
typedef enum android_transform {
    HAL_TRANSFORM_FLIP_H   = 0x01,  // horizontal mirror
    HAL_TRANSFORM_FLIP_V   = 0x02,  // vertical mirror
    HAL_TRANSFORM_ROT_90   = 0x04,  // rotate 90 degrees clockwise
    HAL_TRANSFORM_ROT_180  = 0x03,  // rotate 180 degrees clockwise (FLIP_H | FLIP_V)
    HAL_TRANSFORM_ROT_270  = 0x07,  // rotate 270 degrees clockwise (ROT_90 | ROT_180)
    HAL_TRANSFORM_RESERVED = 0x08,  // don't use. see system/window.h
} android_transform_t;
```
When producing image data, Surface assigns Surface.mTransform to BufferItem.mTransform, and the BufferItem is then enqueued into the BufferQueue.
After acquiring the BufferItem through BufferQueueConsumer, GLConsumer restores a 4x4 texture transformation matrix from the BufferItem.mTransform flag and stores it in the GLConsumer.mCurrentTransformMatrix variable. GLConsumer users call GLConsumer.getTransformMatrix to obtain the texture transformation matrix and then apply it to the corresponding texture, after which video frames are displayed correctly.