How to draw a picture on Android platform

【Android audio and video learning road (2)】AudioRecord API detailed explanation and application

【Android audio and video learning road (3)】AudioTrack use and detailed explanation

【Android audio and video learning road (4)】The use of Camera

【Android audio and video learning road (5)】MediaExtractor and MediaMuxer explanation

【Android audio and video learning road (6)】MediaCodec simple explanation

【Android audio and video learning road (7)】Audio and video recording process practice

“This is the second day of my participation in the August More Text Challenge. For details, see: August More Text Challenge.”

One, the origin of YUV

Among the common color models, RGB is mainly used to express and display color in electronic systems, CMYK printing four-color mode is used for color printing, and YUV is a color coding method adopted by the European television system.

There are two advantages to using YUV:

  1. YUV was designed to optimize the transmission of color video signals while staying backward compatible with older black-and-white TVs, a property exploited by broadcast TV signals.
  2. YUV data can take less space than RGB (although YUV444 is 24 bits per pixel, the same as RGB888).

YUV has three components. Y stands for Luminance (or Luma), the gray-scale value; U and V stand for Chrominance (or Chroma), which describe the color of a pixel.

In digital video, U and V correspond to Cb and Cr (C for chroma, b for blue-difference, r for red-difference).

Classification

There are two broad classes of YUV formats: planar and packed.

In a planar YUV format, the Y values of all pixels are stored consecutively, followed by the U values of all pixels, then the V values of all pixels (or V first, then U).

In a packed YUV format, the Y, U, and V values of each pixel are stored consecutively and interleaved.

Sampling

There are three mainstream sampling schemes: YUV 4:4:4, YUV 4:2:2, and YUV 4:2:0.

YUV 4:4:4 sampling: each Y has its own UV pair; each pixel takes 8+8+8 = 24 bits (3 bytes).

YUV 4:2:2 sampling: every two Y values share one UV pair; each pixel averages 8+4+4 = 16 bits (2 bytes).

YUV 4:2:0 sampling: every four Y values share one UV pair; each pixel averages 8+2+2 = 12 bits (1.5 bytes).

The most common formats, YUV420P and YUV420SP, are both based on 4:2:0 sampling. So for an image of width `width` and height `height`, the required memory is width * height * 3 / 2 bytes: the first width * height bytes hold the Y component, the next width * height / 4 bytes hold the U component, and the final width * height / 4 bytes hold the V component.
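As a sanity check on this arithmetic, here is a small sketch (Java for brevity; the class and method names are mine, not part of any Android API) that computes the plane sizes and offsets of a planar YUV420 buffer:

```java
public class Yuv420Size {
    // Total bytes for a 4:2:0 frame: full-size Y plane + quarter-size U and V planes
    static int totalSize(int width, int height) {
        return width * height * 3 / 2;
    }

    // Offset where the U plane starts (right after the Y plane)
    static int uOffset(int width, int height) {
        return width * height;
    }

    // Offset where the V plane starts (after the Y and U planes)
    static int vOffset(int width, int height) {
        return width * height + (width / 2) * (height / 2);
    }

    public static void main(String[] args) {
        int w = 640, h = 480;
        System.out.println(totalSize(w, h)); // 460800
        System.out.println(uOffset(w, h));   // 307200
        System.out.println(vOffset(w, h));   // 384000
    }
}
```

For 640×480 this gives 460800 total bytes: the U plane starts right after the 307200 Y bytes, and the V plane a further 76800 bytes in.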

Two, the different YUV formats

YUV420 is divided into several formats according to the order in which the color data is stored, but the information they carry is exactly the same. For example, for a 4×4 image, any YUV420 format contains 16 Y values, 4 U values, and 4 V values; only the arrangement of Y, U, and V changes. In I420, for instance, the byte order is YYYYYYYYYYYYYYYYUUUUVVVV. So YUV420 is a family of formats, and "YUV420" alone does not fully determine the storage order of the color data.
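The 4×4 example can be checked with a tiny sketch (Java; the helper name is mine) that emits the I420 component order:

```java
public class I420Layout {
    // Builds the plane order of an I420 frame as a string of component tags:
    // all Y first, then all U, then all V, each chroma plane a quarter the size
    static String layout(int width, int height) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < width * height; i++) sb.append('Y');
        for (int i = 0; i < width * height / 4; i++) sb.append('U');
        for (int i = 0; i < width * height / 4; i++) sb.append('V');
        return sb.toString();
    }

    public static void main(String[] args) {
        // 16 Y, 4 U, 4 V for a 4x4 image
        System.out.println(layout(4, 4)); // YYYYYYYYYYYYYYYYUUUUVVVV
    }
}
```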

PlaneProxy/Plane

The Y, U, and V components are stored in three Plane objects, obtained as an array via getPlanes(). A Plane is essentially a wrapper around a ByteBuffer.

The Image class guarantees that planes[0] is Y, planes[1] is U, and planes[2] is V. For planes[0], the Y data is guaranteed to be stored contiguously with no U or V bytes interleaved, so we can always read all the Y values in one pass.

However, for UV data, the following two situations may exist:

1. planes[1] = {UUUU...}, planes[2] = {VVVV...}   // planar, I420-style
2. planes[1] = {UVUV...}, planes[2] = {VUVU...}   // interleaved

PixelStride

So when we read the data, we need another field in Plane to determine how to extract the corresponding U or V values.

val pixelStride = plane.pixelStride

Based on this property we can determine how the data is stored. If we want a byte[] in I420 layout, then per the YUV420 math above, the Y data length is width * height, and the U and V data are each width / 2 * height / 2.

val planes = image.planes
// planes[0] is Y, planes[1] is U
val yPixelStride = planes[0].pixelStride
val uPixelStride = planes[1].pixelStride
Log.e("xiao", "yPixelStride: $yPixelStride uPixelStride: $uPixelStride")

yPixelStride: 1 uPixelStride: 2

On a Mi MIX 2, planes[0] has a pixelStride of 1 and planes[1] has a pixelStride of 2, indicating that the UV data is stored interleaved (UVUV).

// Extract the U data from planes[1]
val u = ByteArray(image.width / 2 * image.height / 2)
val pixelStride = planes[1].pixelStride
if (pixelStride == 2) {
    val uBuffer = planes[1].buffer
    var i = 0
    while (uBuffer.remaining() > 0) {
        u[i++] = uBuffer.get()       // take the U byte
        if (uBuffer.remaining() > 0) {
            // drop the interleaved V byte; the V data can be read from planes[2] instead
            uBuffer.get()
        }
    }
}

But if you use the code above to extract YUV data, you may be surprised to find that it does not work for every width and height (resolution) you set. What did we miss?

In Plane we have already used getBuffer() and getPixelStride(), but there is one method we haven't used yet: getRowStride().

RowStride

RowStride is the row stride, i.e. the distance in bytes between the starts of consecutive rows. For the Y data, the row stride may be:

  • equal to the width;
  • greater than the width.

Take a 4×4 I420 image as an example; its Y plane is a 4×4 grid of bytes.

If the rowStride equals the width, we can read the Y data directly from planes[0].getBuffer() without any problems.

But if the rowStride is greater than the width — for example, if each row of our 4×4 I420 image must be aligned to 8 bytes, the rowStride is 8 rather than 4 (the width) — then padding bytes are appended to the end of each row:

Get Y data

val planes = image.planes
val size = image.width * image.height * 3 / 2
val yuv420 = ByteBuffer.allocate(size)

/** Y component: planes[0] */
// pixelStride is 1 for the Y data
// rowStride may equal the width (e.g. 640) or exceed it (e.g. 650) when padding is added at the end of each row
val plane = planes[0]
val rowStride = plane.rowStride
val buffer = plane.buffer
val row = ByteArray(image.width)
// holds the invalid padding bytes at the end of each row
val skipRow = ByteArray(rowStride - image.width)
for (i in 0 until image.height) {
    buffer.get(row)
    yuv420.put(row)
    if (i < image.height - 1) {
        // the last row may carry no padding, so only skip padding on the rows before it
        buffer.get(skipRow)
    }
}
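The same stride-stripping idea can be exercised away from the Camera2 API with a synthetic buffer (a sketch; the helper name and test data are mine). Like the logic above, it assumes the last row may carry no trailing padding:

```java
import java.util.Arrays;

public class StrideStrip {
    // Copies `width` valid bytes per row out of a buffer whose rows start
    // `rowStride` bytes apart; the last row may omit the trailing padding.
    static byte[] strip(byte[] src, int rowStride, int width, int height) {
        byte[] dst = new byte[width * height];
        for (int row = 0; row < height; row++) {
            System.arraycopy(src, row * rowStride, dst, row * width, width);
        }
        return dst;
    }

    public static void main(String[] args) {
        // A 4x2 "plane" padded to rowStride 8; the last row has no padding
        byte[] padded = {
            1, 2, 3, 4, 0, 0, 0, 0,
            5, 6, 7, 8
        };
        System.out.println(Arrays.toString(strip(padded, 8, 4, 2)));
        // [1, 2, 3, 4, 5, 6, 7, 8]
    }
}
```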

For the U and V data, the corresponding row stride may be:

  1. equal to the width;
  2. greater than the width;
  3. equal to width / 2;
  4. greater than width / 2.

Equal to the width

This means that planes[1] contains not only the U data but also the V data, with pixelStride == 2:

U V U V
U V U V

And the V data, planes[2], is:

V U V U
V U V U

Greater than the width

As with the Y data, the rowStride may exceed the width because of byte alignment. As in the previous case, planes[1] contains both U and V data, with pixelStride == 2:

U V U V 0 0 0 0
U V U V          (the last row carries no padding)

And planes[2]:

V U V U 0 0 0 0
V U V U          (the last row carries no padding)

Equal to width / 2

When the rowStride of the U data equals width / 2, planes[1] contains only U data, with pixelStride == 1. planes[1] followed by planes[2] looks like:

U U
U U
V V
V V

planes[1].getBuffer() then yields the complete U data directly.

Greater than width / 2

Similarly, planes[1] contains only U data, but as with the Y data there may be padding bytes, and pixelStride == 1. planes[1] followed by planes[2] looks like:

U U 0 0 0 0 0 0
U U              (the last row carries no padding)
V V 0 0 0 0 0 0
V V              (the last row carries no padding)

for (i in 1..2) {
    // process the UV data in planes[1] / planes[2]
    val plane = planes[i]
    // pixelStride: 1 = planar (I420), 2 = interleaved (UVUV)
    val pixelStride = plane.pixelStride
    // rowStride may exceed the valid data per row because of alignment
    val rowStride = plane.rowStride
    val buffer = plane.buffer
    val uvWidth = image.width / 2
    val uvHeight = image.height / 2
    for (j in 0 until uvHeight) {
        for (k in 0 until rowStride) {
            // the last row may carry no padding, so stop at the end of its valid data
            if (j == uvHeight - 1) {
                if (pixelStride == 1) {
                    if (k >= uvWidth) break
                } else if (pixelStride == 2) {
                    if (k >= image.width - 1) break
                }
            }
            val b: Byte = buffer.get()
            if (pixelStride == 2) {
                // interleaved: the wanted component sits at even offsets 0, 2, 4, 6, ...
                if (k < image.width && k % 2 == 0) {
                    yuv420.put(b)
                }
            } else if (pixelStride == 1) {
                if (k < uvWidth) {
                    yuv420.put(b)
                }
            }
        }
    }
}

YUV format

Common YUV formats include YUY2, YUYV, YVYU, UYVY, AYUV, Y41P, Y411, Y211, IF09, IYUV, YV12, YVU9, YUV411, YUV420, and so on. On Android, the most common is YUV420, which comes in two flavors: YUV420P and YUV420SP. Let's get to know them.

libx264 defines the various YUV formats as follows:

#define X264_CSP_I400 0x0001 /* monochrome 4:0:0 */
#define X264_CSP_I420 0x0002 /* yuv 4:2:0 planar */
#define X264_CSP_YV12 0x0003 /* yvu 4:2:0 planar */
#define X264_CSP_NV12 0x0004 /* yuv 4:2:0, with one y plane and one packed u+v */
#define X264_CSP_NV21 0x0005 /* yuv 4:2:0, with one y plane and one packed v+u */
#define X264_CSP_I422 0x0006 /* yuv 4:2:2 planar */
#define X264_CSP_YV16 0x0007 /* yvu 4:2:2 planar */
#define X264_CSP_NV16 0x0008 /* yuv 4:2:2, with one y plane and one packed u+v */
#define X264_CSP_YUYV 0x0009 /* yuyv 4:2:2 packed */
#define X264_CSP_UYVY 0x000a /* uyvy 4:2:2 packed */
#define X264_CSP_V210 0x000b /* 10-bit yuv 4:2:2 packed in 32 */
#define X264_CSP_I444 0x000c /* yuv 4:4:4 planar */
#define X264_CSP_YV24 0x000d /* yvu 4:4:4 planar */

YUV420P

YUV420P is the planar mode: Y, U, and V live in three separate planes. It is the standard 4:2:0 YUV format.

The actual byte stream is laid out row by row, left to right.

YUV420P is divided into YU12 and YV12.

YU12 format

On Android it is also called the I420 format: all Y values come first, then all U values, and finally all V values. For example, a 6×4 image occupies 6×4×3/2 = 36 bytes. For clarity, the actual byte[] stream is shown below with line breaks:

YYYYYY
YYYYYY
YYYYYY
YYYYYY
UUUUUU
VVVVVV
YV12 format

YV12 has the same structure as YU12, but with all Y values first, then all V values, and finally all U values. For example, a 6×4 image occupies 6×4×3/2 = 36 bytes:

YYYYYY
YYYYYY
YYYYYY
YYYYYY
VVVVVV
UUUUUU

YUV420SP

YUV420SP also stores Y in its own plane, but U and V share a second plane in which they are stored alternately (UV or VU), rather than occupying three separate planes. It comes in two formats: NV21 and NV12.

The Android Camera documentation strongly recommends NV21 and YV12 because all Camera devices support them, and the Camera's default YUV output format is NV21. In Camera2, however, the recommended format is YUV_420_888.

NV21 format

With the old Android Camera API, the preview frames collected from the camera are NV21 by default.

NV21 stores all Y values first, then V and U alternately: YYYY...VUVUVU. For example, a 6×4 image occupies 6×4×3/2 = 36 bytes:

YYYYYY
YYYYYY
YYYYYY
YYYYYY
VUVUVU
VUVUVU
NV12 format

NV12 stores all Y values first, then U and V alternately: YYYY...UVUVUV. For example, a 6×4 image occupies 6×4×3/2 = 36 bytes:

YYYYYY
YYYYYY
YYYYYY
YYYYYY
UVUVUV
UVUVUV
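To make the difference between the two layouts concrete, here is a sketch (the helper name is mine) that converts an NV21 frame to I420 by de-interleaving the VU plane:

```java
import java.util.Arrays;

public class Nv21ToI420 {
    // NV21: Y plane, then interleaved V,U pairs.
    // I420: Y plane, then all U, then all V.
    static byte[] convert(byte[] nv21, int width, int height) {
        int ySize = width * height;
        int uvCount = ySize / 4;  // number of U samples (and of V samples)
        byte[] i420 = new byte[ySize * 3 / 2];
        System.arraycopy(nv21, 0, i420, 0, ySize);           // Y plane is identical
        for (int i = 0; i < uvCount; i++) {
            i420[ySize + i] = nv21[ySize + 2 * i + 1];       // U: second byte of each VU pair
            i420[ySize + uvCount + i] = nv21[ySize + 2 * i]; // V: first byte of each VU pair
        }
        return i420;
    }

    public static void main(String[] args) {
        // 2x2 frame: 4 Y values, then one V,U pair
        byte[] nv21 = {10, 11, 12, 13, /*V*/ 30, /*U*/ 20};
        System.out.println(Arrays.toString(convert(nv21, 2, 2)));
        // [10, 11, 12, 13, 20, 30]
    }
}
```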

That covers YUV420P and YUV420SP, the formats most common on Android. When working with YUV data we usually need to transform it, for example rotating camera frames or converting from one format to another.