Contents

  1. FFmpeg decoding flow chart and key data structures
  2. Decoding MP4 into raw YUV video data with FFmpeg
  3. Problems encountered
  4. References
  5. Takeaways

1. FFmpeg decoding flow chart and key data structures

FFmpeg decoding involves many functions and structures, and it is easy to get lost among them. Let's first build an overall picture of the decoding process by drawing a flow chart:

1.1 The decoding process

  1. avformat_open_input opens the media file
  2. avformat_find_stream_info initializes the AVFormatContext
  3. Find the index of the video stream
  4. avcodec_find_decoder finds the decoder matching the codec_id of the video stream
  5. avcodec_open2 initializes the AVCodecContext with the given AVCodec
  6. Initialize the output file and the AVPacket and AVFrame structures
  7. av_read_frame reads the file frame by frame
  8. avcodec_send_packet
  9. avcodec_receive_frame
  10. Format conversion, writing each frame to the YUV file
  11. OpenGL rendering
  12. Release resources

1.2 Key Functions

Let's look at the key functions in the decoding process.

1. av_register_all must be called before using FFmpeg's muxers/demuxers or codecs in versions 3.x and earlier. Since 4.x, FFmpeg handles registration internally, so this call can be omitted.

2. avformat_open_input

```c
int avformat_open_input(AVFormatContext **ps, const char *url,
                        AVInputFormat *fmt, AVDictionary **options);
```

(The older av_open_input_file, with its AVFormatParameters argument, is deprecated and was replaced by this function.)

Opening a media file as input does not open the codecs; only the header information of the file is read.

3. avformat_find_stream_info

Reads packets of a media file to get stream information, filling in the multimedia details that avformat_open_input alone does not provide.

```c
int avformat_find_stream_info(AVFormatContext *ic, AVDictionary **options);
```

4. avcodec_find_decoder

Finds a registered decoder with a matching codec ID.

```c
AVCodec *avcodec_find_decoder(enum AVCodecID id);
```

5. avcodec_open2

Initializes the AVCodecContext to use the given AVCodec.

```c
int avcodec_open2(AVCodecContext *avctx, const AVCodec *codec, AVDictionary **options);
```

6. av_read_frame

Returns the next frame of a stream: 0 if OK, < 0 on error or end of file.

```c
int av_read_frame(AVFormatContext *s, AVPacket *pkt);
```

Reads one frame of compressed data into the AVPacket.

7. avcodec_send_packet

Supplies raw packet data as input to a decoder: returns 0 on success, otherwise a negative error code.

```c
int avcodec_send_packet(AVCodecContext *avctx, const AVPacket *avpkt);
```

Sends one compressed AVPacket to the decoder.

8. avcodec_receive_frame

Returns decoded output data from a decoder: 0 on success.

```c
int avcodec_receive_frame(AVCodecContext *avctx, AVFrame *frame);
```

Receives one decoded AVFrame from the decoder.

9. sws_scale

```c
int sws_scale(struct SwsContext *c, const uint8_t *const srcSlice[],
              const int srcStride[], int srcSliceY, int srcSliceH,
              uint8_t *const dst[], const int dstStride[]);
```

After decoding, the YUV pixel data is stored in data[0]–data[2] of the AVFrame, but the pixels are not stored contiguously: each row of valid pixels may be followed by padding bytes (the stride in linesize is larger than the width). sws_scale copies the planes into a tightly packed buffer, removing the padding; without this step the output shows a garbled picture.

10. Resource release related functions

```c
av_packet_unref(packet);
sws_freeContext(img_convert_ctx);
fclose(pYUVFile);
av_frame_free(&pFrameYUV);
av_frame_free(&pFrame);
avcodec_close(pCodecContext);
avformat_close_input(&avFormatContext);
```

1.3 Key structures

The key structures include AVFormatContext, AVStream, AVCodecContext, AVCodec, AVCodecParameters, AVPacket and AVFrame.

AVFormatContext and AVInputFormat are structures related to the container (encapsulation) format.

AVStream, AVCodecContext and AVCodec are structures related to the codec.

AVPacket holds compressed data, while AVFrame holds decoded raw data.

1.4 Supplementary Knowledge

1. The ## token-pasting operator in #define

```c
#define Conn(x, y) x##y
int n = Conn(123, 456); // n == 123456
```

2. How to open a file

```c
FILE *fp = fopen("info.txt", "wb+");
// ... write with fprintf() or fwrite() ...
fclose(fp);
```

The file-open modes are as follows:

- r+: opens a file for reading and writing; the file must exist.
- rb+: opens a binary file for reading and writing.
- rt+: opens a text file for reading and writing.
- w: opens a file for writing only; if it exists it is truncated to length 0, if not it is created.
- w+: opens a file for reading and writing; if it exists it is truncated to length 0, if not it is created.
- a: opens a write-only file in append mode.
- a+: opens a read/write file in append mode; the file is created if it does not exist, and written data is appended to the end.
- wb: opens or creates a binary file for writing only.
- wb+: opens or creates a binary file for reading and writing.
- at+: opens a text file, allowing data to be read or appended to the end.
- ab+: opens a binary file, allowing data to be read or appended to the end.

3. YUV data types

We output the H264 bitstream before decoding and the YUV data after decoding. The output H264 stream can be inspected with Elecard StreamEye Tools.

The process of video display is the process of “drawing” pixel data on the screen. To display YUV, for example, is to “draw” the YUV in the window of the system.

- YUV 4:4:4 sampling: each Y has its own set of UV components.
- YUV 4:2:2 sampling: every two Y share one set of UV components.
- YUV 4:2:0 sampling: every four Y share one set of UV components.

YUV420P stores the Y, U and V components as separate planes and comes in two variants, I420 and YV12, which differ only in the order of the U and V planes: in I420 the U plane follows the Y plane and the V plane comes last (YUV), while YV12 is the opposite (YVU). YUV420SP keeps the Y plane but packs U and V interleaved, i.e. NV12; NV21 is similar, differing only in the UV order.

- I420: YYYYYYYY UU VV => YUV420P (the most common)
- YV12: YYYYYYYY VV UU => YUV420P
- NV12: YYYYYYYY UVUV => YUV420SP
- NV21: YYYYYYYY VUVU => YUV420SP

2. Decoding MP4 into raw YUV data with FFmpeg

From the previous section we learned the FFmpeg decoding process and the key structures. In this section we put it into practice.

Specific steps and code implementation are as follows:

```cpp
#include <jni.h>
#include <string>
#include <ctime>

extern "C" {
#include "include/libavcodec/avcodec.h"
#include "include/libavformat/avformat.h"
#include "include/log.h"
#include <libswscale/swscale.h>
#include <libavutil/imgutils.h>
}

extern "C"
JNIEXPORT jint JNICALL
Java_android_spport_mylibrary2_Demo_decodeVideo(JNIEnv *env, jobject thiz,
                                                jstring inputPath, jstring outPath) {
    // 1. Allocate the AVFormatContext; remember to release it
    AVFormatContext *avFormatContext = avformat_alloc_context();
    const char *url = env->GetStringUTFChars(inputPath, 0);
    int result = avformat_open_input(&avFormatContext, url, NULL, NULL);
    if (result != 0) {
        LOGE("open input error url=%s, result=%d", url, result);
        return -1;
    }
    // 2. Read stream information
    if (avformat_find_stream_info(avFormatContext, NULL) < 0) {
        LOGE("find stream error");
        return -1;
    }
    // 3. Find the index of the video stream
    int videoIndex = -1;
    for (int i = 0; i < avFormatContext->nb_streams; i++) {
        AVMediaType codecType = avFormatContext->streams[i]->codecpar->codec_type;
        LOGI("avcodec type %d", codecType);
        if (AVMEDIA_TYPE_VIDEO == codecType) {
            videoIndex = i;
            break;
        }
    }
    if (videoIndex == -1) {
        LOGE("not find a video stream");
        return -1;
    }
    // 4. Find the decoder matching the stream's codec_id
    AVCodecParameters *pCodecParameters = avFormatContext->streams[videoIndex]->codecpar;
    AVCodec *pCodec = avcodec_find_decoder(pCodecParameters->codec_id);
    if (pCodec == NULL) {
        LOGE("Couldn't find Codec");
        return -1;
    }
    // 5. Open the decoder
    AVCodecContext *pCodecContext = avFormatContext->streams[videoIndex]->codec;
    int openResult = avcodec_open2(pCodecContext, pCodec, NULL);
    if (openResult < 0) {
        LOGE("avcodec open2 result %d", openResult);
        return -1;
    }
    // 6. Open the output file and allocate the packet/frame structures
    const char *outPathStr = env->GetStringUTFChars(outPath, NULL);
    FILE *pYUVFile = fopen(outPathStr, "wb+");
    if (pYUVFile == NULL) {
        LOGE("fopen outPut file error");
        return -1;
    }
    auto *packet = (AVPacket *) av_malloc(sizeof(AVPacket));
    AVFrame *pFrame = av_frame_alloc();    // receives the decoded frame
    AVFrame *pFrameYUV = av_frame_alloc(); // holds the converted YUV420P frame
    uint8_t *out_buffer = (unsigned char *) av_malloc(
            av_image_get_buffer_size(AV_PIX_FMT_YUV420P, pCodecContext->width,
                                     pCodecContext->height, 1));
    av_image_fill_arrays(pFrameYUV->data, pFrameYUV->linesize, out_buffer,
                         AV_PIX_FMT_YUV420P, pCodecContext->width,
                         pCodecContext->height, 1);
    struct SwsContext *img_convert_ctx = sws_getContext(
            pCodecContext->width, pCodecContext->height, pCodecContext->pix_fmt,
            pCodecContext->width, pCodecContext->height, AV_PIX_FMT_YUV420P,
            SWS_BICUBIC, NULL, NULL, NULL);
    int readPackCount = -1;
    int frame_cnt = 0;
    clock_t startTime = clock();
    // 7. Read frame by frame
    while ((readPackCount = av_read_frame(avFormatContext, packet)) >= 0) {
        LOGI("read frame count is %d", readPackCount);
        if (packet->stream_index == videoIndex) {
            // 8. Send the compressed packet to the decoder
            int sendPacket = avcodec_send_packet(pCodecContext, packet);
            // return 0 on success, otherwise negative error code
            if (sendPacket != 0) {
                LOGE("avcodec send packet error %d", sendPacket);
                continue;
            }
            // 9. Receive the decoded frame; 0 means a frame was returned
            int receiveFrame = avcodec_receive_frame(pCodecContext, pFrame);
            if (receiveFrame != 0) {
                LOGE("avcodec_receive_frame error %d", receiveFrame);
                continue;
            }
            // 10. Convert to tightly packed YUV420P and write to the file
            sws_scale(img_convert_ctx, (const uint8_t *const *) pFrame->data,
                      pFrame->linesize, 0, pCodecContext->height,
                      pFrameYUV->data, pFrameYUV->linesize);
            int y_size = pCodecParameters->width * pCodecParameters->height;
            // YUV420P layout: Y plane, then U, then V
            fwrite(pFrameYUV->data[0], 1, y_size, pYUVFile);     // Y
            fwrite(pFrameYUV->data[1], 1, y_size / 4, pYUVFile); // U
            fwrite(pFrameYUV->data[2], 1, y_size / 4, pYUVFile); // V
            char pictypeStr[10] = {0};
            switch (pFrame->pict_type) {
                case AV_PICTURE_TYPE_I: sprintf(pictypeStr, "I"); break;
                case AV_PICTURE_TYPE_P: sprintf(pictypeStr, "P"); break;
                case AV_PICTURE_TYPE_B: sprintf(pictypeStr, "B"); break;
                default: break;
            }
            LOGI("Frame index %5d. Type %s", frame_cnt, pictypeStr);
            frame_cnt++;
        }
        av_packet_unref(packet); // release the packet data read by av_read_frame
    }
    LOGI("frame count is %d", frame_cnt);
    clock_t endTime = clock();
    LOGI("decode video use Time %ld", (endTime - startTime));
    // 12. Release resources
    sws_freeContext(img_convert_ctx);
    fclose(pYUVFile);
    av_frame_free(&pFrameYUV);
    av_frame_free(&pFrame);
    avcodec_close(pCodecContext);
    avformat_close_input(&avFormatContext);
    return 0;
}
```

The decoded data can be played with ffplay. Pay attention to the parameter settings, such as pixel format and resolution.

E.g.:

```shell
ffplay /Users/yabin/Desktop/tmp/ffmpeg/output8.yuv -pix_fmt yuv420p -s 784x480
```

The code has been uploaded to github github.com/ayyb1988/ff…

3. Problems encountered

  1. avformat_open_input returns -13

Cause: missing read/write permission. -13 corresponds to AVERROR(EACCES), a permission-related error.

Add the required permissions to AndroidManifest.xml and also request them dynamically in code.
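The permissions in question are presumably the external-storage ones; a sketch of the manifest entries (your app may need different ones depending on where the input and output files live):

```xml
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
```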
  2. The exported YUV file looks garbled when played with ffplay or a YUV player

```
Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 784x480, 338 kb/s, 25 fps, 25 tbr, 16k tbn, 50 tbc (default)
```

```shell
ffplay /Users/yabin/Desktop/tmp/ffmpeg/output.yuv -pix_fmt yuv420p -s 784x480
```

Cause: the pixel format and resolution of the YUV data were not specified correctly.

```c
uint8_t *out_buffer = (unsigned char *) av_malloc(
        av_image_get_buffer_size(AV_PIX_FMT_YUV420P, pCodecContext->width,
                                 pCodecContext->height, 1));
av_image_fill_arrays(pFrameYUV->data, pFrameYUV->linesize, out_buffer,
                     AV_PIX_FMT_YUV420P, pCodecContext->width,
                     pCodecContext->height, 1);
struct SwsContext *img_convert_ctx = sws_getContext(
        pCodecContext->width, pCodecContext->height, pCodecContext->pix_fmt,
        pCodecContext->width, pCodecContext->height, AV_PIX_FMT_YUV420P,
        SWS_BICUBIC, NULL, NULL, NULL);
...
sws_scale(img_convert_ctx, (const uint8_t *const *) pFrame->data,
          pFrame->linesize, 0, pCodecContext->height,
          pFrameYUV->data, pFrameYUV->linesize);
```

  3. If avcodec_receive_frame returns a value other than 0, skip that frame (continue), otherwise green frames appear

```c
int sendPacket = avcodec_send_packet(pCodecContext, packet);
// return 0 on success, otherwise negative error code
if (sendPacket != 0) {
    LOGE("avcodec send packet error %d", sendPacket);
    continue;
}
// 0: success, a frame was returned
int receiveFrame = avcodec_receive_frame(pCodecContext, pFrame);
if (receiveFrame != 0) {
    LOGE("avcodec_receive_frame error %d", receiveFrame);
    continue;
}
```

4. References

  1. Advanced Audio and Video Development
  2. Relationships between the most critical structures in FFmpeg
  3. Introduction to FFmpeg functions
  4. The simplest FFmpeg + SDL video player in 100 lines of code (SDL1.x)
  5. The simplest FFmpeg-based mobile example: Android video decoder, single-library version
  6. YUV420 data format
  7. FFmpeg FLV/MP4 tips
  8. FFmpeg codec handling 1: brief introduction to the transcoding process
  9. Initialization and destruction of common structures (AVFormatContext, AVFrame, etc.)

The test video comes from "FFmpeg codec handling 1: brief introduction to the transcoding process". Download the test file (right-click, save as): tnmil2.flv

5. Takeaways

  1. Understood the FFmpeg decoding process
  2. Understood the key FFmpeg structures and the relationships between them
  3. Decoded MP4 into raw YUV video data
  4. Analyzed and solved the garbled-screen and green-frame problems

We will practice MediaCodec decoding and YUV rendering and playback in the following chapters.

Thank you for reading

Next we will practice decoding audio with FFmpeg. Welcome to follow the official account "Audio and Video Development Journey" to learn and grow together.

Comments and exchanges are welcome.