This is an original article; please credit the original source when reprinting. Author's blog: blog.csdn.net/qq21497936/… Readers: knowledge is boundless but one's energy is limited; either change the requirement, find a professional, or research it yourself.

Development technology collection (including Qt practical techniques, Raspberry Pi, 3D, OpenCV, OpenGL, FFmpeg, OSG, microcontrollers, software-hardware integration, etc.), continuously updated… (click the portal)

FFmpeg and SDL development column (click the portal)

Previous: "FFmpeg Development Notes (8): FFmpeg decodes audio and plays it synchronized with SDL"    Next: stay tuned


Preface

FFmpeg plays RTSP network streams and camera streams.


Demo

Playing a LAN RTSP 1080P Hikvision camera with FFmpeg: about 0.2 s latency, mosaic artifacts present

Playing a network RTSP file stream with FFmpeg: occasional stutter, no noticeable latency

Playing a LAN RTSP 1080P Hikvision camera with VLC: about 2 s latency, no mosaic

Playing a network RTSP file stream with VLC: no stutter, no noticeable latency


FFmpeg Basic playback process

FFmpeg decoding process

The basic flow of decoding with the new FFmpeg API is as follows:

Step 1: Register:

Any use of the corresponding FFmpeg libraries requires registration first; you can register individual components or register everything.
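
A minimal sketch, taken from the demo at the end of this article (the network initialization is required for RTSP input):

// Register all containers and codecs (registering only sub-items is also possible)
av_register_all();
// Initialize networking, needed for RTSP/HTTP streams
avformat_network_init();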

Step 2: Open the file:

Open the file by name and obtain the FFmpeg global context.
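
As in the demo below (avformat_open_input returns 0 on success):

pAVFormatContext = avformat_alloc_context();
ret = avformat_open_input(&pAVFormatContext, rtspUrl.toUtf8().data(), 0, 0);
if(ret)
{
    LOG << "Failed";
    return;
}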

Step 3: Detect stream information:

The stream information must be probed to obtain the stream's encoding format. If it is not probed, the encoding type reported by the stream may be empty; the original format is then unknown during subsequent data conversion, which causes errors.
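
As done in the demo below:

// Probe the stream information; without this the codec parameters may be empty
ret = avformat_find_stream_info(pAVFormatContext, 0);
if(ret < 0)
{
    LOG << "Failed to avformat_find_stream_info(pAVFormatContext, 0)";
    return;
}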

Step 4: Find the corresponding decoder

Find a decoder according to the stream's format. Whether software or hardware decoding is used is determined here, but pay special attention to whether the hardware actually supports it: you need to look up the identifier of the local hardware decoder and check that it is supported. The common approach is to enumerate all decoders capable of decoding the stream and pick the hardware one if present (hardware decoding is not discussed further here; a later article will study it). (Note: for decoding, find a decoder; for encoding, find an encoder. The two functions are different; do not mix them up, or the stream may open but yield wrong data.)
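
A minimal sketch of the software-decoder lookup used in the demo below (hardware-decoder enumeration is left for a later article):

// Find a decoder (avcodec_find_decoder), not an encoder (avcodec_find_encoder)
pAVCodec = avcodec_find_decoder(pAVCodecContext->codec_id);
if(!pAVCodec)
{
    LOG << "Failed to avcodec_find_decoder(pAVCodecContext->codec_id)";
    return;
}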

Step 5: Open the decoder

When opening the decoder for RTSP playback, some parameters need to be set; in FFmpeg they are set through an AVDictionary. Pass the dictionary in when opening the decoder obtained above.

// set the buffer size
av_dict_set(&pAVDictionary, "buffer_size", "1024000", 0);
// set the timeout to 20s
av_dict_set(&pAVDictionary, "stimeout", "20000000", 0);
// set the maximum delay
av_dict_set(&pAVDictionary, "max_delay", "30000000", 0);
// use tcp as the rtsp transport
av_dict_set(&pAVDictionary, "rtsp_transport", "tcp", 0);
ret = avcodec_open2(pAVCodecContext, pAVCodec, &pAVDictionary);
if(ret)
{
    LOG << "Failed to avcodec_open2(pAVCodecContext, pAVCodec, pAVDictionary)";
    return;
}

Step 6: Allocate the scaling/data-format-conversion structure

Note that nearly all decoded data is in a YUV-family format, while the data we display is RGB or other color-space data. The conversion structure therefore describes the formats before and after conversion and provides the basis for the subsequent conversion function. It is a key and commonly used structure.
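
A minimal sketch from the demo below, converting from the decoder's pixel format to RGBA at the same width and height:

pSwsContext = sws_getContext(pAVCodecContext->width, pAVCodecContext->height,
                             pAVCodecContext->pix_fmt,            // source format, e.g. YUV420P
                             pAVCodecContext->width, pAVCodecContext->height,
                             AV_PIX_FMT_RGBA,                     // destination format
                             SWS_FAST_BILINEAR, 0, 0, 0);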

Step 7: Allocate a buffer

Allocate a buffer outBuffer and bind it to the data of our target frame, e.g. RGB data. A QAVFrame stores its data in a format-specific layout with fixed rules, so fill the frame with outBuffer (the one-frame buffer you allocated in the target format); the fill determines the order in which the data is stored. For example, when the decoded and converted data is RGB888, using the frame's data pointer directly is wrong, while using outBuffer is correct, so FFmpeg's fill function presumably performs some mapping here. Then enter the decode loop.
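
From the demo below: compute the target-frame size, allocate the buffer, and let the fill function bind the frame's data pointers to it:

numBytes = avpicture_get_size(AV_PIX_FMT_RGBA,
                              pAVCodecContext->width, pAVCodecContext->height);
outBuffer = (uchar *)av_malloc(numBytes);
// the data pointers of pAVFrameRGB32 now point into outBuffer,
// laid out according to the RGBA format
avpicture_fill((AVPicture *)pAVFrameRGB32, outBuffer, AV_PIX_FMT_RGBA,
               pAVCodecContext->width, pAVCodecContext->height);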

Step 8: Send a packet to the decoder (steps 8 and 9 together replace what was a single step in the old API)

Read one encapsulated packet, check which stream the packet's data belongs to, and send it to the decoder for decoding.
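
A minimal sketch from the demo below: read a packet, check its stream index, and send it to the decoder:

while(av_read_frame(pAVFormatContext, pAVPacket) >= 0)
{
    if(pAVPacket->stream_index == videoIndex)
    {
        ret = avcodec_send_packet(pAVCodecContext, pAVPacket);
        if(ret)
        {
            break;
        }
        // step 9: fetch the decoded frames here
    }
}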

Step 9: Retrieve decoded data from the decoder cache

One packet may contain several groups of data. The old API returned only the first group; now that the new API separates sending and receiving, data can be fetched repeatedly until no more can be obtained.
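
As in the demo below: keep calling avcodec_receive_frame until it fails; each successful call yields one decoded frame:

while(!avcodec_receive_frame(pAVCodecContext, pAVFrame))
{
    // one decoded YUV frame per iteration: convert/display it here
}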

Step 10: Process the data yourself

Take the raw data and do whatever you want with it. The loop continues until no more packets can be read and no frame can be fetched, which indicates that decoding of the file is complete. The frame rate must be controlled by the loop itself; here frames are simply fetched in the loop, and a delay can be added.
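
In the demo below, "processing" means converting the YUV frame to RGBA with the conversion structure from Step 6 (for display, the demo then hands the YUV data directly to SDL):

sws_scale(pSwsContext,
          (const uint8_t * const *)pAVFrame->data, pAVFrame->linesize,
          0, pAVCodecContext->height,
          pAVFrameRGB32->data, pAVFrameRGB32->linesize);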

Step 11: Release the QAVPacket

This is listed separately because much of the code online, and many developers, either call av_new_packet before entering the decode loop but never call av_free_packet inside it, causing a memory leak; or call av_new_packet before the loop and av_free_packet inside it, so one new corresponds to countless frees, which breaks the one-to-one pairing convention. Checking the source shows that av_read_frame performs av_new_packet automatically, so the packet actually needs only one av_packet_alloc() up front and one av_free_packet() after decoding is done. After this step, return to "Step 8: send a packet to the decoder" until the loop ends.
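
So the correct pairing, as used in this article, is one allocation and one release:

pAVPacket = av_packet_alloc();       // once, before the decode loop
while(av_read_frame(pAVFormatContext, pAVPacket) >= 0)  // re-fills the packet internally
{
    // ... decode ...
}
av_free_packet(pAVPacket);           // once, after decoding is finished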

Step 12: Release the conversion structure

After all decoding is complete, release the corresponding resources following the order of allocation.
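
From the cleanup section of the demo below:

av_free(outBuffer);
sws_freeContext(pSwsContext);
av_frame_free(&pAVFrameRGB32);
av_frame_free(&pAVFrame);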

Step 13: Close the decoder/encoder

Close the previously opened decoder/encoder.
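
As in the demo below:

avcodec_close(pAVCodecContext);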

Step 14: Close the context

After closing the file context, release the previously allocated variables following the order of allocation.
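
As in the demo below (avformat_close_input frees the context and nulls the pointer, so the subsequent free call is a safe no-op):

avformat_close_input(&pAVFormatContext);
avformat_free_context(pAVFormatContext);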


Supplement

When FFmpeg opens RTSP, severe mosaic artifacts and some stuttering appear; the cache size in the file udp.c must be increased, and the change requires recompiling FFmpeg. In actual tests the mosaic artifacts improve after the change, but there is still a gap compared with VLC, and this part needs further optimization. For compilation, refer to "FFmpeg Development Notes (3): Introduction to FFmpeg, Windows compilation and development environment setup".


The Demo source code

void FFmpegManager::testDecodeRtspSyncShow()
{
    QString rtspUrl = "http://devimages.apple.com.edgekey.net/streaming/examples/bipbop_4x3/gear2/prog_index.m3u8";
//    QString rtspUrl = "rtsp://admin:[email protected]:554/h264/ch1/main/av_stream";

    // Pre-declare SDL-related variables
    SDL_Window *pSDLWindow = 0;
    SDL_Renderer *pSDLRenderer = 0;
    SDL_Surface *pSDLSurface = 0;
    SDL_Texture *pSDLTexture = 0;
    SDL_Event event;

    qint64 startTime = 0;                           // playback start time
    int currentFrame = 0;                           // current frame index
    double fps = 0;                                 // frame rate
    double interval = 0;                            // frame interval

    // Pre-declare and allocate ffmpeg-related variables
    AVFormatContext *pAVFormatContext = 0;          // ffmpeg global context; required by all ffmpeg operations
    AVStream *pAVStream = 0;                        // ffmpeg stream information
    AVCodecContext *pAVCodecContext = 0;            // ffmpeg codec context
    AVCodec *pAVCodec = 0;                          // ffmpeg codec (decoder)
    AVPacket *pAVPacket = 0;                        // ffmpeg single-frame data packet
    AVFrame *pAVFrame = 0;                          // ffmpeg single-frame buffer
    AVFrame *pAVFrameRGB32 = 0;                     // single-frame buffer after color-space conversion
    struct SwsContext *pSwsContext = 0;             // ffmpeg data format conversion
    AVDictionary *pAVDictionary = 0;                // ffmpeg dictionary, used to configure decoder options etc.

    int ret = 0;                                    // function return value
    int videoIndex = -1;                            // index of the video stream
    int numBytes = 0;                               // length of the decoded data
    uchar *outBuffer = 0;                           // buffer holding the decoded data

    pAVFormatContext = avformat_alloc_context();    // allocate
    pAVPacket = av_packet_alloc();                  // allocate
    pAVFrame = av_frame_alloc();                    // allocate
    pAVFrameRGB32 = av_frame_alloc();               // allocate

    if(!pAVFormatContext || !pAVPacket || !pAVFrame || !pAVFrameRGB32)
    {
        LOG << "Failed to alloc";
        return;
    }
    // Step 1: register all containers and codecs (you can also register only one class, e.g. containers or codecs)
    av_register_all();
    avformat_network_init();
    // Step 2: open the file (ffmpeg returns 0 on success)
    LOG << "Opening:" << rtspUrl;
    ret = avformat_open_input(&pAVFormatContext, rtspUrl.toUtf8().data(), 0, 0);
    if(ret)
    {
        LOG << "Failed";
        return;
    }
    // Step 3: probe the stream information
    ret = avformat_find_stream_info(pAVFormatContext, 0);
    if(ret < 0)
    {
        LOG << "Failed to avformat_find_stream_info(pAVFormatContext, 0)";
        return;
    }
    // Step 4: extract the stream information and find the video stream
    for(int index = 0; index < pAVFormatContext->nb_streams; index++)
    {
        pAVCodecContext = pAVFormatContext->streams[index]->codec;
        pAVStream = pAVFormatContext->streams[index];
        switch (pAVCodecContext->codec_type)
        {
        case AVMEDIA_TYPE_UNKNOWN:
            LOG << "Stream index:" << index << "type:" << "AVMEDIA_TYPE_UNKNOWN";
            break;
        case AVMEDIA_TYPE_VIDEO:
            LOG << "Stream index:" << index << "type:" << "AVMEDIA_TYPE_VIDEO";
            videoIndex = index;
            LOG;
            break;
        case AVMEDIA_TYPE_AUDIO:
            LOG << "Stream index:" << index << "type:" << "AVMEDIA_TYPE_AUDIO";
            break;
        case AVMEDIA_TYPE_DATA:
            LOG << "Stream index:" << index << "type:" << "AVMEDIA_TYPE_DATA";
            break;
        case AVMEDIA_TYPE_SUBTITLE:
            LOG << "Stream index:" << index << "type:" << "AVMEDIA_TYPE_SUBTITLE";
            break;
        case AVMEDIA_TYPE_ATTACHMENT:
            LOG << "Stream index:" << index << "type:" << "AVMEDIA_TYPE_ATTACHMENT";
            break;
        case AVMEDIA_TYPE_NB:
            LOG << "Stream index:" << index << "type:" << "AVMEDIA_TYPE_NB";
            break;
        default:
            break;
        }
        // the video stream has been found
        if(videoIndex != -1)
        {
            break;
        }
    }

    if(videoIndex == -1 || !pAVCodecContext)
    {
        LOG << "Failed to find video stream";
        return;
    }

    // Step 5: find a decoder for the video stream found
    pAVCodec = avcodec_find_decoder(pAVCodecContext->codec_id);
    if(!pAVCodec)
    {
        LOG << "Fialed to avcodec_find_decoder(pAVCodecContext->codec_id):"
            << pAVCodecContext->codec_id;
        return;
    }
    // Step 6: open the decoder
    // set the buffer size to 1024000 bytes
    av_dict_set(&pAVDictionary, "buffer_size", "1024000", 0);
    // set the timeout to 20s (in microseconds)
    av_dict_set(&pAVDictionary, "stimeout", "20000000", 0);
    // set the maximum delay (in microseconds)
    av_dict_set(&pAVDictionary, "max_delay", "30000000", 0);
    // set the transport protocol: tcp/udp
    av_dict_set(&pAVDictionary, "rtsp_transport", "tcp", 0);
    ret = avcodec_open2(pAVCodecContext, pAVCodec, &pAVDictionary);
    if(ret)
    {
        LOG << "Failed to avcodec_open2(pAVCodecContext, pAVCodec, pAVDictionary)";
        return;
    }

    // Print video parameters (from the codec context)
    LOG << "Bit rate:" << pAVCodecContext->bit_rate;

    LOG << "Size:" << pAVCodecContext->width << "x" << pAVCodecContext->height;
    LOG << "Format:" << pAVCodecContext->pix_fmt;  // AV_PIX_FMT_YUV420P 0
    LOG << "Frame-rate denominator:" << pAVCodecContext->time_base.den;
    LOG << "Frame-rate numerator:" << pAVCodecContext->time_base.num;
    LOG << "Frame-rate denominator:" << pAVStream->avg_frame_rate.den;
    LOG << "Frame-rate numerator:" << pAVStream->avg_frame_rate.num;
    LOG << "Total duration:" << pAVStream->duration / 10000.0 << "s";
    LOG << "Total frames:" << pAVStream->nb_frames;
    // When the total duration is available, compute the frame rate from it (more accurate)
//    fps = pAVStream->nb_frames / (pAVStream->duration / 10000.0);
//    interval = pAVStream->duration / 10.0 / pAVStream->nb_frames;
    // Without a total duration, use the numerator and denominator
    fps = pAVStream->avg_frame_rate.num * 1.0f / pAVStream->avg_frame_rate.den;
    interval = 1 * 1000 / fps;
    LOG << "Average frame rate:" << fps;
    LOG << "Frame interval:" << interval << "ms";
    // Step 7: set up scaling of the raw data, converting to the specified format, width and height
    pSwsContext = sws_getContext(pAVCodecContext->width,
                                 pAVCodecContext->height,
                                 pAVCodecContext->pix_fmt,
                                 pAVCodecContext->width,
                                 pAVCodecContext->height,
                                 AV_PIX_FMT_RGBA,
                                 SWS_FAST_BILINEAR,
                                 0,
                                 0,
                                 0);
    numBytes = avpicture_get_size(AV_PIX_FMT_RGBA,
                                  pAVCodecContext->width,
                                  pAVCodecContext->height);
    outBuffer = (uchar *)av_malloc(numBytes);
    // the data pointers of pAVFrameRGB32 now point into outBuffer
    avpicture_fill((AVPicture *)pAVFrameRGB32,
                   outBuffer,
                   AV_PIX_FMT_RGBA,
                   pAVCodecContext->width,
                   pAVCodecContext->height);

    ret = SDL_Init(SDL_INIT_VIDEO);
    if(ret)
    {
        LOG << "Failed";
        return;
    }
    pSDLWindow = SDL_CreateWindow(rtspUrl.toUtf8().data(),
                                  0,
                                  0,
                                  pAVCodecContext->width,
                                  pAVCodecContext->height,
                                  SDL_WINDOW_OPENGL | SDL_WINDOW_RESIZABLE);
    if(!pSDLWindow)
    {
        LOG << "Failed";
        return;
    }
    pSDLRenderer = SDL_CreateRenderer(pSDLWindow, -1, 0);
    if(!pSDLRenderer)
    {
        LOG << "Failed";
        return;
    }

    startTime = QDateTime::currentDateTime().toMSecsSinceEpoch();
    currentFrame = 0;


    pSDLTexture = SDL_CreateTexture(pSDLRenderer,
//                                  SDL_PIXELFORMAT_IYUV,
                                    SDL_PIXELFORMAT_YV12,
                                    SDL_TEXTUREACCESS_STREAMING,
                                    pAVCodecContext->width,
                                    pAVCodecContext->height);
    if(!pSDLTexture)
    {
        LOG << "Failed";
        return;
    }
    // Step 8: read one packet of data
    while(av_read_frame(pAVFormatContext, pAVPacket) >= 0)
    {
        if(pAVPacket->stream_index == videoIndex)
        {
            // Step 9: send the read packet to the decoder
            ret = avcodec_send_packet(pAVCodecContext, pAVPacket);
            if(ret)
            {
                LOG << "Failed to avcodec_send_packet(pAVCodecContext, pAVPacket) ,ret =" << ret;
                break;
            }
            while(!avcodec_receive_frame(pAVCodecContext, pAVFrame))
            {
                sws_scale(pSwsContext,
                          (const uint8_t * const *)pAVFrame->data,
                          pAVFrame->linesize,
                          0,
                          pAVCodecContext->height,
                          pAVFrameRGB32->data,
                          pAVFrameRGB32->linesize);
                // Format is RGBA = 8:8:8:8
                // rmask should be 0xFF000000, but the colors were wrong; 0x000000FF turned out right
                // gmask     0x00FF0000                 0x0000FF00
                // bmask     0x0000FF00                 0x00FF0000
                // amask     0x000000FF                 0xFF000000
                // ARGB was tested too and was also reversed, while QImage loads it correctly
                // Just marking this spot for now; some setting may be wrong
                qDebug() << __FILE__ << __LINE__  << pSDLTexture;
                SDL_UpdateYUVTexture(pSDLTexture,
                                     NULL,
                                     pAVFrame->data[0], pAVFrame->linesize[0],
                                     pAVFrame->data[1], pAVFrame->linesize[1],
                                     pAVFrame->data[2], pAVFrame->linesize[2]);
                qDebug() << __FILE__ << __LINE__  << pSDLTexture;

                SDL_RenderClear(pSDLRenderer);
                // Copy the texture to the renderer
                SDL_Rect        sdlRect;
                sdlRect.x = 0;
                sdlRect.y = 0;
                sdlRect.w = pAVFrame->width;
                sdlRect.h = pAVFrame->height;
                qDebug() << __FILE__ << __LINE__ << SDL_RenderCopy(pSDLRenderer, pSDLTexture, 0, &sdlRect) << pSDLTexture;
                // Update the renderer display
                SDL_RenderPresent(pSDLRenderer);
                // Handle events
                SDL_PollEvent(&event);
            }
            // Next frame
            currentFrame++;
            while(QDateTime::currentDateTime().toMSecsSinceEpoch() - startTime < currentFrame * interval)
            {
                SDL_Delay(1);
            }
            LOG << "current:" << currentFrame <<"," << time << (QDateTime::currentDateTime().toMSecsSinceEpoch() - startTime);
        }
    }
    LOG << "释放回收资源";
    if(outBuffer)
    {
        av_free(outBuffer);
        outBuffer = 0;
    }
    if(pSwsContext)
    {
        sws_freeContext(pSwsContext);
        pSwsContext = 0;
        LOG << "sws_freeContext(pSwsContext)";
    }
    if(pAVFrameRGB32)
    {
        av_frame_free(&pAVFrameRGB32);
        pAVFrameRGB32 = 0;
        LOG << "av_frame_free(pAVFrameRGB32)";
    }
    if(pAVFrame)
    {
        av_frame_free(&pAVFrame);
        pAVFrame = 0;
        LOG << "av_frame_free(pAVFrame)";
    }
    if(pAVPacket)
    {
        av_free_packet(pAVPacket);
        pAVPacket = 0;
        LOG << "av_free_packet(pAVPacket)";
    }
    if(pAVCodecContext)
    {
        avcodec_close(pAVCodecContext);
        pAVCodecContext = 0;
        LOG << "avcodec_close(pAVCodecContext);";
    }
    if(pAVFormatContext)
    {
        avformat_close_input(&pAVFormatContext);
        avformat_free_context(pAVFormatContext);
        pAVFormatContext = 0;
        LOG << "avformat_free_context(pAVFormatContext)";
    }

    // Destroy the SDL renderer
    SDL_DestroyRenderer(pSDLRenderer);
    // Destroy the SDL window
    SDL_DestroyWindow(pSDLWindow);
    // Quit SDL
    SDL_Quit();
    SDL_Quit();
}


Project template V1.5.0

Corresponding project template V1.5.0: added an RTSP playback Demo using SDL.


Previous: "FFmpeg Development Notes (8): FFmpeg decodes audio and plays it synchronized with SDL"    Next: stay tuned