This is the 21st day of my participation in the August Text Challenge.
Background
I set myself the goal of learning FFmpeg in 2021, and this article is the first step toward achieving it.
The FFmpeg 4.2 Windows x64 libraries and header files used here are available for download:
Download.csdn.net/download/u0…
This article is also a summary and sharing of my own learning, so we start from FFmpeg software decoding and work through the whole path from decoding to drawing, experiencing the fun of hands-on work together. The language used in this article is C++, and the UI part is Qt.
Process analysis
Before we start looking at the code, we must first understand the general flow of FFmpeg software decoding:
av_register_all() is something we have seen a lot in older tutorials. In earlier versions of FFmpeg it had to be called once at startup, but since FFmpeg 4.0 it is no longer required. See GitHub for details: av_register_all() has been deprecated in FFmpeg 4.0.
1. avformat_open_input
Allocates the AVFormatContext, opens the input video data, and probes the video's format. This function contains complex format parsing and probing algorithms, covering the video stream, the audio stream, video stream parameters, audio stream parameters, the video frame index, and so on. In Lei Xiaohua's ("Thor's") words, it is the "soul" of FFmpeg.
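As a minimal sketch of this first step (openInput is just an illustrative helper name, and the path is a placeholder):

```cpp
extern "C" {
#include "libavformat/avformat.h"
}

// Open a media file and probe its container format.
AVFormatContext *openInput(const char *path)
{
    AVFormatContext *fmtCtx = nullptr; // must be NULL (or pre-allocated) before the call
    if (avformat_open_input(&fmtCtx, path, nullptr, nullptr) != 0)
        return nullptr; // on failure the function frees the context itself
    return fmtCtx;
}
```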
2. avformat_find_stream_info
Obtains information about the multimedia streams, including the bit rate, frame rate, and duration. Some early formats and raw stream data carry no index in the header, so this information has to be probed from the data itself. Note that a single video file may contain multiple media streams, for example a video stream, audio streams, and subtitle streams.
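Continuing with the fmtCtx returned by the sketch above, after this call the global properties of the file become available (a fragment, not a complete program):

```cpp
// Probe a few packets so streams whose header carries no index get filled in
if (avformat_find_stream_info(fmtCtx, nullptr) < 0) {
    // probing failed; the file may be unreadable or badly damaged
}
int64_t durationSec = fmtCtx->duration / AV_TIME_BASE; // total duration in seconds
int64_t bitRate     = fmtCtx->bit_rate;                // overall bit rate in bit/s
unsigned nbStreams  = fmtCtx->nb_streams;              // video + audio + subtitle streams
```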
3. av_find_best_stream
Once the container has been demuxed, audio and video need to be processed separately: we have to find the corresponding audio stream and video stream and obtain the stream_index of each.
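A sketch of this lookup, still using the fmtCtx from above. The return value doubles as the stream index:

```cpp
// The return value is the stream index (negative on failure);
// the fifth argument could also receive the matching decoder directly.
int videoIndex = av_find_best_stream(fmtCtx, AVMEDIA_TYPE_VIDEO, -1, -1, nullptr, 0);
int audioIndex = av_find_best_stream(fmtCtx, AVMEDIA_TYPE_AUDIO, -1, -1, nullptr, 0);
if (videoIndex < 0) {
    // AVERROR_STREAM_NOT_FOUND or AVERROR_DECODER_NOT_FOUND
}
```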
4. avcodec_find_decoder(enum AVCodecID id)
“Find a registered decoder with a matching codec ID.” The ID comes from the stream's codecpar field (of type AVCodecParameters), i.e. AVStream->codecpar->codec_id. Reading it from the old AVStream->codec (an AVCodecContext) has been deprecated since FFmpeg 3.4; codecpar is the officially recommended way.
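In code, the lookup goes through the stream's codecpar rather than the deprecated field (a fragment continuing from videoIndex above):

```cpp
// codecpar carries the parameters probed from the container
AVCodecParameters *par = fmtCtx->streams[videoIndex]->codecpar;
AVCodec *decoder = avcodec_find_decoder(par->codec_id);
if (decoder == nullptr) {
    // this FFmpeg build contains no decoder for that codec ID
}
```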
5. avcodec_alloc_context3
Creates an AVCodecContext and allocates space for it.
6. avcodec_parameters_to_context
This function copies the parameters of the stream, i.e. the parameters held in the AVStream, directly into the AVCodecContext, performing a true deep copy of the content. avcodec_parameters_to_context() is the new API that replaces the old avcodec_copy_context().
7. avcodec_open2
Initializes the AVCodecContext with the given AVCodec.
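Steps 5 to 7 together are only a few lines. A sketch using the decoder and par from the fragments above:

```cpp
// 5. Allocate a codec context for the chosen decoder
AVCodecContext *codecCtx = avcodec_alloc_context3(decoder);
// 6. Copy width/height/pix_fmt/extradata... from the stream into the context
if (codecCtx == nullptr || avcodec_parameters_to_context(codecCtx, par) < 0)
    return;
// 7. Open the decoder; the context is now ready to receive packets
if (avcodec_open2(codecCtx, decoder, nullptr) < 0)
    return;
```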
At this point, the initialization of the decoder is complete. Now we can begin the actual decoding operation.
8. av_read_frame
The av_read_frame() function is the newer FFmpeg API that wraps and replaces av_read_packet(). The old function was abandoned because the data it returned might not be complete; av_read_frame() guarantees the integrity of a frame of video data, so every successful read returns the complete compressed data of one frame.
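A typical read loop, continuing with the fmtCtx and videoIndex from the sketches above. Every successful call yields one complete packet that we must unref ourselves:

```cpp
AVPacket pkt;
while (av_read_frame(fmtCtx, &pkt) == 0) { // 0 means one complete packet was read
    if (pkt.stream_index == videoIndex) {
        // hand pkt to the decoder here (next step)
    }
    av_packet_unref(&pkt); // av_read_frame hands us a reference; release it
}
```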
9. avcodec_send_packet
Sends the compressed data to the decoder's background decoding queue. The official documentation describes the packet argument as follows:
It can be NULL (or an AVPacket with data set to NULL and size set to 0); in this case, it is considered a flush packet, which signals the end of the stream. Sending the first flush packet will return success. Subsequent ones are unnecessary and will return AVERROR_EOF. If the decoder still has frames buffered, it will return them after sending a flush packet.
When av_read_frame() can no longer read any data, you need to send one more packet with packet.data = NULL and packet.size = 0, i.e. call avcodec_send_packet() one last time with empty data. This flushes out all the frames FFmpeg still has cached and solves the problem of the last few frames not being decoded.
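A sketch of this drain sequence at end of stream, with codecCtx as above:

```cpp
// Enter drain mode: a NULL packet (or data = NULL, size = 0) acts as the flush packet
avcodec_send_packet(codecCtx, nullptr);

// Pull out every frame the decoder still has buffered
AVFrame *frame = av_frame_alloc();
while (avcodec_receive_frame(codecCtx, frame) == 0) {
    // handle the last few frames here
}
av_frame_free(&frame);
```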
10. avcodec_receive_frame
Reads decoded frame data from the decoder. Once this call succeeds, our frame data is available, stored in an AVFrame. Note that in general one avcodec_send_packet() corresponds to one avcodec_receive_frame(), but one avcodec_send_packet() may also correspond to multiple avcodec_receive_frame() calls. This depends on the specific stream and is common with audio streams, where one AVPacket can correspond to multiple AVFrames. That is why the flowchart above contains two while loops.
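Putting steps 8 to 10 together, a sketch of the two nested while loops mentioned above (fmtCtx, videoIndex, and codecCtx as in the earlier fragments):

```cpp
AVFrame *frame = av_frame_alloc();
AVPacket pkt;
while (av_read_frame(fmtCtx, &pkt) == 0) {            // outer loop: one packet per pass
    if (pkt.stream_index == videoIndex &&
        avcodec_send_packet(codecCtx, &pkt) == 0) {
        // inner loop: one packet may yield zero, one, or several frames
        while (avcodec_receive_frame(codecCtx, frame) == 0) {
            // frame now holds one decoded picture (e.g. YUV420P)
        }
    }
    av_packet_unref(&pkt);
}
av_frame_free(&frame);
```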
Code sample
```cpp
// Header file
#ifndef MAINWINDOW_H
#define MAINWINDOW_H

#include <QMainWindow>
#include <thread>

extern "C" {
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libavutil/pixfmt.h"
#include "libswscale/swscale.h"
#include "libavdevice/avdevice.h"
#include <libavutil/pixdesc.h>
#include <libavutil/hwcontext.h>
#include <libavutil/opt.h>
#include <libavutil/avassert.h>
#include <libavutil/imgutils.h>
}

namespace Ui {
class MainWindow;
}

class MainWindow : public QMainWindow
{
    Q_OBJECT

public:
    explicit MainWindow(QWidget *parent = nullptr);
    ~MainWindow();

    void init();
    void play();

private:
    std::thread m_decodecThread;
    Ui::MainWindow *ui;
    AVFormatContext *pAVFormatCtx = nullptr; // demuxer context
    AVCodecContext *pAVCodecCtx = nullptr;   // decoder context
    SwsContext *pSwsCtx = nullptr;           // pixel-format conversion context
    uint8_t *pRgbBuffer = nullptr;           // buffer backing the RGB frame
    AVPacket packet;
    AVFrame *pAVFrame = nullptr;
    AVFrame *pAVFrameRGB = nullptr;
    int iVideoIndex = -1;
    QImage m_image;
    bool isFinish = false;

    void decodec();

signals:
    void signalDraw();

public slots:
    void slotDraw();

protected:
    void paintEvent(QPaintEvent *event) override;
};

#endif // MAINWINDOW_H
```
```cpp
// CPP file
#include "mainwindow.h"
#include "ui_mainwindow.h"
#include <QDebug>
#include <QPainter>
#include <thread>

MainWindow::MainWindow(QWidget *parent) :
    QMainWindow(parent),
    ui(new Ui::MainWindow)
{
    ui->setupUi(this);
    connect(this, &MainWindow::signalDraw, this, &MainWindow::slotDraw);
}

MainWindow::~MainWindow()
{
    delete ui;
}

void MainWindow::init()
{
    std::string file = "E:/Video/bb.mp4";
    // Open the input file and probe its format
    if (avformat_open_input(&pAVFormatCtx, file.data(), NULL, NULL) != 0) {
        qDebug() << "open file fail";
        avformat_free_context(pAVFormatCtx);
        return;
    }
    // Probe the streams to complete the format information
    if (avformat_find_stream_info(pAVFormatCtx, NULL) < 0) {
        qDebug() << "vformat find stream fail";
        avformat_close_input(&pAVFormatCtx);
        return;
    }
    // Find the video stream (the return value is its stream index)
    AVCodec *pAVCodec;
    int ret = av_find_best_stream(pAVFormatCtx, AVMEDIA_TYPE_VIDEO, -1, -1, &pAVCodec, 0);
    if (ret < 0) {
        qDebug() << "av_find_best_stream failure";
        avformat_close_input(&pAVFormatCtx);
        return;
    }
    iVideoIndex = ret;

    // Find the decoder
    pAVCodec = avcodec_find_decoder(pAVFormatCtx->streams[iVideoIndex]->codecpar->codec_id);
    if (pAVCodec == NULL) {
        qDebug() << "not find decoder";
        return;
    }
    qDebug() << "avcodec_open2 pAVCodec->name:" << QString::fromStdString(pAVCodec->name);

    if (pAVFormatCtx->streams[iVideoIndex]->avg_frame_rate.den != 0) {
        float fps_ = pAVFormatCtx->streams[iVideoIndex]->avg_frame_rate.num
                     / (float)pAVFormatCtx->streams[iVideoIndex]->avg_frame_rate.den;
        qDebug() << "fps:" << fps_;
    }
    int64_t video_length_sec_ = pAVFormatCtx->duration / AV_TIME_BASE;
    qDebug() << "video_length_sec_:" << video_length_sec_;

    // Allocate the codec context and copy the stream parameters into it
    pAVCodecCtx = avcodec_alloc_context3(pAVCodec);
    if (pAVCodecCtx == NULL) {
        qDebug() << "get pAVCodecCtx fail";
        avformat_close_input(&pAVFormatCtx);
        return;
    }
    ret = avcodec_parameters_to_context(pAVCodecCtx, pAVFormatCtx->streams[iVideoIndex]->codecpar);
    if (ret < 0) {
        qDebug() << "avcodec_parameters_to_context fail";
        avformat_close_input(&pAVFormatCtx);
        return;
    }
    // Open the decoder
    if (avcodec_open2(pAVCodecCtx, pAVCodec, NULL) < 0) {
        qDebug() << "avcodec_open2 fail";
        return;
    }

    // Allocate the frames: one for the decoded data, one for the RGB conversion
    pAVFrame = av_frame_alloc();
    pAVFrameRGB = av_frame_alloc();

    // Set up the pixel-format conversion (decoder format -> RGB32)
    pSwsCtx = sws_getContext(pAVCodecCtx->width, pAVCodecCtx->height, pAVCodecCtx->pix_fmt,
                             pAVCodecCtx->width, pAVCodecCtx->height, AV_PIX_FMT_RGB32,
                             SWS_BICUBIC, NULL, NULL, NULL);
    int m_size = av_image_get_buffer_size(AVPixelFormat(AV_PIX_FMT_RGB32),
                                          pAVCodecCtx->width, pAVCodecCtx->height, 1);
    pRgbBuffer = (uint8_t *)(av_malloc(m_size));
    // Bind the RGB buffer to pAVFrameRGB (av_image_fill_arrays is the newer API)
    avpicture_fill((AVPicture *)pAVFrameRGB, pRgbBuffer, AV_PIX_FMT_RGB32,
                   pAVCodecCtx->width, pAVCodecCtx->height);
    av_new_packet(&packet, pAVCodecCtx->width * pAVCodecCtx->height);
}

void MainWindow::play()
{
    m_decodecThread = std::thread([this]() {
        decodec();
    });
    m_decodecThread.detach();
}

void MainWindow::decodec()
{
    while (true) {
        int ret = av_read_frame(pAVFormatCtx, &packet);
        if (ret != 0) {
            qDebug() << "file end";
            isFinish = true;
            break;
        }
        if (packet.stream_index != iVideoIndex) {
            av_packet_unref(&packet);
            continue;
        }
        int iGotPic = AVERROR(EAGAIN);
        // Decode one frame of video data
        iGotPic = avcodec_send_packet(pAVCodecCtx, &packet);
        if (iGotPic != 0) {
            qDebug() << "avcodec_send_packet error";
            continue;
        }
        iGotPic = avcodec_receive_frame(pAVCodecCtx, pAVFrame);
        if (iGotPic == 0) {
            // Convert the pixel format to RGB32
            sws_scale(pSwsCtx, (uint8_t const * const *)pAVFrame->data, pAVFrame->linesize,
                      0, pAVCodecCtx->height, pAVFrameRGB->data, pAVFrameRGB->linesize);
            // Wrap the RGB buffer in a QImage and hand it to the UI thread
            QImage img(pRgbBuffer, pAVCodecCtx->width, pAVCodecCtx->height, QImage::Format_RGB32);
            qDebug() << "decode img";
            m_image = img;
            emit signalDraw();
        } else {
            qDebug() << "decode error";
        }
        av_packet_unref(&packet);
        std::this_thread::sleep_for(std::chrono::milliseconds(25));
    }
    // Release resources
    av_free(pAVFrame);
    av_free(pAVFrameRGB);
    sws_freeContext(pSwsCtx);
    avcodec_close(pAVCodecCtx);
    avformat_close_input(&pAVFormatCtx);
}

void MainWindow::slotDraw()
{
    update();
}

void MainWindow::paintEvent(QPaintEvent *event)
{
    QPainter painter(this);
    painter.setBrush(Qt::black);
    painter.drawRect(0, 0, this->width(), this->height());

    if (m_image.size().width() <= 0)
        return;

    // Scale the image to the window size, keeping the aspect ratio, and center it
    QImage img = m_image.scaled(this->size(), Qt::KeepAspectRatio);
    int x = this->width() - img.width();
    int y = this->height() - img.height();
    x /= 2;
    y /= 2;
    painter.drawImage(QPoint(x, y), img);
}
```