Background
Qt MediaPlayer is the native media player that ships with Qt. On Windows it can only be used if a decoder such as LAV Filters is installed.
This article is divided into two parts. The first uses MediaPlayer to get every frame of the video. The second uses VideoOutput to display a custom data stream. The two parts are independent: once we have each frame, we can control the rendering ourselves, either with QWidget or with OpenGL, as described in my two previous articles, "QT + VS2015, get every frame of VLC and render to QWidget" and "QT uses OpenGL for video clipping, splicing, 4 grids, 9 grids", where the data source is VLC.
Today we combine the two, decoding with QMediaPlayer and rendering with VideoOutput respectively, but the plumbing between them is not wrapped by Qt; we build it ourselves.
Part 1: QMediaPlayer extracts each frame
In this part we mainly use the QAbstractVideoSurface class. The official documentation explains:
The QAbstractVideoSurface class defines the standard interface that video producers use to inter-operate with video presentation surfaces. You can subclass this interface to receive video frames from sources like decoded media or cameras to perform your own processing.
Therefore, we need to define a class that inherits QAbstractVideoSurface and implement its present and supportedPixelFormats methods. QMediaPlayer will then automatically deliver every frame to present.
In QML we just need to register this class:
qmlRegisterType<VideoSurfaces>("com.nova.videoSurfaces", 1, 0, "VideoSurfaces");
Then set the MediaPlayer's videoOutput property to the VideoSurfaces object, as follows:
import QtQuick 2.0
import QtMultimedia 5.15
import com.nova.videoSurfaces 1.0

Rectangle {
    anchors.fill: parent

    MediaPlayer {
        id: player
        source: "file:///D:/video/123.mp4"
        autoLoad: true
        autoPlay: true
        videoOutput: videosurfaces
    }

    VideoSurfaces {
        id: videosurfaces
    }
}
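One caveat: as far as I know, assigning a custom surface object directly to MediaPlayer's videoOutput property, as done here, only works with the QtMultimedia 5.15 import shown above; in earlier versions MediaPlayer has no videoOutput property and you have to go through a VideoOutput item instead.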
//videosurface.h
#ifndef VIDEOSURFACE_H
#define VIDEOSURFACE_H
#include <QObject>
#include <QAbstractVideoSurface>
#include <QVideoSurfaceFormat>
class VideoSurfaces : public QAbstractVideoSurface
{
Q_OBJECT
public:
explicit VideoSurfaces(QObject *parent = 0);
~VideoSurfaces();
Q_INVOKABLE QVideoFrame::PixelFormat getpixformat();
Q_INVOKABLE void pause();
signals:
Q_INVOKABLE void sendImage(const QVideoFrame& frame);
protected:
bool present(const QVideoFrame &frame) override;
QList<QVideoFrame::PixelFormat> supportedPixelFormats(
QAbstractVideoBuffer::HandleType handleType =
QAbstractVideoBuffer::NoHandle) const override;
};
#endif // VIDEOSURFACE_H
//videosurface.cpp
#include "videosurface.h"
#include <QImage>
#include <QDebug>

VideoSurfaces::VideoSurfaces(QObject *parent) : QAbstractVideoSurface(parent)
{
}

VideoSurfaces::~VideoSurfaces()
{
    this->stop();
}

QVideoFrame::PixelFormat VideoSurfaces::getpixformat()
{
    qDebug() << "VideoSurfaces getformat";
    return QVideoFrame::Format_YUV420P;
}

void VideoSurfaces::pause()
{
    // Not used in this example; an empty definition keeps the linker happy
    // because pause() is declared Q_INVOKABLE in the header.
}

QList<QVideoFrame::PixelFormat> VideoSurfaces::supportedPixelFormats(QAbstractVideoBuffer::HandleType handleType) const
{
    if (handleType == QAbstractVideoBuffer::NoHandle) {
        qDebug() << "VideoSurface NoHandle supportedPixelFormats" << (void*)this;
        // Return the formats you will support
        QList<QVideoFrame::PixelFormat> listPixelFormats;
        listPixelFormats << QVideoFrame::Format_ARGB32
                         << QVideoFrame::Format_ARGB32_Premultiplied
                         << QVideoFrame::Format_RGB32
                         << QVideoFrame::Format_RGB24
                         << QVideoFrame::Format_RGB565
                         << QVideoFrame::Format_RGB555
                         << QVideoFrame::Format_ARGB8565_Premultiplied
                         << QVideoFrame::Format_BGRA32
                         << QVideoFrame::Format_BGRA32_Premultiplied
                         << QVideoFrame::Format_BGR32
                         << QVideoFrame::Format_BGR24
                         << QVideoFrame::Format_BGR565
                         << QVideoFrame::Format_BGR555
                         << QVideoFrame::Format_AYUV444
                         << QVideoFrame::Format_AYUV444_Premultiplied
                         << QVideoFrame::Format_YUV444
                         << QVideoFrame::Format_YUV420P
                         << QVideoFrame::Format_YV12
                         << QVideoFrame::Format_UYVY
                         << QVideoFrame::Format_YUYV
                         << QVideoFrame::Format_NV12
                         << QVideoFrame::Format_NV21
                         << QVideoFrame::Format_IMC1
                         << QVideoFrame::Format_IMC2
                         << QVideoFrame::Format_Y8
                         << QVideoFrame::Format_Y16
                         << QVideoFrame::Format_Jpeg
                         << QVideoFrame::Format_CameraRaw
                         << QVideoFrame::Format_AdobeDng;
        return listPixelFormats;
    } else {
        return QList<QVideoFrame::PixelFormat>();
    }
}

bool VideoSurfaces::present(const QVideoFrame &frame)
{
    qDebug() << "VideoSurfaces present";
    if (frame.isValid()) {
        // Map the frame so the CPU can read its pixels, then deep-copy the
        // data into a QImage before unmapping, so the emitted frame stays valid.
        QVideoFrame cloneFrame(frame);
        cloneFrame.map(QAbstractVideoBuffer::ReadOnly);
        QImage image = QImage(cloneFrame.bits(), cloneFrame.width(), cloneFrame.height(),
                              cloneFrame.bytesPerLine(),
                              QVideoFrame::imageFormatFromPixelFormat(frame.pixelFormat())).copy();
        cloneFrame.unmap();
        QVideoFrame f = QVideoFrame(image);
        emit sendImage(f);
        return true;
    }
    return false;
}
With this in place, every frame arrives in the present function of our VideoSurfaces class. But be careful: the incoming QVideoFrame is not necessarily addressable by the CPU, because its data may live in a format only the GPU can use. If you want to use it in memory or forward it through a signal/slot connection, you must map it first and convert it to a QImage (in my tests, sending the QVideoFrame on directly did not work).
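As a quick illustration of consuming those frames on the CPU side, here is a small hypothetical receiver; the class name FrameSaver, the save-every-30th-frame rate and the output file names are placeholders of mine, not part of the original project. It connects to sendImage, maps the forwarded frame and occasionally writes it out as a PNG:

// framesaver.h -- hypothetical consumer of VideoSurfaces::sendImage
#include <QObject>
#include <QVideoFrame>
#include <QAbstractVideoBuffer>
#include <QImage>
#include <QString>

class FrameSaver : public QObject
{
    Q_OBJECT
public slots:
    void onFrame(const QVideoFrame &frame)
    {
        QVideoFrame copy(frame);                      // map() needs a non-const frame
        if (!copy.map(QAbstractVideoBuffer::ReadOnly))
            return;
        // The frame emitted by present() was built from a QImage, so its pixel
        // format converts cleanly back to a QImage format here.
        QImage img(copy.bits(), copy.width(), copy.height(), copy.bytesPerLine(),
                   QVideoFrame::imageFormatFromPixelFormat(copy.pixelFormat()));
        if (++m_count % 30 == 0)                      // keep roughly one frame in thirty
            img.save(QString("frame_%1.png").arg(m_count));
        copy.unmap();
    }
private:
    int m_count = 0;
};

It can be connected in C++ with QObject::connect, or registered and connected from QML in the same way the frameProvider connection is made in Part 2.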
Part 2: customize the VideoOutput data source
To customize the VideoOutput data source, the official documentation says:
If you are extending your own C++ classes to interoperate with VideoOutput, you can either provide a QObject based class with a mediaObject property that exposes a QMediaObject derived class that has a QVideoRendererControl available, or you can provide a QObject based class with a writable videoSurface property that can accept a QAbstractVideoSurface based class and can follow the correct protocol to deliver QVideoFrames to it.
We use the second approach in the example below: define a class that inherits from QObject and holds a QAbstractVideoSurface pointer, provide both the setVideoSurface method and the videoSurface method, and pass every received frame on to the m_surface object.
VideoOutput {
id: vo
anchors.fill: parent
source: frameProvider
fillMode: VideoOutput.Stretch
}
frameprovider.h
#pragma once
#include <QObject>
#include <QAbstractVideoSurface>
#include <QVideoSurfaceFormat>
#include <QDebug>
class FrameProvider : public QObject
{
Q_OBJECT
Q_PROPERTY(QAbstractVideoSurface *videoSurface READ videoSurface WRITE setVideoSurface)
public:
FrameProvider();
~FrameProvider();
QAbstractVideoSurface* videoSurface();
private:
QAbstractVideoSurface *m_surface = NULL;
QVideoSurfaceFormat m_format;
public:
void setVideoSurface(QAbstractVideoSurface* surface);
Q_INVOKABLE void setFormat(int width, int height, QVideoFrame::PixelFormat format);
public slots:
void onNewVideoContentReceived(const QVideoFrame& frame);
};
frameprovider.cpp
#include "frameprovider.h"

FrameProvider::FrameProvider()
{
    qDebug() << "FrameProvider construct";
}

FrameProvider::~FrameProvider()
{
    qDebug() << "FrameProvider destruct";
}

QAbstractVideoSurface* FrameProvider::videoSurface()
{
    qDebug() << "FrameProvider return videoSurface";
    return m_surface;
}

void FrameProvider::setVideoSurface(QAbstractVideoSurface* surface)
{
    qDebug() << "FrameProvider setVideoSurface:" << surface;
    // VideoOutput hands us its internal surface through this setter.
    if (m_surface && m_surface != surface && m_surface->isActive()) {
        m_surface->stop();
    }
    m_surface = surface;
    if (m_surface && m_format.isValid()) {
        m_format = m_surface->nearestFormat(m_format);
        m_surface->start(m_format);
        qDebug() << "FrameProvider setVideoSurface start m_surface, m_format:" << m_format.pixelFormat();
    }
}

void FrameProvider::setFormat(int width, int height, QVideoFrame::PixelFormat format)
{
    qDebug() << "FrameProvider setFormat width:" << width << "height:" << height << "format:" << format;
    QSize size(width, height);
    QVideoSurfaceFormat formats(size, format);
    m_format = formats;

    if (m_surface) {
        if (m_surface->isActive()) {
            m_surface->stop();
        }
        // Ask the surface for the closest format it supports, then (re)start it.
        m_format = m_surface->nearestFormat(m_format);
        m_surface->start(m_format);
        qDebug() << "FrameProvider setFormat start m_surface, m_format:" << m_format.pixelFormat();
    }
}

void FrameProvider::onNewVideoContentReceived(const QVideoFrame& frame)
{
    qDebug() << "FrameProvider onNewVideoContentReceived";
    if (m_surface && frame.isValid()) {
        // Forward the frame straight to the surface owned by VideoOutput.
        m_surface->present(frame);
    }
}
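Note the protocol at work here: the surface that VideoOutput hands us through setVideoSurface must be started with a valid QVideoSurfaceFormat before present will accept frames. That is why setFormat has to be called with the real frame size and pixel format before playback begins, and why both setVideoSurface and setFormat ask the surface for its nearestFormat and then start it as soon as enough information is available.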
To register in QML:
qmlRegisterType<FrameProvider>("com.nova.frameProvider", 1, 0, "FrameProvider");
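For completeness, here is a minimal sketch of the C++ side that registers both types and loads the QML. The file name main.qml, the use of QQuickView and the window size are assumptions of mine, not taken from the original project:

// main.cpp -- minimal wiring sketch (file names and window setup are assumptions)
#include <QGuiApplication>
#include <QQuickView>
#include <QtQml>
#include <QUrl>
#include "videosurface.h"
#include "frameprovider.h"

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);

    // Register both helper types so the QML below can instantiate them.
    qmlRegisterType<VideoSurfaces>("com.nova.videoSurfaces", 1, 0, "VideoSurfaces");
    qmlRegisterType<FrameProvider>("com.nova.frameProvider", 1, 0, "FrameProvider");

    // QQuickView is used because the example's root item is a Rectangle, not a Window.
    QQuickView view;
    view.setResizeMode(QQuickView::SizeRootObjectToView);
    view.setSource(QUrl(QStringLiteral("qrc:/main.qml")));
    view.resize(800, 600);
    view.show();

    return app.exec();
}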
The following is a complete usage example in QML:
import QtQuick 2.0
import QtMultimedia 5.15
import com.nova.frameProvider 1.0
import com.nova.videoSurfaces 1.0

Rectangle {
    anchors.fill: parent
    property string url: "file:///D:/video/123.mp4"

    MediaPlayer {
        id: player
        source: url
        autoLoad: true
        autoPlay: true
        videoOutput: videosurfaces
    }

    function play() {
        videosurfaces.sendImage.connect(frameProvider.onNewVideoContentReceived)
        frameProvider.setFormat(parent.width, parent.height, videosurfaces.getpixformat())
        player.play()
    }

    VideoOutput {
        id: vo
        anchors.fill: parent
        source: frameProvider
        fillMode: VideoOutput.Stretch
    }

    FrameProvider {
        id: frameProvider
    }

    VideoSurfaces {
        id: videosurfaces
    }
}
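Note that nothing in this snippet invokes play() by itself: autoPlay starts the MediaPlayer, but the signal connection and the setFormat call only happen inside play(). You still need to call play() yourself once the component has loaded, for example from a Component.onCompleted handler or a button click.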