This is the 18th day of my participation in the November Gwen Challenge. Check out the event details: The Last Gwen Challenge of 2021.

1. Foreword

OpenCV is an open-source computer vision and machine learning software library. Its image processing functions are powerful and very fast. For object detection, OpenCV itself ships with many models, such as eye detection, nose detection, mouth detection, face detection, human body detection, and cat face detection; after downloading OpenCV you can run image recognition tests directly. OpenCV can also call YOLO models directly to identify various objects accurately: the built-in YOLO v3 model file can recognize many common objects, for example dogs, cars, bicycles, people, books, and mobile phones.

This article introduces how to deploy the OpenCV environment in Qt and complete object detection.

2. Deploy the OpenCV development environment

Let’s start with my development environment:

I deployed the OpenCV environment for the two compilers I commonly use:

  1. MSVC 2017 64-bit
  2. MinGW 7.3.0 32-bit

OpenCV can be downloaded from opencv.org/releases/pa…

Up to the 2.x series, the official OpenCV Windows packages included x86 libraries; since 3.x they ship only x64 libraries and support only the MSVC compiler.

The current version of OpenCV I use is: OpenCV 3.4.7

Scroll down on the releases page above to find this version.

The download is an EXE file; double-clicking installs it, which is really just self-extraction, and you can choose the extraction path. The extracted files include many source files and library files and take up quite a bit of space, so place them in a fixed directory and fill in that path in the project file later. This official package contains only x64 libraries, for the 64-bit MSVC compiler.

For the MinGW compiler, you can download a matching OpenCV build from here: github.com/huihut/Open…

This GitHub repository has a mirror on CodeChina, which downloads relatively fast: gitcode.net/mirrors/hui…

After opening the link, choose the version that matches your compiler. My MinGW is 7.3.0, which happens to be the version below.

Here is how the VS2017 64-bit compiler and the MinGW 32-bit compiler reference the OpenCV library.

  1. MSVC 64-bit compiler: Qt xx.pro project file

INCLUDEPATH += C:/opencv/build/include \
               C:/opencv/build/include/opencv \
               C:/opencv/build/include/opencv2

# Link the "d"-suffixed debug library in debug builds, the release library otherwise
CONFIG(debug, debug|release) {
    LIBS += -LC:/opencv/build/x64/vc14/lib -lopencv_world347d
} else {
    LIBS += -LC:/opencv/build/x64/vc14/lib -lopencv_world347
}
  2. MinGW 32-bit compiler: Qt xx.pro project file

INCLUDEPATH += C:/OpenCV-MinGW-Build-OpenCV-3.4.7/include \
               C:/OpenCV-MinGW-Build-OpenCV-3.4.7/include/opencv \
               C:/OpenCV-MinGW-Build-OpenCV-3.4.7/include/opencv2
LIBS += C:/OpenCV-MinGW-Build-OpenCV-3.4.7/x86/mingw/bin/libopencv_*.dll


After the project builds successfully, copy the OpenCV DLLs into the directory next to the exe; otherwise the program will abort at startup when the DLLs cannot be found. These DLL files are in OpenCV's bin directory.
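If copying the DLLs by hand gets tedious, qmake can do it after every build. Below is a sketch of a post-link step, assuming the MSVC release layout used above; the DLL name and paths are examples from this article and should be adjusted to your setup:

```qmake
# Copy the OpenCV runtime DLL next to the built exe after each release link
win32:CONFIG(release, debug|release) {
    QMAKE_POST_LINK += $$QMAKE_COPY $$shell_path(C:/opencv/build/x64/vc14/bin/opencv_world347.dll) $$shell_path($$OUT_PWD/release)
}
```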

3. Call OpenCV to perform object detection (taking face detection as an example)

The OpenCV models are stored in C:\opencv\sources\data\haarcascades_cuda.

The face detection model file used below is haarcascade_frontalface_alt2.xml.

There are two ways to call OpenCV to accomplish target detection.

3.1 Method 1: cvHaarDetectObjects

// Face detection code
void ImageHandle::opencv_face(QImage qImage)
{
    QTime time;
    time.start();

    static CvMemStorage* storage = nullptr;
    static CvHaarClassifierCascade* cascade = nullptr;

    // Load the classifier: frontal face detection
    cascade = (CvHaarClassifierCascade*)cvLoad("C:/opencv/sources/data/haarcascades_cuda/haarcascade_frontalface_alt2.xml", 0, 0, 0);
    if(!cascade)
    {
        qDebug() << "Classifier loading error.\n";
        return;
    }

    // Create memory storage
    storage = cvCreateMemStorage(0);

    // Load the image to be detected
    IplImage* img = QImageToIplImage(&qImage);
    if(img == nullptr)
    {
        qDebug() << "Image loading error.\n";
        return;
    }

    double scale = 2;

    // Create the image header and allocate storage
    IplImage* gray = cvCreateImage(cvSize(img->width, img->height), 8, 1);

    // Create the image header and allocate storage
    IplImage* small_img = cvCreateImage(cvSize(cvRound(img->width/scale), cvRound(img->height/scale)), 8, 1);
    cvCvtColor(img, gray, CV_BGR2GRAY);
    cvResize(gray, small_img, CV_INTER_LINEAR);
    cvEqualizeHist(small_img, small_img); // Histogram equalization

    /*
     * Detects all the faces in the picture with the given feature classifier and
     * returns the detected faces as rectangles. 8 parameters in total:
     * Parameter 1: input image; use a grayscale image to speed up detection.
     * Parameter 2: the Haar feature classifier; cvLoad() can load one from an XML file on disk.
     * Parameter 3: memory storage area for the detected candidate targets.
     * Parameter 4: scaling factor of the search window between two successive scans.
     * Parameter 5: minimum number of neighboring rectangles that make up a detection target (3 by default).
     *   Targets made of fewer than min_neighbors - 1 small rectangles are excluded.
     *   If min_neighbors is 0, the function returns all candidate rectangles unfiltered,
     *   which is useful for user-defined combinations of the detection results.
     * Parameter 6: either the default value or CV_HAAR_DO_CANNY_PRUNING. With
     *   CV_HAAR_DO_CANNY_PRUNING the function uses Canny edge detection to skip
     *   regions with too many or too few edges, which normally contain no faces.
     * Parameter 7: minimum size of the detection window; usually the default.
     * Parameter 8: maximum size of the detection window; usually the default.
     * Return value: a CvSeq containing the CvRect rectangles of the detected faces.
     */
    CvSeq* objects = cvHaarDetectObjects(small_img,
                                         cascade,
                                         storage,
                                         1.1, 3, 0 /*CV_HAAR_DO_CANNY_PRUNING*/,
                                         cvSize(50, 50) /* this size determines how long detection takes */);

    qDebug() << "Number of faces:" << objects->total;

    // Walk through the detected objects and draw a box around each
    QPainter painter(&qImage); // Construct the QPainter drawing object
    QPen pen;
    pen.setColor(Qt::blue); // Pen color
    pen.setWidth(5);        // Pen width
    painter.setPen(pen);    // Set the pen

    for(int i = 0; i < objects->total; ++i)
    {
        // Get the position and size of the face
        CvRect* r = (CvRect*)cvGetSeqElem(objects, i);
        // Draw a rectangle around the face area (scaled back to the original image)
        painter.drawRect(r->x*scale, r->y*scale, r->width*scale, r->height*scale);
    }

    cvReleaseImage(&gray);      // Free image memory
    cvReleaseImage(&small_img); // Free image memory
    cvReleaseHaarClassifierCascade(&cascade); // Free the classifier
    cvReleaseMemStorage(&objects->storage);   // Free the detection results

    // Release the image
    cvReleaseImage(&img);
    qDebug() << tr("Time: %1 ms\n").arg(time.elapsed());
    qDebug() << "Child thread:" << QThread::currentThread();

    // Save the result
    m_image = qImage.copy();
}

/* Convert a QImage to OpenCV IplImage format */
IplImage *ImageHandle::QImageToIplImage(const QImage *qImage)
{
    int width = qImage->width();
    int height = qImage->height();
    CvSize Size;
    Size.height = height;
    Size.width = width;
    IplImage *IplImageBuffer = cvCreateImage(Size, IPL_DEPTH_8U, 3);
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            QRgb rgb = qImage->pixel(x, y);
            CV_IMAGE_ELEM(IplImageBuffer, uchar, y, x*3+0) = qBlue(rgb);
            CV_IMAGE_ELEM(IplImageBuffer, uchar, y, x*3+1) = qGreen(rgb);
            CV_IMAGE_ELEM(IplImageBuffer, uchar, y, x*3+2) = qRed(rgb);
        }
    }
    return IplImageBuffer;
}

3.2 Method 2: face_cascade.detectMultiScale

// Face detection code
void ImageHandle::opencv_face(QImage qImage)
{
    QTime time;
    time.start();

    // Define the cascade classifier
    CascadeClassifier face_cascade;
    // Load the classifier file
    if(!face_cascade.load("C:/opencv/sources/data/haarcascades_cuda/haarcascade_frontalface_alt2.xml"))
    {
        qDebug() << "Classifier loading error";
        return;
    }
    Mat frame = QImage2cvMat(qImage);
    cvtColor(frame, frame, COLOR_BGR2GRAY); // Convert to grayscale

    std::vector<Rect> faces;

    // Face detection
    face_cascade.detectMultiScale(frame, faces);
    qDebug() << tr("Time: %1 ms, faces detected: %2\n").arg(time.elapsed()).arg(faces.size());

    for (size_t i = 0; i < faces.size(); i++)
    {
        // Draw a rectangle and an ellipse around each face
        Point center(faces[i].x + faces[i].width/2, faces[i].y + faces[i].height/2);
        ellipse(frame, center, Size(faces[i].width/2, faces[i].height/2), 0, 0, 360, Scalar(255, 0, 255), 4);
        rectangle(frame,
                  cvPoint(cvRound(faces[i].x), cvRound(faces[i].y)),
                  cvPoint(cvRound(faces[i].x + faces[i].width - 1),
                          cvRound(faces[i].y + faces[i].height - 1)),
                  Scalar(255, 255, 255), 3, 8, 0);
    }

//    // Extract the recognition results: frame1 is the cropped face image
//    Mat frame1;
//    for (size_t i = 0; i < faces.size(); i++)
//    {
//        frame1 = frame(Rect(faces[i].x, faces[i].y, faces[i].width, faces[i].height));
//    }

    // Display on the control
    m_image = Mat2QImage(frame);
}

Mat ImageHandle::QImage2cvMat(QImage image)
{
    Mat mat;
    switch(image.format())
    {
    case QImage::Format_ARGB32:
    case QImage::Format_RGB32:
    case QImage::Format_ARGB32_Premultiplied:
        mat = Mat(image.height(), image.width(), CV_8UC4, (void*)image.constBits(), image.bytesPerLine());
        break;
    case QImage::Format_RGB888:
        mat = Mat(image.height(), image.width(), CV_8UC3, (void*)image.constBits(), image.bytesPerLine());
        cvtColor(mat, mat, CV_BGR2RGB);
        break;
    case QImage::Format_Indexed8:
        mat = Mat(image.height(), image.width(), CV_8UC1, (void*)image.constBits(), image.bytesPerLine());
        break;
    default:
        break;
    }
    // Clone so the returned Mat owns its pixels: the Mat above only wraps the
    // buffer of the local QImage, which is destroyed when this function returns.
    return mat.clone();
}

QImage ImageHandle::Mat2QImage(const Mat& mat)
{
    // 8-bits unsigned, NO. OF CHANNELS = 1
    if(mat.type() == CV_8UC1)
    {
        QImage image(mat.cols, mat.rows, QImage::Format_Indexed8);
        // Set the color table (used to translate colour indexes to qRgb values)
        image.setColorCount(256);
        for(int i = 0; i < 256; i++)
        {
            image.setColor(i, qRgb(i, i, i));
        }
        // Copy input Mat
        uchar *pSrc = mat.data;
        for(int row = 0; row < mat.rows; row ++)
        {
            uchar *pDest = image.scanLine(row);
            memcpy(pDest, pSrc, mat.cols);
            pSrc += mat.step;
        }
        return image;
    }
    // 8-bits unsigned, NO. OF CHANNELS = 3
    else if(mat.type() == CV_8UC3)
    {
        // Copy input Mat
        const uchar *pSrc = (const uchar*)mat.data;
        // Create QImage with same dimensions as input Mat
        QImage image(pSrc, mat.cols, mat.rows, mat.step, QImage::Format_RGB888);
        return image.rgbSwapped();
    }
    // 8-bits unsigned, NO. OF CHANNELS = 4
    else if(mat.type() == CV_8UC4)
    {
        // Copy input Mat
        const uchar *pSrc = (const uchar*)mat.data;
        // Create QImage with same dimensions as input Mat
        QImage image(pSrc, mat.cols, mat.rows, mat.step, QImage::Format_ARGB32);
        return image.copy();
    }
    else
    {
        return QImage();
    }
}