Preface

Recently, a colleague on the algorithm team asked me to handle the integration of an NCNN model on mobile. Not having looked at the model-integration logic for a long time, I went back to the project's legacy code. Since this area tends to hit the same pitfalls every time, I decided to write an article recording the general integration steps. Because this sits at the application layer, I can only briefly introduce the process; the details depend on the specific scenario.

This article covers how native mobile development (both iOS and Android) interfaces with the native layer (C/C++). Readers who need it can follow the table of contents.

Environment preparation

To access NCNN, we need to integrate the NCNN library. Since the native layer here performs image recognition, OpenCV is also used (PS: only because OpenCV provides a variety of APIs that are convenient for image processing and for later extension, such as visualization).

The NCNN and OpenCV versions used in this article:

  • NCNN precompiled library 20211208 (8916D1E)

  • OpenCV 3.4.5

Both NCNN and OpenCV in this article use the officially compiled dynamic libraries:

  • ncnn

  • OpenCV

PS: Because OpenCV covers a large API surface, an actual project can trim it down when building from source to reduce package size.

Simple NCNN model recognition

Let's first look at the NCNN access logic in the pure native layer. I wrote a simple piece of C++ code as the native layer for the NCNN model integration. (PS: since this part is not my specialty, this article only provides an example based on what I have worked with.)

#include <cstdio>
#include "Reco.h"
#include <opencv2/highgui/highgui.hpp>
#include <ncnn/platform.h>
#include <ncnn/net.h>

ncnn::Net *net = nullptr;

void unInit() {
    if (net != nullptr) {
        delete net;
        net = nullptr;
    }
}

void init(const std::string &paramPath, const std::string &binPath) {
    unInit();
    net = new ncnn::Net;
    net->load_param(paramPath.c_str());
    net->load_model(binPath.c_str());
}

int detect(const cv::Mat &bgr, std::vector<float> &cls_scores) {
    if (net == nullptr) return 1;
    // Wrap the BGR pixels pointed to by cv::Mat into an ncnn::Mat,
    // resizing to the 224 x 224 input the model expects.
    ncnn::Mat in = ncnn::Mat::from_pixels_resize(bgr.data, ncnn::Mat::PIXEL_BGR, bgr.cols, bgr.rows,
                                                 224, 224);

    // Placeholder mean/normalization values; use the values your model was trained with.
    const float mean[3] = {0.0f, 0.0f, 0.0f};
    const float normal[3] = {0.0f, 0.0f, 0.0f};

    in.substract_mean_normalize(mean, normal);
    ncnn::Extractor ex = net->create_extractor();
    ex.input("input", in);
    ncnn::Mat out;
    ex.extract("output", out);
    // Flatten the output blob into a one-dimensional score vector.
    out = out.reshape(out.w * out.h * out.c);
    cls_scores.resize(out.w);
    for (int j = 0; j < out.w; j++) {
        cls_scores[j] = out[j];
    }
    return 0;
}

int process(const std::vector<float> &cls_scores) {
    return 0;
}

int interface(const cv::Mat &bgr) {
    std::vector<float> cls_scores;
    if (detect(bgr, cls_scores) != 0) {
        return -1;
    }
    return process(cls_scores);
}

It is mainly divided into three parts:

  • Model loading (initialization): the NCNN model provides a bin file and a param file, both of which are needed to load the model.
  • Pre-processing and recognition/detection:
    1. First, the memory address pointed to by cv::Mat is passed to ncnn::Mat, which resizes the image to 224 × 224.
    2. Recognition/detection is performed.
    3. Finally, cls_scores holds the final result; subsequent post-processing works on this result.
  • Post-processing: this depends on the specific business, so no concrete implementation is given here; a sketch of a typical case follows this list.
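Although post-processing is business-specific, for a typical classification model it often boils down to picking the highest-scoring class. A minimal sketch of what process could look like under that assumption:

// Hypothetical post-processing for a classification model: return the index
// of the highest score in cls_scores. Real post-processing (confidence
// thresholds, top-k, NMS for detection models, ...) depends on the business.
int process(const std::vector<float> &cls_scores) {
    if (cls_scores.empty()) return -1;
    int best = 0;
    for (int i = 1; i < (int) cls_scores.size(); i++) {
        if (cls_scores[i] > cls_scores[best]) best = i;
    }
    return best;
}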

Based on the C++ code above, the header file declarations are:

void init(const std::string &paramPath, const std::string &binPath);
int interface(const cv::Mat &bgr);

PS: the C++ files above are named Reco.h and Reco.cpp.
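Before touching any mobile code, the native layer can be sanity-checked with a small desktop test program. A minimal sketch, using hypothetical model and image paths:

#include <opencv2/imgcodecs.hpp>
#include "Reco.h"

int main() {
    // Hypothetical paths; replace with your own model files and test image.
    init("model.param", "model.bin");
    cv::Mat bgr = cv::imread("test.jpg"); // cv::imread loads BGR by default
    if (bgr.empty()) return -1;
    return interface(bgr);
}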

Accessing NCNN on Android

Environment integration

PS: this article only covers the armeabi-v7a and arm64-v8a architectures.

  • OpenCV integration

  • NCNN integration

As shown above, copy the OpenCV and ncnn dynamic libraries and header files into the corresponding directories of the project.

Reco.h and Reco.cpp are the C++ code for accessing NCNN. Next, create a CMakeLists.txt file for the CMake declarations; see the comments for the concrete logic.

cmake_minimum_required(VERSION 3.4.1)
set(CMAKE_VERBOSE_MAKEFILE on)

set(CMAKE_CXX_STANDARD 14)

# Compile the included source code
include_directories(${CMAKE_SOURCE_DIR})
include_directories(${CMAKE_SOURCE_DIR}/include)

find_package(OpenMP REQUIRED)
if (OPENMP_FOUND)
    message("OPENMP FOUND")
    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${OpenMP_C_FLAGS}")
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${OpenMP_CXX_FLAGS}")
    set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} ${OpenMP_EXE_LINKER_FLAGS}")
endif()

# Add OpenCV's libopencv_java3.so and set its path
add_library(opencv_java3 SHARED IMPORTED)
set_target_properties(opencv_java3 PROPERTIES IMPORTED_LOCATION
        ${PROJECT_SOURCE_DIR}/../../../libs/${ANDROID_ABI}/libopencv_java3.so)

# Add the NCNN dynamic library libncnn.so and set its path
add_library(ncnn SHARED IMPORTED)
set_target_properties(ncnn PROPERTIES IMPORTED_LOCATION
        ${PROJECT_SOURCE_DIR}/../../../libs/${ANDROID_ABI}/libncnn.so)

# Name the .so library generated from this project's native code (libexample.so)
add_library(example
        SHARED
        Reco.cpp
        JNIReco.cpp)

# Declare all the libraries that need to be linked
target_link_libraries( # Specifies the target library.
        example
        ncnn
        opencv_java3
        android
        log
        jnigraphics)

In this module's build.gradle:

  • Add under android:

externalNativeBuild {
    cmake {
        path "src/main/cpp/CMakeLists.txt"
    }
}

  • Add under android.defaultConfig:

externalNativeBuild {
    cmake {
        cppFlags "-frtti -fexceptions -std=c++11"
        arguments '-DANDROID_TOOLCHAIN=clang', '-DANDROID_PLATFORM=android-21', '-DANDROID_STL=gnustl_static'
        abiFilters 'armeabi-v7a', 'arm64-v8a'
    }
}

After building, the packaged APK will contain libexample.so, libopencv_java3.so, and libncnn.so.

PS: note that my example was created with the latest Android Studio and uses the 7.x Gradle plugin, so the integration process above may differ for you. For example, with older versions the .so libraries need to be placed in the src/main/jniLibs directory; refer to the Android Gradle plugin release notes. Adjust according to your own build setup.

JNI layer

Calls from the JVM into the C++ layer go through a JNI layer. Straight to the code:

#include <jni.h>
#include <string>
#include <android/bitmap.h>
#include <opencv2/imgproc/imgproc.hpp>
#include "Reco.h"

#define ASSERT(status, ret)     if (!(status)) { return ret; }
#define ASSERT_FALSE(status)    ASSERT(status, false)

#define JNI_METHOD(return_type, method_name) \
  JNIEXPORT return_type JNICALL              \
  Java_me_xcyoung_ncnn_Reco_##method_name

extern "C" {
bool bitmapToMat(JNIEnv *env, jobject input_bitmap, cv::Mat &output) {
    void *bitmapPixels;
    AndroidBitmapInfo bitmapInfo;
    ASSERT_FALSE(AndroidBitmap_getInfo(env, input_bitmap, &bitmapInfo) >= 0)
    ASSERT_FALSE(bitmapInfo.format == ANDROID_BITMAP_FORMAT_RGBA_8888)

    ASSERT_FALSE(AndroidBitmap_lockPixels(env, input_bitmap, &bitmapPixels) >= 0)
    ASSERT_FALSE(bitmapPixels)

    // Wrap the locked RGBA pixels, then copy-convert them to BGR.
    cv::Mat tmp(bitmapInfo.height, bitmapInfo.width, CV_8UC4, bitmapPixels);
    cv::cvtColor(tmp, output, cv::COLOR_RGBA2BGR);

    AndroidBitmap_unlockPixels(env, input_bitmap);
    return true;
}

JNI_METHOD(void, nativeInit)(JNIEnv *env, jobject instance, jstring paramPath, jstring binPath) {
    jboolean isCopy;
    const char *mParamPath = env->GetStringUTFChars(paramPath, &isCopy);
    const char *mBinPath = env->GetStringUTFChars(binPath, &isCopy);
    init(mParamPath, mBinPath);
    // Release the UTF-8 copies obtained from the JVM to avoid leaking them.
    env->ReleaseStringUTFChars(paramPath, mParamPath);
    env->ReleaseStringUTFChars(binPath, mBinPath);
}

JNI_METHOD(jint, nativeInterface)(JNIEnv *env, jobject instance, jobject bitmap) {
    cv::Mat input;

    bool res = bitmapToMat(env, bitmap, input);
    if (res) {
        return interface(input);
    } else {
        return 0;
    }
}
}

The JNI layer is the bridge to the C++ code. The latter two methods, nativeInit and nativeInterface, are the bridge methods corresponding to init and interface in Reco.cpp respectively.
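For reference, the JNI_METHOD macro only assembles the exported symbol name that the JVM looks up. For the Java class me.xcyoung.ncnn.Reco, the nativeInit definition expands to roughly:

// The symbol name encodes the package (me.xcyoung.ncnn), the class (Reco)
// and the method (nativeInit); this naming convention is how the JVM binds
// the native method without explicit registration.
JNIEXPORT void JNICALL
Java_me_xcyoung_ncnn_Reco_nativeInit(JNIEnv *env, jobject instance,
                                     jstring paramPath, jstring binPath);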

The bitmapToMat method converts a Bitmap into a cv::Mat. For details on operating on a Bitmap in JNI, see the Android JNI Bitmap documentation.

  • The method mainly obtains the memory address of the image data behind the Bitmap and wraps it in a cv::Mat instance.
  • cv::cvtColor(tmp, output, cv::COLOR_RGBA2BGR); converts the color space from RGBA to BGR, because the C++ recognition code for the algorithm model requires the BGR format.

Note that cv::Mat conventionally holds BGR data, but since the Bitmap comes from loading an image file and defaults to RGBA, and its memory is referenced here directly, the wrapped data starts out as RGBA.
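As an aside, ncnn can also do this color conversion itself while loading, via pixel-type flags such as ncnn::Mat::PIXEL_RGBA2BGR, which would let you skip the cv::cvtColor step. A sketch, assuming the locked RGBA buffer from bitmapToMat is tightly packed:

// Alternative sketch: let ncnn convert RGBA to BGR during loading instead of
// calling cv::cvtColor first. Assumes stride == width * 4; otherwise use the
// from_pixels_resize overload that takes an explicit stride argument.
ncnn::Mat in = ncnn::Mat::from_pixels_resize(
        (const unsigned char *) bitmapPixels, ncnn::Mat::PIXEL_RGBA2BGR,
        bitmapInfo.width, bitmapInfo.height, 224, 224);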

Finally, there is the Java class associated with this JNI:

public class Reco {
    Reco() {
        System.loadLibrary("example");
    }

    void init(String paramPath, String binPath) {
        nativeInit(paramPath, binPath);
    }

    int reco(Bitmap bitmap) {
        return nativeInterface(bitmap);
    }

    native void nativeInit(String paramPath, String binPath);
    native int nativeInterface(Bitmap bitmap);
}

Accessing NCNN on iOS

Environment integration

As shown above, copy the downloaded OpenCV and NCNN framework files into a custom framework directory in the project, and reference that directory from the project. The resulting dependencies:

If xxx.h is not found, consider adding the Headers directory from the framework to the Header Search Paths in Build Settings.

Objective-C bridging layer

Calling C++ code on iOS is much easier: we only need an Objective-C++ wrapper around the C++ code. If the upper layer uses Swift, you then just bridge the Objective-C into Swift.

Define a RecoInterface.mm (PS: the .mm suffix lets the file reference the C++ code):

#import <opencv2/imgproc/imgproc.hpp>
#import "RecoInterface.h"
#include "Reco.h"

@implementation RecoInterface

// Named setup rather than init: under ARC, selectors in the init family
// must return an object of the receiver's type, not void.
- (void)setup:(NSString *)paramPath binPath:(NSString *)binPath {
    init([paramPath UTF8String], [binPath UTF8String]);
}

- (int)interface:(UIImage *)image {
    cv::Mat input = [self image2Mat:image];
    return interface(input);
}

- (cv::Mat)image2Mat:(UIImage *)image {
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(image.CGImage);
    CGFloat cols = image.size.width;
    CGFloat rows = image.size.height;

    // Draw the UIImage into the memory backing the newly created cv::Mat.
    cv::Mat cvMat(rows, cols, CV_8UC4);
    CGContextRef contextRef = CGBitmapContextCreate(cvMat.data,
                                                cols,
                                                rows,
                                                8,
                                                cvMat.step[0],
                                                colorSpace,
                                                kCGImageAlphaNoneSkipLast |
                                                kCGBitmapByteOrderDefault);
    CGContextDrawImage(contextRef, CGRectMake(0, 0, cols, rows), image.CGImage);
    CGContextRelease(contextRef);
    cv::cvtColor(cvMat, cvMat, cv::COLOR_RGBA2BGR);
    return cvMat;
}

@end

The image2Mat method converts a UIImage into a cv::Mat. The principle is to use the native CGContextDrawImage to draw the contents of the UIImage into the memory backing the newly created cv::Mat, which is essentially a copy.

cv::cvtColor(cvMat, cvMat, cv::COLOR_RGBA2BGR); appears again because the original color space of the UIImage is RGBA, and CGContextDrawImage draws in RGBA, so it has to be converted to BGR. There is a slight difference here compared with Android.

PS: in testing I found that after this conversion, the model results were off by a few decimal places when aligning outputs. I suspect CGContextDrawImage loses some precision, but lacking a proper explanation, I did not find a real solution. If conditions permit, read the image with cv::imread to avoid the precision problem. Besides, image2Mat is after all a memory copy, which is not a small overhead for high-resolution images.
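If you do take the cv::imread route, here is a minimal sketch, assuming the test image is bundled with the app (the file name is hypothetical):

// Sketch: read the image from disk with cv::imread, avoiding the suspected
// CGContextDrawImage precision loss. "test.jpg" is a hypothetical bundled file.
NSString *path = [[NSBundle mainBundle] pathForResource:@"test" ofType:@"jpg"];
cv::Mat bgr = cv::imread([path UTF8String]); // cv::imread returns BGR directly
if (!bgr.empty()) {
    interface(bgr);
}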

Closing words

The above covers how a mobile client accesses an NCNN model. Since most of the methods involved really just call into the native layer (C/C++), the approach transfers to other logic as well. Finally, the code for this article is linked below; for well-known reasons the model and test images cannot be provided, so the code is for reference only. The iOS framework files are too large and were not uploaded; you can integrate them following the process above.

Example: xcyoung/ncnn-example