It’s 2021, and deep learning has been in the air for four or five years. From face-recognition payment to beauty filters in live streaming, many of its achievements have already landed in the apps we use every day. To ship deep learning models inside their applications, mobile developers need an inference framework, and among the many mobile inference frameworks, NCNN is undoubtedly one of the best.

For those who don’t know, NCNN is a mobile deep learning inference framework written by nihui; it is fast, cleanly structured, and has already passed ten thousand stars on GitHub. If you need artificial intelligence in your application, NCNN could be a big help.

Unfortunately, most of the examples in the project are for Android, and the few iOS ones are all Objective-C based, which makes me want to cry as someone who writes iOS apps in Swift.

No wheel? Then build your own! Let’s see how to use NCNN in Swift.

Since nihui already provides a ready-made ncnn.framework, the problem is reduced to how to call a C++ library from Swift.

For those of you familiar with both Swift and Objective-C, recall that these languages have the following properties:

  • Swift cannot call C++ directly, but it can call Objective-C through headers;
  • Objective-C and Objective-C++ share the same header files;
  • Objective-C++ can call C++.

So the implementation path becomes:

1. Create an Objective-C file and a bridging header

Create a new Objective-C file in your project and name it NcnnWrapper.m. Xcode will automatically generate the corresponding Objective-C header NcnnWrapper.h for you, and will ask whether you also want to create a bridging header. The bridging header (named something like xxx-Bridging-Header.h) exposes the corresponding Objective-C code to Swift.

To use Objective-C functions or classes in Swift, simply import the header in the bridging header we just created:

//
// Use this file to import your target's public headers that you would like to expose to Swift.
//

#import "NcnnWrapper.h"

2. Rename the Objective-C file to an Objective-C++ file

The next step is to rename NcnnWrapper.m to NcnnWrapper.mm so that Xcode recognizes it as an Objective-C++ file. Inside an Objective-C++ file, we can call NCNN’s C++ code freely.

3. Create an intermediate struct as a helper

Although we can write C++ in our .mm file, we still cannot use C++-specific syntax in the header file. This calls for a transitional struct. In code, it looks like this:

// NcnnWrapper.h
struct _Net;

@interface NcnnNet : NSObject
{
    @public struct _Net* _net;
}
- (int)loadParam:(NSString*)paramPath;
...
@end

// NcnnWrapper.mm
// MARK: Net
struct _Net {
    ncnn::Net _net;
};

@implementation NcnnNet
...
- (int)loadParam:(NSString *)paramPath
{
    return _net->_net.load_param([paramPath UTF8String]);
}
...
@end

In the code above, the _Net struct lets us successfully confine ncnn::Net to the NcnnWrapper.mm file. After that, we can use the NcnnNet class directly from Swift.
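For illustration, here is a minimal sketch of what calling the wrapper from Swift could look like; "model" is just a placeholder resource name, and NcnnNet and loadParam are the names from the wrapper above:

let net = NcnnNet()
if let paramPath = Bundle.main.path(forResource: "model", ofType: "param") {
    // load_param returns 0 on success, mirroring ncnn's C++ API
    let status = net.loadParam(paramPath)
    print("load_param returned \(status)")
}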

Based on the approach above, I wrote two examples in ncnn-swift:

Image classification – SqueezenetSwift


This project reads a cat picture bundled with the project and classifies it with SqueezeNet. The code mainly follows nihui/ncnn-android-squeezenet. The Swift code for the model part is as follows:

class Squeezenet {
    ...
    let net: NcnnNet
    
    init?() {
        // Initialize the network and load related parameters
        net = NcnnNet()
        let paramBinPath = Bundle.main.path(forResource: "squeezenet_v1.1.param", ofType: "bin")
        guard net.loadParamBin(paramBinPath) == 0 else {
            return nil
        }
        let modelPath = Bundle.main.path(forResource: "squeezenet_v1.1", ofType: "bin")
        guard net.loadModel(modelPath) == 0 else {
            return nil
        }
        ...
    }
    
    func predict(for image: UIImage, top: Int = 1) -> [(Int, Float)] {
        ...
        let inputData: Data = ...
        // Create input
        // 65540 is ncnn::Mat::PIXEL_RGBA2RGB
        let input: NcnnMat = NcnnMat.init(fromPixels: inputData, 65540, 227, 227)
        let mean: [NSNumber] = [NSNumber(value: 104.0), NSNumber(value: 117.0), NSNumber(value: 123.0)]
        // Normalize input
        input.substractMeanNormalize(mean, nil)
        // Run the network and get the output
        // BLOB_data is 0, BLOB_prob is 82
        let output: [NSNumber: NcnnMat] = net.run([0: input], [82])
        // Convert the output back to a Swift array
        let outputData: Data = output[82]!.toData()!
        let outputProb: [Float] = outputData.toArray(type: Float.self)
        ...
    }
}
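The toArray(type:) call above is not part of Foundation; it is a small helper extension on Data. A plausible implementation, assuming the buffer holds tightly packed values of type T (my sketch, the repo's version may differ), is:

import Foundation

extension Data {
    // Reinterpret the raw bytes as an array of T (e.g. Float for the output blob).
    func toArray<T>(type: T.Type) -> [T] {
        return withUnsafeBytes { rawBuffer in
            Array(rawBuffer.bindMemory(to: T.self))
        }
    }
}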

Comparing this with the Android version, you will find that the correspondence between the Swift code and the C++ code is fairly clear.
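The trailing "..." in predict(for:top:) elides how outputProb becomes the (classIndex, score) pairs that the method returns; one way to sketch that final top-k step (my own illustration, not code from the repo) is:

// Pick the k highest-scoring classes from the probability vector.
func topK(_ probs: [Float], k: Int) -> [(Int, Float)] {
    return probs.enumerated()
        .sorted { $0.element > $1.element }
        .prefix(k)
        .map { ($0.offset, $0.element) }
}

// e.g. return topK(outputProb, k: top) at the end of predict(for:top:)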

Object detection — YoloV5Swift

This object detection project follows nihui’s ncnn-android-yolov5. The wrapper used here is a bit more complete than in the previous example. Besides exposing more of the wrapper’s interfaces, the main purpose of this project is to show how to create a custom layer. NCNN registers custom layers with the DEFINE_LAYER_CREATOR and DEFINE_LAYER_DESTROYER macros, which define the layer’s creation and destruction functions, for example:

DEFINE_LAYER_CREATOR(YoloV5Focus)

The above macro essentially generates the YoloV5Focus_layer_creator function. After that, a call such as

net.register_custom_layer("YoloV5Focus", YoloV5Focus_layer_creator);

completes the registration.

From the perspective of implementing the Swift wrapper, I want users to keep reusing their existing C++ implementations of custom layers, but I don’t want them to have to dig into the wrapper’s C++ code to register them. So, borrowing the way TensorFlow registers custom ops, the wrapper provides a macro like this:

#define DEFINE_CUSTOM_LAYER(name, T)                            \
    CustomLayerRegistrar<T> name##CustomLayerRegistrar(#name)

Taking YoloV5Focus as an example, the user only needs to write one line:

DEFINE_CUSTOM_LAYER(YoloV5Focus, YoloV5Focus);

This creates a CustomLayerRegistrar<YoloV5Focus> variable, whose constructor is:

    CustomLayerRegistrar(const std::string& name) {
        CustomLayerRegistry::Entry entry;
        entry.creator = [](void *) -> ncnn::Layer* {
            return new T;
        };
        CustomLayerRegistry::Global()->Register(name, entry);
    }

This automatically stores the layer in CustomLayerRegistry::Global().

Then, in the Swift code, a single call to net.registerCustomLayer("YoloV5Focus") lets the network fetch the implementation registered under that name from CustomLayerRegistry::Global(), completing the registration.
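Putting the pieces together, registration from the Swift side can be sketched like this (yolov5s.param is the parameter file name used in the upstream Android example; the NcnnNet method names are the ones shown earlier and should be treated as assumptions):

// Register the custom layer before loading the network definition,
// otherwise ncnn cannot resolve the YoloV5Focus layer type in the .param file.
let net = NcnnNet()
net.registerCustomLayer("YoloV5Focus")
if let paramPath = Bundle.main.path(forResource: "yolov5s", ofType: "param"),
   net.loadParam(paramPath) == 0 {
    // The network definition is loaded; load the weights and run as in the examples above.
}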

In terms of performance, the measured time also includes roughly 40 ms of OpenCV imread time; the input image keeps roughly its original aspect ratio and is scaled to 640 pixels on the long side. The result is roughly in line with the numbers discussed in the original project.


Finally, in order to let more iOS developers, especially independent developers, enjoy the power of NCNN, I hope to build a CocoaPod for NCNN based on the designs above, similar to TensorFlow Lite’s TensorFlowLiteSwift. If you are interested in this work, you can contact me or leave a comment in the following issue (thanks to nihui for the support in the issue):

Support Swift for NCNN · Issue #2642 · Tencent/ncnn

I also hope that interested friends will give ncnn-swift a star ~