Starting with a simple requirement

Recently I used Electron to build an app and ran into a seemingly simple requirement: load a PyTorch deep learning model, trained in a Python environment, into Electron for inference.

PyTorch provides the libtorch library, its C++ distribution, so you can save the PyTorch model as a .pt file and load it with libtorch, then use node-gyp to compile the code into a .node addon that Node.js can load.

Libtorch introduction

Official website: pytorch.org/cppdocs/fro…

Libtorch is the C++ frontend of PyTorch: a C++14 library for CPU and GPU tensor computation that provides automatic differentiation and various high-level abstractions for machine learning and neural networks. In other words, it is the C++ version of PyTorch, with an API similar to the Python version. It can be used where performance or portability requirements rule out a Python interpreter, such as low-latency, high-performance, or multithreaded environments, and in model deployment.

Libtorch provides a C++ API that mirrors the Python API, so anyone familiar with the Python version of PyTorch should find it relatively easy to pick up.

Component          Description
torch::Tensor      Automatically differentiable, efficient CPU/GPU tensor module
torch::nn          A collection of composable modules for neural network modeling
torch::optim       Optimizers such as SGD and Adam for training models
torch::data        Datasets, data pipelines, and multithreaded asynchronous loaders
torch::serialize   Serialization API for saving and loading model checkpoints
torch::python      Glue for binding C++ models into Python
torch::jit         Pure C++ access to the TorchScript JIT compiler

After downloading libtorch, you can see its layout: mainly include (the various header files), lib (dynamic/static link libraries), and a share directory holding the cmake files.

Simple code

Using libtorch, you can write a function that loads the .pt model and runs it:

// torch_script.cpp
#include <iostream>
#include <vector>

#include "torch/script.h"
#include "torch_script.h"

using namespace std;

vector<float> module_forward(const char *pathname, const vector<float> &input) {
    try {
        // Load the TorchScript model
        torch::jit::Module module = torch::jit::load(pathname);

        // Wrap the input in a 1 x N batch tensor
        vector<torch::jit::IValue> in_batch;
        at::Tensor in = torch::tensor(input);
        in_batch.emplace_back(torch::reshape(in, {1, int64_t(input.size())}));

        // Run the model
        at::Tensor output = module.forward(in_batch).toTensor();

        // Copy the output tensor into a std::vector
        auto float_out = output.data_ptr<float>();
        return vector<float>(float_out, float_out + output.size(1));
    } catch (const c10::Error &e) {
        cerr << e.msg() << endl;
    }

    return vector<float>();
}

It is then bridged to V8 types using the node-addon-api library, which exposes a moduleForward function for the Node.js side to call:

// node_script.cpp
#include "node_script.h"

Napi::Array ModuleForward(const Napi::CallbackInfo& info) {
    Napi::Env env = info.Env();
    Napi::Array result = Napi::Array::New(env);
    Napi::String pathname = info[0].ToString();
    Napi::Array input = info[1].As<Napi::Array>();

    // Copy the JS array into a std::vector<float>
    vector<float> in;
    for (size_t i = 0; i < input.Length(); i++) in.push_back(input.Get(i).ToNumber());
    vector<float> r = module_forward(pathname.Utf8Value().c_str(), in);

    // Copy the result back into a JS array
    for (size_t i = 0; i < r.size(); i++) result.Set(i, Napi::Number::New(env, r[i]));
    return result;
}

Napi::Object Init(Napi::Env env, Napi::Object exports) {
    exports.Set("moduleForward", Napi::Function::New(env, ModuleForward));
    return exports;
}

NODE_API_MODULE(torch_script, Init)

Stepping into all kinds of pitfalls

node-gyp compilation

Node-gyp:github.com/nodejs/node…

The original idea was to compile the .node file directly with node-gyp, so the corresponding binding.gyp is straightforward:

{
  "targets": [{
    "target_name": "torch_script",
    "include_dirs": [
      "<!@(node -p \"require('node-addon-api').include\")",
      "libtorch/include"
    ],
    # Add the dependent libraries based on the current Node.js version
    "dependencies": [
      "<!(node -p \"require('node-addon-api').gyp\")"
    ],
    "cflags!": ["-fno-exceptions"],
    "cflags_cc!": ["-fno-exceptions"],
    "defines": [
      "NAPI_DISABLE_CPP_EXCEPTIONS"  # remember to add this macro
    ],
    "sources": [
      "torch_script.cpp",
      "node_script.cpp"
    ]
  }]
}

Then run node-gyp configure && node-gyp build. It fails: the libtorch library relies on the C++ exception mechanism, and node-gyp disables exceptions by default via "cflags!": ["-fno-exceptions"]. How exceptions actually get disabled also depends on the C++ compiler on your machine, so the exception mechanism needs to be turned back on in several places in binding.gyp. Modify binding.gyp to add a conditions field; for OS == "mac", directly set xcode_settings to enable GCC_ENABLE_CPP_EXCEPTIONS:

{
    "targets": [
        {
          ...,
+         "cflags": ["-fexceptions"],
+         "cflags_cc": ["-fexceptions"],
+         "conditions": [
+             ['OS=="mac"', {  # turn on exception catching directly in Xcode
+                 'xcode_settings': {
+                     'GCC_ENABLE_CPP_EXCEPTIONS': 'YES'
+                 }
+             }]
+         ],
          "defines": [
-             "NAPI_DISABLE_CPP_EXCEPTIONS"
          ],
          ...
        }
    ]
}

Compiling again still fails: libtorch also uses runtime type information (dynamic_cast/typeid), which requires passing the -frtti option to the C++ compiler. So modify binding.gyp once more to add -frtti at compile time, and at the same time enable GCC_ENABLE_CPP_RTTI in xcode_settings:

{
  "targets": [
    {
      ...,
+     "cflags!": ["-fno-exceptions", "-fno-rtti"],
+     "cflags_cc!": ["-fno-exceptions", "-fno-rtti"],
+     "cflags": ["-fexceptions", "-frtti"],
+     "cflags_cc": ["-fexceptions", "-frtti"],
      'xcode_settings': {
          'GCC_ENABLE_CPP_EXCEPTIONS': 'YES',
+         'GCC_ENABLE_CPP_RTTI': 'YES'
      },
      ...
    }
  ]
}

Now it compiles with no errors. Feeling happy, I went straight to writing a JS test file; the code is very simple:

// Load the compiled .node addon
const torchScript = require("./build/Release/torch_script");
// Run the model
const t = torchScript.moduleForward("./resnet24_se.pt", Array.from({length: 256}, v => 1));
console.log(t);

And of course it errors out; nothing in this world succeeds that easily. The error does at least narrow the problem down to libtorch: the dynamic and static libraries in libtorch were never linked. So the question becomes: how do you load static/dynamic link libraries in a Node.js addon? First, try to solve it from binding.gyp with the libraries and link_settings fields:

{
    'targets': [
        {
            ...,
+           'libraries': [
+               ...
+           ],
+           'link_settings': {
+               'library_dirs': [
+                   '/Users/dengpengfei/Documents/Project/C++/Node-addon-libtorch/libtorch/lib'
+               ]
+           },
            ...
        }
    ]
}

No luck: the dynamic link libraries apparently still did not load.

Compiling directly with cmake

Cmake official website: cmake.org/

A .node addon is just a dynamic link library, so why not use cmake to build one directly? Time to start working on CMakeLists.txt:

# CMakeLists.txt
cmake_minimum_required(VERSION 3.19)
project(NodeScript)

# libtorch
set(CMAKE_PREFIX_PATH /Users/dengpengfei/Documents/Project/JavaScript/sei-app/lib/libtorch)

# Set to C++14 because libtorch is written in C++14
set(CMAKE_CXX_STANDARD 14)

add_compile_options(-std=c++14)

# Include headers (absolute paths): Node.js, node-addon-api, libtorch
include_directories(/Users/dengpengfei/.node-gyp/12.16.2/include/node)
include_directories(/Users/dengpengfei/Documents/Project/C++/Node-addon-libtorch/node_modules/node-addon-api)
include_directories(/Users/dengpengfei/Documents/Project/C++/Node-addon-libtorch/libtorch/include)
# Link to the libtorch library files
link_directories(/Users/dengpengfei/Documents/Project/C++/Node-addon-libtorch/libtorch/lib)

file(GLOB SOURCE_FILES "./*.cpp" "./*.h")

find_package(Torch REQUIRED)

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")

# Add the compile target: a dynamic link library
add_library(${PROJECT_NAME} SHARED ${SOURCE_FILES})

# Set the target to C++14
set_property(TARGET ${PROJECT_NAME} PROPERTY CXX_STANDARD 14)

set_property(TARGET ${PROJECT_NAME} PROPERTY LINKER_LANGUAGE CXX)

# link
target_include_directories(${PROJECT_NAME} PRIVATE /Users/dengpengfei/.node-gyp/12.16.2/include/node)

target_include_directories(${PROJECT_NAME} PRIVATE /Users/dengpengfei/Documents/Project/C++/Node-addon-libtorch/node_modules/node-addon-api)

# Set the compiled target's prefix and suffix
set_target_properties(${PROJECT_NAME} PROPERTIES PREFIX "" SUFFIX ".node")

# Link against libtorch's libraries
target_link_libraries(${PROJECT_NAME} ${TORCH_LIBRARIES})

add_definitions(-Wall -O2 -fexceptions)

Then mkdir build && cd build && cmake .. && cmake --build .. The build fails with a pile of undefined symbols: the addon references Node.js symbols that have no link library here. Since the compiled artifact is executed inside the Node.js process, where those symbols do exist, we just need to skip this error. So add set(CMAKE_SHARED_LINKER_FLAGS "-undefined dynamic_lookup") to CMakeLists.txt. Here CMAKE_SHARED_LINKER_FLAGS holds extra linker flags used when building shared libraries; setting it to -undefined dynamic_lookup tells the linker to skip unresolved symbols (such as the undefined symbols above) and resolve them at load time instead.

...
find_package(Torch REQUIRED)

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")

+ set(CMAKE_SHARED_LINKER_FLAGS "-undefined dynamic_lookup")

add_library(${PROJECT_NAME} SHARED ${SOURCE_FILES})

set_property(TARGET ${PROJECT_NAME} PROPERTY CXX_STANDARD 14)
...


Now it compiles successfully. If errors do come up, it is worth deleting cmake's generated files in the build directory (CMakeFiles, cmake_install.cmake, CMakeCache.txt, and so on) and reconfiguring, since cmake caches results. Run our test.js and it's a success.

cmake-js compilation

Cmake-js:github.com/cmake-js/cm…

cmake-js is an addon build tool for Node.js and works much like node-gyp; the difference is that cmake-js is based on cmake rather than gyp.
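cmake-js is normally added as a dev dependency and driven from npm scripts. A minimal package.json sketch for this kind of project (the package names are the real ones; the version ranges and project name are illustrative assumptions, not taken from the original project):

```json
{
  "name": "node-addon-libtorch",
  "scripts": {
    "install": "cmake-js compile",
    "rebuild": "cmake-js rebuild"
  },
  "dependencies": {
    "node-addon-api": "^3.0.0"
  },
  "devDependencies": {
    "cmake-js": "^6.0.0"
  }
}
```

With this in place, npm install (or npx cmake-js compile) runs the cmake-based build.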

cmake-js can take care of these Node.js-specific details for us, so let's rewrite the CMakeLists.txt file:

cmake_minimum_required(VERSION 3.19)
project(NodeScript)

set(CMAKE_PREFIX_PATH /Users/dengpengfei/Documents/Project/JavaScript/sei-app/lib/libtorch)

set(CMAKE_CXX_STANDARD 14)
add_compile_options(-std=c++14)

# Header files: cmake-js supplies the Node.js headers via CMAKE_JS_INC
+ include_directories(${CMAKE_JS_INC})
- include_directories(/Users/dengpengfei/.node-gyp/12.16.2/include/node)
include_directories(/Users/dengpengfei/Documents/Project/C++/Node-addon-libtorch/node_modules/node-addon-api)
include_directories(/Users/dengpengfei/Documents/Project/C++/Node-addon-libtorch/libtorch/include)

link_directories(/Users/dengpengfei/Documents/Project/C++/Node-addon-libtorch/libtorch/lib)

file(GLOB SOURCE_FILES "./*.cpp" "./*.h")

find_package(Torch REQUIRED)

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")

# cmake-js adds this flag automatically
- set(CMAKE_SHARED_LINKER_FLAGS "-undefined dynamic_lookup")

add_library(${PROJECT_NAME} SHARED ${SOURCE_FILES})

set_property(TARGET ${PROJECT_NAME} PROPERTY CXX_STANDARD 14)

set_property(TARGET ${PROJECT_NAME} PROPERTY LINKER_LANGUAGE CXX)

- target_include_directories(${PROJECT_NAME} PRIVATE /Users/dengpengfei/.node-gyp/12.16.2/include/node)
+ target_include_directories(${PROJECT_NAME} PRIVATE ${CMAKE_JS_INC})

target_include_directories(${PROJECT_NAME} PRIVATE "/Users/dengpengfei/Documents/Project/C++/Node-addon-libtorch/node_modules/node-addon-api")

set_target_properties(${PROJECT_NAME} PROPERTIES PREFIX "" SUFFIX ".node")

# Node link libraries; on macOS this is an empty string
+ target_link_libraries(${PROJECT_NAME} ${CMAKE_JS_LIB})

target_link_libraries(${PROJECT_NAME} ${TORCH_LIBRARIES})

add_definitions(-Wall -O2 -fexceptions)
Copy the code

So where do variables like CMAKE_JS_INC and CMAKE_JS_LIB come from? Looking at the cmake-js source, its getConfigureCommand function builds the cmake configure command and injects these variables as -D definitions; it does a fair bit more than invoking cmake directly.

// lib/cMake.js getConfigureCommand()
CMake.prototype.getConfigureCommand = async function () {
  // Create command:
  let command = [this.path, this.projectRoot, "--no-warn-unused-cli"];

  let D = [];

  // CMake.js watermark
  D.push({"CMAKE_JS_VERSION": environment.moduleVersion});

  // Build configuration:
  D.push({"CMAKE_BUILD_TYPE": this.config});
  
  if (environment.isWin) D.push({"CMAKE_RUNTIME_OUTPUT_DIRECTORY": this.workDir});
  else D.push({"CMAKE_LIBRARY_OUTPUT_DIRECTORY": this.buildDir});

  // Include and lib:
  let incPaths;
  if (this.dist.headerOnly) {
    incPaths = [path.join(this.dist.internalPath, "/include/node")];
  }
  else {
    let nodeH = path.join(this.dist.internalPath, "/src");
    let v8H = path.join(this.dist.internalPath, "/deps/v8/include");
    let uvH = path.join(this.dist.internalPath, "/deps/uv/include");
    incPaths = [nodeH, v8H, uvH];
  }

  // NAN
  let nanH = await locateNAN(this.projectRoot);
  if (nanH) incPaths.push(nanH);

  // Includes:
  D.push({"CMAKE_JS_INC": incPaths.join(";")});

  // Sources:
  let srcPaths = [];
  if (environment.isWin) {
    let delayHook = path.normalize(path.join(__dirname, 'cpp', 'win_delay_load_hook.cc'));
    srcPaths.push(delayHook.replace(/\\/gm, '/'));
  }
  D.push({"CMAKE_JS_SRC": srcPaths.join(";")}); // This is empty on non-Windows systems
  // Runtime:
  D.push({"NODE_RUNTIME": this.targetOptions.runtime});
  D.push({"NODE_RUNTIMEVERSION": this.targetOptions.runtimeVersion});
  D.push({"NODE_ARCH": this.targetOptions.arch});
  if (environment.isWin) {
    // Win
    let libs = this.dist.winLibs;
    if (libs.length) D.push({"CMAKE_JS_LIB": libs.join(";")});
  }
  // Custom options
  for (let k of _.keys(this.cMakeOptions)) D.push({[k]: this.cMakeOptions[k]});
  // Toolset:
  await this.toolset.initialize(false);

  if (this.toolset.generator) command.push("-G", this.toolset.generator);
  if (this.toolset.platform) command.push("-A", this.toolset.platform);
  if (this.toolset.toolset) command.push("-T", this.toolset.toolset);
  if (this.toolset.cppCompilerPath) D.push({"CMAKE_CXX_COMPILER": this.toolset.cppCompilerPath});
  if (this.toolset.cCompilerPath) D.push({"CMAKE_C_COMPILER": this.toolset.cCompilerPath});
  if (this.toolset.compilerFlags.length) D.push({"CMAKE_CXX_FLAGS": this.toolset.compilerFlags.join(" ")});
  if (this.toolset.linkerFlags.length) D.push({"CMAKE_SHARED_LINKER_FLAGS": this.toolset.linkerFlags.join(" ")});
  if (this.toolset.makePath) D.push({"CMAKE_MAKE_PROGRAM": this.toolset.makePath});

  // Load npm config (omitted)

  command = command.concat(D.map(function (p) {
    return "-D" + _.keys(p)[0] + "=" + _.values(p)[0];
  }));

  return command;
};
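The last step above, mapping the collected D list into -D command-line definitions, can be illustrated in isolation. This is a standalone sketch with made-up values, not code from cmake-js itself:

```javascript
// Standalone sketch: how a list of {KEY: value} objects becomes -DKEY=value flags
const D = [
  { CMAKE_JS_VERSION: "6.1.0" },          // hypothetical values for illustration
  { CMAKE_BUILD_TYPE: "Release" },
  { CMAKE_JS_INC: "/inc/node;/inc/v8" }
];

const flags = D.map(p => {
  const k = Object.keys(p)[0];            // each entry holds exactly one key
  return "-D" + k + "=" + p[k];
});

console.log(flags.join(" "));
// -DCMAKE_JS_VERSION=6.1.0 -DCMAKE_BUILD_TYPE=Release -DCMAKE_JS_INC=/inc/node;/inc/v8
```

These flags are then appended to the cmake invocation, which is how CMAKE_JS_INC ends up defined inside CMakeLists.txt.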

Then run cmake-js compile directly; it builds with no errors, which is very satisfying.

Finally, test.js runs fine as well.

Other problems

In fact, besides the problems above, I ran into all sorts of other strange ones. At one point I even tried taking the .o files produced by node-gyp and linking them against the libtorch library files by hand with gcc, which obviously didn't work. For a while I also suspected a bug in libtorch's cmake scripts and resorted to copying .so files around, but the real culprit was cmake's cache: after clearing the build directory, the problem disappeared.

In addition, the PyTorch model needs to be saved via TorchScript so that it can be called from C++. The model is normally trained on the GPU, but sometimes inference has to run on the CPU, so a CPU version of the model must be saved. And because the model contains various branch conditions, torch.jit.trace cannot be used here; torch.jit.script is needed instead.

train_loader, validate_loader, test_loader = loader("./preprocess_dataset/dataset-mixin.mat", batch_size=BATCH_SIZE)

model = attention_resnet(num_classes=4)

start = time.time()
losses, accuracy, confusion = train(model, train_loader, validate_loader, epoch=EPOCH)

draw_table("Train Time", sec2min(time.time() - start))
draw_table("Validate Accuracy", format(accuracy, ".4f"))

model = model.cpu()  # CPU version of the model
script_module = torch.jit.script(model)  # torch.jit.trace cannot be used: the model has branches

script_module.save("model_saved/resnet24_cbam_k128_s_100_un_shift.pt")

APP Demo

Finally, this is what the App looks like: it reads a signal sample, classifies it with the trained model, and displays the classification accuracy.
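One practical note for the Electron side: moduleForward is a synchronous native call, so a large model can stall the process while inference runs. A hedged sketch of deferring the call off the current tick (the addon shape is taken from the code above; makeAsyncForward and the stub are hypothetical helpers for illustration, not part of the original project):

```javascript
// Wrap the synchronous native call in a promise so the current tick
// finishes first. "addon" is expected to expose moduleForward(path, array).
function makeAsyncForward(addon) {
  return (modelPath, input) =>
    new Promise((resolve, reject) => {
      setImmediate(() => {
        try {
          resolve(addon.moduleForward(modelPath, input));
        } catch (e) {
          reject(e);
        }
      });
    });
}

// Demo with a stub standing in for require("./build/Release/torch_script"):
const stub = { moduleForward: (path, arr) => arr.map(x => x * 2) };
makeAsyncForward(stub)("model.pt", [1, 2, 3]).then(r => console.log(r)); // resolves to [2, 4, 6]
```

Note that this only defers the call; it does not move the work off the main thread. For real concurrency the addon itself would need something like Napi::AsyncWorker, which is beyond this sketch.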

conclusion

Ten minutes writing code, ten hours fighting the compile and link steps. Logic errors are manageable, but errors at the link stage are a real headache, and this project exposed how weak my C++ fundamentals are. Still, it was a learning process, and little by little things like cmake started to make sense. The errors covered above are only a small fraction; when you hit one, think it through yourself first before searching online — plenty of what's on the net will mislead you, and in the end you have to work it out on your own.

reference

Github address: github.com/sundial-dre…

Libtorch official documentation: pytorch.org/cppdocs/fro…

Node-gyp:github.com/nodejs/node…

Cmake documentation: cmake.org/cmake/help/…

Cmake-js:github.com/cmake-js/cm…