Li Rui, a student at Beijing University of Posts and Telecommunications, is a fan of artificial intelligence and mobile development.

With the steady rise of Electron on the desktop, code editors, chat clients, and even games built on Electron have appeared one after another.

For developers used to writing Node.js back ends, building a polished desktop UI client is difficult. Electron lowers that barrier considerably: if you can write HTML, you can write the client, and the rest is just polishing. This open-source technology lets developers build cross-platform desktop applications with JavaScript, HTML, and CSS, and the UI renders consistently across platforms, just like a web page, which makes it very approachable.

So, on the desktop, can we help developers deploy Paddle Lite natively for inference? The answer is yes. Paddle Lite provides a C++ interface and, as of version 2.6.0, supports the Windows development environment. This makes it possible to wrap Paddle Lite as a C++ addon for Node.js, so that desktop applications can perform tasks such as image classification entirely on the client. At the same time, Paddle Lite's models are lightweight enough to run well on an ordinary PC.

Likewise, in other Node.js scenarios such as website back ends, Paddle Lite can be used directly for inference.

So I built a demo that wraps the Paddle Lite C++ API in a class, which currently provides two methods, set_model_file and infer_float. On top of that, I used N-API to write the Node.js addon, so that Node.js can call Paddle Lite's C++ API.
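To make the structure concrete, here is a minimal sketch of what such a wrapper class might look like, assuming the Paddle Lite light C++ API (MobileConfig, set_model_from_file, and CreatePaddlePredictor). The class and method names simply mirror the description above; the actual source of the demo may differ in detail.

#include <algorithm>
#include <memory>
#include <string>
#include <vector>
#include "paddle_api.h"  // header shipped in the Paddle Lite precompiled package

class PaddleLite {
 public:
  // Load an optimized .nb model produced by the opt tool.
  void set_model_file(const std::string& model_path) {
    paddle::lite_api::MobileConfig config;
    config.set_model_from_file(model_path);
    predictor_ =
        paddle::lite_api::CreatePaddlePredictor<paddle::lite_api::MobileConfig>(config);
  }

  // Run inference on a flat float buffer with the given NCHW shape.
  std::vector<float> infer_float(const std::vector<float>& data,
                                 const std::vector<int64_t>& shape) {
    auto input = predictor_->GetInput(0);
    input->Resize(shape);
    std::copy(data.begin(), data.end(), input->mutable_data<float>());

    predictor_->Run();

    auto output = predictor_->GetOutput(0);
    int64_t size = 1;
    for (auto d : output->shape()) size *= d;
    const float* out = output->data<float>();
    return std::vector<float>(out, out + size);
  }

 private:
  std::shared_ptr<paddle::lite_api::PaddlePredictor> predictor_;
};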

Using the project

1. Download the precompiled artifacts: you can download them directly from the Paddle Node project's Releases page. They include the following three files:

  • paddlenode.node: the compiled Node.js addon

  • libiomp5md.dll: the OpenMP runtime DLL

  • mklml.dll: the Intel MKL (MKLML) math kernel library DLL

2. Download and convert the pre-trained model: download the mobilenet_v1 model from the official model zoo and convert it with the opt tool that ships with Paddle Lite:

First, install Paddle Lite:

pip install paddlelite

Then convert the model:

paddle_lite_opt --model_dir=./mobilenet_v1 --valid_targets=x86 --optimize_out=mobilenetv1_opt

After completing these steps, we get the converted model file mobilenetv1_opt.nb.

3. Run inference in Node.js:

var addon = require('./paddlenode')
// MobileNetV1 expects a 1 x 3 x 224 x 224 float input, i.e. 150528 values
var arr = new Array(150528)
for (var i = 0; i < arr.length; i++) arr[i] = 1;
addon.set_model_file("./mobilenetv1_opt.nb")
addon.infer_float(arr, [1, 3, 224, 224])


The set_model_file method corresponds directly to set_model_from_file in Paddle Lite. For infer_float, the first argument is the input data and the second is its shape; if the product of the shape's elements does not match the length of the input data, an error is thrown. The call then returns an array of 1001 elements:

Element 0 is the length of the result vector, which makes traversal convenient; the remaining elements are the model's own outputs.
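For illustration, here is a hedged sketch (not the project's actual source) of how the shape check and the "element 0 holds the length" convention could be implemented on the C++ side with N-API; the helper name BuildOutput is made up for this example.

#include <node_api.h>
#include <cstdint>
#include <vector>

// `results` would be the raw model outputs gathered from Paddle Lite.
static napi_value BuildOutput(napi_env env,
                              const std::vector<float>& input,
                              const std::vector<int64_t>& shape,
                              const std::vector<float>& results) {
  // Verify that the product of the shape matches the input length.
  int64_t expected = 1;
  for (int64_t d : shape) expected *= d;
  if (expected != static_cast<int64_t>(input.size())) {
    napi_throw_error(env, nullptr, "Shape does not match the input data size");
    return nullptr;
  }

  // Element 0 stores the result length; elements 1..N hold the outputs.
  napi_value out_array;
  napi_create_array_with_length(env, results.size() + 1, &out_array);

  napi_value len_value;
  napi_create_double(env, static_cast<double>(results.size()), &len_value);
  napi_set_element(env, out_array, 0, len_value);

  for (uint32_t i = 0; i < results.size(); ++i) {
    napi_value v;
    napi_create_double(env, results[i], &v);
    napi_set_element(env, out_array, i + 1, v);
  }
  return out_array;
}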

Compiling manually

If you decide to compile manually, first download the precompiled x86 package from Paddle Lite's Releases page (currently v2.6.1). Once downloaded, open binding.gyp and set the lite_dir variable to the absolute path of the precompiled library folder, for example:

{
    'variables': {
        'lite_dir%': 'C:/Users/Li/Desktop/Exp/inference_lite_lib.win.x86.MSVC.C++_static.py37.full_publish',
    },
    "targets": [{
        'target_name': "paddlenode",
        'sources': ["paddlelib.h", "paddlelib.cc", "paddlenode.cc"],
        'defines': [],
        'include_dirs': [
            "<(lite_dir)/cxx/include",
            "<(lite_dir)/third_party/mklml/include"],
        'libraries': [
            "-l<(lite_dir)/cxx/lib/libpaddle_api_light_bundled.lib",
            "-l<(lite_dir)/third_party/mklml/lib/libiomp5md.lib",
            "-l<(lite_dir)/third_party/mklml/lib/mklml.lib",
            "-lshlwapi.lib"]
    }]
}

Then go to the project's source directory, make sure node-gyp and windows-build-tools are installed, and run:

node-gyp configure build


This produces the final build output, but remember to copy the two DLLs from the precompiled package into the output directory. Also note that the official precompiled libraries are Release builds, so building the addon in Debug mode will cause mismatch errors.

How it works

This project is essentially a thin shell around the Paddle Lite C++ demo. The key thing to understand is how to convert between N-API values and C/C++ objects. The official Node.js documentation explains a large number of functions for this; the ones used here are listed below:

  • napi_define_properties – defines the properties (methods) exposed to JavaScript

  • napi_get_cb_info – gets information about the current call, such as its arguments

  • napi_throw_error – throws an error

  • napi_typeof – gets the type of a napi_value

  • napi_get_value_string_utf8 – converts a napi_value to a UTF-8 string

  • napi_get_array_length – gets the length of a napi_value array

  • napi_get_element – gets an element of a napi_value array

  • napi_get_value_int32 – converts a napi_value to a 32-bit integer

  • napi_get_value_double – converts a napi_value to a double-precision floating-point number

  • napi_create_double – converts a double-precision floating-point number to a napi_value
There are other functions that do much the same kind of job; they are all essentially conversions of this sort.
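As an illustration of the input direction, here is a minimal sketch, assuming a plain N-API addon, of how the functions listed above can turn a JavaScript array argument into a std::vector<float>. The function name InferFloat and the surrounding structure are only placeholders for this example, not the project's actual code.

#include <node_api.h>
#include <vector>

static napi_value InferFloat(napi_env env, napi_callback_info info) {
  // Read the callback arguments: args[0] is the data array, args[1] the shape.
  size_t argc = 2;
  napi_value args[2];
  napi_get_cb_info(env, info, &argc, args, nullptr, nullptr);

  napi_valuetype type;
  napi_typeof(env, args[0], &type);
  if (type != napi_object) {
    napi_throw_error(env, nullptr, "Expected an array as the first argument");
    return nullptr;
  }

  // Walk the JS array and convert each element to a C++ float.
  uint32_t length = 0;
  napi_get_array_length(env, args[0], &length);
  std::vector<float> data(length);
  for (uint32_t i = 0; i < length; ++i) {
    napi_value element;
    double value = 0.0;
    napi_get_element(env, args[0], i, &element);
    napi_get_value_double(env, element, &value);
    data[i] = static_cast<float>(value);
  }

  // ... run inference with `data`, then build the result array with
  // napi_create_double / napi_set_element as sketched earlier.
  napi_value ok;
  napi_create_double(env, static_cast<double>(length), &ok);
  return ok;
}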

Final thoughts

PaddlePaddle has already launched Paddle.js, which supports inference directly in the browser. The Paddle Node project introduced in this article offers Node.js another possibility from a different angle. PaddlePaddle's Chinese-language ecosystem is a great convenience for domestic developers and beginners and greatly reduces the cost of learning. I hope PaddlePaddle keeps getting better and lowers the barrier to entry even further. Thank you very much.