Editor: Zhang Handong/Wang Jiangtong (Huawei)

Editor’s note: There is a wealth of Rust material published abroad, and translating all of it is not practical, so this column was created. The foreign review column collects excellent Rust-related articles, extracts their essence, and condenses them into short reviews.

Welcome to contribute: github.com/RustMagazin…

Original: www.crowdstrike.com/blog/develo…

This post was published on the blog of CrowdStrike Holdings (NASDAQ: CRWD), a US cybersecurity software company. The company employs more than 3,000 people and is working to reinvent security for the cloud era.

The author of this article previously wrote "Standing on the Shoulders of Giants: Combining TensorFlow and Rust", which illustrates how to perform hyperparameter tuning and experimentation with well-known deep learning frameworks (such as TensorFlow, PyTorch, and Caffe), while re-implementing the best versions of those models in a low-level language to speed up prediction. However, an equally important aspect that article did not cover is the development cost of porting a TensorFlow model to Rust, hence this article.

Scalability is one of CrowdStrike’s priorities, so there is a need for an automated mechanism to achieve this. This article describes a novel general-purpose transformation mechanism that can successfully convert TensorFlow models into pure Rust code in a short period of time, as well as the challenges this technique presents.

Let’s start with the general training process:

The ideal workflow for a typical machine learning project starts with collecting and cleaning up the corpus to be used during the training phase. Equally important is the choice of model architecture and the set of hyperparameters that define it. Once these requirements are met, the training phase can begin. At the end of this process, we have multiple candidate models to choose from.

Of all the generated candidate models, the one with the most promising results is selected, judged against a predefined set of metrics (such as validation loss, recall, and AUC) and the decision values that define the classifier's confidence thresholds.

In addition, in a field such as security, examining the FNs (false negatives) and FPs (false positives) that a model produces when predicting on new data can help uncover relevant information through clustering or manual analysis. If the results are satisfactory (for example, a high TPR, a low FPR, and demonstrated robustness against adversarial examples), the inference code of the selected model is converted into a low-level language, where it can be further optimized and given safety guarantees (i.e., protection against memory leaks, memory corruption, and race conditions).
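To make the metrics above concrete, here is a minimal, hypothetical helper (not from the original post) showing how a decision threshold turns raw classifier scores into the TPR and FPR values used to judge a candidate model:

```rust
// Hypothetical illustration: compute TPR and FPR for a given decision
// threshold over a set of classifier scores and ground-truth labels.
fn tpr_fpr(scores: &[f32], labels: &[bool], threshold: f32) -> (f32, f32) {
    let (mut tp, mut fp, mut pos, mut neg) = (0, 0, 0, 0);
    for (&score, &is_positive) in scores.iter().zip(labels) {
        let predicted_positive = score >= threshold;
        if is_positive {
            pos += 1;
            if predicted_positive { tp += 1; }   // true positive
        } else {
            neg += 1;
            if predicted_positive { fp += 1; }   // false positive
        }
    }
    (tp as f32 / pos as f32, fp as f32 / neg as f32)
}
```

Sweeping the threshold over such a function is what produces the trade-off curve (and AUC) against which the candidate models are compared.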

Rust proved to be an outstanding candidate for this particular task. Of course, it could also be replaced by any other low-level programming language.

The ultimate goal is to thoroughly analyze the behavior of the classifier in a controlled environment and check that the decision thresholds are properly chosen, with the possibility of further fine-tuning. Finally, the model is deployed to the target endpoints while its performance is carefully monitored. This process is typically repeated many times over the life of the classifier to improve detection and keep up with the latest threats that emerge each day.

Transformation mechanism

The transformation mechanism is a general-purpose tool designed to transform a TensorFlow model into pure Rust code. Its main purpose is to filter out redundant information, retain only the details required for inference (i.e., the weights and descriptive hyperparameters of each layer), and recreate the target model based on the dependencies described in the computational graph. Finally, the resulting Rust files can be used to run the model safely in production while achieving significant performance improvements.
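As a rough illustration of the idea, the generated code only needs to carry each layer's weights and hyperparameters and replay the graph's forward pass. The sketch below is hypothetical (the post does not show its generated code, and its real crate builds on ndarray; plain `Vec`s are used here to keep the example self-contained):

```rust
// Hypothetical shape of converted inference code: only weights and
// layer hyperparameters survive the conversion from TensorFlow.
struct Dense {
    weights: Vec<Vec<f32>>, // shape: [out_dim][in_dim]
    bias: Vec<f32>,         // shape: [out_dim]
}

impl Dense {
    fn forward(&self, input: &[f32]) -> Vec<f32> {
        // One dot product per output neuron, plus the bias term.
        self.weights
            .iter()
            .zip(&self.bias)
            .map(|(row, b)| row.iter().zip(input).map(|(w, x)| w * x).sum::<f32>() + b)
            .collect()
    }
}

struct Model {
    layers: Vec<Dense>,
}

impl Model {
    fn predict(&self, input: &[f32]) -> Vec<f32> {
        // Feed the input through each layer in graph order,
        // applying ReLU after every layer for simplicity.
        self.layers.iter().fold(input.to_vec(), |acc, layer| {
            layer.forward(&acc).iter().map(|v| v.max(0.0)).collect()
        })
    }
}
```

The converter's job is essentially to emit structs like these, populated with the weights extracted from the TensorFlow checkpoint.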

The transformation mechanism includes:

  • Neural-layers crate: currently built on the ndarray Rust crate, though it does not yet take advantage of ndarray's multithreading capabilities, which leaves room for further performance gains. Other optimizations include:

    • Using iterators instead of directly indexing matrices
    • Batch serving capability
    • Using general matrix multiplication (GEMM) routines for bottleneck layers such as the convolution layer
    • Enabling BLAS (Basic Linear Algebra Subprograms) to speed up the required multiplications without introducing prediction errors
  • Rust converter: recreates the entire logic behind a given neural network in an object-oriented programming style.
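The "iterators instead of indexing" point deserves a brief sketch (mine, not the post's): iterator chains let the compiler elide the per-element bounds checks that a manually indexed loop may pay on every access, while computing the same result:

```rust
// Indexed version: each b[i] access may carry a bounds check,
// since the compiler cannot always prove i < b.len().
fn dot_indexed(a: &[f32], b: &[f32]) -> f32 {
    let mut acc = 0.0;
    for i in 0..a.len() {
        acc += a[i] * b[i];
    }
    acc
}

// Iterator version: zip stops at the shorter slice, so no bounds
// checks are needed and the loop vectorizes more readily.
fn dot_iter(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}
```

For the true bottleneck layers, the crate goes further and routes the work through GEMM/BLAS rather than hand-written loops, as the list above notes.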

Conclusion

  1. The next step is to adopt a more standardized format that can unify models from various deep learning frameworks (e.g., ONNX). This would remove the need to impose a particular development framework, leaving engineers free to choose the one they prefer.
  2. Even though Rust has proved to be a notable candidate for deep learning models (in terms of time, memory, and safety guarantees), the team will continue to explore further optimization strategies.
  3. A general transformation mechanism lets the team shift attention to more creative tasks, such as designing, fine-tuning, or validating different architectures, while minimizing the cost of preparing these models for production.